• Title/Summary/Keyword: neural network ensemble


S2-Net: Machine reading comprehension with SRU-based self-matching networks

  • Park, Cheoneum;Lee, Changki;Hong, Lynn;Hwang, Yigyu;Yoo, Taejoon;Jang, Jaeyong;Hong, Yunki;Bae, Kyung-Hoon;Kim, Hyun-Ki
    • ETRI Journal / v.41 no.3 / pp.371-382 / 2019
  • Machine reading comprehension is the task of understanding a given context and finding the correct response within that context. The simple recurrent unit (SRU) is a model that solves the vanishing gradient problem in recurrent neural networks (RNNs) using neural gates, as the gated recurrent unit (GRU) and long short-term memory (LSTM) do; moreover, it removes the previous hidden state from the input gate, making it faster than GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution, because it can gather context information with similar meaning by computing attention weights over its own RNN sequence. In this paper, we construct a dataset for Korean machine reading comprehension and propose an S²-Net model that adds a self-matching layer to a multilayer SRU encoder. The experimental results show that the proposed S²-Net model achieves 68.82% EM and 81.25% F1 as a single model and 70.81% EM and 82.48% F1 as an ensemble on the Korean machine reading comprehension test dataset, and 71.30% EM and 80.37% F1 (single) and 73.29% EM and 81.54% F1 (ensemble) on the SQuAD dev dataset.
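
As a rough illustration of the self-matching idea described in the abstract above, the following NumPy sketch attends over an RNN output sequence and concatenates the attended context with the original states; the weight matrices, dimensions, and scoring function are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def self_matching_attention(H, W_q, W_k):
    """Each position attends over the whole RNN output sequence H (T x d)
    and is concatenated with its attention-pooled context. W_q and W_k are
    stand-ins for trainable projection matrices."""
    scores = (H @ W_q) @ (H @ W_k).T             # (T, T) pairwise match scores
    scores -= scores.max(axis=1, keepdims=True)  # softmax numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)    # row-wise softmax
    context = alpha @ H                          # (T, d) attended context
    return np.concatenate([H, context], axis=1)  # (T, 2d) enriched encoding

# toy usage: T=5 time steps, d=8 hidden units
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
out = self_matching_attention(H, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
print(out.shape)  # (5, 16)
```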

An ensemble learning based Bayesian model updating approach for structural damage identification

  • Guangwei Lin;Yi Zhang;Enjian Cai;Taisen Zhao;Zhaoyan Li
    • Smart Structures and Systems / v.32 no.1 / pp.61-81 / 2023
  • This study presents an ensemble learning based Bayesian model updating approach for structural damage diagnosis. In the developed framework, the structure is initially decomposed into a set of substructures. An autoregressive moving average with exogenous input (ARMAX) model is first established for structural damage localization based on the structural equation of motion. Wavelet packet decomposition is utilized to extract the damage-sensitive node energy in different frequency bands for constructing structural surrogate models. Four methods, including the Kriging predictor (KRG), radial basis function neural network (RBFNN), support vector regression (SVR), and multivariate adaptive regression splines (MARS), are selected as candidate structural surrogate models. These models are then resampled by bootstrapping and combined into an ensemble model through probabilistic ensembling. Meanwhile, the maximum entropy principle is adopted to search for new design points for sample space updating, yielding a more robust ensemble model. Through these iterations, a surrogate ensemble learning based model updating framework with high model construction efficiency and accuracy is proposed. The specifics of the method are discussed and investigated in a case study.
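
A minimal sketch of the bootstrap-and-combine step described above, using scikit-learn regressors as stand-ins for the surrogate families (a Gaussian process for KRG, SVR, and an MLP in place of the RBF network; MARS is omitted because it is not in scikit-learn). Weighting members by inverse out-of-bag error is an illustrative assumption, not the paper's probabilistic ensembling rule.

```python
import numpy as np
from sklearn.base import clone
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def bootstrap_surrogate_ensemble(X, y, n_boot=10, seed=0):
    """Fit each candidate surrogate on bootstrap resamples of the design
    points and weight members by inverse out-of-bag MSE."""
    rng = np.random.default_rng(seed)
    candidates = [GaussianProcessRegressor(), SVR(), MLPRegressor(max_iter=2000)]
    members, weights = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))       # bootstrap sample indices
        oob = np.setdiff1d(np.arange(len(X)), idx)  # out-of-bag points
        for proto in candidates:
            model = clone(proto).fit(X[idx], y[idx])
            mse = np.mean((model.predict(X[oob]) - y[oob]) ** 2) if len(oob) else 1.0
            members.append(model)
            weights.append(1.0 / (mse + 1e-12))
    weights = np.asarray(weights)
    return members, weights / weights.sum()

def ensemble_predict(members, weights, X):
    """Normalized-weight average of the member predictions."""
    return weights @ np.stack([m.predict(X) for m in members])
```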

Improvement of Vocal Detection Accuracy Using Convolutional Neural Networks

  • You, Shingchern D.;Liu, Chien-Hung;Lin, Jia-Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.2 / pp.729-748 / 2021
  • Vocal detection is one of the fundamental steps in music information retrieval. Typically, the detection process consists of feature extraction and classification steps. Recently, neural networks have been shown to outperform traditional classifiers. In this paper, we report our study on how to further improve detection accuracy by carefully choosing the parameters of the deep network model. Through experiments, we conclude that a feature-classifier model is still better than an end-to-end model. The recommended model uses a spectrogram as the input plane, and the classifier is an 18-layer convolutional neural network (CNN). With this arrangement, the proposed model improves the accuracy reported in the existing literature from 91.8% to 94.1% on the Jamendo dataset. Since accuracy on this dataset already exceeds 90%, a further improvement of 2.3% is difficult to obtain and therefore valuable. If even higher accuracy is required, ensemble learning may be used. The recommended setting is a majority vote over seven of the proposed models, which increases accuracy by about another 1.1% on the Jamendo dataset.
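
The majority-vote ensemble mentioned above reduces, in essence, to counting frame-level votes; a small NumPy sketch follows, with hypothetical 0/1 outputs standing in for the seven CNN models.

```python
import numpy as np

def majority_vote(predictions):
    """Hard-vote binary vocal/non-vocal labels from several models.
    `predictions` has shape (n_models, n_frames) with 0/1 entries; an odd
    number of models (seven in the abstract) avoids ties."""
    predictions = np.asarray(predictions)
    return (predictions.sum(axis=0) > predictions.shape[0] / 2).astype(int)

# toy usage: seven hypothetical CNN outputs over four frames
preds = [[1, 0, 1, 1],
         [1, 0, 0, 1],
         [1, 1, 1, 0],
         [0, 0, 1, 1],
         [1, 0, 1, 1],
         [1, 0, 0, 0],
         [0, 0, 1, 1]]
print(majority_vote(preds))  # [1 0 1 1]
```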

A Study on the Deep Neural Network based Recognition Model for Space Debris Vision Tracking System (심층신경망 기반 우주파편 영상 추적시스템 인식모델에 대한 연구)

  • Lim, Seongmin;Kim, Jin-Hyung;Choi, Won-Sub;Kim, Hae-Dong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.9 / pp.794-806 / 2017
  • As a space-faring nation, it is essential to protect national space assets and the space environment from continuously increasing space debris, and Active Debris Removal (ADR) is the most direct way to address this problem. In this paper, we study an artificial neural network (ANN) as a stable recognition model for a vision-based space debris tracking system. We obtained simulated images of the space environment from KARICAT, a ground-based space debris removal satellite testbed developed by the Korea Aerospace Research Institute, and, after segmenting the images by depth discontinuity, created a vector encoding structure- and color-based features of each object. The feature vector consists of the 3D surface area, the principal vector of the point cloud, and 2D shape and color information. We designed an ANN model based on this separated feature vector. To improve performance, the model is divided according to the categories of the input feature vectors, and an ensemble technique is applied across these models. As a result, we confirmed that the ensemble technique improves the performance of the recognition model.
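
One way to read the category-wise splitting described above is to train one classifier per feature group and soft-vote their outputs; the following scikit-learn sketch assumes hypothetical column groupings and an averaged-probability combination, which may differ from the authors' design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def fit_category_ensemble(X, y, feature_groups):
    """Train one MLP per feature category (e.g., surface-area, point-cloud,
    shape, and colour column groups) and remember its column indices."""
    members = []
    for cols in feature_groups:
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        members.append((cols, clf.fit(X[:, cols], y)))
    return members

def predict_category_ensemble(members, X):
    """Soft-vote: average the class probabilities of the per-category models."""
    probs = np.mean([clf.predict_proba(X[:, cols]) for cols, clf in members], axis=0)
    return probs.argmax(axis=1)
```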

An Ensemble Approach for Cyber Bullying Text messages and Images

  • Zarapala Sunitha Bai;Sreelatha Malempati
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.59-66 / 2023
  • Text mining (TM) is widely used to find patterns in text documents. Cyber-bullying is the term used for abusing a person on an online or offline platform. Nowadays, cyber-bullying has become more dangerous for people who use social networking sites (SNS). Cyber-bullying takes many forms, such as abusive text messages, morphed images, and morphed videos, and it is very difficult to prevent this type of abuse on online SNS. Finding accurate text mining patterns gives better results in detecting cyber-bullying on any platform. Cyber-bullying occurs when online SNS are used to send defamatory statements, to verbally bully other people, or to abuse someone in front of other SNS users. Deep learning (DL) is a significant approach for dynamically extracting and learning high-quality features from low-level text input. In this scenario, convolutional neural networks (CNNs) are used to train on the text data, images, and videos; CNNs are a powerful approach for these types of data and achieve better text classification. In this paper, an ensemble model is introduced that integrates term frequency-inverse document frequency (TF-IDF) and a deep neural network (DNN) with advanced feature-extraction techniques to classify bullying text, images, and videos. The proposed approach also focuses on reducing training time and memory usage, which helps improve classification.
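
For the text branch only, the TF-IDF-plus-DNN idea can be sketched with scikit-learn as below; the toy messages and the small MLP standing in for the DNN are assumptions, and the image/video CNN branch and the final ensemble combination are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy labelled messages (1 = bullying, 0 = benign); a real SNS corpus
# would replace these hypothetical examples.
texts = ["you are worthless", "see you at lunch",
         "nobody likes you", "great game today"]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a small fully connected network (the "DNN").
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["you are a loser", "meet you tomorrow"]))
```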

Development of Machine Learning Ensemble Model using Artificial Intelligence (인공지능을 활용한 기계학습 앙상블 모델 개발)

  • Lee, K.W.;Won, Y.J.;Song, Y.B.;Cho, K.S.
    • Journal of the Korean Society for Heat Treatment / v.34 no.5 / pp.211-217 / 2021
  • To predict the mechanical properties of secondary hardening martensitic steels, a machine learning ensemble model was established. Based on an artificial neural network (ANN) architecture, several methods were considered to optimize the model. In particular, interaction features, which reflect the interactions between the chemical compositions and processing conditions of a real alloy system, were introduced by means of feature engineering, and K-fold cross-validation coupled with a bagging ensemble was then investigated with respect to the R² score and a factor indicating the average learning error caused by the biased experimental database.
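
A minimal sketch of the feature-engineering and K-fold-plus-bagging evaluation described above, assuming scikit-learn (version 1.2 or later for the `estimator` argument); the layer sizes, estimator count, and pairwise interaction terms are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures

def add_interaction_features(X):
    """Pairwise composition x processing-condition interaction terms."""
    return PolynomialFeatures(degree=2, interaction_only=True,
                              include_bias=False).fit_transform(X)

def kfold_bagged_ann(X, y, n_splits=5, n_estimators=10):
    """Score a bagged ANN regressor with K-fold cross-validation (per-fold R2)."""
    scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        model = BaggingRegressor(
            estimator=MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000),
            n_estimators=n_estimators, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))
    return np.asarray(scores)
```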

A Study on Prediction of Attendance in Korean Baseball League Using Artificial Neural Network (인경신경망을 이용한 한국프로야구 관중 수요 예측에 관한 연구)

  • Park, Jinuk;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.6 no.12 / pp.565-572 / 2017
  • The traditional method for time series analysis, the autoregressive integrated moving average (ARIMA) model, mines significant patterns from past observations using autocorrelation and forecasts future sequences. However, Korean baseball games are not played at regular intervals, which makes it difficult to analyze the relationships among past attendance observations. To address this issue, we propose an artificial neural network (ANN) based attendance prediction model using various measures, including performance, team characteristics, and social influences. We optimized the ANNs using grid search to construct the optimal model for this regression problem. The evaluation shows that both the optimal single model and the ensemble model outperform the baseline linear regression model.
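
The grid search step can be sketched with scikit-learn as follows; the hyperparameter grid, the MLP architecture, and the 5-fold cross-validation are assumptions for illustration rather than the settings used in the study.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

def tune_attendance_model(X, y):
    """Grid-search an MLP regressor over a small hyperparameter grid."""
    grid = {
        "hidden_layer_sizes": [(16,), (32,), (32, 16)],
        "alpha": [1e-4, 1e-3, 1e-2],
        "learning_rate_init": [1e-3, 1e-2],
    }
    search = GridSearchCV(MLPRegressor(max_iter=3000, random_state=0),
                          grid, cv=5, scoring="neg_mean_absolute_error")
    search.fit(X, y)
    return search.best_estimator_, search.best_params_
```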

Ensemble Design of Machine Learning Techniques: Experimental Verification by Prediction of Drifter Trajectory (앙상블을 이용한 기계학습 기법의 설계: 뜰개 이동경로 예측을 통한 실험적 검증)

  • Lee, Chan-Jae;Kim, Yong-Hyuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.3 / pp.57-67 / 2018
  • An ensemble is a unified approach for obtaining better performance by combining multiple algorithms in machine learning. In this paper, we introduce boosting and bagging, which are widely used ensemble techniques, and design a method using support vector regression, a radial basis function network, a Gaussian process, and a multilayer perceptron. In addition, our experiment was extended by adding a recurrent neural network and the MOHID numerical model. The drifter data used for the experimental verification consist of 683 observations from seven regions. The performance of the ensemble technique is verified by comparison with each of the four algorithms, using mean absolute error as the evaluation metric. The presented methods are based on ensemble models using bagging, boosting, and machine learning. The error rate was calculated both by assigning equal weights and by assigning different weights to each unit model in the ensemble. The ensemble model using machine learning showed a 61.7% improvement over the average of the four machine learning techniques.
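
The equal-weight versus unequal-weight combination mentioned above can be illustrated with a short NumPy sketch; the inverse-MAE weighting rule and the toy predictions are assumptions, not the weights used in the paper.

```python
import numpy as np

def weighted_ensemble(preds, errors=None):
    """Combine unit-model predictions (n_models x n_samples). With no
    errors given, use equal weights; otherwise weight each model by the
    inverse of its validation MAE (an illustrative weighting rule)."""
    preds = np.asarray(preds, dtype=float)
    if errors is None:
        w = np.full(preds.shape[0], 1.0 / preds.shape[0])
    else:
        w = 1.0 / (np.asarray(errors, dtype=float) + 1e-12)
        w /= w.sum()
    return w @ preds

# toy usage: predictions of SVR, RBF network, Gaussian process, MLP
preds = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2], [1.1, 2.1]]
print(weighted_ensemble(preds))                               # equal weights
print(weighted_ensemble(preds, errors=[0.3, 0.1, 0.4, 0.2]))  # MAE-weighted
```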

Long-term runoff simulation using rainfall LSTM-MLP artificial neural network ensemble (LSTM - MLP 인공신경망 앙상블을 이용한 장기 강우유출모의)

  • An, Sungwook;Kang, Dongho;Sung, Janghyun;Kim, Byungsik
    • Journal of Korea Water Resources Association / v.57 no.2 / pp.127-137 / 2024
  • Physical models, which are often used for water resource management, are difficult to build and operate because of their input data requirements and may involve the subjective views of users. In recent years, research using data-driven models such as machine learning has been actively conducted to compensate for these problems in the field of water resources, and in this study, an artificial neural network was used to simulate long-term rainfall runoff in the Osipcheon watershed in Samcheok-si, Gangwon-do. For this purpose, three input data groups (meteorological observations; daily precipitation and potential evapotranspiration; and daily precipitation - potential evapotranspiration) were constructed from meteorological data, and the results of training the LSTM (Long Short-Term Memory) artificial neural network model were compared and analyzed. As a result, the performance of LSTM-Model 1, which used only meteorological observations, was the highest, and six LSTM-MLP ensemble models combining the LSTM with MLP artificial neural networks were built to simulate long-term runoff in the Osipcheon watershed. The comparison between the LSTM and LSTM-MLP models showed that both produced generally similar results, but the MAE, MSE, and RMSE of the LSTM-MLP model were lower than those of the LSTM, especially for low flows. Since the LSTM-MLP results show an improvement for low flows, it is judged that in the future, in addition to the LSTM-MLP model, various ensemble models such as CNN-based ones can be used for building physical models and deriving flow duration curves in large basins that take a long time to simulate and in ungauged basins that lack input data.
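
One plausible reading of the LSTM-MLP coupling described above is an LSTM runoff estimate refined by an MLP that also sees the latest inputs; the PyTorch sketch below is built on that assumption, and the layer sizes and combination scheme are illustrative rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class LSTMRunoff(nn.Module):
    """Sequence model: daily meteorological inputs -> runoff estimate."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # runoff for the last day

class LSTMMLPEnsemble(nn.Module):
    """Refine the LSTM estimate with an MLP that also sees the latest inputs
    (one plausible reading of the LSTM-MLP coupling, not the paper's exact design)."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm_model = LSTMRunoff(n_features, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(n_features + 1, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        q_lstm = self.lstm_model(x)                  # LSTM runoff estimate
        z = torch.cat([x[:, -1, :], q_lstm], dim=1)  # last-day inputs + estimate
        return self.mlp(z)

# toy usage: batch of 8 sequences, 30 days, 5 meteorological variables
model = LSTMMLPEnsemble(n_features=5)
print(model(torch.randn(8, 30, 5)).shape)   # torch.Size([8, 1])
```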

A Study on Training Ensembles of Neural Networks - A Case of Stock Price Prediction (신경망 학습앙상블에 관한 연구 - 주가예측을 중심으로 -)

  • 이영찬;곽수환
    • Journal of Intelligence and Information Systems / v.5 no.1 / pp.95-101 / 1999
  • In this paper, a comparison of different methods for combining predictions from neural networks is given. These methods are bagging, bumping, and balancing, and they are based on decomposing the ensemble generalization error into an ambiguity term and a term incorporating the generalization performance of the individual networks. Neural networks and other machine learning models are prone to overfitting. One strategy to prevent a neural network from overfitting is to stop training at an early stage of the learning process. The complete data set is split into a training set and a validation set, and training is stopped when the error on the validation set starts increasing. The stability of the resulting networks is highly dependent on the division into training and validation sets, and also on the random initial weights and the chosen minimization procedure. This makes early-stopped networks rather unstable: a small change in the data or different initial conditions can produce large changes in the prediction. Therefore, it is advisable to apply the same procedure several times starting from different initial weights; this technique is often referred to as training ensembles of neural networks. In this paper, we present a comparison of three statistical methods for preventing overfitting of neural networks.
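
A generic sketch of the training-ensembles idea described above, using scikit-learn: several identically configured networks are trained from different random initial weights with early stopping on an internal validation split, and their predictions are averaged. This does not reproduce the specific bagging, bumping, or balancing schemes compared in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_early_stopped_ensemble(X, y, n_members=10):
    """Train identically configured networks from different random initial
    weights, each with early stopping on an internal validation split."""
    members = []
    for seed in range(n_members):
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           early_stopping=True, validation_fraction=0.2,
                           n_iter_no_change=20, random_state=seed)
        members.append(net.fit(X, y))
    return members

def ensemble_predict(members, X):
    """Average the member predictions."""
    return np.mean([m.predict(X) for m in members], axis=0)
```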
