• Title/Summary/Keyword: Machine Learning & Training


An Improved Co-training Method without Feature Split (속성분할이 없는 향상된 협력학습 방법)

  • 이창환;이소민
    • Journal of KIISE: Software and Applications, v.31 no.10, pp.1259-1265, 2004
  • In many applications, producing labeled data is costly and time consuming, while an enormous amount of unlabeled data is available at little cost. It is therefore natural to ask whether we can take advantage of these unlabeled data in classification learning. In the machine learning literature, the co-training method has been widely used for this purpose. However, the current co-training method requires the feature set to be split into two independent subsets. In this paper, we improve the current co-training method in a number of ways and propose a new co-training method that does not need a feature split. Experimental results show that the proposed method can significantly improve the performance of the current co-training algorithm.
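
For context, the standard co-training loop that the paper improves on (the Blum-and-Mitchell-style variant that *does* require a feature split) can be sketched as follows. The toy nearest-centroid classifier and the 4-D synthetic data are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

class Centroid:
    """Toy nearest-centroid classifier standing in for the view learners."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def _dist(self, X):
        return np.linalg.norm(X[:, None, :] - self.mu_[None, :, :], axis=2)
    def predict(self, X):
        return self.classes_[self._dist(X).argmin(axis=1)]
    def margin(self, X):
        d = np.sort(self._dist(X), axis=1)
        return d[:, 1] - d[:, 0]          # confidence = gap to 2nd-best class

def co_train(Xl, yl, Xu, v1, v2, rounds=5, k=2):
    """Each round, each view's classifier labels its k most confident
    unlabeled points and moves them into the labeled pool."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    for _ in range(rounds):
        for view in (v1, v2):
            if len(Xu) == 0:
                break
            clf = Centroid().fit(Xl[:, view], yl)
            pick = np.argsort(clf.margin(Xu[:, view]))[-k:]
            Xl = np.vstack([Xl, Xu[pick]])
            yl = np.concatenate([yl, clf.predict(Xu[pick][:, view])])
            Xu = np.delete(Xu, pick, axis=0)
    return Centroid().fit(Xl, yl)         # final model on all features

# Two well-separated classes in 4-D; features 0-1 and 2-3 act as the views.
n = 30
X = np.vstack([rng.normal(0.0, 0.5, (n, 4)), rng.normal(3.0, 0.5, (n, 4))])
y = np.r_[np.zeros(n, int), np.ones(n, int)]
lab = np.r_[0, 1, n, n + 1]               # only two labeled samples per class
unlab = np.setdiff1d(np.arange(2 * n), lab)
model = co_train(X[lab], y[lab], X[unlab], slice(0, 2), slice(2, 4))
acc = (model.predict(X) == y).mean()
```

The paper's contribution is removing the requirement that the two views be independent feature subsets; the loop above shows the baseline being improved.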

Study on Machine Learning Techniques for Malware Classification and Detection

  • Moon, Jaewoong;Kim, Subin;Song, Jaeseung;Kim, Kyungshin
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.12, pp.4308-4325, 2021
  • The importance and necessity of artificial intelligence, particularly machine learning, has recently been emphasized. Artificial intelligence, in applications such as intelligent surveillance cameras and other security systems, is used to solve various problems or provide convenience, offering solutions to problems that humans traditionally had to handle manually, one at a time. Information security is one of the domains where artificial intelligence is especially needed, because the frequency and volume of malicious code now exceed human processing capacity. This study therefore examines the definitions of artificial intelligence and machine learning, their execution methods, processes, learning algorithms, and cases of utilization in various domains, particularly cases where artificial intelligence technology is used in the field of information security. Based on this, the study proposes a method to apply machine learning technology to the classification and detection of malware, which has increased rapidly in recent years. The proposed methodology converts software programs containing malicious code into images and creates training data suitable for machine learning by preparing and augmenting the dataset. The model trained on the images created in this manner is expected to be effective in classifying and detecting malware.
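
The binary-to-image conversion the abstract describes can be sketched as below; the image width, zero-padding policy, and flip augmentation are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Reinterpret a program's raw bytes as a grayscale image:
    each byte becomes one pixel (0-255), rows padded with zeros."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = max(1, int(np.ceil(len(buf) / width)))
    img = np.zeros(rows * width, dtype=np.uint8)
    img[:len(buf)] = buf
    return img.reshape(rows, width)

def augment(img: np.ndarray) -> np.ndarray:
    """One simple dataset-augmentation example: horizontal flip."""
    return img[:, ::-1]
```

The resulting 2-D arrays can then be fed to an image classifier, which is the training-data preparation step the abstract outlines.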

Synthetic Training Data Generation for Fault Detection Based on Deep Learning (딥러닝 기반 탄성파 단층 해석을 위한 합성 학습 자료 생성)

  • Choi, Woochang;Pyun, Sukjoon
    • Geophysics and Geophysical Exploration, v.24 no.3, pp.89-97, 2021
  • Fault detection in seismic data is well suited to the application of machine learning algorithms, and various machine learning techniques are being developed accordingly. In recent studies, deep learning models trained on synthetic data have been a particular focus. The use of synthetic training data has many advantages: securing massive amounts of training data becomes easy, and exact fault labels can be generated. To interpret real data with a model trained on synthetic data, the synthetic data used for training should be geologically realistic. In this study, we introduce a method to generate realistic synthetic seismic data. First, reflectivity models are generated to include realistic fault structures; then, a one-way wave equation is applied to efficiently generate seismic stack sections. Next, a migration algorithm is used to remove diffraction artifacts, and random noise is added to mimic actual field data. A convolutional neural network based on the U-Net architecture is used to verify the generated synthetic data set. The experimental results confirm that realistic synthetic data effectively produces a deep learning model that can be applied to field data.
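
One building block of such synthetic-data pipelines, turning a reflectivity series into a noisy synthetic trace, can be sketched in a few lines. The Ricker wavelet, its frequency, and the noise level are illustrative assumptions; the paper's actual modeling uses a one-way wave equation and migration, which this sketch does not reproduce:

```python
import numpy as np

def ricker(f=25.0, dt=0.002, n=101):
    """Ricker (Mexican-hat) source wavelet, peak frequency f [Hz]."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

rng = np.random.default_rng(0)
refl = np.zeros(500)
refl[[120, 260, 400]] = [0.8, -0.5, 0.6]      # three synthetic reflectors
trace = np.convolve(refl, ricker(), mode="same")
trace += 0.02 * rng.normal(size=trace.size)   # random noise mimics field data
```

Because the reflector positions are chosen by the generator, exact labels come for free, which is the key advantage the abstract points out.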

Analysis of Korean Language Parsing System and Speed Improvement of Machine Learning using Feature Module (한국어 의존 관계 분석과 자질 집합 분할을 이용한 기계학습의 성능 개선)

  • Kim, Seong-Jin;Ock, Cheol-Young
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.8, pp.66-74, 2014
  • Recently, a variety of studies on Korean parsing systems have been carried out by software engineers and linguists. Such parsing systems mainly use machine learning or symbol-processing paradigms. A parsing system based on machine learning, however, has a long training time because the Korean sentence data is very large, and it shows a limited recognition rate because the data contains errors. In this thesis, we design a system using feature modules that can reduce training time, and we analyze the recognition rate for each number of training sentences and repetition count. The designed system uses separated modules and sorted tables for binary search. We use 36,090 refined sentences extracted from the Sejong Corpus. The training time is decreased by about three hours, and the recognition rate is highest at 84.54% when 10,000 sentences are trained 50 times. When all 32,481 training sentences are trained 10 times, the recognition rate is 82.99%. As a result, it is more efficient to use the refined data and to repeat the training until the system reaches a steady state.
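
The "separated modules with sorted tables for binary search" idea can be sketched as follows; the module names, features, and weights are hypothetical, chosen only to illustrate partitioning the feature table by type and looking up each partition with binary search:

```python
import bisect

class FeatureModule:
    """One feature partition: a sorted key list searched with bisect."""
    def __init__(self, pairs):                 # pairs: (feature, weight)
        pairs = sorted(pairs)
        self.keys = [k for k, _ in pairs]
        self.vals = [v for _, v in pairs]
    def lookup(self, feature, default=0.0):
        i = bisect.bisect_left(self.keys, feature)
        if i < len(self.keys) and self.keys[i] == feature:
            return self.vals[i]
        return default

# Hypothetical modules: part-of-speech features vs. lexical features.
modules = {
    "pos": FeatureModule([("NNG", 0.4), ("VV", -0.1)]),
    "lex": FeatureModule([("먹다", 0.2)]),
}

def score(feats):
    """Sum weights of active features, each found in its own module."""
    return sum(modules[m].lookup(f) for m, f in feats)
```

Splitting one huge table into per-type sorted tables keeps each binary search short, which is consistent with the training-time reduction the abstract reports.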

In-depth exploration of machine learning algorithms for predicting sidewall displacement in underground caverns

  • Hanan Samadi;Abed Alanazi;Sabih Hashim Muhodir;Shtwai Alsubai;Abdullah Alqahtani;Mehrez Marzougui
    • Geomechanics and Engineering, v.37 no.4, pp.307-321, 2024
  • This paper delves into the critical assessment of predicting sidewall displacement in underground caverns through the application of nine distinct machine learning techniques. Accurate prediction of sidewall displacement is essential for ensuring the structural safety and stability of underground caverns, which are prone to various geological challenges. The dataset utilized in this study comprises 310 data points, each containing 13 relevant parameters, extracted from 10 underground cavern projects located in Iran and other regions. To facilitate a comprehensive evaluation, the dataset is divided into training and testing subsets. The study employs a diverse array of machine learning models, including a recurrent neural network, back-propagation neural network, K-nearest neighbors, normalized and ordinary radial basis functions, support vector machine, weight estimation, feed-forward stepwise regression, and a fuzzy inference system. These algorithms are used to develop models that can accurately forecast sidewall displacement in underground caverns. The training phase uses 80% of the dataset (248 data points), while the remaining 20% (62 data points) are used for testing and validation. The findings highlight the back-propagation neural network (BPNN) model as the most effective, with a remarkably high correlation coefficient (R2 = 0.99) and a low error rate (RMSE = 4.27E-05), indicating superior performance in predicting sidewall displacement. This research contributes valuable insights into the application of machine learning techniques for enhancing the safety and stability of underground structures.
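
The evaluation protocol (310 points with 13 parameters, an 80/20 split into 248/62, and R2/RMSE scoring) can be sketched generically. The data here is synthetic and a least-squares linear model stands in for the paper's BPNN, so only the protocol, not the results, reflects the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in shaped like the cavern dataset: 310 points, 13 features.
X = rng.normal(size=(310, 13))
y = X @ rng.normal(size=13) + 0.05 * rng.normal(size=310)

# 80/20 split as in the study: 248 training points, 62 test points.
idx = rng.permutation(310)
tr, te = idx[:248], idx[248:]

# Least-squares linear model (placeholder for the BPNN).
Xtr = np.c_[X[tr], np.ones(len(tr))]          # append a bias column
w, *_ = np.linalg.lstsq(Xtr, y[tr], rcond=None)
pred = np.c_[X[te], np.ones(len(te))] @ w

# The two metrics the paper reports for model comparison.
rmse = np.sqrt(np.mean((pred - y[te]) ** 2))
r2 = 1 - np.sum((pred - y[te]) ** 2) / np.sum((y[te] - y[te].mean()) ** 2)
```

Any of the nine algorithms the paper compares would slot in where the least-squares fit appears, with the split and metrics unchanged.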

Autonomous-Driving Vehicle Learning Environments using Unity Real-time Engine and End-to-End CNN Approach (유니티 실시간 엔진과 End-to-End CNN 접근법을 이용한 자율주행차 학습환경)

  • Hossain, Sabir;Lee, Deok-Jin
    • The Journal of Korea Robotics Society, v.14 no.2, pp.122-130, 2019
  • Collecting rich but meaningful training data plays a key role in machine learning and deep learning research for self-driving vehicles. This paper gives a detailed overview of existing open-source simulators that can be used for training self-driving vehicles. After reviewing the simulators, we propose a new, effective approach to building a synthetic autonomous-vehicle simulation platform suitable for learning and training artificial intelligence algorithms. Specifically, we develop a synthetic simulator with various realistic situations and weather conditions that allow the autonomous shuttle to learn more realistic situations and handle unexpected events. The virtual environment mimics the behavior of a real shuttle vehicle in the physical world. Instead of performing the whole training experiment in the real world, scenarios in 3D virtual worlds are created to calculate the parameters and train the model. From the simulator, the user can obtain data for various situations and use it for training. Flexible options are available to choose sensors, monitor the output, and implement any autonomous-driving algorithm. Finally, we verify the effectiveness of the developed simulator by implementing an end-to-end CNN algorithm for training a self-driving shuttle.

Comparison of theoretical and machine learning models to estimate gamma ray source positions using plastic scintillating optical fiber detector

  • Kim, Jinhong;Kim, Seunghyeon;Song, Siwon;Park, Jae Hyung;Kim, Jin Ho;Lim, Taeseob;Pyeon, Cheol Ho;Lee, Bongsoo
    • Nuclear Engineering and Technology, v.53 no.10, pp.3431-3437, 2021
  • In this study, one-dimensional gamma-ray source positions are estimated using a plastic scintillating optical fiber, two photon counters, and data processing with a machine learning algorithm. A nonlinear regression algorithm is used to construct a machine learning model for estimating the positions of radioactive sources. The position estimates obtained using machine learning are compared with theoretical position estimates based on the same measured data. Various tests at different source positions are conducted to determine the improvement in the accuracy of source position estimation. In addition, an evaluation is performed to compare the change in accuracy when varying the number of training datasets. The proposed one-dimensional gamma-ray source position estimation system with plastic scintillating fiber and a machine learning algorithm can be used as a radioactive-leakage scanner at disposal sites.
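
A common theoretical model for this two-counter geometry, which the machine learning estimate is compared against, assumes scintillation light attenuates exponentially toward each fiber end, so the count ratio encodes the source position. The fiber length and attenuation coefficient below are assumed values, not the paper's:

```python
import numpy as np

L, alpha = 10.0, 0.3            # fiber length [m], attenuation [1/m] (assumed)

def counts(x, n0=1e5):
    """Idealized photon counts at the two fiber ends for a source at x."""
    return n0 * np.exp(-alpha * x), n0 * np.exp(-alpha * (L - x))

def position_from_counts(n1, n2):
    """Invert the ratio: ln(n1/n2) = alpha*(L - 2x)  =>  solve for x."""
    return (L - np.log(n1 / n2) / alpha) / 2
```

A nonlinear regression model, as in the paper, would instead learn this mapping from measured (n1, n2, x) calibration data, which can outperform the idealized formula when real fibers deviate from pure exponential attenuation.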

Development of Machine Learning Based Seismic Response Prediction Model for Shear Wall Structure considering Aging Deteriorations (경년열화를 고려한 전단벽 구조물의 기계학습 기반 지진응답 예측모델 개발)

  • Kim, Hyun-Su;Kim, Yukyung;Lee, So Yeon;Jang, Jun Su
    • Journal of Korean Association for Spatial Structures, v.24 no.2, pp.83-90, 2024
  • Machine learning is widely applied in various engineering fields. In structural engineering, it is generally used to predict the structural responses of building structures. The aging deterioration of a reinforced concrete structure affects its structural behavior; therefore, aging deterioration should be considered to accurately predict the seismic responses of the structure. In this study, a machine learning based seismic response prediction model was developed. To this end, four machine learning algorithms were employed and the prediction performance of each algorithm was compared. A 3-story coupled shear wall structure was selected as the example structure for numerical simulation. Artificial ground motions were generated based on domestic site characteristics. Elastic modulus, damping ratio, and density were varied to consider concrete degradation due to chloride penetration, carbonation, etc. Various intensity measures were used as input parameters of the training database. Performance was evaluated using metrics such as root mean square error, mean square error, mean absolute error, and coefficient of determination. Hyperparameters were optimized through k-fold cross-validation and grid search. The analysis results show that the neural network and extreme gradient boosting algorithms provide good prediction performance.
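
The k-fold grid search used for hyperparameter optimization can be sketched without any ML library. Ridge regression and its regularization grid are illustrative stand-ins here; the paper's four algorithms and their parameter grids are not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=120)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def kfold_mse(X, y, lam, k=5):
    """Mean validation MSE over k folds for one hyperparameter value."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for f in folds:
        mask = np.ones(len(y), bool)
        mask[f] = False                       # hold fold f out for validation
        w = ridge_fit(X[mask], y[mask], lam)
        errs.append(np.mean((X[f] @ w - y[f]) ** 2))
    return float(np.mean(errs))

# Grid search: pick the hyperparameter with the lowest cross-validated error.
grid = [0.01, 0.1, 1.0, 10.0]
best = min(grid, key=lambda lam: kfold_mse(X, y, lam))
```

The same loop generalizes to multi-dimensional grids and to any model with a fit/score interface, which is how the four algorithms in the study would be tuned.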

Multiple Classifier System for Activity Recognition

  • Han, Yong-Koo;Lee, Sung-Young;Lee, Young-Koo;Lee, Jae-Won
    • Proceedings of the Korea Intelligent Information Systems Society Conference, 2007.11a, pp.439-443, 2007
  • Activity recognition has become a hot topic in context-aware computing. In activity recognition, machine learning techniques have been widely applied to learn activity models from labeled activity samples. Most existing work uses only one learning method for activity learning and focuses on how to effectively utilize the labeled samples by refining the learning method. However, little attention has been paid to the use of multiple classifiers for boosting learning performance. In this paper, we use two methods to generate multiple classifiers. In the first, the base learning algorithm for each classifier is the same while the training data differs (ASTD). In the second, the base learning algorithms differ while the training data is the same (ADTS). Experimental results indicate that ADTS can effectively improve activity recognition performance, while ASTD achieves no improvement. We believe the classifiers in ADTS are more diverse than those in ASTD.
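
Whichever generation scheme is used (ASTD: same algorithm, different data, bagging-style; ADTS: different algorithms, same data), the member predictions must be combined. The abstract does not state the combination rule, so majority voting is assumed here as the simplest option:

```python
import numpy as np

def majority_vote(preds):
    """Combine an ensemble's predictions by per-sample majority vote.
    preds: array-like of shape (n_classifiers, n_samples), integer labels."""
    preds = np.asarray(preds)
    out = []
    for col in preds.T:                       # one column per sample
        vals, cnts = np.unique(col, return_counts=True)
        out.append(vals[cnts.argmax()])       # most frequent label wins
    return np.array(out)
```

The abstract's finding, that ADTS helps while ASTD does not, is consistent with the usual observation that a vote only helps when member errors are decorrelated, which differing algorithms provide more readily than resampled data.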


Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size (작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석)

  • Kim, Yeseul;Kwak, Geun-Ho;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing, v.34 no.5, pp.811-827, 2018
  • The purpose of this study is to compare a machine learning algorithm and deep learning algorithms for crop classification using multi-temporal remote sensing data. To this end, the impacts of (1) hyper-parameters and (2) training sample size on machine learning and deep learning algorithms were compared and analyzed for Haenam-gun, Korea and Illinois State, USA. In the comparison experiment, a support vector machine (SVM) was applied as the machine learning algorithm, and convolutional neural networks (CNN) were applied as the deep learning algorithms: specifically, a 2D-CNN considering two-dimensional spatial information and a 3D-CNN that extends the 2D-CNN with a time dimension. The experiment showed that, across various hyper-parameters, the optimal hyper-parameter values of the CNNs were similar in the two study areas, unlike those of SVM. Based on this result, although optimizing a CNN model takes much time, transfer learning that extends an optimized CNN model to other regions appears feasible. In the experiments with various training sample sizes, the impact of sample size on CNN was larger than on SVM. In particular, this impact was exaggerated in Illinois State, which has heterogeneous spatial patterns. The lowest classification performance of the 3D-CNN was observed in Illinois State, which is considered to be due to over-fitting caused by the complexity of the model: although the training accuracy of the 3D-CNN was high, its classification performance was relatively degraded by heterogeneous patterns and noise in the input data. This result implies that a proper classification algorithm should be selected considering the spatial characteristics of the study area. Also, a large number of training samples is necessary to guarantee higher classification performance in CNN, particularly 3D-CNN.
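
The training-sample-size experiment can be sketched as a learning curve. A toy nearest-centroid classifier on synthetic two-class data stands in for the SVM/CNN models and the crop imagery, so only the experimental shape, accuracy measured at several training sizes against a fixed test set, mirrors the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n):
    """Two Gaussian classes in 2-D, n samples each (synthetic stand-in)."""
    X0 = rng.normal(0.0, 1.0, size=(n, 2))
    X1 = rng.normal(2.0, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

Xte, yte = make_data(500)                     # fixed test set

def accuracy(n_train):
    """Train a nearest-centroid classifier on n_train samples per class."""
    Xtr, ytr = make_data(n_train)
    mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - mu1, axis=1)
            < np.linalg.norm(Xte - mu0, axis=1)).astype(float)
    return float((pred == yte).mean())

# Learning curve: test accuracy as the training sample size grows.
curve = {n: accuracy(n) for n in (5, 20, 100, 500)}
```

Models with more parameters need more samples before the curve flattens, which is the mechanism behind the abstract's finding that 3D-CNN is the most sample-hungry of the compared algorithms.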