• Title/Summary/Keyword: Learning Algorithms

Trends in image processing techniques applied to corrosion detection and analysis (부식 검출과 분석에 적용한 영상 처리 기술 동향)

  • Beomsoo Kim;Jaesung Kwon;Jeonghyeon Yang
    • Journal of the Korean institute of surface engineering
    • /
    • v.56 no.6
    • /
    • pp.353-370
    • /
    • 2023
  • Corrosion detection and analysis is an important topic for reducing costs and preventing disasters, and image processing techniques have recently been widely applied to it. In this work, we briefly introduce the traditional image processing techniques and machine learning algorithms used to detect or analyze corrosion in various fields. Machine learning, especially CNN-based algorithms, has recently been applied widely to corrosion detection, and research on applying machine learning to region segmentation is also very active. Because corrosion is reddish-brown in color and highly irregular in shape, combinations of color- and texture-based techniques, various mathematical methods, and machine learning algorithms are used to detect and analyze it. We present examples of applying both traditional image processing techniques and machine learning to corrosion detection and analysis.
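
A first step in the color-and-texture pipeline the survey describes can be sketched as follows. This is a minimal illustration assuming OpenCV; the HSV thresholds and the input file name are illustrative guesses, not values from the paper.

```python
# Minimal sketch of color-based corrosion (rust) region detection.
# The HSV ranges below are illustrative assumptions, not values from the survey.
import cv2
import numpy as np

def detect_rust_regions(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of reddish-brown pixels, a common first step
    before texture analysis or CNN-based refinement."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Reddish-brown hues: low hue values with moderate saturation/value.
    lower = np.array([0, 60, 40])
    upper = np.array([25, 255, 200])
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small speckles; irregular rust blobs remain.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    img = cv2.imread("pipe.jpg")  # hypothetical input image path
    if img is not None:
        mask = detect_rust_regions(img)
        print(f"rust-like pixels: {int(np.count_nonzero(mask))}")
```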

Design of Disease Prediction Algorithm Applying Machine Learning Time Series Prediction

  • Hye-Kyeong Ko
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.321-328
    • /
    • 2024
  • This paper designs a disease prediction algorithm that diagnoses migraine in advance by applying machine learning-based time series analysis. The study uses patient data statistics, such as electroencephalogram activity, to identify the onset signals of migraine symptoms so that patients can efficiently predict and manage their disease. The evaluation measures how accurately the proposed algorithm predicts migraine and how early it can detect onset for prevention purposes. A machine learning algorithm analyzes time series of the indicators used for migraine identification, quickly determining onset-signaling symptoms from existing patient data given as input. The experimental results show that the proposed algorithm can accurately predict the occurrence of migraine.
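
The abstract does not specify the model, so the following sketch illustrates the general sliding-window approach to onset prediction, with an assumed random forest classifier and synthetic stand-in data in place of real patient indicators.

```python
# Minimal sketch of time-series onset prediction via sliding windows.
# The window length, features, and classifier are assumptions; the paper's
# abstract does not specify its exact model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def make_windows(series: np.ndarray, labels: np.ndarray, width: int = 32):
    """Slice a 1-D signal (e.g., an EEG-activity indicator) into overlapping
    windows; each window is labeled by whether onset occurs just after it."""
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = labels[width:]
    return X, y

rng = np.random.default_rng(0)
signal = rng.normal(size=2000)                 # stand-in patient indicator series
onset = (rng.random(2000) < 0.1).astype(int)   # stand-in onset flags

X, y = make_windows(signal, onset)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```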

Performance Comparison of Algorithm through Classification of Parkinson's Disease According to the Speech Feature (음성 특징에 따른 파킨슨병 분류를 위한 알고리즘 성능 비교)

  • Chung, Jae Woo
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.209-214
    • /
    • 2016
  • The purpose of this study was to classify healthy persons and Parkinson's disease patients from their vocal characteristics using machine learning algorithms. We compared two of the most widely used machine learning algorithms, J48 and REPTree. To evaluate the classification performance of the two algorithms, their results were compared across vocal characteristics, yielding accuracies of 88.72% and 84.62%, respectively. The test results showed that the J48 algorithm was superior to the REPTree algorithm.
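
J48 and REPTree are Weka implementations (J48 being Weka's C4.5). As a rough Python analogue of the comparison protocol, the sketch below evaluates two differently configured scikit-learn decision trees standing in for them, with synthetic data in place of the vocal features.

```python
# Sketch of the comparison protocol with scikit-learn stand-ins: J48 (Weka's
# C4.5) and REPTree have no exact scikit-learn equivalents, so an entropy-based
# tree and a cost-complexity-pruned tree are compared on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in sized like common Parkinson's voice datasets.
X, y = make_classification(n_samples=195, n_features=22, random_state=0)

for name, clf in [
    ("C4.5-like (entropy)", DecisionTreeClassifier(criterion="entropy", random_state=0)),
    ("REPTree-like (pruned)", DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)),
]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```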

Design of a Fuzzy Controller Using Genetic Algorithms Employing Random Signal-Based Learning (랜덤 신호 기반 학습의 유전 알고리즘을 이용한 퍼지 제어기의 설계)

  • Han, Chang-Uk;Park, Jeong-Il
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.2
    • /
    • pp.131-137
    • /
    • 2001
  • Traditional genetic algorithms, though robust, are generally not the most successful optimization method in any particular domain. Hybridizing a genetic algorithm with another algorithm can outperform both the genetic algorithm and the other algorithm alone. This paper describes the application of random signal-based learning to a genetic algorithm in order to obtain well-tuned fuzzy rules. The key to this approach is adjusting both the width and the center of the membership functions so that the tuned rule-based fuzzy controller generates the desired performance. The effectiveness of the proposed algorithm is verified by computer simulation.
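
A minimal sketch of the hybrid idea follows, assuming membership functions parameterized by centers and widths and a toy fitness function in place of the paper's closed-loop control performance.

```python
# Sketch of hybridizing a GA with random-signal-based local refinement to tune
# membership-function parameters (centers, widths). The fitness function is a
# toy stand-in for the controller performance measure used in the paper.
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([0.2, 0.5, 0.8, 0.1, 0.1, 0.1])  # hypothetical optimum: 3 centers + 3 widths

def fitness(params: np.ndarray) -> float:
    return -np.sum((params - TARGET) ** 2)  # higher is better

def random_signal_refine(params, step=0.02, trials=10):
    """Local search: keep a random perturbation only if it improves fitness."""
    best = params.copy()
    for _ in range(trials):
        cand = best + rng.normal(scale=step, size=best.shape)
        if fitness(cand) > fitness(best):
            best = cand
    return best

pop = rng.uniform(0, 1, size=(30, 6))
for gen in range(50):
    pop = np.array([random_signal_refine(p) for p in pop])   # hybrid local step
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(6) < 0.5                           # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(scale=0.01, size=6))
    pop = np.array(children)

best = pop[np.argmax([fitness(p) for p in pop])]
print("tuned (centers, widths):", np.round(best, 3))
```

The refinement step acts as a local search inside each generation, mirroring the paper's combination of global GA search with random signal-based learning.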

Learning Algorithms in AI System and Services

  • Jeong, Young-Sik;Park, Jong Hyuk
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1029-1035
    • /
    • 2019
  • In recent years, artificial intelligence (AI) services have become essential to extending human capabilities in fields such as face recognition for security and weather prediction. Existing AI services employ various learning algorithms, including classification, regression, and deep learning, to increase accuracy and efficiency. Nonetheless, these services face challenges such as fake news spreading on social media, stock selection and volatility delay in stock prediction systems, and inaccurate movie recommendation systems. This paper presents algorithms that mitigate these issues in different systems and services. Convolutional neural network algorithms with a word-embedding model are used to detect fake news in Korean; k-clique and data mining approaches increase accuracy in personalized recommendation services and address stock selection and volatility delay in stock prediction; and other algorithms, such as multi-level fusion processing, address the lack of a real-time database.
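
A minimal sketch of the word-embedding plus CNN text classifier described for fake-news detection follows; the vocabulary size, embedding dimension, and filter settings are illustrative assumptions.

```python
# Minimal sketch of a word-embedding + 1-D CNN text classifier of the kind the
# paper applies to Korean fake-news detection; all dimensions are assumptions.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=30_000, embed_dim=128, n_filters=100, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Parallel convolutions over 3-, 4-, and 5-token windows.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, n_filters, kernel_size=k) for k in (3, 4, 5)]
        )
        self.fc = nn.Linear(3 * n_filters, n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)      # (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, n_classes)

logits = TextCNN()(torch.randint(0, 30_000, (4, 50)))  # 4 dummy articles, 50 tokens
print(logits.shape)  # torch.Size([4, 2])
```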

Machine learning-based prediction of wind forces on CAARC standard tall buildings

  • Yi Li;Jie-Ting Yin;Fu-Bin Chen;Qiu-Sheng Li
    • Wind and Structures
    • /
    • v.36 no.6
    • /
    • pp.355-366
    • /
    • 2023
  • Although machine learning (ML) techniques have been widely used in various fields of engineering practice, their application in wind engineering is still at an early stage. To evaluate the feasibility of machine learning algorithms for predicting wind loads on high-rise buildings, this study took the exposure category, wind direction, and height of the local wind force as input features and adopted four machine learning algorithms, k-nearest neighbor (KNN), support vector machine (SVM), gradient boosting regression tree (GBRT), and extreme gradient (XG) boosting, to predict the wind force coefficients of the CAARC standard tall building model. All hyper-parameters of the four ML algorithms were optimized by a tree-structured Parzen estimator (TPE). The results show that mean drag force coefficients and RMS lift force coefficients are well predicted by the GBRT model, while RMS drag force coefficients are best forecasted by the XG boosting model. The proposed machine learning algorithms for wind load prediction can be an alternative to traditional wind tunnel tests and computational fluid dynamics simulations.
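
The GBRT-plus-TPE pipeline can be sketched as below, assuming Optuna (whose default sampler is a TPE) and synthetic stand-ins for the exposure/direction/height features and the force coefficients.

```python
# Sketch of the pipeline: a gradient boosting regression tree tuned by a
# tree-structured Parzen estimator (Optuna's TPE sampler). The synthetic data
# stand in for (exposure category, wind direction, height) -> force coefficient.
import numpy as np
import optuna
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                     # exposure, direction, height (scaled)
y = np.sin(4 * X[:, 1]) * X[:, 2] + 0.05 * rng.normal(size=500)  # toy coefficient

def objective(trial: optuna.Trial) -> float:
    model = GradientBoostingRegressor(
        n_estimators=trial.suggest_int("n_estimators", 50, 500),
        learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        max_depth=trial.suggest_int("max_depth", 2, 6),
        random_state=0,
    )
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best R^2:", round(study.best_value, 3), "params:", study.best_params)
```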

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvements and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. Among these studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can degrade when the classifiers in an ensemble are highly correlated, resulting in a multicollinearity problem; they have also proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement for stable ones. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; an ensemble of unstable learners can therefore guarantee some diversity among its classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared traditional prediction algorithms such as NN, DT, and SVM for bankruptcy prediction on Korean firms: the stable NN and SVM had higher predictability than the unstable DT, yet in ensemble learning the DT ensemble showed more improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically confirmed that the ensemble's performance degradation is due to multicollinearity, and proposed that ensemble optimization is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization chooses a sub-ensemble from an original ensemble so as to guarantee the diversity of the selected classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure classifier diversity by removing high correlation among the classifiers. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances NN ensembles by choosing classifiers with the ensemble's correlations in view: classifiers with potential multicollinearity problems are removed by CO-NN's coverage optimization process, and CO-NN thereby outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. Further research issues remain: first, a decision optimization process to find the optimal combination function should be considered; second, various learning strategies to deal with data noise should be introduced in more advanced future research.
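
The coverage-optimization step can be sketched as a binary-chromosome GA whose fitness rewards sub-ensemble accuracy subject to a VIF cap. The paper used the Evolver GA package with Excel, so this plain-Python GA and its simulated classifier outputs are illustrative stand-ins.

```python
# Sketch of the coverage-optimization idea behind CO-NN: a binary chromosome
# selects a sub-ensemble; fitness rewards accuracy while a VIF cap enforces
# diversity. Simulated classifier outputs stand in for trained NN members.
import numpy as np

rng = np.random.default_rng(0)
N_CLF, N_SAMPLES, VIF_CAP = 12, 400, 10.0
y = rng.integers(0, 2, N_SAMPLES)
# Simulated classifier probability outputs; half are deliberately correlated.
base = y + rng.normal(0, 0.8, (N_CLF, N_SAMPLES))
base[6:] = base[:6] + rng.normal(0, 0.1, (6, N_SAMPLES))   # near-duplicates

def max_vif(outputs: np.ndarray) -> float:
    """Largest VIF across members: VIF_j = 1 / (1 - R_j^2)."""
    vifs = []
    for j in range(outputs.shape[0]):
        others = np.delete(outputs, j, axis=0).T
        A = np.column_stack([others, np.ones(len(others))])
        resid = outputs[j] - A @ np.linalg.lstsq(A, outputs[j], rcond=None)[0]
        r2 = 1 - resid.var() / outputs[j].var()
        vifs.append(1.0 / max(1 - r2, 1e-9))
    return max(vifs)

def fitness(mask: np.ndarray) -> float:
    if mask.sum() < 2:
        return 0.0
    sub = base[mask.astype(bool)]
    acc = ((sub.mean(axis=0) > 0.5).astype(int) == y).mean()   # averaged vote
    return acc if max_vif(sub) <= VIF_CAP else 0.0             # VIF constraint

pop = rng.integers(0, 2, (40, N_CLF))
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]
    pop = np.array([
        np.where(rng.random(N_CLF) < 0.5,                      # uniform crossover
                 parents[rng.integers(10)], parents[rng.integers(10)])
        ^ (rng.random(N_CLF) < 0.02)                           # bit-flip mutation
        for _ in range(40)
    ])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected classifiers:", np.flatnonzero(best), "fitness:", fitness(best))
```

Because the VIF cap zeroes out any sub-ensemble containing near-duplicate members, the search pressure pushes the chromosome toward diverse classifier subsets, which is the diversity-preserving effect the paper attributes to its constraint.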

Reinforcement Learning Using State Space Compression (상태 공간 압축을 이용한 강화학습)

  • Kim, Byeong-Cheon;Yun, Byeong-Ju
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.3
    • /
    • pp.633-640
    • /
    • 1999
  • Reinforcement learning learns through trial-and-error interaction with a dynamic environment. In such environments, reinforcement learning methods like Q-learning and TD (temporal difference) learning learn faster than conventional stochastic learning methods. However, because many reinforcement learning algorithms give the reinforcement value only when the learning agent reaches its goal state, most of them converge to the optimal solution slowly. In this paper, we present the COMREL (COMpressed REinforcement Learning) algorithm for quickly finding the shortest path in a maze environment: it selects the candidate states that can guide the shortest path in a compressed maze environment and learns only those candidate states. Comparing COMREL with the existing Q-learning and Prioritized Sweeping algorithms shows that its learning time is greatly reduced.
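
A minimal sketch of tabular Q-learning over a compressed state space follows: a corridor maze whose cells are aggregated pairwise stands in for COMREL's candidate-state selection, which the abstract does not detail.

```python
# Sketch of Q-learning on a corridor maze with a coarse state-aggregation map
# standing in for COMREL's compression step (the paper's exact scheme is not
# given in the abstract). Reward arrives only at the goal, plus a step cost.
import numpy as np

N_STATES, GOAL, GAMMA, ALPHA, EPS = 20, 19, 0.95, 0.5, 0.1
rng = np.random.default_rng(0)

def compress(s: int) -> int:
    return s // 2                  # aggregate pairs of cells into one macro-state

Q = np.zeros((N_STATES // 2, 2))   # compressed table: macro-states x {left, right}

for episode in range(200):
    s = 0
    while s != GOAL:
        cs = compress(s)
        a = rng.integers(2) if rng.random() < EPS else int(Q[cs].argmax())
        s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
        r = 1.0 if s2 == GOAL else -0.01
        Q[cs, a] += ALPHA * (r + GAMMA * Q[compress(s2)].max() - Q[cs, a])
        s = s2

policy = ["left" if q.argmax() == 0 else "right" for q in Q]
print("greedy policy over macro-states:", policy)
```

Because updates are shared across the cells mapped to one macro-state, the compressed table needs fewer visits per entry, which illustrates why compression can shorten learning time.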

Recent deep learning methods for tabular data

  • Yejin Hwang;Jongwoo Song
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.2
    • /
    • pp.215-226
    • /
    • 2023
  • Deep learning has made great strides with unstructured data such as text, images, and audio. For tabular data analysis, however, machine learning algorithms such as ensemble methods still outperform deep learning. To catch up with the predictive power of these machine learning algorithms, several deep learning methods for tabular data have been proposed recently. In this paper, we review the latest deep learning models for tabular data and compare their performance on several datasets. We also compare the latest boosting methods to these deep learning methods and suggest guidelines for users who analyze tabular datasets. In regression, machine learning methods are better than deep learning methods, but for classification problems, deep learning methods perform better in some cases.
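
The comparison protocol can be sketched as below; scikit-learn's boosting and MLP models stand in for the specific boosting libraries and deep tabular architectures the paper reviews.

```python
# Sketch of comparing a boosting model versus a simple neural network on a
# tabular regression task. These models are stand-ins, not the exact methods
# evaluated in the paper.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "boosting": HistGradientBoostingRegressor(random_state=0),
    "neural net": make_pipeline(StandardScaler(),
                                MLPRegressor(hidden_layer_sizes=(128, 64),
                                             max_iter=500, random_state=0)),
}
for name, model in models.items():
    print(f"{name}: test R^2 = {model.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```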

Design and Implementation of Malicious URL Prediction System based on Multiple Machine Learning Algorithms (다중 머신러닝 알고리즘을 이용한 악성 URL 예측 시스템 설계 및 구현)

  • Kang, Hong Koo;Shin, Sam Shin;Kim, Dae Yeob;Park, Soon Tai
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.11
    • /
    • pp.1396-1405
    • /
    • 2020
  • Cyber threats such as forced collection of personal information and distribution of malicious code via malicious URLs continue to occur. To cope with such threats, security technologies that quickly detect malicious URLs and prevent damage are required. In the web environment, malicious URLs take various forms and are created and deleted constantly, so detection or filtering by signature matching has inherent limits. Recently, research on detecting and predicting malicious URLs using machine learning has been actively conducted. Existing studies have proposed various features and machine learning algorithms for predicting malicious URLs, but most suggest a single specialized algorithm with supplementary features and preprocessing, making it difficult to fully exploit the strengths of multiple machine learning algorithms. This paper proposes a system that predicts malicious URLs using multiple machine learning algorithms, and experiments were performed in which the prediction results of multiple models were combined to increase prediction accuracy. The experiments show that combining multiple models improves prediction performance compared to a single model.
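
A minimal sketch of the multi-model combination follows, using soft voting over three assumed base classifiers and simple lexical URL features; the paper's actual feature set and model lineup are not specified in the abstract.

```python
# Sketch of combining multiple machine learning models for malicious-URL
# prediction via soft voting. The lexical features and the three base models
# are illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

def url_features(url: str) -> list[float]:
    """Simple lexical features: length, digit count, special chars, path depth."""
    return [len(url), sum(c.isdigit() for c in url),
            sum(c in "-_@?=&%" for c in url), url.count("/")]

urls = ["http://example.com/index.html", "http://192.0.2.7/a@b?x=1&y=2%00",
        "https://shop.example.org/cart", "http://evil.example/dl?id=9999&p=%2e"]
labels = [0, 1, 0, 1]   # 0 = benign, 1 = malicious (toy labels)

X = np.array([url_features(u) for u in urls * 50])   # replicate into a toy corpus
y = np.array(labels * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",   # average predicted probabilities across the models
)
ensemble.fit(X_tr, y_tr)
print(f"combined-model accuracy: {ensemble.score(X_te, y_te):.3f}")
```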