• Title/Summary/Keyword: Machine Learning Algorithm


A study on applying random forest and gradient boosting algorithm for Chl-a prediction of Daecheong lake (대청호 Chl-a 예측을 위한 random forest와 gradient boosting 알고리즘 적용 연구)

  • Lee, Sang-Min;Kim, Il-Kyu
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.35 no.6
    • /
    • pp.507-516
    • /
    • 2021
  • In this study, machine learning, which has recently been widely used in prediction algorithms, was applied. The study site was the CD (Chudong) point, a representative point of Daecheong Lake, and chlorophyll-a (Chl-a) concentration was used as the target variable for algae prediction. To predict the Chl-a concentration, a data set of water quality and quantity factors was constructed, and random forest and gradient boosting algorithms were implemented in Python. Before running the algorithms, the correlation between Chl-a and the water quality and quantity data was analyzed, and the ten factors of highest importance were extracted. In terms of performance indices, gradient boosting achieved an RMSE of 2.72 mg/m3, an MSE of 7.40, and an R2 of 0.66, and the residual analysis also showed that gradient boosting performed best. After hyperparameter tuning, gradient boosting remained superior, with an RMSE of 2.44 mg/m3.
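The workflow described here (correlation screening, a ten-factor data set, gradient boosting in Python, RMSE evaluation) can be sketched with scikit-learn; the data below is a random stand-in, not the Daecheong Lake measurements:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical stand-in for the water quality/quantity data set:
# ten factors (e.g. temperature, TN, TP, flow) and a Chl-a target.
X = rng.normal(size=(300, 10))
y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                  random_state=0)
model.fit(X_tr, y_tr)

# RMSE on held-out data, as reported in the paper's performance index.
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
# Feature importances identify the most influential factors.
ranked = np.argsort(model.feature_importances_)[::-1]
```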

A robust approach in prediction of RCFST columns using machine learning algorithm

  • Van-Thanh Pham;Seung-Eock Kim
    • Steel and Composite Structures
    • /
    • v.46 no.2
    • /
    • pp.153-173
    • /
    • 2023
  • Rectangular concrete-filled steel tubular (RCFST) columns, a type of concrete-filled steel tubular (CFST) member, are widely used as compression members in structures because of their advantages. This paper proposes a robust machine learning-based framework for predicting the ultimate compressive strength of RCFST columns under both concentric and eccentric loading. The gradient boosting neural network (GBNN), an efficient and up-to-date ML algorithm, is utilized for developing a predictive model in the proposed framework. A total of 890 experimental data points on RCFST columns, categorized into two datasets of concentric and eccentric compression, were carefully collected for training and testing purposes. The accuracy of the proposed model is demonstrated by comparing its performance with seven state-of-the-art machine learning methods including decision tree (DT), random forest (RF), support vector machines (SVM), deep learning (DL), adaptive boosting (AdaBoost), extreme gradient boosting (XGBoost), and categorical gradient boosting (CatBoost). Four available design codes, the European (EC4), American Concrete Institute (ACI), American Institute of Steel Construction (AISC), and Australian/New Zealand (AS/NZS) codes, are used as references in a further comparison. The results demonstrate that the proposed GBNN method is a robust and powerful approach to obtain the ultimate strength of RCFST columns.
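The abstract does not give the GBNN internals, but the general idea of gradient boosting with neural-network base learners under squared loss (each stage fits the residual of the current ensemble) can be sketched as follows; the data and all parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Hypothetical stand-in for RCFST column data:
# geometric/material features mapped to an ultimate strength.
X = rng.uniform(size=(400, 6))
y = 10.0 * X[:, 0] + 5.0 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=400)

# Gradient boosting with small neural networks as base learners:
# under squared loss, each stage fits the residuals of the ensemble.
nu, stages = 0.1, 20          # shrinkage and number of boosting stages
pred = np.full_like(y, y.mean())
ensemble = []
for _ in range(stages):
    residual = y - pred
    base = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    base.fit(X, residual)
    pred += nu * base.predict(X)
    ensemble.append(base)

rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```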

Defect Diagnostics of Gas Turbine Engine with Altitude Variation Using SVM and Artificial Neural Network (SVM과 인공신경망을 이용한 고도 변화에 따른 가스터빈 엔진의 결함 진단 연구)

  • Lee Sang-Myeong;Choi Won-Jun;Roh Tae-Seong;Choi Dong-Whan
    • Proceedings of the Korean Society of Propulsion Engineers Conference
    • /
    • 2006.05a
    • /
    • pp.209-212
    • /
    • 2006
  • In this study, a Support Vector Machine (SVM) and an Artificial Neural Network (ANN) are used to develop a defect diagnostic algorithm for an aircraft turbo-shaft engine. The effect of altitude variation on the defect diagnostics algorithm has been included and evaluated. The suggested Separate Learning Algorithm (SLA), in which the ANN selectively learns the performance data after the SVM classifies the position of the defects, improves the classification speed and accuracy.
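The two-stage scheme (an SVM classifies the defect position, then an ANN learns the performance data separately per class) might be sketched like this; the synthetic "engine" data and all parameters are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Hypothetical engine performance data: sensor features, a defect-position
# label (0/1/2), and a defect magnitude to regress.
X = rng.normal(size=(300, 5))
pos = rng.integers(0, 3, size=300)
X[np.arange(300), pos] += 3.0  # shift one feature per defect position
mag = 0.5 * X[np.arange(300), pos] + rng.normal(scale=0.1, size=300)

# Stage 1: the SVM classifies the defect position.
clf = SVC(kernel="rbf").fit(X, pos)

# Stage 2 (separate learning): one ANN per defect class, trained only on
# the samples the SVM assigns to that class.
nets = {}
for c in range(3):
    idx = clf.predict(X) == c
    nets[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000,
                           random_state=0).fit(X[idx], mag[idx])

acc = float((clf.predict(X) == pos).mean())
```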

Improving Learning Performance of Support Vector Machine using the Kernel Relaxation and the Dynamic Momentum (Kernel Relaxation과 동적 모멘트를 조합한 Support Vector Machine의 학습 성능 향상)

  • Kim, Eun-Mi;Lee, Bae-Ho
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.735-744
    • /
    • 2002
  • This paper proposes improving the learning performance of the support vector machine by combining kernel relaxation with a dynamic momentum. The dynamic momentum takes different values according to the current training state. While a static momentum exerts the same influence throughout training, the proposed dynamic momentum algorithm can control the convergence rate and performance as the momentum changes during training. The proposed algorithm was applied to kernel relaxation, a recently presented sequential learning method for support vector machines, and evaluated on the SONAR data, a standard classification problem used for evaluating neural networks. The simulation results of the proposed algorithm show a better convergence rate and performance than kernel relaxation with a static momentum.
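The paper's exact momentum-adaptation rule is not reproduced here; the sketch below assumes one plausible rule (grow the momentum while successive gradients agree in direction, shrink it when they oppose) on a plain least-squares problem:

```python
import numpy as np

def train(X, y, lr=0.1, epochs=300):
    """Least-squares training with a dynamic momentum. The momentum
    coefficient mu grows while successive gradients agree in direction
    and shrinks when they oppose; this adaptation rule is an
    illustrative assumption, not the paper's exact rule."""
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)
    prev_g = np.zeros_like(w)
    mu = 0.5
    for _ in range(epochs):
        g = X.T @ (X @ w - y) / len(y)
        # Agreement between current and previous gradient steers mu.
        mu = min(0.95, mu + 0.05) if g @ prev_g > 0 else max(0.1, mu - 0.1)
        v = mu * v - lr * g            # heavy-ball style velocity update
        w += v
        prev_g = g
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w
w = train(X, y)
```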

Comparison of CT Exposure Dose Prediction Models Using Machine Learning-based Body Measurement Information (머신러닝 기반 신체 계측정보를 이용한 CT 피폭선량 예측모델 비교)

  • Hong, Dong-Hee
    • Journal of radiological science and technology
    • /
    • v.43 no.6
    • /
    • pp.503-509
    • /
    • 2020
  • This study aims to develop a patient-specific radiation exposure dose prediction model based on anthropometric data that can be easily measured during CT examination, to serve as basic data for DRL setting and radiation dose management systems in the future. In addition, among the machine learning algorithms, the model most suitable for predicting exposure dose is presented. The data used in this study were chest CT scan data, and a data set was constructed that included the patients' anthropometric data. In pre-processing and sample selection, out of the total of 250 samples, only chest CT scans performed without a contrast agent were retained, and 110 samples including the height and weight variables were extracted. Of the 110 samples, 66 (60%) were used as a training set and the remaining 44 (40%) as a test set for verification. The exposure dose was predicted with the random forest, linear regression, and SVM algorithms using Orange version 3.26.0, an open-source machine learning package. The prediction accuracy was R^2 0.840 for random forest, R^2 0.969 for linear regression, and R^2 0.189 for SVM. In the verification of the prediction rate, random forest was highest with R^2 0.986, against R^2 0.973 for linear regression and R^2 0.204 for SVM, indicating that the random forest model has the best predictive power.
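The comparison (random forest vs. linear regression vs. SVM on a 66/44 split, scored by R^2) can be reproduced in outline with scikit-learn instead of Orange; the anthropometric data below is a synthetic stand-in:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
# Hypothetical anthropometric data: height (cm), weight (kg) -> dose.
height = rng.normal(165, 10, 110)
weight = rng.normal(65, 12, 110)
dose = 0.08 * weight + 0.01 * height + rng.normal(scale=0.3, size=110)
X = np.column_stack([height, weight])

# 66 training samples, 44 held out for verification, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, dose, train_size=66,
                                          random_state=0)
models = {
    "random forest": RandomForestRegressor(random_state=0),
    "linear regression": LinearRegression(),
    "SVM": SVR(),  # SVR generally benefits from feature scaling,
}                  # omitted here for brevity
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```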

A novel visual tracking system with adaptive incremental extreme learning machine

  • Wang, Zhihui;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.1
    • /
    • pp.451-465
    • /
    • 2017
  • This paper presents a novel discriminative visual tracking algorithm with an adaptive incremental extreme learning machine. The parameters for an adaptive incremental extreme learning machine are initialized at the first frame with a target that is manually assigned. At each frame, the training samples are collected and random Haar-like features are extracted. The proposed tracker updates the overall output weights for each frame, and the updated tracker is used to estimate the new location of the target in the next frame. The adaptive learning rate for the update of the overall output weights is estimated by using the confidence of the predicted target location at the current frame. Our experimental results indicate that the proposed tracker can manage various difficulties and can achieve better performance than other state-of-the-art trackers.
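The core update, recomputing only the ELM output weights as new frames arrive while the random hidden layer stays fixed, follows the standard OS-ELM recursive least-squares form; the regression task below is a toy stand-in for the tracking features:

```python
import numpy as np

rng = np.random.default_rng(5)
n_hidden, n_feat = 40, 8

# Random hidden layer of an extreme learning machine (fixed after init).
W = rng.normal(size=(n_feat, n_hidden))
b = rng.normal(size=n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

# Initial batch: solve the output weights by (regularized) least squares.
X0 = rng.normal(size=(100, n_feat))
y0 = np.sin(X0[:, 0]) + 0.1 * rng.normal(size=100)
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))
beta = P @ H0.T @ y0

def incremental_update(X_new, y_new, P, beta):
    """OS-ELM style recursive least-squares update of the output
    weights; only beta and P change, the hidden layer stays fixed."""
    H = hidden(X_new)
    K = np.linalg.inv(np.eye(len(X_new)) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (y_new - H @ beta)
    return P, beta

for _ in range(20):  # stream of new frames/samples
    Xn = rng.normal(size=(10, n_feat))
    yn = np.sin(Xn[:, 0]) + 0.1 * rng.normal(size=10)
    P, beta = incremental_update(Xn, yn, P, beta)

# Evaluate on fresh data: the incrementally updated model should beat
# the trivial constant predictor.
X_test = rng.normal(size=(200, n_feat))
mse = float(np.mean((hidden(X_test) @ beta - np.sin(X_test[:, 0])) ** 2))
```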

Two-Agent Single-Machine Scheduling with Linear Job-Dependent Position-Based Learning Effects (작업 종속 및 위치기반 선형학습효과를 갖는 2-에이전트 단일기계 스케줄링)

  • Choi, Jin Young
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.3
    • /
    • pp.169-180
    • /
    • 2015
  • Recently, scheduling problems with position-dependent processing times have received considerable attention in the literature, where the processing times of jobs depend on the processing sequence. However, existing studies did not consider cases in which each processed job has a different learning or aging ratio. This means that the actual processing time of a job can be determined not only by the processing sequence, but also by a learning/aging ratio that reflects the degree of processing difficulty in subsequent jobs. Motivated by these remarks, in this paper, we consider a two-agent single-machine scheduling problem with linear job-dependent position-based learning effects, where two agents compete to use a common single machine and each job has a different learning ratio. Specifically, we take into account two different objective functions for the two agents: one agent minimizes the total weighted completion time, and the other restricts the makespan to less than an upper bound. After formally defining the problem by developing a mixed integer non-linear programming formulation, we devise a branch-and-bound (B&B) algorithm to give optimal solutions by developing four dominance properties based on a pairwise interchange comparison and four properties regarding the feasibility of a considered sequence. We suggest a lower bound to speed up the search procedure in the B&B algorithm by fathoming non-promising nodes. As this problem is at least NP-hard, we suggest efficient genetic algorithms using different methods to generate the initial population and two crossover operations. Computational results show that the proposed algorithms are efficient at obtaining near-optimal solutions.
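The linear job-dependent position-based learning effect, an actual time of p·(1 + a·(r − 1)) for a job in position r, and the two agents' objectives can be checked on a tiny hypothetical instance by exhaustive search (the paper's B&B and genetic algorithms replace this brute force at scale):

```python
from itertools import permutations

# Hypothetical two-agent instance: (base processing time, learning ratio,
# weight, agent). The actual time of a job in position r (1-indexed) is
# p * (1 + a * (r - 1)); a < 0 models a learning effect.
jobs = [
    (5.0, -0.05, 2, "A"),
    (3.0, -0.10, 1, "A"),
    (4.0, -0.08, 0, "B"),
    (6.0, -0.04, 0, "B"),
]
UB = 20.0  # upper bound on agent B's makespan

def evaluate(seq):
    """Total weighted completion time of agent A's jobs and agent B's
    makespan for a given processing sequence."""
    t, twc, b_makespan = 0.0, 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        p, a, w, agent = jobs[j]
        t += p * (1 + a * (r - 1))
        if agent == "A":
            twc += w * t
        else:
            b_makespan = t
    return twc, b_makespan

# Exhaustive search over feasible sequences (B makespan within UB).
best = min((evaluate(s), s) for s in permutations(range(4))
           if evaluate(s)[1] <= UB)
```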

Analysis of Machine Learning Research Patterns from a Quality Management Perspective (품질경영 관점에서 머신러닝 연구 패턴 분석)

  • Ye-eun Kim;Ho Jun Song;Wan Seon Shin
    • Journal of Korean Society for Quality Management
    • /
    • v.52 no.1
    • /
    • pp.77-93
    • /
    • 2024
  • Purpose: The purpose of this study is to examine machine learning use cases in manufacturing companies from a digital quality management (DQM) perspective and to analyze and present machine learning research patterns from a quality management perspective. Methods: This study was conducted using a systematic literature review methodology. A comprehensive and systematic review was conducted of manufacturing papers covering the overall quality management process from 2015 to 2022. Three research questions were established according to the goal of the study, and five literature selection criteria were set, on the basis of which approximately 110 research papers were selected. Based on the selected papers, machine learning research patterns were analyzed by quality management activity. Results: Among quality management activities, research on the use of machine learning technology is most active in relation to quality defect analysis. Across quality management activities, NN-based algorithms are studied most actively compared to other machine learning methods. Lastly, this study suggests that the unique characteristics of each machine learning algorithm should be considered for efficient and effective quality management in the manufacturing industry. Conclusion: This study is significant in that it presents machine learning research trends from a digital quality management perspective and lays a foundation for selecting optimal machine learning algorithms in future quality management activities.

A Multiple Instance Learning Problem Approach Model to Anomaly Network Intrusion Detection

  • Weon, Ill-Young;Song, Doo-Heon;Ko, Sung-Bum;Lee, Chang-Hoon
    • Journal of Information Processing Systems
    • /
    • v.1 no.1 s.1
    • /
    • pp.14-21
    • /
    • 2005
  • Even though mainly statistical methods have been used in anomaly network intrusion detection, machine learning based anomaly detection was introduced to detect various attack types. Machine learning based anomaly detection started from research applying traditional learning algorithms of artificial intelligence to intrusion detection. However, the detection rates of these methods are not satisfactory; in particular, high false positive rates and repeated alarms about the same attack are problems. The main reason for this is that one packet is used as the basic learning unit. Most attacks consist of more than one packet, and an attack does not necessarily produce a consecutive packet stream. Therefore, by grouping related packets, a new approach of group-based learning and detection is needed. This approach is similar to that of multiple-instance problems in the artificial intelligence community, in which a single instance cannot be clearly classified but classification of a group is possible. We suggest a group generation algorithm that groups related packets, and a learning algorithm that uses such groups as its basic unit. To verify the usefulness of the suggested algorithm, the 1998 DARPA data was used, and the results show that our approach is quite useful.
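The group-based view, classifying a bag of related packets rather than one packet, can be sketched with a simple flow-keyed grouping and the standard multiple-instance labeling rule (a bag is positive if any instance in it is positive); the packet records are made up:

```python
from collections import defaultdict

# Hypothetical packet records: (src, dst, dst_port, suspicious_flag).
packets = [
    ("10.0.0.1", "10.0.0.9", 80, 0),
    ("10.0.0.1", "10.0.0.9", 80, 1),
    ("10.0.0.2", "10.0.0.9", 22, 0),
    ("10.0.0.1", "10.0.0.9", 80, 1),
    ("10.0.0.2", "10.0.0.9", 22, 0),
]

# Group related packets into bags keyed by (src, dst, port): the
# multiple-instance view classifies a bag, not a single packet.
bags = defaultdict(list)
for src, dst, port, flag in packets:
    bags[(src, dst, port)].append(flag)

# Standard MIL rule: a bag is positive if any instance in it is positive.
labels = {key: int(any(flags)) for key, flags in bags.items()}
```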

Predictive maintenance architecture development for nuclear infrastructure using machine learning

  • Gohel, Hardik A.;Upadhyay, Himanshu;Lagos, Leonel;Cooper, Kevin;Sanzetenea, Andrew
    • Nuclear Engineering and Technology
    • /
    • v.52 no.7
    • /
    • pp.1436-1442
    • /
    • 2020
  • Nuclear infrastructure systems play an important role in national security. The functions and missions of nuclear infrastructure systems are vital to government, businesses, society and citizens' lives. It is crucial to design nuclear infrastructure for scalability, reliability and robustness. To do this, we can use machine learning, a state-of-the-art technology used in fields ranging from voice recognition and Internet of Things (IoT) device management to autonomous vehicles. In this paper, we propose to design and develop a machine learning algorithm to perform predictive maintenance of nuclear infrastructure. Support vector machine and logistic regression algorithms are used to perform the prediction. These machine learning techniques have been used to explore and compare rare events that could occur in nuclear infrastructure. As per our literature review, support vector machines provide better performance metrics. In this paper, we have performed parameter optimization for both algorithms mentioned. Existing research has been done under conditions with a great volume of data, but this paper presents a novel approach to correlating nuclear infrastructure data samples where the probability density is very low. This paper also identifies the respective motivations and distinguishes between the benefits and drawbacks of the selected machine learning algorithms.
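A minimal sketch of the two learners on a rare-event data set, assuming synthetic data and `class_weight="balanced"` in place of the paper's own parameter optimization:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# Hypothetical sensor data with a rare failure class (low probability
# density), a stand-in for the nuclear infrastructure samples.
n = 1000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + X[:, 1] > 2.5).astype(int)  # only a few percent positives

# class_weight="balanced" re-weights the rare class so both learners
# attend to it despite the imbalance.
svm = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
logreg = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# Recall on the rare class is the metric of interest for rare events.
recall = {
    "svm": float((svm.predict(X)[y == 1] == 1).mean()),
    "logreg": float((logreg.predict(X)[y == 1] == 1).mean()),
}
```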