• Title/Abstract/Keywords: Machine Learning Algorithm


준지도학습 기반 반도체 공정 이상 상태 감지 및 분류 (Semi-Supervised Learning for Fault Detection and Classification of Plasma Etch Equipment)

  • 이용호;최정은;홍상진
    • 반도체디스플레이기술학회지 / Vol. 19, No. 4 / pp.121-125 / 2020
  • With the miniaturization of semiconductors, the manufacturing process becomes more complex, and undetected small changes in the state of the equipment can unexpectedly change the process results. A fault detection and classification (FDC) system that conducts more active data analysis, combined with advanced machine learning methods, makes more precise manufacturing process control feasible. However, applying machine learning, especially under a supervised learning scheme, requires an arduous data labeling process to construct the training data. In this paper, we propose a semi-supervised learning approach to minimize the data labeling work in preprocessing. We employed equipment status variable identification (SVID) data and optical emission spectroscopy (OES) data from silicon etching with an SF6/O2/Ar gas mixture, and the proposed semi-supervised learning algorithm achieved a labeling accuracy as high as 95.2%.
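
The paper's implementation is not included in this listing; as a rough illustration of the semi-supervised labeling idea it describes, the sketch below propagates a handful of hand-assigned labels to unlabeled process runs with scikit-learn. The feature matrix, label counts, and class split are placeholders, not the SVID/OES data used in the study.

```python
# Minimal sketch of semi-supervised labeling (not the authors' code):
# a few hand-labeled runs propagate their labels to the unlabeled majority.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.random((500, 40))               # placeholder for real SVID + OES features
y = np.full(500, -1)                    # -1 marks unlabeled runs (sklearn convention)
y[:12] = 0                              # small hand-labeled subset: 0 = normal
y[12:25] = 1                            #                            1 = fault

X_scaled = StandardScaler().fit_transform(X)
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X_scaled, y)

pseudo_labels = model.transduction_     # labels inferred for every run
print("Label counts for previously unlabeled runs:", np.bincount(pseudo_labels[y == -1]))
```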

Thermal post-buckling measurement of the advanced nanocomposites reinforced concrete systems via both mathematical modeling and machine learning algorithm

  • Minggui Zhou;Gongxing Yan;Danping Hu;Haitham A. Mahmoud
    • Advances in nano research / Vol. 16, No. 6 / pp.623-638 / 2024
  • This study investigates the thermal post-buckling behavior of concrete eccentric annular sector plates reinforced with graphene oxide powders (GOPs). Employing the minimum total potential energy principle, the plates' stability and response under thermal loads are analyzed. The Haber-Schaim foundation model is utilized to account for the support conditions, while the transform differential quadrature method (TDQM) is applied to solve the governing differential equations efficiently. The integration of GOPs significantly enhances the mechanical properties and stability of the plates, making them suitable for advanced engineering applications. Numerical results demonstrate the critical thermal loads and post-buckling paths, providing valuable insights into the design and optimization of such reinforced structures. This study also presents a machine learning algorithm designed to predict complex engineering phenomena using datasets derived from the presented mathematical modeling. By leveraging advanced data analytics and machine learning techniques, the algorithm effectively captures intricate patterns from the mathematical models, providing accurate and efficient predictions. The methodology involves generating comprehensive datasets from mathematical simulations, which are then used to train the machine learning model. The trained model can predict various engineering outcomes, such as stress, strain, and thermal responses, with high precision. This approach significantly reduces the computational time and resources required for traditional simulations, enabling rapid and reliable analysis, and offers a robust framework for predicting the thermal post-buckling behavior of reinforced concrete plates, contributing to the development of resilient and efficient structural components in civil engineering.
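
As a hedged illustration of the surrogate-modeling step described above (training a learner on data generated by the mathematical model), the sketch below fits a regressor to outputs of a placeholder function standing in for the TDQM solver; the input ranges and target formula are assumptions, not the paper's model.

```python
# Sketch of the surrogate-modeling idea: a regressor is trained on
# input/output pairs generated by a mathematical model, then replaces
# the expensive simulation for fast predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def run_simulation(gop_fraction, sector_angle, thickness):
    """Placeholder for the TDQM-based thermal post-buckling solver (illustrative only)."""
    return 1.0 + 3.5 * gop_fraction + 0.2 * sector_angle - 1.8 * thickness

rng = np.random.default_rng(0)
inputs = rng.uniform([0.0, 30.0, 0.01], [0.05, 120.0, 0.05], size=(2000, 3))
targets = np.array([run_simulation(*row) for row in inputs])

X_train, X_test, y_train, y_test = train_test_split(inputs, targets, test_size=0.2, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_train, y_train)
print("Surrogate R^2 on held-out simulations:", surrogate.score(X_test, y_test))
```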

딥러닝과 앙상블 머신러닝 모형의 하천 탁도 예측 특성 비교 연구 (Comparative characteristic of ensemble machine learning and deep learning models for turbidity prediction in a river)

  • 박정수
    • 상하수도학회지 / Vol. 35, No. 1 / pp.83-91 / 2021
  • The increased turbidity in rivers during flood events has various effects on water environmental management, including drinking water supply systems. Thus, prediction of turbid water is essential for water environmental management. Recently, various advanced machine learning algorithms have been increasingly used in water environmental management. Ensemble machine learning algorithms such as random forest (RF) and gradient boosting decision tree (GBDT) are some of the most popular machine learning algorithms used for water environmental management, along with deep learning algorithms such as recurrent neural networks. In this study, GBDT, an ensemble machine learning algorithm, and the gated recurrent unit (GRU), a recurrent neural network algorithm, are used to develop models that predict turbidity in a river. The observation frequencies of the input data used for the models were 2, 4, 8, 24, 48, 120 and 168 h. The root-mean-square error-observations standard deviation ratio (RSR) of GRU and GBDT ranges between 0.182 and 0.766 and between 0.400 and 0.683, respectively. Both models show similar prediction accuracy, with an RSR of 0.682 for GRU and 0.683 for GBDT. GRU shows better prediction accuracy when the observation frequency is relatively short (i.e., 2, 4, and 8 h), whereas GBDT shows better prediction accuracy when the observation frequency is relatively long (i.e., 48, 120, and 168 h). The results suggest that the characteristics of the input data should be considered to develop an appropriate model for predicting turbidity.
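
The RSR metric reported above is the RMSE normalized by the standard deviation of the observations; the sketch below shows one plausible way to compute it alongside a GBDT baseline. The lagged features, targets, and train/test split are invented placeholders, not the study's river data.

```python
# Sketch of the RSR evaluation metric and a GBDT turbidity baseline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rsr(observed, predicted):
    """RMSE divided by the standard deviation of the observations."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / np.std(observed)

# Hypothetical lagged-turbidity features (e.g., values 2 h, 4 h, 8 h earlier) and targets.
rng = np.random.default_rng(1)
X = rng.random((1000, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(1000)

model = GradientBoostingRegressor().fit(X[:800], y[:800])
pred = model.predict(X[800:])
print("RSR on the hold-out period:", round(rsr(y[800:], pred), 3))
```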

실시간 데이터 분석의 성능개선을 위한 적응형 학습 모델 연구 (A Study on Adaptive Learning Model for Performance Improvement of Stream Analytics)

  • 구진희
    • 융합정보논문지 / Vol. 8, No. 1 / pp.201-206 / 2018
  • Recently, as technologies for implementing artificial intelligence have become widespread, machine learning in particular is being used extensively. Machine learning collects large volumes of data, processes them in batches, and provides the insight needed to take final actions, but the effect of those actions is not immediately incorporated into the learning process. In this study, we propose an adaptive learning model to improve the performance of real-time data (stream) analytics, a major business issue. Adaptive learning builds an ensemble by adapting to the complexity of the dataset, and the algorithm uses the available data to determine the optimal data points to sample. In experiments on six standard datasets, the adaptive learning model outperformed simple machine learning models for classification in both learning time and accuracy. In particular, the support vector machine showed excellent performance at the back end of all ensembles. The adaptive learning model is expected to be widely applicable to problems that require adaptively updating inferences about changes in various parameters over time.
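
The abstract does not detail the adaptive model, so the following is only a loose sketch of the general streaming idea it builds on, namely updating a classifier incrementally as mini-batches arrive, here with scikit-learn's partial_fit and a hinge-loss (SVM-style) learner; the data stream and batch sizes are invented for illustration.

```python
# Minimal sketch of incremental learning on a data stream (assumptions,
# not the paper's adaptive ensemble): new observations are folded into the
# model batch-by-batch instead of retraining on the full dataset.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier(loss="hinge")           # hinge loss = linear SVM-style classifier
classes = np.array([0, 1])

for step in range(50):                      # 50 mini-batches standing in for a stream
    X_batch = rng.random((32, 10))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 1.0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)

X_eval = rng.random((200, 10))
y_eval = (X_eval[:, 0] + X_eval[:, 1] > 1.0).astype(int)
print("Accuracy after streaming updates:", clf.score(X_eval, y_eval))
```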

농림위성을 위한 기계학습을 활용한 복사전달모델기반 대기보정 모사 알고리즘 개발 및 검증: 식생 지역을 위주로 (Machine Learning-Based Atmospheric Correction Based on Radiative Transfer Modeling Using Sentinel-2 MSI Data and Its Validation Focusing on Forest)

  • 강유진;김예진;임정호;임중빈
    • 대한원격탐사학회지 / Vol. 39, No. 5_3 / pp.891-907 / 2023
  • Compact Advanced Satellite 500-4 (CAS500-4) is scheduled to be launched to collect high spatial resolution data focusing on vegetation applications. To achieve this goal, accurate surface reflectance retrieval through atmospheric correction is crucial. Therefore, a machine learning-based atmospheric correction algorithm was developed to simulate atmospheric correction from a radiative transfer model using Sentinel-2 data, which have similar spectral characteristics to CAS500-4. The algorithm was then evaluated mainly for forest areas. Utilizing the atmospheric correction parameters extracted from Sentinel-2 and GEOKOMPSAT-2A (GK-2A), the atmospheric correction algorithm was developed based on Random Forest and Light Gradient Boosting Machine (LGBM). Between the two machine learning techniques, LGBM performed better when considering both accuracy and efficiency. Except for one station, the results had a correlation coefficient of more than 0.91 and reflected the temporal variations of the Normalized Difference Vegetation Index (i.e., vegetation phenology) well. GK-2A provides Aerosol Optical Depth (AOD) and water vapor, which are essential parameters for atmospheric correction, but additional processing will be required in the future to mitigate the problem caused by their many missing values. This study provides the basis for the atmospheric correction of CAS500-4 by developing a machine learning-based atmospheric correction simulation algorithm.
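
Assuming the usual setup for this kind of emulation, the sketch below trains a LightGBM regressor to map top-of-atmosphere reflectance and atmospheric parameters (AOD, water vapor, geometry) to a surface reflectance target; the feature set, value ranges, and target formula are assumptions rather than the authors' configuration, and the lightgbm package is required.

```python
# Sketch of a regression setup emulating radiative-transfer-based atmospheric correction.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
toa_reflectance = rng.uniform(0.0, 0.6, n)
aod = rng.uniform(0.0, 1.0, n)
water_vapor = rng.uniform(0.5, 5.0, n)
solar_zenith = rng.uniform(20.0, 70.0, n)
X = np.column_stack([toa_reflectance, aod, water_vapor, solar_zenith])

# Placeholder target standing in for radiative-transfer-model surface reflectance.
y = toa_reflectance - 0.05 * aod - 0.01 * water_vapor + 0.0005 * solar_zenith

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
print("Correlation with RTM output:", np.corrcoef(model.predict(X_te), y_te)[0, 1])
```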

머신러닝 기법을 활용한 논 순용수량 예측 (Prediction of Net Irrigation Water Requirement in paddy field Based on Machine Learning)

  • 김수진;배승종;장민원
    • 농촌계획 / Vol. 28, No. 4 / pp.105-117 / 2022
  • This study tested SVM (support vector machine), RF (random forest), and ANN (artificial neural network) machine-learning models for predicting the net irrigation water requirement of paddy fields. For the Jeonju and Jeongeup meteorological stations, the net irrigation water requirement was calculated using K-HAS from 1981 to 2021 and set as the label. For each algorithm, twelve models were constructed based on cumulative precipitation, precipitation, crop evapotranspiration, and month. Compared to the CE model, the CEP model had a higher R2 and lower MAE, RMSE, and MSE. Comprehensively considering learning performance and learning time, the RF algorithm is judged to have the best usability, and the predictive power of the five-day model is better than that of the three-day model. The results of this study are expected to provide the scientific information necessary for decision-making by on-site water managers through connection with weather forecast data. In the future, if the actual amounts of irrigation and supply are measured, a learning model that reflects them should be developed.
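
As a rough sketch of one model configuration described above, the example below fits a random forest on precipitation, cumulative precipitation, crop evapotranspiration, and month to predict a net irrigation water requirement; the synthetic features and target stand in for the K-HAS-derived data and are not the study's values.

```python
# Sketch of a random-forest model for net irrigation water requirement (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
n = 3000
precip = rng.gamma(2.0, 5.0, n)            # daily precipitation (mm), placeholder
cum_precip = rng.gamma(5.0, 10.0, n)       # 5-day cumulative precipitation (mm), placeholder
etc = rng.uniform(1.0, 7.0, n)             # crop evapotranspiration (mm/day), placeholder
month = rng.integers(4, 10, n)             # irrigation-season months
X = np.column_stack([precip, cum_precip, etc, month])

# Placeholder target standing in for the K-HAS-derived net irrigation water requirement.
y = np.clip(etc * 1.2 - 0.3 * precip + 0.5 * (month == 6), 0.0, None)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:2400], y[:2400])
print("MAE on hold-out days:", mean_absolute_error(y[2400:], model.predict(X[2400:])))
```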

스마트 제어알고리즘 개발을 위한 강화학습 리워드 설계 (Reward Design of Reinforcement Learning for Development of Smart Control Algorithm)

  • 김현수;윤기용
    • 한국공간구조학회논문집 / Vol. 22, No. 2 / pp.39-46 / 2022
  • Recently, machine learning has been widely used to solve optimization problems in various engineering fields. In this study, machine learning is applied to the development of a control algorithm for a smart control device for the reduction of seismic responses. For this purpose, a Deep Q-network (DQN), one of the reinforcement learning algorithms, was employed to develop the control algorithm. A single degree of freedom (SDOF) structure with a smart tuned mass damper (TMD) was used as an example structure. The smart TMD system was composed of an MR (magnetorheological) damper instead of a passive damper. The reward design of reinforcement learning mainly affects the control performance of the smart TMD. Various hyperparameters were investigated to optimize the control performance of the DQN-based control algorithm. Usually, decreasing the time step for numerical simulation is desirable to increase the accuracy of simulation results. However, the numerical simulation results showed that decreasing the time step for reward calculation might degrade the control performance of the DQN-based control algorithm. Therefore, a proper time step for reward calculation should be selected in the DQN training process.
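
The paper's exact reward formulation is not given here; the sketch below only illustrates the trade-off it discusses, i.e., computing one reward over a coarser interval than the simulation time step. The reward function, time-step values, and displacement history are assumptions.

```python
# Sketch of a reward design with a configurable reward interval: the simulation
# advances with a fine time step, but one reward is accumulated per coarser interval.
import numpy as np

def compute_reward(displacements, uncontrolled_peak):
    """Reward grows as the controlled peak displacement shrinks relative to the uncontrolled case."""
    controlled_peak = np.max(np.abs(displacements))
    return 1.0 - controlled_peak / uncontrolled_peak

sim_dt = 0.001          # numerical-integration time step (s)
reward_dt = 0.05        # coarser interval over which one reward is computed (s)
steps_per_reward = int(reward_dt / sim_dt)

rng = np.random.default_rng(5)
trajectory = 0.02 * rng.standard_normal(5000)     # placeholder SDOF displacement history (m)

rewards = [
    compute_reward(trajectory[i:i + steps_per_reward], uncontrolled_peak=0.05)
    for i in range(0, len(trajectory), steps_per_reward)
]
print("Rewards per episode:", len(rewards), "mean reward:", round(np.mean(rewards), 3))
```

Choosing reward_dt larger than sim_dt is one way to keep the simulation accurate while avoiding the degradation the abstract reports for overly fine reward intervals.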

기계학습을 이용한 Joint Torque Sensor 기반의 충돌 감지 알고리즘 비교 연구 (A Comparative Study on Collision Detection Algorithms based on Joint Torque Sensor using Machine Learning)

  • 조성현;권우경
    • 로봇학회논문지 / Vol. 15, No. 2 / pp.169-176 / 2020
  • This paper studies the collision detection of robot manipulators for safe collaboration in human-robot interaction. In sensor-based collision detection, the external torque is estimated by subtracting the robot dynamics from the measured joint torque. To detect collisions using joint torque sensor data, a comparative study was conducted using data-based machine learning algorithms. Data were collected from an actual 3 degree-of-freedom (DOF) robot manipulator and labeled by thresholding and by hand. Using the support vector machine (SVM), decision tree, and k-nearest neighbors (KNN) methods, we derive the optimal parameters of each algorithm and compare their collision classification performance. The results are analyzed for each method, and an optimal collision status detection model with high prediction accuracy was confirmed.
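
For illustration, the sketch below compares SVM, decision tree, and KNN classifiers with grid-searched parameters on placeholder joint-torque features; the thresholding rule used to create the labels is an assumption, not the study's labeling procedure.

```python
# Sketch of a classifier comparison for collision detection (placeholder data).
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X = rng.normal(0.0, 1.0, (2000, 3))                  # external torque estimate per joint, placeholder
y = (np.abs(X).max(axis=1) > 2.0).astype(int)        # label: 1 if any joint torque exceeds a threshold

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "SVM": GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}),
    "DecisionTree": GridSearchCV(DecisionTreeClassifier(), {"max_depth": [3, 5, 10]}),
    "KNN": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 9]}),
}
for name, search in candidates.items():
    search.fit(X_tr, y_tr)
    print(name, "best params:", search.best_params_, "test accuracy:", round(search.score(X_te, y_te), 3))
```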

딥 러닝을 이용한 버그 담당자 자동 배정 연구 (Study on Automatic Bug Triage using Deep Learning)

  • 이선로;김혜민;이찬근;이기성
    • 정보과학회 논문지 / Vol. 44, No. 11 / pp.1156-1164 / 2017
  • Most existing studies on automatic bug triage have built prediction systems based on machine learning algorithms. Accordingly, applying a high-performance machine learning model is the key to the performance of an automatic assignment system, and related studies have mainly used machine learning models that show high performance, such as SVM and Naive Bayes. In this paper, we apply deep learning, which has recently shown good performance in the machine learning field, to automatic bug triage and evaluate its performance. Experimental results show that the deep-learning-based bug triage system achieved 48% accuracy in experiments targeting active developers, an improvement of up to 69% over existing machine learning approaches.
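
The study uses a deep learning model; the snippet below is a much simpler stand-in that only conveys the triage idea, vectorizing bug report text and predicting an assignee with a small neural classifier. The reports and developer names are hypothetical.

```python
# Highly simplified bug-triage sketch (not the paper's network): text features -> assignee.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

reports = [
    "NullPointerException when saving preferences dialog",
    "UI freezes while rendering large SVG files",
    "Crash on startup after latest update of the indexer",
    "Dark theme colors not applied to toolbar icons",
]
assignees = ["dev_core", "dev_ui", "dev_core", "dev_ui"]   # hypothetical developer labels

triage = make_pipeline(TfidfVectorizer(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
triage.fit(reports, assignees)
print(triage.predict(["Toolbar icons look wrong in dark theme"]))
```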

Transductive SVM을 위한 분지-한계 알고리즘 (A Branch-and-Bound Algorithm for Finding an Optimal Solution of Transductive Support Vector Machines)

  • 박찬규
    • 한국경영과학회지 / Vol. 31, No. 2 / pp.69-85 / 2006
  • The Transductive Support Vector Machine (TSVM) is a semi-supervised learning algorithm that exploits the domain structure of the whole dataset by considering labeled and unlabeled data together. Although it was proposed several years ago, there has been no efficient algorithm that can handle problems with more than a few hundred training examples. In this paper, we propose an efficient branch-and-bound algorithm that can solve large-scale TSVM problems with thousands of training examples. The proposed algorithm uses two bounding techniques: a min-cut bound and a reduced SVM bound. The min-cut bound is derived from a capacitated graph whose cuts represent a lower bound on the optimal objective function value of the dual problem. The reduced SVM bound is obtained by constructing the SVM problem with only the labeled data. Experimental results show that the accuracy rate of TSVM can be significantly improved by learning from the optimal solution of TSVM rather than from an approximate solution.
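
The paper's min-cut and reduced-SVM bounds are not reproduced here; the schematic below only shows the branch-and-bound skeleton over the labels of unlabeled points, with placeholder objective and bound functions standing in for the real ones.

```python
# Schematic branch-and-bound over binary labels of unlabeled points
# (placeholder objective and lower bound, not the paper's TSVM bounds).
def objective(labels_fixed):
    """Placeholder for the TSVM objective evaluated on a complete labeling."""
    return sum(label for _, label in labels_fixed)      # stand-in cost

def lower_bound(labels_fixed, remaining):
    """Placeholder bound: any relaxation that never exceeds the true optimum."""
    return sum(label for _, label in labels_fixed)

def branch_and_bound(unlabeled_ids):
    best_cost, best_labels = float("inf"), None
    stack = [([], list(unlabeled_ids))]
    while stack:
        fixed, remaining = stack.pop()
        if lower_bound(fixed, remaining) >= best_cost:
            continue                                     # prune this branch
        if not remaining:
            cost = objective(fixed)
            if cost < best_cost:
                best_cost, best_labels = cost, dict(fixed)
            continue
        nxt, rest = remaining[0], remaining[1:]
        for label in (0, 1):                             # branch on one unlabeled point's label
            stack.append((fixed + [(nxt, label)], rest))
    return best_cost, best_labels

print(branch_and_bound(range(4)))
```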