Title/Summary/Keyword: machine-learning

Search Results: 5,428

Semi-Supervised Learning for Fault Detection and Classification of Plasma Etch Equipment (준지도학습 기반 반도체 공정 이상 상태 감지 및 분류)

  • Lee, Yong Ho;Choi, Jeong Eun;Hong, Sang Jeen
    • Journal of the Semiconductor & Display Technology, v.19 no.4, pp.121-125, 2020
  • With the miniaturization of semiconductor devices, the manufacturing process has become more complex, and small, undetected changes in equipment state can unexpectedly alter process results. A fault detection and classification (FDC) system that performs more active data analysis makes it feasible to achieve more precise process control with advanced machine learning methods. However, applying machine learning, especially under supervised learning, requires an arduous data labeling process to construct the training data. In this paper, we propose a semi-supervised learning approach to minimize the data labeling work during preprocessing. We employed equipment status variable identification (SVID) data and optical emission spectroscopy (OES) data from silicon etching with an SF6/O2/Ar gas mixture, and the results show labeling accuracy as high as 95.2% with the suggested semi-supervised learning algorithm.
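
The paper does not spell out which semi-supervised algorithm it uses, so the following is only a minimal sketch of the general idea: propagating a handful of hand-labels across mostly unlabeled process data with scikit-learn's graph-based LabelSpreading. The feature shapes and the fault rule are stand-ins for SVID/OES data, not the authors' pipeline.

```python
# Illustrative sketch only: graph-based semi-supervised labeling of
# mostly-unlabeled process data (stand-in for SVID/OES features).
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 runs x 20 features (e.g., flattened SVID/OES stats).
X = rng.normal(size=(500, 20))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = normal, 1 = fault

# Pretend only 5% of runs were hand-labeled; the rest are marked -1.
y = np.full(500, -1)
labeled = rng.choice(500, size=25, replace=False)
y[labeled] = y_true[labeled]

# Spread the sparse labels over a k-NN similarity graph.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)

# "Labeling accuracy": how well the propagated labels match ground truth.
acc = (model.transduction_ == y_true).mean()
print(f"propagated-label accuracy: {acc:.1%}")
```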

Load Balancing Scheme for Machine Learning Distributed Environment (기계학습 분산 환경을 위한 부하 분산 기법)

  • Kim, Younggwan;Lee, Jusuk;Kim, Ajung;Hong, Jiman
    • Smart Media Journal, v.10 no.1, pp.25-31, 2021
  • As machine learning becomes more common, the development of applications that use it is increasing, as is research on machine learning platforms that support such development. However, despite this growth, research on load balancing suited to machine learning platforms remains insufficient. Therefore, in this paper, we propose a load balancing scheme that can be applied to a machine learning distributed environment. The proposed scheme organizes the distributed servers in a level hash table structure and assigns machine learning tasks to servers in consideration of each server's performance. We implemented the distributed servers, ran experiments, and compared the performance with an existing hashing scheme. Compared with the existing scheme, the proposed scheme showed an average 26% speed improvement and reduced the number of tasks waiting to be assigned to a server by more than 38%.
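
A minimal sketch of the general idea of performance-aware task assignment; the paper's level hash table structure is not reproduced here, and the load-scoring rule below is an illustrative assumption. Tasks hash into a small candidate set of servers, and the least-loaded candidate relative to its performance wins.

```python
# Illustrative sketch: performance-weighted task assignment across
# distributed servers. The paper's level hash table is not reproduced;
# this only shows the "assign by load relative to performance" idea.
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    performance: float  # relative throughput (higher = faster)
    waiting: int = 0    # tasks queued but not yet started

    def load_score(self) -> float:
        # Normalized backlog: a faster server tolerates a longer queue.
        return (self.waiting + 1) / self.performance


def assign(task_id: str, servers: list[Server], candidates: int = 2) -> Server:
    # Hash the task into a small candidate set (a stand-in for a bucket
    # lookup), then pick the candidate with the lowest normalized load.
    start = hash(task_id) % len(servers)
    pool = [servers[(start + i) % len(servers)] for i in range(candidates)]
    best = min(pool, key=Server.load_score)
    best.waiting += 1
    return best


servers = [Server("s0", 1.0), Server("s1", 2.0), Server("s2", 1.5)]
for t in range(10):
    s = assign(f"task-{t}", servers)
    print(f"task-{t} -> {s.name}")
```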

Adaptive Recommendation System for Health Screening based on Machine Learning

  • Kim, Namyun;Kim, Sung-Dong
    • International Journal of Advanced Smart Convergence, v.9 no.2, pp.1-7, 2020
  • As the demand for health screening increases, screening items need to be designed efficiently. We build machine learning models for health screening and recommend screening items to provide a personalized health care service. Offline, a synthetic data set is generated based on guidelines and clinical results from institutions, and a machine learning model is built for each screening item. Online, the recommendation server uses the customer's health condition and the machine learning models to provide a recommendation list of screening items in real time. Performance analysis showed that the accuracy of the learning models was close to 100% and that the server response time was under 1 second while serving 1,000 users simultaneously. This paper provides adaptive, automatic recommendation in response to changes in the screening environment.
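
One plausible reading of the two-phase design is sketched below: per-item classifiers trained offline on synthetic records, then scored online to rank screening items for a customer. The item names, features, and labeling rules are all hypothetical.

```python
# Illustrative sketch: one model per screening item, trained offline on
# synthetic records, then queried online to rank items for a customer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
ITEMS = ["blood_pressure", "glucose", "lipid_panel"]  # hypothetical items

# Offline: synthetic customer records (age, BMI, smoker flag) and a
# guideline-derived "recommended" label per item (rules are made up).
X = np.column_stack([
    rng.integers(20, 80, 1000),   # age
    rng.normal(24, 4, 1000),      # BMI
    rng.integers(0, 2, 1000),     # smoker
]).astype(float)
labels = {
    "blood_pressure": (X[:, 0] > 50).astype(int),
    "glucose": (X[:, 1] > 27).astype(int),
    "lipid_panel": ((X[:, 0] > 40) & (X[:, 2] == 1)).astype(int),
}
models = {item: LogisticRegression(max_iter=1000).fit(X, y)
          for item, y in labels.items()}

# Online: score one customer and return items ranked by predicted need.
customer = np.array([[55, 29.0, 1]])
scores = {item: m.predict_proba(customer)[0, 1] for item, m in models.items()}
for item, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {p:.2f}")
```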

Priority-based learning automata in Q-learning random access scheme for cellular M2M communications

  • Shinkafi, Nasir A.;Bello, Lawal M.;Shu'aibu, Dahiru S.;Mitchell, Paul D.
    • ETRI Journal, v.43 no.5, pp.787-798, 2021
  • This paper applies learning automata to improve the performance of a Q-learning-based random access channel (QL-RACH) scheme in a cellular machine-to-machine (M2M) communication system. A prioritized learning automata QL-RACH (PLA-QL-RACH) access scheme is proposed. The scheme employs a prioritized learning automata technique to improve throughput by minimizing the interaction and collision of M2M devices with human-to-human devices sharing the RACH of a cellular system. In addition, the scheme eliminates the excessive punishment suffered by M2M devices by controlling the administration of penalties. Simulation results show that the proposed PLA-QL-RACH scheme improves RACH throughput by approximately 82% and reduces access delay by 79%, with faster learning convergence, compared with QL-RACH.
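
The exact PLA-QL-RACH update rules are in the paper; the sketch below only illustrates the underlying mechanism of Q-learning over RACH slots with a scaled-down collision penalty. The reward values and penalty factor are illustrative assumptions, not the authors' parameters.

```python
# Illustrative sketch: Q-learning slot selection for a RACH, with a
# damped penalty on collision (the paper's PLA rules are not reproduced).
import random

N_SLOTS, N_DEVICES, FRAMES = 10, 8, 2000
ALPHA, PENALTY_SCALE = 0.1, 0.5   # learning rate; softened punishment

Q = [[0.0] * N_SLOTS for _ in range(N_DEVICES)]

def pick_slot(q_row):
    # Greedy choice with random tie-breaking among equally good slots.
    best = max(q_row)
    return random.choice([s for s, v in enumerate(q_row) if v == best])

successes = 0
for _ in range(FRAMES):
    choices = [pick_slot(Q[d]) for d in range(N_DEVICES)]
    for d, slot in enumerate(choices):
        if choices.count(slot) == 1:      # sole user of the slot: success
            reward = 1.0
            successes += 1
        else:                             # collision: scaled-down penalty
            reward = -1.0 * PENALTY_SCALE
        Q[d][slot] += ALPHA * (reward - Q[d][slot])

print(f"access success rate: {successes / (FRAMES * N_DEVICES):.3f}")
```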

A Data-centric Analysis to Evaluate Suitable Machine-Learning-based Network-Attack Classification Schemes

  • Huong, Truong Thu;Bac, Ta Phuong;Thang, Bui Doan;Long, Dao Minh;Quang, Le Anh;Dan, Nguyen Minh;Hoang, Nguyen Viet
    • International Journal of Computer Science & Network Security, v.21 no.6, pp.169-180, 2021
  • Since the advent of machine learning, many machine learning-based algorithms, from shallow learning to deep learning models, have been proposed for classification tasks. This abundance poses the problem of choosing a suitable classification algorithm that improves classification/detection efficiency for a given network context, and of understanding why an algorithm performs well on some problems and not on others. In this paper, we present a data-centric analysis that provides a way to select a suitable classification algorithm. This data-centric approach offers a new viewpoint for exploring the relationships between classification performance and the facts and figures of data sets.
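
The abstract does not enumerate which "facts and figures" it relates to performance, so the sketch below simply computes a few common data-set descriptors and the cross-validated scores of several candidate classifiers side by side; both the descriptors and the model list are assumptions.

```python
# Illustrative sketch: relate simple data-set statistics ("facts and
# figures") to cross-validated performance of candidate classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=25, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

# A few data-centric descriptors of the set.
counts = np.bincount(y)
print(f"samples={len(y)}, features={X.shape[1]}, "
      f"imbalance={counts.max() / counts.min():.1f}:1")

# Same data, several candidate classifiers, one comparable score each.
for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("tree", DecisionTreeClassifier(random_state=0)),
                  ("forest", RandomForestClassifier(random_state=0))]:
    score = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean F1 = {score:.3f}")
```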

Design of Disease Prediction Algorithm Applying Machine Learning Time Series Prediction

  • Hye-Kyeong Ko
    • International Journal of Internet, Broadcasting and Communication, v.16 no.3, pp.321-328, 2024
  • This paper designs a disease prediction algorithm that diagnoses migraine in advance using machine learning-based time series analysis. The study uses patient data such as electroencephalogram (EEG) activity to identify the onset signals of migraine symptoms, so that patients can efficiently predict and manage their disease. We evaluate how accurately the proposed algorithm predicts migraine and how quickly it can detect onset for prevention purposes. A machine learning algorithm analyzes the time series of indicators used for migraine identification, quickly determining the onset-signaling symptoms from existing patient data given as input. The experimental results show that the proposed algorithm can accurately predict the occurrence of migraine.
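
The paper's features and model are not detailed in the abstract; a common pattern for this kind of onset prediction is to slide a window over the signal and classify each window, as sketched below on a synthetic EEG-like series. All signal parameters here are made up for illustration.

```python
# Illustrative sketch: sliding-window features over a synthetic
# EEG-like series, classified as "onset approaching" vs "baseline".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
WIN, N = 50, 5000

# Synthetic signal: baseline noise, with higher-variance bursts that
# stand in for pre-onset activity.
signal = rng.normal(0, 1, N)
onset = np.zeros(N, dtype=int)
for start in rng.choice(N - 200, size=8, replace=False):
    signal[start:start + 200] += rng.normal(0, 2, 200)
    onset[start:start + 200] = 1

# Window the series into (mean, std, peak-to-peak) feature rows.
rows, labels = [], []
for i in range(0, N - WIN, WIN):
    w = signal[i:i + WIN]
    rows.append([w.mean(), w.std(), w.max() - w.min()])
    labels.append(int(onset[i:i + WIN].mean() > 0.5))

X_tr, X_te, y_tr, y_te = train_test_split(rows, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"window-level accuracy: {clf.score(X_te, y_te):.2f}")
```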

Prediction on the Ratio of Added Value in Industry Using Forecasting Combination based on Machine Learning Method (머신러닝 기법 기반의 예측조합 방법을 활용한 산업 부가가치율 예측 연구)

  • Kim, Jeong-Woo
    • The Journal of the Korea Contents Association, v.20 no.12, pp.49-57, 2020
  • This study predicts the ratio of added value, which represents the competitiveness of South Korea's export industries, using various machine learning techniques. To enhance the accuracy and stability of prediction, a forecast combination technique was applied to the predicted values of the machine learning techniques. In particular, this study improved the efficiency of the prediction process by selecting key variables from among many with the recursive feature elimination (RFE) method and feeding them to the machine learning techniques. As a result, the value predicted by the forecast combination method was closer to the actual value than the values predicted by the individual machine learning techniques. In addition, the forecast combination method produced stable predictions, in contrast to the volatile values predicted by the individual techniques.
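
Both building blocks named here are standard: the sketch below selects key variables with scikit-learn's RFE and then averages the predictions of several regressors as a simple equal-weight forecast combination. The paper's actual combination weights, variable set, and model list are not reproduced.

```python
# Illustrative sketch: recursive feature elimination to pick key
# variables, then an equal-weight forecast combination of regressors.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=40, n_informative=8,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep the 8 most useful variables according to a linear base model.
selector = RFE(LinearRegression(), n_features_to_select=8).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# Fit several learners on the reduced inputs and average their forecasts.
models = [LinearRegression(),
          RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0)]
preds = [m.fit(X_tr_s, y_tr).predict(X_te_s) for m in models]
combo = np.mean(preds, axis=0)

rmse = np.sqrt(np.mean((combo - y_te) ** 2))
print(f"combined-forecast RMSE: {rmse:.1f}")
```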

Improved ensemble machine learning framework for seismic fragility analysis of concrete shear wall system

  • Sangwoo Lee;Shinyoung Kwag;Bu-seog Ju
    • Computers and Concrete, v.32 no.3, pp.313-326, 2023
  • The seismic safety of a shear wall structure can be assessed through seismic fragility analysis, which requires high computational cost to estimate seismic demands. Machine learning methods have accordingly been applied to such fragility analyses in recent years to reduce the cost of numerical analysis, but this remains a challenging task. This study therefore uses ensemble machine learning to present an improved framework for developing a more accurate seismic demand model than existing ones. To this end, a rank-based selection method is presented for determining an excellent model among several single machine learning models, together with an index that evaluates each model's degree of overfitting or underfitting to guide that selection. Furthermore, based on the selected single model, we propose a method to derive a more accurate ensemble model using bagging. The seismic demand model built with the proposed framework shows about 3-17% better prediction performance than the existing single machine learning models, and the seismic fragility obtained from the framework is more accurate than that of existing fragility methods.
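
A sketch of the framework's overall shape, under assumptions: rank single models by cross-validated score, inspect a plain train/test gap as a crude overfitting indicator (the paper defines its own index, which is not reproduced here), then bag the winner.

```python
# Illustrative sketch of the framework's shape: rank single models by
# CV score, check a train/test gap as a crude overfitting indicator,
# then bag the winner. The paper's own index is not reproduced.
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=400, n_features=12, noise=15.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = {"ridge": Ridge(),
              "knn": KNeighborsRegressor(),
              "forest": RandomForestRegressor(random_state=0)}

# Rank-based selection: highest mean cross-validated R^2 wins.
ranked = sorted(candidates.items(),
                key=lambda kv: -cross_val_score(kv[1], X_tr, y_tr, cv=5).mean())
best_name, best = ranked[0]

fit = best.fit(X_tr, y_tr)
gap = fit.score(X_tr, y_tr) - fit.score(X_te, y_te)  # crude overfit gauge
print(f"best single model: {best_name}, train-test R^2 gap = {gap:.3f}")

# Bag the selected model to stabilize the demand predictions.
bagged = BaggingRegressor(best, n_estimators=20, random_state=0)
print(f"bagged R^2: {bagged.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```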

Sensor Data Collection & Refining System for Machine Learning-Based Cloud (기계학습 기반의 클라우드를 위한 센서 데이터 수집 및 정제 시스템)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.2, pp.165-170, 2021
  • Machine learning has recently been applied to research in most areas. This is because machine learning does not depend on predetermined outcomes; instead, learning from input data builds an objective function that enables decisions about new data. Moreover, the amount of accumulated data affects the accuracy of the results, so the collected data is an important factor in machine learning. The proposed system is a convergence of a cloud system and local fog systems for service delivery: the cloud system provides machine learning and the infrastructure for services, while the fog system sits between the cloud and the user to collect and refine data. The data for this application is the sensor data generated by smart devices. The machine learning techniques applied in this system are the SVM algorithm for classification and an RNN algorithm for status recognition.
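
As a miniature of the cloud/fog division described above, the sketch below treats dropout filtering and scaling as the fog-side refinement step and SVM classification as the cloud-side step; the RNN status-recognition stage is omitted, and all sensor values, thresholds, and labels are illustrative.

```python
# Illustrative sketch: fog-side refinement (drop bad readings, scale),
# then cloud-side SVM classification. The RNN stage is omitted.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Raw sensor batch: temperature, humidity; some readings are corrupt.
raw = rng.normal([22.0, 45.0], [3.0, 10.0], size=(300, 2))
raw[rng.choice(300, 15, replace=False)] = -999.0  # sensor dropouts

# Fog step: discard dropout rows, then standardize the features.
clean = raw[(raw > -100).all(axis=1)]
X = StandardScaler().fit_transform(clean)
y = (clean[:, 0] > 22.0).astype(int)  # illustrative "hot room" label

# Cloud step: train the SVM classifier on the refined data.
clf = SVC(kernel="rbf").fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```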

Comparing automated and non-automated machine learning for autism spectrum disorders classification using facial images

  • Elshoky, Basma Ramdan Gamal;Younis, Eman M.G.;Ali, Abdelmgeid Amin;Ibrahim, Osman Ali Sadek
    • ETRI Journal, v.44 no.4, pp.613-623, 2022
  • Autism spectrum disorder (ASD) is a developmental disorder associated with cognitive and neurobehavioral impairments. It affects a person's behavior and performance, including verbal and non-verbal communication in social interactions. Early screening and diagnosis of ASD are essential for early educational planning and treatment, the provision of family support, and timely, appropriate medical support for the child. Developing automated methods for diagnosing ASD is thus becoming an essential need. Herein, we investigate various machine learning methods for building predictive models that diagnose ASD in children from facial images. To achieve this, we used a dataset containing 2936 facial images of autistic and typically developing children. We applied classical machine learning methods, such as support vector machines and random forests, as well as deep learning methods and a state-of-the-art approach, automated machine learning (AutoML). Comparing the results of these techniques, we found that AutoML achieved the highest performance, approximately 96% accuracy, via Hyperopt and tree-based pipeline optimization tool (TPOT) optimization. Furthermore, the AutoML methods let us easily find the best parameter settings without any human effort for feature engineering.
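
TPOT, the tree-based pipeline optimization tool named in the abstract, searches over preprocessing-plus-model pipelines by genetic programming. The sketch below shows typical classic-TPOT usage; the facial-image feature extraction is stubbed with random data rather than the paper's actual pipeline, so the reported score is meaningless apart from illustrating the API.

```python
# Illustrative sketch of classic TPOT usage; the facial-image feature
# extraction is stubbed with random data, not the paper's pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

rng = np.random.default_rng(4)

# Stand-in for feature vectors extracted from 2936 facial images.
X = rng.normal(size=(2936, 128))
y = rng.integers(0, 2, 2936)          # 1 = ASD, 0 = typical (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# TPOT evolves preprocessing + model pipelines automatically.
tpot = TPOTClassifier(generations=5, population_size=20,
                      random_state=0, verbosity=2)
tpot.fit(X_tr, y_tr)
print(f"held-out accuracy: {tpot.score(X_te, y_te):.3f}")
tpot.export("best_pipeline.py")       # dump the winning pipeline as code
```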