• Title/Abstract/Keyword: Machine learning algorithm

Search results: 1,505 items

연속학습을 활용한 경량 온-디바이스 AI 기반 실시간 기계 결함 진단 시스템 설계 및 구현 (Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning)

  • 김영준;김태완;김수현;이성재;김태현
    • 대한임베디드공학회논문지 / Vol. 19, No. 3 / pp.151-158 / 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies did not consider the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system comprises a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the performance comparison, the replay-based algorithms outperformed the regularization-based algorithms, and the experience replay (ER) algorithm achieved the best diagnostic accuracy. We further tuned the number and length of the data samples stored in the ER algorithm's memory buffer to maximize its performance, and confirmed that performance increases when longer data samples are used. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. Our lightweight CNN model was also able to diagnose the fault type of a data sample within 3.76 ms on a Raspberry Pi 4B device.
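
The experience replay pattern the authors select is straightforward to sketch. Below is a minimal, hypothetical Python illustration of an ER-style memory buffer filled by reservoir sampling; the capacity, data shapes, and training-loop wiring are assumptions for illustration, not the paper's actual settings.

```python
# Minimal sketch of experience replay (ER) for continual learning.
# Buffer capacity and batch sizes are illustrative, not the paper's settings.
import random
import numpy as np

class ReplayBuffer:
    """Fixed-size memory buffer filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps every seen sample with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return np.stack(xs), np.array(ys)

# During training on new data, each mini-batch mixes current samples with a
# batch replayed from the buffer before the gradient step, e.g.:
#   x_mem, y_mem = buffer.sample(batch_size)
#   loss = criterion(model(concat(x_cur, x_mem)), concat(y_cur, y_mem))
```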

Deep Q-Network를 이용한 준능동 제어알고리즘 개발 (Development of Semi-Active Control Algorithm Using Deep Q-Network)

  • 김현수;강주원
    • 한국공간구조학회논문집 / Vol. 21, No. 1 / pp.79-86 / 2021
  • The control performance of a smart tuned mass damper (TMD) depends mainly on its control algorithm, and many control strategies have been proposed for semi-active control devices. Recently, machine learning has begun to be applied to the development of vibration control algorithms. In this study, reinforcement learning was employed to develop a semi-active control algorithm for a smart TMD composed of a magnetorheological (MR) damper. For this purpose, an 11-story building structure with a smart TMD was selected to construct the reinforcement learning environment, and a time history analysis of the example structure subjected to earthquake excitation was conducted during the learning procedure. Among various reinforcement learning algorithms, a deep Q-network (DQN) was used to build the learning agent; the command voltage sent to the MR damper is determined by the action produced by the DQN. Parametric studies on the DQN hyperparameters were performed through numerical simulations. After sufficient training iterations with appropriate hyperparameters, a DQN model was developed that can effectively control the smart TMD to reduce the seismic responses of the example structure.
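
As a rough illustration of the approach, the sketch below shows a small PyTorch Q-network mapping structural-response states to a discrete grid of MR damper command voltages, with epsilon-greedy action selection and a standard TD-target loss. The layer sizes, voltage grid, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a DQN agent that maps structural-response states to
# discrete MR damper command voltages. All settings are illustrative.
import torch
import torch.nn as nn

VOLTAGES = torch.linspace(0.0, 5.0, 11)  # candidate command voltages (assumed)

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def select_voltage(qnet, state, eps):
    """Epsilon-greedy action; returns the command voltage for the MR damper."""
    if torch.rand(()) < eps:
        a = torch.randint(len(VOLTAGES), ())
    else:
        a = qnet(state).argmax()
    return VOLTAGES[a], a

def td_loss(qnet, target_net, batch, gamma=0.99):
    s, a, r, s2 = batch  # reward r would penalize seismic responses
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s2).max(dim=1).values
    return nn.functional.mse_loss(q, r + gamma * q_next)
```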

Machine Learning Model to Predict Osteoporotic Spine with Hounsfield Units on Lumbar Computed Tomography

  • Nam, Kyoung Hyup;Seo, Il;Kim, Dong Hwan;Lee, Jae Il;Choi, Byung Kwan;Han, In Ho
    • Journal of Korean Neurosurgical Society / Vol. 62, No. 4 / pp.442-449 / 2019
  • Objective : Bone mineral density (BMD) is an important consideration during fusion surgery. Although dual X-ray absorptiometry is considered the gold standard for assessing BMD, quantitative computed tomography (QCT) provides more accurate data for spinal osteoporosis; however, QCT carries the disadvantages of additional radiation exposure and cost. The present study aimed to demonstrate the utility of artificial intelligence and machine learning for assessing osteoporosis using Hounsfield units (HU) from preoperative lumbar CT coupled with QCT data. Methods : We reviewed 70 patients who underwent both QCT and conventional lumbar CT for spine surgery. The T-scores of 198 lumbar vertebrae were assessed by QCT, and the HU of the vertebral body at the same level were measured on conventional CT via the picture archiving and communication system (PACS). A multiple regression algorithm was applied to predict the T-score from three independent variables (age, sex, and HU of the vertebral body on conventional CT), with the QCT T-score as the target. Next, a logistic regression algorithm was applied to classify vertebrae as osteoporotic or non-osteoporotic. TensorFlow and Python were used as the machine learning tools, and a TensorFlow user interface developed at our institution was used for easy code generation. Results : The multiple regression model estimated T-scores similar to the QCT data. HU showed results similar to QCT, with discordance in only one vertebra, a non-osteoporotic vertebra classified as osteoporotic. On the training set, the predictive model classified the lumbar vertebrae into two groups (osteoporotic vs. non-osteoporotic) with 88.0% accuracy. On a test set of 40 vertebrae, classification accuracy was 92.5% with a learning rate of 0.0001 (precision, 0.939; recall, 0.969; F1 score, 0.954; area under the curve, 0.900). Conclusion : This study presents a simple machine learning model applicable to spine research. The model can predict the T-score and identify osteoporotic vertebrae solely by measuring the HU of conventional CT, which would help spine surgeons avoid underestimating osteoporosis preoperatively. If applied to a larger dataset, the predictive accuracy of the model should increase further. We propose that machine learning is an important modality for medical research.
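
The two-stage modeling idea, regression for the T-score followed by classification of osteoporosis, can be sketched compactly. The snippet below uses scikit-learn rather than the authors' TensorFlow setup, and the data are synthetic placeholders shaped like the paper's feature set (age, sex, HU); the T ≤ -2.5 threshold is the standard WHO osteoporosis cutoff.

```python
# Sketch of the two-stage idea with scikit-learn (the authors used TensorFlow):
# (1) regress the QCT T-score on age, sex, and CT Hounsfield units, then
# (2) classify vertebrae as osteoporotic. The data below are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(40, 85, 198),    # age
    rng.integers(0, 2, 198),      # sex (0/1)
    rng.uniform(50, 250, 198),    # HU of the vertebral body
])
# Synthetic target loosely tying the T-score to HU, for illustration only.
t_score = 0.015 * X[:, 2] - 4.0 + rng.normal(0, 0.3, 198)

reg = LinearRegression().fit(X, t_score)      # stage 1: predict the T-score
labels = (t_score <= -2.5).astype(int)        # WHO osteoporosis cutoff
clf = LogisticRegression().fit(X, labels)     # stage 2: classify the vertebra
print(reg.predict(X[:3]), clf.predict_proba(X[:3])[:, 1])
```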

Design of a ParamHub for Machine Learning in a Distributed Cloud Environment

  • Su-Yeon Kim;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 2 / pp.161-168 / 2024
  • As the size of big data models grows, distributed training is emerging as an essential element of large-scale machine learning tasks. In this paper, we propose ParamHub for distributed data training. During the training process, this agent uses the provided data to adjust various aspects of the model's parameters, such as the model structure, learning algorithm, hyperparameters, and bias, aiming to minimize the error between the model's predictions and the actual values. Furthermore, it operates autonomously, collecting and updating data in a distributed environment, thereby reducing the load-balancing burden that arises in a centralized system. Through communication between agents, resource management and learning processes can be coordinated, enabling efficient management of distributed data and resources. This approach enhances the scalability and stability of distributed machine learning systems while providing the flexibility to be applied in various learning environments.
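
The hub-agent behavior described above resembles a parameter-server pattern. The sketch below is a purely illustrative, single-process stand-in, since the paper publishes no API: workers push gradient updates, the hub averages them and applies an SGD step, and workers pull the refreshed parameters.

```python
# Illustrative parameter-hub agent that aggregates worker gradients by
# averaging; all names and the API shape are assumptions for illustration.
import numpy as np

class ParamHub:
    def __init__(self, init_params):
        self.params = {k: v.copy() for k, v in init_params.items()}
        self.pending = []

    def push(self, worker_update):
        """Workers push locally computed gradients (dict of arrays)."""
        self.pending.append(worker_update)

    def aggregate(self, lr=0.01):
        """Average the pending gradients and apply one SGD step."""
        if not self.pending:
            return
        for k in self.params:
            grad = np.mean([u[k] for u in self.pending], axis=0)
            self.params[k] -= lr * grad
        self.pending.clear()

    def pull(self):
        """Workers pull the latest parameters."""
        return {k: v.copy() for k, v in self.params.items()}
```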

IoT-based systemic lupus erythematosus prediction model using hybrid genetic algorithm integrated with ANN

  • Edison Prabhu K;Surendran D
    • ETRI Journal / Vol. 45, No. 4 / pp.594-602 / 2023
  • The Internet of Things (IoT) is commonly employed to detect various kinds of diseases in the health sector. Systemic lupus erythematosus (SLE) is an autoimmune illness that occurs when the body's immune system attacks its own connective tissues and organs. Because of the complicated interconnections between disease-trigger exposure levels over time, humans have difficulty predicting SLE symptom severity. To solve this issue, an effective automated machine learning model that ingests IoT data was created to forecast SLE symptoms. IoT offers several advantages in the healthcare industry, including interoperability, information exchange, machine-to-machine networking, and data transmission. The SLE symptom-prediction model was designed by integrating the hybrid marine predator algorithm and atom search optimization with an artificial neural network. The network is trained on the Gene Expression Omnibus dataset, and patient data are used as input to predict symptoms. The experimental results demonstrate that the proposed model's accuracy, at approximately 99.70%, is higher than that of state-of-the-art prediction models.
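
The hybrid marine predator / atom search optimizer itself is not reproduced here; the sketch below substitutes a simple evolutionary search to show the general pattern of metaheuristic weight tuning for a small ANN, which is the role those optimizers play in the paper. All network sizes and settings are illustrative.

```python
# Generic metaheuristic weight tuning for a tiny ANN; a simple evolutionary
# search stands in for the paper's hybrid optimizer. Purely illustrative.
import numpy as np

def ann_forward(w, X, hidden=8):
    """One-hidden-layer network; w is a flat weight vector."""
    n_in = X.shape[1]
    W1 = w[: n_in * hidden].reshape(n_in, hidden)
    W2 = w[n_in * hidden:].reshape(hidden, 1)
    return 1 / (1 + np.exp(-np.tanh(X @ W1) @ W2))  # sigmoid output

def evolve_weights(X, y, pop=30, gens=100, sigma=0.1, seed=0):
    """Keep the best candidate found while perturbing it each generation."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * 8 + 8
    best, best_loss = rng.normal(size=dim), np.inf
    for _ in range(gens):
        for w in best + sigma * rng.normal(size=(pop, dim)):
            loss = np.mean((ann_forward(w, X).ravel() - y) ** 2)
            if loss < best_loss:
                best, best_loss = w, loss
    return best, best_loss
```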

Comparative Analysis of Machine Learning Techniques for IoT Anomaly Detection Using the NSL-KDD Dataset

  • Good, Zaryn;Farag, Waleed;Wu, Xin-Wen;Ezekiel, Soundararajan;Balega, Maria;May, Franklin;Deak, Alicia
    • International Journal of Computer Science & Network Security / Vol. 23, No. 1 / pp.46-52 / 2023
  • With billions of IoT (Internet of Things) devices populating various emerging applications across the world, detecting anomalies on these devices has become incredibly important. Advanced Intrusion Detection Systems (IDS) are trained to detect abnormal network traffic, and machine learning (ML) algorithms are used to create the detection models. In this paper, the NSL-KDD dataset was adopted to comparatively study the performance and efficiency of IoT anomaly detection models. The dataset was developed for various research purposes and is especially useful for anomaly detection. The data were used with typical machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), Support Vector Machines (SVM), and Deep Convolutional Neural Networks (DCNN), to identify and classify anomalies present within IoT applications. Our results show that the XGBoost algorithm outperformed both the SVM and DCNN algorithms, achieving the highest accuracy. Each algorithm was assessed based on accuracy, precision, recall, and F1 score, and we also obtained interesting results on the execution time each algorithm took when running the anomaly detection. Specifically, the XGBoost algorithm was 425.53% faster than the SVM algorithm and 2,075.49% faster than the DCNN algorithm. According to our experimental testing, XGBoost is the most accurate and efficient method.
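
A minimal version of the benchmarking protocol, measuring accuracy, precision, recall, F1, and wall-clock time for each model, might look like the sketch below. It uses scikit-learn and xgboost on placeholder data; the NSL-KDD loading and preprocessing steps are omitted.

```python
# Hedged sketch of the comparison protocol (metrics plus wall-clock time);
# the placeholder data stand in for preprocessed NSL-KDD records.
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from xgboost import XGBClassifier

def benchmark(model, X_tr, y_tr, X_te, y_te):
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    elapsed = time.perf_counter() - t0
    acc = accuracy_score(y_te, pred)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    return acc, p, r, f1, elapsed

rng = np.random.default_rng(0)                       # placeholder data only
X = rng.normal(size=(1000, 41))
y = rng.integers(0, 2, 1000)
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]
for name, m in [("XGBoost", XGBClassifier(n_estimators=100)),
                ("SVM", SVC(kernel="rbf"))]:
    print(name, benchmark(m, X_tr, y_tr, X_te, y_te))
```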

정보 유출 탐지를 위한 머신 러닝 기반 내부자 행위 분석 연구 (A Study on the Insider Behavior Analysis Using Machine Learning for Detecting Information Leakage)

  • 고장혁;이동호
    • 디지털산업정보학회논문지 / Vol. 13, No. 2 / pp.1-11 / 2017
  • In this paper, we design and implement the PADIL (Prediction And Detection of Information Leakage) system, which predicts and detects insider information leakage behavior by analyzing network traffic and applying a variety of machine learning methods. We defined a five-level information leakage model (Reconnaissance, Scanning, Access and Escalation, Exfiltration, Obfuscation) by referring to the cyber kill-chain model. To perform machine learning for information leakage detection, the PADIL system extracts various features by analyzing network traffic, derives behavioral features by comparing them with personal profile information, and extracts information-leakage-level features. We tested various machine learning methods; the decision tree algorithm showed excellent performance in information leakage detection, and we showed that performance can be further improved by fine-grained feature selection.
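
The detection step reduces to training a decision tree on the extracted feature matrix. The sketch below is a minimal scikit-learn stand-in with placeholder features, since the paper's traffic- and profile-derived feature extraction is not published.

```python
# Minimal sketch of the detection step with a decision tree; the traffic and
# profile feature extraction is abstracted into a placeholder feature matrix.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))   # placeholder behavioral features
y = rng.integers(0, 2, 500)      # 1 = leakage-stage behavior (placeholder)
clf = DecisionTreeClassifier(max_depth=5, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
# Feature selection (keeping only the most informative behavioral features)
# can be layered on top, which the authors report improves performance.
```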

Pipeline wall thinning rate prediction model based on machine learning

  • Moon, Seongin;Kim, Kyungmo;Lee, Gyeong-Geun;Yu, Yongkyun;Kim, Dong-Jin
    • Nuclear Engineering and Technology / Vol. 53, No. 12 / pp.4060-4066 / 2021
  • Flow-accelerated corrosion (FAC) of carbon steel piping is a significant problem in nuclear power plants. The basic process of FAC is currently understood relatively well; however, the accuracy of models that predict the wall-thinning rate under an FAC environment is not reliable. Herein, we propose a methodology for constructing pipe wall-thinning rate prediction models using artificial neural networks and a convolutional neural network, confined to straight pipes without geometric changes. Furthermore, a methodology for generating training data is proposed to efficiently train the neural network when developing a machine learning-based FAC prediction model. We conclude that machine learning can be used to construct pipe wall-thinning rate prediction models and to optimize the number of training datasets. The proposed methodology can be applied to efficiently generate a large dataset from FAC tests to develop a wall-thinning rate prediction model for real situations.
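
A regression network of the kind described can be sketched in a few lines. The snippet below trains a small scikit-learn MLP on synthetic placeholder inputs standing in for flow and chemistry variables; the architecture and data are illustrative assumptions.

```python
# Sketch of a small regression network for the wall-thinning rate, assuming
# flow/chemistry variables as inputs; architecture and data are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 5))   # e.g., velocity, pH, temperature, ... (assumed)
y = X @ np.array([0.5, -0.2, 0.8, 0.1, 0.3]) + rng.normal(0, 0.05, 300)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)
print(model.score(X, y))         # R^2 on the training data
```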

자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정 (Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification)

  • 김영남
    • 대한상한금궤의학회지 / Vol. 14, No. 1 / pp.41-50 / 2022
  • Objective : The purpose of this study is to identify the machine learning model best suited to classifying the Shanghanlun diagnostic system using natural language processing (NLP). Methods : A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. The data were preprocessed using the Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms, and the accuracies of the models were compared. Results : Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest showed an accuracy of 0.804, decision tree showed an accuracy of 0.745, and lasso regression showed an accuracy of 0.608. Conclusions : Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.
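
The model-comparison loop can be sketched with scikit-learn. In the snippet below, the Korean tokenizer (KoNLPy's Okt, formerly the Twitter tokenizer) is stubbed out with whitespace splitting, and the corpus is placeholder text; only the comparison pattern mirrors the study.

```python
# Sketch of the model-comparison loop; the Korean tokenizer is stubbed out
# and the corpus below is placeholder text, not clauses from the source texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeClassifier, LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def tokenize(text):
    return text.split()   # stand-in for KoNLPy's Okt().morphs(text)

docs = ["placeholder clause one", "placeholder clause two"] * 50
labels = [0, 1] * 50

for clf in [RidgeClassifier(), MultinomialNB(), LogisticRegression()]:
    pipe = make_pipeline(TfidfVectorizer(tokenizer=tokenize), clf)
    print(type(clf).__name__, cross_val_score(pipe, docs, labels, cv=5).mean())
```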

머신러닝을 이용한 에너지 선택적 유방촬영의 진단 정확도 향상에 관한 연구 (A Feasibility Study on the Improvement of Diagnostic Accuracy for Energy-selective Digital Mammography using Machine Learning)

  • 엄지수;이승완;김번영
    • 대한방사선기술학회지:방사선기술과학 / Vol. 42, No. 1 / pp.9-17 / 2019
  • Although digital mammography is a representative method for breast cancer detection, it has limitations in detecting and classifying breast tumors due to superimposed structures. Machine learning, a branch of artificial intelligence, is a method of analyzing large amounts of data with complex algorithms to recognize patterns and make predictions. In this study, we proposed a technique to improve the diagnostic accuracy of energy-selective mammography by training a machine learning algorithm on dual-energy measurements. Dual-energy images obtained from a photon-counting detector were used as the input data for the machine learning algorithms, and we analyzed the accuracy of the predicted tumor thickness to verify the algorithms. The results showed that the classification accuracy of tumor thickness was above 95% and improved as the amount of input data increased. Therefore, we expect that the diagnostic accuracy of energy-selective mammography can be improved by using machine learning.
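
The core idea, using the low- and high-energy measurements as a two-feature input to a classifier of tumor thickness, can be sketched as below. The data generation and model are illustrative assumptions, not the paper's detector physics.

```python
# Sketch: low/high-energy attenuation measurements as two input features for
# a tumor-thickness classifier. Data generation and model are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
thickness = rng.integers(0, 4, 2000)                  # 4 thickness classes
low_e = 0.3 * thickness + rng.normal(0, 0.05, 2000)   # low-energy signal
high_e = 0.1 * thickness + rng.normal(0, 0.05, 2000)  # high-energy signal
X = np.column_stack([low_e, high_e])

X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # accuracy; rises as training data grows
```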