• Title/Summary/Keyword: Learning Control Algorithm

Development and Validation of a Machine Learning-based Differential Diagnosis Model for Patients with Mild Cognitive Impairment using Resting-State Quantitative EEG (안정 상태에서의 정량 뇌파를 이용한 기계학습 기반의 경도인지장애 환자의 감별 진단 모델 개발 및 검증)

  • Moon, Kiwook;Lim, Seungeui;Kim, Jinuk;Ha, Sang-Won;Lee, Kiwon
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.4
    • /
    • pp.185-192
    • /
    • 2022
  • Early detection of mild cognitive impairment (MCI) can help prevent progression to dementia. The purpose of this study was to design and validate a machine learning model that automatically performs differential diagnosis of patients with MCI and identifies characteristics of cognitive decline relative to a control group with normal cognition, using resting-state, eyes-closed quantitative electroencephalography (qEEG). First, a rectified signal was obtained through a preprocessing stage that takes the qEEG signal as input and removes noise with filtering and independent component analysis (ICA). Spectral and non-linear features were then extracted from the rectified signal, and the 3067 extracted features were fed to a linear support vector machine (SVM), a representative machine learning algorithm, to classify subjects as MCI patients or cognitively normal adults. In a classification analysis of 58 subjects with normal cognition and 80 patients with MCI, the SVM reached an accuracy of 86.2%. Compared with the normal cognition group, patients with MCI showed decreased alpha band power and increased high-beta band power in the frontal lobe, and decreased gamma band power in the occipito-parietal region. These results indicate that qEEG can serve as a meaningful biomarker for discriminating cognitive decline.
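
A minimal sketch of the classification stage described above, assuming preprocessing and feature extraction have already produced a 3067-dimensional feature vector per subject. The data below are synthetic stand-ins, and scikit-learn's linear SVM is one plausible realization, not the authors' exact pipeline.

```python
# Sketch of the linear-SVM classification stage; feature extraction
# (filtering, ICA, spectral and non-linear features) is assumed done.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(138, 3067))   # 58 normal + 80 MCI subjects (synthetic)
y = np.array([0] * 58 + [1] * 80)  # 0 = normal cognition, 1 = MCI

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```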

Prediction of Stunting Among Under-5 Children in Rwanda Using Machine Learning Techniques

  • Similien Ndagijimana;Ignace Habimana Kabano;Emmanuel Masabo;Jean Marie Ntaganda
    • Journal of Preventive Medicine and Public Health
    • /
    • v.56 no.1
    • /
    • pp.41-49
    • /
    • 2023
  • Objectives: Rwanda reported a stunting rate of 33% in 2020, down from 38% in 2015; however, stunting remains an issue, and globally malnutrition accounts for about 45% of child deaths. The best options for the early detection and treatment of stunting should be made a community policy priority, yet access to health services remains a challenge. Hence, this research aimed to develop a model for predicting stunting in Rwandan children. Methods: The Rwanda Demographic and Health Survey 2019-2020 was used as secondary data. Stratified 10-fold cross-validation was applied, and different machine learning classifiers were trained to predict stunting status. The prediction models were compared using several metrics, and the best model was chosen. Results: Based on the performance indicators of the candidate models, the best model was built with the gradient boosting classifier algorithm, with a training accuracy of 80.49%. From a confusion matrix, the test accuracy, sensitivity, specificity, and F1 score were calculated: the model classified stunting cases correctly 79.33% of the time, identified stunted children with 72.51% sensitivity, and categorized non-stunted children correctly with 94.49% specificity, with an area under the curve of 0.89. The model found the mother's height, household television ownership, the child's age, province, mother's education, birth weight, and size at birth to be the most important predictors of stunting status. Conclusions: Machine learning techniques can therefore be used in Rwanda to construct an accurate model that detects the early stages of stunting and surfaces the most predictive attributes, helping to prevent and control stunting in Rwandan children under 5.
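
As a rough illustration of the modeling step, the sketch below trains a gradient boosting classifier under stratified 10-fold cross-validation, as the paper describes. The features and labels are synthetic placeholders, not the DHS 2019-2020 records.

```python
# Gradient boosting with stratified 10-fold CV (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 7))             # e.g. mother's height, child's age, ...
y = (rng.random(1000) < 0.33).astype(int)  # 1 = stunted (synthetic prevalence)

clf = GradientBoostingClassifier(random_state=1)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"accuracy: {acc.mean():.3f}, AUC: {auc.mean():.3f}")
```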

Study on the Failure Diagnosis of Robot Joints Using Machine Learning (기계학습을 이용한 로봇 관절부 고장진단에 대한 연구)

  • Mi Jin Kim;Kyo Mun Ku;Jae Hong Shim;Hyo Young Kim;Kihyun Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.4
    • /
    • pp.113-118
    • /
    • 2023
  • Maintenance of semiconductor equipment processes is crucial for the continuous growth of the semiconductor market: the process must always be kept in optimal condition to ensure a smooth supply of its many parts, and the status of the robots that play a central role in the process must be monitored. Just as the sensory organs assess a person's physical condition, robots carry numerous sensors, and, as with human joints, faults tend to appear first in the joints that drive the robot. Therefore, a normal-state test bed and an abnormal-state test bed using an aged reducer were constructed to simulate the robot's joint drive. Sensors for vibration, torque, encoder position, and temperature were attached to diagnose the robot's failures accurately, and the test bed was built as an integrated system that collects and controls data simultaneously in real time. After configuring the user screen and building a database from the collected data, the characteristic values of normal and abnormal data were analyzed, and machine learning was performed using the K-Nearest Neighbors (KNN) algorithm. This approach yielded 94% accuracy in failure diagnosis, underscoring the reliability of both the test bed and the data it produced.
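
A hedged sketch of the diagnosis step: a KNN classifier separating normal from abnormal joint states. The feature values below are synthetic; in the paper they would come from the vibration, torque, encoder, and temperature sensors on the test bed.

```python
# KNN failure diagnosis on synthetic normal vs. degraded-reducer samples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 4))  # vib, torque, enc, temp
faulty = rng.normal(loc=1.5, scale=1.2, size=(200, 4))
X = np.vstack([normal, faulty])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = abnormal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"diagnosis accuracy: {accuracy_score(y_te, knn.predict(X_te)):.3f}")
```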


Calibration of Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work
    • /
    • v.10 no.4
    • /
    • pp.452-460
    • /
    • 2019
  • Background: Monitoring and control of PM2.5 are recognized as key to addressing the health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors has made it possible to introduce a number of portable PM2.5 monitors based on light scattering to the consumer market at an affordable price. The accuracy of light-scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, chiefly because it is easy to apply; its drawback, however, is a lack of accuracy. Methods: This study discusses the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method builds a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and web queries of reference measurement data available from a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, namely support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD's PM2.5 measurements. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, with a linear regression model as a reference. Results: Regressing the PMD output against PM2.5 concentration data obtained from the GAMS by web query proved effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R²) of 0.78 and a standard error of 5.0 µg/m³, corresponding to an 8% increase in R² and a 12% decrease in root mean square error compared with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity. The proposed calibration method does require the PMD to be located in the vicinity of the GAMS; however, as the number of PMDs participating in the sensor network grows, calibrated PMDs can serve as reference devices for nearby PMDs that require calibration, forming a calibration chain over the MQTT protocol. Conclusions: Calibrating a low-cost PMD through a PM2.5 sensor network built on the MQTT protocol and web queries of reference measurements from a GAMS significantly improves the device's accuracy and reliability, making practical use of low-cost PMDs possible.
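
The calibration regression might look like the following sketch, assuming paired samples of raw PMD readings and GAMS reference PM2.5 values have already been collected over MQTT and web query. The data generation and XGBoost hyperparameters are illustrative assumptions.

```python
# XGBoost regression mapping raw device output to reference concentrations.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
raw = rng.uniform(5, 80, size=(2000, 1))            # raw PMD PM2.5 readings
ref = 0.8 * raw[:, 0] + 4 + rng.normal(0, 5, 2000)  # GAMS reference values

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
r2 = cross_val_score(model, raw, ref,
                     cv=KFold(n_splits=5, shuffle=True, random_state=3),
                     scoring="r2")
print(f"mean R^2: {r2.mean():.2f}")
```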

Data anomaly detection and Data fusion based on Incremental Principal Component Analysis in Fog Computing

  • Yu, Xue-Yong;Guo, Xin-Hui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.3989-4006
    • /
    • 2020
  • Intelligent agriculture monitoring is based on the perception and analysis of environmental data, enabling monitoring of the production environment and control of environmental regulation equipment. As the scale of such applications continues to expand, a large amount of data is generated at the perception layer and uploaded to cloud services, creating challenges of insufficient bandwidth and processing capacity. This paper proposes a fog-based hybrid architecture that combines offline and real-time analysis to enable real-time data processing on resource-constrained IoT devices. Furthermore, we propose a data processing algorithm based on incremental principal component analysis, which achieves dimensionality reduction and on-line updating of the principal components. We also introduce the squared prediction error (SPE) and detect data anomalies by combining the SPE value with a data fusion algorithm. To ensure the accuracy and effectiveness of the algorithm, we design a regular-SPE hybrid model update strategy, which lets the principal components be updated on demand when data anomalies are found; this strategy also significantly curbs the growth in resource consumption caused by the data analysis architecture. Simulations based on practical datasets confirm that the proposed algorithm can perform data fusion and exception handling in real time on resource-constrained devices, and that our model update strategy reduces overall system resource consumption while preserving the algorithm's accuracy.
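
A compact sketch of the core idea: incremental PCA updated in mini-batches, with the squared prediction error (SPE) of each sample against its PCA reconstruction used as the anomaly score. The sensor stream is synthetic, and the threshold rule and regular-SPE hybrid update policy are simplified here.

```python
# Incremental PCA with SPE-based anomaly detection (simplified sketch).
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(4)
stream = rng.normal(size=(500, 8))  # sensor readings from the edge nodes
stream[400] += 8.0                  # inject one anomalous sample

ipca = IncrementalPCA(n_components=3, batch_size=100)
for batch in np.array_split(stream[:300], 3):  # initial model from clean data
    ipca.partial_fit(batch)

recon = ipca.inverse_transform(ipca.transform(stream))
spe = np.sum((stream - recon) ** 2, axis=1)    # squared prediction error
threshold = spe[:300].mean() + 3 * spe[:300].std()
print("anomalous sample indices:", np.where(spe > threshold)[0])
```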

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.243-250
    • /
    • 2004
  • Monitoring is used to check whether a real-time system delivers its service on time. Monitoring for real-time systems generally focuses on inspecting the current status of the system; to support stable performance, however, a monitor needs not only a view of the current status of real-time processes but also the ability to predict their executions. Legacy prediction models have several limitations when applied to real-time monitoring: first, they perform a static prediction only after a real-time process has finished; second, they require a statistical pre-analysis before prediction; third, their transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. The model eliminates unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through trend analysis of past execution data, and, most importantly, it is designed to perform dynamic prediction while a real-time process is still executing. Experiments show that judgment accuracy exceeds 80% when the training set holds at least 10 samples, and that, for multi-level prediction, the prediction error is minimized once the number of executions exceeds the size of the training set. The model is limited in that it uses the simplest available learning algorithm and does not consider a multi-regional space model managing CPU, memory, and I/O data. The proposed learning-based execution prediction model is applicable to areas related to real-time monitoring and control.
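
The abstract does not spell out the learning algorithm, so the sketch below is only an assumption in that spirit: a sliding-window predictor that fits a linear trend to the most recent execution times (a window of 10, echoing the paper's training-set finding) and extrapolates the next one.

```python
# Illustrative sliding-window trend predictor (not the authors' exact model).
from collections import deque

class ExecutionPredictor:
    def __init__(self, window: int = 10):
        self.history = deque(maxlen=window)  # recent execution times (ms)

    def observe(self, exec_time_ms: float) -> None:
        self.history.append(exec_time_ms)

    def predict_next(self) -> float:
        n = len(self.history)
        if n < 2:
            return self.history[-1] if self.history else 0.0
        xs = range(n)
        mean_x, mean_y = (n - 1) / 2, sum(self.history) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, self.history))
        var = sum((x - mean_x) ** 2 for x in xs)
        slope = cov / var
        return mean_y + slope * (n - mean_x)  # extrapolate one step ahead

pred = ExecutionPredictor(window=10)
for t in [5.0, 5.2, 5.1, 5.4, 5.6]:
    pred.observe(t)
print(f"predicted next execution time: {pred.predict_next():.2f} ms")
```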

Noise-tolerant Image Restoration with Similarity-learned Fuzzy Association Memory

  • Park, Choong Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.3
    • /
    • pp.51-55
    • /
    • 2020
  • In this paper, an improved FAM (Fuzzy Associative Memory) is proposed by adopting similarity learning in the existing FAM used for image restoration. Image restoration refers to recovering the latent clean image from a noise-corrupted version. In serious applications such as face recognition, this process should be noise-tolerant, robust, fast, and scalable. The existing FAM is a simple single-layer neural network applicable to this domain thanks to its robust fuzzy control, but it suffers from low capacity in real-world applications. In the proposed method, a similarity measure is applied to the connection strengths of the FAM structure so as to minimize the root mean square error between the recovered and the original image. The efficacy of the proposed algorithm is verified in our experiments by a significantly low error magnitude under random noise.
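
As a toy illustration of FAM-style recall, the sketch below uses classic correlation-minimum encoding and max-min composition; the paper's similarity-learned connection strengths are not reproduced here, only the general FAM mechanics.

```python
# Toy fuzzy associative memory: correlation-minimum encoding + max-min recall.
import numpy as np

def fam_encode(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # W[i, j] = min(x[i], y[j])  (correlation-minimum encoding)
    return np.minimum.outer(x, y)

def fam_recall(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    # y[j] = max_i min(x[i], W[i, j])  (max-min composition)
    return np.max(np.minimum(x[:, None], W), axis=0)

clean = np.array([0.1, 0.9, 0.4, 0.7])  # membership values of a clean patch
W = fam_encode(clean, clean)            # autoassociative storage
noisy = np.clip(clean + np.random.default_rng(5).normal(0, 0.1, 4), 0, 1)
print("restored patch:", fam_recall(noisy, W))
```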

Using Faster-R-CNN to Improve the Detection Efficiency of Workpiece Irregular Defects

  • Liu, Zhao;Li, Yan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.625-627
    • /
    • 2022
  • In modern industrial production, the traditional technology management mode faces problems such as low product qualification rates and high application costs. This study proposes an improved deep-learning-based method for workpiece defect detection that controls application cost while improving the detection efficiency for irregular defects. Building on a survey of current deep learning applications, the paper uses an improved Faster R-CNN network structure as the core detection algorithm to automatically locate and classify defect areas on the workpiece. First, the robustness of the model was improved by appropriately changing the depth and number of channels of the backbone network, and the hyperparameters of the improved model were tuned. Deformable convolutions were then added to improve the detection of irregular defects. The final experiments show that the method's mean average precision (mAP) is 4.5% higher than that of the compared methods. The model with anchor sizes (65, 129, 257, 519) and aspect ratios (0.2, 0.5, 1, 1) achieves the highest defect recognition rate, with a detection accuracy of 93.88%.
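
The anchor configuration reported above can be expressed with torchvision's Faster R-CNN API, as in the hedged sketch below. The MobileNetV2 backbone and two-class head are stand-ins for the paper's modified backbone, and the deformable convolutions are not shown.

```python
# Faster R-CNN with the paper's anchor sizes/aspect ratios (backbone is a stand-in).
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280  # FasterRCNN requires this attribute

anchor_generator = AnchorGenerator(
    sizes=((65, 129, 257, 519),),          # from the paper's best configuration
    aspect_ratios=((0.2, 0.5, 1.0, 1.0),),
)
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7,
                                sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2,  # background + defect (illustrative)
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])  # one dummy image
print(detections[0]["boxes"].shape)
```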

A study on Improving the Performance of Anti - Drone Systems using AI (인공지능(AI)을 활용한 드론방어체계 성능향상 방안에 관한 연구)

  • Hae Chul Ma;Jong Chan Moon;Jae Yong Park;Su Han Lee;Hyuk Jin Kwon
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.19 no.2
    • /
    • pp.126-134
    • /
    • 2023
  • Drones are emerging as a new security threat, and countries worldwide are working to counter them. Detection and identification are the most difficult and important parts of an anti-drone system. Existing detection and identification methods each have strengths and weaknesses, so they must be operated in a complementary fashion. Detection and identification performance in anti-drone systems can be improved through artificial intelligence, because AI can quickly analyze differences too small for humans to perceive. This paper outlines three ways to utilize AI. First, reinforcement-learning-based physical control can reduce the noise and blur generated when an optical camera tracks a drone, improving tracking stability. Second, the latest NeRF algorithms can be used to mitigate the shortage of enemy-drone data. Third, a data network should be built to collect and manage data efficiently; in addition, model performance can be improved by regularly generating AI training data.
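
As a purely illustrative toy (an assumption, not the paper's system), the sketch below shows the flavor of reinforcement-learning-based tracking control: a 1-D gimbal policy trained with REINFORCE to keep a target centered, with small tracking error rewarded.

```python
# Toy REINFORCE loop for 1-D camera tracking control.
import torch
import torch.nn as nn

# Policy maps the tracking error to 3 actions: pan left / stay / pan right.
policy = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 3))
opt = torch.optim.Adam(policy.parameters(), lr=0.01)

for episode in range(200):
    err = torch.rand(1) * 2 - 1  # target offset from image center
    log_probs, rewards = [], []
    for _ in range(20):
        dist = torch.distributions.Categorical(logits=policy(err))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        # Move the gimbal; a little noise stands in for platform jitter.
        err = err + 0.1 * (action.item() - 1) + 0.02 * (torch.rand(1) - 0.5)
        rewards.append(-err.abs().item())  # smaller error -> higher reward
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)  # reward-to-go
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final absolute tracking error: {err.abs().item():.3f}")
```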

Recommendation Model for Battlefield Analysis based on Siamese Network

  • Suh, Geewon;Shin, Yukyung;Jin, Soyeon;Lee, Woosin;Ahn, Jongchul;Suh, Changho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.1-8
    • /
    • 2023
  • In this paper, we propose a training method for a recommendation model that analyzes the battlefield situation and recommends a hypothesis suited to the current situation. The proposed model learns which hypothesis best analyzes the current battlefield situation from label data given as preferences determined by comparing two hypotheses. Our model is based on a Siamese neural network architecture, which applies the same weights to two different input vectors: the model takes two hypotheses as input and learns the priority between them while sharing weights across the twin networks. In addition, a score is assigned to each hypothesis through the proposed post-processing ranking algorithm, and high-scoring hypotheses can be recommended to the commander in charge.
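
A hedged sketch of the shared-weight setup: both hypothesis vectors pass through the same encoder, and a margin ranking loss (an assumption; the abstract does not name the exact loss) pushes the preferred hypothesis toward the higher score. Dimensions are illustrative.

```python
# Siamese (twin) scorer with shared weights and a ranking loss.
import torch
import torch.nn as nn

class HypothesisScorer(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                     nn.Linear(64, 1))  # scalar score

    def forward(self, h1: torch.Tensor, h2: torch.Tensor):
        return self.encoder(h1), self.encoder(h2)       # same weights twice

model = HypothesisScorer()
loss_fn = nn.MarginRankingLoss(margin=0.5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

h1, h2 = torch.randn(16, 32), torch.randn(16, 32)  # hypothesis feature pairs
pref = torch.ones(16)                              # +1: h1 is preferred
s1, s2 = model(h1, h2)
loss = loss_fn(s1.squeeze(1), s2.squeeze(1), pref)
opt.zero_grad(); loss.backward(); opt.step()
print(f"ranking loss: {loss.item():.3f}")
```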