• Title/Summary/Keyword: Learning Control Algorithm


Calibration of Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work / v.10 no.4 / pp.452-460 / 2019
  • Background: Monitoring and control of PM2.5 are recognized as key to addressing health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors has made it possible to introduce a number of portable, light-scattering-based PM2.5 monitors to the consumer market at an affordable price. The accuracy of light-scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, particularly because of its ease of application; its drawback, however, is a lack of accuracy. Methods: This study discussed the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method is based on construction of a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and web query of reference measurement data available at a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, namely support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD measurements of PM2.5. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, with a linear regression model used as a reference. Results: Based on the performance of the ML algorithms used, regression of the PMD output to PM2.5 concentration data obtained from the GAMS through web query was effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R2) of 0.78 and a standard error of 5.0 ㎍/㎥, corresponding to an 8% increase in R2 and a 12% decrease in root mean square error compared with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity. The proposed calibration method is limited in that the PMD must be located in the vicinity of the GAMS. As the number of PMDs participating in the sensor network increases, however, calibrated PMDs can serve as reference devices for nearby PMDs that require calibration, forming a calibration chain through the MQTT protocol. Conclusions: Calibration of a low-cost PMD, based on construction of a PM2.5 sensor network using the MQTT protocol and web query of reference measurement data available at a GAMS, significantly improves the accuracy and reliability of the PMD, thereby making practical use of low-cost PMDs possible.
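
A minimal illustrative sketch (not the authors' code) of the general calibration idea: regressing raw low-cost sensor output, plus auxiliary measurements, onto reference PM2.5 values with a gradient boosting model and K-fold cross-validation. All variable names and the synthetic data are assumptions; the paper's extreme gradient boosting and stratified K-fold are simplified here to scikit-learn's GradientBoostingRegressor and plain KFold.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 500                                    # hourly samples (~3 weeks of data)
pmd_raw = rng.gamma(2.0, 10.0, n)          # raw PMD light-scattering output (assumed)
humidity = rng.uniform(20, 90, n)          # relative humidity (%)
temperature = rng.uniform(-5, 30, n)       # air temperature (deg C)
# Synthetic "reference" PM2.5: a nonlinear, humidity-dependent response plus noise
gams_pm25 = 0.6 * pmd_raw * (1 + 0.004 * humidity) + rng.normal(0, 3, n)

X = np.column_stack([pmd_raw, humidity, temperature])
y = gams_pm25

r2s, rmses = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    r2s.append(r2_score(y[test_idx], pred))
    rmses.append(mean_squared_error(y[test_idx], pred) ** 0.5)

print(f"mean R2 = {np.mean(r2s):.2f}, mean RMSE = {np.mean(rmses):.1f} ug/m3")
```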

Data anomaly detection and Data fusion based on Incremental Principal Component Analysis in Fog Computing

  • Yu, Xue-Yong;Guo, Xin-Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.10 / pp.3989-4006 / 2020
  • Intelligent agriculture monitoring is based on the perception and analysis of environmental data, which enables monitoring of the production environment and control of environmental regulation equipment. As the scale of such applications continues to expand, a large amount of data is generated at the perception layer and uploaded to cloud services, which brings challenges of insufficient bandwidth and processing capacity. A fog-based offline and real-time hybrid data analysis architecture is proposed in this paper, which combines offline and real-time analysis to enable real-time data processing on resource-constrained IoT devices. Furthermore, we propose a data processing algorithm based on incremental principal component analysis, which can achieve data dimensionality reduction and update of the principal components. We also introduce the concept of the Squared Prediction Error (SPE) value and realize anomaly detection in the data by combining the SPE value with a data fusion algorithm. To ensure the accuracy and effectiveness of the algorithm, we design a regular-SPE hybrid model update strategy, which allows the principal components to be updated on demand when data anomalies are found. In addition, this strategy can significantly reduce the growth in resource consumption caused by the data analysis architecture. Simulations based on practical datasets have confirmed that the proposed algorithm can perform data fusion and exception handling in real time on resource-constrained devices; our model update strategy can reduce overall system resource consumption while ensuring the accuracy of the algorithm.
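
A minimal sketch of the core mechanism described above, assuming scikit-learn's IncrementalPCA and a hand-rolled SPE threshold. The data, batch sizes, threshold rule, and on-demand update are illustrative assumptions, not the paper's exact regular-SPE strategy.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=2)

def spe(batch, model):
    """Squared prediction error: residual after projecting onto the principal components."""
    reconstructed = model.inverse_transform(model.transform(batch))
    return np.sum((batch - reconstructed) ** 2, axis=1)

rng = np.random.default_rng(1)
threshold = None
for step in range(20):                       # simulated stream of sensor batches
    batch = rng.normal(0, 1, (50, 5))        # 5 sensor channels
    batch[:, 1] = 0.8 * batch[:, 0]          # introduce correlation between channels
    if step == 15:
        batch[0] += 8.0                      # inject an anomalous sample
    if threshold is not None:
        errors = spe(batch, ipca)
        anomalies = np.where(errors > threshold)[0]
        if anomalies.size:
            print(f"step {step}: anomalous rows {anomalies.tolist()}")
            ipca.partial_fit(batch)          # on-demand principal-component update
            continue
    ipca.partial_fit(batch)                  # routine incremental update
    threshold = np.percentile(spe(batch, ipca), 99) * 1.5
```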

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA / v.11A no.4 / pp.243-250 / 2004
  • Monitoring is used to determine whether a real-time system provides its services on time. Generally, monitoring for real-time systems focuses on investigating the current status of the system. To support stable performance of a real-time system, however, monitoring should provide not only the current status of real-time processes but also a prediction of their executions. The legacy prediction model has several limitations when applied to real-time monitoring. First, it performs a static prediction only after a real-time process has finished. Second, it requires a statistical pre-analysis before prediction. Third, its transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. This model eliminates unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through trend analysis of past execution data. Above all, the model is designed to support dynamic prediction performed during a real-time process' execution. Experimental results show that the judgment accuracy exceeds 80% when the size of the training set is 10 or more, and, for multi-level prediction, that the prediction difference is minimized when the number of executions is larger than the size of the training set. The proposed execution prediction model has the limitations that it uses the simplest learning algorithm and does not consider a multi-regional space model managing CPU, memory, and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be used in areas related to real-time monitoring and control.
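
A toy sketch of learning-based execution prediction from recent execution data. The abstract does not specify the exact learning algorithm, so the sliding-window least-squares trend predictor below is purely an illustrative assumption; the window size mirrors the training-set size of 10 mentioned above.

```python
import numpy as np
from collections import deque

class ExecutionPredictor:
    def __init__(self, window=10):          # training-set size (>= 10 suggested above)
        self.history = deque(maxlen=window)

    def observe(self, exec_time_ms):
        """Record the latest measured execution time of the real-time process."""
        self.history.append(exec_time_ms)

    def predict_next(self):
        """Extrapolate one step ahead from a least-squares trend over the window."""
        if len(self.history) < 2:
            return self.history[-1] if self.history else None
        x = np.arange(len(self.history))
        slope, intercept = np.polyfit(x, np.array(self.history), deg=1)
        return slope * len(self.history) + intercept

predictor = ExecutionPredictor(window=10)
for t in [5.1, 5.3, 5.2, 5.6, 5.8, 5.7, 6.0, 6.1, 6.3, 6.2]:
    predictor.observe(t)
print(f"predicted next execution time: {predictor.predict_next():.2f} ms")
```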

Noise-tolerant Image Restoration with Similarity-learned Fuzzy Association Memory

  • Park, Choong Shik
    • Journal of the Korea Society of Computer and Information / v.25 no.3 / pp.51-55 / 2020
  • In this paper, an improved FAM is proposed by adopting similarity learning in the existing FAM (Fuzzy Associative Memory) used in image restoration. Image restoration refers to the recovery of a latent clean image from its noise-corrupted version. In serious applications such as face recognition, this process should be noise-tolerant, robust, fast, and scalable. The existing FAM is a simple single-layered neural network that can be applied to this domain with its robust fuzzy control, but it suffers from a low-capacity problem in real-world applications. The proposed similarity measure is applied to the connection strengths of the FAM structure to minimize the root mean square error between the recovered and the original image. The efficacy of the proposed algorithm is verified by the significantly low error magnitude obtained under random noise in our experiments.
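
For context, a minimal sketch of the classical (unimproved) FAM that the paper builds on: correlation-minimum encoding and max-min recall, with an RMSE check against a noise-corrupted input. The similarity-learning modification itself is not reproduced here, and the tiny synthetic "image" of fuzzy membership values is an assumption.

```python
import numpy as np

def fam_encode(x, y):
    """Correlation-minimum encoding: W[i, j] = min(x[i], y[j])."""
    return np.minimum.outer(x, y)

def fam_recall(x, W):
    """Max-min composition: y[j] = max_i min(x[i], W[i, j])."""
    return np.max(np.minimum(x[:, None], W), axis=0)

rng = np.random.default_rng(2)
clean = rng.uniform(0.2, 1.0, 16)            # a tiny "image" of fuzzy memberships
W = fam_encode(clean, clean)                 # auto-associative memory of one pattern

noisy = np.clip(clean + rng.normal(0, 0.15, clean.shape), 0, 1)
restored = fam_recall(noisy, W)

print(f"RMSE(noisy vs clean)    = {np.sqrt(np.mean((noisy - clean) ** 2)):.3f}")
print(f"RMSE(restored vs clean) = {np.sqrt(np.mean((restored - clean) ** 2)):.3f}")
```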

Using Faster-R-CNN to Improve the Detection Efficiency of Workpiece Irregular Defects

  • Liu, Zhao;Li, Yan
    • Annual Conference of KIPS / 2022.11a / pp.625-627 / 2022
  • In the construction and development of modern industrial production technology, the traditional technology management mode faces problems such as low qualification rates and high application costs. In this research, an improved workpiece defect detection method based on deep learning is proposed, which can control the application cost and improve the detection efficiency for irregular defects. Based on a review of current deep learning applications, this paper uses an improved Faster R-CNN network structure as the core detection model to automatically locate and classify the defect areas of the workpiece. First, the robustness of the model was improved by appropriately changing the depth and number of channels of the backbone network, and the hyperparameters of the improved model were tuned. Then, deformable convolution was added to improve the detection of irregular defects. The final experimental results show that this method's average detection accuracy (mAP) is 4.5% higher than that of other methods. The model with anchor sizes (65, 129, 257, 519) and aspect ratios (0.2, 0.5, 1, 1) achieves the highest defect recognition rate, with a detection accuracy of 93.88%.
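
A hedged sketch of how custom anchor sizes and aspect ratios like those reported above can be plugged into a Faster R-CNN in torchvision. The backbone (plain MobileNetV2 rather than the paper's modified, deformable-convolution backbone), class count, and image size are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Plain MobileNetV2 feature extractor stands in for the paper's modified backbone.
backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280

# Anchor sizes and aspect ratios as reported in the abstract (single feature map);
# the duplicated ratio of 1 simply produces duplicate anchors and is harmless.
anchor_generator = AnchorGenerator(
    sizes=((65, 129, 257, 519),),
    aspect_ratios=((0.2, 0.5, 1.0, 1.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

model = FasterRCNN(
    backbone,
    num_classes=2,                      # background + one defect class (assumption)
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])   # one dummy workpiece image
print(detections[0].keys())                          # boxes, labels, scores
```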

A study on Improving the Performance of Anti - Drone Systems using AI (인공지능(AI)을 활용한 드론방어체계 성능향상 방안에 관한 연구)

  • Hae Chul Ma;Jong Chan Moon;Jae Yong Park;Su Han Lee;Hyuk Jin Kwon
    • Journal of the Korean Society of Systems Engineering / v.19 no.2 / pp.126-134 / 2023
  • Drones are emerging as a new security threat, and efforts are being made worldwide to counter them. Detection and identification are the most difficult and important parts of anti-drone systems. Existing detection and identification methods each have their strengths and weaknesses, so complementary operation is required. Detection and identification performance in anti-drone systems can be improved through the use of artificial intelligence, because artificial intelligence can quickly analyze differences too small for humans to perceive. There are three ways to utilize artificial intelligence. Through reinforcement learning-based physical control, the noise and blur generated when an optical camera tracks a drone may be reduced, and tracking stability may be improved. The latest NeRF algorithms can be used to solve the problem of insufficient enemy drone data. Finally, it is necessary to build a data network to utilize artificial intelligence; through this, data can be collected and managed efficiently, and model performance can be improved by regularly generating artificial intelligence training data.

Recommendation Model for Battlefield Analysis based on Siamese Network

  • Geewon, Suh;Yukyung, Shin;Soyeon, Jin;Woosin, Lee;Jongchul, Ahn;Changho, Suh
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.1-8 / 2023
  • In this paper, we propose a training method for a recommendation learning model that analyzes the battlefield situation and recommends a hypothesis suitable for the current situation. The proposed learning model uses the preference determined by comparing two hypotheses as label data to learn which hypothesis best analyzes the current battlefield situation. Our model is based on a Siamese neural network architecture, which applies the same weights to two different input vectors. The model takes two hypotheses as input and learns the priority between them while sharing the same weights across the twin networks. In addition, a score is assigned to each hypothesis through the proposed post-processing ranking algorithm, and hypotheses with high scores can be recommended to the commander in charge.
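
A minimal sketch of the shared-weight (Siamese) pairwise-preference idea in PyTorch. The feature dimension, network shape, synthetic labels, and margin ranking loss are assumptions, not the paper's architecture or its post-processing ranking algorithm.

```python
import torch
import torch.nn as nn

class HypothesisScorer(nn.Module):
    """Twin branches share these weights; each branch maps a hypothesis to a score."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, h_a, h_b):
        return self.net(h_a).squeeze(-1), self.net(h_b).squeeze(-1)

model = HypothesisScorer()
loss_fn = nn.MarginRankingLoss(margin=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic preference labels: +1 means hypothesis A is preferred over B.
h_a, h_b = torch.randn(256, 32), torch.randn(256, 32)
target = (h_a.sum(dim=1) > h_b.sum(dim=1)).float() * 2 - 1

for _ in range(200):
    score_a, score_b = model(h_a, h_b)
    loss = loss_fn(score_a, score_b, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference time a single branch scores each candidate hypothesis for ranking.
with torch.no_grad():
    print(model.net(h_a[:5]).squeeze(-1))
```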

Development of Machine Learning-Based Platform for Distillation Column (증류탑을 위한 머신러닝 기반 플랫폼 개발)

  • Oh, Kwang Cheol;Kwon, Hyukwon;Roh, Jiwon;Choi, Yeongryeol;Park, Hyundo;Cho, Hyungtae;Kim, Junghwan
    • Korean Chemical Engineering Research / v.58 no.4 / pp.565-572 / 2020
  • This study developed a software platform using machine learning, a branch of artificial intelligence, to optimize the distillation column system. The distillation column is a representative and core process in the petrochemical industry. Process stabilization is difficult due to the variety of operating conditions and the continuous nature of the process, and process efficiency differs depending on operator skill. Process control based on theoretical simulation has been used to overcome this problem, but it has the limitation that it cannot be applied to complex processes and real-time systems. This study aims to develop an empirical simulation model based on machine learning and to suggest an optimal process operation method. The development of the empirical simulation involves collecting big data from the actual process, feature extraction through data mining, and selection of a representative algorithm for the chemical process. Finally, the platform for the distillation column was developed, with verification through the developed model and field tests. Through the developed platform, it is possible to predict operating parameters and provide optimal operating conditions to achieve efficient process control. This study is a basic study applying artificial intelligence machine learning techniques to chemical processes. After application to a wide variety of processes, it can serve as a cornerstone of the Industry 4.0 smart factory.
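
To make the "empirical simulation" idea concrete, here is a hedged sketch: a data-driven surrogate of a distillation column fitted to historical operating data and then queried over a grid of candidate set points. Every variable name, the synthetic plant response, and the energy penalty are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000
reflux_ratio = rng.uniform(1.0, 5.0, n)
reboiler_duty = rng.uniform(0.5, 2.0, n)        # normalized duty
feed_temp = rng.uniform(60, 120, n)             # deg C

# Hypothetical plant response: purity rises with reflux and duty, with diminishing returns.
purity = (0.90 + 0.02 * np.log(reflux_ratio) + 0.03 * np.tanh(reboiler_duty)
          - 1e-4 * np.abs(feed_temp - 90) + rng.normal(0, 0.002, n))

X = np.column_stack([reflux_ratio, reboiler_duty, feed_temp])
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, purity)

# Query the surrogate over candidate set points, trading purity against reboiler duty (energy).
grid = np.array([[r, d, 90.0]
                 for r in np.linspace(1.0, 5.0, 30)
                 for d in np.linspace(0.5, 2.0, 30)])
score = surrogate.predict(grid) - 0.01 * grid[:, 1]
best = grid[np.argmax(score)]
print(f"suggested set point: reflux ratio {best[0]:.2f}, reboiler duty {best[1]:.2f}")
```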

Efficient Hybrid Transactional Memory Scheme using Near-optimal Retry Computation and Sophisticated Memory Management in Multi-core Environment

  • Jang, Yeon-Woo;Kang, Moon-Hwan;Chang, Jae-Woo
    • Journal of Information Processing Systems / v.14 no.2 / pp.499-509 / 2018
  • Recently, hybrid transactional memory (HyTM) has gained much interest from researchers because it combines the advantages of hardware transactional memory (HTM) and software transactional memory (STM). To provide concurrency control of transactions, the existing HyTM-based studies use a Bloom filter. However, they fail to overcome the typical false-positive errors of a Bloom filter. In addition, the existing studies use a global lock for memory allocation, whose efficiency is significantly low in a multi-core environment. In this paper, we propose an efficient hybrid transactional memory scheme using near-optimal retry computation and sophisticated memory management in order to process transactions efficiently in a multi-core environment. First, we propose a near-optimal retry computation algorithm that provides an efficient HTM configuration using machine learning algorithms, according to the characteristics of a given workload. Second, we provide efficient concurrency control for transactions in different environments by using a sophisticated Bloom filter. Third, we propose a memory management scheme optimized for the CPU cache line, in order to provide fast transaction processing. Finally, our performance evaluation using the Stanford Transactional Applications for Multi-Processing (STAMP) benchmarks shows that our HyTM scheme achieves up to 2.5 times better performance than state-of-the-art algorithms.
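
The Bloom-filter signature idea used for transaction concurrency control above can be sketched as follows. This is illustrative Python (a real HyTM would implement this in the C/C++ runtime), and the filter size, hash scheme, and example addresses are assumptions.

```python
import hashlib

class BloomFilter:
    """Compact set signature over accessed memory addresses; false positives possible."""
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _hashes(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "little") % self.size

    def add(self, item):
        for h in self._hashes(item):
            self.bits |= 1 << h

    def might_contain(self, item):
        # May return True for an item never added (the false-positive problem
        # the paper's sophisticated Bloom filter tries to mitigate).
        return all((self.bits >> h) & 1 for h in self._hashes(item))

# Transaction 1 records its write set; transaction 2 checks its reads against it.
tx1_writes = BloomFilter()
for addr in (0x1000, 0x1040, 0x1080):
    tx1_writes.add(addr)

tx2_reads = (0x2000, 0x1040)
conflict = any(tx1_writes.might_contain(addr) for addr in tx2_reads)
print("possible read-write conflict:", conflict)
```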

Stability Analysis and Effect of CES on ANN Based AGC for Frequency Excursion

  • Raja, J.;Rajan, C.Christober Asir
    • Journal of Electrical Engineering and Technology / v.5 no.4 / pp.552-560 / 2010
  • This paper presents an application of a layered artificial neural network (ANN) controller to the load frequency control problem in power systems. The objective of the control scheme is to guarantee that the steady-state error of frequencies and the inadvertent interchange of tie-lines are maintained within a given tolerance. The proposed controller has been designed for a two-area interconnected power system. Only one ANN controller, which controls the inputs of both areas of the power system together, is considered. In this study, the backpropagation-through-time algorithm is used as the neural network learning rule. The performance of the power system is simulated using a conventional integral controller and the ANN controller separately. For the first time, a comparative study has been carried out between SMES and CES units, with all areas equipped with SMES and CES units separately. Comparing the results for both cases, the performance of the ANN controller with the CES unit is found to be better than that of conventional controllers with SMES or CES, and than that of the ANN controller with SMES.
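
As a point of reference for the conventional baseline mentioned above, here is a toy sketch of integral load frequency control on a heavily simplified single-area swing-equation model. The inertia, damping, gain, and disturbance values are assumptions and governor/turbine dynamics are ignored, so this is not the paper's two-area ANN/CES system.

```python
import numpy as np

H, D = 5.0, 1.0            # inertia constant and load damping (assumed values, pu)
Ki = 0.5                   # integral controller gain (assumed)
dt, T = 0.01, 100.0        # time step and simulation horizon (s)

delta_f = 0.0              # frequency deviation (pu)
integral = 0.0
delta_pl = 0.01            # 1% step load increase

history = []
for _ in np.arange(0.0, T, dt):
    integral += delta_f * dt
    delta_pm = -Ki * integral                                   # supplementary (integral) control
    d_delta_f = (delta_pm - delta_pl - D * delta_f) / (2 * H)   # simplified swing equation
    delta_f += d_delta_f * dt
    history.append(delta_f)

# Integral action drives the steady-state frequency error back toward zero.
print(f"peak deviation {min(history):.4f} pu, final deviation {abs(history[-1]):.5f} pu")
```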