Title/Summary/Keyword: learning algorithms

Search Results: 2,280

Incorporating Machine Learning into a Data Warehouse for Real-Time Construction Projects Benchmarking

  • Yin, Zhe;DeGezelle, Deborah;Hirota, Kazuma;Choi, Jiyong
    • International conference on construction engineering and project management / 2022.06a / pp.831-838 / 2022
  • Machine learning is a process of using computer algorithms to extract information from raw data to solve complex problems in a data-rich environment. It has been used in the construction industry by both academics and practitioners in multiple applications to improve the construction process. The Construction Industry Institute, a leading construction research organization, has twenty-five years of experience in benchmarking capital projects in the industry. The organization is well positioned to develop useful machine learning applications because it possesses an enormous amount of real construction data. Its benchmarking programs are actively used by owner and contractor companies today to assess the performance of their capital projects. A credible benchmarking program requires statistically valid data without subjective interference in program administration. In developing its next-generation benchmarking program, the Data Warehouse, the organization aims to use machine learning algorithms to minimize human effort and to enable rapid data ingestion from diverse sources while preserving data validity and reliability. This research effort used a focus group of practitioners from the construction industry and data scientists from a variety of disciplines, who collaborated to identify the machine learning requirements and potential applications in the program; technical and domain experts then selected appropriate algorithms to support the business objectives. This paper presents the initial steps in what is expected to be a chain of numerous learning algorithms supporting a fully automated, high-performance benchmarking system.
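
The abstract does not name the algorithms that were selected, so the following is only an illustrative sketch of one common ingredient of low-effort data ingestion: flagging anomalous submissions for human review with an unsupervised outlier detector. The column names and the contamination rate are hypothetical.

```python
# Illustrative sketch only: flagging suspect benchmarking records for human
# review with an unsupervised outlier detector. Column names are hypothetical;
# the paper does not disclose its chosen algorithms.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_suspect_records(df: pd.DataFrame, features: list[str]) -> pd.DataFrame:
    """Mark records whose feature profile deviates from the bulk of the data."""
    model = IsolationForest(contamination=0.05, random_state=0)
    df = df.copy()
    df["suspect"] = model.fit_predict(df[features]) == -1  # -1 denotes outlier
    return df

# Hypothetical usage with made-up project metrics:
projects = pd.DataFrame({
    "total_cost": [1.2e6, 9.8e5, 4.1e7, 1.1e6],
    "duration_months": [10, 9, 11, 120],
})
print(flag_suspect_records(projects, ["total_cost", "duration_months"]))
```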

Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning (연속학습을 활용한 경량 온-디바이스 AI 기반 실시간 기계 결함 진단 시스템 설계 및 구현)

  • Youngjun Kim;Taewan Kim;Suhyun Kim;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.151-158 / 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies did not consider the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system includes a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the performance comparison between continual learning algorithms, we found that the replay-based algorithms outperformed the regularization-based algorithms, and the experience replay (ER) algorithm had the best diagnostic accuracy. We further tuned the number and length of the data samples used for the ER algorithm's memory buffer to maximize its performance, and confirmed that its performance improves when a longer data length is used. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. Our lightweight CNN model was also able to diagnose the fault type of a single data sample within 3.76 ms on a Raspberry Pi 4B device.
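
The paper's exact ER configuration is not given in the abstract; the sketch below only shows the generic shape of a replay buffer for continual learning, assuming a user-supplied `train_step` function and model.

```python
# Minimal generic sketch of experience replay (ER) for continual learning.
# The paper's actual model, buffer sizes, and training loop are not shown here;
# `train_step` is an assumed user-supplied function.
import random

class ReplayBuffer:
    """Reservoir-sampling buffer that keeps a bounded subset of past samples."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Reservoir sampling: every past sample kept with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = sample

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))

def er_train(model, stream, buffer, train_step, replay_k=8):
    """Interleave each new sample with a small batch replayed from memory."""
    for x, y in stream:
        for rx, ry in buffer.sample(replay_k):
            train_step(model, rx, ry)   # rehearse old data
        train_step(model, x, y)         # learn the new sample
        buffer.add((x, y))
```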

BEGINNER'S GUIDE TO NEURAL NETWORKS FOR THE MNIST DATASET USING MATLAB

  • Kim, Bitna;Park, Young Ho
    • Korean Journal of Mathematics / v.26 no.2 / pp.337-348 / 2018
  • The MNIST dataset is a database of images of handwritten digits, each labeled with an integer from 0 to 9. It is widely used to benchmark the performance of machine learning algorithms, and neural networks for MNIST are regarded as the starting point for studying machine learning. However, it is not easy to begin the actual programming. In this expository article, we give step-by-step instructions for building neural networks for the MNIST dataset using MATLAB.
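
The article's MATLAB walkthrough is not reproduced here; as a rough Python analogue of the same exercise, a minimal fully connected MNIST classifier might look like the following (the paper's actual architecture may differ).

```python
# Rough Python analogue of the paper's MATLAB exercise: a small fully connected
# network trained on MNIST. The paper's actual architecture may differ.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0                                   # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10000, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=20, random_state=0)
clf.fit(X_tr, y_tr)                             # one hidden layer of 100 units
print("test accuracy:", clf.score(X_te, y_te))  # typically around 0.97
```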

ON LEARNING OF CMAC FOR MANIPULATOR CONTROL

  • Hwang, Heon;Choi, Dong-Y.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1989.10a / pp.653-662 / 1989
  • The Cerebellar Model Arithmetic Controller (CMAC) has been introduced as an adaptive control function generator. CMAC computes control functions by referring to a distributed memory table of stored functional values rather than by solving equations analytically or numerically. CMAC has a unique mapping structure, a form of coarse coding, and a supervisory delta-rule learning property. In this paper, learning aspects and the convergence of the CMAC were investigated. Efficient training algorithms were developed to overcome the limitations of conventional maximum error correction training and to eliminate the accumulated learning error caused by sequential node training. A nonlinear function generator and a motion generator for a two-d.o.f. manipulator were simulated. The efficiency of the various learning algorithms was demonstrated through the CPU time used and the convergence of the RMS and maximum errors accumulated during the learning process. A generalization property and the learning effect of various gains were also simulated. A uniform quantizing method was applied to cope efficiently with various ranges of input variables.
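
For readers unfamiliar with CMAC's table-lookup structure, here is a minimal one-dimensional sketch of coarse coding with delta-rule training; the paper's multi-dimensional mapping, hashing, and improved training schemes are not reproduced.

```python
# Minimal 1-D CMAC sketch: coarse coding over overlapping tilings plus
# delta-rule weight updates. The paper's multi-dimensional mapping and
# improved training schemes are not reproduced here.
import numpy as np

class CMAC1D:
    def __init__(self, n_tilings=8, n_tiles=32, x_min=0.0, x_max=1.0, lr=0.5):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.x_min, self.width = x_min, (x_max - x_min)
        self.w = np.zeros((n_tilings, n_tiles + 1))  # +1 for shifted overflow

    def _active(self, x):
        s = (x - self.x_min) / self.width * self.n_tiles
        # Each tiling is offset by a fraction of one tile (coarse coding).
        return [(t, int(s + t / self.n_tilings)) for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in self._active(x):             # supervisory delta rule
            self.w[t, i] += self.lr * err / self.n_tilings

# Hypothetical usage: learn a nonlinear function, as in the paper's simulations.
cmac, rng = CMAC1D(), np.random.default_rng(0)
for _ in range(5000):
    x = rng.random()
    cmac.train(x, np.sin(2 * np.pi * x))
print(round(cmac.predict(0.25), 3))  # should approach sin(pi/2) = 1
```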

ON LEARNING OF CMAC FOR MANIPULATOR CONTROL

  • Choe, Dong-Yeop;Hwang, Hyeon
    • Korea Institute of Machinery and Metals Bulletin / s.19 / pp.93-115 / 1989
  • The Cerebellar Model Arithmetic Controller (CMAC) has been introduced as an adaptive control function generator. CMAC computes control functions by referring to a distributed memory table of stored functional values rather than by solving equations analytically or numerically. CMAC has a unique mapping structure, a form of coarse coding, and a supervisory delta-rule learning property. In this paper, learning aspects and the convergence of the CMAC were investigated. Efficient training algorithms were developed to overcome the limitations of conventional maximum error correction training and to eliminate the accumulated learning error caused by sequential node training. A nonlinear function generator and a motion generator for a two-d.o.f. manipulator were simulated. The efficiency of the various learning algorithms was demonstrated through the CPU time used and the convergence of the RMS and maximum errors accumulated during the learning process. A generalization property and the learning effect of various gains were also simulated. A uniform quantizing method was applied to cope efficiently with various ranges of input variables.

Behavior learning and evolution of collective autonomous mobile robots using reinforcement learning and distributed genetic algorithms (강화학습과 분산유전알고리즘을 이용한 자율이동로봇군의 행동학습 및 진화)

  • Lee, Dong-Wook;Sim, Kwee-Bo
    • Journal of the Korean Institute of Telematics and Electronics S / v.34S no.8 / pp.56-64 / 1997
  • In distributed autonomous robotic systems, each robot must behave by itself according to its own states and environment and, if necessary, must cooperate with other robots to carry out a given task. Therefore, it is essential that each robot has both learning and evolution abilities to adapt to dynamic environments. In this paper, a new learning and evolution method based on reinforcement learning with delayed reward and distributed genetic algorithms is proposed for the behavior learning and evolution of collective autonomous mobile robots. Reinforcement learning with delayed reward remains useful even when there is no immediate reward. Through the distributed genetic algorithm, each robot can improve its behavior by exchanging chromosomes acquired under different environments via communication. In particular, to improve the performance of evolution, selective crossover using a characteristic of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
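
The abstract does not specify the chromosome encoding or the selective-crossover rule, so the sketch below only illustrates a generic distributed-GA exchange step in which robots cross their chromosomes with fitness-weighted partners; every detail here is an assumption.

```python
# Generic sketch of a distributed-GA exchange step: robots swap chromosomes
# and cross them over, biased toward higher-fitness partners. The paper's
# actual encoding and RL-informed selective-crossover rule are assumptions.
import random

def crossover(a: list[int], b: list[int]) -> list[int]:
    """Single-point crossover between two bit-string chromosomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def exchange_step(population: list[dict]) -> None:
    """Each robot adopts a child of itself and a fitness-weighted partner.

    population: [{"chrom": [...], "fitness": float}, ...], one entry per robot.
    """
    weights = [r["fitness"] for r in population]
    for robot in population:
        partner = random.choices(population, weights=weights, k=1)[0]
        robot["chrom"] = crossover(robot["chrom"], partner["chrom"])
        # Fitness would be re-evaluated by subsequent RL episodes.

# Hypothetical usage with 4 robots and 8-bit behavior chromosomes:
pop = [{"chrom": [random.randint(0, 1) for _ in range(8)],
        "fitness": random.random() + 0.1} for _ in range(4)]
exchange_step(pop)
print(pop[0]["chrom"])
```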

A TabNet-Based System for Water Quality Prediction in Aquaculture

  • Nguyen, Trong-Nghia;Kim, Soo Hyung;Do, Nhu-Tai;Hong, Thai-Thi Ngoc;Yang, Hyung Jeong;Lee, Guee Sang
    • Smart Media Journal / v.11 no.2 / pp.39-52 / 2022
  • In the context of the evolution of automation and intelligence, deep learning and machine learning algorithms have been widely applied in aquaculture in recent years, providing new opportunities for its digitalization. Water quality management especially deserves attention because of its importance to food organisms. In this study, we propose an end-to-end deep learning TabNet model for water quality prediction. Starting from the major indexes of water quality assessment, we apply novel deep learning techniques and machine learning algorithms in innovative fish aquaculture to predict water cell counts. Furthermore, the application of deep learning in aquaculture is outlined and the obtained results are analyzed. Experiments on in-house data showed an optimistic impact of applying artificial intelligence in aquaculture, helping to reduce cost and time and to increase efficiency in the farming process.
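
The paper's features, data, and implementation are not given in the abstract; as one possible illustration, the open-source pytorch-tabnet package (an assumption, not necessarily what the authors used) can fit a TabNet regressor to tabular water-quality indexes.

```python
# Illustrative sketch: TabNet regression on tabular water-quality indexes using
# the open-source pytorch-tabnet package. The paper's actual implementation,
# features, and hyperparameters are not specified; all names here are made up.
import numpy as np
from pytorch_tabnet.tab_model import TabNetRegressor

rng = np.random.default_rng(0)
X = rng.random((1000, 5)).astype(np.float32)   # e.g. pH, DO, temp, NH3, turbidity
y = (X @ np.array([2., -1., .5, 3., -2.], dtype=np.float32))[:, None]  # toy target

model = TabNetRegressor(seed=0)
model.fit(X[:800], y[:800], eval_set=[(X[800:], y[800:])],
          max_epochs=50, patience=10, batch_size=128)
print(model.predict(X[:3]))                    # predicted cell counts (toy data)
```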

A Comparison of Meta-learning and Transfer-learning for Few-shot Jamming Signal Classification

  • Jin, Mi-Hyun;Koo, Ddeo-Ol-Ra;Kim, Kang-Suk
    • Journal of Positioning, Navigation, and Timing / v.11 no.3 / pp.163-172 / 2022
  • Typical anti-jamming technologies based on array antennas, the Space Time Adaptive Process (STAP) and the Space Frequency Adaptive Process (SFAP), are very effective algorithms for nulling and beamforming. However, they do not perform equally well for all types of jamming signals. If the anti-jamming algorithm is not optimized for each signal type, anti-jamming performance deteriorates and the operational stability of the system worsens due to unnecessary computation. Therefore, a jamming classification technique is required to obtain optimal anti-jamming performance. Machine learning, which has recently been in the spotlight, can be considered for classifying jamming signals. In general, supervised learning for classification requires a huge amount of data and new training for unfamiliar signals. In the case of jamming signal classification, it is difficult to obtain a large amount of data because an outdoor jamming signal reception environment is difficult to configure and the attacker's signal type is unknown. Therefore, this paper proposes a few-shot jamming signal classification technique using meta-learning and transfer-learning to train a model with a small amount of data. A training dataset is constructed from the anti-jamming algorithm's input data within the GNSS receiver when jamming signals are applied. For meta-learning, the Model-Agnostic Meta-Learning (MAML) algorithm with a general Convolutional Neural Network (CNN) model is used, and the same CNN model is used for transfer-learning. Both are trained through episodic training using datasets generated on our Python-based simulator. The results show that both algorithms can be trained with little data and can immediately respond to new signal types. The performances of the two algorithms are also compared to determine which is more suitable for classifying jamming signals.
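
The paper's CNN architecture and episode construction are not reproduced here; the sketch below only shows the shape of a single MAML meta-update (one inner gradient step per task) in PyTorch, with all hyperparameters assumed.

```python
# Schematic MAML meta-update in PyTorch: a single inner gradient step per task,
# then an outer step on the summed query losses. The paper's CNN, episodes,
# and hyperparameters are not reproduced; this only shows the algorithm shape.
import torch
from torch import nn
from torch.func import functional_call

def maml_step(model, tasks, inner_lr=0.01, outer_opt=None):
    """tasks: iterable of ((x_support, y_support), (x_query, y_query)) pairs."""
    loss_fn = nn.CrossEntropyLoss()
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for (x_sup, y_sup), (x_qry, y_qry) in tasks:
        # Inner loop: one gradient step on the support set of this task.
        sup_loss = loss_fn(functional_call(model, params, (x_sup,)), y_sup)
        grads = torch.autograd.grad(sup_loss, list(params.values()),
                                    create_graph=True)
        adapted = {k: v - inner_lr * g
                   for (k, v), g in zip(params.items(), grads)}
        # Outer objective: loss of the adapted parameters on the query set.
        meta_loss = meta_loss + loss_fn(
            functional_call(model, adapted, (x_qry,)), y_qry)
    outer_opt.zero_grad()
    meta_loss.backward()   # second-order gradients flow via create_graph=True
    outer_opt.step()
    return float(meta_loss)
```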

Visual simulator for supporting to learn efficiently on dynamic programming (동적 프로그래밍에 대한 효율적인 학습을 지원하는 시각화 시뮬레이터)

  • Jung, Soon-Young;Kwon, Han-Sook
    • The Journal of Korean Association of Computer Education / v.11 no.4 / pp.23-36 / 2008
  • Recent surveys show that many students have difficulty understanding the concepts behind programming algorithms and do not feel interested in learning them. Dynamic programming, one of the most important and widely used algorithm design techniques in computer science, is especially feared by students; unlike other algorithms, it requires understanding the problem-solving process and the design of the storage space as well as the basic principles of the algorithm, and so it has not been properly covered in classes. In this paper, we developed a visual simulator to solve these problems in learning dynamic programming. The simulator is designed for students to run the algorithms themselves and learn how they work by visualizing each step of dynamic programming and the corresponding states of the storage space.
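
The simulator itself is graphical and is not described in code in the abstract; a console-level illustration of the underlying idea, printing the storage-space state after each dynamic programming step (0/1 knapsack chosen here as an example), might look like:

```python
# Console-level illustration of what the simulator visualizes: the DP storage
# table after each step. Shown here for 0/1 knapsack; the simulator's actual
# problems and rendering are not described in the abstract.
def knapsack_trace(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i
            if weights[i - 1] <= c:                      # or take item i
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
        print(f"after item {i}: {dp[i]}")                # storage-space snapshot
    return dp[n][capacity]

print("best value:", knapsack_trace([2, 3, 4], [3, 4, 5], 5))  # expects 7
```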

Development of benthic macroinvertebrate species distribution models using the Bayesian optimization (베이지안 최적화를 통한 저서성 대형무척추동물 종분포모델 개발)

  • Go, ByeongGeon;Shin, Jihoon;Cha, Yoonkyung
    • Journal of Korean Society of Water and Wastewater / v.35 no.4 / pp.259-275 / 2021
  • This study explored the usefulness and implications of Bayesian hyperparameter optimization in developing species distribution models (SDMs). A variety of machine learning (ML) algorithms, namely support vector machine (SVM), random forest (RF), boosted regression tree (BRT), XGBoost (XGB), and multilayer perceptron (MLP), were used to predict the occurrence of four benthic macroinvertebrate species. The Bayesian optimization method successfully tuned the model hyperparameters, with all ML models achieving an area under the curve (AUC) > 0.7. Hyperparameter searches also generally clustered around the optimal values, suggesting the efficiency of Bayesian optimization in finding optimal hyperparameter sets. Tree-based ensemble algorithms (BRT, RF, and XGB) tended to show higher performance than SVM and MLP. Important hyperparameters and their optimal values differed by species and by ML model, indicating the necessity of hyperparameter tuning for improving individual model performance. The optimization results demonstrate that, for all macroinvertebrate species, SVM and RF required fewer trials to reach optimal hyperparameter sets, leading to reduced computational cost compared with the other ML algorithms. These results suggest that Bayesian optimization is an efficient method for hyperparameter optimization of machine learning algorithms.
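
The abstract does not name the optimization library used; as an illustrative stand-in, Optuna's default TPE sampler (a Bayesian-style optimizer) can tune a random forest for AUC as follows, with the search space and data purely hypothetical.

```python
# Illustrative sketch of Bayesian-style hyperparameter optimization for a
# species-occurrence classifier. The paper's library, search space, and data
# are not specified; Optuna's TPE sampler is used here as a stand-in.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=12, random_state=0)

def objective(trial: optuna.Trial) -> float:
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 500),
        "max_depth": trial.suggest_int("max_depth", 2, 16),
        "max_features": trial.suggest_float("max_features", 0.1, 1.0),
    }
    clf = RandomForestClassifier(random_state=0, **params)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")   # maximize mean AUC
study.optimize(objective, n_trials=30)
print(study.best_params, round(study.best_value, 3))
```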