• Title/Summary/Keyword: prediction algorithm

Search Result 2,757

LSTM Model-based Prediction of the Variations in Load Power Data from Industrial Manufacturing Machines

  • Rita, Rijayanti;Kyohong, Jin;Mintae, Hwang
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.4
    • /
    • pp.295-302
    • /
    • 2022
  • This paper presents the development of a smart power device designed to collect load power data from industrial manufacturing machines, predict future variations in load power data, and detect abnormal data in advance by applying a machine learning-based prediction algorithm. The proposed load power data prediction model is implemented using a Long Short-Term Memory (LSTM) algorithm, which offers high accuracy with relatively low complexity. Flask and a REST API are used to provide prediction results to users through a graphical interface. In addition, we present the results of experiments conducted to evaluate the performance of the proposed approach, which show that our model achieved the highest accuracy compared with Multilayer Perceptron (MLP), Random Forest (RF), and Support Vector Machine (SVM) models. Moreover, we expect our method's accuracy could be further improved by optimizing the hyperparameter values and training the model for a longer period on a larger amount of data.
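The sliding-window step common to LSTM pipelines like this one can be sketched in plain Python (the window size and load readings below are illustrative, not from the paper):

```python
def make_windows(series, window_size):
    """Split a load-power series into (input window, next value) pairs
    for training a sequence model such as an LSTM."""
    pairs = []
    for i in range(len(series) - window_size):
        pairs.append((series[i:i + window_size], series[i + window_size]))
    return pairs

# Hypothetical hourly load readings
readings = [10.2, 10.5, 11.1, 10.9, 11.4, 12.0]
windows = make_windows(readings, window_size=3)
```

Each pair feeds the model one window of past readings and the value it should learn to predict next.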

Motion Adaptive Lossless Image Compression Algorithm (움직임 적응적인 무손실 영상 압축 알고리즘)

  • Kim, Young-Ro;Park, Hyun-Sang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.10 no.4
    • /
    • pp.736-739
    • /
    • 2009
  • In this paper, an efficient lossless compression algorithm using motion adaptation is proposed. It is divided into two parts: a motion-adaptation-based nonlinear predictor and a residual data coding part. The proposed nonlinear predictor reduces prediction error by learning from its past prediction errors through motion adaptation. The predictor selects between the intra and inter prediction values according to the past prediction error. The reduced error is then coded by an existing context-adaptive coding method. Experimental results show that the proposed algorithm achieves a higher compression ratio than context modeling methods such as FELICS, CALIC, and JPEG-LS.
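The intra/inter selection driven by past prediction error can be sketched as follows (a schematic illustration of the motion-adaptation idea; the decay factor and state layout are assumptions, not the paper's exact predictor):

```python
class MotionAdaptivePredictor:
    """Selects between intra (spatial) and inter (temporal) prediction
    values according to which predictor accumulated less past error."""

    def __init__(self):
        self.intra_err = 0.0  # running error of intra prediction
        self.inter_err = 0.0  # running error of inter prediction

    def predict(self, intra_pred, inter_pred):
        # Choose the predictor that has been more accurate so far.
        return intra_pred if self.intra_err <= self.inter_err else inter_pred

    def update(self, actual, intra_pred, inter_pred, decay=0.9):
        # Learn from past prediction errors with exponential decay.
        self.intra_err = decay * self.intra_err + abs(actual - intra_pred)
        self.inter_err = decay * self.inter_err + abs(actual - inter_pred)

p = MotionAdaptivePredictor()
p.update(actual=10, intra_pred=9, inter_pred=12)
choice = p.predict(9, 12)  # intra chosen: it was more accurate so far
```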

Prediction of bankruptcy data using machine learning techniques (기계학습 방법을 이용한 기업부도의 예측)

  • Park, Dong-Joon;Yun, Ye-Boon;Yoon, Min
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.3
    • /
    • pp.569-577
    • /
    • 2012
  • The analysis and management of business failure has been recognized as important in financial management for evaluating firms' performance and assessing their viability. To this end, effective failure-prediction models are needed. This paper describes a new approach to the prediction of business failure using the total margin algorithm, a kind of support vector machine. Experiments on real data show that the proposed method can evaluate the risk of failure better than existing methods.

A Study on the UI Design Method for Monitoring AI-Based Demand Prediction Algorithm (AI 기반 수요예측알고리즘 모니터링 UI 디자인 방안 연구)

  • Im, So-Yeon;Lee, Hyo-won;Kim, Seong-Ho;Lee, Seung-jun;Lee, Young-woo;Park, Cheol-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.447-449
    • /
    • 2022
  • This study is based on Android, a representative mobile platform characterized by anytime, anywhere network connectivity and flexible mobility. Using an AI-based demand prediction algorithm that identifies defective-product data, we study a real-time monitoring UI design method built in Android Studio using demand prediction data and company time-series data.


AN IMPROVED ALGORITHM FOR RNA SECONDARY STRUCTURE PREDICTION

  • Namsrai Oyun-Erdene;Jung Kwang Su;Kim Sunshin;Ryu Keun Ho
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.280-282
    • /
    • 2005
  • A ribonucleic acid (RNA) is one of the two types of nucleic acids found in living organisms. An RNA molecule is a long chain of monomers called nucleotides. The sequence of nucleotides of an RNA molecule constitutes its primary structure, and the pattern of pairing between nucleotides determines its secondary structure. Non-coding RNA genes produce transcripts that exert their function without ever producing proteins. Predicting the secondary structure of non-coding RNAs is very important for understanding their functions. We focus on Nussinov's algorithm as a useful technique for predicting RNA secondary structures. We introduce a new traceback matrix and scoring table to improve this algorithm, and the improved algorithm performs better than the original.
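The base Nussinov algorithm that the paper improves is a standard dynamic program maximizing the number of complementary base pairs; a textbook-style sketch (without the paper's new traceback matrix and scoring table) looks like this:

```python
def nussinov(seq, min_loop=3):
    """Nussinov dynamic program: maximum number of complementary base
    pairs in an RNA sequence, with at least `min_loop` unpaired bases
    inside any hairpin (a textbook sketch, not the paper's variant)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                         # base i left unpaired
            if (seq[i], seq[j]) in pairs and j - i > min_loop:
                best = max(best, dp[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                   # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

best_pairs = nussinov("GGGAAACCC")  # three nested G-C pairs
```

The traceback (recovering which bases pair) walks the same table backwards, which is the part the paper's new traceback matrix refines.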


A Study on Implementation of Evolving Cellular Automata Neural System (진화하는 셀룰라 오토마타 신경망의 하드웨어 구현에 관한 연구)

  • 반창봉;곽상영;이동욱;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.12a
    • /
    • pp.255-258
    • /
    • 2001
  • This paper presents an implementation of a cellular automata neural network system, modeled on a living creature's brain, using the evolving hardware concept. The cellular automata neural network system is based on development and evolution; in other words, it is modeled on the ontogeny and phylogeny of natural living things. The proposed system develops each cell's state in the neural network by cellular automata (CA). It regards the code of a CA rule as an individual of a genetic algorithm, and the system is evolved by the genetic algorithm. We implement this system using the evolving hardware concept: evolving hardware is reconfigurable hardware whose configuration is under the control of an evolutionary algorithm. We design the genetic algorithm process for the evolutionary algorithm and the cells in the cellular automata neural network for the construction of the reconfigurable system. The effectiveness of the proposed system is verified by applying it to time-series prediction.
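The CA-driven development of cell states can be illustrated with a one-dimensional binary cellular automaton, where the rule bits play the role of the genetic-algorithm individual (an illustrative sketch, not the paper's hardware design):

```python
def ca_step(cells, rule):
    """One development step of a 1-D binary cellular automaton with
    wraparound. `rule` is the 8-bit Wolfram rule number; each 3-cell
    neighborhood indexes one bit of the rule to get the next state."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)
    return out

# A single seed cell developed one step under rule 110
next_gen = ca_step([0, 0, 0, 1, 0, 0, 0], rule=110)
```

In the evolving-hardware framing described above, the genetic algorithm would search over the rule number (and its extensions) rather than over the cell states directly.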


HMM-based Adaptive Frequency-Hopping Cognitive Radio System to Reduce Interference Time and to Improve Throughput

  • Sohn, Sung-Hwan;Jang, Sung-Jeen;Kim, Jae-Moung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.4
    • /
    • pp.475-490
    • /
    • 2010
  • Cognitive radio is an advanced enabling technology for the efficient utilization of vacant spectrum, owing to its ability to sense the spectrum environment. It is important to determine the accurate spectrum utilization of the primary system in a cognitive radio environment. In order to define the spectrum utilization state, many CR systems use what is known as the quiet period (QP) method. However, even when using a QP, interference can occur; this reduces system throughput and is contrary to the basic premise of cognitive radio. In order to reduce the interference time, a frequency-hopping (FH) algorithm is proposed here. Additionally, to compensate for the throughput loss of the FH, an HMM-based channel prediction algorithm and a channel allocation algorithm are proposed. Simulations were conducted while varying several parameters. The findings show that the proposed algorithm outperforms conventional channel allocation algorithms.
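One-step HMM channel prediction of this kind can be sketched as propagating a belief over channel states through a transition matrix (the states and probabilities below are illustrative, not from the paper):

```python
def predict_next_state(trans, belief):
    """One-step HMM state prediction: propagate the current belief over
    channel states (e.g. 0 = idle, 1 = busy) through the transition
    matrix to get the predicted distribution for the next slot."""
    n = len(belief)
    return [sum(belief[i] * trans[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 2-state channel that mostly stays in its current state
trans = [[0.9, 0.1],   # idle -> idle / busy
         [0.2, 0.8]]   # busy -> idle / busy
belief = [1.0, 0.0]    # channel currently sensed idle
next_belief = predict_next_state(trans, belief)
```

A hopping scheme would then prefer the channel whose predicted idle probability is highest for the next slot.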

A neural network with adaptive learning algorithm of curvature smoothing for time-series prediction (시계열 예측을 위한 1, 2차 미분 감소 기능의 적응 학습 알고리즘을 갖는 신경회로망)

  • 정수영;이민호;이수영
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.6
    • /
    • pp.71-78
    • /
    • 1997
  • In this paper, a new neural network training algorithm is devised for a function approximator with good generalization characteristics and tested on the time-series prediction problem using the Santa Fe competition data sets. To enhance the generalization ability, a constraint term on hidden neuron activations is added to the conventional output error, which gives curvature smoothing characteristics to multi-layer neural networks. A hybrid learning algorithm combining error back-propagation and Hebbian learning with a weight decay constraint is naturally developed by the steepest descent algorithm minimizing the proposed cost function, without much increase in computational requirements.
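A cost of this shape, the conventional output error plus a weighted constraint term on hidden activations, can be sketched as follows (schematic only; the paper's exact constraint term and weighting differ):

```python
def regularized_cost(output_errors, hidden_activations, lam=0.01):
    """Total cost = mean squared output error plus lam times a penalty
    on hidden activations. Minimizing the combined cost by steepest
    descent is what yields the hybrid update rule described above."""
    mse = sum(e * e for e in output_errors) / len(output_errors)
    penalty = sum(h * h for h in hidden_activations) / len(hidden_activations)
    return mse + lam * penalty

cost = regularized_cost([1.0, 1.0], [2.0], lam=0.5)
```

The gradient of the first term gives ordinary back-propagation; the gradient of the penalty term contributes the Hebbian-style, decay-like component of the hybrid rule.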


LP-Based Blind Adaptive Channel Identification and Equalization with Phase Offset Compensation

  • Ahn, Kyung-Sseung;Baik, Heung-Ki
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.4C
    • /
    • pp.384-391
    • /
    • 2003
  • Blind channel identification and equalization attempt to identify the communication channel and to remove the inter-symbol interference caused by the channel without using any known training sequences. In this paper, we propose a blind adaptive channel identification and equalization algorithm with phase offset compensation for single-input multiple-output (SIMO) channels. It is based on the one-step forward multichannel linear prediction error method and can be implemented by an RLS algorithm. For the phase offset problem, we use a blind adaptive algorithm called the constant modulus derotator (CMD) algorithm, based on the constant modulus algorithm (CMA). Moreover, unlike many known subspace (SS) methods or cross relation (CR) methods, our proposed algorithms do not require channel order estimation; therefore, they are robust to channel order mismatch.
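The CMA update underlying the CMD derotator can be sketched as a single blind tap update (a generic CMA sketch; the paper's CMD variant adds phase derotation on top of this):

```python
def cma_update(weights, x, mu=0.01, r2=1.0):
    """One constant modulus algorithm (CMA) tap update: push the
    equalizer output toward a constant modulus r2 without any
    training sequence."""
    y = sum(w * xi for w, xi in zip(weights, x))   # equalizer output
    e = y * (r2 - abs(y) ** 2)                     # constant-modulus error
    return [w + mu * e * xi.conjugate() for w, xi in zip(weights, x)]

# Toy single-tap example: an over-amplified sample is pulled back in
new_w = cma_update([1 + 0j], [2 + 0j], mu=0.1)
```

Because the error depends only on the output's modulus, the update is blind to absolute phase, which is exactly why a separate derotator such as CMD is needed for phase offset compensation.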

Optimized Neural Network Weights and Biases Using Particle Swarm Optimization Algorithm for Prediction Applications

  • Ahmadzadeh, Ezat;Lee, Jieun;Moon, Inkyu
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1406-1420
    • /
    • 2017
  • Artificial neural networks (ANNs) play an important role in function approximation, prediction, and classification. ANN performance depends critically on the input parameters, including the number of neurons in each layer and the values of the weights and biases assigned to each neuron. In this study, we apply particle swarm optimization (PSO), a popular optimization algorithm, to determine the optimal values of weights and biases for every neuron in the different layers of the ANN. Several regression models, including general linear regression, Fourier regression, smoothing splines, and polynomial regression, are used to evaluate the proposed method's prediction power compared with multiple linear regression (MLR) methods. In addition, residual analysis is conducted to evaluate the accuracy of the optimized ANN for both training and test datasets. The experimental results demonstrate that the proposed method can effectively determine optimal values for neuron weights and biases, achieving high accuracy in prediction applications, and the simulation results show that the optimized ANN outperforms MLR for prediction purposes.
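A minimal PSO loop of the kind applied here can be sketched in pure Python; in the paper, the objective `f` would be the ANN training error as a function of its weights and biases (the coefficients below are common defaults, not the paper's settings):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimization sketch: each particle keeps
    its personal best, the swarm keeps a global best, and velocities
    blend inertia with random pulls toward both bests."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia weight, cognitive/social pulls
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for the ANN error: distance from (1, 2)
best, best_val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
```

Replacing the toy objective with a forward pass of the network over the training set, with `p` flattened into its weights and biases, gives the setup the paper describes.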