• Title/Summary/Keyword: Deep Learning System


Deep Learning Based Short-Term Electric Load Forecasting Models using One-Hot Encoding (원-핫 인코딩을 이용한 딥러닝 단기 전력수요 예측모델)

  • Kim, Kwang Ho;Chang, Byunghoon;Choi, Hwang Kyu
    • Journal of IKEEE / v.23 no.3 / pp.852-857 / 2019
  • In order to manage the demand resources of project participants and to provide appropriate strategies for consumers or operators who want to participate in the distributed resource collective trading market through a virtual power plant's power trading platform, it is very important to forecast both the next-day demand of individual participants and the overall system's electricity demand. This paper develops a next-day power demand forecasting model. Considering the time series characteristics of the power demand data, the model uses the LSTM algorithm, a deep learning technique, and a new scheme is applied in which input/output values such as power demand are one-hot encoded. In a performance evaluation comparing a general DNN with our LSTM forecasting model, the two models showed root mean square errors of 4.50 and 1.89, respectively, so the LSTM model achieved the higher prediction accuracy.
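
A minimal Keras sketch of the kind of model the abstract describes, an LSTM forecaster over one-hot encoded demand levels; the number of demand bins, sequence length, and layer sizes are illustrative assumptions, not values from the paper:

```python
# Sketch of an LSTM next-day load forecaster with one-hot encoded demand
# levels (bin count, history length, and layer sizes are assumptions).
import numpy as np
import tensorflow as tf

NUM_BINS = 10        # demand quantization levels (assumed)
SEQ_LEN = 24 * 7     # one week of hourly history (assumed)
HORIZON = 24         # predict the next day, hour by hour

def one_hot_encode(demand, bins=NUM_BINS):
    """Quantize a demand series into `bins` levels and one-hot encode it."""
    edges = np.linspace(demand.min(), demand.max(), bins + 1)
    idx = np.clip(np.digitize(demand, edges) - 1, 0, bins - 1)
    return tf.keras.utils.to_categorical(idx, num_classes=bins)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, NUM_BINS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(HORIZON * NUM_BINS),
    tf.keras.layers.Reshape((HORIZON, NUM_BINS)),
    tf.keras.layers.Softmax(axis=-1),   # one-hot style output per hour
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```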

An RNN-based Fault Detection Scheme for Digital Sensors (Detecting Rising-Time and Falling-Time Faults of Digital Sensors with an RNN)

  • Lee, Gyu-Hyung;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.1 / pp.29-35 / 2019
  • As the fourth industrial revolution emerges, many companies are increasingly interested in smart factories, and the importance of sensors is being emphasized. If the sensors that collect sensing data fail, the plant cannot be optimized and, furthermore, cannot be operated properly, which may incur financial losses. It is therefore necessary to diagnose the status of sensors in order to prevent sensor faults. In this paper, we propose a scheme that diagnoses digital sensor faults by analyzing the rising time and falling time of digital sensors with an LSTM (Long Short-Term Memory) network, a deep learning RNN algorithm. Experimental results of the proposed scheme are compared with those of a rule-based fault diagnosis algorithm in terms of accuracy, the ROC (Receiver Operating Characteristic) curve, and its AUC (Area Under the Curve). The results show that the proposed system has better and more stable performance than the rule-based fault diagnosis algorithm.
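
A small sketch of an LSTM classifier over windows of measured rising and falling times, in the spirit of the scheme above; the window length, layer sizes, and two-feature layout are assumptions:

```python
# Sketch of an LSTM fault classifier over sequences of edge timing
# measurements (window length and layer sizes are assumptions).
import tensorflow as tf

WINDOW = 50   # number of consecutive edge measurements per sample (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 2)),        # [rising_time, falling_time]
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = faulty, 0 = normal
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="ROC")])
```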

Quality grading of Hanwoo (Korean native cattle breed) sub-images using convolutional neural network

  • Kwon, Kyung-Do;Lee, Ahyeong;Lim, Jongkuk;Cho, Soohyun;Lee, Wanghee;Cho, Byoung-Kwan;Seo, Youngwook
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.1109-1122 / 2020
  • The aim of this study was to develop a marbling classification and prediction model using small parts of sirloin images based on a deep learning algorithm, namely a convolutional neural network (CNN). Samples were purchased from a commercial slaughterhouse in Korea, images for each grade were acquired, and the total set of images (n = 500) was assigned according to grade: 1++, 1+, 1, and grades 2 & 3 combined. The image acquisition system consists of a DSLR camera with a polarization filter to remove diffusive reflectance and two light sources (55 W). To correct the distorted original images, a radial correction algorithm was implemented. Color images of Hanwoo sirloins (a mix of feeder cattle, steers, and calves) were divided into sub-images of size 161 × 161 to train the marbling prediction model. In this study, the CNN has four convolution layers and yields predictions for the marbling grades (1++, 1+, 1, and 2&3). Every layer uses a rectified linear unit (ReLU) as its activation function, and max-pooling is used to extract the edge between fat and muscle and to reduce the variance of the data. Prediction performance was measured using the accuracy and kappa coefficient derived from a confusion matrix. We summed the predictions over sub-images to determine the total average prediction accuracy. Training accuracy was 100% and test accuracy was 86%, indicating comparatively good performance of the CNN. This study demonstrates the potential of predicting the marbling grade from color images with a convolutional neural network algorithm.
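
A sketch of a four-convolution-layer CNN with ReLU activations and max-pooling for 161 × 161 sub-images and the four marbling grades, as described above; the filter counts per layer are assumptions:

```python
# Sketch of the described four-convolution-layer CNN for 161x161 sirloin
# sub-images and four marbling grades (filter counts are assumptions).
import tensorflow as tf

def conv_block(filters):
    return [tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu"),
            tf.keras.layers.MaxPooling2D(2)]

model = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(161, 161, 3))]
    + conv_block(16) + conv_block(32) + conv_block(64) + conv_block(128)
    + [tf.keras.layers.Flatten(),
       tf.keras.layers.Dense(4, activation="softmax")]   # 1++, 1+, 1, 2&3
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```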

CNN-based Building Recognition Method Robust to Image Noises (이미지 잡음에 강인한 CNN 기반 건물 인식 방법)

  • Lee, Hyo-Chan;Park, In-hag;Im, Tae-ho;Moon, Dai-Tchul
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.341-348 / 2020
  • The ability to extract useful information from an image, as the human eye does, is an interface technology essential for implementing AI computing. Building recognition has a lower recognition rate than other image recognition tasks because of the wide variety of building shapes, seasonal changes in ambient image noise, and distortion by viewing angle and distance. The computer vision based building recognition algorithms presented so far have limited discernment and expandability because building characteristics are defined manually. This paper introduces a deep learning CNN (Convolutional Neural Network) model and proposes a new method that improves the recognition rate even when building images change with season, illumination, angle, and perspective. The method adds partial images that characterize a building, such as window or wall images, and trains the network together with whole building images. Experimental results show that the building recognition rate is improved by about 14% compared to the general CNN model.
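
A hedged sketch of how whole building images and characteristic partial images (window or wall crops) might be merged into one training set for a CNN classifier; the directory layout, image size, and network are assumptions, not the paper's setup:

```python
# Sketch: merge whole-building images with characteristic partial crops
# into one labeled training set (paths and image size are assumptions).
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution

whole = tf.keras.utils.image_dataset_from_directory(
    "buildings/whole", image_size=IMG_SIZE, batch_size=32)
partial = tf.keras.utils.image_dataset_from_directory(
    "buildings/partial", image_size=IMG_SIZE, batch_size=32)
num_classes = len(whole.class_names)

# Both datasets share the same building labels, so they can simply be merged.
train_ds = whole.concatenate(partial).shuffle(1000)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```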

Traffic Speed Prediction Based on Graph Neural Networks for Intelligent Transportation System (지능형 교통 시스템을 위한 Graph Neural Networks 기반 교통 속도 예측)

  • Kim, Sunghoon;Park, Jonghyuk;Choi, Yerim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.1 / pp.70-85 / 2021
  • Deep learning methodology, which has been actively studied in recent years, has improved the performance of artificial intelligence, and systems utilizing deep learning have accordingly been proposed in various industries. In traffic systems, spatio-temporal graph modeling with GNNs has been found effective for predicting traffic speed, but it has the disadvantage that the model is trained inefficiently because of a memory bottleneck. Therefore, in this study, the road network is partitioned with a graph clustering algorithm to reduce the memory bottleneck while achieving superior performance. To verify the proposed method, the similarity of the road speed distributions was measured with the Jensen-Shannon divergence based on an analysis of Incheon UTIC data, and the road network was then partitioned by spectral clustering on the measured similarity. The experiments show that when the road network was divided into seven sub-networks, the memory bottleneck was alleviated while the best performance among the baselines was recorded, with an MAE of 5.52 km/h.
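
A sketch of the clustering step the abstract describes: pairwise Jensen-Shannon divergence between per-road speed distributions, converted to an affinity matrix for spectral clustering with seven clusters; the histogram binning and the exp(-distance) affinity are assumptions:

```python
# Sketch: JS-divergence similarity between road speed distributions,
# followed by spectral clustering into seven sub-networks.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import SpectralClustering

def speed_histograms(speeds, bins=20):
    """speeds: (num_roads, num_timesteps) array -> per-road speed histograms."""
    return np.array([np.histogram(s, bins=bins, range=(0, 120), density=True)[0]
                     for s in speeds])

def js_affinity(hists):
    n = len(hists)
    aff = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = jensenshannon(hists[i], hists[j])   # 0 = identical distributions
            aff[i, j] = np.exp(-d)                  # larger = more similar
    return aff

# speeds = load_utic_speeds(...)   # hypothetical loader for Incheon UTIC data
# labels = SpectralClustering(n_clusters=7, affinity="precomputed").fit_predict(
#     js_affinity(speed_histograms(speeds)))
```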

Development of an Improved Geometric Path Tracking Algorithm with Real Time Image Processing Methods (실시간 이미지 처리 방법을 이용한 개선된 차선 인식 경로 추종 알고리즘 개발)

  • Seo, Eunbin;Lee, Seunggi;Yeo, Hoyeong;Shin, Gwanjun;Choi, Gyeungho;Lim, Yongseob
    • Journal of Auto-vehicle Safety Association / v.13 no.2 / pp.35-41 / 2021
  • In this study, an improved path tracking control algorithm based on the pure pursuit algorithm is proposed, together with an improved lane detection algorithm that uses real-time post-processing and interpolation. Since the original pure pursuit works well only at speeds below 20 km/h, the look-ahead distance is implemented as a sigmoid function of speed so that tracking performance remains good at an average speed of 45 km/h. In addition, a smoothing filter was added to reduce the steering angle vibration of the original algorithm and improve steering stability. The presented post-processing algorithm implements a more robust lane recognition system using real-time pre/post-processing with deep learning and estimated interpolation. Real-time processing is more cost-effective than approaches that rely on large computing resources and abundant datasets to improve deep learning performance; therefore, this paper also presents improved lane detection performance obtained by combining simple computer vision code with pre/post-processing. First, the pre-processing was newly designed for real-time operation and robust recognition through augmentation. Second, the post-processing was designed to detect lanes from the segmentation results using estimated interpolation that exploits the continuity of lanes. Consequently, experimental results using the driving guidance line information from the processing stages show that the improved lane detection algorithm effectively minimizes the lateral offset error on diverse roads.
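
A short sketch of a speed-dependent sigmoid look-ahead distance combined with the standard pure pursuit steering law; the wheelbase, look-ahead range, and sigmoid constants are illustrative assumptions, not the paper's tuned values:

```python
# Sketch: sigmoid look-ahead distance plugged into the standard pure pursuit
# steering law (all constants are illustrative assumptions).
import math

WHEELBASE = 2.7              # vehicle wheelbase [m] (assumed)
LD_MIN, LD_MAX = 3.0, 15.0   # look-ahead range [m] (assumed)

def lookahead_distance(speed_kmh, v_mid=30.0, slope=0.2):
    """Sigmoid look-ahead: short at low speed, long at high speed."""
    s = 1.0 / (1.0 + math.exp(-slope * (speed_kmh - v_mid)))
    return LD_MIN + (LD_MAX - LD_MIN) * s

def pure_pursuit_steering(alpha, speed_kmh):
    """alpha: angle to the look-ahead point in the vehicle frame [rad]."""
    ld = lookahead_distance(speed_kmh)
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), ld)
```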

Optimization of the Kernel Size in CNN Noise Attenuator (CNN 잡음 감쇠기에서 커널 사이즈의 최적화)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.6 / pp.987-994 / 2020
  • In this paper, we studied the effect of the kernel size of the CNN layer on the performance of an acoustic noise attenuator. The system uses a deep learning algorithm, a neural network adaptive prediction filter, instead of a conventional adaptive filter. Speech is estimated from a single noisy input speech signal using a CNN with 100 neurons and 16 filters together with the error back-propagation algorithm, exploiting the quasi-periodic property of the voiced sections of the speech signal. In this study, a simulation program was written using the Tensorflow and Keras libraries, and simulations were performed to verify the noise attenuator's performance with respect to kernel size. The simulations show that the MSE and MAE values are smallest when the kernel size is about 16 and increase when the size is smaller or larger than 16, indicating that, for a speech signal, the features are best captured with a kernel size of about 16.
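
A sketch of how such a kernel-size sweep could be set up with a 1-D convolutional prediction filter in Keras, in line with the 16-filter, 100-neuron structure mentioned above; the frame length and exact layer layout are assumptions:

```python
# Sketch: 1-D CNN prediction filter with a configurable kernel size,
# swept over several sizes to compare MSE/MAE (frame length is assumed).
import tensorflow as tf

FRAME = 128   # input frame length in samples (assumed)

def build_attenuator(kernel_size):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(FRAME, 1)),
        tf.keras.layers.Conv1D(16, kernel_size, padding="same", activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(1),   # predicted clean sample
    ])

for k in (4, 8, 16, 32):
    m = build_attenuator(k)
    m.compile(optimizer="adam", loss="mse", metrics=["mae"])
    # m.fit(noisy_frames, clean_samples, ...)   # hypothetical training data
```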

Mapless Navigation Based on DQN Considering Moving Obstacles, and Training Time Reduction Algorithm (이동 장애물을 고려한 DQN 기반의 Mapless Navigation 및 학습 시간 단축 알고리즘)

  • Yoon, Beomjin;Yoo, Seungryeol
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.377-383 / 2021
  • Recently, in line with the fourth industrial revolution, the use of autonomous mobile robots for flexible logistics transfer has been increasing in factories, warehouses, and service areas. In large factories, much manual work is required to use Simultaneous Localization and Mapping (SLAM), so the need for improved autonomous driving of mobile robots is emerging. Accordingly, this paper proposes a mapless navigation algorithm that travels along an optimal path while avoiding fixed or moving obstacles. For mapless navigation, the robot is trained to avoid fixed or moving obstacles with a Deep Q-Network (DQN), and accuracies of 90% and 93% are obtained for the two types of obstacle avoidance, respectively. In addition, a DQN requires a long training time to reach the required performance before it can be used. To shorten this, a target size change algorithm is proposed, and the reduced training time and obstacle avoidance performance are confirmed through simulation.
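
A minimal sketch of the DQN side of such a system: a Q-network over range-sensor observations and an epsilon-greedy action choice; the state dimension, action set, and layer sizes are assumptions:

```python
# Sketch: Q-network and epsilon-greedy policy for mapless obstacle avoidance
# (state size, action set, and layer sizes are assumptions).
import numpy as np
import tensorflow as tf

STATE_DIM = 24    # e.g. downsampled range readings + goal direction (assumed)
NUM_ACTIONS = 5   # e.g. discrete heading commands (assumed)

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS),   # Q-value per action
])

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)
    q_values = q_net(state[np.newaxis, :], training=False)
    return int(tf.argmax(q_values[0]))
```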

A study on the Generation Method of Aircraft Wing Flexure Data Using Generative Adversarial Networks (생성적 적대 신경망을 이용한 항공기 날개 플렉셔 데이터 생성 방안에 관한 연구)

  • Ryu, Kyung-Don
    • Journal of Advanced Navigation Technology / v.26 no.3 / pp.179-184 / 2022
  • An accurate wing flexure model is required to improve the transfer alignment performance of a guided weapon system mounted on the wing of a fighter aircraft or armed helicopter. To solve this problem, mechanical and stochastic modeling methods have been studied, but their accuracy is too low to be applied to weapon systems. Deep learning techniques, which have been actively studied recently, are well suited to nonlinear modeling; however, operating fighter aircraft to secure the large amount of data required for deep learning modeling is practically difficult. In this paper, a generative adversarial network (GAN) was used to generate flexure data samples that are similar to the actual flexure data, and it was confirmed that the generated data resemble the actual data by using measures of similarity, which quantify how alike two data objects are.
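
A minimal GAN sketch for generating synthetic one-dimensional flexure samples, in the spirit of the approach above; the sample length, noise dimension, and layer sizes are assumptions:

```python
# Sketch: generator and discriminator for synthetic flexure samples
# (sample length, noise dimension, and layer sizes are assumptions).
import tensorflow as tf

NOISE_DIM = 32
SAMPLE_LEN = 100   # length of one flexure time-series sample (assumed)

generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NOISE_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(SAMPLE_LEN, activation="tanh"),   # fake flexure sample
])

discriminator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SAMPLE_LEN,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # real (1) vs. generated (0)
])

# Training alternates: update the discriminator on real vs. generated samples,
# then update the generator to fool the discriminator (standard GAN loop).
```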

Development of AI Detection Model based on CCTV Image for Underground Utility Tunnel (지하공동구의 CCTV 영상 기반 AI 연기 감지 모델 개발)

  • Kim, Jeongsoo;Park, Sangmi;Hong, Changhee;Park, Seunghwa;Lee, Jaewook
    • Journal of the Society of Disaster Information / v.18 no.2 / pp.364-373 / 2022
  • Purpose: The purpose of this paper is to develop an AI smoke detection model for detecting fires at an early stage in underground utility tunnels using CCTV images. Method: To improve the detection performance for smoke, which is highly irregular, a deep learning fire detection model was trained and optimized for smoke detection. Several approaches, such as dataset cleansing and mitigation of exploding gradients, were applied to enhance the model, and their results were compared. Result: The results show that the proposed approaches improve model performance, and the final model has good prediction capability according to several indexes such as mAP. However, the final model has a low false negative rate but a high false positive rate. Conclusion: The present model can be applied to smoke detection in underground utility tunnels, and its remaining defect can be addressed by linking the model with the utility tunnel control system.
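
One of the enhancements mentioned above, mitigation of exploding gradients, is commonly done with gradient-norm clipping; a minimal Keras sketch follows, where the clip value and optimizer are assumptions and build_smoke_detector is a hypothetical model constructor:

```python
# Sketch: gradient-norm clipping on the optimizer caps the global gradient
# norm, a standard way to keep training stable when gradients explode.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, clipnorm=1.0)

# model = build_smoke_detector()   # hypothetical CCTV smoke detection model
# model.compile(optimizer=optimizer, loss="binary_crossentropy")
```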