• Title/Summary/Keyword: Deep Neural Network (딥 뉴럴 네트워크)

Search Results: 62

A Novel SOC Estimation Method for Multiple Number of Lithium Batteries Using Deep Neural Network (딥 뉴럴 네트워크를 이용한 새로운 리튬이온 배터리의 SOC 추정법)

  • Khan, Asad;Ko, Young-hwi;Choi, Woojin
    • Proceedings of the KIPE Conference / 2019.11a / pp.70-72 / 2019
  • For the safe and reliable operation of lithium-ion batteries in Electric Vehicles (EVs) or Energy Storage Systems (ESSs), it is essential to have accurate information about the battery, such as the State of Charge (SOC). Many different techniques for estimating the SOC of batteries have been developed so far, such as the Kalman Filter. However, when it is applied to multiple batteries, it is difficult to maintain estimation accuracy over all cells because of differences in the parameter values of each cell. Moreover, these parameter differences may grow as operating time accumulates due to aging. In this paper, a novel Deep Neural Network (DNN)-based SOC estimation method for multi-cell applications is proposed. In the proposed method, a DNN is implemented to learn the nonlinear relationship between the voltage and current of the lithium-ion battery at different SOCs and different temperatures. For training, voltage and current data obtained from charge and discharge cycles at different temperatures are used. After comprehensive training with the data obtained from one cell, the resulting estimation algorithm is applied to the other cells. The experimental results show that the Mean Absolute Error (MAE) of the estimation is 0.56% at 25℃ and 3.16% at 60℃ with the proposed SOC estimation algorithm.


A Novel SOC Estimation Method for Multiple Number of Lithium Batteries Using a Deep Neural Network (딥 뉴럴 네트워크를 이용한 새로운 리튬이온 배터리의 SOC 추정법)

  • Khan, Asad;Ko, Young-Hwi;Choi, Woo-Jin
    • The Transactions of the Korean Institute of Power Electronics / v.26 no.1 / pp.1-8 / 2021
  • For the safe and reliable operation of lithium-ion batteries in electric vehicles or energy storage systems, having accurate information about the battery, such as the state of charge (SOC), is essential. Many different techniques for battery SOC estimation have been developed, such as the Kalman filter. However, when this filter is applied to multiple batteries, it has difficulty maintaining estimation accuracy over all cells owing to differences in the parameter values of each cell. These parameter differences may increase as the operation time accumulates due to aging. In this paper, a novel deep neural network (DNN)-based SOC estimation method for multi-cell applications is proposed. In the proposed method, a DNN is implemented to determine the nonlinear relationships of the voltage and current at different SOCs and temperatures. In the training, the voltage and current data obtained at different temperatures during charge/discharge cycles are used. After comprehensive training with the data obtained from the cycle test with one cell, the resulting algorithm is applied to estimate the SOC of the other cells. Experimental results show that the mean absolute error of the estimation is 1.213% at 25℃ with the proposed DNN-based SOC estimation method.
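As an illustration of the mapping the two entries above describe, here is a minimal pure-Python sketch of a feed-forward pass from (voltage, current, temperature) to SOC, together with the mean-absolute-error metric quoted in the abstracts. The network shape and the weights in the usage example are hypothetical placeholders, not the authors' trained model.

```python
def relu(xs):
    # element-wise rectified linear unit
    return [max(0.0, x) for x in xs]

def dense(x, W, b):
    # fully connected layer: one row of W per output neuron
    return [sum(xi * wi for xi, wi in zip(x, row)) + bj
            for row, bj in zip(W, b)]

def mlp_soc(voltage, current, temp_c, params):
    """Forward pass of a tiny (hypothetical) SOC-estimation network."""
    h = relu(dense([voltage, current, temp_c], params["W1"], params["b1"]))
    (soc,) = dense(h, params["W2"], params["b2"])
    return min(max(soc, 0.0), 1.0)  # SOC is physically bounded to [0, 1]

def mean_absolute_error(y_true, y_pred):
    """The MAE figure quoted in the abstracts (as a fraction, not a percentage)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, with placeholder weights `{"W1": [[0.1, 0, 0], [0, 0.1, 0]], "b1": [0, 0], "W2": [[1.0, 1.0]], "b2": [0.0]}`, `mlp_soc(4.0, 1.0, 25.0, ...)` produces a value in [0, 1]; the real network's weights come from training on the charge/discharge cycle data.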

Low Power ADC Design for Mixed Signal Convolutional Neural Network Accelerator (혼성신호 컨볼루션 뉴럴 네트워크 가속기를 위한 저전력 ADC설계)

  • Lee, Jung Yeon;Asghar, Malik Summair;Arslan, Saad;Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1627-1634 / 2021
  • This paper introduces a low-power, compact ADC circuit for an analog convolutional filter in a low-power neural network accelerator SoC. Although convolutional neural network accelerators can speed up the learning and inference process, they have the drawbacks of consuming excessive power and occupying a large chip area due to the large number of multiply-and-accumulate operators when implemented in complex digital circuits. To overcome these drawbacks, we implemented an analog convolutional filter that consists of an analog multiply-and-accumulate arithmetic circuit along with an ADC. This paper focuses on the design optimization of a low-power 8-bit SAR ADC for the analog convolutional filter accelerator. We demonstrate how to minimize the capacitor-array DAC, an important component of the SAR ADC, making it three times smaller than the conventional circuit. The proposed ADC has been fabricated in a 65 nm CMOS process. It achieves an overall size of 1355.7 ㎛², power consumption of 2.6 ㎼ at a frequency of 100 MHz, an SNDR of 44.19 dB, and an ENOB of 7.04 bits.
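The successive-approximation principle behind the SAR ADC discussed above amounts to a binary search on the DAC code. The following is a behavioral sketch only (ideal comparator and DAC, hypothetical reference voltage), not a model of the fabricated circuit:

```python
def sar_adc(vin, vref=1.0, bits=8):
    """Ideal SAR conversion: binary search on the capacitor-DAC code.

    Each cycle trial-sets one bit, from MSB to LSB, and keeps it only if
    the DAC output does not exceed the input voltage.
    """
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        # ideal comparator: keep the bit if DAC(trial) stays at or below vin
        if trial / (1 << bits) * vref <= vin:
            code = trial
    return code
```

With `vref = 1.0`, an input of 0.5 V converges to code 128 after 8 comparator decisions, one per bit, which is why a SAR ADC needs only a single comparator and the capacitor-array DAC the paper optimizes.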

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.23 no.3 / pp.351-360 / 2018
  • Human emotion recognition is a research topic that receives continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals, which consist of image, landmark, and audio, in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning using the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) landmark information of the face into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism robust to those specific emotions. Finally, so-called emotion adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised learning and semi-supervised learning networks. In the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
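The final fusion step described above can be illustrated with a weighted average of per-network class probabilities. The weights here are hypothetical placeholders; the paper's emotion adaptive fusion presumably adapts them per emotion, which this minimal sketch does not model:

```python
def adaptive_fusion(prob_lists, weights):
    """Weighted average of class-probability vectors from several networks.

    prob_lists: one probability vector per modality network
    weights:    one (hypothetical) reliability weight per network
    """
    norm = sum(weights)
    n_classes = len(prob_lists[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_lists)) / norm
            for c in range(n_classes)]
```

For example, fusing an image-network output `[0.6, 0.4]` with an audio-network output `[0.2, 0.8]` under equal weights yields `[0.4, 0.6]`, letting a modality that is confident about a specific emotion shift the final decision.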

Timely Sensor Fault Detection Scheme based on Deep Learning (딥 러닝 기반 실시간 센서 고장 검출 기법)

  • Yang, Jae-Wan;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.1 / pp.163-169 / 2020
  • Recently, research on the automation and unmanned operation of machines in the industrial field has been conducted with the advent of AI, big data, and the IoT, which are the core technologies of the Fourth Industrial Revolution. The machines in these automation processes are controlled based on the data collected from the sensors attached to them, and the processes are managed accordingly. Conventionally, sensor abnormalities are checked and managed periodically. However, owing to various environmental factors and situations in the industrial field, inspections can be missed and failures can go undetected, so damage due to sensor failure is not always prevented. In addition, even when a failure occurs, it may not be detected immediately, which worsens the process loss. Therefore, in order to prevent damage caused by such sudden sensor failures, it is necessary to identify the failure of a sensor in an embedded system in real time, diagnose the failure, and determine its type for a quick response. In this paper, a deep neural network-based fault diagnosis system is designed and implemented using a Raspberry Pi to classify typical sensor fault types such as erratic, hard-over, spike, and stuck faults. To diagnose sensor failure, the network is constructed using the inverted residual block structure of Google's MobileNetV2. The proposed scheme reduces memory usage and improves on the performance of the conventional CNN technique for classifying sensor faults.
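The four fault types named in the abstract (erratic, hard-over, spike, stuck) are commonly simulated by corrupting a clean signal before training a classifier. A minimal sketch of such fault injection, with hypothetical magnitudes and onset position, might look like:

```python
import random

def inject_fault(signal, kind, at=0.5, magnitude=5.0, noise=1.0):
    """Corrupt the tail of a clean signal with one of four typical faults."""
    out = list(signal)
    start = int(len(out) * at)
    if kind == "erratic":       # noise variance increases sharply
        out[start:] = [v + random.gauss(0.0, noise) for v in out[start:]]
    elif kind == "hard-over":   # reading saturates at a rail value
        out[start:] = [magnitude] * (len(out) - start)
    elif kind == "spike":       # short transient spikes at intervals
        for i in range(start, len(out), 10):
            out[i] += magnitude
    elif kind == "stuck":       # output freezes at the last valid value
        out[start:] = [out[start]] * (len(out) - start)
    return out
```

Windows of such corrupted signals, labeled by fault kind, would then form the training set for a classifier like the MobileNetV2-style network described above.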

Evaluation of Classification and Accuracy in Chest X-ray Images using Deep Learning with Convolution Neural Network (컨볼루션 뉴럴 네트워크 기반의 딥러닝을 이용한 흉부 X-ray 영상의 분류 및 정확도 평가)

  • Song, Ho-Jun;Lee, Eun-Byeol;Jo, Heung-Joon;Park, Se-Young;Kim, So-Young;Kim, Hyeon-Jeong;Hong, Joo-Wan
    • Journal of the Korean Society of Radiology / v.14 no.1 / pp.39-44 / 2020
  • The purpose of this study was to examine chest X-ray image classification and accuracy through deep learning with a convolutional neural network using big data technology. Chest X-ray images of 1,583 normal and 4,289 pneumonia cases were used. The data were split into training (88.8%), validation (0.2%), and test (11%) sets. The network was constructed with convolution layers, 2×2 max pooling layers, a flatten layer, and an ImageDataGenerator. The number of filters, filter size, dropout, epochs, batch size, and loss function values were set for networks with three and four convolution layers, respectively. The test data verification results showed a predicted accuracy of 94.67% with four convolution layers, when the number of filters was 64-128-128-128, the filter size 3×3, dropout 0.25, epochs 5, batch size 15, and the loss function RMSprop. In this study, normal and pneumonia chest X-rays could be classified with high accuracy, which is believed to be of great help not only for chest X-ray images but also for other medical images.
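The spatial dimensions flowing through the stack described above (3×3 convolutions followed by 2×2 max pooling, one block per filter count) can be traced with a small helper. The 150-pixel input size and the unpadded ("valid") convolution are assumptions for illustration; the abstract does not state the input resolution or padding:

```python
def conv2d_out(size, kernel=3, stride=1, padding=0):
    # standard convolution output-size formula
    return (size - kernel + 2 * padding) // stride + 1

def maxpool_out(size, pool=2):
    # non-overlapping 2x2 pooling halves the spatial size
    return size // pool

def feature_map_size(input_size, filters=(64, 128, 128, 128)):
    """Trace the spatial size through conv(3x3) + maxpool(2x2) blocks."""
    size = input_size
    for _ in filters:
        size = maxpool_out(conv2d_out(size))
    return size
```

Under these assumptions a 150×150 input shrinks to 7×7 after the four 64-128-128-128 blocks, which is the tensor the flatten layer would receive before classification.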

A Survey on Neural Networks Using Memory Component (메모리 요소를 활용한 신경망 연구 동향)

  • Lee, Jihwan;Park, Jinuk;Kim, Jaehyung;Kim, Jaein;Roh, Hongchan;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.7 no.8 / pp.307-324 / 2018
  • Recently, recurrent neural networks have been attracting attention for solving prediction problems on sequential data through structures that consider time dependency. However, as the time steps of sequential data increase, the vanishing gradient problem occurs. Long short-term memory models have been proposed to solve this problem, but they are limited in storing large amounts of data and preserving them for a long time. Therefore, research on the memory-augmented neural network (MANN), a learning model combining recurrent neural networks with memory elements, has been actively conducted. In this paper, we describe the structure and characteristics of MANN models, which have emerged as a hot topic in the deep learning field, and present the latest techniques and future research directions that utilize MANN.
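A core ingredient of the MANN models surveyed above (e.g. the Neural Turing Machine) is content-based addressing: a soft read over memory rows, weighted by a softmax of cosine similarity to a query key. A minimal pure-Python sketch (the sharpening factor `beta` is a hypothetical value):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def read(memory, key, beta=10.0):
    """Soft read: softmax over beta-scaled cosine similarity per memory row."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)                      # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]
    # read vector: attention-weighted sum of memory rows
    vec = [sum(wi * row[j] for wi, row in zip(w, memory))
           for j in range(len(memory[0]))]
    return vec, w
```

Because the addressing is a differentiable softmax rather than a hard lookup, gradients flow through the read weights, which is what lets the controller network learn what to store and retrieve.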

Exploring the Performance of Synthetic Minority Over-sampling Technique (SMOTE) to Predict Good Borrowers in P2P Lending (P2P 대부 우수 대출자 예측을 위한 합성 소수집단 오버샘플링 기법 성과에 관한 탐색적 연구)

  • Costello, Francis Joseph;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.9 / pp.71-78 / 2019
  • This study aims to identify good borrowers within the context of P2P lending. P2P lending is a growing platform that allows individuals to lend and borrow money from each other. Inherent in any loan is the credit risk of the borrower, which needs to be considered before lending. Specifically in the context of P2P lending, traditional models fall short, and thus this study aimed to rectify this as well as explore the problem of class imbalance seen within credit risk data sets. This study implemented an over-sampling technique known as the Synthetic Minority Over-sampling Technique (SMOTE). To test our approach, we implemented five benchmark classifiers: support vector machines, logistic regression, k-nearest neighbors, random forest, and a deep neural network. The data sample used was retrieved from the publicly available LendingClub dataset. The proposed SMOTE approach revealed significantly improved results in comparison with the benchmark classifiers. These results should help actors engaged in P2P lending make better-informed decisions when selecting potential borrowers, eliminating the higher risks present in P2P lending.
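Plain SMOTE, the technique named above, creates synthetic minority samples by interpolating between a minority point and one of its k nearest minority neighbours. A self-contained sketch of the idea (not the study's exact configuration or parameters):

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic samples by interpolating each chosen
    minority point toward a random one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random point on the segment between x and nb
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority region instead of merely duplicating existing rows, which is what lets the classifiers see a balanced yet varied training set.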

Study on Prediction of Similar Typhoons through Neural Network Optimization (뉴럴 네트워크의 최적화에 따른 유사태풍 예측에 관한 연구)

  • Kim, Yeon-Joong;Kim, Tae-Woo;Yoon, Jong-Sung;Kim, In-Ho
    • Journal of Ocean Engineering and Technology / v.33 no.5 / pp.427-434 / 2019
  • Artificial intelligence (AI)-aided research currently enjoys active use in a wide array of fields thanks to the rapid development of computing capability and the use of big data. Until now, forecasting methods were primarily based on physics models and statistical studies. Today, AI is utilized in disaster prevention forecasts by studying the relationships between physical factors and their characteristics. Current studies also involve combining AI and physics models to supplement the strengths and weaknesses of each approach. However, prior to these studies, an optimization algorithm for the AI model should be developed and its applicability should be studied. This study aimed to improve forecast performance by constructing a model for neural network optimization. An artificial neural network (ANN) followed the ever-changing path of a typhoon to produce similar typhoon predictions, while the optimization of the neural network algorithm was examined by evaluating the activation function, hidden layer composition, and dropout. A learning and test dataset was constructed from the available digital data on typhoons that affected Korea throughout the record period (1951-2018). As a result of the neural network optimization, assessments showed a higher degree of forecast accuracy.
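The optimization described above, over activation function, hidden layer composition, and dropout, amounts to searching a configuration space. A minimal sketch of enumerating such a grid (the candidate values below are hypothetical, not the study's search space):

```python
from itertools import product

def grid(activations, hidden_layouts, dropouts):
    """Enumerate every candidate network configuration for the search."""
    return [{"activation": a, "hidden": h, "dropout": d}
            for a, h, d in product(activations, hidden_layouts, dropouts)]
```

Each configuration would then be trained and scored on the typhoon dataset, with the best-scoring combination kept, which is the sense in which the study "optimizes" the neural network.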

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.384-390 / 2020
  • The reverberation effect on sound when producing movies or VR content is a very important factor for realism and liveliness. The reverberation time depending on the space is recommended in a standard called RT60 (Reverberation Time 60 dB). In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification is limited when trained on color information alone because of the similarity of interior structures, so deep-learning-based depth estimation is used to obtain spatial depth information. Based on RT60, 10 scene classes were constructed, and model training and evaluation were conducted. Finally, the proposed SCR+DNet (Scene Classification for Reverb + Depth Net) classifier achieves higher performance than conventional CNN classifiers, with 92.4% accuracy.
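The two independently trained streams above (color and predicted depth) must be combined at inference time; one common approach is late fusion of the class scores. A sketch with a hypothetical mixing weight `alpha` (the abstract does not specify how SCR+DNet fuses its streams):

```python
def late_fusion(color_scores, depth_scores, alpha=0.5):
    """Blend per-class scores from the color and depth streams."""
    return [alpha * c + (1 - alpha) * d
            for c, d in zip(color_scores, depth_scores)]

def predict(color_scores, depth_scores, alpha=0.5):
    """Return the index of the winning scene class after fusion."""
    fused = late_fusion(color_scores, depth_scores, alpha)
    return max(range(len(fused)), key=fused.__getitem__)
```

With `alpha = 0.5` both streams contribute equally; sweeping `alpha` on a validation set is one simple way to weight whichever modality discriminates the 10 RT60-based scene classes better.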