• Title/Summary/Keyword: Tensorflow

Search Results: 116

Path selection algorithm for multi-path system based on deep Q learning (Deep Q 학습 기반의 다중경로 시스템 경로 선택 알고리즘)

  • Chung, Byung Chang;Park, Heasook
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.50-55 / 2021
  • A multi-path system is a system that utilizes multiple networks simultaneously. It is expected that a multi-path system can enhance the communication speed, reliability, and security of a network. In this paper, we focus on path selection in a multi-path system. To select the optimal path, we propose a deep reinforcement learning algorithm whose reward is based on the round-trip time (RTT) of each network. Unlike a multi-armed bandit model, deep Q learning is applied to cope with rapidly changing situations. Because the RTT data arrive with a delay, we also suggest a compensation algorithm for the delayed reward. Moreover, we implement a testbed learning server to evaluate the performance of the proposed algorithm. The learning server contains a distributed database and a TensorFlow module to operate the deep learning algorithm efficiently. Simulations show that the proposed algorithm performs about 20% better than the lowest-RTT baseline.
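
A minimal sketch of the approach this abstract describes may help: a small TensorFlow Q-network scores each candidate path and is updated with the negative RTT as its reward. The number of paths, the state layout, and every hyperparameter below are illustrative assumptions rather than the authors' configuration.

```python
# Hedged sketch (not the authors' code): deep Q learning for path selection
# where the reward is the negative round-trip time (RTT) of the chosen path.
import numpy as np
import tensorflow as tf

NUM_PATHS = 3          # assumed number of candidate networks
STATE_DIM = NUM_PATHS  # assumed state: most recent RTT observed on each path

q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_PATHS),          # one Q-value per path
])
optimizer = tf.keras.optimizers.Adam(1e-3)
gamma, epsilon = 0.9, 0.1                      # assumed discount and exploration rates

def select_path(state):
    """Epsilon-greedy path selection from the Q-network."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_PATHS)
    q_values = q_net(state[np.newaxis, :], training=False).numpy()[0]
    return int(np.argmax(q_values))

def train_step(state, action, rtt, next_state):
    """One Q-learning update; reward = -RTT, so a shorter RTT is better."""
    reward = -float(rtt)
    next_q = q_net(next_state[np.newaxis, :], training=False)
    target_q = reward + gamma * float(tf.reduce_max(next_q))
    with tf.GradientTape() as tape:
        q_pred = q_net(state[np.newaxis, :], training=True)[0, action]
        loss = tf.square(q_pred - target_q)
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return float(loss)
```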

Quantitative Analysis for Win/Loss Prediction of 'League of Legends' Utilizing the Deep Neural Network System through Big Data

  • No, Si-Jae;Moon, Yoo-Jin;Hwang, Young-Ho
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.213-221 / 2021
  • In this paper, we present a deep neural network model system for predicting the results of 'League of Legends (LOL)' matches. The model was trained on approximately 26,000 LOL matches using Keras in TensorFlow. It achieved an accuracy of 93.75% without overfitting when predicting the '2020 League of Legends Worlds Championship' using real data from the middle of the game. It employed the Sigmoid, ReLU, and Logcosh functions for better performance. The experiments found that four variables largely affected the accuracy of predicting a match: 'Dragon Gap', 'Level Gap', 'Blue Rift Heralds', and 'Tower Kills Gap'; ordinary users can also use the model to help develop game strategies by focusing on these four elements. Furthermore, the model can be applied to predicting matches in professional e-sports leagues around the world and can serve as a useful training indicator for professional teams, contributing to the vitalization of e-sports.
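
For readers who want a concrete picture, the following is a hedged Keras sketch of a binary win/loss classifier over mid-game gap features; the feature count, layer sizes, and the use of log-cosh as the loss are assumptions based only on the functions named in the abstract.

```python
# Hedged sketch (not the published model): Keras binary classifier for
# predicting a LOL match outcome from mid-game gap features.
import tensorflow as tf

NUM_FEATURES = 4   # assumed: Dragon Gap, Level Gap, Blue Rift Heralds, Tower Kills Gap

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # estimated win probability
])

# The abstract names Sigmoid, ReLU, and Logcosh; one plausible reading is
# log-cosh as the training loss, used here as an assumption.
model.compile(optimizer="adam",
              loss=tf.keras.losses.LogCosh(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

# model.fit(x_train, y_train, epochs=20, validation_split=0.2)
```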

Study of regularization of long short-term memory (LSTM) for fall detection system of the elderly (장단기 메모리를 이용한 노인 낙상감지시스템의 정규화에 대한 연구)

  • Jeong, Seung Su;Kim, Namg Ho;Yu, Yun Seop
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1649-1654 / 2021
  • In this paper, we introduce regularization of a long short-term memory (LSTM) based fall detection system, built with TensorFlow, that can detect falls occurring among the elderly. Fall detection uses data from a 3-axis acceleration sensor attached to the body of an elderly person and learns a total of 7 behavior patterns, 4 of which occur in daily life while the remaining 3 are fall patterns. During training, a normalization process is performed to effectively reduce the loss function: min-max normalization is applied to the data and L2 regularization to the loss function. The optimal regularization conditions of the LSTM, using several fall-related parameters obtained from the 3-axis accelerometer, are explained. When the normalization factor and the regularization rate λ for the sum vector magnitude (SVM) are 127 and 0.00015, respectively, the best sensitivity, specificity, and accuracy are 98.4%, 94.8%, and 96.9%, respectively.
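
As a rough illustration of the combination described here, the sketch below applies min-max normalization to accelerometer windows and trains an L2-regularized Keras LSTM; the window length and layer width are assumptions, while the 7 behavior classes and λ = 0.00015 follow the abstract.

```python
# Hedged sketch: min-max normalization plus an L2-regularized LSTM classifier
# for 3-axis accelerometer windows (7 behavior classes, lambda = 0.00015).
import tensorflow as tf

WINDOW, CHANNELS, NUM_CLASSES = 100, 3, 7   # window length is an assumption
L2_RATE = 0.00015                           # regularization rate reported in the abstract

def min_max_normalize(x, x_min, x_max):
    """Min-max normalization of raw sensor values into [0, 1]."""
    return (x - x_min) / (x_max - x_min)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(
        32,
        input_shape=(WINDOW, CHANNELS),
        kernel_regularizer=tf.keras.regularizers.l2(L2_RATE)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```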

Comparison of Deep Learning Algorithm in Bus Boarding Assistance System for the Visually Impaired using Deep Learning and Traffic Information Open API (딥러닝과 교통정보 Open API를 이용한 시각장애인 버스 탑승 보조 시스템에서 딥러닝 알고리즘 성능 비교)

  • Kim, Tae hong;Yeo, Gil Su;Jeong, Se Jun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.388-390 / 2021
  • This paper introduces a system that helps visually impaired people board a bus, using an embedded board with a keypad, dot matrix display, lidar sensor, and NFC reader, together with a public data portal Open API and a deep learning algorithm (YOLOv5). The user inputs the desired bus number through the NFC reader and keypad, and the system then provides the location and expected arrival time of the bus, obtained from real-time Open API data, through voice output. In addition, displaying the bus number on the dot matrix can let the bus driver know to wait for the visually impaired passenger; at the same time, the deep learning algorithm (YOLOv5) recognizes the number of the arriving bus in real time, and the distance to the bus is measured with a distance sensor such as the lidar sensor.
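
The detection step of such a system might be wired up as in the hedged sketch below, which runs the public YOLOv5 model on camera frames and keeps only 'bus' detections; the camera index, weights, and the bus-number and lidar handling are assumptions or omitted.

```python
# Hedged sketch: YOLOv5 inference on a camera frame, filtered to 'bus' boxes.
# Bus-number reading and lidar distance integration are not shown.
import cv2
import torch

# Pretrained YOLOv5s from the official repository via the torch.hub API.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_buses(frame_bgr):
    """Return YOLOv5 detections labeled 'bus' for one OpenCV (BGR) frame."""
    results = model(frame_bgr[..., ::-1])          # convert BGR to RGB
    detections = results.pandas().xyxy[0]          # DataFrame of boxes
    return detections[detections["name"] == "bus"]

cap = cv2.VideoCapture(0)                          # assumed camera index
ok, frame = cap.read()
if ok:
    buses = detect_buses(frame)
    print(buses[["xmin", "ymin", "xmax", "ymax", "confidence"]])
cap.release()
```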

Optimization Of Water Quality Prediction Model In Daechong Reservoir, Based On Multiple Layer Perceptron (다층 퍼셉트론을 기반으로 한 대청호 수질 예측 모델 최적화)

  • Lee, Hankyu;Kim, Jin Hui;Byeon, Seohyeon;Park, Kangdong;Shin, Jae-ki;Park, Yongeun
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.43-43 / 2022
  • Harmful algal blooms occur frequently in reservoirs and rivers across the country, damaging the landscape, degrading water quality, and otherwise negatively affecting water resources. In this study, we aimed to develop a prediction model using deep learning techniques to predict harmful algal blooms occurring in a reservoir. The target site was the Chudong station of Daecheong Reservoir. Daecheong Reservoir is a dam reservoir located in the middle of the Geum River basin that supplies water to about 1.5 million people, so managing harmful cyanobacterial blooms there is very important. The training dataset was built from water quality, meteorological, and hydrological data measured at Daecheong Reservoir from January 2011 to December 2019. The structure of the water quality prediction model is a multiple layer perceptron (MLP), an artificial neural network composed of an input layer, one or more hidden layers, and an output layer. In this study, the number of hidden layers (1 to 3), the number of hidden nodes in each layer (11 to 30), and five activation functions (linear, sigmoid, hyperbolic tangent, Rectified Linear Unit, Exponential Linear Unit) were treated as hyperparameters, and we searched for the conditions that maximize model performance. Keras Tuner, distributed with TensorFlow, was used as the hyperparameter optimization tool. The model was designed to compute optimal weights over a total of 3,000 training epochs, and the results were written to storage at every iteration. The validity of the model performance was verified by computing R2, NSE, and RMSE between the predicted and observed data. As a result of the optimization, the most suitable hyperparameters were those of the 256th of 300 optimization trials: three hidden layers with 25, 22, and 14 hidden nodes, respectively, with the activation functions ELU, ReLU, hyperbolic tangent, and linear used in that order. When the model was trained and validated with the optimized hyperparameters, R2 was 0.68 for training and 0.61 for validation, NSE was 0.85 for training and 0.81 for validation, and RMSE was 0.82 for training and 0.92 for validation.
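
A hedged Keras Tuner sketch of the search space described above (1 to 3 hidden layers, 11 to 30 nodes per layer, five candidate activations, 300 trials) might look like the following; the input dimension, loss, and remaining tuner settings are assumptions.

```python
# Hedged sketch: Keras Tuner random search over the MLP hyperparameters
# described in the abstract. Data loading and preprocessing are omitted.
import tensorflow as tf
import keras_tuner as kt

NUM_INPUTS = 10   # assumed number of water-quality/meteorological/hydrological features

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=(NUM_INPUTS,)))
    for i in range(hp.Int("num_layers", 1, 3)):
        model.add(tf.keras.layers.Dense(
            units=hp.Int(f"units_{i}", 11, 30),
            activation=hp.Choice(f"act_{i}",
                                 ["linear", "sigmoid", "tanh", "relu", "elu"])))
    model.add(tf.keras.layers.Dense(1))   # predicted bloom indicator (assumed single target)
    model.compile(optimizer="adam", loss="mse")
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=300,
                        directory="tuning", project_name="daecheong_mlp")
# tuner.search(x_train, y_train, epochs=3000, validation_data=(x_val, y_val))
```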

Analysis of methods for the model extraction without training data (학습 데이터가 없는 모델 탈취 방법에 대한 분석)

  • Hyun Kwon;Yonggi Kim;Jun Lee
    • Convergence Security Journal / v.23 no.5 / pp.57-64 / 2023
  • In this study, we analyzed how to steal a target model without training data. Input data are generated using a generative model, and a similar model is created by defining a loss function so that the predicted values of the target model and the similar model are close to each other. In this process, the similar model is trained by gradient descent, using the logit value of each class that the target model produces for the input data, so that it becomes similar to the target model. The TensorFlow machine learning library was used as the experimental environment, and CIFAR10 and SVHN were used as datasets. A similar model was created using the ResNet model as the target model. The experimental results show that the model stealing method generated a similar model with an accuracy of 86.18% for CIFAR10 and 96.02% for SVHN, producing predicted values similar to those of the target model. In addition, considerations regarding the model stealing method, its military applications, and its limitations were also analyzed.
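
The sketch below is a hedged, much-simplified illustration of the data-free extraction loop: a generator synthesizes query inputs and a substitute ("similar") model is pushed toward the target's per-class logits; the architectures, the fixed generator, and all hyperparameters are assumptions.

```python
# Hedged sketch: data-free model extraction. A generator produces synthetic
# inputs; the substitute model is trained so its logits match the target's.
import tensorflow as tf

LATENT_DIM, IMG_SHAPE, NUM_CLASSES = 100, (32, 32, 3), 10

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(8 * 8 * 64, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Reshape((8, 8, 64)),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="tanh"),
])

substitute = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES),            # raw per-class logits
])
opt = tf.keras.optimizers.Adam(1e-3)

def extraction_step(target_model, batch_size=64):
    """One update pushing the substitute's logits toward the target's logits.
    target_model is assumed to be a callable (e.g. Keras) model returning logits;
    in practice the generator would also be updated, which is omitted here."""
    z = tf.random.normal((batch_size, LATENT_DIM))
    x = generator(z, training=False)               # synthetic query inputs
    target_logits = target_model(x, training=False)
    with tf.GradientTape() as tape:
        pred_logits = substitute(x, training=True)
        loss = tf.reduce_mean(tf.square(target_logits - pred_logits))
    grads = tape.gradient(loss, substitute.trainable_variables)
    opt.apply_gradients(zip(grads, substitute.trainable_variables))
    return loss
```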

A Research about Open Source Distributed Computing System for Realtime CFD Modeling (SU2 with OpenCL and MPI) (실시간 CFD 모델링을 위한 오픈소스 분산 컴퓨팅 기술 연구)

  • Lee, Jun-Yeob;Oh, Jong-woo;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.171-171 / 2017
  • Research on precise control of the interior environment of a smart farm using computational fluid dynamics (CFD) is in progress. To overcome the difficulty of dynamic analysis of time-series data, we considered using an artificial neural network, a type of nonlinear modeling technique. A previous study confirmed that using TensorFlow for nonlinear modeling of environmental data delivers far superior performance thanks to hardware acceleration. Nevertheless, it was judged that alternatives were needed both to the artificial neural network modeling approach, which is limited to offline batch processing, and to high-performance hardware computing devices that cannot be deployed in the field. SU2 (http://su2.stanford.edu) was used as the solver for the CFD analysis. The operating systems and compilers were: 1) Mac OS X Sierra 10.12.2, Apple LLVM version 8.0.0 (clang-800.0.38); 2) Windows 10 x64: Intel C++ Compiler version 16.0, update 2; 3) Linux (Ubuntu 16.04 x64): g++ 5.4.0; 4) clustered Linux (Ubuntu 16.04 x32): MPICC 3.3.a2. In the fourth development environment, the parallel system, hardware acceleration used the OpenCL (https://www.khronos.org/opencl/) engine, and 32 ODROID-XU4 (Hardkernel, AnYang, Korea) single-board computers (SBCs), each equipped with a Samsung Exynos5422 octa-core low-power ARM processor, were configured in parallel. The distributed computing environment consisted of NFS (Network File System) over a Gbit local network and MPICH (http://www.mpich.org/). A 3-D Kriging spatial interpolation method was experimentally applied to define the unknown boundary information that arises when the spatial resolution is finer than the measurement interval. Meanwhile, in environments 1, 2, and 3, where a parallel system could not be configured, the OpenMP (http://www.openmp.org/) library was used to exploit the multiple cores already present. Environments 1, 2, and 3, running 8 cores in 64-bit parallel, were only about twice as fast as the environment running 128 cores in 32-bit parallel. It could be concluded that distributed computing performance for real-time CFD is determined by processor speed and by the operating system's ability to distribute information. To verify this, a fifth environment was built by upgrading the operating system of the fourth environment to 64-bit. In a contrasting result, the distributed computing environment running 72 cores in 64-bit was about 2.5 times faster than the single-processor multi-core environments (1, 2, and 3). Given that 64-bit operating systems for ARM processors are not yet mature, continued investigation is needed for successful real-time CFD modeling in the future.
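
As a small, hedged illustration of the MPI pattern such a cluster relies on (not the authors' SU2/OpenCL setup), the mpi4py script below splits a domain across ranks and gathers a result on rank 0; it requires an MPI runtime such as MPICH, and the script name is hypothetical.

```python
# Hedged illustration of rank-based domain decomposition with MPI (mpi4py).
# Run with, e.g.: mpiexec -n 4 python cfd_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1024                        # assumed number of grid cells
local_n = N // size             # slice of the domain owned by this rank
cells = np.arange(rank * local_n, (rank + 1) * local_n, dtype=float)

local_result = np.sin(cells).sum()   # stand-in for a local CFD computation
total = comm.reduce(local_result, op=MPI.SUM, root=0)

if rank == 0:
    print(f"gathered result from {size} ranks: {total:.4f}")
```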

Study of Selection of Regression Equation for Flow-conditions using Machine-learning Method: Focusing on Nakdonggang Waterbody (머신러닝 기법을 활용한 유황별 LOADEST 모형의 적정 회귀식 선정 연구: 낙동강 수계를 중심으로)

  • Kim, Jonggun;Park, Youn Shik;Lee, Seoro;Shin, Yongchul;Lim, Kyoung Jae;Kim, Ki-sung
    • Journal of The Korean Society of Agricultural Engineers / v.59 no.4 / pp.97-107 / 2017
  • This study determines the coefficients of the regression equations and selects the optimal regression equation in the LOADEST model after classifying the whole study period into five flow conditions for 16 watersheds located in the Nakdonggang waterbody. The optimized coefficients of the regression equations were derived using the gradient descent method in TensorFlow, a machine learning engine. In South Korea, the variability of streamflow is relatively high and rainfall is concentrated in summer, which can significantly affect the analysis of pollutant load characteristics. Thus, unlike previous applications of the LOADEST model (which adjusted the whole study period at once), the study period was classified into five flow conditions to estimate the optimized coefficients and regression equations in the LOADEST model. As shown in the results, equation #9, which has seven coefficients related to flow and seasonal characteristics, was selected for each flow condition in the study watersheds. When the simulated load (SS) was compared with the observed load, the simulation showed a pattern similar to the observations for the high flow condition because the flow parameters are directly related to precipitation. On the other hand, although the simulated load showed a pattern similar to the observations in several watersheds, most of the study watersheds showed large differences under the low flow conditions. This is because the pollutant load during low flow conditions can be significantly affected by baseflow or point-source pollutant loads. Thus, based on the results of this study, it can be concluded that to estimate continuous pollutant loads properly, the regression equations need to be determined with appropriate coefficients for the various flow conditions in a watershed. Furthermore, the machine learning method can be useful for estimating the coefficients of the regression equations in the LOADEST model.
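
A hedged TensorFlow sketch of this idea is shown below: seven regression coefficients are fitted by gradient descent for the commonly cited form of LOADEST equation #9, ln(L) = a0 + a1 lnQ + a2 lnQ^2 + a3 sin(2π dtime) + a4 cos(2π dtime) + a5 dtime + a6 dtime^2; the synthetic data, learning rate, and iteration count are placeholders.

```python
# Hedged sketch: fitting the 7 coefficients of a LOADEST-style regression by
# gradient descent in TensorFlow. Replace the synthetic arrays with observed
# (centered) lnQ, decimal time, and ln(load) records per flow condition.
import numpy as np
import tensorflow as tf

def design_matrix(ln_q, dtime):
    """Seven-term design matrix: 1, lnQ, lnQ^2, sin, cos, dtime, dtime^2."""
    two_pi_t = 2.0 * np.pi * dtime
    return np.stack([np.ones_like(ln_q), ln_q, ln_q**2,
                     np.sin(two_pi_t), np.cos(two_pi_t),
                     dtime, dtime**2], axis=1).astype("float32")

ln_q = np.random.normal(size=200)                    # placeholder data
dtime = np.random.uniform(-0.5, 0.5, size=200)
ln_load_obs = np.random.normal(size=200).astype("float32")

X = tf.constant(design_matrix(ln_q, dtime))
y = tf.constant(ln_load_obs)
coeffs = tf.Variable(tf.zeros(7))                    # a0 ... a6
opt = tf.keras.optimizers.SGD(learning_rate=0.01)    # plain gradient descent

for _ in range(2000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.linalg.matvec(X, coeffs) - y))
    grads = tape.gradient(loss, [coeffs])
    opt.apply_gradients(zip(grads, [coeffs]))
print(coeffs.numpy())
```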

Development of Vehicle Queue Length Estimation Model Using Deep Learning (딥러닝을 활용한 차량대기길이 추정모형 개발)

  • Lee, Yong-Ju;Hwang, Jae-Seong;Kim, Soo-Hee;Lee, Choul-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.2 / pp.39-57 / 2018
  • The purpose of this study was to construct an artificial intelligence model that learns and estimates the relationship between vehicle queue length and link travel time in urban areas. The vehicle queue length estimation is composed of three models: one classifies whether the vehicle queue overflows the link, and the others estimate the vehicle queue length in the link overflow and non-overflow situations, respectively. The deep learning models are implemented in TensorFlow. All models are based on a DNN structure, and the network structure that shows the minimum error after training and testing is selected by varying the number of hidden layers and nodes. The accuracy of the vehicle queue link overflow classification model was 98%, and the errors of the vehicle queue estimation model in the non-overflow and overflow situations were less than 15% and less than 5%, respectively. The average error per link was about 12%. Compared with the detector-data-based method, the error was reduced by about 39%.
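
A hedged Keras sketch of the two-stage structure (an overflow classifier plus queue-length regressors) is given below; the feature set and layer sizes are assumptions, not the tuned architecture selected in the paper.

```python
# Hedged sketch: DNN overflow classifier and DNN queue-length regressors.
# The abstract describes three models: one classifier and two regressors
# for the overflow and non-overflow cases.
import tensorflow as tf

NUM_FEATURES = 8   # assumed: link travel time and related traffic features

def build_dnn(output_activation, loss, metrics):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation=output_activation),
    ])
    model.compile(optimizer="adam", loss=loss, metrics=metrics)
    return model

overflow_classifier = build_dnn("sigmoid", "binary_crossentropy",
                                [tf.keras.metrics.BinaryAccuracy()])
queue_regressor_overflow = build_dnn(None, "mse", ["mae"])
queue_regressor_normal = build_dnn(None, "mse", ["mae"])
```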

Development of Artificial Intelligence Joint Model for Hybrid Finite Element Analysis (하이브리드 유한요소해석을 위한 인공지능 조인트 모델 개발)

  • Jang, Kyung Suk;Lim, Hyoung Jun;Hwang, Ji Hye;Shin, Jaeyoon;Yun, Gun Jin
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.773-782 / 2020
  • The development of joint FE models for deep learning neural network (DLNN)-based hybrid FEA is presented. Material models of the bolts and bearings in the front axle of a tractor, which show complex behavior induced by various tightening conditions, were replaced with DLNN models. Bolts are modeled as one-dimensional Timoshenko beam elements with six degrees of freedom, and bearings as three-dimensional solid elements. Stress-strain data were extracted from all elements after finite element analyses under various load conditions, and DLNNs for the bolts and bearings were trained with TensorFlow. The DLNN-based joint models were implemented in ABAQUS user subroutines, where the stresses for the next increment are updated and the algorithmic tangent stiffness matrix is calculated. Generalization of the trained DLNN in the FE model was verified by subjecting it to a new loading condition. Finally, a DLNN-based FEA of the tractor front axle was conducted, and its feasibility was verified by comparing the results with those of a static structural experiment on the actual tractor.
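
To make the workflow concrete, here is a hedged Keras sketch of a DLNN surrogate material model mapping a strain state to a stress state, of the kind one might train before exporting it to an ABAQUS user subroutine; the 6-component Voigt layout, layer sizes, and activations are assumptions.

```python
# Hedged sketch: a surrogate material model (strain in, stress out) trained in
# Keras; the trained weights would later be evaluated inside an ABAQUS user
# subroutine (e.g. a UMAT-style routine), where the network Jacobian can
# supply the algorithmic tangent stiffness.
import tensorflow as tf

N_STRAIN, N_STRESS = 6, 6   # assumed Voigt-notation strain/stress components

surrogate = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh", input_shape=(N_STRAIN,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(N_STRESS),   # predicted stress components
])
surrogate.compile(optimizer="adam", loss="mse")

# surrogate.fit(strain_samples, stress_samples, epochs=200, batch_size=64)
```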