• Title/Summary/Keyword: Learning speed

Multiple Binarization Quadtree Framework for Optimizing Deep Learning-Based Smoke Synthesis Method

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.47-53
    • /
    • 2021
  • In this paper, we propose a quadtree-based optimization technique that enables fast super-resolution (SR) computation by efficiently classifying and dividing the physics-based simulation data required to compute SR. The proposed method reduces the time required for quadtree construction by downscaling the smoke simulation data used as input. By binarizing the smoke density in this process, a quadtree is built while mitigating the numerical loss of density caused by downscaling. The training data is the COCO 2017 dataset, and the artificial neural network uses a VGG19-based architecture. To prevent information loss when passing through the convolutional layers, the output of the previous layer is added back and learned, similar to a residual connection. For smoke, the proposed method achieved a speed improvement of about 15 to 18 times over the previous approach.
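
The binarize-then-subdivide idea can be illustrated with a short sketch. The following is a minimal example, not the paper's implementation: it average-pools a random 2D density slice, binarizes it with an illustrative threshold of 0.5, and builds a quadtree that only subdivides blocks whose binary values are mixed. Grid size, threshold, and function names are assumptions.

```python
# Minimal sketch (not the paper's code): quadtree over a binarized, downscaled smoke field.
import numpy as np

def downscale(density, factor=2):
    """Average-pool the density field by an integer factor (assumed divisible)."""
    h, w = density.shape
    return density.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def build_quadtree(mask, x=0, y=0, size=None, min_size=4):
    """Recursively split only regions whose binary mask contains both 0s and 1s."""
    if size is None:
        size = mask.shape[0]
    block = mask[y:y + size, x:x + size]
    if size <= min_size or block.min() == block.max():
        return {"x": x, "y": y, "size": size, "value": int(block.max())}
    half = size // 2
    return {"x": x, "y": y, "size": size, "children": [
        build_quadtree(mask, x, y, half, min_size),
        build_quadtree(mask, x + half, y, half, min_size),
        build_quadtree(mask, x, y + half, half, min_size),
        build_quadtree(mask, x + half, y + half, half, min_size),
    ]}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    density = rng.random((128, 128))            # stand-in for a smoke density slice
    coarse = downscale(density, factor=2)       # reduce quadtree construction cost
    mask = (coarse > 0.5).astype(np.uint8)      # binarize density (threshold is illustrative)
    tree = build_quadtree(mask)
    print("root node size:", tree["size"])
```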

Lightweight Single Image Super-Resolution Convolution Neural Network in Portable Device

  • Wang, Jin;Wu, Yiming;He, Shiming;Sharma, Pradip Kumar;Yu, Xiaofeng;Alfarraj, Osama;Tolba, Amr
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.4065-4083
    • /
    • 2021
  • Super-resolution can improve the clarity of low-resolution (LR) images, which can increase the accuracy of high-level computer vision tasks. Portable devices have limited computing power and storage, so large-scale neural-network super-resolution methods are not suitable for them. To save computational cost and parameters, lightweight image-processing methods can improve the processing speed of portable devices. We therefore propose the Enhanced Information Multiple Distillation Network (EIMDN) to achieve lower delay and cost. The EIMDN takes a feedback mechanism as its framework and obtains low-level features through high-level features. Further, we replace the feature-extraction convolution in the Information Multiple Distillation Block (IMDB) with a Ghost module, yielding the Enhanced Information Multiple Distillation Block (EIMDB), which reduces the amount of computation and the number of parameters. Finally, coordinate attention (CA) is used at the end of the IMDB and EIMDB to enhance the extraction of important information across spatial and channel dimensions. Experimental results show that the proposed method converges faster with fewer parameters and less computation than other lightweight super-resolution methods. With higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), the reconstruction of image texture and object contours is significantly improved.
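
As a rough illustration of the Ghost-style substitution described above (not the authors' EIMDB code), the sketch below generates part of the output channels with a regular convolution and the rest with cheap depthwise convolutions; the ratio and layer sizes are assumptions.

```python
# Minimal sketch of a Ghost-style convolution block (illustrative, not the paper's module).
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Produce half the output channels with a regular conv and the rest with
    cheap depthwise convs, then concatenate (ratio=2 is illustrative)."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size=3, padding=1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, kernel_size=3, padding=1,
                      groups=primary_ch, bias=False),  # depthwise "cheap" operation
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

if __name__ == "__main__":
    x = torch.randn(1, 32, 48, 48)     # a low-resolution feature map
    block = GhostConv(32, 64)
    print(block(x).shape)              # torch.Size([1, 64, 48, 48])
```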

Introduction and Utilization of Time Series Data Integration Framework with Different Characteristics (서로 다른 특성의 시계열 데이터 통합 프레임워크 제안 및 활용)

  • Hwang, Jisoo;Moon, Jaewon
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.872-884
    • /
    • 2022
  • With the development of the IoT industry, different types of time series data are being generated across various industries, and research is evolving toward re-integrating and reusing them. In addition, because of data-processing speed and the constraints of utilization systems in real industrial settings, there is a growing tendency to compress time series data before integrating it. However, since guidelines for integrating time series data are not clear and characteristics such as the recording interval and time span differ between datasets, it is difficult to use the data after batch integration. In this paper, two integration methods are proposed, based on how the integration criteria are set and on the problems that arise while integrating time series data. Based on this, a heterogeneous time series data integration framework that considers the characteristics of time series data was constructed, and it was confirmed that compressed heterogeneous time series data can be integrated and used for various machine learning tasks.
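
A minimal sketch of the alignment problem such a framework addresses, assuming pandas and two toy series sampled at 10-second and 1-minute intervals; the resampling rule shown here is only one possible integration criterion, not the paper's.

```python
# Minimal sketch: align two time series recorded at different intervals before integration.
import numpy as np
import pandas as pd

# Two heterogeneous series: 10-second sensor readings and 1-minute readings.
idx_a = pd.date_range("2022-01-01 00:00", periods=360, freq="10s")
idx_b = pd.date_range("2022-01-01 00:00", periods=60, freq="1min")
a = pd.Series(np.random.rand(len(idx_a)), index=idx_a, name="sensor_a")
b = pd.Series(np.random.rand(len(idx_b)), index=idx_b, name="sensor_b")

# Integration criterion (illustrative): resample both onto a 1-minute grid,
# compressing the finer series by averaging and forward-filling the coarser one.
merged = pd.concat(
    [a.resample("1min").mean(), b.resample("1min").ffill()], axis=1
).dropna()
print(merged.head())
```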

Energy Management and Performance Evaluation of Fuel Cell Battery Based Electric Vehicle

  • Khadhraoui, Ahmed;Selmi, Tarek;Cherif, Adnene
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.3
    • /
    • pp.37-44
    • /
    • 2022
  • Plug-in hybrid electric vehicles (PHEVs) show great potential to reduce gas emissions, improve fuel efficiency, and offer more driving-range flexibility. Moreover, PHEVs help to preserve the ecosystem, mitigate climate change, and reduce the high demand for fossil fuels. To this end, some basic components and energy resources have been used, such as batteries and proton exchange membrane (PEM) fuel cells (FCs). However, the FC remains unsatisfactory in terms of power density and response. In light of the above, an electric storage system (ESS) seems to be a promising solution to this issue, especially during the transient phase. In addition to the FC, a storage system made up of an ultra-battery (UB) is proposed within this paper. The association of the FC and the UB leads to the so-called Fuel Cell Battery Electric Vehicle (FCBEV). The energy consumption model of an FCBEV has been built considering the power losses of the fuel cell, the electric motor, the state of charge (SOC) of the battery, and the brakes. To do so, a reinforcement-learning energy management strategy (EMS) has been implemented, and the fuel cell efficiency has been optimized while minimizing the hydrogen fuel consumption per 100 km. The adopted approach has shown promising results over numerous driving cycles of the FCBEV.
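
The reinforcement-learning EMS idea can be sketched as a toy tabular Q-learning loop that picks a fuel-cell power level from a discretized battery SOC, trading off a hydrogen-consumption proxy against SOC deviation. All dynamics, constants, and the action set are illustrative assumptions, not the paper's model.

```python
# Minimal toy sketch (not the paper's EMS): tabular Q-learning over SOC bins.
import numpy as np

rng = np.random.default_rng(1)
n_soc_bins, actions = 10, np.array([0.0, 10.0, 20.0, 30.0])  # FC power levels (kW), illustrative
Q = np.zeros((n_soc_bins, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(soc, fc_power, demand):
    """Battery covers whatever the fuel cell does not; SOC drifts accordingly."""
    batt_power = demand - fc_power
    new_soc = np.clip(soc - 0.002 * batt_power, 0.0, 1.0)
    h2_cost = 0.05 * fc_power                       # proxy for hydrogen consumption
    reward = -h2_cost - 5.0 * abs(new_soc - 0.6)    # keep SOC near 60%
    return new_soc, reward

soc = 0.6
for t in range(5000):                               # random stand-in for a driving cycle
    demand = rng.uniform(0.0, 40.0)
    s = min(int(soc * n_soc_bins), n_soc_bins - 1)
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    soc, r = step(soc, actions[a], demand)
    s2 = min(int(soc * n_soc_bins), n_soc_bins - 1)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print("greedy FC power per SOC bin:", actions[Q.argmax(axis=1)])
```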

Design and Implementation of High-Performance Cryptanalysis System Based on GPUDirect RDMA (GPUDirect RDMA 기반의 고성능 암호 분석 시스템 설계 및 구현)

  • Lee, Seokmin;Shin, Youngjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.6
    • /
    • pp.1127-1137
    • /
    • 2022
  • Cryptographic analysis and decryption technology that exploits the parallelism of GPUs has been studied with the aim of shortening the computation time of cryptanalysis systems. These studies focus on optimizing code to speed up cryptanalysis operations on a single GPU, or on simply increasing the number of GPUs to enhance parallelism. However, using a large number of GPUs without optimizing data transmission causes longer transmission latency than using a single GPU and increases the overall computation time of the cryptanalysis system. In this paper, we investigate GPUDirect RDMA and related technologies used for high-performance data processing in deep learning and HPC research in GPU-cluster environments. In addition, we present a method for designing a high-performance cryptanalysis system using these technologies. Furthermore, based on the suggested system topology, we present a method for implementing a cryptanalysis system using password cracking and GPU reduction. Finally, performance evaluation results obtained by applying the high-performance technology to the implemented cryptanalysis system are presented, and the expected effects of the proposed system design are shown.
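
For intuition only, the sketch below shows the password-cracking workload being partitioned across workers so each one hashes its own chunk of candidates; it is CPU-only Python and does not demonstrate GPUDirect RDMA or the paper's system design.

```python
# Minimal CPU-only sketch: partition a small password keyspace across workers.
import hashlib
import itertools
import string

def crack_chunk(target_hash, candidates):
    """Hash each candidate in this worker's chunk and compare to the target."""
    for pw in candidates:
        if hashlib.sha256(pw.encode()).hexdigest() == target_hash:
            return pw
    return None

if __name__ == "__main__":
    target = hashlib.sha256(b"abd").hexdigest()          # toy 3-letter "password"
    keyspace = ["".join(t) for t in itertools.product(string.ascii_lowercase, repeat=3)]
    n_workers = 4                                        # stand-ins for GPUs/nodes
    chunk = len(keyspace) // n_workers + 1
    for w in range(n_workers):
        found = crack_chunk(target, keyspace[w * chunk:(w + 1) * chunk])
        if found:
            print(f"worker {w} recovered: {found}")
            break
```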

A Study on Design Method of Smart Device for Industrial Disaster Detection and Index Derivation for Performance Evaluation (산업재해 감지 스마트 디바이스 설계 방안 및 성능평가를 위한 지표 도출에 관한 연구)

  • Ran Hee Lee;Ki Tae Bae;Joon Hoi Choi
    • Smart Media Journal
    • /
    • v.12 no.3
    • /
    • pp.120-128
    • /
    • 2023
  • Various ICT technologies are continuously being developed to reduce damage from industrial accidents, and research using sensors, IoT, big data, machine learning, and artificial intelligence is being conducted to minimize damage when such accidents occur. In this paper, we propose a design method for a smart device capable of multilateral communication between devices and a smart repeater in communication-shadowed areas such as enclosed areas of industrial sites, mountains, oceans, and coal mines. The proposed device, designed to be attached to a helmet, collects worker information such as location and movement speed together with environmental information such as terrain, wind direction, temperature, and humidity, and maintains a safe distance between workers so that it can issue a warning in a dangerous situation. To this end, we propose functional requirements for the smart device and design methods for implementing each requirement using sensors and modules in the device. We also derive evaluation items for the performance evaluation of the smart device and propose an evaluation environment for performance testing in mountainous areas.
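
One of the device functions described above, flagging worker pairs closer than a safety threshold, can be sketched as follows; the local coordinate system, threshold, and worker IDs are illustrative assumptions, not the device specification.

```python
# Minimal sketch (an illustration, not the device firmware): safe-distance check.
import math

def distance_m(p, q):
    """Straight-line distance between two local (x, y) positions in meters."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def proximity_warnings(workers, safe_distance_m=10.0):
    """Return pairs of worker IDs closer than the safety threshold."""
    ids = list(workers)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if distance_m(workers[a], workers[b]) < safe_distance_m]

if __name__ == "__main__":
    positions = {"W1": (0.0, 0.0), "W2": (6.0, 4.0), "W3": (50.0, 20.0)}
    print(proximity_warnings(positions))   # [('W1', 'W2')]
```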

Impact Assessment of an Autonomous Demand Responsive Bus in a Microscopic Traffic Simulation (미시적 교통 시뮬레이션을 활용한 실시간 수요대응형 자율주행 버스 영향 평가)

  • Sang ung Park;Joo young Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.6
    • /
    • pp.70-86
    • /
    • 2022
  • An autonomous demand-responsive bus with mobility-on-demand service is an innovative mode of transport that compensates for the disadvantages of both the autonomous bus and the conventional demand-responsive bus. However, little attention has been paid to quantitative impact assessment of the autonomous demand-responsive bus because of its technological complexity. This study simulates autonomous demand-responsive bus trips by reinforcement learning in a microscopic traffic simulation to quantify their impact. The Chungju campus of the Korea National University of Transportation is selected as the testbed. Simulation results show that introducing the autonomous demand-responsive bus can reduce passenger wait time and average control delay and increase traffic speed compared with a fixed-route bus service. This study contributes to the quantitative evaluation of the autonomous demand-responsive bus.
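
As a toy illustration of the kind of comparison the simulation quantifies (not the paper's microscopic simulation or its reinforcement-learning dispatcher), the sketch below computes average passenger wait time under a fixed-headway service versus a simple first-come-first-served demand-responsive dispatch; arrival rates, travel times, and fleet size are arbitrary assumptions, so the numbers say nothing about the paper's results.

```python
# Toy wait-time comparison: fixed-headway bus vs. a single demand-responsive bus.
import numpy as np

rng = np.random.default_rng(3)
requests = np.sort(rng.uniform(0, 60, size=10))        # request times over 1 hour (min)

# Fixed-route service: buses pass every 15 minutes; riders wait for the next pass.
headway = 15.0
fixed_wait = ((requests // headway + 1) * headway - requests).mean()

# Demand-responsive service: the bus drives to the earliest waiting request as
# soon as it is free; travel time to a pickup is drawn uniformly from 2-6 minutes.
bus_free, drt_waits = 0.0, []
for t in requests:
    depart = max(bus_free, t)
    pickup = depart + rng.uniform(2, 6)
    drt_waits.append(pickup - t)
    bus_free = pickup

# Which mode wins depends entirely on demand density and fleet size.
print(f"fixed-route avg wait: {fixed_wait:.1f} min, demand-responsive: {np.mean(drt_waits):.1f} min")
```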

Prediction of pollution loads in agricultural reservoirs using LSTM algorithm: case study of reservoirs in Nonsan City

  • Heesung Lim;Hyunuk An;Gyeongsuk Choi;Jaenam Lee;Jongwon Do
    • Korean Journal of Agricultural Science
    • /
    • v.49 no.2
    • /
    • pp.193-202
    • /
    • 2022
  • The recurrent neural network (RNN) algorithm has been widely used in water-related research areas, such as water level and water quality prediction, because of its excellent time series learning capability. However, studies on water quality prediction using RNN algorithms are limited by the scarcity of water quality data, so most previous studies were based on monthly predictions. In this study, the water quality of a reservoir in Nonsan, Chungcheongnam-do, Republic of Korea was predicted using the RNN-LSTM algorithm after constructing daily data by linear interpolation. Instead of making daily predictions of water quality factors, we attempt to predict water quality on the 7th, 15th, 30th, 45th, and 60th days. For these predictions, linearly interpolated daily water quality data and daily weather data (rainfall, average temperature, and average wind speed) were used. The results of predicting water quality concentrations (chemical oxygen demand [COD], dissolved oxygen [DO], suspended solids [SS], total nitrogen [T-N], total phosphorus [T-P]) with the LSTM algorithm indicated that predictive accuracy was high for the 7th and 15th days. For the 30th-day predictions, the COD and DO items showed R² values exceeding 0.6 at all points, whereas the SS, T-N, and T-P items showed differences depending on the factor being assessed. For the 45th-day predictions, the accuracy of all water quality predictions except the DO item dropped sharply.
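
The data-preparation and prediction pipeline can be sketched as follows, under assumptions not taken from the paper: sparse COD samples are linearly interpolated to daily values with pandas, windowed together with toy weather features, and fed to a small PyTorch LSTM. Column names, window length, and the 7-day horizon are illustrative.

```python
# Minimal sketch: interpolate sparse water-quality samples to daily data, then fit an LSTM.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn

days = pd.date_range("2021-01-01", periods=365, freq="D")
df = pd.DataFrame({"COD": np.nan, "rain": np.random.rand(365),
                   "temp": np.random.rand(365)}, index=days)
df.loc[df.index[::28], "COD"] = np.random.rand(14) * 10    # samples every ~4 weeks
df["COD"] = df["COD"].interpolate(method="linear")         # daily values by interpolation

def make_windows(data, lookback=30, horizon=7):
    """Build (past 30 days of features) -> (COD 7 days ahead) training pairs."""
    X, y = [], []
    for i in range(len(data) - lookback - horizon):
        X.append(data[i:i + lookback])
        y.append(data[i + lookback + horizon - 1, 0])
    X = torch.tensor(np.array(X), dtype=torch.float32)
    y = torch.tensor(np.array(y), dtype=torch.float32)
    return X, y

X, y = make_windows(df[["COD", "rain", "temp"]].to_numpy())

class LSTMRegressor(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = LSTMRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                                     # short demo training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final MSE:", float(loss))
```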

Turbulent-image Restoration Based on a Compound Multibranch Feature Fusion Network

  • Banglian Xu;Yao Fang;Leihong Zhang;Dawei Zhang;Lulu Zheng
    • Current Optics and Photonics
    • /
    • v.7 no.3
    • /
    • pp.237-247
    • /
    • 2023
  • In middle- and long-distance imaging systems, atmospheric turbulence caused by temperature, wind speed, humidity, and so on distorts light waves propagating through the air, resulting in image-quality degradation such as geometric deformation and blurring. In remote sensing, astronomical observation, and traffic monitoring, the information lost to such degradation causes huge losses, so effective restoration of degraded images is very important. To restore images degraded by atmospheric turbulence, an image-restoration method based on an improved compound multibranch feature fusion network (CMFNetPro) was proposed. Based on the CMFNet network, an efficient channel-attention mechanism was used in place of the original channel-attention mechanism to improve image quality and network efficiency. In the experiments, two-dimensional random distortion vector fields were used to construct two turbulence datasets with different degrees of distortion, based on the Google Landmarks Dataset v2. The experimental results showed that, compared to the CMFNet, DeblurGAN-v2, and MIMO-UNet models, the proposed CMFNetPro network achieves better performance in both quality and training cost of turbulent-image restoration. In mixed training, CMFNetPro was 1.2391 dB (weak turbulence) and 0.8602 dB (strong turbulence) higher in peak signal-to-noise ratio, and 0.0015 (weak turbulence) and 0.0136 (strong turbulence) higher in structural similarity, than CMFNet, and its training was 14.4 hours faster. This provides a feasible scheme for deep-learning-based turbulent-image restoration.
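
A common form of "efficient channel attention" replaces the fully connected layers of standard channel attention with a 1-D convolution over the pooled channel descriptor; the sketch below shows that general pattern and is an assumption, not the authors' CMFNetPro module.

```python
# Minimal sketch of an efficient channel-attention block (illustrative only).
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                      # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                 # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))          # 1-D conv across channels -> (B, 1, C)
        w = torch.sigmoid(y).squeeze(1)        # per-channel weights in [0, 1]
        return x * w.unsqueeze(-1).unsqueeze(-1)

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)         # a restoration feature map
    print(EfficientChannelAttention()(feats).shape)   # torch.Size([2, 64, 32, 32])
```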

Multivariate Congestion Prediction using Stacked LSTM Autoencoder based Bidirectional LSTM Model

  • Vijayalakshmi, B;Thanga, Ramya S;Ramar, K
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.1
    • /
    • pp.216-238
    • /
    • 2023
  • In intelligent transportation systems, traffic management is an important task. Accurate forecasting of traffic characteristics such as flow, congestion, and density is still an active research area because of the non-linear nature and uncertainty of spatiotemporal data. Inclement weather, such as rain and snow, and special events such as holidays, accidents, and road closures have a significant impact on driving and on the average speed of vehicles on the road, which lowers traffic capacity and causes widespread congestion. This work designs a model for multivariate short-term traffic congestion prediction using SLSTM_AE-BiLSTM. The proposed design consists of a Bidirectional Long Short-Term Memory (BiLSTM) network to predict the traffic flow value and a Convolutional Neural Network (CNN) model to detect the congestion status, using spatially static, temporally dynamic data. The stacked Long Short-Term Memory Autoencoder (SLSTM AE) is used to encode the weather features into a reduced and more informative feature space. The BiLSTM model is used to capture features from past and present traffic data simultaneously and to identify long-term dependencies; it uses the traffic data and the encoded weather data to perform traffic flow prediction. The CNN model is used to predict the recurring congestion status based on the predicted traffic flow value in a particular urban traffic network. A publicly available Caltrans PeMS dataset with traffic parameters is used. The proposed model generates congestion predictions with an accuracy of 92.74%, which is slightly better than other deep learning models for congestion prediction.
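
A simplified stand-in for the architecture described above (assumptions throughout, not the authors' SLSTM_AE-BiLSTM): an LSTM encoder compresses the weather features, and a bidirectional LSTM predicts the traffic flow from the traffic history concatenated with that weather code; feature counts and window length are illustrative.

```python
# Minimal sketch: LSTM weather encoder + bidirectional LSTM flow predictor.
import torch
import torch.nn as nn

class WeatherEncoder(nn.Module):
    """Stacked-LSTM encoder; the final hidden state is the compressed weather code."""
    def __init__(self, n_weather=4, code=8):
        super().__init__()
        self.lstm = nn.LSTM(n_weather, code, num_layers=2, batch_first=True)
    def forward(self, w):                       # w: (B, T, n_weather)
        _, (h, _) = self.lstm(w)
        return h[-1]                            # (B, code)

class FlowBiLSTM(nn.Module):
    def __init__(self, n_traffic=3, code=8, hidden=32):
        super().__init__()
        self.bilstm = nn.LSTM(n_traffic + code, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # predicted flow value
    def forward(self, traffic, weather_code):   # traffic: (B, T, n_traffic)
        code = weather_code.unsqueeze(1).expand(-1, traffic.size(1), -1)
        out, _ = self.bilstm(torch.cat([traffic, code], dim=-1))
        return self.head(out[:, -1]).squeeze(-1)

if __name__ == "__main__":
    weather = torch.randn(16, 12, 4)            # 12 past steps of 4 weather features
    traffic = torch.randn(16, 12, 3)            # 12 past steps of 3 traffic features
    flow = FlowBiLSTM()(traffic, WeatherEncoder()(weather))
    print(flow.shape)                           # torch.Size([16])
```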