• Title/Summary/Keyword: deep-learning


Performance Analysis of Deep Learning Based Transmit Power Control Using SINR Information Feedback in NOMA Systems (NOMA 시스템에서 SINR 정보 피드백을 이용한 딥러닝 기반 송신 전력 제어의 성능 분석)

  • Kim, Donghyeon;Lee, In-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.685-690 / 2021
  • In this paper, we propose a deep learning-based transmit power control scheme to maximize the sum-rate while satisfying a minimum data-rate constraint in downlink non-orthogonal multiple access (NOMA) systems. In downlink NOMA, we consider co-channel interference from base stations other than the one serving the user, and each user feeds back signal-to-interference-plus-noise ratio (SINR) information instead of channel state information to reduce the system's feedback overhead. The base station therefore controls transmit power using only the SINR information. Using implicit SINR information has the advantage of decreasing the input dimension, but the disadvantage of reducing the data-rate. In this paper, we resolve this problem with deep learning-based training methods and show that training performance can be improved if the dimension of the deep learning inputs is effectively reduced. Through simulation, we verify that the proposed deep learning-based power control scheme improves the sum-rate while satisfying the minimum data-rate.
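As a rough illustration of the approach (not the paper's actual model), the sketch below trains a small network to map fed-back SINR values to transmit power fractions, penalizing violations of a minimum rate. The user count, channel model, and penalty weight are all assumptions made for illustration.

```python
# Illustrative sketch only: a tiny network that maps fed-back SINR values
# to NOMA transmit power fractions. Dimensions, channel model, and the
# penalty weight are assumptions, not the paper's actual setup.
import torch
import torch.nn as nn

N_USERS = 2          # users per cell (assumed)
R_MIN = 1.0          # minimum data-rate in bps/Hz (assumed)
LAMBDA = 10.0        # penalty weight for violating R_MIN (assumed)

policy = nn.Sequential(                 # input: per-user SINR feedback
    nn.Linear(N_USERS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_USERS), nn.Softmax(dim=-1),  # power fractions sum to 1
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rates(p, sinr):
    # Very rough NOMA-style rates: each user's effective SINR scales with
    # its power fraction; a real model would include SIC decoding order.
    return torch.log2(1.0 + p * sinr)

for step in range(1000):
    sinr = 10 ** (torch.rand(256, N_USERS) * 2)        # random SINRs, 1..100
    p = policy(sinr)
    r = rates(p, sinr)
    sum_rate = r.sum(dim=1).mean()
    shortfall = torch.relu(R_MIN - r).sum(dim=1).mean()  # min-rate penalty
    loss = -sum_rate + LAMBDA * shortfall                # maximize sum-rate
    opt.zero_grad(); loss.backward(); opt.step()
```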

Framework for Efficient Web Page Prediction using Deep Learning

  • Kim, Kyung-Chang
    • Journal of the Korea Society of Computer and Information / v.25 no.12 / pp.165-172 / 2020
  • Recently, due to the exponential growth of access information on the web, predicting a user's next web page has become increasingly important. Deep learning is one method that can be used for this prediction. To predict the next web page, web logs are first preprocessed and analyzed, and the user's next page is then predicted from the analyzed logs using a deep learning algorithm. In this paper, we propose a framework for web page prediction that combines web log preprocessing with deep learning techniques for prediction. To speed up the preprocessing of large web logs, a Hadoop-based MapReduce programming model is used. In addition, we present a web prediction system that applies an efficient deep learning technique to the output of the web log preprocessing for training and prediction. Through experiments, we show that our proposed method improves performance over traditional methods, and we report its prediction accuracy.
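A minimal sketch of the prediction stage, assuming the MapReduce preprocessing has already produced fixed-length sequences of page IDs; the vocabulary size, window length, and network sizes are illustrative assumptions.

```python
# Illustrative sketch only: the prediction stage after log preprocessing.
# Assumes preprocessing (e.g., a MapReduce step) has already turned raw
# logs into fixed-length sequences of page IDs.
import torch
import torch.nn as nn

N_PAGES = 5000       # number of distinct pages (assumed)
SEQ_LEN = 10         # pages of history per session window (assumed)

class NextPageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_PAGES, 64)
        self.rnn = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, N_PAGES)   # scores for every page

    def forward(self, page_ids):              # (batch, SEQ_LEN)
        h, _ = self.rnn(self.embed(page_ids))
        return self.head(h[:, -1])            # predict from last state

model = NextPageModel()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random data, just to show the shapes.
x = torch.randint(0, N_PAGES, (32, SEQ_LEN))  # session histories
y = torch.randint(0, N_PAGES, (32,))          # actual next pages
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```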

Predicting a Queue Length Using a Deep Learning Model at Signalized Intersections (딥러닝 모형을 이용한 신호교차로 대기행렬길이 예측)

  • Na, Da-Hyuk;Lee, Sang-Soo;Cho, Keun-Min;Kim, Ho-Yeon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.6 / pp.26-36 / 2021
  • In this study, a deep learning model for predicting queue length was developed using information collected from image detectors. A multiple regression model, a statistical technique, was also derived, and the two were compared using two indices: mean absolute error (MAE) and root mean square error (RMSE). The multiple regression analysis found time, day of the week, occupancy, and bus traffic to be statistically significant variables, with occupancy having the strongest impact on queue length. For the optimal deep learning model, four hidden layers and a lookback of 6 were determined, giving an MAE of 6.34 and an RMSE of 8.99. In the evaluation of the two models, the MAE of the multiple regression model and the deep learning model were 13.65 and 6.44, respectively, and the RMSE were 19.10 and 9.11, respectively. The deep learning model reduced the MAE by 52.8% and the RMSE by 52.3% compared to the multiple regression model.
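A minimal sketch of such a model, mirroring the abstract's four hidden layers and lookback of 6; the feature layout and layer widths are assumptions.

```python
# Illustrative sketch only: an LSTM regressor with a lookback window of 6.
# The feature layout and hidden sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

LOOKBACK = 6      # time steps of history, as in the abstract
N_FEATS = 4       # e.g., time, day of week, occupancy, bus traffic (assumed)

class QueueLengthModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATS, 32, num_layers=4, batch_first=True)
        self.head = nn.Linear(32, 1)          # predicted queue length

    def forward(self, x):                     # (batch, LOOKBACK, N_FEATS)
        h, _ = self.lstm(x)
        return self.head(h[:, -1]).squeeze(-1)

model = QueueLengthModel()

def mae(pred, true):
    return (pred - true).abs().mean()

def rmse(pred, true):
    return ((pred - true) ** 2).mean().sqrt()

x = torch.randn(8, LOOKBACK, N_FEATS)         # toy batch
y = torch.randn(8)
pred = model(x)
print(mae(pred, y).item(), rmse(pred, y).item())
```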

A Hybrid Method for Recognizing Existence of Power Lines in Infrared Images (적외선영상내 전력선 검출을 위한 하이브리드 방법)

  • Kim, Jonghee;Jung, Chanho
    • Journal of IKEEE / v.26 no.4 / pp.742-745 / 2022
  • In this paper, we propose a hybrid image processing and deep learning-based method for detecting the presence of power lines in infrared images. Deep learning-based methods can learn feature vectors from large amounts of data without much manual effort, resulting in outstanding performance in various fields. However, it is difficult to incorporate human intuition into deep learning-based methods, whereas image processing techniques can encode it directly. Based on these observations, we propose a method that exploits both advantages to detect the existence of power lines in infrared images. To this end, five image processing techniques were applied and compared to find the most effective one for detecting the presence of power lines. As a result, the proposed method achieves an accuracy of 99.48%, higher than that of methods based on either image processing or deep learning alone.
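One hedged way to realize such a hybrid (the paper's five compared techniques are not reproduced here) is to stack a hand-crafted edge map with the raw infrared frame as CNN input. The filter choice and network below are assumptions, and OpenCV (cv2) is assumed available.

```python
# Illustrative sketch only: combining a hand-crafted line-emphasis step
# (human intuition) with a CNN classifier. The specific filter and network
# are assumptions, not the paper's method.
import cv2
import numpy as np
import torch
import torch.nn as nn

def line_emphasis(ir_image):
    """Edge map that highlights thin elongated structures like power lines."""
    edges = cv2.Canny(ir_image, 50, 150)      # intuition: power lines = edges
    return edges.astype(np.float32) / 255.0

class PowerLineNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)   # power line present / absent

    def forward(self, x):              # x: (batch, 2, H, W) = IR + edge map
        return self.head(self.features(x).flatten(1))

ir = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # toy IR frame
stacked = np.stack([ir.astype(np.float32) / 255.0, line_emphasis(ir)])
logits = PowerLineNet()(torch.from_numpy(stacked).unsqueeze(0))
```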

A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks (IoT 네트워크에서의 심층 강화학습 기반 저전력 버퍼 관리 기법에 관한 연구)

  • Song, Taewon
    • Journal of Internet of Things and Convergence / v.8 no.4 / pp.1-7 / 2022
  • As the number of IoT devices increases, power management of the cluster head, which acts as a gateway between the cluster and sink nodes in an IoT network, becomes crucial. Particularly when the cluster head is a mobile wireless terminal, the power consumption of the IoT network must be minimized over its lifetime. In addition, the information transmission delay is one of the primary metrics for rapid information collection in an IoT network. In this paper, we propose a low-power buffer management algorithm that takes the information transmission delay into account. By deciding whether to forward or skip received packets using deep Q-learning, a deep reinforcement learning method, the proposed algorithm reduces power consumption while keeping the transmission delay low. The proposed approach is shown to reduce power consumption and improve delay relative to an existing buffer management technique under the slotted ALOHA protocol.
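A minimal DQN sketch of the forward-or-skip decision described above; the state variables, toy dynamics, and reward weights are assumptions rather than the paper's model (a real study would simulate slotted ALOHA and use a replay buffer and target network).

```python
# Illustrative sketch only: a DQN agent that decides whether to forward or
# skip a packet at the cluster head. State, dynamics, and reward weights
# are assumptions, not the paper's model.
import random
import torch
import torch.nn as nn

STATE_DIM = 2     # e.g., buffer occupancy, head-of-line delay (assumed)
ACTIONS = 2       # 0 = skip packet, 1 = forward packet
GAMMA = 0.95

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTIONS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def step_env(state, action):
    """Toy dynamics: forwarding costs power but drains delay; skipping
    saves power but lets backlog and delay grow."""
    occupancy, delay = state.tolist()
    if action == 1:
        next_s = torch.tensor([max(occupancy - 0.1, 0.0),
                               max(delay - 0.2, 0.0)])
        power = 1.0
    else:
        next_s = torch.tensor([min(occupancy + 0.1, 1.0), delay + 0.1])
        power = 0.1
    reward = -(0.5 * power + 0.5 * next_s[1].item())  # penalize power + delay
    return next_s, reward

state = torch.rand(STATE_DIM)
for t in range(1000):
    if random.random() < 0.1:                 # epsilon-greedy exploration
        action = random.randrange(ACTIONS)
    else:
        action = q_net(state).argmax().item()
    next_state, reward = step_env(state, action)
    # One-step TD target (no replay buffer / target net, for brevity).
    target = reward + GAMMA * q_net(next_state).max().detach()
    loss = (q_net(state)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```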

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin;Liu, Wenjian;Liu, Xiaoliang;Li, Xin;Wang, Han
    • Advances in nano research / v.12 no.2 / pp.185-195 / 2022
  • Deep learning is a field of artificial intelligence (AI) used for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves numerous repetitive tasks, takes time, and is constrained by geography, and interpreting image information is highly subjective, which raises the rate of misdiagnosis. Given the high mortality rate of lung cancer, a biopsy is needed to determine its class for further treatment. Deep learning has recently provided powerful tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training data sets. A Convolutional Neural Network (CNN) was proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within 2 months prior to surgery or biopsy were selected. The developed CNN showed accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. These results suggest that a CNN (or deep learning) can make good use of the CT image data set, and that the classifier achieves better accuracy in distinguishing pathological CT images than several other deep learning models, such as ResNet-34, AlexNet, and DenseNet, with or without softmax weights.
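A minimal sketch of a CNN for the binary T1-T2 vs. T3-T4 staging task; the input size and architecture are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch only: a small CNN for binary T1-T2 vs. T3-T4 staging
# from CT slices. Input size and architecture are assumptions; the study's
# actual network and FDG-PET/CT pipeline are not described in the abstract.
import torch
import torch.nn as nn

class StagingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # T1-T2 vs. T3-T4

    def forward(self, ct_slice):             # (batch, 1, 224, 224)
        return self.classifier(self.features(ct_slice).flatten(1))

model = StagingCNN()
logits = model(torch.randn(4, 1, 224, 224))  # toy batch of CT slices
probs = logits.softmax(dim=-1)               # per-class probabilities
```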

A Study on Peak Load Prediction Using TCN Deep Learning Model (TCN 딥러닝 모델을 이용한 최대전력 예측에 관한 연구)

  • Lee, Jung Il
    • KIPS Transactions on Software and Data Engineering / v.12 no.6 / pp.251-258 / 2023
  • Accurate peak load prediction is necessary to supply electric power and operate the power system stably. It is especially important in winter and summer, when the peak load is higher than in other seasons. If the peak load is predicted to be higher than the actual peak load, the start-up costs of power plants increase, causing economic loss to the company. On the other hand, if the peak load is predicted to be lower than the actual peak load, a blackout may occur due to a lack of power plants capable of generating electricity. Both economic losses and blackouts can be prevented by minimizing the prediction error of the peak load. In this paper, a recent deep learning model, the temporal convolutional network (TCN), is used to minimize the prediction error of the peak load. Even with the same deep learning model, performance differs depending on the hyper-parameters, so I propose methods for optimizing the hyper-parameters of the TCN for peak load prediction. The model was trained on data from 2006 to 2021, and the prediction error was tested using data from 2022. The deep learning model optimized by the methods proposed in this study was confirmed to outperform other deep learning models.
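A minimal sketch of the core TCN ingredient, dilated causal 1-D convolutions with residual connections; the channel counts, depth, and input length are assumptions, not the paper's tuned hyper-parameters.

```python
# Illustrative sketch only: a stack of dilated causal 1-D convolutions with
# residual connections, the core of a TCN. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = (3 - 1) * dilation            # left-pad to stay causal
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                        # x: (batch, channels, time)
        y = self.conv(F.pad(x, (self.pad, 0)))   # pad only on the left
        return self.relu(y) + x                  # residual connection

class PeakLoadTCN(nn.Module):
    def __init__(self, channels=32, levels=4):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.blocks = nn.Sequential(
            *[CausalBlock(channels, 2 ** i) for i in range(levels)])
        self.head = nn.Linear(channels, 1)       # next peak load

    def forward(self, load_history):             # (batch, 1, time)
        h = self.blocks(self.inp(load_history))
        return self.head(h[:, :, -1]).squeeze(-1)

pred = PeakLoadTCN()(torch.randn(8, 1, 168))     # e.g., 168 hourly points
```

Doubling the dilation at each level grows the receptive field exponentially with depth, which is what lets a TCN cover long load histories with few layers.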

Visual Explanation of a Deep Learning Solar Flare Forecast Model and Its Relationship to Physical Parameters

  • Yi, Kangwoo;Moon, Yong-Jae;Lim, Daye;Park, Eunsu;Lee, Harim
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.42.1-42.1 / 2021
  • In this study, we present a visual explanation of a deep learning solar flare forecast model and its relationship to the physical parameters of solar active regions (ARs). For this, we use full-disk magnetograms at 00:00 UT from the Solar and Heliospheric Observatory/Michelson Doppler Imager and the Solar Dynamics Observatory/Helioseismic and Magnetic Imager, physical parameters from the Space-weather HMI Active Region Patch (SHARP), and Geostationary Operational Environmental Satellite X-ray flare data. Our deep learning flare forecast model, based on a Convolutional Neural Network (CNN), predicts "Yes" or "No" for the daily occurrence of C-, M-, and X-class flares. We interpret the model using two CNN attribution methods (guided backpropagation and Gradient-weighted Class Activation Mapping [Grad-CAM]) that provide quantitative information for explaining the model. We find that our deep learning flare forecasting model is intimately related to AR physical properties that previous studies have also identified as having significant predictive ability. The major results of this study are as follows. First, we successfully apply our deep learning models to the forecast of daily solar flare occurrence with TSS = 0.65, without any preprocessing to extract features from the data. Second, using the attribution methods, we find that the polarity inversion line is an important feature for the deep learning flare forecasting model. Third, ARs with high Grad-CAM values produce more flares than those with low Grad-CAM values. Fourth, nine SHARP parameters, such as total unsigned vertical current, total unsigned current helicity, total unsigned flux, and total photospheric magnetic free energy density, are well correlated with Grad-CAM values.
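A minimal sketch of the Grad-CAM computation on a stand-in CNN classifier (not the paper's model): channel weights come from globally averaged gradients, and the heatmap is the ReLU of the weighted activation sum.

```python
# Illustrative sketch only: Grad-CAM for a toy "flare / no-flare" CNN.
# The network here is a stand-in, not the paper's model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),   # last conv layer (index 2)
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
)

acts, grads = {}, {}
last_conv = model[2]
last_conv.register_forward_hook(lambda m, i, o: acts.update(a=o))
last_conv.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

magnetogram = torch.randn(1, 1, 128, 128)   # toy full-disk magnetogram
score = model(magnetogram)[0, 1]            # logit of the "flare" class
score.backward()

# Grad-CAM: channel weights = global-average-pooled gradients,
# heatmap = ReLU of the weighted sum of activation maps.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # (1, 32, 1, 1)
cam = torch.relu((weights * acts["a"]).sum(dim=1))    # (1, 128, 128)
cam = cam / cam.max()                                 # normalize to [0, 1]
```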

Case Study of Building a Malicious Domain Detection Model Considering Human Habitual Characteristics: Focusing on LSTM-based Deep Learning Model (인간의 습관적 특성을 고려한 악성 도메인 탐지 모델 구축 사례: LSTM 기반 Deep Learning 모델 중심)

  • Jung, Ju Won
    • Convergence Security Journal / v.23 no.5 / pp.65-72 / 2023
  • This paper proposes a method for detecting malicious domains that takes human habitual characteristics into account, by building a deep learning model based on LSTM (Long Short-Term Memory). DGA (Domain Generation Algorithm) malicious domains exploit habitual human errors, resulting in severe security threats. The objective is to respond swiftly and accurately to changes in malicious domains and their typosquatting-based evasion techniques, in order to minimize security threats. The LSTM-based deep learning model automatically analyzes generated domains and categorizes them as malicious or benign based on malware-specific features. In an evaluation of the model's performance based on the ROC curve and AUC, it demonstrated a superior detection accuracy of 99.21%. The model can not only detect malicious domains in real time but also has potential applications across various cyber security domains. This paper proposes and explores a novel approach aimed at safeguarding users and fostering a secure cyber environment against cyber attacks.
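A minimal sketch of an LSTM domain classifier, assuming a character-level encoding (the abstract does not specify the input features); the vocabulary and layer sizes are illustrative.

```python
# Illustrative sketch only: a character-level LSTM that scores a domain
# name as DGA/typosquatting-like or benign. Encoding and sizes are assumed.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."
MAX_LEN = 63                                  # max DNS label length

def encode(domain):
    ids = [CHARS.index(c) + 1 for c in domain.lower() if c in CHARS]
    ids = ids[:MAX_LEN] + [0] * (MAX_LEN - len(ids))   # 0 = padding
    return torch.tensor(ids)

class DomainLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(len(CHARS) + 1, 32, padding_idx=0)
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, 1)          # logit: malicious vs. benign

    def forward(self, x):                     # (batch, MAX_LEN)
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1]).squeeze(-1)

model = DomainLSTM()
batch = torch.stack([encode("gooogle.com"), encode("example.org")])
scores = torch.sigmoid(model(batch))          # probability of "malicious"
```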

Fishing Boat Rolling Movement of Time Series Prediction based on Deep Network Model (심층 네트워크 모델에 기반한 어선 횡동요 시계열 예측)

  • Kim, Donggyun;Im, Nam-Kyun
    • Journal of Navigation and Port Research / v.47 no.6 / pp.376-385 / 2023
  • Fishing boat capsizing accidents account for more than half of all capsizing accidents. They can occur for a variety of reasons, including inexperienced operation, bad weather, and poor maintenance. Due to the size and influence of the industry, technological complexity, and regional diversity, fishing vessels are relatively under-researched compared to commercial ships. This study aimed to predict the rolling motion time series of fishing boats using image-based deep learning models, which can achieve high performance by learning various patterns in a time series. Three image-based deep learning models were used: Xception, ResNet50, and CRNN. Xception and ResNet50 consist of 177 and 184 layers, respectively, while CRNN consists of a relatively shallow 22 layers. The experimental results showed that the Xception model recorded the lowest symmetric mean absolute percentage error (sMAPE) of 0.04291 and root mean squared error (RMSE) of 0.0198. ResNet50 and CRNN recorded RMSEs of 0.0217 and 0.022, respectively, confirming that the models with deeper layers achieved higher accuracy.
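One common way to feed a time series to image-based models such as those above is to render each window as a 2-D image; the sketch below uses a Gramian-angular-field-style encoding as an assumed stand-in for the paper's unspecified input representation, with a small CNN in place of Xception/ResNet50/CRNN.

```python
# Illustrative sketch only: encoding a roll-angle window as a 2-D image
# and regressing the next roll value with a small CNN. The encoding and
# network are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

def to_image(window):
    """Map a 1-D roll-angle window scaled to [-1, 1] onto a 2-D image
    (Gramian angular summation field)."""
    phi = torch.arccos(window.clamp(-1, 1))              # angular encoding
    return torch.cos(phi.unsqueeze(0) + phi.unsqueeze(1))  # (T, T) image

class RollPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)   # next roll angle

    def forward(self, img):            # (batch, 1, T, T)
        return self.head(self.features(img)).squeeze(-1)

window = torch.sin(torch.linspace(0, 6.28, 64))      # toy roll history
img = to_image(window).unsqueeze(0).unsqueeze(0)     # (1, 1, 64, 64)
next_roll = RollPredictor()(img)
```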