• Title/Summary/Keyword: 딥러닝 모델 (deep learning model)


Learning Achievement Prediction Model based on Deep Learning (딥러닝 기반의 학습 성취 예측 모델)

  • Lee, Myung-Suk; Pak, Ju-Geon; Lee, Joo-Hwa
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.245-247 / 2021
  • Owing to COVID-19, online lectures have increased recently, and research on learning analytics based on them is being actively conducted. In this paper, we design a model that collects the learning-activity data that can affect learning outcomes and predicts those outcomes. The prediction model uses machine learning: it is trained on learning-outcome data from previous semesters to identify the learning-activity data that influence outcomes, and the identified data are then used to predict future learners' outcomes. A deep-learning DNN is used as the prediction model (a minimal sketch of such a model follows below). In future work, the predicted results will be used to motivate learners and to guide instruction.

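The abstract above describes a DNN that maps learning-activity data to a predicted learning outcome. The following is a minimal sketch of that idea in Keras; the feature names (attendance rate, video watch hours, quiz submissions, forum posts), network size, and training settings are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a DNN regressor for learning-achievement prediction.
# Feature names and network size are illustrative assumptions, not from the paper.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical learning-activity features from a previous semester:
# [attendance_rate, video_watch_hours, quiz_submissions, forum_posts]
X = np.random.rand(500, 4).astype("float32")
y = np.random.rand(500, 1).astype("float32")     # normalized final score (placeholder)

model = keras.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                              # predicted achievement score
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
```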

A Review of Seismic Full Waveform Inversion Based on Deep Learning (딥러닝 기반 탄성파 전파형 역산 연구 개관)

  • Pyun, Sukjoon; Park, Yunhui
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.227-241 / 2022
  • Full waveform inversion (FWI) in seismic data processing is an inversion technique used to estimate the subsurface velocity model for oil and gas exploration. Recently, deep learning (DL) technology has been increasingly applied to seismic data processing, and its combination with FWI has attracted remarkable research effort. DL-based data processing techniques have been used to preprocess input data for FWI, and FWI itself can now be implemented directly with DL technology. DL-based FWI can be divided into the following approaches: purely data-based methods, physics-based neural networks, encoder-decoder methods, reparameterized FWI, and physics-informed neural networks. In this review, we describe the theory and characteristics of these methods, organizing them in the order of their development. Early DL-based FWI followed the basic principles of data science faithfully: a purely data-based prediction model was trained on a large data set to predict the velocity model directly (a minimal encoder-decoder sketch of this idea follows below). The current research trend is to compensate for the shortcomings of the purely data-based approach by building loss functions from the seismic data or from physical information in the wave equation itself within deep neural networks. Through these developments, DL-based FWI has evolved so that it no longer requires large amounts of training data, alleviates the cycle-skipping problem, an intrinsic limitation of FWI, and reduces computation times dramatically. The value of DL-based FWI is expected to keep increasing in seismic data processing.
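
As an illustration of the purely data-based / encoder-decoder approach described in the review, the sketch below maps a seismic shot gather to a velocity model with a small convolutional encoder-decoder. The network depth, grid sizes, and the synthetic placeholder data are assumptions for illustration only, not the architecture of any specific method surveyed.

```python
# Illustrative encoder-decoder that maps seismic shot gathers directly to a
# subsurface velocity model (the "pure data-based" idea). Shapes are assumptions.
import torch
import torch.nn as nn

class EncoderDecoderFWI(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # compress seismic data (receivers x time)
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # expand to the velocity-model grid
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, shot_gather):
        return self.decoder(self.encoder(shot_gather))

# Supervised training pairs: synthetic shot gathers -> known velocity models (placeholders)
gathers = torch.randn(8, 1, 64, 64)
velocities = torch.rand(8, 1, 64, 64) * 3.0 + 1.5   # km/s, placeholder models
model = EncoderDecoderFWI()
loss = nn.functional.mse_loss(model(gathers), velocities)
loss.backward()
```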

Development of Highway Traffic Information Prediction Models Using the Stacking Ensemble Technique Based on Cross-validation (스태킹 앙상블 기법을 활용한 고속도로 교통정보 예측모델 개발 및 교차검증에 따른 성능 비교)

  • Yoseph Lee; Seok Jin Oh; Yejin Kim; Sung-ho Park; Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.6 / pp.1-16 / 2023
  • Accurate traffic information prediction is considered one of the most important aspects of intelligent transport systems (ITS), as it can guide users of transportation facilities away from congested routes. Various deep learning models have been developed for accurate traffic prediction, and ensemble techniques have recently been used to combine the strengths and weaknesses of different models in various ways to improve prediction accuracy and stability. In this study, we therefore developed traffic information prediction models using several deep learning models and evaluated their performance when combined as a stacking ensemble (a minimal cross-validated stacking sketch follows below). The individual models showed error rates within 10% for traffic volume prediction and within 3% for speed prediction. Without cross-validation, the ensemble model showed higher accuracy than the other models; with cross-validation, it showed a uniform error rate in long-term forecasting.
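
The sketch below shows a cross-validated stacking ensemble in scikit-learn. The base learners (random forest, gradient boosting) and the placeholder features stand in for the paper's deep learning models, which are not specified here; it only illustrates how out-of-fold predictions feed a meta-learner.

```python
# Minimal sketch of a cross-validated stacking ensemble for traffic prediction.
# Base learners and features are illustrative stand-ins, not the paper's models.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 6)          # e.g., past traffic volume/speed features (placeholder)
y = np.random.rand(1000)             # future traffic volume (placeholder)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100)),
                ("gb", GradientBoostingRegressor())],
    final_estimator=Ridge(),
    cv=5,                            # out-of-fold predictions train the meta-learner
)
scores = cross_val_score(stack, X, y, cv=5, scoring="neg_mean_absolute_error")
print("mean absolute error:", -scores.mean())
```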

Cancellation Scheme of Impulsive Noise Based on Deep Learning in Power Line Communication System (딥러닝 기반 전력선 통신 시스템의 임펄시브 잡음 제거 기법)

  • Seo, Sung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.29-33 / 2022
  • In this paper, we propose a deep learning-based interference pre-cancellation scheme for power line communication (PLC) systems in the smart grid. The proposed scheme estimates the channel noise by applying a deep learning model at the transmitter, and the estimated channel noise is stored in a database. In the modulator, the channel noise that degrades PLC performance is effectively removed through an interference cancellation technique. The Middleton Class A interference model is employed as the impulsive noise model (a sampling sketch of this noise model follows below). Performance is evaluated in terms of bit error rate (BER). The simulation results confirm that the proposed scheme achieves better BER performance than the theoretical model based on additive white Gaussian noise. As a result, the proposed deep learning-based interference cancellation improves the signal quality of PLC systems by effectively removing the channel noise. The results can be applied to PLC for the smart grid and to general communication systems.
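
The Middleton Class A model mentioned above can be sampled as a Poisson-weighted Gaussian mixture. The sketch below generates such impulsive noise; the parameter values (impulsive index A, Gaussian-to-impulsive power ratio Γ) are illustrative assumptions, and the BER comparison itself is not shown.

```python
# Sketch of Middleton Class A impulsive-noise sampling, the noise model the paper
# uses for PLC channels. Parameter values are illustrative assumptions.
import numpy as np

def middleton_class_a(n, A=0.1, gamma=0.1, sigma2=1.0, seed=0):
    """A: impulsive index, gamma: Gaussian-to-impulsive power ratio."""
    rng = np.random.default_rng(seed)
    m = rng.poisson(A, size=n)                        # number of active impulses per sample
    var = sigma2 * (m / A + gamma) / (1.0 + gamma)    # conditional variance given m
    return rng.normal(0.0, np.sqrt(var))

noise = middleton_class_a(10_000)
# A BER curve would then compare detection with and without a learned noise estimate.
```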

Implementation of AWS-based deep learning platform using streaming server and performance comparison experiment (스트리밍 서버를 이용한 AWS 기반의 딥러닝 플랫폼 구현과 성능 비교 실험)

  • Yun, Pil-Sang; Kim, Do-Yun; Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.591-596 / 2019
  • In this paper, we implemented a deep learning pipeline that is less dependent on the performance of the local PC. In general, deep learning models involve a large amount of computation and are heavily influenced by the performance of the PC that runs them. To reduce this limitation, we implemented deep learning inference using AWS together with a streaming server. First, the deep learning computation is performed on AWS so that it still works even when the performance of the local PC is low; however, with AWS alone, the output lags the input and real-time performance suffers. Second, we use a streaming server to improve the real-time performance of the deep learning model; without it, real-time performance is poor because images must be processed one by one or in stacked batches. We used the YOLO v3 model for the performance comparison experiments and compared an AWS instance with a local PC equipped with a high-performance GTX1080 GPU (a simple per-image timing harness in this spirit follows below). The results show a test time per image of 0.023444 seconds on an AWS p3 instance, similar to the 0.027099 seconds per image on the local PC with the GTX1080.
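
A per-image inference timing harness in the spirit of the comparison above might look like the sketch below. The tiny placeholder network stands in for YOLO v3, and the 416x416 input size is an assumption based on YOLO v3's common configuration; the numbers reported in the paper came from an AWS p3 instance and a local GTX1080 machine.

```python
# Simple per-image inference timing harness; the model is a placeholder, not YOLO v3.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 80))
model.eval()

images = [torch.randn(1, 3, 416, 416) for _ in range(20)]   # 416x416: YOLO v3's usual input size
with torch.no_grad():
    start = time.perf_counter()
    for img in images:
        _ = model(img)
    elapsed = time.perf_counter() - start
print(f"test time per image: {elapsed / len(images):.6f} s")
```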

Analysis of Deep Learning Model Vulnerability According to Input Mutation (입력 변이에 따른 딥러닝 모델 취약점 연구 및 검증)

  • Kim, Jaeuk; Park, Leo Hyun; Kwon, Taekyoung
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.1 / pp.51-59 / 2021
  • A deep learning model can produce false predictions for inputs that deviate from the training data through mutation, which can lead to fatal accidents in areas such as autonomous driving and security. To ensure the reliability of a model, its ability to cope with exceptional situations should be verified through various mutations. However, previous studies covered only a limited range of models and applied several mutation types without separating them. Using CIFAR-10, a dataset widely used for deep learning verification, this study verifies the reliability of a total of six models, including various commercialized models and their variants. To this end, six types of input mutation algorithms that may occur in real life are applied individually, each with a range of parameters, and the accuracy of each model is compared per mutation to rigorously identify model vulnerabilities associated with a particular mutation type (a sketch of this evaluation loop follows below).
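
The evaluation loop described above, applying one mutation type at several intensities and comparing accuracy, could be sketched as below. The mutation shown (additive Gaussian noise), the intensity values, and the Keras-style `model.predict` interface are assumptions for illustration; the paper applies six different mutation types.

```python
# Sketch: apply one input-mutation type at several intensities to CIFAR-10 test
# images and compare a trained model's accuracy. Mutation choice is an assumption.
import numpy as np

def mutate_gaussian(images, sigma):
    """images: float array scaled to [0, 1]; sigma: mutation intensity."""
    noisy = images + np.random.normal(0.0, sigma, images.shape)
    return np.clip(noisy, 0.0, 1.0)

def accuracy(model, images, labels):
    preds = model.predict(images).argmax(axis=1)   # assumes a Keras-style classifier
    return (preds == labels).mean()

# Hypothetical usage, given a trained `model` and CIFAR-10 arrays `x_test`, `y_test`:
# for sigma in (0.02, 0.05, 0.1):                  # sweep the mutation parameter
#     print(sigma, accuracy(model, mutate_gaussian(x_test, sigma), y_test))
```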

Adversarial Example Detection and Classification Model Based on the Class Predicted by Deep Learning Model (데이터 예측 클래스 기반 적대적 공격 탐지 및 분류 모델)

  • Ko, Eun-na-rae; Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1227-1236 / 2021
  • An adversarial attack, one class of attacks on deep learning classification models, adds imperceptible perturbations to input data so that the model misclassifies it. There are various adversarial attack algorithms. Accordingly, many studies have been conducted on detecting adversarial attacks, but few on classifying which adversarial attack algorithm generated a given adversarial input. If adversarial attacks can be classified, a more robust deep learning classification model can be built by analyzing the differences between attacks. In this paper, we propose a model that detects and classifies adversarial attacks by training a random forest classifier on features extracted from a target deep learning model. The features are extracted from the output of a hidden layer, based on the class predicted by the target model (a sketch of this feature-extraction step follows below). In experiments, the proposed model achieved accuracy 3.02% higher on clean data and 0.80% higher on adversarial data than previous studies, and it classified new adversarial attacks that previous studies did not classify.
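
A sketch of the feature-extraction-plus-random-forest idea is given below. The hidden layer name, the attack labels (FGSM, PGD), and the helper variables are hypothetical; the paper additionally groups hidden-layer features according to the class predicted by the target model, which is omitted here for brevity.

```python
# Sketch: use a hidden-layer activation of the target model as the feature vector
# and train a random forest to label inputs as clean or as a specific attack type.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

def hidden_features(target_model, layer_name, x):
    """Extract the output of one hidden layer of a Keras target model."""
    extractor = tf.keras.Model(inputs=target_model.input,
                               outputs=target_model.get_layer(layer_name).output)
    return extractor.predict(x).reshape(len(x), -1)

# Hypothetical usage, given a trained `target` model and inputs x_clean / x_fgsm / x_pgd:
# feats = np.vstack([hidden_features(target, "dense_1", x_clean),
#                    hidden_features(target, "dense_1", x_fgsm),
#                    hidden_features(target, "dense_1", x_pgd)])
# labels = np.array([0] * len(x_clean) + [1] * len(x_fgsm) + [2] * len(x_pgd))
# detector = RandomForestClassifier(n_estimators=200).fit(feats, labels)
```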

Performance Enhancement Technique of Visible Light Communication Systems Based on Deep Learning (딥러닝 기반 가시광 통신 시스템의 성능 향상 기법)

  • Seo, Sung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.4 / pp.51-55 / 2021
  • In this paper, we propose a deep learning-based interference cancellation scheme for visible light communication (VLC) systems in smart buildings. The proposed scheme estimates the channel noise by applying a deep learning model, and the estimated channel noise is stored in a database. In the modulator, the channel noise that degrades VLC performance is effectively removed through an interference cancellation technique (a minimal estimate-and-subtract sketch follows below). Performance is evaluated in terms of bit error rate (BER), and the simulation results confirm that the proposed scheme achieves better BER performance. Consequently, the proposed deep learning-based interference cancellation improves the signal quality of VLC systems by effectively removing the channel noise. The results can be applied to VLC for smart buildings and to general communication systems.
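
A minimal estimate-and-subtract sketch of the interference-cancellation step is shown below. The frame length, network size, and the synthetic noise used as training labels are illustrative assumptions, not the paper's VLC signal model.

```python
# Sketch: a small network estimates the channel noise from received frames, and the
# estimate is subtracted before demodulation. Signal model and sizes are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

frame = 64
received = np.random.randn(2000, frame).astype("float32")        # placeholder received frames
noise = (np.random.randn(2000, frame) * 0.3).astype("float32")   # placeholder noise labels

estimator = keras.Sequential([
    layers.Input(shape=(frame,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(frame),                     # estimated noise per frame
])
estimator.compile(optimizer="adam", loss="mse")
estimator.fit(received, noise, epochs=5, batch_size=64, verbose=0)

cleaned = received - estimator.predict(received)   # interference cancellation step
```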

A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang; Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.461-470 / 2023
  • While speech animation generation using deep learning has been actively researched for English, there has been no prior work for Korean. This paper therefore applies supervised deep learning to Korean speech animation generation for the first time. In doing so, we find that deep learning has a significant effect: it largely reduces speech animation research to speech recognition research, the dominant underlying technique, and we study how to make the best use of this effect for Korean speech animation generation. By clarifying this top-priority research target, the effect can help revitalize the recently inactive Korean speech animation research efficiently and effectively. The paper proceeds as follows: (i) it chooses the blendshape animation technique, (ii) implements the deep learning model as a master-servant pipeline of an automatic speech recognition (ASR) module and a facial action coding (FAC) module (a minimal sketch of this pipeline follows below), (iii) builds a Korean speech facial motion capture dataset, (iv) prepares two deep learning models for comparison (one adopts an English ASR module, the other a Korean ASR module, while both use the same basic structure for their FAC modules), and (v) trains the FAC module of each model dependently on its ASR module. A user study shows that the model with the Korean ASR module and dependently trained FAC module (4.2/5.0 points) generates decidedly more natural Korean speech animations than the model with the English ASR module (2.7/5.0 points). The result confirms the aforementioned effect: the quality of Korean speech animation comes down to the accuracy of Korean ASR.
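
The master-servant pipeline described above could be sketched as below: a frozen ASR module is assumed to produce per-frame speech features, and an FAC module regresses blendshape weights from them. The feature dimension (256), the blendshape count (52), and the placeholder data are assumptions, not the paper's actual configuration.

```python
# Sketch of an FAC module that maps ASR frame features to blendshape coefficients.
# Dimensions and data are illustrative assumptions.
import torch
import torch.nn as nn

ASR_DIM, NUM_BLENDSHAPES = 256, 52   # assumed sizes (52 follows common blendshape sets)

class FACModule(nn.Module):
    """Maps per-frame ASR features to blendshape coefficients in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ASR_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, NUM_BLENDSHAPES), nn.Sigmoid())

    def forward(self, asr_features):          # (frames, ASR_DIM)
        return self.net(asr_features)         # (frames, NUM_BLENDSHAPES)

# The FAC module trains on the (frozen) ASR module's outputs, echoing the paper's
# "dependent" training; both tensors below are placeholders.
asr_features = torch.randn(100, ASR_DIM)           # ASR outputs for 100 frames
target_weights = torch.rand(100, NUM_BLENDSHAPES)  # mocap-derived blendshape labels
fac = FACModule()
loss = nn.functional.mse_loss(fac(asr_features), target_weights)
loss.backward()
```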

Fake news detection using deep learning (딥러닝 기법을 이용한 가짜뉴스 탐지)

  • Lee, Dong-Ho; Lee, Jung-Hoon; Kim, Yu-Ri; Kim, Hyeong-Jun; Park, Seung-Myun; Yang, Yu-Jun; Shin, Woong-Bi
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.384-387 / 2018
  • As social networking services have spread rapidly, fake news, which disguises false information as journalism, has become a major social problem. In this paper, we present a deep learning model for detecting Korean fake news. Existing studies propose models suited to English, but they are of limited use for Korean: Korean can express the same meaning in shorter sentences, so there are too few features to operate deep neural networks, and morphological ambiguity makes semantic analysis difficult. To address this, we implement a system using a shallow CNN model together with 'FastText', a word embedding model trained at the syllable level, and train and validate it (a minimal sketch follows below).
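
A shallow CNN over FastText embeddings, as described above, could be sketched as follows. The sequence length, embedding dimension, filter settings, and the hypothetical `fasttext_embed` helper are assumptions for illustration; in the paper the embeddings come from a syllable-level FastText model for Korean.

```python
# Sketch of a shallow CNN classifier over pre-computed FastText embeddings.
# Sizes and the embedding helper are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, EMB_DIM = 100, 100     # assumed: 100 syllables per article, 100-d FastText vectors

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, EMB_DIM)),                # pre-computed FastText embeddings
    layers.Conv1D(64, kernel_size=3, activation="relu"),   # single shallow conv layer
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),                 # fake (1) vs. real (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical usage with a syllable-level embedding helper and labeled articles:
# X = np.stack([fasttext_embed(article) for article in articles])
# model.fit(X, labels, epochs=5, validation_split=0.2)
```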