• Title/Abstract/Keywords: deep convolutional neural networks (DCNN)

Search results: 22 items

Enhanced Network Intrusion Detection using Deep Convolutional Neural Networks

  • Naseer, Sheraz;Saleem, Yasir
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 10 / pp.5159-5178 / 2018
  • Network intrusion detection is a rapidly growing field of information security due to its importance for modern IT infrastructure. Many supervised and unsupervised learning techniques have been devised by researchers from the disciplines of machine learning and data mining to achieve reliable detection of anomalies. In this paper, a deep convolutional neural network (DCNN) based intrusion detection system (IDS) is proposed, implemented, and analyzed. The deep CNN core of the proposed IDS is fine-tuned using randomized search over the configuration space. The proposed system is trained and tested on the NSL-KDD training and testing datasets using a GPU. Performance comparisons of the proposed DCNN model with other classifiers are provided using well-known metrics, including the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), accuracy, the precision-recall curve, and mean average precision (mAP). The experimental results of the proposed DCNN-based IDS show promise for real-world application in anomaly detection systems.
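
The paper tunes its DCNN by randomized search over a configuration space. Below is a minimal sketch of that idea using scikit-learn's RandomizedSearchCV; the MLPClassifier stand-in, the parameter ranges, and the placeholder NSL-KDD-shaped data are illustrative assumptions, not the authors' actual model or configuration.

```python
# Minimal sketch: randomized hyperparameter search over a configuration space,
# in the spirit of the paper's DCNN tuning. The MLPClassifier stands in for the
# DCNN and the parameter grid is illustrative, not the authors' actual setup.
import numpy as np
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data with an NSL-KDD-like shape: ~41 features, normal/attack label.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 41))
y = rng.integers(0, 2, size=2000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), MLPClassifier(max_iter=300, random_state=0))

# Configuration space sampled at random rather than exhaustively.
param_distributions = {
    "mlpclassifier__hidden_layer_sizes": [(64,), (128,), (128, 64), (256, 128)],
    "mlpclassifier__alpha": loguniform(1e-5, 1e-2),
    "mlpclassifier__learning_rate_init": loguniform(1e-4, 1e-2),
}

search = RandomizedSearchCV(model, param_distributions, n_iter=20,
                            scoring="roc_auc", cv=3, random_state=0, n_jobs=-1)
search.fit(X_train, y_train)
print("best configuration:", search.best_params_)
print("held-out AUC:", search.score(X_test, y_test))
```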

Application of deep convolutional neural network for short-term precipitation forecasting using weather radar-based images

  • Le, Xuan-Hien;Jung, Sungho;Lee, Giha
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2021년도 학술발표회 / pp.136-136 / 2021
  • In this study, a deep convolutional neural network (DCNN) model is proposed for short-term precipitation forecasting using weather radar-based images. The DCNN model is a combination of convolutional neural networks, autoencoder neural networks, and the U-Net architecture. The weather radar-based image data used here were retrieved from a rainfall forecasting competition in Korea (AI Contest for Rainfall Prediction of Hydroelectric Dam Using Public Data), organized by Dacon under the sponsorship of the Korean Water Resources Association in October 2020. The data were collected from rainfall events during the rainy season (April-October) from 2010 to 2017. The images underwent a preprocessing step that converted the weather radar data to grayscale images before they were released for the competition. Each grayscale image covers a spatial domain of 120×120 pixels with a temporal resolution of 10 minutes, and each pixel corresponds to a 4 km × 4 km grid cell. The DCNN model is designed in this study to provide predictive images 10 minutes in advance; precipitation information can then be obtained from these forecast images through empirical conversion formulas. Model performance is assessed using a Score index defined from the ratio of the MAE (mean absolute error) to the CSI (critical success index). The competition results demonstrated the strong performance of the DCNN model, with a Score of 0.530 compared to the competition's best value of 0.500, ranking 16th out of 463 participating teams. These findings show the potential of applying the DCNN model to short-term rainfall prediction using weather radar-based images, and the model can be applied to other areas with different spatiotemporal resolutions.
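
For illustration, here is a minimal U-Net-style encoder-decoder of the kind the abstract describes, mapping a single 120×120 grayscale radar frame to a predicted frame one 10-minute step ahead. The channel widths, depth, and single-frame input are assumptions for this sketch, not the competition model.

```python
# Minimal sketch: a small U-Net-style encoder-decoder that maps one 120x120
# grayscale radar frame to a predicted frame one step (10 minutes) ahead.
# Channel widths and depth are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class TinyRadarUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                         # 120 -> 60
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # 60 -> 120
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):                 # x: (batch, 1, 120, 120)
        e1 = self.enc1(x)                 # (batch, 16, 120, 120)
        e2 = self.enc2(self.pool(e1))     # (batch, 32, 60, 60)
        d = self.up(e2)                   # (batch, 16, 120, 120)
        d = torch.cat([d, e1], dim=1)     # U-Net skip connection, 32 channels
        return self.dec(d)                # predicted next radar frame

model = TinyRadarUNet()
frame_t = torch.rand(4, 1, 120, 120)      # batch of current radar frames
frame_t_plus_10min = model(frame_t)
print(frame_t_plus_10min.shape)           # torch.Size([4, 1, 120, 120])
```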

비디오 얼굴 식별 성능개선을 위한 다중 심층합성곱신경망 결합 구조 개발 (Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification)

  • 김경태;최재영
    • 한국멀티미디어학회논문지 / Vol. 22, No. 6 / pp.655-664 / 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. The resulting sub-video sequences are fed to the 3D DCNN to obtain class-confidence scores for the input video sequence, taking both the temporal and spatial facial feature characteristics of the input video into account. The class-confidence scores obtained from the corresponding sub-video sequences are combined to form the proposed class-confidence matrix. The resulting class-confidence matrix is then used as the input for training the 2D DCNN, which is serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity present in a given test video sequence. To verify the effectiveness of the proposed method, extensive comparative experiments were conducted on the COX face database with its standard face identification protocols. Experimental results show that our method achieves an identification rate that is better than or comparable to other state-of-the-art video face recognition methods.
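
The sketch below illustrates the serial 3D-to-2D idea from the abstract: a 3D CNN scores each sub-video sequence, the per-sub-sequence confidence vectors are stacked into a class-confidence matrix, and a 2D CNN classifies that matrix. All layer sizes, the number of classes, sub-sequences, and frame dimensions are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of a serial 3D->2D DCNN: per-sub-sequence class-confidence
# scores from a 3D CNN are stacked into a class-confidence matrix, which a
# 2D CNN then classifies. All sizes are illustrative.
import torch
import torch.nn as nn

NUM_CLASSES, NUM_SUBSEQS, FRAMES = 10, 8, 16

class Small3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))                        # (batch, 8, 1, 1, 1)
        self.fc = nn.Linear(8, NUM_CLASSES)

    def forward(self, clip):                                # clip: (b, 3, T, H, W)
        return self.fc(self.features(clip).flatten(1))      # class-confidence scores

class Small2DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, NUM_CLASSES))

    def forward(self, conf_matrix):                         # (b, 1, subseqs, classes)
        return self.net(conf_matrix)

cnn3d, cnn2d = Small3DCNN(), Small2DCNN()

video = torch.rand(1, 3, NUM_SUBSEQS * FRAMES, 64, 64)      # one input video
subseqs = video.chunk(NUM_SUBSEQS, dim=2)                   # split along the time axis
scores = [cnn3d(s) for s in subseqs]                        # one score vector per sub-sequence
conf_matrix = torch.stack(scores, dim=1)                    # (1, subseqs, classes)
logits = cnn2d(conf_matrix.unsqueeze(1))                    # final identity scores
print(logits.shape)                                         # torch.Size([1, 10])
```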

딥 컨볼루션 신경망을 이용한 고용 소득 예측 (Predicting Employment Earning using Deep Convolutional Neural Networks)

  • 마렌드라;김나랑;최형림
    • 디지털융복합연구 / Vol. 16, No. 6 / pp.151-161 / 2018
  • Income matters in economic life. If income can be predicted, people can not only budget for living expenses such as food and rent but also set aside money for other goods or for emergencies. Income level is also used by banks, stores, and service companies for marketing purposes and for attracting loyal customers, because income is an important demographic factor used at various customer touchpoints. Therefore, income prediction for existing and potential customers is needed. In this study, machine learning techniques such as support vector machines (SVM), Gaussian classifiers, decision trees, and deep convolutional neural networks (DCNN) were used to predict income. The analysis showed that the DCNN method provided the best result (88%) compared with the other machine learning techniques used in this study. In future work, improving the data, for example through dimensionality reduction such as PCA, could yield better results.
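
A minimal sketch of the kind of comparison described above is shown here: an SVM, a Gaussian classifier (interpreted here as Gaussian Naive Bayes, which is an assumption), a decision tree, and a neural network stand-in for the DCNN, evaluated on tabular data. The synthetic data and the MLP stand-in are illustrative, not the study's setup or results.

```python
# Minimal sketch: compare SVM, Gaussian Naive Bayes, a decision tree, and a
# neural-network stand-in for the DCNN on tabular "income" data. Synthetic
# placeholder data; not the study's dataset, models, or results.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder "employment earning" data: features -> high/low income label.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Gaussian (Naive Bayes)": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Neural net (DCNN stand-in)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")
```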

Toward Optimal FPGA Implementation of Deep Convolutional Neural Networks for Handwritten Hangul Character Recognition

  • Park, Hanwool;Yoo, Yechan;Park, Yoonjin;Lee, Changdae;Lee, Hakkyung;Kim, Injung;Yi, Kang
    • Journal of Computing Science and Engineering / Vol. 12, No. 1 / pp.24-35 / 2018
  • The deep convolutional neural network (DCNN) is an advanced technology in image recognition. Because of its extreme computing resource requirements, a software-only DCNN implementation cannot meet real-time requirements; therefore, the need for DCNN accelerator hardware is increasing. In this paper, we present a field programmable gate array (FPGA)-based hardware accelerator design of a DCNN targeting a handwritten Hangul character recognition application. We also present design optimization techniques in the SDAccel environment for searching the optimal FPGA design space, including memory access optimization, computing-unit parallelism, and data conversion. We achieved a recognition time of about 11.19 ms per character with the Xilinx FPGA accelerator. Our design optimization was performed with Xilinx HLS and the SDAccel environment targeting a Xilinx Kintex XCKU115 FPGA. Our design outperforms a CPU in energy efficiency (the number of samples per unit of energy) by 5.88 times and a GPGPU by 5 times. We expect these results to offer an alternative to GPGPU solutions for real-time applications, especially in data centers or server farms where energy consumption is a critical problem.

A Survey on Deep Convolutional Neural Networks for Image Steganography and Steganalysis

  • Hussain, Israr;Zeng, Jishen;Qin, Xinhong;Tan, Shunquan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 3 / pp.1228-1248 / 2020
  • Steganalysis and steganography have witnessed immense progress over the past few years through the advancement of deep convolutional neural networks (DCNN). In this paper, we analyze the current state of research on the latest deep learning-based image steganography and steganalysis frameworks. Our objective is to give future researchers an overview of the work being done on deep learning-based image steganography and steganalysis and to highlight the strengths and weaknesses of existing up-to-date techniques. The results of this study open new approaches for upcoming research and may serve as a source of hypotheses for further significant research on deep learning-based image steganography and steganalysis. Finally, technical challenges of current methods and several promising directions for deep learning-based steganography and steganalysis are discussed to illustrate how these challenges can be turned into prolific future research avenues.

심층 합성곱 신경망을 이용한 교통신호등 인식 (Traffic Light Recognition Using a Deep Convolutional Neural Network)

  • 김민기
    • 한국멀티미디어학회논문지 / Vol. 21, No. 11 / pp.1244-1253 / 2018
  • The color of a traffic light is sensitive to varying illumination conditions; in particular, the hue information is lost when oversaturation occurs in the lighting area. This paper proposes a traffic light recognition method that is robust to these illumination variations. The method consists of two steps: traffic light detection and recognition. The first step, traffic light detection, uses only intensity and saturation, delaying the use of hue information until the second step, which recognizes the signal of the traffic light. We utilize a deep learning technique in the second step, designing a deep convolutional neural network (DCNN) composed of three convolutional networks and two fully connected networks. Twelve video clips were used to evaluate the performance of the proposed method. Experimental results show a traffic light detection precision of 93.9%, a recall of 91.6%, and a recognition accuracy of 89.4%. Considering that the maximum distance between the camera and the traffic lights is 70 m, these results show that the proposed method is effective.
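
The two-step idea above is sketched below: candidate light regions are detected from intensity and saturation only (hue is deferred), and a cropped candidate is then classified by a small DCNN with three convolutional and two fully connected layers. The thresholds, crop size, channel widths, and class set are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of the two-step pipeline: (1) candidate detection from
# intensity and saturation only, (2) classification of a cropped candidate
# with a 3-conv + 2-FC network. All thresholds and sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn

def candidate_mask(rgb, sat_thresh=0.6, val_thresh=0.5):
    """Step 1: keep bright, saturated pixels; hue is deliberately not used."""
    value = rgb.max(axis=2)                                   # intensity (V in HSV)
    saturation = np.where(value > 0,
                          (value - rgb.min(axis=2)) / np.maximum(value, 1e-6), 0.0)
    return (value > val_thresh) & (saturation > sat_thresh)

class TrafficLightCNN(nn.Module):
    """Step 2: three convolutional blocks followed by two fully connected layers."""
    def __init__(self, num_classes=4):                        # e.g. red / yellow / green / arrow (assumed)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))  # 8 -> 4
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, 128),
                                nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, crop):                                  # crop: (batch, 3, 32, 32)
        return self.fc(self.conv(crop))

frame = np.random.rand(480, 640, 3)                           # placeholder camera frame in [0, 1]
mask = candidate_mask(frame)                                  # where to look for lights
crop = torch.rand(1, 3, 32, 32)                               # a cropped candidate region
print(mask.shape, TrafficLightCNN()(crop).shape)              # (480, 640) torch.Size([1, 4])
```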

텍스처 특징 기반 제어점 선택 알고리즘과 병렬 심층 컨볼루션 신경망을 이용한 새로운 얼굴 모핑 방법 (A New Face Morphing Method using Texture Feature-based Control Point Selection Algorithm and Parallel Deep Convolutional Neural Network)

  • 박진혁;;임선자;이석환;권기룡
    • 한국멀티미디어학회논문지 / Vol. 25, No. 2 / pp.176-188 / 2022
  • In this paper, we propose a compact method for anthropomorphism that uses deep convolutional neural networks (DCNN) to detect the similarity between a human face and an animal face, and we apply texture feature-based morphing between them. We also propose a basic texture feature-based morphing system for morphing between human faces only. The entire anthropomorphism process starts with the creation of an animal face classifier using a parallel DCNN that determines the animal face most similar to a given human face. The significance of our network is that it contains four sets of convolutional functions that run in parallel, allowing it to extract more features than a linear DCNN. Once the similarity is established, our texture feature-based automatic morphing system recognizes the facial features of the human face and selects the control points automatically, rather than relying on the traditional manually assisted morphing process. The simulation results show that our proposed DCNN surpasses its competitors with a 92.0% accuracy rate. It also ensures that the most similar animal classes are found, and the texture-based morphing technique completes the morphing process automatically, ensuring a smooth transition from one image to the other.
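
A minimal sketch of a parallel DCNN in the spirit of the abstract is shown below: four convolutional branches process the same face image in parallel and their features are concatenated before classification. The branch depths, kernel sizes, and number of animal classes are assumptions, not the paper's network.

```python
# Minimal sketch: four convolutional branches in parallel, features concatenated
# before the classifier. Sizes and class count are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelFaceDCNN(nn.Module):
    def __init__(self, num_animal_classes=5):
        super().__init__()
        def branch(kernel_size):
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size, padding=kernel_size // 2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size, padding=kernel_size // 2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (batch, 32)
        # Four branches run in parallel; different kernel sizes capture facial
        # texture at different scales, yielding more features than one linear stack.
        self.branches = nn.ModuleList([branch(k) for k in (1, 3, 5, 7)])
        self.classifier = nn.Linear(4 * 32, num_animal_classes)

    def forward(self, x):                                     # x: (batch, 3, H, W)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classifier(feats)

model = ParallelFaceDCNN()
face = torch.rand(2, 3, 128, 128)
print(model(face).shape)                                      # torch.Size([2, 5])
```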

Deep Convolution Neural Networks in Computer Vision: a Review

  • Yoo, Hyeon-Joong
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 1 / pp.35-43 / 2015
  • Over the past couple of years, tremendous progress has been made in applying deep learning (DL) techniques to computer vision. In particular, deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance on standard recognition datasets and tasks such as the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Among them, the GoogLeNet network, a radically redesigned DCNN based on the Hebbian principle and scale invariance, set the new state of the art for classification and detection in ILSVRC 2014. Since there are various deep learning techniques, this review paper focuses on techniques directly related to DCNNs, especially those needed to understand the architecture and techniques employed in the GoogLeNet network.
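
GoogLeNet, the architecture this review centers on, is built from Inception modules that apply 1×1, 3×3, and 5×5 convolutions and pooling in parallel and concatenate the results along the channel axis. The sketch below shows a single such module; the channel counts are illustrative, not the widths of any particular GoogLeNet stage.

```python
# Sketch of one Inception module (GoogLeNet building block): parallel 1x1, 3x3,
# 5x5 and pooled branches concatenated along the channel axis. Channel counts
# here are illustrative only.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_reduce, 1), nn.ReLU(),
                                nn.Conv2d(c3_reduce, c3, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_reduce, 1), nn.ReLU(),
                                nn.Conv2d(c5_reduce, c5, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU())

    def forward(self, x):
        # Each branch preserves the spatial size, so the outputs can be
        # concatenated along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

block = InceptionModule(in_ch=64, c1=32, c3_reduce=16, c3=32,
                        c5_reduce=8, c5=16, pool_proj=16)
x = torch.rand(1, 64, 28, 28)
print(block(x).shape)            # torch.Size([1, 96, 28, 28])
```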

Convolutional Neural Network (CNN) 기반의 단백질 간 상호 작용 추출 (Extraction of Protein-Protein Interactions based on Convolutional Neural Network (CNN))

  • 최성필
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol. 23, No. 3 / pp.194-198 / 2017
  • In this paper, we propose an extended convolutional neural network (CNN) model for automatically extracting protein-protein interaction (PPI) information expressed in academic literature. The model extends an existing simple feature-based CNN model designed for relation extraction by additionally applying various global features, which improves its performance. In experiments on AIMed, a widely used benchmark collection for evaluating PPI extraction, the model achieved an F-score of 78.0%, which is 8.3% higher than the best performance reported to date. The results also show that the CNN model achieves high performance in PPI extraction without feature extraction that requires complex language processing.
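
For illustration, here is a minimal sketch of a CNN relation-extraction model of the kind described above: a convolutional encoder over token embeddings of a sentence mentioning two proteins, whose pooled features are concatenated with an extra vector of global features before the final classifier. The vocabulary size, dimensions, and the global-feature vector are assumptions, not the paper's actual feature set.

```python
# Minimal sketch: text CNN for PPI relation extraction with extra global
# features concatenated before classification. All sizes are illustrative.
import torch
import torch.nn as nn

class PPIRelationCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=50, num_filters=64,
                 global_feat_dim=10, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=3, padding=1)
        self.classifier = nn.Linear(num_filters + global_feat_dim, num_classes)

    def forward(self, token_ids, global_feats):
        # token_ids: (batch, seq_len), global_feats: (batch, global_feat_dim)
        emb = self.embed(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        conv_out = torch.relu(self.conv(emb))              # (batch, filters, seq_len)
        pooled = conv_out.max(dim=2).values                # max-over-time pooling
        combined = torch.cat([pooled, global_feats], dim=1)
        return self.classifier(combined)                   # interaction / no interaction

model = PPIRelationCNN()
tokens = torch.randint(0, 5000, (8, 40))                   # 8 sentences, 40 tokens each
global_feats = torch.rand(8, 10)                           # assumed sentence-level features
print(model(tokens, global_feats).shape)                   # torch.Size([8, 2])
```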