• Title/Abstract/Keywords: CNN (Convolutional neural networks)

Search results: 354 items (processing time: 0.027 s)

Efficient Thread Allocation Method of Convolutional Neural Network based on GPGPU

  • 김민철;이광엽
    • 예술인문사회 융합 멀티미디어 논문지 / Vol. 7, No. 10 / pp.935-943 / 2017
  • Among neural networks that learn from large amounts of data, the CNN (Convolutional Neural Network), used for tasks such as image classification and speech recognition, continues to be developed as an architecture with excellent performance. However, it is difficult to use on embedded systems with limited resources. Pre-trained weights are therefore employed, but limitations remain, and the trend is to exploit GP-GPU (General-Purpose computing on Graphics Processing Units), which uses the GPU for general-purpose computation, to overcome them. Because a CNN performs simple and repetitive operations, its computation speed on a SIMT (Single Instruction Multiple Thread)-based GPGPU varies greatly with how threads are allocated and utilized. When threads execute the convolution and pooling operations, some threads must sit idle; to address this, the remaining threads are reassigned to the computation of the next feature map and kernel, which increases the computation speed.
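
To make the thread-reuse idea concrete, here is a toy Python sketch (not the authors' GPGPU implementation) of the bookkeeping involved: a pooling pass needs far fewer threads than the preceding convolution, so the surplus threads of a SIMT block can be mapped onto the next feature map's kernel computation instead of idling. The block size and map dimensions below are hypothetical.

```python
BLOCK_THREADS = 256          # threads in one SIMT block (assumed)
CONV_OUT = 16 * 16           # output pixels of one convolution feature map
POOL_OUT = 8 * 8             # output pixels after 2x2 pooling

def split_threads(block_threads, work_items):
    """Return (threads doing this pass, threads freed for the next feature map)."""
    busy = min(block_threads, work_items)
    return busy, block_threads - busy

conv_busy, conv_idle = split_threads(BLOCK_THREADS, CONV_OUT)   # 256, 0
pool_busy, pool_idle = split_threads(BLOCK_THREADS, POOL_OUT)   # 64, 192
print(f"convolution uses {conv_busy} threads ({conv_idle} idle)")
print(f"pooling uses {pool_busy} threads; {pool_idle} can start the next feature map")
```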

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks

  • 김호준
    • 지능정보연구 / Vol. 16, No. 2 / pp.95-108 / 2010
  • This study proposes a hybrid neural network model for effectively recognizing dynamic hand gesture patterns from video. The proposed model consists of a feature extraction module and a pattern classification module, implemented with a modified CNN model and a WFMM (weighted fuzzy min-max) model, respectively. A spatio-temporal template data representation based on the motion information of the target is also introduced. The paper first presents a CNN model extended with a three-dimensional receptive field structure to compensate for the temporal and spatial variation of feature points in hand gesture data. For the pattern classification stage, a weighted FMM neural network model is then introduced, and its structure and behavior are described. It is also shown that the proposed model can alleviate the distortion of the learning effect that arises from the contraction of overlapping hyperboxes in conventional FMM networks. The validity of the proposed approach is examined through experiments on a simplified hand gesture recognition problem posed by remote control of home appliances.
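
As a rough illustration of the classification side, the sketch below implements a standard fuzzy min-max hyperbox membership function in Python with an optional per-feature weight vector. It only hints at how a weighted FMM (WFMM) might emphasize features; the paper's exact weighted formulation is not reproduced here.

```python
import numpy as np

def fmm_membership(x, v, w, gamma=1.0, weights=None):
    """Membership of point x in a hyperbox with min point v and max point w.

    Standard fuzzy min-max membership; `weights` is a hypothetical per-feature
    weighting meant only to suggest the weighted-FMM idea.
    """
    x, v, w = map(np.asarray, (x, v, w))
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    per_dim = 0.5 * (below + above)
    if weights is None:
        weights = np.ones_like(per_dim)
    return float(np.average(per_dim, weights=weights))

# A point inside the box gets membership 1; points outside decay toward 0.
print(fmm_membership([0.5, 0.5], v=[0.3, 0.3], w=[0.7, 0.7]))   # -> 1.0
print(fmm_membership([0.9, 0.5], v=[0.3, 0.3], w=[0.7, 0.7]))   # < 1.0
```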

Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • 스마트미디어저널 / Vol. 9, No. 1 / pp.23-29 / 2020
  • This paper presents an approach to dynamic hand gesture recognition using an algorithm based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), together with neural-network-based key frame selection. Typically, a 3D deep neural network classifies gestures from image frames randomly sampled from video data. In this work, to improve classification performance, we use key frames that represent the overall video as the input to the classification network. The key frames are extracted by SegNet instead of conventional clustering algorithms for video summarization (VSUMM), which require heavy computation. By using a deep neural network, key frame selection can be performed in a real-time system. Experiments are conducted with 3D convolutional models, namely 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet, for gesture classification. Our algorithm achieved up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
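
A minimal PyTorch sketch of a 3D CNN classifier operating on a clip of selected key frames is shown below; the layer sizes, the 8-frame clip length, and the 10 gesture classes are illustrative assumptions, not the configurations (3D_CNN, I3D, 3D_ResNet) evaluated in the paper.

```python
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # halve T, H, W
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                         # x: (N, 3, T, H, W)
        return self.classifier(self.features(x).flatten(1))

clip = torch.randn(2, 3, 8, 112, 112)             # 2 clips of 8 key frames each
logits = Gesture3DCNN()(clip)                     # -> shape (2, 10)
```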

Large-Scale Text Classification with Deep Neural Networks

  • 조휘열;김진화;김경민;장정호;엄재홍;장병탁
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol. 23, No. 5 / pp.322-327 / 2017
  • Document classification has long been studied in natural language processing. Going beyond earlier work based on convolutional neural networks, we perform document classification with recurrent neural networks and present the combined results. For the convolutional model we use a single-layer CNN; for the recurrent models we use the LSTM (long short-term memory) network and the GRU (gated recurrent unit), which are known to perform best. The experiments show that classification accuracy increases in the order Multinomial Naïve Bayes Classifier < SVM < LSTM < CNN < GRU. This indicates that text classification is closer to a problem of extracting and classifying document features than one of modeling sequences. We also find that the GRU is better suited than the LSTM for extracting document features, and that the best performance is obtained when appropriate features and sequence information are used together.
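
For reference, a minimal GRU document classifier of the kind compared in the paper might look like the PyTorch sketch below; the vocabulary size, embedding width, and number of classes are placeholders rather than the authors' settings.

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden=128, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, token_ids):                 # token_ids: (N, seq_len)
        _, h_n = self.gru(self.embed(token_ids))  # h_n: (1, N, hidden)
        return self.fc(h_n[-1])                   # logits: (N, num_classes)

batch = torch.randint(1, 30000, (4, 60))          # 4 documents of 60 tokens
print(GRUClassifier()(batch).shape)               # torch.Size([4, 5])
```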

Object Tracking using Feature Map from Convolutional Neural Network

  • 임수창;김도연
    • 한국멀티미디어학회논문지 / Vol. 20, No. 2 / pp.126-133 / 2017
  • The conventional hand-crafted features used to track objects have limitations in object representation. Convolutional neural networks, which show good performance in various areas of computer vision, are emerging as a way to overcome the limitations of feature extraction. A CNN extracts image features through multiple layers and learns the kernels used for feature extraction by itself. In this paper, we use the feature map extracted from a convolutional layer of a convolutional neural network to create an outline model of the object and use it for tracking. We propose a method to adaptively update the outline model to cope with the various environmental change factors that affect tracking performance. The proposed algorithm was evaluated on the 11 environmental-change attributes of the CVPR2013 tracking benchmark and showed excellent results for six attributes.
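
The sketch below shows one common way to pull a feature map out of a convolutional layer with a PyTorch forward hook; VGG16 and the chosen layer index are illustrative stand-ins, not necessarily the network or layer used by the authors.

```python
import torch
import torchvision.models as models

vgg = models.vgg16(weights=None).eval()           # untrained backbone for illustration
captured = {}

def hook(module, inputs, output):
    captured["fmap"] = output.detach()            # (N, C, H, W) feature map

handle = vgg.features[10].register_forward_hook(hook)  # a mid-level conv layer
with torch.no_grad():
    vgg(torch.randn(1, 3, 224, 224))              # a dummy search-region patch
handle.remove()

print(captured["fmap"].shape)                     # torch.Size([1, 256, 56, 56])
```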

Comparative Study of Ship Image Classification using Feedforward Neural Network and Convolutional Neural Network

  • Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 3 / pp.221-227 / 2024
  • In autonomous navigation systems, the need for fast and accurate image processing using deep learning and advanced sensor technologies is paramount. These systems rely heavily on the ability to process and interpret visual data swiftly and precisely to ensure safe and efficient navigation. Despite the critical importance of such capabilities, there has been a noticeable lack of research specifically focused on ship image classification for maritime applications. This gap highlights the necessity for more in-depth studies in this domain. In this paper, we aim to address this gap by presenting a comprehensive comparative study of ship image classification using two distinct neural network models: the Feedforward Neural Network (FNN) and the Convolutional Neural Network (CNN). Our study involves the application of both models to the task of classifying ship images, utilizing a dataset specifically prepared for this purpose. Through our analysis, we found that the Convolutional Neural Network demonstrates significantly more effective performance in accurately classifying ship images compared to the Feedforward Neural Network. The findings from this research are significant as they can contribute to the advancement of core source technologies for maritime autonomous navigation systems. By leveraging the superior image classification capabilities of convolutional neural networks, we can enhance the accuracy and reliability of these systems. This improvement is crucial for the development of more efficient and safer autonomous maritime operations, ultimately contributing to the broader field of autonomous transportation technology.
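
To make the comparison concrete, the sketch below defines the two model families side by side in PyTorch: an FNN that flattens the pixels into a vector and a small CNN that preserves spatial structure. The 64x64 RGB input size, layer widths, and four ship classes are assumptions for illustration only.

```python
import torch.nn as nn

def make_fnn(num_classes=4):
    return nn.Sequential(
        nn.Flatten(),                             # pixels treated as a flat vector
        nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
        nn.Linear(256, num_classes),
    )

def make_cnn(num_classes=4):
    return nn.Sequential(                         # convolutions keep spatial structure
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, num_classes),
    )

fnn, cnn = make_fnn(), make_cnn()                 # both map (N, 3, 64, 64) images to 4 logits
```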

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 2 / pp.456-477 / 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet. Therefore, adversarial samples have become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insights into the behavior and characteristics of malware, evaluate the performance of existing detectors against deceptive samples, and help to discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are only effective in specific environments and yield limited evasion effectiveness. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into grey-scale images and introduces a pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale image, which improves the modeling ability of the generator and discriminator and thus enhances the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% against the Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Convolutional Neural Network with Recurrent Neural Network (CNN_RNN), and Convolutional Neural Network with Long Short-Term Memory (CNN_LSTM) detectors, respectively.
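
The abstract does not give the exact PixGAN architecture, but a generic pixel-attention block of the kind described (a per-pixel weight map that rescales a DCGAN feature map) could look like the following PyTorch sketch; the channel count and feature-map size are made up.

```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Generic pixel-attention block: a 1x1 conv scores each pixel, and the
    sigmoid weights rescale the feature map to emphasize critical pixels."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                         # x: (N, C, H, W)
        attn = torch.sigmoid(self.score(x))       # (N, 1, H, W), weights in (0, 1)
        return x * attn

feats = torch.randn(2, 64, 32, 32)                # e.g. grey-scale malware image features
print(PixelAttention(64)(feats).shape)            # torch.Size([2, 64, 32, 32])
```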

Reconstruction of Terrestrial Water Storage of GRACE/GFO Using Convolutional Neural Network and Climate Data

  • Jeon, Woohyu;Kim, Jae-Seung;Seo, Ki-Weon
    • 한국지구과학회지 / Vol. 42, No. 4 / pp.445-458 / 2021
  • Gravity Recovery and Climate Experiment (GRACE) gravimeter satellites have observed the Earth's gravity field with unprecedented accuracy since 2002. After the termination of the GRACE mission, the GRACE Follow-On (GFO) satellites have continued to observe the global gravity field, but there is a gap of about one year between GRACE and GFO. Many previous studies estimated terrestrial water storage (TWS) changes using hydrological models, vertical displacements from global navigation satellite system observations, altimetry, and satellite laser ranging to bridge GRACE and GFO data. Recently, various machine learning methods, such as artificial neural networks and multi-linear regression, have been developed to predict TWS changes. Previous studies used hydrological and climate data simultaneously as input to the learning process. Further, they excluded linear trends from the input data and the GRACE/GFO data, because the trend components obtained from GRACE/GFO data were assumed to hold for other periods. However, hydrological models carry high uncertainties, and the observational period of GRACE/GFO is not long enough to estimate reliable TWS trends. In this study, we use a convolutional neural network (CNN) incorporating only climate data (temperature, evaporation, and precipitation) to predict TWS variations in the missing period between GRACE and GFO. We also let the CNN model learn the linear trend of the GRACE/GFO data. In most river basins considered in this study, our CNN model successfully predicts seasonal and long-term variations of TWS change.
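
A minimal sketch of the kind of CNN regressor described, taking temperature, evaporation, and precipitation grids as three input channels and emitting one TWS value per sample, is shown below; the grid size and layer widths are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TWSRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),                     # one TWS value per sample
        )

    def forward(self, climate_grid):              # (N, 3, H, W): T, E, P channels
        return self.net(climate_grid)

monthly_fields = torch.randn(12, 3, 64, 64)       # one year of monthly climate grids
print(TWSRegressor()(monthly_fields).shape)       # torch.Size([12, 1])
```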

Recognition of Classification of Traffic Sign Images Using CNN

  • 김문정;채신록;홍은기;황보민;문유진
    • 한국컴퓨터정보학회:학술대회논문집 / Proceedings of the 2023 67th Winter Conference, Vol. 31, No. 1 / pp.317-318 / 2023
  • This paper proposes the design and implementation of a deep neural network system based on a CNN (Convolutional Neural Network) so that autonomous vehicles can understand each country's traffic rules and road markings and drive correctly. As the research method, the system was trained on the traffic safety sign chart images provided by the Korea Road Traffic Authority (KoROAD) so that it can recognize the signs required for autonomous driving. The learning system designed in this paper successfully recognized road traffic signs; with it, autonomous vehicles can recognize signs, and support for visually impaired and elderly drivers is also considered feasible.
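
A hypothetical training sketch in the same spirit is shown below: a small ResNet-18 backbone is fine-tuned on a folder of traffic-sign images with one sub-directory per sign class. The directory name sign_images/, the backbone, and all hyperparameters are assumptions, not the system described in the paper.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical data layout: sign_images/<class_name>/<image>.png
tfm = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
data = datasets.ImageFolder("sign_images/", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)             # swap in pretrained weights if desired
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                     # one epoch over the sign chart
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```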

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • 한국멀티미디어학회논문지 / Vol. 20, No. 5 / pp.769-781 / 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) perform successfully in many computer vision applications such as image classification and object detection. To deploy deep learning networks in embedded systems with limited processing power and memory, the networks may need to be simplified. However, a simplified deep learning network cannot learn every possible scene. One realistic strategy for an embedded deep learning network is to construct a simplified model optimized for the scene images of the installation place. Automatic training is then required for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach and, after a reassurance step based on a GoogLeNet-based CNN, refines them more precisely (tightly) by applying a salient object detection technique. The performance improvement of the proposed method with respect to accuracy and tightness is shown through several experiments.
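
The final refinement step can be illustrated with a small sketch: given a candidate person box and a saliency map from any salient-object-detection model, the box is shrunk to tightly enclose the salient pixels inside it. The threshold and toy values below are made up; this is not the AGTLM implementation itself.

```python
import numpy as np

def tighten_box(box, saliency, thresh=0.5):
    """box = (x1, y1, x2, y2); saliency = HxW array with values in [0, 1]."""
    x1, y1, x2, y2 = box
    region = saliency[y1:y2, x1:x2] > thresh      # salient pixels inside the box
    ys, xs = np.nonzero(region)
    if len(xs) == 0:                              # nothing salient: keep the box
        return box
    return (x1 + xs.min(), y1 + ys.min(), x1 + xs.max() + 1, y1 + ys.max() + 1)

saliency = np.zeros((100, 100)); saliency[30:70, 40:60] = 1.0
print(tighten_box((20, 20, 90, 90), saliency))    # -> (40, 30, 60, 70)
```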