• Title/Abstract/Keywords: Deep CNN

Development of Special Documents Classification System using Deep Learning

  • 진상현;황상호;강원석;손창식
    • Proceedings of the Korea Information Processing Society Conference / Korea Information Processing Society 2019 Fall Conference / pp.589-591 / 2019
  • In this paper, we propose a method for classifying documents in the specialized field of aircraft maintenance using natural language processing and deep learning, as part of developing an education and training system for the operation and maintenance of advanced equipment. To build the document classification model, aircraft maintenance manuals were converted into text files, producing a total of 4,917 documents, which were divided into 12 categories based on individual maintenance capability management (IMQC) for each maintenance technician. Considering that the collected documents belong to a specialized field, a terminology dictionary was added, and preprocessing was performed with KoNLPy. Because specialized-field documents have very high content similarity regardless of category, features were extracted using TF-ICF, which expresses how important a term is within a specific category. A convolutional neural network (CNN) was then used to generate feature maps, and classification was performed through a fully connected layer. Classifying 983 test documents yielded an average classification performance of 73.6%.
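
As a rough sketch of the TF-ICF weighting used above for feature extraction (the abstract does not give the exact normalization, so the function name, data layout, and logarithmic form below are assumptions), a term is weighted by its frequency within a category scaled by the inverse of the number of categories it appears in:

```python
import math
from collections import Counter, defaultdict

def tf_icf(docs_by_category):
    """Compute TF-ICF weights.

    docs_by_category: dict mapping a category name to a list of
    tokenized documents (lists of terms) in that category.
    Returns {category: {term: weight}}.
    """
    n_categories = len(docs_by_category)

    # Term frequency per category (all documents of a category pooled together).
    tf = {c: Counter(t for doc in docs for t in doc)
          for c, docs in docs_by_category.items()}

    # Category frequency: in how many categories does each term appear?
    cf = defaultdict(int)
    for counts in tf.values():
        for term in counts:
            cf[term] += 1

    # TF-ICF rewards terms that are frequent within a category but rare across categories.
    weights = {}
    for c, counts in tf.items():
        total = sum(counts.values())
        weights[c] = {
            term: (count / total) * math.log(n_categories / cf[term])
            for term, count in counts.items()
        }
    return weights
```

A term that occurs in every one of the 12 categories gets weight zero, which matches the motivation given in the abstract: highly similar boilerplate shared by all manuals should not drive the classifier.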

Waste Classification by Fine-Tuning Pre-trained CNN and GAN

  • Alsabei, Amani;Alsayed, Ashwaq;Alzahrani, Manar;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / Vol. 21, No. 8 / pp.65-70 / 2021
  • Waste accumulation is becoming a significant challenge in most urban areas and, if it continues unchecked, is poised to have severe repercussions on our environment and health. The massive industrialisation in our cities has been followed by commensurate waste creation that has become a bottleneck even for waste management systems. While recycling is a viable solution for waste management, it can be daunting to classify waste material for recycling accurately. In this study, transfer learning models were proposed to automatically classify waste into six material classes (cardboard, glass, metal, paper, plastic, and trash). The tested pre-trained models were ResNet50, VGG16, InceptionV3, and Xception. Data augmentation was done using a Generative Adversarial Network (GAN) with various image generation percentages. It was found that models based on Xception and VGG16 were more robust. In contrast, models based on ResNet50 and InceptionV3 were sensitive to the added machine-generated images, as their accuracy degraded significantly compared to training with no artificial data.
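
A minimal sketch of the kind of fine-tuning setup described above, using torchvision's pre-trained VGG16 as a stand-in; the six output classes follow the abstract, while the frozen feature extractor and replaced classifier head are illustrative assumptions rather than the authors' exact configuration:

```python
import torch.nn as nn
from torchvision import models

# Load VGG16 pre-trained on ImageNet and freeze its convolutional feature extractor.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a 6-way waste-material classifier:
# cardboard, glass, metal, paper, plastic, trash.
model.classifier[6] = nn.Linear(4096, 6)
```

GAN-generated images would simply be mixed into the training set at the chosen percentage before fitting this model.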

A Multi-Scale Parallel Convolutional Neural Network Based Intelligent Human Identification Using Face Information

  • Li, Chen;Liang, Mengti;Song, Wei;Xiao, Ke
    • Journal of Information Processing Systems / Vol. 14, No. 6 / pp.1494-1507 / 2018
  • Intelligent human identification using face information has been a research hotspot, ranging from Internet of Things (IoT) applications, intelligent self-service banking, and intelligent surveillance to public safety and intelligent access control. Since 2D face images are usually captured from a long distance in an unconstrained environment, exploiting this advantage to make human recognition suitable for wider intelligent applications with higher security and convenience must contend with several key difficulties: gray-scale changes caused by illumination variance; occlusion caused by glasses, hair, or a scarf; and self-occlusion and deformation caused by pose or expression variation. To overcome these, many solutions have been proposed. However, most of them improve recognition performance under only one influencing factor, which still cannot meet the demands of real face recognition scenarios. In this paper we propose a multi-scale parallel convolutional neural network architecture to extract deep, robust facial features with high discriminative ability. Extensive experiments are conducted on the CMU-PIE, extended FERET, and AR databases, and the results show that the proposed algorithm exhibits excellent discriminative ability compared with other existing algorithms.
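
The abstract does not detail the architecture, but a multi-scale parallel block typically runs several convolution branches with different receptive fields on the same input and concatenates the results; the following PyTorch sketch (kernel sizes, channel counts, and input size all assumed) illustrates the idea:

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolution branches with different kernel sizes,
    concatenated along the channel dimension (illustrative only)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

# Example: a 112x112 grayscale face crop through one multi-scale block.
features = MultiScaleBlock(1, 16)(torch.randn(1, 1, 112, 112))  # -> (1, 48, 112, 112)
```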

Semantic Image Segmentation Combining Image-level and Pixel-level Classification

  • 김선국;이칠우
    • Journal of Korea Multimedia Society / Vol. 21, No. 12 / pp.1425-1430 / 2018
  • In this paper, we propose a CNN-based deep learning algorithm for semantic segmentation of images. In order to improve the accuracy of semantic segmentation, we combine pixel-level object classification with image-level object classification. The image-level object classification is used to accurately detect the characteristics of the image as a whole, and the pixel-level object classification is used to indicate which object region each pixel belongs to. The proposed network consists of three parts: a part that extracts the features of the image, a part that outputs the final result at the resolution of the original image, and a part that performs the image-level object classification. Separate loss functions are used for the image-level and pixel-level classifications: the image-level object classification uses KL-divergence, and the pixel-level object classification uses cross-entropy. In addition, feature-extraction layers are combined with decoding layers of the same resolution to recover the positional information and object-boundary detail that are lost through the pooling operations.
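
A hedged sketch of how the two loss terms named above can be combined in PyTorch; the weighting factor alpha and the tensor shapes are assumptions for illustration, not the paper's exact formulation:

```python
import torch.nn.functional as F

def combined_loss(pixel_logits, image_logits, pixel_labels, image_target, alpha=1.0):
    """Pixel-level cross-entropy plus image-level KL-divergence,
    mirroring the two loss terms described in the abstract.

    pixel_logits: (N, C, H, W) per-pixel class scores
    pixel_labels: (N, H, W) integer class indices
    image_logits: (N, C) image-level class scores
    image_target: (N, C) image-level target distribution (probabilities)
    """
    ce = F.cross_entropy(pixel_logits, pixel_labels)
    kl = F.kl_div(F.log_softmax(image_logits, dim=1), image_target,
                  reduction="batchmean")
    return ce + alpha * kl
```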

Comparison of Code Similarity Analysis Performance of funcGNN and Siamese Network

  • 최동빈;조인수;박용범
    • Journal of the Semiconductor & Display Technology / Vol. 20, No. 3 / pp.113-116 / 2021
  • As artificial intelligence technologies, including deep learning, develop, they are being introduced to code similarity analysis. Beyond the traditional method of calculating the graph edit distance (GED) after converting the source code into a control flow graph (CFG), there are studies that estimate the GED from the converted CFG with a trained graph neural network (GNN), and methods that analyze code similarity with a CNN by converting the CFG into an image are also being studied. In this paper, to determine which approach will be effective and efficient for future research on code similarity analysis using artificial intelligence, code similarity is measured through funcGNN, which measures code similarity with a GNN, and through a Siamese network, which is an image similarity analysis model, and their accuracy is compared and analyzed. As a result of the analysis, the error rate of the Siamese network (0.0458) was higher than that of funcGNN (0.0362).
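
For reference, the GED baseline mentioned above can be computed exactly on small graphs with networkx; real control-flow graphs are far larger, which is why funcGNN approximates this value with a trained GNN. The toy CFGs below are illustrative only:

```python
import networkx as nx

# Two toy control-flow graphs (nodes = basic blocks, edges = control flow).
g1 = nx.DiGraph([("entry", "loop"), ("loop", "loop"), ("loop", "exit")])
g2 = nx.DiGraph([("entry", "cond"), ("cond", "then"), ("cond", "else"),
                 ("then", "exit"), ("else", "exit")])

# Exact graph edit distance: number of node/edge insertions, deletions,
# and substitutions needed to turn g1 into g2.
ged = nx.graph_edit_distance(g1, g2)
print(f"graph edit distance: {ged}")
```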

Implementation of Cough Detection System Using IoT Sensor in Respirator

  • Shin, Woochang
    • International Journal of Advanced Smart Convergence / Vol. 9, No. 4 / pp.132-138 / 2020
  • Worldwide, the number of coronavirus disease 2019 (COVID-19) confirmed cases is rapidly increasing. Although vaccines and treatments for COVID-19 are being developed, the disease is unlikely to disappear completely. By attaching a smart sensor to the respirator worn by medical staff, Internet of Things (IoT) technology and artificial intelligence (AI) technology can be used to automatically detect the medical staff's infection symptoms. When medical staff show symptoms of the disease, appropriate medical treatment can be provided to protect them from greater risk. In this study, we design and develop a system that detects coughing, a typical symptom of respiratory infectious diseases, by applying IoT and AI technology to a respirator. Because the cough sound is distorted inside the respirator, it is difficult to guarantee accuracy with an AI model trained on ordinary cough sounds. Therefore, coughing and non-coughing sounds were recorded using a sensor attached to a respirator, and AI models were trained and their performance evaluated with these data. A mel-spectrogram conversion method was used to classify the sound data efficiently, and the developed cough recognition system achieved a sensitivity of 95.12%, a specificity of 100%, and an overall accuracy of 97.94%.
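
A minimal sketch of the mel-spectrogram conversion step described above, using librosa; the sampling rate, number of mel bands, and file name are assumptions, not the paper's settings:

```python
import librosa
import numpy as np

def to_mel_spectrogram(path, sr=16000, n_mels=64):
    """Load an audio clip and convert it to a log-scaled mel spectrogram,
    which can then be fed to an image-style classifier."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, frames)

# features = to_mel_spectrogram("cough_sample.wav")  # hypothetical recording from the respirator sensor
```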

Improved Convolutional Neural Network Based Cooperative Spectrum Sensing For Cognitive Radio

  • Uppala, Appala Raju;Narasimhulu C, Venkata;Prasad K, Satya
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 6 / pp.2128-2147 / 2021
  • Cognitive radio systems have recently been implemented to tackle spectrum underutilization problems and aid efficient data traffic. Spectrum sensing is the crucial step in cognitive applications, in which the cognitive user detects the presence of a primary user (PU) in a particular channel and switches to another channel for continuous transmission. In cognitive radio systems, the capacity to precisely identify the primary user's signal is essential for the secondary user to be able to use idle licensed spectrum. Based on this capability, a new spectrum sensing technique is proposed in this paper to identify all types of primary user signals in a cognitive radio environment. Hence, a spectrum sensing algorithm using an improved convolutional neural network and long short-term memory (CNN-LSTM) is presented. Our approach uses simulated annealing to discover a reasonable number of neurons for each layer of a fully connected deep neural network, thereby tackling the optimization problem. The probability of detection is considered the determining parameter for evaluating the efficiency of the proposed algorithm. Experiments are carried out under different signal-to-noise ratios to demonstrate the better performance of the proposed algorithm. The PU signal has an associated modulation format, and hence identifying the presence of a modulation format itself establishes the presence of the PU signal.
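
A hedged PyTorch sketch of a CNN-LSTM detector of the kind described above; the input representation (two I/Q channels), the layer sizes, and the sigmoid output head are assumptions for illustration, and in the paper the layer widths are tuned by simulated annealing rather than fixed by hand:

```python
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    """1-D CNN front end followed by an LSTM, ending in a PU present/absent score."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),  # 2 = I/Q channels
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, 2, samples)
        z = self.cnn(x)                          # (batch, 64, T)
        z, _ = self.lstm(z.transpose(1, 2))      # (batch, T, hidden)
        return torch.sigmoid(self.head(z[:, -1]))  # probability that a PU is present

# score = CNNLSTMDetector()(torch.randn(4, 2, 1024))  # -> (4, 1)
```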

Implementation of an Intelligent Video Detection System using Deep Learning in the Manufacturing Process of Tungsten Hexafluoride

  • 손승용;김영목;최두현
    • Korean Journal of Materials Research / Vol. 31, No. 12 / pp.719-726 / 2021
  • Tungsten Hexafluoride (WF6) is widely used by the semiconductor industry to form tungsten films through chemical vapor deposition. Tungsten Hexafluoride (WF6) is produced through manufacturing processes such as pulverization, wet smelting, calcination, and reduction of tungsten ores. The manufacturing process of Tungsten Hexafluoride (WF6) requires thorough quality control to improve productivity. In this paper, a real-time detection system for oxidation defects that occur in the manufacturing process of Tungsten Hexafluoride (WF6) is proposed. The proposed system is implemented by applying YOLOv5, which is based on a Convolutional Neural Network (CNN); it is expected to enable more stable management than the existing approach, which relies on skilled workers. The implementation of the proposed system and the results of a performance comparison are presented in this paper to demonstrate the feasibility of the method for improving the efficiency of the WF6 manufacturing process. The proposed system, which applies YOLOv5s, the model most suitable for the actual production environment, demonstrates high accuracy (mAP@0.5 of 99.4%) and real-time detection speed (46 FPS).
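
For context, YOLOv5 models can be loaded through torch.hub as sketched below; the paper trains custom weights on oxidation-defect images, so the pre-trained "yolov5s" checkpoint and the frame file name here only illustrate the inference API:

```python
import torch

# Load the small YOLOv5 variant (the paper selects YOLOv5s for the production line).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run detection on a single frame; the file name is hypothetical.
results = model("frame_from_wf6_line.jpg")
results.print()  # class, confidence, and bounding box for each detection
```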

SDCN: Synchronized Depthwise Separable Convolutional Neural Network for Single Image Super-Resolution

  • Muhammad, Wazir;Hussain, Ayaz;Shah, Syed Ali Raza;Shah, Jalal;Bhutto, Zuhaibuddin;Thaheem, Imdadullah;Ali, Shamshad;Masrour, Salman
    • International Journal of Computer Science & Network Security / Vol. 21, No. 11 / pp.17-22 / 2021
  • Recently, image super-resolution techniques based on convolutional neural networks (CNNs) have achieved remarkable performance in digital image processing applications and computer vision tasks. Stacking convolutional layers on top of each other allows a more complex network architecture, but it also uses more memory in terms of the number of parameters and introduces the vanishing-gradient problem during training. Furthermore, earlier approaches to single-image super-resolution used an interpolation technique as a pre-processing stage to upscale the low-resolution image to the HR size; these designs are simple but not effective, and they insert unwanted pixels (noise) into the reconstructed HR image. In this paper, the authors propose a novel single-image super-resolution architecture based on synchronized depthwise separable convolution with a Dense Skip Connection Block (DSCB). In addition, unlike existing SR methods that rely on only a single path, the proposed method uses synchronized paths to generate the SISR image. Extensive quantitative and qualitative experiments show that our method (SDCN) achieves promising improvements over other state-of-the-art methods.
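
The parameter savings that motivate the depthwise separable building block (shown here on its own, not the paper's full synchronized two-path design) can be seen in a short PyTorch sketch; channel counts and kernel size are assumed:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A standard 3x3 conv from 64 to 64 channels has 64*64*9 = 36,864 weights;
# the separable version uses 64*9 + 64*64 = 4,672, roughly 8x fewer.
block = DepthwiseSeparableConv(64, 64)
out = block(torch.randn(1, 64, 48, 48))  # -> (1, 64, 48, 48)
```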

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 6 / pp.2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, the existing strategies are limited to imposing an image-level constraint between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint based on a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps our EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
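
EpiLoc's exact loss is not given in the abstract, but the pixel-level epipolar constraint it builds on is the classical relation x2^T E x1 = 0 with E = [t]_x R; a minimal NumPy sketch, assuming normalized camera coordinates and an illustrative relative pose, is:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_residual(R, t, x1, x2):
    """Residual x2^T E x1 with E = [t]_x R, for normalized homogeneous image
    coordinates x1, x2 of the same 3-D point seen from two cameras related by
    rotation R and translation t; zero when pose and correspondence agree."""
    E = skew(t) @ R
    return float(x2 @ E @ x1)

# Toy check: identity rotation, translation along x, and a matching point
# lying on the resulting epipolar line gives a zero residual.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
print(epipolar_residual(R, t, np.array([0.0, 0.0, 1.0]), np.array([-0.5, 0.0, 1.0])))
```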