• Title/Abstract/Keywords: CNN (Convolutional Neural Networks)

Search results: 78 items (processing time: 0.024 s)

표면 결함 검출을 위한 CNN 구조의 비교 (Comparison of CNN Structures for Detection of Surface Defects)

  • 최학영;서기성
    • 전기학회논문지
    • /
• Vol. 66, No. 7
    • /
    • pp.1100-1104
    • /
    • 2017
  • A detector-based approach shows limited performance for defect inspections such as shallow fine cracks and defects indistinguishable from the background. Deep learning techniques are widely used for object recognition, and their application to defect detection has gradually been attempted. Deep learning requires a large amount of training data, but data acquisition can be limited in some industrial applications. The possibility of applying CNN, one of the deep learning approaches, to surface defect inspection is investigated for industrial parts whose detection is challenging and whose training data are insufficient. VOV is adopted for pre-processing and to obtain a reasonable number of ROIs for data augmentation. A CNN is then applied for classification. Three CNN networks, AlexNet, VGGNet, and modified VGGNet, are compared in defect-detection experiments.
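The ROI-based augmentation step can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes defect locations are known and simply takes several randomly jittered crops around each one so a CNN sees more varied training windows.

```python
import numpy as np

def augment_rois(image, centers, roi_size=32, n_shifts=4, max_shift=4, seed=0):
    """Crop several randomly shifted ROIs around each defect center."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    half = roi_size // 2
    rois = []
    for (cy, cx) in centers:
        for _ in range(n_shifts):
            dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
            # clip so the crop always stays inside the image
            y = int(np.clip(cy + dy, half, h - half))
            x = int(np.clip(cx + dx, half, w - half))
            rois.append(image[y - half:y + half, x - half:x + half])
    return np.stack(rois)

img = np.random.rand(128, 128)
rois = augment_rois(img, [(40, 40), (90, 100)])
print(rois.shape)  # (8, 32, 32): 2 centers x 4 jittered crops
```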

CNN 모델을 활용한 콘크리트 균열 검출 및 시각화 방법 (Concrete Crack Detection and Visualization Method Using CNN Model)

  • 최주희;김영관;이한승
    • 한국건축시공학회:학술대회논문집
    • /
    • 한국건축시공학회 2022년도 봄 학술논문 발표대회
    • /
    • pp.73-74
    • /
    • 2022
  • Concrete structures occupy the largest proportion of modern infrastructure and often suffer from cracking. Existing concrete crack diagnosis methods rely on expert visual inspection, which limits crack evaluation. Therefore, in this study, we design a deep learning model that detects, visualizes, and outputs cracks on the surface of RC structures from image data, using a CNN (Convolutional Neural Networks) model that can process two- and three-dimensional data such as video and images. An experimental study was conducted on an algorithm that automatically detects concrete cracks and visualizes them using a CNN model. All three deep learning models used for training in this study achieved concrete crack prediction accuracy of at least 90%, with the InceptionV3-based CNN model showing the highest accuracy. The crack detection visualization model showed high prediction accuracy, averaging more than 95% for data with crack widths of 0.2 mm or more.
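One common way to turn a patch classifier into a crack visualization is sliding-window scoring. The sketch below assumes a stand-in scoring function (dark pixels suggest a crack); in the paper this role would be played by the trained InceptionV3-based CNN.

```python
import numpy as np

def crack_heatmap(image, window=16, stride=16, score_fn=None):
    """Score each window of the image and assemble a coarse heat map."""
    if score_fn is None:
        # stand-in "crack score": darker patches score higher
        score_fn = lambda patch: 1.0 - patch.mean()
    h, w = image.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            heat[i, j] = score_fn(image[y:y + window, x:x + window])
    return heat

img = np.ones((64, 64))
img[30:34, :] = 0.0        # synthetic dark "crack" line
heat = crack_heatmap(img)
print(heat.shape)          # (4, 4)
```

The resulting heat map can be upsampled and overlaid on the input image for visualization; windows covering the dark line score higher than crack-free windows.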


CNN based Sound Event Detection Method using NMF Preprocessing in Background Noise Environment

  • Jang, Bumsuk;Lee, Sang-Hyun
    • International journal of advanced smart convergence
    • /
• Vol. 9, No. 2
    • /
    • pp.20-27
    • /
    • 2020
  • Sound event detection in real-world environments suffers from interference by non-stationary, time-varying noise. This paper presents an adaptive noise-reduction method for sound event detection based on non-negative matrix factorization (NMF). We propose a deep learning model that integrates a Convolutional Neural Network (CNN) with NMF. To improve the separation quality of the NMF, the method includes a noise-update technique that learns and adapts to the characteristics of the current noise in real time. The noise-update technique analyzes the sparsity and activity of the noise bias at the present time and decides on update training based on the noise candidate group obtained each frame in the previous noise-reduction stage. Noise-bias ranks selected as candidates for update training are updated in real time with discriminative NMF training. This NMF was applied to a CNN and a Hidden Markov Model (HMM) to improve sound event detection performance. Since the CNN shows the clearer performance improvement, the method is well suited to CNN-based sound-event detection.
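The core factorization V ≈ W·H underlying this preprocessing can be sketched with the classic Lee-Seung multiplicative updates. This is only the basic NMF step; the paper's adaptive noise-bias update is not reproduced here.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorize a non-negative matrix V into W @ H by
    multiplicative updates (Lee & Seung)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # updates preserve non-negativity by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy magnitude spectrogram: 64 frequency bins x 100 frames
V = np.random.default_rng(1).random((64, 100))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(W.shape, H.shape)  # (64, 2) (2, 100)
```

In source separation, columns of W act as spectral bases (e.g., noise vs. event) and rows of H as their time activations.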

인공신경망의 연결압축에 대한 연구 (A Study on Compression of Connections in Deep Artificial Neural Networks)

  • 안희준
    • 한국산업정보학회논문지
    • /
• Vol. 22, No. 5
    • /
    • pp.17-24
    • /
    • 2017
  • Recently, deep learning, i.e., techniques using large or deep artificial neural networks, has shown remarkable performance, and network sizes keep growing. However, increased network size leads to increased computation, causing problems such as circuit complexity, cost, heat, and real-time constraints. In addition, neural network connections contain considerable redundancy. In this study, we propose and test a method that effectively removes this redundancy to reduce the number of network connections while keeping the performance difference from the original network within a desired range. In particular, we propose a simple method that improves performance through retraining and guarantees the desired performance by allocating a per-layer error rate to account for differences between layers. Experiments on the representative image-recognition structures FCN (fully connected) and CNN (convolutional neural network) confirmed that about 1/10 of the connections suffice to achieve performance similar to the original network.
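The connection-removal idea can be sketched as magnitude pruning with a per-layer keep ratio. This is a minimal illustration under assumed parameters; the paper's retraining step and error-rate allocation are not shown.

```python
import numpy as np

def prune_layer(W, keep_ratio):
    """Keep only the largest-magnitude connections of a weight
    matrix, zeroing the rest. A per-layer keep_ratio lets
    sensitive layers retain more connections."""
    flat = np.abs(W).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # threshold = k-th largest magnitude
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(W) >= threshold
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
Wp, mask = prune_layer(W, keep_ratio=0.1)
print(mask.mean())  # about 0.1 of connections survive
```

After pruning, a brief retraining pass with the mask held fixed typically recovers most of the lost accuracy, which is the effect the paper exploits.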

2차원 변환과 CNN 딥러닝 기반 음향 인식 시스템에 관한 연구 (A Study on Sound Recognition System Based on 2-D Transformation and CNN Deep Learning)

  • 하태민;조성원;;;이기성
    • 스마트미디어저널
    • /
• Vol. 11, No. 1
    • /
    • pp.31-37
    • /
    • 2022
  • This paper presents a study applying signal processing and deep learning to sound recognition that detects sounds commonly heard in daily life (screams, clapping, multiple people clapping, passing cars, background sounds, etc.). In the proposed sound recognition, techniques for the spectrum of the sound waveform, sound-data augmentation, and two-dimensional (2-D) image transformation are used to improve recognition accuracy, and ensemble learning and Convolutional Neural Network (CNN) deep learning are applied to improve prediction accuracy. Experiments show that the proposed sound-recognition technique can accurately recognize a variety of sounds.
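The 2-D transformation step can be sketched as a short-time Fourier transform producing a log-magnitude spectrogram, the kind of image a CNN can consume. Window and hop sizes here are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Turn a 1-D waveform into a 2-D log-magnitude spectrogram."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    S = np.array(frames).T          # (freq bins, time frames)
    return np.log1p(S)

t = np.linspace(0, 1, 8000, endpoint=False)
wave = np.sin(2 * np.pi * 440 * t)  # 440 Hz tone sampled at 8 kHz
S = spectrogram(wave)
print(S.shape)  # (129, 61)
```

For a pure 440 Hz tone at 8 kHz sampling, the energy concentrates near frequency bin 440 / (8000/256) ≈ 14, which is what a CNN would pick up as a horizontal stripe in the spectrogram image.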

Comparison of Fine-Tuned Convolutional Neural Networks for Clipart Style Classification

  • Lee, Seungbin;Kim, Hyungon;Seok, Hyekyoung;Nang, Jongho
    • International Journal of Internet, Broadcasting and Communication
    • /
• Vol. 9, No. 4
    • /
    • pp.1-7
    • /
    • 2017
  • Clipart is artificial visual content created using tools such as Illustrator to highlight information, and its style plays a critical role in determining how it looks. However, previous studies on clipart focused only on object recognition [16], segmentation, and retrieval of clipart images using hand-crafted image features. Recently, some clipart classification studies based on style similarity using CNNs have been proposed; however, they used different CNN models and experimented on different benchmark datasets, making their performances very hard to compare. This paper presents an experimental analysis of clipart classification based on style similarity with two well-known CNN models (Inception ResNet V2 [13] and VGG-16 [14]) and transfer learning on the same benchmark dataset (Microsoft Style Dataset 3.6K). From this experiment, we find that Inception ResNet V2 is more accurate than VGG-16 for clipart style classification because of its depth and its parallel convolution maps of various sizes. We also find that end-to-end training can improve accuracy by more than 20% in both CNN models.

객체 추적을 위한 보틀넥 기반 Siam-CNN 알고리즘 (Bottleneck-based Siam-CNN Algorithm for Object Tracking)

  • 임수창;김종찬
    • 한국멀티미디어학회논문지
    • /
• Vol. 25, No. 1
    • /
    • pp.72-81
    • /
    • 2022
  • Visual object tracking is one of the most fundamental problems in computer vision: it localizes the region of a target object with a bounding box in a video. In this paper, a custom CNN is created to extract object features that carry strong and varied information. This network was constructed as a Siamese network for use as a feature extractor. The input images pass through convolution blocks composed of bottleneck layers, which emphasize the features. The feature maps of the target object and the search area extracted from the Siamese network were input to a local proposal network, which estimates the object area from the feature maps. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, using the success plot and precision plot as evaluation metrics. The experiments achieved 0.611 on the success plot and 0.831 on the precision plot.
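The matching step at the heart of Siamese trackers can be sketched as cross-correlating the template's feature map over the search region's feature map; the response peak estimates the target location. Raw 2-D arrays stand in for CNN feature maps here.

```python
import numpy as np

def siamese_response(template, search):
    """Slide the template over the search map, computing an
    inner-product response at every offset (cross-correlation)."""
    th, tw = template.shape
    sh, sw = search.shape
    resp = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return resp

rng = np.random.default_rng(0)
template = rng.random((8, 8))
search = rng.random((32, 32)) * 0.1
search[12:20, 5:13] = template     # embed the target in the search area
resp = siamese_response(template, search)
peak = np.unravel_index(resp.argmax(), resp.shape)
print(peak)  # (12, 5): top-left corner of the embedded target
```

In a full tracker this response map would be refined by the proposal network into a bounding box rather than read off directly.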

홈보안 시스템을 위한 CNN 기반 2D와 2.5D 얼굴 인식 (CNN Based 2D and 2.5D Face Recognition For Home Security System)

  • ;김강철
    • 한국전자통신학회논문지
    • /
• Vol. 14, No. 6
    • /
    • pp.1207-1214
    • /
    • 2019
  • The technologies of the Fourth Industrial Revolution are seeping into our lives without our noticing. Since CNNs demonstrated outstanding ability in image recognition, many IoT-based home security systems have adopted the CNN as a good biometric method for recognizing faces, protecting homes and families from intruders. This paper studies CNN structures with various input image sizes and filters for 2D and 2.5D images. The experimental results show that a CNN structure with 2.5D input images of size 50*50, two convolution and max-pooling layers, and 3*3 filters achieved a recognition rate of 0.966, with a longest CPU time of 0.057 s per input image. Since home security systems require a good face recognition rate and a short computation time, the CNN structure proposed in this paper is a suitable method for face-recognition-based actuator control in home security systems.
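The spatial size flowing through the described architecture (50*50 input, two 3*3 conv + max-pool stages) can be traced with a few lines. Valid (no-padding) convolution and 2*2 pooling are assumptions here, since the abstract does not state padding or pooling sizes.

```python
def cnn_output_shape(size=50, n_blocks=2, kernel=3, pool=2):
    """Trace the spatial size through repeated conv + pool blocks."""
    for _ in range(n_blocks):
        size = size - kernel + 1   # valid 3x3 convolution
        size = size // pool        # 2x2 max pooling
    return size

# 50 -> conv 48 -> pool 24 -> conv 22 -> pool 11
print(cnn_output_shape())  # 11
```

Under these assumptions the feature maps entering the final classifier are 11*11, which keeps the fully connected part small and helps explain the short per-image CPU time reported.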

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
• Vol. 18, No. 2
    • /
    • pp.456-477
    • /
    • 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet, so adversarial samples have become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insight into the behavior and characteristics of malware, evaluate the performance of existing detectors in the face of deceptive samples, and help discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still have limitations in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are effective only in specific environments and yield limited evasion effectiveness. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into grey-scale images and introduces the pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale image, improving the modeling ability of the generator and discriminator and thus enhancing the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% on Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), CNN with Recurrent Neural Network (CNN_RNN), and CNN with Long Short-Term Memory (CNN_LSTM) detectors, respectively.
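The first step, transforming a malware binary into a grey-scale image, can be sketched as follows; the fixed image width and zero-padding are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def bytes_to_grayscale(data, width=32):
    """Map a raw byte sequence to a 2-D grey-scale image:
    each byte becomes a pixel intensity 0-255, and the sequence
    is zero-padded so the last row is full."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = -(-arr.size // width)               # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:arr.size] = arr
    return padded.reshape(rows, width)

# toy "binary": 1280 bytes cycling through all byte values
img = bytes_to_grayscale(bytes(range(256)) * 5, width=32)
print(img.shape, img.dtype)  # (40, 32) uint8
```

Images produced this way are what a DCGAN-style generator and discriminator would then operate on.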

Application of Convolutional Neural Networks (CNN) for Bias Correction of Satellite Precipitation Products (SPPs) in the Amazon River Basin

  • Alena Gonzalez Bevacqua;Xuan-Hien Le;Giha Lee
    • 한국수자원학회:학술대회논문집
    • /
    • 한국수자원학회 2023년도 학술발표회
    • /
    • pp.159-159
    • /
    • 2023
  • The Amazon River basin is one of the largest basins in the world, and its ecosystem is vital for biodiversity, hydrology, and climate regulation. Understanding its hydrometeorological processes is therefore essential to maintaining the basin. However, monitoring the Amazon River basin remains difficult because of its size and the low density of its gauge network. To address these issues, remote sensing products have been widely used, yet those products have limitations. Therefore, this study performs bias correction to improve the accuracy of Satellite Precipitation Products (SPPs) in the Amazon River basin. We use 331 rainfall stations for the observed data and two daily gridded satellite precipitation datasets (CHIRPS, TRMM). Due to limitations of the observed data, the analysis period was set from 1 January 1990 to 31 December 2010. The observed data were interpolated to the same resolution as the SPP data using the IDW method. For bias correction, we use convolutional neural networks (CNN) combined with an autoencoder architecture (ConvAE). To evaluate the bias-correction performance, we used statistical indicators such as NSE, RMSE, and MAD. These results can improve the quality of precipitation data in the Amazon River basin, improving its monitoring and management.
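The IDW interpolation used to bring gauge observations onto the SPP grid can be sketched in a few lines. Station coordinates and rainfall values below are made up for illustration; the power parameter is the conventional choice of 2.

```python
import numpy as np

def idw_interpolate(xy_obs, values, xy_grid, power=2, eps=1e-12):
    """Inverse-distance-weighted interpolation: each grid point is
    a weighted mean of observations, weighted by 1 / distance^power."""
    # pairwise distances: (n_grid, n_obs)
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)   # eps avoids division by zero
    return (w * values).sum(axis=1) / w.sum(axis=1)

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # gauge locations
rain = np.array([10.0, 20.0, 30.0])                   # observed rainfall
grid = np.array([[0.0, 0.0], [0.5, 0.5]])             # SPP grid cells
est = idw_interpolate(obs, rain, grid)
```

A grid point coinciding with a gauge reproduces that gauge's value, while a point equidistant from several gauges receives their average, which is the behavior expected of IDW.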
