• Title/Summary/Keyword: VGGNet

Contactless Palmprint Identification Using the Pretrained VGGNet Model (사전 학습된 VGGNet 모델을 이용한 비접촉 장문 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1439-1447 / 2018
  • Palm image acquisition without contact is convenient and hygienic for users, but such images generally show more variation than those acquired with a contact plate or pegs. A palmprint identification method that is robust to affine variations is therefore needed. This study proposes a deep learning approach that can effectively identify contactless palmprints. Because it is very difficult to collect a large enough volume of palmprint images to train a deep convolutional neural network (DCNN) from scratch, we adopted a pretrained DCNN. We designed two new DCNNs based on the VGGNet: one combines the VGGNet with an SVM, and the other adds a shallow network on top of a middle-level layer of the VGGNet. Experimental results on two public palmprint databases show that the proposed method performs well not only on contact-based palmprints but also on contactless ones.
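
The pretrained-features-plus-SVM pipeline described above can be sketched with a minimal linear SVM. This is an illustrative sketch only: the Gaussian feature vectors stand in for activations that would be extracted from a pretrained VGGNet, and the Pegasos-style training loop is an assumed choice, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for mid-level VGGNet features: in the paper's setting these would
# be activations from a pretrained network; here we draw two separable
# Gaussian clusters so the sketch is self-contained.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 64)),
               rng.normal(+1.0, 1.0, (100, 64))])
y = np.hstack([-np.ones(100), np.ones(100)])   # labels in {-1, +1}

def train_linear_svm(X, y, lam=0.01, epochs=20):
    """Pegasos-style subgradient descent for a linear SVM."""
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * X[i].dot(w) < 1:          # margin violated: push w
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w

w = train_linear_svm(X, y)
acc = np.mean(np.sign(X.dot(w)) == y)
print(f"training accuracy: {acc:.2f}")
```

In practice the features would come from a frozen VGG layer rather than a random generator; the SVM replaces VGGNet's own softmax classifier head.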

A Study on the Outlet Blockage Determination Technology of Conveyor System using Deep Learning

  • Jeong, Eui-Han;Suh, Young-Joo;Kim, Dong-Ju
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.11-18 / 2020
  • This study proposes a deep learning technique for determining outlet blockage in a conveyor system. The proposed method aims to apply the best model to the actual process: we trained various CNN models for outlet-blockage determination using images collected by CCTV at an industrial site. We used well-known CNN models such as VGGNet, ResNet, DenseNet, and NASNet, with 18,000 CCTV images for model training and performance evaluation. In experiments with these models, VGGNet showed the best performance, with 99.03% accuracy and a 29.05 ms processing time, confirming that VGGNet is suitable for determining outlet blockage.
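
The selection criterion above (accuracy first, then per-image latency) can be sketched as follows. Everything here is hypothetical: the prediction functions, the accuracy figures, and the timing harness are illustrative stand-ins, not the study's setup.

```python
import time

# Stand-in inference functions; a real comparison would call the trained
# CNN models on held-out CCTV frames.
def slow_predict(x):
    time.sleep(0.001)   # simulate a heavier model
    return 0

def fast_predict(x):
    return 0

# Hypothetical (accuracy, predict_fn) pairs; only the names are from the study.
models = {"VGGNet": (0.9903, fast_predict),
          "ResNet": (0.9851, slow_predict)}

def benchmark(predict, samples, warmup=3):
    """Average per-image latency in milliseconds, excluding warm-up runs."""
    for x in samples[:warmup]:
        predict(x)
    t0 = time.perf_counter()
    for x in samples:
        predict(x)
    return (time.perf_counter() - t0) / len(samples) * 1000.0

samples = list(range(50))
report = {name: (acc, benchmark(fn, samples)) for name, (acc, fn) in models.items()}
best = max(report, key=lambda n: report[n][0])   # rank by accuracy first
print(best, report[best])
```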

Anomaly Detection using VGGNet for safety inspection of OPGW (광섬유 복합가공 지선(OPGW) 설비 안전점검을 위한 VGGNet 기반의 이상 탐지)

  • Kang, Gun-Ha;Sohn, Jung-Mo;Son, Do-Hyun;Han, Jeong-Ho
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.3-5 / 2022
  • This study uses VGGNet to classify optical fiber composite overhead ground wire (OPGW) facilities as normal or defective. OPGW is a critical facility that protects power lines and carries communications between power installations, so early detection of defects and maintenance before failures occur are important. Currently, KEPCO (Korea Electric Power Corporation) mainly relies on inspectors examining drone-captured video for anomalies, but this approach is limited by the inspectors' skill and experience in terms of accuracy, cost, and time. This study performed VGGNet-based normal/defect classification on drone-captured images. The results showed an accuracy of about 95.15%, a precision of about 96%, a recall of about 95%, and an F1-score of about 95%. To interpret the results, Grad-CAM, one of the explainable AI (XAI) algorithms, was applied. Automating this normal/defect classification reduces the cost and time inspectors spend on routine work and lets them focus on higher-value tasks. Moreover, because it performs objective inspection when finding failure defects, it is valuable for maintaining consistent inspection quality.
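
The Grad-CAM step mentioned above can be illustrated on synthetic tensors. The activations and gradients below are random stand-ins for a real network's values; only the combination rule itself is Grad-CAM's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grad-CAM's core step: given the activations A of a chosen conv layer
# (C channels, HxW spatial) and the gradient of the class score w.r.t. those
# activations, the channel weights are the spatial mean of the gradients and
# the map is a ReLU of the weighted sum of channels.
C, H, W = 8, 7, 7
activations = rng.random((C, H, W))
grads = rng.normal(size=(C, H, W))        # d(score)/d(activations)

weights = grads.mean(axis=(1, 2))         # alpha_c: global-average-pooled grads
cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
cam /= cam.max() + 1e-8                   # normalize to [0, 1] for overlay
print(cam.shape, cam.min(), cam.max())
```

Overlaying the upsampled map on the drone image is what lets an inspector see which region drove a "defective" decision.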

Effectiveness of the Detection of Pulmonary Emphysema using VGGNet with Low-dose Chest Computed Tomography Images (저선량 흉부 CT를 이용한 VGGNet 폐기종 검출 유용성 평가)

  • Kim, Doo-Bin;Park, Young-Joon;Hong, Joo-Wan
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.411-417 / 2022
  • This study aimed to train VGGNet and evaluate its effectiveness in detecting pulmonary emphysema from low-dose chest computed tomography images. In total, 8000 images with normal findings and 3189 images showing pulmonary emphysema were used, with 60%, 24%, and 16% of the normal and emphysema data randomly assigned to the training, validation, and test datasets, respectively. VGG16 and VGG19 were trained, and the accuracy, loss, confusion matrix, precision, recall, specificity, and F1-score were evaluated. On the low-dose chest CT test dataset, the accuracy and loss for pulmonary emphysema detection were 92.35% and 0.21% for VGG16 and 95.88% and 0.09% for VGG19, respectively. The precision, recall, and specificity were 91.60%, 98.36%, and 77.08% for VGG16 and 96.55%, 97.39%, and 92.72% for VGG19, and the F1-scores were 94.86% and 96.97%, respectively. Based on these evaluation metrics, VGG19 is judged to be more useful in detecting pulmonary emphysema. The findings of this study should serve as basic data for research on pulmonary emphysema detection models using VGGNet and artificial neural networks.
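
All of the reported metrics derive from the binary confusion matrix. A minimal sketch, with made-up counts (not the paper's data; "positive" = emphysema):

```python
# Hypothetical confusion counts for a binary emphysema/normal classifier.
tp, fn, fp, tn = 470, 13, 30, 380

precision   = tp / (tp + fp)              # of predicted positives, how many real
recall      = tp / (tp + fn)              # a.k.a. sensitivity
specificity = tn / (tn + fp)              # true-negative rate
f1          = 2 * precision * recall / (precision + recall)
accuracy    = (tp + tn) / (tp + tn + fp + fn)

print(f"precision={precision:.4f} recall={recall:.4f} "
      f"specificity={specificity:.4f} f1={f1:.4f} acc={accuracy:.4f}")
```

Note how a model can pair high recall with low specificity (as VGG16 does above): it catches nearly all emphysema cases but misclassifies many normal scans.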

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han;Lee, Byeong-Hee;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing a user-friendly interface in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or require preprocessing for efficient learning, and they fail to account for changes in the external environment such as lighting changes or the hand being partially obscured. This paper proposes a deep learning-based hand gesture recognition method that is robust to external environments, without preprocessing of the RGB images obtained from an ordinary webcam. We improve the VGGNet and GoogLeNet structures and compare the performance of each. On data containing dim, partially obscured, or partially out-of-view hand images, the improved VGGNet and GoogLeNet structures showed recognition rates of 93.88% and 93.75%, respectively. In terms of memory and speed, GoogLeNet used about 3 times less memory than VGGNet and processed images 10 times faster. The proposed method runs in real time and can serve as a hand gesture interface in areas such as games, education, and medical services in virtual reality environments.
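
The robustness conditions described above (dim lighting, partial occlusion) can be simulated directly on RGB arrays, for example when building an evaluation set. The scaling factor and patch coordinates below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# A random RGB frame standing in for a webcam capture of a hand.
img = rng.integers(0, 256, (224, 224, 3), dtype=np.uint8)

def dim(img, factor=0.4):
    """Scale brightness down to mimic poor lighting."""
    return (img.astype(np.float32) * factor).astype(np.uint8)

def occlude(img, top=60, left=60, size=80):
    """Black out a square patch to mimic a partially obscured hand."""
    out = img.copy()
    out[top:top + size, left:left + size] = 0
    return out

dimmed, occluded = dim(img), occlude(img)
print(dimmed.max(), (occluded[60:140, 60:140] == 0).all())
```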

A Study on the Defect Detection of Fabrics using Deep Learning (딥러닝을 이용한 직물의 결함 검출에 관한 연구)

  • Eun Su Nam;Yoon Sung Choi;Choong Kwon Lee
    • Smart Media Journal / v.11 no.11 / pp.92-98 / 2022
  • Identifying defects in textiles is a key procedure for quality control. This study created a model that detects defects by analyzing images of fabrics. The models used were the deep learning-based VGGNet and ResNet, whose defect detection performance was compared and evaluated. The accuracy of the VGGNet and ResNet models was 0.859 and 0.893, respectively, showing the higher accuracy of ResNet. In addition, the model's region of attention was derived using the Grad-CAM algorithm, an eXplainable Artificial Intelligence (XAI) technique, to locate the region the deep learning model recognized as a defect in the fabric image. As a result, it was confirmed that the region the model recognized as a defect was actually defective even to the naked eye. These results are expected to reduce the time and cost incurred in the fabric production process by applying deep learning-based artificial intelligence to defect detection in the textile industry.
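
Where the study verifies by eye that the attention region matches the real defect, one way to quantify the same check is an intersection-over-union (IoU) score between a thresholded attention map and a ground-truth defect mask. Both masks below are synthetic; the threshold is an arbitrary assumption.

```python
import numpy as np

# Synthetic attention map and defect label for one fabric image.
cam = np.zeros((64, 64))
cam[10:30, 10:30] = 1.0                      # region the model attends to
gt = np.zeros((64, 64), dtype=bool)
gt[12:32, 12:32] = True                      # human-labeled defect region

def iou(cam, gt, thresh=0.5):
    """IoU between the thresholded attention map and the defect mask."""
    pred = cam >= thresh
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

print(f"IoU = {iou(cam, gt):.3f}")
```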

Implementation of the Stone Classification with AI Algorithm Based on VGGNet Neural Networks (VGGNet을 활용한 석재분류 인공지능 알고리즘 구현)

  • Choi, Kyung Nam
    • Smart Media Journal / v.10 no.1 / pp.32-38 / 2021
  • Image classification through deep learning on photographic images has been a very active research field for the past several years. In this paper, we propose a method for automatically discriminating images of domestic stone through deep learning. Python's hash library is used to scan 300×300-pixel photographs of granites such as Hwangdeungseok, Goheungseok, and Pocheonseok; in preprocessing, the images for each stone are examined for duplicates, and images with the same hash value are removed to create the training set. To utilize VGGNet, the images for each stone are then resized to 224×224 pixels and trained in VGG16 with an 80:20 split between training and validation data. After deep learning, the loss and accuracy graphs were generated, and the model's prediction results were output for the three kinds of stone images.
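
The hash-based duplicate removal described above can be sketched with the standard hashlib module. The byte strings below stand in for image files, and SHA-256 is an assumed choice of hash function; the abstract does not specify which one the study used.

```python
import hashlib

# Byte strings standing in for stone photographs; the filenames are
# hypothetical examples for the three granite types named in the study.
images = {
    "hwangdeung_01.jpg": b"\x00\x01granite-A",
    "hwangdeung_02.jpg": b"\x00\x01granite-A",   # exact duplicate of 01
    "goheung_01.jpg":    b"\x00\x02granite-B",
    "pocheon_01.jpg":    b"\x00\x03granite-C",
}

def deduplicate(images):
    """Keep the first image for each distinct content hash."""
    seen, kept = set(), []
    for name, data in images.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(name)
    return kept

kept = deduplicate(images)
print(kept)
```

Content hashing only catches byte-identical duplicates; resized or re-encoded copies of the same photo would pass through and need perceptual hashing instead.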

Comparison of CNN Structures for Detection of Surface Defects (표면 결함 검출을 위한 CNN 구조의 비교)

  • Choi, Hakyoung;Seo, Kisung
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.7 / pp.1100-1104 / 2017
  • A detector-based approach shows limited performance for defect inspections such as shallow fine cracks and defects indistinguishable from the background. Deep learning is widely used for object recognition, and its application to defect detection has been gradually attempted. Deep learning requires a huge amount of training data, but data acquisition can be limited in some industrial applications. We investigate the possibility of applying CNNs, one of the deep learning approaches, to surface defect inspection of industrial parts whose detection difficulty is challenging and whose training data are insufficient. VOV is adopted for pre-processing and to obtain a reasonable number of ROIs for data augmentation; a CNN is then applied for classification. Three CNN networks, AlexNet, VGGNet, and a modified VGGNet, are compared in defect detection experiments.
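
The ROI-based augmentation mentioned above can be sketched as simple crops from a surface image. The coordinates are arbitrary, since the abstract does not detail how the pre-processing step selects ROIs.

```python
import numpy as np

# A gradient image standing in for a grayscale surface scan.
image = np.arange(128 * 128, dtype=np.float32).reshape(128, 128)

# Hypothetical ROI boxes as (top, left, height, width); in the study these
# would come from the pre-processing step.
rois = [(0, 0, 64, 64), (32, 32, 64, 64), (64, 64, 64, 64)]

def crop_rois(image, rois):
    """Cut each ROI out of the image, turning one scan into several samples."""
    return [image[t:t + h, l:l + w] for (t, l, h, w) in rois]

patches = crop_rois(image, rois)
print(len(patches), patches[0].shape)
```

Overlapping crops like these both multiply the sample count and keep each defect visible at several offsets, which is the usual rationale for ROI augmentation when data is scarce.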

A Deep Approach for Classifying Artistic Media from Artworks

  • Yang, Heekyung;Min, Kyungha
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.5 / pp.2558-2573 / 2019
  • We present a deep CNN-based approach for classifying artistic media from artwork images. We aim to classify the most frequently used artistic media, including oil-paint brush, watercolor brush, pencil, and pastel. For this purpose, we extend VGGNet, one of the most widely used CNN structures, by substituting its last layer with a fully convolutional layer, which reveals the class activation map (CAM), the region responsible for the classification. We build two artwork image datasets: YMSet, which collects more than 4K artwork images for the four most frequently used artistic media from various internet websites, and WikiSet, which collects almost 9K artwork images for the ten most frequently used media from WikiArt. We also run a human baseline experiment for comparison. Through our experiments, we conclude that our classifier is superior to humans in classifying artistic media.
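
The CAM construction described above (a pooled classifier head whose weights are reused per spatial location) can be illustrated in a few lines. The activations and weights are random stand-ins for a trained network's values.

```python
import numpy as np

rng = np.random.default_rng(3)

# CAM setup: final conv activations A (C channels, HxW) are globally average
# pooled and fed to a linear classifier; the same classifier weights applied
# per spatial location give the class activation map.
C, H, W, n_classes = 16, 14, 14, 4
A = rng.random((C, H, W))                 # final conv-layer activations
Wfc = rng.normal(size=(n_classes, C))     # classifier weights after pooling

scores = Wfc @ A.mean(axis=(1, 2))        # class scores from pooled features
c = int(scores.argmax())                  # predicted class
cam = np.tensordot(Wfc[c], A, axes=1)     # (H, W) map for the predicted class
print(c, cam.shape)
```

A useful sanity check: the spatial mean of the map equals the class score, since pooling and the linear layer commute.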

A Comparative Study on Deep Learning Models for Scaffold Defect Detection (인공지지체 불량 검출을 위한 딥러닝 모델 성능 비교에 관한 연구)

  • Lee, Song-Yeon;Huh, Yong Jeong
    • Journal of the Semiconductor & Display Technology / v.20 no.2 / pp.109-114 / 2021
  • When scaffold defects are inspected visually, inspection performance decreases and inspection time increases. An automatic scaffold defect detection method is therefore needed to increase detection accuracy and reduce inspection time. In this paper, we built scaffold defect classification models using the CNN-based DenseNet, AlexNet, and VGGNet algorithms. We photographed scaffolds with a multi-dimensional camera, trained the classification models on the captured images, and evaluated the classification accuracy of each model. As a result, the defect classification performance was 99.1% with DenseNet, 98.3% with VGGNet, and 96.8% with AlexNet. We were thus able to quantitatively compare the defect classification performance of the three CNN-based algorithms.
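
A minimal sketch of the quantitative comparison above: score each model on a shared labeled test set and rank by accuracy. The labels and per-model predictions are fabricated; only the three model names come from the study.

```python
# Ground-truth defect labels for a shared test set (1 = defective).
labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

# Hypothetical predictions from each trained model on the same images.
predictions = {
    "DenseNet": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],   # 10/10 correct
    "VGGNet":   [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],   # 9/10 correct
    "AlexNet":  [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],   # 8/10 correct
}

def accuracy(pred, truth):
    """Fraction of test images classified correctly."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

ranking = sorted(((accuracy(p, labels), name) for name, p in predictions.items()),
                 reverse=True)
for acc, name in ranking:
    print(f"{name}: {acc:.1%}")
```

Because every model is scored on the identical test set, the resulting percentages are directly comparable, which is what makes the study's three-way ranking meaningful.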