• Title/Summary/Keyword: GoogLeNet

Streamlined GoogLeNet Algorithm Based on CNN for Korean Character Recognition (한글 인식을 위한 CNN 기반의 간소화된 GoogLeNet 알고리즘 연구)

  • Kim, Yeon-gyu;Cha, Eui-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.9 / pp.1657-1665 / 2016
  • Various fields are being researched through deep learning using CNNs (convolutional neural networks), and these studies show excellent performance in image recognition. In this paper, we propose a streamlined GoogLeNet CNN architecture capable of learning a large-scale Korean character database. The experimental data used in this paper is PHD08, a large-scale Korean character database with 2,187 samples for each of 2,350 Korean characters, for a total of 5,139,450 samples. After training, the streamlined GoogLeNet showed over 99% test accuracy on PHD08. To ensure objectivity, we also created additional Korean character data with fonts not included in PHD08 and compared the classification performance of the streamlined GoogLeNet with other OCR programs. While the other OCR programs showed classification success rates of 66.95% to 83.16%, the streamlined GoogLeNet achieved a rate of 89.14%, higher than any of the OCR programs.
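
One way to picture the classification setup described in this abstract is a GoogLeNet-style network with a 2,350-way output layer for the PHD08 classes. The sketch below uses the stock torchvision GoogLeNet rather than the paper's streamlined variant (whose exact layer configuration is not given here), so every hyperparameter shown is an illustrative assumption.

```python
# Minimal sketch (not the paper's streamlined architecture): a GoogLeNet
# classifier with a 2,350-way output for the PHD08 character classes,
# built with torchvision (0.13+ "weights" API assumed). Batch size, image
# size, and optimizer settings are illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2350  # Korean character classes in PHD08

# GoogLeNet trained from scratch, without the auxiliary classifiers
model = models.googlenet(weights=None, aux_logits=False, num_classes=NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One illustrative training step; PHD08 glyphs are binary images, assumed
# here to be replicated to 3 channels and resized to 224x224.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```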

Variations of AlexNet and GoogLeNet to Improve Korean Character Recognition Performance

  • Lee, Sang-Geol;Sung, Yunsick;Kim, Yeon-Gyu;Cha, Eui-Young
    • Journal of Information Processing Systems / v.14 no.1 / pp.205-217 / 2018
  • Deep learning using convolutional neural networks (CNNs) is being studied in various fields of image recognition, and these studies show excellent performance. In this paper, we compare the performance of two CNN architectures, KCR-AlexNet and KCR-GoogLeNet. The experimental data used in this paper are obtained from PHD08, a large-scale Korean character database with 2,187 samples of each of 2,350 Korean character classes, for a total of 5,139,450 data samples. In the training results, KCR-AlexNet showed a top-1 test accuracy of over 98% and KCR-GoogLeNet showed a top-1 test accuracy of over 99% after the final training iteration. We made an additional Korean character dataset with fonts that were not in PHD08 to compare the classification success rate with commercial optical character recognition (OCR) programs and to ensure the objectivity of the experiment. While the commercial OCR programs showed classification success rates of 66.95% to 83.16%, KCR-AlexNet and KCR-GoogLeNet showed average classification success rates of 90.12% and 89.14%, respectively, which are higher than the commercial OCR programs' rates. Considering the time factor, KCR-AlexNet was faster than KCR-GoogLeNet when training on PHD08, while KCR-GoogLeNet had a faster classification speed.
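
The speed comparison at the end of the abstract (AlexNet faster to train, GoogLeNet faster at classification) can be probed with a rough benchmark like the one below. It times the stock torchvision AlexNet and GoogLeNet, not the KCR-* variants, so it only illustrates the kind of measurement, not the paper's numbers.

```python
# Rough latency comparison of stock AlexNet and GoogLeNet (torchvision,
# "weights" API assumed), both given a 2,350-way output layer. Not the
# KCR-AlexNet / KCR-GoogLeNet architectures from the paper.
import time
import torch
from torchvision import models

batch = torch.randn(16, 3, 224, 224)

for name, builder in [("AlexNet", models.alexnet), ("GoogLeNet", models.googlenet)]:
    model = builder(weights=None, num_classes=2350).eval()
    with torch.no_grad():
        model(batch)  # warm-up pass
        start = time.perf_counter()
        for _ in range(10):
            model(batch)
        per_batch = (time.perf_counter() - start) / 10
    print(f"{name}: {per_batch * 1000:.1f} ms per batch of 16 (CPU)")
```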

Application of Convolution Neural Network to Flare Forecasting using solar full disk images

  • Yi, Kangwoo;Moon, Yong-Jae;Park, Eunsu;Shin, Seulki
    • The Bulletin of The Korean Astronomical Society / v.42 no.2 / pp.60.1-60.1 / 2017
  • In this study we apply a convolutional neural network (CNN) to solar flare occurrence prediction with various parameter options, using the 00:00 UT MDI images from 1996 to 2010 (4,962 images in total). We assume that only X, M and C class flares correspond to "flare occurrence" and the others to "non-flare". We looked for the best options for the models with two pre-trained CNNs (AlexNet and GoogLeNet) by modifying the training images and changing hyperparameters. Our major results are as follows. First, the flare occurrence predictions are relatively good, with accuracies of about 80%. Second, the flare prediction models based on AlexNet and GoogLeNet give similar results, but AlexNet is faster than GoogLeNet. Third, modifying the training images to reduce the projection effect is not effective. Fourth, the skill scores of our flare occurrence model are mostly better than those of previous models.
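
The flare/non-flare labelling rule stated in the abstract (X, M and C class events count as a flare, everything else as non-flare) is simple enough to write down directly; the sketch below assumes each daily image comes with the largest GOES flare class of that day as a string.

```python
# Minimal sketch of the labelling rule described above: a day is labelled
# "flare" if its largest GOES flare class is C, M, or X, otherwise "non-flare".
# The input format (one class string per daily image) is an assumption.
def flare_label(goes_class: str) -> int:
    """Return 1 for flare occurrence (C/M/X class), 0 for non-flare."""
    return 1 if goes_class and goes_class[0].upper() in {"C", "M", "X"} else 0

# Example: B-class events and flare-quiet days count as non-flare.
print(flare_label("M1.5"), flare_label("B7.2"), flare_label(""))  # 1 0 0
```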

Deep Convolution Neural Networks in Computer Vision: a Review

  • Yoo, Hyeon-Joong
    • IEIE Transactions on Smart Processing and Computing / v.4 no.1 / pp.35-43 / 2015
  • Over the past couple of years, tremendous progress has been made in applying deep learning (DL) techniques to computer vision. In particular, deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance on standard recognition datasets and tasks such as the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Among them, the GoogLeNet network, a radically redesigned DCNN based on the Hebbian principle and scale invariance, set a new state of the art for classification and detection in ILSVRC 2014. Since there exist various deep learning techniques, this review focuses on techniques directly related to DCNNs, especially those needed to understand the architecture and techniques employed in the GoogLeNet network.
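
Since the review centers on the architectural ideas behind GoogLeNet, a compact reminder of its core building block may help: the Inception module runs 1x1, 3x3 and 5x5 convolutions plus a pooling branch in parallel, with 1x1 convolutions for dimension reduction, and concatenates the results. The sketch below is a bare-bones version (ReLUs and any normalization omitted); the example channel counts are those of the original GoogLeNet's inception (3a) stage.

```python
# Minimal sketch of the Inception module that GoogLeNet is built from:
# parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch, with 1x1
# convolutions used for dimension reduction, concatenated channel-wise.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1),   # 1x1 reduction
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1),   # 1x1 reduction
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        return torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x), self.branch_pool(x)],
            dim=1,  # concatenate branch outputs along the channel axis
        )

# Example: the inception (3a) stage of the original GoogLeNet uses these counts.
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(block(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```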

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han;Lee, Byeong-Hee;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing a user-friendly interface in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or require pre-processing for efficient learning, and they do not take into account changes in the external environment such as changes in lighting or partial occlusion of the hand. This paper proposes a deep learning-based hand gesture recognition method that is robust to external environments and requires no pre-processing of the RGB images obtained from an ordinary webcam. We improve the VGGNet and GoogLeNet structures and compare the performance of each. The improved VGGNet and GoogLeNet structures showed recognition rates of 93.88% and 93.75%, respectively, on data containing dim, partially obscured, or partially out-of-frame hand images. In terms of memory and speed, GoogLeNet used about 3 times less memory than VGGNet and its processing speed was about 10 times faster. The proposed method runs in real time and can be used as a hand gesture interface in areas such as games, education, and medical services in virtual reality environments.
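
The memory gap reported above (GoogLeNet using about 3 times less memory than VGGNet) is largely a matter of parameter count, which can be checked directly on the stock torchvision models; the paper's modified structures are not reproduced here, so the numbers only indicate the order of magnitude.

```python
# Parameter counts of stock VGG-16 and GoogLeNet from torchvision, as a
# rough proxy for the memory figures quoted above (the paper's modified
# networks are not reproduced here).
from torchvision import models

for name, builder in [("VGG-16", models.vgg16), ("GoogLeNet", models.googlenet)]:
    model = builder(weights=None)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
# VGG-16's large fully connected layers account for most of its much bigger
# footprint, which is consistent with the memory gap reported in the abstract.
```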

Comparative Experiment of Cloud Classification and Detection of Aerial Image by Deep Learning (딥러닝에 의한 항공사진 구름 분류 및 탐지 비교 실험)

  • Song, Junyoung;Won, Taeyeon;Jo, Su Min;Eo, Yang Dam;Park, So young;Shin, Sang ho;Park, Jin Sue;Kim, Changjae
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.409-418 / 2021
  • As the volume of aerial photography projects increases, the need to automate quality inspection is emerging. In this study, an experiment was performed to classify or detect clouds in aerial photos using deep learning techniques, with classification and detection also performed after adding satellite images to the training data. GoogLeNet, VGG16, Faster R-CNN and YOLOv3 were applied and their results compared. In addition, considering the practical difficulty of securing erroneous aerial images that contain clouds, we analyzed whether additional training on satellite images affects classification and detection accuracy compared with a training dataset that contains only aerial images. As a result, GoogLeNet and YOLOv3 showed relatively superior accuracy in cloud classification and detection of aerial images, respectively: GoogLeNet achieved a producer's accuracy of 83.8% for the cloud class and YOLOv3 achieved a producer's accuracy of 84.0%. The addition of satellite images to the training data can therefore serve as an alternative when aerial image data are lacking.
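
Producer's accuracy, the metric quoted above, is the fraction of reference samples of a class that the classifier labels correctly (per-class recall from the reference data's point of view). A small sketch, with an assumed rows-as-reference confusion-matrix layout and made-up counts:

```python
# Minimal sketch of the producer's accuracy metric: for a given reference
# class, the share of its reference samples that were classified correctly.
import numpy as np

def producers_accuracy(confusion: np.ndarray, class_idx: int) -> float:
    """confusion[i, j] = number of reference-class-i samples predicted as class j."""
    correct = confusion[class_idx, class_idx]
    total_reference = confusion[class_idx, :].sum()
    return correct / total_reference

# Illustrative 2x2 matrix for classes [cloud, clear]; the counts are made up.
cm = np.array([[84, 16],   # 100 reference cloud samples, 84 classified as cloud
               [10, 90]])  # 100 reference clear samples
print(f"producer's accuracy (cloud): {producers_accuracy(cm, 0):.1%}")  # 84.0%
```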

Comparative Learning based Deep Learning Algorithm for Abnormal Beat Detection using Imaged Electrocardiogram Signal (비정상심박 검출을 위해 영상화된 심전도 신호를 이용한 비교학습 기반 딥러닝 알고리즘)

  • Bae, Jinkyung;Kwak, Minsoo;Noh, Kyeungkap;Lee, Dongkyu;Park, Daejin;Lee, Seungmin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.30-40 / 2022
  • The shape and characteristics of electrocardiogram (ECG) signals vary from individual to individual, so they are difficult to classify with a single neural network. Classifying a given beat directly is difficult, but if the corresponding normal beat is available, classifying the beat by comparing the two is relatively easy and accurate. In this study, we classify ECG signals by generating a reference normal beat through template clustering and combining it with the input ECG beat. By training on and classifying imaged ECG beats combined with their corresponding reference normal beats, abnormal beats from various individuals' records can be detected with a single neural network. Various neural networks, such as GoogLeNet, ResNet, and DarkNet, showed excellent performance with this comparative learning, and GoogLeNet achieved a sensitivity of 99.72%, the highest of the three.
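
The comparative-learning input described above pairs each beat with a reference normal beat from the same record before it is fed to the network. The sketch below uses simple two-channel stacking as a stand-in for the paper's imaging step, whose exact rendering parameters are not given in the abstract.

```python
# Minimal sketch of the comparative input: an input beat and its reference
# normal beat are paired into a single two-channel array that a CNN can
# classify. Channel stacking is an assumed simplification of the paper's
# imaging scheme.
import numpy as np

def pair_beats(input_beat: np.ndarray, reference_beat: np.ndarray) -> np.ndarray:
    """Return a (2, L) array: channel 0 = beat to classify, channel 1 = reference."""
    assert input_beat.shape == reference_beat.shape

    def norm(x):
        # Normalize each beat so amplitude differences between records do not dominate.
        return (x - x.mean()) / (x.std() + 1e-8)

    return np.stack([norm(input_beat), norm(reference_beat)], axis=0)

# Example with dummy 250-sample beats; in the paper's pipeline the reference
# beat would come from the record's template cluster.
x = pair_beats(np.random.randn(250), np.random.randn(250))
print(x.shape)  # (2, 250)
```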

Architectures of Convolutional Neural Networks for the Prediction of Protein Secondary Structures (단백질 이차 구조 예측을 위한 합성곱 신경망의 구조)

  • Chi, Sang-Mun
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.5 / pp.728-733 / 2018
  • Deep learning has been actively studied for predicting protein secondary structure based only on the sequence information of the amino acids constituting the protein. In this paper, we compare the performance of convolutional neural networks of various structures for predicting protein secondary structure. To investigate the optimal network depth for this task, we examined performance as a function of the number of layers. We also applied the GoogLeNet and ResNet structures, which serve as building blocks of many image classification methods; these structures extract various features from the input data and smooth gradient propagation during training even when many layers are used. These convolutional architectures were modified to suit the characteristics of protein data to improve performance.
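
Adapting GoogLeNet-style blocks to protein sequences, as the abstract describes, mainly means replacing 2-D convolutions over images with 1-D convolutions over residue positions. The sketch below shows one such inception-style block; the channel counts, kernel sizes, 21-dimensional input encoding, and 8-state output are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a GoogLeNet-style (inception) block adapted to 1-D
# protein sequence features: parallel convolutions with different receptive
# fields over the sequence, concatenated channel-wise.
import torch
import torch.nn as nn

class Inception1d(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch_per_branch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 7)
        ])

    def forward(self, x):                      # x: (batch, channels, sequence_length)
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: 21-dimensional one-hot amino acid input, sequence length 700,
# predicting 8 secondary-structure states per residue position.
features = Inception1d(21, 32)
head = nn.Conv1d(96, 8, kernel_size=1)         # per-residue class scores
print(head(features(torch.randn(4, 21, 700))).shape)  # torch.Size([4, 8, 700])
```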

A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm (앙상블 학습 알고리즘을 이용한 컨벌루션 신경망의 분류 성능 분석에 관한 연구)

  • Park, Sung-Wook;Kim, Jong-Chan;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.665-675 / 2019
  • In this paper, we compare and analyze the classification performance of convolutional neural networks (CNNs) according to ensemble generation and combining techniques. We used several CNN models (VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, GoogLeNet) to create 10 ensemble generation combinations and applied 6 combining techniques (average, weighted average, maximum, minimum, median, product) to the optimal combination. In the experimental results, the DenseNet169-VGG16-GoogLeNet combination for ensemble generation and the product rule for ensemble combination showed the best performance. Based on this, we conclude that an ensemble of different models with high benchmark scores is another way to obtain good results.
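
The six combining techniques listed in the abstract all operate on the per-class probabilities produced by the ensemble members. A minimal sketch, assuming each member outputs a softmax vector over the same classes:

```python
# Minimal sketch of the six combining rules applied to per-class probability
# outputs of the ensemble members.
import numpy as np

def combine(probs: np.ndarray, rule: str, weights=None) -> np.ndarray:
    """probs: (n_members, n_classes) array of per-model class probabilities."""
    rules = {
        "average": lambda p: p.mean(axis=0),
        "weighted_average": lambda p: np.average(p, axis=0, weights=weights),
        "maximum": lambda p: p.max(axis=0),
        "minimum": lambda p: p.min(axis=0),
        "median": lambda p: np.median(p, axis=0),
        "product": lambda p: p.prod(axis=0),
    }
    return rules[rule](probs)

# Example with three members and four classes; the product rule (the best
# performer in the paper) rewards classes that all members agree on.
p = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.6, 0.2, 0.1, 0.1],
              [0.5, 0.3, 0.1, 0.1]])
print(combine(p, "product").argmax())  # 0
```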

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as convolutional neural networks (CNNs) perform well in many computer vision applications, including image classification and object detection. To run deep learning networks on embedded systems with limited processing power and memory, the networks may need to be simplified. However, a simplified deep learning network cannot learn every possible scene, so one realistic strategy for embedded deep learning is to construct a simplified network model optimized for the scene images of the installation site; automatic training is then needed for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, verifies them with a GoogLeNet-based CNN, and finally refines them more correctly and tightly by applying a saliency object detection technique. The improvement of the proposed method with respect to accuracy and tightness is shown through several experiments.
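
The final refinement step described above, tightening a candidate box around the salient region it contains, can be illustrated with a small sketch. How AGTLM actually computes its saliency maps and chooses thresholds is not specified in the abstract, so both are assumptions here.

```python
# Minimal sketch of tightening a candidate human bounding box around the
# salient region inside it. The saliency map source and the 0.5 threshold
# are assumptions for illustration only.
import numpy as np

def tighten_box(box, saliency, threshold=0.5):
    """box = (x1, y1, x2, y2); saliency = (H, W) map with values in [0, 1]."""
    x1, y1, x2, y2 = box
    region = saliency[y1:y2, x1:x2] > threshold
    ys, xs = np.nonzero(region)
    if len(xs) == 0:                 # nothing salient inside: keep the original box
        return box
    return (x1 + xs.min(), y1 + ys.min(), x1 + xs.max() + 1, y1 + ys.max() + 1)

# Example: a loose 100x100 candidate box shrinks to the salient 40x60 area.
sal = np.zeros((200, 200))
sal[60:120, 50:90] = 1.0
print(tighten_box((20, 30, 120, 130), sal))  # (50, 60, 90, 120)
```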