• Title/Summary/Keyword: convolutional network

Search Results: 1,639

Performance Enhancement Algorithm using Supervised Learning based on Background Object Detection for Road Surface Damage Detection (도로 노면 파손 탐지를 위한 배경 객체 인식 기반의 지도 학습을 활용한 성능 향상 알고리즘)

  • Shim, Seungbo;Chun, Chanjun;Ryu, Seung-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.3 / pp.95-105 / 2019
  • In recent years, image processing techniques for detecting damaged road surfaces have been actively researched. Typically, images are acquired with a smartphone or a vehicle-mounted black box, the damaged regions in each image are recognized with one of several algorithms, and, in conjunction with a GPS module, the exact location of the damage is obtained. The core technology is the image processing algorithm, and algorithms based on artificial intelligence have recently attracted attention as research topics. This paper likewise addresses an artificial-intelligence image processing algorithm, using an object detection method based on a region-based convolutional neural network (R-CNN). To improve recognition performance for road surface damage objects, 600 road-surface-damage images and 1,500 general road-driving images are added to the training database. In addition, supervised learning with a background object recognition method is performed to reduce the false alarm and miss rates in road surface damage detection. As a result, we introduce a new method that improves recognition performance by 8.66% in mean average precision (mAP) on the same test database.
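
The training-data idea described in this abstract, adding plain road-driving photos as background-only examples alongside annotated damage images, can be illustrated with a short hypothetical sketch. The paper's own code and exact detector are not given; the torchvision Faster R-CNN model, the two-class setup, and the synthetic tensors below are assumptions for illustration only (pretrained weights assume torchvision 0.13+ and a download).

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pre-trained detector (torchvision >= 0.13 API); 2 classes = background + damage.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

def make_target(boxes):
    # Damage images carry annotated boxes; general driving images pass an empty
    # list, so they act as pure-background (negative) training samples.
    boxes = torch.as_tensor(boxes, dtype=torch.float32).reshape(-1, 4)
    return {"boxes": boxes, "labels": torch.ones((len(boxes),), dtype=torch.int64)}

images = [torch.rand(3, 480, 640), torch.rand(3, 480, 640)]   # synthetic frames
targets = [make_target([[100.0, 120.0, 220.0, 200.0]]),       # one annotated damage box
           make_target([])]                                   # background-only driving image

model.train()
losses = model(images, targets)            # dict of RPN / ROI-head losses
loss = sum(losses.values())
loss.backward()                            # one illustrative optimization step
print({k: float(v) for k, v in losses.items()})
```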

Recognition of Flat Type Signboard using Deep Learning (딥러닝을 이용한 판류형 간판의 인식)

  • Kwon, Sang Il;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.219-231 / 2019
  • Specifications are defined for each type of signboard, but the shapes and sizes of signboards actually installed are not uniform. In addition, because signboard colors are not standardized, a wide variety of colors are applied. Methods for recognizing signboards might seem similar to those for recognizing road signs and license plates, but because of the nature of signboards, they cannot be recognized in the same way. In this study, we proposed a methodology for recognizing flat-type signboards, which are the main targets among illegal and aged signboards, and for automatically extracting signboard regions using the deep learning-based Faster R-CNN algorithm. The process of recognizing flat-type signboards in signboard images captured with smartphone cameras consists of two stages. First, deep learning was used to identify flat-type signboards among various types of signboard images, with an accuracy of about 71%. Next, a boundary recognition algorithm was applied to delineate the boundary region of each flat-type signboard, and the boundary was recognized with an accuracy of 85%.
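
The second stage of this pipeline, recognizing the boundary of a flat-type signboard, is not specified in detail in the abstract. As a rough stand-in only, the following OpenCV sketch approximates a signboard boundary as the largest edge contour reduced to a polygon; it is not the paper's boundary recognition algorithm, and the synthetic test image is purely illustrative.

```python
import cv2
import numpy as np

def signboard_boundary(image_bgr):
    """Illustrative boundary step: edge map -> largest contour -> corner polygon."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    eps = 0.02 * cv2.arcLength(largest, True)
    poly = cv2.approxPolyDP(largest, eps, True)
    return poly.reshape(-1, 2)           # corner coordinates of the candidate signboard region

# Synthetic test image: a bright rectangle standing in for a flat signboard.
img = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.rectangle(img, (60, 50), (260, 180), (255, 255, 255), -1)
print(signboard_boundary(img))
```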

Effective Text Question Analysis for Goal-oriented Dialogue (목적 지향 대화를 위한 효율적 질의 의도 분석에 관한 연구)

  • Kim, Hakdong;Go, Myunghyun;Lim, Heonyeong;Lee, Yurim;Jee, Minkyu;Kim, Wonil
    • Journal of Broadcast Engineering / v.24 no.1 / pp.48-57 / 2019
  • The purpose of this study is to understand the questioner's intent from a single text question in goal-oriented dialogue. A goal-oriented dialogue system is a dialogue system that satisfies a user's specific needs via text or voice. Intent analysis is the step of analyzing the user's intent prior to answer generation, and it has a great influence on the performance of the entire goal-oriented dialogue system. The proposed model was applied to a daily chemical products domain, using Korean text data related to that domain. The analysis is divided into the speech-act, which is independent of a specific domain, and the concept-sequence, which depends on a specific domain. We propose a classification method that uses a word embedding model and a CNN to analyze the speech-act and concept-sequence: the semantic information of each word is abstracted through the word embedding model, and speech-act and concept-sequence classification are performed by the CNN based on this abstracted semantic information.
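
A word-embedding plus CNN text classifier of the kind this abstract describes is commonly implemented as a "TextCNN" with several convolution widths over the embedded token sequence. The sketch below is a generic, hypothetical PyTorch version; the vocabulary size, class counts, and kernel sizes are placeholders rather than values from the paper, and the same architecture could serve both the speech-act and concept-sequence heads.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Word-embedding + CNN intent classifier (generic TextCNN layout)."""
    def __init__(self, vocab_size, embed_dim=128, num_classes=10,
                 kernel_sizes=(2, 3, 4), channels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # could be initialized from a pre-trained word-embedding model
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, channels, k) for k in kernel_sizes])
        self.fc = nn.Linear(channels * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                          # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)          # (batch, embed_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))           # logits over intent classes

# One head per task: e.g. a speech-act classifier (class count is a placeholder).
speech_act_clf = TextCNN(vocab_size=20000, num_classes=7)
logits = speech_act_clf(torch.randint(0, 20000, (4, 30)))  # 4 dummy questions, 30 tokens each
print(logits.shape)                                        # torch.Size([4, 7])
```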

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han;Lee, Byeong-Hee;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing a user-friendly interface in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or rely on preprocessing for efficient learning, and they fail to take into account changes in the external environment, such as changes in lighting or partial occlusion of the hand. This paper proposes a deep learning-based hand gesture recognition method that is robust to external environments and requires no preprocessing of the RGB images obtained from an ordinary webcam. We modify the VGGNet and GoogLeNet structures and compare the performance of the two. The modified VGGNet and GoogLeNet presented in this paper achieved recognition rates of 93.88% and 93.75%, respectively, on data containing dim, partially obscured, or partially out-of-frame hand images. In terms of memory and speed, GoogLeNet used about 3 times less memory than VGGNet, and its processing speed was about 10 times faster. The results of this paper can be processed in real time and used as a hand gesture interface in areas such as games, education, and medical services in virtual reality environments.
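
This abstract compares modified VGGNet and GoogLeNet structures by recognition rate, memory, and speed. As a minimal sketch, the stock torchvision VGG16 and GoogLeNet models below have their final layers replaced for a placeholder number of gesture classes and their parameter counts compared; the paper's actual modifications and gesture set are not published here, so everything below is an assumption for illustration.

```python
import torch
import torchvision.models as models

NUM_GESTURES = 10                      # placeholder class count; the paper's gesture set is not given

# Stock VGG16 and GoogLeNet stand in for the modified VGGNet / GoogLeNet variants.
vgg = models.vgg16(weights=None)
vgg.classifier[6] = torch.nn.Linear(4096, NUM_GESTURES)

googlenet = models.googlenet(weights=None, aux_logits=False, init_weights=True)
googlenet.fc = torch.nn.Linear(1024, NUM_GESTURES)

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print(f"VGG16 parameters:     {n_params(vgg) / 1e6:.1f} M")
print(f"GoogLeNet parameters: {n_params(googlenet) / 1e6:.1f} M")

# A single 224x224 RGB webcam frame would be classified like this:
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    print(googlenet.eval()(x).argmax(dim=1))   # predicted gesture index
```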

A Quality Prediction Model for Ginseng Sprouts based on CNN (CNN을 활용한 새싹삼의 품질 예측 모델 개발)

  • Lee, Chung-Gu;Jeong, Seok-Bong
    • Journal of the Korea Society for Simulation / v.30 no.2 / pp.41-48 / 2021
  • As the rural population continues to decline and age, improving agricultural productivity is becoming more important. Early prediction of crop quality can play an important role in improving agricultural productivity and profitability. Although many studies have recently used CNN-based deep learning and transfer learning to classify crop diseases and predict yield, few studies predict post-harvest crop quality early in the planting stage. In this study, an early quality prediction model is proposed for ginseng sprouts, which are drawing attention as a health functional food. To this end, we photographed ginseng seedlings at the planting stage and grew them hydroponically. After harvest, the quality of each ginseng sprout was graded and the data labeled accordingly. With these data, we built early quality prediction models from several pre-trained CNN models using transfer learning and compared their training time and prediction accuracy. The results show more than 80% prediction accuracy for all proposed models, with the ResNet152V2-based model showing the highest accuracy. This study is expected to contribute to production and profitability by automating the existing seedling screening work, which currently relies mainly on manual labor.
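
Transfer learning from a pre-trained CNN, such as the ResNet152V2-based model this abstract reports as most accurate, typically freezes the ImageNet backbone and trains a small classification head on the seedling photos. The Keras sketch below assumes a three-grade quality label, 224 x 224 inputs, and ImageNet weights; none of these settings are stated in the abstract.

```python
import tensorflow as tf

NUM_GRADES = 3   # placeholder: the number of quality grades is assumed, not given

base = tf.keras.applications.ResNet152V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),   # quality-grade probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Seedling photos should be scaled with tf.keras.applications.resnet_v2.preprocess_input;
# training would then use images labeled with the post-harvest grade, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```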

DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.59-66 / 2021
  • Humans recognize surrounding objects mainly through visual and auditory information among the five senses (sight, hearing, smell, touch, taste). Most recent research on object recognition focuses on analysis of image sensor information. In this paper, various chirp audio signals were emitted into the observation space, the echoes were collected through a 2-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space was conducted using a deep learning-based image learning algorithm. The experiment was carried out under the noise and reverberation of an ordinary indoor environment rather than the ideal conditions of an anechoic room, and object recognition from the echoes estimated the position of objects with 83% accuracy. In addition, by mapping the inference results to the observation space and a 3D sound spatial signal and outputting them as sound, visual information could be conveyed through sound by learning 3D audio. This means that object recognition research should use various kinds of echo information along with image information, and the technology could be applied to augmented reality through 3D sound.
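
The signal path this abstract describes (emit a chirp, record the echoes on two channels, and convert them to spectral images for a CNN) can be sketched offline with NumPy/SciPy. The sample rate, chirp band, and simulated delayed echoes below are invented placeholders standing in for real microphone recordings.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 48_000                                   # assumed sample rate
t = np.arange(0, 0.05, 1 / fs)                # 50 ms emission window
emitted = chirp(t, f0=4_000, f1=16_000, t1=t[-1], method="linear")

def fake_echo(delay_s, attenuation):
    """Stand-in for one microphone channel: the chirp returns delayed and attenuated."""
    delay = int(delay_s * fs)
    echo = np.zeros_like(emitted)
    echo[delay:] = attenuation * emitted[:len(emitted) - delay]
    return echo + 0.01 * np.random.randn(len(emitted))   # ordinary room noise

# Two receiving channels -> two spectrogram "images" a CNN could consume.
channels = [fake_echo(0.004, 0.5), fake_echo(0.006, 0.4)]
specs = [spectrogram(ch, fs=fs, nperseg=256, noverlap=192)[2] for ch in channels]
image = np.stack([10 * np.log10(s + 1e-12) for s in specs], axis=-1)
print(image.shape)     # (freq_bins, time_frames, 2): input tensor for the detection CNN
```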

Comparison of Prediction Accuracy Between Classification and Convolution Algorithm in Fault Diagnosis of Rotatory Machines at Varying Speed (회전수가 변하는 기기의 고장진단에 있어서 특성 기반 분류와 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Moon, Ki-Yeong;Kim, Hyung-Jin;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research / v.46 no.3 / pp.280-288 / 2022
  • This study examined the diagnosis of anomalies and faults in equipment whose rotational speed changes even during regular operation. The purpose was to suggest a procedure for properly applying machine learning to time-series data with non-stationary characteristics caused by the varying rotational speed. Anomaly and fault diagnosis was performed with machine learning methods: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Random Forest. To compare diagnostic accuracy, an autoencoder was used for anomaly detection and a convolution-based Conv1D network was additionally used for fault diagnosis. Feature vectors comprising statistical and frequency attributes were extracted, and normalization and dimensionality reduction were applied to them. Changes in the diagnostic accuracy of the machine learning methods according to feature selection, normalization, and dimensionality reduction are explained, and the hyperparameter optimization process and layer structure are described for each algorithm. Finally, the results show that, with appropriate feature treatment, machine learning can accurately diagnose failures of a variable-speed rotating machine, even though convolutional algorithms have been widely applied to this problem.
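
The feature-based branch of this comparison (statistical and frequency features, normalization, dimensionality reduction, then a classical classifier) can be sketched with scikit-learn as below. The synthetic variable-speed vibration segments, the particular feature list, and the Random Forest choice are illustrative assumptions, not the paper's exact configuration; the Conv1D branch would consume the raw segments directly instead.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_segment(faulty):
    """Synthetic vibration segment whose shaft speed varies from segment to segment."""
    t = np.linspace(0, 1, 2048)
    speed = rng.uniform(20, 60)                              # Hz
    signal = np.sin(2 * np.pi * speed * t)
    if faulty:
        signal += 0.4 * np.sin(2 * np.pi * 3 * speed * t)    # fault harmonic
    return signal + 0.2 * rng.standard_normal(t.size)

def features(x):
    """Statistical + frequency attributes, in the spirit of the abstract's feature vectors."""
    spec = np.abs(np.fft.rfft(x))
    return [x.mean(), x.std(), ((x - x.mean())**4).mean() / x.std()**4,   # kurtosis
            np.sqrt((x**2).mean()),                                       # RMS
            spec.argmax(), spec.max()]

X = np.array([features(make_segment(f)) for f in (0, 1) * 200])
y = np.array([0, 1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Normalization + dimensionality reduction + classical classifier in one pipeline.
clf = make_pipeline(StandardScaler(), PCA(n_components=4), RandomForestClassifier())
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```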

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1407-1422 / 2022
  • This study classified tree species and assessed the classification accuracy using SE-Inception, a classification-based deep learning model. The input dataset used Worldview-3 and GeoEye-1 images, and the input image size was set to 10 × 10 m, 30 × 30 m, and 50 × 50 m to compare and evaluate species classification accuracy. The label data were divided into five classes (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images and labeling them manually. The dataset comprised a total of 2,429 images, of which about 85% was used as training data and about 15% as validation data. Classification with the deep learning model achieved an overall accuracy of up to 78% with the Worldview-3 images and up to 84% with the GeoEye-1 images. In particular, Quercus showed an F1 score above 85% regardless of input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. Spectral information from satellite images alone may therefore be insufficient for feature extraction, and classification accuracy may be improved by using images containing additional pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
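
SE-Inception augments Inception stages with Squeeze-and-Excitation (SE) blocks that reweight feature-map channels. The PyTorch sketch below shows a standard SE block only, with placeholder tensor sizes; it is not the full SE-Inception network or this study's training setup.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: channel-wise recalibration used in SE-Inception."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: per-channel weights
        return x * w                           # reweight the feature maps

# Placeholder feature maps from an Inception stage applied to a resampled image tile.
x = torch.rand(2, 32, 64, 64)
print(SEBlock(32)(x).shape)                    # torch.Size([2, 32, 64, 64])
```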

Construction Method of ECVAM using Land Cover Map and KOMPSAT-3A Image (토지피복지도와 KOMPSAT-3A위성영상을 활용한 환경성평가지도의 구축)

  • Kwon, Hee Sung;Song, Ah Ram;Jung, Se Jung;Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.367-380 / 2022
  • In this study, a way to periodically and simply update and produce the ECVAM (Environmental Conservation Value Assessment Map) was presented through the classification of environmental values using KOMPSAT-3A satellite imagery and the land cover map. The ECVAM evaluates the environmental value of the country in five grades based on 62 legal evaluation items and 8 environmental and ecological evaluation items, and is provided at two scales: 1:25,000 and 1:5,000. However, the 1:5,000-scale map is produced and serviced with a slow renewal cycle of one year because of various constraints, such as the absence of reference materials and differing production years. Therefore, this study applied a deep learning technique to KOMPSAT-3A satellite imagery, spectral indices (SI), and the land cover map to confirm the feasibility of constructing the ECVAM. As a result, the accuracies were calculated to be 87.25% and 85.88%, respectively. These results confirm the possibility of constructing an environmental assessment map using satellite imagery, spectral indices, and land cover classification.
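
The spectral indices (SI) stacked with the satellite bands can be computed directly from the band arrays; NDVI is the most common example. The NumPy sketch below uses a synthetic 4-band patch with an assumed band order (blue, green, red, NIR) purely for illustration; the KOMPSAT-3A band handling and the indices actually used in the study are not specified in the abstract.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, one of the spectral indices (SI)
    that can be stacked with the image bands and the land cover map."""
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)

# Synthetic 4-band tile standing in for a KOMPSAT-3A patch (blue, green, red, NIR).
tile = np.random.randint(0, 2048, size=(4, 128, 128)).astype(np.float32)
stacked = np.concatenate([tile, ndvi(tile[2], tile[3])[None]], axis=0)
print(stacked.shape)   # (5, 128, 128): original bands + NDVI channel fed to the classifier
```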

Automatic Extraction of Training Data Based on Semi-supervised Learning for Time-series Land-cover Mapping (시계열 토지피복도 제작을 위한 준감독학습 기반의 훈련자료 자동 추출)

  • Kwak, Geun-Ho;Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.461-469 / 2022
  • This paper presents a novel training data extraction approach that uses semi-supervised learning (SSL)-based classification, without analyst intervention, for time-series land-cover mapping. The SSL-based approach first performs an initial classification using initial training data obtained from past images whose land-cover characteristics are similar to those of the image to be classified. Reliable training data are then extracted from the initial classification result through SSL-based iterative classification, using classification uncertainty information and the class labels of neighboring pixels as constraints. The potential of the SSL-based training data extraction approach was evaluated in a classification experiment using unmanned aerial vehicle images of croplands. The new training data automatically extracted by the proposed SSL approach significantly alleviated the misclassification in the initial classification result. In particular, isolated pixels were substantially reduced by considering spatial contextual information from adjacent pixels. Consequently, the classification accuracy of the proposed approach was similar to that of classification using manually extracted training data. These results indicate that the SSL-based iterative classification presented in this study can effectively and automatically extract reliable training data for time-series land-cover mapping.
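
The SSL-based iterative extraction of training data can be sketched as a self-training loop: classify, keep only low-uncertainty predictions, add them to the training set, and repeat. The scikit-learn sketch below uses synthetic pixel features and a fixed 0.9 confidence threshold as placeholders, and it omits the neighboring-pixel label constraint described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic pixel features: two crop classes with overlapping spectra.
X_labeled = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(2, 1, (30, 4))])
y_labeled = np.array([0] * 30 + [1] * 30)        # initial labels transferred from a past image
X_unlabeled = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(2, 1, (500, 4))])

clf = RandomForestClassifier(random_state=0)
for iteration in range(3):                        # SSL-style iterative classification
    clf.fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) > 0.9           # low classification uncertainty
    if not confident.any():
        break
    # The paper additionally requires agreement with neighboring-pixel labels;
    # that spatial check is omitted in this feature-space-only sketch.
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, proba.argmax(axis=1)[confident]])
    X_unlabeled = X_unlabeled[~confident]

print("final training set size:", len(y_labeled))
```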