• Title/Summary/Keyword: Computer Vision

Comparison of Artificial Intelligence Multitask Performance using Object Detection and Foreground Image (물체탐색과 전경영상을 이용한 인공지능 멀티태스크 성능 비교)

  • Jeong, Min Hyuk;Kim, Sang-Kyun;Lee, Jin Young;Choo, Hyon-Gon;Lee, HeeKyung;Cheong, Won-Sik
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.308-317
    • /
    • 2022
  • Research is underway to efficiently reduce the size of video data transmitted and stored during image analysis with deep learning-based machine vision technology. MPEG (Moving Picture Experts Group) has established a new standardization project called VCM (Video Coding for Machines) and is studying video coding for machines rather than for humans. We study a multitask setting in which several tasks are performed on a single image input. Instead of running object detection separately as a preprocessing step for every task, the proposed pipeline runs it only once and feeds the result to each task as its input. In this paper, we propose this pipeline for efficient multitasking and perform comparative experiments on compression efficiency, execution time, and result accuracy of the input image. In the experiments, the size of the input image decreased by more than 97.5% while the accuracy of the results decreased only slightly, confirming the feasibility of efficient multitasking.
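A minimal structural sketch of the shared-detection idea above, in Python: detection runs once and every downstream task reuses the detected foreground crops instead of re-running detection. The detector and the three task heads are hypothetical placeholders for illustration, not the authors' models.

```python
import numpy as np

def detect_objects(image: np.ndarray) -> list[dict]:
    """Stand-in for a real detector; returns boxes as (x1, y1, x2, y2)."""
    h, w = image.shape[:2]
    return [{"box": (0, 0, w // 2, h // 2), "label": "object"}]

def crop_foreground(image: np.ndarray, detections: list[dict]) -> list[np.ndarray]:
    crops = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        crops.append(image[y1:y2, x1:x2])
    return crops

def task_segmentation(crop): ...   # placeholder task heads
def task_classification(crop): ...
def task_pose(crop): ...

def run_multitask(image: np.ndarray) -> dict:
    detections = crop_foreground(image, detect_objects(image))  # detection runs only once
    return {
        "segmentation": [task_segmentation(c) for c in detections],
        "classification": [task_classification(c) for c in detections],
        "pose": [task_pose(c) for c in detections],
    }

results = run_multitask(np.zeros((480, 640, 3), dtype=np.uint8))
```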

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.263-278
    • /
    • 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are away. However, because of pets' behavior, a scale-based weighing mechanism, which is essential for automatic feeding, is easily damaged or broken. A 3D-camera approach is costly, and a 2D-camera approach is relatively less accurate than a 3D camera. The purpose of this study is therefore to propose a deep learning approach that can accurately estimate weight using only a 2D camera. Various convolutional neural networks were tested, and among them the ResNet101-based model performed best, with a mean absolute error of 3.06 grams and a mean absolute percentage error of 3.40%, which is commercially viable in terms of both technical and financial feasibility. The results of this study can help practitioners predict the weight of a standardized object such as feed from a simple 2D image.
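A sketch of a ResNet101-based weight regressor in the spirit of the abstract above: the classification head is replaced with a single regression output and trained with an L1 (mean absolute error) objective. Head size, loss, and optimizer settings are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights=None)          # pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 1)   # regress a single weight value

criterion = nn.L1Loss()                         # mean absolute error in grams
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 2D feed images.
images = torch.randn(8, 3, 224, 224)            # batch of RGB images
weights_g = torch.rand(8, 1) * 200.0            # ground-truth weights (grams)

pred = model(images)
loss = criterion(pred, weights_g)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"MAE on this batch: {loss.item():.2f} g")
```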

A Study on the Application of Object Detection Method in Construction Site through Real Case Analysis (사례분석을 통한 객체검출 기술의 건설현장 적용 방안에 관한 연구)

  • Lee, Kiseok;Kang, Sungwon;Shin, Yoonseok
    • Journal of the Society of Disaster Information
    • /
    • v.18 no.2
    • /
    • pp.269-279
    • /
    • 2022
  • Purpose: The purpose of this study is to develop a deep learning-based personal protective equipment (PPE) detection model for disaster prevention at construction sites, to apply it to actual construction sites, and to analyze the results. Method: A dataset reflecting the real environment was constructed and the developed PPE detection model was applied to it. The PPE detection model consists mainly of a worker detection model and a PPE classification model. The worker detection model learns to detect workers with a deep learning-based algorithm trained on a dataset obtained from the actual field, and the PPE classification model applies a PPE classification algorithm to the worker regions extracted by the worker detection model. For verification of the proposed model, experimental results were derived from data obtained from three construction sites. Results: Applying the PPE recognition model to construction sites revealed problems related to mis-recognition and non-recognition. Conclusions: The analysis outcomes show how object recognition technology can be applied to a construction site, and the need for follow-up research is suggested through representative cases of worker recognition, non-recognition, and mis-recognition of personal protective equipment.
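A compact sketch of the two-stage structure described above: a worker detector produces person boxes, and a PPE classifier is applied to each cropped worker region. Both models are hypothetical stand-ins, not the paper's trained networks.

```python
import numpy as np

def detect_workers(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stand-in for a trained worker detector returning (x1, y1, x2, y2) boxes."""
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, 3 * w // 4, h)]

def classify_ppe(worker_crop: np.ndarray) -> dict:
    """Stand-in for a PPE classifier (e.g. helmet / safety vest present)."""
    return {"helmet": True, "vest": False}

def inspect_frame(frame: np.ndarray) -> list[dict]:
    findings = []
    for (x1, y1, x2, y2) in detect_workers(frame):
        crop = frame[y1:y2, x1:x2]          # worker region feeds the classifier
        ppe = classify_ppe(crop)
        findings.append({"box": (x1, y1, x2, y2), "ppe": ppe,
                         "violation": not all(ppe.values())})
    return findings

print(inspect_frame(np.zeros((720, 1280, 3), dtype=np.uint8)))
```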

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1407-1422
    • /
    • 2022
  • This study classified tree species and assessed the classification accuracy using SE-Inception, a classification-based deep learning model. The input images were taken from WorldView-3 and GeoEye-1 scenes and were divided into 10 × 10 m, 30 × 30 m, and 50 × 50 m tiles to compare and evaluate the species classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the tiles, and labeling was performed manually. The dataset comprised a total of 2,429 images, of which about 85% were used for training and about 15% for validation. Classification with the deep learning model achieved an overall accuracy of up to 78% with the WorldView-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus showed an F1 score of more than 85% regardless of the input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. Extracting features only from the spectral information of satellite images may therefore be limited, and classification accuracy may be improved by using images containing additional pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
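SE-Inception combines Inception blocks with squeeze-and-excitation (SE) channel attention. Below is a generic SE block in PyTorch as commonly defined in the literature; it is not the authors' exact network, and the reduction ratio is an assumed default.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention as used in SE-Inception-style networks."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # recalibrate channels

# Example: recalibrate a feature map from an Inception-style branch.
feat = torch.randn(4, 256, 14, 14)
print(SEBlock(256)(feat).shape)   # torch.Size([4, 256, 14, 14])
```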

Detection of Marine Oil Spills from PlanetScope Images Using DeepLabV3+ Model (DeepLabV3+ 모델을 이용한 PlanetScope 영상의 해상 유출유 탐지)

  • Kang, Jonggu;Youn, Youjeong;Kim, Geunah;Park, Ganghyun;Choi, Soyeon;Yang, Chan-Su;Yi, Jonghyuk;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1623-1631
    • /
    • 2022
  • Since oil spills can be a significant threat to the marine ecosystem, information on the current contamination status must be obtained quickly to minimize damage. Satellite-based detection of marine oil spills has the advantage of spatiotemporal coverage because it can monitor a wider area than aircraft. Thanks to recent developments in computer vision and deep learning, marine oil spill detection can also be facilitated by deep learning. Unlike existing studies based on Synthetic Aperture Radar (SAR) images, we conducted deep learning modeling using PlanetScope optical satellite images. In a blind test, the DeepLabV3+ model for oil spill detection achieved an accuracy of 0.885, a precision of 0.888, a recall of 0.886, an F1-score of 0.883, and a Mean Intersection over Union (mIoU) of 0.793.
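The reported pixel-wise metrics (accuracy, precision, recall, F1, mIoU) can all be derived from a binary confusion matrix over the oil/non-oil masks. The sketch below shows that computation with NumPy; the random masks are illustrative only, not the paper's data.

```python
import numpy as np

def binary_segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)        # oil pixels correctly detected
    tn = np.sum(~pred & ~truth)      # background correctly rejected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou_oil = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    iou_bg = tn / (tn + fp + fn) if tn + fp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "miou": (iou_oil + iou_bg) / 2}

pred = np.random.rand(256, 256) > 0.5
truth = np.random.rand(256, 256) > 0.5
print(binary_segmentation_metrics(pred, truth))
```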

SAAnnot-C3Pap: Ground Truth Collection Technique of Playing Posture Using Semi Automatic Annotation Method (SAAnnot-C3Pap: 반자동 주석화 방법을 적용한 연주 자세의 그라운드 트루스 수집 기법)

  • Park, So-Hyun;Kim, Seo-Yeon;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.10
    • /
    • pp.409-418
    • /
    • 2022
  • In this paper, we propose SAAnnot-C3Pap, a semi-automatic annotation method for obtaining the ground truth of a player's posture. In the existing music domain, ground truth for two-dimensional joint positions has been obtained either with OpenPose, a two-dimensional pose estimation method, or by manual labeling. However, automatic annotation methods such as OpenPose are fast but can produce inaccurate results. This paper therefore proposes SAAnnot-C3Pap, a semi-automated annotation method that is a compromise between the two. The proposed approach consists of three main steps: extracting postures using OpenPose, correcting erroneous parts of the extracted postures using Supervisely, and then synchronizing and analyzing the OpenPose and Supervisely results. Through the proposed method, it was possible to correct the incorrect 2D joint positions produced by OpenPose, solve the problem of detecting two or more people, and obtain the ground truth of the playing posture. In the experiment, we compare and analyze the results of the automated annotation method OpenPose and the proposed SAAnnot-C3Pap. The comparison shows that the proposed method corrects the posture information incorrectly collected by OpenPose.
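The synchronization step can be pictured as merging OpenPose's automatic joint estimates with manually corrected ones, keeping the manual value wherever the automatic estimate is unreliable. The (x, y, confidence) joint format and the confidence threshold below are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def merge_annotations(auto_joints: np.ndarray,
                      manual_joints: np.ndarray,
                      conf_threshold: float = 0.4) -> np.ndarray:
    """Keep high-confidence OpenPose joints; take manual corrections elsewhere.

    Both arrays have shape (num_joints, 3): x, y, confidence.
    Manually corrected joints are assumed to carry confidence 1.0.
    """
    merged = auto_joints.copy()
    unreliable = auto_joints[:, 2] < conf_threshold
    merged[unreliable] = manual_joints[unreliable]
    return merged

auto = np.array([[120.0, 80.0, 0.9], [130.0, 200.0, 0.1]])    # OpenPose output
manual = np.array([[121.0, 81.0, 1.0], [128.0, 195.0, 1.0]])  # corrected in Supervisely
print(merge_annotations(auto, manual))
```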

A study on the improvement of artificial intelligence-based Parking control system to prevent vehicle access with fake license plates (위조번호판 부착 차량 출입 방지를 위한 인공지능 기반의 주차관제시스템 개선 방안)

  • Jang, Sungmin;Iee, Jeongwoo;Park, Jonghyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.57-74
    • /
    • 2022
  • Recently, artificial intelligence parking control systems have increased the recognition rate of vehicle license plates using deep learning, but they cannot identify vehicles carrying fake license plates. Despite this security problem, several institutions still operate the existing systems; in one experiment with a counterfeit license plate, vehicles successfully entered major government agencies. This paper proposes a system that improves on the existing artificial intelligence parking control system to prevent vehicles with such fake license plates from entering. Just as the existing system uses license plate matching as a passing criterion, the proposed method uses the degree of matching between feature points on the front of the vehicle, extracted with the ORB algorithm, as the passing criterion. In addition, a procedure for checking whether the vehicle is already inside was included to prevent entry of a fake-plated vehicle of the same model. In the experiments, the proposed system identified vehicles with fake license plates better than the existing system. These results confirm that the proposed methods can be applied to an existing parking control system, while preserving its original workflow, to prevent vehicles with fake license plates from entering.
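A sketch of ORB-based matching of two vehicle-front images with OpenCV, in the spirit of the passing criterion described above. The file paths and the match-count threshold are assumed placeholders, not the paper's calibrated values.

```python
import cv2

def front_match_score(img_registered_path: str, img_entering_path: str) -> int:
    """Count distinctive ORB feature matches between two vehicle-front images."""
    img1 = cv2.imread(img_registered_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_entering_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return len(good)

if __name__ == "__main__":
    score = front_match_score("registered_front.jpg", "entering_front.jpg")  # hypothetical paths
    print("pass" if score > 50 else "reject")   # threshold is illustrative
```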

A Study on Tire Surface Defect Detection Method Using Depth Image (깊이 이미지를 이용한 타이어 표면 결함 검출 방법에 관한 연구)

  • Kim, Hyun Suk;Ko, Dong Beom;Lee, Won Gok;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.5
    • /
    • pp.211-220
    • /
    • 2022
  • Research on smart factories, triggered by the 4th industrial revolution, has recently been conducted actively. Accordingly, the manufacturing industry is carrying out various studies to improve productivity and quality based on deep learning technology with robust performance. This paper studies how to detect tire surface defects in the visual inspection stage of the tire manufacturing process, and introduces a tire surface defect detection method using a depth image acquired with a 3D camera. The tire surface depth images handled in this study suffer from low contrast caused by the shallow depth of the tire surface and from differences in the reference depth value due to the data acquisition environment. Moreover, the manufacturing setting requires algorithms that can run in real time while maintaining detection performance. Therefore, this paper studies relatively simple ways to normalize the depth image so that the defect detection pipeline does not become complex, and compares a general normalization method with the proposed normalization method using YOLOv3, which can satisfy both detection performance and speed. The experiments confirm that the proposed normalization method improves performance by about 7% in terms of mAP@0.5, showing that the proposed method is effective.
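One simple way to read the normalization idea above is to remove each image's reference depth offset and stretch the remaining shallow depth range to the full 8-bit range before feeding the image to YOLOv3. The median reference and percentile choices below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def normalize_depth(depth: np.ndarray,
                    low_pct: float = 1.0,
                    high_pct: float = 99.0) -> np.ndarray:
    """Remove the per-image reference depth and stretch contrast to 0-255."""
    reference = np.median(depth)            # per-image reference depth offset
    shifted = depth.astype(np.float64) - reference

    lo, hi = np.percentile(shifted, [low_pct, high_pct])
    if hi <= lo:                            # guard against a flat image
        return np.zeros_like(depth, dtype=np.uint8)
    stretched = np.clip((shifted - lo) / (hi - lo), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

# Example: a synthetic tire-surface depth map with a shallow indented region.
depth = np.full((240, 320), 1500.0)         # reference depth in arbitrary units
depth[60:180, 80:240] -= 2.0                # shallow defect region
out = normalize_depth(depth)
print(out.min(), out.max())                 # defect and background now span 0..255
```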

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.1-16
    • /
    • 2022
  • Despite recent breakthroughs in deep learning and computer vision, the pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, which enables the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks. Grid-based uniform control nodes are first set on both input images and binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing. 512 × 512 patches generated from the original images by a sliding window with an overlap of 256 serve as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images. Ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm: the IoU gains obtained by using the SASA and CRED modules individually add up to the gain of the full model, indicating that the two modules contribute to different stages of the model and the data in the training process.
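The CRED augmentation described above places control nodes on a coarse grid, perturbs them with random offsets, and interpolates a dense displacement field bilinearly before warping both the image and its binary label. Below is a compact sketch of that idea with OpenCV; the grid spacing and offset magnitude are assumed values, not the paper's settings.

```python
import numpy as np
import cv2

def crack_random_elastic_deformation(image, label, grid=8, max_offset=12.0, seed=None):
    """Grid-based random elastic deformation applied jointly to image and label."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    # Random offsets at coarse control nodes, upsampled bilinearly to a dense field.
    coarse_dx = rng.uniform(-max_offset, max_offset, (grid, grid)).astype(np.float32)
    coarse_dy = rng.uniform(-max_offset, max_offset, (grid, grid)).astype(np.float32)
    dx = cv2.resize(coarse_dx, (w, h), interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(coarse_dy, (w, h), interpolation=cv2.INTER_LINEAR)

    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x, map_y = xs + dx, ys + dy

    warped_img = cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_REFLECT)
    warped_lbl = cv2.remap(label, map_x, map_y, interpolation=cv2.INTER_NEAREST,
                           borderMode=cv2.BORDER_REFLECT)
    return warped_img, warped_lbl

img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
lbl = np.zeros((512, 512), dtype=np.uint8)
lbl[250:262, :] = 1                         # a thin horizontal "crack"
aug_img, aug_lbl = crack_random_elastic_deformation(img, lbl, seed=0)
print(aug_img.shape, aug_lbl.shape)
```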

Semantic Segmentation for Multiple Concrete Damage Based on Hierarchical Learning (계층적 학습 기반 다중 콘크리트 손상에 대한 의미론적 분할)

  • Shim, Seungbo;Min, Jiyoung
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.26 no.6
    • /
    • pp.175-181
    • /
    • 2022
  • The condition of infrastructure deteriorates as its service life increases. Since most infrastructure in South Korea was built intensively during the period of rapid economic growth, the proportion of outdated infrastructure is now rising rapidly. Aging of such infrastructure can lead to safety accidents and even human casualties. To prevent these issues in advance, periodic and accurate inspection is essential. For this reason, research on detecting various types of damage using computer vision and deep learning is increasingly required in the field of remotely controlled or autonomous inspection. To this end, this study proposed a neural network structure that detects concrete damage by classifying it into three types. In particular, the proposed neural network detects the damage more accurately through a hierarchical learning technique. The network was trained with 2,026 damage images and tested with 508 damage images. As a result, we obtained an algorithm with a mean intersection over union of 67.04% and an F1 score of 52.65%. The proposed damage detection algorithm is expected to be applicable to accurate facility condition diagnosis in the near future.
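The abstract does not detail the hierarchical learning scheme, so the sketch below only illustrates one common way to structure it: a coarse head that separates damage from background and a fine head that distinguishes the three damage types, trained with a combined loss. All architecture and loss choices here are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn as nn

class HierarchicalSegHead(nn.Module):
    """Two prediction heads on shared features: damage/background, then damage type."""
    def __init__(self, in_channels: int = 64, num_damage_types: int = 3):
        super().__init__()
        self.coarse = nn.Conv2d(in_channels, 2, kernel_size=1)                   # damage vs background
        self.fine = nn.Conv2d(in_channels, num_damage_types + 1, kernel_size=1)  # types + background

    def forward(self, feats: torch.Tensor):
        return self.coarse(feats), self.fine(feats)

head = HierarchicalSegHead()
feats = torch.randn(2, 64, 128, 128)             # features from any backbone
coarse_gt = torch.randint(0, 2, (2, 128, 128))   # 0 background, 1 damage
fine_gt = torch.randint(0, 4, (2, 128, 128))     # 0 background, 1-3 damage types

coarse_logits, fine_logits = head(feats)
loss = (nn.functional.cross_entropy(coarse_logits, coarse_gt)
        + nn.functional.cross_entropy(fine_logits, fine_gt))
loss.backward()
print(float(loss))
```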