• Title/Summary/Keyword: COCO Dataset

Search Results: 21

Construction and Effectiveness Evaluation of Multi Camera Dataset Specialized for Autonomous Driving in Domestic Road Environment (국내 도로 환경에 특화된 자율주행을 위한 멀티카메라 데이터 셋 구축 및 유효성 검증)

  • Lee, Jin-Hee; Lee, Jae-Keun; Park, Jaehyeong; Kim, Je-Seok; Kwon, Soon
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.5, pp.273-280, 2022
  • With the advancement of deep learning technology, securing high-quality datasets for verifying developed technologies has become an important issue, and many research groups are focusing on developing deep learning models robust to the domestic road environment. In particular, unlike expressways and automobile-only roads, the complex urban driving environment mixes various dynamic objects on city roads, such as motorbikes, electric kickboards, large buses, trucks, freight cars, pedestrians, and traffic lights. In this paper, we built a dataset through multi-camera-based processing (collection, refinement, and annotation) covering the various objects on city roads, and assessed its quality and validity using a YOLO-based object detection model. Quantitative evaluation of our dataset was performed by comparison with public datasets, and qualitative evaluation by comparison with experimental results obtained on an open platform. We generated our 2D dataset following the annotation rules of the KITTI/COCO datasets, and compared performance with the public datasets using the KITTI/COCO evaluation rules. In this comparison, our dataset showed about 3 to 53% higher performance, validating its effectiveness.
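The KITTI/COCO evaluation rules mentioned above both match detections to ground truth by intersection-over-union (IoU). A minimal sketch of that matching criterion (the corner box format and any thresholds are assumptions for illustration, not details from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

KITTI-style evaluation typically counts a detection as correct above a fixed IoU threshold (e.g. 0.7 for cars), while COCO averages AP over thresholds from 0.5 to 0.95.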

A study on object distance measurement using OpenCV-based YOLOv5

  • Kim, Hyun-Tae; Lee, Sang-Hyun
    • International Journal of Advanced Culture Technology, v.9 no.3, pp.298-304, 2021
  • Currently, to prevent the spread of COVID-19 infection, gatherings of more than five people in the same space are prohibited. The purpose of this paper is to measure the distance between objects using the YOLOv5 model, processing real-time images with OpenCV, in order to restrict the distance between several people in the same space. The Euclidean distance calculation in DeepSORT and OpenCV is also used to minimize occlusion. To detect the distance between people, the open-source COCO dataset is used for training. The approach measures distance with the YOLOv5 model, minimizes occlusion with DeepSORT and Euclidean techniques, and visualizes the distance between objects with OpenCV. As a result, the proposed distance measurement method, which calculates the distance between objects from the y-axis of the image, showed good results for images taken with perspective from a position higher than the objects.
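The Euclidean distance check described above can be sketched as follows; converting pixels to metres via an assumed average person height is an illustration, not the paper's actual calibration:

```python
import math

def center(box):
    """Center of a bounding box given as (x1, y1, x2, y2) in pixels."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def pixel_distance(box_a, box_b):
    """Euclidean distance between two box centers, in pixels."""
    (xa, ya), (xb, yb) = center(box_a), center(box_b)
    return math.hypot(xb - xa, yb - ya)

def too_close(box_a, box_b, person_height_m=1.7, min_gap_m=2.0):
    """Flag a social-distancing violation; the mean box height is used as a
    rough pixels-per-metre scale (an assumption for this sketch)."""
    px_per_m = ((box_a[3] - box_a[1]) + (box_b[3] - box_b[1])) / 2.0 / person_height_m
    return pixel_distance(box_a, box_b) / px_per_m < min_gap_m
```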

Modified YOLOv4S based on Deep learning with Feature Fusion and Spatial Attention (특징 융합과 공간 강조를 적용한 딥러닝 기반의 개선된 YOLOv4S)

  • Hwang, Beom-Yeon; Lee, Sang-Hun; Lee, Seung-Hyun
    • Journal of the Korea Convergence Society, v.12 no.12, pp.31-37, 2021
  • This paper proposes a feature fusion and spatial attention-based modified YOLOv4S for detecting small and occluded objects. Conventional YOLOv4S is a lightweight network and lacks feature extraction capability compared with deeper networks. The proposed method first combines feature maps of different scales through feature fusion to enhance semantic and low-level information, and expands the receptive field with dilated convolution, improving detection accuracy for small and occluded objects. Second, by improving conventional spatial information with spatial attention, the detection accuracy for misclassified and mutually occluded objects was improved. The PASCAL VOC and COCO datasets were used for quantitative evaluation. The proposed method improved mAP by 2.7% on the PASCAL VOC dataset and 1.8% on the COCO dataset compared with conventional YOLOv4S.
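The spatial attention idea above can be sketched in a few lines of NumPy; this is a CBAM-style gate computed from channel-wise average and max maps, simplified by omitting the learned convolution (an assumption, not the paper's exact module):

```python
import numpy as np

def spatial_attention(feat):
    """Gate a (C, H, W) feature map by a per-position attention weight derived
    from its channel-wise average and maximum (learned conv omitted)."""
    avg_map = feat.mean(axis=0, keepdims=True)          # (1, H, W)
    max_map = feat.max(axis=0, keepdims=True)           # (1, H, W)
    attn = 1.0 / (1.0 + np.exp(-(avg_map + max_map)))   # sigmoid gate in (0, 1)
    return feat * attn                                  # same shape (C, H, W)
```

Positions with strong responses across channels are kept close to their original magnitude, while weak positions are suppressed.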

Study on Image Processing Techniques Applying Artificial Intelligence-based Gray Scale and RGB scale

  • Lee, Sang-Hyun; Kim, Hyun-Tae
    • International Journal of Advanced Culture Technology, v.10 no.2, pp.252-259, 2022
  • Artificial intelligence is used in fusion with camera-based image processing techniques. Image processing technology processes objects in an image received from a camera in real time and is used in various fields such as security monitoring and medical image analysis. If such image processing reduces recognition accuracy, providing incorrect information to medical image analysis or security monitoring may cause serious problems. Therefore, this paper combines the YOLOv4-tiny model with image processing algorithms and uses the COCO dataset for training. For gray-scale images, five image processing methods are performed: normalization, Gaussian distribution, the Otsu algorithm, equalization, and gradient operation. For RGB images, three methods are performed: equalization, Gaussian blur, and gamma correction. Among the nine algorithms applied in this paper, the equalization and Gaussian blur model showed the highest object detection accuracy of 96%, and the gamma correction (RGB) model showed the highest object detection rate of 89% outdoors in daytime. The image binarization model showed the highest object detection rate of 89% outdoors at night.
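Two of the preprocessing steps named above, gamma correction and histogram equalization, can be sketched directly on 8-bit arrays (a generic sketch, not the paper's implementation; OpenCV's `cv2.LUT` and `cv2.equalizeHist` provide the same operations):

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Gamma correction of an 8-bit image via a lookup table
    (gamma < 1 brightens, gamma > 1 darkens)."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

def equalize(img):
    """Histogram equalization of a single-channel 8-bit image:
    remap intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
    return cdf.astype(np.uint8)[img]
```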

Research on Human Posture Recognition System Based on The Object Detection Dataset (객체 감지 데이터 셋 기반 인체 자세 인식시스템 연구)

  • Liu, Yan; Li, Lai-Cun; Lu, Jing-Xuan; Xu, Meng; Jeong, Yang-Kwon
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.1, pp.111-118, 2022
  • In computer vision research, two-dimensional human pose estimation is a very broad research direction, especially in pose tracking and behavior recognition, where it has great research significance. Acquiring human pose targets, which is essentially the study of how to accurately identify human targets in pictures, has been a hot research topic in recent years. Human pose recognition is used both in artificial intelligence and in daily life. The quality of pose recognition is mainly determined by the success rate and the accuracy of the recognition process, so the recognition rate reflects the importance of human pose recognition. In this work, the human body is divided into 17 key points for labeling, and the key points are further segmented to ensure the accuracy of the labeling information. For recognition, the comprehensive MS COCO dataset is used for deep learning: a neural network model is designed and trained on a large number of samples, progressing from simple step-by-step training to efficient training, so that a good accuracy rate can be obtained.
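The 17 key points mentioned above follow the MS COCO keypoint annotation convention, where each person is stored as a flat list of (x, y, visibility) triplets. A small decoder sketch:

```python
# The 17 COCO keypoints in their standard annotation order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def decode_keypoints(flat):
    """COCO stores each person as a flat [x1, y1, v1, ..., x17, y17, v17] list,
    where v is a visibility flag (0: unlabeled, 1: occluded, 2: visible)."""
    return {name: (flat[3 * i], flat[3 * i + 1], flat[3 * i + 2])
            for i, name in enumerate(COCO_KEYPOINTS)}
```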

Aerial Dataset Integration For Vehicle Detection Based on YOLOv4

  • Omar, Wael; Oh, Youngon; Chung, Jinwoo; Lee, Impyeong
    • Korean Journal of Remote Sensing, v.37 no.4, pp.747-761, 2021
  • With the increasing application of UAVs in intelligent transportation systems, vehicle detection in aerial images has become an essential engineering technology with academic research significance. In this paper, a vehicle detection method for aerial images based on the YOLOv4 deep learning algorithm is presented. At present, the best-known datasets are VOC (The PASCAL Visual Object Classes Challenge), ImageNet, and COCO (Microsoft Common Objects in Context), which are applicable to vehicle detection from UAVs. An integrated dataset reflects not only quantity and photo quality but also diversity, which affects detection accuracy. The method integrates three public aerial image datasets, VAID, UAVD, and DOTA, in a form suitable for YOLOv4. The trained model shows good test results, especially for small objects, rotated objects, and compact, dense objects, and meets real-time detection requirements. For future work, we will integrate one more aerial image dataset acquired by our lab to increase the number and diversity of training samples while still meeting the real-time requirements.
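Integrating datasets with different annotation formats for YOLOv4 usually comes down to converting every box into YOLO's normalized label format. A minimal conversion sketch, assuming COCO-style pixel boxes as input (the papers' own converters are not shown here):

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO-style box (x_min, y_min, width, height) in pixels to the
    YOLO label format (x_center, y_center, width, height), normalized to [0, 1]."""
    x, y, w, h = bbox
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)
```

Each YOLO label file then stores one `class x_center y_center width height` line per object.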

Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun; Choo, YeonSeung; Jeong, Hea In; Kim, Chung-Il; Shin, Saim; Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2328-2344, 2022
  • 2D human pose estimation still faces difficulty in low-resolution images. Most existing top-down approaches scale the target human bounding-box image up to a large size and feed the scaled image into the network. Due to up-sampling, artifacts occur in low-resolution target images, and the degraded images adversely affect accurate estimation of the joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding-box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from the multiple images are fused in the network. Moreover, we introduce a guiding channel that induces the multi-resolution input features to affect the network selectively according to the resolution of the target image. We conduct experiments on the MS COCO dataset, a representative dataset for 2D human pose estimation, where our method achieves superior performance compared to the strong HRNet baseline and previous state-of-the-art methods.
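The multi-resolution input idea can be illustrated with plain arrays: rescale the person crop to several sizes, bring each back to a common size, and fuse. This sketch substitutes nearest-neighbour resizing and simple averaging for the paper's learned feature extraction and fusion (both are assumptions for illustration):

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of a (H, W) array to (size, size)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def multi_resolution_fuse(crop, sizes=(8, 16, 32)):
    """Rescale the person crop to several resolutions, upsample each back to
    the largest size, and fuse by averaging (stand-in for learned fusion)."""
    target = max(sizes)
    stack = [resize_nn(resize_nn(crop, s), target) for s in sizes]
    return np.mean(stack, axis=0)
```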

Traffic Signal Recognition System Based on Color and Time for Visually Impaired

  • P. Kamakshi
    • International Journal of Computer Science & Network Security, v.23 no.4, pp.48-54, 2023
  • Nowadays, blind people find it very difficult to cross roads and must be vigilant with every step they take. To resolve this problem, Convolutional Neural Networks (CNNs) are a good method to analyze the data and automate the model without human intervention. In this work, a traffic signal recognition system for the visually impaired is designed using a CNN. To provide a safe walking environment, a voice message is given according to the light state and timer state at that instant. The developed model consists of two phases. In the first phase, the CNN model is trained to classify different images captured from traffic signals. The Common Objects in Context (COCO) labelled dataset is used, which includes images of classes such as traffic lights, bicycles, and cars; the traffic light is detected with an object detection model using this labelled dataset. The CNN model detects the color of the traffic light and the timer displayed in the traffic image. In the second phase, a text message is generated from the detected light color and timer value and sent to a text-to-speech conversion model to provide voice guidance for the blind person. The developed model recognizes the traffic light color and the countdown timer displayed on the signal for safe crossing; the countdown timer, which is very useful, was not considered in existing models. The proposed model gave accurate results in different scenarios when compared to other models.
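The second phase described above, turning a detected light color and countdown value into a guidance message, can be sketched with a toy heuristic; all thresholds and message strings here are hypothetical, not taken from the paper:

```python
def classify_light(r, g, b):
    """Rough color heuristic for the mean RGB (0-255) of a detected
    traffic-light region; the actual system uses a trained CNN."""
    if r > 150 and g > 150 and b < 100:
        return "yellow"
    if r > 150 and g < 100:
        return "red"
    if g > 150 and r < 100:
        return "green"
    return "unknown"

def guidance(color, seconds_left):
    """Combine light color and countdown into a message for text-to-speech."""
    if color == "green" and seconds_left >= 5:
        return f"Green light, {seconds_left} seconds left. You may cross."
    if color == "green":
        return f"Only {seconds_left} seconds left. Please wait for the next green."
    return "Do not cross. Please wait."
```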

Rotation-robust text localization technique using deep learning (딥러닝 기반의 회전에 강인한 텍스트 검출 기법)

  • Choi, In-Kyu; Kim, Jewoo; Song, Hyok; Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2019.06a, pp.80-81, 2019
  • In this paper, we propose a technique for detecting text with arbitrary orientation in natural scene images. The basic framework for text detection is based on Faster R-CNN [1]. First, a Region Proposal Network (RPN) generates bounding boxes containing text of different orientations. Then, for each bounding box generated by the RPN, feature maps pooled at three different sizes are extracted and merged. From the merged feature map, text/non-text scores, axis-aligned bounding box coordinates, and inclined bounding box coordinates are all predicted. Finally, detection results are obtained using Non-Maximum Suppression (NMS). Training and testing were conducted on the COCO Text 2017 dataset [2], and subjective evaluation confirmed that rotated regions well fitted to inclined text can be obtained.
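The final NMS step can be sketched as the standard greedy procedure on axis-aligned boxes; for the inclined boxes predicted above, an oriented-IoU would replace the IoU used here (that substitution is an assumption of this sketch):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping boxes, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```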


Efficient Super-Resolution of 2D Smoke Data with Optimized Quadtree (최적화된 쿼드트리를 이용한 2차원 연기 데이터의 효율적인 슈퍼 해상도 기법)

  • Choe, YooYeon; Kim, Donghui; Kim, Jong-Hyun
    • Proceedings of the Korean Society of Computer Information Conference, 2021.01a, pp.261-264, 2021
  • This paper proposes a quadtree-based optimization technique that efficiently classifies and partitions the data required to compute super-resolution (SR), enabling fast SR computation. The proposed method downscales the smoke data used as input to reduce the quadtree computation time, and binarizes the smoke density to avoid the loss of density during downscaling. The COCO 2017 dataset was used for training, and the neural network is based on VGG19. To prevent data loss through the convolution layers, the output of the previous layer is added during training, similar to a residual scheme. As a result, the proposed method achieved a speedup of about 15 to 18 times over the previous method.
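The quadtree partitioning step on a binarized density field can be sketched as a recursive subdivision: a block becomes a leaf when it is uniform (all smoke or all empty) or reaches a minimum size. The square, power-of-two mask and the minimum block size are assumptions of this sketch:

```python
import numpy as np

def quadtree_leaves(mask, x=0, y=0, size=None, min_size=2):
    """Partition a binarized (0/1) density mask into quadtree leaves.
    Returns (x, y, size, value) tuples, where value is 1 if the leaf
    contains any smoke. Sketch of the partitioning step only."""
    if size is None:
        size = mask.shape[0]  # assume a square, power-of-two mask
    block = mask[y:y + size, x:x + size]
    if size <= min_size or block.min() == block.max():
        return [(x, y, size, int(block.max()))]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_leaves(mask, x + dx, y + dy, half, min_size)
    return leaves
```

Only leaves marked as containing smoke then need the expensive SR computation, which is where the speedup comes from.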
