• Title/Summary/Keyword: Real-time Object Classification

Object Classification Method using Hilbert Scanning Distance (힐버트 스캔 거리값을 이용한 물체식별 알고리즘)

  • Choi, Jeong-Hwan;Baek, Young-Min;Choi, Jin-Young
    • The Transactions of The Korean Institute of Electrical Engineers / v.57 no.4 / pp.700-705 / 2008
  • In this paper, we propose an object classification algorithm for real-time surveillance systems, based on silhouette template matching. The silhouette of a detected object is extracted and compared with representative template models stored in a database. Our algorithm is similar to pixel-based template matching schemes such as the Hausdorff distance, but we compare 1D image arrays obtained by scanning along a Hilbert path rather than 2D regions. This transformation reduces the computational burden of measuring similarity between the detected image and the template images. Experimental results show robust, real-time classification performance, even on low-resolution images. (A minimal sketch of the Hilbert mapping appears below.)
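
As a rough illustration of the core idea, mapping a 2D silhouette to a 1D signature along a Hilbert curve, here is a minimal Python sketch. The paper's exact distance measure is not given in the abstract, so a simple L1 comparison stands in for it; the silhouette is assumed to be a binary 2^order x 2^order array.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map a 1D Hilbert-curve index d to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                  # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_signature(silhouette, order):
    """Flatten a binary silhouette into a 1D array following the Hilbert path."""
    n = 1 << order
    return np.array([silhouette[y, x]
                     for x, y in (hilbert_d2xy(order, d) for d in range(n * n))])

def classify(silhouette, templates, order):
    """Pick the template whose Hilbert signature is closest in L1 distance.
    `templates` maps a class name to a precomputed signature array, e.g.
    {"human": hilbert_signature(human_tpl, 6), "vehicle": ...}."""
    sig = hilbert_signature(silhouette, order)
    return min(templates, key=lambda name: np.abs(sig - templates[name]).sum())
```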

Development of a Deep Learning Algorithm for Small Object Detection in Real-Time (실시간 기반 매우 작은 객체 탐지를 위한 딥러닝 알고리즘 개발)

  • Wooseong Yeo;Meeyoung Park
    • Journal of the Korean Society of Industry Convergence / v.27 no.4_2 / pp.1001-1007 / 2024
  • Deep learning algorithms for real-time object detection play a crucial role in applications such as autonomous driving, traffic monitoring, health care, and water quality monitoring. Object size significantly impacts detection accuracy, and data dominated by small objects can cause models to underfit. This study therefore developed a deep learning model that detects small objects quickly while providing more accurate predictions. The RE-SOD (Residual block based Small Object Detector) developed in this research improves small-object detection through RGB separation preprocessing and residual blocks. The model achieved an accuracy of 1.0 in image classification and an mAP50-95 of 0.944 in object detection, and its performance was validated by comparison with real-time detectors such as YOLOv5, YOLOv7, and YOLOv8. (A sketch of a residual block follows below.)
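
RE-SOD's architecture is not detailed in the abstract, so the sketch below shows a generic PyTorch residual block together with one plausible reading of "RGB separation" (splitting the input into per-channel maps). Both are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: two 3x3 convolutions plus an identity skip.
    The skip connection helps preserve the fine detail small objects depend on."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)

def rgb_separation(img):
    """One plausible reading of 'RGB separation preprocessing': split a
    (3, H, W) image into three single-channel maps processed independently."""
    return img[0:1], img[1:2], img[2:3]

block = ResidualBlock(64)
out = block(torch.randn(1, 64, 128, 128))   # shape preserved: (1, 64, 128, 128)
```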

Real-Time Object Segmentation in Image Sequences (연속 영상 기반 실시간 객체 분할)

  • Kang, Eui-Seon;Yoo, Seung-Hun
    • The KIPS Transactions:PartB / v.18B no.4 / pp.173-180 / 2011
  • This paper presents an approach to real-time object segmentation on the GPU (Graphics Processing Unit) using CUDA (Compute Unified Device Architecture). Many applications, such as monitoring systems, motion analysis, and object tracking, require real-time processing, and CPU-only object segmentation is often too slow to meet that requirement. NVIDIA's CUDA platform enables general-purpose parallel computation on graphics hardware. We use adaptive Gaussian mixture background modeling for object extraction and CCL (Connected Component Labeling) for classification. GPU and CPU implementations were compared on a 2.4 GHz Core 2 Quad system; the GPU version achieved a 3x-4x speedup over the CPU version. (A CPU-side sketch of the pipeline follows below.)
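
Here is a minimal CPU-side sketch of the same pipeline, using OpenCV's built-in adaptive-GMM background subtractor (MOG2) and connected-component labeling instead of the paper's custom CUDA kernels; the video filename is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")            # placeholder video source
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # adaptive GMM

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                       # per-pixel GMM foreground mask
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (127)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)  # CCL
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 100:                         # ignore small noise blobs
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("segmented", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```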

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm that fuses vision and LiDAR sensors for urban autonomous driving. Classifying and tracking vulnerable road users such as pedestrians, bicycles, and motorcycles is essential for autonomous driving in complex urban environments. A real-time image object detector, YOLO, and an object tracker operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes obtained from YOLO in pixel coordinates are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered by Euclidean distance and the clusters are associated using GNN. In addition, cluster states including position, heading angle, velocity, and acceleration are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment. (A sketch of the homography step appears below.)
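
The first step, projecting a YOLO box into the vehicle frame, can be sketched as follows. The homography entries are calibration-dependent, so the matrix shown is only a placeholder.

```python
import numpy as np

# Placeholder homography from image pixels to the vehicle's local ground plane;
# in practice H comes from camera-to-ground calibration.
H = np.array([[0.01, 0.000, -3.0],
              [0.00, 0.020, -9.0],
              [0.00, 0.001,  1.0]])

def box_to_local(box, H):
    """Project the bottom-center of a YOLO box (x1, y1, x2, y2) onto the
    ground plane in the subject vehicle's local frame."""
    x1, y1, x2, y2 = box
    u, v = (x1 + x2) / 2.0, y2          # bottom-center touches the ground
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]     # normalize homogeneous coordinates

print(box_to_local((300, 200, 360, 420), H))
```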

The Development of Surface Inspection System Using the Real-time Image Processing (실시간 영상처리를 이용한 표면흠검사기 개발)

  • 이종학;박창현;정진양
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.171-171 / 2000
  • We have developed an innovative surface inspection system for automated quality control of steel products at POSCO. Various surface inspection systems, such as linear-CCD and laser-type systems, had previously been installed at cold-rolled strip production lines, but they could not achieve sufficient detection and classification rates or real-time processing performance. To increase detection and classification rates, we use dark-field, bright-field, and transition-field illumination with an area-type CCD camera, and for real-time image processing we use parallel computing. In this paper, we introduce the automatic surface inspection system and its real-time image processing techniques, covering the object detection, defect detection, and classification algorithms and the performance obtained on the production line. (A toy illustration of the defect-detection step follows below.)
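
The abstract names the defect-detection step without describing its algorithm, so the following is only a toy illustration of the general idea: estimate the smooth strip background and flag strong local deviations as defect candidates.

```python
import cv2

def detect_defects(strip, blur=31, thresh=30, min_area=20):
    """Toy defect detector: subtract the smooth background of a strip image
    and flag regions that deviate strongly (bright or dark defects)."""
    gray = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY)
    background = cv2.GaussianBlur(gray, (blur, blur), 0)  # estimate uniform texture
    diff = cv2.absdiff(gray, background)                  # deviations = candidates
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i][:4])                           # (x, y, w, h) boxes
            for i in range(1, n) if stats[i][4] >= min_area]
```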

Real-Time PTZ Camera with Detection and Classification Functionalities (검출과 분류기능이 탑재된 실시간 지능형 PTZ카메라)

  • Park, Jong-Hwa;Ahn, Tae-Ki;Jeon, Ji-Hye;Jo, Byung-Mok;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2C / pp.78-85 / 2011
  • In this paper we propose an intelligent PTZ camera system that detects, classifies, and tracks moving objects. When a moving object is detected, features are extracted for classification and real-time tracking follows. We use a GMM for detection, followed by shadow removal, and Legendre moments for classification. Using the image center point and the object's direction, distance, and velocity, we control the PTZ camera movement without auto-focusing. The real-time system is implemented on a TI DM6446 DaVinci processor. In experiments, the system showed high classification and tracking performance for vehicles moving at both normal and high speeds. (A sketch of the Legendre-moment computation appears below.)
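
The abstract cites Legendre moments without a formula; the sketch below follows the standard definition, λ_mn = ((2m+1)(2n+1)/4) ∫∫ P_m(x) P_n(y) f(x,y) dx dy, approximated as a sum over the pixel grid. The paper's exact normalization may differ.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_moments(img, max_order):
    """Legendre moments of a grayscale image up to order (max_order, max_order)."""
    h, w = img.shape
    x = np.linspace(-1.0, 1.0, w)    # map the pixel grid onto [-1, 1]
    y = np.linspace(-1.0, 1.0, h)
    # Px[k] samples the k-th Legendre polynomial P_k along the x axis
    Px = np.stack([L.legval(x, [0.0] * k + [1.0]) for k in range(max_order + 1)])
    Py = np.stack([L.legval(y, [0.0] * k + [1.0]) for k in range(max_order + 1)])
    dx, dy = 2.0 / (w - 1), 2.0 / (h - 1)
    lam = np.empty((max_order + 1, max_order + 1))
    for m in range(max_order + 1):
        for n in range(max_order + 1):
            norm = (2 * m + 1) * (2 * n + 1) / 4.0
            # Py[n] @ img sums over rows (y); @ Px[m] sums over columns (x)
            lam[m, n] = norm * (Py[n] @ img @ Px[m]) * dx * dy
    return lam

moments = legendre_moments(np.random.rand(64, 64), max_order=4)
print(moments.shape)   # (5, 5) feature matrix, usable as a classifier input
```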

A Study on The Classification of Target-objects with The Deep-learning Model in The Vision-images (딥러닝 모델을 이용한 비전이미지 내의 대상체 분류에 관한 연구)

  • Cho, Youngjoon;Kim, Jongwon
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.20-25 / 2021
  • A target-object classification method was implemented using a deep-learning-based detection model on real-time images. The detection model supports extensive data collection and machine learning processes for classifying similar target objects. A recognition model was implemented by changing the processing structure of the detection model and combining it with a newly developed vision-processing module. To classify target objects, identity and similarity measures were defined and applied to the detection model. Industrial use of the recognition model was also considered by verifying its effectiveness on real-time images of an actual soccer game. The detection model and the newly constructed recognition model were compared and verified on real-time images, and further work was conducted to optimize the recognition model for a real-time environment.

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners (무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법)

  • Ahn, Seung-Uk;Choe, Yun-Geun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.92-100 / 2012
  • A map of a complex environment can be generated by a robot carrying sensors, but a representation built directly from integrated sensor data expresses only spatial occupancy. To execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in an urban environment, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing were developed to guarantee real-time performance and to classify data on the fly as it is acquired. Computation time and recognition rate are analyzed to evaluate the proposed methods, and experimental results demonstrate that the algorithm quickly and efficiently extracts semantic knowledge of a dynamic urban environment. (A sketch of the ground-removal and clustering steps appears below.)
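
The abstract does not spell out the per-step methods, so this sketch substitutes the simplest possible versions, a flat-ground height filter and greedy Euclidean clustering, to show how the middle of the pipeline fits together; SciPy's cKDTree handles the neighbor queries.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_ground(points, z_thresh=0.2):
    """Crude ground elimination: keep points above a height threshold.
    (Assumes flat ground; the paper's method is more sophisticated.)"""
    return points[points[:, 2] > z_thresh]

def euclidean_clusters(points, radius=0.5):
    """Greedy region growing: points within `radius` (in the x-y plane)
    of a cluster member join that cluster."""
    tree = cKDTree(points[:, :2])
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] >= 0:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i, :2], radius):
                if labels[j] < 0:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels

cloud = np.random.rand(1000, 3) * [20, 20, 3]   # synthetic (x, y, z) points
objects = remove_ground(cloud)
labels = euclidean_clusters(objects)
print(f"{labels.max() + 1} clusters found")
```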

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal / v.45 no.5 / pp.847-861 / 2023
  • Dynamic object detection is essential for safe and reliable autonomous driving. Light detection and ranging (LiDAR)-based object detection has recently been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors estimate distance very accurately, they lack texture and color information and have lower resolution than conventional cameras. In addition, performance degrades when a LiDAR-based detection model is applied to different driving environments, or when sensors from different LiDAR manufacturers are used, owing to the domain gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The method operates in real time, making it suitable for integration into autonomous vehicles, and performs well on both our custom dataset and publicly available datasets, demonstrating its effectiveness in real-world road environments. We will also release a novel three-dimensional moving object detection dataset called ETRI 3D MOD.

Real Time Hornet Classification System Based on Deep Learning (딥러닝을 이용한 실시간 말벌 분류 시스템)

  • Jeong, Yunju;Lee, Yeung-Hak;Ansari, Israfil;Lee, Cheol-Hee
    • Journal of IKEEE / v.24 no.4 / pp.1141-1147 / 2020
  • Hornet species are so similar in shape that non-experts find them difficult to classify, and because the insects are small and move quickly, detecting and classifying them in real time is harder still. In this paper, we developed a system that classifies hornet species in real time using a bounding-box-based deep learning algorithm. To minimize the background area included in each bounding box when labeling training images, we propose labeling only the head and body of the hornet. We also experimentally compare existing bounding-box-based object detection algorithms to find the one best suited to detecting hornets and classifying their species in real time. In the experiments, when the mish function was used as the activation in the convolution layers and the hornet images were tested with a YOLOv4 model that applies a Spatial Attention Module (SAM) before the object detection block, the average precision was 97.89% and the average recall was 98.69%. (Sketches of mish and a spatial attention gate follow below.)
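
The two named ingredients can be written compactly in PyTorch. Mish is exactly x·tanh(softplus(x)); the attention module below follows the CBAM formulation of spatial attention, while YOLOv4 actually uses a modified SAM, so treat the sketch as an approximation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * torch.tanh(F.softplus(x))

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: pool across channels, convolve, and
    gate the feature map with a sigmoid mask. (YOLOv4 uses a modified SAM
    that omits the pooling step.)"""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # channel-wise average pool
        mx, _ = x.max(dim=1, keepdim=True)         # channel-wise max pool
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                            # emphasize informative regions

feat = torch.randn(1, 256, 32, 32)
print(SpatialAttention()(mish(feat)).shape)        # torch.Size([1, 256, 32, 32])
```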