• Title/Summary/Keyword: Real-time object recognition

Object detection and distance measurement system with sensor fusion (센서 융합을 통한 물체 거리 측정 및 인식 시스템)

  • Lee, Tae-Min;Kim, Jung-Hwan;Lim, Joonhong
    • Journal of IKEEE, v.24 no.1, pp.232-237, 2020
  • In this paper, we propose an efficient sensor fusion method for object recognition and distance measurement in autonomous vehicles. Typical sensors used in autonomous vehicles are radar, lidar, and cameras. Among these, the lidar sensor is used to create a map of the vehicle's surroundings; however, it performs poorly in bad weather and the sensor is expensive. In this paper, to compensate for these shortcomings, distance is measured with a radar sensor, which is relatively inexpensive and unaffected by snow, rain, and fog, and this is fused with a camera sensor, which has an excellent object recognition rate, to measure the distance to detected objects. The fused video is transmitted to a smartphone in real time through an IP server and can be used in a driving assistance system that assesses the current vehicle situation from inside and outside the vehicle.
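
The radar-camera fusion step described above can be illustrated with a minimal sketch: project each radar return into the image plane and attach its range to the camera bounding box it falls into. This is not the authors' implementation; the pinhole intrinsics (`fx`, `cx`), the flat-ground projection, and the radar/detection data structures are all assumptions made for illustration.

```python
import numpy as np

def fuse_radar_with_detections(radar_points, boxes, fx=800.0, cx=640.0):
    """Assign a radar range to each camera bounding box.

    radar_points: iterable of (azimuth_rad, range_m) returns from the radar.
    boxes: list of (x1, y1, x2, y2) pixel boxes from the camera detector.
    fx, cx: assumed pinhole intrinsics (focal length / principal point, px).
    Returns a list of ranges (or None) aligned with `boxes`.
    """
    ranges = [None] * len(boxes)
    for azimuth, rng in radar_points:
        # Project the radar return onto the image x-axis (flat-ground assumption).
        u = cx + fx * np.tan(azimuth)
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= u <= x2:
                # Keep the nearest return that falls inside the box.
                if ranges[i] is None or rng < ranges[i]:
                    ranges[i] = rng
    return ranges

# Example: two detected vehicles and three radar returns.
boxes = [(100, 200, 300, 400), (700, 220, 900, 420)]
radar = [(-0.5, 18.2), (0.2, 42.7), (0.0, 12.5)]
print(fuse_radar_with_detections(radar, boxes))   # -> [18.2, 42.7]
```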

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.18 no.6, pp.1307-1312, 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and a LiDAR (Light Detection and Ranging) sensor to address the core components of autonomous driving perception, namely object recognition and distance measurement. We extract objects within the scene and generate precise location and distance information for them using the proposed hybrid camera system. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps from which object positions and distance information are generated. To improve distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensor with the generated depth maps. This paper introduces an autonomous vehicle platform that, based on the proposed hybrid camera system, can perceive its surroundings more accurately during operation and provide precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
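
A minimal sketch of the depth-map/LiDAR fusion idea follows: sample the camera-derived depth map at the pixels where sparse LiDAR points project, fit a linear correction, and apply it to the whole map. The function name `refine_depth_with_lidar` and the linear (scale plus offset) correction model are assumptions for illustration; the paper's actual integration method may differ.

```python
import numpy as np

def refine_depth_with_lidar(depth_map, lidar_uv, lidar_depth):
    """Correct a camera-derived depth map using sparse LiDAR ranges.

    depth_map: HxW depth estimates (m) from the multi-focal cameras.
    lidar_uv: Nx2 integer pixel coordinates (u, v) where LiDAR points project.
    lidar_depth: N LiDAR ranges (m) at those pixels.
    Fits lidar ~= a * camera_depth + b and applies the correction to the map.
    """
    cam = depth_map[lidar_uv[:, 1], lidar_uv[:, 0]]
    A = np.stack([cam, np.ones_like(cam)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, lidar_depth, rcond=None)
    return a * depth_map + b

# Toy example: the camera depth is biased (scaled and offset); three LiDAR
# returns are enough to recover the true depths everywhere.
true_depth = np.linspace(5.0, 30.0, 24).reshape(4, 6)
camera_depth = 0.9 * true_depth + 0.5
uv = np.array([[1, 1], [4, 2], [5, 3]])
lidar = true_depth[uv[:, 1], uv[:, 0]]
print(np.abs(refine_depth_with_lidar(camera_depth, uv, lidar) - true_depth).max())
```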

A Study on the Real-time Recognition Methodology for IoT-based Traffic Accidents (IoT 기반 교통사고 실시간 인지방법론 연구)

  • Oh, Sung Hoon;Jeon, Young Jun;Kwon, Young Woo;Jeong, Seok Chan
    • The Journal of Bigdata, v.7 no.1, pp.15-27, 2022
  • Over the past five years, the fatality rate of single-vehicle accidents has been 4.7 times higher than that of all accidents, so a system that can detect and respond to single-vehicle accidents immediately is needed. The IoT (Internet of Things)-based real-time traffic accident recognition system proposed in this study works as follows. An IoT sensor that detects impacts and vehicle entry is attached to the guardrail; when an impact on the guardrail occurs, images of the accident site are analyzed with artificial intelligence and transmitted to a rescue organization so that rescue operations can be performed quickly and damage minimized. We implemented an IoT sensor module that recognizes vehicles entering the monitoring area and detects impacts on the guardrail, an AI-based object detection module trained on vehicle image data, and a monitoring and operation module that manages sensor information and image data in an integrated manner. To validate the system, we measured the impact-detection transmission speed, the object detection accuracy for vehicles and people, and the sensor failure detection accuracy, and confirmed that all target values were met. In the future, we plan to apply the system to actual roads to verify its validity with real data and to commercialize it. This system will contribute to improving road safety.
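
The accident-recognition pipeline can be sketched as a simple event handler: an impact report from the guardrail sensor triggers scene analysis and an alert to the rescue organization. Everything here is hypothetical glue code; the detection function is a placeholder and the alert format is invented for illustration.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ImpactEvent:
    sensor_id: str
    timestamp: float
    g_force: float

def detect_objects(frame):
    """Placeholder for the AI detection module described in the paper.
    Returns canned labels here; a real system would run a trained model."""
    return ["vehicle", "person"]

def handle_impact(event, frame, threshold_g=3.0):
    """If the guardrail impact exceeds the threshold, analyze the scene and
    build an alert message for the rescue organization (hypothetical format)."""
    if event.g_force < threshold_g:
        return None
    alert = {
        "event": asdict(event),
        "detected": detect_objects(frame),
        "action": "notify_rescue",
    }
    return json.dumps(alert)

print(handle_impact(ImpactEvent("guardrail-07", time.time(), 4.2), frame=None))
```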

Real-time moving object tracking and distance measurement system using stereo camera (스테레오 카메라를 이용한 이동객체의 실시간 추적과 거리 측정 시스템)

  • Lee, Dong-Seok;Lee, Dong-Wook;Kim, Su-Dong;Kim, Tae-June;Yoo, Ji-Sang
    • Journal of Broadcast Engineering, v.14 no.3, pp.366-377, 2009
  • In this paper, we implement a real-time system that extracts 3-dimensional coordinates from the left and right images captured by a stereo camera and provides users with a sense of reality through a virtual space driven by those 3-dimensional coordinates. In general, all pixels in the corresponding region are compared for disparity estimation; for real-time processing, however, the proposed algorithm uses only the central coordinates of the corresponding region. In the implemented system, 3D coordinates are obtained from the depth information derived from the estimated disparity, and the user's hand is set as the region of interest (ROI). After the user's hand is detected as the ROI, the system keeps tracking the hand's movement and generates a virtual space that is controlled by the hand. Experimental results show that the implemented system estimates disparity in real time with a mean error of less than 0.68 cm within a distance range of 1.5 m. It also achieved more than 90% accuracy in hand recognition.
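
The disparity-to-distance step underlying such a stereo system follows the standard rectified-stereo relation Z = f·B/d; a minimal sketch with assumed calibration values is shown below (the intrinsics and baseline are illustrative, not the paper's).

```python
import numpy as np

def disparity_to_xyz(u, v, disparity, fx, fy, cx, cy, baseline):
    """Convert a pixel (u, v) with disparity d (px) into camera-frame XYZ (m)
    using the rectified-stereo relation Z = fx * B / d."""
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# Example with assumed calibration: fx = fy = 700 px, baseline = 12 cm.
# A disparity of 56 px puts the point at Z = 700 * 0.12 / 56 = 1.5 m.
print(disparity_to_xyz(u=750, v=420, disparity=56.0,
                       fx=700.0, fy=700.0, cx=640.0, cy=360.0, baseline=0.12))
```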

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems, v.16 no.4, pp.87-98, 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that detects a moving object quickly and accurately under conditions where the background and lighting change in real time. Furthermore, our system robustly detects an object even when the target object is partially occluded by other objects. For effective detection, an eigen-space and FCM (Fuzzy C-means) are combined, and the CONDENSATION algorithm is used to robustly track the detected object. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is constructed from the selected principal components that discriminate well between object and background. Next, an object is detected with FCM applied to the convolution of the eigen-vectors from the previous steps with the input image. Finally, the object is tracked by using the coordinates of the detected object as the input to the CONDENSATION algorithm. Images containing various moving objects at the same time were collected and used as training data so that the system adapts to changes of lighting and background with a fixed camera. Test results show that the proposed method robustly detects objects under changes in lighting and background and under partial movement of the object.
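
The eigen-background idea can be sketched as follows: build a PCA basis from background frames, reconstruct each new frame from that basis, and treat pixels with large reconstruction error as foreground. For brevity this sketch replaces the FCM clustering and CONDENSATION tracking with a plain threshold, so it only illustrates the background-model part of the method.

```python
import numpy as np

def build_eigen_background(frames, k=5):
    """frames: N x (H*W) matrix of flattened grayscale background frames.
    Returns the mean frame and the top-k principal components (eigen-background)."""
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]

def foreground_mask(frame, mean, components, thresh=25.0):
    """Project a frame onto the eigen-background, reconstruct it, and mark
    pixels with a large reconstruction error as foreground."""
    centered = frame - mean
    recon = components.T @ (components @ centered) + mean
    return np.abs(frame - recon) > thresh

# Toy data: 20 noisy background frames of an 8x8 scene, then a frame with an "object".
rng = np.random.default_rng(0)
bg = 100 + rng.normal(0, 2, size=(20, 64))
mean, comps = build_eigen_background(bg, k=3)
test = bg[0].copy()
test[18:22] += 80          # a bright moving object covering a few pixels
print(foreground_mask(test, mean, comps).reshape(8, 8).astype(int))
```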

AI-based incident handling using a black box (블랙박스를 활용한 AI 기반 사고처리)

  • Park, Gi-Won;Lee, Geon-woo;Yu, Junhyeok;Kim, Shin-Hyoung
    • Proceedings of the Korea Information Processing Society Conference, 2021.11a, pp.1188-1191, 2021
  • By combining the black box with the car, users can check video through a cloud server instead of going through the hassle of checking it on a memory card, view black box footage in real time on a PC or smartphone, and check the accelerator and brake operation status and the steering-wheel control record at the time of an accident. In addition, the goal was to accurately identify vehicle accidents and simplify accident handling through AI-based object recognition of black box images using cloud services. Measures can also be prepared to preserve footage even if the black box itself is lost to fire, flooding, or damage in an accident. Under actual driving experimental conditions, it was confirmed that the exact situation before and after an accident can be grasped immediately through the provided object recognition and log recording functions.

Algorithm on Detection and Measurement for Proximity Object based on the LiDAR Sensor (LiDAR 센서기반 근접물체 탐지계측 알고리즘)

  • Jeong, Jong-teak;Choi, Jo-cheon
    • Journal of Advanced Navigation Technology, v.24 no.3, pp.192-197, 2020
  • Recently, research on autonomous driving technologies has pursued the goals of safe vehicle operation and accident prevention. Radar and camera technologies have been used in this research to detect obstacles. More recently, LiDAR sensors have been considered for detecting nearby objects and accurately measuring separation distances in autonomous navigation. LiDAR calculates distance from the time difference of the reflected beams, which allows precise distance measurement; however, it has the disadvantage that the object recognition rate can be reduced depending on atmospheric conditions. In this paper, a measurement algorithm is implemented that processes the point cloud data with trigonometric functions and a linear regression model; it improves real-time object detection and reduces the error in measuring separation distances by improving the reliability of the raw data from the LiDAR sensor. It was verified, using the Python Imaging Library, that the range of object detection errors can be improved.
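
A minimal sketch of the polar-to-Cartesian conversion and line-regression distance estimate follows; the simulated guardrail geometry, noise level, and the x = m·y + c parameterization are assumptions for illustration and do not reproduce the paper's filtering or its PIL-based verification.

```python
import numpy as np

def scan_to_xy(angles_deg, ranges_m):
    """Convert a 2D LiDAR scan from polar (angle, range) to Cartesian x/y,
    with x pointing forward from the sensor."""
    a = np.radians(angles_deg)
    return ranges_m * np.cos(a), ranges_m * np.sin(a)

def fit_surface_and_distance(x, y):
    """Fit the object surface as the line x = m*y + c (seen roughly broadside)
    and return the perpendicular distance from the sensor to that line."""
    m, c = np.polyfit(y, x, 1)
    distance = abs(c) / np.sqrt(m**2 + 1)
    return m, c, distance

# Simulated scan of a guardrail-like surface about 5 m ahead, slightly tilted,
# sampled at 1-degree steps with 2 cm range noise.
rng = np.random.default_rng(1)
angles = np.arange(-15.0, 16.0, 1.0)
a = np.radians(angles)
ranges = 5.0 / (np.cos(a) + 0.2 * np.sin(a)) + rng.normal(0.0, 0.02, size=a.size)

x, y = scan_to_xy(angles, ranges)
m, c, d = fit_surface_and_distance(x, y)
print(f"surface: x = {m:.2f}*y + {c:.2f}, perpendicular distance ~ {d:.2f} m")
```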

Extraction of Workers and Heavy Equipment and Multi-Object Tracking using Surveillance System in Construction Sites (건설 현장 CCTV 영상을 이용한 작업자와 중장비 추출 및 다중 객체 추적)

  • Cho, Young-Woon;Kang, Kyung-Su;Son, Bo-Sik;Ryu, Han-Guk
    • Journal of the Korea Institute of Building Construction, v.21 no.5, pp.397-408, 2021
  • The construction industry has the highest rate of occupational accidents and injuries and the most fatalities among all industries. The Korean government has installed surveillance camera systems at construction sites to reduce occupational accident rates. Construction safety managers monitor potential hazards at the sites through these surveillance systems; however, monitoring them with the naked eye has critical limitations: long monitoring sessions cause high physical fatigue and make it difficult to grasp all incidents in real time. Therefore, this study aims to build a deep-learning-based safety monitoring system that can obtain information on the recognition, location, and identification of workers and heavy equipment at construction sites by applying multiple object tracking with instance segmentation. To evaluate the system's performance, we used the Microsoft Common Objects in Context (COCO) metrics and the Multiple Object Tracking (MOT) challenge metrics. The results show that the system is well suited to efficiently automating surveillance monitoring tasks at construction sites.
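
The data-association core of such a multi-object tracker can be sketched with greedy IoU matching between existing tracks and new detections. This stands in for, and is much simpler than, the instance-segmentation tracker evaluated in the paper; the track IDs and thresholds here are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match existing tracks to new detections by IoU.
    Returns (matches, unmatched_detection_indices)."""
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        best, best_iou = None, iou_thresh
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_id, best))
            used.add(best)
    unmatched = [i for i in range(len(detections)) if i not in used]
    return matches, unmatched

tracks = {"worker-1": (100, 100, 150, 220), "excavator-3": (400, 80, 700, 300)}
detections = [(405, 85, 705, 305), (110, 105, 160, 225), (900, 200, 950, 320)]
print(associate(tracks, detections))
```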

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering, v.24 no.3, pp.495-505, 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), which are mainly used for image recognition, and RNNs (Recurrent Neural Networks), which are mainly used for speech recognition and natural language processing. Among them, CNNs learn filters that generate feature maps, automatically learning features from the data, and have become mainstream thanks to their excellent performance in image recognition. Since then, various algorithms such as R-CNN have been developed to improve the detection performance of CNNs, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, since these deep-learning-based detection algorithms operate on still images, stable object tracking and detection in video requires a separate tracking capability. Therefore, this paper proposes a method of combining a Kalman filter with a deep-learning-based detection network for improved object tracking and detection performance in video. The detection network is YOLOv2, which is capable of real-time processing, and the proposed method achieved a 7.7% IoU improvement over the baseline YOLOv2 network and a processing speed of 20 fps on FHD images.
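
A minimal constant-velocity Kalman filter over the detected box center illustrates the combination described above: the detector supplies noisy measurements, and the filter smooths them and coasts through missed detections. The state layout, noise levels, and the handling of a missed frame are assumptions, not the paper's exact design.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal Kalman filter tracking a box center (cx, cy) with velocity.
    State: [cx, cy, vx, vy]; measurement: [cx, cy] from the detector."""

    def __init__(self, cx, cy, dt=1.0, meas_var=10.0, proc_var=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])
        self.P = np.eye(4) * 100.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * proc_var
        self.R = np.eye(2) * meas_var

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Detector centers for four frames; None marks a missed detection, where the
# filter simply coasts on its prediction.
kf = ConstantVelocityKF(100, 200)
for z in [(102, 204), (108, 211), None, (121, 226)]:
    pred = kf.predict()
    est = kf.update(z) if z is not None else pred
    print(np.round(est, 1))
```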

Multiple Moving Person Tracking based on the IMPRESARIO Simulator

  • Kim, Hyun-Deok;Jin, Tae-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2008.05a, pp.877-881, 2008
  • In this paper, we propose a real-time people tracking system with multiple CCD cameras for security inside a building. The cameras are mounted on the ceiling of the laboratory so that the image data of passing people are fully overlapped. The implemented system recognizes people moving in various directions. To track people even when their images partially overlap, the proposed system estimates and tracks a bounding box enclosing each person in the tracking region. The approximated convex hull of each individual in the tracking area is obtained to provide more accurate tracking information. To achieve this goal, we propose a method for 3D walking-human tracking based on the IMPRESARIO framework that incorporates cascaded classifiers into hypothesis evaluation. The efficiency of adaptively selecting cascaded classifiers is also presented, and we show that cascaded classifiers improve the reliability of the likelihood calculation. Experimental results show that the proposed method can smoothly and effectively detect and track walking humans through environments such as dense forests.
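
The idea of cascaded classifiers inside hypothesis evaluation can be sketched as a sequence of increasingly informative stages that reject unlikely tracking hypotheses early. The two stages below (box-size plausibility, foreground coverage) and all thresholds are hypothetical; they are not the IMPRESARIO framework's actual classifiers.

```python
import numpy as np

def cascaded_likelihood(candidate_box, frame_features, stages):
    """Evaluate a tracking hypothesis with a cascade of classifiers: cheap
    stages run first and reject unlikely candidates early, so later, more
    expensive stages only see a small fraction of hypotheses."""
    likelihood = 1.0
    for stage in stages:
        score = stage(candidate_box, frame_features)
        if score < 0.2:          # early rejection threshold (assumed)
            return 0.0
        likelihood *= score
    return likelihood

# Hypothetical stages: box-size plausibility, then foreground coverage.
def size_stage(box, feats):
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    return 1.0 if 2000 <= area <= 40000 else 0.0

def coverage_stage(box, feats):
    x1, y1, x2, y2 = box
    region = feats["foreground"][y1:y2, x1:x2]
    return float(region.mean())

frame = {"foreground": np.zeros((480, 640), dtype=float)}
frame["foreground"][100:300, 200:300] = 1.0          # one walking person
print(cascaded_likelihood((190, 90, 310, 310), frame, [size_stage, coverage_stage]))
print(cascaded_likelihood((0, 0, 10, 10), frame, [size_stage, coverage_stage]))
```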
