• Title/Summary/Keyword: Pedestrian and Vehicle Detection


V2P Communications for Safety

  • Eyobu, Odongo Steven;Joo, Jhihoon;Han, Dong Seog
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.07a
    • /
    • pp.13-16
    • /
    • 2015
  • In any mobile ad hoc environment, collisions among mobile objects are likely to occur unless there is a certain level of intelligence to detect and avoid them. This detect-and-avoid capability is the key attribute of safety applications in vehicle-to-pedestrian (V2P) communication systems. In this paper, we propose a V2P communications concept for collision detection and avoidance.
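As an illustration of the collision detection and avoidance idea the abstract describes (our sketch, not the paper's actual scheme), a constant-velocity closest-point-of-approach check can decide whether a vehicle and a pedestrian are on course to come dangerously close:

```python
import math

def time_of_closest_approach(p_v, v_v, p_p, v_p):
    """Time (s) at which a vehicle and a pedestrian, each moving at constant
    velocity, are closest to each other; 0 if they are already diverging."""
    rx, ry = p_p[0] - p_v[0], p_p[1] - p_v[1]   # relative position
    vx, vy = v_p[0] - v_v[0], v_p[1] - v_v[1]   # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0:
        return 0.0
    return max(0.0, -(rx * vx + ry * vy) / v2)

def collision_warning(p_v, v_v, p_p, v_p, radius=2.0, horizon=5.0):
    """Warn if the minimum separation within `horizon` seconds
    falls below `radius` metres (both thresholds are assumptions)."""
    t = min(time_of_closest_approach(p_v, v_v, p_p, v_p), horizon)
    dx = (p_p[0] + v_p[0] * t) - (p_v[0] + v_v[0] * t)
    dy = (p_p[1] + v_p[1] * t) - (p_v[1] + v_v[1] * t)
    return math.hypot(dx, dy) < radius

# Vehicle heading east at 10 m/s; pedestrian crossing its path.
print(collision_warning((0, 0), (10, 0), (20, -3), (0, 1.5)))  # True
```

In a real V2P system the positions and velocities would come from exchanged GPS/beacon messages rather than being known exactly.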


Spatial Multilevel Optical Flow Architecture-based Dynamic Motion Estimation in Vehicular Traffic Scenarios

  • Fuentes, Alvaro;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5978-5999
    • /
    • 2018
  • Pedestrian detection is a challenging area in the intelligent vehicles domain. In recent years, many methods have been proposed to efficiently detect motion in images; however, the problem becomes more complex when moving areas must be detected while the vehicle itself is also moving. This paper presents a variational optical flow-based method for motion estimation in vehicular traffic scenarios. We introduce a framework for detecting motion areas with small and large displacements by computing optical flow using a multilevel architecture. The flow field is estimated at the coarsest level and then successively refined up to the finest level. We include a filtering parameter and a warping process using bicubic interpolation to combine the intermediate flow fields computed at each level during optimization and gain better performance. Furthermore, we find that by including a penalization function, our system effectively reduces the presence of outliers and deals with the circumstances expected in real scenes. Experiments are performed on various image sequences from the Daimler Pedestrian Dataset, which includes urban traffic scenarios. Our evaluation demonstrates that, despite the complexity of the evaluated scenes, motion areas can be effectively identified with both moving and static cameras.
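The coarse-to-fine step in such a multilevel architecture propagates the flow field from one pyramid level to the next finer one. A minimal sketch of that propagation (nearest-neighbour upsampling here, in place of the paper's bicubic interpolation):

```python
def upsample_flow(flow):
    """Propagate a coarse-level flow field (2-D grid of (u, v) vectors) to
    the next finer pyramid level: double the grid resolution and double the
    displacement magnitudes, since one coarse pixel spans two fine pixels."""
    h, w = len(flow), len(flow[0])
    fine = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            u, v = flow[y // 2][x // 2]       # nearest coarse vector
            fine[y][x] = (2 * u, 2 * v)       # scale displacement
    return fine

coarse = [[(1.0, 0.5), (0.0, 0.0)],
          [(0.0, 0.0), (-1.0, 2.0)]]
fine = upsample_flow(coarse)
print(len(fine), len(fine[0]))  # 4 4
print(fine[0][0])               # (2.0, 1.0)
```

In the full method, the upsampled field then initializes (warps) the optimization at the finer level, so only the residual motion needs to be estimated there.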

3D Detection of Obstacle Distribution and Mapping for Walking Guide of the Blind (시각 장애인 보행안내를 위한 장애물 분포의 3차원 검출 및 맵핑)

  • Yoon, Myoung-Jong;Jeong, Gu-Young;Yu, Kee-Ho
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.2
    • /
    • pp.155-162
    • /
    • 2009
  • In a walking guide robot, the guide vehicle detects the obstacle distribution in the walking space using range sensors and generates a 3D grid map that maps the obstacle information onto a tactile display; the obstacle information is then conveyed to a blind pedestrian through tactile feedback. Based on this information, the user plans a walking route and controls the guide vehicle. This paper proposes an algorithm for the 3D detection of an obstacle distribution and a method for mapping the generated obstacle map onto the tactile display device. An experiment on the 3D detection of an obstacle distribution using ultrasonic sensors is performed and evaluated. The experimental system consists of ultrasonic sensors and a control system. In the experiment, fixed obstacles on the ground, a moving obstacle, and a down-step are detected. The performance of the 3D detection of an obstacle distribution and space mapping is verified through the experiment.
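The pipeline from range readings to a tactile pin array can be sketched in 2-D (our simplification of the paper's 3D grid; the grid size, cell size, and pin count are assumptions):

```python
import math

GRID = 10   # 10 x 10 occupancy cells
CELL = 0.5  # metres per cell

def build_obstacle_grid(readings):
    """Mark occupied cells from (angle_deg, range_m) ultrasonic readings.
    0 deg points straight ahead (+y); the sensor sits at the middle of the
    grid's near edge."""
    grid = [[0] * GRID for _ in range(GRID)]
    for angle, rng in readings:
        x = rng * math.sin(math.radians(angle))
        y = rng * math.cos(math.radians(angle))
        col = int(x / CELL) + GRID // 2
        row = int(y / CELL)
        if 0 <= row < GRID and 0 <= col < GRID:
            grid[row][col] = 1
    return grid

def to_tactile(grid, pins=5):
    """Downsample the grid to a coarse pin array for a tactile display:
    a pin is raised if any cell in its block is occupied."""
    k = GRID // pins
    return [[int(any(grid[r * k + i][c * k + j]
                     for i in range(k) for j in range(k)))
             for c in range(pins)] for r in range(pins)]

g = build_obstacle_grid([(0, 2.2), (45, 1.5)])  # one ahead, one to the right
t = to_tactile(g)
print(t[2][2], t[1][3])  # 1 1
```

The actual system additionally encodes height (e.g. a down-step) in the 3D map; this sketch only shows the planar occupancy-to-pins mapping.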

Development of Vehicle and/or Obstacle Detection System using Heterogenous Sensors (이종센서를 이용한 차량과 장애물 검지시스템 개발 기초 연구)

  • Jang, Jeong-Ah;Lee, Giroung;Kwak, Dong-Yong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.11 no.5
    • /
    • pp.125-135
    • /
    • 2012
  • This paper proposes a new object detection system that uses two laser scanners and a camera to classify objects and estimate their locations on the road. Such a detection system could support new C-ITS services, such as ADAS (Advanced Driver Assistance Systems) or (semi-)automatic vehicle guidance, which rely on object types and precise positions. The study reviews examples from other countries and the feasibility of an object detection system based on a camera and two laser scanners, develops a heterogeneous sensor fusion method, and presents the results of its implementation in a road environment. The results show that an object detection system in roadside infrastructure is a useful approach for the reliable classification and positioning of road objects such as vehicles, pedestrians, and obstacles in a street. The algorithm was evaluated under ideal conditions, so it still needs to be tested under varied conditions such as lighting and weather. This work should contribute to better object detection and the development of new methods in an improved C-ITS environment.

Using Optical Flow and HoG for Nighttime PDS (야간 PDS를 위한 광학 흐름과 기울기 방향 히스토그램 이용 방법)

  • Cho, Hi-Tek;Yoo, Hyeon-Joong;Kim, Hyoung-Suk;Hwang, Jeng-Neng
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.10 no.7
    • /
    • pp.1556-1567
    • /
    • 2009
  • The death rate of pedestrians in car accidents in Korea is 2.5 times higher than the OECD average. A system that detects pedestrians and alerts drivers could reduce this rate, making a pedestrian detection system (PDS) worth developing. Since the accident rate involving pedestrians is higher at night than in daytime, major automakers are standardizing the adoption of nighttime PDS. However, they usually rely on night vision or multiple sensors, which are expensive. In this paper we propose a method for nighttime PDS using a single wide dynamic range (WDR) monochrome camera in the visible spectrum band. In our experiments, pedestrians were accurately detected whenever most of their edges could be obtained.
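The HoG part of the title refers to histograms of oriented gradients. Its basic building block, the per-cell orientation histogram, can be sketched as follows (a textbook illustration, not the paper's exact implementation):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Histogram of gradient orientations for one image cell (a 2-D list of
    grey values). Orientations are unsigned (0-180 deg) and each pixel's
    vote is weighted by its gradient magnitude."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[int(ang // (180 / bins)) % bins] += mag
    return hist

# A vertical edge: intensity increases left to right, so all gradient
# energy falls into the first (near-horizontal-gradient) bin.
cell = [[c * 10 for c in range(8)] for _ in range(8)]
hist = hog_cell_histogram(cell)
print(hist[0] > 0 and all(v == 0 for v in hist[1:]))  # True
```

A full HoG descriptor then groups cells into blocks, normalizes each block, and concatenates the histograms before classification.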

Designing a smart safe transportation system within a university using object detection algorithm

  • Na Young Lee;Geon Lee;Min Seop Lee;Yun Jung Hong;In-Beom Yang;Jiyoung Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.51-59
    • /
    • 2024
  • In this paper, we propose a novel traffic safety system designed to reduce pedestrian traffic accidents and enhance safety on university campuses. The system performs real-time detection of vehicle speeds in designated areas and monitors the interaction between vehicles and pedestrians at crosswalks. First, utilizing the YOLOv5s model and the Deep SORT method, the system performs speed measurement and object tracking within specified zones. Second, a condition-based output system is developed for crosswalk areas using the YOLOv5s object detection model to differentiate between pedestrians and vehicles. The functionality of the system was validated in real-time operation. Our system is cost-effective, allowing installation using ordinary smartphones or surveillance cameras. The system, applicable not only on university campuses but also in similar problem areas, is anticipated to serve as a solution that enhances safety for both vehicles and pedestrians.
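Once a tracker such as Deep SORT yields per-frame centroids, speed in a calibrated zone reduces to distance over time. A minimal sketch (the frame rate and pixel-to-metre scale are assumed calibration values, not figures from the paper):

```python
def track_speed_kmh(track, fps, metres_per_pixel):
    """Estimate a tracked object's speed (km/h) from its centroid positions
    in consecutive frames, given the camera frame rate and a pixel-to-metre
    scale calibrated for the measurement zone."""
    if len(track) < 2:
        return 0.0
    dist_px = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                  for (x1, y1), (x2, y2) in zip(track, track[1:]))
    seconds = (len(track) - 1) / fps
    return dist_px * metres_per_pixel / seconds * 3.6

# A vehicle moving 20 px/frame at 30 fps with 0.05 m/px = 1 m per frame.
track = [(i * 20, 100) for i in range(10)]
print(track_speed_kmh(track, fps=30, metres_per_pixel=0.05))  # about 108 km/h
```

In practice the scale varies across the image, so real systems map pixels to road coordinates (e.g. via a homography) before measuring distance.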

Pilot Implementation of Intelligence System for Accident Prevention at Railway Level Crossing (철도건널목 지능화시스템 시범 구축)

  • Cho, Bong-Kwan;Ryu, Sang-Hwan;Hwang, Hyeon-Chyeol;Jung, Jae-Il
    • Proceedings of the KSR Conference
    • /
    • 2010.06a
    • /
    • pp.1112-1117
    • /
    • 2010
  • Intelligent safety systems for level crossings that employ information and communication technology have been developed in the USA, Japan, and elsewhere, but the relevant research has not yet been performed in Korea. In this paper, we analyze the causes of railway level crossing accidents and the inherent problems of existing safety equipment. Based on the results, we design an intelligent safety system that prevents collisions between trains and vehicles. The system displays train-approach information in real time on roadside warning devices, informs an approaching train of obstacles detected in the crossing area, and is interconnected with the traffic signal to empty the crossing area before a train arrives. In particular, since abrupt obstacles in the crossing area are the main cause of level crossing accidents, we present a video-based obstacle detection algorithm and verify its performance with prototype hardware; the presented scheme detects both pedestrians and vehicles with good performance. We currently demonstrate the developed railway crossing intelligence system at one crossing of the Young-dong-seon line of Korail with a Sea Train cockpit.
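Video-based obstacle detection in a fixed crossing area is commonly built on background differencing. A minimal sketch of that idea (our illustration; the paper does not disclose its exact algorithm, and the thresholds here are assumptions):

```python
def detect_obstacle(background, frame, diff_thresh=30, area_thresh=4):
    """Flag an obstacle in the crossing area by differencing the current
    frame against an empty-crossing background image: count pixels whose
    grey-level change exceeds `diff_thresh`, and alarm when at least
    `area_thresh` pixels differ."""
    changed = sum(1 for brow, frow in zip(background, frame)
                    for b, f in zip(brow, frow) if abs(f - b) > diff_thresh)
    return changed >= area_thresh

empty = [[10] * 8 for _ in range(8)]
with_car = [row[:] for row in empty]
for y in range(2, 5):
    for x in range(3, 6):
        with_car[y][x] = 200      # a bright 3x3 blob enters the crossing

print(detect_obstacle(empty, empty))     # False
print(detect_obstacle(empty, with_car))  # True
```

A deployed system would also update the background model over time (lighting, weather) and suppress small transient changes before raising an alarm to the approaching train.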


In-Vehicle AR-HUD System to Provide Driving-Safety Information

  • Park, Hye Sun;Park, Min Woo;Won, Kwang Hee;Kim, Kyong-Ho;Jung, Soon Ki
    • ETRI Journal
    • /
    • v.35 no.6
    • /
    • pp.1038-1047
    • /
    • 2013
  • Augmented reality (AR) is currently being applied actively to commercial products, and various types of intelligent AR systems combining both the Global Positioning System and computer-vision technologies are being developed and commercialized. This paper suggests an in-vehicle head-up display (HUD) system that is combined with AR technology. The proposed system recognizes driving-safety information and offers it to the driver. Unlike existing HUD systems, the system displays information registered to the driver's view and is developed for the robust recognition of obstacles under bad weather conditions. The system is composed of four modules: a ground obstacle detection module, an object decision module, an object recognition module, and a display module. The recognition ratio of the driving-safety information obtained by the proposed AR-HUD system is about 73%, and the system has a recognition speed of about 15 fps for both vehicles and pedestrians.

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm that fuses vision and LiDAR sensors for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image-based object detection algorithm, YOLO, and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
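The first step, mapping a pixel bounding box into the vehicle's ground-plane coordinates with a homography, can be sketched as follows (the matrix below is a toy calibration we made up for illustration, not the paper's):

```python
def apply_homography(H, point):
    """Map a pixel (u, v) through a 3x3 homography: multiply in
    homogeneous coordinates, then dehomogenise."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def bbox_ground_point(H, bbox):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; the midpoint of the
    bottom edge is taken as the point where the object touches the road."""
    x_min, y_min, x_max, y_max = bbox
    return apply_homography(H, ((x_min + x_max) / 2, y_max))

# Toy calibration: pure scale + translation (a real H is estimated from
# known image-to-ground point correspondences and has nonzero mixing terms).
H = [[0.02,  0.0,  -6.4],
     [0.0,  -0.05, 24.0],
     [0.0,   0.0,   1.0]]
print(bbox_ground_point(H, (300, 180, 340, 280)))  # (lateral, longitudinal) in m
```

The resulting ground point is what gets angle-matched against the LiDAR tracks in the final association step.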

High-Performance Vision Engine for Intelligent Vehicles (지능형 자동차용 고성능 영상인식 엔진)

  • Lyuh, Chun-Gi;Chun, Ik-Jae;Suk, Jung-Hee;Roh, Tae Moon
    • Journal of Broadcast Engineering
    • /
    • v.18 no.4
    • /
    • pp.535-542
    • /
    • 2013
  • In this paper, we propose an advanced hardware engine architecture for high-speed, high-detection-rate image recognition. We adopt the HOG-LBP feature extraction algorithm and a more parallelized architecture to achieve a higher detection rate and higher throughput. In simulation, the designed engine, which can process about 90 frames per second, detects 97.7% of pedestrians at a false positive per window (FPPW) rate of $10^{-4}$.
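The LBP half of the HOG-LBP feature compares each pixel with its eight neighbours to form a binary code. A minimal sketch of the per-pixel code (one common bit convention; implementations vary in ordering and threshold):

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern at pixel (y, x): each neighbour
    brighter than or equal to the centre contributes one bit, clockwise
    from the top-left. HOG-LBP detectors concatenate histograms of these
    codes with HoG blocks."""
    c = img[y][x]
    neighbours = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
                  img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 90, 10]]
print(lbp_code(img, 1, 1))  # only the neighbour below (bit 5) is set: 32
```

A hardware engine like the one described computes these codes for every pixel in parallel, which is what makes the 90 frames-per-second throughput plausible.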