• Title/Summary/Keyword: pedestrian detection

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon;Kim, Se-jin;Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.2
    • /
    • pp.36-54
    • /
    • 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. The system combines the characteristics of each sensor and is more robust to environmental changes than a video detector alone. Moreover, it is not affected by the time of day and has a lower maintenance cost than an inductive-loop traffic detector. This is accomplished by synthesizing object tracking information from a radar, vehicle classification information from a video detector, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, experiments were conducted for 6 hours over 5 days, during daytime and early evening, on a pedestrian-accessible road. According to the experimental results, the system achieves 88.7% classification accuracy and a 95.5% vehicle detection rate. If the parameters of the system are optimized to adapt to environmental changes, it is expected to contribute to the advancement of ITS.
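
As a rough illustration of the kind of high-level fusion the abstract describes, the sketch below attaches a video-derived vehicle class to each radar track by nearest-neighbor gating and keeps only tracks confirmed by a range-finder gate. The data layout, names, and thresholds are invented for illustration and are not the authors' design.

```python
# Hypothetical sketch of high-level fusion of radar tracks, video-based
# vehicle classes, and an infrared range-finder gate. Names, data layout,
# and thresholds are illustrative assumptions, not the paper's design.
import math
from dataclasses import dataclass

@dataclass
class RadarTrack:
    track_id: int
    x: float  # longitudinal position (m)
    y: float  # lateral position (m)

@dataclass
class VideoDetection:
    x: float
    y: float
    vehicle_class: str  # e.g., "car", "truck", "pedestrian"

def fuse(radar_tracks, video_dets, range_gate_m, gate_m=2.0):
    """Attach a video class to each radar track that lies within gate_m
    of a video detection, keeping only tracks the range finder also
    confirms (closer than range_gate_m)."""
    fused = []
    for t in radar_tracks:
        if math.hypot(t.x, t.y) > range_gate_m:
            continue  # not confirmed by the infrared range finder
        best, best_d = None, gate_m
        for d in video_dets:
            dist = math.hypot(t.x - d.x, t.y - d.y)
            if dist < best_d:
                best, best_d = d, dist
        if best is not None:
            fused.append((t.track_id, best.vehicle_class))
    return fused

tracks = [RadarTrack(1, 12.0, 0.5), RadarTrack(2, 48.0, -1.0)]
dets = [VideoDetection(11.6, 0.4, "car")]
print(fuse(tracks, dets, range_gate_m=30.0))  # [(1, 'car')]
```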

Parking Lot Vehicle Counting Using a Deep Convolutional Neural Network (Deep Convolutional Neural Network를 이용한 주차장 차량 계수 시스템)

  • Lim, Kuoy Suong;Kwon, Jang woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.17 no.5
    • /
    • pp.173-187
    • /
    • 2018
  • This paper proposes a computer vision and deep learning-based technique for a surveillance camera system that counts vehicles as part of a parking lot management system. We applied the You Only Look Once version 2 (YOLOv2) detector and propose a deep convolutional neural network (CNN) based on YOLOv2 with a different architecture and two models. The effectiveness of the proposed architecture is illustrated using the publicly available Udacity self-driving-car dataset. After training and testing, our proposed architecture with the new models obtains 64.30% mean average precision, a better performance than the original YOLOv2 architecture, which achieved only 47.89% mean average precision on the detection of cars, trucks, and pedestrians.
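
For context, mean-average-precision figures like those quoted above rest on IoU-based matching of predicted and ground-truth boxes. A minimal sketch of that matching step follows, with invented boxes rather than the paper's data.

```python
# Minimal sketch of the IoU matching that underlies mAP-style detector
# evaluation. Boxes are (x1, y1, x2, y2); the example data is invented.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_matches(preds, gts, thr=0.5):
    """Greedily match predictions (assumed sorted by descending
    confidence) to ground truth at an IoU threshold, as done per class
    when computing average precision."""
    used, tp = set(), 0
    for p in preds:
        best_j, best_iou = -1, thr
        for j, g in enumerate(gts):
            if j not in used and iou(p, g) >= best_iou:
                best_j, best_iou = j, iou(p, g)
        if best_j >= 0:
            used.add(best_j)
            tp += 1
    return tp, len(preds) - tp, len(gts) - len(used)  # TP, FP, FN

preds = [(10, 10, 50, 50), (60, 60, 90, 90)]
gts = [(12, 11, 52, 49)]
print(count_matches(preds, gts))  # (1, 1, 0)
```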

A Study on Radar Video Fusion Systems for Pedestrian and Vehicle Detection (보행자 및 차량 검지를 위한 레이더 영상 융복합 시스템 연구)

  • Sung-Youn Cho;Yeo-Hwan Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.197-205
    • /
    • 2024
  • At a time when securing driving safety is the most important issue in the development and commercialization of autonomous vehicles, AI and big data-based algorithms are being studied to advance and optimize the recognition and detection of the various static and dynamic objects in front of and around the vehicle. There are many studies that recognize the same vehicle by exploiting the complementary advantages of radar and camera, but they either do not use deep learning-based image processing or, because of radar performance limits, can associate detections as the same target only at short range. Therefore, a fusion-based vehicle recognition method is needed that builds a dataset collectable from radar and camera equipment, computes the error between the two data sources, and recognizes them as the same target. Because data errors arise when detections are judged to be the same object, depending on where the radar and CCTV (video) are installed, this paper aims to develop a technology that links position information according to the installation location.
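
A minimal sketch of the installation-aware linkage the abstract calls for: a radar (range, azimuth) measurement is projected into a shared ground frame through the sensor's mounting pose and associated with a camera detection by a distance gate. The poses, measurements, and the 1.5 m gate are assumptions for illustration.

```python
# Illustrative sketch of linking radar and CCTV detections through each
# sensor's installation pose. Poses, measurements, and the association
# gate are invented examples, not the paper's calibration.
import math

def radar_to_world(r, azimuth_deg, sensor_x, sensor_y, sensor_yaw_deg):
    """Project a (range, azimuth) radar detection into a shared ground
    frame using the radar's installed position and heading."""
    a = math.radians(sensor_yaw_deg + azimuth_deg)
    return sensor_x + r * math.cos(a), sensor_y + r * math.sin(a)

def same_target(radar_pt, cam_pt, gate_m=1.5):
    """Declare two detections the same object if their ground-plane
    positions agree within the gate; the residual is the dataset error
    one would calibrate per installation."""
    err = math.dist(radar_pt, cam_pt)
    return err <= gate_m, err

radar_pt = radar_to_world(25.0, 10.0, sensor_x=0.0, sensor_y=0.0,
                          sensor_yaw_deg=90.0)
cam_pt = (-4.0, 24.0)  # camera detection already mapped to the ground frame
print(same_target(radar_pt, cam_pt))  # (True, ~0.71)
```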

Real-Time Multi-Objects Detection and Interest Pedestrian Tracking in Auto-Controlled Camera Environment (제어 가능한 카메라 환경에서 실시간 다수 물체 검출 및 관심 보행자 추적)

  • Lee, Byung-Sun;Rhee, Eun-Joo
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2007.05a
    • /
    • pp.38-46
    • /
    • 2007
  • This paper proposes a system that analyzes images acquired in real time to detect multiple moving objects and then tracks only the pedestrian of interest by automatically controlling the camera. Multiple object regions are detected using difference images and binarized density values. Within the detected regions, areas caused by swaying trees or vehicle motion are removed using structural and shape information of the human body, so that only the pedestrian-of-interest region remains. The pedestrian of interest is tracked using motion information from centroid differences and the average color of three points obtained with the k-means algorithm. A distant pedestrian of interest is enlarged by zooming to raise the recognition rate, and the camera direction is adjusted automatically according to the pedestrian's position on the screen so that the pedestrian of interest is tracked continuously. Experimental results show that the proposed system detects multiple moving objects in real time, isolates only the pedestrian of interest using the structural characteristics and shape information of the human body, and tracks the pedestrian of interest continuously using motion and color information.
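
Two ingredients of this abstract, frame differencing and centroid-based motion, can be sketched as below; the array sizes and threshold are illustrative, not the authors' values.

```python
# Rough sketch of frame differencing to find moving regions and a
# centroid for tracking. Shapes and the threshold are illustrative;
# this is not the authors' code.
import numpy as np

def moving_mask(prev_gray, curr_gray, thresh=25):
    """Binary mask of pixels that changed between consecutive frames."""
    return np.abs(curr_gray.astype(int) - prev_gray.astype(int)) > thresh

def centroid(mask):
    """Center of mass of the moving region; its frame-to-frame shift is
    the motion cue used to steer the pan/tilt camera toward the target."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:90] = 200  # a synthetic moving blob
print(centroid(moving_mask(prev, curr)))  # (74.5, 59.5)
```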

In-Vehicle AR-HUD System to Provide Driving-Safety Information

  • Park, Hye Sun;Park, Min Woo;Won, Kwang Hee;Kim, Kyong-Ho;Jung, Soon Ki
    • ETRI Journal
    • /
    • v.35 no.6
    • /
    • pp.1038-1047
    • /
    • 2013
  • Augmented reality (AR) is currently being actively applied to commercial products, and various types of intelligent AR systems combining both the Global Positioning System and computer-vision technologies are being developed and commercialized. This paper proposes an in-vehicle head-up display (HUD) system combined with AR technology. The proposed system recognizes driving-safety information and offers it to the driver. Unlike existing HUD systems, the system displays information registered to the driver's view and is developed for the robust recognition of obstacles under bad weather conditions. The system is composed of four modules: a ground obstacle detection module, an object decision module, an object recognition module, and a display module. The recognition ratio of the driving-safety information obtained by the proposed AR-HUD system is about 73%, and the system has a recognition speed of about 15 fps for both vehicles and pedestrians.
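
To make the stated four-module data flow concrete, here is a skeleton of the modules chained together; every function body is a placeholder stub invented for illustration, not the ETRI implementation.

```python
# Skeleton of the four-module pipeline named in the abstract. All
# function bodies are placeholder stubs invented for illustration.
def detect_ground_obstacles(frame):
    return [{"bbox": (100, 120, 160, 260)}]  # stub detector output

def decide_objects(candidates):
    return candidates  # stub: keep candidates that pass sanity checks

def recognize_objects(objects, frame):
    for obj in objects:
        obj["label"] = "pedestrian"  # stub classifier
    return objects

def display_on_hud(objects):
    for obj in objects:  # stub: render registered to the driver's view
        print(obj["label"], "at", obj["bbox"])

frame = object()  # stand-in for a camera frame
display_on_hud(recognize_objects(decide_objects(
    detect_ground_obstacles(frame)), frame))
```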

Design of Pedestrian Detection System Based on Optimized pRBFNNs Pattern Classifier Using HOG Features and PCA (PCA와 HOG특징을 이용한 최적의 pRBFNNs 패턴분류기 기반 보행자 검출 시스템의 설계)

  • Lim, Myeoung-Ho;Park, Chan-Jun;Oh, Sung-Kwun;Kim, Jin-Yul
    • Proceedings of the KIEE Conference
    • /
    • 2015.07a
    • /
    • pp.1345-1346
    • /
    • 2015
  • This paper proposes the design of a pedestrian detection system that extracts HOG-PCA features from pedestrian and background images and detects pedestrians using a polynomial-based RBFNN (Radial Basis Function Neural Network) pattern classifier combined with an optimization algorithm. To detect pedestrians in an input image, features are first extracted in a preprocessing step with the HOG (Histogram of Oriented Gradients) algorithm. Because the extracted features are high-dimensional, classification would require heavy computation and long processing times, so PCA (Principal Component Analysis) is used to reduce them to a lower dimension. For efficient learning, the proposed pRBFNNs pattern classifier is optimized with the PSO (Particle Swarm Optimization) algorithm, which tunes its structure and parameters to improve model performance. From the INRIA2005_person data set, which is widely used for pedestrian detection, 1,200 pedestrian and 1,200 background images are organized into training and validation data to design the classifier; the designed optimal classifier is then used to detect pedestrians in test images and the detection rate is measured.
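
A compact sketch of the HOG-to-PCA-to-classifier pipeline described above, using common library calls; an RBF SVM stands in for the pRBFNN classifier, which is not a stock library component, and the imagery is random placeholder data.

```python
# Sketch of a HOG -> PCA -> classifier pipeline. An RBF SVM stands in
# for the polynomial-based RBFNN (pRBFNN); the data is random
# placeholder imagery, not the INRIA2005_person set.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder 128x64 grayscale windows: 20 "pedestrian", 20 "background".
images = rng.random((40, 128, 64))
labels = np.array([1] * 20 + [0] * 20)

# High-dimensional HOG descriptors, then PCA down to a compact feature.
feats = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for im in images])
pca = PCA(n_components=20).fit(feats)
reduced = pca.transform(feats)

clf = SVC(kernel="rbf").fit(reduced, labels)  # stand-in for pRBFNN+PSO
print(clf.predict(pca.transform(feats[:2])))
```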

Ureteral Injury Caused By Blunt Trauma: A Case Report (둔상에 의한 요관 손상 1례)

  • Kwon, Oh Sang;Mun, Yun Su;Woo, Seung Hwo;Han, Hyun Young;Hwang, Jung Joo;Lee, Jang Young;Lee, Min Koo
    • Journal of Trauma and Injury
    • /
    • v.25 no.4
    • /
    • pp.291-295
    • /
    • 2012
  • Ureteral trauma is rare, accounting for less than 1% of all urologic trauma. However, a missed ureteral injury can result in significant morbidity and mortality. The purpose of this case presentation is to suggest another method for the early detection of ureteral injury in blunt trauma patients. A 47-year-old man was injured in a pedestrian traffic accident. He underwent 3-phase abdominal CT initially and had a short-term follow-up simple X-ray. We suspected a ureteral injury. Our final diagnosis of a ureteral injury was based on follow-up and antegrade pyeloureterography, and he underwent emergency surgery. We detected the ureteral injury early and took definitive action within 24 hours. In blunt trauma, if an abnormal fluid collection in the perirenal retroperitoneal space is detected, the presence of a ureteral injury should be suspected, so a short-term simple X-ray or abdominal CT within a few hours after the initial abdominal CT may be useful.

Accident Prevention Technology at a Level Crossing (철도건널목 사고방지를 위한 방안 연구)

  • Cho, Bong-Kwan;Ryu, Sang-Hwan;Hwang, Hyeon-Chyeol;Jung, Jae-Il
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.12
    • /
    • pp.2220-2227
    • /
    • 2008
  • The safety equipment at railway level crossings, installed at intersections between roads and railway lines, prevents crossing accidents by informing vehicles and pedestrians of approaching trains. Intelligent safety systems for level crossings that employ information and communication technology have been developed in the USA, Japan, and elsewhere, but in Korea the relevant research has not yet been performed. In this paper, we analyze the causes of railway level crossing accidents and the inherent problems of the existing safety equipment. Based on the results, we design an intelligent safety system that prevents collisions between trains and vehicles. The system displays train approach information in real time on roadside warning devices, informs an approaching train of obstacles detected in the crossing area, and is interconnected with the traffic signals to empty the crossing area before the train arrives. In particular, since abrupt obstacles in the crossing area are the main cause of level crossing accidents, we present a video-based obstacle detection algorithm and verify its performance with prototype hardware. We show that the presented scheme detects both pedestrians and vehicles with good performance.
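
The paper's own video-based obstacle detection algorithm is not reproduced here, but a generic running-average background-subtraction sketch of the same flavor looks like the following; the learning rate and thresholds are assumptions.

```python
# Generic background-subtraction sketch in the spirit of the video-based
# obstacle detection the abstract mentions (not the paper's algorithm).
# The learning rate and thresholds are assumptions.
import numpy as np

class BackgroundModel:
    """Running-average background; pixels far from it flag an obstacle."""
    def __init__(self, first_frame, alpha=0.05, thresh=30):
        self.bg = first_frame.astype(float)
        self.alpha, self.thresh = alpha, thresh

    def update(self, frame):
        mask = np.abs(frame.astype(float) - self.bg) > self.thresh
        # Adapt the background only where nothing is moving.
        self.bg[~mask] = ((1 - self.alpha) * self.bg[~mask]
                          + self.alpha * frame[~mask])
        return mask

empty_crossing = np.full((100, 100), 80, dtype=np.uint8)
model = BackgroundModel(empty_crossing)
frame = empty_crossing.copy()
frame[30:60, 40:70] = 200  # a synthetic intruding object
alarm = model.update(frame).mean() > 0.01  # obstacle covers >1% of pixels
print(alarm)  # True
```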

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm that uses vision and LiDAR sensor fusion for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image object detection algorithm (YOLO) and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
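
The first fusion step, mapping a YOLO box into vehicle-local ground coordinates with a homography, can be sketched as follows; the matrix H is a made-up placeholder rather than a real calibration.

```python
# Sketch of mapping a detection box's bottom-center pixel into
# vehicle-local ground coordinates with a homography. H is a made-up
# placeholder, not a real calibration.
import numpy as np

H = np.array([[0.02, 0.0, -6.4],   # placeholder pixel->ground homography
              [0.0, -0.05, 24.0],
              [0.0, 0.001, 1.0]])

def box_to_local(box, H):
    """box = (x1, y1, x2, y2) in pixels; use the bottom-center point,
    where the object touches the ground plane."""
    u, v = (box[0] + box[2]) / 2.0, box[3]
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # (lateral, longitudinal) in meters

print(box_to_local((300, 200, 340, 360), H))  # ~(0.0, 4.41)
```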

Movement Detection Using Keyframes in Video Surveillance System

  • Kim, Kyutae;Jia, Qiong;Dong, Tianyu;Jang, Euee S.
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.1249-1252
    • /
    • 2022
  • In this paper, we propose a conceptual framework that identifies video frames containing the movement of people and vehicles in traffic videos. Automatic selection of frames in motion is an important topic in security and surveillance video because the number of videos to be monitored simultaneously is simply too large for limited human resources. The conventional method of identifying areas in motion is to compute the differences over consecutive video frames, which is costly because of its high computational complexity. In this paper, we reduce the overall complexity by examining only the keyframes (I-frames). The basic assumption is that the period between I-frames (e.g., 1/10 to 3 seconds) is shorter than the usual duration of objects in motion in video (a pedestrian walking, an automobile passing, etc.). The proposed method estimates the likelihood that a video contains motion between I-frames by evaluating the difference between consecutive I-frames against long-term statistics of the previously decoded I-frames of the same video. The experimental results show that the proposed method achieves more than 80% accuracy on short surveillance videos obtained from different locations while keeping the computational complexity as low as 20% of the HM decoder.
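
A minimal sketch of the keyframe-only test described above: each new I-frame is differenced against the previous one and flagged as motion when the change exceeds the long-term statistics of past I-frame differences. The z-score threshold and warm-up length are assumptions.

```python
# Sketch of keyframe-only motion detection: flag motion when the
# I-frame difference exceeds the long-term statistics of past
# differences. Threshold and warm-up length are assumptions.
import numpy as np

class IFrameMotionDetector:
    def __init__(self, k=3.0):
        self.prev, self.diffs, self.k = None, [], k

    def feed(self, iframe):
        """iframe: decoded grayscale I-frame as a 2-D uint8 array."""
        if self.prev is None:
            self.prev = iframe
            return False
        d = float(np.mean(np.abs(iframe.astype(int) - self.prev.astype(int))))
        self.prev = iframe
        self.diffs.append(d)
        if len(self.diffs) < 10:
            return False  # still collecting long-term statistics
        mu, sigma = np.mean(self.diffs[:-1]), np.std(self.diffs[:-1]) + 1e-6
        return d > mu + self.k * sigma  # motion between these I-frames

rng = np.random.default_rng(1)
det = IFrameMotionDetector()
for _ in range(12):  # static scene with sensor noise
    det.feed(rng.integers(78, 82, (90, 120), dtype=np.uint8))
moving = rng.integers(78, 82, (90, 120), dtype=np.uint8)
moving[20:70, 30:90] = 220  # a large moving object appears
print(det.feed(moving))  # True
```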
