• Title/Abstract/Keywords: vehicle-mounted camera

63 search results (processing time: 0.03 s)

Vehicle-Level Traffic Accident Detection on Vehicle-Mounted Camera Based on Cascade Bi-LSTM

  • Son, Hyeon-Cheol;Kim, Da-Seul;Kim, Sung-Young
    • 한국정보기술학회 영문논문지 / Vol. 10, No. 2 / pp.167-175 / 2020
  • In this paper, we propose a traffic accident detection method for vehicle-mounted camera video. In the proposed method, the minimum bounding box coordinates, the central coordinates on the bird's-eye view, the motion vectors of each vehicle object, and the ego-motion of the vehicle equipped with the dash-cam are extracted from the dash-cam video. Using these four kinds of features as the input to a Bi-LSTM (bidirectional LSTM), the accident probability (score) is predicted. To investigate the effect of each input feature on the probability of an accident, we analyze the detection performance when a single feature is used as input and when a combination of features is used as input, defining a different detection model for each case. The Bi-LSTM is used in a cascade, particularly when a combination of features is used as input. The proposed method achieves 76.1% precision and 75.6% recall, which is superior to our previous work.

이동체 내의 헬멧 방위각 추적 시스템 구현 (Implementation of a Helmet Azimuth Tracking System in the Vehicle)

  • 이지훈;정해
    • 한국정보통신학회논문지 / Vol. 24, No. 4 / pp.529-535 / 2020
  • In an armored vehicle that is enclosed on all sides by armor plating against enemy fire, securing an external field of view for the driver is essential. For this purpose, a surveillance camera capable of 360-degree rotation is mounted on the vehicle. The key problem is to recognize the head direction of the helmeted driver so that the external camera rotates in exactly the same direction. This paper introduces a low-cost implementation that uses a MEMS-based AHRS sensor and an illuminance sensor to compensate for the drawbacks of conventional optical approaches. The core idea is to set the camera direction using the difference between the Euler angles detected by a sensor mounted at the camera position and a sensor mounted on the helmet, and to correct the sensors' drift error from time to time with the illuminance sensor. Using the implemented prototype, we show that the camera direction is aligned exactly with the direction in which the driver is looking.
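
The Euler-angle-difference idea above can be sketched for the yaw (azimuth) axis alone. The function below is an illustrative stand-in, not the paper's implementation, and omits the illuminance-based drift correction:

```python
def yaw_command(helmet_yaw_deg, camera_yaw_deg):
    """Signed rotation (degrees) that steers the camera toward the
    helmet heading, wrapped to the shortest arc in (-180, 180]."""
    diff = (helmet_yaw_deg - camera_yaw_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff
```

Wrapping to the shortest arc matters near the 0/360 boundary: a helmet at 10 degrees and a camera at 350 degrees should produce a +20 degree turn, not a -340 degree one.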

초기 차량 검출 및 거리 추정을 중심으로 한 차량 추적 알고리즘 (A Vehicle Tracking Algorithm Focused on the Initialization of Vehicle Detection and Distance Estimation)

  • 이철헌;설성욱;김효성;남기곤;주재흠
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 31, No. 11 / pp.1496-1504 / 2004
  • This paper proposes an algorithm that detects a target vehicle from stereo image sequences acquired by a forward-facing camera mounted on a vehicle driving on the road, and estimates the distance to the tracked vehicle. For vehicle detection, the road region is extracted using lane recognition, and vehicle features are then searched for within the extracted road region. The distance to the tracked vehicle is estimated from the stereo images using a TSS (three-step search) correlogram matching method. Computer simulations show that the proposed method separates, matches, and tracks the target vehicle in images acquired from a moving camera.
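
The TSS matching step can be sketched as a block-matching search whose step size halves each round. The SAD cost and the 4x4 block size below are illustrative choices, not the paper's parameters:

```python
def sad(left, right, lx, ly, rx, ry, b):
    """Sum of absolute differences between two b x b blocks."""
    return sum(abs(left[ly + j][lx + i] - right[ry + j][rx + i])
               for j in range(b) for i in range(b))

def three_step_search(left, right, lx, ly, b=4, step=4):
    """Find the best match in `right` for the block at (lx, ly) in `left`.
    Each round tests the 8 neighbours of the current centre at the current
    step size, moves the centre to the cheapest one, then halves the step."""
    cx, cy = lx, ly
    while step >= 1:
        best_cost = sad(left, right, lx, ly, cx, cy, b)
        best = (cx, cy)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                nx, ny = cx + dx, cy + dy
                if 0 <= nx <= len(right[0]) - b and 0 <= ny <= len(right) - b:
                    cost = sad(left, right, lx, ly, nx, ny, b)
                    if cost < best_cost:
                        best_cost, best = cost, (nx, ny)
        cx, cy = best
        step //= 2
    return cx, cy
```

The horizontal disparity `lx - cx` between the matched stereo blocks is what a system of this kind would convert to distance via the stereo baseline and focal length.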

HMD를 이용한 사용자 자세 기반 항공 촬영용 쿼드로터 시스템 제어 인터페이스 개발 (A Posture Based Control Interface for Quadrotor Aerial Video System Using Head-Mounted Display)

  • 김재승;정종민;김한솔;황남웅;최윤호;박진배
    • 전기학회논문지 / Vol. 64, No. 7 / pp.1056-1063 / 2015
  • In this paper, we develop an interface for an aerial photography platform, consisting of a quadrotor and a gimbal, that is controlled with the human body and head posture. As quadrotors have been widely adopted in industries such as aerial photography, remote surveillance, and infrastructure maintenance, the demand for aerial video and photographs has increased remarkably. Stick-type remote controllers are widely used to control a quadrotor, but they are not an intuitive way of controlling the aerial vehicle and the camera simultaneously. Therefore, a new interface for controlling the aerial photography platform is presented. The presented interface uses the head movement measured by a head-mounted display as a reference for controlling the camera angle, and the body posture measured by Kinect for controlling the attitude of the quadrotor. As the image captured by the camera is displayed on the head-mounted display simultaneously, the user has the experience of flying and can intuitively control the quadrotor and the camera. Finally, the performance of the developed system is shown to verify the effectiveness and superiority of the presented interface.

자율무인잠수정의 수중 도킹을 위한 비쥬얼 서보 제어 알고리즘 (A Visual Servo Algorithm for Underwater Docking of an Autonomous Underwater Vehicle (AUV))

  • 이판묵;전봉환;이종무
    • 한국해양공학회지 / Vol. 17, No. 1 / pp.1-7 / 2003
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels used to investigate sea environments in the study of oceanography. Docking systems are required to increase the capability of AUVs, to recharge their batteries, and to transmit data in real time for specific underwater work, such as repeated jobs at the sea bed. This paper presents a visual servo control system used to dock an AUV into an underwater station. A camera mounted at the nose center of the AUV is used to guide the AUV into the dock. To create the visual servo control system, this paper derives an optical flow model of the camera, where the projected motions on the image plane are described with the rotational and translational velocities of the AUV. This paper combines the optical flow equation of the camera with the AUV's equation of motion, and derives a state equation for the visual servo AUV. Further, this paper proposes a discrete-time MIMO controller that minimizes a cost function. The control inputs of the AUV are automatically generated from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and the control law of the visual servo AUV, simulations of docking the AUV to a target station are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
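
The optical flow model mentioned above is not reproduced in the abstract. As a textbook sketch (not necessarily the paper's exact derivation), for a pinhole camera of focal length $f$, an image point $(x, y)$ of a scene point at depth $Z$ moves, under camera translation $(T_x, T_y, T_z)$ and rotation $(\omega_x, \omega_y, \omega_z)$, as:

```latex
\dot{x} = \frac{x T_z - f T_x}{Z} + \frac{xy}{f}\,\omega_x - \left(f + \frac{x^2}{f}\right)\omega_y + y\,\omega_z
\qquad
\dot{y} = \frac{y T_z - f T_y}{Z} + \left(f + \frac{y^2}{f}\right)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z
```

Combining these image-plane velocities with the vehicle's equations of motion is what yields the state equation used by the visual servo controller.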

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 2 / pp.133-139 / 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and any natural corner features. An interest point detector was used to find the natural features. Using an optical flow detection algorithm, information related to the direction and vehicle translation was defined. This information was used to track the vehicle movements. Random noise related to uneven light disrupted the calculation of the vehicle translation. Thus, to estimate the vehicle translation, a Kalman filter was used to calculate the vehicle position. These algorithms were tested on a vehicle in a real environment. The image processing method could recognize the landmarks precisely, while the Kalman filter algorithm could estimate the vehicle's position accurately. The experimental results confirmed that the proposed approaches can be implemented in practical situations.
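
The Kalman-filtering step described above can be sketched in one dimension. The process and measurement noise parameters `q` and `r` below are illustrative assumptions, not values from the paper:

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Constant-position Kalman filter smoothing noisy position readings.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: position assumed roughly constant
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # correct with the measurement residual
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

Run on translation estimates corrupted by the uneven-lighting noise the abstract mentions, the filter converges toward the underlying position while damping single-frame outliers.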

차량 뒷바퀴 윤곽선을 이용한 근거리 전방차량인식 (Recognition of a Close Leading Vehicle Using the Contour of the Vehicle's Wheels)

  • 노광현;한민홍
    • 제어로봇시스템학회논문지 / Vol. 7, No. 3 / pp.238-245 / 2001
  • This paper describes a method for detecting a close leading vehicle using the contour of the vehicle's rear wheels. The contour of a leading vehicle's rear wheels in a front road image, taken by a B/W CCD camera mounted on the central front bumper of the vehicle, has vertical components and can be discerned clearly in contrast to the road surface. After extracting positive and negative edges with the Sobel operator in the raw image, every point that can be recognized as a feature of the contour of the leading vehicle's wheels is determined. This process can detect the presence of a close leading vehicle, and it also makes it possible to calculate the distance to the leading vehicle and the lateral deviation angle. This method could be useful for developing an LSA (Low Speed Automation) system that relieves the driver's stress in the stop-and-go traffic conditions encountered on urban roads.
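
The positive/negative edge extraction with the Sobel operator can be sketched as follows; this tiny pure-Python convolution is illustrative, not the paper's implementation:

```python
# Horizontal Sobel kernel: responds to vertical edges such as wheel contours.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x(img):
    """Signed horizontal-gradient image: positive at dark-to-bright
    transitions, negative at bright-to-dark (the paper's pos/neg edges)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
    return out
```

Thresholding the positive and negative responses separately gives the two edge maps from which the wheel-contour feature points are picked.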


자율주행차량을 위한 비젼 기반의 횡방향 제어 시스템 개발 (Development of Vision-based Lateral Control System for an Autonomous Navigation Vehicle)

  • 노광현
    • 한국자동차공학회논문집 / Vol. 13, No. 4 / pp.19-25 / 2005
  • This paper presents a lateral control system for an autonomous navigation vehicle that was developed and tested by the Robotics Centre of Ecole des Mines de Paris in France. A robust lane detection algorithm was developed for detecting different types of lane markers in images taken by a CCD camera mounted on the vehicle. RTMaps, a software framework for developing vision and data fusion applications, especially in a car, was used to implement the lane detection and lateral control. The lateral control was tested on urban roads in Paris, and the demonstration was shown to the public during the IEEE Intelligent Vehicle Symposium 2002. Over 100 people experienced the automatic lateral control. The demo vehicle could run stably at a speed of 130 km/h on a straight road and 50 km/h on a road with high curvature.
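
A lateral controller of this kind typically maps the lane-detection outputs to a steering angle. The proportional law, gains, and saturation below are illustrative assumptions, not the controller used on the demo vehicle:

```python
def steering_command(lateral_offset_m, heading_error_rad,
                     k_offset=0.4, k_heading=1.2, max_steer_rad=0.5):
    """Steering angle (rad) from two lane-detection outputs: the lateral
    offset from the lane centre and the heading error relative to the
    lane direction. Gains and the saturation limit are hypothetical."""
    cmd = -k_offset * lateral_offset_m - k_heading * heading_error_rad
    return max(-max_steer_rad, min(max_steer_rad, cmd))
```

Saturating the command keeps a large detected offset from producing a steering angle the actuator (or a passenger) could not tolerate at speed.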

어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지 (Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image)

  • 최윤원;권기구;김종효;나경진;이석규
    • 제어로봇시스템학회논문지 / Vol. 21, No. 8 / pp.766-772 / 2015
  • This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have been fitted with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection on images processed by a computer, the embedded system installed in a vehicle makes it difficult to apply a complicated algorithm because of its inherently low processing performance. In general, an embedded system needs a system-dependent algorithm because it has lower processing performance than a computer. In this paper, the location of an object is estimated from the motion information obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
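
Background subtraction by simple frame differencing, of the kind the abstract describes as cheap enough for an embedded board, can be sketched as follows (an illustrative minimal version, not the paper's algorithm):

```python
def moving_mask(prev_frame, curr_frame, threshold=20):
    """Binary motion mask: 1 where the pixel changed by more than the
    threshold between consecutive frames, else 0."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

def centroid(mask):
    """Centre of the moving pixels, a coarse estimate of object location."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
```

On a real vehicle the ego-motion of the camera also changes the background between frames, which is why the paper compares several previous frames against the current one rather than a single pair.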