• Title/Summary/Keyword: Dash Camera

Development of Board for EMI on Dash Camera with 360° Omnidirectional Angle (360° 전방위 화각을 가진 Dash Camera의 EMI 대응을 위한 Board 개발)

  • Lee, Hee-Yeol; Lee, Sun-Gu; Lee, Seung-Ho
    • Journal of IKEEE / v.21 no.3 / pp.248-251 / 2017
  • In this paper, a board is developed for EMI compliance of a dash camera with a 360° omnidirectional angle of view. The proposed board is designed with a DM and CM input noise reduction circuit and an active EMI filter coupling circuit. The DM and CM input noise reduction circuit uses a differential op-amp circuit to extract the DM noise coupled into the input signal through the parasitic capacitance (CP). To simplify the circuit when applying the active EMI filter coupling circuit, a noise separator is installed so that the noise of the EMI source is compensated by the CM and DM active filters simultaneously. To evaluate the performance of the proposed EMI-compliant board, an authorized accreditation body confirmed that the electromagnetic certification standards for each frequency band are satisfied.
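
As background for the DM/CM separation this abstract refers to, here is a textbook-level sketch (not taken from the paper; the line voltages V_1, V_2 and resistor names R_f, R_in are generic placeholders) of the differential-mode/common-mode decomposition and the ideal differential-amplifier relation used to pick out the DM component:

```latex
% Textbook DM/CM decomposition of the noise on two lines V_1 and V_2,
% and the ideal differential-amplifier output used to extract the DM part.
V_{\mathrm{DM}} = V_1 - V_2, \qquad
V_{\mathrm{CM}} = \frac{V_1 + V_2}{2}, \qquad
V_{\mathrm{out}} \approx \frac{R_f}{R_{\mathrm{in}}}\,\bigl(V_{+} - V_{-}\bigr)
```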

Vehicle-Level Traffic Accident Detection on Vehicle-Mounted Camera Based on Cascade Bi-LSTM

  • Son, Hyeon-Cheol; Kim, Da-Seul; Kim, Sung-Young
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.167-175 / 2020
  • In this paper, we propose a traffic accident detection method for vehicle-mounted cameras. In the proposed method, the minimum bounding-box coordinates, the center coordinates on the bird's-eye view, the motion vectors of each vehicle object, and the ego-motion of the vehicle equipped with the dash cam are extracted from the dash-cam video. Using these four extracted features as the input of a Bi-LSTM (bidirectional LSTM), the accident probability (score) is predicted. To investigate the effect of each input feature on the accident probability, we analyze detection performance when a single feature is used as input and when a combination of features is used as input, defining a separate detection model for each case. The Bi-LSTM is used in a cascade, in particular when a combination of the features is used as input. The proposed method achieves 76.1% precision and 75.6% recall, which is superior to our previous work.
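
To make the cascade Bi-LSTM idea concrete, below is a minimal illustrative sketch, not the authors' code: each feature stream (bounding box, bird's-eye-view center, motion vector, ego-motion; the dimensions shown are hypothetical) is encoded by its own Bi-LSTM, and a second-stage Bi-LSTM fuses the encodings into a per-sequence accident score.

```python
# Illustrative sketch only: a cascade of Bi-LSTMs, one per feature stream,
# fused by a second Bi-LSTM that outputs an accident probability.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Bi-LSTM over one feature stream, e.g. bounding-box coordinates."""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (batch, T, in_dim)
        out, _ = self.lstm(x)                  # (batch, T, 2*hidden)
        return out

class CascadeBiLSTM(nn.Module):
    # Hypothetical per-frame dims: box (4), BEV centre (2), motion vector (2), ego-motion (3)
    def __init__(self, feature_dims=(4, 2, 2, 3), hidden: int = 32):
        super().__init__()
        self.encoders = nn.ModuleList(FeatureEncoder(d, hidden) for d in feature_dims)
        self.fusion = nn.LSTM(2 * hidden * len(feature_dims), hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, streams):                # list of (batch, T, d_i) tensors
        enc = torch.cat([e(s) for e, s in zip(self.encoders, streams)], dim=-1)
        fused, _ = self.fusion(enc)            # (batch, T, 2*hidden)
        score = torch.sigmoid(self.head(fused[:, -1]))   # accident score at the last frame
        return score.squeeze(-1)

model = CascadeBiLSTM()
streams = [torch.randn(8, 30, d) for d in (4, 2, 2, 3)]   # batch of 8 clips, 30 frames each
print(model(streams).shape)                                # torch.Size([8])
```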

Implementation of a unified live streaming based on HTML5 for an IP camera (IP 카메라를 위한 HTML5 기반 통합형 Live Streaming 구현)

  • Ryu, Hong-Nam; Yang, Gil-Jin; Kim, Jong-Hun; Choi, Byoung-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.28 no.9 / pp.99-104 / 2014
  • This paper presents a unified live-streaming method based on Hypertext Markup Language 5 (HTML5) for an IP camera that is independent of the client's browser and is implemented with open-source libraries. Conventional security systems based on analog CCTV cameras are currently being replaced by newer surveillance systems using IP cameras, which offer remote surveillance and monitoring regardless of the device being used, at any time and from any location. However, this approach requires live-streaming protocols to be implemented in order to view real-time video streams, and surveillance is only possible after installing separate plug-ins or special software. Recently, live streaming has been conducted through HTML5 using two different standard protocols, HLS and DASH, which work with Apple and Android products respectively. This paper proposes a live-streaming approach that links to either of the two protocols, which makes the system independent of the browser or OS. The client can monitor real-time video streams without any additional plug-ins. Moreover, by using open-source libraries, development cost and time were reduced. We verified the usefulness of the proposed approach on mobile devices and its extendability to various other applications.
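
One way to picture the "either HLS or DASH, whichever the client plays natively" idea is a tiny server-side selector. The sketch below is an assumption-laden illustration, not the paper's implementation: the manifest paths and the very rough User-Agent check are hypothetical, and real deployments usually combine this with a JavaScript player such as hls.js or dash.js on the client.

```python
# Illustrative sketch: redirect each client to the manifest its platform plays
# natively -- an HLS playlist (.m3u8) for Apple devices, a DASH manifest (.mpd)
# otherwise -- so the HTML5 browser needs no extra plug-in.
from http.server import BaseHTTPRequestHandler, HTTPServer

HLS_URL = "/live/stream.m3u8"    # hypothetical HLS playlist path
DASH_URL = "/live/stream.mpd"    # hypothetical DASH manifest path

class StreamSelector(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Very simplified check: Apple platforms play HLS natively; assume DASH elsewhere.
        is_apple = any(token in ua for token in ("iPhone", "iPad", "Macintosh"))
        manifest = HLS_URL if is_apple else DASH_URL
        self.send_response(302)                  # redirect the player to the chosen manifest
        self.send_header("Location", manifest)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StreamSelector).serve_forever()
```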

Precision Evaluation of Expressway Incident Detection Based on Dash Cam (차량 내 영상 센서 기반 고속도로 돌발상황 검지 정밀도 평가)

  • Sanggi Nam; Younshik Chung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.6 / pp.114-123 / 2023
  • With the development of computer vision technology, video sensors such as CCTV are being used to detect incidents. However, most incidents are currently detected with fixed imaging equipment, so detection has been limited in shaded areas that the imaging range of the fixed equipment does not reach. With the recent development of edge-computing technology, real-time analysis of mobile image information has become possible. The purpose of this study is to evaluate the feasibility of detecting expressway incidents by applying computer vision technology to dash cams. To this end, annotation data were constructed from 4,388 dash-cam still frames collected by the Korea Expressway Corporation and analyzed using the YOLO algorithm. As a result, the prediction accuracy for all objects was over 70%, and the precision for traffic accidents was about 85%. The mAP (mean Average Precision) was 0.769; looking at the AP (Average Precision) for each object, traffic accidents were the highest at 0.904 and debris was the lowest at 0.629.
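
For readers unfamiliar with the AP/mAP figures quoted above, the sketch below shows how per-class Average Precision is typically computed from confidence-ranked detections; the numbers in the toy example are hypothetical and unrelated to the study's results, and mAP is simply the mean of the per-class AP values.

```python
# Illustrative sketch of per-class AP as the area under a precision-recall curve.
import numpy as np

def average_precision(confidences, is_true_positive, num_ground_truth):
    """AP with all-points interpolation over confidence-ranked detections."""
    order = np.argsort(-np.asarray(confidences))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_ground_truth, 1)
    precision = cum_tp / (cum_tp + cum_fp)
    # Enforce a monotonically decreasing precision envelope before integrating.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    return float(np.trapz(precision, recall))

# Toy example: 5 detections of one class against 4 ground-truth objects.
print(average_precision([0.9, 0.8, 0.7, 0.6, 0.5], [1, 1, 0, 1, 0], 4))
# mAP is the mean of such AP values over all object classes.
```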

Development of a Emergency Situation Detection Algorithm Using a Vehicle Dash Cam (차량 단말기 기반 돌발상황 검지 알고리즘 개발)

  • Sanghyun Lee; Jinyoung Kim; Jongmin Noh; Hwanpil Lee; Soomok Lee; Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.4 / pp.97-113 / 2023
  • Swift and appropriate responses to emergency situations, such as objects falling on the road, bring convenience to road users and effectively reduce secondary traffic accidents. In Korea, current intelligent transportation system (ITS)-based detection of emergency road situations relies mainly on loop detectors and CCTV cameras, which only capture road data within the detection range of the equipment. Therefore, a new detection method is needed to identify emergency situations in spatially shaded areas that existing ITS detection systems cannot reach. In this study, we propose a ResNet-based algorithm that detects and classifies emergency situations from vehicle camera footage. We collected front-view driving videos recorded on Korean highways, labeled each video by defining the type of emergency, and trained the proposed algorithm with the data.
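
As a rough illustration of the ResNet-based classification described above (not the authors' model: the class labels, backbone choice, and input shapes are assumptions), a torchvision ResNet can be adapted to classify dash-cam frames by replacing its final fully connected layer.

```python
# Illustrative sketch: fine-tuning setup for classifying dash-cam frames
# into hypothetical emergency categories with a torchvision ResNet.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. normal / fallen object / stopped vehicle / crash -- hypothetical labels

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)           # replace the classification head

frames = torch.randn(2, 3, 224, 224)        # a batch of two preprocessed video frames
logits = model(frames)                      # (2, NUM_CLASSES)
pred = logits.argmax(dim=1)                 # predicted emergency class per frame
print(pred)
```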