• Title/Abstract/Keyword: real-time car detection


Image-based ship detection using deep learning

  • Lee, Sung-Jun; Roh, Myung-Il; Oh, Min-Jae
    • Ocean Systems Engineering / Vol. 10, No. 4 / pp.415-434 / 2020
  • Detecting objects is important for the safe operation of ships, enabling collision avoidance, risk detection, and autonomous sailing. This study proposes a ship detection method for images and videos taken at sea, using one of the state-of-the-art deep neural network-based object detection algorithms. A deep learning model is trained on a public maritime dataset, and the results show that it can detect all types of floating objects and classify them into ten specific classes, including ship, speedboat, and buoy. The proposed model is compared to a generically trained model that detects and classifies objects into general classes such as person, dog, car, and boat, and the results show that the proposed model outperforms it in the detection of maritime objects. Different deep neural network structures are then compared to obtain the best detection performance. The proposed model also achieves a real-time detection speed of approximately 30 frames per second. Hence, it is expected that the proposed model can be used to detect maritime objects and reduce risks while at sea.

Real-Time Implementation of Acoustic Echo Canceller Using TMS320C6711 DSK

  • Heo, Won-Chul; Bae, Keun-Sung
    • 음성과학 / Vol. 15, No. 1 / pp.75-83 / 2008
  • The interior of an automobile is a very noisy environment, with both stationary cruising noise and reverberated music or speech from the audio system. For robust speech recognition in a car, it is necessary to extract the driver's voice command well by removing these background noises. Since the music and speech signals from the car audio system are available, the reverberated music and speech sounds can be removed with an acoustic echo canceller. In this paper, we implement an acoustic echo canceller with a robust double-talk detection algorithm on a TMS320C6711 DSK. We first developed the echo canceller on a PC to verify its echo cancellation performance, then implemented it on the TMS320C6711 DSK. For processing one speech sample at an 8 kHz sampling rate with 256 filter taps, the implemented system required only 0.035 ms and achieved an ERLE of 20.73 dB.
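
The core of such an echo canceller is an adaptive FIR filter that models the loudspeaker-to-microphone path and subtracts the predicted echo from the microphone signal, with ERLE quantifying how much echo power is removed. The following is a minimal NumPy sketch of that idea, not the paper's DSP implementation: it uses a plain NLMS update (the double-talk detector is omitted), synthetic signals, and an assumed 256-tap filter at 8 kHz.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=256, mu=0.5, eps=1e-8):
    """Cancel the echo of `far_end` present in `mic` with an NLMS adaptive filter."""
    w = np.zeros(taps)                 # adaptive filter coefficients
    residual = np.zeros(len(mic))      # echo-cancelled output
    for n in range(taps, len(mic)):
        x = far_end[n - taps + 1:n + 1][::-1]   # newest-first reference samples
        y = w @ x                                # estimated echo
        e = mic[n] - y                           # residual after cancellation
        w += (mu / (x @ x + eps)) * e * x        # NLMS coefficient update
        residual[n] = e
    return residual

def erle_db(mic, residual):
    """Echo Return Loss Enhancement: echo power before vs. after cancellation."""
    return 10 * np.log10(np.mean(mic ** 2) / (np.mean(residual ** 2) + 1e-12))

if __name__ == "__main__":
    fs = 8000                                            # 8 kHz sampling rate
    rng = np.random.default_rng(0)
    far = rng.standard_normal(5 * fs)                    # far-end (audio system) signal
    room = np.exp(-np.arange(256) / 40.0) * rng.standard_normal(256)  # toy room impulse response
    mic = np.convolve(far, room)[:len(far)]              # echo picked up by the microphone
    out = nlms_echo_canceller(far, mic)
    print(f"ERLE after convergence: {erle_db(mic[fs:], out[fs:]):.2f} dB")
```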


실시간 모니터링을 통한 레일절손 검지에 관한 연구 (A Study of Detecting Broken Rail using the Real-time Monitoring System)

  • 김태건; 엄범규; 이희성
    • 한국안전학회지 / Vol. 28, No. 4 / pp.1-7 / 2013
  • Train accidents such as collisions, derailments, fires, and railway crossing accidents can lead directly to fatal outcomes with heavy human casualties. Derailment in particular is involved in most railway accidents and, because it can be accompanied by an overturned train, is far more catastrophic than other factors; preventing it is therefore the most important factor in ensuring railway safety. Some countries have applied detection equipment developed to prevent derailment (e.g., ultrasonic detector car, sleep mode, current detector, optical sensing, optical fiber). In Korea, the existing approach monitors rail condition through the track circuit. However, the Communication Based Train Control (CBTC) system, a recent technology that uses balises (data transmission devices) instead of track circuits, cannot detect rail condition in this way. For this reason, a real-time monitoring system that detects broken rails needs to be developed immediately. This paper first presents an analysis of the domestic and international status of rail break detection technology, then describes the composition and characteristics of the real-time monitoring system, and finally shows, through prototype experiments and operating line tests, that the system can estimate the location and type of a broken rail. We conclude that the system can detect the broken rail section with a location error within ±1 m.

가상 데이터를 활용한 번호판 문자 인식 및 차종 인식 시스템 제안 (Proposal for License Plate Recognition Using Synthetic Data and Vehicle Type Recognition System)

  • 이승주; 박구만
    • 방송공학회논문지 / Vol. 25, No. 5 / pp.776-788 / 2020
  • This paper proposes a vehicle type recognition and license plate character recognition system based on deep learning. Existing systems extract the license plate region through image processing and recognize the characters with a DNN, but their recognition rate drops when the environment changes. The proposed system therefore focuses on real-time detection and on the accuracy drop caused by environmental change, and uses YOLO v3, a one-stage object detection method; with a single RGB camera, it can recognize the vehicle type and license plate characters in real time. Real data are used to train vehicle type recognition and license plate region detection, while only synthetic data are used to train license plate character recognition. The per-module accuracy was 96.39% for vehicle type detection, 99.94% for license plate detection, and 79.06% for license plate character recognition. In addition, accuracy was also measured with YOLO v3 tiny, a lightweight version of the YOLO v3 network.
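
Training the character recognizer on synthetic plates only, as described above, hinges on being able to render labeled plate images on demand. The sketch below is an illustrative, heavily simplified generator and not the authors' pipeline: it renders digit-only plate strings with Pillow's default font and adds blur and noise; real Korean plates also contain a Hangul syllable, which would require a proper Korean font file.

```python
import random
import numpy as np
from PIL import Image, ImageDraw, ImageFilter, ImageFont

DIGITS = "0123456789"

def random_plate_text():
    """Compose a simplified plate string: 2-3 digits, a separator, then 4 digits."""
    head = "".join(random.choices(DIGITS, k=random.choice([2, 3])))
    tail = "".join(random.choices(DIGITS, k=4))
    return f"{head}-{tail}"          # real plates carry a Hangul syllable instead of '-'

def render_plate(text, size=(320, 64)):
    """Render the text on a plain white plate, then blur and add noise to mimic a camera."""
    img = Image.new("RGB", size, (255, 255, 255))
    draw = ImageDraw.Draw(img)
    draw.text((12, 24), text, fill=(0, 0, 0), font=ImageFont.load_default())
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 1.5)))
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.normal(0.0, 8.0, arr.shape)   # sensor-like noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

if __name__ == "__main__":
    # Each saved file name doubles as the ground-truth label for the recognizer.
    for i in range(5):
        label = random_plate_text()
        render_plate(label).save(f"synthetic_plate_{i}_{label}.png")
```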

New Vehicle Verification Scheme for Blind Spot Area Based on Imaging Sensor System

  • Hong, Gwang-Soo; Lee, Jong-Hyeok; Lee, Young-Woon; Kim, Byung-Gyu
    • Journal of Multimedia Information System / Vol. 4, No. 1 / pp.9-18 / 2017
  • Ubiquitous computing is a novel paradigm that is rapidly gaining ground in wireless communications and telecommunications for realizing a smart world. With the rapid development of sensor technology, smart sensor systems have become increasingly common in automobiles. In this study, a new real-time vehicle detection mechanism for the blind spot area is proposed based on imaging sensors. Determining the positions of other vehicles on the road is important for driver assistance systems (DASs) to increase driving safety. Accordingly, blind spot detection of vehicles is addressed using a vehicle detection algorithm designed for blind spots. The proposed vehicle verification utilizes the height and angle of a rear-looking vehicle-mounted camera. Candidate vehicle information is extracted using adaptive shadow detection based on the brightness values of the vehicle-area image. The candidates are then verified using a training set of Haar-like features of candidate vehicles. Using these processes, moving vehicles can be detected in blind spots. Across various experiments, the detection ratio of true vehicles in the blind spot was 91.1%.
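
The two-stage structure described above (shadow-based candidate extraction followed by Haar-feature verification) can be sketched with OpenCV as below. This is only a rough illustration under stated assumptions: the thresholds are arbitrary, the input file name and the trained cascade file `vehicle_rear_cascade.xml` are hypothetical, and a classical Haar cascade classifier stands in for the paper's training-set-based verifier.

```python
import cv2
import numpy as np

def shadow_candidates(gray, roi_top=0.5):
    """Find dark under-vehicle shadow blobs in the lower half of the frame."""
    h, _ = gray.shape
    offset = int(h * roi_top)
    roi = gray[offset:, :]
    # Adaptive threshold: keep pixels much darker than the average road brightness.
    thresh = max(10, int(roi.mean() - 1.5 * roi.std()))
    _, mask = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 9), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, bw, bh = cv2.boundingRect(c)
        if bw > 20 and bw > 2 * bh:                  # shadows appear as wide, flat blobs
            boxes.append((x, y + offset, bw, bh))
    return boxes

def verify_vehicles(gray, boxes, cascade):
    """Keep only candidates whose region above the shadow looks like a vehicle rear."""
    confirmed = []
    for (x, y, w, h) in boxes:
        window = gray[max(0, y - 3 * h):y + h, x:x + w]
        if window.size and len(cascade.detectMultiScale(window, 1.1, 3)) > 0:
            confirmed.append((x, y, w, h))
    return confirmed

if __name__ == "__main__":
    frame = cv2.imread("blind_spot_frame.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical rear-camera frame
    cascade = cv2.CascadeClassifier("vehicle_rear_cascade.xml")        # hypothetical trained cascade
    candidates = shadow_candidates(frame)
    print(f"{len(verify_vehicles(frame, candidates, cascade))} vehicle(s) confirmed in the blind spot")
```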

탑뷰 영상을 이용한 차선, 정지선 및 과속방지턱 인식 (Recognition of Lanes, Stop Lines and Speed Bumps using Top-view Images)

  • 안영선; 곽성우; 양정민
    • 전기학회논문지 / Vol. 65, No. 11 / pp.1879-1886 / 2016
  • In this paper, we propose a real-time recognition algorithm for lanes, stop lines, and speed bumps on roads for autonomous vehicles. First, we generate a top-view from the image transmitted by a camera installed to look at the front of the vehicle. To speed up processing, we simplify the mapping algorithm used to construct the top-view so that only the region of interest (ROI) is mapped. The features of lanes, stop lines, and speed bumps, which are composed of lines, are searched in the edge image of the top-view, followed by labeling and clustering specialized for detecting straight lines. The line width, the distance from the vehicle center, and the curvature of each cluster are considered to select the final candidates. We verify the proposed algorithm on real roads using a commercial car (KIA K7) converted into an autonomous vehicle.
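
The top-view construction is essentially an inverse perspective mapping of a calibrated road ROI, after which lane-like features become near-vertical straight lines that are easy to extract. Below is a small OpenCV sketch of that idea; the four source points are made-up calibration values that depend on camera mounting, the input file name is hypothetical, and a probabilistic Hough transform stands in for the labeling-and-clustering step used in the paper.

```python
import cv2
import numpy as np

def build_top_view(frame, src_pts, view_size=(400, 600)):
    """Warp the front-camera frame into a top-view (bird's-eye) image of the road ROI."""
    w, h = view_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, M, view_size)

def straight_line_segments(top_view):
    """Edge image of the top view, then straight-line extraction (Hough as a stand-in)."""
    edges = cv2.Canny(cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY), 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=60, maxLineGap=10)
    return [] if lines is None else lines[:, 0]

if __name__ == "__main__":
    frame = cv2.imread("front_camera_frame.jpg")      # hypothetical front-camera image
    # Road-plane points (bottom-left, bottom-right, top-right, top-left) in pixels;
    # these depend on camera height and pitch and must be calibrated per vehicle.
    roi = [(120, 700), (1160, 700), (780, 450), (500, 450)]
    top = build_top_view(frame, roi)
    for x1, y1, x2, y2 in straight_line_segments(top):
        cv2.line(top, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
    cv2.imwrite("top_view_lines.png", top)
```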

안전띠 착용 유무에 근거한 두 단계의 충돌 가혹도 수준을 갖는 충돌 판별 알고리즘 (Crash Discrimination Algorithm with Two Crash Severity Levels Based on Seat-belt Status)

  • 박서욱; 이재협
    • 한국자동차공학회논문집 / Vol. 11, No. 2 / pp.148-156 / 2003
  • Many car manufacturers have adopted an aggressive inflator and a lower threshold speed for airbag deployment in order to meet the injury requirement for an unbelted occupant in high-speed crash tests. Consequently, today's occupant safety restraint systems have a weakness: airbag-induced injury in low-speed crash events. This paper proposes a new crash algorithm that reduces this weakness by suppressing airbag deployment in low-speed crashes when the occupant is belted. The proposed algorithm consists of two major blocks: a crash severity algorithm and a deployment logic block. The first block decides the crash severity at two levels by means of velocity and crash energy calculations from the acceleration signal. The second block, implemented with simple AND/OR logic, combines the crash severity level and the seat-belt status to generate firing commands for the airbag and the belt pretensioner. Furthermore, it can be extended to use additional information from a passenger presence detection sensor and a safing sensor. A simulation using real crash data from a 1,800 cc passenger vehicle has been conducted to verify the performance of the proposed algorithm.
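
The two-block structure (crash severity from the integrated acceleration signal, then AND/OR deployment logic gated by seat-belt status) can be expressed in a few lines. The sketch below uses made-up thresholds, an assumed sampling rate, and a synthetic crash pulse purely to show the flow of information; it is not the paper's calibrated algorithm.

```python
import numpy as np

DT = 0.0005   # assumed 2 kHz accelerometer sampling interval, seconds

def crash_severity(accel_g, dv_low=12.0, dv_high=25.0, energy_min=50.0):
    """Classify crash severity (0 = no fire, 1 = low, 2 = high) from a deceleration trace in g.

    Velocity change (delta-V) and an energy-like criterion are accumulated from the
    acceleration signal; all thresholds here are illustrative placeholders.
    """
    a = np.asarray(accel_g) * 9.81                 # m/s^2
    dv = abs(a.sum() * DT) * 3.6                   # velocity change in km/h
    energy = float((a ** 2).sum() * DT)            # crash-energy-like measure
    if dv >= dv_high:
        return 2
    if dv >= dv_low and energy >= energy_min:
        return 1
    return 0

def deployment_logic(severity, belted):
    """Combine the severity level with seat-belt status into firing commands."""
    fire_pretensioner = severity >= 1
    # A belted occupant is restrained by the pretensioner at the lower severity level,
    # so the airbag fires only at the higher level; an unbelted occupant gets the
    # airbag at either level.
    fire_airbag = severity == 2 or (severity == 1 and not belted)
    return fire_airbag, fire_pretensioner

if __name__ == "__main__":
    t = np.arange(0.0, 0.1, DT)
    pulse = 8.0 * np.sin(np.pi * t / 0.1)          # synthetic 100 ms half-sine pulse, 8 g peak
    level = crash_severity(pulse)
    for belted in (True, False):
        airbag, pret = deployment_logic(level, belted)
        print(f"belted={belted}: severity={level}, airbag={airbag}, pretensioner={pret}")
```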

실시간 차선인식 알고리즘을 위한 최적의 멀티코어 아키텍처 디자인 공간 탐색 (Optimal Design Space Exploration of Multi-core Architecture for Real-time Lane Detection Algorithm)

  • 정인규; 김종면
    • 예술인문사회 융합 멀티미디어 논문지 / Vol. 7, No. 3 / pp.339-349 / 2017
  • In this paper, we propose a four-step algorithm for recognizing the lane of a moving vehicle. In the first step, the region of interest is extracted. In the second step, a median filter is used to remove signal noise. In the third step, a binarization algorithm separates the input image into two classes, background and foreground. In the last step, an image erosion algorithm removes the noise and incomplete edges remaining after binarization to obtain clear lane markings. However, such a lane detection algorithm requires a large amount of computation, making real-time processing difficult. Therefore, in this paper the real-time lane departure detection algorithm is implemented in parallel on a multi-core architecture. In addition, to explore the optimal multi-core architecture for the lane departure detection algorithm, eight different processing element configurations were evaluated; the simulation results show that a 40×40 processing element configuration achieves the best performance, energy efficiency, and area efficiency.
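
The four steps listed above map almost one-to-one onto standard OpenCV calls, which is essentially what the parallelized kernels compute per frame. The following serial sketch is only illustrative: the ROI fraction, kernel sizes, and the use of Otsu's method for the unspecified binarization step are assumptions, and the input file name is hypothetical.

```python
import cv2
import numpy as np

def lane_mask(frame, roi_top=0.55):
    """Four-step lane extraction: ROI crop, median filter, binarization, erosion."""
    h = frame.shape[0]
    roi = frame[int(h * roi_top):, :]                              # 1) region of interest (lower road area)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 5)                             # 2) median filter removes signal noise
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # 3) background/foreground binarization
    eroded = cv2.erode(binary, np.ones((3, 3), np.uint8))          # 4) erosion removes residual noise and edges
    return eroded

if __name__ == "__main__":
    frame = cv2.imread("road_frame.jpg")          # hypothetical front-camera frame
    cv2.imwrite("lane_mask.png", lane_mask(frame))
```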

적응형 헤드 램프 컨트롤을 위한 야간 차량 인식 (Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System)

  • 김현구; 정호열; 박주현
    • 대한임베디드공학회논문지 / Vol. 6, No. 1 / pp.8-15 / 2011
  • This paper presents an effective method for detecting vehicles ahead of a camera-equipped car during nighttime driving. The proposed method detects vehicles by detecting their headlights and taillights using image segmentation and clustering techniques. First, in order to effectively extract the spotlights of interest, a pre-processing step based on a camera lens filter and a labeling method is applied to road-scene images. Second, to spatially cluster the detected lamps into vehicles, a grouping process uses a light-tracking method and locates vehicle lighting patterns. For evaluation, the method was implemented on a DaVinci 7437 DSP board with a visible-light mono camera and tested on urban and rural roads. In the tests, the classification performance was above 89% precision and 94% recall in a real-time environment.
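
The lamp-based detection described above boils down to segmenting bright spots and grouping them into left/right lamp pairs at a similar image height. The sketch below shows that idea with plain OpenCV; the brightness threshold and pairing distances are arbitrary assumptions, the input file name is hypothetical, and the lens-filter pre-processing and light tracking used in the paper are omitted.

```python
import cv2

def bright_spots(gray, thresh=220, min_area=15):
    """Segment bright lamp spots in a night-time frame and return their centroids."""
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)             # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def pair_lamps(spots, max_dy=10, min_dx=30, max_dx=200):
    """Group spots into headlight/taillight pairs lying roughly on the same horizontal line."""
    pairs = []
    for i in range(len(spots)):
        for j in range(i + 1, len(spots)):
            dx = abs(spots[i][0] - spots[j][0])
            dy = abs(spots[i][1] - spots[j][1])
            if dy <= max_dy and min_dx <= dx <= max_dx:
                pairs.append((spots[i], spots[j]))
    return pairs

if __name__ == "__main__":
    frame = cv2.imread("night_road.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical night-driving frame
    lamp_pairs = pair_lamps(bright_spots(frame))
    print(f"{len(lamp_pairs)} candidate vehicle lamp pair(s) found")
```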

Car detection area segmentation using deep learning system

  • Dong-Jin Kwon; Sang-hoon Lee
    • International journal of advanced smart convergence / Vol. 12, No. 4 / pp.182-189 / 2023
  • Recently, object detection and segmentation have emerged as crucial technologies widely used in fields such as autonomous driving systems, surveillance, and image editing. This paper proposes a program that uses the Qt framework to perform real-time object detection and precise instance segmentation by integrating YOLO (You Only Look Once) and Mask R-CNN. The system provides users with a versatile image editing environment, offering features such as selecting specific modes, drawing masks, inspecting detailed image information, and applying various image processing techniques, including deep-learning-based ones. The program takes advantage of the efficiency of YOLO to enable fast and accurate object detection, providing bounding box information. Additionally, it performs precise segmentation using Mask R-CNN, allowing users to accurately distinguish and edit objects within images. The Qt interface ensures an intuitive and user-friendly environment for controlling the program and enhances accessibility. Experiments and evaluations demonstrate that the proposed system is effective in various scenarios. The program provides convenient and powerful image processing and editing capabilities to both beginners and experts, smoothly integrating computer vision technology. This paper contributes to the growth of computer vision applications and shows the potential of integrating various image processing algorithms on a user-friendly platform.
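
Combining a fast one-stage detector with a slower instance-segmentation model, as the abstract describes, typically looks like the sketch below. It is not the authors' Qt program: it assumes the `ultralytics` package with a pretrained YOLOv8 weight file and torchvision's pretrained Mask R-CNN as stand-ins for the unspecified YOLO and Mask R-CNN models, and the input image path is hypothetical.

```python
import cv2
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from ultralytics import YOLO

def detect_and_segment(image_bgr, score_thresh=0.5):
    """Fast bounding boxes from YOLO, then pixel-precise instance masks from Mask R-CNN."""
    # Stage 1: YOLO gives real-time bounding boxes and class labels.
    yolo = YOLO("yolov8n.pt")                        # small pretrained model (assumed available)
    boxes = yolo(image_bgr, verbose=False)[0].boxes.xyxy.cpu().numpy()

    # Stage 2: Mask R-CNN gives instance masks suitable for editing/selection.
    maskrcnn = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = maskrcnn([tensor])[0]
    keep = out["scores"] > score_thresh
    masks = (out["masks"][keep, 0] > 0.5).cpu().numpy()
    return boxes, masks

if __name__ == "__main__":
    frame = cv2.imread("street_scene.jpg")           # hypothetical test image
    boxes, masks = detect_and_segment(frame)
    print(f"YOLO boxes: {len(boxes)}, Mask R-CNN instances: {len(masks)}")
```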