• Title/Summary/Keyword: Vehicle extraction


The study of the extraction of middle-aged driver's cognitive map on the Instrument Panel and comparison with the real vehicle (Instrument Panel에 대한 중년 운전자 인지지도 형상 추출 및 실제 차량 형상과의 비교에 관한 연구)

  • Yu, Seung-Dong;Park, Peom
    • Journal of Korean Society of Industrial and Systems Engineering / v.23 no.61 / pp.1-11 / 2000
  • Ergonomic vehicle design is very important for driver safety and sensibility. Many studies have emphasized the physical factors of the human operator and the usability of control devices, but drivers' cognitive factors, such as the shape of their cognitive maps, have not been well documented. The aim of this research is to find the relationship between the shape of the Instrument Panel (IP) in the driver's cognitive map and that of the real vehicle. To do this, the Sketch Map Method (SMM), an extraction method for cognitive maps, was employed to extract the shape of middle-aged drivers' cognitive maps. In this study, the SMM was modified to formulate the driver's cognitive map, because this step is not included in the existing SMM. Next, the correlation between each individual cognitive map and the shape of the real vehicle's IP was analyzed. The results showed that the positions of the volume control switch and the cigar jack were similar between the two, but the positions of the other controls were not.

  • PDF
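
The per-control correlation step can be illustrated with a minimal sketch. The layout coordinates below are hypothetical placeholders, not data from the study; only the idea of correlating drawn positions with real instrument-panel positions is taken from the abstract.

```python
# Hedged sketch: correlating hypothetical IP control positions from a sketch map
# with positions measured on the real instrument panel (coordinates are made up).
from scipy.stats import pearsonr

controls = ["volume", "cigar_jack", "hazard", "vent", "clock"]
cognitive_map = {"volume": (20, 10), "cigar_jack": (25, 5), "hazard": (5, 12),
                 "vent": (10, 18), "clock": (15, 20)}   # hypothetical (x, y) in cm
real_panel   = {"volume": (21, 11), "cigar_jack": (24, 6), "hazard": (12, 8),
                "vent": (18, 14), "clock": (8, 16)}     # hypothetical (x, y) in cm

for axis, idx in (("x", 0), ("y", 1)):
    drawn = [cognitive_map[c][idx] for c in controls]
    real = [real_panel[c][idx] for c in controls]
    r, p = pearsonr(drawn, real)
    print(f"{axis}-axis correlation: r={r:.2f}, p={p:.3f}")
```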

Vision Based Vehicle Detection and Traffic Parameter Extraction (비젼 기반 차량 검출 및 교통 파라미터 추출)

  • 하동문;이종민;김용득
    • Journal of KIISE:Computer Systems and Theory / v.30 no.11 / pp.610-620 / 2003
  • Various shadows are one of the main factors that cause errors in vision-based vehicle detection. In this paper, two simple methods, a landmark-based method and a BS & Edge method, are proposed for vehicle detection and shadow rejection. In the experiments, the accuracy of vehicle detection was higher than 96%, even while shadows cast by roadside buildings grew considerably. Based on these two methods, vehicle counting, tracking, classification, and speed estimation are performed, so that real-time traffic parameters describing the flow and load of each lane can be extracted.
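
The abstract does not spell out the BS & Edge method, but a common reading is background subtraction combined with an edge check to suppress cast shadows, since shadow regions darken the road yet contain few edges. The sketch below follows that reading with standard OpenCV calls; the video path and thresholds are placeholders, not values from the paper.

```python
# Hedged sketch: background subtraction plus an edge test to reject shadow-only blobs.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # placeholder video path
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # 255 = foreground, 127 = shadow label
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow label
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 80, 160)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                         # ignore small blobs (assumed limit)
            continue
        edge_density = edges[y:y+h, x:x+w].mean() / 255.0
        if edge_density > 0.05:                 # shadows are mostly edge-free
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```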

Hilbert transform based approach to improve extraction of "drive-by" bridge frequency

  • Tan, Chengjun;Uddin, Nasim
    • Smart Structures and Systems / v.25 no.3 / pp.265-277 / 2020
  • Recently, the concept of a "drive-by" bridge monitoring system, which uses indirect measurements from a passing vehicle to extract key parameters of a bridge, has developed rapidly. The natural frequency, one of the key parameters of a bridge, has been successfully extracted from indirect measurements both theoretically and in practice. The bridge frequency is generally calculated by applying the Fast Fourier Transform (FFT) directly. However, it has been demonstrated that as vehicle velocity increases, the frequency resolution of the FFT becomes very low, causing a large extraction error. Moreover, because of the low frequency resolution, it is hard to detect the frequency drop caused by damage or degradation of the bridge's structural integrity. This paper introduces a new technique for bridge frequency extraction based on the Hilbert Transform (HT) that is not restricted by frequency resolution and can therefore improve identification accuracy. In the analytical study, the closed-form solution of the vehicle response associated with the bridge frequency, with the effect of vehicle velocity removed, is derived and discussed. A numerical Vehicle-Bridge Interaction (VBI) model with a quarter-car model is then adopted to demonstrate the proposed approach. Finally, factors that affect the proposed approach are studied, including vehicle velocity, signal noise, and road roughness profile.
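
The paper's closed-form derivation is not reproduced here, but the basic HT step, estimating instantaneous frequency from the analytic signal of a band-passed vehicle response, can be sketched as below. The signal is synthetic and the sampling rate and band edges are assumptions, not values from the paper.

```python
# Hedged sketch: instantaneous frequency from the Hilbert transform of a
# band-passed acceleration record (synthetic stand-in for the vehicle response).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 200.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
signal = np.sin(2 * np.pi * 3.2 * t) + 0.1 * np.random.randn(t.size)  # 3.2 Hz "bridge" component

# Band-pass around the expected bridge frequency (placeholder band, 2-5 Hz).
b, a = butter(4, [2 / (fs / 2), 5 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, signal)

analytic = hilbert(filtered)                  # analytic signal
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
print("median instantaneous frequency: %.2f Hz" % np.median(inst_freq))
```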

Lane Violation Detection System Using Feature Tracking (특징점 추적을 이용한 끼어들기 위반차량 검지 시스템)

  • Lee, Hee-Sin;Lee, Joon-Whoan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.8 no.2 / pp.36-44 / 2009
  • In this paper, we propose a system that detects vehicles violating lane cut-in restrictions using feature point tracking. The overall algorithm consists of three stages: feature extraction, registration and tracking of features for the target vehicle, and detection of the lane-violating vehicle. In the feature extraction stage, features are extracted from the input image using a feature-extraction algorithm suitable for real-time processing, and the tracking-target features are then selected from them. The registered features are tracked using normalized cross correlation (NCC). Finally, lane violation is determined from information on the tracked features. In experiments on images acquired from a road section where cutting in is prohibited, the system showed excellent performance, with a recognition rate of 99.09% and an error rate of 0.9%. A processing speed of 34.48 frames per second was obtained, which is fast enough for real-time processing.

  • PDF
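
NCC-based tracking of a registered feature patch can be illustrated with OpenCV's normalized template matching. The video path, patch size, and NCC threshold below are placeholders, and the feature selection is reduced to a single corner for brevity; this is a sketch of the tracking idea, not the paper's full pipeline.

```python
# Hedged sketch: tracking one registered feature patch across frames with
# normalized cross correlation (NCC) template matching.
import cv2

cap = cv2.VideoCapture("approach_lane.mp4")    # placeholder video path
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Register a feature: strongest corner, with a 21x21 template around it
# (assumes the corner is not at the image border).
corner = cv2.goodFeaturesToTrack(gray, maxCorners=1, qualityLevel=0.01, minDistance=10)
x, y = corner[0].ravel().astype(int)
template = gray[y-10:y+11, x-10:x+11]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, template, cv2.TM_CCORR_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score > 0.9:                             # NCC threshold, assumed
        x, y = top_left
        # (x, y) traces the feature; crossing a lane boundary here would flag a violation.
cap.release()
```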

Deep Learning-based Vehicle Anomaly Detection using Road CCTV Data (도로 CCTV 데이터를 활용한 딥러닝 기반 차량 이상 감지)

  • Shin, Dong-Hoon;Baek, Ji-Won;Park, Roy C.;Chung, Kyungyong
    • Journal of the Korea Convergence Society / v.12 no.2 / pp.1-6 / 2021
  • In modern society, traffic problems are increasing as vehicle ownership increases. In particular, the incidence of highway traffic accidents is low, but their fatality rate is high, so technologies for detecting vehicle abnormalities are being studied, including deep learning-based methods that detect abnormal vehicles such as those stopped due to an accident or engine failure. If such an abnormality on the road can be detected, it becomes possible to respond quickly to the driver's location. In this study, we propose deep learning-based vehicle anomaly detection using road CCTV data. The proposed method preprocesses the road CCTV data with the MOG2 background extraction algorithm to separate the background and the foreground. The foreground corresponds to vehicles with displacement, while an abnormal (stopped) vehicle on the road shows no displacement and is therefore absorbed into the background. Objects are then detected in the extracted background image using YOLOv4, and the vehicles found there are judged to be abnormal.
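
A minimal sketch of this preprocessing idea is given below: MOG2 absorbs a stopped vehicle into the background image, and a detector is then run on that background. The video path and the YOLOv4 cfg/weights file names are placeholders you would have to supply; this is an illustration of the steps named in the abstract, not the paper's implementation.

```python
# Hedged sketch: MOG2 background extraction followed by detection on the
# background image, where a stopped (abnormal) vehicle remains visible.
import cv2

cap = cv2.VideoCapture("highway_cctv.mp4")               # placeholder path
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mog2.apply(frame)                                     # update the background model

background = mog2.getBackgroundImage()                    # stopped vehicles end up here
cap.release()

# Detection on the background image with YOLOv4 via OpenCV's DNN module.
# 'yolov4.cfg' and 'yolov4.weights' are assumed to be available locally.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)
class_ids, scores, boxes = model.detect(background, confThreshold=0.5, nmsThreshold=0.4)
for cid, score, box in zip(class_ids, scores, boxes):
    print("possible stopped vehicle:", box, "confidence", float(score))
```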

A Study on Character Extraction Algorithm for Vehicle License Plate Recognition (자동차번호판 자동인식을 위한 문자추출에 관한 연구)

  • Kim, Jae-Kwang;Choi, Hwan-Soo
    • Proceedings of the KIEE Conference / 1995.07b / pp.965-967 / 1995
  • One of the most difficult tasks in automatic vehicle license plate recognition is the extraction of each character from within the license plate region. In many cases characters, especially the serial numbers of plates, are connected together due to noise and plate accessories, and the recognition process may not succeed without extracting these characters effectively. This paper presents an algorithm that extracts such connected characters very effectively, utilizing mathematical morphology, connected component analysis, and gradient filters. The paper also presents thorough experimental results as well as details of the algorithm.

  • PDF
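
The abstract names mathematical morphology and connected component analysis; a generic sketch of that kind of pipeline on a binarized plate image is given below. The plate image path and size filters are placeholders, and this is not the paper's exact algorithm.

```python
# Hedged sketch: splitting touching characters on a binarized plate image using
# morphological erosion followed by connected component analysis.
import cv2

plate = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)     # placeholder path
_, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Light erosion to break thin bridges between touching characters.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
eroded = cv2.erode(binary, kernel, iterations=1)

n, labels, stats, _ = cv2.connectedComponentsWithStats(eroded, connectivity=8)
for i in range(1, n):                                      # label 0 is the background
    x, y, w, h, area = stats[i]
    if area < 30 or h < plate.shape[0] * 0.3:              # reject noise blobs (assumed limits)
        continue
    char = binary[y:y+h, x:x+w]                            # candidate character region
    print("character box:", (x, y, w, h))
```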

A Study On Automatic Background Extraction and Updating Method (자동 배경 영상 추출 및 갱신 방법에 관한 연구)

  • 김덕래;하동문;김용득
    • Proceedings of the IEEK Conference / 2003.11a / pp.35-38 / 2003
  • In this paper, I propose an automatic background extraction method and a continuous background updating technique. Because vehicles move while the background changes little, regions that move along the time axis are identified and each image is separated into a background and a vehicle image. A method that dynamically assigns the threshold dividing each image frame into vehicle and background regions is applied. By repeating this process, the background image is obtained. Using a Kalman filter, the background image is updated so that it adapts to weather conditions and to environmental changes between day and night. The proposed background extraction algorithm performs better than the existing one, and its feasibility has been verified through simulation.

  • PDF
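
The paper's threshold selection and Kalman update are not given in detail in the abstract; the sketch below uses a simple fixed-gain recursive update (a simplified stand-in for the Kalman filter) via cv2.accumulateWeighted, which captures the idea of continuously adapting the background while excluding moving pixels. The video path, gain, and motion threshold are assumptions.

```python
# Hedged sketch: continuous background update with a fixed-gain recursive filter,
# a simplified stand-in for the paper's Kalman-filter-based update.
import cv2
import numpy as np

cap = cv2.VideoCapture("road.mp4")            # placeholder path
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

alpha = 0.01                                  # update gain, assumed; a Kalman filter would adapt this

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    diff = cv2.absdiff(gray, background)
    moving = diff > 25                        # per-pixel motion test (threshold assumed)
    # Update only non-moving pixels so passing vehicles do not corrupt the background.
    cv2.accumulateWeighted(gray, background, alpha,
                           mask=np.uint8(~moving) * 255)
cap.release()
cv2.imwrite("background.png", background.astype(np.uint8))
```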

Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification

  • Kim, Sung Hee;Pae, Dong Sung;Kang, Tae-Koo;Kim, Dong W.;Lim, Myo Taeg
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2468-2478 / 2018
  • We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their enormous network volume requires high computational cost and long processing time, making real-time applications such as online training difficult. We propose an advanced network that reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed. Cluster max is selected as the main function based on experimental results. To evaluate SFCNN, we conduct an experiment with two conventional CNNs. The network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy in multi-class classification using the Caltech101 dataset. In vehicle classification using the GTI Vehicle Image Database, the network trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy.
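
The abstract gives only the name of the cluster max extraction step; one plausible toy reading is clustering the active spatial positions of a feature map and keeping the maximum activation per cluster, sketched below with hypothetical shapes, a hypothetical cluster count, and KMeans as the clustering choice. This is an illustration of that reading, not the paper's SFCNN.

```python
# Hedged toy sketch: cluster-max extraction over a sparse CNN feature map.
# Shapes, the cluster count, and the clustering method (KMeans) are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feature_map = rng.random((14, 14)) * (rng.random((14, 14)) > 0.8)  # mostly zeros

ys, xs = np.nonzero(feature_map)                   # active spatial positions
values = feature_map[ys, xs]
coords = np.stack([ys, xs], axis=1)

k = min(4, len(coords))                            # number of spatial clusters, assumed
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)

# Keep only the maximum activation of each spatial cluster -> compact sparse descriptor.
cluster_max = np.array([values[labels == c].max() for c in range(k)])
print("sparse descriptor:", cluster_max)
```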

Image Feature-based Electric Vehicle Detection and Classification System Using Machine Learning (머신 러닝을 이용한 영상 특징 기반 전기차 검출 및 분류 시스템)

  • Kim, Sanghyuk;Kang, Suk-Ju
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.7 / pp.1092-1099 / 2017
  • This paper proposes a novel image feature-based method for vehicle detection and classification. The proposed system has two main processes: database construction and vehicle classification. In database construction, training-set images are rigorously screened so that only appropriate images are chosen. Haar features are trained on these images for vehicle detection, and histogram of oriented gradients (HOG) features are extracted for vehicle classification with a support vector machine. Additionally, in the detection and classification processes, the region of interest is reset using the number plate to reduce complexity. In the experimental results, the proposed system achieved an accuracy of 0.9776 and an $F_1$ score of 0.9327 for vehicle classification.
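
A generic sketch of the detector/classifier chain named in the abstract is given below: a Haar cascade for detection and HOG features feeding an SVM for classification. The cascade file, HOG window parameters, class labels, and training data are placeholders, not the paper's database or settings.

```python
# Hedged sketch: Haar-cascade vehicle detection, then HOG + SVM classification.
import cv2
from sklearn.svm import SVC

detector = cv2.CascadeClassifier("cars_cascade.xml")        # placeholder cascade file
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_vector(patch):
    patch = cv2.resize(patch, (64, 64))
    return hog.compute(patch).ravel()

# Placeholder training data: grayscale patches with labels (0 = non-EV, 1 = EV, assumed).
train_patches, train_labels = [], []                         # fill from your own database
clf = SVC(kernel="linear")
if train_patches:
    clf.fit([hog_vector(p) for p in train_patches], train_labels)

image = cv2.imread("street.jpg", cv2.IMREAD_GRAYSCALE)       # placeholder image
for (x, y, w, h) in detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=3):
    label = clf.predict([hog_vector(image[y:y+h, x:x+w])])[0] if train_patches else None
    print("vehicle at", (x, y, w, h), "class:", label)
```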

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems / v.19 no.5 / pp.663-672 / 2023
  • Most vehicle detection methods show poor vehicle feature extraction performance at night, and their robustness is reduced; hence, this study proposes a nighttime vehicle detection method based on style transfer image enhancement. First, a style transfer model is constructed using cycle generative adversarial networks (CycleGANs), and the daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset, which is then split using its labels. Finally, nighttime vehicle images are detected with a YOLOv5s network for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred nighttime vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
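
The detection stage can be sketched with the public YOLOv5s model from the Ultralytics hub; the CycleGAN enhancement step is omitted, and the image path and confidence threshold below are placeholders. This illustrates running YOLOv5s on a (style-enhanced) night image, not the paper's trained model.

```python
# Hedged sketch: running a pretrained YOLOv5s model on a style-enhanced night image.
# Assumes internet access for torch.hub and the ultralytics/yolov5 repository.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25                                   # confidence threshold, assumed

results = model("night_street.jpg")                 # placeholder image path
detections = results.pandas().xyxy[0]               # one DataFrame per image
vehicles = detections[detections["name"].isin(["car", "bus", "truck"])]
print(vehicles[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```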