• Title/Summary/Keyword: vehicle detection algorithm

Search Result 503, Processing Time 0.027 seconds

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners (무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법)

  • Ahn, Seung-Uk;Choe, Yun-Geun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.92-100 / 2012
  • A map of a complex environment can be generated by a robot carrying sensors. However, a representation built directly from integrated sensor data conveys only spatial occupancy; to execute high-level applications, robots need semantic knowledge of their environment. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in an urban environment, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on-the-fly as it is acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm efficiently and quickly extracts semantic knowledge of a dynamic urban environment.
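The ground-elimination and segmentation steps of such a pipeline can be illustrated with a minimal numpy sketch; a flat-ground height threshold and greedy single-linkage clustering stand in for the paper's own (unspecified) simple methods, and all names and thresholds here are invented for the example:

```python
import numpy as np

def remove_ground(points, ground_z=0.0, tol=0.2):
    """Drop points within `tol` meters above the assumed flat ground height."""
    return points[points[:, 2] > ground_z + tol]

def euclidean_cluster(points, radius=0.5):
    """Greedy single-linkage clustering: points closer than `radius`
    end up with the same label. O(n^2), fine for small segments."""
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:
            j = stack.pop()
            near = np.linalg.norm(points - points[j], axis=1) < radius
            for k in np.where(near & (labels == -1))[0]:
                labels[k] = next_label
                stack.append(k)
        next_label += 1
    return labels
```

Each resulting cluster would then be handed to the object-classification step.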

Optical Flow Based Collision Avoidance of Multi-Rotor UAVs in Urban Environments

  • Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.252-259 / 2011
  • This paper focuses on dynamic modeling and control system design, as well as vision-based collision avoidance, for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-wing UAVs with multiple rotors. They can be utilized in various military situations, such as surveillance and reconnaissance, and can also be used to obtain visual information from steep terrain or disaster sites. In this paper, a quad-rotor model is introduced together with its control system, which is designed around a proportional-integral-derivative controller and a vision-based collision avoidance control system. For a UAV to navigate safely in areas with many obstacles, such as buildings and offices, a collision avoidance algorithm must be installed on the UAV's hardware, covering the detection of obstacles, avoidance maneuvering, and so on. In this paper, the optical flow method, one of the vision-based collision avoidance techniques, is introduced, and the multi-rotor UAV's collision avoidance simulations are described in various virtual environments to demonstrate its avoidance performance.
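A common optical-flow avoidance heuristic is the balance strategy: compare flow magnitude between the left and right halves of the image and yaw away from the side with larger flow, since nearer obstacles induce larger flow. The sketch below illustrates only that general idea, not the controller from the paper; the function and its gain are invented:

```python
import numpy as np

def balance_steer(flow_mag, gain=1.0):
    """Return a signed yaw command from a per-pixel optical-flow
    magnitude field: positive steers right, away from a closer left side."""
    h, w = flow_mag.shape
    left = flow_mag[:, : w // 2].mean()
    right = flow_mag[:, w // 2 :].mean()
    # Normalized flow imbalance; nearer obstacles produce larger flow.
    return gain * (left - right) / max(left + right, 1e-9)
```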

De-blurring Algorithm for Performance Improvement of Searching a Moving Vehicle on Fisheye CCTV Image (어안렌즈사용 CCTV이미지에서 차량 정보 수집의 성능개선을 위한 디블러링 알고리즘)

  • Lee, In-Jung
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.4C / pp.408-414 / 2010
  • When collecting traffic information from CCTV images, a detection zone must be installed in the image area while the pan-tilt system is in operation. Automating the detection zone with a pan-tilt system is difficult because of mechanical error, so a camera fitted with a fisheye lens or a convex mirror is needed to capture wide-area images. This causes problems of its own: reduced system speed and image distortion. The distortion arises from the occlusion of angled rays, much like a shaky snapshot from a digital camera. In this paper, we propose two de-blurring methods to overcome the distortion: image segmentation by a nonlinear diffusion equation, and deformation of selected segmented areas. After de-blurring, the image shows a 15 dB increase in PSNR, and the detection rate for collecting traffic information is more than 5% higher than on the distorted images.
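Nonlinear diffusion of the kind used for the segmentation step belongs to the family of Perona-Malik anisotropic diffusion, which smooths within regions while preserving edges. A minimal numpy sketch of that classic scheme follows (periodic borders for brevity; parameters and names are invented, and this is not the paper's exact formulation):

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, lam=0.2):
    """Edge-preserving smoothing: the conductance g = exp(-(d/kappa)^2)
    shrinks toward zero across strong gradients, so edges diffuse slowly."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-((d / kappa) ** 2))
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic borders for brevity).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```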

Real time instruction classification system

  • Sang-Hoon Lee;Dong-Jin Kwon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.212-220 / 2024
  • With the recent advancement of society, AI technology has made significant strides, especially in the fields of computer vision and voice recognition. This study introduces a system that leverages these technologies to recognize users through a camera and relay commands within a vehicle based on voice commands. The system uses the YOLO (You Only Look Once) machine learning algorithm, widely used for object and entity recognition, to identify specific users. For voice command recognition, a machine learning model based on spectrogram analysis is employed to identify specific commands. This design aims to enhance security and convenience by preventing access to vehicles and IoT devices by anyone other than registered users. We convert camera input data into YOLO inputs to determine whether a person is present. Additionally, the system collects voice data through a microphone embedded in the device or computer, converting it into time-frequency spectrogram data to be used as input for the voice recognition model. The camera image data and voice data undergo inference through pre-trained models, enabling the recognition of simple commands within a limited space based on the inference results. This study demonstrates the feasibility of constructing a device management system within a confined space that enhances security and user convenience through a simple real-time system model. Finally, our work aims to provide practical solutions in various application fields, such as smart homes and autonomous vehicles.
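A spectrogram front end for such a voice-command model amounts to a short-time Fourier transform; this numpy sketch shows the idea (window length and hop are invented, and the paper's exact front end is not specified):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: Hann-windowed frames -> |rFFT| per frame.
    Rows are time frames, columns are frequency bins."""
    window = np.hanning(frame_len)
    frames = [signal[i : i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))
```

For a sample rate fs and frame length 256, frequency bin k corresponds to k * fs / 256 Hz; the resulting 2D array is what would be fed to the classification model.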

Design and Implementation of Radar Signal Processing System for Vehicle Door Collision Prevention (차량 도어 충돌 방지용 레이다 신호처리 시스템 설계 및 구현)

  • Jeongwoo Han;Minsang Kim;Daehong Kim;Yunho Jung
    • Journal of IKEEE / v.28 no.3 / pp.397-404 / 2024
  • This paper presents the design and implementation of a Raspberry-Pi-based embedded system with an FPGA accelerator that can detect and classify objects using an FMCW radar sensor to prevent vehicle door collision accidents. The proposed system performs radar signal processing and deep learning inference that classifies objects into bicycles, automobiles, and pedestrians. Since a CNN requires substantial computation and memory, it is not suitable for embedded systems. To address this, we implemented a lightweight deep learning model, a BNN, optimized for embedded systems on an FPGA, and verified the design, achieving a classification accuracy of 90.33% and an execution time of 20 ms.
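In FMCW radar processing, target range is recovered from the beat frequency of the mixed signal: R = c·f_b·T / (2B) for chirp duration T and sweep bandwidth B. A minimal single-target sketch of that range step (names and parameters invented; the paper's full processing chain and its BNN classifier are not reproduced here):

```python
import numpy as np

def beat_to_range(beat_signal, fs, bandwidth, chirp_time):
    """Estimate single-target range from one FMCW beat signal:
    pick the peak beat frequency f_b, then R = c * f_b * T / (2 * B)."""
    c = 3e8  # speed of light, m/s
    spectrum = np.abs(np.fft.rfft(beat_signal))
    freqs = np.fft.rfftfreq(len(beat_signal), d=1.0 / fs)
    f_beat = freqs[spectrum.argmax()]
    return c * f_beat * chirp_time / (2 * bandwidth)
```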

Convergence CCTV camera embedded with Deep Learning SW technology (딥러닝 SW 기술을 이용한 임베디드형 융합 CCTV 카메라)

  • Son, Kyong-Sik;Kim, Jong-Won;Lim, Jae-Hyun
    • Journal of the Korea Convergence Society / v.10 no.1 / pp.103-113 / 2019
  • A license plate recognition camera is a dedicated device designed to acquire images of a target vehicle for recognizing the letters and numbers on its license plate. It is mostly used as part of a system combined with a server and an image analysis module rather than on its own. However, building a system for vehicle license plate recognition is costly, because it requires a server for managing and analyzing the captured images and an image analysis module for extracting and recognizing the plate's numbers and characters. In this study, we develop an embedded convergence camera (edge-based) that extends the camera's function so that it performs both license plate recognition and the security CCTV function within the camera itself. This embedded convergence camera, equipped with a high-resolution 4K IP camera for clear image acquisition and fast data transmission, extracts the license plate area by applying YOLO, deep learning software for multi-object recognition based on an open-source neural network algorithm, and detects the numbers and characters on the plate. We verified the detection and recognition accuracy and confirmed that this camera can successfully perform both the CCTV security function and the vehicle license plate recognition function.
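YOLO-style detectors report boxes as normalized center coordinates, so cropping the extracted plate area for character recognition requires converting to pixel corners first. A small sketch of that conversion (the helper name is invented; the paper does not give this code):

```python
def yolo_box_to_pixels(box, img_w, img_h):
    """Convert a YOLO-style box (cx, cy, w, h, all normalized to [0, 1])
    into clamped integer pixel corners (x1, y1, x2, y2) for cropping."""
    cx, cy, w, h = box
    x1 = int(round((cx - w / 2) * img_w))
    y1 = int(round((cy - h / 2) * img_h))
    x2 = int(round((cx + w / 2) * img_w))
    y2 = int(round((cy + h / 2) * img_h))
    # Clamp to the image bounds.
    return max(x1, 0), max(y1, 0), min(x2, img_w), min(y2, img_h)
```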

Research of the Face Extract Algorithm from Road Side Images Obtained by vehicle (차량에서 획득된 도로 주변 영상에서의 얼굴 추출 방안 연구)

  • Rhee, Soo-Ahm;Kim, Tae-Jung;Kim, Moon-Gie;Yun, Duk-Geun;Sung, Jung-Gon
    • Journal of Korean Society for Geospatial Information Science / v.16 no.1 / pp.49-55 / 2008
  • Face extraction is very important for providing images of roads and roadsides without privacy problems. For face extraction from roadside images, we detected skin color areas using the HSI and YCrCb color models; combining the two models yielded efficient skin color detection. We used connectivity and intensity difference to group the skin color regions, then applied shape conditions (aspect ratio, area, number, and oval condition) to determine face candidate regions. We applied thresholds to each region and classified a region as a face if the dark part exceeded 5% of the whole region. In the experiment, 28 of the 38 faces that posed a privacy problem were extracted. Faces were missed because of shadows on the face and background objects, and objects with colors similar to skin were falsely extracted. For improvement, the thresholds need to be adjusted.
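Skin color detection in YCrCb can be sketched with the BT.601 conversion and commonly cited Cr/Cb ranges; the thresholds below (Cr in 133-173, Cb in 77-127) come from the general skin-detection literature, not from this paper:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask from YCrCb thresholds on an (H, W, 3) RGB array."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCrCb (full-range approximation).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
```

The resulting mask pixels would then be grouped by connectivity and filtered with the shape conditions described above.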


Case Study: Cost-effective Weed Patch Detection by Multi-Spectral Camera Mounted on Unmanned Aerial Vehicle in the Buckwheat Field

  • Kim, Dong-Wook;Kim, Yoonha;Kim, Kyung-Hwan;Kim, Hak-Jin;Chung, Yong Suk
    • KOREAN JOURNAL OF CROP SCIENCE / v.64 no.2 / pp.159-164 / 2019
  • Weed control is a crucial practice not only in organic farming but also in modern agriculture, because weeds can cause losses in crop yield. In general, weeds are distributed heterogeneously in patches across the field, and these patches vary in size, shape, and density. It would therefore be efficient to spray chemicals on these patches rather than uniformly across the field, which pollutes the environment and is cost-prohibitive. In this sense, weed detection can contribute to sustainable agriculture. Studies have detected weed patches in the field using remote sensing technologies, which fall into two classes: methods using image segmentation based on morphology, and methods using vegetative indices based on the wavelength of light. In this study, the latter methodology was used to detect the weed patches. We found the vegetative indices easier to operate, as they did not need any sophisticated algorithm for differentiating weeds from crop and soil, unlike the former method. Consequently, we demonstrated that the present vegetative-index method is accurate enough to detect weed patches and will be useful for farmers to control weeds more precisely with minimal use of chemicals.
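A typical vegetative index for this kind of patch mapping is NDVI, computed from the multi-spectral camera's red and near-infrared bands; a minimal sketch (the threshold value is illustrative, not the study's calibrated one):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.4):
    """Flag pixels whose NDVI exceeds `threshold` as vegetation patches."""
    return ndvi(nir, red) > threshold
```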

Semantic Object Detection based on LiDAR Distance-based Clustering Techniques for Lightweight Embedded Processors (경량형 임베디드 프로세서를 위한 라이다 거리 기반 클러스터링 기법을 활용한 의미론적 물체 인식)

  • Jung, Dongkyu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1453-1461 / 2022
  • The accuracy of peripheral object recognition algorithms that use 3D data sensors such as LiDAR in autonomous vehicles has been improved through many studies, but these algorithms require high-performance hardware and complex structures. Such an object recognition algorithm places a large load on the main processor of an autonomous vehicle, which must run and manage many processes while driving. To reduce this load while still exploiting the advantages of 3D sensor data, we propose 2D-data-based recognition using an ROI generated by extracting physical properties from the 3D sensor data. In an environment where the brightness of the base image was reduced by 50%, the proposed method showed 5.3% higher accuracy and a 28.57% shorter execution time than the existing 2D-based model. On the base image, it traded 2.46% lower accuracy relative to the 3D-based model for a 6.25% reduction in execution time.
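Once the 3D points have been clustered by distance, each cluster's footprint can be turned into a 2D ROI for the lightweight image detector. A minimal sketch of that ROI-generation step (names and the padding margin are invented; this is not the paper's implementation):

```python
import numpy as np

def cluster_rois(points, labels, margin=0.1):
    """Axis-aligned (xmin, ymin, xmax, ymax) ROI per cluster label,
    taken over the ground-plane (x, y) coordinates and padded by `margin`."""
    rois = {}
    for lab in np.unique(labels):
        xy = points[labels == lab, :2]
        xmin, ymin = xy.min(axis=0) - margin
        xmax, ymax = xy.max(axis=0) + margin
        rois[int(lab)] = (float(xmin), float(ymin), float(xmax), float(ymax))
    return rois
```

Running the 2D detector only inside these ROIs, instead of over the full frame, is what yields the reduced execution time.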

VERTICAL OZONE DENSITY PROFILING BY UV RADIOMETER ONBOARD KSR-III

  • Hwang Seung-Hyun;Kim Jhoon;Lee Soo-Jin;Kim Kwang-Soo;Ji Ki-Man;Shin Myung-Ho;Chung Eui-Seung
    • Bulletin of the Korean Space Science Society / 2004.10b / pp.372-375 / 2004
  • The UV radiometer payload was launched successfully from the west coast of the Korean Peninsula aboard KSR-III on 28 November 2002. KSR-III was the third-generation Korean sounding rocket, developed as an intermediate step toward a larger space launch vehicle with a liquid propulsion engine system. The UV radiometer onboard KSR-III consists of UV- and visible-band optical phototubes that measure direct solar attenuation during the rocket's ascent. For UV detection, four channels of sensors were installed in the electronics payload section, with center wavelengths of 255, 290, and 310 nm; the 450 nm channel was used as a reference for correcting the rocket's attitude during the flight. The transmission characteristics of all channels were calibrated precisely prior to the flight test at the optical laboratory of KARI (Korea Aerospace Research Institute). During the total flight time of 231 s, the onboard data were telemetered to the ground station in real time. The ozone column density was calculated from these raw telemetry data, and from the calculated column density the vertical ozone profile over the Korean Peninsula was obtained using the sensor calibration data. Our results show reasonable agreement with various observations, including the ground-based Umkehr measurement at the Yonsei site, ozonesonde data at the Pohang site, and satellite measurements from HALOE and POAM. A sensitivity analysis of the retrieval algorithm's parameters was performed, identifying the significant error sources of the retrieval.
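Column retrieval from direct-sun attenuation follows the Beer-Lambert law, I = I_top · exp(-σ·N·sec θ), which can be inverted for the ozone column. The sketch below is a single-wavelength simplification with invented symbol names, not the flight retrieval algorithm:

```python
import numpy as np

def ozone_column(i_measured, i_top, cross_section, sec_theta):
    """Invert Beer-Lambert attenuation for the ozone column N:
    I = I_top * exp(-sigma * N * sec(theta))  =>
    N = ln(I_top / I) / (sigma * sec(theta))."""
    return np.log(i_top / i_measured) / (cross_section * sec_theta)
```

Differencing the columns retrieved at successive altitudes during ascent is what yields a vertical profile.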
