• Title/Abstract/Keyword: Real-time data fusion

Search results: 126 items (processing time: 0.026 s)

Vision Sensor and Ultrasonic Sensor Fusion Using Neural Network

  • Baek, Sang-Hoon;Oh, Se-Young
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICCAS 2004 / pp.668-671 / 2004
  • This paper proposes a new method of fusing an ultrasonic sensor and a vision sensor at the sensor level. A typical vision system finds the edges of objects, while a typical ultrasonic system measures the absolute distance between the robot and an object; the proposed method integrates these two different types of data and produces a complete output for robot control. Beyond simply combining different kinds of data, the paper fuses the information received from the different kinds of sensors. The advantages of this method are that the algorithm is simple to implement and the robot can be controlled in real time.
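
As a rough, illustrative sketch of sensor-level fusion of this kind (not the authors' implementation), the Python fragment below concatenates vision-derived edge features with an ultrasonic range reading and passes them through a small feed-forward network whose outputs could drive a robot controller; the feature layout, network size, and random weights are assumptions.

    import numpy as np

    # Hypothetical sensor-level fusion: concatenate vision edge features with an
    # ultrasonic range reading and pass them through a small feed-forward network.
    # Layer sizes and the way features are packed are illustrative assumptions.

    def fuse_and_infer(edge_features, ultrasonic_range_m, weights):
        """edge_features: 1-D array from the vision pipeline;
        ultrasonic_range_m: scalar distance to the nearest obstacle."""
        x = np.concatenate([edge_features, [ultrasonic_range_m]])
        h = np.tanh(weights["W1"] @ x + weights["b1"])      # hidden layer
        out = np.tanh(weights["W2"] @ h + weights["b2"])    # e.g. steering / speed commands
        return out

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 16 + 1, 8, 2                    # 16 edge features + 1 range
    weights = {"W1": rng.normal(size=(n_hidden, n_in)), "b1": np.zeros(n_hidden),
               "W2": rng.normal(size=(n_out, n_hidden)),  "b2": np.zeros(n_out)}

    command = fuse_and_infer(rng.random(16), 0.75, weights)  # 0.75 m to nearest obstacle
    print(command)                                           # two control outputs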


Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences / Vol. 11, No. 1 / pp.31-40 / 2010
  • For weapon cueing and head-mounted displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve the performance of the vision processing, structure estimation and motion estimation are separated: the structure estimation tracks features that belong to the helmet model structure in the scene, while the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested on both synthetic and real data, and the results show that the sensor fusion is successful.
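
The predict/update structure of such an inertial/vision fusion filter can be sketched as below; for brevity the sketch uses a linear constant-velocity model on a single axis rather than the paper's full 6-DOF extended Kalman filter, and all noise values and rates are assumed.

    import numpy as np

    # Simplified Kalman filter illustrating inertial/vision fusion (position +
    # velocity on one axis). The IMU acceleration drives the prediction at a high
    # rate; the slower vision measurement corrects the position when a frame arrives.

    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    B = np.array([[0.5 * dt**2], [dt]])       # IMU acceleration enters as control input
    H = np.array([[1.0, 0.0]])                # vision measures position
    Q = 1e-4 * np.eye(2)                      # process noise (assumed)
    R = np.array([[1e-2]])                    # vision measurement noise (assumed)

    x = np.zeros((2, 1)); P = np.eye(2)

    def predict(x, P, accel):
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        return x, P

    def update(x, P, z):
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for k in range(100):
        x, P = predict(x, P, accel=0.2)
        if k % 10 == 0:                        # vision frame every 10 IMU samples
            x, P = update(x, P, z=np.array([[0.5 * 0.2 * (k * dt) ** 2]]))
    print(x.ravel())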

Real-Time Orbit Estimation Using Asynchronous Multiple RADAR Data Fusion

  • 송하룡;문병진;조동현
    • 항공우주기술 (Aerospace Technology) / Vol. 13, No. 2 / pp.66-72 / 2014
  • This paper introduces a space-object tracking algorithm based on the fusion of tracking data from asynchronous multiple radars. A tracking scenario using multiple radars was set up to track space objects distributed in low Earth orbit, and a linearized Kalman filter was used for each radar's tracking of the space object. To fuse data from multiple radars with different sampling times, the measurable range of each radar was determined using STK/ODTK, and during the period when the multiple radars tracked the space object simultaneously, a Kalman-filter-based asynchronous fusion algorithm was applied to estimate the object's orbit. The performance of orbit estimation through multiple-radar fusion was analyzed through simulation.
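
A minimal sketch of asynchronous multi-sensor Kalman fusion is given below: measurements from radars with different sampling times are merged into one time-ordered stream, and the filter is propagated to each measurement time before updating. The 1-D constant-velocity model, noise figures, and measurement schedule are placeholders for the paper's orbital dynamics and STK/ODTK-derived visibility windows.

    import numpy as np

    # Toy asynchronous fusion: each radar has its own sampling period and noise
    # level; measurements are sorted by time and the filter predicts forward to
    # every measurement time before applying the per-radar update.

    H = np.array([[1.0, 0.0]])
    Q_rate = 1e-3
    radar_noise = {"radar_A": 0.5, "radar_B": 1.0}          # assumed variances

    measurements = sorted(
        [(t, 7000.0 + 7.5 * t + np.random.randn() * 0.5, "radar_A") for t in np.arange(0, 60, 4)] +
        [(t, 7000.0 + 7.5 * t + np.random.randn() * 1.0, "radar_B") for t in np.arange(1, 60, 7)]
    )

    x = np.array([[7000.0], [0.0]]); P = np.diag([10.0, 1.0]); t_prev = 0.0
    for t, z, source in measurements:
        dt = t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])
        x = F @ x
        P = F @ P @ F.T + Q_rate * dt * np.eye(2)           # predict to measurement time
        R = np.array([[radar_noise[source]]])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # per-radar measurement noise
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        t_prev = t
    print(x.ravel())                                        # fused range / range-rate estimate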

AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie;Cho, Woo-Sug
    • Korean Society of Remote Sensing: Conference Proceedings / Proceedings of ISRS 2005 / pp.79-82 / 2005
  • The need for road data keeps growing in industrial society, with roads being repaired and newly constructed in many areas, so updating and acquiring road data for GIS (geographic information systems) is essential as governments, cities, and regions develop. In this study, range data (3D ground-coordinate data) and intensity data from a stand-alone LiDAR system are fused for road extraction, after which digital image processing methods can be applied. LiDAR intensity data are still an active research topic, and this study demonstrates the feasibility of road extraction using them. Because intensity and range data are acquired at the same time, LiDAR avoids the registration problems of multi-sensor data fusion; moreover, the intensity data are already geocoded, share the scale of the real world, and can be used to produce ortho-photos. Finally, a quantitative and qualitative analysis is presented by comparing the extracted road image with a 1:1,000 digital map.
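
As an illustration of how intensity and height rasters from a single LiDAR survey might be combined for road masking (the thresholds and morphological cleanup below are assumptions, not the parameters used in the paper):

    import numpy as np
    from scipy import ndimage

    # Illustrative road masking from co-registered LiDAR rasters: asphalt tends to
    # return a characteristic (low) intensity, and roads lie close to local ground level.

    def extract_road_mask(intensity, height, ground,
                          intensity_range=(5, 40), max_height_above_ground=0.3):
        """intensity, height, ground: 2-D rasters on the same geocoded grid."""
        low_reflectance = (intensity >= intensity_range[0]) & (intensity <= intensity_range[1])
        near_ground = (height - ground) < max_height_above_ground
        mask = low_reflectance & near_ground
        mask = ndimage.binary_opening(mask, iterations=2)    # remove isolated pixels
        mask = ndimage.binary_closing(mask, iterations=2)    # fill small gaps in the road
        return mask

    # Tiny synthetic example: a horizontal "road" strip with low intensity.
    intensity = np.full((100, 100), 120.0); intensity[45:55, :] = 20.0
    height = np.zeros((100, 100)); ground = np.zeros((100, 100))
    road = extract_road_mask(intensity, height, ground)
    print(road.sum(), "road pixels")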


Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security / Vol. 23, No. 11 / pp.67-72 / 2023
  • In the past decade, autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, with a significant impact on society, road safety, and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents object classification using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory: the LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the RGB data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is designed to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
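
A toy version of this kind of early fusion is sketched below, with a crude densification step standing in for the paper's upsampling and an arbitrary small CNN; the projection, densification method, network, and PyTorch framework are all assumptions, not the paper's model.

    import numpy as np
    import torch
    import torch.nn as nn

    # RGB + depth early fusion: LiDAR returns projected into the image are densified
    # into a pixel-level depth channel, stacked with RGB, and fed to a small CNN.

    H, W = 64, 64
    rgb = np.random.rand(3, H, W).astype(np.float32)

    # Pretend these are LiDAR points already projected to pixel coordinates with a depth.
    us, vs = np.random.randint(0, W, 500), np.random.randint(0, H, 500)
    depths = np.random.uniform(2.0, 50.0, 500).astype(np.float32)
    sparse = np.zeros((H, W), dtype=np.float32)
    sparse[vs, us] = depths

    # Very crude "upsampling": dilate each return over a small neighbourhood.
    dense = torch.nn.functional.max_pool2d(torch.from_numpy(sparse)[None, None], 5, 1, 2)[0, 0]

    x = torch.cat([torch.from_numpy(rgb), dense[None] / 50.0], dim=0)[None]  # 1 x 4 x H x W

    classifier = nn.Sequential(                      # toy 4-channel CNN
        nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 5),                            # e.g. 5 object classes
    )
    print(classifier(x).shape)                       # torch.Size([1, 5])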

Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests

  • 박은성;유창호;최재원
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 3 / pp.179-186 / 2015
  • In this paper, a novel lateral control system is proposed for improving lane-keeping performance independently of GPS signals. Lane keeping is a key function for realizing unmanned driving systems. To this end, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, data fusion is employed along with the real-time steering angle of the test vehicle to improve its lane-keeping performance; the fused direction data are obtained from an IMU sensor and the vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using a MOHAVE, a commercial vehicle from Kia Motors of Korea.
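
One plausible way to fuse a vision-derived lane heading with integrated IMU yaw rate and turn it into a steering command is a complementary filter followed by a proportional control law, sketched below; the gains, rates, and control law are assumptions rather than the controller validated on the MOHAVE.

    import numpy as np

    # Complementary-filter style fusion of a vision-derived heading (relative to the
    # lane) with integrated IMU yaw rate, plus a simple proportional steering law.

    dt, alpha = 0.02, 0.98                     # 50 Hz loop; trust the gyro short-term
    k_offset, k_heading = 0.8, 1.2             # steering gains (assumed)

    def fuse_heading(psi_prev, gyro_yaw_rate, psi_vision, vision_valid):
        psi_imu = psi_prev + gyro_yaw_rate * dt
        if not vision_valid:                   # lane not detected this frame
            return psi_imu
        return alpha * psi_imu + (1.0 - alpha) * psi_vision

    def steering_command(lateral_offset_m, heading_error_rad):
        return -(k_offset * lateral_offset_m + k_heading * heading_error_rad)

    psi = 0.0
    for k in range(500):
        psi = fuse_heading(psi, gyro_yaw_rate=0.01, psi_vision=0.05, vision_valid=(k % 3 == 0))
        delta = steering_command(lateral_offset_m=0.2, heading_error_rad=psi)
    print(psi, delta)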

Sound System Analysis for Health Smart Home

  • CASTELLI Eric;ISTRATE Dan;NGUYEN Cong-Phuong
    • The Institute of Electronics Engineers of Korea: Conference Proceedings / ICEIC 2004, The International Conference on Electronics, Information, and Communications / pp.237-243 / 2004
  • A multichannel smart sound sensor capable of detecting and identifying sound events in noisy conditions is presented in this paper. Sound information extraction is a complex task, and the main difficulty lies in extracting high-level information from a one-dimensional signal. The input of the smart sound sensor consists of data collected by five microphones, and its output is sent over a network. For real-time operation, the sound analysis is divided into three steps: sound event detection on each sound channel, fusion of simultaneous events, and sound identification. The event detection module finds impulsive signals in the noise and extracts them from the signal flow. The smart sensor must be able to identify not only impulsive signals but also the presence of speech in a noisy environment. The classification module is launched as a parallel task on the channel chosen by the data fusion process. It identifies the sound event among seven predefined sound classes using a Gaussian mixture model (GMM) method, with Mel-frequency cepstral coefficients used in combination with newer features such as zero-crossing rate, spectral centroid, and roll-off point. This smart sound sensor is part of a medical telemonitoring project aimed at detecting serious accidents.
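
A minimal one-GMM-per-class identification step in the spirit of this description is sketched below, using scikit-learn and random stand-in features in place of real MFCC/ZCR/centroid/roll-off frames; class names, feature count, and mixture size are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Each predefined sound class gets its own Gaussian mixture over frame features;
    # a detected event is assigned to the class with the highest mean log-likelihood.

    rng = np.random.default_rng(0)
    classes = ["glass", "door", "fall", "phone", "steps", "scream", "speech"]
    n_features = 16                                   # e.g. 13 MFCCs + ZCR + centroid + roll-off

    models = {}
    for i, name in enumerate(classes):
        train_frames = rng.normal(loc=i, scale=1.0, size=(200, n_features))  # stand-in data
        models[name] = GaussianMixture(n_components=4, covariance_type="diag",
                                       random_state=0).fit(train_frames)

    def identify(event_frames):
        """Score the frames of one detected event against every class model."""
        scores = {name: m.score(event_frames) for name, m in models.items()}
        return max(scores, key=scores.get)

    event = rng.normal(loc=2, scale=1.0, size=(40, n_features))
    print(identify(event))                            # picks the class trained near loc=2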


Fusion Strategy on Heterogeneous Information Sources for Improving the Accuracy of Real-Time Traffic Information

  • 김종진;정연식
    • Journal of the Korean Society of Civil Engineers / Vol. 42, No. 1 / pp.67-74 / 2022
  • Recently, the wider use of information and communications technology (ICT), including high smartphone penetration and expanded ITS (intelligent transportation systems) infrastructure, has increased the number of sources for real-time traffic information. The accuracy of real-time traffic information collected from diverse sources such as VDS (vehicle detection system), DSRC (dedicated short-range communications), and GPS (global positioning system) probes can vary with space, time, and traffic conditions. The purpose of this study is to present a fusion strategy for improving the accuracy of real-time traffic information when heterogeneous traffic information is collected simultaneously. To this end, driving surveys were conducted on expressways (892.2 km, 227 links) and national highways (937.0 km, 2,074 links), and the average travel speed of five probe vehicles on each link and time period was used as the reference (ground-truth) value for evaluating the accuracy of each real-time traffic information source (VDS or DSRC, GPS-based A and B). The accuracy improvement of the proposed fusion strategy was statistically significant for all sources except one on national highways, confirming the potential for more accurate traffic information services in an environment where real-time traffic information from various agencies is linked simultaneously.
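
One simple fusion strategy consistent with this description is to weight each source by the inverse of its error variance measured against the probe-vehicle ground truth; the sketch below uses made-up variances and speeds purely for illustration and is not the strategy specified in the paper.

    # Illustrative accuracy-weighted fusion of link speeds reported by heterogeneous
    # sources (e.g. VDS/DSRC and two GPS-probe providers). Numbers are invented.

    error_var = {"VDS": 25.0, "GPS_A": 16.0, "GPS_B": 36.0}   # assumed error variances, (km/h)^2

    def fuse_link_speed(reported):
        """reported: dict of source -> speed (km/h) for one link and time slice."""
        weights = {s: 1.0 / error_var[s] for s in reported}
        total = sum(weights.values())
        return sum(weights[s] * v for s, v in reported.items()) / total

    print(fuse_link_speed({"VDS": 62.0, "GPS_A": 55.0, "GPS_B": 58.0}))
    print(fuse_link_speed({"GPS_A": 47.0, "GPS_B": 52.0}))    # a link with no VDS coverage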

A Virtual Environment for Optimal Use of Video Analytics of IP Cameras and Feasibility Study

  • 류홍남;김종훈;류경모;홍주영;최병욱
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / Vol. 29, No. 11 / pp.96-101 / 2015
  • In recent years, research on the optimal placement of CCTV (closed-circuit television) cameras via architectural modeling has been conducted. However, the use of the VA (video analytics) functions of IP (Internet protocol) cameras to analyze surveillance coverage from actual human movement has not been studied. This paper compares two methods, one using data captured from real-world cameras and one using data acquired from a virtual environment. With real cameras, a GUI (graphical user interface) is developed that stores hourly and daily log files through the VA functions and can be used commercially for product placement inside a shop. The virtual environment is constructed to emulate the real world, including the building structure and the camera with its specifications, and suitable camera placement is determined by recognizing obstacles and counting the number of people within the camera's field of view. This research aims to relieve the time and economic constraints of actually installing surveillance cameras in a real-world environment and to study the feasibility of the virtual environment.

Map Building Based on Sensor Fusion for Autonomous Vehicle

  • 강민성;허수정;박익현;박용완
    • Transactions of the Korean Society of Automotive Engineers / Vol. 22, No. 6 / pp.14-22 / 2014
  • An autonomous vehicle requires the ability to generate maps by recognizing its surrounding environment. The vehicle's environment can be recognized using distance information from a 2D laser scanner and color information from a camera, and this sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes because it contains information only about a cross-section, whereas a 3D map also contains height values and can therefore be used not only for route generation but also for identifying the space accessible to the vehicle. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing the environment in real time. Accordingly, this paper proposes a technique for generating 2D maps that guarantees real-time recognition: the height values are removed from 3D maps generated by fusing 2D laser scanner and camera data, and only the remaining color information is used.
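
A rough sketch of collapsing fused, colored 3D points into a 2D color grid by discarding the height value is given below; the grid size, resolution, and averaging rule are assumptions, not the paper's parameters.

    import numpy as np

    # Collapse fused (x, y, z, r, g, b) points into a 2-D colour grid map:
    # the height value is dropped and each cell keeps the mean colour of the
    # points that fall into it.

    def build_2d_color_map(points_xyzrgb, resolution=0.1, size_m=40.0):
        n_cells = int(size_m / resolution)
        color_sum = np.zeros((n_cells, n_cells, 3))
        hits = np.zeros((n_cells, n_cells))
        half = size_m / 2.0
        for x, y, _z, r, g, b in points_xyzrgb:            # z is discarded
            i = int((x + half) / resolution); j = int((y + half) / resolution)
            if 0 <= i < n_cells and 0 <= j < n_cells:
                color_sum[i, j] += (r, g, b)
                hits[i, j] += 1
        occupied = hits > 0
        color_map = np.zeros_like(color_sum)
        color_map[occupied] = color_sum[occupied] / hits[occupied, None]  # mean colour per cell
        return color_map, occupied

    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(-15, 15, (1000, 2)),      # x, y
                           rng.uniform(0, 2, (1000, 1)),         # z (ignored)
                           rng.random((1000, 3))])               # r, g, b
    cmap, occ = build_2d_color_map(pts)
    print(occ.sum(), "occupied cells")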