• Title/Summary/Keyword: time sensor fusion

Space and Time Sensor Fusion Using an Active Camera For Mobile Robot Navigation

  • Jin, Tae-Seok;Lee, Bong-Ki;Park, Soo-Min;Lee, Kwon-Soon;Lee, Jang-Myung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2002.11a / pp.127-132 / 2002
  • This paper proposes a sensor-fusion technique in which data sets from previous moments are properly transformed and fused into the current data sets, enabling accurate measurement of quantities such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes the measurement depends only on the current data sets, so more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this approach, instead of adding sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in both structured and unstructured environments.
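
A minimal sketch of the temporal-fusion idea, under assumptions not taken from the paper (an odometry pose model and fixed noise variances): a past range/bearing reading is re-expressed in the current robot frame, and the implied ranges are fused with the current reading by inverse-variance weighting.

```python
import numpy as np

def to_current_frame(rng, bearing, past_pose, current_pose):
    """Poses are (x, y, theta) from odometry. Returns the obstacle position,
    observed as rng/bearing at past_pose, expressed in the current frame."""
    ox = past_pose[0] + rng * np.cos(past_pose[2] + bearing)   # world coords
    oy = past_pose[1] + rng * np.sin(past_pose[2] + bearing)
    dx, dy = ox - current_pose[0], oy - current_pose[1]
    c, s = np.cos(current_pose[2]), np.sin(current_pose[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy])       # world -> robot

def fuse_ranges(current_range, past_points, var_now=0.01, var_past=0.04):
    """Inverse-variance fusion; past readings carry a larger variance to
    reflect odometry drift."""
    past_ranges = [np.hypot(p[0], p[1]) for p in past_points]
    num = current_range / var_now + sum(r / var_past for r in past_ranges)
    den = 1.0 / var_now + len(past_ranges) / var_past
    return num / den
```

Down-weighting older readings through the larger variance is what lets the stored sequence improve accuracy without extra sensors.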

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object detection algorithm, YOLO, and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates, obtained from YOLO, are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using a global nearest neighbor (GNN) method. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
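
A hedged sketch of the pipeline's first step, the homography transform of a YOLO box into the vehicle frame. The matrix `H` below is a placeholder; a real one comes from camera extrinsic calibration.

```python
import numpy as np

H = np.eye(3)  # placeholder homography (pixel -> vehicle ground plane)

def bbox_to_vehicle_frame(bbox, H):
    """bbox = (u_min, v_min, u_max, v_max) in pixels. The bottom-center pixel,
    where the object meets the ground, is projected through the homography."""
    u = 0.5 * (bbox[0] + bbox[2])
    v = bbox[3]
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # (x, y) in the subject vehicle's local coordinates
```

The bottom-center is used because a ground-plane homography is only valid for points on the road surface.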

Tracking of ARPA Radar Signals Based on UK-PDAF and Fusion with AIS Data

  • Chan Woo Han;Sung Wook Lee;Eun Seok Jin
    • Journal of Ocean Engineering and Technology / v.37 no.1 / pp.38-48 / 2023
  • To maintain the existing systems of ships while introducing autonomous operation technology, it is necessary to improve situational awareness through sensor fusion of the automatic identification system (AIS) and the automatic radar plotting aid (ARPA), which are already installed. This study proposes an algorithm for determining in real time whether AIS and ARPA signals originate from the same ship. To minimize errors caused by the time-series nature and anomalies of the heterogeneous signals, a tracking method combining the unscented Kalman filter and the probabilistic data association filter is applied to the ARPA radar signals, and a position prediction method is applied to the AIS signals. In particular, the proposed algorithm decides whether two signals refer to the same vessel by comparing motion-related components of the heterogeneous signals after the corresponding methods have been applied. Finally, a measurement test is conducted on a training ship, in which the proposed algorithm is validated using the AIS and ARPA signal data received by the voyage data recorder for the same ship, and further verified by comparing the test results with those obtained from the raw data. The results support using a sensor fusion algorithm that accounts for the characteristics of each sensor to improve the situational-awareness accuracy of existing ship systems.
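
An illustrative version of the same-ship decision: once the UKF-PDAF track and the time-propagated AIS position refer to a common time stamp, motion-related components are gated against thresholds. The state layout and gate values are assumptions for the sketch, not the paper's parameters.

```python
import numpy as np

def same_ship(arpa, ais, pos_gate=50.0, spd_gate=1.0, crs_gate=np.deg2rad(10.0)):
    """States are (x [m], y [m], speed [m/s], course [rad]) at a common time.
    All three motion components must fall inside their gates."""
    d_pos = np.hypot(arpa[0] - ais[0], arpa[1] - ais[1])
    d_spd = abs(arpa[2] - ais[2])
    d_crs = abs((arpa[3] - ais[3] + np.pi) % (2 * np.pi) - np.pi)  # wrap-safe
    return d_pos < pos_gate and d_spd < spd_gate and d_crs < crs_gate
```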

A Study on Real-time Monitoring of Tool Fracture in Turning (선삭공정시 공구파손의 실시간 검출에 관한 연구)

  • Park, D.K.;Chu, C.N.;Lee, J.M.
    • Journal of the Korean Society for Precision Engineering / v.12 no.3 / pp.130-143 / 1995
  • This paper presents a new methodology for on-line tool breakage detection based on sensor fusion of an acoustic emission (AE) sensor and a built-in force sensor. A built-in piezoelectric force sensor, instead of a tool dynamometer, was used to measure the cutting force without altering the machine tool dynamics. The sensor was inserted in the tool turret housing of an NC lathe, and FEM analysis was carried out to locate the most sensitive position for it. A burst of the AE signal was used as a trigger to inspect the cutting force, and a significant drop in cutting force was then taken to indicate tool breakage. The algorithm was implemented on a DSP board for in-process tool breakage detection. Experimental work showed the excellent monitoring capability of the proposed tool breakage detection system.
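
A sketch of the trigger-then-inspect logic described above; the thresholds, window length, and drop ratio are illustrative values, not the paper's tuned parameters.

```python
import numpy as np

def detect_breakage(ae, force, ae_thresh, drop_ratio=0.5, window=50):
    """ae, force: equally sampled arrays. An AE burst arms the detector; a
    cutting-force drop below drop_ratio * baseline inside the window confirms
    breakage. Returns the sample index of detection, or None."""
    baseline = np.median(force[:window])           # pre-event force level
    for i in np.flatnonzero(ae > ae_thresh):       # AE burst = trigger
        seg = force[i:i + window]
        if seg.size and seg.min() < drop_ratio * baseline:
            return int(i)
    return None
```

Requiring both cues keeps isolated AE noise or slow force drift from raising false alarms.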

Improvement of Dynamic Respiration Monitoring Through Sensor Fusion of Accelerometer and Gyro-sensor

  • Yoon, Ja-Woong;Noh, Yeon-Sik;Kwon, Yi-Suk;Kim, Won-Ki;Yoon, Hyung-Ro
    • Journal of Electrical Engineering and Technology / v.9 no.1 / pp.334-343 / 2014
  • In this paper, we suggest a method that improves on a single accelerometer by fusing an accelerometer and a gyro sensor with a Kalman filter to produce a higher-quality respiration signal. To evaluate the proposed algorithm's performance, we developed a chest belt-type module and performed experiments consisting of aerobic and muscular exercises with 10 subjects. We compared the respiration signal derived from the accelerometer alone and that from our algorithm against the standard respiration signal from a piezoelectric sensor, in both the time and frequency domains, during the aerobic and muscular exercises. We also analyzed the time delay to verify the synchronization between the output and standard signals. Our algorithm improved the respiratory-rate detection accuracy by 4.6% and 9.54% for the treadmill and leg press, respectively, both of which are dynamic exercises, with a small average time delay of about 0.638 s, confirming that real-time monitoring of the respiration signal is possible. In conclusion, the suggested algorithm can acquire a higher-quality respiration signal in a dynamic exercise environment, beyond the limits of a static one, supporting safer, more effective, and more sustainable exercise.
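
The filter below is the textbook scalar Kalman pattern for this sensor pair, with the gyro rate driving the prediction and the accelerometer-derived angle correcting it; it illustrates the fusion principle, not the authors' exact filter or tuning.

```python
def kalman_fuse(gyro_rates, accel_angles, dt, q=1e-4, r=1e-2):
    """Scalar Kalman filter: gyro rate in the predict step, accelerometer
    angle in the update step. q, r are assumed process/measurement noises."""
    angle, p = accel_angles[0], 1.0
    fused = []
    for w, z in zip(gyro_rates, accel_angles):
        angle += w * dt            # predict using the gyro rate
        p += q
        k = p / (p + r)            # Kalman gain
        angle += k * (z - angle)   # correct using the accelerometer angle
        p *= (1.0 - k)
        fused.append(angle)
    return fused
```

The gyro suppresses the motion artifacts that corrupt the accelerometer during exercise, while the accelerometer bounds the gyro's drift.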

Map Building Based on Sensor Fusion for Autonomous Vehicle (자율주행을 위한 센서 데이터 융합 기반의 맵 생성)

  • Kang, Minsung;Hur, Soojung;Park, Ikhyun;Park, Yongwan
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.6 / pp.14-22 / 2014
  • An autonomous vehicle requires technology for generating maps by recognizing the surrounding environment. This recognition can be achieved using distance information from a 2D laser scanner and color information from a camera, and such sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes, because it contains information about a single plane only. In contrast, a 3D map also contains height values, and can therefore be used not only for generating routes but also for identifying the space accessible to the vehicle. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing its environment in real time. Accordingly, this paper proposes a technique for generating 2D maps that guarantees real-time recognition: it uses only the color information obtained by removing the height values from 3D maps generated by fusing 2D laser scanner and camera data.
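
A small sketch of that map reduction step, under assumed grid parameters: colored 3D points from the laser/camera fusion are flattened into a 2D color grid, discarding height.

```python
import numpy as np

def flatten_to_2d(points, res=0.1, size=200):
    """points: (N, 6) array of x, y, z, r, g, b with colors in 0..255.
    Projects onto a size x size grid of cell size `res`, keeping color only."""
    grid = np.zeros((size, size, 3), dtype=np.uint8)
    ij = (points[:, :2] / res + size // 2).astype(int)   # x, y -> col, row
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 1], ij[ok, 0]] = points[ok, 3:6]         # height (z) dropped
    return grid
```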

SWNT-UHF Fusion Sensor for GIS Partial Discharge Detection (가스절연기기의 부분방전검출을 위한 SWNT-UHF 융합센서)

  • Lee, Sang-Wook;Chang, Yong-Moo;Baik, Seung-Hyun;Lee, Jong-Chul
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2010.06a / pp.120-120 / 2010
  • To detect partial discharge (PD) events, we have studied a fusion sensor combining a UHF sensor and a single-walled carbon nanotube (SWNT) gas sensor. The UHF sensor is commonly employed to detect partial discharges in apparatus such as GIS, whereas the SWNT gas sensor is a newly proposed means of detecting them. In this study, we monitored both the change in the electrical conductance of the SWNT sensors in response to PD events and, simultaneously, the signal of the UHF sensor, using the IEC 60270 standard method as the reference for the partial discharge events.
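
An illustrative coincidence check in the spirit of the fusion sensor: a PD event is declared only when an SWNT conductance shift and a UHF burst occur within the same short window. The signal names and thresholds are assumptions, not the study's processing chain.

```python
import numpy as np

def pd_events(swnt_conductance, uhf_power, g_thresh, uhf_thresh, window=10):
    """Indices where a UHF burst coincides with an SWNT conductance shift
    within +/- window samples. Both channels must respond for a PD event."""
    g_hit = np.abs(np.diff(swnt_conductance)) > g_thresh
    u_hit = uhf_power[1:] > uhf_thresh
    return [int(i) for i in np.flatnonzero(u_hit)
            if g_hit[max(0, i - window):i + window].any()]
```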

A Study on Orientation and Position Control of Mobile Robot Based on Multi-Sensor Fusion for Implementation of Smart FA (스마트팩토리 실현을 위한 다중센서기반 모바일로봇의 위치 및 자세제어에 관한 연구)

  • Dong, G.H.;Kim, D.B.;Kim, H.J.;Kim, S.H.;Baek, Y.T.;Han, S.H.
    • Journal of the Korean Society of Industry Convergence / v.22 no.2 / pp.209-218 / 2019
  • This study proposes a new approach to orientation and position control based on obstacle avoidance through multi-sensor fusion and autonomous travelling control of a mobile robot system, for the implementation of smart factory automation. The focus is on controlling the mobile robot with a multiple-sensor module for autonomous travelling and obstacle avoidance; the module consists of sonar sensors, PSD sensors, color recognition sensors, and position recognition sensors. Two points are emphasized for real-time implementation of autonomous travelling control in limited manufacturing environments. One is the development of a travelling trajectory control algorithm that is accurate and fast while accounting for constraints such as uncertain nonlinear dynamic effects. The other is the real-time implementation of obstacle avoidance and autonomous travelling control based on the multiple sensors. The reliability of this study is illustrated by computer simulations and experiments on autonomous travelling control and obstacle avoidance.
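
A toy fusion rule in the spirit of the multiple-sensor module: per-direction sonar and PSD ranges are blended, and the robot steers toward the clearest heading. The sensor layout and weights are invented for the sketch.

```python
import numpy as np

def pick_heading(sonar, psd, headings, w_sonar=0.5, w_psd=0.5):
    """sonar, psd: range readings aligned with `headings` (radians).
    Larger fused clearance = more open direction."""
    clearance = w_sonar * np.asarray(sonar) + w_psd * np.asarray(psd)
    return headings[int(np.argmax(clearance))]
```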

Short Range Target Tracking Based on Data Fusion Method Using Asynchronous Dissimilar Sensors (비동기 이종 센서를 이용한 데이터 융합기반 근거리 표적 추적기법)

  • Lee, Eui-Hyuk
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.335-343 / 2012
  • This paper presents a target tracking algorithm for the fusion of radar and infrared (IR) sensor measurement data. Generally, fusion methods based on the Kalman filter assume that the data obtained by the radar and the IR sensor are synchronized, which greatly limits their application to real systems. The key point of the proposed algorithm is that two asynchronous, dissimilar data streams are fused by compensating for the time difference between the measurements using the radar ranges and track state vectors. The proposed fusion algorithm is evaluated via computer simulation against existing track fusion and measurement fusion methods.
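
A minimal sketch of the compensation step: the radar track state is propagated under a constant-velocity model to the IR measurement time, and the IR bearing is then blended in, with the radar supplying the range since IR measures angle only. The state layout and blend weight are assumptions.

```python
import numpy as np

def propagate(state, dt):
    """Constant-velocity propagation of a radar track state (x, y, vx, vy)."""
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def fuse_bearing(state, ir_bearing, radar_range, w=0.5):
    """Blend the propagated track bearing with the IR bearing, then rebuild
    position from the radar range. The naive blend ignores angle wrap-around;
    a real filter would handle it."""
    b = (1 - w) * np.arctan2(state[1], state[0]) + w * ir_bearing
    return np.array([radar_range * np.cos(b), radar_range * np.sin(b),
                     state[2], state[3]])
```

Calling `propagate(track, t_ir - t_radar)` first aligns the radar track to the IR time stamp, which is the compensation the abstract describes.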

Lane Information Fusion Scheme using Multiple Lane Sensors (다중센서 기반 차선정보 시공간 융합기법)

  • Lee, Soomok;Park, Gikwang;Seo, Seung-woo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.142-149 / 2015
  • Most mono-camera-based lane detection systems are fragile under poor illumination. To compensate for the limitations of a single sensor, a lane information fusion system using multiple lane sensors is an alternative that stabilizes performance and guarantees high precision. However, conventional fusion schemes, which concern only object detection, are inappropriate for lane information fusion. Even the few studies considering lane information fusion have treated the second sensor merely as a backup, or have omitted the asynchronous multi-rate and coverage cases. In this paper, we propose a lane information fusion scheme utilizing multiple lane sensors with different coverages and cycles. Precise fusion is achieved by a framework that considers the individual ranging capability and processing time of diverse types of lane sensors. In addition, a novel lane estimation model is proposed to synchronize multi-rate sensors precisely by up-sampling sparse lane information signals. Through quantitative vehicle-level experiments with an around-view monitoring system and a frontal camera system, we demonstrate the robustness of the proposed lane fusion scheme.
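
A hedged sketch of the up-sampling step for multi-rate synchronization, assuming a polynomial lane model whose coefficients are interpolated onto the fusion clock; the cubic-style model and linear interpolation are assumptions, not the paper's estimation model.

```python
import numpy as np

def upsample_coeffs(t_sensor, coeffs, t_fusion):
    """coeffs: (N, k) lane-polynomial coefficients (e.g. y = c0 + c1*x + c2*x^2)
    stamped at t_sensor. Returns coefficients interpolated at the fusion
    time stamps, so slow and fast sensors share one clock."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.column_stack([np.interp(t_fusion, t_sensor, coeffs[:, j])
                            for j in range(coeffs.shape[1])])
```

Interpolating model coefficients rather than raw pixels keeps the up-sampled lanes smooth and cheap to compute between sensor updates.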