• Title/Summary/Keyword: Sensor fusion


A Study on the Sensor Fusion Method to Improve Localization of a Mobile Robot (이동로봇의 위치추정 성능개선을 위한 센서융합기법에 관한 연구)

  • Jang, Chul-Woong;Jung, Ki-Ho;Kong, Jung-Shik;Jang, Mun-Suk;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference
    • /
    • 2007.10a
    • /
    • pp.317-318
    • /
    • 2007
  • One of the important capabilities of an autonomous mobile robot is to build a map of its surrounding environment and estimate its own location within it. This paper suggests a sensor fusion method combining a laser range finder and a monocular vision sensor for simultaneous localization and map building. The robot observes corner points in the environment as features using the laser range finder, and extracts SIFT features with the monocular vision sensor. We verify the improved localization performance of the mobile robot through experiments.


Selection and Allocation of Point Data with Wavelet Transform in Reverse Engineering (역공학에서 웨이브렛 변황을 이용한 점 데이터의 선택과 할당)

  • Ko, Tae-Jo;Kim, Hee-Sool
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.9
    • /
    • pp.158-165
    • /
    • 2000
  • Reverse engineering reproduces products by directly extracting geometric information from physical objects such as clay models and wooden mock-ups. The fundamental task in reverse engineering is to acquire the geometric data for modeling the objects. This research proposes a novel data acquisition method aimed at unmanned, fast, and precise measurement, achieved by fusing a CCD camera using a structured light beam with a touch trigger sensor. The vision system provides global information about the object. Because the number of vision data points is very large, the number of data points and the position allocation for the touch sensor are critical to productivity. We therefore applied the wavelet transform to reduce the number of data points and to allocate the positions of the touch probe. Simulated and experimental results show that this method is effective for data reduction.

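The abstract above leaves the wavelet step implicit. As a rough illustration of the idea (not the paper's implementation; all names below are hypothetical), a one-level Haar transform can flag where a measured profile changes quickly, so touch-probe points are allocated only where there is significant detail:

```python
def haar_detail(profile):
    """One-level Haar wavelet transform of a 1-D profile.

    Returns (approximation, detail) coefficient lists.
    Assumes len(profile) is even.
    """
    approx, detail = [], []
    for i in range(0, len(profile), 2):
        a, b = profile[i], profile[i + 1]
        approx.append((a + b) / 2.0)
        detail.append((a - b) / 2.0)
    return approx, detail

def select_touch_points(profile, threshold):
    """Keep only sample positions where the local detail energy is
    large, i.e. where the surface changes quickly and the touch probe
    should be allocated densely."""
    _, detail = haar_detail(profile)
    selected = []
    for i, d in enumerate(detail):
        if abs(d) > threshold:
            selected.extend([2 * i, 2 * i + 1])  # keep both samples of the pair
    return selected

# A flat region followed by a step: only the step region is kept.
profile = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(select_touch_points(profile, 0.25))  # -> [4, 5]
```

In practice a multi-level decomposition with a smoother wavelet would be used, but the selection principle — thresholding detail coefficients to concentrate measurements where geometry varies — is the same.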

The Performance Analysis of MPDA in Out of Sequence Measurement Environment (Out of Sequence Measurement 환경에서의 MPDA 성능 분석)

  • Seo, Il-Hwan;Lim, Young-Taek;Song, Taek-Lyul
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.9
    • /
    • pp.401-408
    • /
    • 2006
  • In multi-sensor multi-target tracking systems, the local sensors track the targets and transfer their measurements to the fusion center. Measurements from the same target can arrive out of sequence; these are called out-of-sequence measurements (OOSMs). OOSMs arise at the fusion center due to communication delays and varying preprocessing times on different sensor platforms. In general, track fusion is performed at the fusion center to enhance tracking performance using the measurements from the sensors. In a cluttered environment, target information arrives at the fusion center along with clutter. In this paper, an OOSM update step with MPDA (Most Probable Data Association) is introduced and tested in several cases with various clutter densities through Monte Carlo simulation. The performance of MPDA with the OOSM update step is compared with the existing NN, PDA, and PDA-AI approaches for air target tracking in a cluttered, out-of-sequence measurement environment. Simulation results show that MPDA with OOSM achieves root mean square errors comparable to the out-of-sequence PDA-AI filter and that MPDA is suitable for use in out-of-sequence environments.
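The OOSM problem itself can be made concrete with a toy example. The sketch below is not the paper's MPDA retrodiction; it is a deliberately simple 1-D Kalman filter that handles a late measurement by keeping a buffer and re-filtering in time order, which makes the result identical to in-order processing (all names hypothetical):

```python
# Toy 1-D random-walk Kalman filter illustrating only the OOSM
# bookkeeping (buffer-and-refilter), not MPDA data association.

def kf_step(x, P, z, q=0.1, r=1.0):
    """One predict+update step: random-walk dynamics, direct observation."""
    P = P + q                    # predict: variance grows by process noise q
    K = P / (P + r)              # Kalman gain for measurement noise r
    x = x + K * (z - x)          # update with measurement z
    P = (1.0 - K) * P
    return x, P

def filter_with_oosm(measurements):
    """measurements: list of (timestamp, value), possibly out of order.
    When a late measurement arrives, re-sort the buffer and re-run the
    filter from the diffuse prior so the estimate honors time order."""
    buffer = []
    x, P = 0.0, 10.0             # diffuse prior
    for t, z in measurements:
        buffer.append((t, z))
        buffer.sort()            # restores time order on an OOSM arrival
        x, P = 0.0, 10.0
        for _, zi in buffer:
            x, P = kf_step(x, P, zi)
    return x

in_order = [(0, 1.0), (1, 1.2), (2, 0.9)]
with_oosm = [(0, 1.0), (2, 0.9), (1, 1.2)]  # measurement at t=1 arrives late
# Re-filtering makes the final estimate identical in both cases.
print(filter_with_oosm(in_order) == filter_with_oosm(with_oosm))  # True
```

Full re-filtering is exact but expensive; OOSM algorithms such as the retrodiction-based update tested in the paper approximate this result in a single backward/forward step.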

Multi-sensor Fusion based Autonomous Return of SUGV (다중센서 융합기반 소형로봇 자율복귀에 대한 연구)

  • Choi, Ji-Hoon;Kang, Sin-Cheon;Kim, Jun;Shim, Sung-Dae;Jee, Tae-Yong;Song, Jae-Bok
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.15 no.3
    • /
    • pp.250-256
    • /
    • 2012
  • Unmanned ground vehicles may be operated autonomously or by a remote control unit over a wireless link. However, autonomous technology is still challenging and not fully developed, and wireless communication is not always available. If the wireless link is abruptly disconnected, the UGV becomes inoperable and, worse, can be captured by the enemy. This paper suggests an autonomous return technology with which the UGV can autonomously travel back to a safer position along the reverse path. The suggested technology is based on the creation and matching of a multi-correlated information DB. While the SUGV moves under remote control, the DB is created from multi-sensor information: if GPS is available, the absolute position along the trajectory is stored in the DB; if GPS is unavailable, a hybrid map based on the fusion of vision and LADAR is stored with the corresponding relative position. During autonomous return, the SUGV follows the trajectory using the GPS-based absolute positions when GPS is available. Otherwise, the current position of the SUGV is first estimated from the relative position using multi-sensor fusion, followed by matching between the query and the DB; the return path is then created in the map and the SUGV returns automatically along it. Experimental results on a pre-built trajectory show the feasibility of successful autonomous return.

Pose Control of Mobile Inverted Pendulum using Gyro-Accelerometer (자이로-가속도센서를 이용한 모바일 역진자의 자세 제어)

  • Kang, Jin-Gu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.10
    • /
    • pp.129-136
    • /
    • 2010
  • This paper proposes a sensor fusion algorithm combining a gyroscope and an accelerometer to maintain the inverted posture of a two-wheeled robot that can move its body to a desired destination. A mobile inverted robot falls forward or backward as it converges toward its stable point, so precise tilt-angle information and quick posture control based on that information are necessary to maintain the inverted posture. Hence this paper proposes a sensor fusion algorithm between a gyroscope, which provides the angular velocity, and an accelerometer, which compensates for the gyroscope's drift. A Kalman filter is normally used for this purpose and much research on it is in progress, but that approach requires a high-performance DSP and supporting systems. This paper realizes a robot control method that is much simpler yet achieves the desired performance by using the proposed sensor fusion algorithm together with PID control.
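The abstract does not spell out its lightweight fusion rule; a common DSP-free alternative to a full Kalman filter for exactly this gyro/accelerometer pairing is a complementary filter, sketched below (the gain `alpha` and sensor values are illustrative assumptions):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rate (rad/s) with accelerometer tilt angle (rad).

    The integrated gyro is accurate short-term but drifts; the
    accelerometer angle is drift-free but noisy. Each step blends them:
        angle = alpha * (angle + gyro * dt) + (1 - alpha) * accel_angle
    so high-frequency motion comes from the gyro and the low-frequency
    reference from the accelerometer.
    """
    angle = accel_angles[0]
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
    return angle

# Stationary robot: true tilt is zero, gyro has a constant bias,
# accelerometer reads the true tilt.
n = 500
gyro = [0.02] * n                  # gyro bias of 0.02 rad/s
accel = [0.0] * n
angle = complementary_filter(gyro, accel)
print(abs(angle) < 0.02)           # bias stays bounded instead of integrating away
```

A pure gyro integral would drift by 0.02 rad/s indefinitely; the accelerometer term keeps the error bounded, which is the behavior the paper's simpler-than-Kalman fusion targets.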

Image Fusion of High Resolution SAR and Optical Image Using High Frequency Information (고해상도 SAR와 광학영상의 고주파 정보를 이용한 다중센서 융합)

  • Byun, Young-Gi;Chae, Tae-Byeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.1
    • /
    • pp.75-86
    • /
    • 2012
  • A Synthetic Aperture Radar (SAR) imaging system is independent of solar illumination and weather conditions; however, SAR images are difficult to interpret compared with optical images. There has been increasing interest in multi-sensor fusion techniques that can improve the interpretability of SAR images by fusing in the spectral information of a multispectral (MS) image. In this paper, a multi-sensor fusion method is proposed based on a high-frequency extraction process using the Fast Fourier Transform (FFT) and an outlier elimination process; it maintains the spectral content of the original MS image while retaining the spatial detail of the high-resolution SAR image. As the test data set we used a TerraSAR-X image, which is acquired with the same X-band SAR system as KOMPSAT-5, together with a KOMPSAT-2 MS image. To evaluate the efficiency of the proposed method, the fusion result was compared visually and quantitatively with results obtained using existing fusion algorithms. The evaluation shows that the proposed method achieved successful fusion of SAR and MS images compared with the existing algorithms.
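The core high-frequency injection step can be illustrated in 1-D (the paper works on 2-D imagery with an additional outlier elimination step this sketch omits; the tiny DFT here stands in for a library FFT, and all names are hypothetical):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[i] * cmath.exp(2j * cmath.pi * i * k / n) for i in range(n)).real / n
            for k in range(n)]

def highpass_inject(ms_row, sar_row, cutoff):
    """Extract the high-frequency content of the SAR row (frequency
    index >= cutoff, counting mirrored negative frequencies) and add it
    to the MS row, preserving the MS row's low-frequency spectral
    content while injecting SAR spatial detail."""
    n = len(sar_row)
    S = dft(sar_row)
    H = [S[i] if min(i, n - i) >= cutoff else 0.0 for i in range(n)]
    detail = idft(H)
    return [m + d for m, d in zip(ms_row, detail)]

ms_row = [1.0] * 8                                   # smooth MS row
sar_row = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # high-frequency SAR detail
fused = highpass_inject(ms_row, sar_row, cutoff=1)
# The MS row's mean (its low-frequency content) is preserved exactly,
# because the injected detail has no DC component.
print(abs(sum(fused) / 8 - 1.0) < 1e-9)  # True
```

Preserving the MS low frequencies while injecting only SAR high frequencies is what keeps the fused product spectrally faithful to the MS image, the property the paper evaluates quantitatively.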

A Study on Environmental Micro-Dust Level Detection and Remote Monitoring of Outdoor Facilities

  • Kim, Seung Kyun;Mariappan, Vinayagam;Cha, Jae Sang
    • International journal of advanced smart convergence
    • /
    • v.9 no.1
    • /
    • pp.63-69
    • /
    • 2020
  • Rapid modern industrialization has polluted water and atmospheric air across the globe, with a major impact on the health of humans and other living things. Governments worldwide place increasing importance on improving outdoor air pollution monitoring and control to provide quality of life and protect citizens from hazardous disease. We propose an environmental dust level detection method for outdoor facilities that uses sensor fusion technology to measure precise micro-dust levels and monitor them in real time. The proposed approach fuses camera sensor data with commercial dust level sensor data to predict the micro-dust level. The camera-based dust level detection uses an optical-flow-based machine learning method; its output is fused with the commercial dust level sensor data to predict the precise micro-dust level of the outdoor facility, and the dust level information is sent to the outdoor air pollution monitoring system. The proposed method was implemented on Raspberry Pi based open-source hardware with an Internet-of-Things (IoT) framework, and the performance of the system was evaluated in real time. The experimental results confirm that the proposed micro-dust level detection is precise and reliable in sensing air dust and pollution, indicating changes in air pollution somewhat more precisely than the commercial-sensor-only method.
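The abstract does not state the fusion rule used to combine the camera and dust-sensor readings. A common baseline for fusing two independent noisy estimates is inverse-variance weighting, sketched below with made-up numbers (all names and variances are illustrative assumptions, not the paper's values):

```python
def fuse_estimates(x_cam, var_cam, x_sensor, var_sensor):
    """Inverse-variance weighted fusion of two independent dust-level
    estimates: the more certain source gets the larger weight, and the
    fused variance is smaller than either input's."""
    w_cam = 1.0 / var_cam
    w_sen = 1.0 / var_sensor
    fused = (w_cam * x_cam + w_sen * x_sensor) / (w_cam + w_sen)
    fused_var = 1.0 / (w_cam + w_sen)
    return fused, fused_var

# Camera estimate is noisier (variance 4.0) than the dust sensor (1.0),
# so the fused level sits closer to the sensor reading.
level, var = fuse_estimates(42.0, 4.0, 50.0, 1.0)
print(round(level, 1), round(var, 2))  # -> 48.4 0.8
```

Whatever the paper's exact rule, any fusion scheme of this family shares the property shown here: the combined estimate is never less certain than the better single sensor.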

Efficient Kinect Sensor-Based Reactive Path Planning Method for Autonomous Mobile Robots in Dynamic Environments (키넥트 센서를 이용한 동적 환경에서의 효율적인 이동로봇 반응경로계획 기법)

  • Tuvshinjargal, Doopalam;Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.6
    • /
    • pp.549-559
    • /
    • 2015
  • In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning in dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones by providing the speed and orientation information between the robot and obstacles. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through both simulation studies and field experiments with multiple moving obstacles in hostile dynamic environments.