• Title/Summary/Keyword: multiple sensor fusion

Implementation of Deep-sea UUV Precise Underwater Navigation based on Multiple Sensor Fusion (다중센서융합 기반의 심해무인잠수정 정밀수중항법 구현)

  • Kim, Ki-Hun;Choi, Hyun-Taek;Kim, Sea-Moon;Lee, Pan-Mook;Lee, Chong-Moo;Cho, Seong-Kwon
    • Journal of Ocean Engineering and Technology / v.24 no.3 / pp.46-51 / 2010
  • This paper describes the implementation of a precise underwater navigation solution using a multi-sensor fusion technique based on USBL, DVL, and IMU measurements. To implement this solution, three strategies are adopted. The first is heading alignment angle identification to enhance the performance of a standalone dead-reckoning algorithm. The second is the rapid acquisition of an absolute position fix to prevent the accumulation of integration error. The third is the introduction of an effective outlier rejection algorithm. The performance of the developed algorithm was verified with experimental data acquired by the deep-sea ROV Hemire in the East Sea during a survey of a methane gas seepage area at a depth of 1,500 m.
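
The abstract names outlier rejection as its third strategy without giving details. As a point of reference only, the sketch below shows one standard way such rejection is done: chi-square gating of a USBL fix against the dead-reckoned prediction. The function name, threshold, and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def reject_usbl_outlier(predicted_pos, predicted_cov, usbl_fix, usbl_cov, gate=9.21):
    """Gate a USBL position fix against the dead-reckoned prediction.

    Returns True if the fix should be rejected as an outlier.
    gate=9.21 is the chi-square 99% threshold for 2 DOF (hypothetical choice).
    """
    innovation = usbl_fix - predicted_pos             # measurement residual
    S = predicted_cov + usbl_cov                      # innovation covariance
    d2 = innovation @ np.linalg.solve(S, innovation)  # squared Mahalanobis distance
    return d2 > gate

# Example: a fix 30 m off the prediction with ~1 m uncertainty is rejected.
pred = np.array([100.0, 200.0]); P = np.eye(2)
fix  = np.array([130.0, 200.0]); R = np.eye(2)
print(reject_usbl_outlier(pred, P, fix, R))  # True
```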

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM; the RSM detection generates candidates in these regions and classifies their types. The system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection is varied and the detection method is switched according to an FSM (Finite State Machine) that models the driving situation. A single template matching scheme is used to extract features for both lane marking detection and RSM detection, implemented efficiently with a horizontal integral image. Finally, multiple verification steps are performed to minimize false detections.
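
The horizontal integral image mentioned in the abstract is a per-row cumulative sum that makes any horizontal pixel-segment sum an O(1) lookup. A minimal sketch follows; the `marking_response` template (a bright bar compared against its left and right neighborhoods) and all widths are hypothetical stand-ins for the paper's actual template.

```python
import numpy as np

def horizontal_integral_image(gray):
    """Cumulative sum along each row; H[y, x] = sum of gray[y, 0..x]."""
    return np.cumsum(gray.astype(np.int64), axis=1)

def row_segment_sum(H, y, x0, x1):
    """Sum of pixels gray[y, x0..x1] in O(1) via the horizontal integral image."""
    return H[y, x1] - (H[y, x0 - 1] if x0 > 0 else 0)

def marking_response(H, y, x, w):
    """Bright-bar response at (y, x): a bar of width w vs. its neighbors.
    A simple stand-in for the paper's template feature (widths hypothetical)."""
    bar   = row_segment_sum(H, y, x, x + w - 1)
    left  = row_segment_sum(H, y, x - w, x - 1) if x >= w else 0
    right = row_segment_sum(H, y, x + w, x + 2 * w - 1)
    return 2 * bar - left - right  # high where a bright marking sits on dark asphalt

# Toy row: dark road with a bright 3-pixel marking starting at x=5.
img = np.full((1, 16), 40, dtype=np.uint8); img[0, 5:8] = 220
H = horizontal_integral_image(img)
print(marking_response(H, 0, 5, 3))  # strong positive response at the marking
```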

Multiple Vehicle Recognition based on Radar and Vision Sensor Fusion for Lane Change Assistance (차선 변경 지원을 위한 레이더 및 비전센서 융합기반 다중 차량 인식)

  • Kim, Heong-Tae;Song, Bongsob;Lee, Hoon;Jang, Hyungsun
    • Journal of Institute of Control, Robotics and Systems / v.21 no.2 / pp.121-129 / 2015
  • This paper presents a multiple vehicle recognition algorithm based on radar and vision sensor fusion for lane change assistance. To determine whether a lane change is possible, it is necessary to recognize not only the primary vehicle located in-lane, but also adjacent vehicles in the left and/or right lanes. With the given sensor configuration, two challenging problems are considered. One is that a guardrail detected by the front radar might be recognized as a left or right vehicle due to its geometric characteristics; this is solved by a guardrail recognition algorithm based on motion and shape attributes. The other is that the recognition of rear vehicles in the left or right lanes might be wrong, especially on curved roads, due to the low accuracy of the lateral position measured by the rear radars and the lack of knowledge of the road curvature in the backward direction. To solve this problem, it is proposed that the road curvature measured by the front vision sensor be used to derive the road curvature toward the rear. Finally, the proposed algorithm for multiple vehicle recognition is validated with field test data from real roads.
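
The abstract's key idea of extrapolating the front-measured curvature backwards can be illustrated with a small geometric sketch: on an arc of constant curvature c, the lane centerline at longitudinal distance x is displaced laterally by roughly c·x²/2, and subtracting that displacement from the rear radar's lateral reading restores a usable lane assignment. Everything below (the constant-curvature assumption, lane width, and thresholds) is illustrative, not the authors' implementation.

```python
def lane_of_rear_target(x_rear, y_rear, curvature, lane_width=3.5):
    """Assign a rear radar target to the left/ego/right lane on a curved road.

    x_rear: longitudinal distance behind the ego vehicle (m)
    y_rear: lateral offset measured by the rear radar (m, left positive)
    curvature: road curvature from the front vision sensor (1/m), assumed
               constant so it can be extrapolated backwards (clothoids ignored).
    """
    # Lateral displacement of the lane centerline at distance x on an arc:
    # y ~= 0.5 * c * x^2 (small-angle approximation).
    lane_center_offset = 0.5 * curvature * x_rear ** 2
    y_rel = y_rear - lane_center_offset  # offset relative to the curved ego lane
    if y_rel > lane_width / 2:
        return "left"
    if y_rel < -lane_width / 2:
        return "right"
    return "ego"

# On a right-hand curve (c = -1/500 per metre), a target 40 m behind with a raw
# lateral reading of -2.6 m is still in the ego lane once curvature is removed.
print(lane_of_rear_target(40.0, -2.6, -1 / 500))  # 'ego'
```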

Implementation of Multiple Sensor Data Fusion Algorithm for Fire Detection System

  • Park, Jung Kyu;Nam, Kihun
    • Journal of the Korea Society of Computer and Information / v.25 no.7 / pp.9-16 / 2020
  • In this paper, we propose a prototype design and implementation of a fire detection algorithm using multiple sensors. The proposed fire detection system determines whether a fire has occurred by applying rules to data from multiple sensors. A fire takes about 3 to 5 minutes to develop, so this window is the optimal time for detection; timely identification of potential fires is therefore critical for fire management. However, current fire detection devices are very vulnerable to false alarms because they rely on a single sensor to detect smoke or heat. Recently, with the development of IoT technology, it has become possible to integrate multiple sensors into a fire detector, and detectors have gained smart capabilities to communicate with other objects and perform programmed tasks. In 10 real-world experiments, the prototype achieved a detection success rate of 90% and a false alarm rate of 10%.
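
The abstract describes rule-based fusion of several sensors but does not publish the rules. The sketch below shows the general shape of such a detector, assuming smoke, temperature, rate-of-rise, and CO cues; every threshold is invented for illustration, and the two-cue vote is one plausible way to cut the single-sensor false alarms the paper mentions.

```python
from dataclasses import dataclass

@dataclass
class Readings:
    smoke_ppm: float        # smoke density
    temp_c: float           # ambient temperature
    temp_rise_c_min: float  # rate of temperature rise
    co_ppm: float           # carbon monoxide level

def is_fire(r: Readings) -> bool:
    """Rule-based fusion of multiple sensors, in the spirit of the paper.

    All thresholds below are illustrative, not the authors' calibrated values.
    Requiring agreement between at least two independent cues is what reduces
    the false alarms a single smoke or heat detector produces."""
    cues = [
        r.smoke_ppm > 300,      # heavy smoke
        r.temp_c > 57,          # fixed-temperature threshold
        r.temp_rise_c_min > 8,  # rate-of-rise heat detection
        r.co_ppm > 30,          # combustion gas
    ]
    return sum(cues) >= 2       # fire only when two or more cues agree

print(is_fire(Readings(smoke_ppm=450, temp_c=40, temp_rise_c_min=12, co_ppm=5)))  # True
print(is_fire(Readings(smoke_ppm=350, temp_c=25, temp_rise_c_min=0, co_ppm=2)))   # False (smoke alone)
```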

Navigation System of UUV Using Multi-Sensor Fusion-Based EKF (융합된 다중 센서와 EKF 기반의 무인잠수정의 항법시스템 설계)

  • Park, Young-Sik;Choi, Won-Seok;Han, Seong-Ik;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.22 no.7 / pp.562-569 / 2016
  • This paper proposes a navigation system with a robust localization method for an unmanned underwater vehicle. For robust localization with an IMU (Inertial Measurement Unit), a DVL (Doppler Velocity Log), and depth sensors, an EKF (Extended Kalman Filter) is used to fuse the multiple nonlinear measurements. Note that GPS (Global Positioning System), which provides the absolute coordinates of the vehicle, cannot be used underwater. The DVL is therefore used to measure the relative velocity of the vehicle; it does so via the Doppler effect, in which the relative velocity between a sound source and an observer shifts the received sound frequency. While the vehicle is moving, the motion trajectory to a target position can be recorded by the onboard sensors. The performance of the proposed navigation system was verified through real experiments in which the vehicle reached a target position using the IMU as the primary sensor and the DVL as a secondary sensor.
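
As a rough sketch of the fusion structure described above: the IMU drives the prediction step while the DVL and depth sensor supply the corrections. For brevity this uses a linear constant-velocity model rather than the paper's full nonlinear EKF with attitude states; all noise values are placeholders.

```python
import numpy as np

# state = [x, y, z, vx, vy, vz]; IMU acceleration enters as a control input,
# the DVL measures velocity, and the depth sensor measures z.
dt = 0.1
F = np.eye(6); F[0, 3] = F[1, 4] = F[2, 5] = dt        # constant-velocity model
Q = np.eye(6) * 1e-3                                   # process noise (assumed)
H_dvl   = np.zeros((3, 6)); H_dvl[:, 3:] = np.eye(3)   # DVL observes velocity
H_depth = np.zeros((1, 6)); H_depth[0, 2] = 1.0        # depth sensor observes z
R_dvl, R_depth = np.eye(3) * 0.01, np.array([[0.05]])

def predict(x, P, accel_imu):
    B = np.zeros((6, 3)); B[3:, :] = np.eye(3) * dt    # integrate IMU acceleration
    return F @ x + B @ accel_imu, F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    return x + K @ (z - H @ x), (np.eye(6) - K @ H) @ P

x, P = np.zeros(6), np.eye(6)
x, P = predict(x, P, np.array([0.2, 0.0, 0.0]))                 # IMU step
x, P = update(x, P, np.array([0.02, 0.0, 0.0]), H_dvl, R_dvl)   # DVL correction
x, P = update(x, P, np.array([5.0]), H_depth, R_depth)          # depth correction
print(np.round(x, 3))
```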

Efficient Kinect Sensor-Based Reactive Path Planning Method for Autonomous Mobile Robots in Dynamic Environments (키넥트 센서를 이용한 동적 환경에서의 효율적인 이동로봇 반응경로계획 기법)

  • Tuvshinjargal, Doopalam;Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.6 / pp.549-559 / 2015
  • In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the method is to improve the robustness of autonomous robot motion planning in dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The reactive planner assumes a local observer in the virtual plane, which transforms complex dynamic planning problems into simple stationary ones by providing the speed and orientation information between the robot and the obstacles. In addition, the sensor fusion-based obstacle detection technique estimates the poses of moving obstacles using a Kinect sensor and sonar sensors, improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through simulation studies as well as field experiments with multiple moving obstacles in hostile dynamic environments.
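
The virtual-plane construction itself is not spelled out in the abstract, but its core reduction, subtracting the obstacle's velocity so that a dynamic obstacle becomes a stationary one in a relative frame, can be sketched with standard collision-cone geometry. This is a generic stand-in, not the authors' formulation.

```python
import numpy as np

def collision_imminent(p_robot, v_robot, p_obs, v_obs, r_safe):
    """Check a moving obstacle in its relative (obstacle-fixed) frame.

    Subtracting the obstacle velocity makes the obstacle stationary, the same
    reduction of a dynamic problem to a static one that the virtual-plane
    method exploits; the geometry here is the standard collision cone."""
    p_rel = p_obs - p_robot           # obstacle position relative to the robot
    v_rel = v_robot - v_obs           # robot velocity in the obstacle frame
    dist = np.linalg.norm(p_rel)
    closing = p_rel @ v_rel / dist    # closing speed along the line of sight
    if closing <= 0:
        return False                  # moving apart: no collision course
    # Compare the angle between v_rel and the line of sight with the half-angle
    # of the cone subtended by the safety radius.
    sin_cone = min(r_safe / dist, 1.0)
    cos_los = p_rel @ v_rel / (dist * np.linalg.norm(v_rel))
    return cos_los >= np.sqrt(1.0 - sin_cone ** 2)

# A robot heading straight at an obstacle that drifts slowly sideways:
print(collision_imminent(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([5.0, 0.0]), np.array([0.0, 0.05]),
                         r_safe=0.5))  # True
```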

Locality Aware Multi-Sensor Data Fusion Model for Smart Environments (스마트 환경을 위한 장소 인식 멀티센서 데이터 퓨전 모델)

  • Nawaz, Waqas;Fahim, Muhammad;Lee, Sung-Young;Lee, Young-Koo
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.78-80 / 2011
  • In the area of data fusion, where heterogeneous data sources must be dealt with, numerous models have been proposed over the last three decades to serve different application domains, e.g., the Department of Defense (DoD), monitoring of complex machinery, medical diagnosis, and smart buildings. All of these models share the theme of multi-level processing to obtain more reliable and accurate information. In this paper, we consider the five most widely accepted fusion models (Intelligence Cycle, Joint Directors of Laboratories, Boyd control loop, Waterfall, Omnibus) as applied to different data fusion areas. When these models are exposed to a real scenario in which a large dataset from heterogeneous sources is used for object monitoring, they may yield inefficient and unreliable information for decision making. The proposed variation performs better in terms of time and accuracy owing to prior data diminution.
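
The abstract does not define "prior data diminution". Assuming it means pruning redundant or implausible records before the multi-level fusion stages run, a minimal sketch might look like the following; the time bucketing and validity ranges are invented for illustration.

```python
from collections import defaultdict

def diminish(records, valid_range, bucket_s=1.0):
    """Prune a heterogeneous sensor stream before fusion (assumed meaning of
    the paper's 'prior data diminution'): drop out-of-range readings and
    collapse same-sensor readings inside one time bucket to their mean."""
    buckets = defaultdict(list)
    for sensor, t, value in records:
        lo, hi = valid_range[sensor]
        if lo <= value <= hi:                        # discard implausible readings
            buckets[(sensor, int(t // bucket_s))].append(value)
    return [(sensor, b * bucket_s, sum(v) / len(v))  # one record per sensor-bucket
            for (sensor, b), v in sorted(buckets.items())]

stream = [("temp", 0.1, 21.0), ("temp", 0.4, 21.2), ("temp", 0.6, 999.0),
          ("rfid", 0.2, 7.0), ("temp", 1.3, 21.5)]
print(diminish(stream, {"temp": (-40, 85), "rfid": (0, 1024)}))
# -> [('rfid', 0.0, 7.0), ('temp', 0.0, 21.1), ('temp', 1.0, 21.5)]
```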

3D Omni-directional Vision SLAM using a Fisheye Lens and a Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are bulky and compute depth for omni-directional images slowly. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. Fusion points are calculated from the planar obstacle coordinates obtained by the two-dimensional laser scanner and the obstacle outlines obtained by the omni-directional image sensor, which acquires the surrounding view simultaneously. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
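
How laser points might be projected into the downward-facing fisheye image to produce "fusion points" can be sketched with the textbook equidistant fisheye model (r = f·θ). The authors' actual camera calibration and fusion geometry are not given in the abstract, so everything below is an assumption.

```python
import numpy as np

def project_to_fisheye(points_robot, cam_height, f_pix, cx, cy):
    """Project 2-D laser points (x, y in the robot frame, metres) into a
    downward-facing fisheye image, assuming the ideal equidistant model
    r_pix = f * theta. This is the textbook model only; the authors'
    calibration is not published in the abstract."""
    uv = []
    for x, y in points_robot:
        d = np.hypot(x, y)
        theta = np.arctan2(d, cam_height)  # angle from the optical axis
        r = f_pix * theta                  # equidistant fisheye mapping
        phi = np.arctan2(y, x)             # azimuth is preserved by the lens
        uv.append((cx + r * np.cos(phi), cy + r * np.sin(phi)))
    return uv

# A laser hit 2 m ahead, with the camera 1 m above the scan plane, f = 300 px:
print(project_to_fisheye([(2.0, 0.0)], cam_height=1.0, f_pix=300, cx=640, cy=480))
```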

Sonar-Based Certainty Grids for Autonomous Mobile Robots (초음파 센서를 이용한 자율 이동 로봇의 써튼티 그리드 형성)

  • Lim, Jong-Hwan;Cho, Dong-Woo
    • The Transactions of the Korean Institute of Electrical Engineers / v.39 no.4 / pp.386-392 / 1990
  • This paper describes a sonar-based certainty grid, a probabilistic representation of uncertain and incomplete sensor knowledge, for autonomous mobile robot navigation. We use sonar range data to build a map of the robot's surroundings. This range data provides information about the locations of objects that may exist in front of the sensor, from which we can compute, for each cell, the probability that it is occupied and the probability that it is empty. In this paper, a new method using the Bayesian formula is introduced, which overcomes some difficulties of the ad hoc formula that had previously been the only way of updating the grids. The new formula can be applied to other kinds of sensors as well as sonar. Its validity in the real world is verified through simulation and experiment. The paper also shows that a wide-angle sensor such as sonar can be used effectively to identify empty areas, and that the simultaneous use of multiple sensors and their fusion in a certainty grid can improve the quality of the map.
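
The Bayesian update the paper advocates over the ad hoc formula can be written compactly in odds form, where each new reading folds into a cell's occupancy probability with one multiplication. The sensor-model probabilities below are illustrative, not the paper's calibrated values.

```python
def bayes_update(prior_occ, p_occ_given_reading):
    """One Bayesian update of a cell's occupancy probability.

    Equivalent to a Bayes-rule grid update written in odds form:
    odds_post = odds_meas * odds_prior, so every new sonar reading
    is folded in with a single multiplication."""
    odds = (p_occ_given_reading / (1 - p_occ_given_reading)) * \
           (prior_occ / (1 - prior_occ))
    return odds / (1 + odds)

# A cell starts unknown (0.5); two readings suggest "occupied" (0.7 each),
# then one suggests "empty" (0.3): the estimate moves accordingly.
p = 0.5
for meas in (0.7, 0.7, 0.3):
    p = bayes_update(p, meas)
    print(round(p, 3))  # 0.7, 0.845, 0.7
```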

Distributed Estimation Using Non-regular Quantized Data

  • Kim, Yoon Hak
    • Journal of information and communication convergence engineering / v.15 no.1 / pp.7-13 / 2017
  • We consider distributed estimation in which many nodes, remotely placed at known locations, collect measurements of a parameter of interest, quantize these measurements, and transmit the quantized data to a fusion node, which performs the parameter estimation. Noting that, to improve system performance, the quantizers at the nodes should operate in a non-regular framework in which multiple codewords or quantization partitions can be mapped from a single measurement, we propose a lightweight estimation algorithm that finds the most feasible combination of codewords. This combination is found by computing a weighted sum over the possible combinations, with weights obtained by counting each combination's occurrences during a learning process. Without such an approach, tremendous complexity would be inevitable due to the multiple codewords or partitions that non-regular quantized data can be interpreted as. Extensive experiments demonstrate that the proposed algorithm provides a statistically significant performance gain at low complexity compared with typical estimation techniques.
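
The weighted-sum idea in the abstract can be sketched as follows: each received codeword expands into its candidate values (several, because the quantizer is non-regular), and the estimate averages the feasible combinations weighted by how often each combination occurred during learning. The candidate tables, counts, and the simple averaging fusion below are all toy stand-ins for the paper's learned quantities.

```python
from itertools import product

# Non-regular quantization: one codeword can stand for several disjoint
# intervals, so a received codeword maps to multiple candidate values.
candidates = {
    "A": {0: [1.0, 5.0], 1: [3.0]},  # node A: codeword -> interval midpoints
    "B": {0: [0.8, 5.2], 1: [2.9]},  # node B
}
# Occurrence counts of (value_A, value_B) combinations seen during learning:
counts = {(1.0, 0.8): 40, (5.0, 5.2): 35, (1.0, 5.2): 1, (5.0, 0.8): 2}

def estimate(cw_a, cw_b):
    """Weighted sum over the feasible codeword combinations: combinations
    that co-occurred often in training dominate the estimate."""
    total_w, total = 0.0, 0.0
    for va, vb in product(candidates["A"][cw_a], candidates["B"][cw_b]):
        w = counts.get((va, vb), 0)
        total_w += w
        total += w * (va + vb) / 2.0  # fuse the pair by simple averaging
    return total / total_w if total_w else None

# Both nodes send codeword 0; the estimate leans to the dominant combinations.
print(round(estimate(0, 0), 3))  # ~2.864
```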