• Title/Summary/Keyword: Sensor fusion


Multiple Sensor Fusion Algorithm for the Altitude Estimation of Deep-Sea UUV, HEMIRE (심해무인잠수정 해미래의 고도정보 추정을 위한 다중센서융합 알고리즘)

  • Kim, Dug-Jin;Kim, Ki-Hun;Lee, Pan-Mook;Cho, Sung-Kwon;Park, Yeoun-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.7 / pp.1202-1208 / 2008
  • This paper presents a multiple-sensor fusion algorithm for the deep-sea unmanned underwater vehicle (UUV) system composed of the remotely operated vehicle (ROV) 'Hemire' and the depressor 'Henuvy'. The performance of an underwater positioning system depends highly on that of acoustic sensors such as the ultra-short baseline (USBL), long baseline (LBL), and altimeter. A practical sensor fusion algorithm is proposed based on a moving-window concept. The performance of the proposed algorithm is demonstrated by applying it to Hemire sea-trial data measured in the East Sea.
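The moving-window fusion described in this abstract can be sketched roughly as follows. This is an illustrative Python sketch only: the window length, the median-based outlier test, and the inverse-variance weighting are assumptions, not the paper's exact formulation.

```python
from collections import deque

class MovingWindowFusion:
    """Fuse altitude readings from several sensors over a sliding window.

    Each sensor keeps a fixed-length window of recent readings; samples far
    from the window median are rejected as spikes, and the surviving
    per-sensor means are blended by inverse-variance weighting.
    Window size and threshold are assumed values.
    """

    def __init__(self, sensors, window=5, outlier_threshold=2.0):
        # sensors: dict mapping sensor name -> assumed noise variance
        self.windows = {name: deque(maxlen=window) for name in sensors}
        self.noise_var = dict(sensors)
        self.outlier_threshold = outlier_threshold

    def update(self, name, reading):
        w = self.windows[name]
        if w:
            median = sorted(w)[len(w) // 2]
            if abs(reading - median) > self.outlier_threshold:
                return                      # reject spike, keep the window
        w.append(reading)

    def estimate(self):
        num = den = 0.0
        for name, w in self.windows.items():
            if not w:
                continue
            mean = sum(w) / len(w)
            weight = 1.0 / self.noise_var[name]   # trust low-noise sensors more
            num += weight * mean
            den += weight
        return num / den if den else None
```

A spiky altimeter reading (e.g. a false bottom echo) is rejected by the median test, so the fused altitude stays close to the consistent readings.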

Parking Space Detection based on Camera and LIDAR Sensor Fusion (카메라와 라이다 센서 융합에 기반한 개선된 주차 공간 검출 시스템)

  • Park, Kyujin;Im, Gyubeom;Kim, Minsung;Park, Jaeheung
    • The Journal of Korea Robotics Society / v.14 no.3 / pp.170-178 / 2019
  • This paper proposes a parking space detection method for autonomous parking that fuses the Around View Monitor (AVM) image with Light Detection and Ranging (LIDAR) data. The method consists of removing obstacles other than the parking lines, detecting the parking lines, and applying template matching to locate the parking space in the parking lot. To remove the obstacles, we correct and fuse the LIDAR information, accounting for the distortion in the AVM image. Once the obstacles are removed, a line filter that reflects the thickness of the parking line and an improved Radon transform are applied to detect the parking lines clearly. The parking space location is found by template matching with a modified parking space template, and the detected parking lines are used to return the location information of the parking space. Finally, we propose a novel parking space detection system that returns the relative distance and relative angle from the current vehicle to the parking space.
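A thickness-aware line filter of the kind this abstract mentions can be sketched in one dimension as below. This is an illustrative sketch, not the paper's kernel: it responds where a bright stripe of the expected parking-line thickness sits between darker asphalt on both sides.

```python
def line_filter(row, thickness):
    """1-D sketch of a thickness-aware parking-line filter.

    For each position, compare the mean brightness of a candidate stripe of
    `thickness` pixels against its left and right neighbourhoods of the
    same width; the response is large only where a bright stripe of roughly
    that thickness is flanked by darker pixels. Illustrative only.
    """
    n = len(row)
    out = [0.0] * n
    t = thickness
    for i in range(t, n - 2 * t):
        left = sum(row[i - t:i]) / t           # asphalt to the left
        centre = sum(row[i:i + t]) / t         # candidate line pixels
        right = sum(row[i + t:i + 2 * t]) / t  # asphalt to the right
        out[i] = max(0.0, 2 * centre - left - right)
    return out
```

Run over the rows of a bird's-eye image, the peak responses trace the painted lines, which a Radon (or Hough) transform can then group into straight parking lines.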

A Study on Orientation and Position Control of Mobile Robot Based on Multi-Sensors Fusion for Implementation of Smart FA (스마트팩토리 실현을 위한 다중센서기반 모바일로봇의 위치 및 자세제어에 관한 연구)

  • Dong, G.H;Kim, D.B.;Kim, H.J;Kim, S.H;Baek, Y.T;Han, S.H
    • Journal of the Korean Society of Industry Convergence / v.22 no.2 / pp.209-218 / 2019
  • This study proposes a new approach to orientation and position control of a mobile robot system, based on obstacle avoidance through multi-sensor fusion and autonomous travelling control, for the implementation of smart factory automation (FA). The focus is on controlling the mobile robot through a multiple-sensor module for autonomous travelling and obstacle avoidance; the module consists of sonar sensors, PSD sensors, color recognition sensors, and position recognition sensors. In particular, two points are proposed for the real-time implementation of autonomous travelling control in limited manufacturing environments. One is the development of a travelling trajectory control algorithm that is accurate and fast while considering constraints such as uncertain nonlinear dynamic effects. The other is the real-time implementation of obstacle avoidance and autonomous travelling control based on multiple sensors. The reliability of this study is illustrated by computer simulations and experiments on autonomous travelling control and obstacle avoidance.

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal / v.45 no.5 / pp.847-861 / 2023
  • Dynamic object detection is essential for ensuring safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors estimate distance with excellent accuracy, they lack texture and color information and have a lower resolution than conventional cameras. In addition, performance degradation occurs when a LiDAR-based object detection model is applied to different driving environments, or when sensors from different LiDAR manufacturers are used, owing to the domain-gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.

Distance estimation from ground for small VTOL UAV landing (소형 VTOL UAV 이착륙을 위한 지면과의 거리 추정)

  • Yun, Byoung-Min;Kim, Sang-Won;Cho, Sun-Ho;Park, Chong-Kug
    • Proceedings of the KIEE Conference / 2004.11c / pp.59-61 / 2004
  • For the automatic landing of a small VTOL UAV, it is necessary to calculate the distance between the UAV and the ground. The distance can generally be measured by an ultrasonic sensor, but the ultrasonic sensor exhibits errors that depend on the velocity of the sensor board. To compensate for these errors, we propose a sensor fusion method using a Kalman filter.
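A Kalman filter fusing an ultrasonic range with a velocity input can be sketched in one dimension as below. This is a minimal illustration of the technique, not the paper's filter: the noise values are assumptions, and the velocity is assumed to come from another on-board source (e.g. an IMU).

```python
class AltitudeKF:
    """Minimal 1-D Kalman filter: predict altitude from a velocity input,
    then correct with an ultrasonic range measurement.

    q (process noise) and r (measurement noise variance) are assumed values.
    """

    def __init__(self, z0=0.0, p0=1.0, q=0.01, r=0.09):
        self.z, self.p = z0, p0   # altitude estimate and its variance
        self.q, self.r = q, r

    def step(self, velocity, dt, measurement):
        # Predict: integrate the vertical velocity over the time step.
        self.z += velocity * dt
        self.p += self.q
        # Update: blend in the ultrasonic measurement via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.z += k * (measurement - self.z)
        self.p *= (1.0 - k)
        return self.z
```

Because the velocity-dependent bias of the ultrasonic sensor inflates r near touchdown, the gain k shrinks and the filter leans on the predicted (integrated-velocity) altitude instead, which is the compensation idea the abstract describes.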

Precise assembly task using sensor fusion technology (센서퓨젼 기술을 이용한 정밀조립작업)

  • 이종길;이범희
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1993.10a / pp.287-292 / 1993
  • We use three sensors, a vision sensor, a proximity sensor, and a force/torque sensor, fused by fuzzy logic in a peg-in-hole task. The vision and proximity sensors are typically used for gross motion control, and their information is used here to position the peg around the hole. The force/torque sensor is used for fine motion control, and its information is used to insert the peg into the hole precisely. Throughout the task, the information from all three sensors is fused by a fuzzy logic controller. Some simulation results are also presented for verification.
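The gross-to-fine hand-off this abstract describes can be sketched as a fuzzy-weighted blend of the sensors' motion commands. This is an illustrative sketch only: the triangular memberships, the 50 mm saturation distance, and the equal vision/proximity weighting are assumptions, not the paper's rule base.

```python
def fuzzy_fusion(distance_to_hole, vision_cmd, proximity_cmd, force_cmd):
    """Blend sensor-specific motion commands with fuzzy weights.

    Far from the hole, the vision/proximity commands dominate (gross
    motion); near contact, the force/torque command takes over (fine
    motion). Memberships and the 50 mm scale are assumed values.
    """
    def far(d):   # membership of "far from hole", saturating at 50 mm
        return min(1.0, max(0.0, d / 50.0))

    w_far = far(distance_to_hole)
    w_near = 1.0 - w_far              # membership of "near / in contact"
    gross = 0.5 * (vision_cmd + proximity_cmd)   # gross-motion command
    # Weighted (defuzzified) blend of gross and fine commands
    return (w_far * gross + w_near * force_cmd) / (w_far + w_near)
```

At 50 mm the output is purely the gross-motion command; at contact it is purely the force/torque command; in between the two are blended smoothly, avoiding a hard switch between control modes.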

Real Time Motion Processing for Autonomous Navigation

  • Kolodko, J.;Vlacic, L.
    • International Journal of Control, Automation, and Systems / v.1 no.1 / pp.156-161 / 2003
  • An overview of our approach to autonomous navigation is presented showing how motion information can be integrated into existing navigation schemes. Particular attention is given to our short range motion estimation scheme which utilises a number of unique assumptions regarding the nature of the visual environment allowing a direct fusion of visual and range information. Graduated non-convexity is used to solve the resulting non-convex minimisation problem. Experimental results show the advantages of our fusion technique.
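The graduated non-convexity step mentioned above can be illustrated with a tiny robust-fitting example. This is a sketch of the general GNC idea only, not the paper's formulation: the Geman-McClure cost, the annealing schedule, and the step size are all assumptions chosen for illustration.

```python
def gnc_fit(data, x0=0.0, sigmas=(8.0, 4.0, 2.0, 1.0), iters=300, lr=0.1):
    """Fit a scalar location to data containing outliers via graduated
    non-convexity: start from a heavily smoothed, near-convex version of a
    robust cost and sharpen it in stages, warm-starting each stage from
    the previous solution.

    Uses the Geman-McClure cost rho(r) = r^2 / (r^2 + sigma^2); the
    schedule, iteration count, and step size are assumed values.
    """
    x = x0
    for sigma in sigmas:                       # coarse-to-fine schedule
        for _ in range(iters):                 # plain gradient descent
            grad = sum(2.0 * (x - d) * sigma**2
                       / ((x - d)**2 + sigma**2)**2 for d in data)
            x -= lr * grad
    return x
```

With data such as [1.0, 1.1, 0.9, 1.05, 10.0], the estimate settles near the inlier cluster, whereas a plain least-squares mean (2.81) is dragged toward the outlier; the early, smoothed stages keep the descent from getting stuck in a poor local minimum of the sharp cost.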

The Fusion of LiDAR Data and High-Resolution Imagery for Precise Monitoring in Urban Areas (도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합)

  • 강준묵;강영미;이형석
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.383-388 / 2004
  • The fusion of heterogeneous sensors combines data obtained by independent technologies and is an important technology for constructing 3D spatial information. In particular, information can be realized in various ways through the fusion of LiDAR with mobile scanning systems, digital maps, and high-resolution imagery. This study generates a combined DEM and digital ortho image by fusing LiDAR data with high-resolution imagery, and uses them to precisely monitor topography, buildings, trees, and other features in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.

Development of a system architecture for an advanced autonomous underwater vehicle, ORCA

  • Choi, Hyun-Taek;Lee, Pan-Mook
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1791-1796 / 2004
  • Recently, great improvements have been made in developing autonomous underwater vehicles (AUVs) using state-of-the-art technologies for various kinds of sophisticated underwater missions. To meet the increasing demands posed on AUVs, a powerful on-board computer system and an accurate sensor system with a well-organized control system architecture are needed. In this paper, a new control system architecture is proposed for the AUV ORCA (Oceanic Reinforced Cruising Agent), which is currently being developed by the Korea Research Institute of Ships and Ocean Engineering (KRISO). The proposed architecture is a hybrid that combines a hierarchical architecture and a behavior-based control architecture, with an evaluator coordinating between them. This paper also proposes a sensor fusion structure based on the definition of four sensor categories, called grouping, and a five-step data processing procedure. The development of the AUV ORCA, involving the system architecture, vehicle layout, and hardware configuration of the on-board system, is described.

Mobile Robot Navigation using a Dynamic Multi-sensor Fusion

  • Kim, San-Ju;Jin, Tae-Seok;Lee, Oh-Keol;Lee, Jang-Myung
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.240-243 / 2003
  • This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as sonar and IR sensors, for map building and navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give a short review of existing techniques, since several recent thorough books and review papers exist on this topic. We first deal with the general principles of the navigation and guidance architecture, then with the detailed functions of environment map updating, obstacle detection, and motion assessment, together with first results from the simulation runs.
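The sonar/IR map-building step this abstract mentions is commonly realized as a log-odds occupancy-grid update, which can be sketched as below. This is an illustrative sketch of the general technique, not this paper's method; the cell size and log-odds increments are assumed values.

```python
import math

def update_occupancy(grid, pose, bearing, measured_range,
                     cell=0.25, l_occ=0.9, l_free=-0.4):
    """Log-odds occupancy-grid update along one sonar/IR beam.

    Cells along the beam up to the measured range are marked freer; the
    cell at the measured range is marked more occupied. `grid` is a dict
    mapping (ix, iy) cell indices to log-odds; parameters are assumptions.
    """
    x, y = pose
    steps = int(measured_range / cell)
    for s in range(steps + 1):
        r = s * cell
        ix = int((x + r * math.cos(bearing)) / cell)
        iy = int((y + r * math.sin(bearing)) / cell)
        delta = l_occ if s == steps else l_free
        grid[(ix, iy)] = grid.get((ix, iy), 0.0) + delta
    return grid
```

Fusing sonar and IR then amounts to applying the same update with per-sensor increments: repeated agreeing beams drive a cell's log-odds strongly positive (obstacle) or negative (free space), which the navigation layer can threshold for obstacle detection.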
