• Title/Abstract/Keyword: Sensor fusion


Multiple Sensor Fusion Algorithm for the Altitude Estimation of the Deep-Sea UUV HEMIRE

  • 김덕진;김기훈;이판묵;조성권;박연식
    • 한국정보통신학회논문지 / Vol. 12, No. 7 / pp. 1202-1208 / 2008
  • This paper introduces the multi-sensor fusion technique used in the navigation algorithm of a deep-sea unmanned underwater vehicle system consisting of the tethered deep-sea ROV HEMIRE and its launcher 해누비 (Henuvi). The performance of an underwater positioning system is determined by that of its underwater acoustic sensors, such as the ultra-short baseline (USBL), long baseline (LBL), and altimeter; because underwater acoustic signals contain various kinds of noise, special care is required. This paper proposes a practical multi-sensor fusion algorithm based on a moving observation window. Applying the algorithm to measurements obtained during HEMIRE's sea trials in the East Sea showed that it performs well.
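The moving-observation-window idea described in this abstract can be sketched as a sliding-window outlier gate: a new acoustic altitude sample is accepted only if it lies close to the median of recently accepted samples. This is a minimal editorial illustration, not the authors' algorithm; the function name, window size, threshold, and data are all invented.

```python
from statistics import median

def fuse_with_moving_window(measurements, window=5, max_dev=2.0):
    """Reject acoustic outliers using a moving observation window.

    A sample is accepted only if it lies within `max_dev` of the
    median of the last `window` accepted samples; rejected samples
    are replaced by that median (a simple hold strategy).
    """
    accepted = []
    fused = []
    for z in measurements:
        if len(accepted) < window:      # warm-up: accept everything
            accepted.append(z)
            fused.append(z)
            continue
        ref = median(accepted[-window:])
        if abs(z - ref) <= max_dev:
            accepted.append(z)
            fused.append(z)
        else:
            fused.append(ref)           # discard the noisy spike
    return fused

# Altitude track (metres) with two acoustic spikes at 25.0 and 0.0
track = [10.0, 10.1, 9.9, 10.0, 10.2, 25.0, 10.1, 0.0, 10.3]
print(fuse_with_moving_window(track))
```

The spikes are replaced by the local median while legitimate samples pass through unchanged.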

Parking Space Detection Based on Camera and LIDAR Sensor Fusion

  • 박규진;임규범;김민성;박재흥
    • 로봇학회논문지 / Vol. 14, No. 3 / pp. 170-178 / 2019
  • This paper proposes a parking space detection method for autonomous parking using Around View Monitor (AVM) images and Light Detection and Ranging (LIDAR) sensor fusion. The method consists of removing obstacles other than the parking lines, detecting the parking lines, and applying template matching to obtain the location of parking spaces in the lot. To remove the obstacles, we correct and fuse the LIDAR information, accounting for the distortion in the AVM image. Once the obstacles are removed, a line filter that reflects the thickness of the parking line and an improved Radon transform are applied to detect the parking lines clearly. The parking space location is detected by template matching with a modified parking space template, and the detected parking lines are used to return the location information of the parking space. Finally, we propose a novel parking space detection system that returns the relative distance and relative angle from the current vehicle to the parking space.
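The line-filter step can be illustrated as a 1D matched filter: a bright stripe of the expected parking-line thickness, flanked by darker pavement, produces a peak response at the stripe's position. A minimal sketch over a single intensity profile, with invented names and values; it is not the paper's actual filter.

```python
def line_filter(profile, thickness):
    """Respond strongly where a bright stripe of `thickness` pixels
    sits between darker neighborhoods (a simple matched filter)."""
    n = len(profile)
    responses = [0.0] * n
    for i in range(thickness, n - 2 * thickness):
        stripe = sum(profile[i:i + thickness])          # candidate line
        left = sum(profile[i - thickness:i])            # left pavement
        right = sum(profile[i + thickness:i + 2 * thickness])  # right pavement
        responses[i] = stripe - 0.5 * (left + right)
    return responses

# Dark pavement (10) with a 3-pixel-wide bright line (200) starting at index 6
profile = [10] * 6 + [200] * 3 + [10] * 6
resp = line_filter(profile, thickness=3)
print(resp.index(max(resp)))  # peak at the line's left edge
```

A 2D version would run this filter along rows perpendicular to the expected line direction before the Radon-transform stage.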

A Study on Orientation and Position Control of a Mobile Robot Based on Multi-Sensor Fusion for Implementation of Smart FA

  • 동근한;김희진;배호영;김상현;백영태;한성현
    • 한국산업융합학회 논문집 / Vol. 22, No. 2 / pp. 209-218 / 2019
  • This study proposes a new approach to orientation and position control of a mobile robot system, based on obstacle avoidance through multi-sensor fusion and autonomous traveling control, for the implementation of smart factory automation. The focus is on controlling the mobile robot using a multi-sensor module for autonomous traveling and obstacle avoidance; the module consists of sonar sensors, PSD sensors, color recognition sensors, and position recognition sensors. Two points in particular are proposed for the real-time implementation of autonomous traveling control in constrained manufacturing environments. One is the development of a traveling-trajectory control algorithm that is accurate and fast under constraints such as uncertain nonlinear dynamic effects. The other is the real-time implementation of obstacle avoidance and autonomous traveling control based on the multiple sensors. The reliability of this study is illustrated by computer simulations and experiments on autonomous traveling control and obstacle avoidance.

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal / Vol. 45, No. 5 / pp. 847-861 / 2023
  • Dynamic object detection is essential for ensuring safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors estimate distance very accurately, they lack texture or color information and have a lower resolution than conventional cameras. In addition, performance degradation occurs when a LiDAR-based object detection model is applied to different driving environments or to sensors from different LiDAR manufacturers, owing to the domain gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.

Distance Estimation from the Ground for Small VTOL UAV Landing

  • 윤병민;김상원;조선호;박종국
    • 대한전기학회 Conference Proceedings / 2004 Conference, Information and Control Section / pp. 59-61 / 2004
  • For automatic landing of a small VTOL UAV, it is necessary to estimate the distance between the UAV and the ground. The distance can generally be measured with an ultrasonic sensor, but the ultrasonic readings contain errors that depend on the velocity of the sensor board. To compensate for these errors, we propose a sensor fusion method using a Kalman filter.
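The abstract does not specify the filter design, but the idea of smoothing velocity-dependent ultrasonic errors can be illustrated with a generic 1D constant-velocity Kalman filter. All parameter values below are assumptions for illustration, not the paper's tuning.

```python
def kalman_1d(zs, dt=0.05, q=0.01, r=0.25):
    """Minimal 1D constant-velocity Kalman filter.

    State: [altitude, vertical velocity]; measurement: ultrasonic range.
    q = process noise added to the diagonal, r = measurement noise variance.
    """
    x = [zs[0], 0.0]                      # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial covariance
    out = []
    for z in zs:
        # Predict with the constant-velocity model x' = F x, F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt*(P[0][1] + P[1][0]) + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[1][0] + dt*P[1][1],
              P[1][1] + q]]
        # Update with the ultrasonic measurement z = altitude + noise
        s = P[0][0] + r                   # innovation variance
        k = [P[0][0] / s, P[1][0] / s]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k[0]*y, x[1] + k[1]*y]
        P = [[(1 - k[0])*P[0][0], (1 - k[0])*P[0][1]],
             [P[1][0] - k[1]*P[0][0], P[1][1] - k[1]*P[0][1]]]
        out.append(x[0])
    return out

noisy = [2.0, 2.1, 1.9, 2.05, 1.95, 2.0]  # noisy range readings (metres)
est = kalman_1d(noisy)
```

Because the velocity is part of the state, motion of the sensor board is tracked rather than mistaken for measurement noise.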


Precise Assembly Task Using Sensor Fusion Technology

  • 이종길;이범희
    • 제어로봇시스템학회 Conference Proceedings / 1993 Korean Automatic Control Conference (Domestic Session); Seoul National University, Seoul; 20-22 Oct. 1993 / pp. 287-292 / 1993
  • We use three sensors, a vision sensor, a proximity sensor, and a force/torque sensor, fused by fuzzy logic in a peg-in-hole task. The vision and proximity sensors are typically used for gross motion control; here their information is used to position the peg around the hole. The force/torque sensor is used for fine motion control; its information is used to insert the peg into the hole precisely. Throughout the task, the information from all three sensors is fused by a fuzzy logic controller. Simulation results are also presented for verification.
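One common way such a fuzzy fusion works is to blend the gross-motion command (vision/proximity) and the fine-motion command (force/torque) with membership-weighted averaging as the peg approaches the hole. The sketch below is an editorial illustration under that assumption; the membership function, breakpoints, and names are invented, not the authors' controller.

```python
def mu_far(d, lo=2.0, hi=10.0):
    """Membership of 'peg is far from the hole' (linear ramp, millimetres)."""
    return min(1.0, max(0.0, (d - lo) / (hi - lo)))

def fuse_commands(distance_mm, gross_cmd, fine_cmd):
    """Blend gross-motion (vision/proximity) and fine-motion
    (force/torque) commands with complementary fuzzy weights."""
    w_far = mu_far(distance_mm)
    w_near = 1.0 - w_far
    return w_far * gross_cmd + w_near * fine_cmd

# Far from the hole the vision command dominates;
# at the hole the force/torque command takes over.
print(fuse_commands(10.0, 1.0, 0.1))
print(fuse_commands(2.0, 1.0, 0.1))
```

In between the two breakpoints the controller transitions smoothly, which is the practical benefit of fuzzy blending over a hard mode switch.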


Real Time Motion Processing for Autonomous Navigation

  • Kolodko, J.;Vlacic, L.
    • International Journal of Control, Automation, and Systems / Vol. 1, No. 1 / pp. 156-161 / 2003
  • An overview of our approach to autonomous navigation is presented showing how motion information can be integrated into existing navigation schemes. Particular attention is given to our short range motion estimation scheme which utilises a number of unique assumptions regarding the nature of the visual environment allowing a direct fusion of visual and range information. Graduated non-convexity is used to solve the resulting non-convex minimisation problem. Experimental results show the advantages of our fusion technique.
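Graduated non-convexity (GNC) is a general strategy: start from a nearly convex surrogate of a robust, non-convex cost, solve it, then gradually tighten the surrogate toward the true cost while tracking the minimum. A toy illustration on a robust-mean problem follows; the Geman-McClure-style weights and the schedule are illustrative, not the authors' formulation for motion estimation.

```python
def gnc_robust_mean(data, sigmas=(100.0, 10.0, 1.0), iters=20):
    """GNC sketch: estimate a robust mean under a non-convex cost by
    solving a sequence of problems with a shrinking scale `sigma`."""
    x = sum(data) / len(data)       # least-squares start (convex problem)
    for sigma in sigmas:            # graduation schedule: coarse -> fine
        for _ in range(iters):
            # IRLS step with Geman-McClure-style weights:
            # large residuals are progressively down-weighted as sigma shrinks
            ws = [sigma**2 / (sigma**2 + (d - x)**2) for d in data]
            x = sum(w * d for w, d in zip(ws, data)) / sum(ws)
    return x

# Inlier measurements near 5.0 plus two gross outliers
data = [4.9, 5.0, 5.1, 5.05, 4.95, 50.0, -40.0]
print(round(gnc_robust_mean(data), 2))
```

Starting with a large sigma keeps the early problems nearly convex, so the estimate is not captured by the outliers before the robust cost is fully restored.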

The Fusion of LiDAR Data and High-Resolution Imagery for Precise Monitoring in Urban Areas

  • 강준묵;강영미;이형석
    • 한국측량학회 Conference Proceedings / 2004 Spring Conference / pp. 383-388 / 2004
  • Fusing heterogeneous sensors means fusing data obtained independently by each technology, and it is an important technique for building 3D spatial information. In particular, varied information can be produced by fusing LiDAR with mobile scanning systems and digital maps, or by fusing LiDAR data with high-resolution imagery. This study generates a combined DEM and a digital orthoimage by fusing LiDAR data with a high-resolution image, and uses them to precisely monitor topography, buildings, trees, and other features in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.


Development of a system architecture for an advanced autonomous underwater vehicle, ORCA

  • Choi, Hyun-Taek;Lee, Pan-Mook
    • 제어로봇시스템학회 Conference Proceedings / ICCAS 2004 / pp. 1791-1796 / 2004
  • Recently, great improvements have been made in developing autonomous underwater vehicles (AUVs) using state-of-the-art technologies for various kinds of sophisticated underwater missions. To meet the increasing demands posed on AUVs, a powerful on-board computer system and an accurate sensor system with a well-organized control system architecture are needed. In this paper, a new control system architecture is proposed for the AUV ORCA (Oceanic Reinforced Cruising Agent), which is currently being developed by the Korea Research Institute of Ships and Ocean Engineering (KRISO). The proposed architecture is a hybrid that combines a hierarchical architecture and a behavior-based control architecture, with an evaluator coordinating between them. This paper also proposes a sensor fusion structure based on the definition of four categories of sensors, called grouping, and a five-step data processing procedure. The development of the AUV ORCA, including the system architecture, vehicle layout, and hardware configuration of the on-board system, is described.


Mobile Robot Navigation using a Dynamic Multi-sensor Fusion

  • Kim, San-Ju;Jin, Tae-Seok;Lee, Oh-Keol;Lee, Jang-Myung
    • 한국지능시스템학회 Conference Proceedings / ISIS 2003 (한국퍼지및지능시스템학회) / pp. 240-243 / 2003
  • This study is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot that can transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as sonar and IR sensors, for map building and navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We describe the robot system architecture designed and implemented in this study and give a short review of existing techniques. We first deal with the general principles of the navigation and guidance architecture, and then with the detailed functions: recognizing and updating the environment, detecting obstacles, and assessing motion, together with first results from the simulation runs.
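Sonar/IR map building of the kind described here is commonly realized as a log-odds occupancy grid, where each scan nudges cells toward "occupied" or "free". The sketch below is a generic illustration of that technique; the parameters, cell indices, and scan data are invented, not the authors' implementation.

```python
import math

def update_grid(log_odds, hits, l_occ=0.85, l_free=-0.4):
    """Log-odds occupancy-grid update for one range scan.

    `hits` maps cell index -> True if the sensor returned a hit there,
    False if the beam passed through the cell (free space).
    """
    for cell, hit in hits.items():
        log_odds[cell] += l_occ if hit else l_free
    return log_odds

def occupancy(lo):
    """Convert a log-odds value back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-lo))

grid = [0.0] * 10                       # unknown everywhere (p = 0.5)
for _ in range(3):                      # three consistent sonar scans
    update_grid(grid, {2: False, 5: True})
print(occupancy(grid[5]) > 0.9, occupancy(grid[2]) < 0.3)
```

Because updates are additive in log-odds space, noisy individual sonar or IR readings average out over repeated scans, which is what makes the grid suitable for fusing multiple sensors.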
