• Title/Abstract/Keyword: Sensor fusion

Future trends in multisensor integration and fusion

  • Luo, Ren C.; Kay, Michael G.; Lee, W. Gary
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / Proceedings of the 1992 Korean Automatic Control Conference (International Session); KOEX, Seoul; 19-21 Oct. 1992 / pp.22-28 / 1992
  • The need for intelligent systems that can operate in unstructured, dynamic environments has created a growing demand for multiple, distributed sensors. While most research in multisensor fusion has revolved around applications in object recognition, including military applications for automatic target recognition, developments in microsensor technology are encouraging more research into affordable, highly redundant sensor networks. Three trends described at length are the increasing use of microsensors, the techniques used to handle partial or uncertain data, and the application of neural network techniques to sensor fusion.

퍼지추론기반 센서융합 이동로봇의 장애물 회피 주행기법 (Fuzzy Inference Based Collision Free Navigation of a Mobile Robot using Sensor Fusion)

  • 진태석
    • Journal of the Korean Society of Industry Convergence / Vol. 21, No. 2 / pp.95-101 / 2018
  • This paper presents collision-free mobile robot navigation in unknown environments, based on a fuzzy inference fusion model using multiple ultrasonic sensors. Six ultrasonic sensors are used for the collision avoidance approach, while a CCD camera sensor is used for the trajectory-following approach. The fuzzy system is composed of inputs from the six distance sensors and the camera, two outputs (the left and right velocities of the mobile robot's wheels), and three cost functions for the robot's movement: direction, obstacle avoidance, and rotation. To evaluate the proposed algorithm, we performed real-world experiments with a mobile robot equipped with ultrasonic sensors. The results show that the proposed algorithm can identify obstacles in unknown environments and guide the robot safely to the goal location.
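The rule base itself is not given in the abstract, but the sensor-to-wheel mapping it describes can be sketched with a toy Mamdani-style rule: an obstacle judged "near" on one side slows the opposite wheel so the robot steers away. All membership bounds, gains, and function names below are illustrative assumptions, not the paper's actual controller.

```python
def mu_near(d, lo=0.2, hi=1.0):
    """Membership of distance d (meters) in the 'near' fuzzy set: 1 at lo, 0 at hi."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def fuzzy_drive(left_dists, right_dists, v_max=0.5):
    """Map sonar distances on each side to (v_left, v_right) wheel speeds."""
    near_l = max(mu_near(d) for d in left_dists)   # rule aggregation: max over sensors
    near_r = max(mu_near(d) for d in right_dists)
    # Defuzzified outputs: a near obstacle on one side slows the opposite
    # wheel, turning the robot away from it.
    v_left = v_max * (1.0 - near_r)
    v_right = v_max * (1.0 - near_l)
    return v_left, v_right
```

With an obstacle 0.3 m away on the left and clear space on the right, `fuzzy_drive([1.5, 2.0, 0.3], [2.0, 2.0, 2.0])` slows the right wheel, steering the robot to the right, away from the obstacle.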

관성/고도 센서 융합을 위한 기계학습 기반 필터 파라미터 추정 (Machine Learning-Based Filter Parameter Estimation for Inertial/Altitude Sensor Fusion)

  • Hyeon-su Hwang; Hyo-jung Kim; Hak-tae Lee; Jong-han Kim
    • Journal of Advanced Navigation Technology / Vol. 27, No. 6 / pp.884-887 / 2023
  • Recently, research has been actively conducted to overcome the limitations of expensive single sensors and to reduce cost by fusing multiple low-cost sensors. This paper estimates state variables with asynchronous Kalman filters constructed using CVXPY, and uses cvxpylayers to learn the filter parameters of the low-cost sensor fusion by comparing the state variables estimated by CVXPY against ground-truth data.

Simultaneous Localization and Mobile Robot Navigation using a Sensor Network

  • Jin Tae-Seok; Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 2 / pp.161-166 / 2006
  • Localization of a mobile agent within a sensor network is a fundamental requirement for many applications using networked navigation systems such as sonar sensing or visual sensing. To fully utilize the strengths of both sonar and visual sensing, this paper describes a networked sensor-based navigation method for an autonomous mobile robot in an indoor environment that can navigate while avoiding obstacles. In this method, the robot's self-localization is done with a model-based vision system using networked sensors, and nonstop navigation is realized by a Kalman filter-based STSF (Space and Time Sensor Fusion) method. Stationary and moving obstacles are avoided using networked sensor data from a CCD camera and a sonar ring. We report on experiments in a hallway using the Pioneer-DX robot. Since the localization has inevitable uncertainties in the features and in the robot position estimate, a Kalman filter scheme is used for estimating the mobile robot's localization. Extensive experiments with a robot and a sensor network confirm the validity of the approach.

순차적 칼만 필터를 적용한 다중센서 위치추정 알고리즘 실험적 검증 (Experimental Verification of Multi-Sensor Geolocation Algorithm using Sequential Kalman Filter)

  • 이성민; 김영주; 방효충
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 1 / pp.7-13 / 2015
  • Unmanned aerial vehicles (UAVs) are becoming popular not only for private uses such as aerial photography but also for military missions such as surveillance, reconnaissance, and supply. For a UAV to successfully achieve these kinds of missions, geolocation (localization) must be employed to track a target of interest or to fly by reference. In this research, we adopted a multi-sensor fusion (MSF) algorithm to increase the accuracy of geolocation and verified the algorithm using two multicopter UAVs: one equipped with an optical camera, and another equipped with an optical camera and a laser range finder. Throughout the experiment, we obtained measurements of a fixed ground target and estimated the target position by a series of coordinate transformations and a sequential Kalman filter. The results showed that MSF performs better at estimating the target location than a single sensor. Moreover, the experimental results imply that the multi-sensor geolocation algorithm allows further improvements in localization accuracy and is feasible for more complicated applications such as moving-target tracking and multiple-target tracking.
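The sequential update described above (feeding each sensor's measurement into the filter one at a time rather than stacking them into one batch) can be sketched generically. This is a plain linear Kalman measurement update under the assumption of mutually independent measurement noises; it is not the paper's coordinate-transformation pipeline, and the function name is illustrative.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Fuse several measurements of the same state one at a time.

    Each measurement is a tuple (z, H, R). The posterior of one update
    becomes the prior of the next, which is equivalent to a single batch
    update when the measurement noises are independent.
    """
    for z, H, R in measurements:
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)           # state update with this sensor
        P = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x, P
```

For a scalar state with prior mean 0 and variance 4, fusing two independent measurements of 2.0 with unit variance yields the same posterior mean (16/9) as a single batch update over both sensors.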

센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식 (A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System)

  • 조형기; 조해민; 이성원; 김은태
    • The Journal of Korea Robotics Society / Vol. 14, No. 2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information. We generate a new input that combines RGB and range information. After the training step, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.

Local Minimum Free Motion Planning for Mobile Robots within Dynamic Environments

  • Choi, Jong-Suk; Kim, Mun-Sang; Lee, Chong-Won
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / Proceedings of ICCAS 2003 / pp.1921-1926 / 2003
  • We build a local-minimum-free motion planner for mobile robots in dynamic environments using simple sensor fusion, assuming that there are unknown obstacles that can be detected only partially at a time by proximity sensors and that can be removed or moved slowly (dynamic environments). A potential field is used as the basic platform for motion planning. To resolve the local minimum problem, the partial information about the obstacles must be memorized and integrated effectively. Sets of linked line segments (SLLS) are proposed as the integration method. The robot's target point is then replaced by a virtual target that accounts for the integrated sensing information. As the main proximity sensors, we use a laser slit emitter and a simple web camera, since this system gives more continuous data. We also use ultrasonic sensors as auxiliary sensors for the simple sensor fusion, considering their advantage of giving exact information about the presence of any obstacle within a certain range. With this sensor fusion, dynamic environments can be handled easily. The performance of our algorithm is validated via simulations and experiments.
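The attractive/repulsive field that serves as the planner's base layer can be sketched as a single gradient step. The gains and names below are illustrative assumptions, and the paper's actual contribution (SLLS memory and the virtual target that replaces the goal when the net force vanishes in a local minimum) is not reproduced here.

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    """One step along the negative gradient of an attractive/repulsive potential."""
    fx = k_att * (goal[0] - pos[0])  # attractive force pulls toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:  # repulsion acts only inside the influence radius d0
            f = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += f * dx / d
            fy += f * dy / d
    n = math.hypot(fx, fy) or 1.0  # normalize so the step length stays fixed
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)
```

With no obstacles the robot moves straight toward the goal; an obstacle directly on the path can overpower the attraction, which is exactly the local-minimum behavior the paper's virtual target is designed to escape.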

영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발 (Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition)

  • 송희준; 이선구; 강태구; 김동원; 서삼준; 박귀태
    • KIEE Conference Proceedings / Proceedings of the 2006 KIEE Symposium, Information and Control Section / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion method for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be used in real life. In this work, systems for environment recognition and teleoperation have been developed for task assignment and execution by the biped robot, as well as a human-robot interaction (HRI) system. To carry out specific tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

다중센서융합 기반의 심해무인잠수정 정밀수중항법 구현 (Implementation of Deep-sea UUV Precise Underwater Navigation based on Multiple Sensor Fusion)

  • 김기훈; 최현택; 이종무; 김시문; 이판묵; 조성권
    • Journal of Ocean Engineering and Technology / Vol. 24, No. 3 / pp.46-51 / 2010
  • This paper describes the implementation of a precise underwater navigation solution using a multi-sensor fusion technique based on USBL, DVL, and IMU measurements. To implement this solution, three strategies are chosen. The first involves heading alignment angle identification to enhance the performance of a standalone dead-reckoning algorithm. In the second, the absolute position is found quickly to prevent the accumulation of integration error. The third is the introduction of an effective outlier rejection algorithm. The performance of the developed algorithm was verified with experimental data acquired by the deep-sea ROV Hemire in the East Sea during a survey of a methane gas seepage area at a depth of 1,500 m.
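The abstract does not spell out the outlier rejection it introduces for the acoustic position fixes. A common, simple choice for such gross-error screening is a median-absolute-deviation gate, sketched below as an illustrative stand-in; the function name and the threshold `k` are assumptions, not the paper's method.

```python
import statistics

def mad_gate(measurements, k=3.0):
    """Reject gross outliers with a median-absolute-deviation (MAD) gate.

    Values farther than k * 1.4826 * MAD from the median are discarded
    before fusion; 1.4826 scales MAD to a standard deviation for
    normally distributed noise.
    """
    med = statistics.median(measurements)
    mad = statistics.median(abs(m - med) for m in measurements)
    sigma = 1.4826 * mad or float("inf")  # MAD == 0: keep everything
    return [m for m in measurements if abs(m - med) <= k * sigma]
```

Unlike a mean/standard-deviation gate, the median-based gate is not itself dragged toward the outlier, so a single wild fix among a handful of good ones is reliably removed.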

센서 융합 기반 정밀 측위를 위한 노면 표시 검출 (Road Surface Marking Detection for Sensor Fusion-based Positioning System)

  • 김동석; 정호기
    • Transactions of the Korean Society of Automotive Engineers / Vol. 22, No. 7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), an INS (Inertial Navigation System), an EDM (Extended Digital Map), and a vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. Lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM; RSM detection generates candidates in those regions and classifies their types. The proposed system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection varies, and the detection methods change according to an FSM (Finite State Machine) describing the driving situation. A single template matching step extracts features for both lane marking detection and RSM detection, and it is implemented efficiently with a horizontal integral image. Further, multi-step verification is performed to minimize false detections.
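The horizontal integral image that makes the template matching above cheap can be sketched in a few lines: each row is cumulatively summed once, after which any horizontal strip sum costs two lookups. The function names are illustrative, not the paper's API.

```python
def horizontal_integral(image):
    """Row-wise integral image: ii[y][x] = sum of image[y][0..x]."""
    ii = []
    for row in image:
        acc, out = 0, []
        for v in row:
            acc += v
            out.append(acc)
        ii.append(out)
    return ii

def row_sum(ii, y, x0, x1):
    """Sum of pixels image[y][x0..x1] in O(1) via the integral row."""
    return ii[y][x1] - (ii[y][x0 - 1] if x0 > 0 else 0)
```

Because the templates here are matched along horizontal scan lines, a row-wise integral is enough; a full 2D integral image (cumulative in both axes) would also allow O(1) rectangle sums at slightly higher preprocessing cost.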