• Title / Abstract / Keywords: sensor data fusion

Search results: 381 found (processing time 0.03 s)

Generalized IHS-Based Satellite Imagery Fusion Using Spectral Response Functions

  • Kim, Yong-Hyun;Eo, Yang-Dam;Kim, Youn-Soo;Kim, Yong-Il
    • ETRI Journal / Vol. 33, No. 4 / pp.497-505 / 2011
  • Image fusion is a technique that integrates the spatial detail of a high-resolution panchromatic (HRP) image with the spectral information of low-resolution multispectral (LRM) images to produce high-resolution multispectral images. The key requirement in image fusion is to enhance the spatial detail of the HRP image while simultaneously preserving the spectral information of the LRM images. This implies that the physical characteristics of the satellite sensor should be considered in the fusion process. Also, to fuse massive volumes of satellite imagery, the fusion method should have low computational cost. In this paper, we propose a fast and efficient satellite image fusion method. The proposed method uses the spectral response functions of the satellite sensor and thus rationally reflects the physical characteristics of the sensor in the fused image. As a result, the proposed method provides high-quality fused images in terms of both spectral and spatial evaluations. Experimental results on IKONOS images indicate that the proposed method outperforms the intensity-hue-saturation and wavelet-based methods.
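
The substitution step of generalized IHS fusion can be sketched in a few lines. Here the per-band weights stand in for integrals of the sensor's spectral response functions, and all names are illustrative, not taken from the paper:

```python
import numpy as np

def gihs_fuse(ms, pan, weights):
    """Generalized IHS fusion: inject panchromatic detail into each band.
    ms: (bands, H, W) multispectral image already resampled to pan size,
    pan: (H, W) panchromatic image,
    weights: per-band weights (in the paper, derived from the sensor's
    spectral response functions)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the weights
    intensity = np.tensordot(w, ms, axes=1)    # I = sum_k w_k * MS_k
    detail = pan - intensity                   # spatial detail to inject
    return ms + detail[None, :, :]             # F_k = MS_k + (PAN - I)
```

A useful property of this substitution is that the weighted sum of the fused bands reproduces the panchromatic image exactly, which is the sense in which the weights control spectral consistency.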

Short Range Target Tracking Based on Data Fusion Method Using Asynchronous Dissimilar Sensors

  • 이의혁
    • 전자공학회논문지 / Vol. 49, No. 9 / pp.335-343 / 2012
  • This paper describes an algorithm that tracks a target approaching at short range by fusing radar and infrared-image measurements. Conventional Kalman-filter-based track fusion methods assume synchronized radar and infrared data, which imposes severe limitations when applied to real systems that operate asynchronously. The key point of the proposed algorithm is that, when measurements arrive from the two unsynchronized sensors, the time difference between the observations is compensated using the radar's range information and the track state vector, after which the measurements are fused and tracking is performed. To evaluate the proposed algorithm, its performance is compared with that of conventional track-based fusion and measurement fusion methods.
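
The time-difference compensation described above can be sketched as a constant-velocity extrapolation of the earlier measurement to the later sensor's timestamp; the function and its arguments are illustrative assumptions, not the paper's formulation:

```python
def align_measurement(z_radar, t_radar, t_ir, velocity):
    """Extrapolate a radar measurement taken at t_radar to the infrared
    sensor's timestamp t_ir using the track's velocity estimate, so the
    two asynchronous observations can be fused at a common time."""
    dt = t_ir - t_radar
    return z_radar + velocity * dt   # constant-velocity extrapolation
```

After alignment, the two measurements refer to the same instant and can be combined with any standard measurement-fusion rule.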

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network

  • 김동엽;이재민;전세웅
    • 로봇학회논문지 / Vol. 13, No. 4 / pp.230-236 / 2018
  • 3D depth perception plays an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive because of its sensitivity and accuracy. We have investigated applying a SPAD chip in a fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD depth output using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye exist. This may seem a weak point of our system, but we exploit the gap through a shift of perspective: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. The upsampled CNN depth data and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
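
The resolution gap between the 64 x 32 SPAD map and the stereo imagery can be illustrated with a plain nearest-neighbour upsampler; this is a crude stand-in for the paper's learned CNN upsampler, shown only to make the geometry concrete:

```python
import numpy as np

def upsample_depth(depth, factor):
    """Nearest-neighbour upsampling of a low-resolution SPAD depth map.
    Each source pixel is replicated into a factor x factor block; the
    paper replaces this naive step with a CNN trained on labels from a
    higher-resolution depth sensor."""
    return np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)
```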

Study of Sensor Fusion for Attitude Control of a Quad-rotor

  • 유동현;임대영;설남오;박종호;정길도
    • 제어로봇시스템학회논문지 / Vol. 21, No. 5 / pp.453-458 / 2015
  • This paper presents a quad-rotor control algorithm based on sensor fusion. The controller combines a PD controller with a Kalman filter and a compensation algorithm to increase the stability and reliability of the quad-rotor attitude. We propose a Kalman-filter-based sensor fusion algorithm for quad-rotor attitude estimation. First, we study the platform configuration and operating principle of the quad-rotor. Second, the bias errors of the gyro, acceleration, and geomagnetic sensors are compensated. The measured values of each sensor are then fused via a Kalman filter. Finally, the performance of the proposed algorithm is evaluated on experimental attitude-estimation data. The proposed sensor fusion algorithm showed superior attitude estimation performance and proved that robust attitude estimation is possible even under disturbance.
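
Gyro/accelerometer fusion of the kind described above is often introduced with a complementary filter. The sketch below is a simplified stand-in for the paper's Kalman-filter fusion, and the blend factor is an assumption:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step for a single attitude angle: integrate the gyro
    rate (accurate short-term but drifting), then pull the estimate
    toward the accelerometer-derived angle (noisy but drift-free).
    alpha sets how much the gyro path is trusted."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

A Kalman filter, as used in the paper, effectively makes this blend factor time-varying and driven by the sensors' noise statistics instead of a fixed constant.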

Tracking of ARPA Radar Signals Based on UK-PDAF and Fusion with AIS Data

  • Chan Woo Han;Sung Wook Lee;Eun Seok Jin
    • 한국해양공학회지 / Vol. 37, No. 1 / pp.38-48 / 2023
  • To maintain the existing systems of ships while introducing autonomous operation technology, it is necessary to improve situational awareness through sensor fusion of the automatic identification system (AIS) and the automatic radar plotting aid (ARPA), both of which are already installed on board. This study proposes an algorithm for determining in real time whether AIS and ARPA signals originate from the same ship. To minimize errors caused by the differing time series and abnormal behavior of the heterogeneous signals, a tracking method combining the unscented Kalman filter and the probabilistic data association filter is applied to the ARPA radar signals, and a position prediction method is applied to the AIS signals. In particular, the proposed algorithm decides whether the signals belong to the same vessel by comparing motion-related components of the heterogeneous signal data after these methods are applied. Finally, a measurement test is conducted on a training ship, in which the proposed algorithm is validated using AIS and ARPA signal data received by the voyage data recorder for the same ship. The algorithm is further verified by comparing the test results with those obtained from raw data. We therefore recommend a sensor fusion algorithm that considers the characteristics of each sensor to improve the situational awareness accuracy of existing ship systems.
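
The same-vessel decision by comparing motion-related components can be sketched as gating on position, speed, and course differences; the gate values and field names below are assumptions for illustration, not values from the paper:

```python
import math

def same_vessel(arpa, ais, pos_gate=100.0, spd_gate=1.0, crs_gate=10.0):
    """Decide whether an ARPA track and an AIS report (both already
    propagated to a common time) refer to the same vessel by gating on
    position (m), speed-over-ground (kn), and course-over-ground (deg)
    differences."""
    dpos = math.hypot(arpa["x"] - ais["x"], arpa["y"] - ais["y"])
    dspd = abs(arpa["sog"] - ais["sog"])
    # wrap the course difference into [-180, 180) before comparing
    dcrs = abs((arpa["cog"] - ais["cog"] + 180.0) % 360.0 - 180.0)
    return dpos < pos_gate and dspd < spd_gate and dcrs < crs_gate
```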

The Posture Estimation of Mobile Robots Using Sensor Data Fusion Algorithm

  • 이상룡;배준영
    • 대한기계학회논문집 / Vol. 16, No. 11 / pp.2021-2032 / 1992
  • In this study, a multi-sensor system combining two encoders, which measure the rotation of the mobile robot's drive motors, with a gyro sensor, which measures the robot's angular velocity, was developed together with its signal-processing circuitry and algorithms to accurately estimate the posture of the robot while in motion, and performance tests were carried out to model the measurement equation of the gyro sensor. Probability theory was then applied to the derived measurement equations to develop a sensor data fusion algorithm that efficiently fuses the output signals of the multi-sensor system, with the aim of minimizing the effect of the measurement errors inherent in the sensors used. To verify the validity of the proposed fusion algorithm, driving experiments were performed and the actual posture of the mobile robot was compared with the results of the fusion algorithm.
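
Fusing encoder and gyro turn-rate information, as outlined above, can be sketched with differential-drive kinematics and a simple weighted blend; the paper's probabilistic fusion is more elaborate, and the weights here are assumptions:

```python
def fuse_heading_rate(v_left, v_right, wheel_base, gyro_rate,
                      w_enc=0.5, w_gyro=0.5):
    """Fuse the turn rate implied by the two wheel encoders with the
    gyro's measured turn rate. With properly modeled sensor noise the
    weights would come from the error statistics (inverse variances)
    rather than fixed constants."""
    enc_rate = (v_right - v_left) / wheel_base  # differential-drive kinematics
    return w_enc * enc_rate + w_gyro * gyro_rate
```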

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences / Vol. 11, No. 1 / pp.31-40 / 2010
  • For weapon cueing and head-mounted displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve vision-processing performance, we separate structure estimation from motion estimation: the structure estimation tracks features that belong to the helmet model structure in the scene, while the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested using synthetic and real data, and the results show that the sensor fusion is successful.
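
The EKF fusion named above can be sketched as one predict/update cycle, with the inertial model driving the prediction and the vision measurement driving the update; here F and H are assumed to be already-linearized Jacobians, so the sketch reduces to the standard Kalman recursion:

```python
import numpy as np

def ekf_step(x, P, F, Q, z, H, R):
    """One predict/update cycle: propagate the state with the (inertial)
    motion model, then correct it with the (vision) measurement."""
    x = F @ x                        # predict state
    P = F @ P @ F.T + Q              # predict covariance
    y = z - H @ x                    # measurement innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y                    # corrected state
    P = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x, P
```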

Obstacle Avoidance of Mobile Robot Based on Behavior Hierarchy by Fuzzy Logic

  • Jin, Tae-Seok
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 12, No. 3 / pp.245-249 / 2012
  • In this paper, we propose a navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments using an ultrasonic sensor. Instead of a "sensor fusion" method, which generates the trajectory of the robot from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the sensory data of the ultrasonic sensors and a vision sensor are fused in the identification process.
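
Command fusion of goal-approach and obstacle-avoidance behaviors can be sketched with a single triangular membership function on obstacle distance; the thresholds and command values are assumptions, and the paper's fuzzy rule base is richer than this:

```python
def command_fusion(goal_cmd, avoid_cmd, obstacle_dist,
                   d_near=0.5, d_far=2.0):
    """Blend goal-seeking and obstacle-avoidance steering commands.
    The membership mu of the 'obstacle is near' fuzzy set rises
    linearly from 0 at d_far to 1 at d_near, and the final command is
    the membership-weighted mix of the two behaviors."""
    if obstacle_dist <= d_near:
        mu = 1.0                                    # fully avoidance
    elif obstacle_dist >= d_far:
        mu = 0.0                                    # fully goal-seeking
    else:
        mu = (d_far - obstacle_dist) / (d_far - d_near)
    return mu * avoid_cmd + (1.0 - mu) * goal_cmd
```

This is the essential difference from sensor fusion: each behavior computes its own command from raw sensing, and only the commands are blended.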

Multi Sensor Data Fusion for Improving Performance and Reliability of Fully Automated Multipass Welding

  • Beattie, R.J.
    • 대한용접접합학회 학술대회논문집 / Proceedings of the International Welding/Joining Conference-Korea 2002 / pp.336-341 / 2002
  • Recent developments in sensor hardware and advanced software have made it feasible to consider automating some of the most difficult welding operations. This paper describes techniques used to successfully automate multipass submerged arc welding operations of the kind typically used in pressure vessel manufacture, shipbuilding, production of offshore structures, and pipe mills.


An Efficient Local Map Building Scheme based on Data Fusion via V2V Communications

  • Yoo, Seung-Ho;Choi, Yoon-Ho;Seo, Seung-Woo
    • IEIE Transactions on Smart Processing and Computing / Vol. 2, No. 2 / pp.45-56 / 2013
  • The precise identification of vehicle positions, known as the vehicle localization problem, is an important requirement for building intelligent vehicular ad-hoc networks (VANETs). Two categories of solutions have been proposed for this problem: stand-alone and data fusion approaches. Whereas stand-alone approaches use a single information source, such as the global positioning system (GPS) or sensor-based navigation systems with differential corrections, data fusion approaches analyze the position information of several vehicles from GPS, sensor-based navigation systems, etc., and therefore achieve higher accuracy. Using the position information of a set of vehicles collected in a preprocessing stage, data fusion is applied to estimate precise vehicle locations in the local map building stage. This paper proposes an efficient local map building scheme that increases the accuracy of the estimated vehicle positions via V2V communications. Even with a low proportion of communication-equipped vehicles on the road, the proposed local map building scheme estimated vehicle positions with high accuracy. Experimental results based on the parameters of practical vehicular environments show that the accuracy of the proposed localization system approaches the single-lane level.
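
Data-fusion localization over V2V links can be sketched as inverse-variance weighting of the position estimates that different vehicles report for the same target vehicle; the data layout below is an assumption for illustration:

```python
def fuse_positions(estimates):
    """Inverse-variance weighted fusion of position estimates of one
    vehicle gathered over V2V links. Each entry is ((x, y), var), where
    var reflects the reporting vehicle's GPS/navigation accuracy;
    more accurate reports (smaller var) get proportionally more weight."""
    wx = wy = wsum = 0.0
    for (x, y), var in estimates:
        w = 1.0 / var
        wx += w * x
        wy += w * y
        wsum += w
    return wx / wsum, wy / wsum
```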
