• Title/Abstract/Keyword: Data Fusion Algorithm

Search results: 299

Landmark Detection Based on Sensor Fusion for Mobile Robot Navigation in a Varying Environment

  • Jin, Tae-Seok;Kim, Hyun-Sik;Kim, Jong-Wook
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 10, No. 4, pp. 281-286, 2010
  • We propose a space- and time-based sensor fusion method and a robust landmark-detection algorithm for mobile robot navigation. To fully utilize the information from the sensors, this paper first proposes a new sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement. Exploration of an unknown environment is an important task for the new generation of mobile robots, which may navigate by means of monitoring systems such as sonar or vision. The newly proposed STSF (Space and Time Sensor Fusion) scheme is applied to landmark recognition for mobile robot navigation in both structured and unstructured environments, and the experimental results demonstrate its landmark-recognition performance.
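The core of the STSF idea — re-expressing past readings in the robot's current frame before fusing them with fresh data — can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation; the function names and the fixed fusion weight are assumptions.

```python
import math

def transform_to_current(prev_range, prev_bearing, dx, dy, dtheta):
    """Re-express a past range/bearing reading in the robot's current frame,
    given the odometry displacement (dx, dy, dtheta) since the reading."""
    # Past reading as a point in the past robot frame
    px = prev_range * math.cos(prev_bearing)
    py = prev_range * math.sin(prev_bearing)
    # Shift by the robot's translation, then undo its heading change
    qx = math.cos(-dtheta) * (px - dx) - math.sin(-dtheta) * (py - dy)
    qy = math.sin(-dtheta) * (px - dx) + math.cos(-dtheta) * (py - dy)
    return math.hypot(qx, qy), math.atan2(qy, qx)

def fuse(current, transformed, w_current=0.7):
    """Weighted average of the current reading and the time-shifted one."""
    return w_current * current + (1.0 - w_current) * transformed
```

With a stationary robot the transformed reading must equal the original, which gives a quick sanity check on the frame algebra.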

Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of Information and Communication Convergence Engineering, Vol. 11, No. 1, pp. 24-29, 2013
  • In this paper, we propose a fuzzy inference model for a navigation algorithm that lets a mobile robot intelligently search for a goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from an ultrasonic sensor. Instead of the "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy combines fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the data from the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoidance, and decision-making behavior that arise during navigation.
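Command fusion of this kind can be sketched as blending the steering commands of two behaviors by a fuzzy obstacle membership. The sketch below is illustrative only; the membership breakpoints and function names are assumptions, not the paper's tuned rule base.

```python
def obstacle_weight(distance, d_safe=0.5, d_free=2.0):
    """Fuzzy membership: 1 near an obstacle, 0 in free space, linear between."""
    if distance <= d_safe:
        return 1.0
    if distance >= d_free:
        return 0.0
    return (d_free - distance) / (d_free - d_safe)

def fuse_commands(goal_steer, avoid_steer, obstacle_dist):
    """Blend the goal-approach and obstacle-avoidance steering commands
    by the fuzzy obstacle weight (command fusion, not trajectory fusion)."""
    w = obstacle_weight(obstacle_dist)
    return w * avoid_steer + (1.0 - w) * goal_steer
```

In free space the goal-approach command passes through unchanged; close to an obstacle the avoidance command dominates.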

Visual Control of Mobile Robots Using Multisensor Fusion System

  • Kim, Jung-Ha;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings, ICCAS 2001, pp. 91.4-91, 2001
  • In this paper, the development of a sensor fusion algorithm for visual control of a mobile robot is presented. The output data from the visual sensor include a time lag due to the image-processing computation, and the sampling rate of the visual sensor is considerably low, so it should be used together with other sensors to control fast motion. The main purpose of this paper is to develop a method that constitutes a sensor fusion system giving optimal state estimates. The proposed sensor fusion system combines the visual sensor and an inertial sensor using a modified Kalman filter. A kind of multi-rate Kalman filter which treats the slow sampling rate ...
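Multi-rate fusion of a fast inertial input with a slow visual measurement can be sketched with a scalar Kalman filter: the predict step runs every cycle, while the vision correction runs only when a low-rate measurement arrives. This is an illustrative sketch, not the paper's modified filter; the noise values `q` and `r` are arbitrary assumptions.

```python
def kalman_step(x, p, u=0.0, q=0.01, z=None, r=0.25):
    """One cycle of a scalar Kalman filter: predict with the inertial
    input u every step; correct with vision measurement z only when one
    is available (z=None models the slow visual sampling rate)."""
    x, p = x + u, p + q              # predict (inertial rate)
    if z is not None:                # slow-rate vision update
        k = p / (p + r)              # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
    return x, p
```

Between vision frames the uncertainty `p` grows; each vision update pulls the estimate toward the measurement and shrinks `p` again.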


TEXTURE ANALYSIS, IMAGE FUSION AND KOMPSAT-1

  • Kressler, F.P.;Kim, Y.S.;Steinnocher, K.T.
    • Korean Society of Remote Sensing Conference Proceedings, 2002 Proceedings of International Symposium on Remote Sensing, pp. 792-797, 2002
  • In the following paper, two algorithms suitable for the analysis of panchromatic data as provided by KOMPSAT-1 are presented. One is a texture analysis, which is used to create a settlement mask based on the variation of gray values. The other is a fusion algorithm that allows the combination of high-resolution panchromatic data with medium-resolution multispectral data. The procedure developed for this purpose uses the spatial information present in the high-resolution image to spatially enhance the low-resolution image while keeping the distortion of the multispectral information to a minimum. This makes it possible to use the fusion results in standard multispectral classification routines. The procedures presented here can be automated to a large extent, making them suitable for a standard processing routine for satellite data.
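One common way to realize such pan/multispectral fusion is a Brovey-style ratio substitution: each multispectral band is scaled by the ratio of the panchromatic intensity to the mean band intensity, so spatial detail comes from the pan image while band ratios (the spectral information) are preserved. The sketch below illustrates that family of methods, not the specific procedure developed in the paper; it assumes the multispectral cube has already been upsampled to the pan grid.

```python
import numpy as np

def pan_sharpen(ms, pan):
    """Brovey-style ratio fusion: ms is an HxWxB multispectral cube
    (already resampled to the pan grid), pan is the HxW panchromatic
    band. Each band is rescaled by pan / mean-band intensity."""
    intensity = ms.mean(axis=2)
    ratio = pan / np.maximum(intensity, 1e-6)   # avoid division by zero
    return ms * ratio[..., None]
```

Because every band is multiplied by the same per-pixel ratio, band-to-band proportions are unchanged, which is what keeps the multispectral distortion small.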


센서 데이터 융합을 이용한 이동 로보트의 자세 추정 (The Posture Estimation of Mobile Robots Using Sensor Data Fusion Algorithm)

  • 이상룡;배준영
    • Transactions of the Korean Society of Mechanical Engineers, Vol. 16, No. 11, pp. 2021-2032, 1992
  • In this study, a signal-processing circuit and algorithm were developed for a multi-sensor system that combines two encoders measuring the rotation of a mobile robot's drive motors with a gyro sensor measuring the robot's angular velocity, so that the posture of the robot can be estimated accurately while it is in motion; performance tests were carried out to model the measurement equation of the gyro sensor. A sensor data fusion algorithm was then developed that applies probability theory to the derived measurement equations to fuse the output signals of the multi-sensor system efficiently, minimizing the influence of the measurement errors inherent in the sensors used. To verify the validity of the proposed fusion algorithm, driving experiments were performed and the actual posture of the mobile robot was compared with the output of the fusion algorithm.
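The probability-based fusion of two noisy estimates of the same quantity (encoder-derived vs. gyro-derived angular rate) can be illustrated by the standard minimum-variance weighted average, which the paper's algorithm builds on. Function name and interface below are hypothetical.

```python
def fuse_heading_rate(w_enc, var_enc, w_gyro, var_gyro):
    """Minimum-variance fusion of two angular-rate estimates:
    each sensor is weighted inversely to its measurement variance,
    and the fused variance is smaller than either input variance."""
    k = var_gyro / (var_enc + var_gyro)
    rate = k * w_enc + (1.0 - k) * w_gyro
    var = (var_enc * var_gyro) / (var_enc + var_gyro)
    return rate, var
```

With equal variances the result is a plain average; as one sensor becomes much more reliable, the fused estimate converges to that sensor's reading.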

Analysis of the Increase of Matching Points for Accuracy Improvement in 3D Reconstruction Using Stereo CCTV Image Data

  • Moon, Kwang-il;Pyeon, MuWook;Eo, YangDam;Kim, JongHwa;Moon, Sujung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 35, No. 2, pp. 75-80, 2017
  • Recently, there has been growing interest in spatial data that combines information and communication technology with smart cities. High-precision LiDAR (Light Detection and Ranging) equipment is mainly used to collect three-dimensional spatial data, and the acquired data are also used to model geographic features and to manage plant construction and cultural heritage sites that require precision. LiDAR equipment can collect precise data but has limitations: it is expensive and takes a long time to collect data. On the other hand, in the field of computer vision, research is being conducted on methods of acquiring image data and performing 3D reconstruction from it without expensive equipment. Thus, precise 3D spatial data can be constructed efficiently by collecting and processing image data from the CCTVs installed as infrastructure in smart cities. However, this method can have an accuracy problem compared to the existing equipment. In this study, experiments were conducted, and the results analyzed, on increasing the number of extracted matching points by applying feature-based and area-based methods, in order to improve the precision of 3D spatial data built from image data acquired from stereo CCTVs. The SIFT algorithm and the PATCH algorithm were used to extract matching points. If precise 3D reconstruction is possible using image data from stereo CCTVs, it will be possible to collect 3D spatial data with low-cost equipment, and to collect and build data in real time, because image data can easily be acquired over the Web from smartphones and drones.
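Feature-based matching-point extraction of the kind SIFT performs can be illustrated by nearest-neighbour descriptor matching with Lowe's ratio test, which keeps a candidate only when its best match is clearly closer than the second best, suppressing ambiguous matching points. The sketch assumes descriptors are already computed (here plain NumPy rows); it is not the paper's pipeline.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2,
    keeping the pair only if it passes Lowe's ratio test (best distance
    clearly smaller than second-best). Returns (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        j, j2 = np.argsort(dists)[:2]               # best and second best
        if dists[j] < ratio * dists[j2]:
            matches.append((i, int(j)))
    return matches
```

Lowering `ratio` trades match count for reliability, which is exactly the lever one tunes when trying to increase usable matching points without adding outliers.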

MULTI-SENSOR DATA FUSION FOR FUTURE TELEMATICS APPLICATION

  • Kim, Seong-Baek;Lee, Seung-Yong;Choi, Ji-Hoon;Choi, Kyung-Ho;Jang, Byung-Tae
    • Journal of Astronomy and Space Sciences, Vol. 20, No. 4, pp. 359-364, 2003
  • In this paper, we present multi-sensor data fusion for a telematics application. Successful telematics can be realized through the integration of navigation and spatial information, and well-determined acquisition of the vehicle's position plays a vital role in the application service. GPS is used to provide the navigation data, but its performance is limited in areas with poor satellite visibility. Hence, multi-sensor fusion of an IMU (Inertial Measurement Unit), GPS (Global Positioning System), and DMI (Distance Measurement Indicator) is required to provide the vehicle's position to the service provider and the driver behind the wheel. The multi-sensor fusion is implemented via an algorithm based on the Kalman filtering technique, and navigation accuracy is enhanced by this filtering approach. To verify the fusion approach, a land-vehicle test was performed and the results were discussed. The results showed that horizontal position errors were suppressed to around 1 m accuracy under a simulated GPS outage. Under normal GPS conditions, horizontal position errors were under 40 cm on a curved trajectory and 27 cm on a linear trajectory, depending on the vehicle dynamics.
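The role of the IMU/DMI during a GPS outage can be sketched with a one-dimensional position filter: dead-reckon from the measured velocity while no fix is available, and apply a Kalman correction when GPS returns. This is a didactic scalar sketch under assumed noise values, not the paper's filter design.

```python
def dead_reckon_fuse(pos, p, v, dt, q=0.05, gps=None, r=1.0):
    """Predict position by integrating the IMU/DMI velocity v over dt;
    correct with a GPS fix when satellites are visible (gps=None models
    an outage). p is the position variance."""
    pos, p = pos + v * dt, p + q     # dead-reckoning predict
    if gps is not None:              # GPS available again
        k = p / (p + r)
        pos, p = pos + k * (gps - pos), (1 - k) * p
    return pos, p
```

During the outage the variance grows without bound (the ~1 m drift regime); the first fix after the outage collapses it again.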

범주형 자료의 결측치 추정방법 성능 비교 (Comparing Accuracy of Imputation Methods for Categorical Incomplete Data)

  • 신형원;손소영
    • The Korean Journal of Applied Statistics, Vol. 15, No. 1, pp. 33-43, 2002
  • Various methods, such as mode imputation, logistic regression, and association rules, have been studied for imputing missing values in categorical data. In this study, we propose neural-network fusion and voting fusion methods that combine the estimates of these methods, and compare their performance by simulation. The factors characterizing the experimental data were (1) the link function between input and output variables, (2) the data size, (3) the noise level, (4) the missing-value rate, and (5) the missingness mechanism. The results are as follows: when the data size is small and the missing rate is high, mode imputation, association rules, and neural-network fusion perform well; when the data size is small and the missing probability depends strongly on the remaining observed variables, logistic regression and neural-network fusion perform well; and when the data size is large, the missing rate is low, the noise is large, and the missing probability depends strongly on the remaining observed variables, neural-network fusion performs best.
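The voting-fusion idea — combining the categorical values proposed by several imputation methods by majority vote — can be sketched directly. The tie-breaking rule (first-listed method wins) is an assumption for illustration, not necessarily the paper's choice.

```python
from collections import Counter

def vote_fusion(predictions):
    """Fuse the category values proposed by several imputation methods
    (e.g. mode imputation, logistic regression, association rules) by
    majority vote; ties fall to the first-listed method."""
    counts = Counter(predictions)
    top = counts.most_common(1)[0][1]
    for p in predictions:            # first-listed wins a tie
        if counts[p] == top:
            return p
```

A neural-network fusion would instead learn the combination weights from data, which is why it can adapt to the noise and missingness regimes the simulation varies.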

효과적인 얼굴 인식을 위한 인식기 선택 (Classifier Selection for Efficient Face Recognition)

  • 남미영;이필규
    • Korea Institute of Information and Communication Engineering Conference Proceedings, 2005 Spring Conference of the Korea Institute of Maritime Information and Communication Sciences, pp. 453-456, 2005
  • Noting that the recognition performance of each algorithm varies with the attributes of a face, this paper proposes a method that clusters diverse face data and then selectively applies the most effective algorithm to each cluster to improve recognition performance. Classifier fusion is widely used for deciding the final recognition result. Kuncheva proposed partitioning the data into regions by a criterion and then finding which classifier suits each region; however, when several classifiers are selected for a region, that work does not address how to fuse their results, only which classifier to use for each region, on the grounds that the selected classifiers show similar performance there and any of them may be used. This paper therefore both determines which classifier is suitable for each data region and proposes a method for fusing the outputs of the selected classifiers.
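Region-wise classifier selection followed by output fusion can be sketched as: look up the classifiers registered for the region a sample falls in, then average their per-label confidence scores. The region names, classifier names, and averaging rule below are hypothetical placeholders, not the paper's trained system.

```python
def select_and_fuse(region, selectors, scores):
    """Pick the classifiers registered for the data region the face falls
    in (selectors: region -> list of classifier names), then average their
    confidence scores (scores: classifier -> {label: score}) and return
    the label with the highest fused score."""
    chosen = selectors[region]
    fused = {}
    for label in scores[chosen[0]]:
        fused[label] = sum(scores[c][label] for c in chosen) / len(chosen)
    return max(fused, key=fused.get)
```

Averaging is the simplest fusion rule; weighted or rank-based rules slot into the same structure.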


다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선 (Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map)

  • 김시종;안광호;성창훈;정명진
    • The Journal of Korea Robotics Society, Vol. 4, No. 4, pp. 298-304, 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (${\Phi}$, ${\Delta}$) and a camera calibration matrix (K), and the LRF disparity map can then be generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation process is the multi-sensor fusion disparity map, with which the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.
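The compensation step — substituting LRF-derived disparities wherever stereo matching failed — can be sketched as a masked merge of the two disparity maps. This assumes both maps are already on the same pixel grid (the projection and interpolation steps the abstract describes); the invalid-value convention is an assumption.

```python
import numpy as np

def fuse_disparity(stereo_disp, lrf_disp, invalid=-1.0):
    """Build the multi-sensor fusion disparity map: where stereo matching
    failed (repeated pattern / textureless regions, marked `invalid`),
    substitute the disparity interpolated from projected LRF points."""
    fused = stereo_disp.copy()
    mask = stereo_disp == invalid
    fused[mask] = lrf_disp[mask]
    return fused
```

Valid stereo disparities are kept untouched, so the LRF only fills the holes rather than overriding the denser stereo estimate.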
