• Title/Summary/Keyword: fusion of sensor information


Sensor Fusion-Based Semantic Map Building (센서융합을 통한 시맨틱 지도의 작성)

  • Park, Joong-Tae;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.277-282 / 2011
  • This paper describes a sensor fusion-based semantic map building method that can improve the capabilities of a mobile robot in various domains, including localization, path planning, and mapping. To build a semantic map, various environmental information, such as doors and cliff areas, should be extracted autonomously. Therefore, we propose a method to detect doors, cliff areas, and robust visual features using a laser scanner and a vision sensor. Doors are detected by combining GHT (Generalized Hough Transform)-based recognition of door handles with the geometrical features of a door. Cliff areas and robust visual features are detected using a tilting laser scanner and SIFT features, respectively. The proposed method was verified by various experiments, which showed that the robot could build a semantic map autonomously in various indoor environments.
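The GHT step mentioned in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy template shape, the quantized gradient angles, and the synthetic "image" points are all assumptions made for the example.

```python
from collections import defaultdict

def build_r_table(template_pts, ref):
    """R-table: quantized gradient angle -> displacements to the reference point."""
    table = defaultdict(list)
    for (x, y, angle) in template_pts:
        table[angle].append((ref[0] - x, ref[1] - y))
    return table

def ght_vote(image_pts, r_table):
    """Each image edge point votes for candidate reference-point locations;
    the accumulator peak is the detected object position."""
    acc = defaultdict(int)
    for (x, y, angle) in image_pts:
        for (dx, dy) in r_table.get(angle, []):
            acc[(x + dx, y + dy)] += 1
    return max(acc, key=acc.get)

# Toy "door handle" template: edge points with quantized gradient angles.
template = [(0, 0, 0), (2, 0, 1), (2, 2, 2), (0, 2, 3)]
r_table = build_r_table(template, ref=(1, 1))

# The same shape shifted by (10, 5) in the "image".
image = [(x + 10, y + 5, a) for (x, y, a) in template]
print(ght_vote(image, r_table))  # -> (11, 6), the shifted reference point
```

In a real detector the angles would come from image gradients and the accumulator would be smoothed before peak-picking; the voting structure, however, is the same.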

Classification of Fused SAR/EO Images Using Transformation of Fusion Classification Class Label

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.28 no.6 / pp.671-682 / 2012
  • Strong backscattering features in high-resolution Synthetic Aperture Radar (SAR) images provide useful information for analyzing earth surface characteristics such as man-made objects in urban areas. However, SAR images are limited in describing detailed information in urban areas compared with optical images. In this paper, we propose a new classification method using a fused SAR and Electro-Optical (EO) image, which produces a more informative classification result than single-sensor SAR image classification. The experimental results showed that the proposed method successfully combines SAR image classification with EO image characteristics.

Bayesian Statistical Modeling of System Energy Saving Effectiveness for MAC Protocols of Wireless Sensor Networks: The Case of Non-Informative Prior Knowledge

  • Kim, Myong-Hee;Park, Man-Gon
    • Journal of Korea Multimedia Society / v.13 no.6 / pp.890-900 / 2010
  • Bayesian network methods provide an efficient tool for performing information fusion and decision making under uncertainty. This paper proposes Bayes estimators for the system energy-saving effectiveness of wireless sensor networks, derived under non-informative prior knowledge about the mean active and sleep times of the time frames of sensor nodes. We then conduct a case study on several Bayesian estimation models for the system energy-saving effectiveness of a wireless sensor network, and evaluate and compare the performance of the proposed Bayes estimators. The case study shows that the proposed estimators are well suited to evaluating energy efficiency with non-informative prior knowledge drawn from previous experience, and that they remain robust across the given parameter values.
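As a rough illustration of the kind of estimator involved, the sketch below computes a posterior-mean estimate of a node's sleep-time fraction under a uniform Beta(1, 1) prior. The choice of a Beta model over frame counts and the example numbers are assumptions for the illustration; the paper's actual models concern mean active and sleep times.

```python
def bayes_sleep_fraction(sleep_frames, total_frames, a=1.0, b=1.0):
    """Posterior mean of the sleep-time fraction under a Beta(a, b) prior.

    With a = b = 1 (uniform, non-informative), observing `sleep_frames`
    sleep frames out of `total_frames` gives the posterior
    Beta(a + sleep_frames, b + total_frames - sleep_frames),
    whose mean is returned here.
    """
    return (a + sleep_frames) / (a + b + total_frames)

# 90 of 100 observed time frames were spent asleep.
print(bayes_sleep_fraction(90, 100))  # ~0.892, shrunk slightly toward 0.5 by the prior
```

With no data at all the estimator returns the prior mean 0.5, which is exactly the "non-informative" starting point the abstract refers to.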

Specialized Product-Line Development Methodology for Developing the Embedded System

  • Hong Ki-Sam;Yoon Hee-Byung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.3 / pp.268-273 / 2005
  • We propose a specialized product-line development methodology for developing an embedded Multi-Sensor Data Fusion System (MSDFS). The product-line methodology provides simultaneous design of software and hardware and high-level reusability; however, it is insufficient in the requirements analysis stage because it focuses on software architecture, detailed design, and code. We therefore apply a business model based on the IDEF0 technique to the traditional methodology. In this paper, we describe the processes of developing the core assets: requirements analysis, feature modeling, and validation. The proposed model gives efficient results in eliciting features and ensures high-level reusability of the modules running on the embedded system.

Fuzzy Based Mobile Robot Control with GUI Environment (GUI환경을 갖는 퍼지기반 이동로봇제어)

  • Hong, Seon-Hack
    • 전자공학회논문지 IE / v.43 no.4 / pp.128-135 / 2006
  • This paper proposes a fuzzy-based sensor fusion control method that uses self-localization in the environment, position data from encoder dead reckoning, and a world map built from ultrasonic sensors. The proposed fuzzy-based sensor fusion system recognizes objects and extracts features such as edges, distances, and patterns for generating the world map and for self-localization. Fuzzy-based control of a mobile robot was developed and verified by experiments in a corridor environment.
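The encoder dead reckoning mentioned in the abstract can be sketched with the standard differential-drive odometry update. The wheel-base value and travel distances below are illustrative assumptions, not parameters from the paper.

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Update (x, y, theta) from left/right wheel travel distances
    using the differential-drive odometry model (midpoint heading)."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0              # distance travelled by robot centre
    dtheta = (d_right - d_left) / wheel_base  # heading change
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, 1.0, 1.0, wheel_base=0.5)  # both wheels 1 m: straight ahead
print(pose)  # -> (1.0, 0.0, 0.0)
```

Because encoder errors accumulate without bound, such a pose is typically corrected by the other fused sensors (here, the sonar-based world map), which is the motivation for the fusion scheme.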

Efficient Kinect Sensor-Based Reactive Path Planning Method for Autonomous Mobile Robots in Dynamic Environments (키넥트 센서를 이용한 동적 환경에서의 효율적인 이동로봇 반응경로계획 기법)

  • Tuvshinjargal, Doopalam;Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.6 / pp.549-559 / 2015
  • In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning capabilities within dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones by providing the speed and orientation information of the obstacles relative to the robot. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles in hostile dynamic environments.
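The local-observer idea can be sketched in a few lines: expressing obstacle motion relative to the robot makes the robot stationary in its own frame, so a dynamic planning problem reduces to a static one. This is a generic relative-motion sketch, not the paper's virtual-plane formulation; the positions and velocities are made-up example values.

```python
def relative_state(robot_pos, robot_vel, obs_pos, obs_vel):
    """In the local observer's frame the robot is stationary and each
    obstacle moves with the relative velocity."""
    rel_pos = (obs_pos[0] - robot_pos[0], obs_pos[1] - robot_pos[1])
    rel_vel = (obs_vel[0] - robot_vel[0], obs_vel[1] - robot_vel[1])
    return rel_pos, rel_vel

def is_closing(rel_pos, rel_vel):
    """The obstacle is approaching iff the relative velocity points toward
    the robot (negative dot product with the relative position)."""
    return rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1] < 0

# Robot drives at 1 m/s toward a stopped obstacle 5 m ahead.
p, v = relative_state((0, 0), (1, 0), (5, 0), (0, 0))
print(is_closing(p, v))  # -> True
```

A reactive planner would evaluate such closing conditions against candidate headings and pick one for which no obstacle is closing within the safety horizon.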

Sensor Fusion for Seamless Localization using Mobile Device Data (센서 융합 기반의 실내외 연속 위치 인식)

  • Kim, Jung-yee
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.10 / pp.1994-2000 / 2016
  • Technology that can determine the location of individuals is required in a variety of applications, such as location-based control, personalized advertising, missing-child prevention, support for field trips, and push events based on the user's location. In particular, technology that can determine location without interruption across indoor and outdoor spaces has been studied extensively in recent years. Because they emphasize positioning accuracy, many previous studies impose constraints such as additional sensing devices or special mounting hardware. The algorithm proposed in this paper aims to perform positioning using only the standard equipment of a smartphone, the most widely used device. In this paper, a sensor fusion of GPS, a WiFi radio map, and an accelerometer based on a particle filter algorithm is designed and implemented. Experimental results show that the proposed algorithm outperforms the compared algorithm, confirming that it can be used in real environments.
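The particle-filter fusion described here can be sketched in one dimension: the accelerometer drives the prediction step and a GPS fix drives the weighting step. Everything below (1-D state, noise levels, the true trajectory) is an assumption for the sketch; the paper's filter additionally fuses a WiFi radio map.

```python
import random, math

random.seed(0)  # deterministic run for the example

def particle_filter_step(particles, accel_move, gps_meas, gps_sigma=3.0):
    """One predict/weight/resample cycle of a 1-D particle filter:
    motion from the accelerometer, correction from a GPS fix."""
    # Predict: propagate each particle by the dead-reckoned displacement plus noise.
    particles = [p + accel_move + random.gauss(0, 0.5) for p in particles]
    # Weight: Gaussian likelihood of the GPS measurement given each particle.
    w = [math.exp(-0.5 * ((p - gps_meas) / gps_sigma) ** 2) for p in particles]
    # Resample in proportion to the weights.
    return random.choices(particles, weights=w, k=len(particles))

particles = [random.uniform(0, 100) for _ in range(500)]
for step in range(10):  # true position starts at 20 and moves +1 m per step
    particles = particle_filter_step(particles, 1.0, 20.0 + step + 1)
estimate = sum(particles) / len(particles)
print(estimate)  # settles near the true position of 30
```

In the seamless indoor/outdoor setting, the weighting step would switch between GPS likelihoods (outdoors) and WiFi radio-map likelihoods (indoors) while the prediction step stays the same, which is what makes the particle filter a natural fusion backbone.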

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal / v.45 no.5 / pp.847-861 / 2023
  • Dynamic object detection is essential for ensuring safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors have excellent accuracy in estimating distance, they lack texture or color information and have a lower resolution than conventional cameras. In addition, performance degradation occurs when a LiDAR-based object detection model is applied to different driving environments or when sensors from different LiDAR manufacturers are used, owing to the domain gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.

Design of a Situation-Awareness Processing System with Autonomic Management based on Multi-Sensor Data Fusion (다중센서 데이터 융합 기반의 자율 관리 능력을 갖는 상황인식처리 시스템의 설계)

  • Young-Gyun Kim;Chang-Won Hyun;Jang Hun Oh;Hyo-Chul Ahn;Young-Soo Kim
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.913-916 / 2008
  • We studied a situation-awareness system with autonomic management based on multi-sensor data fusion. In an environment where various types of sensors are connected in a large-scale network, the system fuses the data arriving from the sensors in real time to perform situation-awareness processing, and it provides an autonomic management capability that automatically detects and repairs faults in the software components installed on each node. The proposed system is applicable to a variety of situation-awareness applications, including surveillance and reconnaissance in ubiquitous and defense weapon systems, intelligent autonomous robots, and intelligent vehicles.

Evidential Fusion of Multisensor Multichannel Imagery

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing / v.22 no.1 / pp.75-85 / 2006
  • This paper deals with data fusion for the problem of land-cover classification using multisensor imagery. Dempster-Shafer evidence theory is employed to combine the information extracted from multiple data sets of the same site. The Dempster-Shafer approach has two important advantages for remote sensing applications: it can consider a compound class consisting of several land-cover types, and the incompleteness of each sensor's data due to cloud cover can be modeled in the fusion process. Image classification based on Dempster-Shafer theory usually assumes that each sensor is represented by a single channel. Here, the evidential approach, which uses a mass function obtained under the assumption of a class-independent beta distribution, is extended to multiple sets of multichannel data acquired from different sensors. The proposed method was applied to KOMPSAT-1 EOC panchromatic imagery and LANDSAT ETM+ data acquired over the Yongin/Nuengpyung area of the Korean peninsula. The experiment showed that the method is highly effective in applications where it is hard to find homogeneous regions represented by a single land-cover type during training.
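The core combination step of Dempster-Shafer fusion, including the compound-class advantage the abstract highlights, can be sketched directly from the theory. The example mass values and class names are assumptions for the illustration; the paper derives its masses from a class-independent beta distribution rather than assigning them by hand.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: focal elements are frozensets of
    land-cover classes; conflicting (empty-intersection) mass products
    are discarded and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Sensor 1 (optical) mostly supports 'urban'; sensor 2 (SAR) cannot separate
# 'urban' from 'soil', so part of its mass sits on the compound class.
m_opt = {frozenset({"urban"}): 0.7, frozenset({"urban", "soil"}): 0.3}
m_sar = {frozenset({"urban", "soil"}): 0.6, frozenset({"soil"}): 0.4}
fused = dempster_combine(m_opt, m_sar)
print(fused)  # mass on {'urban'}: 0.42/0.72, {'urban','soil'}: 0.18/0.72, {'soil'}: 0.12/0.72
```

Mass assigned to compound sets like {urban, soil} is exactly how the theory represents a sensor that cannot discriminate those classes, and the normalization by 1 minus the conflict is what absorbs incomplete or contradictory evidence such as cloud-covered pixels.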