• Title/Summary/Keyword: fusion of sensor information

A Data Fusion Method of Odometry Information and Distance Sensor for Effective Obstacle Avoidance of an Autonomous Mobile Robot (자율이동로봇의 효율적인 충돌회피를 위한 오도메트리 정보와 거리센서 데이터 융합기법)

  • Seo, Dong-Jin;Ko, Nak-Yong
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.4
    • /
    • pp.686-691
    • /
    • 2008
  • This paper proposes the concept of "virtual sensor data" and its application to real-time obstacle avoidance. The virtual sensor data is a virtual distance that accounts for the movement of the obstacle as well as that of the robot. In practice, it is calculated from the odometry data and the range sensor data, and it can be used in any method that relies on distance data for collision avoidance. Since the virtual sensor data considers the movement of both the robot and the obstacle, methods utilizing it yield smoother and safer collision-free motion.
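The abstract does not give the formula for the virtual distance, but under the stated idea (compensate the measured range for the motion of both robot and obstacle over one control cycle) a minimal sketch might look like the following; the function name, arguments, and the simple closing-speed model are all assumptions, not the paper's actual method:

```python
def virtual_range(measured_range, robot_speed, obstacle_speed, dt):
    """Reduce the measured range by the distance the robot and the obstacle
    are expected to close during one control interval dt, so that downstream
    avoidance methods react to motion, not just to the static measurement."""
    closing_distance = (robot_speed + obstacle_speed) * dt
    return max(measured_range - closing_distance, 0.0)
```

A distance-based avoidance method could then consume `virtual_range(...)` in place of the raw sensor reading without any other change.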

Sensor Fusion for Motion Capture System (모션 캡쳐 시스템을 위한 센서 퓨전)

  • Jeong, Il-Kwon;Park, ChanJong;Kim, Hyeong-Kyo;Wohn, KwangYun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.3
    • /
    • pp.9-15
    • /
    • 2000
  • We propose a sensor fusion technique for a motion capture system. In our system, two kinds of sensors are used for mutual assistance. Four magnetic sensors (markers) are attached to the upper arms and the backs of the hands to assist twelve optical sensors attached to the arms of a performer. The optical sensor information is not always complete because the optical markers can be hidden by obstacles. In that case, magnetic sensor information is used to link the discontinuous optical sensor information. We use system identification techniques to model the relation between the sensors' signals: dynamic systems are constructed from input-output data, and the best model is chosen from a set of candidates using canonical system identification techniques. Our current approach uses a simple signal-processing technique; in future work we will propose a new method using other signal-processing techniques such as Wiener or Kalman filtering.
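The abstract does not specify the identified model, so as a rough stand-in here is the simplest possible input-output model, a single scale factor fitted by least squares, used to bridge gaps where optical markers are occluded. The function names, the scalar model, and the use of `None` for occluded samples are all illustrative assumptions:

```python
def fit_scale(mag, opt):
    """Fit the simplest input-output model opt ~= a * mag by least squares,
    using only the samples where the optical signal is available."""
    num = sum(m * o for m, o in zip(mag, opt) if o is not None)
    den = sum(m * m for m, o in zip(mag, opt) if o is not None)
    return num / den

def fill_gaps(mag, opt):
    """Replace missing (occluded) optical samples with the model prediction
    driven by the magnetic sensor signal."""
    a = fit_scale(mag, opt)
    return [o if o is not None else a * m for m, o in zip(mag, opt)]
```

The paper's canonical system identification would replace `fit_scale` with a proper dynamic model selected from candidate structures.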

Cooperative Spectrum Sensing for Cognitive Radio Systems with Energy Harvesting Capability (에너지 수집 기능이 있는 인지 무선 시스템의 협력 스펙트럼 센싱 기법)

  • Park, Sung-Soo;Lee, Seok-Won;Bang, Keuk-Joon;Hong, Dae-Sik
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.49 no.3
    • /
    • pp.8-13
    • /
    • 2012
  • In this paper, we investigate a cooperative spectrum sensing scheme for sensor-network-aided cognitive radio systems with energy harvesting capability. In the proposed model, each sensor node harvests ambient energy from the environment, such as solar, wind, mechanical vibration, or thermoelectric sources. We propose an adaptive cooperative spectrum sensing scheme in which each sensor node carries out energy detection depending on the residual energy in its energy storage and then conveys the sensing result to the fusion center. Simulation results show that the proposed scheme minimizes the false alarm probability for a given target detection probability by adjusting the number of samples used by the energy detector.
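The adaptive rule described above can be sketched per node: sense only when the residual harvested energy covers the sensing cost, and otherwise stay silent. The energy-cost model, threshold, and function signature below are assumptions for illustration, not the paper's exact scheme:

```python
def sense_and_report(residual_energy, samples, threshold, cost_per_sample=0.01):
    """Run energy detection only if the node's residual (harvested) energy
    can pay for it; return the local decision for the fusion center,
    or None if the node skips this sensing round."""
    cost = cost_per_sample * len(samples)
    if residual_energy < cost:
        return None  # not enough energy: skip sensing this round
    # Energy detector: average signal power compared against a threshold.
    test_statistic = sum(s * s for s in samples) / len(samples)
    return test_statistic > threshold
```

The fusion center would then combine the non-`None` decisions from all reporting nodes.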

RESOURCE ORIENTED ARCHITECTURE FOR MULTIMEDIA SENSOR NETWORKS IWAIT2009

  • Iwatani, Hiroshi;Nakatsuka, Masayuki;Takayanagi, Yutaro;Katto, Jiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.456-459
    • /
    • 2009
  • Sensor networks have been a hot research topic for the past decade and have moved toward using multimedia sensors such as cameras and microphones [1]. Combining many types of sensor data can lead to more accurate and precise information about the environment. However, the use of sensor network data is still limited to closed environments. In this paper, we therefore propose a web-service-based framework for deploying multimedia sensor networks. To unify different types of sensor data and to support heterogeneous client applications, we use ROA (Resource Oriented Architecture [2]).

Development of IoT based Real-Time Complex Sensor Board for Managing Air Quality in Buildings

  • Park, Taejoon;Cha, Jaesang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.10 no.4
    • /
    • pp.75-82
    • /
    • 2018
  • Efforts to reduce the harm caused by fine dust and harmful gases in daily life have been led by national and local governments, and information on air quality is provided along with real-time weather forecasts through TV and the internet. However, such information does not cover the individual indoor spaces people actually occupy. In this paper, we therefore propose an IoT-based real-time air quality sensing board that responds to fine particles, for air quality management in buildings. The proposed board is easy to install and can be placed wherever it is needed. It makes the air quality (pollution level) of an indoor space easy to read, recognizes changes in indoor air pollution, and supports countermeasures. Because at least one board can serve as a representative measurement point, the system can provide useful information about the overall indoor space. In this paper, we compare the performance of the proposed board with existing air quality measurement equipment.

Abnormal Situation Detection Algorithm via Sensors Fusion from One Person Households

  • Kim, Da-Hyeon;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.4
    • /
    • pp.111-118
    • /
    • 2022
  • In recent years, the number of single-person elderly households has increased, but when an emergency occurs inside the house, it is difficult for a person living alone to notify the outside world. Various smart home solutions have been proposed to detect emergencies in single-person households, but video media such as home CCTV raise privacy concerns. Furthermore, if only a single sensor is used to analyze abnormal situations of the elderly in the house, accurate situational analysis is limited by the amount of available data. In this paper, we therefore propose an in-house abnormal situation detection algorithm that fuses 2D LiDAR, dust, and voice sensors, which are closely related to everyday life while protecting privacy, based on their correlations. We demonstrate the algorithm's reliability using data collected in a real-world environment, and we present the abnormal situations that the proposed algorithm can and cannot detect. This study focuses on detecting abnormal situations in the house and should be helpful in the lives of people living alone.
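The paper fuses the three sensors based on their correlations, which the abstract does not detail. As a deliberately simplified stand-in, a two-of-three agreement rule over per-sensor anomaly flags illustrates why fusing several weak, privacy-preserving signals beats relying on any single one:

```python
def detect_abnormal(lidar_motion_anomaly, dust_spike, loud_voice):
    """Flag an abnormal situation when at least two of the three
    privacy-preserving sensor channels report an anomaly.
    (A crude stand-in for the paper's correlation-based fusion.)"""
    votes = sum([bool(lidar_motion_anomaly), bool(dust_spike), bool(loud_voice)])
    return votes >= 2
```

A single noisy channel cannot trigger a false alarm on its own, while two independent channels agreeing gives much stronger evidence of a real event.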

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth of the scene in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF sensors, we apply post-processing to address these problems. The depth information is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparities, we obtain more accurate and stable multi-view disparity maps in reduced time.
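Seeding stereo matching with warped ToF depth relies on the standard pinhole relation between depth and disparity, disparity = f·B / Z. The sketch below shows that conversion; the parameter values in the test are illustrative, not from the paper:

```python
def depth_to_disparity(depth_mm, focal_px, baseline_mm):
    """Convert a ToF depth value Z into an initial stereo disparity using
    the pinhole relation d = f * B / Z, where f is the focal length in
    pixels and B is the baseline between the color cameras."""
    return focal_px * baseline_mm / depth_mm
```

Each warped ToF depth sample gives one such initial disparity, which belief propagation then refines around depth discontinuities.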

Performance Enhancement of Attitude Estimation using Adaptive Fuzzy-Kalman Filter (적응형 퍼지-칼만 필터를 이용한 자세추정 성능향상)

  • Kim, Su-Dae;Baek, Gyeong-Dong;Kim, Tae-Rim;Kim, Sung-Shin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.12
    • /
    • pp.2511-2520
    • /
    • 2011
  • This paper describes a method of adjusting fuzzy membership function parameters to improve the performance of a multi-sensor fusion system using an adaptive fuzzy-Kalman filter and cross-validation. The adaptive fuzzy-Kalman filter has two inputs: the variation of the accelerometer measurements and the residual error of the Kalman filter. The filter estimates the process noise Q and the measurement noise R, and then changes the Kalman gain. To evaluate the proposed adaptive fuzzy-Kalman filter, we built a two-axis AHRS (Attitude Heading Reference System) fusing an accelerometer and a gyro sensor, and verified its performance by comparing it to the NAV420CA-100, which is used in various airborne, marine, and land applications.
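The core mechanism, adapting the Kalman gain by re-estimating noise from the residual, can be sketched in one dimension. Here a simple residual-proportional inflation of R stands in for the paper's fuzzy rules; the scaling law and all parameter values are assumptions:

```python
def adaptive_kalman_update(x, P, z, Q, R_base):
    """One adaptive Kalman update for a scalar state:
    inflate the measurement noise R when the residual is large, so the
    filter trusts a suspicious measurement less (a crude stand-in for
    the paper's fuzzy membership-function adjustment)."""
    P_pred = P + Q                        # predict: state assumed constant
    residual = z - x                      # innovation
    R = R_base * (1.0 + abs(residual))    # adaptive measurement noise
    K = P_pred / (P_pred + R)             # Kalman gain
    x_new = x + K * residual              # correct the state
    P_new = (1.0 - K) * P_pred            # update the covariance
    return x_new, P_new
```

With a fixed R the gain never reacts to outliers; the adaptive R shrinks the gain exactly when the residual suggests the measurement is unreliable.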

Marker Classification by Sensor Fusion for Hand Pose Tracking in HMD Environments using MLP (HMD 환경에서 사용자 손의 자세 추정을 위한 MLP 기반 마커 분류)

  • Vu, Luc Cong;Choi, Eun-Seok;You, Bum-Jae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.920-922
    • /
    • 2018
  • This paper describes a method of classifying simple circular artificial markers on the surfaces of a box worn on the back of the hand, in order to estimate the pose of the user's hand for VR/AR applications using a Leap Motion camera and two IMU sensors. One IMU sensor is located in the box and the other is fixed to the camera. A multi-layer perceptron (MLP) is adopted to classify the artificial markers on each surface tracked by the camera, using the IMU sensor data. The method runs successfully in real time at 70 Hz on a PC.

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnosis (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min;Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.349-355
    • /
    • 2015
  • Underwater robots generally perform tasks better than humans under underwater constraints such as high pressure and limited light. To properly diagnose structures in an underwater environment using a remotely operated vehicle (ROV), it is important for the vehicle to autonomously maintain its own position and orientation, avoiding additional control effort. In this paper, we propose an efficient method to assist the operation of an ROV under various disturbances during the diagnosis of underwater structures. A conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS to estimate the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. We therefore fuse the vision sensor and the AHRS using the amount of blur in the image as the criterion. To evaluate the amount of blur, we adopt two methods: quantifying high-frequency components using power spectral density analysis of the 2D discrete-Fourier-transformed image, and identifying the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and the blur estimation methods under changes of light and distance, and verify through experiments that the cepstrum-based blur estimation performs better.
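The fusion criterion above, "trust vision only when the image is sharp enough", can be sketched with a simple blur score. The paper uses FFT power spectral density and cepstrum analysis; the Laplacian-variance score below is a well-known stand-in, not the paper's method, and the threshold logic is an assumption:

```python
def blur_score(img):
    """Variance of the discrete Laplacian over interior pixels of a
    grayscale image (list of rows): sharp images have strong
    high-frequency content and thus a high score; blurred images score low."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def vision_weight(score, threshold):
    """Hard switch: use the vision-based pose estimate only when the image
    is sharp enough, otherwise fall back entirely on the AHRS."""
    return 1.0 if score >= threshold else 0.0
```

A pose estimator would blend the camera and AHRS estimates with `vision_weight(...)`, so a turbidity-blurred frame never corrupts the fused pose.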