• Title/Abstract/Keyword: fusion of sensor information

Search results: 410 items (processing time: 0.035 seconds)

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 8, No. 2, pp. 105-110, 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need comparable technologies that recognize emotion from combined information. In this paper, five emotions (normal, happiness, anger, surprise, sadness) are recognized from speech signals and facial images, and a multimodal method is proposed that fuses the two recognition results. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal stage fuses the results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision-fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better recognition rate than either the facial image or the speech signal alone.
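
The fusion step above maps each modality's PCA-classifier output through an S-type membership function before combining the results. Below is a minimal sketch of that idea, using a generic smoothstep-style S-curve and invented per-emotion scores and weights; the paper's tuned membership parameters are not given, so everything numeric here is an assumption.

```python
import numpy as np

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

def s_membership(x, a, b):
    """Generic S-type membership function rising from 0 at x<=a to 1 at x>=b."""
    x = np.clip((x - a) / (b - a), 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)        # smooth S-shaped ramp

def fuse_decisions(speech_scores, face_scores, a=0.2, b=0.8, w_speech=0.6):
    """Map each modality's per-class scores through the S-curve, then
    combine them with a weight favouring the stronger modality (speech)."""
    mu_speech = s_membership(np.asarray(speech_scores), a, b)
    mu_face = s_membership(np.asarray(face_scores), a, b)
    fused = w_speech * mu_speech + (1.0 - w_speech) * mu_face
    return EMOTIONS[int(np.argmax(fused))], fused

if __name__ == "__main__":
    # Hypothetical PCA-classifier scores (one value per emotion class).
    speech = [0.30, 0.72, 0.15, 0.40, 0.20]
    face   = [0.25, 0.55, 0.60, 0.35, 0.10]
    label, fused = fuse_decisions(speech, face)
    print(label, np.round(fused, 3))
```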

분산된 센서들의 Registration 오차를 줄이기 위한 새로운 필터링 방법 (New Filtering Method for Reducing Registration Error of Distributed Sensors)

  • 김용식; 이재훈; 도현민; 김봉근; 타니카와 타미오; 오바 코타로; 이강; 윤석헌
    • 로봇학회논문지, Vol. 3, No. 3, pp. 176-185, 2008
  • In this paper, a new filtering method for sensor registration is presented that estimates and corrects the registration-parameter errors arising in multi-sensor environments. Sensor registration is based on a filtering method that estimates these parameters, and the accuracy of the registration determines the performance of whichever data-fusion method is selected. Because of various error sources, registration errors can cause a single object tracked by multiple sensors to be recognized as multiple objects. To estimate the error parameters, a new nonlinear information filter is developed using minimum mean square error estimation. Instead of linearizing the nonlinear function as in an extended Kalman filter, the information estimate is obtained through an unscented prediction. The proposed method reduces the estimation error without computing a Jacobian matrix, even when the measurement dimension is large. A computer simulation is carried out to compare the proposed filtering method with an extended Kalman filter.
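
The key point of the abstract is replacing EKF linearization with an unscented prediction, which needs no Jacobian. The sketch below shows a generic unscented transform of a Gaussian through a nonlinear function; the 2-D polar-style model and the noise values are illustrative, not the paper's registration model.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Generate 2n+1 sigma points and weights for the unscented transform."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_predict(mean, cov, f, process_noise):
    """Propagate a Gaussian through nonlinear f without computing a Jacobian."""
    pts, w = sigma_points(mean, cov)
    fx = np.array([f(p) for p in pts])
    m = w @ fx
    P = process_noise + sum(wi * np.outer(d, d) for wi, d in zip(w, fx - m))
    return m, P

if __name__ == "__main__":
    # Illustrative 2-D polar-style nonlinearity, not the paper's model.
    f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
    m0 = np.array([10.0, 0.3])
    P0 = np.diag([0.5, 0.01])
    Q = 0.01 * np.eye(2)
    m1, P1 = unscented_predict(m0, P0, f, Q)
    print(np.round(m1, 3), "\n", np.round(P1, 4))
```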


이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정 (Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment)

  • 진태석; 이민중; 이장명
    • 제어로봇시스템학회논문지, Vol. 13, No. 5, pp. 434-443, 2007
  • The exploration of unknown environments is an important task for the new generation of mobile service robots, which are navigated by a number of methods using sensing systems such as sonar or vision. To exploit the strengths of both the sonar and visual sensing systems, this paper presents a localization technique for a mobile robot that fuses data from multiple ultrasonic sensors and a vision system. The robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. For the ultrasonic sensors, these features appear as range information in the form of circular arcs, generally called RCDs (Regions of Constant Depth). Localization is the continual estimation of the robot's position, deduced from its a priori position estimate. The environment is modeled as a two-dimensional grid map; we define a vision-based environment recognition scheme and a physically based sonar sensor model, and employ an extended Kalman filter to estimate the robot's position. The performance and simplicity of the approach are demonstrated with results from a set of experiments on a mobile robot.
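
The estimator above is an extended Kalman filter over the robot pose, updated from sonar and vision features. As a minimal illustration, the sketch below performs one EKF update of a planar pose from a single range measurement to a known feature; the RCD extraction and vision model are omitted and all numbers are placeholders.

```python
import numpy as np

def ekf_range_update(x, P, z, landmark, R):
    """EKF update of robot pose x = [px, py, theta] from one sonar range z
    to a feature with known position (a simplification of an RCD feature)."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)                      # predicted range h(x)
    H = np.array([[-dx / r, -dy / r, 0.0]])   # Jacobian of h w.r.t. the pose
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + (K @ np.array([z - r])).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

if __name__ == "__main__":
    x = np.array([1.0, 2.0, 0.1])              # initial pose estimate
    P = np.diag([0.2, 0.2, 0.05])
    landmark = (4.0, 6.0)                       # hypothetical corner feature
    z = 5.1                                     # measured sonar range [m]
    R = np.array([[0.05]])
    x, P = ekf_range_update(x, P, z, landmark, R)
    print(np.round(x, 3))
```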

센서 융합을 이용한 이동 로봇의 물체 검출 방법 (Object Detection Method on Vision Robot using Sensor Fusion)

  • 김상훈
    • 정보처리학회논문지B, Vol. 14B, No. 4, pp. 249-254, 2007
  • This paper presents an object detection method for a small mobile robot equipped with ultrasonic and infrared sensors and a wireless camera. To determine whether an object is present ahead, the ultrasonic sensor measures the return time of an emitted ultrasonic pulse, the infrared sensor measures the amount of reflected infrared signal, and the camera extracts object features from the image data; the results are fused to decide whether an object exists and how far it is from the robot, and this decision is used to control the robot's motion. The ultrasonic and infrared sensors serve as primary sensors that detect the presence of an object and roughly estimate its distance, with an error of less than 5% between the computed and actual distances. Image processing then performs finer, secondary object detection and tracking, and the final detection rate is improved by fusing the three sensors. For robust separation of the object from the background and similar noise, the image-processing stage exploits prior information such as intrinsic color and motion, and a signature-based region segmentation method checks every candidate region so that only the target object region is detected, even when the object's shape changes. The combined detection results of the three sensors are used as probabilistic evidence for the final decision, and the detection rate is improved by at least 7% compared with any individual sensor.
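
The final decision above combines the three sensor results as probabilistic evidence. One common way to realize such a combination is a naive-Bayes / log-odds fusion, sketched below with invented per-sensor confidences; the paper does not specify its exact fusion rule, so this is an assumption.

```python
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def fuse_detections(sensor_probs, prior=0.5):
    """Fuse independent per-sensor detection probabilities into a single
    posterior P(object present) via log-odds (naive-Bayes style)."""
    l = log_odds(prior)
    for p in sensor_probs:
        l += log_odds(p) - log_odds(prior)
    return 1.0 / (1.0 + math.exp(-l))

if __name__ == "__main__":
    # Hypothetical confidences: ultrasonic echo, IR intensity, camera feature match.
    readings = {"ultrasonic": 0.7, "infrared": 0.6, "camera": 0.8}
    p = fuse_detections(readings.values())
    print(f"P(object) = {p:.3f}")   # higher than any single sensor alone
```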

센서 퓨전을 이용한 시각 장애인 유도 로봇의 실내주행 연구 (A Study on the Indoor Navigation of Guiding Robot for the Visually Impaired Using Sensor Fusion)

  • 장철웅; 정기호; 염문진; 심현민; 홍영기; 심재홍; 이응혁
    • 대한전자공학회 학술대회논문집, 2006년도 하계종합학술대회, pp. 923-924, 2006
  • In this paper, we propose a sensor-fusion method for obstacle avoidance in a guiding robot for the visually impaired. In our system, obstacle distances are acquired with ultrasonic sensors and the obstacle width is acquired with an image sensor. An avoidance angle is then computed from the distance and width information provided by the sensors. After the robot avoids the obstacle by the computed angle, it returns to its original path using odometry. The robot consists of an SA1110-based controller, a sensory part using a sonar array and an image sensor, and a motion part using a differential drive capable of climbing stairs. The system uses embedded Linux as its OS, and the GUI is developed with Qt/Embedded.
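
The avoidance angle above is computed from the ultrasonic distance and the image-derived obstacle width. A minimal geometric sketch of one such computation follows; the clearance margin and the sensor values are assumptions, not the paper's parameters.

```python
import math

def avoidance_angle(distance_m, width_m, clearance_m=0.3):
    """Angle the robot must turn so its path clears the obstacle edge:
    half the obstacle width plus a safety clearance, seen at the measured distance."""
    lateral = width_m / 2.0 + clearance_m
    return math.degrees(math.atan2(lateral, distance_m))

if __name__ == "__main__":
    d = 1.5   # distance from the ultrasonic sensors [m] (example value)
    w = 0.6   # obstacle width from the image sensor [m] (example value)
    print(f"turn by {avoidance_angle(d, w):.1f} degrees")   # about 21.8 degrees
```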


Positional Tracking System Using Smartphone Sensor Information

  • Kim, Jung Yee
    • Journal of Multimedia Information System, Vol. 6, No. 4, pp. 265-270, 2019
  • Technology for locating an individual has enabled a variety of services, and its use continues to grow. Most studies of localization technology have focused on positioning accuracy and have relied on constraints such as separate, expensive equipment or devices installed in the facility. Such approaches can achieve accuracy within a few tens of centimeters, but they cannot be applied to a user's location in real time in daily life. This paper therefore aims to track a smartphone's position using only the phone's built-in components. The goal is localization accuracy, based on smartphone sensor data, sufficient to verify a user's location. The accelerometer, a Wi-Fi radio map, and GPS sensor information are used. When building the radio map, signal maps are constructed at each vertex of a graph data structure, which reduces the map-building effort of the traditional offline phase. Accelerometer data are used to determine the user's movement state, and the collected sensor data are fused with a particle filter. Experiments show an average position error of about 3.7 meters, which is reasonable for providing location-based services in everyday life.
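
The fusion described above is a particle filter driven by accelerometer-derived motion and weighted by a vertex-based Wi-Fi radio map. Below is a compact sketch of one predict/reweight/resample cycle under that scheme; the radio-map values, noise levels, and step length are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical radio map: graph vertex position -> mean RSSI of one AP [dBm].
RADIO_MAP = {(0.0, 0.0): -40.0, (5.0, 0.0): -55.0, (10.0, 0.0): -70.0}

def map_rssi(pos):
    """RSSI predicted at pos = value stored at the nearest radio-map vertex."""
    vertex = min(RADIO_MAP, key=lambda v: np.hypot(pos[0] - v[0], pos[1] - v[1]))
    return RADIO_MAP[vertex]

def pf_step(particles, weights, step_vec, observed_rssi, sigma_rssi=6.0):
    """One predict/update cycle: move particles by the accelerometer step
    (plus noise), reweight by the Wi-Fi observation, then resample."""
    particles = particles + step_vec + rng.normal(0.0, 0.3, particles.shape)
    predicted = np.array([map_rssi(p) for p in particles])
    weights = weights * np.exp(-0.5 * ((observed_rssi - predicted) / sigma_rssi) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

if __name__ == "__main__":
    n = 500
    particles = rng.uniform([0, -1], [10, 1], size=(n, 2))
    weights = np.full(n, 1.0 / n)
    # User takes one ~0.7 m step along +x and the phone sees -52 dBm.
    particles, weights = pf_step(particles, weights, np.array([0.7, 0.0]), -52.0)
    print("estimated position:", np.round(particles.mean(axis=0), 2))
```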

CALOS : 주행계 추정을 위한 카메라와 레이저 융합 (CALOS : Camera And Laser for Odometry Sensing)

  • 복윤수; 황영배; 권인소
    • 로봇학회논문지, Vol. 1, No. 2, pp. 180-187, 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. A 2D laser sensor provides accurate depth information on a single plane rather than the whole 3D structure, whereas CCD cameras provide a projected image of the whole 3D scene without depth. To overcome these limitations, we combine the two types of sensors, the laser sensor and the CCD cameras, and develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated from three points among the laser scan data and their corresponding image points, and then refined by nonlinear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system is a practical solution for motion estimation as well as for 3D reconstruction.
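
The refinement stage described above minimizes the reprojection error of three laser-measured points against their image correspondences. The following simplified sketch uses a generic nonlinear least-squares solver; the intrinsics, point data, and initial guess are illustrative, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[500.0, 0.0, 320.0],     # illustrative pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, rvec, tvec):
    """Project 3D points into the image for pose (rvec, tvec)."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(pose, points_3d, points_2d):
    return (project(points_3d, pose[:3], pose[3:]) - points_2d).ravel()

if __name__ == "__main__":
    # Three laser-scanned points (metres) and their observed pixels -- made up.
    pts3d = np.array([[0.5, 0.1, 3.0], [-0.4, 0.2, 2.5], [0.1, -0.3, 4.0]])
    true_pose = np.concatenate([[0.02, -0.01, 0.03], [0.10, 0.05, -0.20]])
    pts2d = project(pts3d, true_pose[:3], true_pose[3:])
    init = np.zeros(6)                                  # coarse initial estimate
    sol = least_squares(residuals, init, args=(pts3d, pts2d))
    print("refined pose:", np.round(sol.x, 3))
```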


Environment Adaptive Emergency Evacuation Route GUIDE through Digital Signage Systems

  • Lee, Dongwoo; Kim, Daehyun; Lee, Junghoon; Lee, Seungyoun; Hwang, Hyunsuk; Mariappan, Vinayagam; Lee, Minwoo; Cha, Jaesang
    • International Journal of Advanced Culture Technology, Vol. 5, No. 1, pp. 90-97, 2017
  • Most modern commercial buildings have complex architecture and increasingly complicated interiors, so establishing intelligible escape routes within a limited time is critical in case of fire or other emergencies. Commercial buildings are already equipped with multiple exit signs, but these can create confusion and lead people in different directions during an emergency, turning the situation chaotic, especially in buildings with complex layouts. Much research has focused on improving exit-sign systems with better visual navigation effects, such as laser beams or combined audio and video cues; however, digital-signage-based management of emergency exit signs is among the best ways to guide people to escape. This paper proposes an intelligent evacuation route guide that combines a centralized Wireless Sensor Network (WSN) with digital signage to keep people safe and away from danger during emergencies. The proposed system uses the WSN to sense conditions inside the building, runs an evacuation algorithm that estimates a safe escape route from the sensor information, and then activates the signage to display evacuation instructions appropriate to the location where each sign is installed. The paper presents a prototype of the proposed signage system, the execution time required to find a route, and directions for future research. The system provides a natural, intelligent evacuation-route interface for local or remote facility management, efficiently guiding people to a safe exit under emergency conditions.
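
The evacuation algorithm is described only at the system level; one plausible minimal reading is a shortest-path search over the building's corridor graph in which edges flagged as hazardous by WSN nodes are heavily penalized. The sketch below assumes that reading, with an invented graph, distances, and hazard penalty.

```python
import heapq

# Hypothetical corridor graph: node -> [(neighbour, length_m), ...]
GRAPH = {
    "lobby":  [("hall_A", 10.0), ("hall_B", 12.0)],
    "hall_A": [("exit_1", 8.0), ("hall_B", 5.0)],
    "hall_B": [("exit_2", 15.0)],
    "exit_1": [],
    "exit_2": [],
}
EXITS = {"exit_1", "exit_2"}

def safe_route(start, hazardous_edges, hazard_penalty=1000.0):
    """Dijkstra search to the nearest exit; edges reported unsafe by WSN
    sensors get a large penalty so they are only used as a last resort."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in EXITS:                       # first exit popped is the nearest
            path = [u]
            while u != start:
                u = prev[u]
                path.append(u)
            return list(reversed(path)), d
        if d > dist.get(u, float("inf")):
            continue
        for v, length in GRAPH[u]:
            w = length + (hazard_penalty if (u, v) in hazardous_edges else 0.0)
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return None, float("inf")

if __name__ == "__main__":
    # WSN reports smoke on the corridor between hall_A and exit_1.
    print(safe_route("lobby", hazardous_edges={("hall_A", "exit_1")}))
```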

ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법 (Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion)

  • 정석우; 이연성; 이경택
    • 한국정보통신학회 학술대회논문집, 2022년도 추계학술대회, pp. 466-468, 2022
  • 3D reconstruction is an important topic used in AR, XR, and the metaverse. To perform 3D reconstruction, a depth map must be obtained with devices such as a stereo camera or a ToF sensor. We devised a method that uses the two sensors in a complementary way to obtain precise 3D information. First, calibration of the two cameras is applied to align the color information with the depth information. The depth maps from the two sensors are then fused through 3D registration and reprojection. The fused 3D reconstruction data were compared with precise reference data obtained with an RTC360, and the commercial program Geomagic Wrap was used to analyze the average distance error. The proposed method was implemented and experiments were conducted with real-world data.
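
The alignment step above reprojects ToF depth into the color/stereo camera after calibration so the two depth maps can be fused. Below is a simplified sketch of that reprojection plus a trivial fusion rule; the intrinsics, extrinsics, and depth maps are placeholders, not values from the paper.

```python
import numpy as np

# Placeholder intrinsics for the ToF and colour (stereo) cameras.
K_TOF = np.array([[280.0, 0.0, 160.0], [0.0, 280.0, 120.0], [0.0, 0.0, 1.0]])
K_RGB = np.array([[520.0, 0.0, 320.0], [0.0, 520.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                       # ToF -> colour rotation (from calibration)
t = np.array([0.05, 0.0, 0.0])      # ToF -> colour translation [m]

def reproject_tof(depth_tof, shape_rgb):
    """Back-project every valid ToF depth pixel, move it into the colour
    frame, and splat it into a depth map aligned with the colour image."""
    h, w = depth_tof.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_tof.ravel()
    valid = z > 0
    pix = np.stack([u.ravel(), v.ravel(), np.ones(z.size)], axis=0)[:, valid]
    pts_tof = np.linalg.inv(K_TOF) @ pix * z[valid]       # 3D in ToF frame
    pts_rgb = R @ pts_tof + t[:, None]                    # 3D in colour frame
    proj = K_RGB @ pts_rgb
    uu = np.round(proj[0] / proj[2]).astype(int)
    vv = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros(shape_rgb)
    ok = (uu >= 0) & (uu < shape_rgb[1]) & (vv >= 0) & (vv < shape_rgb[0])
    out[vv[ok], uu[ok]] = pts_rgb[2, ok]
    return out

def fuse(depth_stereo, depth_tof_aligned):
    """Simple fusion rule: trust ToF where it is valid, stereo elsewhere."""
    return np.where(depth_tof_aligned > 0, depth_tof_aligned, depth_stereo)

if __name__ == "__main__":
    tof = np.full((240, 320), 2.0)          # flat wall 2 m away (synthetic)
    stereo = np.full((480, 640), 2.1)       # slightly biased stereo depth
    fused = fuse(stereo, reproject_tof(tof, (480, 640)))
    print(fused.shape, float(fused.min()), float(fused.max()))
```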


드론 기술을 이용한 부력 조형물의 자세 제어 (Posture control of buoyancy sculptures using drone technology)

  • 강진구
    • 디지털산업정보학회논문지, Vol. 14, No. 4, pp. 1-7, 2018
  • Floating sculptures in the form of ad-balloons are commonly held in place with ropes. Indoors, air flow is much weaker than outdoors, and users of buoyant sculptures want them to maintain a desired posture without being tethered. This study applies drone technology to buoyant sculptures: because a drone can move vertically and horizontally and hold its attitude, the technology transfers readily to buoyancy sculptures. We therefore studied a control system for a buoyant sculpture based on drone technology, one that can maintain a desired posture at a constant height. The overall structure is made of a light fiber material, with helium gas providing near-neutral buoyancy to support the sculpture. The controller is built around an ARM STM32F103CB, together with gyro, acceleration, and geomagnetic sensors and small and medium-sized BLDC motors. The scheduling of the control system was considered carefully in the design of the controller, because the role of every component becomes very important. Communication is divided into sensor-fusion communication and interface communication with the main controller, and each communication path is designed to be extensible. The study was implemented to respond actively from the viewpoint of posture control using drone technology.
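
The controller above fuses gyro, acceleration, and geomagnetic sensor data to hold attitude, but the abstract does not state the fusion algorithm. As one plausible baseline, the sketch below is a generic complementary filter for roll and pitch from gyro and accelerometer samples; the sample data and blending coefficient are illustrative.

```python
import math

def accel_angles(ax, ay, az):
    """Roll and pitch implied by the gravity vector measured by the accelerometer."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Blend integrated gyro rates (smooth, but drifting) with accelerometer
    angles (noisy, but drift-free). alpha near 1 trusts the gyro short-term."""
    roll = pitch = 0.0
    for gx, gy, _gz, ax, ay, az in samples:
        acc_roll, acc_pitch = accel_angles(ax, ay, az)
        roll = alpha * (roll + gx * dt) + (1.0 - alpha) * acc_roll
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * acc_pitch
    return math.degrees(roll), math.degrees(pitch)

if __name__ == "__main__":
    # 100 synthetic IMU samples: slight constant tilt, no rotation rate.
    samples = [(0.0, 0.0, 0.0, 0.17, 0.0, 9.8)] * 100
    print("roll/pitch [deg]:", complementary_filter(samples))
```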