• Title/Summary/Keyword: Multi-Modal Sensor Fusion


Development of Multi-Sensor Station for u-Surveillance to Collaboration-Based Context Awareness (협업기반 상황인지를 위한 u-Surveillance 다중센서 스테이션 개발)

  • Yoo, Joon-Hyuk;Kim, Hie-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.18 no.8 / pp.780-786 / 2012
  • Surveillance has become one of the promising application areas of wireless sensor networks, which allow pervasive monitoring of environmental phenomena of interest by facilitating context awareness through sensor fusion. Existing systems that depend on post-mortem context analysis of sensor data on a centralized server expose several shortcomings, including a single point of failure, wasteful energy consumption due to unnecessary data transfer, and a lack of scalability. Taking the opposite direction, this paper proposes energy-efficient distributed context-aware surveillance in which the sensor nodes of the wireless sensor network collaborate with their neighbors in a distributed manner to analyze and become aware of the surrounding context. We design and implement multi-modal sensor stations for use as sensor nodes in a wireless sensor network that realizes this distributed context awareness. This paper presents initial experimental performance results for the proposed system. The results show that the multi-modal sensing performance of our sensor station, a key enabling factor for distributed context awareness, is comparable to that of each independent sensor setting. They also show that its initial context-awareness performance is satisfactory for a set of introductory surveillance scenarios in the current interim stage of our ongoing research.
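As a loose illustration of the in-network collaboration this abstract describes (not the paper's implementation), the following Python sketch shows a node that fuses its own multi-modal readings into a local score and raises an event only when neighboring nodes agree, so raw data never has to travel to a central server. The signal names, weights, and threshold are invented assumptions.

```python
# Minimal sketch of in-network collaborative context awareness: each node
# scores its own multi-modal readings and raises an event only when a
# majority of neighbor reports agree, instead of sending raw data upstream.
# All modality names and weights are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class LocalObservation:
    pir_motion: bool        # passive infrared motion flag
    sound_level: float      # normalized acoustic energy, 0..1
    vibration: float        # normalized vibration energy, 0..1

def local_score(obs: LocalObservation) -> float:
    """Fuse a node's own modalities into a single intrusion score."""
    return 0.5 * float(obs.pir_motion) + 0.3 * obs.sound_level + 0.2 * obs.vibration

def collaborative_decision(own: LocalObservation,
                           neighbor_scores: List[float],
                           threshold: float = 0.6) -> bool:
    """Declare an event only if the node and a majority of neighbors agree."""
    scores = [local_score(own)] + neighbor_scores
    agreeing = sum(s >= threshold for s in scores)
    return agreeing > len(scores) // 2

# Example: the node's own reading plus two compact neighbor reports (scores, not raw data)
print(collaborative_decision(LocalObservation(True, 0.7, 0.4), [0.65, 0.2]))
```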

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.499-505 / 2012
  • Objective: To study how interaction between mobile devices and users is being transformed, through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and more recently touch and gesture recognition have been researched and put into use. In the future, the technology is expected to evolve toward multi-modal interfaces that fuse the visual and auditory senses, and toward 3D multi-modal interfaces that employ three-dimensional virtual worlds and brain waves. Method: Within the development of computer interfaces, which follows the evolution of mobile devices, the trends and development of actively researched gesture interfaces and related technologies are studied comprehensively. Based on an investigation of gesture-based information-gathering techniques, the interfaces are separated into four categories: sensor-based, touch-based, visual, and multi-modal gesture interfaces. Each category is examined through its technology trends and existing real-world examples. Through this method, the transformation of the interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the interaction between existing static machines and users. It is therefore an important element technology for making human-machine interaction more dynamic. Application: The results of this study may help in developing the gesture interface designs currently in use.

Emotion Recognition Algorithm Based on Minimum Classification Error incorporating Multi-modal System (최소 분류 오차 기법과 멀티 모달 시스템을 이용한 감정 인식 알고리즘)

  • Lee, Kye-Hwan;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.76-81 / 2009
  • We propose an effective emotion recognition algorithm based on the minimum classification error (MCE) criterion incorporated into a multi-modal system. Emotion recognition is performed with a Gaussian mixture model (GMM) trained by the MCE method operating on the log-likelihood. In particular, the proposed technique is based on the fusion of feature vectors derived from the voice signal and from the galvanic skin response (GSR) measured by a body sensor. The experimental results indicate that the proposed MCE-based approach incorporating the multi-modal system outperforms the conventional approach.
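A minimal sketch of the pipeline this abstract describes, with one simplification: the GMMs below are fit by standard EM (maximum likelihood) via scikit-learn rather than the paper's discriminative MCE refinement, and the feature dimensions and data are invented stand-ins. It shows feature-level fusion of voice and GSR vectors followed by classification on GMM log-likelihood.

```python
# Sketch of GMM-based emotion classification on fused voice + GSR feature
# vectors (toy data; EM training stands in for the paper's MCE training).
import numpy as np
from sklearn.mixture import GaussianMixture

def fuse_features(voice_feats: np.ndarray, gsr_feats: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate per-sample voice and GSR vectors."""
    return np.hstack([voice_feats, gsr_feats])

def train_gmms(X: np.ndarray, y: np.ndarray, n_components: int = 4):
    """Fit one GMM per emotion label on the fused feature vectors."""
    return {label: GaussianMixture(n_components, covariance_type="diag",
                                   random_state=0).fit(X[y == label])
            for label in np.unique(y)}

def classify(gmms, x: np.ndarray):
    """Pick the emotion whose GMM gives the highest log-likelihood."""
    return max(gmms, key=lambda label: gmms[label].score(x.reshape(1, -1)))

# Toy usage: random stand-in features (2 emotions, 20-dim voice + 4-dim GSR)
rng = np.random.default_rng(0)
X = fuse_features(rng.normal(size=(200, 20)), rng.normal(size=(200, 4)))
y = rng.integers(0, 2, size=200)
print(classify(train_gmms(X, y), X[0]))
```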

Development of Driver's Emotion and Attention Recognition System using Multi-modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 운전자의 감정 및 주의력 인식 기술 개발)

  • Han, Cheol-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.754-761 / 2008
  • As the automobile industry and its technologies develop, drivers tend to be more concerned about service matters than mechanical matters. For this reason, interest in recognizing human cognition and emotion to create a safe and convenient driving environment is growing. Such recognition belongs to emotion engineering, a technology studied since the late 1980s to provide people with human-friendly services. Emotion engineering analyzes people's emotions through their faces, voices, and gestures, so applying it to the automobile allows various kinds of services to be supplied for each driver's situation and helps drivers drive safely. Furthermore, accidents caused by careless driving or dozing off while driving can be prevented by recognizing the driver's gestures. The purpose of this paper is to develop a system that can recognize the driver's emotion and attention for safe driving. First, we detect signals of the driver's emotion using bio-motion signals, along with sleepiness and attention, and then build several types of databases. By analyzing these databases, we find distinctive features of the driver's emotion, sleepiness, and attention, and fuse the results through a multi-modal method so that the system can be developed.
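The final fusion step can be pictured as a simple decision-level combination of the three estimates the abstract mentions (emotion, sleepiness, attention). The weights, thresholds, and labels below are illustrative assumptions, not the paper's trained system.

```python
# Hypothetical decision-level fusion of separate driver-state estimators.
from typing import Dict

def fuse_driver_state(emotion_probs: Dict[str, float],
                      drowsiness: float,          # 0 (alert) .. 1 (asleep)
                      attention: float) -> str:   # 0 (distracted) .. 1 (focused)
    """Combine modality outputs into a single driving-safety decision."""
    stress = emotion_probs.get("anger", 0.0) + emotion_probs.get("fear", 0.0)
    risk = 0.4 * stress + 0.4 * drowsiness + 0.2 * (1.0 - attention)
    if risk > 0.6:
        return "warn_driver"
    if risk > 0.3:
        return "monitor_closely"
    return "normal"

# Example: an angry, moderately drowsy, somewhat distracted driver
print(fuse_driver_state({"anger": 0.7, "neutral": 0.3}, drowsiness=0.5, attention=0.4))
```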

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more widespread, interaction between intelligent robots (computers) and humans is becoming increasingly important, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from the speech signal and the facial image. Second, we apply both Bayesian Learning (BL) and Principal Component Analysis (PCA), and finally we classify five emotion patterns (normal, happy, anger, surprise, and sad). We also experiment with decision fusion and feature fusion to improve the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined through a fuzzy membership function. In the feature fusion method, superior features are selected through Sequential Forward Selection (SFS) and fed into a Multi-Layer Perceptron (MLP) neural network to classify the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
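A rough illustration of the feature-fusion path follows, using scikit-learn's SequentialFeatureSelector and MLPClassifier as stand-ins for the paper's SFS and MLP. The data, feature dimensions, and hyperparameters are invented for the sketch.

```python
# Sketch of the feature-fusion path: concatenate speech + facial features,
# greedily select a subset with sequential forward selection, then train an
# MLP on the selected subset to classify five emotion classes.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X_speech, X_face = rng.normal(size=(120, 8)), rng.normal(size=(120, 8))
X = np.hstack([X_speech, X_face])            # feature-level fusion
y = rng.integers(0, 5, size=120)             # five emotion classes (toy labels)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=1)
sfs = SequentialFeatureSelector(mlp, n_features_to_select=6,
                                direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)              # greedy forward selection of 6 features

mlp.fit(X_sel, y)                            # train the MLP on the selected subset
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```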

Dual Foot-PDR System Considering Lateral Position Error Characteristics

  • Lee, Jae Hong;Cho, Seong Yun;Park, Chan Gook
    • Journal of Positioning, Navigation, and Timing / v.11 no.1 / pp.35-44 / 2022
  • In this paper, a dual-foot (DF) PDR system is proposed that fuses integration (IA)-based PDR systems applied independently on both shoes. The horizontal positions of the two shoes estimated by each PDR system are fused based on a particle filter. The proposed method bounds the position error without an additional sensor, even as the walking time increases. The particle distribution is non-Gaussian so as to express the lateral error caused by systematic drift. Assuming that the shoe position is the pedestrian position, the multi-modal position distribution can be fused into one using a Gaussian sum. The fused pedestrian position is used as a measurement for each particle filter so that the position error is corrected. Experimental results show that the pedestrian position can be estimated effectively using only the inertial sensors attached to both shoes.
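A simplified sketch of the fusion idea: the two shoe estimates are combined with an inverse-covariance weighted average (a simplification of the paper's Gaussian-sum fusion over particle clouds), and the fused position is fed back as a measurement that reweights one shoe's particles. Noise values, shapes, and the 2-D reduction are illustrative assumptions.

```python
# Toy 2-D version of dual-foot fusion and the particle measurement update.
import numpy as np

def fuse_positions(pos_l, cov_l, pos_r, cov_r):
    """Information-weighted combination of left/right shoe position estimates."""
    info_l, info_r = np.linalg.inv(cov_l), np.linalg.inv(cov_r)
    cov_f = np.linalg.inv(info_l + info_r)
    return cov_f @ (info_l @ pos_l + info_r @ pos_r), cov_f

def reweight_particles(particles, weights, fused_pos, meas_std=0.3):
    """Particle-filter measurement update using the fused pedestrian position."""
    d2 = np.sum((particles - fused_pos) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / meas_std**2)
    return w / np.sum(w)

# Toy usage: left/right shoe estimates and a small particle cloud for one shoe
pos_l, pos_r = np.array([1.0, 0.2]), np.array([1.1, -0.2])
cov = 0.1 * np.eye(2)
fused, _ = fuse_positions(pos_l, cov, pos_r, cov)
particles = np.random.default_rng(2).normal(pos_l, 0.2, size=(100, 2))
weights = np.full(100, 1 / 100)
print(fused, reweight_particles(particles, weights, fused)[:3])
```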

Ontology-Based Dynamic Context Management and Spatio-Temporal Reasoning for Intelligent Service Robots (지능형 서비스 로봇을 위한 온톨로지 기반의 동적 상황 관리 및 시-공간 추론)

  • Kim, Jonghoon;Lee, Seokjun;Kim, Dongha;Kim, Incheol
    • Journal of KIISE / v.43 no.12 / pp.1365-1375 / 2016
  • One of the most important capabilities for autonomous service robots working in living environments is to recognize and understand the correct context in a dynamically changing environment. To generate high-level context knowledge for decision-making from multiple sensory data streams, many technical problems must be solved, including multi-modal sensory data fusion, uncertainty handling, symbolic knowledge grounding, time dependency, dynamics, and time-constrained spatio-temporal reasoning. Considering these problems, this paper proposes an effective dynamic context management and spatio-temporal reasoning method for intelligent service robots. To guarantee efficient context management and reasoning, our algorithm generates low-level context knowledge reactively for every incoming sensory or perception datum, while postponing high-level context knowledge generation until it is demanded by the decision-making module. When high-level context knowledge is demanded, it is derived through backward spatio-temporal reasoning. In experiments with a Turtlebot using a Kinect visual sensor, the dynamic context management and spatio-temporal reasoning system based on the proposed method showed high performance.
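The split between reactive low-level fact generation and on-demand high-level derivation can be sketched as below. The class, predicates, and the single hand-written rule are invented for illustration and are not the paper's ontology or reasoner.

```python
# Illustrative "reactive low-level, on-demand high-level" context manager:
# raw percepts are asserted as timestamped facts immediately, while a
# high-level context query is only evaluated when the decision module asks.
import time

class ContextManager:
    def __init__(self):
        self.facts = []                       # (timestamp, predicate, args)

    def assert_low_level(self, predicate, *args):
        """Reactive path: record every incoming percept as a low-level fact."""
        self.facts.append((time.time(), predicate, args))

    def query_high_level(self, horizon_s=5.0):
        """Deferred path: derive high-level context only when it is demanded."""
        now = time.time()
        recent = [f for f in self.facts if now - f[0] <= horizon_s]
        cup_seen = any(p == "detected" and a == ("cup",) for _, p, a in recent)
        near_cup = any(p == "near" and a == ("robot", "cup") for _, p, a in recent)
        return "can_grasp_cup" if cup_seen and near_cup else "unknown"

cm = ContextManager()
cm.assert_low_level("detected", "cup")       # from the perception stream
cm.assert_low_level("near", "robot", "cup")  # from localization
print(cm.query_high_level())                 # high-level context derived on demand
```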