• Title/Summary/Keyword: Human Motion Recognition (휴먼 동작 인식)

16 search results

Dynamic Bayesian Network-Based Gait Analysis (동적 베이스망 기반의 걸음걸이 분석)

  • Kim, Chan-Young; Sin, Bong-Kee
    • Journal of KIISE: Software and Applications / v.37 no.5 / pp.354-362 / 2010
  • This paper proposes a new method for hierarchical analysis of human gait that divides the motion into gait direction and gait posture using a dynamic Bayesian network (DBN). Based on the Factorial HMM (FHMM), a type of DBN, we design a Gait Motion Decoder (GMD) with a circular state-space architecture that fits human walking behavior naturally. Most previous studies focused on human identification, were limited to certain viewing angles, and did not model the walking action itself. In contrast, this work models pedestrian pose and posture explicitly and separately to recognize gait direction and detect orientation changes. Experiments showed 96.5% accuracy in pose identification. This work is among the first efforts to analyze gait motion into gait pose and gait posture, and it could be applied to a broad class of human activities in a number of situations.
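
The circular state-space idea behind the GMD can be illustrated with a minimal sketch: directions arranged on a circle, a transition matrix that favors staying or turning to an adjacent direction, and standard Viterbi decoding over per-frame direction likelihoods. This is not the paper's FHMM implementation; the parameters (`n_states`, `p_stay`) and the plain-HMM simplification are assumptions for illustration only.

```python
import numpy as np

def circular_transition_matrix(n_states=8, p_stay=0.8):
    """Transitions over gait directions on a circle: a walker keeps its
    direction with p_stay, or turns to an adjacent direction."""
    p_turn = (1.0 - p_stay) / 2.0
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        A[i, i] = p_stay
        A[i, (i - 1) % n_states] = p_turn
        A[i, (i + 1) % n_states] = p_turn
    return A

def viterbi(A, B, pi):
    """Most likely direction path given transitions A, per-frame
    likelihoods B (T x N), and initial distribution pi (log-space)."""
    T, N = B.shape
    logA = np.log(A + 1e-12)
    delta = np.log(pi + 1e-12) + np.log(B[0] + 1e-12)
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA        # prev state x next state
        psi[t] = scores.argmax(axis=0)        # best predecessor per state
        delta = scores.max(axis=0) + np.log(B[t] + 1e-12)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

The circular structure keeps direction changes gradual, which matches how a pedestrian's heading evolves between frames.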

Human-Computer Interface using sEMG according to the Number of Electrodes (전극 개수에 따른 근전도 기반 휴먼-컴퓨터 인터페이스의 정확도에 대한 연구)

  • Lee, Seulbi; Chee, Youngjoon
    • Journal of the HCI Society of Korea / v.10 no.2 / pp.21-26 / 2015
  • An NUI (Natural User Interface) system conveys the user's natural movements, or signals from the human body, to the machine. sEMG (surface electromyogram) can be observed whenever a muscle exerts effort, even without actual movement, which camera- and accelerometer-based NUI systems cannot detect. In an sEMG-based movement recognition system, a minimal number of electrodes is preferred to reduce inconvenience, so we analyzed how recognition accuracy decreases as the number of electrodes decreases. For four kinds of movement intention without actual movement, extension (up), flexion (down), abduction (right), and adduction (left), a multilayer perceptron classifier was used with RMS (root mean square) features from the sEMG. The classification accuracy was 91.9% with four channels, 87.0% with three channels, and 78.9% with two channels. To increase the two-channel accuracy, RMS values from previous time epochs (50-200 ms) were added as features. With the RMS values from 150 ms, the accuracy increased from 78.9% to 83.6%. The loss in accuracy with a minimal number of electrodes can thus be partly compensated by using additional features from previous RMS values.
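
The feature pipeline described above can be sketched as sliding-window RMS extraction plus stacking of previous epochs. The window/step sizes and the `n_prev` stacking depth are illustrative assumptions, not the paper's exact settings; the classifier itself (an MLP in the paper) is omitted here.

```python
import numpy as np

def rms_features(semg, fs=1000, win_ms=200, step_ms=50):
    """Sliding-window RMS from multi-channel sEMG.

    semg: (n_samples, n_channels) array; fs: sampling rate in Hz.
    Returns an (n_windows, n_channels) array of RMS values.
    """
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    feats = []
    for start in range(0, semg.shape[0] - win + 1, step):
        seg = semg[start:start + win]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.array(feats)

def stacked_features(feats, n_prev=3):
    """Augment each window's RMS with the previous n_prev windows' RMS,
    mirroring the paper's idea of recovering accuracy with fewer
    electrodes by adding features from earlier epochs."""
    rows = [np.concatenate(feats[i - n_prev:i + 1].tolist())
            for i in range(n_prev, len(feats))]
    return np.array(rows)
```

With two channels and three previous windows, each classifier input grows from 2 to 8 features, which is the kind of augmentation the paper reports improving accuracy.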

Object Detection Using Predefined Gesture and Tracking (약속된 제스처를 이용한 객체 인식 및 추적)

  • Bae, Dae-Hee; Yi, Joon-Hwan
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.43-53 / 2012
  • In this paper, a gesture-based user interface is proposed that detects an object using a predefined gesture and then tracks the detected object. For object detection, moving objects in a frame are computed by comparing multiple previous frames, and the predefined gesture is used to select the target object among those moving objects; any object performing the predefined gesture can therefore serve as a controller. We also propose an object tracking algorithm, a density-based mean-shift algorithm, that uses the color distribution of the target object. The proposed tracker follows a target crossing a background of similar color more accurately than existing techniques. Experimental results show that the proposed detection and tracking algorithms achieve higher detection capability with less computational complexity.
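
The core mean-shift step the abstract builds on can be sketched as follows: given a per-pixel map of target-color likelihoods (e.g. a histogram back-projection), a fixed-size window is moved to the centroid of the likelihood mass under it until it stops moving. This is a generic mean-shift sketch, not the paper's density-based variant; the window parameterization is an assumption.

```python
import numpy as np

def mean_shift(weights, window, n_iter=20):
    """Move a fixed-size window to the centroid of the weight
    (color back-projection) map under it, iterating to convergence.

    weights: 2-D array of per-pixel target-color likelihoods.
    window: (row, col, height, width) initial search window.
    """
    r, c, h, w = window
    H, W = weights.shape
    for _ in range(n_iter):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break                              # no target mass in window
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        # shift = centroid of mass relative to window center
        dy = (ys * patch).sum() / total - (patch.shape[0] - 1) / 2.0
        dx = (xs * patch).sum() / total - (patch.shape[1] - 1) / 2.0
        nr = int(round(min(max(r + dy, 0), H - h)))
        nc = int(round(min(max(c + dx, 0), W - w)))
        if (nr, nc) == (r, c):
            break                              # converged
        r, c = nr, nc
    return r, c, h, w
```

A density-weighted variant, as the paper proposes, would reweight `weights` to discount colors that are also common in the background, which is what helps when target and background colors are similar.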

Smart HCI Based on the Informations Fusion of Biosignal and Vision (생체 신호와 비전 정보의 융합을 통한 스마트 휴먼-컴퓨터 인터페이스)

  • Kang, Hee-Su; Shin, Hyun-Chool
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.4 / pp.47-54 / 2010
  • We propose a smart human-computer interface that replaces the conventional mouse. The interface controls the cursor and issues commands using only the hand, without any physical device. Four finger motions (left click, right click, hold, drag) are enough to express all mouse functions, and cursor movement is controlled using image processing. Inference is based on the entropy of the EMG signal, Gaussian modeling, and maximum-likelihood estimation. In the image processing for cursor control, color recognition locates the fingertip center point from a marker, and that point is mapped onto the cursor. The accuracy of finger-movement inference is over 95%, and cursor control works naturally without delay. We implemented the whole system to verify its performance and utility.
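
The inference chain named above (EMG entropy as the feature, per-class Gaussian models, maximum-likelihood decision) can be sketched minimally. Everything here, the histogram-based entropy estimate, the 1-D Gaussian per class, and all names, is an illustrative assumption; the paper's actual features and model structure may differ.

```python
import numpy as np

def emg_entropy(window, bins=32):
    """Shannon entropy (bits) of the amplitude histogram of one EMG window."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

class GaussianMLClassifier:
    """One 1-D Gaussian over the entropy feature per motion class;
    predict by maximum likelihood."""
    def fit(self, feats, labels):
        self.classes = sorted(set(labels))
        self.params = {}
        for c in self.classes:
            x = np.array([f for f, l in zip(feats, labels) if l == c])
            self.params[c] = (x.mean(), x.std() + 1e-6)
        return self

    def predict(self, f):
        def loglik(c):
            mu, sd = self.params[c]
            return -np.log(sd) - 0.5 * ((f - mu) / sd) ** 2
        return max(self.classes, key=loglik)
```

Active finger motions raise the amplitude spread of the EMG window and hence its histogram entropy, which is why entropy can separate motion classes from rest.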

Design and Construction of 3D Gesture Database for Analyzing Human Behaviors (휴먼 행동 분석을 위한 3차원 제스처 데이터베이스의 설계 및 구축)

  • Roh M.-C.; Hwang B.-W.; Kim S.; Shin H.-K.; Park A-Y.; Lee S.-W.
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.895-897 / 2005
  • Human behavior analysis is an active research area in computer vision and pattern recognition. To analyze and evaluate such behaviors, a database covering gestures of various kinds in various environments is essential. This paper introduces the KU (Korea University) gesture database, collected from 40 subjects, which contains 14 normal gestures that occur in daily life, 10 abnormal gestures that can occur in emergency situations, and 30 command gestures. Each gesture includes 2-D gesture videos obtained with a stereo camera, 3-D model coordinate data obtained with a 3-D motion-capture camera, and 2-D silhouette videos.


A Study on Smart Touch Projector System Technology Using Infrared (IR) Imaging Sensor (적외선 영상센서를 이용한 스마트 터치 프로젝터 시스템 기술 연구)

  • Lee, Kuk-Seon; Oh, Sang-Heon; Jeon, Kuk-Hui; Kang, Seong-Soo; Ryu, Dong-Hee; Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.15 no.7 / pp.870-878 / 2012
  • Recently, the rapid development of computer and sensor technologies has driven various user interface (UI) technologies based on user experience (UX). In this study, we develop a smart touch projector system based on an IR sensor and image processing. In the proposed system, a user controls the computer through control events derived from the gestures of an IR pen used as an input device. In the IR image, we extract the movement (gesture) of the devised pen and track it to recognize the gesture pattern. To correct the error between the coordinates of the input image sensor and those of the display device (projector), we also propose a coordinate-correction algorithm that improves the accuracy of operation. With this system, a next-generation human-computer interaction technology, control events for the equipped computer can be issued on the projected screen without manipulating the computer directly.
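
A standard way to realize the camera-to-projector coordinate correction the abstract mentions is a planar homography fitted from calibration touches at known screen points. The paper does not specify its algorithm, so this DLT-style sketch is an assumed stand-in, with illustrative function names.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography mapping src -> dst (lists of (x, y),
    at least 4 correspondences), e.g. IR-camera pen positions to
    projector screen coordinates, via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # solution is the null-space vector: last row of V^T from the SVD
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pt):
    """Map one camera point to projector coordinates."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w
```

Four corner touches at system startup are enough to fit `H`; every subsequent pen position is then mapped through it before being turned into a cursor event.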