• Title/Summary/Keyword: Body gesture


Design and Implementation of Motion-based Interaction in AR Game (증강현실 게임에서의 동작 기반 상호작용 설계 및 구현)

  • Park, Jong-Seung;Jeon, Young-Jun
    • Journal of Korea Game Society / v.9 no.5 / pp.105-115 / 2009
  • This article proposes a design and implementation methodology for a gesture-based interface for augmented reality games. Gesture-based augmented reality games are a promising area for immersive future games driven by human body motion. However, due to the instability of current motion recognition technologies, most previous development processes have introduced many ad hoc methods to handle the shortcomings and, hence, the game architectures have become highly irregular and inefficient. This article proposes an efficient development methodology for gesture-based augmented reality games by prototyping a table tennis game with a gesture interface. We also verify the applicability of the prototyping mechanism by implementing and demonstrating the augmented reality table tennis game. In the experiments, the implemented prototype stably tracked real rackets, allowing fast movements and interactions without delay.

  • PDF
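The abstract does not detail how the rackets are tracked; as a minimal sketch under the assumption of colour-based tracking (every name, value, and threshold below is illustrative, not taken from the paper), a per-frame centroid tracker might look like:

```python
import numpy as np

def track_racket(frame, lower, upper):
    """Locate a racket in an RGB frame by colour thresholding.

    Returns the (row, col) centroid of pixels inside [lower, upper],
    or None when no pixel matches.
    """
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 64x64 frame with a red "racket" blob at rows 10-19, cols 30-39.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[10:20, 30:40] = (200, 20, 20)

centroid = track_racket(frame, lower=(150, 0, 0), upper=(255, 80, 80))
print(centroid)  # (14.5, 34.5)
```

Because thresholding and a single mean are cheap, such a tracker can run per frame without the delay the experiments measure.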

Robot Gesture Recognition System based on PCA algorithm (PCA 알고리즘 기반의 로봇 제스처 인식 시스템)

  • Youk, Yui-Su;Kim, Seung-Young;Kim, Sung-Ho
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.400-402 / 2008
  • Human-computer interaction (HCI) technology, which plays an important role in the exchange of information between humans and computers, is a key field of information technology. Recently, studies in which robots and control devices are controlled by the movements of a person's body or hands, without conventional input devices such as a keyboard and mouse, have been conducted from diverse angles, and their importance has been steadily increasing. This study proposes a method for recognizing user gestures by applying measurements from an acceleration sensor to the PCA algorithm.

  • PDF
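The PCA step described above can be sketched as follows; the feature layout, dimensions, and function names are assumptions for illustration, not from the paper:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]                 # scores, components

rng = np.random.default_rng(0)
# 100 windows of 6 accelerometer features; variance concentrated on axis 0,
# standing in for a gesture whose signal dominates one measurement channel.
X = rng.normal(size=(100, 6)) * np.array([5.0, 1, 1, 0.1, 0.1, 0.1])
Z, components = pca_project(X, k=2)
print(Z.shape)  # (100, 2)
```

A recognizer would then compare the low-dimensional scores `Z` against stored gesture templates (e.g. by nearest centroid), which is the usual role of PCA in such systems.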

Robust Gesture Spotting and Recognition in Continuous Full Body Gesture (연속적인 전신 제스처에서 강인한 행동 적출 및 인식)

  • Park A.-V.;Shin H.-K.;Lee S.-W
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.898-900 / 2005
  • For robust gesture recognition, a technique is needed that segments only the meaningful portions from a continuous stream of full-body gestures. However, because meaningless movements are hard to define and model, segmenting only the important gestures from a continuous stream is a difficult problem. In this paper, we propose a method that simultaneously segments and recognizes the meaningful portions of continuous full-body gesture input. To remove meaningless movements and spot only meaningful gestures, we propose a garbage model. Only the portions passed by this garbage model are used as input to the HMMs, and the trained HMM with the highest probability is selected as the recognized gesture. The proposed method was evaluated on 3D motion capture data from 20 subjects and 80 gesture sequences generated using Principal Component Analysis, achieving a recognition rate of 98.3% and a spotting rate of 94.8% on continuous gesture input streams containing both meaningful and meaningless movements.

  • PDF
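The selection step above (score a segment under every trained HMM plus a garbage model, keep the argmax) can be sketched with a discrete-observation forward algorithm; all model parameters below are toy values, not the paper's:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling to avoid underflow)."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

# Two toy gesture HMMs over a 2-symbol alphabet, plus a flat "garbage"
# model that accepts anything with mediocre likelihood.
models = {
    "wave":  (np.array([1.0, 0.0]),
              np.array([[0.7, 0.3], [0.3, 0.7]]),
              np.array([[0.9, 0.1], [0.1, 0.9]])),
    "punch": (np.array([1.0, 0.0]),
              np.array([[0.7, 0.3], [0.3, 0.7]]),
              np.array([[0.1, 0.9], [0.9, 0.1]])),
    "garbage": (np.array([0.5, 0.5]),
                np.array([[0.5, 0.5], [0.5, 0.5]]),
                np.array([[0.5, 0.5], [0.5, 0.5]])),
}

obs = [0, 0, 0, 1, 1]          # symbol stream resembling "wave"
scores = {name: forward_loglik(obs, *m) for name, m in models.items()}
best = max(scores, key=scores.get)
print(best)  # wave
```

Segments whose best-scoring model is "garbage" are discarded, which is how a garbage model spots meaningful gestures in a continuous stream.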

Analysis of movement in <Shirley: Visions of Reality> (2013) (<셜리에 관한 모든 것>(2013)에 나타난 움직임 분석)

  • Moon, Jae-Cheol;Lee, Jin-Young
    • Journal of Korea Entertainment Industry Association / v.14 no.6 / pp.43-52 / 2020
  • This paper is a study of Gustav Deutsch's film Shirley: Visions of Reality (2013). The film transforms paintings by Edward Hopper into an homage film, giving the impression that the pictures are moving. In this regard, it raises the issue of 'remediation' between film and painting. In this study, we ask how the film dealt with movement in turning Hopper's paintings into cinema. To this end, we look at two aspects of movement: the actor's movement and the screen's movement, drawing on the concepts of the 'tableau vivant' and Agamben's notions of gesture and mediation. The actor's movement in the film is not an act of making and developing events; it is a gesture that presents the movement of a person's body and expression itself. It is not story-oriented acting but gesture in Giorgio Agamben's sense. Editing and camera movement are used while frontality is maintained, suggesting that the movement of the screen is the eye of the audience. At first glance, it embodies the voyeuristic gaze of the original work. However, the audience is not looking at the image unilaterally, as in mainstream fiction films; they are also being seen by that image. Likewise, the camera's movement toward the details of the screen shows the movement itself rather than serving as a means to reveal those details. The 'vision of reality' in a film is made through movement. The film questions the vision of reality between painting and film, between words and images. Movement is a means of mediating reality, but by revealing its mediated nature the film regains the 'lost gesture' of which Giorgio Agamben wrote. This tells us that the vision of reality appears when the film reveals, rather than obscures, its mediated nature.

AdaBoost-based Gesture Recognition Using Time Interval Window Applied Global and Local Feature Vectors with Mono Camera (모노 카메라 영상기반 시간 간격 윈도우를 이용한 광역 및 지역 특징 벡터 적용 AdaBoost기반 제스처 인식)

  • Hwang, Seung-Jun;Ko, Ha-Yoon;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.3 / pp.471-479 / 2018
  • Recently, smart TV set-top boxes based on Android and iOS have become common. This paper proposes a new approach that controls the TV with gestures, moving beyond the era of remote controls. The AdaBoost algorithm is applied to gesture recognition using a mono camera. First, body coordinates are extracted with Camshift-based body tracking and an estimation algorithm built on Gaussian background removal. Using global and local feature vectors, we recognize gestures with speed changes. By tracking the time-interval trajectories of the hand and wrist, an AdaBoost algorithm combined with the CART algorithm is used to train and classify gestures. The principal feature vectors with a high classification success rate are searched with the CART algorithm. As a result, 24 optimal feature vectors were found, showing a lower error rate (3.73%) and higher accuracy (95.17%) than the existing algorithm.
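The boosting stage the abstract describes can be sketched as below; this minimal version uses one-feature threshold stumps rather than full CART trees, and the data and parameters are synthetic:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with one-feature threshold stumps (labels in {-1,+1})."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # reweight mistakes upward
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# Toy trajectory features: 2 of 5 dimensions carry the gesture signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
clf = train_adaboost(X, y, n_rounds=20)
acc = (predict(clf, X) == y).mean()
```

The feature indices `j` chosen by the stumps play the role of the CART-driven feature selection the abstract mentions: dimensions that are never selected contribute nothing to the classifier.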

Feature Extraction Based on Hybrid Skeleton for Human-Robot Interaction (휴먼-로봇 인터액션을 위한 하이브리드 스켈레톤 특징점 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.178-183 / 2008
  • Human motion analysis is studied as a new method for human-robot interaction (HRI) because it involves key HRI techniques such as motion tracking and pose recognition. To analyze human motion, extracting features of the human body from sequential images plays an important role. After the silhouette of the human body is found in sequential images obtained by a CCD color camera, a skeleton model is frequently used to represent the human motion. In this paper, using the silhouette of the human body, we propose a feature extraction method based on a hybrid skeleton for detecting human motion. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
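The hybrid-skeleton method itself is not specified in the abstract; as a loose illustration of extracting body-part cues from a binary silhouette (the rules below are deliberate simplifications, not the paper's method):

```python
import numpy as np

def skeleton_endpoints(silhouette):
    """Crude body-part cues from a binary silhouette: the centroid,
    the topmost point (head) and the left/right extremes (hands)."""
    ys, xs = np.nonzero(silhouette)
    centroid = (float(ys.mean()), float(xs.mean()))
    head = (int(ys.min()), int(xs[ys == ys.min()].mean()))
    left = (int(ys[xs == xs.min()].mean()), int(xs.min()))
    right = (int(ys[xs == xs.max()].mean()), int(xs.max()))
    return centroid, head, left, right

# Synthetic "T-pose" silhouette: a vertical trunk plus horizontal arms.
sil = np.zeros((40, 40), dtype=bool)
sil[5:35, 18:22] = True     # trunk, rows 5-34
sil[10:13, 5:35] = True     # arms, rows 10-12

centroid, head, left, right = skeleton_endpoints(sil)
print(head, left, right)
```

A real skeleton model would thin the silhouette to a connected stick figure; the extremes above merely show how silhouette geometry yields per-frame features for motion analysis.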

User Detection and Main Body Parts Estimation using Inaccurate Depth Information and 2D Motion Information (정밀하지 않은 깊이정보와 2D움직임 정보를 이용한 사용자 검출과 주요 신체부위 추정)

  • Lee, Jae-Won;Hong, Sung-Hoon
    • Journal of Broadcast Engineering / v.17 no.4 / pp.611-624 / 2012
  • Gesture is the most intuitive means of communication apart from the voice. Accordingly, many studies have sought to control a computer with gesture input in place of the keyboard or mouse, and in such work user detection and main-body-part estimation are very important steps. In this paper, we propose a method for detecting user objects and estimating main body parts from inaccurate depth information for pose estimation. We present a user detection method that combines 2D information with 3D depth information, making it robust to changes in lighting and noise; it processes 2D signals as 1D signals, making it suitable for real time, and it uses previous object information, making it more accurate and robust. We also present a main-body-part estimation method that uses 2D contour information, 3D depth information, and tracking. Experiments show that the proposed user detection method is more robust than methods using only 2D information and detects objects accurately even with inaccurate depth information. The proposed main-body-part estimation method also overcomes the limitations of 2D contour information alone, which cannot detect body parts in occluded areas, and of color information, which is sensitive to changes in illumination or environment.
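The user-detection step can be illustrated by a depth-range threshold followed by selection of the largest connected component; this sketch omits the 2D motion cues and previous-object tracking the paper combines with depth:

```python
import numpy as np
from collections import deque

def detect_user(depth, near, far):
    """Segment the largest 4-connected blob whose depth lies in [near, far]."""
    mask = (depth >= near) & (depth <= far)
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    next_label = 0
    for sy, sx in zip(*np.nonzero(mask)):       # flood-fill each unlabeled pixel
        if labels[sy, sx]:
            continue
        next_label += 1
        q = deque([(sy, sx)])
        labels[sy, sx] = next_label
        size = 0
        while q:
            y, x = q.popleft()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = next_label
                    q.append((ny, nx))
        sizes[next_label] = size
    best = max(sizes, key=sizes.get)
    return labels == best                        # boolean user mask

depth = np.full((30, 30), 400)        # background at 4 m (values in cm)
depth[5:25, 10:20] = 150              # user at 1.5 m
depth[2:4, 2:4] = 160                 # small in-range noise blob
user = detect_user(depth, near=100, far=200)
print(user.sum())  # 200
```

Picking the largest blob suppresses exactly the kind of small spurious regions that inaccurate depth maps produce.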

A real-time robust body-part tracking system for intelligent environment (지능형 환경을 위한 실시간 신체 부위 추적 시스템 -조명 및 복장 변화에 강인한 신체 부위 추적 시스템-)

  • Jung, Jin-Ki;Cho, Kyu-Sung;Choi, Jin;Yang, Hyun S.
    • Proceedings of HCI Korea (한국HCI학회 학술대회논문집) / 2009.02a / pp.411-417 / 2009
  • We propose a robust body-part tracking system for intelligent environments that does not limit the freedom of users. Unlike previous gesture recognizers, we improved the generality of the system by adding the ability to recognize details, such as the difference between long and short sleeves. For precise tracking of each body part, we obtain images of the hands, head, and feet separately from a single camera, and when detecting each body part we choose the feature appropriate to that part. Using a calibrated camera, we lift the 2D detected body parts into a 3D posture. In experiments, the system showed strong hand tracking performance in real time (50 fps).

  • PDF
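Lifting 2D detections to 3D with a calibrated camera, as described above, follows standard pinhole back-projection; the intrinsics and depth value below are hypothetical:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with known depth Z to camera-frame XYZ under the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# Hypothetical intrinsics for a 640x480 camera (focal lengths in pixels).
fx = fy = 500.0
cx, cy = 320.0, 240.0

# A hand detected 100 px right of the principal point, 2 m from the camera.
hand_3d = backproject(420.0, 240.0, depth=2.0, fx=fx, fy=fy, cx=cx, cy=cy)
print(hand_3d)  # [0.4, 0.0, 2.0]
```

With a single camera the depth itself must come from an assumption (e.g. a known body height or floor plane), which is why calibration is emphasized in the abstract.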

The Design and Implementation of Virtual Studio

  • Sul, Chang-Whan;Wohn, Kwang-Yoen
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06b / pp.83-87 / 1996
  • A virtual reality system using video images is designed and implemented. A participant with 2½ DOF can interact with computer-generated virtual objects using her/his full body posture and gestures in the 3D virtual environment. The system extracts the necessary participant-related information by video-based sensing and simulates realistic interaction, such as collision detection, in the virtual environment. The resulting scene, obtained by compositing the video image of the participant with the virtual environment, is updated in near real time.

  • PDF
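The collision detection mentioned above can be illustrated with a standard sphere-versus-axis-aligned-box test (the abstract does not say which primitives the system actually used, so this is only a representative sketch):

```python
def sphere_aabb_collide(center, radius, box_min, box_max):
    """Collision test between a sphere (e.g. a tracked hand or head)
    and an axis-aligned box in the virtual scene."""
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = min(max(c, lo), hi)       # clamp the center to the box
        d2 += (c - nearest) ** 2            # squared distance to that point
    return d2 <= radius ** 2

# A hand sphere overlapping, then clearing, a unit cube at the origin.
touching = sphere_aabb_collide((1.4, 0.5, 0.5), 0.5, (0, 0, 0), (1, 1, 1))
clear = sphere_aabb_collide((2.0, 0.5, 0.5), 0.5, (0, 0, 0), (1, 1, 1))
print(touching, clear)  # True False
```

Wrapping tracked body parts in spheres keeps per-frame collision checks cheap enough for near-real-time compositing.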

A Study on User Interface for Quiz Game Contents using Gesture Recognition (제스처인식을 이용한 퀴즈게임 콘텐츠의 사용자 인터페이스에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Digital Contents Society / v.13 no.1 / pp.91-99 / 2012
  • In this paper we introduce a quiz application program that digitizes the analogue quiz game. We digitize the quiz components that are performed manually in a normal quiz game, such as quiz proceeding, participant recognition, problem presentation, recognition of the volunteer who raises a hand first, answer judgement, score addition, and winner decision. For automation, we obtained depth images from the Kinect camera, which has recently come into the spotlight, located the quiz participants, and recognized user-friendly defined gestures. Analyzing the depth distribution, we detected and segmented the upper body parts and located the hand areas. We also extracted hand features and designed a decision function that classifies the hand pose as palm, fist, or other, so that a participant can select the desired example among those presented. The implemented quiz application program was tested in real time and showed very satisfactory gesture recognition results.
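The palm/fist decision function is not specified in the abstract; one simple stand-in scores the bounding-box fill ratio of the segmented hand mask (the 0.7 threshold and the masks below are purely illustrative):

```python
import numpy as np

def classify_hand(hand_mask):
    """Toy palm/fist decision: a spread palm with fingers fills much less of
    its bounding box than a clenched fist does."""
    ys, xs = np.nonzero(hand_mask)
    bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    fill = hand_mask.sum() / bbox_area
    return "fist" if fill > 0.7 else "palm"

fist = np.ones((10, 10), dtype=bool)     # compact blob, fill ratio 1.0
palm = np.zeros((20, 20), dtype=bool)
palm[10:20, 8:12] = True                 # palm body
for f in range(5):                       # five thin "fingers"
    palm[0:10, 4 * f] = True

print(classify_hand(fist), classify_hand(palm))  # fist palm
```

Whatever features the paper actually extracts, the overall pattern is the same: segment the hand from the depth map, compute shape statistics, and threshold them in a decision function.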