• Title/Summary/Keyword: Kinect Interaction

A Motion Capture and Mapping System: Kinect Based Human-Robot Interaction Platform (동작포착 및 매핑 시스템: Kinect 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8563-8567 / 2015
  • We propose a human-robot interaction (HRI) platform based on motion capture and mapping. The platform consists of capture, processing/mapping, and action parts. A motion capture sensor, a computer, and an avatar and/or physical robots serve as the capture, processing/mapping, and action parts, respectively. Case studies, an interactive presentation and a LEGO robot car, are presented to show the design and implementation process of the Kinect-based HRI platform.
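
As a rough illustration of the three-part structure the abstract describes (capture, processing/mapping, action), the following sketch wires dummy stand-ins for each part into one loop. All class names and the mapping rule are assumptions for illustration, not the authors' implementation.

    # Minimal sketch of a capture -> processing/mapping -> action pipeline.
    # All names and the mapping rule are illustrative assumptions.

    class CapturePart:
        """Stand-in for a Kinect motion-capture source."""
        def read_frame(self):
            # A real system would return skeleton joints from the sensor.
            return {"right_hand": (0.3, 0.5, 1.2)}

    class MappingPart:
        """Maps captured joint positions onto robot/avatar commands."""
        def map(self, frame):
            x, y, _ = frame["right_hand"]
            # Hypothetical mapping: hand x/y drives a differential-drive command.
            return {"left_wheel": y + x, "right_wheel": y - x}

    class ActionPart:
        """Stand-in for the avatar or physical robot (e.g., a LEGO robot car)."""
        def act(self, command):
            print("drive left=%.2f right=%.2f"
                  % (command["left_wheel"], command["right_wheel"]))

    if __name__ == "__main__":
        capture, mapping, action = CapturePart(), MappingPart(), ActionPart()
        for _ in range(3):  # a few iterations of the interaction loop
            action.act(mapping.map(capture.read_frame()))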

A Controlled Study of Interactive Exhibit based on Gesture Image Recognition (제스처 영상 인식기반의 인터렉티브 전시용 제어기술 연구)

  • Cha, Jaesang;Kang, Joonsang;Rho, Jung-Kyu;Choi, Jungwon;Koo, Eunja
    • Journal of Satellite, Information and Communications / v.9 no.1 / pp.1-5 / 2014
  • Recently, buildings have rapidly become more intelligent along with the development of industry, and people seek comfort, efficiency, and convenience in office and living environments. People also use a variety of devices; smart TVs and smartphones have become widespread, so interest in interaction between devices and humans has grown. Various methods have been studied for such interaction, but controllers bring discomfort and limitations. In this paper, a user can easily interact with and control LEDs using Kinect and hand gestures, without a controller. We designed an interface that controls LEDs using the joint information obtained from Kinect, and with this interface a user can control individual LEDs through hand movements. We expect the developed interface to be useful for LED control and in various other fields.
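
A minimal sketch of how joint information from Kinect might drive an LED array in the spirit of this interface. The joint dictionary format, the number of LEDs, and the set_led() driver call are assumptions, not the authors' actual design.

    # Hypothetical gesture-to-LED mapping: the right-hand x position selects an
    # LED and raising the hand above the head switches it on.
    NUM_LEDS = 8

    def set_led(index, on):
        # Placeholder for the real LED controller call.
        print("LED %d -> %s" % (index, "ON" if on else "OFF"))

    def handle_joints(joints):
        hand_x, hand_y, _ = joints["hand_right"]
        head_y = joints["head"][1]
        # Map hand x in [-1, 1] metres to an LED index (assumed working range).
        index = min(NUM_LEDS - 1, max(0, int((hand_x + 1.0) / 2.0 * NUM_LEDS)))
        set_led(index, on=hand_y > head_y)

    # Example frame with camera-space joint coordinates in metres (assumed format).
    handle_joints({"hand_right": (0.25, 1.60, 2.0), "head": (0.0, 1.50, 2.0)})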

Human-Computer Natural User Interface Based on Hand Motion Detection and Tracking

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.501-507 / 2012
  • Human body motion is a non-verbal form of interaction and movement that can connect the real world and the virtual world. In this paper, we present a study on a natural user interface (NUI) for human hand motion recognition using the RGB color information and depth information provided by Microsoft's Kinect camera. So that hand tracking and gesture recognition have no major dependence on the work environment, lighting, or the user's skin color, we used libraries for natural interaction together with the Kinect device, which provides RGB images of the environment and a depth map of the scene. An improved Camshift tracking algorithm is used to track hand motion; the experimental results show that it performs better than the original Camshift algorithm, with higher stability and accuracy.
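
The abstract does not give the details of the improved Camshift variant, but the sketch below shows one common way to combine OpenCV's CamShift with the Kinect depth map: suppressing back-projection responses outside an expected hand depth band. The depth range, histogram, and frame sources are assumptions.

    # One tracking step of CamShift gated by a depth mask (a sketch, not the
    # paper's exact algorithm).
    import cv2
    import numpy as np

    def track_hand(rgb, depth, roi_hist, track_window, depth_min=500, depth_max=900):
        hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        depth_mask = cv2.inRange(depth, depth_min, depth_max)  # assumed hand range in mm
        prob = cv2.bitwise_and(prob, prob, mask=depth_mask)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        rot_rect, track_window = cv2.CamShift(prob, track_window, term)
        return rot_rect, track_window

    # Usage with synthetic frames; a real system would read Kinect RGB and depth streams.
    rgb = np.zeros((480, 640, 3), np.uint8)
    depth = np.full((480, 640), 700, np.uint16)
    roi_hist = np.ones((180, 1), np.float32)  # placeholder hue histogram
    print(track_hand(rgb, depth, roi_hist, (300, 200, 60, 60))[1])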

Mobile Interaction Using Smartphones and Kinect in a Global Space (키넥트와 스마트폰을 활용한 공용 공간상에서 모바일 상호작용)

  • Kim, Min Seok;Lee, Jae Yeol
    • Journal of Korean Institute of Industrial Engineers / v.40 no.1 / pp.100-107 / 2014
  • This paper presents a co-located, mobile interaction technique using smartphones in a global space. To effectively detect the locations and orientations of the smartphones, the proposed approach utilizes Kinect, which captures RGB images as well as 3D depth information. Based on the locations and orientations of the smartphones, the proposed approach can support direct, collaborative, and private interactions with the global space. Thus, it can provide more effective mobile interactions for local space exploration and collaboration.
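
The abstract does not detail how the smartphone pose is computed, but one standard ingredient is back-projecting a detected pixel plus its depth value into camera space with the pinhole model. The sketch below illustrates that step; the Kinect intrinsics and the two tracked points are assumed values for illustration.

    # Back-project a pixel + depth to a 3D point and estimate a device direction
    # from two tracked points (illustrative values and intrinsics).
    import numpy as np

    FX, FY = 594.2, 591.0  # Kinect v1 focal lengths in pixels (typical, assumed)
    CX, CY = 320.0, 240.0  # principal point (assumed)

    def pixel_to_point(u, v, depth_mm):
        z = depth_mm / 1000.0          # metres
        return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

    def device_direction(p_top, p_bottom):
        d = p_top - p_bottom
        return d / np.linalg.norm(d)   # unit vector along the phone's long axis

    top = pixel_to_point(350, 200, 1500)
    bottom = pixel_to_point(352, 260, 1520)
    print(top, device_direction(top, bottom))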

Interactive lens through smartphones for supporting level-of-detailed views in a public display

  • Kim, Minseok;Lee, Jae Yeol
    • Journal of Computational Design and Engineering / v.2 no.2 / pp.73-78 / 2015
  • In this paper, we propose a new approach to providing an interactive and collaborative lens among multiple users for supporting level-of-detail views using smartphones on a public display. To provide the smartphone-based lens capability, the locations of the smartphones are detected and tracked using Kinect, which provides RGB data and depth data (RGB-D). In particular, human skeleton information is extracted from the Kinect 3D depth data to calculate the smartphone location more efficiently and accurately with respect to the public display, and to support head tracking for easy target selection and adaptive view generation. The suggested interactive and collaborative lens using smartphones not only can explore local spaces of the shared display but can also support various kinds of activities such as LOD viewing and collaborative interaction. Implementation results are given to show the advantage and effectiveness of the proposed approach.
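
As one way to picture how head tracking can support adaptive view generation, the sketch below intersects the ray from the tracked head through the phone with the display plane to place the lens. The coordinate frame, plane position, and values are assumptions, not the paper's formulation.

    # Intersect the head -> phone ray with the plane z = display_z to find where
    # the lens should appear on the public display (illustrative geometry).
    import numpy as np

    def lens_center_on_display(head, phone, display_z=0.0):
        head, phone = np.asarray(head, float), np.asarray(phone, float)
        direction = phone - head
        if abs(direction[2]) < 1e-9:
            raise ValueError("ray is parallel to the display plane")
        t = (display_z - head[2]) / direction[2]
        return head + t * direction

    # Head 2.5 m from the display plane, phone held 0.6 m in front of the head.
    print(lens_center_on_display(head=(0.2, 1.6, 2.5), phone=(0.3, 1.4, 1.9)))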

User Customizable Hit Action Recognition Method using Kinect (키넥트를 이용한 사용자 맞춤형 손동작 히트 인식 방법)

  • Choi, Yunyeon;Tang, Jiamei;Jang, Seungeun;Kim, Sangwook
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.557-564 / 2015
  • There are many prior studies on more natural human-computer interaction, and efforts continue toward recognizing motions in various directions. In this paper, we suggest a user-specific hit detection method using the Kinect camera and human body proportions. The algorithm first recognizes the user's body and then extracts a user-specific valid recognition range. It also corrects for the difference in horizontal position between the user and the Kinect, so that the user's action can be estimated by matching the cursor to the target using only one frame. The suggested method enables efficient hand recognition when applied to games.
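
A rough sketch of the user-proportion idea: calibrate a per-user arm length from the skeleton and report a hit when the hand extends forward past a fraction of that reach. The joint coordinates, the forward-axis convention, and the 0.8 threshold are assumptions for illustration.

    # Per-user hit detection from body proportions (illustrative thresholds).
    import math

    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    def calibrate_arm_length(shoulder, elbow, hand):
        # Approximate arm length from one calibration frame.
        return dist(shoulder, elbow) + dist(elbow, hand)

    def is_hit(shoulder, hand, arm_length, ratio=0.8):
        # Kinect z decreases toward the sensor, so "forward" is shoulder_z - hand_z.
        forward = shoulder[2] - hand[2]
        return forward > ratio * arm_length

    arm = calibrate_arm_length((0.20, 1.40, 2.50), (0.25, 1.20, 2.45), (0.30, 1.00, 2.40))
    print(is_hit(shoulder=(0.20, 1.40, 2.50), hand=(0.25, 1.35, 1.95), arm_length=arm))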

A Case Study on the Learner's Engaged Learning Experience in Kinect Game Based Learning (Kinect 게임 활용 수업에서 학습자의 참여적 학습 경험에 대한 사례 연구)

  • Ryoo, EunJin;Kang, Myunghee;Park, Juyeon
    • Journal of The Korean Association of Information Education / v.23 no.4 / pp.363-374 / 2019
  • Recently, there has been increasing interest in game-based learning as a teaching method for digital-native learners. This study posits that Kinect games contribute to engaged learning through competition and cooperation play (achievement goals, interaction), digital game play (multisensory stimulation, fantasy and curiosity, chance, accurate feedback, control), and body-movement play (embodied cognition, presence). After conducting classes using a motion recognition game developed for an elementary school history class, this study carried out semi-structured interviews, based on the engaged-learning elements of Kinect game-based learning, with students who were participating successfully in learning. In the results, each element appeared in the successful learners. Based on these results, this study hopes to provide basic evidence for researchers introducing Kinect game-based learning for engaged learning.

A Study on Hand Region Detection for Kinect-Based Hand Shape Recognition (Kinect 기반 손 모양 인식을 위한 손 영역 검출에 관한 연구)

  • Park, Hanhoon;Choi, Junyeong;Park, Jong-Il;Moon, Kwang-Seok
    • Journal of Broadcast Engineering / v.18 no.3 / pp.393-400 / 2013
  • Hand shape recognition is a fundamental technique for implementing natural human-computer interaction. In this paper, we discuss a method for effectively detecting a hand region in Kinect-based hand shape recognition. Since Kinect is a camera that can capture color images and infrared (or depth) images together, both can be exploited in the process of detecting a hand region. That is, a hand region can be detected by finding pixels with skin colors or by finding pixels at a specific depth. Therefore, after analyzing the performance of each, we need a method of properly combining both to clearly extract the silhouette of the hand region, because the hand shape recognition rate depends on the fineness of the detected silhouette. Finally, through a comparison of the hand shape recognition rates resulting from different hand region detection methods in general environments, we propose a high-performance hand region detection method.
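
A hedged sketch of the combination the abstract argues for: intersect a skin-color mask with a near-depth mask and clean the resulting silhouette with morphology. The YCrCb skin bounds and the depth band are typical values chosen here, not the paper's tuned parameters.

    # Combine a skin-colour mask with a depth band and refine the silhouette.
    import cv2
    import numpy as np

    def hand_mask(bgr, depth, depth_min=400, depth_max=800):
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough skin range
        near = cv2.inRange(depth, depth_min, depth_max)           # assumed hand depth band (mm)
        mask = cv2.bitwise_and(skin, near)
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
        return mask

    # Synthetic frames for a quick run; a real system would use Kinect streams.
    bgr = np.zeros((480, 640, 3), np.uint8)
    depth = np.full((480, 640), 600, np.uint16)
    print(hand_mask(bgr, depth).shape)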

Development of a Tiled Display GOCI Observation Satellite Imagery Visualization System (타일드 디스플레이 천리안 해양관측 위성 영상 가시화 시스템 개발)

  • Park, Chan-sol;Lee, Kwan-ju;Kim, Nak-hoon;Lee, Sang-ho;Seo, Ki-young;Park, Kyoung Shin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.641-642 / 2013
  • This research implemented a Geostationary Ocean Color Imager (GOCI) satellite imagery visualization system on a large high-resolution tiled display. The system is designed to help users observe and analyze satellite imagery more effectively on the tiled display using multi-touch and Kinect motion gesture recognition interaction. We developed a multi-scale image loading and rendering technique to manage the massive memory requirements and to render GOCI satellite imagery smoothly on the tiled display. In this system, the satellite images corresponding to the selected date are displayed sequentially on the screen in units of time. Users can zoom in, zoom out, and pan the imagery, and select buttons to trigger functions, using either multi-touch or Kinect gesture interaction.
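
A minimal sketch of one way a multi-scale loading scheme can work: pick an image-pyramid level from the current zoom and enumerate only the tiles visible in the viewport. The tile size, level count, and zoom-to-level rule are assumptions, not the system's actual parameters.

    # Choose a pyramid level from the zoom factor and list the visible tiles.
    import math

    TILE = 256       # tile edge in pixels at the finest level (assumed)
    MAX_LEVEL = 6    # coarsest pyramid level (assumed)

    def level_for_zoom(zoom):
        # Lower zoom -> coarser level; zoom 1.0 maps to the finest level 0.
        return min(MAX_LEVEL, max(0, int(round(-math.log2(max(zoom, 1e-6))))))

    def visible_tiles(view_x, view_y, view_w, view_h, level):
        step = TILE << level  # one tile covers more source pixels at coarse levels
        return [(level, tx, ty)
                for ty in range(view_y // step, (view_y + view_h) // step + 1)
                for tx in range(view_x // step, (view_x + view_w) // step + 1)]

    level = level_for_zoom(0.25)
    print(level, visible_tiles(0, 0, 1024, 512, level))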

Detection Accuracy Improvement of Hand Region using Kinect (키넥트를 이용한 손 영역 검출의 정확도 개선)

  • Kim, Heeae;Lee, Chang Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.11 / pp.2727-2732 / 2014
  • Recently, object tracking and recognition using Microsoft's Kinect have been studied actively. In this setting, human hand detection and tracking is the most basic technique for human-computer interaction. This paper proposes a method for improving the accuracy of the detected hand region's boundary against a cluttered background. To do this, we combine the hand detection results based on skin color with the depth image extracted from Kinect. The experimental results show that the proposed method increases the accuracy of hand region detection compared with detecting a hand region from the depth image only. If the proposed method is applied to sign language or gesture recognition systems, it is expected to contribute much to accuracy improvement.
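
When comparing depth-only detection with the combined color-and-depth detection, one needs a quantitative score for the detected region against a ground-truth silhouette. The sketch below computes precision, recall, and IoU for a pair of binary masks; the metric choice is ours, not necessarily the one used in the paper.

    # Score a predicted hand mask against a ground-truth mask (illustrative metric).
    import numpy as np

    def mask_scores(pred, gt):
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        iou = tp / max(np.logical_or(pred, gt).sum(), 1)
        return precision, recall, iou

    gt = np.zeros((120, 160), np.uint8)
    gt[30:90, 40:100] = 1
    pred = np.zeros_like(gt)
    pred[35:95, 45:105] = 1
    print("precision=%.2f recall=%.2f IoU=%.2f" % mask_scores(pred, gt))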