• Title/Summary/Keyword: kinect camera


Comparative Evaluation of Exercise Effects of Motion-based Sports Game (체감형 스포츠 게임의 운동 효과 비교 평가)

  • Boo, Jae Hui;An, Ji Hyeon;Kim, Jeong Hyeon;Kim, Dong Keun;Park, Kyoung Shin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.3 / pp.403-411 / 2022
  • A motion-based sports game uses a motion sensor or a camera so that players exercise through body movements, providing benefits such as improved physical strength while enjoying the game. Prior work on motion-based sports games includes various studies such as usability evaluations. However, there has been no discussion of how the exercise effect differs when users experience motion-based sports games as individual or team play. This study compared users' exercise effects by analyzing ECG (electrocardiogram) sensor data and the Kinect sensor's skeletal information while a Nintendo Switch game was played individually and as a team. This paper discusses the experimental design and method, the quantitative measurements based on the ECG and Kinect data, and the results of the post-test subjective measurement.
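A common way to quantify exercise effect from ECG data is to derive heart rate from R-R intervals and express intensity as a percentage of heart-rate reserve (the Karvonen formula). The paper does not publish its exact metric, so the following is a minimal sketch under that assumption:

```python
# Sketch: exercise intensity from ECG R-R intervals (hypothetical pipeline;
# the paper does not specify its metric). Heart rate is computed from R-R
# intervals, then intensity as percent heart-rate reserve (Karvonen):
#   %HRR = (HR - HRrest) / (HRmax - HRrest) * 100

def heart_rate_bpm(rr_intervals_s):
    """Mean heart rate (beats/min) from a list of R-R intervals in seconds."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    return 60.0 / mean_rr

def percent_hrr(hr, hr_rest, age):
    """Exercise intensity as percent of heart-rate reserve (Karvonen)."""
    hr_max = 220 - age          # common age-based estimate of maximal HR
    return (hr - hr_rest) / (hr_max - hr_rest) * 100.0

# Example: R-R intervals of 0.5 s correspond to 120 bpm.
hr = heart_rate_bpm([0.5, 0.5, 0.5, 0.5])
print(round(hr))                                    # 120
print(round(percent_hrr(hr, hr_rest=60, age=20)))   # (120-60)/(200-60)*100 -> 43
```

Comparing the mean %HRR of individual-play and team-play sessions would give the kind of quantitative contrast the abstract describes.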

Joint Range Measurement and Correction using Kinect Camera (키넥트 카메라를 이용한 관절 가동 범위 측정과 보정)

  • Jeong, Juheon;Yoon, Myeongsuk;Kim, Sangjoon;Park, Gooman
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.63-66 / 2019
  • With the popularization of virtual reality and augmented reality, research on reproducing human motion as real-time 3D animation is actively underway. In particular, since Microsoft developed the Kinect, 3D motion information can be acquired at low cost through simple operation, without any additional equipment. However, the Kinect camera estimates joint information less accurately than marker-based motion capture systems. This paper therefore proposes a system that acquires human joint information with a Kinect camera and corrects abnormal motion by applying the joint range of motion (ROM). To obtain the ROM, a performer rotates every joint; the joints' rotation data are then collected and analyzed to set a normal ROM, and experiments confirm that the resulting human motion is improved.
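The correction idea described above can be sketched simply: learn a per-joint normal ROM from the calibration rotations, then clamp Kinect joint-angle estimates that fall outside it. Function names are illustrative, not from the paper:

```python
# Sketch of ROM-based correction: learn a per-joint normal range of motion
# from calibration rotations, then clamp estimated joint angles into it.

def learn_rom(calibration_angles):
    """Normal ROM as the (min, max) of angles seen during calibration."""
    return min(calibration_angles), max(calibration_angles)

def correct_angle(angle, rom):
    """Clamp an estimated joint angle into the learned ROM."""
    lo, hi = rom
    return max(lo, min(hi, angle))

elbow_rom = learn_rom([0, 30, 90, 145])   # degrees from a calibration sweep
print(correct_angle(170, elbow_rom))       # 145: hyper-extension corrected
print(correct_angle(60, elbow_rom))        # 60: already plausible, unchanged
```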


Dynamic Gesture Recognition using SVM and its Application to an Interactive Storybook (SVM을 이용한 동적 동작인식: 체감형 동화에 적용)

  • Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association / v.13 no.4 / pp.64-72 / 2013
  • This paper proposes a dynamic gesture recognition algorithm using an SVM (Support Vector Machine), which is well suited to multi-dimensional classification. First, the proposed algorithm locates the beginning and end of each gesture in the video frames captured by the Kinect camera, spots the meaningful gesture frames, and normalizes the number of frames. Then, for gesture recognition, the algorithm extracts gesture features from the normalized frames using body-part positions and the relations among the parts, based on a human body model. A C-SVM is trained for each dynamic gesture using training data consisting of positive and negative examples, and the final gesture is the one whose C-SVM yields the largest decision value. The proposed gesture recognition algorithm can serve as a gesture interface for an interactive storybook.
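The final classification step, one C-SVM per gesture with the largest decision value winning, can be sketched as follows. The weights are toy values; real ones would come from training on the positive/negative examples:

```python
# Sketch: one-vs-rest selection over per-gesture linear SVMs. The gesture
# whose decision function f(x) = w.x + b is largest wins.

def decision_value(w, b, x):
    """Linear SVM decision function f(x) = w.x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(svms, x):
    """svms: {gesture_name: (w, b)}; returns the gesture with max value."""
    return max(svms, key=lambda g: decision_value(*svms[g], x))

svms = {
    "wave":  ([1.0, -0.5], 0.0),   # toy trained parameters
    "swipe": ([-0.3, 0.8], 0.1),
}
x = [0.9, 0.2]                 # a normalized gesture feature vector
print(classify(svms, x))       # "wave" (0.8 vs. -0.01 for "swipe")
```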

Use of Mini-maps for Detection and Visualization of Surrounding Risk Factors of Mobile Virtual Reality (미니맵을 사용한 모바일 VR 사용자 주변 위험요소 시각화 연구)

  • Kim, Jin;Park, Jun
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.49-56 / 2016
  • Mobile virtual reality head-mounted displays such as Google Cardboard and Samsung Gear VR are being released, as well as PC-based VR HMDs such as the Oculus Rift and HTC Vive. However, an HMD blocks the wearer's view of the outside world, so the user may collide with surrounding objects such as furniture, and there is no definitive solution to this problem. In this paper, we propose a method to reduce the risk of injury by visualizing the location and information of obstacles scanned with an RGB-D camera.
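The core of a mini-map visualization is projecting obstacle points from the RGB-D camera onto a top-down occupancy grid centred on the user. A minimal sketch, with illustrative grid and cell sizes (not the paper's parameters):

```python
# Sketch: project obstacle points (x: right, z: forward, in metres) from an
# RGB-D scan onto a top-down mini-map grid centred on the user.

def to_minimap(points_xz, grid=9, cell=0.5):
    """Mark each (x, z) obstacle point in a grid x grid top-down map."""
    m = [[0] * grid for _ in range(grid)]
    half = grid // 2
    for x, z in points_xz:
        col = half + int(round(x / cell))
        row = half - int(round(z / cell))   # forward (+z) is up on the map
        if 0 <= row < grid and 0 <= col < grid:
            m[row][col] = 1                 # occupied cell near the user
    return m

m = to_minimap([(0.0, 1.0), (-1.0, 0.5)])  # a chair ahead, a desk front-left
print(m[2][4], m[3][2])   # 1 1: both obstacles appear on the mini-map
```

Overlaying such a grid in a corner of the HMD view lets the user see nearby risk factors without removing the headset.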

3D Image Construction Using Color and Depth Cameras (색상과 깊이 카메라를 이용한 3차원 영상 구성)

  • Jung, Ha-Hyoung;Kim, Tae-Yeon;Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.1 / pp.1-7 / 2012
  • This paper presents a method for 3D image construction using a hybrid (color and depth) camera system, in which the drawbacks of each camera are compensated for. Prior to image generation, the intrinsic and extrinsic parameters of each camera are extracted through experiments. The geometry between the two cameras is established with these parameters so as to match the color and depth images. After this preprocessing step, the relation between depth values and distance is derived experimentally as a simple linear function, and a 3D image is constructed by coordinate transformations of the matched images. The scheme has been realized with Microsoft's hybrid camera system, the Kinect, and experimental results for 3D images and distance measurements are given to evaluate the method.
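The two steps described, a linear depth-to-distance function and a coordinate transformation of matched pixels, can be sketched as follows. The calibration constants are made-up placeholders, not the paper's fitted values:

```python
# Sketch: (1) an experimentally fitted linear depth-to-distance map, and
# (2) pinhole back-projection of a depth pixel to a camera-space 3D point
# that can then be coloured from the registered colour image.

def depth_to_metres(raw, a=0.001, b=0.0):
    """Experimentally fitted linear map: distance = a * raw + b."""
    return a * raw + b

def backproject(u, v, z, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Pixel (u, v) with depth z (metres) -> camera-space point (X, Y, Z)."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

z = depth_to_metres(1050)          # raw sensor value -> 1.05 m
X, Y, Z = backproject(319.5, 239.5, z)
print((X, Y, Z))                    # the principal point back-projects to (0, 0, z)
```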

A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication / v.7 no.1 / pp.15-19 / 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, depth keying, etc. There are various ways to acquire depth information: a stereo camera, a time-of-flight (TOF) depth camera, 3D modeling software, a 3D scanner, or structured light, as in Microsoft's Kinect. In particular, a TOF depth camera measures distance using infrared light, and the TOF sensor depends on the optical sensitivity of the image sensor (CCD/CMOS). Existing image sensors must therefore bundle several pixels to obtain an infrared image, which reduces the resolution of the image. This paper proposes a method that acquires a low-resolution image by pixel bundling while recovering a high-resolution image through gradual area movement. With the gradual pixel bundling algorithm, image information with improved low-light sensitivity (lux) and resolution can be obtained without increasing the performance of the image sensor.
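The bundling-with-gradual-movement idea can be illustrated with 2x2 binning: each pass sums a 2x2 block to boost low-light sensitivity, and shifting the block origin between passes lets the bundled samples be interleaved back into a finer grid. This is illustrative only; the sensor-level details are in the paper:

```python
# Sketch: 2x2 pixel bundling with a gradually shifted window origin.

def bundle(img, ox, oy):
    """Sum 2x2 blocks of img starting at offset (ox, oy)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(oy, h - 1, 2):
        out.append([img[y][x] + img[y][x + 1] +
                    img[y + 1][x] + img[y + 1][x + 1]
                    for x in range(ox, w - 1, 2)])
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(bundle(img, 0, 0))   # [[14, 22], [46, 54]]: brighter but low-resolution
print(bundle(img, 1, 1))   # shifted pass; interleaving passes refines the grid
```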

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.21 no.6 / pp.957-964 / 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications, such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the object using physical sensors, such as infrared (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the Kinect depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information in real time, the results are noisy and low-resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information, but these methods often lead to texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling, and we reduce texture copying in the upsampled depth map with an edge weighting term chosen from the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
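For context, the JBU baseline the paper compares against weights each low-resolution depth sample by spatial distance and by colour similarity in the high-resolution guidance image. A 1-D toy sketch with made-up parameters (not the paper's TGV method, which replaces such filtering with convex optimization):

```python
# Sketch: 1-D joint bilateral upsampling. Each output depth is a weighted
# average of low-res depth samples; weights combine a spatial Gaussian and
# a range Gaussian on the high-res colour guidance image.
import math

def jbu_1d(depth_lo, guide_hi, scale=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i, g in enumerate(guide_hi):
        num = den = 0.0
        for j, d in enumerate(depth_lo):
            ws = math.exp(-((i / scale - j) ** 2) / (2 * sigma_s ** 2))
            wr = math.exp(-((g - guide_hi[j * scale]) ** 2) / (2 * sigma_r ** 2))
            num += ws * wr * d
            den += ws * wr
        out.append(num / den)
    return out

depth_lo = [1.0, 1.0, 5.0, 5.0]                   # low-res depth with an edge
guide_hi = [10, 10, 10, 10, 200, 200, 200, 200]   # colour edge at the same place
up = jbu_1d(depth_lo, guide_hi)
print([round(v, 2) for v in up])  # depth edge stays sharp, aligned to the colour edge
```

When guidance texture varies where depth does not, these colour weights are exactly what copies texture into the depth map, which is the limitation the TGV formulation targets.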

Interactive Typography System using Combined Corner and Contour Detection

  • Lim, Sooyeon;Kim, Sangwook
    • International Journal of Contents / v.13 no.1 / pp.68-75 / 2017
  • Interactive typography is a process in which a user communicates by interacting with text and its moving elements. This research covers interactive typography that responds to a user's gestures in real time. To make the system language-independent, entered text data are preprocessed into image data; the image data are then recognized and interaction points are set, using computer vision techniques such as the Harris corner detector and contour detection. User interaction is achieved through skeleton information tracked by a depth camera. By synchronizing the user's skeleton information acquired by the Kinect (a depth camera) with the typography components (interaction points), all user gestures are linked with the typography in real time. In an experiment conducted in both English and Korean, users reported an 81% satisfaction level with an interactive typography system in which text components moved discretely in accordance with the users' gestures. The experiment also showed that the perceived sensibility varied with the size and speed of the text and with the interactive alteration. The results show that interactive typography can potentially be an accurate communication tool, and not merely a uniform text transmission system.
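The synchronization step can be sketched as mapping each tracked hand position to the nearest interaction point on the rendered text and displacing that component toward the hand. The function names and the attraction factor are illustrative, not from the paper:

```python
# Sketch: link a depth-camera hand position to the nearest text interaction
# point (e.g. a detected corner/contour point) and pull it toward the hand.

def nearest_point(points, hand):
    """Index of the interaction point closest to the hand position."""
    return min(range(len(points)),
               key=lambda i: (points[i][0] - hand[0]) ** 2 +
                             (points[i][1] - hand[1]) ** 2)

def attract(points, hand, k=0.5):
    """Move the nearest interaction point a fraction k toward the hand."""
    i = nearest_point(points, hand)
    x, y = points[i]
    points[i] = (x + k * (hand[0] - x), y + k * (hand[1] - y))
    return i

pts = [(0.0, 0.0), (10.0, 0.0)]    # interaction points found on the text
i = attract(pts, hand=(8.0, 2.0))  # hand tracked by the depth camera
print(i, pts[i])                   # 1 (9.0, 1.0): nearest glyph point moved
```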

Design of Interactive Teleprompter (인터렉티브 텔레프롬프터의 설계)

  • Park, Yuni;Park, Taejung
    • The Journal of the Korea Contents Association / v.16 no.3 / pp.43-51 / 2016
  • This paper presents the concept of an "interactive teleprompter", which lets the user interact with themselves or with other users for live television broadcasts or smart mirrors. In such interactive applications, eye contact between the user and the regenerated image, or between the user and other persons, is important for psychological processes and non-verbal communication. Unfortunately, the eye contact issue is not straightforward to address with the conventional combination of a normal display and a video camera. To solve this problem, we propose an "interactive" teleprompter, enhanced from conventional teleprompter devices, that can recognize the user's gestures by applying an infrared (IR) depth sensor. The paper also presents test results for a beam splitter, which plays a critical role in the teleprompter and is designed to handle both visible light for the RGB camera and IR for the depth sensor effectively.

Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan;Seo, Heesuk;Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.107-114 / 2016
  • Recently, 3D-related technology has become a hot topic in IT: 3D technologies such as 3DTV, the Kinect, and 3D printers are becoming more and more popular. Following this trend, the goal of this study is to make 3D technology easily accessible to the general public. We have developed a web-based application that builds a 3D model from front and side facial photographs taken with a mobile phone. To realize the 3D modeling, two photographs (front and side) are taken with a mobile camera, and an ASM (Active Shape Model) and a skin binarization technique are used to extract facial features from the front photograph and facial heights, such as that of the nose, from the side photograph. Three-dimensional coordinates are generated from the face extracted from the front photograph and the facial heights obtained from the side photograph. Using these 3D coordinates as control points, a standard face model is deformed into the subject's face by RBF (Radial Basis Function) interpolation. To texture the deformed face model, the control points found in the front photograph are mapped to texture-map coordinates to generate a texture image. Finally, the deformed face model is covered with the texture image, and the 3D model is displayed to the user.
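The RBF deformation step works by solving for kernel weights that reproduce the control-point displacements exactly, then evaluating the interpolant at every other vertex. A 1-D toy sketch with a Gaussian kernel (the paper's kernel choice is not stated here):

```python
# Sketch: RBF interpolation of control-point displacements, as used to warp
# a standard face model toward the subject. 1-D with a Gaussian kernel.
import math

def phi(r, sigma=1.0):
    return math.exp(-(r * r) / (sigma * sigma))

def solve(A, b):
    """Tiny Gaussian elimination (partial pivoting) for the small RBF system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (M[i][n] - sum(M[i][c] * w[c] for c in range(i + 1, n))) / M[i][i]
    return w

def rbf_deform(ctrl, disp, x):
    """Displacement at x, interpolating the control-point displacements."""
    A = [[phi(abs(ci - cj)) for cj in ctrl] for ci in ctrl]
    w = solve(A, disp)
    return sum(wi * phi(abs(x - ci)) for wi, ci in zip(w, ctrl))

ctrl = [0.0, 2.0]        # e.g. nose tip and chin on the standard model
disp = [0.5, -0.2]       # how far each must move to match the subject
print(round(rbf_deform(ctrl, disp, 0.0), 3))   # 0.5: exact at a control point
print(round(rbf_deform(ctrl, disp, 1.0), 3))   # smooth blend in between
```

In the paper's setting the same construction runs in 3D, with the ASM-detected landmarks as the control points.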