• Title/Summary/Keyword: Kinect


Development and Evaluation of the V-Catch Vision System

  • Kim, Dong Keun;Cho, Yongjoo;Park, Kyoung Shin
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.45-52 / 2022
  • A tangible sports game is an exercise game that uses sensors or cameras to track the user's body movements and give the player a sense of reality. Recently, VR indoor sports room systems have been installed in schools so that tangible sports games can be used for physical activity; however, these systems rely primarily on screen-touch interaction. In this research, we developed the V-Catch Vision system, which uses AI image recognition to track user movements in three-dimensional space rather than through two-dimensional wall-touch interaction. We also conducted a usability evaluation experiment to investigate the exercise effects of the system, evaluating quantitative exercise effects by measuring blood oxygen saturation, real-time ECG heart rate variability, and the user's body movement and joint-angle changes from the Kinect skeleton. The experiment showed a statistically significant increase in heart rate and an increase in the amount of body movement when using the V-Catch Vision system. In the subjective evaluation, most subjects found exercising with the system fun and satisfactory.
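
For readers unfamiliar with Kinect skeleton data, the sketch below illustrates how body movement and joint-angle change can be derived from per-frame joint positions. The joint names, input format, and chosen metrics are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: movement and joint-angle metrics from Kinect-style skeleton frames.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def movement_metrics(frames):
    """frames: list of dicts mapping joint name -> (x, y, z) in metres, one per frame."""
    total_movement, elbow_angles = 0.0, []
    for prev, cur in zip(frames, frames[1:]):
        # accumulate displacement of every tracked joint between consecutive frames
        total_movement += sum(np.linalg.norm(np.subtract(cur[j], prev[j])) for j in cur)
        # assumed joint names; range of this angle is a simple "angle change" measure
        elbow_angles.append(joint_angle(cur["ShoulderRight"], cur["ElbowRight"], cur["WristRight"]))
    return total_movement, np.ptp(elbow_angles)
```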

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.39-46 / 2021
  • In this paper, we propose a motion recognition method for the movements required at industrial sites in mixed reality. Industrial work involves movements (grasping, lifting, and carrying) across the entire upper body, from the trunk to the arms. Instead of heavy motion-capture equipment or vision-based devices such as Kinect, we use body-worn sensors and wearable devices: two IMU sensors for trunk and shoulder movement and a Myo armband for arm movement. Real-time data from a total of four sensors are fused to enable motion recognition over the whole upper body. In the experiment, the sensors were attached to actual clothing and objects were manipulated after synchronization; the synchronized method produced no errors in either large or small movements. Finally, the performance evaluation averaged 50 frames for single-handed operation on the HoloLens and 60 frames for two-handed operation.
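
As an illustration of the sensor-fusion idea, the following sketch chains orientation readings from trunk and shoulder IMUs and an arm-band sensor into a simple upper-body forward-kinematics chain. The segment lengths, sensor axes, and quaternion conventions are assumptions, not the paper's implementation.

```python
# Minimal sketch: composing IMU / arm-band orientations into upper-body joint positions.
import numpy as np
from scipy.spatial.transform import Rotation as R

TRUNK_LEN, UPPER_ARM_LEN, FOREARM_LEN = 0.45, 0.30, 0.25  # metres (assumed)

def upper_body_positions(q_trunk, q_shoulder, q_arm):
    """Each q_* is an (x, y, z, w) quaternion from one sensor; returns joint positions."""
    pelvis = np.zeros(3)
    r_trunk = R.from_quat(q_trunk)
    shoulder = pelvis + r_trunk.apply([0, TRUNK_LEN, 0])
    r_upper = r_trunk * R.from_quat(q_shoulder)   # shoulder IMU expressed relative to trunk
    elbow = shoulder + r_upper.apply([0, -UPPER_ARM_LEN, 0])
    r_fore = r_upper * R.from_quat(q_arm)         # arm-band orientation relative to upper arm
    wrist = elbow + r_fore.apply([0, -FOREARM_LEN, 0])
    return {"shoulder": shoulder, "elbow": elbow, "wrist": wrist}
```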

Optimization of Pose Estimation Model based on Genetic Algorithms for Anomaly Detection in Unmanned Stores (무인점포 이상행동 인식을 위한 유전 알고리즘 기반 자세 추정 모델 최적화)

  • Sang-Hyeop Lee;Jang-Sik Park
    • Journal of the Korean Society of Industry Convergence / v.26 no.1 / pp.113-119 / 2023
  • In this paper, we propose an optimization of a pose estimation deep learning model for recognizing abnormal behavior in unmanned stores using radio frequencies. The radio frequency used is millimeter wave in the 30 GHz to 300 GHz band; because of its short wavelength and strong directivity, it suffers little diffraction and little interference from radio absorption by objects. A millimeter wave radar is used to avoid the personal information infringement that can occur with conventional CCTV image-based pose estimation. Deep learning-based pose estimation models generally use convolutional neural networks, which combine convolution and pooling layers of various types; there are many possible choices of filter size, filter count, and convolution operation, and even more ways of combining these components, so it is difficult to find the structure and components of the pose estimation model that is optimal for the input data. Unlike conventional millimeter wave-based pose estimation studies, the proposed approach explores the structure and components of the optimal model for the input data with a genetic algorithm, and the optimized model performs well. Data were collected in actual unmanned stores: millimeter wave radar point clouds and Azure Kinect three-dimensional keypoints for collapse and property damage incidents occurring in unmanned stores. The experiment confirmed that the error was reduced compared with the conventional pose estimation model.
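
The sketch below shows one way a genetic algorithm can search over convolutional-layer configurations as described above. The genome encoding, truncation selection, and the train_and_evaluate fitness stub are illustrative assumptions rather than the authors' exact scheme.

```python
# Minimal sketch: genetic search over per-layer (filter count, kernel size) choices.
import random

LAYER_CHOICES = {"filters": [16, 32, 64, 128], "kernel": [3, 5, 7]}

def random_genome(n_layers=4):
    return [(random.choice(LAYER_CHOICES["filters"]),
             random.choice(LAYER_CHOICES["kernel"])) for _ in range(n_layers)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    return [(random.choice(LAYER_CHOICES["filters"]), random.choice(LAYER_CHOICES["kernel"]))
            if random.random() < rate else layer for layer in genome]

def evolve(train_and_evaluate, pop_size=12, generations=10):
    """train_and_evaluate(genome) -> validation accuracy; in practice its results
    would be cached, since training a model per call is expensive."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=train_and_evaluate, reverse=True)
        parents = scored[: pop_size // 2]                     # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=train_and_evaluate)
```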

Gesture Control Gaming for Motoric Post-Stroke Rehabilitation

  • Andi Bese Firdausiah Mansur
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.37-43 / 2023
  • The hospital situation, timing, and patient restrictions have become obstacles to an optimal therapy session. Crowded hospitals can mean tight schedules and shorter therapy periods, which puts post-stroke patients in a dilemma, since they need regular treatment to recover their nervous system. In this work, we propose an in-house, uncomplicated serious game system that can be used for physical therapy. A Kinect camera captures the depth image stream of the human skeleton, and the user then controls the game with hand gestures; voice recognition is deployed to make play easier. Users must complete the given challenges to obtain a greater benefit from the therapy system. Subjects use their upper limbs and hands to capture 3D objects appearing at different speeds and positions; as the challenge grows, speed and position become more demanding and more random, and each captured object raises the score. The scores are then evaluated and correlated with therapy progress. Users were delighted with the system and eager to use it for their daily exercise. The experimental studies compare score and difficulty, which together characterize the user and the game: users adapt quickly to the easy and medium levels, while the high level requires better focus and proper hand-eye synchronization to capture the 3D objects. Statistical analysis of the usability test at a confidence level of α = 0.05 shows that the proposed game is accessible even without specialized training, and it can serve not only for therapy but also for fitness, since it can be used for body exercise. Most users enjoyed the game and familiarized themselves with it quickly, and the evaluation study demonstrates user satisfaction and positive perception during testing. Future work on the proposed serious game may involve haptic devices to stimulate physical sensation.
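
The abstract does not state which statistical test was used, so the following sketch only illustrates one plausible form of an α = 0.05 analysis relating score to difficulty level; the test choice, level coding, and input format are assumptions.

```python
# Minimal sketch: compare scores across easy/medium/hard and test the score-difficulty link.
from scipy import stats

def analyze_usability(easy_scores, medium_scores, hard_scores, alpha=0.05):
    # one-way ANOVA: does difficulty level affect the achieved score?
    f_stat, p_value = stats.f_oneway(easy_scores, medium_scores, hard_scores)
    # rank correlation between difficulty (coded 1, 2, 3) and individual scores
    levels = [1] * len(easy_scores) + [2] * len(medium_scores) + [3] * len(hard_scores)
    rho, rho_p = stats.spearmanr(levels, list(easy_scores) + list(medium_scores) + list(hard_scores))
    return {"anova_p": p_value, "significant": p_value < alpha,
            "spearman_rho": rho, "spearman_p": rho_p}
```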

Real-time Hand Region Detection and Tracking using Depth Information (깊이정보를 이용한 실시간 손 영역 검출 및 추적)

  • Joo, SungIl;Weon, SunHee;Choi, HyungIl
    • KIPS Transactions on Software and Data Engineering / v.1 no.3 / pp.177-186 / 2012
  • In this paper, we propose a real-time approach for detecting and tracking a hand region by analyzing depth images. We build a hand model in advance; the model holds the shape information of a hand. The detection process extracts moving areas in an image, which may be caused by a hand moving in front of the camera. The moving areas are identified by analyzing accumulated difference images and applying a region-growing technique, and each extracted area is compared against the hand model to be verified as a hand region. The tracking process keeps track of the center points of the hand regions across successive frames in three steps. The first step determines a seed point, the point closest to the center point of the previous frame. The second step performs region growing to form a candidate hand region. The third step determines the center point of the hand to be tracked; this point is found with the mean-shift algorithm within a confined search area whose size adapts to the depth information. To verify the effectiveness of our approach, we evaluated its performance while varying the shape and position of the hand as well as the velocity of hand movement.
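
A minimal sketch of the tracking step described above follows: mean-shift of the hand center inside a search window whose size adapts to the measured depth. The window-scaling constants and input format are assumptions, not the authors' code.

```python
# Minimal sketch: depth-adaptive mean-shift of a hand centre over a binary hand mask.
import numpy as np

def adaptive_window(depth_mm, base_mm=800, base_half=60):
    """Half-size of the search window in pixels, scaled inversely with depth."""
    return max(20, int(base_half * base_mm / max(depth_mm, 1)))

def mean_shift(hand_mask, depth_map, start, iters=10):
    """hand_mask: binary array of candidate hand pixels; start: (row, col) seed point."""
    cy, cx = start
    for _ in range(iters):
        half = adaptive_window(depth_map[cy, cx])
        y0, x0 = max(cy - half, 0), max(cx - half, 0)
        ys, xs = np.nonzero(hand_mask[y0:cy + half, x0:cx + half])
        if len(ys) == 0:
            break
        ny, nx = int(ys.mean()) + y0, int(xs.mean()) + x0
        if (ny, nx) == (cy, cx):      # converged: centroid no longer moves
            break
        cy, cx = ny, nx
    return cy, cx
```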

Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services (지능형 전시 서비스 구현을 위한 멀티모달 감정 상태 추정 모형)

  • Lee, Kichun;Choi, So Yun;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.1-14 / 2014
  • Both researchers and practitioners are showing increased interest in interactive exhibition services, which are designed to respond directly to visitor reactions in real time so as to fully engage visitors' interest and enhance their satisfaction. To implement an effective interactive exhibition service, it is essential to adopt intelligent technologies that accurately estimate a visitor's emotional state from responses to the exhibited stimuli. Studies so far have attempted to estimate the human emotional state mostly from either facial expressions or audio responses, but recent research suggests that a multimodal approach that uses multiple responses simultaneously may lead to better estimation. In this context, we propose a new multimodal emotional state estimation model that uses various responses, including facial expressions, gestures, and movements, measured by the Microsoft Kinect sensor. To handle the large amount of sensory data effectively, we use stratified sampling-based MRA (multiple regression analysis) as the estimation method. To validate the usefulness of the proposed model, we collected 602,599 responses and emotional state data with 274 variables from 15 people. Applied to this data set, the model estimated the levels of valence and arousal within a 10-15% error range. Since the proposed model is simple and stable, we expect it to be applied not only in intelligent exhibition services but also in other areas such as e-learning and personalized advertising.
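
The following sketch illustrates the stratified sampling-based multiple regression idea on assumed column names (the stratum variable and feature columns are hypothetical); it is not the authors' actual pipeline.

```python
# Minimal sketch: proportional sampling per stratum, then ordinary multiple regression.
import pandas as pd
from sklearn.linear_model import LinearRegression

def stratified_mra(df, feature_cols, target_col="valence", stratum_col="stratum", frac=0.1):
    # sample each stratum proportionally to keep the large response data set tractable
    sample = (df.groupby(stratum_col, group_keys=False)
                .apply(lambda g: g.sample(frac=frac, random_state=0)))
    # fit one multiple regression per target (valence shown; arousal would be analogous)
    return LinearRegression().fit(sample[feature_cols], sample[target_col])

# usage sketch (hypothetical column names):
# model = stratified_mra(responses_df, ["head_x", "head_y", "gesture_speed"], "valence")
# predicted = model.predict(new_responses[["head_x", "head_y", "gesture_speed"]])
```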

Real-time Hand Region Detection based on Cascade using Depth Information (깊이정보를 이용한 케스케이드 방식의 실시간 손 영역 검출)

  • Joo, Sung Il;Weon, Sun Hee;Choi, Hyung Il
    • KIPS Transactions on Software and Data Engineering / v.2 no.10 / pp.713-722 / 2013
  • This paper proposes a cascade-based method that uses depth information to detect the hand region in real time. To ensure stable and fast detection of the hand region even under lighting changes in the test environment, the method uses only depth-based features and detects the hand region with a classifier built by boosting and cascading. First, to extract features from depth information alone, we compute the difference between the depth value at the center of the input window and the average depth value within each segmented block, and to detect hand regions of all sizes we use the central depth value and a second-order linear model to predict the size of the hand region. The cascade method is then applied for training and recognition with the features extracted from the hand region. The proposed classifier maintains accuracy and improves speed by composing each stage of a single weak classifier and selecting the threshold value that meets the target detection rate with the lowest error rate during training. The trained classifier is used to classify candidate hand regions and determines the final hand region in the final merging stage. Lastly, to verify performance, we conduct quantitative and qualitative comparisons with several conventional AdaBoost algorithms to confirm the efficiency of the proposed hand region detection algorithm.
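
To make the two depth-only ingredients concrete, the sketch below computes the center-versus-block depth feature and a second-order (quadratic) model of hand-window size versus depth. The grid size and the model coefficients are placeholders, not values from the paper.

```python
# Minimal sketch: block depth-difference features and a quadratic size-vs-depth model.
import numpy as np

def block_depth_features(window, grid=4):
    """window: square depth patch; returns centre depth minus each block's mean depth."""
    h, w = window.shape
    centre = window[h // 2, w // 2]
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = window[by * h // grid:(by + 1) * h // grid,
                           bx * w // grid:(bx + 1) * w // grid]
            feats.append(centre - block.mean())
    return np.array(feats)

def predicted_hand_size(depth_mm, coeffs=(1.2e-4, -0.35, 300.0)):
    """Second-order model of window size (pixels) vs. centre depth (mm); the
    coefficients are placeholders that would be fitted from training data."""
    a, b, c = coeffs
    return a * depth_mm ** 2 + b * depth_mm + c
```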

Implementation of Markerless Augmented Reality with Deformable Object Simulation (변형물체 시뮬레이션을 활용한 비 마커기반 증강현실 시스템 구현)

  • Sung, Nak-Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services / v.17 no.4 / pp.35-42 / 2016
  • Recently, much research has focused on markerless augmented reality systems that use the user's face, feet, or hands in order to alleviate the disadvantages of marker-based augmented reality. In addition, most existing augmented reality systems have used rigid objects, since they only need to insert virtual objects and support basic interaction with them. In this paper, unlike restricted, display-bound marker-based augmented reality systems with rigid objects, we designed and implemented a markerless augmented reality system with deformable objects so that it can be applied to a variety of interactive situations with the user. Deformable objects are generally implemented with mass-spring models or finite element models: a mass-spring model allows real-time simulation, while a finite element model achieves physically and mathematically more accurate results. The proposed markerless augmented reality system uses a mass-spring model on a tetrahedron structure to provide real-time simulation. To produce plausible interaction with the deformable object, the method detects and tracks the user's hand with the Kinect SDK and computes the external force applied to the object in the hand from the change in hand position. Based on this force, fourth-order Runge-Kutta integration is applied to compute the next position of the deformable object. In addition, to prevent excessive external force from hand movement, which would spoil the natural behavior of the deformable object, we set a threshold and apply it whenever the hand movement exceeds that threshold. Each experiment was repeated five times, and we analyzed the results in terms of the computational cost of the simulation. We believe the proposed markerless augmented reality system with deformable objects can overcome the weakness of traditional marker-based systems with rigid objects, which are not well suited to fields such as healthcare and education.
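
The simulation step described above can be sketched as a spring plus a thresholded external hand force acting on a mass point, integrated with classical fourth-order Runge-Kutta. The constants and the single-point formulation are illustrative assumptions, not the authors' tetrahedral mass-spring implementation.

```python
# Minimal sketch: one RK4 step of a damped spring with a clamped external hand force.
import numpy as np

MASS, STIFFNESS, DAMPING, FORCE_LIMIT = 0.05, 120.0, 0.8, 5.0  # illustrative constants

def acceleration(x, v, rest_x, f_ext):
    f_ext = np.clip(f_ext, -FORCE_LIMIT, FORCE_LIMIT)   # threshold on the hand force
    spring = -STIFFNESS * (x - rest_x) - DAMPING * v
    return (spring + f_ext) / MASS

def rk4_step(x, v, rest_x, f_ext, dt=1 / 60):
    """x, v, rest_x, f_ext are 3-vectors; returns the next position and velocity."""
    k1x, k1v = v, acceleration(x, v, rest_x, f_ext)
    k2x, k2v = v + 0.5 * dt * k1v, acceleration(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v, rest_x, f_ext)
    k3x, k3v = v + 0.5 * dt * k2v, acceleration(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v, rest_x, f_ext)
    k4x, k4v = v + dt * k3v, acceleration(x + dt * k3x, v + dt * k3v, rest_x, f_ext)
    x_next = x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v_next = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x_next, v_next
```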

Investigation for Shoulder Kinematics Using Depth Sensor-Based Motion Analysis System (깊이 센서 기반 모션 분석 시스템을 사용한 어깨 운동학 조사)

  • Lee, Ingyu;Park, Jai Hyung;Son, Dong-Wook;Cho, Yongun;Ha, Sang Hoon;Kim, Eugene
    • Journal of the Korean Orthopaedic Association / v.56 no.1 / pp.68-75 / 2021
  • Purpose: The purpose of this study was to analyze the motion of the shoulder joint dynamically with a depth sensor-based motion analysis system in a normal group and in patients with shoulder disease, and to report the results along with a review of the relevant literature. Materials and Methods: Seventy subjects participated in the study: 30 subjects in the normal group and 40 in the group of patients with shoulder disease. The patients with shoulder disease were subdivided into four disease groups: adhesive capsulitis, impingement syndrome, rotator cuff tear, and cuff tear arthropathy. With each subject repeating abduction and adduction three times, the angle over time was measured using the depth sensor-based motion analysis system, and the maximum abduction angle (θmax), maximum abduction angular velocity (ωmax), maximum adduction angular velocity (ωmin), and abduction/adduction time ratio (tabd/tadd) were calculated. These parameters were compared between the 30 subjects in the normal group and the 40 patients, and also between the normal group and each of the four disease subgroups (10 patients each), giving five groups in total. Results: Compared with the normal group, the maximum abduction angle (θmax), maximum abduction angular velocity (ωmax), and maximum adduction angular velocity (ωmin) were lower, and the abduction/adduction time ratio (tabd/tadd) was higher, in the patients with shoulder disease. Comparison of the subdivided disease groups revealed a lower maximum abduction angle (θmax) and maximum abduction angular velocity (ωmax) in the adhesive capsulitis and cuff tear arthropathy groups than in the normal group. In addition, the abduction/adduction time ratio (tabd/tadd) was higher in the adhesive capsulitis, rotator cuff tear, and cuff tear arthropathy groups than in the normal group. Conclusion: Evaluating the shoulder joint with the depth sensor-based motion analysis system made it possible to measure the range of motion and dynamic motion parameters such as angular velocity. These results show that accurate evaluation of shoulder joint function and an in-depth understanding of shoulder diseases are possible.
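
The four reported parameters can be derived from a sampled abduction-angle curve as in the sketch below. The input format and frame rate are assumptions, and splitting abduction from adduction at the angle peak is a simplification of a single repetition.

```python
# Minimal sketch: θmax, ωmax, ωmin, and tabd/tadd from a sampled abduction-angle curve.
import numpy as np

def shoulder_parameters(angles_deg, fps=30):
    """angles_deg: abduction angle per frame (degrees) for one repetition, sampled at fps."""
    angles = np.asarray(angles_deg, dtype=float)
    omega = np.gradient(angles) * fps            # angular velocity in deg/s
    theta_max = angles.max()                     # maximum abduction angle
    omega_max = omega.max()                      # fastest abduction (positive velocity)
    omega_min = omega.min()                      # fastest adduction (negative velocity)
    peak = angles.argmax()
    t_abd = peak / fps                           # time spent abducting (up to the peak)
    t_add = (len(angles) - 1 - peak) / fps       # time spent adducting (after the peak)
    return theta_max, omega_max, omega_min, t_abd / t_add
```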