• Title/Summary/Keyword: Kinect Sensor

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo; Lee, Daeho; Park, Youngtae
    • Journal of Digital Convergence, v.11 no.7, pp.157-163, 2013
  • In this paper, we propose a real-time hand gesture interface method based on vision and depth information, using finger joint estimation. First, the left and right hand areas are segmented after mapping the visual image onto the depth image, followed by labeling and boundary noise removal. The centroid point and rotation angle of each hand area are then calculated. Next, circles of increasing radius are expanded from the centroid of the hand, and the joint points and end points of the fingers are detected as the midway points between crossings of the hand boundary; from these, the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers using both hands, accuracy exceeded 90% and performance exceeded 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
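
The circle-expansion step described in this abstract lends itself to a short sketch. The code below is a minimal, hypothetical rendering of that idea in Python with OpenCV, assuming a binary hand mask has already been segmented from the depth image; the function names, radii, and sampling density are our own assumptions, not the authors' implementation.

```python
# Minimal sketch of the circle-expansion idea; names and thresholds are
# illustrative assumptions, not the authors' code.
import numpy as np
import cv2

def hand_centroid(mask):
    """Centroid of a binary hand mask via image moments."""
    m = cv2.moments(mask, binaryImage=True)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def boundary_crossings(mask, center, radius, samples=360):
    """Sample a circle around the centroid and return the angles at which
    the hand/background boundary is crossed (candidate finger edges)."""
    cx, cy = center
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
    on_hand = mask[ys, xs] > 0
    # A crossing is where consecutive samples flip between hand and background.
    flips = np.nonzero(on_hand != np.roll(on_hand, 1))[0]
    return angles[flips]

def finger_points(mask, max_radius=120, step=4):
    """Expand circles outward; the midway points between paired boundary
    crossings approximate finger joint/end points at each radius."""
    center = hand_centroid(mask)
    points = []
    for r in range(step, max_radius, step):
        cross = boundary_crossings(mask, center, r)
        # Pair consecutive crossings and keep the midway angle of each pair.
        for a0, a1 in zip(cross[::2], cross[1::2]):
            mid = (a0 + a1) / 2.0
            points.append((center[0] + r * np.cos(mid),
                           center[1] + r * np.sin(mid), r))
    return points
```

Tracking how these midway points persist as the radius grows is what would let such a method separate finger joints from fingertips.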

Development of Home Training System with Self-Controlled Feedback for Stroke Patients (키넥트 센서를 이용한 자기통제 피드백이 가능한 가정용 훈련프로그램 개발)

  • Kim, Chang Geol; Song, Byung Seop
    • Journal of Korea Society of Industrial Information Systems, v.18 no.1, pp.37-45, 2013
  • Most stroke patients experience aftereffects such as motor, sensory, and cognitive disorders and must undergo rehabilitation therapy. Consistent rehabilitation training at home is known to be more effective than therapy delivered only in the hospital. A few home training programs have been developed, but they do not give the client any appropriate feedback. We therefore developed a home training program that provides appropriate feedback to clients using the Kinect sensor, which can analyze a user's 3D motion. With this system, the client receives feedback such as knowledge of performance, knowledge of results, and self-controlled feedback. The program is expected to be more effective than existing programs.
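
As a rough illustration of how a skeleton stream from a sensor such as the Kinect could drive the feedback described above, the sketch below computes a joint angle from three tracked 3D positions and maps it to a knowledge-of-performance message. The joint names, target angle, and messages are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch of skeleton-driven feedback; joints, thresholds, and
# messages are invented for illustration.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def feedback(angle, target=160.0, tolerance=10.0):
    """Knowledge-of-performance style message for one repetition."""
    if angle >= target:
        return "Good: full extension reached."
    if angle >= target - tolerance:
        return "Almost there: extend the arm a little further."
    return "Keep going: raise the arm higher."

# Example frame: shoulder, elbow, wrist positions in meters (hypothetical).
shoulder, elbow, wrist = (0.0, 1.4, 2.0), (0.25, 1.3, 2.0), (0.5, 1.25, 2.0)
print(feedback(joint_angle(shoulder, elbow, wrist)))
```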

ILOCAT: an Interactive GUI Toolkit to Acquire 3D Positions for Indoor Location Based Services (ILOCAT: 실내 위치 기반 서비스를 위한 3차원 위치 획득 인터랙티브 GUI Toolkit)

  • Kim, Seokhwan
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.7, pp.866-872, 2020
  • Indoor location-based services provide services based on the distance between an object and a person. Recently, such services have often been implemented using inexpensive depth sensors such as the Kinect. The depth sensor can measure the position of a person, but the position of an object must be acquired manually with a tool. Acquiring the 3D position of an object requires 3D interaction, which is difficult for a general user. A GUI (graphical user interface) is relatively easy for a general user but ill-suited to acquiring 3D positions. This study proposes the Interactive LOcation Context Authoring Toolkit (ILOCAT), which enables a general user to easily acquire the 3D position of an object in real space through a GUI. This paper describes the interaction design and implementation of ILOCAT.
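
The core mapping a GUI toolkit like ILOCAT must perform, turning a 2D selection on the depth image into a 3D position, can be sketched with standard pinhole back-projection. The intrinsic parameters below are typical Kinect-like values chosen for illustration; the paper's actual interaction design is not reproduced here.

```python
# Back-project a 2D click on the depth image to a 3D camera-space point.
# The intrinsics (fx, fy, cx, cy) are assumed, Kinect v1-like values.
def click_to_3d(u, v, depth_m, fx=585.0, fy=585.0, cx=320.0, cy=240.0):
    """Pinhole back-projection of pixel (u, v) at depth `depth_m` (meters)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a click at pixel (400, 260) where the depth map reads 2.1 m.
print(click_to_3d(400, 260, 2.1))
```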

Investigation for Shoulder Kinematics Using Depth Sensor-Based Motion Analysis System (깊이 센서 기반 모션 분석 시스템을 사용한 어깨 운동학 조사)

  • Lee, Ingyu; Park, Jai Hyung; Son, Dong-Wook; Cho, Yongun; Ha, Sang Hoon; Kim, Eugene
    • Journal of the Korean Orthopaedic Association, v.56 no.1, pp.68-75, 2021
  • Purpose: The purpose of this study was to dynamically analyze shoulder joint motion with a depth sensor-based motion analysis system in a normal group and in patients with shoulder disease, and to report the results with a review of the relevant literature. Materials and Methods: Seventy subjects participated: 30 in the normal group and 40 in the group of patients with shoulder disease. The patients were subdivided into four disease groups: adhesive capsulitis, impingement syndrome, rotator cuff tear, and cuff tear arthropathy. Each subject repeated abduction and adduction three times while the angle over time was measured with the depth sensor-based motion analysis system. The maximum abduction angle (θ_max), maximum abduction angular velocity (ω_max), maximum adduction angular velocity (ω_min), and abduction/adduction time ratio (t_abd/t_add) were calculated, and these parameters were compared between the 30 normal subjects and the 40 patients. In addition, the normal group was compared with each of the four disease subgroups (10 patients each), giving five groups in total. Results: Compared with the normal group, θ_max, ω_max, and ω_min were lower and t_abd/t_add was higher in the patients with shoulder disease. Comparison of the disease subgroups revealed lower θ_max and ω_max in the adhesive capsulitis and cuff tear arthropathy groups than in the normal group. In addition, t_abd/t_add was higher in the adhesive capsulitis, rotator cuff tear, and cuff tear arthropathy groups than in the normal group. Conclusion: Evaluation of the shoulder joint with the depth sensor-based motion analysis system made it possible to measure both the range of motion and dynamic motion parameters such as angular velocity. These results suggest that accurate evaluation of shoulder joint function and an in-depth understanding of shoulder diseases are possible with this approach.
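
All four reported parameters can be derived from a single abduction-angle time series θ(t). The sketch below shows one plausible computation with NumPy on synthetic data; the sampling rate, cycle shape, and function names are assumptions for illustration, not the study's pipeline.

```python
# Sketch of computing θ_max, ω_max, ω_min, and t_abd/t_add from an
# abduction-angle time series; the data below are synthetic.
import numpy as np

def shoulder_parameters(t, theta):
    """Return (θ_max, ω_max, ω_min, t_abd/t_add) from time stamps t (s)
    and abduction angles theta (deg) over one abduction-adduction cycle."""
    theta, t = np.asarray(theta, float), np.asarray(t, float)
    omega = np.gradient(theta, t)          # angular velocity (deg/s)
    peak = int(np.argmax(theta))           # end of the abduction phase
    theta_max = theta[peak]
    omega_max = omega[:peak + 1].max()     # fastest abduction
    omega_min = omega[peak:].min()         # fastest adduction (negative)
    t_abd = t[peak] - t[0]
    t_add = t[-1] - t[peak]
    return theta_max, omega_max, omega_min, t_abd / t_add

# Synthetic half-sine cycle: abduct for 2 s, adduct for 2 s (hypothetical).
t = np.linspace(0.0, 4.0, 200)
theta = 150.0 * np.sin(np.pi * t / 4.0)
print(shoulder_parameters(t, theta))
```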

Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services (지능형 전시 서비스 구현을 위한 멀티모달 감정 상태 추정 모형)

  • Lee, Kichun; Choi, So Yun; Kim, Jae Kyeong; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.1-14, 2014
  • Both researchers and practitioners are showing increased interest in interactive exhibition services, which respond directly to visitor responses in real time so as to fully engage visitors' interest and enhance their satisfaction. To implement an effective interactive exhibition service, it is essential to adopt intelligent technologies that accurately estimate a visitor's emotional state from responses to the exhibited stimuli. Most studies to date have attempted to estimate the human emotional state from either facial expressions or audio responses alone; however, recent research suggests that a multimodal approach, which uses multiple responses simultaneously, may yield better estimates. In this context, we propose a new multimodal emotional state estimation model that uses various responses, including facial expressions, gestures, and movements measured by the Microsoft Kinect sensor. To handle the large amount of sensory data effectively, we use stratified sampling-based multiple regression analysis (MRA) as the estimation method. To validate the usefulness of the proposed model, we collected 602,599 responses and emotional state data with 274 variables from 15 people. Applied to this data set, our model estimated the levels of valence and arousal within a 10%-15% error range. Since the proposed model is simple and stable, we expect it to be applied not only in intelligent exhibition services but also in other areas such as e-learning and personalized advertising.
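
To make the estimation method concrete, the sketch below combines stratified sampling with ordinary least-squares multiple regression on toy data. The features, strata, and coefficients are invented for illustration; they are not the study's 274 variables or its fitted model.

```python
# Hedged sketch of stratified-sampling-based multiple regression; the data
# and strata are synthetic stand-ins for Kinect-derived features.
import numpy as np

rng = np.random.default_rng(0)

def stratified_sample(X, y, strata, per_stratum=100):
    """Draw an equal-size random sample from each stratum so the regression
    data stay balanced across, e.g., gesture categories."""
    idx = []
    for s in np.unique(strata):
        members = np.nonzero(strata == s)[0]
        take = min(per_stratum, members.size)
        idx.extend(rng.choice(members, size=take, replace=False))
    idx = np.asarray(idx)
    return X[idx], y[idx]

def fit_mra(X, y):
    """Ordinary least-squares multiple regression with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Toy data: 10,000 frames, 5 features, 4 strata (all hypothetical).
X = rng.normal(size=(10_000, 5))
strata = rng.integers(0, 4, size=10_000)
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + rng.normal(scale=0.1, size=10_000)
Xs, ys = stratified_sample(X, y, strata)
print(fit_mra(Xs, ys))
```

Sampling each stratum equally before fitting keeps a dominant response category (for example, one frequent gesture) from swamping the regression, which is the practical point of the stratified-sampling step.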