• Title/Summary/Keyword: kinect sensor


Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim;Daehwan Kim;Dongchun Lee;Kisuk Lee;Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.7
    • /
    • pp.315-324
    • /
    • 2023
  • A simple and accurate whole-body rehabilitation interaction technology using immersive digital content is needed for elderly patients, whose age-related diseases are steadily increasing. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh feature point-based transformation, AR marker-based transformation, and body recognition-based transformation. The mesh feature point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and computing a transform matrix from them. This method requires manual work and is less usable, but achieves a relatively high accuracy of 8.5 mm. The AR marker-based method uses AR and QR markers recognized simultaneously by HoloLens and Kinect to achieve an acceptable accuracy of 11.2 mm. The body recognition-based transformation aligns the coordinate systems using the position of the head or HMD recognized by both devices together with the positions of both hands or controllers. This method has lower accuracy, but requires no additional tools or manual work, making it the most user-friendly. Additionally, we reduced the error by more than 10% using RANSAC as a post-processing step. These three methods can be applied selectively depending on the usability and accuracy required by the content. In this study, we validated the technology by applying it to "Thunder Punch" and rehabilitation therapy content.
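The mesh feature point-based alignment above, three matched points yielding a transform matrix, can be sketched as a rigid-transform estimate. This is a minimal illustration under the abstract's three-point setup; the Kabsch algorithm used here is a standard choice for this problem, not necessarily the authors' exact method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ≈ R @ src + t,
    from matched 3D points (Kabsch algorithm; three non-collinear
    correspondences are enough)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With more than three correspondences, a RANSAC-style post-process (as the abstract mentions) would repeat this fit on random subsets and keep the estimate with the largest inlier consensus.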

A Study on Parallax Registration for User Location on the Transparent Display using the Kinect Sensor (키넥트 센서를 활용한 투명 디스플레이에서의 사용자 위치에 대한 시계 정합 연구)

  • Nam, Byeong-Wook;Lee, Kyung-Ho;Lee, Jung-Min;Wu, Yuepeng
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.28 no.6
    • /
    • pp.599-606
    • /
    • 2015
  • The International Hydrographic Organization (IHO) adopted S-100 as the international geographic information system (GIS) standard for general use in the maritime sector. Accordingly, next-generation systems that support navigation information based on this GIS standard technology are being developed, including an AR-based navigation information system that supports navigation by overlaying navigation information on CCTV images. In this study, we considered applying a transparent display as a way to support this system efficiently. When a transparent display is applied, we addressed the image distortion caused by the wide-angle lens used to secure parallax, along with the registration of the view to the user's location, and demonstrated the applicability of the technology by developing a prototype.

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.6
    • /
    • pp.565-570
    • /
    • 2013
  • The mirror neuron regions distributed in the cortical area handle intention recognition based on imitative learning of an observed, goal-directed action acquired from visual information. In this paper, an automated intention recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks whose input is a sequential feature vector set describing the behaviors of the target object and actor, and whose output is motor data that can be used to perform the corresponding intentional action, produced through the imitative learning and estimation procedures of the proposed model. The intention recognition framework takes its input from a KINECT sensor and computes the corresponding motor data within a virtual robot simulation environment, based on intention-related scenarios with a limited experimental space and a specified target object.

Motion Control of a Mobile Robot Using Natural Hand Gesture (자연스런 손동작을 이용한 모바일 로봇의 동작제어)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.64-70
    • /
    • 2014
  • In this paper, we propose a method for giving motion commands to a mobile robot by recognizing human hand gestures. Earlier hand-gesture robot-control systems used several kinds of pre-arranged gestures, so the commanding motions were unnatural; they also forced users to learn the pre-arranged gestures, which was inconvenient. To solve this problem, many studies have sought other ways for machines to recognize hand movement. In this paper, we used a 3D camera to obtain color and depth data, from which the human hand is located and its movement recognized. We used an HMM to let the proposed system perceive the movement; the observed data are then transferred to the robot, making it move in the intended direction.
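The HMM-based recognition described here can be illustrated with the forward algorithm: each gesture class gets its own HMM, and an observed sequence is assigned to the class whose model gives the highest likelihood. This is a toy sketch with discrete observation symbols and made-up parameters, not the paper's trained models (which operate on continuous hand features):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (initial distribution pi, transition matrix A, emission matrix B),
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        logp += np.log(s)
        alpha = alpha / s
    return logp

def classify(obs, models):
    """Pick the gesture whose HMM explains the sequence best;
    models maps gesture name -> (pi, A, B)."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```

In the paper's setting the observation symbols would come from quantized hand trajectories rather than being hand-written as here.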

Recognition of Natural Hand Gesture by Using HMM (HMM을 이용한 자연스러운 손동작 인식)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.5
    • /
    • pp.639-645
    • /
    • 2012
  • In this paper, we propose a method for giving motion commands to a mobile robot by recognizing human hand gestures. Earlier hand-gesture robot-control systems used several kinds of pre-arranged gestures, so the commanding motions were unnatural; they also forced users to learn the pre-arranged gestures, which was inconvenient. To solve this problem, many studies have sought other ways for machines to recognize hand movement. In this paper, we used a 3D camera to obtain color and depth data, from which the human hand is located and its movement recognized. We used an HMM to let the proposed system perceive the movement; the observed data are then transferred to the robot, making it move in the intended direction.

An Extraction Method of Meaningful Hand Gesture for a Robot Control (로봇 제어를 위한 의미 있는 손동작 추출 방법)

  • Kim, Aram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.27 no.2
    • /
    • pp.126-131
    • /
    • 2017
  • In this paper, we propose a method to extract the meaningful motion from among the various hand gestures made while giving commands to robots. When giving a command to a robot, a person's hand gestures can be divided into a preparation motion, a main motion, and a finishing motion. The main motion is the meaningful one that transmits the command; the others are meaningless auxiliary motions surrounding it. Therefore, only the main motion must be extracted from the continuous hand gestures. In addition, people may move their hands unconsciously, and the robot must also judge these motions to be meaningless. In this study, we extract human skeleton data from a depth image obtained with a Kinect v2 sensor and extract hand-location data from it. Using a Kalman filter, we track the location of the hand and distinguish whether a hand motion is meaningful or meaningless, recognizing the gesture with a hidden Markov model.
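The Kalman-filter hand tracking mentioned above can be sketched with a constant-velocity model over noisy 3D hand positions; the filtered speed estimate could then feed a meaningful/meaningless decision. This is an illustrative sketch only: the noise parameters q and r are assumptions, not the paper's values.

```python
import numpy as np

def kalman_track(zs, dt=1.0 / 30.0, q=1e-2, r=1e-3):
    """Constant-velocity Kalman filter over noisy 3D hand positions.
    State is [x, y, z, vx, vy, vz]; only position is observed.
    Returns a list of (filtered position, speed estimate) per frame."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observation: position only
    Q, R = q * np.eye(6), r * np.eye(3)
    x, P = np.zeros(6), np.eye(6)
    x[:3] = zs[0]
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)            # correct with measurement
        P = (np.eye(6) - K @ H) @ P
        out.append((x[:3].copy(), float(np.linalg.norm(x[3:]))))
    return out
```

A simple gate could then treat frames whose speed estimate stays below a threshold as meaningless, running the HMM recognition only on the remaining segments.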

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence
    • /
    • v.11 no.7
    • /
    • pp.157-163
    • /
    • 2013
  • In this paper, we propose a vision and depth information based real-time hand gesture interface method using finger joint estimation. The areas of the left and right hands are segmented after mapping the visual image onto the depth image, and labeling and boundary-noise removal are performed. Then, the centroid point and rotation angle of each hand area are calculated. Afterwards, circles are expanded outward from the centroid of the hand, and the joint points and end points of the fingers are detected by taking the midpoints of the crossings with the hand boundary, from which the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers using both hands, the accuracy exceeded 90% and the performance exceeded 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
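The circle-expansion step can be sketched as sampling a circle of growing radius on the binary hand mask and recording where it crosses the hand boundary; at radii beyond the palm, each inside run marks one finger. This is a simplified sketch of the idea, not the authors' implementation:

```python
import numpy as np

def circle_crossings(mask, center, radius, n=360):
    """Sample a circle of the given radius on a binary hand mask and
    return the inside/outside pattern plus the sample indices where the
    circle enters and leaves the hand region."""
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ys = np.clip((center[0] + radius * np.sin(ang)).astype(int), 0, mask.shape[0] - 1)
    xs = np.clip((center[1] + radius * np.cos(ang)).astype(int), 0, mask.shape[1] - 1)
    inside = mask[ys, xs].astype(int)
    d = np.diff(np.r_[inside, inside[0]])   # wrap around so a run at index 0 closes
    enters, leaves = np.where(d == 1)[0], np.where(d == -1)[0]
    return inside, enters, leaves
```

Expanding `radius` step by step from the palm centroid and taking the midpoint of each inside run approximates the joint and end points along each finger, in the spirit of the abstract's description.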

Development of Home Training System with Self-Controlled Feedback for Stroke Patients (키넥트 센서를 이용한 자기통제 피드백이 가능한 가정용 훈련프로그램 개발)

  • Kim, Chang Geol;Song, Byung Seop
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.18 no.1
    • /
    • pp.37-45
    • /
    • 2013
  • Most stroke patients who experience aftereffects such as motor, sensory, and cognitive disorders must undergo rehabilitation therapy. Consistent rehabilitation training at home is known to be more effective than therapy in the hospital alone. A few home training programs have been developed, but they do not give the client any appropriate feedback. Therefore, we developed a home training program that provides appropriate feedback to clients using the Kinect sensor, which can analyze the user's 3D motion. With it, the client can obtain feedback such as knowledge of performance, knowledge of results, and self-controlled feedback. The program is expected to be more effective than existing programs.

ILOCAT: an Interactive GUI Toolkit to Acquire 3D Positions for Indoor Location Based Services (ILOCAT: 실내 위치 기반 서비스를 위한 3차원 위치 획득 인터랙티브 GUI Toolkit)

  • Kim, Seokhwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.7
    • /
    • pp.866-872
    • /
    • 2020
  • Indoor location-based services provide services based on the distance between an object and a person. Recently, they are often implemented using inexpensive depth sensors such as Kinect. The depth sensor can measure the position of a person, but the position of an object must be acquired manually with a tool. Acquiring the 3D position of an object requires 3D interaction, which is difficult for general users. A graphical user interface (GUI) is relatively easy for general users but ill-suited to gathering 3D positions. This study proposes the Interactive LOcation Context Authoring Toolkit (ILOCAT), which enables a general user to easily acquire the 3D position of an object in real space through a GUI. This paper describes the interaction design and implementation of ILOCAT.
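One way a toolkit like ILOCAT could turn a GUI click on a depth image into a 3D object position is pinhole back-projection of the clicked pixel and its depth value. This is a generic sketch, not ILOCAT's actual code; fx, fy, cx, cy are the depth camera's intrinsics, and the values in the usage below are illustrative, not Kinect's calibrated ones:

```python
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a clicked depth-image pixel (u, v) with its depth in
    meters into a 3D point in camera space via the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Illustrative intrinsics: a click at the principal point maps onto the
# optical axis at the measured depth.
p = pixel_to_3d(256.0, 212.0, 2.0, 365.0, 365.0, 256.0, 212.0)
```

The person-to-object distance the service needs is then just the norm of the difference between this point and the sensor's tracked body position, once both are in the same camera space.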

Investigation for Shoulder Kinematics Using Depth Sensor-Based Motion Analysis System (깊이 센서 기반 모션 분석 시스템을 사용한 어깨 운동학 조사)

  • Lee, Ingyu;Park, Jai Hyung;Son, Dong-Wook;Cho, Yongun;Ha, Sang Hoon;Kim, Eugene
    • Journal of the Korean Orthopaedic Association
    • /
    • v.56 no.1
    • /
    • pp.68-75
    • /
    • 2021
  • Purpose: The purpose of this study was to analyze the motion of the shoulder joint dynamically using a depth sensor-based motion analysis system in a normal group and in patients with shoulder disease, and to report the results along with a review of the relevant literature. Materials and Methods: Seventy subjects participated in the study: 30 in the normal group and 40 in the shoulder-disease group. The patients with shoulder disease were subdivided into four disease groups: adhesive capsulitis, impingement syndrome, rotator cuff tear, and cuff tear arthropathy. While subjects repeated abduction and adduction three times, the angle over time was measured with the depth sensor-based motion analysis system, and the maximum abduction angle (θmax), the maximum abduction angular velocity (ωmax), the maximum adduction angular velocity (ωmin), and the abduction/adduction time ratio (tabd/tadd) were calculated. These parameters were compared between the 30 normal subjects and the 40 patients, and also between the normal group and each of the four disease subgroups (10 patients each). Results: Compared to the normal group, the maximum abduction angle (θmax), the maximum abduction angular velocity (ωmax), and the maximum adduction angular velocity (ωmin) were lower, and the abduction/adduction time ratio (tabd/tadd) was higher, in the patients with shoulder disease. Comparison of the disease subgroups revealed a lower maximum abduction angle (θmax) and maximum abduction angular velocity (ωmax) in the adhesive capsulitis and cuff tear arthropathy groups than in the normal group. In addition, the abduction/adduction time ratio (tabd/tadd) was higher in the adhesive capsulitis, rotator cuff tear, and cuff tear arthropathy groups than in the normal group.
Conclusion: Evaluating the shoulder joint with the depth sensor-based motion analysis system made it possible to measure the range of motion and dynamic motion parameters such as angular velocity. These results show that accurate evaluation of shoulder joint function and an in-depth understanding of shoulder diseases are possible.
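The parameters this study reports (θmax, ωmax, ωmin, tabd/tadd) can be computed directly from an angle-time series. A sketch, under the assumption that the abduction phase ends at the angle maximum, which matches the definitions above:

```python
import numpy as np

def shoulder_motion_params(t, angle):
    """From an abduction-angle time series (degrees), compute the maximum
    abduction angle, the peak abduction and adduction angular velocities,
    and the abduction/adduction time ratio.  Assumes one abduction phase
    ending at the angle maximum, followed by one adduction phase."""
    t, angle = np.asarray(t, float), np.asarray(angle, float)
    w = np.gradient(angle, t)          # angular velocity, deg/s
    i_peak = int(np.argmax(angle))
    theta_max = angle[i_peak]
    w_max = w[: i_peak + 1].max()      # fastest abduction
    w_min = w[i_peak:].min()           # fastest adduction (negative)
    ratio = (t[i_peak] - t[0]) / (t[-1] - t[i_peak])
    return theta_max, w_max, w_min, ratio
```

On real sensor output the angle series would first be smoothed (the Kalman filter or a low-pass filter) before differentiating, since np.gradient amplifies depth-sensor jitter.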