• Title/Summary/Keyword: Kinect Sensor

Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim;Daehwan Kim;Dongchun Lee;Kisuk Lee;Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering / v.12 no.7 / pp.315-324 / 2023
  • A simple and accurate whole-body rehabilitation interaction technology using immersive digital content is needed for the steadily growing population of elderly patients with age-related diseases. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh feature point-based transformation, AR marker-based transformation, and body recognition-based transformation. The mesh feature point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and computing a transform matrix. This method requires manual work and has lower usability, but achieves a relatively high accuracy of 8.5 mm. The AR marker-based method uses AR and QR markers recognized by HoloLens and Kinect simultaneously to achieve an acceptable accuracy of 11.2 mm. The body recognition-based transformation aligns the coordinate systems using the position of the head or HMD recognized by both devices together with the positions of both hands or controllers. This method has lower accuracy, but requires no additional tools or manual work, making it more user-friendly. Additionally, we reduced the error by more than 10% using RANSAC as a post-processing step. These three methods can be applied selectively depending on the usability and accuracy required by the content. In this study, we validated the technology by applying it to the "Thunder Punch" and rehabilitation therapy content.
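
The mesh feature point-based alignment above reduces to estimating a rigid transform between corresponding 3D points, which the standard Kabsch procedure solves in closed form; RANSAC can then reject outlier correspondences, in the spirit of the paper's post-processing. A minimal sketch in Python, with placeholder point values rather than data from the paper:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch algorithm)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    Hm = (src - src_mean).T @ (dst - dst_mean)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(Hm)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, dst_mean - R @ src_mean

# Hypothetical correspondences: three mesh feature points expressed in the
# Kinect frame and in the HoloLens frame (meters).
kinect_pts = np.array([[0.10, 0.00, 1.50],
                       [0.50, 0.05, 1.45],
                       [0.30, 0.40, 1.55]])
holo_pts = np.array([[-0.20, 0.10, 0.90],
                     [0.20, 0.15, 0.85],
                     [0.00, 0.50, 0.95]])

R, t = rigid_transform(kinect_pts, holo_pts)
print(R @ kinect_pts[0] + t)   # maps a Kinect point into the HoloLens frame
```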

A Study on Parallax Registration for User Location on the Transparent Display using the Kinect Sensor (키넥트 센서를 활용한 투명 디스플레이에서의 사용자 위치에 대한 시계 정합 연구)

  • Nam, Byeong-Wook;Lee, Kyung-Ho;Lee, Jung-Min;Wu, Yuepeng
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.6 / pp.599-606 / 2015
  • The International Hydrographic Organization (IHO) adopted S-100 as the international Geographic Information System (GIS) standard for general use in the maritime sector. Accordingly, next-generation systems that provide navigation information based on this GIS standard technology are being developed, including an AR-based navigation information system that overlays navigation information on CCTV images. In this study, we considered the application of a transparent display as a way to support such a system efficiently. For the transparent display case, we addressed the image distortion caused by the wide-angle lens used to secure parallax, registered the user's view to the display based on the user position measured by the Kinect sensor, and demonstrated the applicability of the technology by developing a prototype.
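
Read together with the title, the registration problem is geometric: each overlay must be drawn where the sight line from the Kinect-tracked user position to the real-world target pierces the display plane. A minimal sketch of that line-plane intersection, with all coordinates and frames assumed for illustration:

```python
import numpy as np

def screen_point(eye, target, plane_point, plane_normal):
    """Intersect the eye->target sight line with the display plane and
    return the 3D point where the overlay should be drawn."""
    d = target - eye
    denom = d @ plane_normal
    if abs(denom) < 1e-9:
        raise ValueError("sight line is parallel to the display")
    s = ((plane_point - eye) @ plane_normal) / denom
    return eye + s * d

# Hypothetical frames: display plane at z = 0, eye position from the
# Kinect skeleton, target (e.g., a vessel beyond the window) behind it.
eye = np.array([0.2, 1.6, 1.0])         # meters
target = np.array([5.0, 2.0, -20.0])
p = screen_point(eye, target, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(p)   # map to pixels using the display's physical size and resolution
```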

Obstacle Avoidance of Indoor Mobile Robot using RGB-D Image Intensity (RGB-D 이미지 인텐시티를 이용한 실내 모바일 로봇 장애물 회피)

  • Kwon, Ki-Hyeon;Lee, Hyung-Bong
    • Journal of the Korea Society of Computer and Information / v.19 no.10 / pp.35-42 / 2014
  • It is possible to improve obstacle avoidance capability by training on and recognizing the obstacles present in a given indoor environment. We propose a technique that uses the underlying intensity values, together with an intensity map, from RGB-D images produced by the Kinect sensor, and recognizes obstacles within a fixed distance. We test the accuracy and execution time of pattern recognition algorithms such as PCA, ICA, LDA, and SVM to show the feasibility of this recognition. In a comparison between RGB-D data and intensity data, RGB-D data achieved a 4.2% higher accuracy rate, but intensity data was 29% and 31% faster in training time, and 70% and 33% faster in testing time, for LDA and SVM, respectively. Thus, LDA and SVM offer good accuracy and better training/testing times for obstacle avoidance based on intensity data on a mobile robot.
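
As a sketch of the classification stage being compared, intensity patches can be flattened into feature vectors and fed to LDA and SVM with scikit-learn; the data below is synthetic and stands in for the paper's Kinect-derived dataset:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

# Synthetic stand-in: N intensity patches (32x32) flattened to vectors.
rng = np.random.default_rng(0)
X = rng.random((600, 32 * 32)).astype(np.float32)
y = rng.integers(0, 3, size=600)            # 3 hypothetical obstacle classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)                     # training-time comparison point
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}")
```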

Evaluation of Accuracy and Inaccuracy of Depth Sensor based Kinect System for Motion Analysis in Specific Rotational Movement for Balance Rehabilitation Training (균형 재활 훈련을 위한 특정 회전 움직임에서 피검자 동작 분석을 위한 깊이 센서 기반 키넥트 시스템의 정확성 및 부정확성 평가)

  • Kim, ChoongYeon;Jung, HoHyun;Jeon, Seong-Cheol;Jang, Kyung Bae;Chun, Keyoung Jin
    • Journal of Biomedical Engineering Research / v.36 no.5 / pp.228-234 / 2015
  • Balance ability decreases significantly in the elderly because of deterioration of the neuromuscular regulatory mechanisms. Several studies have investigated methods of improving balance ability using real-time systems, but these are limited by expensive test equipment and the need for specialized resources. Recently, Kinect systems based on depth data have been applied to address these limitations. Little information is available, however, about the accuracy/inaccuracy of the Kinect system, particularly for motion analysis in evaluating the effectiveness of rehabilitation training. Therefore, the aim of the current study was to evaluate the accuracy/inaccuracy of the Kinect system in specific rotational movements for balance rehabilitation training. Six healthy male adults with no musculoskeletal disorders participated in the experiment. The participants' movements were induced by controlling the base plane of the balance training equipment in the AP (anterior-posterior), ML (medial-lateral), and right and left diagonal directions. The dynamic motions of the subjects were measured with two Kinect depth-sensor systems and, for comparative evaluation, a three-dimensional motion capture system with eight infrared cameras. The error rates of the Kinect system for hip and knee joint alteration, relative to the infrared-camera-based motion capture system, were smaller in the ML direction (hip joint: 10.9~57.3%, knee joint: 26.0~74.8%). Therefore, the accuracy of the Kinect system for balance rehabilitation training could be improved by using an adapted algorithm based on hip joint movement in the medial-lateral direction.
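
A hedged sketch of the comparative metric: given synchronized joint traces from the Kinect and from the marker-based reference, a per-frame relative error rate can be computed as below (the traces and the exact error definition are illustrative assumptions, not the paper's data):

```python
import numpy as np

def error_rate(kinect, reference):
    """Per-frame relative error (%) of the Kinect joint trace against the
    infrared-camera motion-capture reference."""
    kinect = np.asarray(kinect, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.abs(kinect - reference) / np.abs(reference)

# Hypothetical hip-joint displacement traces (mm) in the ML direction,
# already resampled onto a common timeline.
mocap_hip  = np.array([10.0, 12.5, 15.0, 13.0, 11.0])
kinect_hip = np.array([11.2, 13.9, 13.8, 14.5, 12.4])

err = error_rate(kinect_hip, mocap_hip)
print(f"mean {err.mean():.1f}%, max {err.max():.1f}%")   # per-direction summary
```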

Airtouch technology smart fusion DID system design (Airtouch 기술을 활용한 스마트융합 DID 시스템 설계)

  • Lee, Gwang-Yong;Hwang, Bu-Hyun
    • Journal of Advanced Navigation Technology / v.17 no.2 / pp.240-246 / 2013
  • This study develops a new information delivery system by integrating Airtouch technology with a touch-screen DID (Digital Information Display). We design and implement a system that can be used to view campus announcements, educational information, and employment information, to operate the display remotely, and to share content, with cloud services syncing the content to realize a smart convergence DID system. Because the Kinect sensor connects to information appliances through a standard USB interface, the Airtouch technology can be implemented as a low-cost product. With the proposed system, the user's hand gestures alone can interact with information appliances: the system tracks the user's hand movements to manipulate the mouse pointer and issues commands to the device through hand gestures. A smart convergence DID system using Airtouch technology can have ripple effects on other industries, such as online education, advertising, and the information industry. Moreover, as a versatile replacement for existing interface devices, the technology's range of uses can expand widely.
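
The pointer-control step reduces to mapping the tracked hand joint into screen coordinates. A minimal sketch, assuming a linear interaction box in front of the Kinect; the box bounds and resolution are placeholders:

```python
import numpy as np

# Interaction box in Kinect camera space (meters): hand movement inside
# this box is mapped linearly onto the display.
BOX_MIN = np.array([-0.30, -0.20, 0.80])    # left / bottom / near
BOX_MAX = np.array([0.30, 0.25, 1.20])      # right / top / far
SCREEN_W, SCREEN_H = 1920, 1080

def hand_to_cursor(hand_xyz):
    """Map a Kinect hand joint position to pixel coordinates."""
    n = (np.asarray(hand_xyz) - BOX_MIN) / (BOX_MAX - BOX_MIN)
    n = np.clip(n, 0.0, 1.0)
    x = int(n[0] * (SCREEN_W - 1))
    y = int((1.0 - n[1]) * (SCREEN_H - 1))   # screen y grows downward
    return x, y

print(hand_to_cursor([0.05, 0.10, 1.0]))     # e.g. (1119, 359)
```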

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.565-570 / 2013
  • The mirror neuron regions distributed in the cortical area handle intention recognition on the basis of imitative learning of an observed action, acquired from visual information of a goal-directed action. In this paper, an automated intention recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks whose input is a sequential feature vector set derived from the behaviors of the target object and actor, and whose output is motor data that can be used to perform the corresponding intentional action, obtained through the imitative learning and estimation procedures of the proposed model. The intention recognition framework takes its model input from a Kinect sensor and produces its model output by calculating the corresponding motor data within a virtual robot simulation environment, on the basis of intention-related scenarios with a limited experimental space and a specified target object.
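
As a loose illustration of the sequence-to-motor mapping the model performs, and not the authors' architecture, a minimal Elman-style recurrent step in NumPy; all dimensions and weights here are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
F_DIM, H_DIM, M_DIM = 12, 32, 6   # feature / hidden / motor sizes (assumed)

# Randomly initialized weights; in the paper's model these would be
# learned by imitative learning from observed goal-directed actions.
W_in  = rng.normal(0, 0.1, (H_DIM, F_DIM))
W_rec = rng.normal(0, 0.1, (H_DIM, H_DIM))
W_out = rng.normal(0, 0.1, (M_DIM, H_DIM))

def motor_output(sequence):
    """Elman recurrence h_t = tanh(W_in x_t + W_rec h_{t-1}); the final
    hidden state is read out as a motor-data vector."""
    h = np.zeros(H_DIM)
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h

features = rng.random((20, F_DIM))    # 20 frames of Kinect-derived features
print(motor_output(features))
```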

Recognition of Natural Hand Gesture by Using HMM (HMM을 이용한 자연스러운 손동작 인식)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.5 / pp.639-645 / 2012
  • In this paper, we propose a method for giving motion commands to a mobile robot by recognizing human hand gestures. Previous hand-gesture robot control systems used several kinds of pre-arranged gestures, so the commanding motions were unnatural; they also forced people to learn the pre-arranged gestures, which was inconvenient. To solve this problem, much research is under way on other ways for machines to recognize hand movements. In this paper, we used a 3D (depth) camera to obtain color and depth data, from which the human hand is located and its movement recognized. We used an HMM to let the proposed system perceive the movement; the observed result is then transferred to the robot, making it move in the intended direction.
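
A minimal sketch of the HMM recognition stage: train one Gaussian HMM per gesture class and classify a new observation sequence by log-likelihood. hmmlearn is assumed here as the library, and the sequences are synthetic stand-ins for the color/depth hand trajectories:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)

def make_sequences(offset, n=20, T=30, dim=3):
    """Synthetic stand-in for per-gesture observation sequences."""
    return [offset + rng.normal(0, 0.1, (T, dim)).cumsum(axis=0)
            for _ in range(n)]

models = {}
for gesture, offset in [("wave", 0.0), ("push", 1.0)]:
    seqs = make_sequences(offset)
    X = np.concatenate(seqs)               # stacked observations
    lengths = [len(s) for s in seqs]       # per-sequence lengths
    m = GaussianHMM(n_components=4, covariance_type="diag", n_iter=30)
    m.fit(X, lengths)                      # Baum-Welch training
    models[gesture] = m

test = make_sequences(1.0, n=1)[0]         # unseen "push"-like sequence
print(max(models, key=lambda g: models[g].score(test)))  # highest log-likelihood wins
```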

An Extraction Method of Meaningful Hand Gesture for a Robot Control (로봇 제어를 위한 의미 있는 손동작 추출 방법)

  • Kim, Aram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.2 / pp.126-131 / 2017
  • In this paper, we propose a method to extract the meaningful motion from among various kinds of hand gestures when giving commands to robots. When giving a command to a robot, a person's hand gestures can be divided into a preparation motion, a main motion, and a finishing motion. The main motion is the meaningful one that transmits the command, while the other motions are meaningless auxiliary movements made in order to perform the main motion. Therefore, it is necessary to extract only the main motion from continuous hand gestures. In addition, people may move their hands unconsciously, and these actions must also be judged meaningless by the robot. In this study, we extract human skeleton data from a depth image obtained using a Kinect v2 sensor and extract hand location data from it. Using a Kalman filter, we track the location of the hand and distinguish whether a hand motion is meaningful or meaningless, and recognize the gesture with a hidden Markov model.
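
A minimal sketch of the tracking step the abstract pairs with the HMM: a constant-velocity Kalman filter over the hand position. The motion model, noise levels, and frame rate are illustrative assumptions:

```python
import numpy as np

dt = 1.0 / 30.0                        # Kinect v2 skeleton stream, ~30 fps
F = np.array([[1, 0, dt, 0],           # state [x, y, vx, vy],
              [0, 1, 0, dt],           # constant-velocity model
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 1e-4                   # process noise (tuned per setup)
R = np.eye(2) * 1e-2                   # skeleton measurement noise

def kf_step(x, P, z):
    """One predict/update cycle for a hand-position measurement z = (x, y)."""
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # update with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in ([0.10, 0.20], [0.12, 0.21], [0.15, 0.23]):   # meters
    x, P = kf_step(x, P, np.array(z))
    print(x[:2])                        # smoothed hand position
```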

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a vision and depth information based real-time hand gesture interface method using finger joint estimation. To this end, the areas of the left and right hands are segmented after mapping the visual image onto the depth image, and labeling and boundary noise removal are performed. The centroid point and rotation angle of each hand area are then calculated. Afterwards, a circle is expanded outward from the centroid point of the hand, and the joint points and end points of the fingers are detected from the midpoints of the crossings with the hand boundary, from which the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses, including hidden fingers and both hands, the accuracy was over 90% and the performance over 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
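
A hedged sketch of the expanding-circle step: sample circles of growing radius around the hand centroid and record where each circle crosses the silhouette boundary; paired crossings bracket a finger. The mask and radii below are synthetic:

```python
import numpy as np

def circle_crossings(mask, center, radius, n=360):
    """Angles (degrees) at which a circle around `center` crosses the
    silhouette boundary (inside/outside transitions in the binary mask)."""
    th = np.deg2rad(np.arange(n))
    xs = np.clip((center[0] + radius * np.cos(th)).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip((center[1] + radius * np.sin(th)).astype(int), 0, mask.shape[0] - 1)
    inside = mask[ys, xs] > 0
    return np.where(inside != np.roll(inside, 1))[0]

# Synthetic silhouette: a disk for the palm plus a strip for one finger.
mask = np.zeros((200, 200), dtype=np.uint8)
yy, xx = np.ogrid[:200, :200]
mask[(xx - 100) ** 2 + (yy - 100) ** 2 < 40 ** 2] = 255   # palm
mask[30:100, 95:105] = 255                                # finger

for r in range(45, 75, 10):    # expand the circle beyond the palm radius
    print(r, circle_crossings(mask, (100, 100), r))
# Midpoints of paired crossing angles give the finger's direction.
```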

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect, an RGB-depth camera, for producing a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target: the intrinsic parameters, namely the focal length, principal point, and distortion coefficients, are calculated through a checkerboard experiment, and the extrinsic parameters relating the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projection-space images are converted into 3D images, yielding spatial information on the basis of the depth and RGB information. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
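
The conversion at the core of the paper follows the pinhole model: with intrinsics (fx, fy, cx, cy) from the checkerboard calibration, each depth pixel back-projects to a 3D point, and the extrinsics (R, t) map points between the two Kinect frames. A minimal sketch with placeholder values, not the paper's calibration results:

```python
import numpy as np

# Placeholder intrinsics in the typical range of a Kinect depth camera.
fx, fy = 365.0, 365.0       # focal lengths (pixels)
cx, cy = 256.0, 212.0       # principal point (pixels)

def backproject(u, v, depth_m):
    """Pinhole back-projection of pixel (u, v) with metric depth to 3D."""
    return np.array([(u - cx) / fx * depth_m,
                     (v - cy) / fy * depth_m,
                     depth_m])

def to_other_camera(p, R, t):
    """Extrinsics between the two Kinect frames: p' = R @ p + t."""
    return R @ p + t

p = backproject(300, 250, 1.5)                  # a pixel observed at 1.5 m
R, t = np.eye(3), np.array([0.10, 0.0, 0.0])    # placeholder extrinsics
print(p, to_other_camera(p, R, t))
```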