• Title/Summary/Keyword: hand motion (손 동작)

Search results: 413

Development of the Workspace-Analysis System of Invasive Robot using Physics Engine (물리 엔진을 이용한 수술 로봇의 동작 범위 분석 시스템 개발)

  • Kim, Do-Yoon;Park, Hyun-Keun;Seo, Jae-Yong;Jo, Yung-Ho
    • Proceedings of the KIEE Conference / 2008.07a / pp.1797-1798 / 2008
  • Minimally invasive surgery, which operates through minimal incisions in the patient's affected area, has many advantages, and its use is steadily expanding. However, because the surgeon's eyes and hands are separated from the tissue being manipulated, many problems arise. One of these stems from the separation of the surgical field and the visual field, and it is being addressed by introducing automatic laparoscopic robot-arm systems for optimal positioning. This study presents a more efficient design method for laparoscopic surgical robot arms by quickly visualizing the workspace, so that various parameters can be applied at the design stage. The proposed physics-engine-based workspace-analysis method requires no inverse kinematics; even when the design changes, the workspace can be analyzed directly from the modified kinematics without deriving any additional equations.

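The entry's key claim is that workspace analysis can rely on forward kinematics alone: sampling the joint angles and recording the resulting end-effector positions maps the workspace without ever solving inverse kinematics. A minimal Python sketch of that idea for a planar 2-link arm (the link lengths and sampling density are illustrative assumptions; the paper itself uses a physics engine):

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector position of a planar 2-link arm (forward kinematics only)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def sample_workspace(l1, l2, steps=36):
    """Sweep both joint angles and collect every reachable end-effector
    position -- the workspace emerges with no inverse kinematics."""
    points = []
    for i in range(steps):
        for j in range(steps):
            t1 = 2 * math.pi * i / steps
            t2 = 2 * math.pi * j / steps
            points.append(forward_kinematics(l1, l2, t1, t2))
    return points

# Every reachable point lies in the annulus |l1 - l2| <= r <= l1 + l2.
pts = sample_workspace(1.0, 0.5)
radii = [math.hypot(x, y) for x, y in pts]
```

Changing a link length or joint limit only changes the sweep, not the math, which is the design-iteration benefit the abstract describes.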

Design and implementation of a 3-axis Motion Sensor based SWAT Hand-signal Motion-recognition System (3축 모션 센서 기반 SWAT 수신호 모션 인식 시스템 설계 및 구현)

  • Yun, June;Pyun, Kihyun
    • Journal of Internet Computing and Services / v.15 no.4 / pp.33-42 / 2014
  • Hand-signals are an effective means of communication in situations where the voice cannot be used, especially for soldiers. Vision-based approaches using cameras as input devices are widely suggested in the literature. However, these approaches are not suitable for soldiers, whose hands are often out of view. In addition, existing special-glove approaches utilize finger information only, so they fall short for soldiers' hand-signal recognition, which involves not only finger motions but also additional information such as the rotation of the hand. In this paper, we have designed and implemented a new recognition system for six military hand-signal motions: 'ready', 'move', 'quick move', 'crawl', 'stop', and 'lying-down'. For this purpose, we have proposed a finger-recognition method and motion-recognition methods. The finger-recognition method discriminates how much each finger is bent: 'completely flattened', 'slightly flattened', 'slightly bended', and 'completely bended'. The motion-recognition algorithms are based on the characterization of each hand-signal motion in terms of the three axes. Through repeated experiments, our system has shown 91.2% correct recognition.
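The four bend levels named above can be sketched as thresholds on a normalized flex-sensor reading; the threshold values below are illustrative assumptions, not the authors' calibration:

```python
def classify_bend(bend):
    """Map a normalized flex reading (0.0 = flat, 1.0 = fully bent) to the
    four discrete levels used in the paper. Thresholds are illustrative
    assumptions, not the authors' calibration."""
    if bend < 0.25:
        return 'completely flattened'
    elif bend < 0.5:
        return 'slightly flattened'
    elif bend < 0.75:
        return 'slightly bended'
    return 'completely bended'
```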

3D Human Motion Control System using Visual Script (시각 스크립트 기반 3차원 인체 동작 제어 시스템)

  • Cha, Gyeong-Ae;Kim, Sang-Wook
    • Journal of KIISE: Computing Practices and Letters / v.6 no.5 / pp.536-542 / 2000
  • This paper proposes a Visual Script Language that can direct a type of motion for a 3D human model, created by dragging gestures, much as we express meaning with hand gestures. Traditional motion control of articulated figures such as humans is a complex task that draws on highly developed skills, so a motion control method that lets users describe a character's motion at a higher level of abstraction reduces the amount of motion specification required. A visual script is a set of visual gestures that direct various human motions, so users can express the spatial attributes of a motion, such as the path of movement, with high-level concepts. By developing a 3D human motion control system based on visual scripts, we show that the motion of a human model can be controlled directly and intuitively.


Hand Gesture Recognition Algorithm Robust to Complex Image (복잡한 영상에 강인한 손동작 인식 방법)

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.13 no.7 / pp.1000-1015 / 2010
  • In this paper, we propose a novel algorithm for hand gesture recognition. Hand detection is based on human skin color; boundary energy information is used to locate the hand region accurately, and the moment method is then employed to locate the center of the palm. Hand gesture recognition is separated into two steps. First, hand posture recognition: we employ parallel neural networks (NNs). The pattern of a hand posture is extracted by a fitting-ellipses method, which divides the detected hand region with 12 ellipses and calculates the rate of white pixels along each ellipse line. This pattern is fed to NNs with 12 input nodes and 4 output nodes, each output node emitting a value between 0 and 1, so a posture is represented by the combination of the 4 output codes. Second, hand gesture tracking and recognition: we employ a Kalman filter to predict the position of the gesture and build a position sequence; the distance relationships between positions are then used to confirm the gesture. Simulations were performed on Windows XP to evaluate the efficiency of the algorithm. For hand posture recognition, we used 300 training images to train the recognizer and 200 images to test it, of which 194 were recognized correctly. For the tracking-recognition part, we performed 1200 gestures (400 per gesture type), of which 1002 were recognized correctly. These results show that the proposed algorithm performs dependably in detecting the hand and its gestures.
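The tracking step above builds a position sequence from predicted gesture positions. As a minimal stand-in for the Kalman prediction (an illustrative simplification, not the paper's filter), a constant-velocity extrapolation from the last two observations looks like:

```python
def predict_next(positions):
    """Extrapolate the next hand position from the last two observations,
    assuming constant velocity between frames."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

A full Kalman filter additionally weighs such a prediction against measurement noise; the distances between consecutive confirmed positions then drive the gesture decision.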

A Hand Gesture Recognition System using 3D Tracking Volume Restriction Technique (3차원 추적영역 제한 기법을 이용한 손 동작 인식 시스템)

  • Kim, Kyung-Ho;Jung, Da-Un;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.6 / pp.201-211 / 2013
  • In this paper, we propose a hand tracking and gesture recognition system. Our system employs a depth capture device to obtain 3D geometric information about the user's bare hand. In particular, we build a flexible tracking volume and restrict the hand tracking area, so that we can avoid diverse problems of conventional object detection/tracking systems. The proposed system computes a running average of the hand position, and the tracking volume is actively adjusted according to statistical information computed from the uncertainty of the user's hand motion in 3D space. Once the position of the user's hand is obtained, the system attempts to detect stretched fingers to recognize the finger gesture of the user's hand. To test the proposed framework, we built an NUI system using the proposed technique and verified that it performs very stably even when multiple objects exist simultaneously in a crowded environment, as well as when the scene is temporarily occluded. We also verified that our system sustains a running speed of 24-30 frames per second throughout the experiments.
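The adaptive tracking volume described above can be sketched in one dimension: the center follows the running average of the hand positions, and the radius scales with the observed motion uncertainty (here via Welford's online variance; the scale factor k is an illustrative assumption):

```python
class TrackingVolume:
    """Adaptive tracking volume sketch: the center tracks the running mean
    of hand positions; the radius grows with motion uncertainty (std dev)."""

    def __init__(self, k=3.0):
        self.n = 0
        self.mean = 0.0   # running average of positions (volume center)
        self.m2 = 0.0     # sum of squared deviations (Welford)
        self.k = k        # radius = k standard deviations (assumption)

    def update(self, x):
        """Fold one new hand-position sample into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def radius(self):
        var = self.m2 / self.n if self.n else 0.0
        return self.k * var ** 0.5
```

The same update runs per axis for a 3D volume: jittery motion widens the volume, steady motion shrinks it around the hand.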

The Study on Dynamic Images Processing for Finger Languages (지화 인식을 위한 동영상 처리에 관한 연구)

  • Kang, Min-Ji;Choi, Eun-Sook;Sohn, Young-Sun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.184-189 / 2004
  • In this paper, we realized a system that receives dynamic images of finger language, a means of communication for hearing-impaired people, through a black-and-white CCD camera, recognizes the images, and converts them into an editable text document. We use afterimages to draw a sharp line between indistinct and clear images in a series of input images, obtain alphabet characters from the array of continuous images, and output the completed characters to the word editor by applying automata theory. After the system removes the varying wrist region from the clean image data, it finds the centroid of the hand by the maximum circular movement method and recognizes the hand needed to analyze the finger language by applying the circular pattern vector algorithm. The system extracts the characteristic vectors of the hand using the distance spectrum from the center of the hand, compares the characteristic vector of the input pattern with the standard patterns by applying fuzzy inference, and recognizes the finger-language movement.
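The distance-spectrum feature mentioned above can be sketched as the maximum boundary distance from the hand center within each angular bin (a simplified reading of the circular pattern vector; the bin count is an illustrative assumption):

```python
import math

def distance_spectrum(center, contour, bins=16):
    """Per-angular-bin maximum distance from the hand center to the
    contour -- a simple shape signature in the spirit of the circular
    pattern vector described in the abstract."""
    cx, cy = center
    spectrum = [0.0] * bins
    for (x, y) in contour:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        b = int(angle / (2 * math.pi) * bins) % bins
        spectrum[b] = max(spectrum[b], math.hypot(x - cx, y - cy))
    return spectrum
```

Two such spectra can then be compared (e.g. by fuzzy membership of per-bin differences, as the abstract suggests) to match an input pattern against stored standard patterns.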

Leap Motion Framework for Juggling Motion According to User Motion in Virtual Environment

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.11 / pp.51-57 / 2021
  • In this paper, we propose a new framework that calculates the user's hand motions using a Leap Motion device and uses them to practice and analyze juggling motions as well as arm muscle use. The proposed method can map the movement of the ball in a virtual environment to the user's hand motions in real time, and analyze the amount of exercise by visualizing the relaxation and contraction of the muscles. The proposed framework consists of three main parts: 1) it tracks the user's hand position with the Leap Motion device; 2) as in juggling, the action pattern of the user throwing the ball is defined as an event; 3) we propose a parabola-based particle method to map a juggling-style movement onto a ball based on the user's hand position. As a result, using our framework, it is possible to play a juggling game in real time.
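The parabola-based particle step reduces to standard projectile motion: once a throw event fires at the hand position, the ball particle follows a parabolic path until the next catch. A minimal sketch (release point, velocity, and gravity constant are illustrative):

```python
def ball_position(p0, v0, t, g=9.8):
    """Parabolic particle update: position of a juggled ball t seconds
    after release, given release point p0 and release velocity v0 (x, y)."""
    x = p0[0] + v0[0] * t
    y = p0[1] + v0[1] * t - 0.5 * g * t * t
    return x, y
```

Evaluating this each frame against the tracked hand position is enough to decide whether the ball was caught or dropped.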

Proposal of a Hand Motion and Control Method Matching System for Interaction Optimized for Mixed Reality Control - Focusing on Meta Interface and Augmented Behavior (혼합현실 컨트롤에 최적화된 인터랙션을 위한 손 동작과 컨트롤 방식 Matching체계의 제안 - Augmented Behavior와 Meta Interface를 중심으로)

  • Lee, SaYa;Lee, EunJong
    • Smart Media Journal / v.11 no.9 / pp.81-93 / 2022
  • In an era when non-face-to-face meetings have become common, eXtended Reality (XR) is rapidly developing and filling the gaps left by online meetings based on existing photo/video methods. In particular, general users can now easily access and use HMD-type mixed reality devices. However, the basic operations in HMD-type Mixed Reality (MR), with the hands as the main input tool, have no standardized system, and each manufacturer maps hand movements to operations differently. This study therefore held that a systematic hand-motion matching system reflecting the usability and efficiency of operations performed in mixed reality is necessary, and conducted research to establish one. First, the basic operations performed in the MR environment and their attributes were investigated, and the structure of the hand and the attributes of its possible movements were identified. Based on the identified properties, we present a system that can intuitively and efficiently match the basic operation attributes of the MR environment with fine motion attributes according to the structure and context of the hand.

Recognition of hand gestures with different prior postures using EMG signals (사전 자세에 따른 근전도 기반 손 제스처 인식)

  • Hyun-Tae Choi;Deok-Hwa Kim;Won-Du Chang
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.51-56 / 2023
  • Hand gesture recognition is an essential technology for people who have difficulty communicating through spoken language. Electromyography (EMG), which is often utilized for hand gesture recognition, is expected to struggle because a person's movements vary depending on the prior posture, yet studies on this subject are rare. In this study, we conducted tests to confirm whether prior postures affect the accuracy of gesture recognition. Data were recorded from 20 subjects with different prior postures. We achieved average accuracies of 89.6% and 52.65% when the prior states of the training and test data were the same and different, respectively. The accuracy increased when both prior states were considered, confirming the need to account for a variety of prior states in EMG-based hand gesture recognition.

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a vision and depth-information based real-time hand gesture interface method using finger joint estimation. The areas of the left and right hands are segmented after mapping the visual image to the depth-information image, and labeling and boundary-noise removal are performed. Then, the centroid and rotation angle of each hand area are calculated. Afterwards, a circle is expanded from the centroid of the hand to detect the joint points and end points of the fingers by obtaining the midpoints where the circle crosses the hand boundary, and the hand model is recognized. Experimental results show that our method distinguished fingertips and recognized various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers using both hands, accuracy exceeded 90% and performance exceeded 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
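The circle-expansion step can be sketched as follows: sample a circle of growing radius around the hand centroid and take the angular midpoints of the runs that fall inside the hand mask; as the radius expands, these midpoints trace the finger centerlines toward joints and tips. The mask representation (a set of pixel coordinates) and the sampling density are illustrative assumptions, and runs wrapping across angle 0 are not handled in this sketch:

```python
import math

def circle_crossings(mask, center, radius, samples=360):
    """Sample one circle around the hand centroid and return the angular
    midpoints of the runs of samples that fall inside the hand mask.
    Repeating this for increasing radii traces the finger centerlines."""
    cx, cy = center
    inside = []
    for i in range(samples):
        a = 2 * math.pi * i / samples
        p = (round(cx + radius * math.cos(a)), round(cy + radius * math.sin(a)))
        inside.append(p in mask)
    runs, start = [], None
    for i, v in enumerate(inside + [False]):  # sentinel closes a final run
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append(2 * math.pi * ((start + i - 1) / 2) / samples)
            start = None
    return runs
```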