• Title/Summary/Keyword: 손동작 기반 (hand-gesture based)

Android Platform based Gesture Recognition using Smart Phone Sensor Data (안드로이드 플랫폼기반 스마트폰 센서 정보를 활용한 모션 제스처 인식)

  • Lee, Yong Cheol;Lee, Chil Woo
    • Smart Media Journal / v.1 no.4 / pp.18-26 / 2012
  • The growing number of smartphone applications has increased the importance of new user interfaces and raised research interest in fusing multiple sensors. In this paper, we propose a method that fuses the accelerometer, magnetometer, and gyroscope to recognize gestures from the motion of a user's smartphone. The proposed method first obtains the 3D orientation of the smartphone and then recognizes hand-motion gestures using a Hidden Markov Model (HMM). The smartphone orientation is represented in spherical coordinates and quantized so that the representation is more sensitive to the rotation axis. Experimental results show that the recognition rate of our method is 93%.
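As a rough illustration of the quantize-then-classify pipeline described above (a minimal sketch, not the authors' implementation; the bin counts and forward-algorithm scoring are assumptions), orientation samples can be binned in spherical coordinates into a discrete symbol sequence and scored against per-gesture HMMs:

```python
import numpy as np

N_AZIMUTH, N_POLAR = 8, 4  # assumed quantization grid in spherical coordinates

def quantize_orientation(vectors):
    """Map unit orientation vectors (N x 3) to discrete spherical-bin symbols."""
    x, y, z = vectors[:, 0], vectors[:, 1], vectors[:, 2]
    azimuth = np.arctan2(y, x) % (2 * np.pi)          # [0, 2*pi)
    polar = np.arccos(np.clip(z, -1.0, 1.0))          # [0, pi]
    a = np.minimum((azimuth / (2 * np.pi) * N_AZIMUTH).astype(int), N_AZIMUTH - 1)
    p = np.minimum((polar / np.pi * N_POLAR).astype(int), N_POLAR - 1)
    return a * N_POLAR + p                            # symbol in [0, 32)

def forward_log_likelihood(symbols, log_pi, log_A, log_B):
    """Forward algorithm: log P(symbols | HMM) with start log_pi,
    transition matrix log_A[i, j], and emission matrix log_B[state, symbol]."""
    alpha = log_pi + log_B[:, symbols[0]]
    for s in symbols[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, s]
    return np.logaddexp.reduce(alpha)

# Classification: pick the gesture whose trained HMM scores the sequence highest.
# models = {"circle": (log_pi, log_A, log_B), ...}   # trained per gesture (assumed)
# best = max(models, key=lambda g: forward_log_likelihood(symbols, *models[g]))
```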

Hand Gesture Recognition with Convolution Neural Networks for Augmented Reality Cognitive Rehabilitation System Based on Leap Motion Controller (립모션 센서 기반 증강현실 인지재활 훈련시스템을 위한 합성곱신경망 손동작 인식)

  • Song, Keun San;Lee, Hyun Ju;Tae, Ki Sik
    • Journal of Biomedical Engineering Research / v.42 no.4 / pp.186-192 / 2021
  • In this paper, we evaluated the prediction accuracy of an Euler-angle spectrogram classification method using a convolutional neural network (CNN) for hand gesture recognition in an augmented reality (AR) cognitive rehabilitation system based on the Leap Motion Controller (LMC). Hand gesture recognition using a conventional support vector machine (SVM) shows 91.3% accuracy across multiple motions. In this paper, five hand gestures ("Promise", "Bunny", "Close", "Victory", and "Thumb") were selected and each measured 100 times to test the utility of the spectral classification technique. In validation, all five hand gestures were predicted correctly 100% of the time, indicating higher recognition accuracy than the conventional SVM method. This suggests that CNN-based hand motion recognition is more applicable to LMC-based AR cognitive rehabilitation training systems than SVM-based sign language recognition.
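A minimal sketch of such a spectrogram classifier, assuming 64x64 RGB spectrogram images and a small Keras CNN (the input shape and layer sizes are illustrative, not the paper's architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_GESTURES = 5  # "Promise", "Bunny", "Close", "Victory", "Thumb"

def build_cnn(input_shape=(64, 64, 3)):
    """Small CNN for classifying Euler-angle spectrogram images."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_GESTURES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```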

Classification of Gripping Movement in Daily Life Using EMG-based Spider Chart and Deep Learning (근전도 기반의 Spider Chart와 딥러닝을 활용한 일상생활 잡기 손동작 분류)

  • Lee, Seong Mun;Pi, Sheung Hoon;Han, Seung Ho;Jo, Yong Un;Oh, Do Chang
    • Journal of Biomedical Engineering Research / v.43 no.5 / pp.299-307 / 2022
  • In this paper, we propose a pre-processing method that converts EMG (electromyography) sensor data into Spider Chart image data for classifying gripping movements with convolutional neural network (CNN) deep learning. First, raw data for six hand gestures are collected from five test subjects using an 8-channel armband, divided into several sliding windows, converted into octagonal Spider Chart images, and then learned. In classifying the six hand gestures, the classification performance of the proposed pre-processing method is compared with that of existing methods. Deep learning was performed by dividing the dataset into 70% training, 15% testing, and 15% validation. For system performance evaluation, five-fold cross-validation was applied by dividing the entire dataset into 80% training and 20% testing. The proposed Spider Chart pre-processing achieves 97% and 94.54% accuracy in the cross-validation and general tests, respectively, better results than the conventional methods.
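The pre-processing step might look roughly like the following sketch, which renders one sliding window of 8-channel EMG as a radar (spider) chart image with matplotlib; the per-channel feature (mean absolute value), window length, and figure size are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def emg_window_to_spider_image(window, path):
    """Render one sliding window of 8-channel EMG (samples x 8)
    as an octagonal spider-chart image for CNN input."""
    features = np.mean(np.abs(window), axis=0)            # (8,) one value per channel
    angles = np.linspace(0, 2 * np.pi, len(features), endpoint=False)
    r = np.concatenate([features, features[:1]])          # close the polygon
    theta = np.concatenate([angles, angles[:1]])
    fig = plt.figure(figsize=(2, 2))
    ax = fig.add_subplot(111, polar=True)
    ax.fill(theta, r, alpha=0.8)
    ax.set_xticks([]); ax.set_yticks([])                  # image for CNN, no labels
    fig.savefig(path, dpi=64)
    plt.close(fig)

# emg_window_to_spider_image(raw_emg[0:200, :], "grip_window0.png")  # 200-sample window (assumed)
```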

Interaction Augmented Reality System using a Hand Motion (손동작을 이용한 상호작용 증강현실 시스템)

  • Choi, Kwang-Woon;Jung, Da-Un;Lee, Suk-Han;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.425-438 / 2012
  • In this paper, we propose an augmented reality (AR) system for interaction between a user's hand motion and the motion of a virtual object, based on computer vision. Previous AR systems are inconvenient because users have to manipulate a marker and a sensor such as a tracker. We solve this problem through hand motion, providing convenience to the user, and the motion of the virtual object follows a physical model to give a sense of reality. The proposed system obtains geometric information from the marker and the hand. A virtual environment of a moving virtual ball and bricks is built from this geometric information; the user's hand motion is obtained from feature points extracted on the taped hand, and a virtual plane is registered stably by tracking the movement of those feature points. The virtual ball basically follows parabolic motion described by a parabolic equation. When the ball collides with a plane or a brick, its motion is computed from the ball position and the normal vector of the plane; since the raw ball position is faulty at collisions, we show a corrected ball position obtained experimentally. By comparing the jitter of the augmented virtual object and the processing speed with those of a marker-based system, we show that the proposed system can replace it.
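The physics described here reduces to projectile motion plus reflection of the velocity about the plane normal on collision, v' = v - 2(v·n)n; a minimal sketch (the gravity vector, restitution coefficient, and units are assumptions, not the paper's values):

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # assumed axes and units (m/s^2)

def step_ball(pos, vel, dt):
    """Advance the ball one time step of parabolic (projectile) motion."""
    vel = vel + GRAVITY * dt
    pos = pos + vel * dt
    return pos, vel

def reflect(vel, normal, restitution=0.9):
    """On collision, reflect velocity about the plane normal: v' = v - 2(v.n)n."""
    n = normal / np.linalg.norm(normal)
    return restitution * (vel - 2.0 * np.dot(vel, n) * n)
```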

Design and Implementation of a Sign Language Gesture Recognizer using Data Glove and Motion Tracking System (장갑 장치와 제스처 추적을 이용한 수화 제스처 인식기의 설계 및 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Kim, Dong-Gyu;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.233-237 / 2005
  • Research on sign language recognition and representation has approached the problem from several directions, including communication with hearing people through sign language recognition and hand gesture recognition in virtual reality. However, most of these studies aimed at desktop-PC-based hand-signal control and sign language / hand gesture recognition, used video equipment to acquire the sign language signals, and pursued free communication with non-disabled people through recognition systems focused on word-level sign language recognition and representation. In this paper, we therefore extend the approach of acquiring meaningful sign language gestures from a haptic device to a ubiquitous environment based on a next-generation wearable PC platform, and present an efficient data acquisition scheme that overcomes the limitations of acquiring new information from the gesture data input module and improves user convenience. We also implement a sign language gesture recognizer that uses a fuzzy algorithm and an RDBMS module to recognize and represent meaningful sentence-level sign language gestures in real time, anywhere and at any time. In our experiments, the separation between the sign language gesture input module (5th Data Glove System and Fastrak®) and the next-generation wearable PC platform (embedded i.MX21 board) was configured as an ellipse with a radius of 10 m; moving the input module among the prescribed positions, we performed 20 consecutive repetitions for each of five subjects, and the dynamic gesture recognition experiments yielded an average recognition rate of 92.2%.
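A toy illustration of the fuzzy classification idea (the membership functions, linguistic terms, and raw flexion ranges below are invented for illustration; the paper's actual rule base is not described in the abstract):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative flexion terms per finger (raw glove value ranges are assumptions).
TERMS = {"straight": (0, 10, 40), "half-bent": (30, 60, 90), "bent": (80, 110, 140)}

def fuzzify_finger(flexion):
    """Return the fuzzy membership of one finger's flexion value in each linguistic term."""
    return {term: tri(flexion, *abc) for term, abc in TERMS.items()}

# A gesture rule could then match, e.g., thumb 'straight' AND index 'bent',
# using min() as the fuzzy AND, and the best-scoring rule names the gesture.
```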

3D Character Production for Dialog Syntax-based Educational Contents Authoring System (대화구문기반 교육용 콘텐츠 저작 시스템을 위한 3D 캐릭터 제작)

  • Kim, Nam-Jae;Ryu, Seuc-Ho;Kyung, Byung-Pyo;Lee, Dong-Yeol;Lee, Wan-Bok
    • Journal of the Korea Convergence Society / v.1 no.1 / pp.69-75 / 2010
  • The importance of using visual media in English education has been increasing. Because characters are important in English-language content, more effort is needed to show learners English pronunciation with a realistic implementation. In this paper, we review a dialog syntax-based educational contents authoring system. To obtain a more realistic lip-sync character, a 3D character was constructed to enhance the efficiency of education. Using a chart analyzing the association structure of mouth shapes, we produced an optimized 3D character through concept, modeling, mapping, and animation design. To create more effective 3D characters for educational content, future research will add hand motion and body motion to the 3D character in order to show effective communication examples.

Study on Wireless Control of a Board Robot Using a Sensing Glove (장갑 센서를 이용한 보드로봇의 무선제어 연구)

  • Ryu, Jaemyung;Kim, Dong Hun
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.4 / pp.341-347 / 2013
  • This study presents remote control of a board robot using a sensing glove based on Bluetooth communication. The board robot is a kind of riding robot controlled by its user. The user wears the proposed remote glove controller and changes the direction of the robot with different finger actions. Bluetooth is used for wireless communication between the board robot and its user. CdS-cell sensors and an LED in the glove are used to recognize a number of finger actions, which are measured as analog signals. The finger actions map to five commands ('1' right, '2' neutral, '3' left, '4' operation, '5' stop), which are transmitted from the user to the board robot through Bluetooth communication. Experimental results show that the proposed sensing glove can effectively control the board robot.
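A minimal sketch of the glove-side logic, assuming the CdS readings arrive as five ADC values and the Bluetooth link is exposed as a serial port via pyserial; the threshold, device path, and finger-to-command mapping are assumptions:

```python
import serial  # pyserial; Bluetooth SPP exposed as a serial port (assumption)

COMMANDS = {1: "right", 2: "neutral", 3: "left", 4: "operation", 5: "stop"}
THRESHOLD = 512  # assumed ADC cutoff: a finger covering a CdS cell darkens it

def fingers_to_command(adc_values):
    """Map five CdS-cell readings to one of the five command codes (mapping is illustrative)."""
    pressed = [v < THRESHOLD for v in adc_values]   # True = finger action detected
    for code, is_on in enumerate(pressed, start=1):
        if is_on:
            return code
    return 5                                        # default: stop

# port = serial.Serial("/dev/rfcomm0", 9600)        # assumed Bluetooth serial device
# port.write(str(fingers_to_command([300, 800, 800, 800, 800])).encode())  # -> '1' right
```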

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a vision- and depth-information-based real-time hand gesture interface method using finger joint estimation. The areas of the left and right hands are segmented after mapping the visual image onto the depth image, followed by labeling and boundary noise removal. Then the centroid point and rotation angle of each hand area are calculated. Afterwards, a circle is expanded from the centroid of the hand, and the joint points and end points of the fingers are detected from the midpoints of its crossings with the hand boundary, from which the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers using both hands, accuracy exceeded 90% and performance exceeded 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
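The expanding-circle scan can be sketched as follows with OpenCV/NumPy: sample points on a circle around the hand centroid and record where the circle crosses the silhouette boundary; the radii, sample count, and crossing test are assumptions, not the paper's exact procedure:

```python
import cv2
import numpy as np

def boundary_crossings(hand_mask, center, radius, samples=360):
    """Locate where a circle of the given radius crosses the hand silhouette
    (hand_mask is a binary image, nonzero on the hand)."""
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    xs = (center[0] + radius * np.cos(angles)).astype(int)
    ys = (center[1] + radius * np.sin(angles)).astype(int)
    h, w = hand_mask.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    on_hand = np.zeros(samples, dtype=bool)
    on_hand[inside] = hand_mask[ys[inside], xs[inside]] > 0
    # hand<->background transitions along the circle mark finger boundaries
    crossings = np.flatnonzero(on_hand != np.roll(on_hand, 1))
    return [(xs[i], ys[i]) for i in crossings]

# M = cv2.moments(hand_mask); cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
# for r in range(20, 200, 10):   # expand the circle, collecting candidate joint points
#     points = boundary_crossings(hand_mask, (cx, cy), r)
```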

A Study on the Windows Application Control Model Based on Leap Motion (립모션 기반의 윈도우즈 애플리케이션 제어 모델에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.111-116 / 2019
  • With the recent rapid development of computer capabilities, various technologies that facilitate interaction between humans and computers are being studied. The paradigm is shifting from GUIs using traditional input devices to NUIs using the body, such as 3D motion, haptics, and multi-touch. Various studies have been conducted on transferring human movements to computers using sensors, and with the development of optical sensors that can acquire 3D objects, the range of applications in the industrial, medical, and user interface fields has expanded. In this paper, I provide a model that can execute programs through gestures instead of the mouse, the default input device, and control Windows based on the Leap Motion. The proposed model also converges with an Android application and can be controlled with various media and voice instruction functions, using voice recognition and buttons through a connection with the main client. With the proposed model, Internet media such as video and music are expected to be controllable not only at the client computer but also from an application at a distance, enabling convenient media viewing.
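A minimal sketch of the gesture-to-program dispatch idea, with a hypothetical gesture event stream standing in for the Leap Motion SDK (the gesture names, program mapping, and gesture_stream() generator are invented for illustration):

```python
import subprocess

# Hypothetical gesture-to-program mapping; a real system would read
# recognized gestures from the Leap Motion SDK.
GESTURE_ACTIONS = {
    "swipe_right": ["notepad.exe"],                 # assumed: open an editor
    "circle": ["cmd", "/c", "start", "wmplayer"],   # assumed: launch a media player
}

def dispatch(gesture):
    """Launch the Windows program mapped to a recognized gesture, replacing a mouse click."""
    action = GESTURE_ACTIONS.get(gesture)
    if action:
        subprocess.Popen(action)

# for gesture in gesture_stream():   # hypothetical generator of recognized gestures
#     dispatch(gesture)
```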

A Remote Control of 6 d.o.f. Robot Arm Based on 2D Vision Sensor (2D 영상센서 기반 6축 로봇 팔 원격제어)

  • Hyun, Woong-Keun
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.5 / pp.933-940 / 2022
  • In this paper, an algorithm was developed to recognize the 3D position of a hand through a 2D image sensor, and a system was implemented to remotely control a 6-d.o.f. robot arm with it. The system consists of a camera that acquires the hand position in 2D and a computer that controls the robot arm, which moves according to the recognized hand position. The image sensor recognizes the specific color of a glove on the operator's hand and outputs the recognized range and position as a rectangle enclosing the color area of the glove. From the position and size of the detected rectangle, the velocity vector of the end effector is derived to control the robot arm. Several experiments using the developed 6-axis robot confirmed that remote control of the 6-d.o.f. robot arm was successfully performed.
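A rough sketch of the vision side, assuming OpenCV color segmentation of the glove and a simple proportional mapping from rectangle center/size to an end-effector velocity command (the HSV range, gain, and depth heuristic are assumptions, not the paper's calibration):

```python
import cv2
import numpy as np

LOWER, UPPER = np.array([100, 120, 70]), np.array([130, 255, 255])  # assumed HSV range for a blue glove

def hand_rectangle(frame_bgr):
    """Segment the colored glove and return its bounding rectangle (x, y, w, h), or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def end_effector_velocity(rect, frame_shape, gain=0.01):
    """Map rectangle center to an x/y velocity command and rectangle size to depth (z)."""
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2
    vx = gain * (cx - frame_shape[1] / 2)   # offset from image center drives lateral motion
    vy = gain * (frame_shape[0] / 2 - cy)
    vz = gain * (10000 - w * h) / 100.0     # smaller rectangle = hand farther away (assumed)
    return np.array([vx, vy, vz])
```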