• Title/Abstract/Keyword: finger tracking

Search results: 47 (processing time: 0.022 seconds)

Finger Tapping 기기를 활용한 콘텐츠의 sEMG 신호 분석 (Analysis on sEMG Signals of Contents Using Finger Tapping Device)

  • 한상배;변상규;김재훈;신성욱;정성택
    • 한국인터넷방송통신학회논문지 / Vol. 19, No. 6 / pp.153-160 / 2019
  • This paper aims to help anyone carry out rehabilitation training conveniently and enjoyably by implementing a rehabilitation device and training contents that can improve the motor ability of the fingers. To this end, a Finger Tapping Device that can measure finger control ability, precision, and agility was built, and three contents using the device were implemented: tracking, visual reaction, and finger control. Their usefulness was verified by attaching sEMG electrodes to the flexor digitorum profundus, the muscle most involved in finger movement, and analyzing the sEMG signals while the three contents were performed. As an experimental result, activation of the flexor digitorum profundus was observed during all of the contents. In addition, by measuring the reaction time of each finger to the visual stimulus, differences in agility between the fingers could be confirmed.
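
The activation and reaction-time measures described in this abstract are commonly obtained by rectifying and smoothing the raw sEMG signal and then applying an onset threshold. Below is a minimal Python sketch of that kind of processing; the sampling rate, window length, and threshold rule are illustrative assumptions, not the authors' actual analysis.

    import numpy as np

    def rms_envelope(emg, fs=1000, window_ms=100):
        """Moving-window RMS envelope of a raw sEMG signal (fs and window are assumed)."""
        emg = np.asarray(emg, dtype=float)
        emg = emg - emg.mean()                      # remove DC offset
        win = max(1, int(fs * window_ms / 1000))
        kernel = np.ones(win) / win
        return np.sqrt(np.convolve(emg ** 2, kernel, mode="same"))

    def reaction_time(envelope, stimulus_idx, fs=1000, k=3.0):
        """Seconds from a visual stimulus to EMG onset, where onset is the first
        sample exceeding baseline mean + k * baseline std (simple assumed rule)."""
        baseline = envelope[:stimulus_idx]
        thresh = baseline.mean() + k * baseline.std()
        after = envelope[stimulus_idx:]
        onset = int(np.argmax(after > thresh))      # first index above threshold
        if not after[onset] > thresh:
            return None                             # no onset detected
        return onset / fs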

RGB 카메라 기반 실시간 21 DoF 손 추적 (RGB Camera-based Real-time 21 DoF Hand Pose Tracking)

  • 최준영;박종일
    • 방송공학회논문지 / Vol. 19, No. 6 / pp.942-956 / 2014
  • This paper proposes a real-time hand tracking method that uses a single monocular RGB camera. Because the hand has a high number of degrees of freedom, hand tracking suffers from high ambiguity. To reduce this ambiguity, the proposed method adopts a stage-by-stage tracking strategy: the tracking process consists of three stages, palm pose tracking, finger yaw motion tracking, and finger pitch motion tracking, performed in order. The method assumes that the hand can be regarded as planar and uses a planar hand model. The planar hand model enables hand-model regeneration, which adapts the model to the current shape of the user's hand and increases the robustness and accuracy of the method. Furthermore, because the method runs in real time and does not require GPU-based computation, it can be applied in various environments, including mobile devices such as Google Glass. The performance and usefulness of the proposed method are demonstrated through various experiments.
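
The staged strategy in this abstract can be summarized with the Python skeleton below, using one plausible 21-DoF split (6 palm + 5 yaw + 10 pitch) assumed here for illustration. The data layout and the fit_stage callback are placeholders, not the authors' implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class HandPose:
        palm_pose: List[float] = field(default_factory=lambda: [0.0] * 6)      # x, y, z, roll, pitch, yaw
        finger_yaw: List[float] = field(default_factory=lambda: [0.0] * 5)     # one abduction angle per finger
        finger_pitch: List[float] = field(default_factory=lambda: [0.0] * 10)  # two flexion angles per finger

    def track_frame(image, prev: HandPose, fit_stage) -> HandPose:
        """Stage-by-stage tracking: each stage refines one parameter group while
        the groups estimated earlier stay fixed. fit_stage(image, pose, group)
        stands in for the per-stage fitting of the planar hand model to the image."""
        pose = HandPose(prev.palm_pose[:], prev.finger_yaw[:], prev.finger_pitch[:])
        pose.palm_pose = fit_stage(image, pose, "palm")       # stage 1: palm pose (6 DoF)
        pose.finger_yaw = fit_stage(image, pose, "yaw")       # stage 2: finger yaw motion
        pose.finger_pitch = fit_stage(image, pose, "pitch")   # stage 3: finger pitch motion
        return pose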

H$_\infty$ 제어기법을 적용한 소형 SMA 그립퍼의 힘 추적 제어 (Force Tracking Control of a Small-Sized SMA Gripper Using H$_\infty$ Synthesis)

  • 한영민;최승복;정재천
    • 한국정밀공학회 Conference Proceedings / 1996 Autumn Conference / pp.391-395 / 1996
  • This paper presents robust force tracking control of a small-sized two-finger gripper driven by shape memory alloy (SMA) actuators. The governing equation of the proposed system is derived from Hamilton's principle and the Lagrangian equation, and the control system model is then integrated with first-order actuator dynamics. Uncertain system parameters, such as the time constant of the actuators, are also included in the control model. A robust two-degree-of-freedom (TDF) controller based on H$_{\infty}$ control theory, which has inherent robustness to model uncertainties and external disturbances, is adopted to achieve end-point force tracking control of the two-finger gripper. Force tracking performance for desired trajectories represented by sinusoidal and step functions is evaluated through both simulation and experiment.
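
The abstract states that the control model is integrated with first-order actuator dynamics with an uncertain time constant. A common form of such a model, given here only as an assumption about what "first-order" denotes, is

    \tau \dot{F}(t) + F(t) = K\,u(t)
    \qquad\Longleftrightarrow\qquad
    \frac{F(s)}{U(s)} = \frac{K}{\tau s + 1},

where F is the end-point force produced by the SMA actuator, u is the control (heating) input, K is a static gain, and \tau is the uncertain time constant; the H$_{\infty}$ two-degree-of-freedom design then aims to keep the force-tracking error small over the assumed range of \tau.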

Development of a General Purpose PID Motion Controller Using a Field Programmable Gate Array

  • Kim, Sung-Su;Jung, Seul
    • 제어로봇시스템학회 Conference Proceedings / ICCAS 2003 / pp.360-365 / 2003
  • In this paper, we develop a general-purpose motion controller using an FPGA (Field Programmable Gate Array). Multiple PID controllers are implemented on a single chip as a system-on-chip for multi-axis motion control, and a PC GUI is developed for efficient interface control. Compared with the commercial motion controller LM629, the proposed controller contains multiple independent PID controllers, which gives it advantages such as space effectiveness, low cost, and lower power consumption. To test its performance, a robot hand with three fingers, each having two joints, is controlled; the finger movements show that position tracking is very effective. A further experiment on balancing an inverted pendulum on a cart demonstrates the generality of the proposed FPGA PID controller, which maintains the balance of the pendulum well.
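
Each axis of such a controller computes the same discrete PID update every sample period; the Python sketch below mirrors that arithmetic (gains, sample time, and the output clamp are illustrative values, and the FPGA implementation itself is of course not Python).

    class DiscretePID:
        """Single-axis discrete PID in positional form; one instance per motion axis."""
        def __init__(self, kp, ki, kd, dt, out_limit=None):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.out_limit = out_limit      # optional symmetric output clamp
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured):
            error = setpoint - measured
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * derivative
            if self.out_limit is not None:
                out = max(-self.out_limit, min(self.out_limit, out))
            return out

    # Example: a three-finger hand with two joints per finger needs six independent axes.
    controllers = [DiscretePID(kp=2.0, ki=0.5, kd=0.05, dt=0.001) for _ in range(6)]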

Real Time Recognition of Finger-Language Using Color Information and Fuzzy Clustering Algorithm

  • Kim, Kwang-Baek;Song, Doo-Heon;Woo, Young-Woon
    • Journal of Information and Communication Convergence Engineering / Vol. 8, No. 1 / pp.19-22 / 2010
  • Finger language (sign language), which helps hearing-impaired people communicate, is not familiar to ordinary hearing people. In this paper, we propose a method for real-time sign language recognition with a vision system using color information and a fuzzy clustering algorithm. We use the YCbCr color model and a Canny mask to locate the hands and their boundary lines. After extracting the regions of the two hands by applying an 8-directional contour tracking algorithm and morphological information, the system uses FCM (fuzzy C-means) to classify the sign language signals. In experiments, the proposed method proved to be sufficiently efficient.
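
A minimal OpenCV sketch of the front end described above, skin segmentation in YCbCr followed by contour extraction, is shown below; the Cb/Cr thresholds are commonly used illustrative values rather than the paper's, and the FCM classification stage is omitted.

    import cv2
    import numpy as np

    def extract_hand_regions(frame_bgr):
        """Segment skin-colored regions in YCrCb and return the two largest contours
        as candidate hands. Threshold values are assumed, not taken from the paper."""
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)      # Y, Cr, Cb lower bounds
        upper = np.array([255, 173, 127], dtype=np.uint8)   # Y, Cr, Cb upper bounds
        mask = cv2.inRange(ycrcb, lower, upper)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return sorted(contours, key=cv2.contourArea, reverse=True)[:2]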

A Prototype of Flex Sensor Based Data Gloves to Track the Movements of Fingers

  • Bang, Junseung;You, Jinho;Lee, Youngho
    • 스마트미디어저널 / Vol. 8, No. 4 / pp.53-57 / 2019
  • In this paper, we propose a flex-sensor-based data glove that tracks the movements of human fingers for virtual reality education. By attaching flex sensors and utilizing an accelerometer, the data glove allows people to use virtual reality (VR) or augmented reality (AR) applications. Using the maximum and minimum values of the flex sensor at each finger joint, it determines the angle corresponding to the bending value of the sensor, and it tracks finger movements and hand gestures from the angle values at the finger joints. To demonstrate the effectiveness of the proposed data glove, we implemented a VR classroom application.
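
The angle computation described here amounts to a linear mapping between each sensor's calibrated minimum/maximum readings and a joint-angle range. A small sketch of that mapping follows; the 0-90 degree range and the calibration readings in the example are assumed values.

    def flex_to_angle(raw, raw_min, raw_max, angle_min=0.0, angle_max=90.0):
        """Map a raw flex-sensor reading to a joint angle in degrees by linear
        interpolation between the calibrated straight (raw_min) and fully bent
        (raw_max) readings."""
        raw = min(max(raw, raw_min), raw_max)             # clamp to calibrated range
        t = (raw - raw_min) / float(raw_max - raw_min)    # normalized bend, 0..1
        return angle_min + t * (angle_max - angle_min)

    # Example: a sensor that reads 180 when the finger is straight and 620 when fully bent.
    print(flex_to_angle(400, raw_min=180, raw_max=620))   # -> 45.0 degrees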

CCTV 관제에서 동작 인식을 위한 색상 기반 손과 손가락 탐지 (Skin Color Based Hand and Finger Detection for Gesture Recognition in CCTV Surveillance)

  • 강성관;정경용;임기욱;이정현
    • 한국콘텐츠학회논문지 / Vol. 11, No. 10 / pp.1-10 / 2011
  • This paper proposes a skin-color-based hand and finger detection technique for gesture recognition in CCTV surveillance. The goal is a robust method for skin-color-based hand region detection and hand gesture recognition. The detected hand region and the gesture recognition technique can be applied to controlling an air mouse or a smart TV, and can also be used to control home theater systems and devices based on emotion sensors. To separate the hand region from the input image, a color-based contour extraction method is used, and the fingertip is detected by computing y-coordinate values on the segmented hand contour. After locating the fingertip, tracking is performed using only the R channel, and a frame-difference technique is applied for gesture recognition, providing robustness such as noise removal. Many experiments on fingertip tracking and gesture recognition were conducted with the proposed method, and the results show an accuracy of over 96%, outperforming existing methods.
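
Two of the steps mentioned in this abstract, picking the fingertip from the hand contour by its y coordinate and using a frame difference to suppress static clutter, can be sketched with OpenCV as follows; the threshold constant is assumed, and the sketch does not reproduce the paper's full pipeline or its R-channel tracking.

    import cv2
    import numpy as np

    def fingertip_from_contour(contour):
        """Return the contour point with the smallest y value (topmost point),
        a simple stand-in for the paper's y-coordinate-based fingertip rule."""
        pts = contour.reshape(-1, 2)
        x, y = pts[np.argmin(pts[:, 1])]
        return int(x), int(y)

    def motion_mask(prev_gray, curr_gray, thresh=25):
        """Frame-difference mask used to discard static background before tracking."""
        diff = cv2.absdiff(prev_gray, curr_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask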

Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk;Park, Se-Ho;Kim, Tae-Gon;Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 1 / pp.378-390 / 2015
  • Image projectors can turn any surface into a display. Integrating a surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems. Hand detection through color image processing is affected by the surrounding environment. The lack of illumination and color details greatly influences the detection process and drops the recognition success rate. In addition, there can be interference from the projection system itself due to image projection. In order to overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the area of the hand from the scene. A hand detection and finger tracking method based on depth images is proposed. Based on the proposed method, a touch interface for the projected surface is implemented and evaluated.
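
One common way to turn such depth-based fingertip tracking into a touch interface is to compare the fingertip depth against a calibrated depth map of the projection surface; the sketch below assumes millimeter depth values and a hover band, neither of which comes from the paper.

    import numpy as np

    def detect_touches(depth_mm, surface_depth_mm, fingertips, touch_band_mm=(3, 12)):
        """Report which fingertip pixels are touching the projected surface.
        A fingertip counts as a touch when it lies within a small band above the
        calibrated surface depth (band limits are assumed values)."""
        near, far = touch_band_mm
        touches = []
        for x, y in fingertips:
            height = surface_depth_mm[y, x] - depth_mm[y, x]   # fingertip height above the surface
            if near <= height <= far:
                touches.append((x, y))
        return touches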

Combining Communications and Tracking: A New Paradigm of Smartphone Games

  • Lee, Soong-Hee
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 2 / pp.202-215 / 2013
  • The spread of smartphones has brought many smartphone games into daily life. These games mainly depend on finger-touch interaction on the phone's display. Meanwhile, sensors for detecting the position and motion of the phone, such as gyro-sensors, have advanced rapidly beyond earlier orientation and acceleration sensors. Although it has become technologically possible to detect a user's motion through the smartphone and to use the phone itself as the game device, actual realizations are hard to find. This paper proposes a new paradigm, including basic frameworks and algorithms, for games that combine motion tracking and mutual communication on smartphones, and presents the details of its implementation and results.

한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구 (A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development)

  • 김정환;홍승권;명노해
    • 대한인간공학회지 / Vol. 26, No. 2 / pp.149-155 / 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTV, PDAs, and UMPCs, have been emerging, and the frequency of text entry on them is increasing. This study focused on the evaluation of Korean text entry interfaces. Various models for evaluating text entry interfaces have been proposed, and most are based on the human cognitive process for text input. This process is divided into two components: a visual scanning process and a finger movement process. The time spent on visual scanning is modeled by the Hick-Hyman law, while the time for finger movement is determined by Fitts' law. Three questions arise in model-based evaluation of text entry interfaces. First, do the human cognitive processes (visual scanning and finger movement) during text entry occur sequentially, as the models assume? Second, can real text input time be predicted by the previous models? Third, does the cognitive process for text input vary with users' text entry speed? A gap was found between the measured text input time and the predicted time, and the gap was larger for participants who entered text quickly. The reason was found by investigating eye-hand coordination during the text input process. Contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival Lead Time, the interval between eye fixation on the target button and the button click, was investigated to measure the extent of overlap between the two processes. In addition, the experienced group used fewer fixations during text entry than the novice group. These results will contribute to improving evaluation models for text entry interfaces.
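
For reference, the two model components named in this abstract are the standard forms of the Hick-Hyman law for visual search time and Fitts' law (Shannon formulation) for finger movement time, with empirically fitted coefficients:

    T_{scan} = a_1 + b_1 \log_2 n,
    \qquad
    T_{move} = a_2 + b_2 \log_2\!\left(\frac{D}{W} + 1\right),

where n is the number of candidate keys, D is the distance to the target key, and W is its width. The sequential model evaluated in the study predicts a per-keystroke time of roughly T_{scan} + T_{move}, which is exactly the additivity assumption that the eye-hand coordination data call into question for fast typists.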