• Title/Abstract/Keyword: Gestures

Search results: 476

Facial Actions 과 애니메이션 원리에 기반한 로봇의 얼굴 제스처 생성 (Generation of Robot Facial Gestures based on Facial Actions and Animation Principles)

  • 박정우;김우현;이원형;이희승;정명진
    • 제어로봇시스템학회논문지 / Vol. 20, No. 5 / pp.495-502 / 2014
  • This paper proposes a method for generating diverse robot facial expressions and facial gestures to support long-term HRI. First, nine basic dynamics for diverse robot facial expressions are determined from the dynamics of human facial expressions and the principles of animation, so that even identical emotions can be expressed in varied ways. In the second stage, facial actions are added to express facial gestures, such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of the approach, we compared the facial expressions of the developed robot with and without the proposed method. The survey results show that the proposed method helps robots generate more realistic facial expressions.
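
As a concrete illustration of one animation principle such work builds on, the sketch below applies "slow in, slow out" easing to a single facial-motor trajectory so that the same target expression can be rendered with different dynamics. This is a hypothetical Python sketch, not the authors' implementation; all names and parameters are invented.

    # Hypothetical sketch: rendering the same facial-motor target with
    # different dynamics via "slow in, slow out" easing. Invented names.
    def ease_in_out(t, sharpness=2.0):
        """Rational easing curve; sharpness shapes the acceleration profile."""
        t = max(0.0, min(1.0, t))
        return t**sharpness / (t**sharpness + (1.0 - t)**sharpness)

    def motor_trajectory(start, target, steps, sharpness):
        """Interpolate a motor position from start to target with easing."""
        return [start + (target - start) * ease_in_out(i / (steps - 1), sharpness)
                for i in range(steps)]

    # The identical "happy" mouth-corner target, once as a quick smile and
    # once as a slow, lingering one: same emotion, different dynamics.
    quick = motor_trajectory(0.0, 1.0, steps=10, sharpness=3.0)
    slow = motor_trajectory(0.0, 1.0, steps=30, sharpness=1.5)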

손 제스쳐를 이용한 조이스틱 방식의 마우스제어 방법 (A Joystick-driven Mouse Controlling Method using Hand Gestures)

  • 정진영;김정인
    • 한국멀티미디어학회논문지 / Vol. 19, No. 1 / pp.60-67 / 2016
  • PC users have long controlled their computers with input devices such as the mouse and keyboard. To relieve the inconveniences of these devices, screen-touch methods are now widely used, and devices that recognize human gestures are being developed one after another. For example, Kinect, developed and distributed by Microsoft, is a non-contact input device that recognizes human gestures through motion-recognizing sensors and can thus replace the mouse as an input device. When controlling the mouse on a large screen, however, it suffers from the problem that large motions are required to move the mouse pointer to the edges of the screen. In this paper, we propose a joystick-driven mouse-controlling method that enables the user to move the mouse pointer to the corners of the screen with small motions. The experimental results show that movements of the user's palm within a range of 30 cm suffice to move the mouse pointer to the edges of the screen.
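
A minimal sketch of the joystick-style mapping the abstract describes: the pointer's velocity, rather than its position, follows the palm's offset from a neutral point, so a small movement range can reach any screen edge. The gains, dead zone, and data format below are illustrative assumptions, not values from the paper.

    # Sketch of a joystick-style (rate-control) mouse mapping: pointer
    # velocity is proportional to the palm's offset from a neutral point.
    SCREEN_W, SCREEN_H = 1920, 1080
    DEAD_ZONE_CM = 3.0   # ignore small tremors near the neutral point
    GAIN = 60.0          # pointer pixels per second per cm of palm offset

    def update_pointer(pointer, palm_offset_cm, dt):
        """Advance an (x, y) pointer by a velocity set by the palm offset."""
        x, y = pointer
        dx, dy = palm_offset_cm
        if abs(dx) > DEAD_ZONE_CM:
            x += GAIN * dx * dt
        if abs(dy) > DEAD_ZONE_CM:
            y += GAIN * dy * dt
        # Clamp so the pointer parks at the screen edge.
        return (min(max(x, 0), SCREEN_W - 1), min(max(y, 0), SCREEN_H - 1))

    # Holding the palm 15 cm right of neutral sweeps the pointer to the
    # right edge in about two seconds, regardless of screen size.
    p = (960.0, 540.0)
    for _ in range(120):              # 120 frames at 60 fps
        p = update_pointer(p, (15.0, 0.0), dt=1 / 60)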

다수의 스마트 디바이스를 활용한 멀티 디스플레이 동적 생성 및 인터랙션 (Dynamic Association and Natural Interaction for Multi-Displays Using Smart Devices)

  • 김민석;이재열
    • 한국CDE학회논문집 / Vol. 20, No. 4 / pp.337-347 / 2015
  • This paper presents a dynamic association and natural interaction method for multi-displays composed of smart devices. Users can intuitively associate relations among smart devices with shake gestures, flexibly modify the layout of the display with tilt gestures, and naturally interact with the multi-display through multi-touch interactions. First, users shake their smart devices to create and bind a group for a multi-display with a matrix configuration in an ad hoc, collaborative situation. After the group is created, the display layout can be changed flexibly by tilt gestures, which move the tilted device to the nearest vacant cell in the matrix configuration; during a tilt gesture, the system automatically updates the relation, view, and configuration of the multi-display. Finally, users can interact with the multi-display through multi-touch interactions just as they would with a single large display. Furthermore, depending on the context or role, a synchronous or asynchronous mode provides a split view or a separate UI. We demonstrate the effectiveness and advantages of the proposed approach through implementation results and a usability study.
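
One plausible way to detect the shake gesture used for group binding is to count rapid reversals of high-magnitude acceleration within a short window, as in the hypothetical sketch below; the thresholds and window length are assumptions, not the paper's values.

    # Hypothetical shake detector: a shake is N rapid reversals of
    # high-magnitude acceleration on one axis within a short window.
    from collections import deque

    ACCEL_THRESHOLD = 12.0   # m/s^2; assumption
    MIN_REVERSALS = 4        # back-and-forth motions required; assumption
    WINDOW_SEC = 1.0

    def is_shake(samples):
        """samples: iterable of (timestamp_sec, accel_x) readings."""
        reversal_times = deque()
        last_sign = 0
        for t, ax in samples:
            if abs(ax) >= ACCEL_THRESHOLD:
                sign = 1 if ax > 0 else -1
                if last_sign and sign != last_sign:
                    reversal_times.append(t)
                last_sign = sign
            while reversal_times and t - reversal_times[0] > WINDOW_SEC:
                reversal_times.popleft()
            if len(reversal_times) >= MIN_REVERSALS:
                return True
        return False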

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo;Kim, Sang-Ho
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.533-540 / 2012
  • Objective: This paper targets a framework of a hand gesture based interface design. Background: While a modeling of contact-based interfaces has focused on users' ergonomic interface designs and real-time technologies, an implementation of a contactless interface needs error-free classifications as an essential prior condition. These trends made many research studies concentrate on the designs of feature vectors, learning models and their tests. Even though there have been remarkable advances in this field, the ignorance of ergonomics and users' cognitions result in several problems including a user's uneasy behaviors. Method: In order to incorporate compatibilities considering users' comfortable behaviors and device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. Out of the extracted gestures, the compatibility-oriented gestures are acquired though human's ergonomic and cognitive experiments. Then, the obtained hand gestures are converted into a series of hand behaviors - Handycon - which is mapped into several functions in a mobile device. Results: This Handycon model guarantees users' easy behavior and helps fast understandings as well as the high classification rate. Conclusion and Application: The suggested framework contributes to develop a hand gesture-based contactless interface model considering compatibilities between human and device. The suggested procedures can be applied effectively into other contactless interface designs.

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.551-556 / 2012
  • Objective: This study examines the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it builds on advanced sensor technology and allows users to interact naturally with their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while being given the three types of visual feedback in turn. Results: Participants made longer hand traces and took more time to complete a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was the most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Large feedback yields shorter motion traces, less time, and faster motion than small feedback when people perform gestures to control a device, so large visual feedback is recommended for situations requiring fast actions, whereas smaller visual feedback is recommended for situations requiring elaborate actions.
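
The three dependent measures reported here, hand-trace length, elapsed time, and speed, can be computed from timestamped hand positions roughly as follows; the data format is an assumption.

    # Sketch of the reported measures for one gesture, computed from
    # timestamped hand positions; the data format is an assumption.
    import math

    def gesture_metrics(track):
        """track: list of (timestamp_sec, x, y) hand positions."""
        length = sum(math.hypot(x2 - x1, y2 - y1)
                     for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
        elapsed = track[-1][0] - track[0][0]
        speed = length / elapsed if elapsed > 0 else 0.0
        return length, elapsed, speed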

The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV

  • Jo, Chun-Ik;Lim, Ji-Hyoun;Park, Jun
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.525-531 / 2012
  • Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In a vision-based gesture recognition system, the choice of gesture-command pairing is critical to learnability and usability. Subjective preference and agreement scores from a previous study (Lim et al., 2012) were used to group four gesture-command pairings. To quantify learnability, two learning models, an average-time model and a marginal-time model, were used. Method: Two sets of eight gestures, sixteen gestures in total, were compiled according to agreement scores and preference data. Fourteen participants, divided into two groups, memorized one set of gesture-command pairs and performed the gestures; for each given command, the time to recall the paired gesture was collected. Results: The average recall times in the initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement scores influence the learning of gesture-command pairs. Application: This study can be applied to any device that adopts a gesture interaction system for device control.
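
The two learning models named above are conventionally power-law models: in a marginal-time model the n-th trial time follows T_n = T_1 * n^(-b), while an average-time model applies the same form to the cumulative average time, with learning rate R = 2^(-b) (the time ratio when the trial count doubles). The sketch below fits such a curve by log-log least squares; this fitting procedure is an assumption, not necessarily the paper's.

    # Power-law learning-curve sketch: fit T_n = T1 * n**(-b) to recall
    # times and report the learning rate R = 2**(-b). Assumed procedure.
    import math

    def fit_power_law(times):
        """Return (T1, b, R) from per-trial times via log-log regression."""
        xs = [math.log(n) for n in range(1, len(times) + 1)]
        ys = [math.log(t) for t in times]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        b = -slope
        return math.exp(my - slope * mx), b, 2.0 ** (-b)

    def fit_average_time_model(times):
        """Fit the same law to cumulative average times instead."""
        cum, avgs = 0.0, []
        for i, t in enumerate(times, start=1):
            cum += t
            avgs.append(cum / i)
        return fit_power_law(avgs)

    # Example: recall times (seconds) shrinking over eight trials.
    t1, b, r = fit_power_law([4.1, 3.2, 2.8, 2.5, 2.4, 2.2, 2.1, 2.0])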

손동작 인식 시스템을 위한 동적 학습 알고리즘 (Dynamic Training Algorithm for Hand Gesture Recognition System)

  • 김문환;황선기;배철수
    • 한국정보전자통신기술학회논문지 / Vol. 2, No. 2 / pp.51-56 / 2009
  • This paper proposes a new algorithm for vision-based hand gesture recognition in a camera-projector system. The proposed method uses the Fourier transform to classify static hand gestures, and an improved background subtraction method for hand segmentation. Most recognition methods are trained and tested on the same subjects and require a training phase before interaction, yet gesture recognition is also needed for interaction in diverse, untrained situations. This paper therefore corrects and incorporates the imperfect gestures detected during recognition. As a result, gestures are recognized in a user-independent manner, allowing quick online adaptation to new users.
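
A minimal sketch of the kind of Fourier-based static-gesture classification the abstract describes: the segmented hand contour is treated as a complex signal, its FFT magnitudes are normalized into a translation- and scale-tolerant signature, and the nearest stored template wins. The descriptor details and NumPy usage are assumptions, not the paper's exact formulation.

    # Sketch of Fourier-descriptor matching for a static hand shape.
    import numpy as np

    def fourier_descriptor(contour_xy, n_coeffs=16):
        """contour_xy: (N, 2) array of ordered hand-contour points."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]  # contour as complex signal
        mag = np.abs(np.fft.fft(z))
        # Dropping the DC term removes translation; dividing by the first
        # harmonic removes scale; magnitudes alone ignore the start point.
        return mag[1:n_coeffs + 1] / mag[1]

    def classify(contour_xy, templates):
        """templates: dict mapping gesture name -> stored descriptor."""
        d = fourier_descriptor(contour_xy)
        return min(templates, key=lambda g: np.linalg.norm(d - templates[g]))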

Hand Gesture Recognition Using an Infrared Proximity Sensor Array

  • Batchuluun, Ganbayar;Odgerel, Bayanmunkh;Lee, Chang Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 15, No. 3 / pp.186-191 / 2015
  • Hand gestures are the most common tool used to interact with and control various electronic devices. In this paper, we propose a novel hand gesture recognition method that uses fuzzy logic based classification with a new type of sensor array. In some cases, the feature patterns of hand gesture signals cannot be uniquely distinguished and recognized when people perform the same gesture in different ways; differences in hand shape and in the skeletal articulation of the arm also influence the process. Manifold features were extracted, and efficient features that make gestures distinguishable were selected. Because similar feature patterns still exist across different hand gestures, fuzzy logic is applied to classify them. Fuzzy rules are defined based on the many feature patterns of the input signal, and an adaptive neuro-fuzzy inference system is used to generate the fuzzy rules automatically, classifying hand gestures from a small number of feature patterns as input. In addition, emotion expression is performed after hand gesture recognition for the resulting human-robot interaction. Our proposed method was tested on many hand gesture datasets and validated with different evaluation metrics. The experimental results show that, compared with existing methods, our method detects more hand gestures in real time, with robust hand gesture recognition and corresponding emotion expressions.
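
The sketch below illustrates how fuzzy logic can separate overlapping feature patterns: each feature receives graded memberships instead of a hard threshold, and a rule's strength is the minimum of its antecedent memberships. The features, fuzzy sets, and rules are invented examples; the paper generates its rules automatically with an adaptive neuro-fuzzy inference system.

    # Invented example of fuzzy classification over overlapping features,
    # assuming two features derived from an IR proximity-sensor array.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b over [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def classify_swipe(duration_ms, left_right_lag_ms):
        fast = tri(duration_ms, 0, 150, 400)
        slow = tri(duration_ms, 200, 600, 1000)
        ltr = tri(left_right_lag_ms, 0, 80, 200)    # left sensor fired first
        rtl = tri(-left_right_lag_ms, 0, 80, 200)   # right sensor fired first
        rules = {                                   # rule strength = min of antecedents
            "quick swipe right": min(fast, ltr),
            "quick swipe left": min(fast, rtl),
            "slow swipe right": min(slow, ltr),
            "slow swipe left": min(slow, rtl),
        }
        return max(rules, key=rules.get)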

손동작 인식 시스템을 위한 동적 학습 알고리즘 (Dynamic Training Algorithm for Hand Gesture Recognition System)

  • 배철수
    • 한국정보통신학회논문지 / Vol. 11, No. 7 / pp.1348-1353 / 2007
  • This paper proposes a new algorithm for vision-based hand gesture recognition in a camera-projector system. The proposed method uses the Fourier transform to classify static hand gestures, and an improved background subtraction method for hand segmentation. Most recognition methods are trained and tested on the same subjects and require a training phase before interaction, yet gesture recognition is also needed for interaction in diverse, untrained situations. This paper therefore corrects and incorporates the imperfect gestures detected during recognition. As a result, gestures are recognized in a user-independent manner, allowing quick online adaptation to new users.

동적 베이스망 기반의 양손 제스처 인식 (Dynamic Bayesian Network based Two-Hand Gesture Recognition)

  • 석흥일;신봉기
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 35, No. 4 / pp.265-279 / 2008
  • Human-computer interaction using hand gestures has been studied for a long time and has advanced considerably, but the results are still not satisfactory. This paper proposes a hand gesture recognition method based on a dynamic Bayesian network framework. Unlike methods that use wired gloves, camera-based methods are strongly affected by the results of the image processing and feature extraction stages, so skin color modeling and detection and motion tracking are performed before inference in the proposed gesture model. Using a dynamic Bayesian network, which easily incorporates relations among features and new information into the model, we propose a new model that can recognize both two-hand and one-hand gestures. In experiments on ten isolated gestures, the model achieved a high recognition rate of up to 99.59%. The proposed model and related methods should also be applicable to other problems such as sign language recognition.
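
Since a hidden Markov model is the simplest dynamic Bayesian network, the flavor of the inference involved can be sketched with the forward algorithm: each gesture model scores the observation sequence, and the best-scoring model wins. The toy parameters and pure-Python implementation below are illustrative; the paper's DBN additionally couples features of both hands.

    # Toy forward-algorithm scorer: each gesture's HMM scores the observed
    # symbol sequence; parameters here are invented placeholders.
    import math

    def forward_loglike(obs, pi, A, B):
        """log P(obs | model); pi: initial probs, A: transitions,
        B: discrete emission probs, obs: list of symbol indices."""
        n = len(pi)
        alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
        log_p = 0.0
        for o in obs[1:]:
            alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                     for t in range(n)]
            norm = sum(alpha) or 1e-300   # rescale to avoid underflow
            alpha = [a / norm for a in alpha]
            log_p += math.log(norm)
        return log_p + math.log(sum(alpha) or 1e-300)

    def recognize(obs, models):
        """models: dict of gesture name -> (pi, A, B) tuples."""
        return max(models, key=lambda g: forward_loglike(obs, *models[g]))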