• Title/Abstract/Keyword: Gestures

Mouse Gesture Design Based on Mental Model (심성모형 기반의 마우스 제스처 개발)

  • 서혜경
    • 대한산업공학회지 / Vol. 39, No. 3 / pp.163-171 / 2013
  • Various web browsers offer mouse gesture functions because they are convenient input methods. Mouse gestures enable users to move to the previous page or tab without clicking the relevant icon or menu of the web browser. To maximize their efficiency, mouse gestures should be designed to match users' mental models. Humans use mental models to make accurate predictions and reactions when they recognize information, so information that fits users' mental models leads to fast understanding and response. A cognitive response test was performed to evaluate whether each mouse gesture is easily associated with its functional meaning. Mouse gestures found to need improvement were redesigned via sketch maps to reduce cognitive load. The methods presented in this study will help in evaluating and designing mouse gestures.

Emergency Signal Detection based on Arm Gesture by Motion Vector Tracking in Face Area

  • Fayyaz, Rabia;Park, Dae Jun;Rhee, Eun Joo
    • 한국정보전자통신기술학회논문지 / Vol. 12, No. 1 / pp.22-28 / 2019
  • This paper presents a method for detecting an emergency signal expressed by arm gestures, based on motion segmentation and face area detection in a surveillance system. Arm gestures and voice are important indicators of an emergency; we define the emergency signal as a 'Help Me' arm gesture performed within a rectangle around the face, detected by tracking changes in the direction of the horizontal motion vectors of the left and right arms. The experimental results show that the proposed method successfully detects the 'Help Me' signal for a single person and distinguishes it from similar arm gestures such as waving 'Bye' and stretching. The method can be used effectively when people cannot speak or have a speech or language disability.
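
As a rough sketch of the detection logic this abstract describes (the paper's motion-estimation details are not given here, so the window size and reversal threshold below are assumptions), the 'Help Me' pattern can be spotted by counting direction reversals of the horizontal motion in regions beside the detected face:

```python
import numpy as np

def count_reversals(vx):
    """Count sign changes in a sequence of horizontal motion components."""
    signs = np.sign(vx[np.abs(vx) > 1e-3])        # ignore near-zero motion
    return int(np.sum(signs[1:] != signs[:-1]))

def is_help_me(vx_left, vx_right, min_reversals=4):
    """Flag a 'Help Me' signal when both arm regions show repeated
    left-right direction reversals within the observation window.
    vx_left / vx_right: per-frame mean horizontal motion (e.g. from
    optical flow) in rectangles left and right of the detected face."""
    return (count_reversals(np.asarray(vx_left)) >= min_reversals and
            count_reversals(np.asarray(vx_right)) >= min_reversals)

# Both arms waving (alternating motion) vs. a one-sided 'Bye' wave
wave = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
still = [0.0, 0.1, 0.0, -0.05, 0.0, 0.0]
print(is_help_me(wave, wave))   # True  -> emergency
print(is_help_me(wave, still))  # False -> e.g. one-handed 'Bye'
```

Requiring reversals in both arm regions is what separates the two-armed distress wave from similar one-armed gestures.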

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 7, No. 4 / pp.256-261 / 2007
  • Electroencephalography (EEG) has been used in psychology for many years to record the activity of the human brain, and as technology has developed, the neural basis of the functional areas for emotion processing has gradually been revealed, so we use EEG to measure the fundamental brain areas that control human emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and their recognition matters because gestures are a useful communication medium between humans and computers; gesture recognition is usually studied with computer vision methods. Most existing work recognizes emotion from either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, with drivers as the specific target group. The experimental results show that using both EEG signals and gestures achieves higher recognition rates than using either alone. Features of both the EEG signals and the gestures are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
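
The Interactive Feature Selection the authors use is reinforcement-learning based, and its update rules are not reproduced here. As a loose, hypothetical stand-in for the idea of selecting a small subset from the combined EEG and gesture features, the sketch below does plain greedy forward selection with a toy correlation-based score:

```python
import numpy as np

def greedy_forward_selection(X, y, score_fn, max_features=5):
    """Greedy forward selection: repeatedly add the single feature that
    most improves the score; a simplification standing in for the
    paper's RL-based Interactive Feature Selection (IFS)."""
    selected, best = [], -np.inf
    for _ in range(max_features):
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        if not candidates:
            break
        score, j = max((score_fn(X[:, selected + [j]], y), j)
                       for j in candidates)
        if score <= best:
            break                      # no remaining feature helps
        best, selected = score, selected + [j]
    return selected, best

# Toy score: mean |correlation| of the chosen features with the label.
def toy_score(Xs, y):
    return np.mean([abs(np.corrcoef(Xs[:, k], y)[0, 1])
                    for k in range(Xs.shape[1])])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # e.g. concatenated EEG + gesture features
y = X[:, 2] + 0.1 * rng.normal(size=100)
print(greedy_forward_selection(X, y, toy_score, max_features=3))
```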

User-Defined Hand Gestures for Small Cylindrical Displays (소형 원통형 디스플레이를 위한 사용자 정의 핸드 제스처)

  • 김효영;김희선;이동언;박지형
    • 한국콘텐츠학회논문지 / Vol. 17, No. 3 / pp.74-87 / 2017
  • This study aims to elicit user-defined hand gestures for a small cylindrical display based on flexible display technology, a form factor that has not yet appeared as a product. We first defined the size and functions of a small cylindrical display and derived the tasks required to perform those functions. We then implemented a virtual cylindrical display interface and a physical object for manipulating it, creating an environment similar to operating a real cylindrical display; when a participant performed an elicitation task, its result was presented on the virtual display so that participants could define the gesture they judged most suitable for that operation. From the gesture groups collected for each task, representative gestures were selected by frequency, and an agreement score was computed for each gesture. Finally, based on gesture analysis and user interviews, we examined the mental models participants drew on when producing gestures.
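
Agreement scores in gesture-elicitation studies are conventionally computed with the formula of Wobbrock et al., A = sum over groups of (|Pi|/|P|)^2, where the Pi are groups of identical proposals for a task; the paper's exact scoring variant is not specified here, so this is a minimal sketch of the standard formula:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one task: sum of squared proportions of
    identical gesture proposals (Wobbrock et al.'s formula, widely
    used in elicitation studies)."""
    n = len(proposals)
    return sum((c / n) ** 2 for c in Counter(proposals).values())

# 20 participants: 12 propose 'rotate', 5 'swipe', 3 'tap'
print(agreement_score(['rotate'] * 12 + ['swipe'] * 5 + ['tap'] * 3))
# (12/20)^2 + (5/20)^2 + (3/20)^2 = 0.445
```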

Emotion Recognition Method using Physiological Signals and Gestures (생체 신호와 몸짓을 이용한 감정인식 방법)

  • 김호덕;양현창;심귀보
    • 한국지능시스템학회논문지 / Vol. 17, No. 3 / pp.322-327 / 2007
  • Researchers in psychology have long used electroencephalography (EEG) to measure and record the activity of the human brain. As science has advanced, the brain regions that regulate human emotion have gradually been identified, so we measured these emotion-regulating regions using EEG. Hand gestures such as waving and head movements such as nodding are used as body language in human conversation, and their recognition is important as a useful communication channel between humans and computers; research on gesture recognition has mainly relied on vision-based methods. Most previous studies recognized emotion using either physiological signals or gestures alone. In this paper, we recognize human emotion using EEG signals and gestures together, and we set drivers as the specific target group for the experiments. The experimental results show that recognition using both physiological signals and gestures achieves a higher recognition rate than using either one alone. Features of the physiological signals and gestures were selected using Interactive Feature Selection (IFS), which is based on the concept of reinforcement learning.

Dynamic gesture recognition using a model-based temporal self-similarity and its application to taebo gesture recognition

  • Lee, Kyoung-Mi;Won, Hey-Min
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 11 / pp.2824-2838 / 2013
  • Considerable attention has recently been paid to analyzing dynamic human gestures that vary over time, and most of it concerns spatio-temporal features rather than analyzing each frame of a gesture separately. For accurate dynamic gesture recognition, motion feature extraction algorithms need to find representative features that uniquely identify time-varying gestures. This paper proposes a new feature extraction algorithm using temporal self-similarity based on a hierarchical human model. Because a conventional temporal self-similarity method computes the whole-body movement across continuous frames, it cannot distinguish different gestures that involve the same amount of movement. The proposed model-based temporal self-similarity method groups the body parts of a hierarchical model into several sets and calculates movement for each set. While recognition results can depend on how the sets are formed, the best way to find optimal sets is to separate frequently used body parts from less-used ones. We then apply a multiclass support vector machine whose optimization algorithm is based on structural support vector machines. The effectiveness of the proposed feature extraction algorithm is demonstrated in an application to taebo gesture recognition: the model-based method overcomes the shortcomings of the conventional temporal self-similarity method, and its recognition results are superior.
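
A temporal self-similarity matrix compares every pair of frames; the model-based variant described above computes one such matrix per body-part set rather than for the whole body. A minimal numpy sketch, where the joint features, the Euclidean distance, and the example grouping are all assumptions:

```python
import numpy as np

def self_similarity(frames):
    """Temporal self-similarity matrix: pairwise Euclidean distances
    between per-frame feature vectors (T x D -> T x T)."""
    d = frames[:, None, :] - frames[None, :, :]
    return np.linalg.norm(d, axis=-1)

def model_based_ssm(joint_positions, part_sets):
    """One self-similarity matrix per body-part set: joint_positions
    is (T, J, 3); each part set is a list of joint indices."""
    T = joint_positions.shape[0]
    return [self_similarity(joint_positions[:, s, :].reshape(T, -1))
            for s in part_sets]

# Example: 50 frames, 15 joints; separating the arm joints (say 3-8)
# from the rest lets gestures with equal total movement be told apart.
pose = np.random.default_rng(1).normal(size=(50, 15, 3))
ssms = model_based_ssm(pose, [[3, 4, 5, 6, 7, 8],
                              [0, 1, 2, 9, 10, 11, 12, 13, 14]])
print([m.shape for m in ssms])   # [(50, 50), (50, 50)]
```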

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen;Minh, Quang Tran;The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 5 / pp.1967-1983 / 2020
  • The goal of this paper is to analyze and build an algorithm for recognizing hand gestures in smart home applications. The proposed algorithm combines image processing techniques with an artificial neural network (ANN) to let users interact with computers through common gestures. We use five gestures: Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer; the algorithm analyzes the gestures and performs the appropriate action according to the user's request. The results show that the average accuracy of the proposed algorithm is 92.6 percent on images and more than 91 percent on video, both of which satisfy the performance requirements of real-world applications, specifically smart home services. The processing time is approximately 0.098 seconds on datasets at 10 frames/sec. However, the accuracy still depends on the number and resolution of the training images (or videos).
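
A hedged sketch of the classification stage using scikit-learn's MLPClassifier. The 32x32 input size, layer sizes, and random stand-in data are assumptions, not the paper's pipeline; real features would come from its image-processing front end (hand segmentation, resizing, etc.):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

GESTURES = ['Stop', 'Forward', 'Backward', 'Turn Left', 'Turn Right']

# Stand-in data: flattened 32x32 preprocessed hand-region images.
rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))
y_train = rng.integers(0, len(GESTURES), size=200)

# Small feed-forward ANN mapping image features to the five commands.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                    random_state=0)
clf.fit(X_train, y_train)

frame_features = rng.random((1, 32 * 32))        # one preprocessed frame
print(GESTURES[clf.predict(frame_features)[0]])  # predicted command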

Development of a Multi-Function Myoelectric Prosthetic Hand with Communicative Hand Gestures (의사표현 손동작이 가능한 다기능 근전 전동의수 개발)

  • 허윤;홍범기;홍응표;박세훈;문무성
    • 제어로봇시스템학회논문지 / Vol. 17, No. 12 / pp.1248-1255 / 2011
  • In daily life, a major role of the human hand besides grasping is communication through hand gestures. If amputees can express their intentions with a prosthetic hand, they can participate much more actively in social activities. This paper therefore proposes a multi-function myoelectric prosthetic hand that can express six useful hand gestures: Rock, Scissors, Paper, Indexing, OK, and Thumb-up. It was designed with an under-actuated structure to minimize the volume and weight of the prosthetic hand. Moreover, to control the various hand gestures effectively with only two EMG sensors, we propose a control strategy in which the signal types are expanded into 'Strong' and 'Light' and the hand gestures are classified hierarchically for intuitive control. Finally, we validate the developed prosthetic hand experimentally.
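
A sketch of the hierarchical two-sensor control idea: each EMG channel is first quantized into 'Light' or 'Strong' contractions, and short sequences of these symbols select a gesture. The thresholds and the symbol-to-gesture table below are hypothetical, not the paper's actual mapping:

```python
def classify_contraction(rms, light=0.2, strong=0.6):
    """Quantize one EMG channel's RMS amplitude into a symbol.
    Thresholds are hypothetical placeholders."""
    if rms >= strong:
        return 'S'          # strong contraction
    if rms >= light:
        return 'L'          # light contraction
    return '-'              # rest

# Hypothetical hierarchical mapping: the first symbol pair picks a
# group of gestures, the second pair picks the gesture within it.
GESTURE_TABLE = {
    ('L-', 'L-'): 'Rock', ('L-', '-L'): 'Scissors', ('L-', 'LL'): 'Paper',
    ('S-', 'L-'): 'Indexing', ('S-', '-L'): 'OK', ('S-', 'LL'): 'Thumb-up',
}

def decode(rms_pairs):
    """rms_pairs: two consecutive (sensor1, sensor2) RMS readings."""
    symbols = tuple(classify_contraction(a) + classify_contraction(b)
                    for a, b in rms_pairs)
    return GESTURE_TABLE.get(symbols, 'no gesture')

print(decode([(0.7, 0.1), (0.3, 0.3)]))  # 'Thumb-up' under this toy table
```

The appeal of such a hierarchy is that two noisy channels yield only a few reliable symbols per step, yet short symbol sequences still span all six gestures.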

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • 한은정;강현;정기철
    • 한국멀티미디어학회논문지 / Vol. 8, No. 9 / pp.1177-1186 / 2005
  • A vision-based video game interface that takes the user's gestures through a camera instead of a keyboard or joystick must, to allow natural movement, recognize continuous gestures while tolerating the user's meaningless motions. This paper proposes a gesture recognition method for video game interfaces that combines recognition with spotting: it recognizes meaningful motions in a continuous image sequence while simultaneously filtering out meaningless ones. Applying the proposed method to Quake II, a first-person action game that uses the user's upper-body gestures as game commands, yielded an average spotting rate of 93.36% on continuous gestures, showing performance useful for video game interfaces.
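
One common way to realize this combined recognize-and-spot behavior (the abstract does not detail the paper's models) is to score each sliding window with every gesture model and a non-gesture "garbage" model, accepting a window only when some gesture beats the garbage score. The scoring callables below are placeholders for real models such as HMMs:

```python
def spot_gestures(frames, gesture_scores, garbage_score,
                  window=10, margin=0.0):
    """Slide a window over the frame sequence; in each window, report
    the best-scoring gesture only if it beats the non-gesture model.
    gesture_scores: dict name -> callable(segment) -> float.
    garbage_score: callable(segment) -> float."""
    hits = []
    for t in range(len(frames) - window + 1):
        seg = frames[t:t + window]
        name, score = max(((n, f(seg)) for n, f in gesture_scores.items()),
                          key=lambda p: p[1])
        if score > garbage_score(seg) + margin:
            hits.append((t, name))     # meaningful gesture spotted
    return hits

# Toy example: frames are scalars; 'punch' likes high-motion windows.
frames = [0.1] * 10 + [0.9] * 10 + [0.1] * 10
scores = {'punch': lambda seg: sum(seg) / len(seg)}
print(spot_gestures(frames, scores, garbage_score=lambda seg: 0.5))
# only windows overlapping the high-motion burst are reported
```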

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • 정훤재;박현준;김동한
    • 제어로봇시스템학회논문지 / Vol. 21, No. 11 / pp.1034-1037
    • 2015
  • In recent years in the field of robotics, various methods have been developed to create an intimate relationship between people and robots. These methods include speech, vision, and biometrics recognition as well as gesture-based interaction. These recognition technologies are used in various wearable devices, smartphones and other electric devices for convenience. Among these technologies, gesture recognition is the most commonly used and appropriate technology for wearable devices. Gesture recognition can be classified as contact or noncontact gesture recognition. This paper proposes contact gesture recognition with IMU and EMG sensors by using the hidden Markov model (HMM) twice. Several simple behaviors make main gestures through the one-stage HMM. It is equal to the Hidden Markov model process, which is well known for pattern recognition. Additionally, the sequence of the main gestures, which comes from the one-stage HMM, creates some higher-order gestures through the two-stage HMM. In this way, more natural and intelligent gestures can be implemented through simple gestures. This advanced process can play a larger role in gesture recognition-based UX for many wearable and smart devices.