• Title/Abstract/Keyword: User-Defined Gesture

Search results: 15 items (processing time 0.025s)

멀티모달 사용자 인터페이스를 위한 펜 제스처인식기의 구현 (Implementation of Pen-Gesture Recognition System for Multimodal User Interface)

  • 오준택;이우범;김욱현
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 추계종합학술대회 논문집(3)
    • /
    • pp.121-124
    • /
    • 2000
  • In this paper, we propose a pen-gesture recognition system for a user interface in a multimedia terminal, which requires fast processing and a high recognition rate. It is a real-time, interactive system linking a graphic module and a text module. Text editing is performed either by pen gestures in the graphic module or by direct editing in the text module, and supports all 14 editing functions. Pen-gesture recognition is performed by matching classification features extracted from input strokes against a pen-gesture model. The model is built from classification features, i.e., crossing number, direction change, direction-code number, positional relation, and distance-ratio information for the 15 defined gesture types. In a recognition experiment, the proposed system achieved a 98% recognition rate and a 30msec average processing time.
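The feature-matching idea in the abstract (chain-code directions, direction changes, model lookup) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the gesture names and the tiny model table are made up for the example.

```python
import math

def direction_code(p, q):
    """Quantize the segment p->q into one of 8 chain-code directions (0..7)."""
    angle = math.atan2(q[1] - p[1], q[0] - p[0])
    return round(angle / (math.pi / 4)) % 8

def extract_features(stroke):
    """Stroke features: chain codes plus the direction-change count."""
    codes = [direction_code(a, b) for a, b in zip(stroke, stroke[1:])]
    changes = sum(1 for a, b in zip(codes, codes[1:]) if a != b)
    return {"codes": tuple(codes), "direction_changes": changes}

def classify(stroke, model):
    """Pick the model gesture whose stored chain code agrees best (illustrative)."""
    feats = extract_features(stroke)
    def score(entry):
        return sum(1 for a, b in zip(feats["codes"], entry["codes"]) if a == b)
    return max(model, key=lambda name: score(model[name]))

# Hypothetical two-gesture model; a real system would store all 15 types
# with the full feature set (crossing number, distance ratio, etc.).
model = {
    "delete-right": {"codes": (0, 0, 0)},  # three segments heading right
    "insert-down":  {"codes": (2, 2, 2)},  # three segments heading down
}
```

A rightward stroke such as `[(0, 0), (1, 0), (2, 0), (3, 0)]` then resolves to `"delete-right"`.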


A Notation Method for Three Dimensional Hand Gesture

  • Choi, Eun-Jung;Kim, Hee-Jin;Chung, Min-K.
    • 대한인간공학회지
    • /
    • Vol. 31, No. 4
    • /
    • pp.541-550
    • /
    • 2012
  • Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with product commands, various studies have tried to elicit gestures from users. In such studies, many different gestures are proposed for a single command because users' experiences vary, so organizing the gestures systematically and identifying similar patterns among them has become one of the important issues. Method: Related studies on gesture taxonomy and sign-language notation were investigated. Results: Through the literature review, a total of five elements of static gestures were selected, and a total of three forms of dynamic gestures were identified. Temporal variability (repetition) was additionally selected. Conclusion: A notation method that follows a combination sequence of the gesture elements was suggested. Application: The notation method for three-dimensional hand gestures can be used to describe and organize user-defined gestures systematically.
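The "combination sequence of gesture elements" can be pictured as serializing element values in a fixed order. This is a generic sketch of the idea; the element names and separator are paraphrased assumptions, not the paper's actual notation.

```python
def notate(handshape, orientation, location, movement=None, repetition=1):
    """Compose a gesture notation string from its elements in a fixed sequence.
    Static elements come first; an optional movement and a repetition count
    (the paper's temporal variability) are appended. All names illustrative."""
    parts = [handshape, orientation, location]
    if movement:
        parts.append(movement)          # dynamic form, if any
    if repetition > 1:
        parts.append(f"x{repetition}")  # temporal variability marker
    return "-".join(parts)
```

For example, a fist circled twice at chest height would be written `notate("fist", "palm-down", "chest", "circle", 2)`.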

A Unit Touch Gesture Model of Performance Time Prediction for Mobile Devices

  • Kim, Damee;Myung, Rohae
    • 대한인간공학회지
    • /
    • Vol. 35, No. 4
    • /
    • pp.277-291
    • /
    • 2016
  • Objective: The aim of this study is to propose a unit touch gesture model for predicting performance time on mobile devices. Background: When estimating usability through Model-based Evaluation (MBE) of interfaces, the GOMS model measures 'operators' to predict execution time in the desktop environment. This study applies the operator concept from GOMS to touch gestures: since touch gestures are composed of unit touch gestures, performance time on mobile devices can be predicted from those units. Method: To extract unit touch gestures, subjects' manual movements were recorded at 120 fps with pixel coordinates. Touch gestures were classified by 'out of range', 'registration', 'continuation', and 'termination' phases. Results: Six unit touch gestures were extracted: hold down (H), release (R), slip (S), curved stroke (Cs), path stroke (Ps), and out of range (Or). The movement time predicted by the unit touch gesture model did not differ significantly from the participants' execution time, and the six unit gestures can predict the movement time of undefined touch gestures such as user-defined gestures. Conclusion: Touch gestures can be subdivided into six unit touch gestures, which explain almost all current touch gestures including user-defined gestures, so the model has high predictive power and can be used to predict the performance time of touch gestures. Application: Unit touch gesture times can simply be added up to predict the performance time of a new gesture without measuring it directly.
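The additive prediction the abstract describes, composing a compound gesture's time from unit-gesture times, can be sketched directly. The millisecond values below are placeholders for illustration, not the paper's measured operator times.

```python
# Assumed per-unit execution times in milliseconds (illustrative, not measured).
UNIT_TIMES_MS = {
    "H": 100,   # hold down
    "R": 80,    # release
    "S": 120,   # slip
    "Cs": 250,  # curved stroke
    "Ps": 300,  # path stroke
    "Or": 150,  # out of range
}

def predict_time_ms(units):
    """Predict execution time of a gesture written as a sequence of unit symbols,
    GOMS-style: the compound time is the sum of its unit operator times."""
    return sum(UNIT_TIMES_MS[u] for u in units)

# A drag might decompose as hold down + slip + release:
drag_ms = predict_time_ms(["H", "S", "R"])
```

New gestures are then costed by decomposition alone, which is the model's stated advantage.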

Hand Gesture Segmentation Method using a Wrist-Worn Wearable Device

  • Lee, Dong-Woo;Son, Yong-Ki;Kim, Bae-Sun;Kim, Minkyu;Jeong, Hyun-Tae;Cho, Il-Yeon
    • 대한인간공학회지
    • /
    • Vol. 34, No. 5
    • /
    • pp.541-548
    • /
    • 2015
  • Objective: We introduce a hand-gesture segmentation method using a wrist-worn wearable device that can recognize the simple gestures of clenching and unclenching one's fist. Background: There are many smart watches and fitness bands on the market, and most of them already adopt gesture interaction for ease of use. However, a user's gesture commands are often hard to distinguish from everyday motions, which causes malfunctions. A simple and clear gesture segmentation method is needed to improve gesture interaction performance. Method: We first defined making a fist (start of a gesture command) and opening one's fist (end of a gesture command) as segmentation gestures; clenching and unclenching one's fist are simple and intuitive. We also designed a single gesture as an ordered set of making a fist, a command gesture, and opening one's fist. To detect the segmentation gestures at the bottom of the wrist, we used a wrist strap on which an array of infrared sensors (emitters and receivers) was mounted. When a user makes or opens a fist, the shape of the bottom of the wrist changes, and with it the amount of reflected infrared light detected by the receiver sensors. Results: An experiment was conducted to evaluate segmentation performance with 12 participants (10 males, 2 females, average age 38). The recognition rates of the segmentation gestures, clenching and unclenching one's fist, were 99.58% and 100%, respectively. Conclusion: Through the experiment, we evaluated gesture segmentation performance and its usability; the results show the potential of the suggested segmentation method. Application: The results of this study can be used to develop guidelines to prevent injury in auto workers at transmission assembly plants.
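The clench/unclench segmentation can be pictured as a threshold-crossing state machine over the IR reflectance signal: a rising crossing opens a command span, a falling crossing closes it. The threshold value and the normalized-signal assumption below are illustrative, not from the paper.

```python
CLENCH_THRESHOLD = 0.7  # normalized IR reflectance at the wrist (assumed)

def segment_commands(samples):
    """Return (start, end) index pairs bounding each clench..unclench span.
    samples: time-ordered reflectance readings from the wrist sensor array."""
    spans, start = [], None
    for i, value in enumerate(samples):
        clenched = value >= CLENCH_THRESHOLD
        if clenched and start is None:
            start = i                  # fist closed: command begins
        elif not clenched and start is not None:
            spans.append((start, i))   # fist opened: command ends
            start = None
    return spans
```

Everything outside the returned spans is everyday motion and can be ignored by the recognizer, which is exactly the malfunction-avoidance role the segmentation gestures play.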

Gesture Interaction Design based on User Preference for the Elastic Handheld Device

  • Yoo, Hoon Sik;Ju, Da Young
    • 대한인간공학회지
    • /
    • Vol. 35, No. 6
    • /
    • pp.519-533
    • /
    • 2016
  • Objective: This study aims to define relevant operation methods and functions by investigating the value of a handheld smart device made of a soft, flexible, jelly-like material. Background: New technologies and materials drive transformations of interfaces and their operation methods. Recently, research on the Organic User Interface (OUI), which studies the value of new input and output methods using soft, flexible materials in various instruments, has grown in importance. Method: Based on existing studies, 27 gestures usable on a handheld device were defined. A quantitative survey of adult males and females in their 20s and 30s was conducted, and the functions that could be linked to the most satisfying gestures were analyzed. To analyze users' needs and hurdles regarding the defined gestures, a focus group interview was conducted with groups of early adopters and ordinary users. Results: Users placed high value on the usability and fun of an elastic device, and the preferred gestures and their linkable functions were analyzed. Conclusion: The most significant contribution of this study is that it sheds new light on the value of a device made of elastic material. Beyond finding and defining the gestures and functions applicable to a handheld elastic device, the study identified the value elements users fundamentally desire from such a device: 'usability' and 'fun'. Application: The preference and satisfaction data on gestures and their associated functions will help commercialize an elastic device in the future.

소형 원통형 디스플레이를 위한 사용자 정의 핸드 제스처 (User-Defined Hand Gestures for Small Cylindrical Displays)

  • 김효영;김희선;이동언;박지형
    • 한국콘텐츠학회논문지
    • /
    • Vol. 17, No. 3
    • /
    • pp.74-87
    • /
    • 2017
  • This study aims to elicit user-defined hand gestures for a small cylindrical display based on a flexible display, a product type that has not yet reached the market. To this end, we first defined the size and functions of a small cylindrical display and derived the tasks required to perform those functions. We then implemented a virtual cylindrical display interface and a physical object for manipulating it, creating an environment similar to operating a real cylindrical display: when a task for gesture elicitation was performed, its result was presented on the virtual cylindrical display so that users could define the gesture they judged appropriate for that operation. From the gesture groups elicited for each task, representative gestures were selected by frequency, and an agreement score was computed for each gesture. Finally, based on gesture analysis and user interviews, we observed the mental models underlying gesture elicitation.
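The agreement score used in gesture-elicitation studies (e.g., Wobbrock et al.'s widely used formulation) groups identical proposals for a task and sums the squared group proportions. The sketch below shows that generic metric; it is not code from this paper, and the gesture labels are invented.

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement for one task: sum over gesture groups of (group size / n)^2.
    proposals: the gesture label each participant proposed for the task."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# If 3 of 4 participants propose "rotate" and 1 proposes "swipe",
# the score is (3/4)^2 + (1/4)^2 = 0.625.
score = agreement_score(["rotate", "rotate", "rotate", "swipe"])
```

A score of 1.0 means perfect consensus; 1/n means every participant proposed a different gesture.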

제스처인식을 이용한 퀴즈게임 콘텐츠의 사용자 인터페이스에 대한 연구 (A Study on User Interface for Quiz Game Contents using Gesture Recognition)

  • 안정호
    • 디지털콘텐츠학회 논문지
    • /
    • Vol. 13, No. 1
    • /
    • pp.91-99
    • /
    • 2012
  • In this paper, we introduce work that digitizes quiz games from the analog domain. We digitized the tasks that conventional quiz games have performed manually: running the quiz, identifying participants, presenting questions, recognizing the participant who raises a hand first, judging correct and incorrect answers, tallying scores, and determining the winning team. To automate this, we devised an algorithm that takes depth images from a Kinect camera, which has recently attracted attention, locates the users, and recognizes user-oriented predefined gestures. By analyzing the distribution of depth values in the image, we detected the upper bodies of quiz participants, segmented the users, and detected hand regions. We also devised feature extraction and decision functions for recognizing open-palm, fist, and other hand shapes, so that a user can select an answer choice. The implemented quiz application showed very satisfactory gesture recognition results in real-time tests and supported smooth game play.
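The "who raised a hand first" step can be sketched as a scan over time-ordered frames for the first participant whose tracked hand crosses a raise threshold. The threshold, the data layout, and the tie-break rule are assumptions for illustration, not the paper's depth-image algorithm.

```python
RAISE_THRESHOLD = 1.5  # hand height in meters above the floor (assumed)

def first_raiser(frames):
    """frames: time-ordered list of {participant: hand_height} dicts.
    Returns the first participant whose hand crosses the threshold, else None."""
    for frame in frames:
        raised = [p for p, h in frame.items() if h >= RAISE_THRESHOLD]
        if raised:
            return min(raised)  # tie within one frame: break deterministically
    return None
```

In the real system the heights would come from the depth-image segmentation described in the abstract rather than being supplied directly.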

손 제스처 기반의 애완용 로봇 제어 (Hand gesture based a pet robot control)

  • 박세현;김태의;권경수
    • 한국산업정보학회논문지
    • /
    • Vol. 13, No. 4
    • /
    • pp.145-154
    • /
    • 2008
  • This paper proposes a system that controls a pet robot by recognizing a user's hand gestures in image sequences acquired from a camera mounted on the robot. The proposed system consists of four stages: hand detection, feature extraction, gesture recognition, and robot control. First, the hand region is detected in the input camera image using a skin-color model defined in the HSI color space and connected-component analysis. Next, features are extracted from the shape and motion of the hand region across the image sequence; hand shape is considered in order to distinguish meaningful gestures. Hand gestures are then recognized with hidden Markov models whose inputs are symbols quantized from the hand motion. Finally, the pet robot performs the command corresponding to the recognized gesture. Gestures such as sit, stand up, lie down, and shake hands were defined as commands for controlling the robot. Experimental results showed that a user can control the pet robot with gestures using the proposed system.
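The recognition stage, one HMM per gesture over quantized motion symbols, with the most likely model winning, can be sketched with a plain forward algorithm. The tiny two-state, two-symbol models below are invented for illustration; the paper's models would be trained on real quantized hand trajectories.

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm.
    pi: initial state probs, A[i][j]: transition probs, B[i][k]: emission probs."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for sym in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][sym]
                 for j in range(n)]
    return sum(alpha)

def classify(obs, models):
    """Pick the gesture whose HMM assigns the observation the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))

models = {
    # (pi, A, B) -- "sit" favors emitting symbol 0, "stand" favors symbol 1
    "sit":   ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.8, 0.2]]),
    "stand": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.2, 0.8], [0.2, 0.8]]),
}
```

A symbol stream dominated by 0s is then attributed to the "sit" model, and the robot executes the mapped command.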


상용 제스처 컨트롤러의 근전도 패턴 조합에 따른 인터페이스 연구 (A Research for Interface Based on EMG Pattern Combinations of Commercial Gesture Controller)

  • 김기창;강민성;지창욱;하지우;선동익;쉐강;신규식
    • 공학교육연구
    • /
    • Vol. 19, No. 1
    • /
    • pp.31-36
    • /
    • 2016
  • These days, ICT-related products are pouring into the market due to advances in mobile technology and the spread of smartphones. Among them, wearable devices are in the spotlight with the advent of the hyper-connected society. In this paper, a body-attached wearable device using EMG (electromyography) sensors is studied. EMG sensor research divides into two areas: medical applications and control devices. This study corresponds to the latter, a method of transmitting a user's manipulation intent to robots, games, or computers through EMG measurement. We used the commercial MYO device developed by Thalmic Labs in Canada and matched the EMG of arm muscles to a gesture controller. In the experiments, various arm motions for controlling devices were first defined; we then derived several distinguishable motions through analysis of the EMG signals and substituted those motions for a joystick.
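The final step, standing recognized arm motions in for joystick inputs, amounts to a mapping from gesture labels to control commands. The gesture names and command set below are illustrative assumptions, not the study's actual motion set.

```python
# Hypothetical mapping from EMG-recognized arm motions to joystick-style
# commands; an unrecognized motion falls back to "idle".
GESTURE_TO_COMMAND = {
    "fist":     "forward",
    "spread":   "stop",
    "wave_in":  "turn_left",
    "wave_out": "turn_right",
}

def to_command(gesture):
    """Translate a recognized gesture label into a control command."""
    return GESTURE_TO_COMMAND.get(gesture, "idle")
```

The design point is that the downstream robot or game only ever sees joystick-like commands, so the EMG front end can be swapped without changing the controlled system.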

동작인식기반의 3D 암각화 VR 콘텐츠 구현 (Development of 3D Petroglyph VR Contents based on Gesture Recognition)

  • 정영기
    • 한국전자통신학회논문지
    • /
    • Vol. 9, No. 1
    • /
    • pp.25-32
    • /
    • 2014
  • Petroglyphs are a very important cultural heritage worldwide because they play a key role in understanding prehistoric communities from before writing existed. Today, 3D data is essential for permanently recording the shapes of important cultural heritage to pass on to future generations. Because recent 3D scanning technology can produce highly realistic 3D models, it can be used in virtual-reality museum exhibits that draw visitors into a 3D world. In this study, we implemented 3D petroglyph VR (Virtual Reality) content based on a new gesture recognition method. The proposed method recognizes a motion by comparing the user's movement, captured with a 3D depth sensor, against predefined motions. We also propose a new approach for recording 3D petroglyph data using 3D scanning as a precise, non-destructive technique.
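Comparing a captured movement against predefined motions, as this abstract describes, is commonly done by template matching over trajectories; dynamic time warping is one standard choice. The sketch below uses DTW on a single 1-D feature (e.g., hand height over time) with invented templates; the paper's actual comparison method and feature set are not specified here.

```python
import math

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(trajectory, templates):
    """Return the predefined motion whose template is closest under DTW."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))

templates = {
    "swipe-up":   [0.0, 0.3, 0.6, 1.0],  # hand height over time (illustrative)
    "swipe-down": [1.0, 0.6, 0.3, 0.0],
}
```

DTW tolerates the speed variations that naturally occur when different visitors perform the same motion in front of the depth sensor.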