• Title/Abstract/Keyword: Gestures Mouse

28 search results

심성모형 기반의 마우스 제스처 개발 (Mouse Gesture Design Based on Mental Model)

  • 서혜경
    • 대한산업공학회지 / Vol. 39, No. 3 / pp.163-171 / 2013
  • Various web browsers offer mouse gestures because they are a convenient input method. Mouse gestures let users move to the previous page or tab without clicking the corresponding icon or menu of the web browser. To maximize their efficiency, mouse gestures should be designed to match users' mental models. Humans rely on mental models to predict and react accurately once information has been recognized, so gestures that fit users' mental models lead to fast understanding and response. A cognitive response test was performed to evaluate whether each mouse gesture is easily associated with its functional meaning. Gestures found to need improvement were then redesigned via sketch maps to reduce cognitive load. The methods presented in this study will help in evaluating and designing mouse gestures.

손 제스쳐를 이용한 조이스틱 방식의 마우스제어 방법 (A Joystick-driven Mouse Controlling Method using Hand Gestures)

  • 정진영;김정인
    • 한국멀티미디어학회논문지 / Vol. 19, No. 1 / pp.60-67 / 2016
  • PC users have long controlled their computers with input devices such as the mouse and keyboard. To relieve the inconveniences of these devices, touchscreens have come into wide use, and devices that recognize human gestures are being developed one after another. For example, Kinect, developed and distributed by Microsoft, is a non-contact input device that recognizes human gestures through motion sensors and can replace the mouse as an input device. However, when controlling the mouse on a large screen, it suffers from the problem that large motions are required to move the mouse pointer to the edges of the screen. In this paper, we propose a joystick-driven mouse-controlling method which enables the user to move the mouse pointer to the corners of the screen with small motions. The experimental results show that movements of the user's palm within a range of 30 cm are enough to move the mouse pointer to the edges of the screen.
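The joystick-style mapping above lends itself to a short sketch: rather than mapping the palm position to an absolute screen coordinate, the palm's displacement from a neutral center drives the pointer's velocity, so a small tracked range can reach every corner of the screen. This is a minimal illustration under assumed parameters, not the authors' implementation; the dead zone, speed constant, and use of the pyautogui library are all assumptions.

```python
import pyautogui  # cursor automation; pyautogui.moveRel() moves the real cursor

DEAD_ZONE_CM = 3.0     # assumed: ignore small tremors around the neutral position
MAX_OFFSET_CM = 15.0   # +/-15 cm deflection spans the paper's 30 cm palm range
MAX_SPEED_PX = 1200.0  # assumed pointer speed (px/s) at full deflection

def velocity(offset_cm: float) -> float:
    """Map a palm deflection in cm to a signed pointer speed in px/s."""
    if abs(offset_cm) < DEAD_ZONE_CM:
        return 0.0
    norm = min((abs(offset_cm) - DEAD_ZONE_CM)
               / (MAX_OFFSET_CM - DEAD_ZONE_CM), 1.0)
    return MAX_SPEED_PX * norm * (1.0 if offset_cm > 0 else -1.0)

def joystick_step(palm_dx_cm: float, palm_dy_cm: float, dt: float) -> None:
    """Advance the cursor one frame, joystick-style."""
    pyautogui.moveRel(velocity(palm_dx_cm) * dt, velocity(palm_dy_cm) * dt)

# A palm held 10 cm right of center keeps the pointer drifting rightward
# until the hand returns to the dead zone, however large the screen is.
joystick_step(10.0, 0.0, dt=1 / 30)
```

Because velocity rather than position is controlled, the reachable screen area is unbounded no matter how small the tracked hand range is, which is the property the paper exploits.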

터치스크린 기반 웹브라우저 조작을 위한 손가락 제스처 개발 (Development of Finger Gestures for Touchscreen-based Web Browser Operation)

  • 남종용;최재호;정의승
    • 대한인간공학회지 / Vol. 27, No. 4 / pp.109-117 / 2008
  • Compared to the existing PC, which uses a mouse and a keyboard, the touchscreen-based portable PC lets the user operate it with the fingers, requiring new operation methods. However, current touchscreen web browser operations often merely have the finger move and click like a mouse, or correspond poorly to the user's sensibilities and the structure of the index finger, making them difficult to use while walking. The goal of this study is therefore to develop finger gestures that facilitate interaction between the interface and the user and make operation easier. First, the top eight functions were selected based on frequency of use in the web browser and user preference. Then the users' structural knowledge was visualized through sketch maps, and finger gestures applicable to touchscreens were derived through the Meaning in Mediated Action method. Directional gestures were derived for the forward/back page and up/down scroll functions, and letter-type and icon-type gestures were drawn for the window close, refresh, home, and print functions. A validation experiment compared the existing operation methods with the proposed gestures in terms of execution time, error rate, and preference; the directional and letter-type gestures outperformed the existing methods. These results suggest that the new gestures can make operation easier and faster not only in touchscreen web browsers on portable PCs but also for telematics functions in automobiles, PDAs, and similar devices.
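As a rough illustration of the directional gestures described above, the sketch below quantizes a finger stroke's angle into four directions from its start and end points; the minimum-length threshold and the function mappings in the comments are assumptions for illustration, not the paper's algorithm.

```python
import math

MIN_STROKE_PX = 40  # assumed: shorter strokes are treated as plain taps

def classify_stroke(x0: float, y0: float, x1: float, y1: float) -> str:
    """Quantize a finger stroke into left/right/up/down."""
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < MIN_STROKE_PX:
        return "tap"
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
    if angle < 45 or angle >= 315:
        return "right"  # e.g. forward page
    if angle < 135:
        return "up"     # e.g. scroll up
    if angle < 225:
        return "left"   # e.g. back page
    return "down"       # e.g. scroll down

print(classify_stroke(100, 300, 320, 310))  # -> "right"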

제스처 및 음성 인식을 이용한 윈도우 시스템 제어에 관한 연구 (Study about Windows System Control Using Gesture and Speech Recognition)

  • 김주홍;진성일;이남호;이용범
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 1998년도 추계종합학술대회 논문집 / pp.1289-1292 / 1998
  • HCI (human-computer interface) technologies have usually been implemented with the mouse, keyboard, and joystick. Because the mouse and keyboard can be used only in limited situations, more natural HCI methods, such as speech-based and gesture-based input, have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of multimedia computers. Our multi-modal input system consists of three parts. The first is a virtual-hand mouse, which replaces mouse control with a set of gestures. The second is Windows control using speech recognition. The third is Windows control using gesture recognition. We introduce neural network and HMM methods to recognize speech and gestures. The results of the three parts interface with the system directly and through Windows.
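The three-part architecture can be pictured as a small dispatcher that routes recognized speech and gesture tokens to Windows actions. The sketch below is an assumed illustration: the token vocabulary and command table are hypothetical, and the neural-network/HMM recognizers that would produce the tokens are not shown.

```python
# Hypothetical multi-modal dispatch table; the paper's actual command
# vocabularies are not given, so these entries are illustrative only.
COMMANDS: dict[tuple[str, str], str] = {
    ("speech", "open"):     "open the active window's menu",
    ("speech", "close"):    "close the active window",
    ("gesture", "point"):   "move the virtual-hand cursor",
    ("gesture", "grasp"):   "press the mouse button",
    ("gesture", "release"): "release the mouse button",
}

def dispatch(modality: str, token: str) -> str:
    """Route one recognized (modality, token) pair to a Windows action."""
    return COMMANDS.get((modality, token), "ignore")

# Each recognizer (HMM for speech, neural network for gestures) would
# feed its top-scoring token into dispatch() as it fires.
print(dispatch("gesture", "grasp"))  # -> "press the mouse button"
```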


연속적인 손 제스처의 실시간 인식을 위한 계층적 베이지안 네트워크 (A Hierarchical Bayesian Network for Real-Time Continuous Hand Gesture Recognition)

  • 허승주;이성환
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 36, No. 12 / pp.1028-1033 / 2009
  • This paper proposes a real-time hand gesture recognition method for controlling a computer mouse. To represent a variety of gestures, a hand gesture is defined as a sequence of consecutive hand shapes, and a hierarchical Bayesian network is designed to recognize such gestures. The proposed method has a hierarchical structure for hand posture and gesture recognition, which has the advantage of being robust to the noise that arises during feature extraction. To demonstrate the usefulness of the method, a gesture-based virtual mouse interface was developed. In experiments, the proposed method achieved recognition rates of 94.8% on a simple background and 88.1% on a cluttered background, outperforming an existing HMM-based method.
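The paper's hierarchical Bayesian network is not reproduced here, but the underlying idea of scoring a gesture as a sequence of hand postures can be sketched with a plain HMM forward pass, used here as a deliberately simpler stand-in; every probability below is a made-up illustrative value, not a learned parameter.

```python
import numpy as np

# A gesture modeled as 3 hidden posture states (e.g. open -> pinch -> open).
pi = np.array([0.8, 0.1, 0.1])   # initial state distribution
A = np.array([[0.7, 0.3, 0.0],   # state transition probabilities
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],        # P(observed posture | state); columns are
              [0.2, 0.8],        # the posture detector's output:
              [0.9, 0.1]])       # {0: "open", 1: "pinch"}

def forward_score(obs: list[int]) -> float:
    """P(observation sequence | gesture model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

# A noisy open->pinch->open sequence still scores far above random jitter.
print(forward_score([0, 0, 1, 1, 0]))  # plausible gesture
print(forward_score([1, 0, 1, 0, 1]))  # implausible jitter
```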

비디오 게임 인터페이스를 위한 인식 기반 제스처 분할 (Recognition-Based Gesture Spotting for Video Game Interface)

  • 한은정;강현;정기철
    • 한국멀티미디어학회논문지 / Vol. 8, No. 9 / pp.1177-1186 / 2005
  • For a vision-based video game interface that uses the player's camera-captured gestures in place of a keyboard or joystick to allow natural motion, the system must recognize continuous gestures and tolerate meaningless movements by the user. This paper proposes a gesture recognition method for video game interfaces that combines recognition with spotting: it recognizes meaningful motions in a continuous image sequence while simultaneously distinguishing meaningless ones. Applied to Quake II, a first-person action game in which the user's upper-body gestures serve as game commands, the method achieved an average spotting rate of 93.36% on continuous gestures, a level of performance useful for video game interfaces.
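Gesture spotting of this kind is commonly framed as comparing a candidate segment's score under the gesture models against a "garbage" (non-gesture) model and accepting it only when the ratio clears a threshold. The sketch below illustrates that decision rule with toy scoring functions; it is an assumed formulation, not the paper's actual classifier.

```python
import math

THRESHOLD = math.log(2.0)  # assumed: accept if a gesture is 2x more likely

def spot(frames: list, gesture_models: dict, garbage_model) -> "str | None":
    """Return the name of the spotted gesture, or None for non-gesture."""
    best_name, best_ll = None, -math.inf
    for name, model in gesture_models.items():
        ll = model(frames)              # log-likelihood under this gesture
        if ll > best_ll:
            best_name, best_ll = name, ll
    if best_ll - garbage_model(frames) > THRESHOLD:
        return best_name                # meaningful motion
    return None                         # transitional / meaningless motion

# Toy models: score a window by its mean vertical wrist velocity.
models = {"raise_arm": lambda f: sum(f) / len(f),
          "lower_arm": lambda f: -sum(f) / len(f)}
garbage = lambda f: 0.0                 # indifferent to any motion
print(spot([1.2, 1.0, 1.4], models, garbage))   # -> "raise_arm"
print(spot([0.1, -0.2, 0.0], models, garbage))  # -> None
```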


영상처리를 이용한 지화인식 기반의 차세대 인터페이스 시스템 개발 (A Development of the Next-generation Interface System Based on the Finger Gesture Recognizing in Use of Image Process Techniques)

  • 김남호
    • 한국정보통신학회논문지 / Vol. 15, No. 4 / pp.935-942 / 2011
  • The purpose of this study is to design and implement a finger-spelling recognition system that automatically recognizes finger spelling captured by a camera and controls a computer. First, an infrared CCD camera was built by modifying an ordinary camera to acquire images; the system then preprocesses the input images, extracts and analyzes hand features, reads the finger-spelling sign corresponding to the hand shape, and generates events that control the mouse and drive presentations. The proposed system demonstrated its potential as a next-generation interface that could replace the mouse and keyboard of future information devices.
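The event-generation step, turning a recognized hand shape into a mouse or presentation action, might look like the sketch below. The shape names and the choice of the pyautogui library are assumptions for illustration; the paper does not specify its event mechanism, and the IR-camera recognizer itself is not shown.

```python
import pyautogui  # synthesizes real mouse/keyboard events

# Hypothetical mapping from a recognized finger-spelling shape to an action.
def fire_event(shape: str, x: int, y: int) -> None:
    """Generate the mouse/presentation event for one recognized hand shape."""
    if shape == "point":
        pyautogui.moveTo(x, y)     # index finger steers the cursor
    elif shape == "fist":
        pyautogui.click(x, y)      # closed hand clicks
    elif shape == "two_fingers":
        pyautogui.press("right")   # next presentation slide
    elif shape == "open_palm":
        pyautogui.press("left")    # previous presentation slide

fire_event("point", 640, 360)
```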

슈팅 게임의 현실감 개선을 위한 립모션 기반 인터페이스 구현 (Implementing Leap-Motion-Based Interface for Enhancing the Realism of Shooter Games)

  • 신인호;천동훈;박한훈
    • 한국HCI학회논문지 / Vol. 11, No. 1 / pp.5-10 / 2016
  • This paper provides a more realistic control scheme for shooter games by recognizing the user's hand motions with a Leap Motion sensor. The functions essential to a shooter game, such as firing, movement, viewpoint change, and zoom in/out, were implemented, and user evaluations confirmed that replacing the game interface with familiar, intuitive hand motions outperforms the conventional mouse/keyboard in ease of control, enjoyment, and extensibility. Specifically, average user satisfaction (on a 1-5 scale) was 3.02 for the mouse/keyboard interface versus 3.57 for the hand-motion interface.

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security / Vol. 24, No. 6 / pp.161-170 / 2024
  • With recent technological advances, humans have incorporated sight, motion, sound, and speech into application and software controls. In this paper we explore a project in which gestures play the central role: controlling computer settings with hand gestures using computer vision. Gesture control is a heavily researched topic that continues to evolve. We build a module that acts as a volume-control program, using hand gestures to adjust the system volume, implemented with OpenCV. The module uses the computer's web camera to capture images or video, processes them to extract the needed information, and then, based on that input, acts on the computer's volume settings; it can both increase and decrease the volume. The only setup required is a web camera to capture the user's input. The program performs gesture recognition with OpenCV, Python, and its libraries, identifies the specified hand gestures, and uses them to change the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely used tool for image processing and computer vision, is extremely popular in this domain: its community numbers over 47,000 members and, as of a 2020 survey, it has been downloaded an estimated 18 million times.
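A common structure for such a module, and a plausible reading of this project though the paper does not list its code, is to track hand landmarks with OpenCV plus MediaPipe and map the thumb-index fingertip distance to a volume percentage. The landmark indices are MediaPipe's; the distance-to-volume mapping is an assumption, and the actual call into the OS volume mixer is platform-specific, so it is left as a print.

```python
import math

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)  # the computer's web camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]  # thumb tip and index fingertip
        dist = math.hypot(thumb.x - index.x, thumb.y - index.y)
        # Map the normalized pinch distance (roughly 0.02-0.30) to 0-100%.
        volume = max(0.0, min(1.0, (dist - 0.02) / 0.28)) * 100
        print(f"volume -> {volume:.0f}%")  # OS volume API call would go here
    cv2.imshow("gesture volume", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```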

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009년도 IWAIT / pp.664-667 / 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from images captured by a color camera mounted above the HMD. Since skin color can vary widely under outdoor lighting, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
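The block-linking segmentation described above can be sketched in a few lines: divide the frame into fixed-size blocks, average each block's color, and link neighboring blocks whose averages are close, which yields contiguous candidate regions without assuming any fixed skin-color space. The block size and similarity threshold below are assumed values, and the subsequent hand-shape test is not shown.

```python
import numpy as np

BLOCK = 8          # block size in pixels (assumed value)
SIM_THRESH = 18.0  # max mean-color distance to link two adjacent blocks

def block_regions(frame: np.ndarray) -> np.ndarray:
    """Link adjacent blocks of similar mean color into labeled regions."""
    h, w, _ = frame.shape
    gh, gw = h // BLOCK, w // BLOCK
    # Average color per block: shape (gh, gw, 3).
    means = frame[:gh * BLOCK, :gw * BLOCK].astype(float).reshape(
        gh, BLOCK, gw, BLOCK, 3).mean(axis=(1, 3))

    parent = list(range(gh * gw))  # union-find over the block grid
    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for r in range(gh):
        for c in range(gw):
            for nr, nc in ((r + 1, c), (r, c + 1)):  # down/right neighbors
                if nr < gh and nc < gw and np.linalg.norm(
                        means[r, c] - means[nr, nc]) < SIM_THRESH:
                    parent[find(r * gw + c)] = find(nr * gw + nc)

    # Each labeled region is a candidate passed on to hand recognition;
    # blocks on a region's border would be subdivided for finer fingers.
    return np.array([find(i) for i in range(gh * gw)]).reshape(gh, gw)
```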
