• Title/Summary/Keyword: Gesture Interface

Presentation Interface based on Gesture Recognition (제스처 인식을 통한 프리젠테이션 인터페이스)

  • Kim, Jin-uk;Kim, Se-hoon;Hong, Kwang-jin;Jung, Kee-chul
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1653-1656 / 2013
  • In this paper, we built an interface that controls a presentation through gesture recognition using a Kinect. The Kinect camera receives body joint coordinates through the Microsoft Kinect SDK 1.7 library and uses them to recognize the hand position and hand gestures, and a hooking mechanism is used to control PowerPoint. Starting from the sweep gesture already used with the Kinect, we added grip and push gestures for the functions needed in a presentation. The gesture recognition results are delivered to PowerPoint through hooking, providing an interface that supports not only slide navigation but also memo (annotation) and erase functions. Besides improving the presenter's delivery, the gesture recognition interface can be applied to other content, so further content development and commercialization are possible.
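
To illustrate the gesture-to-keystroke mapping described in the abstract above, here is a minimal, assumption-laden sketch: the Kinect skeleton tracking is hidden behind a hypothetical get_hand_state() stub, and slide-control key events are sent with pyautogui rather than the Win32 hooking the authors use.

```python
# Sketch only: maps recognized hand gestures to PowerPoint slide control.
# get_hand_state() is a hypothetical stand-in for a skeleton-tracking
# backend; pyautogui key presses replace the paper's hooking mechanism.
import time

import pyautogui  # pip install pyautogui


def get_hand_state():
    """Hypothetical stub: return 'sweep_left', 'sweep_right', 'grip',
    'push', or None from whatever hand-tracking backend is available."""
    return None


# In PowerPoint's slideshow mode these keys advance, go back, and blank the screen.
GESTURE_TO_KEY = {
    "sweep_right": "pagedown",
    "sweep_left": "pageup",
    "push": "b",
}


def run_presentation_controller(poll_hz=15):
    while True:
        key = GESTURE_TO_KEY.get(get_hand_state())
        if key:
            pyautogui.press(key)  # forward the gesture as a keystroke
            time.sleep(0.5)       # debounce so one gesture fires only once
        time.sleep(1.0 / poll_hz)


if __name__ == "__main__":
    run_presentation_controller()
```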

The Recognition of a Human Arm Gesture for a Game Interface (게임 인터페이스를 위한 사람 팔 제스처 인식 시스템)

  • Yeo, DongHyeon;Kim, KyungHan;Kim, HyunJung;Won, IlYong
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1513-1516 / 2013
  • This study concerns human arm gesture recognition for games using recently developed low-cost sensors and machine learning algorithms. We defined ten motions usable as game input and preprocessed them by tracking the arm-joint coordinates collected from the sensor. Considering the temporal nature of the data, a Hidden Markov Model (HMM) was used as the learning algorithm, and the usefulness of the proposed method was verified through experiments.
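
The HMM-based recognition summarized above can be sketched as one model per gesture, with classification by maximum log-likelihood. The sketch below assumes preprocessed joint-coordinate sequences and uses hmmlearn, which is an assumption about tooling, not the authors' implementation.

```python
# Sketch only: one Gaussian HMM per gesture class, trained on sequences of
# preprocessed arm-joint coordinates; a new sequence is assigned to the
# gesture whose model gives the highest log-likelihood.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn


def train_gesture_models(train_data, n_states=5):
    """train_data: dict mapping gesture label -> list of (T_i, D) arrays."""
    models = {}
    for label, sequences in train_data.items():
        X = np.concatenate(sequences)           # stack all frames
        lengths = [len(s) for s in sequences]   # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models


def classify(models, sequence):
    """Return the gesture label with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```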

A Color Marker Detection Algorithm for Gesture-based User Interfaces (제스처 기반 사용자 인터페이스를 위한 색상 마커 인식 알고리즘)

  • Lee, Doo-Hee;Kim, Yoon
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.401-404 / 2010
  • With high-performance devices and diverse content becoming available, interest in gesture-based user interfaces is growing. However, existing algorithms often require sensor devices or force the user to wear cumbersome equipment. This paper proposes an algorithm that detects, in real time, a color marker worn by the user using only the image captured by a camera. The proposed marker recognition algorithm consists of color detection and motion detection. In a single frame, the color region is detected by a condition test using the mean of the image components. Next, the background range is set using the difference between the running average of adjacent frames and the current frame as a weight, and regions outside this range are detected as motion regions. Finally, a pixel is recognized as the user's marker when it satisfies both the color and motion conditions and its neighboring pixels satisfy them as well. Because the proposed algorithm uses only image information, the user does not need to wear sensors or awkward equipment, and since it is robust to illumination changes caused by sunlight, it enables effective detection of user movement.
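
A rough sketch of the combined colour-and-motion masking idea follows; the HSV range and the running-average background are stand-ins (assumptions) for the paper's exact per-frame mean-based condition tests and weighting scheme.

```python
# Sketch only: a pixel is accepted as the marker where a per-frame colour
# mask and a motion mask (difference against a running-average background)
# agree, loosely mirroring the two-stage test described in the abstract.
import cv2
import numpy as np

# Assumed HSV range for the worn marker (blue-ish); not the paper's values.
LOWER_HSV = np.array([100, 120, 70])
UPPER_HSV = np.array([130, 255, 255])


def detect_marker(stream=0):
    cap = cv2.VideoCapture(stream)
    background = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 1) colour mask computed from the current frame only
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        color_mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
        # 2) motion mask from the difference against a running-average background
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if background is None:
            background = gray.astype("float")
        cv2.accumulateWeighted(gray, background, 0.05)
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
        _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        # 3) a pixel counts as marker only where both masks agree
        marker_mask = cv2.bitwise_and(color_mask, motion_mask)
        cv2.imshow("marker", marker_mask)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```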

A Study of a Virtual Reality Interface of Person Search in Multimedia Database for the US Defense Industry (미국 방위산업체 상황실의 인물검색 활동을 돕는 가상현실 공간 인터페이스 환경에 관한 연구)

  • Kim, Na-Young;Lee, Chong-Ho
    • Journal of Korea Game Society / v.11 no.5 / pp.67-78 / 2011
  • This paper introduces an efficient and satisfactory search interface that enables users to browse and find the video data they want from the massive video databases widely used in multimedia environments. The target users are information analysts at US defense contractors or government intelligence agencies whose job is to identify a particular person in large amounts of video footage taken from CCTV (closed-circuit television) cameras. For the first user test, we hypothesized that a CAVE-like virtual reality interface would be optimal for the tasks we designed, so we compared it with a desktop interface; the software and database developed and optimized for each task were used in the test. For the second user test, we investigated which input devices would best enhance the efficiency of search tasks in the CAVE-like virtual reality system. In particular, we focused on measuring the effectiveness and user satisfaction of three types of gestural input devices that encourage ergonomic control of the interface. We also measured the time required to perform each task to determine the most efficient of the input devices tested.

Hand posture recognition robust to rotation using temporal correlation between adjacent frames (인접 프레임의 시간적 상관 관계를 이용한 회전에 강인한 손 모양 인식)

  • Lee, Seong-Il;Min, Hyun-Seok;Shin, Ho-Chul;Lim, Eul-Gyoon;Hwang, Dae-Hwan;Ro, Yong-Man
    • Journal of Korea Multimedia Society / v.13 no.11 / pp.1630-1642 / 2010
  • Recently, there has been an increasing need to develop Hand Gesture Recognition (HGR) techniques for vision-based interfaces. Since a hand gesture is defined as a consecutive change of hand postures, an algorithm for Hand Posture Recognition (HPR) is also required. Among the factors that degrade HPR performance, we focus on rotation. To achieve rotation-invariant HPR, we propose a method that exploits the property of video that adjacent frames are highly correlated, considering the environment in which HGR is used. Unlike previous work based on still images, the proposed method introduces the template update used in object tracking, based on this property. To compare the proposed method with previous methods such as template matching, PCA, and LBP, we performed experiments on video containing hand rotation. The accuracy of the proposed method is 22.7%, 14.5%, 10.7%, and 4.3% higher than ordinary template matching, template matching using the KL transform, PCA, and LBP, respectively.
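
The adjacent-frame template update can be sketched roughly as below; the similarity measure (normalized cross-correlation), the acceptance threshold, and the update policy are assumptions rather than the paper's exact settings.

```python
# Sketch only: template matching where the template is refreshed from the
# best match in each frame, exploiting the strong correlation between
# adjacent frames so gradual hand rotation is absorbed over time.
import cv2


def track_hand(video_path, init_template, accept_threshold=0.6):
    """init_template: grayscale patch of the initial hand posture."""
    cap = cv2.VideoCapture(video_path)
    template = init_template.copy()
    th, tw = template.shape[:2]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(result)
        if score > accept_threshold:
            # update the template from the current frame so that small
            # frame-to-frame rotations do not break the match
            template = gray[y:y + th, x:x + tw].copy()
            cv2.rectangle(frame, (x, y), (x + tw, y + th), (0, 255, 0), 2)
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```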

WiSee's trend analysis using Wi-Fi (Wi-Fi를 이용한 WiSee의 동향 분석)

  • Han, Seung-Ah;Son, Tae-Hyun;Kim, Hyun-Ho;Lee, Hoon-Jae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.74-77 / 2015
  • WiSee is a technique that recognizes a user's gestures by exploiting Wi-Fi (802.11n/ac) signals. Current motion recognition schemes rely on dedicated devices (e.g., Leap Motion, Kinect) with a recognition range of roughly 30 cm to 3.5 m; as the range grows, the recognition rate drops, so the user has the inconvenience of staying within a limited distance. Because WiSee uses Wi-Fi, motion recognition is possible wherever Wi-Fi is available, and it also has the advantage of penetrating obstacles compared with conventional recognition methods. In this paper, we examine how WiSee operates and review its recent trends.

A Driving Information Centric Information Processing Technology Development Based on Image Processing (영상처리 기반의 운전자 중심 정보처리 기술 개발)

  • Yang, Seung-Hoon;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Convergence Security Journal / v.12 no.6 / pp.31-37 / 2012
  • Today, the core technology of the automobile is becoming IT-based convergence system technology. To cope with many kinds of situations and provide convenience to drivers, various IT technologies are being integrated into automobile systems. In this paper, we propose a convergence system, called the Augmented Driving System (ADS), that provides a high level of safety and convenience to drivers based on image information processing. Image data acquired from an imaging sensor are processed by the proposed methods to obtain the distance to the car ahead, the lane, and traffic sign panels. A converged interface technology, using a camera for gesture recognition and a microphone for speech recognition, is also provided. With this kind of system technology, car accidents can be reduced even when drivers fail to recognize dangerous situations, since the system can recognize the situation or user context and draw attention to the view ahead. In experiments, the proposed methods achieved over 90% recognition for traffic sign detection, lane detection, and distance measurement to the car ahead.
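
As a point of reference for the lane-detection component mentioned above, the following is a generic Canny-plus-probabilistic-Hough baseline, not the paper's proposed method.

```python
# Sketch only: a generic lane-marking detector using Canny edges and a
# probabilistic Hough transform over a lower-half region of interest.
# This is a common baseline, not the method proposed in the paper.
import cv2
import numpy as np


def detect_lanes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # keep only the lower half of the image, where the road usually is
    h = edges.shape[0]
    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)
    lines = cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    return frame
```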

An Application of AdaBoost Learning Algorithm and Kalman Filter to Hand Detection and Tracking (AdaBoost 학습 알고리즘과 칼만 필터를 이용한 손 영역 탐지 및 추적)

  • Kim, Byeong-Man;Kim, Jun-Woo;Lee, Kwang-Ho
    • Journal of the Korea Society of Computer and Information / v.10 no.4 s.36 / pp.47-56 / 2005
  • With the development of wearable (ubiquitous) computers, traditional human-computer interfaces are gradually becoming inconvenient to use, which leads directly to the need for new ones. In this paper, we study a new interface in which the computer recognizes human gestures through a digital camera. Because camera-based hand gesture recognition is affected by the surrounding environment, such as lighting, the detector tends to be sensitive to these conditions. Recently, Viola's detector, in which the AdaBoost learning algorithm is used with Haar features computed from the integral image, has shown favorable results in face detection. We apply this method to hand area detection and carry out comparative experiments against the classic method based on skin color. Experimental results show that Viola's detector is more robust than skin-color-based detection in environments where degradation may occur due to surroundings such as lighting.
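
The detect-then-track pipeline described above (AdaBoost/Haar detection corrected by a constant-velocity Kalman filter) can be sketched as below; the hand cascade file name is hypothetical, since OpenCV ships no hand cascade and the paper trains its own detector.

```python
# Sketch only: a Viola-Jones-style cascade (AdaBoost + Haar features) detects
# the hand; a constant-velocity Kalman filter smooths/tracks its centre.
# "hand_cascade.xml" is a hypothetical, separately trained cascade.
import cv2
import numpy as np

detector = cv2.CascadeClassifier("hand_cascade.xml")

kf = cv2.KalmanFilter(4, 2)  # state: (x, y, vx, vy); measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2


def track(frame):
    """Predict the hand centre, then correct it with a detection if one exists."""
    prediction = kf.predict()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hands = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    if len(hands) > 0:
        x, y, w, h = hands[0]
        centre = np.array([[x + w / 2.0], [y + h / 2.0]], dtype=np.float32)
        kf.correct(centre)
        return int(centre[0, 0]), int(centre[1, 0])
    # no detection this frame: fall back to the Kalman prediction
    return int(prediction[0, 0]), int(prediction[1, 0])
```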

A Study of the Types of Metaphor Reflected on the Sonic Interactive Objets (동작 기반 인터랙티브 사운드 오브제에 나타난 메타포 유형 연구)

  • Kim, Hee-Eun;Park, Seung Ho
    • Design Convergence Study / v.15 no.2 / pp.185-201 / 2016
  • This study defines the basic design elements of a sonic interactive object and relates them to different metaphor types, including physical metaphor, gestural metaphor, sound metaphor, and embodied metaphor, as reflected in the interface, gesture, sound, and embodied experience. It discusses how concept mapping can be done effectively by utilizing metaphor and considering the relationships between metaphor types when designing a sonic interactive object. The study is significant in that it expands the domain of metaphor to embodied experience and subdivides metaphor types by visual, sound, gestural, and tactile information. Furthermore, it analyzes the design elements of a sonic interactive object in relation to these metaphor types. A researcher or designer can therefore use this study as a reference when designing the concept mapping of a sonic interactive object, so that the design elements and metaphor types work together to let a visitor recognize more intuitively what they are expected to do with the object.

Designing Effective Virtual Training: A Case Study in Maritime Safety

  • Jung, Jinki;Kim, Hongtae
    • Journal of the Ergonomics Society of Korea / v.36 no.5 / pp.385-394 / 2017
  • Objective: The aim of this study is to investigate how to design effective virtual reality-based training (i.e., virtual training) for maritime safety and to present methods for enhancing interface fidelity by employing immersive interaction and 3D user interface (UI) design. Background: Emerging virtual reality technologies and hardware make it possible to provide immersive experiences to individuals, and it has also been argued that improved fidelity can improve training efficiency. This sense of immersion can therefore serve as an element of effective training in virtual space. Method: As the immersive interaction, we implemented gesture-based interaction using Leap Motion and Myo armband sensors; hand gestures captured by both sensors are used to interact with the virtual appliances in the scenario. The proposed 3D UI design is employed to visualize information appropriate to the training tasks. Results: A usability study evaluating the effectiveness of the proposed method was carried out. The usability ratings for satisfaction, intuitiveness of the UI, ease of learning the procedure, and equipment understanding showed that the virtual-training exercise was superior to existing training, and these improvements were independent of the type of input device used. Conclusion: The experiments show that the proposed interaction design yields more efficient interaction than the existing training method. Improving interface fidelity through intuitive, immediate feedback from the input device and the training information increases user satisfaction with the system as well as training efficiency. Application: The design methods for an effective virtual training system can be applied to other areas in which trainees must perform sophisticated tasks with their hands.