• Title/Summary/Keyword: hand gesture interface

Search Results: 115

Detection of Hand Gesture and its Recognition for Wearable Applications in IoMTW (IoMTW 에서의 웨어러블 응용을 위한 손 제스처 검출 및 인식)

  • Yang, Anna;Hong, Jeong Hun;Kang, Han;Chun, Sungmoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.33-35 / 2016
  • Hand gestures are attracting attention as a means of implementing a Natural User Interface (NUI) for wearable devices such as smart glasses. Recently, MPEG has been standardizing IoMTW (Internet of Media-Things and Wearables) to support media consumption in IoT (Internet of Things) and wearable environments. This paper presents a hand gesture detection and recognition technique for controlling wearable devices and media consumption, using hand gestures as the NUI of a wearable device. The proposed technique detects the hand contour from stereo images using depth and color information, represents the contour as a Bezier curve, and recognizes gestures from the represented contour based on features such as the number of fingers.

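The abstract above describes representing a detected hand contour as a Bezier curve. A minimal sketch (not the authors' code) of the core idea, evaluating a cubic Bezier curve so a contour segment can be stored as four control points instead of raw pixels; the control points below are hypothetical sample values:

```python
# Minimal sketch, not the authors' implementation: a cubic Bezier curve
# as a compact parametric stand-in for a raw hand-contour segment.

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Hypothetical control points for one contour segment.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
curve = [bezier_point(*ctrl, t=i / 10.0) for i in range(11)]
```

Fitting such segments to the detected contour gives a smooth, low-dimensional description from which features like finger count can be extracted.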

Hand Pose and Gesture Recognition Using Infrared Sensor (적외선 센서를 사용한 손 동작 인식)

  • Ahn, Joon-young;Lee, Sang-hwa;Cho, Nam-ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.119-122 / 2016
  • In building the augmented reality (AR) and virtual reality (VR) environments regarded as promising future IT technologies, NUI (Natural User Interface) technology, which lets users issue commands to a device without separate equipment such as a mouse or keyboard, is drawing attention. Hand gesture recognition and face recognition are emerging as key technologies for implementing NUI. This paper implements hand gesture recognition using the Leapmotion sensor, a type of infrared sensor. First, the center of the palm is found using a distance transform matrix. Then each finger is extracted using the convex hull algorithm. The proposed algorithm computes the optical flow of the finger and palm regions and uses the characteristics of the two flows to distinguish hand movement, stopping, and clicking gestures.

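The first step described above, locating the palm center with a distance transform, can be sketched as follows. This is a toy illustration, not the paper's implementation: the palm center is taken as the mask pixel farthest from any background pixel, and the 5x5 binary mask is a hypothetical stand-in for a segmented hand:

```python
# Minimal sketch, not the paper's code: palm center as the maximum of a
# (brute-force) distance transform over a binary hand mask.

def palm_center(mask):
    rows, cols = len(mask), len(mask[0])
    bg = [(r, c) for r in range(rows) for c in range(cols) if mask[r][c] == 0]
    # Treat everything outside the image as background too.
    bg += [(-1, c) for c in range(cols)] + [(rows, c) for c in range(cols)]
    bg += [(r, -1) for r in range(rows)] + [(r, cols) for r in range(rows)]
    best, best_d = None, -1.0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1:
                # Distance to the nearest background pixel.
                d = min(((r - br) ** 2 + (c - bc) ** 2) ** 0.5 for br, bc in bg)
                if d > best_d:
                    best, best_d = (r, c), d
    return best

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(palm_center(mask))  # → (2, 2), the innermost pixel of the blob
```

A real system would use an efficient distance transform (e.g. OpenCV's) on the full-resolution mask; convex-hull fingertip extraction then works outward from this center.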

Biosign Recognition based on the Soft Computing Techniques with application to a Rehab-type Robot

  • Lee, Ju-Jang
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.29.2-29 / 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, human-friendly interaction/interface is essential. Human-friendly interaction is possible when the system is capable of recognizing human biosigns such as EMG signals, hand gestures, and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experiences of applying soft computing techniques, including fuzzy logic, ANN, GA, and rough set theory, to efficiently recognize various biosigns and perform effective inference. More specifically, we first observe the characteristics of various forms of biosigns and propose a new way of extracting feature sets for such signals. Then we show a standardized procedure for inferring an intention or emotion from the signals. Finally, we present application examples for our rehabilitation robot.


Dialing Interface Design for Safe Driving using Hand Gesture (손동작을 이용한 운전 안전성을 높이기 위한 전화 다이얼 인터페이스 설계)

  • Jang, WonAng;Lee, DoHoon
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.449-452 / 2012
  • Most elements that distract attention while driving involve interface manipulation, and they are a direct cause of traffic accidents. As interest in smart cars grows, various studies on driver safety are being explored. Current interfaces, where a momentary shift of gaze can cause loss of judgment and control, do not guarantee safety. To secure safety against such distractions, this paper proposes a safe interface that uses an in-vehicle camera to recognize hand gestures, so that a driver can enter or search phone numbers with intuitive gestures. The proposed system improves user convenience and safety by combining intuitive gestures with TTS (Text To Speech).

Virtual Object Generation Technique Using Multimodal Interface With Speech and Hand Gesture (음성 및 손동작 결합 인터페이스를 통한 가상객체의 생성)

  • Kim, Changseob;Nam, Hyeongil;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.147-149 / 2019
  • With advances in virtual reality technology, more people can now enjoy VR content. Unlike earlier content platforms such as PCs and smartphones, virtual reality requires an interface that can convey three-dimensional information. While the change from 2D to 3D offers greater freedom, it also forces users to adapt to an unfamiliar interface. To relieve this inconvenience, this paper proposes an interface that combines speech and hand gestures in virtual reality. The proposed interface implements speech and hand gestures by imitating real-world communication, so users can adapt to the VR platform more easily without additional learning. We also show, through an example of creating virtual objects, that the interface can replace existing 3D input devices.

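The multimodal idea above, speech naming what to create and the hand indicating where, can be illustrated with a minimal sketch. This is not the authors' system; the shape names and the event format are hypothetical:

```python
# Minimal sketch, not the authors' implementation: fusing a recognized
# speech token (WHAT to create) with a tracked hand position (WHERE to
# place it) into a virtual-object creation event.

def create_object(speech_token, hand_position, known_shapes=("cube", "sphere")):
    """Return a creation event, or None if the speech command is unknown."""
    if speech_token not in known_shapes:
        return None  # unrecognized command: create nothing
    return {"shape": speech_token, "position": hand_position}

obj = create_object("cube", (0.2, 1.1, -0.5))
print(obj)  # → {'shape': 'cube', 'position': (0.2, 1.1, -0.5)}
```

The division of labor mirrors real-world communication: speech carries the symbolic content, the gesture carries the spatial content.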

Hand Interface using Intelligent Recognition for Control of Mouse Pointer (마우스 포인터 제어를 위해 지능형 인식을 이용한 핸드 인터페이스)

  • Park, Il-Cheol;Kim, Kyung-Hun;Kwon, Goo-Rak
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.5 / pp.1060-1065 / 2011
  • In this paper, the proposed method recognizes the hand using color information from the camera's input image and controls the mouse pointer with the recognized hand. In addition, specific commands are designed to be performed with the mouse pointer. Most users find existing interactive multimedia systems uncomfortable because they depend on particular external input devices such as pens and mice; the proposed method compensates for this shortcoming by using the hand alone, without external input devices. In the experimental method, the hand region and the background are separated using color information from the camera image, and the coordinates of the mouse pointer are determined from the center coordinates of the separated hand region. The mouse pointer is placed in a pre-defined area using these coordinates, and the robot then moves and executes the corresponding command. Experimental results show that the proposed method is more accurate, but it is still sensitive to changes in the color of the lighting.
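The pipeline described above, color-based hand/background separation followed by mapping the region's center to pointer coordinates, can be sketched minimally. This is not the paper's code: the toy image, pixel values, and the "reddish" color rule are hypothetical, and real skin-color segmentation is considerably more involved:

```python
# Minimal sketch, not the paper's implementation: separate "hand" pixels
# from the background with a color predicate, then use the region
# centroid as the mouse-pointer coordinate.

def hand_centroid(image, is_hand_color):
    """Centroid (x, y) of all pixels matching the color rule, or None."""
    pts = [(x, y)
           for y, row in enumerate(image)
           for x, px in enumerate(row)
           if is_hand_color(px)]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Toy 3x4 "image" of (R, G, B) pixels; the rule flags reddish pixels.
img = [
    [(0, 0, 0), (200, 120, 100), (0, 0, 0), (0, 0, 0)],
    [(0, 0, 0), (210, 130, 110), (205, 125, 105), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)],
]
reddish = lambda p: p[0] > 150 and p[0] > p[1] > p[2]
pointer = hand_centroid(img, reddish)  # centroid drives the mouse pointer
```

The sensitivity to lighting color noted in the abstract is visible here: a fixed RGB predicate fails as illumination shifts, which is why robust systems often threshold in HSV or YCbCr space instead.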

User-centric Immersible and Interactive Electronic Book based on the Interface of Tabletop Display (테이블탑 디스플레이 기반 사용자 중심의 실감형 상호작용 전자책)

  • Song, Dae-Hyeon;Park, Jae-Wan;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.9 no.6 / pp.117-125 / 2009
  • In this paper, we propose a user-centric, immersive, and interactive electronic book based on a tabletop display interface. An electronic book is usually used by readers who want text combined with multimedia content such as video, audio, and animation. Because the system is based on a tabletop display platform, conventional input devices such as a keyboard and mouse are not needed. Users interact with the content through finger-touch gestures defined for the tabletop display interface, which provides an effective and engaging way to use the electronic book. The interface supports multiple users, enabling more diverse effects than conventional electronic content made for a single user. Our method offers a new approach to the conventional electronic book: it allows user-centric gestures to be defined and helps users interact with the book easily. We expect the method to be useful for many edutainment contents.

Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems / v.24 no.5 / pp.129-137 / 2019
  • Ubiquitous computing technology is emerging as information technology advances, and many studies accordingly pursue device miniaturization and user convenience. Some proposed devices remain inconvenient because they require hand-held operation. To address this, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV; since the user watches the video from the front, the camera captures the scene from above the user's head. The background and the hand region are separated in the captured image, the contour of the extracted hand region is computed, and the fingertip point is detected. The detected fingertip becomes a pointer for a virtual button interface rendered over the front-facing image, and a button is activated when the fingertip pointer is located inside the button.
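The final step described above, activating a virtual button when the fingertip pointer falls inside it, reduces to a point-in-rectangle hit test. A minimal sketch, not the paper's code; the button geometry, labels, and fingertip coordinates are hypothetical:

```python
# Minimal sketch, not the paper's implementation: a virtual on-screen
# button activates when the detected fingertip point lies inside its
# rectangle.

class VirtualButton:
    def __init__(self, x, y, w, h, label):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.label = label

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def activated(buttons, fingertip):
    """Return the label of the button under the fingertip, if any."""
    px, py = fingertip
    for b in buttons:
        if b.contains(px, py):
            return b.label
    return None

buttons = [VirtualButton(0, 0, 50, 30, "ch+"), VirtualButton(60, 0, 50, 30, "ch-")]
print(activated(buttons, (75, 10)))   # → ch-
print(activated(buttons, (75, 100)))  # → None
```

In practice the fingertip position would be smoothed over several frames, and a dwell time added, so that detection noise does not trigger spurious presses.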

Study on Signal Processing Method for Extracting Hand-Gesture Signals Using Sensors Measuring Surrounding Electric Field Disturbance (주변 전기장 측정센서를 이용한 손동작 신호 검출을 위한 신호처리시스템 연구)

  • Cheon, Woo Young;Kim, Young Chul
    • Smart Media Journal / v.6 no.2 / pp.26-32 / 2017
  • In this paper, we implement an LED lighting control system based on a signal-detecting electric circuit, an essential element of NUI technology, using EPIC sensors that convert disturbances in the surrounding earth electric field into electric potential signals. We used signal-detecting circuits developed to extract an individual signal from each EPIC sensor, whereas conventional EPIC-based development equipment provides only limited forms of signals. The signals extracted from our circuit yielded better performance as well as more flexibility in the feature extraction and pattern recognition stages. To demonstrate applicability to real systems, we designed a system that controls the brightness and on/off state of LED lights with four hand gestures. We also obtained faster pattern classification by developing an instruction system and by using interface control signals.
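The control side of the system above, four classified gestures driving LED on/off state and brightness, can be illustrated with a minimal sketch. This is not the paper's instruction system; the gesture names and the 25% brightness step are hypothetical:

```python
# Minimal sketch, not the paper's system: map four classified hand
# gestures onto LED on/off and brightness commands.

def apply_gesture(state, gesture):
    """state = (on, brightness 0..100); return the new state."""
    on, level = state
    if gesture == "swipe_on":
        return (True, level)
    if gesture == "swipe_off":
        return (False, level)
    if gesture == "raise" and on:
        return (True, min(100, level + 25))  # step up, clamped at 100
    if gesture == "lower" and on:
        return (True, max(0, level - 25))    # step down, clamped at 0
    return state  # unknown gesture, or dimming while off: no change

state = (False, 50)
for g in ["swipe_on", "raise", "raise", "lower", "swipe_off"]:
    state = apply_gesture(state, g)
print(state)  # → (False, 75)
```

Keeping the gesture classifier and this command mapping separate is what lets the same recognized gestures drive other appliances later.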

Designing Effective Virtual Training: A Case Study in Maritime Safety

  • Jung, Jinki;Kim, Hongtae
    • Journal of the Ergonomics Society of Korea / v.36 no.5 / pp.385-394 / 2017
  • Objective: The aim of this study is to investigate how to design effective virtual reality-based training (i.e., virtual training) in maritime safety and to present methods for enhancing interface fidelity through immersive interaction and 3D user interface (UI) design. Background: Emerging virtual reality technologies and hardware make it possible to provide immersive experiences to individuals, and there is a theory that improving fidelity can improve training efficiency. Such a sense of immersion can be exploited to realize effective training in virtual space. Method: As immersive interaction, we implemented gesture-based interaction using Leap Motion and Myo armband sensors. Hand gestures captured by both sensors are used to interact with the virtual appliances in the scenario. The proposed 3D UI design is employed to visualize the information appropriate for each task in training. Results: A usability study evaluating the effectiveness of the proposed method was carried out. The usability scores for satisfaction, intuitiveness of the UI, ease of procedure learning, and equipment understanding showed that the virtual training exercise was superior to existing training, and these improvements were independent of the type of input device used. Conclusion: We have shown through experiments that the proposed interaction design yields more efficient interaction than the existing training method. Improving interface fidelity through intuitive, immediate feedback on the input device and the training information increases user satisfaction with the system as well as training efficiency. Application: Design methods for an effective virtual training system can be applied to other areas in which trainees are required to perform sophisticated tasks with their hands.