• Title/Summary/Keyword: Gesture Computing (제스처 컴퓨팅)

16 search results

Mobile Gesture Recognition using Dynamic Time Warping with Localized Template (지역화된 템플릿기반 동적 시간정합을 이용한 모바일 제스처인식)

  • Choe, Bong-Whan;Min, Jun-Ki;Jo, Seong-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.4
    • /
    • pp.482-486
    • /
    • 2010
  • Recently, gesture recognition methods based on dynamic time warping (DTW) have been actively investigated as more mobile devices are equipped with accelerometers. DTW requires no additional training step, since it uses the given samples as matching templates. However, it is difficult to apply DTW in mobile environments because of the computational complexity of its matching step, in which the input pattern must be compared with every template. To address this problem, this paper proposes a DTW-based gesture recognition method that uses a localized subset of templates. The k-means clustering algorithm divides each class into subclasses, and the sample closest to the center of each subclass is employed as its localized template. This increases recognition speed by reducing the number of matches, while minimizing errors by preserving the diversity of the training patterns. Experimental results showed that the proposed method was about five times faster than DTW with all training samples, and more stable than DTW with randomly selected templates.
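The template-localization idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `dtw_distance` and `localized_templates` are hypothetical names, the toy data is 1-D rather than the paper's 3-axis accelerometer sequences, and the k-means loop is a bare-bones version.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def localized_templates(samples, k, iters=20, seed=0):
    """Cluster fixed-length samples of one gesture class with k-means and
    keep, per cluster, the sample nearest its centroid: the 'localized
    template' that stands in for the whole subclass at matching time."""
    X = np.asarray(samples, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    templates = []
    for c in range(k):
        members = np.where(labels == c)[0]
        if len(members):
            d = ((X[members] - centers[c]) ** 2).sum(-1)
            templates.append(X[members[np.argmin(d)]])
    return templates
```

At recognition time an input would be matched against only these medoid templates (via `dtw_distance`) instead of every training sample, which is where the claimed ~5x speedup comes from.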

Multi-Modal Instruction Recognition System using Speech and Gesture (음성 및 제스처를 이용한 멀티 모달 명령어 인식 시스템)

  • Kim, Jung-Hyun;Rho, Yong-Wan;Kwon, Hyung-Joon;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2006.06a
    • /
    • pp.57-62
    • /
    • 2006
  • With the miniaturization and growing intelligence of portable terminals, and rising interest in next-generation PC-based ubiquitous computing, research on Multi-Modal Interaction (MMI), which provides multiple dialogue modes such as pen, voice, and multimedia input, has recently been actively conducted. Accordingly, this paper proposes and implements a Multi-Modal Instruction Recognition System (MMIRS) that integrates a Voice-XML and Wearable Personal Station (WPS) based speech recognizer with an embedded sign-language recognizer, aiming at clear communication in noisy environments and integrated speech-gesture recognition on portable terminals. Because the proposed MMIRS recognizes both the speaker's voice and sign-language gesture commands for sentence- and word-level instruction models corresponding to the Korean Standard Sign Language (KSSL), improved recognition performance for the defined instruction models can be expected even in noisy environments. To evaluate MMIRS, 15 subjects continuously produced speech and sign-language gestures for 62 sentence-level and 104 word-level recognition models, and the average recognition rates of the individual recognizers and of MMIRS were compared and analyzed. For sentence-level instruction models, MMIRS achieved average recognition rates of 93.45% in noisy environments and 95.26% in noise-free environments.
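The abstract does not specify how the speech and gesture recognizer outputs are combined, so the following is only a generic late-fusion sketch, not the MMIRS design: `fuse_commands`, the score dictionaries, and the equal weighting are all assumptions.

```python
def fuse_commands(speech_scores, gesture_scores, w_speech=0.5):
    """Late fusion of two recognizers: take a weighted sum of the
    per-command confidences and return the best combined command.
    A command missing from one modality contributes a score of 0."""
    combined = {}
    for cmd in set(speech_scores) | set(gesture_scores):
        s = speech_scores.get(cmd, 0.0)
        g = gesture_scores.get(cmd, 0.0)
        combined[cmd] = w_speech * s + (1.0 - w_speech) * g
    return max(combined, key=combined.get), combined
```

The intuition matches the abstract's claim: when noise corrupts the speech scores, a confident gesture score can still pull the combined decision to the correct command.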


Development of a Gesture-Input-Based Mental Arithmetic Application for Android (제스처 입력 기반 안드로이드 암산 애플리케이션 개발)

  • Oh, Cheol-Chul;Hyun, Dong-Lim;Kim, Jong-Hoon
    • Korea Association of Information Education: Conference Proceedings
    • /
    • 2011.01a
    • /
    • pp.241-246
    • /
    • 2011
  • There are many discussions nowadays about utilizing smartphones to create a mobile computing educational environment. The purpose of this study is to develop an application that addresses the growing importance of mental arithmetic in the lower elementary grades. Considering current theories on the developmental characteristics of the target age group, we decided that a gesture-based input interface would increase the user's concentration and interest. Students using this application will learn and reinforce the basics of addition, subtraction, multiplication, and division of natural numbers. By removing the limitations of time and space, as afforded by the convenience of a smartphone, and by utilizing a gesture-based input interface, the application combines improved mental arithmetic speed and precision with the enjoyment of a game.


Design and Prototyping of Legacy Home Appliance Controlling System Using Wearable Devices (웨어러블 기기를 활용한 레거시 가전 기기 제어 시스템의 설계 및 구현)

  • Koo, Bonhyun;Choi, Lynn
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.8
    • /
    • pp.555-560
    • /
    • 2015
  • In this paper, we analyzed existing wearable-based control methods for legacy CE (consumer electronics) devices and identified requirements for improvement. In conventional systems, users waste time configuring the initial network and registering their devices with the management server. To overcome these hurdles, we implemented an Easy-Setup framework that connects smartphones to personalized cloud devices.

Object Detection and Optical Character Recognition for Mobile-based Air Writing (모바일 기반 Air Writing을 위한 객체 탐지 및 광학 문자 인식 방법)

  • Kim, Tae-Il;Ko, Young-Jin;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.5
    • /
    • pp.53-63
    • /
    • 2019
  • To provide a hand-gesture interface through deep learning in mobile environments, research on light-weight networks is essential to achieve high recognition rates without degrading execution speed. This paper proposes a method for real-time recognition of characters written in the air with a finger on mobile devices, based on a light-weight deep-learning model. Using SSD (Single Shot Detector), an object detection model with MobileNet as its feature extractor, the system detects the index finger and generates a result text image by following the fingertip path. The image is then sent to a server, where the characters are recognized by a trained OCR model. To verify the method, 12 users tested 1,000 words on a GALAXY S10+; the finger was recognized with an average accuracy of 88.6%, and the recognized text was output within 124 ms, showing that the method can be used in real time. The results of this research can be applied to sending simple text messages, taking memos, and making air signatures with a finger in mobile environments.
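The step of turning the tracked fingertip path into the text image sent to the OCR model can be illustrated with a minimal rasterizer. This is an assumption-laden sketch: `render_air_writing`, the canvas size, and the normalized coordinates are illustrative choices, not the paper's implementation.

```python
import numpy as np

def render_air_writing(points, size=64, thickness=1):
    """Rasterize a fingertip trajectory (a list of (x, y) pairs in [0, 1))
    onto a blank canvas by drawing short line segments between consecutive
    detections, producing an image that an OCR model could consume."""
    canvas = np.zeros((size, size), dtype=np.uint8)
    pts = [(int(x * (size - 1)), int(y * (size - 1))) for x, y in points]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in range(steps + 1):
            # Linear interpolation between consecutive fingertip positions.
            x = x0 + (x1 - x0) * t // steps
            y = y0 + (y1 - y0) * t // steps
            canvas[max(0, y - thickness):y + thickness + 1,
                   max(0, x - thickness):x + thickness + 1] = 255
    return canvas
```

In the described pipeline the per-frame (x, y) inputs would come from the SSD/MobileNet fingertip detector, and the resulting image would be uploaded to the OCR server.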

Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.24 no.5
    • /
    • pp.129-137
    • /
    • 2019
  • As information technology advances, ubiquitous computing technology is emerging, and a number of studies are being carried out to increase device miniaturization and user convenience. However, some of the proposed devices are inconvenient because they must be held and operated by hand. To address this inconvenience, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV; since the user watches the screen from the front, the camera captures the top of the user's head. The background and hand regions are separated in the captured image, the contour of the hand region is extracted, and the fingertip point is detected. Once the fingertip is detected, a virtual button interface is generated at the top of the image captured from the front; the detected fingertip becomes a pointer, and the button is activated when the pointer is located inside the button.
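The fingertip-as-pointer activation logic can be sketched as follows. This is a toy illustration that assumes the hand mask has already been segmented from the background; the topmost-pixel heuristic and the names `fingertip` and `button_pressed` are simplifications of the contour-based method the abstract describes.

```python
import numpy as np

def fingertip(mask):
    """Topmost foreground pixel of a binary hand mask. With an overhead
    camera and an extended finger, the fingertip is assumed to be the
    foreground point nearest the top of the frame."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no hand in view
    i = np.argmin(ys)
    return int(xs[i]), int(ys[i])

def button_pressed(mask, button):
    """True when the fingertip pointer lies inside the (x, y, w, h)
    rectangle of a virtual on-screen button."""
    tip = fingertip(mask)
    if tip is None:
        return False
    x, y, w, h = button
    return x <= tip[0] < x + w and y <= tip[1] < y + h
```

A real implementation would extract the hand contour from each camera frame (e.g. after background subtraction) and run this hit test against every virtual button drawn over the video.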