• Title/Summary/Keyword: Touch Recognition Algorithm

28 results found (0.022 seconds)

Development of Multi Card Touch based Interactive Arcade Game System (멀티 카드 터치기반 인터랙티브 아케이드 게임 시스템 구현)

  • Lee, Dong-Hoon;Jo, Jae-Ik;Yun, Tae-Soo
    • Journal of Korea Entertainment Industry Association / v.5 no.2 / pp.87-95 / 2011
  • Recently, tangible game environments have become an active topic owing to the development of various interactive interfaces. In this paper, we propose a multi-card-touch-based interactive arcade system that uses a marker recognition interface and a multi-touch interaction interface. In our system, each card's location and orientation are recognized through a DI-based recognition algorithm. In addition, the user's tracked hand gestures drive various interaction metaphors. The system provides higher engagement and offers the user a new experience, so it can serve as the basis of a tangible arcade game machine.

Video event control system by recognition of depth touch (깊이 터치를 통한 영상 이벤트 제어 시스템)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Society of Industrial Information Systems / v.21 no.1 / pp.35-42 / 2016
  • Various events such as stop, playback, capture, and zoom-in/out during video playback are available on small displays such as smartphones. However, as the display size increases, the cost of touch recognition rises, so providing touch events becomes impractical. In this paper, we propose a video event control system that recognizes a touch inexpensively from depth information and then provides a variety of events, such as toggle and pinch-in/out, by single or multi-touch. The proposed method finds the touch location and the touch path from the depth information of a depth camera and determines the touch gesture type. This touch interface algorithm is implemented on a small single-board system and can control video events by sending the gesture information over UART communication. Simulation results show that the proposed depth-touch method can control various events on a large screen.
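
The core step the abstract describes — treating pixels whose measured depth falls just short of a known background surface as touch points — can be sketched as follows. The `touch_mm` threshold, array shapes, and function name are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def detect_touches(depth, background, touch_mm=15):
    """Flag pixels whose depth is within touch_mm of the known
    background surface; return their (row, col) coordinates.
    Both maps are assumed to be in millimetres."""
    diff = background.astype(np.int32) - depth.astype(np.int32)
    mask = (diff > 0) & (diff < touch_mm)
    return np.argwhere(mask)

# A flat surface at 1000 mm with one fingertip 10 mm in front of it.
bg = np.full((4, 4), 1000, dtype=np.uint16)
frame = bg.copy()
frame[2, 1] = 990
print(detect_touches(frame, bg))  # → [[2 1]]
```

Tracking the returned coordinates across frames would then yield the touch path from which the gesture type is classified.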

Visual Multi-touch Input Device Using Vision Camera (비젼 카메라를 이용한 멀티 터치 입력 장치)

  • Seo, Hyo-Dong;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.718-723 / 2011
  • In this paper, we propose a visual multi-touch air input device using vision cameras. The implemented device provides a bare-handed interface that supports multi-touch operation. The proposed device is easy to apply to real-time systems because of its low computational load, and it is cheaper than existing methods that use glove data or 3-dimensional data because no additional equipment is required. To do this, we first propose an image processing algorithm based on the HSV color model and labeling of the obtained images. Also, to improve the accuracy of hand gesture recognition, we propose a motion recognition algorithm based on geometric feature points, a skeleton model, and the Kalman filter. Finally, experiments show that the proposed device is applicable to remote controllers for video games, smart TVs, and other computer applications.
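
A minimal sketch of the first stage described above — thresholding in HSV and labeling connected components. The skin-tone range and the tiny stack-based labeler are hypothetical stand-ins; the paper's actual thresholds and labeling routine are not given here:

```python
import numpy as np

def skin_mask(hsv, h_max=20, s_min=40):
    """Gate pixels by a hypothetical skin-tone range in the HSV
    color model: low hue, at least moderate saturation."""
    return (hsv[..., 0] <= h_max) & (hsv[..., 1] >= s_min)

def label_blobs(mask):
    """4-connected component labeling of a boolean mask
    (small stack-based flood fill standing in for a library call)."""
    labels = np.zeros(mask.shape, dtype=int)
    rows, cols = mask.shape
    n = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and labels[r0, c0] == 0:
                n += 1
                stack = [(r0, c0)]
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and mask[r, c] and labels[r, c] == 0):
                        labels[r, c] = n
                        stack.extend([(r+1, c), (r-1, c), (r, c+1), (r, c-1)])
    return labels, n

hsv = np.zeros((3, 5, 3), dtype=np.uint8)
hsv[0, 0] = hsv[0, 1] = (10, 100, 200)   # one hand blob
hsv[2, 4] = (15, 80, 180)                # a second, separate blob
labels, n = label_blobs(skin_mask(hsv))
print(n)  # → 2
```

The labeled blobs would then feed the geometric feature points and skeleton model used for gesture recognition.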

External Light Evasion Method for Large Multi-touch Screens

  • Park, Young-Jin;Lyu, Hong-Kun;Lee, Sang-Kook;Cho, Hui-Sup
    • IEIE Transactions on Smart Processing and Computing / v.3 no.4 / pp.226-233 / 2014
  • This paper presents an external light evasion method that rectifies the problem of misrecognition due to external lighting. The fundamental concept underlying the proposed method is to recognize the differences between two images and eliminate desynchronized external light by synchronizing the image sensor with the inner light source of the optical touch screen. A range of artificial indoor light sources and natural sunlight are assessed. The proposed system synchronizes the Vertical Synchronization (VSYNC) signal of the image sensor with the light source drive signal. It can therefore capture an image under synchronized inner lighting and remove external light that does not originate from the light source. A subtraction operation finds the differences, and the absolute value of the result is used, so the frame order is irrelevant. The resulting image, which shows only the touched blob on the touchscreen, is produced after image processing for coordinate recognition and is then supplied to a coordinate extraction algorithm.
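
The synchronized-subtraction step can be sketched in a few lines: ambient light appears in both the lit and unlit frames and cancels, while reflections of the inner light source survive. The pixel values below are purely illustrative:

```python
import numpy as np

def remove_external_light(lit, unlit):
    """Absolute difference of a frame captured with the inner light
    source on and one with it off; per the abstract, taking the
    absolute value makes the frame order irrelevant."""
    return np.abs(lit.astype(np.int16) - unlit.astype(np.int16)).astype(np.uint8)

# Ambient light of 100 everywhere; a touch blob reflects 80 extra
# units of inner light at pixel (1, 1) in the lit frame only.
unlit = np.full((3, 3), 100, dtype=np.uint8)
lit = unlit.copy()
lit[1, 1] += 80
print(remove_external_light(lit, unlit))  # only (1, 1) is non-zero
```

The difference image then contains only the touch blob, ready for coordinate extraction.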

A Multi-Bible Application on an Android Platform Using a Word Tokenization and Recognition Algorithm (단어 구분 및 인식 알고리즘을 이용한 안드로이드 플랫폼 기반의 멀티 성경 애플리케이션)

  • Kang, Sung-Mo;Kang, Myeong-Su;Kim, Jong-Myon
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.4 / pp.215-221 / 2011
  • Mobile phones, once used simply for calling and sending text messages, have recently evolved into application-oriented digital devices such as smartphones and tablets. The rapid spread of smartphones and tablets, which offer advanced capabilities and run a variety of Java-based applications, calls for diverse digital multimedia content. Today there are more than 2.2 billion Christians around the world. Among them, more than 300 million live in Asia, and many of them own and read the Bible. An application that translates the Bible from English into their own languages could therefore be very helpful. For this reason, this paper proposes a multi-Bible application that supports various languages. To do this, we implemented an algorithm that recognizes the sentences of the Bible word by word. The algorithm is composed of three functions: tokenizing the sentences of the Bible into words (word tokenization), recognizing words via touch events (word recognition), and translating the selected words into the desired language. Consequently, the proposed multi-Bible application supports language translation efficiently when the user touches words in the text.
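
The word-tokenization and touch-to-word mapping functions described above might look like this in outline. The function names and the character-span hit test are illustrative assumptions (a real Android app would map pixel touch coordinates to a character index first):

```python
def tokenize(sentence):
    """Split a verse into words, keeping each word's character span
    so a touch position can later be mapped back to the word."""
    words, start = [], 0
    for i, ch in enumerate(sentence + " "):
        if ch == " ":
            if i > start:
                words.append((sentence[start:i], start, i))
            start = i + 1
    return words

def word_at(sentence, index):
    """Return the word whose span covers a touched character index."""
    for word, s, e in tokenize(sentence):
        if s <= index < e:
            return word
    return None

verse = "In the beginning God created"
print(word_at(verse, 3))  # → the
```

The returned word would then be passed to the translation function for the user's chosen language.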

A Train Ticket Reservation Aid System Using Automated Call Routing Technology Based on Speech Recognition (음성인식을 이용한 자동 호 분류 철도 예약 시스템)

  • Shim Yu-Jin;Kim Jae-In;Koo Myung-Wan
    • MALSORI / no.52 / pp.161-169 / 2004
  • This paper describes an automated call routing method for a train ticket reservation aid system based on speech recognition. We focus on the task of automatically routing telephone calls based on the user's fluently spoken response, instead of touch-tone menus, in an interactive voice response system. A vector-based call routing algorithm is investigated, and a mapping table for key terms is suggested. The Korail database collected by KT is used for the call routing experiment. We evaluate call-classification experiments on text transcribed from the Korail database. With small training data, an average call routing error reduction rate of 14% is observed when the mapping table is used.
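
The essence of vector-based routing is to represent both the caller's utterance and each destination's key terms as vectors and pick the destination with the highest cosine similarity. The routes and key terms below are invented for illustration; this is a simplified stand-in for the paper's method, not its actual vocabulary:

```python
import math
from collections import Counter

ROUTES = {  # hypothetical destinations with their key terms
    "reservation": ["book", "ticket", "seat", "reserve"],
    "refund": ["cancel", "refund", "money", "return"],
}

def route(utterance):
    """Bag-of-words cosine similarity between the utterance and each
    route's key-term list; return the best-scoring route."""
    q = Counter(utterance.lower().split())
    best, best_sim = None, -1.0
    for name, terms in ROUTES.items():
        d = Counter(terms)
        dot = sum(q[w] * d[w] for w in q)
        denom = (math.sqrt(sum(v * v for v in q.values())) *
                 math.sqrt(sum(v * v for v in d.values()))) or 1.0
        sim = dot / denom
        if sim > best_sim:
            best, best_sim = name, sim
    return best

print(route("I want to book a ticket"))  # → reservation
```

The paper's mapping table for key terms would effectively expand each route's vector with synonymous caller vocabulary.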


A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering / v.5 no.3 / pp.161-166 / 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, in order to minimize computation time and memory usage. The algorithm can be used as a preprocessing step for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by a smartphone camera held at an arbitrary angle is rotated by the detected angle, as if the image had been taken with the smartphone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, resulting in a large reduction in computation time. Text location is guided by the user's marker line, placed over the region of interest in the binarized image via the smartphone touch screen. Text segmentation then utilizes the connected-component data obtained in the binarization step and cuts the string into individual images for the designated characters. The resulting data can be used as OCR input, thus solving the most difficult part of OCR for text in natural scene images. The experimental results showed that the binarization algorithm of our method is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively. In addition, our method achieved better quality than the other methods.
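
For reference, the Niblack baseline used in the speed comparison computes a per-pixel threshold from local window statistics, T = m + k·s (window mean plus k times the window standard deviation). A naive, unoptimized sketch with illustrative `w` and `k` values:

```python
import numpy as np

def niblack_threshold(img, w=3, k=-0.2):
    """Niblack adaptive binarization: a pixel is foreground if it
    exceeds mean + k*std over its local w x w window. This brute-force
    version is O(n*w^2); real implementations use integral images."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + w, c:c + w]
            out[r, c] = img[r, c] > win.mean() + k * win.std()
    return out

img = np.array([[10, 10, 10],
                [10, 200, 10],
                [10, 10, 10]], dtype=np.uint8)
print(niblack_threshold(img))  # only the bright center pixel is True
```

Binarizing only the connected components of interest, as the paper does, avoids running this window scan over the whole image.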

Designing a Mobile User Interface with Grip-Pattern Recognition (파지 형태 감지를 통한 휴대 단말용 사용자 인터페이스 개발)

  • Chang Wook;Kim Kee Eung;Lee Hyunjeong;Cho Joon Kee;Soh Byung Seok;Shim Jung Hyun;Yang Gyunghye;Cho Sung Jung;Park Joonah
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.245-248 / 2005
  • This paper presents a novel user interface system aimed at easy control of mobile devices. The fundamental concept of the proposed interface is to launch the appropriate function of the device by sensing and recognizing the grip pattern when the user picks up the mobile device. To this end, we developed a prototype system that employs capacitive touch sensors covering the housing of the device, together with a recognition algorithm that selects the function best suited to the sensed grip pattern. The effectiveness and feasibility of the proposed method are evaluated by testing the recognition rate on the collected grip-pattern database.
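
In outline, grip recognition reduces to classifying a vector of capacitive sensor readings against stored grip patterns. The sketch below uses a 1-nearest-neighbour classifier over an invented six-sensor layout as a minimal stand-in; the paper's actual sensor count and recognition algorithm may differ:

```python
import math

# Hypothetical training set: one capacitive reading vector per grip.
GRIPS = {
    "call":   [1, 1, 0, 0, 1, 0],   # one-handed grip near the ear
    "camera": [0, 0, 1, 1, 0, 1],   # two-handed landscape grip
}

def recognize_grip(reading):
    """Return the stored grip pattern closest (in Euclidean distance)
    to the current sensor reading."""
    return min(GRIPS, key=lambda g: math.dist(GRIPS[g], reading))

print(recognize_grip([1, 0.9, 0.1, 0, 1, 0]))  # → call
```

The recognized grip would then launch the matching device function (e.g. dialer or camera).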


Development of K-Touch™ API for Kinesthetic/Tactile Haptic Interaction (역/촉감 햅틱 상호작용을 위한 "K-Touch™" API 개발 - 햅틱(Haptic) 개발자 및 응용분야를 위한 소프트웨어 인터페이스 -)

  • Lee, Beom-Chan;Kim, Jong-Phil;Ryu, Je-Ha
    • Journal of the HCI Society of Korea / v.1 no.2 / pp.1-8 / 2006
  • This paper presents the development of a new haptic API (Application Programming Interface), called the K-Touch™ haptic API. It is designed to allow users to interact with objects through kinesthetic and tactile modalities via haptic interfaces. The K-Touch™ API serves two different types of users: high-level programmers who need an easy-to-use haptic API for creating haptic applications, and researchers in the haptic field who need to experiment with or develop new devices and algorithms without rewriting all the required code from scratch. Since the graphics-hardware-based kinesthetic rendering algorithm implemented in the K-Touch™ API differs from conventional kinesthetic algorithms, the API can provide haptic interaction for various data representations such as 2D, 2.5D depth (height field), 3D polygon, and volume data. In addition, the API supports kinesthetic and tactile interaction simultaneously, allowing realistic haptic interaction. Given this wide range of applications, the proposed K-Touch™ haptic API is expected to help users gain a deeper awareness of their environments and to enhance the sense of immersion in them. Moreover, it will be a useful development toolkit for investigating new devices and algorithms in haptic research.


Automated Call Routing Call Center System Based on Speech Recognition (음성인식을 이용한 고객센터 자동 호 분류 시스템)

  • Shim, Yu-Jin;Kim, Jae-In;Koo, Myung-Wan
    • Speech Sciences / v.12 no.2 / pp.183-191 / 2005
  • This paper describes automated call routing for a call center system based on speech recognition. We focus on the task of automatically routing telephone calls based on a user's fluently spoken response, instead of touch-tone menus, in an interactive voice response system. A vector-based call routing algorithm is investigated, and a normalization method is suggested. A call center database collected by KT is used for the call routing experiment. Experimental results evaluating call classification from transcribed speech are reported for that database. With small training data, an average call routing error reduction rate of 9% is observed when the normalization method is used.
