• Title/Summary/Keyword: $\pi$-gesture


A Real Time Low-Cost Hand Gesture Control System for Interaction with Mechanical Device (기계 장치와의 상호작용을 위한 실시간 저비용 손동작 제어 시스템)

  • Hwang, Tae-Hoon;Kim, Jin-Heon
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1423-1429
    • /
    • 2019
  • Recently, systems that support efficient interaction, human machine interfaces (HMI), have become a hot topic. In this paper, we propose a new real-time, low-cost hand gesture control system as a vehicle interaction method. Because detecting hand regions with an RGB camera requires a large amount of computation, depth information was acquired with a time-of-flight (TOF) camera to reduce computation time. In addition, Fourier descriptors were used to shrink the learning model: since a Fourier descriptor uses only a small number of points from the whole image, the model can be miniaturized. To evaluate the performance of the proposed technique, we compared processing speeds on a desktop and a Raspberry Pi 2. Experimental results show that the performance difference between the small embedded board and the desktop is not significant. In the gesture recognition experiment, a recognition rate of 95.16% was confirmed.
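The Fourier-descriptor idea in the abstract above, representing a hand contour by a handful of low-frequency coefficients, can be sketched in a few lines. This is a minimal illustration assuming a closed contour of (x, y) boundary points; the function name, the DFT normalization, and the choice of k are assumptions for illustration, not the paper's code.

```python
import cmath

def fourier_descriptors(contour, k=8):
    """Compute k low-frequency Fourier descriptors of a closed contour.

    contour: list of (x, y) boundary points, e.g. a hand outline.
    Returns magnitudes of coefficients 1..k, each divided by |c1| so the
    descriptor is invariant to scale, rotation, and start point.
    """
    n = len(contour)
    z = [complex(x, y) for x, y in contour]          # boundary as complex signal
    coeffs = []
    for u in range(1, k + 1):
        c = sum(z[i] * cmath.exp(-2j * cmath.pi * u * i / n)
                for i in range(n)) / n               # u-th DFT coefficient
        coeffs.append(abs(c))
    return [c / coeffs[0] for c in coeffs]           # normalize by |c1|
```

Keeping only a few coefficients is what makes the learned model small: a contour of hundreds of points collapses to a fixed-length vector of k numbers.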

Prosodic Boundary Effects on the V-to-V Lingual Movement in Korean

  • Cho, Tae-Hong;Yoon, Yeo-Min;Kim, Sa-Hyang
    • Phonetics and Speech Sciences
    • /
    • v.2 no.3
    • /
    • pp.101-113
    • /
    • 2010
  • The present study investigated how the kinematics of the /a/-to-/i/ tongue movement in Korean are influenced by a prosodic boundary. The /a/-to-/i/ sequence was used as 'transboundary' test material occurring across a prosodic boundary, as in /ilnjəʃ$^h$a/ # /minsakwae/ ('일년차#민사과에', 'the first year worker' # 'dept. of civil affairs'). It also tested whether the V-to-V tongue movement is further influenced by syllable structure, with /m/ placed either in the coda condition (/am#i/) or in the onset condition (/a#mi/). Results of an EMA (Electromagnetic Articulography) study showed that kinematic parameters such as movement distance (displacement), movement duration, and movement velocity (speed) all varied as a function of boundary strength, showing an articulatory strengthening pattern of a "larger, longer and faster" movement. Interestingly, however, the larger, longer and faster pattern associated with boundary marking in Korean has often been observed with stress (prominence) marking in English. It was proposed that language-specific prosodic systems induce different ways in which phonetics and prosody interact: Korean, as a language without lexical stress and pitch accent, has more degrees of freedom to express prosodic strengthening, while languages such as English have constraints, so that some strengthening patterns are reserved for lexical stress. The V-to-V tongue movement was also influenced by the intervening consonant /m/'s syllable affiliation, showing greater preboundary lengthening of the tongue movement when /m/ was part of the preboundary syllable (/am#i/). Together, the results show that fine-grained phonetic details do not simply arise as low-level physical phenomena, but reflect higher-level linguistic structures, such as syllable and prosodic structures. It was also discussed how the boundary-induced kinematic patterns can be accounted for in terms of the task dynamic model and the theory of the prosodic gesture ($\pi$-gesture).
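The task-dynamic account mentioned in this abstract can be illustrated with a toy simulation: a gesture modeled as a critically damped point attractor whose local clock rate is slowed near a prosodic boundary, which is the core idea of the $\pi$-gesture. This is a sketch only; the parameter values, the Gaussian slowing window, and the function names are assumptions, not the paper's equations.

```python
import math

def gesture_trajectory(steps=2000, dt=0.001, k=400.0, pi_gesture=None):
    """Critically damped point-attractor gesture (task-dynamic style).

    x moves from 0 toward target 1. If pi_gesture = (center, width,
    strength) is given, the local clock rate is slowed near `center`,
    stretching the movement in time: a toy prosodic (pi-) gesture.
    """
    x, v, target = 0.0, 0.0, 1.0
    traj = []
    for i in range(steps):
        t = i * dt
        rate = 1.0
        if pi_gesture:
            center, width, strength = pi_gesture
            # Gaussian clock-slowing window around the boundary
            rate = 1.0 - strength * math.exp(-((t - center) / width) ** 2)
        a = k * (target - x) - 2.0 * math.sqrt(k) * v   # critically damped
        x += v * rate * dt                              # slowed clock scales time
        v += a * rate * dt
        traj.append(x)
    return traj

def time_to_90(traj, dt=0.001):
    """Time until the gesture first reaches 90% of its target."""
    return next(i for i, x in enumerate(traj) if x >= 0.9) * dt
```

Running the model with a slowing window near the boundary yields the "longer" movement the abstract reports; adding a larger target or stiffness change would likewise capture "larger" and "faster".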


Design of OpenCV based Finger Recognition System using binary processing and histogram graph

  • Baek, Yeong-Tae;Lee, Se-Hoon;Kim, Ji-Seong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.2
    • /
    • pp.17-23
    • /
    • 2016
  • NUI is a motion-based interface: it controls a device with the user's body, without HID devices such as a mouse or keyboard. In this paper, we use a Pi Camera and sensors connected to a small embedded board, the Raspberry Pi. Using OpenCV algorithms optimized for image recognition and computer vision, we implement an NUI device with a more human-friendly and intuitive interface than traditional HID equipment. Motion is detected by frame-comparison operations, and we propose a more advanced recognition system that fuses the motion sensors connected to the Raspberry Pi.
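A hedged reconstruction of the binary-processing-plus-histogram idea from the abstract: binarize the image, build a column histogram of foreground pixels, and count peaks as fingers. The threshold value, the 50% cutoff, and the function name are illustrative assumptions, not the paper's code.

```python
def count_fingers(image, threshold=128):
    """Count fingers as peaks in the column histogram of a binarized image.

    image: 2D list of grayscale pixel values (rows of ints 0..255).
    """
    # binary processing: foreground = pixels brighter than the threshold
    binary = [[1 if px > threshold else 0 for px in row] for row in image]
    # histogram graph: foreground pixel count per image column
    hist = [sum(col) for col in zip(*binary)]
    peak = max(hist, default=0)
    if peak == 0:
        return 0
    cutoff = peak * 0.5
    # each finger appears as a contiguous run of tall columns
    fingers, in_run = 0, False
    for h in hist:
        if h >= cutoff and not in_run:
            fingers, in_run = fingers + 1, True
        elif h < cutoff:
            in_run = False
    return fingers
```

In a real OpenCV pipeline the binarization and histogram would come from `cv2.threshold` and array reductions, but the counting logic is the same.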

Remote Control System using Face and Gesture Recognition based on Deep Learning (딥러닝 기반의 얼굴과 제스처 인식을 활용한 원격 제어)

  • Hwang, Kitae;Lee, Jae-Moon;Jung, Inhwan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.115-121
    • /
    • 2020
  • With the spread of IoT technology, various IoT applications using facial recognition are emerging. This paper describes the design and implementation of a remote control system using deep learning-based face recognition and hand gesture recognition. In general, an application system using face recognition consists of a part that captures images in real time from a camera, a part that recognizes faces in the images, and a part that utilizes the recognized result. A Raspberry Pi, a single-board computer that can be mounted anywhere, was used to capture images in real time; face recognition software was developed for the server computer using TensorFlow's FaceNet model, along with hand gesture recognition software using OpenCV. We classified users into three groups, Known users, Danger users, and Unknown users, and designed and implemented an application that opens an automatic door lock only for Known users who have passed both face recognition and hand gesture checks.
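The Known/Danger/Unknown decision described in the abstract can be sketched as nearest-neighbor matching over face embeddings (e.g. the 128-d vectors a FaceNet model produces). The distance threshold, the gallery layout, and the function name here are assumptions for illustration, not the paper's implementation.

```python
import math

def classify_user(embedding, known_db, danger_db, threshold=0.8):
    """Classify a face embedding against enrolled galleries.

    The nearest enrolled embedding within `threshold` (Euclidean
    distance) decides the group; anything farther is Unknown.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_known = min((dist(embedding, e) for e in known_db), default=float("inf"))
    best_danger = min((dist(embedding, e) for e in danger_db), default=float("inf"))
    if min(best_known, best_danger) > threshold:
        return "Unknown"
    return "Known" if best_known <= best_danger else "Danger"
```

The door-lock rule in the abstract would then be: open only when this returns "Known" and the gesture check also passes.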

A Study on the Motion and Voice Recognition Smart Mirror Using Grove Gesture Sensor (그로브 제스처 센서를 활용한 모션 및 음성 인식 스마트 미러에 관한 연구)

  • Hui-Tae Choi;Chang-Hoon Go;Ji-Min Jeong;Ye-Seul Shin;Hyoung-Keun Park
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1313-1320
    • /
    • 2023
  • This paper presents the development of a smart mirror whose display is controlled through Grove gestures and which integrates voice recognition functionality. The hardware configuration of the smart mirror consists of an LCD monitor combined with an acrylic panel, onto which a semi-mirror film with a reflectance of 37% and a transmittance of 36% is attached, enabling it to function as both a mirror and a display. The proposed smart mirror eliminates the need for users to physically touch the mirror or operate a keyboard, as it implements gesture control through a Grove gesture sensor. Additionally, it incorporates voice recognition and integrates Google Assistant to display on-screen results corresponding to the user's voice commands.
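Gesture-to-action dispatch of the kind the abstract describes might look like the sketch below. The Grove gesture sensor (based on the PAJ7620 chip) typically reports recognized gestures as small integer codes over I2C, but the specific codes and action names here are illustrative assumptions, not the paper's mapping.

```python
# Hypothetical mapping from sensor gesture codes to smart-mirror actions.
GESTURE_ACTIONS = {
    1: "next_widget",      # right swipe
    2: "prev_widget",      # left swipe
    3: "scroll_up",        # up swipe
    4: "scroll_down",      # down swipe
    5: "wake_display",     # move toward sensor
    6: "sleep_display",    # move away from sensor
}

def handle_gesture(code):
    """Return the display action for a sensor gesture code, or None."""
    return GESTURE_ACTIONS.get(code)
```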

Design and Implementation of Finger Language Translation System using Raspberry Pi and Leap Motion (라즈베리 파이와 립 모션을 이용한 지화 번역 시스템 설계 및 구현)

  • Jeong, Pil-Seong;Cho, Yang-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.9
    • /
    • pp.2006-2013
    • /
    • 2015
  • It is difficult for deaf people to communicate by hearing and voice, so they mostly use speech, sign language, or writing. Sign language is the best way for deaf and hearing people to communicate with each other, but both sides must understand it. In this paper, we designed and implemented a finger language (fingerspelling) translation system to support communication between deaf and hearing people. We used a Leap Motion, which can track finger and hand gestures, as the input device, and a Raspberry Pi, a low-power single-board computer, to process the input data and translate the finger language. The application was implemented with Node.js and MongoDB, and the client application was built with HTML5 so that it can run on any smart device with a web browser.
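A fingerspelling lookup of the sort the abstract describes could map per-finger extension flags, as a Leap Motion style tracker reports for each hand, to letters. The tiny table below is an illustrative sample with made-up entries, not a real Korean fingerspelling chart, and the function name is an assumption.

```python
def translate_fingerspelling(extended):
    """Map a 5-tuple of finger-extended flags (thumb..pinky) to a letter.

    The table entries are illustrative placeholders only.
    """
    TABLE = {
        (0, 1, 0, 0, 0): "ㄱ",
        (0, 1, 1, 0, 0): "ㄴ",
        (1, 1, 1, 1, 1): "ㅁ",
    }
    return TABLE.get(tuple(extended), "?")   # "?" for unrecognized shapes
```

In the described system this lookup would run on the Raspberry Pi, with the translated text pushed to the HTML5 client via the Node.js server.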

Implementation of a Voice and SNS Sharing System based on Finger Language Recognition (지화인식 기반의 음성 및 SNS 공유 시스템 구현)

  • Kang, Jung-Hun;Yang, Dea-Sik;Oh, Min-Seok;Sir, Jung-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.644-646
    • /
    • 2016
  • It is difficult for deaf people to communicate by hearing and voice, so they mostly use speech, sign language, or writing. Sign language is the best way for deaf and hearing people to communicate with each other, but both sides must understand it. In this paper, we designed and implemented a finger language (fingerspelling) translation system to support communication between deaf and hearing people. We used a Leap Motion, which can track finger and hand gestures, as the input device, and a Raspberry Pi, a low-power single-board computer, to process the input data and translate the finger language. The application was implemented with Node.js and MongoDB, and the client application was built with HTML5 so that it can run on any smart device with a web browser.
