• Title/Summary/Keyword: gesture-recognition

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2013.10a / pp.763-764 / 2013
  • A three-dimensional image-capturing device and its signal-processing algorithm and apparatus are presented. Three-dimensional information is an emerging differentiator that provides consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It carries the depth information of a scene together with the conventional color image, so that the full visual information that human eyes experience can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system uses the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained through the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The proposed optical resonator enables capture of a full-HD depth image with millimeter-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a concurrent color/depth sensing optical architecture that captures 14 Mp color and full-HD depth images simultaneously (Figures 2, 3). The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image-sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design and fabrication, the 3D camera system prototype, and the signal-processing algorithms.
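
The abstract above is built on the Time-of-Flight principle with a 20 MHz modulated optical shutter. As background only (not the paper's actual signal chain), the sketch below shows how a generic four-tap TOF scheme recovers depth from the phase delay of the modulated IR signal; the sample values are hypothetical.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]
F_MOD = 20e6       # modulation frequency from the abstract [Hz]

def tof_depth(a0, a1, a2, a3):
    """Estimate depth from four intensity samples taken at 0/90/180/270 degree
    phase offsets of the 20 MHz modulation (generic four-tap TOF scheme; the
    paper's own shutter/demodulation pipeline may differ)."""
    phase = np.arctan2(a3 - a1, a0 - a2)    # phase delay of the reflected IR signal
    phase = np.mod(phase, 2 * np.pi)        # wrap into [0, 2*pi)
    return C * phase / (4 * np.pi * F_MOD)  # convert round-trip delay to distance

# Unambiguous range for 20 MHz modulation is c / (2 * f_mod), about 7.5 m.
print(tof_depth(0.9, 0.4, 0.1, 0.6))
```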

A Study on the Windows Application Control Model Based on Leap Motion (립모션 기반의 윈도우즈 애플리케이션 제어 모델에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.111-116 / 2019
  • With the recent rapid development of computing capabilities, various technologies that facilitate interaction between humans and computers are being studied. The paradigm is shifting from GUIs based on traditional input devices toward NUIs that use the body, such as 3D motion, haptics, and multi-touch. Various studies have transferred human movements to computers using sensors, and with the development of optical sensors that can capture 3D objects, the range of applications in industrial, medical, and user-interface fields has expanded. In this paper, I provide a model that can launch other programs through gestures instead of the mouse, the default input device, and control Windows based on the Leap Motion. The proposed model also connects to an Android application through a main client, so that various media can be controlled by buttons and by voice commands using voice recognition. It is expected that Internet media such as video and music can be controlled not only at the client computer but also from a distant application, and that convenient media viewing can be achieved through the proposed model.
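
The abstract above describes a model that maps recognized Leap Motion gestures to Windows application control. The sketch below is a minimal, hypothetical dispatcher of that idea: the gesture labels and target programs are placeholders, not the paper's actual gesture set, and the Leap Motion tracking loop itself is omitted.

```python
import subprocess

# Hypothetical mapping from recognized gestures to Windows actions.
# The commands assume a Windows host, matching the paper's target platform.
GESTURE_COMMANDS = {
    "swipe_right": ["notepad.exe"],                            # launch an application
    "circle":      ["explorer.exe"],                           # open the file explorer
    "key_tap":     ["taskkill", "/IM", "notepad.exe", "/F"],   # close an application
}

def handle_gesture(gesture_name: str) -> None:
    """Dispatch a recognized gesture to the corresponding Windows command."""
    command = GESTURE_COMMANDS.get(gesture_name)
    if command is None:
        print(f"Unrecognized gesture: {gesture_name}")
        return
    subprocess.Popen(command)

# A tracking loop would call this whenever the Leap Motion SDK reports a gesture.
handle_gesture("swipe_right")
```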

Optimization Technique to recognize Hand Motion of Wrist Rehabilitation using Neural Network (신경망을 활용한 손목재활 수부 동작 인식 최적화 기법)

  • Lee, Su-Hyeon;Lee, Young-Keun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.2 / pp.117-124 / 2021
  • This study recognizes hand movements using a neural network for wrist rehabilitation. Hand rehabilitation aims to restore the function of the injured hand as fully as possible and to enable daily life, work, and hobbies. It is common for a physical therapist, an occupational therapist, and an assistive-device maker to form a team with a doctor to approach hand rehabilitation. However, visiting a treatment facility is very inefficient in terms of both cost and time. To address this problem, in this study patients use smart devices directly to perform their rehabilitation exercises, which is expected to be very helpful in terms of cost and time. A wrist-rehabilitation dataset was created by collecting data on four types of rehabilitation exercises from ten subjects, and hand gesture recognition was performed with a neural network. As a result, an accuracy of 93% was obtained, verifying the usefulness of the system.
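
To make the classification step concrete, the sketch below trains a small fully connected neural network on placeholder data standing in for the paper's wrist-rehabilitation dataset (four exercise classes from ten subjects). The feature dimensionality, network size, and scikit-learn toolchain are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: motion features per repetition (dimensionality assumed)
# and labels for the 4 rehabilitation exercises described in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))
y = rng.integers(0, 4, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small fully connected network, comparable in spirit to the paper's classifier.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```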

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions:PartB / v.16B no.5 / pp.341-346 / 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computing technologies including home automation, virtual reality, biometrics, and motion capture. To make its use more widespread, this paper develops an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a special feature: it can be produced at low cost because it does not use the high-cost motion-sensing fibers of conventional approaches, which makes easy production and popular use possible. Instead of a mechanical method using motion-sensing fibers, it adopts a visual method obtained by improving conventional optical motion-capture technology. Compared to conventional visual methods, the proposed method has the following advantages and original features. First, conventional visual methods use many cameras and much equipment to reconstruct the 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple and low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusion, but the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image-analysis algorithms, which are inconvenient in their initialization and computation times; the proposed method removes these inconveniences by using a closed-form image-analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation, so they suffer from low accuracy and from applications confined by singularities; the proposed method improves on these disadvantages through an original formulation in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
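
For background on the "exponential-form twist coordinates" mentioned above, the standard exponential map of a unit twist xi = (v, omega) with ||omega|| = 1 is shown below (Rodrigues' formula). This is the textbook relation such closed-form derivations start from, not the paper's own derivation.

```latex
% Rotation part: Rodrigues' formula for a unit rotation axis \omega
e^{\hat{\omega}\theta} = I + \hat{\omega}\sin\theta + \hat{\omega}^{2}\,(1 - \cos\theta)

% Full rigid-body motion generated by the unit twist \xi = (v, \omega)
e^{\hat{\xi}\theta} =
\begin{pmatrix}
e^{\hat{\omega}\theta} & \bigl(I - e^{\hat{\omega}\theta}\bigr)(\omega \times v) + \omega\,\omega^{\mathsf{T}} v\,\theta \\
0 & 1
\end{pmatrix}
```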

Hand Gesture Recognition Regardless of Sensor Misplacement for Circular EMG Sensor Array System (원형 근전도 센서 어레이 시스템의 센서 틀어짐에 강인한 손 제스쳐 인식)

  • Joo, SeongSoo;Park, HoonKi;Kim, InYoung;Lee, JongShill
    • Journal of rehabilitation welfare engineering & assistive technology / v.11 no.4 / pp.371-376 / 2017
  • In this paper, we propose an algorithm that can recognize hand-gesture patterns regardless of sensor position when performing EMG pattern recognition with a circular EMG sensor array system. Fourteen features were extracted from each of the eight EMG channels measured for one second for each of six motions. The resulting 112 features from the 8 channels were analyzed with principal component analysis, and only the most influential components were kept as 8 input signals. All experiments used a k-NN classifier, and the data were verified with 5-fold cross-validation. In machine learning, results vary greatly depending on which data are learned: with the training data used in previous studies, an accuracy of 99.3% was confirmed, but when the sensor position was shifted by only 22.5 degrees, the accuracy dropped sharply to 67.28%. The proposed method maintains an accuracy of about 98% even when the sensor position is changed. These results are expected to greatly increase the convenience of users of the circular EMG system.
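
The processing chain in the abstract (14 features x 8 channels = 112 features per one-second window, PCA down to 8 components, a k-NN classifier, and 5-fold cross-validation) maps directly onto a scikit-learn pipeline. The sketch below uses random placeholder data and an assumed k = 5; it illustrates the chain, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the paper's EMG features:
# 14 features x 8 channels = 112 features per 1-second window, 6 hand motions.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 112))
y = rng.integers(0, 6, size=600)

# Reduce the 112 features to 8 principal components, then classify with k-NN
# (the value of k is an assumption; the abstract does not state it).
pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),
    KNeighborsClassifier(n_neighbors=5),
)

# 5-fold cross-validation, as in the abstract.
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean accuracy:", scores.mean())
```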

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering / v.20 no.1 / pp.140-152 / 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been studied intensively. However, a multimodal interface faces the difficulty that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced, based on the W3C (World Wide Web Consortium) standards EMMA (Extensible MultiModal Annotation Markup Language) and the MMI (Multimodal Interaction) Framework. This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability to other modalities. Experimental results show that the multimodal communicator handles a map-browsing scenario using the multiple modalities of eye tracking and gesture recognition.
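
The communicator above exchanges recognition results in EMMA. As an illustration only, the sketch below builds a small EMMA 1.0 document wrapping a gesture-recognition result with Python's ElementTree; the gesture payload and confidence value are made up, and the exact markup used by the paper's components is not given in the abstract.

```python
import xml.etree.ElementTree as ET

# Illustrative EMMA 1.0 document wrapping a gesture-recognition result.
# Element and attribute names follow the W3C EMMA recommendation; the
# application payload ("command": "zoom_in") is invented for the example.
EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
interp = ET.SubElement(
    root,
    f"{{{EMMA_NS}}}interpretation",
    {
        "id": "gesture1",
        f"{{{EMMA_NS}}}medium": "visual",
        f"{{{EMMA_NS}}}mode": "gesture",
        f"{{{EMMA_NS}}}confidence": "0.87",
    },
)
ET.SubElement(interp, "command").text = "zoom_in"  # application-level payload

print(ET.tostring(root, encoding="unicode"))
```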

A Conversational Interactive Tactile Map for the Visually Impaired (시각장애인의 길 탐색을 위한 대화형 인터랙티브 촉각 지도 개발)

  • Lee, Yerin;Lee, Dongmyeong;Quero, Luis Cavazos;Bartolome, Jorge Iranzo;Cho, Jundong;Lee, Sangwon
    • Science of Emotion and Sensibility / v.23 no.1 / pp.29-40 / 2020
  • Visually impaired people use tactile maps to obtain spatial information about their surroundings, find their way, and improve their independent mobility. However, classical tactile maps that use braille to describe locations on the map have several limitations, such as a lack of information due to space constraints and limited feedback possibilities. This study describes the development of a new multimodal interactive tactile map interface that addresses these challenges to improve the usability and independence of visually impaired users. The interface adds touch gesture recognition to the surface of the tactile map and enables users to interact verbally with a voice agent to receive feedback and information about navigation routes and points of interest. A low-cost prototype was developed, and usability tests evaluated the interface through a survey and interviews with blind participants after they used the prototype. The results show that the interactive tactile map prototype provides better usability than traditional braille-only tactile maps. Participants reported that it was easier to find the starting point and the points of interest they wished to navigate to, and the prototype improved self-reported independence and confidence compared with traditional tactile maps. Future work includes further development of the mobility solution based on the feedback received and an extensive quantitative study.
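
The interaction flow described above (a touch on a map region, or a spoken question, answered with verbal feedback) can be sketched as a tiny dispatcher. Everything in the sketch below is hypothetical: the region names, descriptions, and keyword matching are placeholders for the prototype's actual gesture recognition and voice agent.

```python
# Hypothetical points of interest keyed by touchable map region.
POINTS_OF_INTEREST = {
    "entrance": "Main entrance. The information desk is two meters to your right.",
    "elevator": "Elevator lobby. Braille floor buttons are on the left panel.",
    "cafeteria": "Cafeteria. Open weekdays from nine to five.",
}

def on_touch(region: str) -> str:
    """Return the spoken feedback for a touched map region."""
    return POINTS_OF_INTEREST.get(region, "No information for this area.")

def on_voice_query(query: str) -> str:
    """Very small keyword-based stand-in for the voice agent."""
    for name, description in POINTS_OF_INTEREST.items():
        if name in query.lower():
            return description
    return "Sorry, I could not find that place on the map."

print(on_touch("entrance"))
print(on_voice_query("How do I get to the cafeteria?"))
```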

Experience Design Guideline for Smart Car Interface (스마트카의 인터페이스를 위한 경험 디자인 가이드라인)

  • Yoo, Hoon Sik;Ju, Da Young
    • Design Convergence Study / v.15 no.1 / pp.135-150 / 2016
  • Due to the development of communication technology and the expansion of Intelligent Transport Systems (ITS), the car is changing from a simple mechanical device into a second living space with comprehensive convenience functions, and it is evolving into a platform whose interface supports this role. As the interface area that provides information to passengers expands, research on smart-car user experience is becoming increasingly important. This study proposes guidelines for smart car user experience elements. The experience elements were defined as function, interaction, and surface, and through discussions with UX/UI experts, 8 representative functions, 14 representative interaction techniques, and 8 glass-window locations were specified for these elements. Users' priorities for the experience elements were then analyzed through a questionnaire survey of 100 drivers. The analysis showed that users' priorities in applying the main functions were, in order, safety, distance, and sensibility. Priorities among the input methods were, in order, voice recognition, touch, gesture, physical button, and eye tracking. For the glass-window locations, users prioritized the front of the driver's seat over the rear. Demographic analysis by gender showed no significant differences except for two functions, indicating that the guidelines can be applied to men and women alike. Through this analysis of user requirements for the individual elements, the study indicates which requirements of each element should be applied to commercial products with priority.