• Title/Summary/Keyword: gesture tracking

The Effects of Emotional Interaction with Virtual Student on the User's Eye-fixation and Virtual Presence in the Teaching Simulation (가상현실 수업시뮬레이션에서 가상학생과의 정서적 상호작용이 사용자의 시선응시 및 가상실재감에 미치는 영향)

  • Ryu, Jeeheon; Kim, Kukhyeon
    • The Journal of the Korea Contents Association / v.20 no.2 / pp.581-593 / 2020
  • The purpose of this study was to examine eye-fixation times on different parts of a student avatar, and the sense of virtual presence, across two scenarios in a virtual reality-based teaching simulation. The study aimed to identify where users direct their attention while interacting with a student avatar; by examining where a user gazes during a conversation with the avatar, we can better understand non-verbal communication. Forty-five college students (21 females and 24 males) participated in the experiment. They held verbal conversations with a student avatar under two scenarios in the teaching simulation, and their eye movements were recorded through a head-mounted display with embedded eye tracking. The results revealed significant differences in eye-fixation times: participants gazed longer at the facial expression than at any other area, and fixation time on the facial expression was significantly longer than on gestures (F=3.75, p<.05). However, virtual presence did not differ significantly between the two scenario levels. These results suggest that users focus on the face more than on gestures when emotionally interacting with a virtual character.
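
    The F-test reported above compares fixation-time aggregates across areas of interest. A minimal sketch of that kind of analysis, assuming a hypothetical log format of (participant, area, duration) fixation records, might look like this in Python:

    ```python
    # Toy fixation log: (participant_id, area_of_interest, duration_ms).
    from collections import defaultdict
    from scipy.stats import f_oneway

    fixations = [
        (1, "face", 420), (1, "gesture", 180), (1, "body", 90),
        (2, "face", 510), (2, "gesture", 200), (2, "body", 120),
        (3, "face", 380), (3, "gesture", 150), (3, "body", 70),
    ]

    # Sum fixation time per participant within each area of interest.
    totals = defaultdict(lambda: defaultdict(float))
    for pid, aoi, ms in fixations:
        totals[aoi][pid] += ms

    # One-way ANOVA across areas, analogous to the F-test reported above.
    groups = [list(per_pid.values()) for per_pid in totals.values()]
    f_stat, p_val = f_oneway(*groups)
    print(f"F={f_stat:.2f}, p={p_val:.3f}")
    ```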

User Detection and Main Body Parts Estimation using Inaccurate Depth Information and 2D Motion Information (정밀하지 않은 깊이정보와 2D움직임 정보를 이용한 사용자 검출과 주요 신체부위 추정)

  • Lee, Jae-Won; Hong, Sung-Hoon
    • Journal of Broadcast Engineering / v.17 no.4 / pp.611-624 / 2012
  • Gesture is the most intuitive means of communication other than the voice. Accordingly, many studies have investigated methods of controlling a computer through gesture input in place of the keyboard or mouse, and in this line of research, user detection and main-body-part estimation are critical steps. In this paper, we propose methods for detecting user objects and estimating their main body parts from inaccurate depth information, for use in pose estimation. Our user detection method combines 2D motion information with 3D depth information, making it robust to changes in lighting and to noise; because it processes 2D signals as 1D signals, it is well suited to real-time operation, and by exploiting object information from previous frames it becomes more accurate and robust. We also present a main-body-part estimation method that uses 2D contour information, 3D depth information, and tracking. Experiments show that the proposed user detection method is more robust than methods using 2D information alone and detects objects accurately even from inaccurate depth information. The proposed body-part estimation method also overcomes the limitation that occluded body parts cannot be detected from 2D contour information alone, and by using color information it reduces sensitivity to changes in illumination or environment.
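
    To make the fusion of cues concrete, here is a minimal sketch, not the paper's algorithm, of combining 2D frame differencing with a noisy depth map to segment a user region; the threshold values and the `segment_user` helper are illustrative assumptions:

    ```python
    import cv2
    import numpy as np

    def segment_user(prev_gray, curr_gray, depth_mm, near=500, far=3000):
        """Binary mask of moving pixels that lie in a plausible depth band."""
        # 2D motion cue: absolute frame difference, thresholded.
        motion = cv2.absdiff(curr_gray, prev_gray)
        _, motion_mask = cv2.threshold(motion, 15, 255, cv2.THRESH_BINARY)

        # Depth cue: keep pixels in the expected user distance range;
        # invalid depth (0) falls outside the band, tolerating sensor holes.
        depth_mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

        # Fuse the cues and clean residual noise with morphology.
        fused = cv2.bitwise_and(motion_mask, depth_mask)
        kernel = np.ones((5, 5), np.uint8)
        return cv2.morphologyEx(fused, cv2.MORPH_OPEN, kernel)
    ```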

Design and Implementation of a Sign Language Gesture Recognizer using Data Glove and Motion Tracking System (장갑 장치와 제스처 추적을 이용한 수화 제스처 인식기의 설계 및 구현)

  • Kim, Jung-Hyun; Roh, Yong-Wan; Kim, Dong-Gyu; Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.233-237 / 2005
  • Research on sign language recognition and representation has been approached from many directions, including communication with hearing people through sign recognition and hand-gesture recognition in virtual reality. Most of this work, however, has targeted desktop-PC-based hand-signal control and sign or hand-gesture recognition, acquired sign signals with imaging equipment, and pursued free communication with non-disabled people through recognition systems focused on word-level signs. In this paper, we extend the approach of acquiring meaningful sign gestures from a haptic device to a ubiquitous environment based on a next-generation wearable PC platform, and we present an efficient data-acquisition scheme that overcomes the limitations in acquiring new information from the gesture input module and improves user convenience. We also implemented a sign gesture recognizer that uses a fuzzy algorithm and an RDBMS module to recognize and represent meaningful sentence-level sign gestures in real time, anytime and anywhere. In our experiments, the separation between the sign gesture input module (a 5th Data Glove System and Fastrak®) and the next-generation wearable PC platform (an embedded i.MX21 board) was configured as an ellipse with a 10 m radius; moving the input module among the prescribed positions, we ran 20 consecutive repetitions with each of five subjects and obtained an average recognition rate of 92.2% for dynamic gestures.
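
    As a rough illustration of fuzzy gesture matching of the kind the recognizer performs, the sketch below scores hypothetical normalized flex-sensor readings against stored templates; the template values, words, and threshold are invented for the example and do not reflect the authors' system:

    ```python
    import numpy as np

    TEMPLATES = {            # hypothetical normalized flex values per finger
        "hello":  [0.1, 0.1, 0.1, 0.1, 0.1],
        "thanks": [0.9, 0.2, 0.2, 0.2, 0.8],
    }

    def triangular_membership(x, center, width=0.3):
        """Triangular fuzzy membership: 1 at the center, 0 beyond +/- width."""
        return max(0.0, 1.0 - abs(x - center) / width)

    def recognize(flex):
        """Score each template by its minimum per-finger membership (fuzzy AND)."""
        scores = {
            word: min(triangular_membership(x, c) for x, c in zip(flex, tpl))
            for word, tpl in TEMPLATES.items()
        }
        word, score = max(scores.items(), key=lambda kv: kv[1])
        return word if score > 0.5 else None

    print(recognize([0.85, 0.25, 0.15, 0.2, 0.75]))  # -> "thanks"
    ```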

Word-boundary and rate effects on upper and lower lip movements in the articulation of the bilabial stop /p/ in Korean

  • Son, Minjung
    • Phonetics and Speech Sciences / v.10 no.1 / pp.23-31 / 2018
  • In this study, we examined how the upper and lower lips articulate to produce labial /p/. Using electromagnetic midsagittal articulography, we collected flesh-point tracking data from eight native speakers of Seoul Korean (five females and three males). Individual articulatory movements in /p/ were examined in terms of minimum vertical upper lip position, maximum vertical lower lip position, and the corresponding vertical upper lip position aligned with maximum vertical lower lip position. Using linear mixed-effects models, we tested two factors (word boundary [across-word vs. within-word] and speech rate [comfortable vs. fast]) and their interaction, with subjects as random effects. The results are summarized as follows. First, maximum lower lip position varied with both word boundary and speech rate, with no interaction detected; in particular, it was lower (i.e., less constricted, or more reduced) in the fast-rate and across-word conditions. Second, minimum upper lip position, as well as the upper lip position measured at the time of maximum lower lip position, varied only with word boundary, being consistently lower in the across-word condition. We thus provide further empirical evidence that lower lip movement is sensitive both to word boundary (a linguistic factor) and to speech rate (a paralinguistic factor), supporting the traditional idea that the lower lip is an actively moving articulator. Upper lip movement is also sensitive to word boundary, which counters the traditional idea that the upper lip is merely an immobile target area. Taken together, the lip aperture gesture, which takes both upper and lower lip vertical movements into account, is a better indicator than the traditional approach that distinguishes a movable articulator from a target place. With respect to speech rate, the present results pattern with cross-linguistic lenition-related allophonic variation, which is known to be more sensitive to fast rates.
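
    The linear mixed-effects design described above can be sketched with statsmodels; the data frame below is hypothetical, and the column names (max_lower_lip, boundary, rate, subject) are assumptions standing in for the real measurements:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical articulography measurements: one row per token.
    df = pd.DataFrame({
        "max_lower_lip": [4.2, 3.8, 4.5, 3.6, 4.1, 3.7, 4.4, 3.5],
        "boundary": ["across", "across", "within", "within"] * 2,
        "rate": ["comfortable", "fast"] * 4,
        "subject": ["s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2"],
    })

    # Fixed effects: word boundary, speech rate, and their interaction;
    # random intercept per subject, mirroring the design in the abstract.
    model = smf.mixedlm("max_lower_lip ~ boundary * rate", df, groups=df["subject"])
    result = model.fit()
    print(result.summary())
    ```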

Infrared LED Pointer for Interactions in Collaborative Environments (협업 환경에서의 인터랙션을 위한 적외선 LED 포인터)

  • Jin, Yoon-Suk; Lee, Kyu-Hwa; Park, Jun
    • Journal of the HCI Society of Korea / v.2 no.1 / pp.57-63 / 2007
  • This research implements a new pointing device for human-computer interaction in collaborative environments based on a Tiled Display system. We focused on tracking the position of an infrared light source and on applying the system to various areas. Beyond the simple mouse functionality of clicking and pointing, we developed a device that helps people communicate better with the computer. A strength of our system is that it can be deployed anywhere a camera can be installed, and because it processes only infrared light, the computational overhead for LED recognition is very low. Furthermore, by analyzing the user's movement, various actions can be performed with greater convenience. The system was tested for presentation and game control.
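
    A minimal sketch of the core tracking step, locating an IR LED as a saturated blob in an IR-filtered camera frame and taking its centroid, could look like the following; the threshold of 240 and the camera setup are assumptions, not the authors' implementation:

    ```python
    import cv2

    cap = cv2.VideoCapture(0)  # camera fitted with an IR-pass filter (assumed setup)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The IR LED saturates the sensor, so a high fixed threshold isolates it.
        _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:  # centroid of the bright region = pointer position
            x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print(f"pointer at ({x:.0f}, {y:.0f})")
        cv2.imshow("ir mask", mask)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()
    ```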

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo; Lee, Daeho; Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a real-time hand gesture interface method based on vision and depth information, using finger joint estimation. The left- and right-hand areas are segmented after mapping the visual image onto the depth image, followed by labeling and boundary noise removal; the centroid and rotation angle of each hand area are then calculated. Next, a circle is progressively expanded from the centroid of the hand, and the midpoints of its crossings with the hand boundary are used to detect finger joint points and fingertips, from which the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately: on various two-handed poses with hidden fingers, accuracy exceeded 90% and performance exceeded 25 fps. The proposed method can be used as a contact-free input interface in HCI control, education, and game applications.
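
    The expanding-circle step can be sketched as follows, under the assumption that a binary hand mask and its centroid are already available; this is a simplified reading of the method (it ignores the wrap-around at angle 0), not the authors' code:

    ```python
    import numpy as np

    def circle_crossing_midpoints(mask, cx, cy, radius, samples=360):
        """Midpoints of contiguous on-hand arcs along one circle of given radius."""
        angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
        xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
        ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
        on_hand = mask[ys, xs] > 0

        # Find contiguous runs of on-hand samples; keep each run's middle point.
        midpoints, start = [], None
        for i, inside in enumerate(list(on_hand) + [False]):
            if inside and start is None:
                start = i
            elif not inside and start is not None:
                mid = (start + i - 1) // 2
                midpoints.append((xs[mid], ys[mid]))
                start = None
        return midpoints
    ```

    Calling this with a sequence of increasing radii from the palm outward would yield candidate joint points and fingertips along each finger.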

The Effect of Teacher Participation-Oriented Education Program Centered on Multi-Faceted Analysis of Elementary Science Classes on the Class Expertise of Novice Teacher (초등 과학수업의 다면적 분석을 중심으로 한 교사 참여형 교육프로그램이 초보교사의 수업전문성에 미치는 효과)

  • Shin, Won-Sub; Shin, Dong-Hoon
    • Journal of Korean Elementary Science Education / v.38 no.3 / pp.406-425 / 2019
  • The purpose of this study is to analyze the effect of a Teacher Participation-oriented Education Program (TPEP), centered on multi-faceted analysis of elementary science classes, on the class expertise of novice teachers. To develop the TPEP, lecture-style and inquiry-oriented science classes were first analyzed using video and eye-tracking techniques. The TPEP was developed in five stages: video analysis, gaze analysis, teaching-language analysis, gesture analysis, and class development, and participants directly analyzed the classes of experienced and novice teachers at each stage. The TPEP developed in this study differs from existing teacher education programs in that it reflects human performance technology. Participants analyzed actual elementary science classes in a multi-faceted way and developed better classes based on that analysis. The results of this study are as follows. First, teacher training institutions and schools should provide pre-service and novice teachers with varied experience in class analysis, including multi-faceted analysis of their own classes. Second, this study identified the limitations of existing class observation and video analysis. Third, TPEPs should be developed to improve novice teachers' class expertise. Finally, we hope the results of this study serve as basic data for developing programs to improve teachers' class expertise at teacher training institutions and education policy institutions.

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin; Gwon, Daehyeok; Choi, Jinhuyck; Lee, Injae; Choi, Haechul
    • Journal of Broadcast Engineering / v.20 no.1 / pp.140-152 / 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been intensively studied. However, multimodal interfaces face the difficulty that modalities have different data formats and are hard to make cooperate efficiently. To solve this problem, we introduce a multimodal communicator based on the W3C (World Wide Web Consortium) standards EMMA (Extensible MultiModal Annotation markup language) and MMI (Multimodal Interaction Framework). This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability for other modalities. Experimental results show the multimodal communicator operating with eye tracking and gesture recognition modalities in a map-browsing scenario.
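
    To illustrate the common data format, the sketch below wraps two modality results, gaze and gesture, in an EMMA document using Python's standard library; the emma:medium/emma:mode attribute values and payload strings are illustrative choices, not a complete EMMA implementation:

    ```python
    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA_NS)

    def emma_interpretation(root, interp_id, medium, mode, payload):
        """Attach one modality result as an emma:interpretation element."""
        interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
            "id": interp_id,
            f"{{{EMMA_NS}}}medium": medium,
            f"{{{EMMA_NS}}}mode": mode,
        })
        interp.text = payload
        return interp

    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    emma_interpretation(root, "int1", "visual", "gaze", "fixate:map_region_3")
    emma_interpretation(root, "int2", "visual", "gesture", "swipe:left")
    print(ET.tostring(root, encoding="unicode"))
    ```

    An interaction manager could then route such documents from each modality component to the presentation component without caring about the sensors' native formats.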

Image Processing Algorithms for DI-method Multi Touch Screen Controllers (DI 방식의 대형 멀티터치스크린을 위한 영상처리 알고리즘 설계)

  • Kang, Min-Gu; Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.1-12 / 2011
  • Large multi-touch screens are usually built with infrared light, because building them with other technologies, such as existing resistive overlays, capacitive overlays, or acoustic waves, faces technical constraints or cost problems. Infrared-based multi-touch screens are easy to build but tend to run into technical limits in implementation. To compensate for these problems, two methods were proposed through Microsoft's Surface project, a next-generation user-interface concept: Frustrated Total Internal Reflection (FTIR), which uses infrared cameras, and Diffuse Illumination (DI). Both FTIR and DI scale readily to large screens and are not affected by the number of touch points. Although FTIR has an advantage in detecting touch points, it also has many disadvantages, such as screen-size limits, material-quality requirements, the module needed for infrared LED arrays, and high power consumption. DI, on the other hand, has difficulty detecting touch points because of structural problems, but it can resolve the problems of FTIR. In this thesis, we study algorithms for effectively correcting optical lens distortion and image processing algorithms that solve the touch-detection problem of the original DI method. Moreover, we propose calibration algorithms for improving multi-touch accuracy and a new tracking technique for accurately following the movement and gestures of the touching device. To verify our approach, we implemented a table-based multi-touch screen.
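
    Two of the steps discussed, lens distortion correction and touch-blob detection on the diffuser image, can be sketched as follows; the camera intrinsics, distortion coefficients, and thresholds are assumed values, not the thesis' calibration results:

    ```python
    import cv2
    import numpy as np

    # Intrinsics and distortion coefficients from an (assumed) offline calibration.
    K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (assumed)

    def detect_touches(raw, background):
        """raw: grayscale camera frame; background: undistorted reference frame."""
        undistorted = cv2.undistort(raw, K, dist)
        # Touches show up as bright spots against the learned background.
        diff = cv2.subtract(undistorted, background)
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        points = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 20:  # reject tiny noise blobs
                points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return points
    ```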

Experience Design Guideline for Smart Car Interface (스마트카의 인터페이스를 위한 경험 디자인 가이드라인)

  • Yoo, Hoon Sik; Ju, Da Young
    • Design Convergence Study / v.15 no.1 / pp.135-150 / 2016
  • With the development of communication technology and the expansion of Intelligent Transport Systems (ITS), the car is changing from a simple mechanical device into a second living space with comprehensive convenience functions, evolving into a platform whose interface supports this role. As the interface area that delivers information to passengers expands, research on smart car user experience is growing in importance. This study proposes guidelines for smart car user experience elements. The elements were defined as function, interaction, and surface; through discussions with UX/UI experts, eight representative functions, fourteen representative interaction techniques, and eight glass-window locations were specified for these elements. A questionnaire survey of 100 drivers then analyzed users' priorities among the experience elements. The analysis showed that users' priorities in applying the main functions were, in order, safety, distance, and sensibility. Priorities among interaction methods were, in order, voice recognition, touch, gesture, physical buttons, and eye tracking. For glass-window locations, users prioritized the front of the driver's seat over the back. A demographic analysis by gender found no significant differences except for two functions, indicating that common guidelines can be applied to males and females. Through this analysis of user requirements for individual elements, the study provides guidance on which requirements should be applied first in commercialized products.