• Title/Summary/Keyword: Interaction Gesture

Search Results: 226

States, Behaviors and Cues of Infants (영아의 상태, 행동, 암시)

  • Kim, Tae-Im
    • Korean Parent-Child Health Journal
    • /
    • v.1
    • /
    • pp.56-74
    • /
    • 1998
  • The language of the newborn, like that of adults, is one of gesture, posture, and expression (Lewis, 1980). Helping parents understand and respond to their newborn's cues makes caring for their baby more enjoyable and may well provide the foundation for a communicative bond that will last a lifetime. Infant state provides a dynamic pattern reflecting the full behavioral repertoire of the healthy infant (Brazelton, 1973, 1984). States are organized in a predictable temporal sequence and provide a basic classification of conditions that occur over and over again (Wolff, 1987). They are recognized by characteristic behavioral patterns, physiological changes, and the infant's level of responsiveness. Most importantly, however, states provide caregivers with a framework for observing and understanding infants' behavior. When parents can determine whether their infant is asleep, awake, or drowsy, and understand the implications that recognizing states has both for the infant's behavior and for their own caregiving, many things about taking care of a newborn become much easier and more rewarding. Most parents have the skills and the desire to do what is best for their infant. The skills parents bring to the interaction are the ability to read their infant's cues; to stimulate the baby through touch, movement, talking, and looking; and to respond in a contingent manner to the infant's signals. Among the crucial skills infants bring to the interaction are perceptual abilities (hearing and seeing), the capacity to look at another person for a period of time, and the ability to smile, be consoled, adapt their body to holding or movement, and respond in a regular and predictable way. Research demonstrates that the absence of these skills in either partner adversely affects parent-infant interaction and later development. Observing early parent-infant interactions during the hospital stay is important in order to identify parent-infant pairs in need of continued monitoring (Barnard et al., 1989).

Point Cloud Content in Form of Interactive Holograms (포인트 클라우드 형태의 인터랙티브 홀로그램 콘텐츠)

  • Kim, Dong-Hyun;Kim, Sang-Wook
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.9
    • /
    • pp.40-47
    • /
    • 2012
  • Media art extends existing art by proposing new paths of awareness and perception that instrumentalize the human body, creating new ways of viewing and interacting. We create visual images as point clouds, a form similar to Pointillism in Western painting; this traditional painting technique is thus reconstructed by means of digital technology. In this paper, we fuse aesthetic elements with digital technology to produce video in the form of point clouds, project it onto holographic film, and let spectators interact with the video content through gestures. The production process consists of planning, content creation, production of point cloud imagery, design of 3D gestures for interaction, and projection onto holographic film. The content visually and experientially expresses the process of memory recall as it takes place in human consciousness: uncertain memories become materialized and are recalled. Uncertain memories are represented by the vague shapes of the point cloud imagery, and as spectators manipulate the images through gestural interaction, the memories take shape and the recall is completed.
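
The abstract gives no implementation details, but the point-cloud visual style it describes is easy to illustrate. Below is a minimal Python sketch (hypothetical, not the authors' pipeline) that renders an ordinary photograph as a sparse, jittered point cloud, echoing the vague "uncertain memory" imagery; the input filename is a placeholder.

```python
# Minimal sketch (not the authors' pipeline): render an ordinary image as a
# sparse "point cloud" reminiscent of Pointillism. Assumes Pillow, NumPy,
# and Matplotlib are installed; the input path is hypothetical.
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

img = np.asarray(Image.open("memory_photo.jpg").convert("RGB")) / 255.0
h, w, _ = img.shape

# Randomly sample a fraction of pixels; fewer points -> vaguer shapes,
# echoing the "uncertain memory" effect described in the abstract.
n_points = 20000
ys = np.random.randint(0, h, n_points)
xs = np.random.randint(0, w, n_points)
colors = img[ys, xs]

# Small positional jitter blurs contours further.
jitter = np.random.normal(scale=1.5, size=(n_points, 2))

plt.figure(figsize=(8, 8 * h / w), facecolor="black")
plt.scatter(xs + jitter[:, 0], h - ys + jitter[:, 1], c=colors, s=1.5)
plt.axis("off")
plt.show()
```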

A Study on Comparative Experiment of Hand-based Interface in Immersive Virtual Reality (몰입형 가상현실에서 손 기반 인터페이스의 비교 실험에 관한 연구)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.2
    • /
    • pp.1-9
    • /
    • 2019
  • This study compares hand-based interfaces that improve a user's virtual reality (VR) presence by enhancing immersion in VR interactions. To provide an immersive experience in which users can directly control the virtual environment and the objects within it using their hands, while minimizing the device burden on users of immersive VR systems, we designed two experimental interfaces: hand-motion-recognition-sensor-based and controller-based interaction. Sensor-based interaction reflects accurate hand movements, direct gestures, and motion representations in the virtual environment, and it requires no device beyond the VR head-mounted display (HMD). Controller-based interaction uses a generalized interface that maps each gesture to a controller key, taking advantage of the controller bundled with the VR HMD. The comparative experiments confirm the convenience and intuitiveness of VR interactions using the user's hands.
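
As a rough illustration of the two interface designs compared above, here is a hypothetical Python sketch in which a single "grab" action is backed either by a hand-tracking sensor (no device beyond the HMD) or by a controller key mapping. The sensor and controller objects are stand-ins, not a real VR SDK.

```python
# Hypothetical sketch of the two interaction designs compared in the paper:
# a common "grab" action backed either by a hand-tracking sensor or by a
# controller key mapping. The device objects are stand-ins, not a real SDK.
from abc import ABC, abstractmethod

class GrabInterface(ABC):
    @abstractmethod
    def is_grabbing(self) -> bool:
        """Return True while the user performs the grab action."""

class HandSensorGrab(GrabInterface):
    """Hand-motion-recognition design: no extra device beyond the HMD."""
    def __init__(self, sensor):
        self.sensor = sensor  # e.g. a hand-tracking sensor wrapper

    def is_grabbing(self) -> bool:
        # Recognize a fist gesture from per-finger curl values in [0, 1].
        curls = self.sensor.finger_curls()
        return all(c > 0.8 for c in curls)

class ControllerGrab(GrabInterface):
    """Controller design: map the gesture onto an existing key."""
    def __init__(self, controller, key="grip"):
        self.controller = controller
        self.key = key

    def is_grabbing(self) -> bool:
        return self.controller.is_pressed(self.key)
```

Game logic written against `GrabInterface` then runs unchanged under either condition, which is essentially what a comparative experiment needs.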

Multimodal Interaction Framework for Collaborative Augmented Reality in Education

  • Asiri, Dalia Mohammed Eissa;Allehaibi, Khalid Hamed;Basori, Ahmad Hoirul
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.268-282
    • /
    • 2022
  • One of the most important technologies today is augmented reality (AR), which allows users to experience the real world combined with virtual objects. The technology has been applied in many sectors, such as shopping and medicine, and it has also been adopted in education, where it is widely used because of its effectiveness. It has many benefits, such as arousing students' interest in imaginative concepts that are difficult to understand. Studies have also shown that collaboration between students increases learning opportunities through the exchange of information, which is known as collaborative learning. Multimodal input creates a distinctive and engaging experience, especially for students, as it increases users' interaction with the technology. This research aims to improve the achievement of 6th graders by designing a framework that integrates collaborative learning with multimodal input (hand gesture and touch). The framework, designed to be effective, fun, and easy to use with multimodal interaction in AR, was applied to reformulate the genetics and traits lesson (the second lesson of the first semester) from the 6th-grade science textbook in an interactive manner: a video was created based on science teachers' consultations, along with a puzzle game into which the game images were inserted. The framework also relies on cooperation between students to solve the questions. The findings showed a significant difference between the experimental group's pre-test and post-test mean scores in the science course at the levels of remembering, understanding, and applying, which indicates the success of the framework; in addition, 43 students preferred the framework over traditional instruction.
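
The paper's framework is not specified at the code level; the sketch below only illustrates the general idea of multimodal input, with hand-gesture and touch events funneled into the same handlers so either modality can drive the puzzle game. All names are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's framework): dispatch two
# input modalities -- hand gestures and touch -- to shared puzzle actions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class InputEvent:
    modality: str   # "hand_gesture" or "touch"
    action: str     # e.g. "select", "drag", "drop"
    target: str     # id of the puzzle piece or UI element

class MultimodalDispatcher:
    def __init__(self):
        self.handlers: dict[str, Callable[[InputEvent], None]] = {}

    def on(self, action: str, handler: Callable[[InputEvent], None]):
        self.handlers[action] = handler

    def dispatch(self, event: InputEvent):
        # Both modalities funnel into the same handler, so a piece can be
        # selected by an in-air pinch or by a screen tap interchangeably.
        handler = self.handlers.get(event.action)
        if handler:
            handler(event)

dispatcher = MultimodalDispatcher()
dispatcher.on("select", lambda e: print(f"{e.modality} selected {e.target}"))
dispatcher.dispatch(InputEvent("hand_gesture", "select", "piece_3"))
dispatcher.dispatch(InputEvent("touch", "select", "piece_3"))
```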

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.20 no.1
    • /
    • pp.140-152
    • /
    • 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been intensively studied. However, multimodal interfaces face the difficulty that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, we introduce a multimodal communicator based on the W3C (World Wide Web Consortium) standards EMMA (Extensible MultiModal Annotation markup language) and MMI (Multimodal Interaction Framework). This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability for other modalities. Experimental results show the multimodal communicator operating with the two modalities of eye tracking and gesture recognition in a map-browsing scenario.
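
For readers unfamiliar with the MMI architecture, the following simplified Python sketch shows the roles the abstract names: modality components emit EMMA-like interpretations, an interaction manager fuses them, and a presentation component reacts. The dict structure is an illustrative stand-in for real EMMA XML, not the paper's implementation.

```python
# Simplified sketch of the W3C MMI architecture named in the abstract:
# modality components emit EMMA-like interpretations, and an interaction
# manager merges them for the presentation component. The dict structure
# is an illustrative stand-in for real EMMA XML.
class InteractionManager:
    def __init__(self, presentation):
        self.presentation = presentation
        self.pending = {}  # latest interpretation per modality

    def receive(self, interpretation: dict):
        # e.g. {"modality": "eye_tracker", "confidence": 0.9,
        #       "tokens": {"focus": "map_tile_12"}}
        self.pending[interpretation["modality"]] = interpretation
        self.integrate()

    def integrate(self):
        gaze = self.pending.get("eye_tracker")
        gesture = self.pending.get("gesture")
        # Fuse: a "zoom" gesture applies at the currently gazed-at tile.
        if gaze and gesture and gesture["tokens"].get("command") == "zoom":
            self.presentation.zoom(gaze["tokens"]["focus"])
            self.pending.clear()

class MapPresentation:
    def zoom(self, tile_id: str):
        print(f"zooming map at {tile_id}")

im = InteractionManager(MapPresentation())
im.receive({"modality": "eye_tracker", "confidence": 0.9,
            "tokens": {"focus": "map_tile_12"}})
im.receive({"modality": "gesture", "confidence": 0.8,
            "tokens": {"command": "zoom"}})
```

Adding a new modality only requires another component that posts interpretations in the shared format, which is the expansion capability the abstract claims.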

Exploring the Effects of Passive Haptic Factors When Interacting with a Virtual Pet in Immersive VR Environment (몰입형 VR 환경에서 가상 반려동물과 상호작용에 관한 패시브 햅틱 요소의 영향 분석)

  • Donggeun KIM;Dongsik Jo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.125-132
    • /
    • 2024
  • Recently, immersive virtual reality (IVR) technologies have been applied to various services such as education, training, entertainment, industry, healthcare, and remote collaboration. In particular, visualizing and interacting with virtual humans is an active research area, and research on virtual pets in IVR is also emerging. As in real-world scenarios, the most important element of interaction with a virtual pet is physical contact, such as haptic feedback and non-verbal interaction (e.g., gestures). This paper investigates the effects of passive haptic factors (e.g., shape and texture) using physical props mapped to the virtual pet. Experimental results show significant differences in immersion, co-presence, realism, and friendliness depending on the level of the texture element when interacting with the virtual pet through passive haptic feedback. Additionally, as a main finding, the statistical interaction between the two variables revealed an uncanny-valley effect in terms of friendliness. We expect these results to provide guidelines for creating interactive content with virtual pets in immersive VR environments.

Effects of interactivity and usage mode on user satisfaction, usefulness, and intention to use in text information presentation in mobile environment (모바일 환경에서의 텍스트 표현 방식의 상호작용성과 사용모드가 사용자의 만족도, 유용성, 사용의도에 미치는 영향)

  • Baek, Hyunji;Lee, Sangwon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.223-226
    • /
    • 2017
  • With the development of information technology, users are provided with the information they want through mobile devices in various situations. When users engage with information, they interact through gesture activities such as tapping, and they gain experience in the process. Experience through interaction on mobile devices affects the user's psychology, which matters because it is related to the user's future behavior. Various methods of presenting information in mobile environments have been researched, but most studies focus on functional interactivity. The purpose of this study is to investigate the effects of interactivity and usage mode on satisfaction, usefulness, and intention to use, aiming at user-centered text presentation. The study has two factors: interactivity and usage mode. Interactivity has two levels, high and low, depending on modality and message interactivity; usage mode is divided into action mode and goal mode depending on whether the user has a task. The experimental design is 2×2: the same content is provided with (a) modality interactivity only or (b) both modality and message interactivity, and participants proceed in (a) action mode, without a specific task, or (b) goal mode, with a specific task. The experiment demonstrated differences in satisfaction, usefulness, and intention to use depending on interactivity and usage mode when providing information in a mobile environment. In summary, interactivity and usage mode have a significant influence on mobile users' satisfaction, usefulness, and intention to use.

Word-boundary and rate effects on upper and lower lip movements in the articulation of the bilabial stop /p/ in Korean

  • Son, Minjung
    • Phonetics and Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.23-31
    • /
    • 2018
  • In this study, we examined how the upper and lower lips articulate to produce labial /p/. Using electromagnetic midsagittal articulography, we collected flesh-point tracking movement data from eight native speakers of Seoul Korean (five females and three males). Individual articulatory movements in /p/ were examined in terms of minimum vertical upper lip position, maximum vertical lower lip position, and the corresponding vertical upper lip position aligned with maximum vertical lower lip position. Using linear mixed-effects models, we tested two factors (word boundary [across-word vs. within-word] and speech rate [comfortable vs. fast]) and their interaction, treating subjects as random effects. The results are summarized as follows. First, maximum lower lip position varied with both word boundary and speech rate, but no interaction was detected. In particular, maximum lower lip position was lower (i.e., less constricted or more reduced) in the fast-rate condition and in the across-word boundary condition. Second, minimum upper lip position, as well as the upper lip position measured at the time of maximum lower lip position, varied only with word boundary: both were consistently lower in the across-word condition. We provide further empirical evidence that lower lip movement is sensitive to both word boundary (a linguistic factor) and speech rate (a paralinguistic factor); this supports the traditional idea that the lower lip is an actively moving articulator. Sensitivity of upper lip movement to word boundary is also observed; this counters the traditional idea that the upper lip is the target area, which presupposes immobility. Taken together, the lip aperture gesture, which takes both upper and lower lip vertical movements into account, is a better indicator than the traditional approach that distinguishes a movable articulator from a target place. With respect to speech rate, the results pattern with cross-linguistic lenition-related allophonic variation, which is known to be more sensitive to fast rates.
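
As an illustration of the statistical analysis described above (not the authors' code), the following Python sketch fits a linear mixed-effects model with fixed effects for word boundary and speech rate, their interaction, and a per-speaker random intercept. The data are synthetic stand-ins for the articulography measures.

```python
# Illustrative sketch of the linear mixed-effects analysis described in the
# abstract: fixed effects for boundary and rate plus their interaction,
# with speaker as a random effect. Data are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 8 * 2 * 2 * 10  # 8 speakers x 2 boundaries x 2 rates x 10 repetitions
df = pd.DataFrame({
    "subject": np.repeat([f"S{i}" for i in range(1, 9)], 40),
    "boundary": np.tile(np.repeat(["across", "within"], 20), 8),
    "rate": np.tile(np.repeat(["comfortable", "fast"], 10), 16),
})
# Synthetic maximum lower-lip height (mm): lower across words and at fast rate,
# mimicking the direction of the reported effects.
df["max_lower_lip"] = (10.0
                       - 0.8 * (df["boundary"] == "across")
                       - 0.6 * (df["rate"] == "fast")
                       + rng.normal(0, 0.5, n))

model = smf.mixedlm("max_lower_lip ~ C(boundary) * C(rate)",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```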

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.619-628
    • /
    • 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition methods based on a 2D pose model can recognize only simple 2D human poses in particular environments. A 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses than a 2D pose model because it can use joint angles and the shape information of body parts. In this paper, we describe the development of interactive game content using a pose recognition interface based on 3D human body joint information. The system is designed so that users can control the game content with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of 3D information for 14 human body joints. We implemented the game content with our pose recognition system and confirmed the efficiency of the proposed system. In the future, we will improve the system so that poses can be recognized robustly in various environments.
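
The template-matching step in the abstract can be sketched directly: compare the current pose, given as 3D positions of 14 joints, against predefined templates and pick the nearest. The normalization, joint layout, and threshold below are hypothetical choices, not the authors' exact method.

```python
# Minimal sketch of the template-matching idea in the abstract: compare the
# current pose (14 joints in 3D) with predefined templates and pick the
# nearest one. The joint layout and threshold are hypothetical.
import numpy as np

def normalize(pose: np.ndarray) -> np.ndarray:
    """Center on the root joint and factor out body size (pose: 14x3)."""
    centered = pose - pose[0]          # joint 0 assumed to be the root
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

def match_pose(current: np.ndarray, templates: dict, threshold=0.35):
    """Return the best-matching template name, or None if all are too far."""
    cur = normalize(current)
    best_name, best_dist = None, np.inf
    for name, tmpl in templates.items():
        # Mean per-joint Euclidean distance after normalization.
        dist = np.linalg.norm(cur - normalize(tmpl), axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```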

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1458-1463
    • /
    • 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of these applications. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the camera's image formation process. Recently, many vision systems have used stereo cameras, especially for 3D tracking. 3D feature-based tracking (3DFBT), a 3D tracking approach using stereo vision, has many advantages compared to other tracking methods. Assuming the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods are based on an exact camera model, so modeling error and sensitivity to lens distortion are built in. Therefore, this paper proposes a 3D feature-based tracking method using an SVM, which is used to solve the reconstruction problem.
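
As I read the abstract, the idea is to replace the explicit calibrated camera model with a learned mapping from stereo pixel coordinates to 3D position. The sketch below shows that structure using support vector regression; the training data are random placeholders and the kernel settings are illustrative, not from the paper.

```python
# Hedged sketch of the reconstruction-by-learning idea: instead of an
# explicitly calibrated camera model, learn the stereo-pixels -> 3D mapping
# with support vector regression. Data and settings are placeholders.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Training set: matched feature locations in the left and right images
# (uL, vL, uR, vR) with known 3D positions (X, Y, Z), e.g. from a target
# moved through the workspace. Random values stand in for real data here.
rng = np.random.default_rng(0)
pixels_stereo = rng.uniform(0, 640, size=(500, 4))   # placeholder inputs
points_3d = rng.uniform(-1, 1, size=(500, 3))        # placeholder outputs

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(pixels_stereo, points_3d)

# At tracking time, reconstruct the 3D position of a matched feature pair.
query = np.array([[320.0, 240.0, 300.0, 241.0]])
print(model.predict(query))   # estimated (X, Y, Z)
```

Because the mapping is learned from observed pairs, lens distortion and camera-model error are absorbed into the regression rather than propagated from a calibration step.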
