• Title/Summary/Keyword: Touch interaction


A Tangible Floating Display System for Interaction

  • Kim, Youngmin;Kang, Hoonjong;Ahn, Yangkeun;Choi, Kwang-Soon;Park, Byoungha;Hong, Sunghee;Jung, Kwang-Mo
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.1
    • /
    • pp.32-36
    • /
    • 2014
  • A tangible floating display that can provide different perspective views without special glasses is introduced. The proposed system can display perspective floating images in the space in front of the system with the help of concave mirrors. To let users interact and receive a sense of touch without wearing special equipment, the proposed system adopts an ultrasound focusing technology. To provide an immersive experience to viewers, the proposed system consists of a tangible floating display system and a multiple-view imaging system for generating three lenticular displays in front of the users.
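
The floating image the abstract mentions is formed by a concave mirror, so its position follows the standard thin-mirror equation. Below is a minimal sketch of that relation, not taken from the paper; the focal length and object distance are made-up illustration values rather than the authors' configuration.

```python
def floating_image(focal_length_mm: float, object_distance_mm: float):
    """Return (image_distance_mm, magnification) for a concave mirror.

    Uses 1/d_o + 1/d_i = 1/f; a positive image distance means a real image
    that 'floats' in front of the mirror, which is the effect a tangible
    floating display relies on.
    """
    if object_distance_mm == focal_length_mm:
        raise ValueError("object at the focal plane: image forms at infinity")
    image_distance = 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)
    magnification = -image_distance / object_distance_mm
    return image_distance, magnification

if __name__ == "__main__":
    # Hypothetical example: a 300 mm focal-length mirror with the display
    # panel 450 mm away produces a real image 900 mm from the mirror.
    d_i, m = floating_image(300.0, 450.0)
    print(f"image distance: {d_i:.0f} mm, magnification: {m:.1f}")
```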

Remote Touch Interaction System for Intelligent Office Control (지능형 오피스 환경 제어를 위한 원격 터치 인터렉션 시스템)

  • Bae, Ki-Tae;Nam, Byeong-Cheol
    • Proceedings of the KAIS Fall Conference
    • /
    • 2010.05a
    • /
    • pp.221-224
    • /
    • 2010
  • In this paper, we propose an intelligent information-processing system that allows a user to freely control an intelligent office environment using a laser pointer. Using a low-cost webcam, the position of the laser pointer spot in the camera input is automatically detected, and the detected position coordinates are matched to control-command events on the computer. After matching, the user can freely control the display screen by moving the laser pointer, without the help of an assistant. Through a virtual keypad interface that allows arbitrary regions of the screen to be designated as specific command regions, the user can control particular programs or home appliances. Experimental results confirm that the proposed system outperforms existing remote-control methods in terms of both cost and performance.

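The abstract describes a pipeline that detects the laser-pointer spot in webcam frames and matches its coordinates to command regions. The following is a minimal sketch of that kind of pipeline, not the authors' code, assuming OpenCV for capture and image processing; the command regions, thresholds, and printed actions are illustrative assumptions.

```python
import cv2

# Hypothetical virtual keypad: relative screen regions mapped to command events.
COMMAND_REGIONS = {
    "lights_on":  (0.0, 0.0, 0.5, 0.5),   # (x0, y0, x1, y1) in relative coords
    "lights_off": (0.5, 0.0, 1.0, 0.5),
    "projector":  (0.0, 0.5, 1.0, 1.0),
}

def detect_laser_spot(frame_bgr, min_brightness=220):
    """Return the (x, y) centroid of the bright laser blob, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # A laser spot shows up as a very bright region in the value channel.
    mask = cv2.inRange(hsv, (0, 0, min_brightness), (180, 255, 255))
    moments = cv2.moments(mask)
    if moments["m00"] < 1:          # no bright pixels found
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])

def match_command(spot_xy, frame_shape):
    """Map a detected spot to the command region that contains it."""
    h, w = frame_shape[:2]
    rx, ry = spot_xy[0] / w, spot_xy[1] / h
    for command, (x0, y0, x1, y1) in COMMAND_REGIONS.items():
        if x0 <= rx < x1 and y0 <= ry < y1:
            return command
    return None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    spot = detect_laser_spot(frame)
    if spot is not None:
        cmd = match_command(spot, frame.shape)
        if cmd:
            print("trigger:", cmd)   # a real system would fire the control event here
    if cv2.waitKey(1) == 27:         # Esc to quit
        break
cap.release()
```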

Universal Multi-Touch Interaction System for Multi-Input Content Creation Environment (다중 입출력 컨텐츠 제작 환경을 위한 범용 멀티 터치 인터렉션 시스템)

  • Nam, Byeong-Cheol;Na, In-Sic;Bae, Ki-Tae
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06b
    • /
    • pp.216-221
    • /
    • 2010
  • Early makers of electronic instruments used touch devices they built themselves to control sound. The first multi-touch system was developed in 1982, and despite extensive research and performance improvements through the early 21st century, popularization still seemed remote. Starting with Apple's iPhone in 2007, however, multi-touch interfaces became highly popular, and smartphones allowed the public to experience the characteristics of multi-touch interfaces more easily. In this trend, multi-touch interfaces have naturally drawn attention and shown various possibilities, but the lack of related content and of a development infrastructure still leaves them short of widespread adoption. In this paper, we point out the problems of current multi-touch interfaces and propose a universal multi-touch interaction system for revitalizing the multi-touch interface market and for creators of multi-touch content. Experiments demonstrate the efficiency of the proposed system, and we further suggest ways in which the multi-touch content market can develop.

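The abstract proposes a device-independent ("universal") multi-touch layer for content creators. Below is a minimal sketch, not the authors' system, of what such a layer typically does: normalize device-specific touch points into a common event shape and dispatch them to any registered content application; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TouchEvent:
    touch_id: int      # stable id while the finger stays down
    x: float           # normalized 0..1, so content is device independent
    y: float
    phase: str         # "down" | "move" | "up"

class MultiTouchHub:
    """Device-independent dispatcher for multi-touch events."""
    def __init__(self) -> None:
        self._listeners: List[Callable[[TouchEvent], None]] = []

    def subscribe(self, listener: Callable[[TouchEvent], None]) -> None:
        self._listeners.append(listener)

    def publish_raw(self, touch_id: int, px: int, py: int,
                    screen_w: int, screen_h: int, phase: str) -> None:
        # Convert raw device pixels to normalized coordinates and fan out.
        event = TouchEvent(touch_id, px / screen_w, py / screen_h, phase)
        for listener in self._listeners:
            listener(event)

# Usage: a content application only ever sees normalized TouchEvents.
hub = MultiTouchHub()
hub.subscribe(lambda e: print(f"touch {e.touch_id} {e.phase} at ({e.x:.2f}, {e.y:.2f})"))
hub.publish_raw(touch_id=1, px=640, py=360, screen_w=1920, screen_h=1080, phase="down")
```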

Implementation of the Multi-Gestures Air-Space Mouse using the Diffused Illumination Method. (확산 투광방식을 이용한 멀티-제스처 공간 마우스 구현)

  • Lee, Sung-Jae;Lee, Woo-Beom
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2009.11a
    • /
    • pp.353-354
    • /
    • 2009
  • Multi-touch technology, which enables multiple simultaneous user inputs, has recently become one of the most prominent technologies in the HCI (Human-Computer Interaction) field. However, because such touch technologies depend on the input device, they suffer from spatial constraints. In this paper, we implement an air-space mouse that is free of spatial constraints on user input by using the diffused illumination (DI) method, which is used to process user input on multi-touch display devices. The proposed air-space mouse handles basic mouse events and also provides a way to improve performance for extended user multi-gestures. The implemented air-space mouse showed successful results when applied to a Windows application environment.
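
In a diffused-illumination setup, IR light scattered behind a surface makes nearby hands appear as bright blobs to an IR camera, and those blobs can drive pointer movement and gestures. The sketch below illustrates that idea with OpenCV; it is not the authors' implementation, and the camera index, thresholds, and blob-count-to-gesture rule are assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)          # assumed to be the IR camera behind the diffuser
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (11, 11), 0)
    # Hands close to the diffuser reflect more IR light, so they appear bright.
    _, mask = cv2.threshold(blur, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 150]   # ignore noise

    if blobs:
        # Largest blob moves the cursor; the number of blobs picks the gesture.
        largest = max(blobs, key=cv2.contourArea)
        m = cv2.moments(largest)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        gesture = {1: "move", 2: "click/zoom"}.get(len(blobs), "extended gesture")
        print(f"cursor -> ({cx:.0f}, {cy:.0f}), gesture: {gesture}")

    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
```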

States, Behaviors and Cues of Infants (영아의 상태, 행동, 암시)

  • Kim, Tae-Im
    • Korean Parent-Child Health Journal
    • /
    • v.1
    • /
    • pp.56-74
    • /
    • 1998
  • The language of the newborn, like that of adults, is one of gesture, posture, and expression (Lewis, 1980). Helping parents understand and respond to their newborn's cues will make caring for their baby more enjoyable and may well provide the foundation for a communicative bond that will last a lifetime. Infant state provides a dynamic pattern reflecting the full behavioral repertoire of the healthy infant (Brazelton, 1973, 1984). States are organized in a predictable temporal sequence and provide a basic classification of conditions that occur over and over again (Wolff, 1987). They are recognized by characteristic behavioral patterns, physiological changes, and infants' level of responsiveness. Most importantly, however, states provide caregivers a framework for observing and understanding infants' behavior. When parents know how to determine whether their infant is asleep, awake, or drowsy, and they know the implications that recognition of states has both for the infant's behavior and for their caregiving, then a lot of things about taking care of a newborn become much easier and more rewarding. Most parents have the skills and desire to do what is best for their infant. The skills parents bring to the interaction are the ability to read their infant's cues; to stimulate the baby through touch, movement, talking, and looking; and to respond in a contingent manner to the infant's signals. Among the crucial skills infants bring to the interaction are perceptual abilities: hearing and seeing, the capacity to look at another for a period of time, the ability to smile, be consoled, adapt their body to holding or movement, and be regular and predictable in responding. Research demonstrates that the absence of these skills in either partner adversely affects parent-infant interaction and later development. Observing early parent-infant interactions during the hospital stay is important in order to identify parent-infant pairs in need of continued monitoring (Barnard, et al., 1989).


Intuitive Manipulation of Deformable Cloth Object Based on Augmented Reality for Mobile Game (모바일 게임을 위한 증강현실 기반 직관적 변형 직물객체 조작)

  • Kim, Sang-Joon;Hong, Min;Choi, Yoo-Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.4
    • /
    • pp.159-168
    • /
    • 2018
  • Recently, mobile augmented reality games, which have been attracting great attention, are considered a good approach to increasing immersion. In conventional augmented reality-based games that recognize target objects using a mobile camera and show the matching game characters, touch-based interaction is mainly used. In this paper, we propose an intuitive interaction method that manipulates a deformable game object by moving the target image of augmented reality in order to enhance the immersion of the game. In the proposed method, the deformable object is intuitively manipulated by calculating the distance and direction between the target images and by adjusting the external force applied to the deformable object accordingly. We focus on the deformable cloth object, which is widely used for natural object animation in game contents, and implement a natural cloth simulation that interacts with game objects represented by wind and rigid objects. In the experiments, we compare a previous commercial cloth model with the proposed method and show that the proposed method can represent cloth animation more realistically.
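
The abstract's core mechanism is turning the distance and direction of target-image motion into an external force on the cloth. The following sketch, not the paper's code, shows one way that mapping could look; the gain, clamp, damping, and grid size are illustrative, and the spring constraints of a full mass-spring cloth are omitted for brevity.

```python
import numpy as np

FORCE_GAIN = 4.0          # how strongly marker motion pushes the cloth
MAX_FORCE = 10.0          # clamp so fast motion does not destabilize the simulation

def marker_force(prev_pos: np.ndarray, curr_pos: np.ndarray) -> np.ndarray:
    """External force derived from how far and in which direction the target moved."""
    displacement = curr_pos - prev_pos            # direction and distance in one vector
    force = FORCE_GAIN * displacement
    norm = np.linalg.norm(force)
    if norm > MAX_FORCE:
        force *= MAX_FORCE / norm
    return force

def step_cloth(positions, velocities, force, dt=1 / 60, damping=0.98):
    """Apply the same external force to all cloth particles (unit mass, springs omitted)."""
    velocities = damping * (velocities + force * dt)
    return positions + velocities * dt, velocities

# Usage with made-up marker positions (e.g. from an AR tracking library):
prev, curr = np.array([0.0, 0.0, 0.0]), np.array([0.05, 0.0, 0.02])
f = marker_force(prev, curr)
cloth_pos = np.zeros((16 * 16, 3))                # a 16x16 particle grid
cloth_vel = np.zeros_like(cloth_pos)
cloth_pos, cloth_vel = step_cloth(cloth_pos, cloth_vel, f)
print("external force:", f, "| first particle now at", cloth_pos[0])
```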

Development of Table-Top System for Using Educational Contents (교육용 콘텐츠 활용을 위한 테이블 탑 시스템)

  • Kim, ki-hyun;Kim, Jung-hoon;Kang, Maeng-kwan;Park, hyun-woo;Lee, dong-hoon;Yun, tae-soo
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2008.05a
    • /
    • pp.108-111
    • /
    • 2008
  • In this paper, we propose a table-top system for using educational contents. A table-top system, in which visual information and the interaction space are matched, offers an easy and intuitive interface. Moreover, because the effectiveness of education increases when educational content can be recognized and interacted with intuitively, the table-top system is well suited to such content. In this paper, multi-touch, which is the main advantage of the table-top system, was implemented in educational content authored in Flash. This system can make up for the lack of intuitive interaction between a user and the contents that is found in existing desktop-style systems.


Multimodal Interaction Framework for Collaborative Augmented Reality in Education

  • Asiri, Dalia Mohammed Eissa;Allehaibi, Khalid Hamed;Basori, Ahmad Hoirul
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.268-282
    • /
    • 2022
  • One of the most important technologies today is augmented reality, which allows users to experience the real world combined with virtual objects. This technology is compelling and has been applied in many sectors such as shopping and medicine, and it has also entered the education sector. In education, AR technology has become widely used because of its effectiveness: it has many benefits, such as arousing students' interest in learning imaginative concepts that are difficult to understand. On the other hand, studies have shown that collaboration between students increases learning opportunities through the exchange of information, which is known as collaborative learning. The use of multimodal input creates a distinctive and engaging experience, especially for students, as it increases users' interaction with the technology. This research aims at developing a collaborative framework for improving the achievement of 6th graders by designing a framework that integrates collaboration with multimodal input (hand gesture and touch), with attention to making the framework effective, fun, and easy to use. The framework was applied to reformulate the genetics and traits lesson from the 6th-grade science textbook (first semester, second lesson) in an interactive manner, by creating a video based on consultations with science teachers and a puzzle game into which the lesson images were inserted. The framework also relied on cooperation between students to solve the questions. The findings showed a significant difference between the post-test and pre-test mean scores of the experimental group in the science course at the levels of remembering, understanding, and applying, which indicates the success of the framework; in addition, 43 students preferred to use the framework over traditional education.
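
The framework's multimodal input means that hand-gesture and touch events both drive the same lesson actions. The sketch below, which is not from the paper, illustrates that routing idea; the gesture names, touch names, and action names are hypothetical.

```python
from typing import Callable, Dict, Optional

class MultimodalRouter:
    """Routes hand-gesture and touch events to the same set of lesson actions."""
    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[], None]] = {}
        # Both modalities resolve to the same abstract action names.
        self._gesture_map = {"swipe_right": "next_piece", "pinch": "select_piece"}
        self._touch_map = {"drag_right": "next_piece", "tap": "select_piece"}

    def register(self, action: str, handler: Callable[[], None]) -> None:
        self._actions[action] = handler

    def on_gesture(self, gesture: str) -> None:
        self._dispatch(self._gesture_map.get(gesture))

    def on_touch(self, touch: str) -> None:
        self._dispatch(self._touch_map.get(touch))

    def _dispatch(self, action: Optional[str]) -> None:
        if action is not None and action in self._actions:
            self._actions[action]()

# Usage: selecting a puzzle piece works the same way from either modality.
router = MultimodalRouter()
router.register("select_piece", lambda: print("piece selected"))
router.on_touch("tap")        # -> piece selected
router.on_gesture("pinch")    # -> piece selected
```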

An Arrangement Method of Voice and Sound Feedback According to the Operation: For Interaction of Domestic Appliance (조작 방식에 따른 음성과 소리 피드백의 할당 방법 가전제품과의 상호작용을 중심으로)

  • Hong, Eun-ji;Hwang, Hae-jeong;Kang, Youn-ah
    • Journal of the HCI Society of Korea
    • /
    • v.11 no.2
    • /
    • pp.15-22
    • /
    • 2016
  • The ways to interact with digital appliances are becoming more diverse. Users can control appliances using a remote control and a touch-screen, and appliances can send users feedback through various channels such as sound, voice, and visual signals. However, there is little research on how to decide which output method to use for providing feedback according to the user's input method. In this study, we designed an experiment that seeks to identify how to appropriately match the output method - voice and sound - to the user input - voice and button. We made four types of interaction with two kinds of input methods and two kinds of output methods. For the four interaction types, we compared usability, perceived satisfaction, preference, and suitability. Results reveal that the output method affects the ease of use and perceived satisfaction of the input method. The voice input method with sound feedback was evaluated as more satisfying than with voice feedback, whereas the keying input method with voice feedback was evaluated as more satisfying than with sound feedback. The keying input method was more dependent on the output method than the voice input method. We also found that the feedback method of appliances determines the perceived appropriateness of the interaction.
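
The arrangement the results point to can be read as a simple policy: voice input pairs better with sound feedback, while button (keying) input pairs better with voice feedback. The sketch below, not from the paper, encodes that mapping as an illustrative policy table.

```python
# Illustrative policy table reflecting the reported findings.
FEEDBACK_POLICY = {
    "voice_input": "sound_feedback",
    "button_input": "voice_feedback",
}

def choose_feedback(input_method: str) -> str:
    """Pick an output method for the appliance's response to the given input method."""
    return FEEDBACK_POLICY.get(input_method, "sound_feedback")  # assumed safe default

print(choose_feedback("voice_input"))   # sound_feedback
print(choose_feedback("button_input"))  # voice_feedback
```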

Ambient Display: Picture Navigation Based on User Movement (앰비언트 디스플레이: 사용자 위치 이동 기반의 사진 내비게이션)

  • Yoon, Yeo-Jin;Ryu, Han-Sol;Park, Chan-Yong;Park, Soo-Jun;Choi, Soo-Mi
    • Journal of the HCI Society of Korea
    • /
    • v.2 no.2
    • /
    • pp.27-34
    • /
    • 2007
  • In ubiquitous computing, there is increasing demand for ubiquitous displays that react to a user's actions. We propose a method of navigating pictures on an ambient display using implicit interactions. The ambient display can identify the user and measure how far away they are using an RFID reader and ultrasonic sensors. When the user is a long way from the display, it acts as a digital picture frame and does not attract attention. When the user comes within an appropriate range for interaction, the display shows pictures that are related to the user and provides quasi-3D navigation using the TIP (tour into the picture) method. In addition, menus can be manipulated directly on a touch-screen or remotely using an air mouse. In an emergency, LEDs around the display flash to alert the user.

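The abstract describes behavior that depends on who the user is (via RFID) and how far away they are (via ultrasonic sensors). Below is a minimal sketch, not the authors' code, of that proximity logic; the distance thresholds and mode descriptions are illustrative assumptions.

```python
from typing import Optional

AMBIENT_RANGE_CM = 250       # farther than this: behave like a quiet picture display
INTERACT_RANGE_CM = 120      # closer than this: personalized quasi-3D navigation

def display_mode(user_id: Optional[str], distance_cm: float) -> str:
    """Pick a display mode from the identified user and their measured distance."""
    if user_id is None or distance_cm > AMBIENT_RANGE_CM:
        return "ambient: cycle generic pictures without attracting attention"
    if distance_cm > INTERACT_RANGE_CM:
        return f"approach: show pictures related to {user_id}"
    return f"interact: TIP (tour-into-the-picture) navigation for {user_id}"

# Usage with made-up sensor readings:
print(display_mode(None, 400.0))        # nobody recognized, far away
print(display_mode("user_42", 180.0))   # recognized user walking closer
print(display_mode("user_42", 80.0))    # within interaction range
```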