• Title/Summary/Keyword: Touch gesture


OWC based Smart TV Remote Controller Design Using Flashlight

  • Mariappan, Vinayagam;Lee, Minwoo;Choi, Byunghoon;Kim, Jooseok;Lee, Jisung;Choi, Seongjhin
    • International Journal of Internet, Broadcasting and Communication / v.10 no.1 / pp.71-76 / 2018
  • The technology convergence of television, communication, and computing devices enables rich social and entertainment experiences through the Smart TV in the personal living space. The powerful Smart TV computing platform supports various user interaction interfaces, such as IR remote control, web-based control, and body-gesture-based control. However, the current Smart TV interaction methods are neither efficient nor user-friendly for accessing different types of media content and services, and an easier way to control and access the Smart TV is strongly required. This paper proposes an optical wireless communication (OWC) based remote controller design for the Smart TV using the smart device flashlight. In this approach, the user's smart device acts as a remote controller through a touch-based interactive application and transfers the user control data to the Smart TV through the flashlight using a visible light communication method. The Smart TV's built-in camera follows the optical camera communication (OCC) principle to decode the data and control the Smart TV's user access functions accordingly. The proposed method avoids the potential effects of radio frequency (RF) radiation on human health, is very simple to use, and requires no gesture movements to control the Smart TV.
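The flashlight-to-camera link described above amounts to on-off keying sampled at the camera's frame rate. The following is a minimal sketch of that idea, not the authors' protocol; the bit length of three frames per bit and the majority-vote decoder are illustrative assumptions.

```python
def ook_encode(bits, bit_frames=3):
    # each bit holds the flashlight on/off for bit_frames camera frames
    return [b for bit in bits for b in [bit] * bit_frames]

def ook_decode(frames, bit_frames=3):
    # receiver side (the TV camera): majority-vote each group of frames
    return [int(sum(frames[i:i + bit_frames]) * 2 > bit_frames)
            for i in range(0, len(frames), bit_frames)]

cmd = [1, 0, 1, 1, 0]              # e.g. a 5-bit remote-control code
frames = ook_encode(cmd)
frames[4] ^= 1                     # flip one frame to simulate noise
print(ook_decode(frames) == cmd)   # -> True
```

Oversampling each bit across several frames is what lets the camera-based receiver tolerate an occasional misread frame.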

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE:Software and Applications / v.36 no.6 / pp.471-478 / 2009
  • Recently, research on user interfaces has advanced remarkably due to the explosive growth of 3-dimensional content and applications and the widening range of computer users. This paper proposes a novel method to manipulate windows efficiently using only intuitive hand motions. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional marker information. To overcome these defects, we propose a novel visual touchless interface. First, to control windows by hand, we detect the hand region using the hue channel in HSV color space. The distance transform is applied to detect the centroid of the hand, and the curvature of the hand contour is used to locate the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions; the recognized gesture becomes a command to control a window. Because the method adopts a stereo camera, the user can manipulate windows with a sense of depth in the real environment. Intuitive manipulation is also available because the proposed method supports visual touch of the virtual object the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application based on the proposed interface.
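The palm-centroid step in the pipeline above can be sketched with a naive distance transform. This toy, pure-Python version assumes a binary skin mask has already been extracted (the paper obtains it by thresholding the HSV hue channel) and is an illustration of the idea, not the authors' implementation; real systems would use an optimized routine such as OpenCV's distance transform.

```python
def palm_center(mask):
    # naive distance transform over a boolean grid: for each hand cell,
    # the distance to the nearest background cell; the maximum lies
    # deepest inside the palm, a robust centroid for an irregular blob
    h, w = len(mask), len(mask[0])
    fg = [(r, c) for r in range(h) for c in range(w) if mask[r][c]]
    bg = [(r, c) for r in range(h) for c in range(w) if not mask[r][c]]
    def nearest_bg(p):
        return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in bg)
    return max(fg, key=nearest_bg)

# toy "hand": a 7x7 skin blob inside a 9x9 frame; the centre cell is deepest
mask = [[1 <= r <= 7 and 1 <= c <= 7 for c in range(9)] for r in range(9)]
print(palm_center(mask))  # -> (4, 4)
```

The distance-transform maximum is preferred over a simple mean of pixel coordinates because extended fingers would otherwise drag the centroid away from the palm.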

Interactive drawing with user's intentions using image segmentation

  • Lim, Sooyeon
    • International Journal of Internet, Broadcasting and Communication / v.10 no.3 / pp.73-80 / 2018
  • This study introduces an interactive drawing system, a tool that allows users to sketch and draw according to their own intentions. The proposed system enables users to express themselves more creatively by reproducing an original idea as a drawing and transforming it with their bodies. The user can actively participate in the production of the artwork by exploring the unique formative language of the spectator. In addition, the user is given the opportunity to experience a creative process by transforming an arbitrary drawing into various shapes with gestures. The interactive drawing system uses segmentation of the drawing image as a way to extend the user's initial drawing idea, transforming a two-dimensional drawing into a volume-like, three-dimensional form through image segmentation. In this process, a psychological space is created that can stimulate the user's imagination and project the object of desire. This personification of the drawing gives the user a sense of familiarity with the artwork and indirectly expresses his or her emotions to others. Interactive drawing, which has moved beyond information transfer to an emotional concept of interaction, can thus create a cooperative sensory image across the user's time and space and occupy an important position in the multimedia society.

General Touch Gesture Definition and Recognition for Tabletop display (테이블탑 디스플레이에서 활용 가능한 범용적인 터치 제스처 정의 및 인식)

  • Park, Jae-Wan;Kim, Jong-Gu;Lee, Chil-Woo
    • Proceedings of the Korean Information Science Society Conference / 2010.06b / pp.184-187 / 2010
  • This paper proposes the learning and use of gestures with an HMM on a tabletop display, one of several approaches attempted for touch gesture recognition. Touch gestures can be classified as single-stroke or multi-stroke according to the strokes of the gesture. The gesture input can therefore be analyzed into direction codes using the direction vectors that change along the touch trajectory across video frames. The analyzed direction codes are trained by machine learning and then used in recognition experiments. For gesture recognition training, 100 direction-code data samples were used for a total of 10 gestures. Gestures with a defined shape (drags in 4 directions, circle, triangle, ㄱ and ㄴ shapes, >, <) can be recognized by comparison with the predefined gestures. Gestures that are not predefined can be defined by the user through machine learning, with a meaning assigned, so that desired gestures can be used selectively. In this paper, we implemented a system that recognizes user touch gestures in a tabletop display environment. Future work should identify algorithms suited to touch gesture recognition on tabletop displays and extend the research to multi-touch gesture recognition.
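The direction-code analysis described above resembles a Freeman chain code: each step of the touch trajectory is quantized to one of a fixed set of directions before HMM training. The sketch below assumes 8 directions and a y-axis growing upward; it illustrates the quantization idea only, not the paper's exact coding or the HMM stage.

```python
import math

def direction_codes(points, n_dirs=8):
    # quantize each consecutive point pair of a touch trajectory into
    # one of n_dirs codes (0 = east, increasing counter-clockwise)
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(angle / (2 * math.pi / n_dirs))) % n_dirs)
    return codes

# a right-then-up stroke: two eastward steps, then two northward steps
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(direction_codes(stroke))  # -> [0, 0, 2, 2]
```

Sequences of such codes are what the HMM is trained on; stroke shape is captured while absolute position and scale are discarded.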


States, Behaviors and Cues of Infants (영아의 상태, 행동, 암시)

  • Kim, Tae-Im
    • Korean Parent-Child Health Journal / v.1 / pp.56-74 / 1998
  • The language of the newborn, like that of adults, is one of gesture, posture, and expression (Lewis, 1980). Helping parents understand and respond to their newborn's cues will make caring for their baby more enjoyable and may well provide the foundation for a communicative bond that will last a lifetime. Infant state provides a dynamic pattern reflecting the full behavioral repertoire of the healthy infant (Brazelton, 1973, 1984). States are organized in a predictable temporal sequence and provide a basic classification of conditions that occur over and over again (Wolff, 1987). They are recognized by characteristic behavioral patterns, physiological changes, and the infant's level of responsiveness. Most importantly, however, states provide caregivers a framework for observing and understanding infants' behavior. When parents know how to determine whether their infant is asleep, awake, or drowsy, and they know the implications that recognition of states has for both the infant's behavior and for their caregiving, then many things about taking care of a newborn become much easier and more rewarding. Most parents have the skills and desire to do what is best for their infant. The skills parents bring to the interaction are: the ability to read their infant's cues; to stimulate the baby through touch, movement, talking, and looking; and to respond in a contingent manner to the infant's signals. Among the crucial skills infants bring to the interaction are perceptual abilities: hearing and seeing, the capacity to look at another for a period of time, and the ability to smile, be consoled, adapt their body to holding or movement, and be regular and predictable in responding. Research demonstrates that the absence of these skills in either partner adversely affects parent-infant interaction and later development.
Observing early parent-infant interactions during the hospital stay is important in order to identify parent-infant pairs in need of continued monitoring (Barnard et al., 1989).


Comparison Study of Web Application Development Environments in Smartphone (스마트폰 상에서의 웹 응용프로그램 개발 환경 비교)

  • Lee, Go-Eun;Lee, Jong-Woo
    • The Journal of the Korea Contents Association / v.10 no.12 / pp.155-163 / 2010
  • Due to the complex registration and downloading process of native applications and their non-standardized APIs, mobile web applications are now an alternative form of software for smartphones. A hybrid web application, one type of mobile software, is easy to develop and achieves reasonable performance by using the webkit engine built into smartphones. It can be developed easily using existing programming skills such as HTML, JavaScript, and CSS, and these techniques can be used on any smartphone regardless of platform. Most smartphones include a webkit or other web rendering engine for high performance and smooth display in the web browser; webkit is equipped in both the iPhone and Android phones. In this paper, we compare various aspects of the webkit APIs of the iPhone and Android phones, such as screen font size, screen orientation, touch events, gesture events, and their performance. We also evaluate which one is more convenient for developers writing web programs using webkit. As a result, we found that webkit on the iPhone performs better than on Android.

Digital Mirror System with Machine Learning and Microservices (머신 러닝과 Microservice 기반 디지털 미러 시스템)

  • Song, Myeong Ho;Kim, Soo Dong
    • KIPS Transactions on Software and Data Engineering / v.9 no.9 / pp.267-280 / 2020
  • A mirror is a physical reflective surface, typically glass coated with a metal amalgam, that reflects an image clearly. Mirrors are available everywhere at any time and are an essential tool for observing our faces and appearance. With the advent of modern software technology, we are motivated to enhance the reflection capability of mirrors with the convenience and intelligence of realtime processing, microservices, and machine learning. In this paper, we present the development of a Digital Mirror System that provides realtime reflection functionality as a mirror while providing additional convenience and intelligence, including personal information retrieval, public information retrieval, appearance age detection, and emotion detection. Moreover, it provides a multi-modal user interface that is touch-based, voice-based, and gesture-based. We present our design and discuss how it can be implemented with current technology to deliver realtime mirror reflection while providing useful information and machine learning intelligence.

A Study on the Windows Application Control Model Based on Leap Motion (립모션 기반의 윈도우즈 애플리케이션 제어 모델에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.111-116 / 2019
  • With the recent rapid development of computer capabilities, various technologies that can facilitate interaction between humans and computers are being studied. The paradigm is shifting from a GUI using traditional input devices to an NUI using the body, with techniques such as 3D motion, haptics, and multi-touch. Various studies have been conducted on transferring human movements to computers using sensors, and with the development of optical sensors that can acquire 3D objects, the range of applications in the industrial, medical, and user interface fields has expanded. In this paper, I provide a model based on Leap Motion that can launch programs through gestures instead of the mouse, the default input device, and control Windows. The proposed model converges with an Android application and can be controlled by various media and voice instruction functions, using voice recognition and buttons through a connection with a main client. It is expected that Internet media such as video and music can be controlled not only from the client computer but also from an application at a distance, enabling convenient media viewing through the proposed model.

A Study on Continuity of User Experience in Multi-device Environment (멀티 디바이스 환경에서 사용자 경험의 연속성에 관한 고찰)

  • Lee, Young-Ju
    • Journal of Digital Convergence / v.16 no.11 / pp.495-500 / 2018
  • This study examined the factors that can enhance the continuity of the user experience in a multi-device environment. First, regarding structural differences and continuity of tasks, functional differences arising from the characteristics of cross media, such as OS differences and the use of mouse versus touch gestures, were found to interfere with continuity, while metaphor and ambience can be used to increase relevance and visibility. In the continuity of visual memory and cognition, familiarity was provided by the identity and similarity of visual perception elements, and familiarity factors were found to be closely related to continuity. Finally, for continuity of the user experience, visibility factors as well as consistency in the meaning and layout of information contribute to continuity. Based on this, regression analysis on familiarity, consistency, correlation, and visibility showed that familiarity, consistency, and correlation significantly influence the continuity dimension of the user experience, while visibility did not have a significant effect on continuity.

Hierarchical Hand Pose Model for Hand Expression Recognition (손 표현 인식을 위한 계층적 손 자세 모델)

  • Heo, Gyeongyong;Song, Bok Deuk;Kim, Ji-Hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1323-1329 / 2021
  • For hand expression recognition, hand pose recognition based on the static shape of the hand and hand gesture recognition based on dynamic hand movement are used together. In this paper, we propose a hierarchical hand pose model based on finger position and shape for hand expression recognition. For hand pose recognition, a finger model representing the finger state and a hand pose model using the finger states are hierarchically constructed on top of the open-source MediaPipe framework. The finger model is itself hierarchically constructed from the bending of one finger and the touch of two fingers. The proposed model can be used for various applications that transmit information through the hands, and its usefulness was verified by applying it to number recognition in sign language. The proposed model is expected to have various applications in computer user interfaces beyond sign language recognition.
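The hierarchy described above, per-finger states feeding a hand-level pose, can be sketched as follows. The bent/straight rule and the digit-counting pose mapping are illustrative assumptions, not the authors' exact model; the four joint positions per finger mirror the MCP/PIP/DIP/TIP landmarks that MediaPipe provides for each finger.

```python
def finger_bent(joints):
    # finger state from four 2D joint positions (base MCP, PIP, DIP, TIP):
    # count the finger as bent when the tip sits closer to the base
    # than the middle (PIP) joint does - a simple curl heuristic
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    mcp, pip, dip, tip = joints
    return d2(tip, mcp) < d2(pip, mcp)

def hand_pose(fingers):
    # hierarchical step: combine the five per-finger states into a
    # hand-level label; here, the number of extended fingers -> a digit
    return sum(not finger_bent(f) for f in fingers)

straight = [(0, 0), (0, 1), (0, 2), (0, 3)]      # tip far from base
curled = [(0, 0), (0, 1), (0, 1.5), (0, 0.5)]    # tip folded back
print(hand_pose([straight, straight, curled, curled, curled]))  # -> 2
```

Layering the model this way means only the small finger-state classifiers touch raw landmarks, while pose definitions stay readable combinations of finger states.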