• Title/Summary/Keyword: Multi-touch Gesture

Search Result 28

A 3D Parametric CAD System for Smart Devices (스마트 디바이스를 위한 3D 파라메트릭 CAD 시스템)

  • Kang, Yuna; Han, Soonhung
    • Korean Journal of Computational Design and Engineering / v.19 no.2 / pp.191-201 / 2014
  • A 3D CAD system that can be used on a smart device is proposed. Smart devices are now part of everyday life and are widely used in various industrial domains. The proposed 3D CAD system allows an engineer to model a product rapidly and intuitively on a smart device while moving around an engineering site. There are several obstacles to developing a 3D CAD system on a smart device, such as the low computing power and small screens of smart devices, imprecise touch input, and the transfer of created 3D models between PCs and smart devices. The authors discuss the design of a 3D CAD system for a smart device: the selection of modeling operations, the assignment of touch gestures to these operations, and the construction of a geometric kernel that creates both meshes and a procedural CAD model. The proposed CAD system has been implemented and validated through user tests and case studies on test examples. Using the proposed system, an editable 3D model can be produced swiftly and simply in a smart-device environment, reducing engineers' design time.
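
The assignment of touch gestures to modeling operations described in the abstract can be pictured as a dispatch table. This is a minimal, hypothetical sketch: the gesture names, operations, and model representation are illustrative assumptions, not the paper's actual mapping.

```python
# Hypothetical gesture-to-operation dispatch; the real system's gestures,
# operations, and model structure are not specified in the abstract.
def extrude(model):
    return model + ["extrude"]

def revolve(model):
    return model + ["revolve"]

GESTURE_OPS = {
    "one_finger_drag_up": extrude,   # pull a 2D profile into a solid
    "two_finger_rotate": revolve,    # sweep a profile about an axis
}

def apply_gesture(gesture, model):
    """Dispatch a recognized touch gesture to its modeling operation;
    unrecognized gestures leave the model unchanged."""
    op = GESTURE_OPS.get(gesture)
    return op(model) if op else model

print(apply_gesture("one_finger_drag_up", []))  # ['extrude']
```

Keeping the mapping in a table, rather than hard-coding it, is one way such a system could let gestures be reassigned to operations without touching the geometric kernel.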

Design and Implementation of Multi-recognition for Wireless Presenter (다중인식 기반의 무선 프리젠터 설계 및 구현)

  • Yoo, Sang-Hyun; Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2014.01a / pp.295-298 / 2014
  • This study concerns the development of a wireless presenter application, based on the Android smartphone platform, that helps presentations proceed efficiently, reflecting the growing importance of and need for presentations. Recent smartphones have spread rapidly thanks to their portability and diverse functions, and they come with a variety of built-in sensors. By exploiting these characteristics, a smartphone itself can be used as a wireless presenter without any additional device, and research and development in this direction is under way. Existing presenter applications, however, either have interfaces so complex that they are inconvenient to use, or interfaces so simple that they offer only limited functions. To overcome these shortcomings, this paper proposes a wireless presenter that uses gesture recognition so the application can be operated intuitively, and multi-touch gestures so that it can accommodate a wide range of functions.

Analyzing the Efficient Elements on Multi-Touch Based Device for Web Application (웹 어플리케이션을 위한 멀티터치 기반 시스템의 효율적 요소 분석)

  • Cho, Jae-Joon; Jang, Hyun-Su; Cho, Ok-Hue; Lee, Won-Hyung
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.915-920 / 2009
  • In developing intelligent information devices, the most important element is the ability to deliver the input a user intends to the device without complexity. Using means from our daily routine, such as voice, facial expression, gesture, and physical contact, we can interact with devices easily and simply. In this paper, we present an interactive surface system that allows users to employ hand gestures in place of a mouse and keyboard when interacting with web applications.
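
A basic multi-touch interaction such a surface system would need is pinch-to-zoom, which reduces to comparing the distance between two touch points across frames. This is a minimal sketch under that assumption; the function name and tuple layout are illustrative, not from the paper.

```python
import math

def pinch_scale(before, after):
    """Zoom factor implied by two touch points moving between frames.
    before, after: ((x, y), (x, y)) finger positions in the two frames."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(*after) / dist(*before)

# Fingers spreading from 100 px apart to 200 px apart imply a 2x zoom.
print(pinch_scale(((0, 0), (100, 0)), ((0, 0), (200, 0))))  # 2.0
```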

General Touch Gesture Definition and Recognition for Tabletop display (테이블탑 디스플레이에서 활용 가능한 범용적인 터치 제스처 정의 및 인식)

  • Park, Jae-Wan; Kim, Jong-Gu; Lee, Chil-Woo
    • Proceedings of the Korean Information Science Society Conference / 2010.06b / pp.184-187 / 2010
  • Among the various approaches attempted for touch gesture recognition, this paper proposes learning and using gestures with an HMM on a tabletop display. Touch gestures can be classified into single-stroke and multi-stroke gestures according to their strokes. A gesture input can therefore be analyzed into direction codes, using the direction vectors that change along the touch trajectory across video frames. The analyzed direction codes are trained by machine learning and then used in recognition experiments. For training, 100 direction-code samples were used for each of 10 gestures. Gestures with a defined shape (four-direction drags, circle, triangle, ㄱ and ㄴ shapes, >, <) can be recognized by comparison with the predefined gestures. Gestures that are not predefined can be trained through machine learning, assigned a meaning by the user, and then used selectively as desired. This paper implements a system that recognizes users' touch gestures in a tabletop display environment. Future work should identify algorithms better suited to touch gesture recognition on tabletop displays and extend the research to recognizing multi-touch gestures.
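
The direction-code analysis the abstract describes amounts to quantizing the angle of each successive movement along the trajectory into a small set of codes, which then form the observation sequence for the HMM. This is a minimal sketch assuming an 8-direction chain coding; the paper's exact coding scheme is not given in the abstract.

```python
import math

def direction_codes(points, n_dirs=8):
    """Quantize a touch trajectory into direction codes.
    Code 0 = east, counting counter-clockwise in math coordinates;
    one code is emitted per consecutive pair of points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)          # -pi .. pi
        sector = round(angle / (2 * math.pi / n_dirs)) % n_dirs
        codes.append(sector)
    return codes

# A straight rightward drag yields all-zero codes.
print(direction_codes([(0, 0), (1, 0), (2, 0)]))  # [0, 0]
```

Sequences like these, one per recorded gesture sample, would then serve as the discrete observation sequences for HMM training and recognition.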

A Study on Continuity of User Experience in Multi-device Environment (멀티 디바이스 환경에서 사용자 경험의 연속성에 관한 고찰)

  • Lee, Young-Ju
    • Journal of Digital Convergence / v.16 no.11 / pp.495-500 / 2018
  • This study examined the factors that can enhance the continuity of the user experience in a multi-device environment. First, regarding structural differences and the continuity of tasks, functional differences arising from the characteristics of cross media, such as differences in operating system and the use of mouse versus touch gestures, were found to interfere with continuity, while metaphor and ambience can increase relevance and visibility and thus continuity. Regarding the continuity of visual memory and cognition, familiarity was created by the identity and similarity of visual-perception elements, and familiarity factors were found to be closely related to continuity. Finally, consistency in the meaning and layout of information, as well as visibility factors, contribute to the continuity of the user experience. Based on this, a regression analysis with familiarity, consistency, correlation, and visibility as factors showed that familiarity, consistency, and correlation significantly influence the continuity dimension of the user experience, whereas visibility did not have a significant effect on continuity.

Digital Mirror System with Machine Learning and Microservices (머신 러닝과 Microservice 기반 디지털 미러 시스템)

  • Song, Myeong Ho; Kim, Soo Dong
    • KIPS Transactions on Software and Data Engineering / v.9 no.9 / pp.267-280 / 2020
  • A mirror is a physical reflective surface, typically glass coated with a metal amalgam, that reflects an image clearly. Mirrors are available everywhere at any time and have become an essential tool for observing our faces and appearance. With the advent of modern software technology, we are motivated to enhance the reflection capability of mirrors with the convenience and intelligence of realtime processing, microservices, and machine learning. In this paper, we present the development of a Digital Mirror System that provides realtime reflection as a mirror while adding convenience and intelligence, including personal information retrieval, public information retrieval, apparent-age detection, and emotion detection. Moreover, it provides a multi-modal user interface that is touch-based, voice-based, and gesture-based. We present our design and discuss how it can be implemented with current technology to deliver realtime mirror reflection together with useful information and machine-learning intelligence.

A Study on the Windows Application Control Model Based on Leap Motion (립모션 기반의 윈도우즈 애플리케이션 제어 모델에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.111-116 / 2019
  • With the recent rapid development of computing capabilities, various technologies that facilitate interaction between humans and computers are being studied. The paradigm is shifting from the GUI, which uses traditional input devices, to the NUI, which uses the body through 3D motion, haptics, and multi-touch. Various studies have been conducted on transferring human movements to computers using sensors, and with the development of optical sensors that can capture 3D objects, the range of applications in the industrial, medical, and user-interface fields has expanded. In this paper, I provide a model that can launch programs through gestures instead of the mouse, the default input device, and control Windows based on Leap Motion. The proposed model also converges with an Android application, so that various media can be controlled by voice-command functions, using voice recognition and buttons, through a connection with a main client. It is expected that Internet media such as video and music can be controlled not only on a client computer but also from an application at a distance, and that convenient media viewing can be achieved through the proposed model.

A Conversational Interactive Tactile Map for the Visually Impaired (시각장애인의 길 탐색을 위한 대화형 인터랙티브 촉각 지도 개발)

  • Lee, Yerin; Lee, Dongmyeong; Quero, Luis Cavazos; Bartolome, Jorge Iranzo; Cho, Jundong; Lee, Sangwon
    • Science of Emotion and Sensibility / v.23 no.1 / pp.29-40 / 2020
  • Visually impaired people use tactile maps to obtain spatial information about their surrounding environment, find their way, and improve their independent mobility. However, classical tactile maps that use braille to describe locations within the map have several limitations, such as a lack of information due to space constraints and limited feedback possibilities. This study describes the development of a new multi-modal interactive tactile map interface that addresses these challenges to improve the usability and independence of visually impaired people when using tactile maps. The interface adds touch gesture recognition to the surface of tactile maps and enables users to interact verbally with a voice agent to receive feedback and information about navigation routes and points of interest. A low-cost prototype was developed for usability tests that evaluated the interface through a survey and interviews with blind participants after they used the prototype. The test results show that this interactive tactile map prototype provides better usability than traditional tactile maps that use braille only. Participants reported that it was easier to find the starting point and the points of interest they wished to navigate to with the prototype, and it improved self-reported independence and confidence compared with traditional tactile maps. Future work includes further development of the mobility solution based on the feedback received and an extensive quantitative study.