• Title/Summary/Keyword: Touch gestures

A 3D Parametric CAD System for Smart Devices (스마트 디바이스를 위한 3D 파라메트릭 CAD 시스템)

  • Kang, Yuna; Han, Soonhung
    • Korean Journal of Computational Design and Engineering, v.19 no.2, pp.191-201, 2014
  • A 3D CAD system that can be used on a smart device is proposed. Smart devices are now a part of everyday life and are widely used in various industry domains. The proposed 3D CAD system allows rapid and intuitive modeling on a smart device when an engineer creates a 3D model of a product while moving around an engineering site. There are several obstacles to developing a 3D CAD system on a smart device, such as the low computing power and small screen of smart devices, imprecise touch inputs, and the difficulty of transferring created 3D models between PCs and smart devices. The authors discuss the system design of a 3D CAD system on a smart device: the selection of modeling operations, the assignment of touch gestures to these operations, and the construction of a geometric kernel that creates both meshes and a procedural CAD model. The proposed CAD system has been implemented and validated through user tests and demonstrated in case studies on test examples. Using the proposed system, an editable 3D model can be produced swiftly and simply in a smart-device environment, reducing engineers' design time.
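The assignment of touch gestures to modeling operations can be pictured as a small dispatch table. The sketch below is a minimal illustration, assuming invented gesture names and operations; the paper's actual gesture-operation mapping may differ.

```python
# Illustrative sketch: dispatching recognized touch gestures to CAD
# modeling operations. Gesture names and operations are assumptions.

def extrude(face, distance):
    return f"extrude {face} by {distance}"

def revolve(profile, angle):
    return f"revolve {profile} through {angle} degrees"

GESTURE_TO_OPERATION = {
    "two_finger_drag_up": lambda ctx: extrude(ctx["target"], ctx["delta"]),
    "circular_swipe": lambda ctx: revolve(ctx["target"], ctx["delta"]),
}

def handle_gesture(gesture, context):
    """Look up the operation bound to a gesture and run it."""
    op = GESTURE_TO_OPERATION.get(gesture)
    if op is None:
        return "unrecognized gesture"
    return op(context)
```

A table like this keeps the gesture vocabulary small and fixed, which matters on a small screen where imprecise touch input makes large gesture sets error-prone.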

User Interface Design Platform based on Usage Log Analysis (사용성 로그 분석 기반의 사용자 인터페이스 설계 플랫폼)

  • Kim, Ahyoung; Lee, Junwoo; Kim, Mucheol
    • The Journal of Society for e-Business Studies, v.21 no.4, pp.151-159, 2016
  • The user interface is an important factor in providing efficient services to application users. In particular, mobile applications that can be executed anytime and anywhere give usability a higher priority than applications in other domains. Previous studies have used prototype and storyboard methods to improve the usability of applications, but this approach has limitations in continuously identifying and improving a particular application's usability problems. In this paper, we therefore propose a usability analysis method using touch gesture data. By grasping users' intentions after an application is distributed, it can continuously identify and improve the application's UI/UX problems.
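One way such post-release gesture-log analysis can work is to flag screens where users repeatedly tap the same element in a burst, a common frustration signal. This is a minimal sketch under assumed conventions (a log of `(screen, element)` tap events and a burst threshold of three); the paper's actual log format and analysis are not specified here.

```python
from collections import Counter

def flag_problem_screens(log, threshold=3):
    """log: list of (screen, element) tap events in time order.
    Returns screens containing at least one repeated-tap burst."""
    bursts = Counter()
    prev = None
    run = 0
    for event in log:
        run = run + 1 if event == prev else 1
        prev = event
        if run == threshold:          # count each burst once
            bursts[event[0]] += 1
    return [screen for screen, n in bursts.items() if n >= 1]
```

Running this over logs collected after distribution gives a continuously updated list of UI/UX problem candidates, which is exactly what one-off prototype or storyboard studies cannot provide.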

Human-Object Interaction Framework Using RGB-D Camera (RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크)

  • Baeka, Yong-Hwan; Lim, Changmin; Park, Jong-Il
    • Journal of Broadcast Engineering, v.21 no.1, pp.11-23, 2016
  • These days, touch is the most widely used interface for communicating with digital devices. Because of its usability, touch technology is applied almost everywhere, from watches to advertising boards, and its use keeps growing. However, the technology has a critical weakness: a touch input device normally needs a contact surface with touch sensors embedded in it, so touch interaction through ordinary objects like books or documents is still unavailable. In this paper, a human-object interaction framework based on an RGB-D camera is proposed to overcome this limitation. The proposed framework can deal with occluded situations, such as a hand hovering on top of an object, and with objects being moved by hand; in such situations, object recognition and hand gesture algorithms may otherwise fail to recognize their targets. The framework handles these complicated circumstances without performance loss. It calculates the status of each detected region with a fast and robust object recognition algorithm to determine whether it is an object or a human hand; the hand gesture recognition algorithm then controls the context of each object by gestures almost simultaneously.
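The object-or-hand decision can be illustrated with a toy rule over a few per-region features. The features and thresholds below are assumptions invented for illustration; the paper's recognition algorithm is more involved than this sketch.

```python
# Illustrative sketch: classifying a segmented RGB-D region as a human
# hand or a tabletop object. Features and thresholds are assumptions.

def classify_blob(mean_height_mm, convexity_defects, skin_ratio):
    """mean_height_mm: average depth of the region above the table plane.
    convexity_defects: deep gaps in the region's convex hull (finger valleys).
    skin_ratio: fraction of pixels matching a skin-color model."""
    if skin_ratio > 0.5 and convexity_defects >= 3:
        return "hand"                 # finger valleys plus skin color
    if mean_height_mm < 30:
        return "object"               # resting on or near the surface
    return "unknown"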

MRF Particle filter-based Multi-Touch Tracking and Gesture Likelihood Estimation (MRF 입자필터 멀티터치 추적 및 제스처 우도 측정)

  • Oh, Chi-Min; Shin, Bok-Suk; Klette, Reinhard; Lee, Chil-Woo
    • Smart Media Journal, v.4 no.1, pp.16-24, 2015
  • In this paper, we propose a method for multi-touch tracking using MRF-based particle filters, together with gesture likelihood estimation. Each touch (of one finger) is considered one object. A frequently occurring issue is the hijacking problem, in which an object's tracker is hijacked by a neighboring object. If a predicted particle is close to an adjacent object, the particle's weight should be lowered, taking the influence of neighboring objects into account, to avoid hijacking. We define a penalty function to lower the weights of such particles. An MRF is a graph representation in which a node is the location of a target object and an edge describes the adjacency relation between target objects, so it is a natural data structure for adjacent objects. Moreover, since the MRF graph representation is helpful for analyzing multi-touch gestures, we describe how to define gesture likelihoods based on the MRF. The experimental results show that the proposed method avoids hijacking problems and estimates gesture likelihoods with high accuracy.
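The penalty idea can be sketched as follows: a particle's likelihood weight is attenuated whenever its predicted position approaches a neighboring touch point (an MRF edge). The Gaussian form and the sigma value here are assumptions for illustration; the paper defines its own penalty function.

```python
import math

def penalized_weight(weight, particle_xy, neighbor_xys, sigma=20.0):
    """Lower `weight` for every neighboring touch the particle approaches."""
    x, y = particle_xy
    for nx, ny in neighbor_xys:
        d2 = (x - nx) ** 2 + (y - ny) ** 2
        # penalty -> 0 when the particle sits on a neighbor, -> 1 far away
        weight *= 1.0 - math.exp(-d2 / (2.0 * sigma ** 2))
    return weight
```

A particle predicted on top of a neighboring finger thus receives zero weight and cannot drag the tracker onto that finger, which is precisely how the hijacking problem is suppressed.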

Visual Multi-touch Input Device Using Vision Camera (비젼 카메라를 이용한 멀티 터치 입력 장치)

  • Seo, Hyo-Dong; Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.6, pp.718-723, 2011
  • In this paper, we propose a visual multi-touch air input device using vision cameras. The implemented device provides a barehanded interface that supports multi-touch operation. The proposed device is easy to apply to real-time systems because of its low computational load, and it is cheaper than existing methods using glove data or 3-dimensional data because no additional equipment is required. To this end, we first propose an image processing algorithm based on the HSV color model and labeling of the obtained images. In addition, to improve the accuracy of hand gesture recognition, we propose a motion recognition algorithm based on geometric feature points, a skeleton model, and the Kalman filter. Finally, experiments show that the proposed device is applicable to remote controllers for video games, smart TVs, and other computer applications.
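The first stage, HSV-based hand segmentation, can be sketched with a per-pixel skin test followed by a binary mask (the input to a labeling step). The threshold values below are common rough skin-range values assumed for illustration, not the ones used in the paper.

```python
import colorsys

def is_skin_pixel(r, g, b):
    """r, g, b in 0..255; True if the pixel falls in an assumed skin range."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h <= 50 / 360.0 and 0.15 <= s <= 0.70 and v >= 0.35

def skin_mask(image):
    """image: 2D list of (r, g, b) tuples -> 2D binary mask for labeling."""
    return [[1 if is_skin_pixel(*px) else 0 for px in row] for row in image]
```

Working in HSV rather than RGB separates chromaticity from brightness, which is why such thresholds tolerate lighting changes better than raw RGB cuts.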

OnDot: Braille Training System for the Blind (시각장애인을 위한 점자 교육 시스템)

  • Kim, Hak-Jin; Moon, Jun-Hyeok; Song, Min-Uk; Lee, Se-Min; Kong, Ki-sok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.6, pp.41-50, 2020
  • This paper deals with a braille education system that complements the shortcomings of existing braille learning products. An application dedicated to the blind performs all functions through touch gestures and voice guidance for user convenience, and a braille kit was produced for educational purposes using Arduino and 3D printing. The system supports the following functions: first, learning of the most basic braille, such as initial consonants, final consonants, vowels, and abbreviations; second, checking learned braille by solving step-by-step quizzes; third, translation of braille. Experiments confirmed the recognition rate of touch gestures and the accuracy of braille expression, and translations came out as intended. The system allows blind people to learn braille efficiently.
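At its core, braille translation is a lookup from characters to 6-dot cell patterns (dots numbered 1-6). The tiny table below covers a few Latin letters purely for illustration; the actual system handles Korean braille, including its abbreviations.

```python
# Illustrative sketch: a braille cell is a set of raised dots (1-6).
# Only a handful of Latin letters are mapped here, for illustration.
BRAILLE = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
}

def to_dots(text):
    """Translate text into a list of raised-dot sets, one per character."""
    return [BRAILLE[ch] for ch in text.lower() if ch in BRAILLE]
```

In a kit like the one described, each dot set would drive the corresponding pins of an Arduino-controlled braille cell so the learner can feel the pattern that the app is voicing.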

Implementation of Virtual Touch Service Using Hand Gesture Recognition (손동작 인식을 이용한 가상 터치 서비스 구현)

  • A-Ra Cho; Seung-Bae Yoo; Byeong-Hun Yun; Hyung-Ju Cho
    • The Transactions of the Korea Information Processing Society, v.13 no.10, pp.505-512, 2024
  • As the need for hygiene management increases due to COVID-19, the importance of non-contact services is gaining attention. Hands, a tool for expressing intentions and conveying information, are emerging as an alternative to computer input devices such as the keyboard and mouse. In this study, we propose a method to address the public health problems that arise when using unmanned ordering machines, by controlling a computer with hand gestures detected through a camera. The focus is on identifying frequently used hand gestures, especially the bending of the index finger. To this end, we develop a non-contact input device using the MediaPipe framework and a long short-term memory (LSTM) model. This approach can identify hand gestures in three-dimensional space and provides scenarios applicable to virtual reality (VR) and augmented reality (AR). It offers improved public health and user experience by presenting methods that can be applied to various situations such as navigation systems and unmanned ordering machines.
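Detecting the key gesture, a bending index finger, reduces to measuring the angle at the finger's middle (PIP) joint from per-frame hand keypoints such as those produced by a hand-tracking framework like MediaPipe. The angle threshold below is an assumption for illustration; the paper classifies gesture sequences with an LSTM rather than a fixed rule.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def index_finger_bent(mcp, pip, tip, threshold=120.0):
    """True when the index finger's PIP joint angle is below threshold.
    mcp, pip, tip: 2D keypoints of the index finger's base, middle, and tip."""
    return joint_angle(mcp, pip, tip) < threshold
```

Feeding a short window of such per-frame features into an LSTM, as the study does, lets the classifier distinguish an intentional "click" bend from incidental finger motion.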

A Study on Information Architecture & User Experience of the Smartphone (스마트폰의 정보구조와 사용자경험)

  • Lee, Young-Ju
    • Journal of Digital Convergence, v.13 no.11, pp.383-390, 2015
  • The purpose of this study is to provide a more efficient user interface experience by analyzing the information structure and the user experience of smartphone search patterns across frequently used services. The analysis covered Naver and Daum: Naver consisted of 28 categories and Daum of 15, and both had a top-down sequential structure. In the case of Naver, duplicate content and excessive scrolling of news raised the possibility of cognitive load; in the case of Daum, the placement of the shopping category at the bottom raised the possibility of errors when using touch gestures.

Learning System for Scientific Experiments with Multi-touch Screen and Tangible User Interface (멀티 터치스크린과 실감형 인터페이스를 적용한 과학 실험 학습 시스템)

  • Kim, Jun-Woo; Maeng, Jun-Hee; Joo, Jee-Young; Im, Kwang-Hyuk
    • The Journal of the Korea Contents Association, v.10 no.8, pp.461-471, 2010
  • Recently, Augmented Reality (AR) technologies have emerged that present digital contents integrating the real and virtual worlds. To maximize the effect of AR technology, a tangible user interface is applied, which enables users to interact with the contents in the same way they manipulate objects in the real world. In particular, we expect these technologies to enhance learners' interest and degree of immersion, and to produce new learning contents that maximize the effect of learning. In this paper, we propose a learning system for scientific experiments with a multi-touch screen and a tangible user interface. The system consists of an experiment table equipped with a large multi-touch screen and a realistic learning device that can detect the user's simple gestures. In the real world, some scientific experiments involve high cost, long time, or dangerous objects; this system overcomes such hindrances and provides learners with a variety of experimental experiences in realistic ways.

Multi - Modal Interface Design for Non - Touch Gesture Based 3D Sculpting Task (비접촉식 제스처 기반 3D 조형 태스크를 위한 다중 모달리티 인터페이스 디자인 연구)

  • Son, Minji; Yoo, Seung Hun
    • Design Convergence Study, v.16 no.5, pp.177-190, 2017
  • This research suggests a multimodal non-touch gesture interface design to improve the usability of 3D sculpting tasks. The tasks and procedures of design sculpting were analyzed across multiple circumstances, from physical sculpting to computer software. The optimal body posture, design process, work environment, gesture-task relationships, and combinations of designers' natural hand gestures and arm movements were defined. Existing non-touch 3D software was also examined, and natural gesture interactions, visual UI metaphors, and affordances for behavior guidance were designed. A prototype of the gesture-based 3D sculpting system was developed to validate its intuitiveness and learnability in comparison with current software. The suggested gestures proved to perform better in terms of understandability, memorability, and error rate. The results show that a gesture interface design for a productivity system should reflect users' natural experience in the previous work domain and provide appropriate visual and behavioral metaphors.