• Title/Summary/Keyword: Gesture Interface


User Interface Design Platform based on Usage Log Analysis (사용성 로그 분석 기반의 사용자 인터페이스 설계 플랫폼)

  • Kim, Ahyoung;Lee, Junwoo;Kim, Mucheol
    • The Journal of Society for e-Business Studies / v.21 no.4 / pp.151-159 / 2016
  • The user interface is an important factor in providing efficient services to application users. In particular, mobile applications, which can be run anytime and anywhere, place a higher priority on usability than applications in other domains. Previous studies have used prototype and storyboard methods to improve application usability, but these approaches have limitations in continuously identifying and improving the usability problems of a particular application. Therefore, in this paper, we propose a usability analysis method using touch gesture data. It can continuously identify and improve the UI/UX problems of an application by inferring user intent after the application has been distributed.
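The abstract does not specify the paper's analysis pipeline; as a minimal sketch of log-based usability analysis, one could flag UI elements that receive rapid repeated taps (a common signal of an unresponsive or confusing control). All names and thresholds here are hypothetical illustrations, not the paper's method:

```python
from collections import Counter

def find_usability_hotspots(touch_log, repeat_window=1.0, min_events=3):
    """Flag UI elements that users tap repeatedly within a short window.
    touch_log: time-ordered list of (timestamp_sec, element_id) tuples."""
    repeats = Counter()   # rapid re-taps per element
    totals = Counter()    # all taps per element
    last_seen = {}        # element_id -> timestamp of previous tap
    for ts, elem in touch_log:
        totals[elem] += 1
        if elem in last_seen and ts - last_seen[elem] <= repeat_window:
            repeats[elem] += 1
        last_seen[elem] = ts
    # Report elements where most taps are rapid re-taps of the same control.
    return {e: repeats[e] / totals[e]
            for e in totals
            if totals[e] >= min_events and repeats[e] / totals[e] > 0.5}

log = [(0.0, "submit"), (0.4, "submit"), (0.7, "submit"),  # rage taps
       (5.0, "menu"), (20.0, "menu"), (40.0, "menu")]      # normal use
print(find_usability_hotspots(log))  # only "submit" is flagged
```

Such a report could then feed the continuous improvement loop the abstract describes, pointing designers at the controls users struggle with after release.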

Development of Multi Card Touch based Interactive Arcade Game System (멀티 카드 터치기반 인터랙티브 아케이드 게임 시스템 구현)

  • Lee, Dong-Hoon;Jo, Jae-Ik;Yun, Tae-Soo
    • Journal of Korea Entertainment Industry Association / v.5 no.2 / pp.87-95 / 2011
  • Recently, tangible game environments have become an important issue owing to the development of various interactive interfaces. In this paper, we propose a multi-card-touch-based interactive arcade system using a marker recognition interface and a multi-touch interaction interface. In our system, each card's location and orientation are recognized through a DI-based recognition algorithm, and the user's hand gestures are tracked to support various interaction metaphors. The system provides the user with a higher level of engagement and offers a new experience, so we expect it to be used in tangible arcade game machines.

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through uttered words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands from the user to fine-tune the location and to change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to a user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
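The abstract does not give the geometry used to aim the camera; a minimal sketch of converting a hand-pointing ray into pan/tilt angles might look like the following (the coordinate convention and all names are assumptions, not the paper's implementation):

```python
import math

def pointing_to_pan_tilt(wrist, fingertip):
    """Convert a 3D pointing ray (wrist -> fingertip) into pan/tilt angles
    in degrees, assuming camera coordinates: x right, y up, z forward."""
    dx = fingertip[0] - wrist[0]
    dy = fingertip[1] - wrist[1]
    dz = fingertip[2] - wrist[2]
    pan = math.degrees(math.atan2(dx, dz))                   # left/right sweep
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down elevation
    return pan, tilt

# Pointing straight ahead and slightly to the right:
pan, tilt = pointing_to_pan_tilt((0, 0, 0), (0.2, 0.0, 1.0))
```

In the system described, such a coarse aim from the pointing gesture would be followed by voice commands that fine-tune the camera position and zoom.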


User-centric Immersible and Interactive Electronic Book based on the Interface of Tabletop Display (테이블탑 디스플레이 기반 사용자 중심의 실감형 상호작용 전자책)

  • Song, Dae-Hyeon;Park, Jae-Wan;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.9 no.6 / pp.117-125 / 2009
  • In this paper, we propose a user-centric, immersive, and interactive electronic book based on a tabletop display interface. Electronic books are typically used by readers who want a text book enriched with multimedia contents such as video, audio, and animation. Because the system is built on a tabletop display platform, conventional input devices such as a keyboard and mouse are not needed: users interact with the contents through hand-finger touch gestures defined for the tabletop display interface, which gives them a superior and effective way to use the electronic book. The interface supports multiple users, enabling more diverse effects than conventional electronic contents designed for a single user. Our method offers a new direction for the conventional electronic book, as it can define user-centric gestures and helps users interact with the book easily. We expect it to be utilized for many edutainment contents.

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.145-151 / 2023
  • 3D-CNN is one of the deep learning techniques for learning time-series data. Such three-dimensional learning can generate a large number of parameters, so it demands high-performance computing and can strongly affect the learning rate. To improve the efficiency of dynamic hand-gesture learning with a 3D-CNN, it is necessary to find the optimal conditions of the input video data by analyzing learning accuracy according to spatiotemporal changes of the input video data, without structural changes to the 3D-CNN model. First, the time ratio between dynamic hand-gesture actions is adjusted by setting the learning interval of image frames in the dynamic hand-gesture video data. Second, through 2D cross-correlation analysis between classes, the similarity between image frames of the input video data is measured and normalized to obtain an average value between frames and analyze learning accuracy. Based on this analysis, this work proposes two methods to effectively select input video data for 3D-CNN deep learning of dynamic hand gestures. Experimental results showed that the learning interval of image data frames and the similarity of image frames between classes can affect the accuracy of the learning model.
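The inter-frame similarity measure the abstract describes can be illustrated with a zero-lag, zero-mean normalized 2D cross-correlation between two grayscale frames (a plain-Python sketch; the paper's exact normalization and averaging over classes may differ):

```python
def normalized_cross_correlation(a, b):
    """Zero-mean normalized 2D cross-correlation (at zero lag) between two
    equally sized grayscale frames given as 2D lists of pixel values.
    Returns a similarity in [-1, 1]; identical frames give 1.0."""
    xs = [p for row in a for p in row]   # flatten frame a
    ys = [p for row in b for p in row]   # flatten frame b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n    # per-frame means
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

f1 = [[0, 10], [20, 30]]
f2 = [[0, 10], [20, 30]]   # identical frame -> similarity 1.0
f3 = [[30, 20], [10, 0]]   # intensity-reversed frame -> similarity -1.0
```

Averaging such per-pair scores within and across gesture classes would yield the normalized frame-similarity statistic that the paper relates to learning accuracy.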

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • With growing interest in Human-Computer Interaction (HCI), research on HCI has been actively conducted, and with it research on Natural User Interface/Natural User eXperience (NUI/NUX), which uses a user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture recognition or voice recognition, but these algorithms have a weakness: their implementation is complex and much time is needed for training, because they must go through steps including preprocessing, normalization, and feature extraction. Recently, Kinect was launched by Microsoft as an NUI/NUX development tool, attracting considerable attention, and studies using Kinect have been conducted. In a previous study, the authors of this paper implemented a hand-mouse interface with outstanding intuitiveness using the user's physical features. However, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. The virtual monitor is a virtual space that can be controlled by the hand mouse; a coordinate on the virtual monitor is accurately mapped onto the corresponding coordinate on the real monitor. The hand-mouse interface based on the virtual monitor concept maintains the outstanding intuitiveness of the previous study and enhances the accuracy of the mouse functions. Further, we increased the accuracy of the interface by recognizing the user's unnecessary actions using a concentration indicator derived from the user's electroencephalogram (EEG) data. To evaluate the intuitiveness and accuracy of the interface, we tested it with 50 people ranging in age from their teens to their fifties. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute. In the accuracy experiment, the mouse functions achieved accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). With the intuitiveness and accuracy of the proposed hand-mouse interface verified through these experiments, it is expected to be a good example of an interface for controlling systems by hand in the future.
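The abstract states that virtual-monitor coordinates map onto real-monitor coordinates but does not detail the mapping; a minimal linear-mapping sketch is shown below. The rectangle, units, and function names are assumptions for illustration, not the paper's implementation:

```python
def virtual_to_screen(hand_xy, virtual_rect, screen_wh):
    """Map a hand position inside the user's 'virtual monitor' rectangle
    (in sensor/world coordinates, e.g. meters) onto real-monitor pixels.
    virtual_rect = (left, top, width, height); screen_wh = (w_px, h_px)."""
    vx, vy, vw, vh = virtual_rect
    sw, sh = screen_wh
    # Normalize the hand position to [0, 1] within the rectangle, clamping
    # so the cursor stays on screen when the hand leaves the rectangle.
    u = min(max((hand_xy[0] - vx) / vw, 0.0), 1.0)
    v = min(max((hand_xy[1] - vy) / vh, 0.0), 1.0)
    return round(u * (sw - 1)), round(v * (sh - 1))

# A virtual monitor 0.6 m wide and 0.4 m tall in front of the user,
# mapped onto a 1920x1080 display; the hand at its center lands mid-screen:
pos = virtual_to_screen((0.30, 0.20), (0.0, 0.0, 0.6, 0.4), (1920, 1080))
```

In the paper's design, the virtual rectangle is sized from the user's physical features captured by Kinect, which is what keeps the mapping intuitive across users.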

Serious Game Design for Rehabilitation Training with Infrared Ray Pen (적외선 펜을 이용한 재활훈련 기능성 게임 콘텐츠 설계)

  • Ok, Soo-Yol;Kam, Dal-Hyun
    • Journal of Korea Game Society / v.9 no.6 / pp.151-161 / 2009
  • In this paper, we propose a serious game that aims to draw the interest of rehabilitation patients and increase their locomotive ability using an infrared-ray (IR) pen interface. The proposed game focuses on providing easy-to-manipulate cognitive rehabilitation environments. To achieve this goal, we devised a new game interface composed of a Wiimote controller and an IR pen, and employed an SVM (support vector machine) algorithm for gesture recognition. The proposed game can be utilized not only by rehabilitation patients but also by elderly persons for preventing dementia and promoting their health.
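The abstract does not describe what features are fed to the SVM; as one hypothetical sketch, a pen stroke could be reduced to a fixed-length direction histogram, a scale- and position-tolerant feature vector suitable for an SVM classifier (names and bin count are assumptions):

```python
import math

def direction_histogram(stroke, bins=8):
    """Summarize an IR-pen stroke (list of (x, y) points) as a normalized
    histogram of segment directions -- a fixed-length feature vector that
    a gesture classifier such as an SVM could consume."""
    hist = [0.0] * bins
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        # Angle of each consecutive segment, wrapped into [0, 2*pi).
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += 1.0
    total = sum(hist)
    return [h / total for h in hist] if total else hist

# A straight left-to-right stroke: every segment falls in the 0-radian bin.
feat = direction_histogram([(0, 0), (1, 0), (2, 0), (3, 0)])
```

Distinct rehabilitation gestures (circles, zigzags, straight sweeps) produce distinct histograms, giving the classifier something separable to work with.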


Identification of user's Motion Patterns using Motion Capture System

  • Jung, Kwang Tae;Lee, Jaein
    • Journal of the Ergonomics Society of Korea / v.33 no.6 / pp.453-463 / 2014
  • Objective: The purpose of this study is to identify motion patterns for using a cellular phone and to propose a method to identify motion patterns using a motion capture system. Background: In a smart device, the introduction of tangible interaction that can provide a new experience to the user plays an important role in improving the user's emotional satisfaction. The user's motion patterns must first be identified in order to provide an interaction type based on the user's gestures or motions. Method: In this study, a method to identify motion patterns using a motion capture system was developed, and users' motion patterns when using a cellular phone were studied. Twenty-two subjects participated, and their motion patterns were identified through motion analysis. Results: Typical motion patterns for shaking, shaking left and right, shaking up and down, and turning the cellular phone were identified, along with the velocity and acceleration of each typical motion pattern. Conclusion: A motion capture system can be effectively used to identify a user's motion patterns when using a cellular phone. Application: The typical motion patterns can be used to develop tangible user interfaces for handheld devices such as smart phones, and the motion-analysis-based identification method can be applied to identifying motion patterns for smart devices.
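The study reports velocity and acceleration for each typical motion pattern; a minimal sketch of how such quantities are commonly derived from captured position samples is a finite-difference pass over one coordinate trace (function and parameter names are assumptions, and real pipelines usually smooth the signal first):

```python
def derivatives(positions, dt):
    """Estimate per-frame velocity and acceleration of a 1D motion-capture
    trace by finite differences. positions: sampled coordinate values;
    dt: sampling interval in seconds (e.g. 1/120 for a 120 Hz system)."""
    velocity = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acceleration = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
    return velocity, acceleration

# Uniformly accelerated motion x = t^2 sampled at 1 Hz: velocity grows
# linearly and acceleration is constant at 2.
vel, acc = derivatives([0.0, 1.0, 4.0, 9.0, 16.0], dt=1.0)
```

Peaks in such velocity/acceleration profiles are what distinguish, say, a sharp shake from a slow turn of the handset.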

Development of a Hand Pose Rally System Based on Image Processing

  • Suganuma, Akira;Nishi, Koki
    • IEIE Transactions on Smart Processing and Computing / v.4 no.5 / pp.340-348 / 2015
  • A "stamp rally" is an event in which participants make the rounds of predetermined points to collect stamps, bringing a stamp card to each point. However, they sometimes leave behind or lose the card, in which case they may not reach the final destination of the stamp rally. The purpose of this research is the construction of a stamp rally system that identifies each participant by his or her hand instead of by a stamp card. We have realized a method that distinguishes hand postures through image processing and have evaluated it with 30 examinees. Furthermore, we have designed the data communication between the server and the checkpoints to implement the whole system, and have designed and implemented the processes for participant registration, checkpoint passing, and administration.

Detection of Hand Gesture and its Description for Wearable Applications in IoMTW (IoMTW 에서의 웨어러블 응용을 위한 손 제스처 검출 및 서술)

  • Yang, Anna;Park, Do-Hyun;Chun, Sungmoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.338-339 / 2016
  • Hand gestures are emerging as a Natural User Interface (NUI) for wearable devices such as smart glasses, which requires hand gesture detection and recognition capabilities. In addition, MPEG has recently been conducting a preliminary exploration of IoMTW (Media-centric IoT and Wearable) as a standard for media consumption in IoT (Internet of Things) environments, and metadata for representing hand gestures is being discussed as one of the standard technology elements. In this paper, as a step toward hand gesture recognition in a smart-glasses environment, we present a technique for detecting the hand contour from stereo images and representing it with Bezier curves in order to describe it as metadata.
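The Bezier representation of the hand contour can be illustrated by evaluating a curve from its control points with de Casteljau's algorithm; the metadata would carry only the control points, from which the contour is reconstructed. The control points below are hypothetical; the paper fits them to detected contour segments:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]
    using de Casteljau's algorithm. control_points: list of (x, y)."""
    pts = list(control_points)
    while len(pts) > 1:
        # Repeated linear interpolation collapses the control polygon
        # to the single point on the curve at parameter t.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# One cubic segment standing in for part of a hand contour:
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
mid = bezier_point(ctrl, 0.5)  # point halfway along the segment
```

Because a cubic segment is fully described by four control points, a contour of many pixels compresses into a short list of coordinates, which is what makes the curve form attractive as metadata.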
