• Title/Summary/Keyword: Intuitive interface


The Mouse & Keyboard Control Application based on Smart Phone (스마트 폰 기반의 마우스와 키보드 제어 어플리케이션)

  • Lim, Yang Mi
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.396-403
    • /
    • 2017
  • In recent years, the use of touch screens has expanded, and devices such as remote controllers have been developed in various ways to control and access content at long range. Wireless touch-screen control is used in classrooms, seminar rooms, and remote video conferencing, in addition to TV remote control. The purpose of this study is to design a smartphone-based intuitive interface that can serve as a wireless mouse and keyboard over Bluetooth, and to develop an application that integrates the functions of both devices. First, a touch interaction model for controlling software such as PowerPoint by connecting a smartphone to an ordinary PC is studied. The simplest possible touch interface is used, reproducing the functions of the existing devices with a simpler design. While extending interfaces with various functions remains important, developing interfaces optimized for users will become even more important in the future; in this sense, the study is valuable.
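As a hedged illustration of how such an application might relay touch input over a Bluetooth serial link, the sketch below encodes gestures as fixed-size packets on the phone side and decodes them into mouse/keyboard events on the PC side. The packet layout and event codes are invented for this example, not taken from the paper.

```python
# Invented mini-protocol: phone-side touches become 5-byte packets that a
# PC-side receiver decodes into mouse/keyboard events.
import struct

MOUSE_MOVE, MOUSE_CLICK, KEY_PRESS = 0, 1, 2

def encode(kind, a=0, b=0):
    """Pack one event: 1-byte type plus two signed 16-bit arguments."""
    return struct.pack("!Bhh", kind, a, b)

def decode(packet):
    kind, a, b = struct.unpack("!Bhh", packet)
    if kind == MOUSE_MOVE:
        return ("move", a, b)     # relative dx, dy from a drag gesture
    if kind == MOUSE_CLICK:
        return ("click", a)       # button index from a tap
    return ("key", a)             # key code from the soft keyboard

# a swipe on the touch screen becomes a relative mouse move on the PC
event = decode(encode(MOUSE_MOVE, -12, 30))
```

In practice the phone would write such packets to a Bluetooth RFCOMM socket and the PC would translate them into OS input events.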

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • 김성호
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.4
    • /
    • pp.569-578
    • /
    • 2003
  • This paper presents a method in which the user produces real-time facial animation by navigating a space of facial expressions created from a large number of captured expressions. The core of the method is defining a distance between facial expressions, using it to distribute them in an intuitive space, and providing a user interface for generating real-time expression animation in that space. We created the search space from about 2,400 captured facial expression frames; as the user travels freely through the space, the expressions along the path are displayed in sequence. To distribute the 2,400 captured expressions visually, we compute the distance between every pair of frames: Floyd's algorithm yields the all-pairs shortest paths, from which a manifold distance is obtained. The frames are then laid out in a 2D intuitive space by applying multidimensional scaling to these manifold distances, preserving the original inter-frame distances as far as possible. A key advantage of the method is that the user can navigate freely, without restriction, because expression frames to visit always exist in the intuitive space. The easy-to-use interface also makes it efficient to preview and regenerate the desired expression animation in real time.
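The pipeline described in the abstract — all-pairs shortest paths over a neighborhood graph, then multidimensional scaling into a 2D layout — can be sketched roughly as follows. The toy frame data, neighborhood size, and classical-MDS formulation are illustrative assumptions, not the paper's actual data or code.

```python
# Rough sketch: manifold distances via Floyd-Warshall shortest paths,
# then classical MDS into a 2D "intuitive space". The synthetic curve
# below stands in for the paper's ~2,400 captured expression frames.
import numpy as np

def floyd_warshall(dist):
    """All-pairs shortest paths over a dense (np.inf-padded) matrix."""
    d = dist.copy()
    for k in range(d.shape[0]):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def classical_mds(d, dim=2):
    """Embed points in `dim` dimensions from a distance matrix."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)
    top = np.argsort(w)[::-1][:dim]            # largest eigenpairs
    return v[:, top] * np.sqrt(np.maximum(w[top], 0))

# 30 toy "expression" vectors along a smooth curve in feature space
t = np.linspace(0, 3 * np.pi, 30)
frames = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
direct = np.linalg.norm(frames[:, None] - frames[None, :], axis=-1)

k = 5                                          # keep only k-nearest edges
mask = direct > np.sort(direct, axis=1)[:, [k]]
graph = np.where(mask, np.inf, direct)
graph = np.minimum(graph, graph.T)             # symmetrize the kNN graph

manifold = floyd_warshall(graph)               # manifold distances
coords = classical_mds(manifold, dim=2)        # 2D layout, shape (30, 2)
```

Navigation then amounts to picking a path through `coords` and playing back the frames it passes in sequence.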

Tension-Based 7-DOF Force Feedback Device: SPIDAR-G

  • Kim, Seahak;Yasuharu Koike;Makoto Sato
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.1
    • /
    • pp.9-16
    • /
    • 2002
  • In this paper, we demonstrate a new intuitive force-feedback device for advanced VR applications. Force feedback in the device is tension based and offers 7 degrees of freedom (DOF): 3 DOF for translation, 3 DOF for rotation, and 1 DOF for grasp. The SPIDAR-G (Space Interface Device for Artificial Reality with Grip) allows users to interact with virtual objects naturally by manipulating two hemispherical grips located in the center of the device frame. We show how to connect strings between each vertex of the grip and each extremity of the frame in order to achieve force feedback. In addition, methodologies are discussed for calculating translation, orientation, and grasp from the lengths of the 8 strings connected to the motors and encoders on the frame. The SPIDAR-G exhibits smooth force feedback, minimized inertia, no backlash, scalability, and safety. These features are attributed to the strategic string arrangement and control, which result in stable haptic rendering. The design and control of the SPIDAR-G are described in detail, and a Space Graphic User Interface system based on the proposed SPIDAR-G is demonstrated. Experimental results validate the feasibility of the proposed device and its application to virtual reality.
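Recovering pose from string lengths can be illustrated, for the translation component only, by linearized trilateration. The anchor layout and least-squares formulation below are assumptions for illustration; the actual SPIDAR-G also recovers rotation and grasp from its 8 strings.

```python
# Illustrative sketch: estimate a point's position from its distances to
# known frame anchors (here, the 8 corners of a unit cube), standing in
# for SPIDAR-G's encoder-measured string lengths.
import itertools
import numpy as np

anchors = np.array(list(itertools.product([-0.5, 0.5], repeat=3)), float)

def position_from_lengths(lengths, anchors):
    """Subtract the first sphere equation |p - a_i|^2 = l_i^2 from the
    rest, giving a linear system 2(a_i - a_0)@p = b_i solved by lstsq."""
    a0, l0 = anchors[0], lengths[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (l0 ** 2 - lengths[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

true_pos = np.array([0.1, -0.2, 0.15])                 # hypothetical grip center
lengths = np.linalg.norm(anchors - true_pos, axis=1)   # simulated encoder readout
est = position_from_lengths(lengths, anchors)          # recovers true_pos
```

With noiseless lengths and 8 non-coplanar anchors the overdetermined system recovers the position exactly; real encoder noise would be averaged out by the same least-squares solve.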

Systemic Development of Tele-Robotic Interface for the Hot-Line Maintenance (활선 작업을 위한 원격 조종 인터페이스 개발)

  • Kim Min-Soeng;Lee Ju-Jang;Kim Chang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.12
    • /
    • pp.1217-1222
    • /
    • 2004
  • This paper describes the development of a tele-robotic interface for a hot-line maintenance robot system. One of the main issues in designing a human-robot interface for such a system is planning the control procedure for each part of the robot. Another is that the actual degrees of freedom (DOF) of the hot-line maintenance robot far exceed those of the available control devices, such as joysticks and gloves, in the remote cabin. For this purpose, a virtual simulator, which includes the virtual hot-line maintenance robot and its environment, was developed in 3D using CAD data. Control operation is assumed to take place in the remote cabin, with the overall work process observed through a main camera with 2 DOF. The input devices are two joysticks, one pedal, two data gloves, and a head-mounted display (HMD) with a tracking sensor. An interface was developed for each control mode. The designed human-interface system is operated with high-level control commands that are intuitive and easy to understand without special training.

Design of Markov Decision Process Based Dialogue Manager (마르코프 의사결정 과정에 기반한 대화 관리자 설계)

  • Choi, Joon-Ki;Eun, Ji-Hyun;Chang, Du-Seong;Kim, Hyun-Jeong;Koo, Myong-Wan
    • Proceedings of the KSPS conference
    • /
    • 2006.11a
    • /
    • pp.14-18
    • /
    • 2006
  • The role of a dialogue manager is to select proper actions based on the observed environment and the inferred user intention. This paper presents a stochastic dialogue-manager model based on a Markov decision process (MDP). To build a mixed-initiative dialogue manager, the accumulated user utterances, the dialogue manager's previous act, and domain-dependent knowledge are used as the input to the MDP. A dialogue corpus is used to train an automatically optimized MDP policy with a reinforcement learning algorithm. States that have unique, intuitive actions are removed from the MDP design using the domain knowledge. The dialogue manager incorporates natural language understanding and a response generator to support short-message-based remote control of home networked appliances.
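As a toy illustration of the MDP formulation, the sketch below trains a tiny dialogue policy with Q-learning. The states, acts, and reward model are invented for this example; the paper's state space (accumulated utterances, previous system act, domain knowledge) and corpus-based training are far richer.

```python
# Toy MDP dialogue manager for appliance control, trained with Q-learning.
# All states, actions, and rewards here are invented for illustration.
import random

random.seed(0)
STATES = ["need_device", "need_command", "confirm", "done"]
ACTIONS = ["ask_device", "ask_command", "ask_confirm", "execute"]

def step(state, action):
    """Toy environment: +1 for the right question, +10 on completion."""
    if state == "confirm" and action == "ask_confirm":
        return "done", 10.0
    good = {"need_device": "ask_device", "need_command": "ask_command"}
    if good.get(state) == action:
        return STATES[STATES.index(state) + 1], 1.0
    return state, -1.0                       # a wrong act annoys the user

def q_learn(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "need_device"
        while s != "done":
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2, r = step(s, a)
            best = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])
            s = s2
    return q

q = q_learn()
policy = {s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in STATES[:-1]}
```

After training, the greedy policy asks for the missing slot at each state, mirroring the mixed-initiative behavior the abstract describes.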

An Intuitive Shading Interface for Rendering Realistic Skin (사실적인 피부 렌더링을 위한 직관적 쉐이딩 인터페이스)

  • 유태경;이원형;장성갑
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2003.11b
    • /
    • pp.585-588
    • /
    • 2003
  • In the production of the digital human characters that frequently appear in recent films and animation, a variety of engineering and artistic approaches have been taken toward realistic skin rendering. In this paper, we implement a shader that uses a subsurface-illumination technique for realistic skin rendering and propose a more intuitive interface so that artists can control it efficiently. The shader was built as a plug-in for the 3D graphics application Maya, and the proposed parameters are provided as extensions designed to blend with the existing, traditional shading interface.

Navigation Interface Design for an Intuitive Access to Digital Broadcasting Services (디지털 방송 서비스의 직관적 접근을 위한 네비게이션 인터페이스 설계)

  • Lee, Keon-Young;Jung, Moon-Ryul
    • Journal of Broadcast Engineering
    • /
    • v.14 no.2
    • /
    • pp.154-163
    • /
    • 2009
  • The traditional broadcasting environment has merged with various interactive broadcasting services, which users access through set-top boxes. These, however, offer limited user interfaces that adopt Internet-style formats unsuited to the TV environment. This paper therefore proposes a navigation interface that helps viewers select and watch their favorite channels more effectively. First, it analyzes the EPG interface, the most basic service in interactive broadcasting; second, it proposes a user interface and service structure suitable for the network-integrated TV environment.

Development of User Interface for Motion-based Screen Sports Game (체감형 스크린 스포츠 게임 유저 인터페이스 개발)

  • Yoo, Wang-Yun;Oh, Jong-Hwan
    • Journal of Korea Game Society
    • /
    • v.17 no.1
    • /
    • pp.109-118
    • /
    • 2017
  • A screen sports game is a motion-based game that combines a PC game with sensors so that the user can participate in the game physically. Following screen golf, which was highly popular in the 2000s, screen baseball has been on the rise since 2016. Most games released today employ the interface conventions of traditional PC games, but screen baseball calls for a more intuitive manner of manipulation. In this study, we use a kiosk independent of the game itself, which lets users intervene naturally without disturbing the flow of play; such a kiosk-based interface increases immersion in the game.

Trends and Implications of Digital Transformation in Vehicle Experience and Audio User Interface (차내 경험의 디지털 트랜스포메이션과 오디오 기반 인터페이스의 동향 및 시사점)

  • Kim, Kihyun;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.2
    • /
    • pp.166-175
    • /
    • 2022
  • Digital transformation is driving many changes in daily life and industry, and the automobile industry is no exception. In some cases, element technologies from areas described as the metaverse are being adopted, such as 3D-animated digital cockpits, around-view monitors, and voice AI. Through the growth of the mobile market, the norm of human-computer interaction (HCI) has evolved from keyboard-and-mouse interaction to the touch screen. The core area has been the graphical user interface (GUI), and recently the audio user interface (AUI) has partially replaced it. Because it is easy to access and intuitive for the user, the AUI is quickly becoming a common part of the in-vehicle experience (IVE) in particular. The benefits of an AUI include freeing the driver's eyes and hands, using fewer screens, lowering interaction costs, being more emotional and personal, and being effective for people with low vision. Nevertheless, when and where to apply a GUI or an AUI are genuinely different design decisions: some information is easier to process visually, while in other cases an AUI is more suitable. This study proposes actively applying the AUI in the near future, based on the contexts of the various scenes that arise, to improve the IVE.

Conditions of Applications, Situations and Functions Applicable to Gesture Interface

  • Ryu, Tae-Beum;Lee, Jae-Hong;Song, Joo-Bong;Yun, Myung-Hwan
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.507-513
    • /
    • 2012
  • Objective: This study developed a hierarchy of the conditions of applications (devices), situations, and functions that are applicable to gesture interfaces. Background: The gesture interface is one of the promising interfaces for natural and intuitive interaction with intelligent machines and environments. Although many studies have developed new gesture-based devices and gesture interfaces, little was known about which applications, situations, and functions are suited to gesture control. Method: This study surveyed about 120 papers on designing and applying gesture interfaces and vocabularies to find the conditions under which applications, situations, and functions suit a gesture interface. The conditions extracted from 16 closely related papers were rearranged, and a hierarchy was developed to evaluate the applicability of applications, situations, and functions to gesture interfaces. Results: The study summarized 10, 10, and 6 conditions for applications, situations, and functions, respectively. In addition, hierarchies of the conditions were developed based on their semantic similarity, ordering, and serial or parallel relationships. Conclusion: This study collected the gesture-applicable conditions of applications, situations, and functions, and developed a hierarchy of them to evaluate the applicability of gesture interfaces. Application: The conditions and hierarchy can be used to develop a framework and detailed criteria for evaluating applicability, and can help designers of gesture interfaces and vocabularies determine which applications, situations, and functions are suited to gesture control.