• Title/Summary/Keyword: User-Computer interface

A Study of Incremental and Multiple Entry Support Parser for Multi View Editing Environment (다중 뷰 편집환경을 위한 점진적 다중진입 지원 파서에 대한 연구)

  • Yeom, Saehun;Bang, Hyeja
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.14 no.3
    • /
    • pp.21-28
    • /
    • 2018
  • As computer performance and user expectations of convenience increase, computer user interfaces are also changing, and these changes have had a great effect on software development environments. In the past, text editors such as vi or emacs on UNIX were the main development environment. These editors are powerful for editing source code, but they are difficult and unintuitive compared to GUI (Graphical User Interface)-based environments and were used by only a few experts. Moreover, the trend in software development environments shifted from the command line to GUI environments, and GUI editors offer usability and efficiency, so the use of text-based editors has decreased. However, because GUI-based editors consume a large amount of computer resources, performance and efficiency suffer: the more content there is, the more time it takes to verify and display it. In this paper, we provide a new parser that supports multi-view editing, incremental parsing, and multiple entry points into the abstract syntax tree (see the sketch below).
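
A minimal Python sketch of the two ideas named in this abstract, multiple entry points into the grammar and subtree-level incremental reparsing, is shown below. The toy grammar, node names, and reuse strategy are illustrative assumptions, not the parser proposed in the paper.

```python
# Illustrative toy only: a parser exposing several entry points and a
# subtree-level incremental reparse. Grammar and names are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                                   # e.g. "program", "stmt", "expr"
    text: str                                   # source text covered by this node
    children: List["Node"] = field(default_factory=list)

class ToyParser:
    """Each public method is a separate entry point into the toy grammar,
    so a view that shows only one expression can call parse_expr directly."""

    def parse_program(self, src: str) -> Node:
        stmts = [self.parse_stmt(line) for line in src.splitlines() if line.strip()]
        return Node("program", src, stmts)

    def parse_stmt(self, line: str) -> Node:
        # a "statement" in this toy grammar is simply 'name = expr'
        lhs, _, rhs = line.partition("=")
        return Node("stmt", line, [self.parse_expr(rhs.strip())])

    def parse_expr(self, text: str) -> Node:
        return Node("expr", text)

def incremental_reparse(root: Node, parser: ToyParser, stmt_index: int, new_line: str) -> Node:
    """Reparse only the edited statement; untouched subtrees are reused as-is."""
    root.children[stmt_index] = parser.parse_stmt(new_line)
    root.text = "\n".join(child.text for child in root.children)
    return root

if __name__ == "__main__":
    p = ToyParser()
    tree = p.parse_program("a = 1 + 2\nb = a * 3")
    tree = incremental_reparse(tree, p, 1, "b = a * 4")   # only the second statement is reparsed
    print([c.text for c in tree.children])                # ['a = 1 + 2', 'b = a * 4']
```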

A Study on Development of VUI(Voice User Interface) using VoiceXML (VoiceXML을 이용한 VUI 개발에 관한 연구)

  • Jang, Min-Seok;Yang, Woon-Mo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.04b
    • /
    • pp.1495-1498
    • /
    • 2002
  • The current computing environment has shifted from text-oriented input and output on the command line to a GUI (Graphic User Interface) environment, which provides users with a more familiar way of computing. However, becoming accustomed to such an environment still requires a substantial amount of learning time, and additional study is needed to master features such as interfacing between application programs before work can be carried out smoothly. To address this, this study explores a solution through speech recognition/synthesis and VoiceXML, the current voice markup language (see the sketch below).
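
Below is a small, hedged Python sketch that generates a minimal VoiceXML 2.0 document of the kind such a VUI would serve. The prompt text and field name are made-up examples, and a real dialog would also attach a grammar to the field; only the basic vxml/form/field structure follows the VoiceXML specification.

```python
# Hedged sketch: generating a minimal VoiceXML 2.0 document from Python.
# The prompt text and field name are illustrative assumptions.

import xml.etree.ElementTree as ET

def build_voicexml(prompt_text: str, field_name: str) -> str:
    vxml = ET.Element("vxml", version="2.0", xmlns="http://www.w3.org/2001/vxml")
    form = ET.SubElement(vxml, "form", id="main")
    field = ET.SubElement(form, "field", name=field_name)
    prompt = ET.SubElement(field, "prompt")
    prompt.text = prompt_text
    # echo the recognized value back to the caller once the field is filled
    # (a production dialog would also declare a <grammar> for recognition)
    filled = ET.SubElement(field, "filled")
    answer = ET.SubElement(filled, "prompt")
    answer.text = "You said "
    ET.SubElement(answer, "value", expr=field_name)
    return ET.tostring(vxml, encoding="unicode")

if __name__ == "__main__":
    print(build_voicexml("Which application would you like to open?", "appname"))
```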

Software Architecture of Contents-based Control for Co-operative Remote Manipulation of Multi-Robots

  • Thuy, Dinh Trong;Kang, Soon-Ju
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2008.05a
    • /
    • pp.508-511
    • /
    • 2008
  • In this paper, we propose a software architecture for conveying content-based OpenSound Control (OSC) packets from a manipulation user interface to cooperative remote multi-robots. A Flash application is used as the controlling user interface, and the physical prototypes of the multi-robots were developed using a physical prototyping toolkit (see the sketch below).
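
As a hedged illustration of the kind of packet such an architecture conveys, the Python sketch below hand-encodes a minimal OSC 1.0 message and sends it over UDP. The address pattern "/robot/1/move", the argument values, and the port are assumptions for demonstration; only the packet layout follows the public OSC 1.0 specification.

```python
# Hedged sketch: encoding a minimal OSC 1.0 message and sending it over UDP.
# Address, arguments, and port are made-up examples, not taken from the paper.

import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    data += b"\x00"
    return data + b"\x00" * ((4 - len(data) % 4) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Build an OSC message whose arguments are all float32 ('f' type tags)."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for value in args:
        packet += struct.pack(">f", value)      # big-endian float32 argument
    return packet

if __name__ == "__main__":
    # e.g. command robot 1 to move with linear/angular velocity (0.5, 0.1)
    msg = osc_message("/robot/1/move", 0.5, 0.1)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, ("127.0.0.1", 9000))       # assumed robot-side OSC listener
```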

Evaluating the Effectiveness of Nielsen's Usability Heuristics for Computer Engineers and Designers without Human Computer Interaction Background (비 HCI 전공자들을 대상으로 한 Nielsen의 Usability Heuristics에 대한 이해 정도 평가)

  • Jeong, YoungJoo;Sim, InSook;Jeong, GooCheol
    • The Journal of Korean Institute for Practical Engineering Education
    • /
    • v.2 no.2
    • /
    • pp.165-171
    • /
    • 2010
  • Usability heuristics ("heuristics") are general principles for usability evaluation during user interface design. Our ultimate goal is to extend the practice of usability evaluation methods to a wider audience (e.g., user interface designers and engineers) than Human Computer Interaction (HCI) professionals. To this end, we explored the degree to which Jakob Nielsen's ten usability heuristics are understood by professors and students in design and computer engineering. None of the subjects received formal training in HCI, though some may have had an awareness of some HCI principles. The study identified easy-to-understand heuristics, examined the reasons for the ambiguities in others, and discovered differences between the responses of professors and students to the heuristics. In the course of the study, the subjects showed an increased tendency to think in terms of user-centric design. Furthermore, the findings of this study offer suggestions for improving these heuristics to resolve ambiguities and to extend their practice to user interface designers and engineers.

Visual Cohesion Improvement Technology by Clustering of Abstract Object (추상화 객체의 클러스터링에 의한 가시적 응집도 향상기법)

  • Lee Jeong-Yeal;Kim Jeong-Ok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.4 s.32
    • /
    • pp.61-69
    • /
    • 2004
  • User interface design needs to support the complex interactions between humans and computers. It also requires comprehensive knowledge of many areas in order to collect customers' requirements and negotiate with them. The user interface designer needs to be a graphics expert, requirements analyst, system designer, programmer, technical expert, social scientist, and so on. Therefore, it is necessary to research a user interface design methodology that satisfies these various areas of expertise. In this paper, we propose four phases for visualizing the abstract objects of a business event: fold abstract object modeling, task abstract object modeling, transaction abstract object modeling, and form abstract object modeling. As a result, this modeling method enhances the visual cohesion of the user interface and helps unskilled designers develop a high-quality user interface.

Graphical User Interface in a Web-based Application System for Primary School Children -Application for the Creative Group Thinking System(CGTS)- (초등학생용 웹 기반 응용프로그램의 GUI에 관한 연구 -창의성 개발 지원 시스템(CGTS)의 적용을 중심으로-)

  • Han Kyung Don;Kim Mi Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.1 s.33
    • /
    • pp.53-58
    • /
    • 2005
  • Interfaces composed of the menus and icons of web-based application programs should fit the user's cognitive system. In particular, since the specific function of the application program in this study is aimed at supporting children's effective generation of ideas, it is necessary to build a rational web-use environment through careful research and analysis of the user interface. The purpose of this study was to improve the CGTS for elementary school students with respect to its menu arrangement, icon types, and the terms they prefer, given the largely text-based screen structure of the CGTS, which was developed to support the designer's ideas.

Design and Implementation of Graphic User Interface for multimedia device on Real-Time Operating System (실시간 운영체제 UbiFOS™에서 멀티미디어 기기를 위한 Graphic User Interface 설계 및 구현)

  • Lee, Won-Yong;Lee, Cheol-Hoon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10a
    • /
    • pp.399-403
    • /
    • 2006
  • Embedded systems running a real-time operating system (RTOS) have been developed for a variety of purposes over the past several decades. In early embedded systems, which lacked adequate graphics hardware, user interfaces were implemented very simply; with advances in technology, however, a GUI (Graphic User Interface) that users can operate easily needs to be applied. Such a GUI must satisfy the functions required of multimedia devices, such as a photo viewer, MP3 playback, and video playback, and, given the nature of embedded systems, it must also be lightweight. In this paper, we design and implement UbiFOS_GUI for multimedia devices on the real-time operating system UbiFOS™.

Glanceable and Informative WearOS User Interface for Kids and Parents

  • Kim, Siyeon;Yoon, Hyoseok
    • Journal of Multimedia Information System
    • /
    • v.8 no.1
    • /
    • pp.17-22
    • /
    • 2021
  • This paper proposes a wearable user interface intended for kids and parents using WearOS smartwatches. We first review what constitutes a kids' smartwatch and then design UI components for watchfaces to be used by kids and parents. Different UI components covering activity, education, voice search, app usage, video, location, health, and quick dial are described. These components are implemented either as complications or directly on watchfaces, and may require on-device standalone functions, cross-device communication, and an external database. We introduce a theme-based, amusing UI for kids, whereas simple and easily accessible components are recommended for the parents' watchface. To illustrate use cases, we present three scenarios for enhancing communication between parents and children. To show the feasibility and potential of our approach, we implement a proof-of-concept using commercial smartwatches, smartphones, and an external cloud database. Furthermore, the performance of checking app usage on different devices is presented, followed by a discussion of limitations and future work.

3D Visualization using Face Position and Direction Tracking (얼굴 위치와 방향 추적을 이용한 3차원 시각화)

  • Kim, Min-Ha;Kim, Ji-Hyun;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.173-175
    • /
    • 2011
  • In this paper, we present a user interface that can show 3D objects at various angles using the tracked 3D head position and orientation. In the implemented user interface, first, when the user's head moves left/right (X-axis) or up/down (Y-axis), the displayed objects are translated toward the user's eyes using the 3D head position. Second, when the user's head rotates about the X-axis (pitch) or the Y-axis (yaw), the displayed objects are rotated by the same amount as the user's head. Experimental results over a variety of user positions and orientations show good accuracy and responsiveness for 3D visualization (see the sketch below).
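
The sketch below illustrates, in Python with NumPy, the head-coupled transform described in the abstract: the scene is rotated by the tracked pitch/yaw and translated with the head's X/Y offset. The angle conventions and sample values are assumptions for demonstration, not the paper's parameters.

```python
# Hedged sketch of a head-coupled view transform: rotate the scene by the
# tracked pitch/yaw and translate it by the head offset. Conventions assumed.

import numpy as np

def rotation_pitch(pitch_rad: float) -> np.ndarray:
    """Rotation about the X-axis (head nodding up/down)."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rotation_yaw(yaw_rad: float) -> np.ndarray:
    """Rotation about the Y-axis (head turning left/right)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def head_coupled_transform(vertices: np.ndarray, head_xy: tuple, pitch: float, yaw: float) -> np.ndarray:
    """Apply the same pitch/yaw as the head and shift objects with the head's X/Y motion."""
    R = rotation_yaw(yaw) @ rotation_pitch(pitch)
    offset = np.array([head_xy[0], head_xy[1], 0.0])
    return vertices @ R.T + offset

if __name__ == "__main__":
    # unit cube vertices as a stand-in for the displayed 3D object
    cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], dtype=float)
    moved = head_coupled_transform(cube, head_xy=(0.2, -0.1), pitch=np.radians(10), yaw=np.radians(15))
    print(moved.round(3))
```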

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.834-848
    • /
    • 2013
  • To effectively use the many functions of a smartphone, many kinds of human-phone interfaces are used, such as touch, voice, and gesture. However, the most important of these, the touch interface, cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric relation between the user's face and the phone screen, and the low resolution of front-facing cameras. In this paper, a new eye tracking method is proposed to act as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. The proposed method has the following novelties. First, the camera specification and image resolution appropriate for smartphone-based gaze tracking are analyzed. Second, facial movement is allowed as long as one eye region is included in the image. Third, the method operates in both landscape and portrait screen modes. Fourth, only two LED reflection positions are used to calculate the gaze position, on the basis of the 2D geometric relation between the reflection rectangle and the screen. Fifth, a prototype mock-up module was built to confirm the feasibility of applying the method to an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800 and that the average hit ratio on a 5×4 icon grid was 94.6% (see the sketch below).
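
As a hedged illustration of gaze mapping from two LED reflections, the Python sketch below linearly maps the pupil position inside the glint rectangle to screen pixels. This proportional mapping and the sample coordinates are assumptions for demonstration; the paper's exact 2D geometric model and calibration are not reproduced.

```python
# Hedged sketch: map the pupil center, expressed relative to two LED glints,
# to screen coordinates with a simple proportional (linear) mapping.

def gaze_from_two_glints(pupil, glint_a, glint_b, screen_w=480, screen_h=800):
    """pupil, glint_a, glint_b: (x, y) image coordinates.
    The two glints are assumed (for this sketch) to correspond to opposite
    corners of the screen's reflection rectangle on the cornea."""
    (xa, ya), (xb, yb) = glint_a, glint_b
    px, py = pupil
    # normalized position of the pupil center inside the glint rectangle
    u = (px - xa) / (xb - xa)
    v = (py - ya) / (yb - ya)
    # clamp to the screen and scale to pixels
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return u * screen_w, v * screen_h

if __name__ == "__main__":
    # made-up image coordinates for demonstration
    print(gaze_from_two_glints(pupil=(205, 148), glint_a=(180, 120), glint_b=(260, 200)))
    # -> (150.0, 280.0)
```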