• Title/Summary/Keyword: HCI(Human Computer Interface)

Search Results: 117

Using GOMS for User Interface Evaluation (GOMS를 이용한 사용자 인터페이스 평가)

  • Jeon, Young-Joo;Back, Ji-Seung;Myung, Ro-Hae
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.1045-1052 / 2009
  • As computers have developed and spread rapidly, human-computer interaction (HCI) has become increasingly important, and interface development and evaluation are receiving more emphasis. This paper reviews GOMS, which is widely used for quantitative usability evaluation of computer systems, and examines its limitations and directions for improvement in interface evaluation. First, a computer work environment was selected as the research target and a task analysis was performed for a specific task. Based on the task analysis results, the task was modeled with NGOMSL. The modeling revealed the composition of the task operators and allowed the total execution time and learning time to be predicted. An empirical test was then conducted to compare how closely the GOMS modeling results matched actual human performance; the result showed a large gap between the execution time predicted by the GOMS model and the total execution time measured in the experiment. To reduce this gap and model the process more like actual user performance, the GOMS model was revised on the basis of two assumptions. This study confirms that GOMS modeling can be used as an effective tool for relative usability evaluation of computer systems. (An illustrative sketch of this style of time estimate follows this entry.)

  • PDF
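
The NGOMSL prediction described in the abstract above comes down to summing unit operator times for execution and scaling learning time with the number of method statements. A minimal sketch of that style of estimate follows; the operator set, the unit times (standard KLM/NGOMSL textbook values), the 17 s-per-statement rule of thumb, and the example task are illustrative assumptions, not the paper's actual model.

```python
# Minimal KLM/NGOMSL-style estimate: execution time = sum of operator times,
# learning time ~ (number of method statements) * time-per-statement.
# The unit times below are commonly cited textbook values, NOT the values
# used in the paper (the abstract does not give them).

OPERATOR_TIME_S = {
    "K": 0.28,  # press a key or button
    "P": 1.10,  # point with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental act (prepare/decide)
}

LEARNING_TIME_PER_STATEMENT_S = 17.0  # rough NGOMSL rule of thumb

def predict_execution_time(operators: list[str]) -> float:
    """Sum the unit times of the operator sequence for one task."""
    return sum(OPERATOR_TIME_S[op] for op in operators)

def predict_learning_time(n_statements: int) -> float:
    """Learning time grows with the number of NGOMSL method statements."""
    return n_statements * LEARNING_TIME_PER_STATEMENT_S

if __name__ == "__main__":
    # Hypothetical task: reach for mouse, point at a menu, click, think, type 4 keys.
    task = ["H", "P", "K", "M", "K", "K", "K", "K"]
    print(f"predicted execution time: {predict_execution_time(task):.2f} s")
    print(f"predicted learning time:  {predict_learning_time(12):.0f} s")
```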

Dynamic Gesture Recognition for the Remote Camera Robot Control (원격 카메라 로봇 제어를 위한 동적 제스처 인식)

  • Lee Ju-Won;Lee Byung-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.7 / pp.1480-1487 / 2004
  • This study proposes a novel gesture recognition method for remote camera robot control. To recognize dynamic gestures, the preprocessing step is image segmentation. Conventional methods for effective object segmentation require a great deal of information about the object (hand) image, and in the recognition step they require many features for each object. To improve on these problems, this study proposes a new method for recognizing dynamic hand gestures: the MMS (Max-Min Search) method to segment the object image, the MSM (Mean Space Mapping) and COG (Center Of Gravity) methods to extract image features, and an MLPNN (Multi-Layer Perceptron Neural Network) structure to recognize the dynamic gestures. In the experiments, the recognition rate of the proposed method was above 90%, which shows that it can serve as an HCI (Human Computer Interface) device for remote robot control. (An illustrative sketch of a COG-plus-MLP pipeline follows this entry.)
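
The pipeline sketched in the abstract above (segment the hand, reduce each frame to COG-style features, classify the sequence with an MLP) can be outlined roughly as follows. This is a generic sketch under assumed inputs, not the paper's MMS/MSM implementation; the mask sizes, feature layout, number of classes, and untrained MLP weights are placeholders.

```python
import numpy as np

def center_of_gravity(mask: np.ndarray) -> tuple[float, float]:
    """COG of a binary hand mask: mean row/column of the foreground pixels."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def gesture_trajectory_features(masks: list[np.ndarray]) -> np.ndarray:
    """Stack the per-frame COGs into one feature vector for the whole gesture."""
    cogs = [center_of_gravity(m) for m in masks]
    return np.asarray(cogs, dtype=np.float32).ravel()

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP: tanh hidden units, softmax output."""
    h = np.tanh(x @ w1 + b1)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical gesture: 8 frames of 64x64 masks with a blob drifting right.
    masks = []
    for t in range(8):
        m = np.zeros((64, 64), dtype=np.uint8)
        m[20:30, 10 + 4 * t:20 + 4 * t] = 1
        masks.append(m)
    x = gesture_trajectory_features(masks)           # 16-dim feature vector
    w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)   # untrained weights, demo only
    w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)    # 4 hypothetical gesture classes
    print("class probabilities:", mlp_forward(x, w1, b1, w2, b2))
```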

An interface development of an auto PC using task analysis and scenario-based design (직무 분석과 시나리오 기반 설계를 이용한 Auto PC 인터페이스 개발)

  • Park, Jun-Ho;Jeon, Myoung-Hoon;Ahn, Jung-Hee
    • 한국HCI학회:학술대회논문집 / 2007.02b / pp.548-553 / 2007
  • As digital convergence and ubiquitous environments become commonplace, computer systems are being embedded in most electronic products. Product interfaces are becoming more complex, and their use is becoming harder to learn. The Auto PC is a representative example: a convergence product that supports navigation, music, video, DMB, PC, and other functions in the in-vehicle driving environment. Designing the user interface of such a product requires research on scenario design that captures the actions that occur while the product is used and how they flow. In ergonomics and HCI, many methodologies and tools for designing and evaluating user interfaces have been introduced, and a variety of studies have sought to select and combine these methods and tools to fit the situation at hand. In this study, we applied scenario-based design (SBD), a representative human-centered design methodology, together with task analysis to the interface design of an Auto PC. A process that integrates the two methodologies was applied, and as a result the weaknesses of each methodology could be complemented by the other. Follow-up work will complete a prototype and verify the designed interface through usability evaluation and satisfaction surveys.

  • PDF

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering / v.20 no.1 / pp.140-152 / 2015
  • HCI (Human Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been intensively studied. However, a multimodal interface faces the difficulty that the modalities have different data formats and are hard to make cooperate efficiently. To solve this problem, a multimodal communicator is introduced, based on EMMA (Extensible Multimodal Annotation Markup language) and the MMI (Multimodal Interaction Framework) of the W3C (World Wide Web Consortium) standards. This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability for other modalities. Experimental results show the multimodal communicator operating with two modalities, eye tracking and gesture recognition, in a map browsing scenario. (An illustrative sketch of the interaction-manager message flow follows this entry.)
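
The W3C MMI architecture named in the abstract above separates modality components, an interaction manager, and a presentation component. Below is a deliberately simplified sketch of that message flow in plain Python; the class names, fields, and confidence threshold are illustrative assumptions and do not follow the actual EMMA/MMI schemas.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Interpretation:
    """EMMA-like interpretation: one result produced by a modality component."""
    modality: str          # e.g. "eye_tracking", "gesture"
    action: str            # normalized action name, e.g. "zoom_in", "focus"
    confidence: float      # 0.0 .. 1.0
    payload: dict = field(default_factory=dict)

class InteractionManager:
    """Routes interpretations from modality components to the presentation
    component, passing along only sufficiently confident ones."""

    def __init__(self, present: Callable[[Interpretation], None]):
        self.present = present

    def receive(self, interp: Interpretation) -> None:
        if interp.confidence >= 0.5:        # assumed acceptance threshold
            self.present(interp)

if __name__ == "__main__":
    def presentation_component(interp: Interpretation) -> None:
        print(f"[map view] {interp.action} via {interp.modality} ({interp.payload})")

    im = InteractionManager(presentation_component)
    # Hypothetical map-browsing scenario: gaze fixes the target, gesture zooms.
    im.receive(Interpretation("eye_tracking", "focus", 0.9, {"lat": 36.35, "lon": 127.38}))
    im.receive(Interpretation("gesture", "zoom_in", 0.8, {"level": 2}))
```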

The Modified Block Matching Algorithm for a Hand Tracking of an HCI system (HCI 시스템의 손 추적을 위한 수정 블록 정합 알고리즘)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services / v.4 no.4 / pp.9-14 / 2003
  • A GUI (graphical user interface) has been the dominant platform for HCI (human computer interaction). GUI-based interaction has made computers simpler and easier to use. However, GUI-based interaction does not easily support the range of interaction needed to meet users' needs for something natural, intuitive, and adaptive. In this paper, a modified BMA (block matching algorithm) is proposed to track a hand in an image sequence and recognize it in each video frame, in order to replace the mouse as a pointing device for virtual reality. An HCI system running at 30 frames per second is realized. The modified BMA estimates the position of the hand and performs segmentation using the orientation of motion and the color distribution of the hand region for real-time processing. The experimental results show that the modified BMA with the YCbCr (luminance Y, component blue, component red) color coordinate achieves real-time processing while maintaining the recognition rate. Hand tracking by the modified BMA can be applied to virtual reality, games, or an HCI system for the disabled. (An illustrative sketch of a basic block matching search follows this entry.)

  • PDF
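
A plain block matching step of the kind the paper modifies can be written as an exhaustive SAD (sum of absolute differences) search over a small window. The sketch below shows only this baseline BMA under assumed block and search sizes; the paper's modifications (YCbCr color guidance and motion orientation) are not reproduced.

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def track_block(prev: np.ndarray, curr: np.ndarray,
                top: int, left: int, size: int = 16, search: int = 8):
    """Find the displacement of the size x size block at (top, left) in `prev`
    by exhaustive SAD search within +/- `search` pixels in `curr`."""
    ref = prev[top:top + size, left:left + size]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            cost = sad(ref, curr[y:y + size, x:x + size])
            if cost < best:
                best, best_dy, best_dx = cost, dy, dx
    return best_dy, best_dx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
    curr = np.roll(prev, shift=(3, -2), axis=(0, 1))   # synthetic motion of (+3, -2)
    print(track_block(prev, curr, top=40, left=60))    # expected (3, -2)
```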

A Study on Efficient Design of Surveillance RADAR Interface Control Unit in Naval Combat System

  • Dong-Kwan Kim;Dong-Han Jung;Won-Seok Jang;Young-San Kim;Hyo-Jo Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.125-134 / 2023
  • In this paper, we propose an efficient surveillance RADAR (RAdio Detection And Ranging) interface control unit (ICU) design for the naval combat system. The proposed design applies a standardized architecture to modules that can be shared across ship combat system software. An error detection function for each link was implemented to speed up the recognition of disconnections. Messages that used to be sent periodically for human-computer interaction (HCI) are now transmitted only when the datagram changes, which reduces the processing load of the console. The proposed design also supplements the radar with a waterfall scope and time-limited splash recognition for the hit check and zeroing of shots, for cases where radar processing capability is low because a low-cost commercial radar is adopted on the ship. As a result, it is easier for the operator to determine whether a shot hits, the probability of wrong recognition is reduced, and the radar's resources can be used more effectively. (An illustrative sketch of the send-on-change gating follows this entry.)
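
One of the load reductions described above is transmitting HCI messages only when the datagram content changes instead of on every periodic tick. A minimal sketch of that gating idea follows; the hashing approach, class name, and payload format are assumptions for illustration, not the actual ICU message handling.

```python
import hashlib

class ChangeGatedSender:
    """Forward a datagram to the console only when its content has changed,
    instead of re-sending it on every periodic tick."""

    def __init__(self, transmit):
        self.transmit = transmit          # callable that actually sends bytes
        self._last_digest = None

    def on_tick(self, datagram: bytes) -> bool:
        digest = hashlib.sha256(datagram).digest()
        if digest == self._last_digest:
            return False                  # unchanged: skip, save console load
        self._last_digest = digest
        self.transmit(datagram)
        return True

if __name__ == "__main__":
    sent = []
    sender = ChangeGatedSender(sent.append)
    for payload in [b"track:001 brg=045", b"track:001 brg=045", b"track:001 brg=046"]:
        sender.on_tick(payload)
    print(len(sent), "of 3 datagrams transmitted")   # -> 2 of 3
```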

The new paths of user interface #1 - The non-verbal communication for the interactive media - (사용자 인터페이스의 새로운 길 #1 - 인터렉티브 미디어를 위한 비언어적 의사소통 방법 -)

  • 류제성
    • Archives of design research / v.13 no.3 / pp.49-58 / 2000
  • We commonly use computer interfaces in a generalized form. However, user requirements vary, and some users cannot work in the general environment. For these requirements, this research suggests non-verbal communication: applying mouth movements, as a form of human behavior, to interaction with the computer. This was demonstrated in three forms: first, a drawing application; second, an arcade game; third, an interactive book. In conclusion, we confirmed that the suggestion of this research could be effectively used for the development of the human computer interface.

  • PDF

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human Robot Interface) as well as in HCI (Human Computer Interaction). Using facial expressions, a system can produce reactions corresponding to the user's emotional state, and service agents such as intelligent robots can infer which services to offer the user. In this article, we address expressive face modeling using an advanced active appearance model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed through the eyes and mouth. To recognize emotion from a facial image, feature points such as Ekman's Action Units (AU) need to be extracted. The Active Appearance Model (AAM) is one of the commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstruction parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial parameters of the AAM from the reconstructed facial model. The distance error between the model and the target contour is then reduced by adjusting the model parameters. Finally, after several iterations, the model matched to the facial feature outline is obtained and used to recognize the facial emotion with the Bayesian Network. (An illustrative sketch of Bayesian inference over AU evidence follows this entry.)
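
A Bayesian network over Action Unit evidence can, in its simplest star-shaped (naive Bayes) form, be written as below. The AU subset, conditional probabilities, and uniform prior are made-up placeholders for illustration, not the network structure or parameters learned in the paper.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
AUS = ["AU4", "AU6", "AU9", "AU12", "AU15", "AU26"]  # small illustrative subset

# P(AU active | emotion): made-up numbers for illustration only.
P_AU_GIVEN_EMOTION = np.array([
    #  AU4   AU6   AU9   AU12  AU15  AU26
    [0.90, 0.10, 0.30, 0.05, 0.40, 0.20],  # anger
    [0.60, 0.10, 0.85, 0.05, 0.50, 0.10],  # disgust
    [0.70, 0.10, 0.10, 0.05, 0.30, 0.70],  # fear
    [0.05, 0.90, 0.05, 0.95, 0.05, 0.30],  # happiness
    [0.60, 0.05, 0.10, 0.05, 0.85, 0.10],  # sadness
    [0.10, 0.20, 0.05, 0.10, 0.05, 0.90],  # surprise
])
PRIOR = np.full(len(EMOTIONS), 1 / len(EMOTIONS))

def infer_emotion(au_active: dict[str, bool]) -> dict[str, float]:
    """Posterior over emotions given observed AU activations (naive Bayes:
    a star-shaped Bayesian network with the emotion as the single parent)."""
    post = PRIOR.copy()
    for j, au in enumerate(AUS):
        p = P_AU_GIVEN_EMOTION[:, j]
        post *= p if au_active.get(au, False) else (1.0 - p)
    post /= post.sum()
    return dict(zip(EMOTIONS, post.round(3)))

if __name__ == "__main__":
    # AU6 (cheek raiser) + AU12 (lip corner puller) -> should favor happiness.
    print(infer_emotion({"AU6": True, "AU12": True}))
```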

The evaluation of Word Processors by Learning Model (학습모형을 이용한 워드프로세서의 평가방법 개발)

  • 손일문;홍상우;이상철
    • Journal of Korean Society of Industrial and Systems Engineering / v.20 no.41 / pp.203-212 / 1997
  • The interface of computer software has to promote human-computer interaction. This quality of an interface should be evaluated with regard to the user's information processing. The usability of an interface is one of the main components of its quality, and it is directly concerned with learnability, especially when users are first starting to use a piece of software. In this paper, word processors, which are widely used in office automation (OA) environments, are studied with respect to the menu structure of the interface. A cognitive menu structure is suggested based on the user's conceptual network of the main functions of a word processor. Two word processors are selected to compare against the cognitive menu structure and to evaluate their learnability with a learning model. (An illustrative learning-curve sketch follows this entry.)

  • PDF
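
The abstract above compares learnability using a learning model without saying which one; a common choice is the power law of practice, T_n = T_1 · n^(-a). The sketch below fits that curve to hypothetical trial times for two menu structures; both the data and the choice of model are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fit_power_law(trials: np.ndarray, times: np.ndarray) -> tuple[float, float]:
    """Fit T_n = T_1 * n**(-a) by linear regression in log-log space.
    Returns (T_1, a); a larger `a` means the interface is learned faster."""
    slope, intercept = np.polyfit(np.log(trials), np.log(times), 1)
    return float(np.exp(intercept)), float(-slope)

if __name__ == "__main__":
    trials = np.arange(1, 9)
    # Hypothetical task-completion times (seconds) over repeated trials
    # for two menu structures of a word processor.
    menu_a = np.array([60.0, 47.0, 41.0, 36.0, 34.0, 32.0, 30.5, 29.5])
    menu_b = np.array([60.0, 52.0, 48.0, 46.0, 44.0, 43.0, 42.0, 41.5])
    for name, times in [("menu A", menu_a), ("menu B", menu_b)]:
        t1, a = fit_power_law(trials, times)
        print(f"{name}: T1={t1:.1f}s, learning exponent a={a:.2f}")
```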

A Study of Hand Gesture Recognition for Human Computer Interface (컴퓨터 인터페이스를 위한 Hand Gesture 인식에 관한 연구)

  • Chang, Ho-Jung;Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.3041-3043 / 2000
  • The GUI (graphical user interface) has been the dominant platform for HCI (human computer interaction). The GUI-based style of interaction has made computers simpler and easier to use. However, a GUI will not easily support the range of interaction necessary to meet users' needs for something natural, intuitive, and adaptive. In this paper we study an approach to tracking a hand in an image sequence and recognizing it in each video frame, replacing the mouse as a pointing device for virtual reality. An algorithm for real-time processing is proposed that estimates the position of the hand and performs segmentation, considering the orientation of motion and the color distribution of the hand region. (An illustrative sketch of color-based hand segmentation follows this entry.)

  • PDF
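
The color-distribution part of the segmentation mentioned above is often approximated with a fixed chrominance threshold. The sketch below builds a hand mask from a generic BT.601 YCbCr conversion and rough textbook Cb/Cr skin ranges; these values, and the synthetic test frame, are assumptions rather than the paper's learned color distribution.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to YCbCr (ITU-R BT.601, full range)."""
    rgb = rgb.astype(np.float32)
    y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128.0 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128.0 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Binary hand/skin mask from fixed Cb/Cr ranges (rough textbook bounds)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)).astype(np.uint8)

if __name__ == "__main__":
    frame = np.zeros((64, 64, 3), dtype=np.uint8)
    frame[16:48, 16:48] = (224, 172, 140)        # skin-like patch
    mask = skin_mask(frame)
    ys, xs = np.nonzero(mask)
    print("hand pixels:", mask.sum(), "centroid:", (ys.mean(), xs.mean()))
```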