• Title/Abstract/Keywords: Human/System Interface

Search results: 783 items

Eye as a Human/Computer Interface Device

  • 박경수;이경태
    • Ergonomics Society of Korea: Conference Proceedings / Proceedings of the 1996 Spring Conference of the Ergonomics Society of Korea / pp.36-47 / 1996
  • By integrating eye- and head-position monitoring devices, the present authors developed an eye-controlled human/computer interface based on the line-of-sight and an intentional blink to invoke commands. An existing calibration method was also modified to reduce the visual angle between the target center and the intersection point of the derived line-of-sight. The modified calibration method allowed 108 or more command blocks to be displayed on a 14-inch monitor with a target acquisition probability (hit rate) of 98% when viewed from a distance of 500 mm. An active triggering method using an intentional blink was proposed and shown to be a feasible and efficient way to invoke commands, with a total triggering time of 0.8 s or less. The system could be used as a new human/computer interface by able-bodied users as well as handicapped individuals.

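The entry above describes two concrete mechanisms: mapping the derived line-of-sight onto a grid of command blocks, and invoking a command with an intentional blink. A minimal sketch of that idea follows; it is not the authors' implementation, and the 12 x 9 grid (108 blocks), the assumed 14-inch screen dimensions, and the 0.4 s blink threshold are illustrative assumptions.

```python
# Toy gaze-to-command interface: map an estimated gaze point to one of 108
# command blocks and trigger the block's command on an intentional blink.
# Grid size, screen dimensions, and blink threshold are assumptions.

SCREEN_W_MM, SCREEN_H_MM = 285.0, 215.0   # assumed active area of a 14-inch monitor
GRID_COLS, GRID_ROWS = 12, 9              # 12 x 9 = 108 command blocks
INTENTIONAL_BLINK_S = 0.4                 # assumed duration separating deliberate blinks

def gaze_to_block(x_mm, y_mm):
    """Return the index (0..107) of the command block under the gaze point, or None."""
    if not (0.0 <= x_mm < SCREEN_W_MM and 0.0 <= y_mm < SCREEN_H_MM):
        return None                       # gaze is off-screen
    col = int(x_mm / SCREEN_W_MM * GRID_COLS)
    row = int(y_mm / SCREEN_H_MM * GRID_ROWS)
    return row * GRID_COLS + col

def detect_command(samples):
    """samples: iterable of (t_sec, x_mm, y_mm, eye_closed) tuples.
    Returns the block selected by the first intentional blink, if any."""
    closed_since = None
    last_block = None
    for t, x, y, closed in samples:
        if not closed:
            last_block = gaze_to_block(x, y)
            closed_since = None
        elif closed_since is None:
            closed_since = t
        elif t - closed_since >= INTENTIONAL_BLINK_S:
            return last_block             # command invoked at the last fixated block
    return None

if __name__ == "__main__":
    stream = [(0.0, 100, 50, False), (0.1, 101, 51, False),
              (0.2, 101, 51, True), (0.7, 101, 51, True)]
    print(detect_command(stream))         # prints the block index under (101 mm, 51 mm)
```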

Comprehensive architecture for intelligent adaptive interface in the field of single-human multiple-robot interaction

  • Ilbeygi, Mahdi;Kangavari, Mohammad Reza
    • ETRI Journal / Vol. 40, No. 4 / pp.483-498 / 2018
  • Nowadays, with progress in robotics, the design and implementation of mechanisms for low-workload human-robot interaction is inevitable. One notable challenge in this field is the interaction between a single human and a group of robots. Therefore, we propose a new comprehensive framework for single-human multiple-robot remote interaction that can form an efficient intelligent adaptive interaction (IAI). Our interaction system can thoroughly adapt itself to changes in interaction context and user states. Some advantages of the devised IAI framework are a lower workload, a higher level of situation awareness, and more efficient interaction. In this paper, we introduce a new IAI architecture as our comprehensive mechanism. In order to examine the architecture in practice, we implemented the proposed IAI to control a group of unmanned aerial vehicles (UAVs) under different scenarios. The results show that the devised IAI framework can effectively reduce human workload, raise the level of situation awareness, and concurrently increase the mission completion percentage of the UAVs.
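
A purely illustrative sketch of the adaptation idea described above, and not the ETRI Journal architecture: a scalar workload estimate selects a display detail level and an autonomy level. The thresholds, level names, and the workload scale are assumptions.

```python
# Illustrative adaptation rule: choose presentation detail and delegated
# autonomy from an estimated operator workload. Thresholds and level names
# are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class AdaptationDecision:
    detail: str      # how much per-UAV detail to render
    autonomy: str    # how much decision-making is delegated to the UAVs

def adapt(workload, n_uavs):
    """workload: 0.0 (idle) .. 1.0 (overloaded), e.g. from task load or physiology."""
    if workload > 0.75 or n_uavs > 8:
        return AdaptationDecision(detail="aggregate-only", autonomy="high")
    if workload > 0.4:
        return AdaptationDecision(detail="per-group", autonomy="medium")
    return AdaptationDecision(detail="per-uav", autonomy="manual-confirm")

print(adapt(workload=0.6, n_uavs=5))   # -> per-group detail, medium autonomy
```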

HUMAN-MACHINE INTERACTION IN NUCLEAR POWER PLANTS

  • YOSHIKAWA HIDEKAZU
    • Nuclear Engineering and Technology / Vol. 37, No. 2 / pp.151-158 / 2005
  • Advanced nuclear power plants are generally large, complex systems automated by computers. Whenever a rare plant emergency occurs, the plant operators must cope with the emergency under severe mental stress without committing any fatal errors. Furthermore, the operators must train to improve and maintain their ability to cope with every conceivable situation, though it is almost impossible to be fully prepared for an infinite variety of situations. In view of the limited capability of operators in emergency situations, there has been a new approach to preventing human error caused by improper human-machine interaction. This new approach has been triggered by the introduction of advanced information systems that help operators recognize and counteract plant emergencies. In this paper, the adverse effects of automation in human-machine systems are explained. The discussion then focuses on how to configure a joint human-machine system for ideal human-machine interaction. Finally, a new proposal is made on how to organize the technologies that recognize the different states of such a joint human-machine system.

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / Vol. 7, No. 1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart-environment scenario in which a user interacts with digital information embedded in physical objects through gestures.
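
The abstract above names the ingredients of recognition: a gesture observation fused with two spatial contexts, gesture volume and gesture target, inside a probabilistic framework. A naive-Bayes-style fusion along those lines is sketched below; the action set, context values, and probability tables are invented for illustration and are not the paper's model.

```python
# Naive-Bayes-style fusion of a recognized gesture with spatial context
# (gesture volume and gesture target) to choose a system action.
# All actions, context values, and probabilities are illustrative.

ACTIONS = ["turn_on_lamp", "open_door"]
PRIOR = {"turn_on_lamp": 0.5, "open_door": 0.5}

# P(observation | action) for each evidence source
P_GESTURE = {"turn_on_lamp": {"point": 0.7, "wave": 0.3},
             "open_door":    {"point": 0.4, "wave": 0.6}}
P_VOLUME  = {"turn_on_lamp": {"small": 0.8, "large": 0.2},
             "open_door":    {"small": 0.3, "large": 0.7}}
P_TARGET  = {"turn_on_lamp": {"lamp": 0.9, "door": 0.1},
             "open_door":    {"lamp": 0.2, "door": 0.8}}

def recognize(gesture, volume, target):
    """Return the most probable action and its posterior probability."""
    scores = {a: PRIOR[a] * P_GESTURE[a][gesture] * P_VOLUME[a][volume] * P_TARGET[a][target]
              for a in ACTIONS}
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

print(recognize("point", "small", "lamp"))   # -> ('turn_on_lamp', ~0.95)
```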

Real-time BCI for Imagery Movement and Classification for Uncued EEG Signal

  • 강성욱;전성찬
    • The HCI Society of Korea: Conference Proceedings / The HCI Society of Korea 2009 Conference / pp.642-645 / 2009
  • A Brain-Computer Interface (BCI) is a communication pathway between devices (computers) and the human brain. It processes brain signals on a real-time basis and extracts information about what the human brain is doing. In this work, we develop an EEG BCI system using common spatial pattern (CSP) feature extraction and a classifier based on Fisher's linear discriminant analysis (FLDA). Two-class EEG motor imagery datasets, both cued and uncued, are tested to verify the system's feasibility.

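The pipeline named in the abstract, CSP feature extraction followed by a Fisher linear discriminant classifier, is standard in two-class motor imagery BCI. The sketch below follows the textbook formulation on synthetic data rather than the authors' exact implementation: CSP filters from a generalized eigendecomposition of the class covariance matrices, log-variance features, and a two-class Fisher discriminant.

```python
# Textbook CSP + Fisher LDA pipeline for two-class motor imagery EEG,
# demonstrated on synthetic (channels x samples) trials.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters from two lists of (channels x samples) trials."""
    cov = lambda trials: np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w; the extreme eigenvectors
    # maximize variance for one class while minimizing it for the other.
    vals, vecs = eigh(Ca, Ca + Cb)
    picks = np.concatenate([np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))])
    return vecs[:, picks].T                          # (2*n_pairs, channels)

def log_var_features(W, trial):
    """Log of normalized variance of the CSP-filtered trial (a common BCI feature)."""
    v = np.var(W @ trial, axis=1)
    return np.log(v / v.sum())

def fisher_lda(X, y):
    """Return (w, b) of a two-class Fisher linear discriminant."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w, -w @ (m0 + m1) / 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 8-channel trials; class B has extra variance on channel 0.
    trials_a = [rng.standard_normal((8, 250)) for _ in range(30)]
    scale = np.r_[3.0, np.ones(7)][:, None]
    trials_b = [rng.standard_normal((8, 250)) * scale for _ in range(30)]
    W = csp_filters(trials_a, trials_b)
    X = np.array([log_var_features(W, t) for t in trials_a + trials_b])
    y = np.array([0] * 30 + [1] * 30)
    w, b = fisher_lda(X, y)
    pred = (X @ w + b > 0).astype(int)
    print("training accuracy:", (pred == y).mean())
```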

Human Robot Interaction via Evolutionary Network Intelligence

  • Yamaguchi, Toru
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICCAS 2002 / pp.49.2-49 / 2002
  • This paper describes the configuration of a multi-agent system that can recognize human intentions. This system constructs ontologies of human intentions and enables knowledge acquisition and sharing between intelligent agents operating in different environments. This is achieved by using a bi-directional associative memory network. The process of intention recognition is based on fuzzy association inferences. This paper shows the process of information sharing by using ontologies. The purpose of this research is to create human-centered systems that can provide a natural interface in their interaction with people.

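The entry above relies on a bidirectional associative memory (BAM) network to associate observations with intentions, with fuzzy association used for inference. The sketch below shows only the classical bipolar BAM recall step (store pattern pairs in a correlation matrix, then iterate to a stable pair); the stored observation-to-intention codes are invented, and the fuzzy-inference layer is not modeled.

```python
# Classical bipolar BAM: store pattern pairs in W = sum_i x_i y_i^T and recall
# by alternating x -> sgn(W y), y -> sgn(W^T x) until the pair is stable.
# The stored "observation -> intention" codes are invented for illustration.
import numpy as np

def bipolar(v, prev):
    """Threshold to {-1, +1}; keep the previous value where the net input is zero."""
    return np.where(v > 0, 1, np.where(v < 0, -1, prev))

def bam_recall(W, x, max_iters=20):
    """Recall the stored (x, y) pair associated with a possibly noisy input x."""
    y = bipolar(W.T @ x, np.ones(W.shape[1], dtype=int))
    for _ in range(max_iters):
        x_new = bipolar(W @ y, x)
        y_new = bipolar(W.T @ x_new, y)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Two stored associations between 6-bit "observation" codes and 4-bit "intention" codes.
X = np.array([[1, -1,  1, -1,  1, -1],
              [1,  1, -1, -1,  1,  1]])
Y = np.array([[1,  1, -1, -1],
              [1, -1,  1, -1]])
W = sum(np.outer(xi, yi) for xi, yi in zip(X, Y))   # 6 x 4 correlation matrix

noisy = np.array([1, -1, 1, -1, 1, 1])              # pattern 0 with its last bit flipped
x_rec, y_rec = bam_recall(W, noisy)
print(y_rec)                                        # -> intention code of pattern 0: [ 1  1 -1 -1]
```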

Technical Trend of the Lower Limb Exoskeleton System for the Performance Enhancement

  • 이희돈;한창수
    • Journal of Institute of Control, Robotics and Systems / Vol. 20, No. 3 / pp.364-371 / 2014
  • The purpose of this paper is to review recent developments in lower limb exoskeletons. The exoskeleton is a human-robot cooperation system that enhances the wearer's performance in various environments: the human operator is in charge of position control, contextual perception, and motion signal generation, supported by the robot's artificial intelligence. The system takes the form of a mechanical structure attached to the exterior of the human body to augment the wearer's muscular power. The paper gives an overview of the development history of exoskeleton systems and their three main application areas: the military/industrial, medical/rehabilitation, and social welfare fields. Key technologies of exoskeleton systems are then reviewed from the viewpoints of the exoskeleton mechanism, the human-robot interface, and human-robot cooperative control.

A Usability Evaluation Method for Speech Recognition Interfaces

  • 한성호;김범수
    • Journal of the Ergonomics Society of Korea / Vol. 18, No. 3 / pp.105-125 / 1999
  • As speech is the most natural human communication medium, using it offers many advantages. Currently, most computer user interfaces are of the mouse/keyboard type, but speech recognition interfaces are expected to replace them or at least to supplement them. Despite these advantages, speech recognition interfaces are not widely used because of technical difficulties such as limited recognition accuracy and slow response time. Nevertheless, it is important to optimize human-computer system performance by improving usability. This paper presents a set of guidelines for designing speech recognition interfaces and provides a method for evaluating their usability. A total of 113 guidelines are suggested to improve the usability of speech recognition interfaces. The evaluation method consists of four major procedures: user interface evaluation, function evaluation, vocabulary estimation, and recognition speed/accuracy evaluation. Each procedure is described along with appropriate techniques for efficient evaluation.

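The last of the four procedures, recognition speed/accuracy evaluation, reduces to simple measures over logged trials. The snippet below computes command-level recognition accuracy and mean response time from an assumed trial-log format; it is an illustration, not the paper's evaluation protocol.

```python
# Minimal recognition speed/accuracy summary over evaluation trials.
# Each trial records the spoken target, the recognized command, and the time
# from the end of the utterance to the system response. The log format is assumed.
from statistics import mean

trials = [
    {"target": "open file",  "recognized": "open file",  "response_s": 0.9},
    {"target": "save file",  "recognized": "save file",  "response_s": 1.1},
    {"target": "close file", "recognized": "close tile", "response_s": 1.4},
]

accuracy = mean(int(t["target"] == t["recognized"]) for t in trials)
mean_rt = mean(t["response_s"] for t in trials)
print(f"recognition accuracy: {accuracy:.0%}, mean response time: {mean_rt:.2f} s")
```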

A Study on the Customers' Emotional DB and Interface Design for Sports Shoes

  • 윤훈용;임기용
    • Journal of Society of Korea Industrial and Systems Engineering / Vol. 25, No. 3 / pp.34-40 / 2002
  • In the past, the important factors for customers buying sports shoes were price and comfort. These days, however, sports shoes are considered part of fashion and may fail to satisfy customers because their emotional preferences have not been properly considered in the design phase. Customers' desire and expectation for unique designs are growing. Thus, sports shoes need to be developed that not only account for anthropometric foot characteristics but also satisfy customers' emotional preferences. In this study, basic data on customers' emotional preferences for sports shoe designs were obtained using a human sensibility ergonomics approach and compiled into a database. We also developed an interface that customers can use to select their emotionally preferred sports shoes.

Man-machine interface using eyeball movement

  • Takami, Osamu;Morimoto, Kazuaki;Ochiai, Tsumoru;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / Proceedings of the 10th Korea Automation Control Conference (KACC); Seoul, Korea; 23-25 Oct. 1995 / pp.195-198 / 1995
  • In this paper, we propose a computer interface device for handicapped people. The input signals of the interface device are the movements of the user's eyeballs and head. These movements are detected by an image processing system. One feature of our system is that the operator is not obliged to wear any burdensome device such as glasses or a helmet. The sensing performance of the eyeball and head image processing is evaluated through experiments. The experimental results demonstrate the applicability of our system.

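The abstract above describes detecting eyeball and head movements with an image processing system that requires no worn device. As a generic illustration of non-contact pupil localization, and not the authors' 1995 system, the sketch below thresholds the dark pupil region of an eye image and returns its centroid using OpenCV; the threshold value and preprocessing are assumptions, and a real system would also need eye-region detection, head-pose handling, and calibration.

```python
# Generic non-contact pupil localization by dark-region thresholding (OpenCV 4.x).
# The threshold and preprocessing are illustrative only.
import cv2

def locate_pupil(eye_image_bgr, dark_threshold=40):
    """Return the (x, y) centroid of the largest dark blob (assumed pupil), or None."""
    gray = cv2.cvtColor(eye_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)          # largest dark blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # centroid in pixels

# usage (hypothetical file name): print(locate_pupil(cv2.imread("eye_frame.png")))
```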