• Title/Summary/Keyword: Human-computer interface

A Novel Computer Human Interface to Remotely Pick up Moving Human's Voice Clearly by Integrating Real-time Face Tracking and Microphones Array

  • Hiroshi Mizoguchi;Takaomi Shigehara;Yoshiyasu Goto;Hidai, Ken-ichi;Taketoshi Mishima
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.75-80
    • /
    • 1998
  • This paper proposes a novel computer-human interface, named Virtual Wireless Microphone (VWM), which utilizes computer vision and signal processing. It integrates real-time face tracking and sound-signal processing. VWM is intended as a speech-input method for human-computer interaction, especially for an autonomous intelligent agent that interacts with humans, such as a digital secretary. Utilizing VWM, the agent can clearly hear its human master's voice remotely, as if a wireless microphone were placed just in front of the master.

  • PDF
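The integration described above — steering a microphone array toward a visually tracked speaker — is classically realized as a delay-and-sum beamformer. The sketch below is a minimal illustration under assumed geometry and sampling parameters, not the paper's actual pipeline:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s (assumed)
FS = 16000              # sampling rate in Hz (assumed)

def delay_and_sum(signals, mic_positions, face_position):
    """Steer a microphone array toward a tracked face position.

    signals: (n_mics, n_samples) array of synchronized mic recordings
    mic_positions: (n_mics, 3) microphone coordinates in metres
    face_position: (3,) position reported by the face tracker
    """
    # Propagation delay from the face to each microphone, in samples,
    # relative to the closest microphone.
    dists = np.linalg.norm(mic_positions - face_position, axis=1)
    delays = (dists - dists.min()) / SPEED_OF_SOUND * FS
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, np.round(delays).astype(int)):
        # Advance each channel so the wavefront from the face aligns, then sum.
        out[: signals.shape[1] - d] += sig[d:]
    return out / len(signals)
```

Aligning each channel to the wavefront from the tracked position reinforces the speaker's voice, while sounds arriving from other directions average out.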

A Study of Hypermedia System Based on Human-Computer Interface Perspective (이용자-컴퓨터 인터페이스 측면에서 고찰한 하이퍼미디어 시스템)

  • 김미진
    • Journal of the Korean Society for information Management
    • /
    • v.9 no.2
    • /
    • pp.154-180
    • /
    • 1992
  • This article examined the historical background and human-factors issues in the area of human-computer interface, and identified various aspects of hypermedia systems such as their concepts, characteristics, and representative software. The integration of other information systems with hypermedia systems to improve the user interface was also investigated.

  • PDF

A Brain-Computer Interface Based Human-Robot Interaction Platform (Brain-Computer Interface 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.11
    • /
    • pp.7508-7512
    • /
    • 2015
  • We propose a brain-machine interface (BMI) based human-robot interaction (HRI) platform that operates machines by capturing brain waves and interpreting the user's intentions. The platform consists of capture, processing/mapping, and action parts: a noninvasive brain-wave sensor, a PC, and a robot avatar/LED/motor serve as the capture, processing/mapping, and action parts, respectively. Various investigations were conducted to establish the relations between intentions and the sensed brain waves. Case studies — an interactive game, on-off control of LEDs, and motor control — are presented to show the design and implementation process of the new BMI-based HRI platform.
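A capture → processing/mapping → action pipeline of this kind can be sketched in a few lines: classify a captured EEG window by its alpha-band power and map the result to an on/off command. The sampling rate, band limits, and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed)

def band_power(window, fs, lo, hi):
    """Power of `window` in the [lo, hi] Hz band via a periodogram."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum() / len(window)

def map_to_action(window, threshold=1.0):
    """Processing/mapping part: alpha power above threshold -> LED on."""
    alpha = band_power(window, FS, 8.0, 13.0)
    return "LED_ON" if alpha > threshold else "LED_OFF"
```

In a real platform the threshold would be calibrated per user, since the relation between intention and measured band power varies across subjects.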

Intelligent Emotional Interface for Personal Robot and Its Application to a Humanoid Robot, AMIET

  • Seo, Yong-Ho;Jeong, Il-Woong;Jung, Hye-Won;Yang, Hyun-S.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1764-1768
    • /
    • 2004
  • In the near future, robots will be used for personal purposes. To provide useful services to humans, it will be necessary for robots to understand human intentions. Consequently, the development of emotional interfaces for robots is an important expansion of human-robot interaction. We designed and developed an intelligent emotional interface for robots and applied it to our humanoid robot, AMIET. Subsequent human-robot interaction demonstrated that our intelligent emotional interface is very intuitive and friendly.

  • PDF

A Multi Modal Interface for Mobile Environment (모바일 환경에서의 Multi Modal 인터페이스)

  • Seo, Yong-Won;Lee, Beom-Chan;Lee, Jun-Hun;Kim, Jong-Phil;Ryu, Je-Ha
    • HCI Society of Korea: Conference Proceedings
    • /
    • 2006.02a
    • /
    • pp.666-671
    • /
    • 2006
  • A "Multi Modal interface" refers to a method of interfacing human-machine communication using voice, keyboard, and pen. Recently, as portable terminals have become widespread, smaller, and more intelligent, and as their applications have diversified, expectations have grown for input methods that users can operate more conveniently and easily. Currently, the only input devices available on a portable terminal are its buttons or a touch pad (in the case of a PDA). However, users with disabilities have difficulty using buttons or touch pads; these inputs are also awkward for playing games on portable terminals, and they are a significant obstacle to developing new games and applications. To overcome these problems, this paper presents a new Multi Modal interface for portable terminals. Using a PDA (Personal Digital Assistant), we developed a Multi Modal interface that provides greater enjoyment and realism. By using sensors to let the user control the terminal with wrist motion, we provide a convenient and novel input device. If speech recognition is added in the future, humans will be able to communicate with machines through voice and gesture — as humans communicate with each other — rather than through the traditional keyboard or buttons. Furthermore, by adding tactile feedback through vibrators, information can also be delivered to visually impaired and elderly users, who have so far been excluded from multi-modal interfaces. In fact, people respond much faster to touch than to sight or hearing. If this system is applied to games, users can participate actively and enjoy a more immersive experience. In special situations it can deliver covert information, and it can be used in mobile application services to be developed in the future.

  • PDF
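The wrist-motion control described in the abstract can be sketched as mapping accelerometer tilt to a directional input; the axes, threshold, and command names here are assumptions for illustration:

```python
def tilt_to_command(ax, ay, threshold=0.3):
    """Map accelerometer tilt (in g, device frame) to a directional input.

    ax, ay: gravity components along the device's x/y axes.
    The dominant tilt direction wins; small tilts are ignored,
    which gives the user a neutral "dead zone".
    """
    if max(abs(ax), abs(ay)) < threshold:
        return "none"
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "up" if ay > 0 else "down"
```

A vibrator pulse could be fired on each recognized command to provide the tactile confirmation the abstract proposes.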

A Study on Vision Based Gesture Recognition Interface Design for Digital TV (동작인식기반 Digital TV인터페이스를 위한 지시동작에 관한 연구)

  • Kim, Hyun-Suk;Hwang, Sung-Won;Moon, Hyun-Jung
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.257-268
    • /
    • 2007
  • The development of human-computer interfaces has relied on the development of technology. Mice and keyboards are the most popular HCI devices for personal computing. However, device-based interfaces are quite different from human-to-human interaction and are very artificial. Developing more intuitive interfaces that mimic human-to-human interaction has been a major research topic among HCI researchers and engineers. Technology in the TV industry has also developed rapidly, and the market penetration of large-screen TVs has increased quickly; HDTV and digital TV broadcasting are being tested. These changes in the TV environment require changes in the human-to-TV interface. A gesture-recognition-based interface using a computer vision system can replace the remote-control-based interface because of its immediacy and intuitiveness. This research focuses on how people use their hands or arms for command gestures. A set of gestures for controlling a TV set was sampled through focus-group interviews and surveys. The results of this paper can be used as a reference for designing a computer-vision-based TV interface.

  • PDF

Making a comparison study on Usability of the Computer Aided Idea Generation System -Focused on the User Interface of the Creative Group thinking System(CGTS)- (컴퓨터 지원 발상시스템의 사용성 비교 -CGTS(Creative Group Thinking System) UI를 중심으로-)

  • 정승호;한경돈
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.4
    • /
    • pp.57-62
    • /
    • 2003
  • At the beginning of the design process, concept design requires creative idea generation and has a critical effect on the success of the product. To support the idea-generation process at the concept-design stage, the web-based Creative Group Thinking System (CGTS) was developed. The purpose of this study is to investigate the significance of HCI (Human-Computer Interface) and UI (User Interface) and to find ways to increase the usability of the CGTS UI.

  • PDF

A Research on EEG Synchronization of Movement Cognition for Brain Computer Interface (뇌 컴퓨터 인터페이스를 위한 뇌파와 동작 인지와의 동기화에 관한 연구)

  • Whang, Min-Cheol;Kim, Kyu-Tae;Goh, Sang-Tae;Jeong, Byung-Yong
    • Journal of the Ergonomics Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.167-171
    • /
    • 2007
  • Brain-computer interface is an interface technology for the next generation. Recently, attempts have been made to recognize user intention as a way of interfacing with a computer, and EEG plays an important role in developing practical applications in this area. Much research has focused on extracting EEG commands generated by human movement. ERD/ERS is generally accepted as an important EEG parameter for predicting human movement. However, there is a time difference between the initial movement indicated by ERD/ERS and the real movement. Therefore, this study determined these time differences for a brain interface based on ERD/ERS. Five university students performed ten repetitive movements. ERD/ERS was determined during movement execution, and the resulting pattern showed a significant difference between movement execution and the movement indication given by ERD/ERS.
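ERD/ERS is conventionally quantified as the percentage change of band power in an active interval relative to a resting reference interval, with negative values indicating desynchronization (ERD) and positive values synchronization (ERS). A minimal computation of this standard measure — not code from the paper — looks like:

```python
import numpy as np

def erd_ers_percent(reference, active):
    """ERD/ERS as percentage band-power change vs. a reference interval.

    reference: band-power samples from the rest interval (R)
    active:    band-power samples around movement (A)
    Returns (A - R) / R * 100; negative = ERD, positive = ERS.
    """
    r = np.mean(reference)
    a = np.mean(active)
    return (a - r) / r * 100.0
```

The timing question studied in the paper amounts to locating when this value first crosses a detection threshold relative to the onset of the actual movement.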

Technology Requirements for Wearable User Interface

  • Cho, Il-Yeon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.5
    • /
    • pp.531-540
    • /
    • 2015
  • Objective: The objective of this research is to investigate the fundamentals of human-computer interaction for wearable computers and derive technology requirements. Background: A wearable computer can be worn at any time, with support for unrestricted communication and a variety of services that maximize the use of information. Key challenges in developing such wearable computers are a level of comfort at which users do not feel what they are wearing, and an easy, intuitive user interface. The research presented in this paper examines user interfaces for wearable computers. Method: In this research, we classified wearable user interface technologies and analyzed their advantages and disadvantages from the user's point of view. Based on this analysis, we identified user interface technologies on which to conduct research and development for commercialization. Results: Technology requirements for commercializing wearable computers are derived. Conclusion: User interface technology for a wearable system must start from an understanding of the ergonomic aspects of the end user, because users wear the system on their body. Developers should not try to develop state-of-the-art technology without a requirement analysis of the end users; if people do not use a technology, it cannot survive in the market. Currently, there is no dominant wearable user interface, so this area invites new challenges beyond the traditional interface paradigm through various approaches and attempts. Application: The findings in this study are expected to be used in designing user interfaces for wearable systems, such as digital clothes and fashion apparel.

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.2 no.4
    • /
    • pp.285-297
    • /
    • 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera, pointing it at a location specified by the user through uttered words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses the user's voice commands to fine-tune the location and change the camera's zoom, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must utilize in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.

  • PDF
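The coarse-to-fine strategy in the last abstract — hand pointing fixes the general camera direction, then voice commands refine it — can be sketched as follows (the command vocabulary, step size, and return values are hypothetical, for illustration only):

```python
def fuse_point_and_voice(pointing_angle, commands, step=2.0):
    """Coarse-to-fine camera steering: start at the pan angle estimated
    from the hand-pointing gesture, then refine with voice commands.

    pointing_angle: pan angle in degrees from the gesture recognizer
    commands: recognized words, e.g. ["left", "left", "zoom in"]
    Returns (pan_angle_degrees, zoom_level) for the pan/tilt camera.
    """
    pan, zoom = pointing_angle, 1.0
    for word in commands:
        if word == "left":
            pan -= step          # nudge the camera left
        elif word == "right":
            pan += step          # nudge the camera right
        elif word == "zoom in":
            zoom *= 2.0
        elif word == "zoom out":
            zoom /= 2.0
    return pan, zoom
```

This mirrors the paper's division of labor: the imprecise but fast pointing gesture narrows the search, and the precise but slow voice channel finishes the job.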