• Title/Summary/Keyword: Human interaction


Comprehensive architecture for intelligent adaptive interface in the field of single-human multiple-robot interaction

  • Ilbeygi, Mahdi;Kangavari, Mohammad Reza
    • ETRI Journal
    • /
    • v.40 no.4
    • /
    • pp.483-498
    • /
    • 2018
  • Nowadays, with progress in robotics, designing and implementing a mechanism for low-workload human-robot interaction is inevitable. One notable challenge in this field is the interaction between a single human and a group of robots. We therefore propose a new comprehensive framework for single-human multiple-robot remote interaction that can form an efficient intelligent adaptive interaction (IAI). Our interaction system can thoroughly adapt itself to changes in the interaction context and in user states. Advantages of the devised IAI framework include lower workload, a higher level of situation awareness, and more efficient interaction. In this paper, we introduce a new IAI architecture as our comprehensive mechanism. To examine the architecture in practice, we implemented the proposed IAI to control a group of unmanned aerial vehicles (UAVs) under different scenarios. The results show that the devised IAI framework can effectively reduce human workload, raise the level of situation awareness, and concurrently increase the mission completion rate of the UAVs.

A Study on Applying the Concepts of Interaction Design to Space (공간에서의 인터랙션 디자인 개념 적용에 대한 연구)

  • Kang Sung-Joong;Kwon Young-Gull
    • Korean Institute of Interior Design Journal
    • /
    • v.14 no.3 s.50
    • /
    • pp.234-242
    • /
    • 2005
  • An interface is a medium or channel for communication between humans and things, while interaction is the manner of that communication. Interaction design shapes the user's experience through the interaction process among humans, things, systems, and spaces. Richard Buchanan suggests four kinds of interaction: interface (person-to-thing interaction), transaction (person-to-person interaction), human interaction (human-to-environment interaction), and participation (human-to-cosmos interaction). With digital technology, architecture and space design have experimented widely with the form, function, and content of space. Space is evolving from a physical container into a stage that provides narrative and creates new experiences for users. Since interaction design in space requires understanding users, creating experiences, efficient space design, content planning, and applicable technology, multidisciplinary research and cooperation are needed.

POMDP-based Human-Robot Interaction Behavior Model (POMDP 기반 사용자-로봇 인터랙션 행동 모델)

  • Kim, Jong-Cheol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.6
    • /
    • pp.599-605
    • /
    • 2014
  • This paper presents an interactive behavior modeling method based on the POMDP (Partially Observable Markov Decision Process) for HRI (Human-Robot Interaction). HRI resembles conversational interaction in that a human and a robot exchange turns, and the POMDP has been widely used in conversational interaction systems because it can efficiently handle the uncertainty of observable variables. In the proposed conversational HRI system, the POMDP input variables are the sensor readings and the log of services used, and the output variables are the names of robot behaviors. Each robot behavior is expressed through motions of the LED, LCD, motors, and sound. The suggested POMDP-based conversational HRI system was applied to the emotional robot KIBOT. In human-KIBOT interaction, the system exhibited flexible robot behavior in the real world.
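The core mechanism the abstract relies on, maintaining a belief over hidden user states and updating it from noisy observations, can be sketched as a plain POMDP belief update. The state, action, and observation names and all probabilities below are illustrative assumptions, not values from the paper.

```python
# Minimal POMDP belief update, as used in conversational interaction systems:
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)

def belief_update(belief, action, obs, T, O):
    """Update the belief over hidden states after taking `action` and seeing `obs`."""
    new_belief = {}
    for s2 in belief:
        # Predict: probability of reaching state s2 from the current belief.
        prior = sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in belief)
        # Correct: weight by the likelihood of the observation in s2.
        new_belief[s2] = O[(s2, action)].get(obs, 0.0) * prior
    total = sum(new_belief.values())
    if total == 0.0:
        raise ValueError("observation has zero probability under the model")
    return {s: p / total for s, p in new_belief.items()}

# Illustrative two-state user model (hypothetical numbers).
T = {("engaged", "greet"): {"engaged": 0.9, "idle": 0.1},
     ("idle",    "greet"): {"engaged": 0.4, "idle": 0.6}}
O = {("engaged", "greet"): {"smile": 0.8, "silence": 0.2},
     ("idle",    "greet"): {"smile": 0.1, "silence": 0.9}}

belief = belief_update({"engaged": 0.5, "idle": 0.5}, "greet", "smile", T, O)
```

After observing a smile, the belief mass shifts strongly toward the "engaged" state; a POMDP policy then selects behaviors against this belief rather than against a single guessed state, which is what makes it robust to sensing uncertainty.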

Hole Identification Method Based on Template Matching for Ear Pins Insertion Automation System (이어핀 삽입 자동화 시스템을 위한 템플릿 매칭 기반 홀 판별 방법)

  • Baek, Jonghwan;Lee, Jaeyoul;Jung, Myungsoo;Jang, Minwoo;Shin, Dongho;Seo, Kapho;Hong, Sungho
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.05a
    • /
    • pp.330-333
    • /
    • 2020
  • In the jewelry industry, labor costs account for a large share of production costs, and both production time and product quality vary widely with worker skill. To meet industry demand, an automated insertion system is being developed to automate the process of inserting ear pins into holes of 0.75 mm diameter on the surface of silicone molds. This paper proposes a hole identification method, applicable to the ear-pin insertion automation system, that combines template matching with region-of-interest labeling. To ensure the stability of the proposed method, the optimal matching method and binarization technique were selected through experiments, and the obtained ear-pin hole coordinates can be applied to an X-Y precision transfer system.
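The hole-identification step described above rests on standard template matching. A minimal pure-Python sum-of-squared-differences version is sketched below; the actual system reportedly selects its matching and binarization methods experimentally, which is not reproduced here, and the image values are made up for illustration.

```python
def match_template(image, template):
    """Slide `template` over `image` (2-D lists of gray values) and return
    the (row, col) of the best match by sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = float("inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum((image[r + i][c + j] - template[i][j]) ** 2
                        for i in range(th) for j in range(tw))
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# A bright mold surface (255) with a dark 2x2 "hole" at row 2, col 1.
surface = [[255] * 5 for _ in range(5)]
for i in (2, 3):
    for j in (1, 2):
        surface[i][j] = 0
hole_template = [[0, 0], [0, 0]]
pos = match_template(surface, hole_template)
```

In a production pipeline one would binarize the image first and restrict matching to labeled regions of interest, exactly as the abstract describes; the resulting hole coordinates then drive the X-Y transfer stage.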

A Portable Mediate Interface 'Handybot' for the Rich Human-Robot Interaction (인간과 로봇의 다양한 상호작용을 위한 휴대 매개인터페이스 ‘핸디밧’)

  • Hwang, Jung-Hoon;Kwon, Dong-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.735-742
    • /
    • 2007
  • The importance of a robot's interaction capability increases as robot applications extend into humans' daily lives. In this paper, a portable mediate interface, the Handybot, is developed with various interaction channels for use with an intelligent home service robot. The Handybot has a task-oriented channel based on an icon language as well as a verbal interface. It also has an emotional interaction channel that recognizes the user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state to the user. It is expected that the Handybot will reduce spatial problems in human-robot interaction, propose a new interaction method, and help create rich and continuous interactions between human users and robots.

Comparing Initiating and Responding Joint Attention as a Social Learning Mechanism: A Study Using Human-Avatar Head/Hand Interaction (사회 학습 기제로서 IJA와 RJA의 비교: 인간-아바타 머리/손 상호작용을 이용한 연구)

  • Kim, Mingyu;Kim, So-Yeon;Kim, Kwanguk
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.645-652
    • /
    • 2016
  • Joint Attention (JA) is known to play a key role in human social learning. However, the relative impact of different interaction types has yet to be rigorously examined because existing methodologies are limited in their ability to simulate human-to-human interaction. In the present study, we designed a new JA paradigm using human-avatar interaction and virtual reality technologies, and tested the paradigm in two experiments with healthy adults. Our results indicated that the initiating JA (IJA) condition was more effective than the responding JA (RJA) condition for social learning in both head and hand interactions. Moreover, the hand interaction involved better information processing than the head interaction. The implications of the results, the validity of the new paradigm, and the limitations of this study are discussed.

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1960-1963
    • /
    • 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficiently integrated method to detect and track faces. The algorithm combines various visual cues: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine) based pattern detection is performed on the candidate regions extracted using motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the detection rate and accuracy. Experimental results show a detection rate of 91% with very few false alarms, running at about 4 frames per second on 640 x 480 pixel images on a 1 GHz Pentium IV.
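One of the cues combined above, skin color, is often implemented as a simple per-pixel rule that gates candidate regions before the expensive pattern detector runs. The RGB thresholds below are a common heuristic from the skin-detection literature, not necessarily the color model used in this paper, and the pixel data are invented for illustration.

```python
def is_skin(r, g, b):
    """A common RGB skin-color heuristic (illustrative thresholds)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_candidates(pixels):
    """Return coordinates of skin-colored pixels; in the full pipeline,
    connected regions of these would feed the ICA-SVM pattern detector."""
    return [(x, y) for (x, y), (r, g, b) in pixels.items() if is_skin(r, g, b)]

# Tiny illustrative "image": two skin-toned pixels and one dark one.
pixels = {(0, 0): (220, 170, 140),
          (0, 1): (30, 30, 30),
          (1, 0): (200, 150, 120)}
cands = skin_candidates(pixels)
```

Cheap cues like this prune most of the frame, which is how the combined system keeps near-real-time rates while still running a discriminative classifier on the survivors.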


Exploring the Effects of Gesture Interaction on Co-presence of a Virtual Human in a Hologram-like System (유사홀로그램 가시화 기반 가상 휴먼의 제스쳐 상호작용 영향 분석)

  • Kim, Daewhan;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.10
    • /
    • pp.1390-1393
    • /
    • 2020
  • Recently, hologram-like systems with virtual humans have been deployed to provide realistic experiences in venues such as musical performances and museum exhibitions. To be convincing, the virtual human in a hologram-like system needs to respond in a way that matches the user's interaction. In this paper, to improve the feeling of sharing the same space with a virtual human in a hologram-like system, we presented interactive content driven by the user's gestures and evaluated the effectiveness of the interaction. We found that gesture-based interaction provided a higher sense of co-presence and immersion with the virtual human.

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction (멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계)

  • Im, Mi-Jeong;Park, Beom
    • Journal of the Ergonomics Society of Korea
    • /
    • v.25 no.2
    • /
    • pp.135-146
    • /
    • 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces constitute multimodal interaction processes that occur, consciously or unconsciously, during communication between human and computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combinations of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
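The input-synchronization idea, pairing events from parallel modalities that arrive close together in time, can be sketched as a simple time-window fusion. The event names, timestamps, and window size below are illustrative assumptions, not the paper's actual synchronization design.

```python
def fuse_events(speech, gesture, window=0.5):
    """Pair each timestamped speech event with the gesture events that fall
    within `window` seconds of it (a simple time-window fusion rule)."""
    return [(word, act)
            for t_s, word in speech
            for t_g, act in gesture
            if abs(t_s - t_g) <= window]

# "Put that there"-style parallel input (hypothetical timestamps in seconds).
speech = [(1.0, "put"), (3.0, "there")]
gesture = [(1.2, "point_object"), (3.1, "point_location")]
fused = fuse_events(speech, gesture)
```

The window size corresponds to the paper's notion of synchronization granularity: a coarse window fuses whole utterances with whole gestures, while a fine window pairs individual words with individual pointing events.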