• Title/Abstract/Keyword: Human interaction

Search results: 2,463 items. Processing time: 0.031 seconds.

Comprehensive architecture for intelligent adaptive interface in the field of single-human multiple-robot interaction

  • Ilbeygi, Mahdi;Kangavari, Mohammad Reza
    • ETRI Journal
    • /
    • Vol. 40 No. 4
    • /
    • pp.483-498
    • /
    • 2018
  • Nowadays, with progress in robotics, designing and implementing a low-workload mechanism for human-robot interaction is inevitable. One notable challenge in this field is the interaction between a single human and a group of robots. We therefore propose a new comprehensive framework for single-human multiple-robot remote interaction that forms an efficient intelligent adaptive interaction (IAI). The interaction system thoroughly adapts itself to changes in the interaction context and in user states. Advantages of the devised IAI framework include lower workload, a higher level of situation awareness, and efficient interaction. In this paper, we introduce a new IAI architecture as our comprehensive mechanism. To examine the architecture in practice, we implemented the proposed IAI to control a group of unmanned aerial vehicles (UAVs) under different scenarios. The results show that the devised IAI framework effectively reduces human workload, raises the level of situation awareness, and concurrently improves the mission completion percentage of the UAVs.
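The adaptation loop the abstract describes (monitor operator state, then adjust the interaction accordingly) can be sketched as a simple policy. The state names, thresholds, and function below are illustrative assumptions, not taken from the paper:

```python
def select_automation_level(workload: float, situation_awareness: float) -> str:
    """Pick a UAV autonomy level from an estimated operator state.

    Both inputs are assumed normalized to [0, 1]; the thresholds are
    invented for illustration, not the paper's actual adaptation rules.
    """
    if workload > 0.7 or situation_awareness < 0.3:
        return "full-autonomy"   # robots act on their own; operator supervises
    if workload > 0.4:
        return "shared-control"  # operator sets goals, robots plan the details
    return "manual"              # operator tele-operates directly

# Example: an overloaded operator hands control to the swarm.
print(select_automation_level(workload=0.8, situation_awareness=0.6))  # full-autonomy
```

The point of such a policy is the one the abstract makes: offloading work to the robots when the operator is saturated keeps workload down without letting the mission stall.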

A Study on Applying the Concepts of Interaction Design to Space

  • 강성중;권영걸
    • 한국실내디자인학회논문집
    • /
    • Vol. 14 No. 3
    • /
    • pp.234-242
    • /
    • 2005
  • An interface is a medium or channel for communication between humans and things, while interaction is the manner of that communication. Interaction design is the design of the user's experience through the process of interaction among humans, things, systems, and spaces. Richard Buchanan suggests four kinds of interaction: interface (person-to-thing), transaction (person-to-person), human interaction (person-to-environment), and participation (person-to-cosmos). With digital technology, architecture and space design have experimented with the form, function, and content of space. Space is evolving from a physical container into a stage that provides narrative and creates new experiences for users. Since interaction design in space requires understanding users, creating experiences, efficient space design, content planning, and applicable technology, multidisciplinary research and cooperation are needed.

POMDP-based Human-Robot Interaction Behavior Model

  • 김종철
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20 No. 6
    • /
    • pp.599-605
    • /
    • 2014
  • This paper presents an interactive behavior modeling method based on the POMDP (Partially Observable Markov Decision Process) for HRI (Human-Robot Interaction). HRI resembles conversational interaction in that a human and a robot exchange turns, and POMDPs have been widely used in conversational interaction systems because they efficiently handle the uncertainty of observable variables. In the proposed conversational HRI system, the POMDP input variables are sensor readings and the log of services used; the output variables are the names of robot behaviors, each expressed through the robot's LED, LCD, motors, and sound. The proposed POMDP-based conversational HRI system was applied to the emotional robot KIBOT, and in human-KIBOT interaction it produced flexible robot behavior in the real world.
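A POMDP-based interaction model of this kind rests on the Bayes belief update b'(s') ∝ O(o|s') Σ_s T(s'|s) b(s) over the hidden user state. A toy sketch follows; the intent states and all matrix values are made up for illustration and are not the paper's actual model:

```python
import numpy as np

# Toy HRI POMDP: hidden user intent in {greet, request, idle}.
states = ["greet", "request", "idle"]
T = np.array([[0.8, 0.1, 0.1],   # T[s, s']: how intent drifts per step
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
O = np.array([[0.7, 0.2, 0.1],   # O[s', o]: P(sensor cue | intent)
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])

def belief_update(b, obs_idx):
    """One Bayes-filter step: predict through T, weight by O, renormalize."""
    predicted = b @ T                    # sum_s T(s'|s) b(s)
    updated = predicted * O[:, obs_idx]  # times P(o | s')
    return updated / updated.sum()

b = np.array([1/3, 1/3, 1/3])        # uniform prior over user intent
b = belief_update(b, obs_idx=1)      # robot observes a "request"-like cue
print(states[int(np.argmax(b))])     # most likely intent after the update
```

The robot's behavior (LED, LCD, motor, sound outputs in the paper's system) would then be chosen by a policy over this belief rather than over a single guessed state, which is what lets the POMDP absorb sensor uncertainty.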

Hole Identification Method Based on Template Matching for an Ear-Pin Insertion Automation System

  • 백종환;이재열;정명수;장민우;신동호;서갑호;홍성호
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Proceedings of the 2020 KIPS Spring Conference
    • /
    • pp.330-333
    • /
    • 2020
  • In the jewelry industry, labor costs account for a large share of production costs, and production time and quality vary widely with worker skill. To meet industry demand, an automated insertion system is being developed to automate the process of inserting ear pins into 0.75 mm-diameter holes on the surface of silicone molds. This paper proposes a hole identification method, applicable to the ear-pin insertion automation system, based on template matching and labeling of regions of interest. To ensure the stability of the proposed method, the optimal matching method and binarization technique were selected through experiments, and the obtained ear-pin hole coordinates can be fed to an X-Y precision transfer system.
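The core step, matching a hole template against the mold image, can be sketched with normalized cross-correlation. The pure-NumPy search below stands in for a library call such as OpenCV's `cv2.matchTemplate` with `TM_CCOEFF_NORMED`; the synthetic image, template, and sizes are invented for illustration (the actual system works on camera images):

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation; returns (row, col) of best match.

    The images here are tiny, so the O(n^2 m^2) loop is acceptable.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r+th, c:c+tw] - image[r:r+th, c:c+tw].mean()
            denom = np.sqrt((patch**2).sum() * (t**2).sum())
            score = (patch * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic 8x8 "mold surface": bright background, dark 3x3 hole at (3, 4)
# with a darker center; the template reproduces that hole pattern.
img = np.full((8, 8), 200.0)
img[3:6, 4:7] = 20.0
img[4, 5] = 10.0
tmpl = np.full((3, 3), 20.0)
tmpl[1, 1] = 10.0
print(match_template(img, tmpl))  # → (3, 4)
```

The matched coordinates are exactly what the paper feeds to the X-Y precision transfer system; binarization and region-of-interest labeling would narrow the search area before this step.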

A Portable Mediate Interface 'Handybot' for Rich Human-Robot Interaction

  • 황정훈;권동수
    • 제어로봇시스템학회논문지
    • /
    • Vol. 13 No. 8
    • /
    • pp.735-742
    • /
    • 2007
  • The importance of a robot's interaction capability increases as robots extend into humans' daily lives. In this paper, a portable mediate interface, the Handybot, is developed with various interaction channels to be used with an intelligent home service robot. The Handybot has a task-oriented channel based on an icon language as well as a verbal interface. It also has an emotional interaction channel that recognizes the user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state to the user. The Handybot is expected to reduce spatial problems that may exist in human-robot interaction, propose a new interaction method, and help create rich and continuous interactions between human users and robots.

Comparing Initiating and Responding Joint Attention as a Social Learning Mechanism: A Study Using Human-Avatar Head/Hand Interaction

  • 김민규;김소연;김광욱
    • 정보과학회 논문지
    • /
    • Vol. 43 No. 6
    • /
    • pp.645-652
    • /
    • 2016
  • Among social learning mechanisms, joint attention (JA) is known to play a key role in human learning. However, existing human-to-human interaction methodologies have made it difficult to study interaction by JA component (e.g., hand or head). In this study, we experimentally applied human-avatar interaction and virtual reality methodology to build a program for studying interaction by JA component, and used it to study the interaction of a healthy control group across two experiments. The results show that, regardless of interaction modality (hand or head), initiating JA (IJA) facilitated information processing more than responding JA (RJA), and that hand-based interaction influenced information processing more than head-based interaction. Potential interpretations of these results and the limitations of the study are discussed in this paper.

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2003 IEEK Summer Conference, Vol. IV
    • /
    • pp.1960-1963
    • /
    • 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficiently integrated method to detect and track faces. The algorithm combines various visual cues: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine)-based pattern detection is performed on the candidate regions extracted using motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the detection rate and accuracy. Experimental results show a detection rate of 91% with very few false alarms, running at about 4 frames per second for 640 by 480 pixel images on a 1 GHz Pentium IV.
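One of the cues the abstract names, skin color, is typically used as a cheap gate that keeps only candidate pixels for the expensive ICA-SVM stage. The RGB rule below is a common textbook heuristic, not the authors' actual color model:

```python
import numpy as np

def skin_candidate_mask(rgb):
    """Very rough skin-color gate over an RGB image.

    Heuristic: skin pixels tend to satisfy R > G > B with a minimum R-G
    spread and moderate brightness. Thresholds are a rule of thumb.
    """
    r = rgb[..., 0].astype(int)  # cast so differences cannot underflow uint8
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b) & (r - g > 15)

# 2x2 toy image: one skin-like pixel, three clearly non-skin pixels.
img = np.array([[[200, 140, 110], [10, 10, 10]],
                [[0, 200, 0], [30, 30, 200]]], dtype=np.uint8)
mask = skin_candidate_mask(img)
print(int(mask.sum()))  # 1 candidate pixel survives the color gate
```

In a pipeline like the paper's, this mask would be intersected with motion and global-appearance evidence before the pattern detector runs, which is what keeps the whole system near real time.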

Exploring the Effects of Gesture Interaction on Co-presence of a Virtual Human in a Hologram-like System

  • Kim, Daewhan;Jo, Dongsik
    • 한국정보통신학회논문지
    • /
    • Vol. 24 No. 10
    • /
    • pp.1390-1393
    • /
    • 2020
  • Recently, hologram-like systems with virtual humans have been deployed to provide realistic experiences in venues such as musical performances and museum exhibitions. For realism, the virtual human in such a system needs to respond in a way that matches the user's interaction. In this paper, to improve the feeling of sharing a space with a virtual human in a hologram-like system, we presented interactive content driven by the user's gestures and evaluated the effectiveness of the interaction. We found that gesture-based interaction provided a higher sense of co-presence and immersion with the virtual human.

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction

  • 임미정;박범
    • 대한인간공학회지
    • /
    • Vol. 25 No. 2
    • /
    • pp.135-146
    • /
    • 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces are the technologies that constitute multimodal interaction processes, which occur consciously or unconsciously while a human communicates with a computer, so the input/output forms of multimodal interfaces differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when building multimodal interfaces.
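The input-synchronization idea, grouping inputs that arrive in parallel on different modality channels into one multimodal command, can be sketched as time-window fusion. The event format and the 0.5 s window below are illustrative assumptions; the paper's granularity-based design is richer than this:

```python
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str   # e.g. "speech", "gesture"
    payload: str
    t: float        # arrival time in seconds

def fuse_parallel_inputs(events, window=0.5):
    """Group events from different modalities that arrive within `window`
    seconds of the group's first event into one multimodal command."""
    events = sorted(events, key=lambda e: e.t)
    groups, current = [], []
    for e in events:
        if current and e.t - current[0].t > window:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups

cmds = fuse_parallel_inputs([
    ModalityEvent("speech", "put that", 0.10),
    ModalityEvent("gesture", "point@(3,4)", 0.30),  # fuses with the speech
    ModalityEvent("speech", "there", 1.20),         # starts a new window
])
print([len(g) for g in cmds])  # [2, 1]
```

This is the classic "put-that-there" pattern: the deictic gesture only disambiguates the speech if the two streams are aligned in time, which is exactly why the synchronization granularity matters.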