• Title/Summary/Keyword: Human-robot interface

Search Results: 150

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility / v.15 no.4 / pp.503-516 / 2012
  • Research on robots and software agents to date shows that computational emotion models are system-dependent, so it is difficult to separate an emotion model from an existing system and reuse it in a new one. We therefore introduce the Engine of computational Emotion model (hereafter EE), which can be integrated with any robot or agent. The EE is an engine, i.e. software that is independent of inputs and outputs: it handles only the generation and processing of emotions, without the input (perception) and output (expression) phases. The EE can be interfaced with any inputs and outputs, and it produces emotions not only from the emotional stimulus itself but also from the personality and current emotions of the person. In addition, the EE can reside in any robot or agent as a software library, or be used as a separate, communicating system. In the EE, the emotions are the primary emotions: joy, surprise, disgust, fear, sadness, and anger. Each emotion is a vector consisting of a string and a coefficient; the EE receives these vectors from the input interface and sends them to the output interface. Each emotion is connected to a list of emotional experiences, and these lists, each entry consisting of a string and a coefficient, are used to generate and process emotional states. The emotional experiences are drawn from an emotion vocabulary covering the various emotional experiences of humans (a minimal sketch of this vector representation appears after this entry). The EE can be used to build interactive products that respond appropriately to human emotions. The significance of this study lies in developing a system that makes a person feel that a product sympathizes with them; the EE can therefore help provide efficient emotional-sympathy services in HRI and HCI products.

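The abstract above describes emotions as (string, coefficient) vectors exchanged between an input interface, the engine, and an output interface. The paper does not publish code, so the following Python sketch is only an illustration of that representation under assumed names (EmotionVector, EmotionEngine, personality_bias), not the authors' implementation.

```python
# Hypothetical sketch of the I/O-independent emotion-engine idea described above.
# Names and the update rule are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

PRIMARY_EMOTIONS = ["joy", "surprise", "disgust", "fear", "sadness", "anger"]

@dataclass
class EmotionVector:
    label: str          # emotion or emotional-experience vocabulary item
    coefficient: float  # intensity in [0, 1]

class EmotionEngine:
    """Generates and updates emotional state; knows nothing about perception or expression."""

    def __init__(self, personality_bias=None):
        # Personality biases how strongly each primary emotion is felt (assumed mechanism).
        self.personality_bias = personality_bias or {e: 1.0 for e in PRIMARY_EMOTIONS}
        self.state = {e: 0.0 for e in PRIMARY_EMOTIONS}

    def update(self, inputs):
        """Accept emotion vectors from any input interface and update the internal state."""
        for vec in inputs:
            if vec.label in self.state:
                bias = self.personality_bias[vec.label]
                self.state[vec.label] = min(1.0, self.state[vec.label] + bias * vec.coefficient)
        return self.emit()

    def emit(self):
        """Return the current state as emotion vectors for any output interface."""
        return [EmotionVector(e, round(c, 3)) for e, c in self.state.items() if c > 0.0]

# Example: a perception module reports mild joy and strong surprise.
engine = EmotionEngine()
print(engine.update([EmotionVector("joy", 0.2), EmotionVector("surprise", 0.7)]))
```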

Male, Female, or Robot?: Effects of Task Type and User Gender on Expected Gender of Chatbots (태스크 특성 및 사용자 성별이 챗봇의 기대 성별에 미치는 효과에 관한 연구)

  • Kim, Soomin;Lee, Seo-Young;Lee, Joonhwan
    • Journal of Korea Multimedia Society / v.24 no.2 / pp.320-327 / 2021
  • We investigate the effects of task type and user gender on the expected gender of chatbots. We conducted an online study with 381 participants who selected the gender (female, male, or neutral) of chatbots performing six different tasks. Our results indicate that users expect human-gendered chatbots for all tasks and that the expected gender of a chatbot differs significantly depending on the task type. Users expected chatting, counseling, healthcare, and clerical work to be done by female chatbots, while professional and customer-service work was expected to be done by male chatbots. A tendency for participants to prefer chatbots of the same gender as themselves appeared in several tasks for both male and female users; however, this homophily tendency was stronger for female users. We conclude by suggesting practical guidelines for designing chatbot services that reflect user expectations.

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.565-570 / 2013
  • The mirror-neuron regions distributed in the cortical area handle intention recognition on the basis of imitative learning of an observed, goal-directed action acquired from visual information. In this paper, an automated intention-recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks whose input is a sequential feature-vector set describing the behaviors of the target object and the actor, and whose output is motor data that can be used to perform the corresponding intentional action through the imitative learning and estimation procedures of the proposed model. The intention-recognition framework takes its input from a KINECT sensor and computes the corresponding motor data within a virtual robot simulation environment, based on intention-related scenarios in a limited experimental space with a specified target object.
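The model above maps a sequence of observed feature vectors (e.g. skeleton features from a depth sensor) to motor data via dynamic neural networks. The paper's network structure, sizes, and training are not reproduced here; the following is only a generic, untrained recurrent forward pass to make the input/output shape of such a mapping concrete.

```python
# A minimal, generic sketch: a recurrent layer that maps a sequence of observed feature
# vectors to one motor command per time step. Dimensions and weights are assumed.
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM, HIDDEN_DIM, MOTOR_DIM = 12, 16, 4   # assumed sizes

# Randomly initialised Elman-style recurrent layer (stand-in for the paper's dynamic NN).
W_in  = rng.normal(scale=0.1, size=(HIDDEN_DIM, FEATURE_DIM))
W_rec = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(scale=0.1, size=(MOTOR_DIM, HIDDEN_DIM))

def observed_action_to_motor(sequence):
    """Run the observed feature sequence through the recurrent layer and
    return one motor command vector per time step (imitation output)."""
    h = np.zeros(HIDDEN_DIM)
    motor_commands = []
    for features in sequence:
        h = np.tanh(W_in @ features + W_rec @ h)
        motor_commands.append(W_out @ h)
    return motor_commands

# Example: a 30-frame observed action produces 30 motor command vectors.
demo_sequence = rng.normal(size=(30, FEATURE_DIM))
print(len(observed_action_to_motor(demo_sequence)), "motor commands")
```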

Development of a Wall-climbing Welding Robot for Draft Mark on the Curved Surface (선수미 흘수마크 용접을 위한 벽면이동로봇 개발)

  • Lee, Jae-Chang;Kim, Ho-Gu;Kim, Se-Hwan;Ryu, Sin-Wook
    • Special Issue of the Society of Naval Architects of Korea / 2006.09a / pp.112-121 / 2006
  • The vertical displacement of a ship relative to the sea level is an important parameter for its stability and control. To indicate this displacement under operating conditions, "draft marks" are carved on the hull of the ship in various ways, one of which is welding. The position, shape, and size of the marks are specified in the shipbuilding rules of the classification societies and are checked by the shipbuilders. In most cases, highly skilled workers weld along the drawing for the marks, and the welding bead becomes the mark. However, inaccuracies due to human error and high labor costs increase the need to automate the draft-mark work. In preceding work, an indoor robot was developed for an automatic marking system on flat surfaces, and that work showed that robot welding was more effective and accurate than manual welding. However, many parts of the hull structure constructed outdoors have curved shapes, which is beyond the capability of the robot developed for indoor work on flat surfaces. Marking on curved steel surfaces at elevations of 25 m is one of the main challenges for conventional robots. In the present paper, a robot capable of climbing vertical curved steel surfaces and performing the welding at the marked positions, effectively solving the problems mentioned above, is presented.


A Multi Small Humanoid Robot Control for Efficient Robot Performance (효율적인 전시공연을 위한 멀티 소형 휴머노이드 로봇제어)

  • Jang, Jun-Young;Lin, Chi-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8933-8939 / 2015
  • In this paper, we design and implement a control method for multiple small humanoid robots performing in exhibitions, aimed at maximizing efficiency and user convenience. In recent years, robots have increasingly been used in performances and exhibitions across various genres, including plays, musicals, and orchestra concerts. In existing small-humanoid exhibition performances, the sound source is played from an external computer or MP3 player while the operator presses the start button on the communication equipment to make the robots begin their motions. Because the sound source and the robot operation are driven separately, their starting points are often not synchronized; the robot motion and the sound drift apart, and the show frequently has to be restarted. In addition, when a robot loses its balance during a performance, the operator has to intervene and restart the show. To overcome these problems, this paper designs GUI-based human-interface software for controlling multiple small humanoid robots, maximizing efficiency and user convenience, and uses Zigbee communication to transmit data to the multiple small humanoids. The effectiveness, validity, and user convenience of the proposed approach are demonstrated through an actual implementation with a number of small humanoid robots.
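The core fix described above is a single trigger for both sound and robot motion. As a rough illustration only (the paper's GUI software and Zigbee message format are not given here), one controller can start audio playback and immediately broadcast the same start cue to every robot, so both share one starting point. The transport below is a stub standing in for the Zigbee link.

```python
# A simplified sketch of the single-trigger idea: one controller starts the audio and
# commands all robots in the same call, avoiding the dual-source synchronization problem.
import time

class RobotLink:
    """Stand-in for a Zigbee link to one small humanoid (hypothetical interface)."""
    def __init__(self, robot_id):
        self.robot_id = robot_id
    def send(self, message):
        print(f"[{time.monotonic():.3f}s] robot {self.robot_id} <- {message}")

def start_performance(robot_links, motion_id):
    """Issue one synchronized start: trigger audio, then immediately cue every robot."""
    t0 = time.monotonic()
    print(f"[{t0:.3f}s] audio playback started")          # placeholder for the sound source
    for link in robot_links:
        link.send(f"START motion={motion_id}")            # same motion cue to every robot

start_performance([RobotLink(i) for i in range(3)], motion_id=7)
```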

Robust Speech Endpoint Detection in Noisy Environments for HRI (Human-Robot Interface) (인간로봇 상호작용을 위한 잡음환경에 강인한 음성 끝점 검출 기법)

  • Park, Jin-Soo;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.147-156 / 2013
  • In this paper, a new speech endpoint detection method for noisy environments on moving robot platforms is proposed. In the conventional method, the endpoint of speech is obtained by applying an edge-detection filter that finds abrupt changes in the feature domain. However, since the frame-energy feature is unstable in such noisy environments, it is difficult to find the endpoint of speech accurately. Therefore, a novel feature extraction method based on the twice-iterated fast Fourier transform (TIFFT) and statistical models of speech is proposed, and the proposed feature is fed to an edge-detection filter for effective detection of the speech endpoint. Representative experiments show a substantial improvement over the conventional method.
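The processing chain above combines a per-frame feature with an edge-detection filter over the feature track. The sketch below is a hedged interpretation only: it reads "twice-iterated FFT" as applying the FFT to a frame and then again to its magnitude spectrum, and uses a simple difference-of-means edge detector; frame size, thresholds, and the statistical speech model from the paper are not reproduced.

```python
# Hedged sketch: a TIFFT-like per-frame feature followed by a difference-of-means
# edge filter whose peaks mark candidate speech start/end points. Parameters assumed.
import numpy as np

def tifft_feature(frame):
    """Apply the FFT to the frame, then again to its magnitude spectrum,
    and summarise the result as a single energy-like value."""
    spec1 = np.abs(np.fft.rfft(frame))
    spec2 = np.abs(np.fft.rfft(spec1))
    return float(np.log1p(spec2.sum()))

def edge_filter(track, half_width=5):
    """Difference-of-means edge detector over the frame-level feature track."""
    edges = np.zeros(len(track))
    for i in range(half_width, len(track) - half_width):
        edges[i] = np.mean(track[i:i + half_width]) - np.mean(track[i - half_width:i])
    return edges

# Example on synthetic audio: silence, a louder speech-like burst, then silence again.
rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0, 0.01, 8000),
                         rng.normal(0, 0.5, 8000),
                         rng.normal(0, 0.01, 8000)])
frames = signal.reshape(-1, 400)                       # 400-sample frames
track = np.array([tifft_feature(f) for f in frames])
edges = edge_filter(track)
print("start frame ~", int(np.argmax(edges)), ", end frame ~", int(np.argmin(edges)))
```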

Networked Robots using ATLAS Service-Oriented Architecture in the Smart Spaces

  • Helal, Sumi;Bose, Raja;Lim, Shin-Young;Kim, Hyun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.4 / pp.288-298 / 2008
  • We introduce a new type of networked robot, the Ubiquitous Robotic Companion (URC), embedded with the ATLAS service-oriented architecture to enhance its space-sensing capability. The URC is a network-based robotic system developed by ETRI. From years of experience deploying services with the ATLAS sensor platform for elders and people with special needs in smart houses, we find that networked robots are needed to assist elderly people in their daily living. Pervasive computing technologies now make it possible for smart spaces, consisting of sensors, actuators, and smart devices, to collaborate with networked robots acting as mobile sensing platforms, as complex and sophisticated actuators, and as human interfaces. This paper describes our experience in designing and implementing a system architecture that integrates URC robots into pervasive computing environments using the University of Florida's ATLAS service-oriented architecture. We focus on the integrated framework architecture of the URC embedded with the ATLAS platform, and we show how the integrated URC system provides better services that enhance its space sensing in the smart space by applying a service-oriented architecture characterized by the flexibility to add or delete service components of the Ubiquitous Robotic Companion.
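The key architectural property claimed above is that robot capabilities can be added or removed as service components at runtime. The toy sketch below illustrates that service-oriented pattern only; it is not the ATLAS API, and all class, method, and service names are invented for illustration.

```python
# A toy service registry: the robot's capabilities are exposed as named services that
# other components can add, invoke, and remove at runtime. Not the ATLAS interface.
class ServiceRegistry:
    def __init__(self):
        self._services = {}
    def register(self, name, handler):
        self._services[name] = handler          # add a service component
    def unregister(self, name):
        self._services.pop(name, None)          # delete a service component
    def invoke(self, name, **kwargs):
        return self._services[name](**kwargs)   # call a discovered service

# The robot contributes mobile-sensing and actuation services to the smart space.
registry = ServiceRegistry()
registry.register("urc.read_temperature", lambda room: {"room": room, "celsius": 22.5})
registry.register("urc.goto", lambda x, y: f"navigating to ({x}, {y})")

print(registry.invoke("urc.read_temperature", room="living"))
print(registry.invoke("urc.goto", x=3.0, y=1.5))
registry.unregister("urc.goto")                 # services can be removed just as easily
```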

A Robotcar-based Proof of Concept Model System for Dilemma Zone Decision Support Service (딜레마구간 의사결정 지원 서비스를 위한 로봇카 기반의 개념검증 모형 시스템)

  • Lee, Hyukjoon;Chung, Young-Uk;Lee, Hyungkeun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.13 no.4 / pp.57-62 / 2014
  • Recently, research activities to develop services that provide safety information to drivers of fast-moving vehicles, based on various wireless network technologies such as DSRC (Dedicated Short Range Communication) and IEEE 802.11p WAVE (Wireless Access in Vehicular Environments), have been widely carried out. This paper presents a robot-car-based proof-of-concept model for a dilemma zone decision support service using wireless LAN technology. The proposed model system consists of a robot-car running embedded Linux and equipped with a WiFi interface and an on-board unit emulator, an Android-based remote controller that models the human driver interface, a laptop computer that runs a model traffic signal controller and signal lights, and a WiFi access point that models a road-side unit.
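The service concept above delivers signal information to the on-board unit so the driver can be advised near the stop line. The sketch below only illustrates the kind of advice such a unit could derive: the message fields and the classic can-stop versus can-clear comparison are assumptions, not the paper's specification.

```python
# Simplified dilemma-zone advice: compare stopping distance with distance to the stop
# line, and clearing distance with distance plus intersection length. Values assumed.
import json

def dilemma_zone_advice(speed_mps, distance_m, yellow_remaining_s,
                        max_decel=3.0, intersection_len=15.0):
    """Return 'STOP' or 'GO' for a vehicle approaching the stop line."""
    stopping_distance = speed_mps ** 2 / (2 * max_decel)   # can we stop in time?
    clearing_distance = speed_mps * yellow_remaining_s     # can we clear before red?
    can_stop = stopping_distance <= distance_m
    can_clear = clearing_distance >= distance_m + intersection_len
    if can_clear:
        return "GO"
    if can_stop:
        return "STOP"
    return "STOP"  # neither option is safe: conservative default in the true dilemma zone

# Example road-side message as it might arrive over the Wi-Fi link (format assumed).
rsu_message = json.loads('{"phase": "yellow", "remaining_s": 2.0, "stopline_dist_m": 25.0}')
advice = dilemma_zone_advice(speed_mps=14.0,
                             distance_m=rsu_message["stopline_dist_m"],
                             yellow_remaining_s=rsu_message["remaining_s"])
print("driver advice:", advice)
```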

Development of a Web Platform System for Worker Protection using EEG Emotion Classification (뇌파 기반 감정 분류를 활용한 작업자 보호를 위한 웹 플랫폼 시스템 개발)

  • Ssang-Hee Seo
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.37-44 / 2023
  • As a primary technology of Industry 4.0, human-robot collaboration (HRC) requires additional measures to ensure worker safety. Previous studies on avoiding collisions between collaborative robots and workers mainly detect collisions based on sensors and cameras attached to the robot. This method requires complex algorithms to continuously track robots, people, and objects and has the disadvantage of not being able to respond quickly to changes in the work environment. The present study was conducted to implement a web-based platform that manages collaborative robots by recognizing the emotions of workers - specifically their perception of danger - in the collaborative process. To this end, we developed a web-based application that collects and stores emotion-related brain waves via a wearable device; a deep-learning model that extracts and classifies the characteristics of neutral, positive, and negative emotions; and an Internet-of-things (IoT) interface program that controls motor operation according to classified emotions. We conducted a comparative analysis of our system's performance using a public open dataset and a dataset collected through actual measurement, achieving validation accuracies of 96.8% and 70.7%, respectively.
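The control path described above runs from an EEG window, through an emotion classifier, to an IoT-side motor command. The sketch below is only a rough stand-in for that path: the band-power features, thresholds, and decision rule are placeholders, and the paper's trained deep-learning model is not reproduced here.

```python
# Rough end-to-end sketch: EEG window -> emotion class -> robot/motor command.
import numpy as np

FS = 128  # assumed EEG sampling rate (Hz)

def band_power(window, low, high):
    """Average spectral power of one EEG channel in a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    mask = (freqs >= low) & (freqs < high)
    return float(spectrum[mask].mean())

def classify_emotion(window):
    """Placeholder classifier: the real system uses a trained deep model to map EEG
    features to {negative, neutral, positive}."""
    alpha = band_power(window, 8, 13)
    beta = band_power(window, 13, 30)
    ratio = beta / (alpha + 1e-9)
    return "negative" if ratio > 1.5 else ("positive" if ratio < 0.7 else "neutral")

def motor_command(emotion):
    """IoT-side rule: slow or stop the collaborative robot when the worker signals danger."""
    return {"negative": "STOP", "neutral": "RUN_NORMAL", "positive": "RUN_NORMAL"}[emotion]

# Example with a synthetic 2-second EEG window.
window = np.random.default_rng(2).normal(size=2 * FS)
emotion = classify_emotion(window)
print(emotion, "->", motor_command(emotion))
```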

A Research for Interface Based on EMG Pattern Combinations of Commercial Gesture Controller (상용 제스처 컨트롤러의 근전도 패턴 조합에 따른 인터페이스 연구)

  • Kim, Ki-Chang;Kang, Min-Sung;Ji, Chang-Uk;Ha, Ji-Woo;Sun, Dong-Ik;Xue, Gang;Shin, Kyoo-Sik
    • Journal of Engineering Education Research / v.19 no.1 / pp.31-36 / 2016
  • ICT-related products are proliferating due to the development of mobile technology and the spread of smartphones. Among these products, wearable devices are in the spotlight with the advent of the hyper-connected society. In this paper, a body-attached wearable device using EMG (electromyography) sensors is studied. Research on EMG sensors is divided into two areas: medical applications and control devices. This study belongs to the latter, namely methods of transmitting a user's manipulation intention to robots, games, or computers through EMG measurements. We used the commercial MYO device developed by Thalmic Labs in Canada and matched the EMG of the arm muscles to a gesture controller. In the experiments, various arm motions for controlling devices were first defined; we then derived several distinguishable motions through analysis of the EMG signals and substituted those motions for a joystick.
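The final step described above substitutes classified EMG gestures for joystick inputs. The sketch below illustrates only that mapping: the gesture names, channel layout, thresholds, and the toy classifier are assumptions, and the MYO SDK itself is not used.

```python
# Hedged sketch: classified EMG gestures are mapped to joystick-style commands.
import numpy as np

GESTURE_TO_COMMAND = {              # gesture label -> control command (assumed mapping)
    "fist":           "STOP",
    "wave_in":        "TURN_LEFT",
    "wave_out":       "TURN_RIGHT",
    "fingers_spread": "FORWARD",
    "rest":           "NEUTRAL",
}

def rms_per_channel(emg_window):
    """Root-mean-square activation of each of the 8 EMG channels."""
    return np.sqrt(np.mean(np.square(emg_window), axis=0))

def classify_gesture(emg_window):
    """Toy stand-in for the pattern classifier: picks a gesture from the most active
    channel. A real system would use a trained classifier on the EMG patterns."""
    rms = rms_per_channel(emg_window)
    if rms.max() < 0.05:
        return "rest"
    dominant = int(np.argmax(rms))
    return ["fist", "wave_in", "wave_out", "fingers_spread"][dominant % 4]

# Example: one 200-sample window from 8 channels produces one control command.
window = np.random.default_rng(3).normal(scale=0.2, size=(200, 8))
gesture = classify_gesture(window)
print(gesture, "->", GESTURE_TO_COMMAND[gesture])
```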