• Title/Abstract/Keywords: Human Automation Interaction

Search results: 35 items

Automatic Gesture Recognition for Human-Machine Interaction: An Overview

  • Nataliia, Konkina
    • International Journal of Computer Science & Network Security / Vol. 22, No. 1 / pp. 129-138 / 2022
  • With the increasing reliance on computing systems in our everyday life, there is a constant need to improve the ways users can interact with such systems in a more natural, effective, and convenient way. In the initial computing revolution, interaction between humans and machines was limited; the machines were not necessarily meant to be intelligent. This created the need for systems that can automatically identify and interpret our actions. Automatic gesture recognition is one of the popular methods by which users can control systems with their gestures. This includes various kinds of tracking, including the whole body, hands, head, and face. We also touch upon a different line of work, including the Brain-Computer Interface (BCI) and Electromyography (EMG), as potential additions to the gesture recognition regime. In this work, we present an overview of several applications of automated gesture recognition systems and a brief look at the popular methods employed.
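
As an editorial illustration of the keypoint-based tracking methods such overviews survey, here is a minimal sketch (not taken from the paper) that classifies a few hand gestures from 2D hand keypoints using simple geometric rules; the landmark ordering, thresholds, and gesture labels are assumptions.

```python
# Minimal sketch (not from the paper): rule-based gesture classification from
# 2D hand keypoints, assuming an upstream tracker already provides them.
import numpy as np

def classify_gesture(keypoints: np.ndarray) -> str:
    """keypoints: (21, 2) array of hand landmarks in image coordinates,
    assumed ordered wrist first, then 4 points per finger (thumb..pinky)."""
    wrist = keypoints[0]
    # Fingertips are assumed to be the last point of each finger chain.
    tips = keypoints[[4, 8, 12, 16, 20]]
    # A finger counts as "extended" if its tip is far from the wrist relative to palm size.
    palm_size = np.linalg.norm(keypoints[5] - wrist) + 1e-6
    extended = np.linalg.norm(tips - wrist, axis=1) / palm_size > 1.6
    if extended[1] and not extended[2:].any():
        return "pointing"          # only the index finger is extended
    if extended.all():
        return "open_hand"
    if not extended.any():
        return "fist"
    return "unknown"

# Toy usage with random keypoints; a real system would feed tracker output per frame.
print(classify_gesture(np.random.rand(21, 2)))
```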

A Human Robot Interactive System "RoJi"

  • Shim, Inbo;Yoon, Joongsun;Yoh, Myeungsook
    • International Journal of Control, Automation, and Systems / Vol. 2, No. 3 / pp. 398-405 / 2004
  • A human-friendly interactive system based on the harmonious, symbiotic coexistence of humans and robots is explored. Based on the interactive technology paradigm, a robotic cane is proposed for blind or visually impaired pedestrians to navigate safely and quickly through obstacles and other hazards. Robotic aids, such as robotic canes, require cooperation between humans and robots. Various methods for implementing the appropriate cooperative recognition, planning, and acting have been investigated. The issues discussed include the interaction between humans and robots, design issues of an interactive robotic cane, and behavior arbitration methodologies for navigation planning.
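
The abstract mentions behavior arbitration methodologies for navigation planning. The sketch below illustrates one common arbitration scheme, a fixed-priority arbiter; the behaviors, thresholds, and command format are hypothetical and are not taken from the RoJi paper.

```python
# Minimal sketch of a fixed-priority behavior arbiter, one common arbitration
# scheme; the behaviors and parameters here are hypothetical, not from the paper.
from typing import Callable, Optional, Tuple

Command = Tuple[float, float]  # (linear velocity, angular velocity)

def avoid_obstacle(sensors: dict) -> Optional[Command]:
    # Highest priority: stop and turn when an obstacle is closer than 0.5 m.
    if sensors["front_distance_m"] < 0.5:
        return (0.0, 0.6)
    return None

def follow_user_intent(sensors: dict) -> Optional[Command]:
    # Lower priority: steer toward the heading the user indicates via the handle.
    return (0.4, 0.3 * sensors["handle_angle_rad"])

BEHAVIORS = [avoid_obstacle, follow_user_intent]  # ordered by priority

def arbitrate(sensors: dict) -> Command:
    """Return the command of the highest-priority behavior that wants control."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return (0.0, 0.0)  # default: stand still

print(arbitrate({"front_distance_m": 1.2, "handle_angle_rad": 0.1}))
print(arbitrate({"front_distance_m": 0.3, "handle_angle_rad": 0.1}))
```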

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon;Lee, Hoonyong;Ahn, Changbum R.;Jung, Minhyuk;Park, Moonseo
    • International Conference Proceedings / The 9th International Conference on Construction Engineering and Project Management / pp. 877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, various trades of workers perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information. To address this, this research exploited the concept of human-object interaction (the interaction between a worker and the surrounding objects), considering that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context in sequential image frames based on four features: posture, object, spatial, and temporal features. The posture and object features were used to analyze the interaction between the worker and the target object, and the other two features were used to detect movements from the entire region of the image frames in both the temporal and spatial domains. The developed approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, and a long short-term memory (LSTM) network was also used as an activity classifier. The developed approach provided an average accuracy of 85.96% for classifying 12 target construction tasks performed by two trades of workers, which was higher than that of two benchmark models. This experimental result indicates that integrating the concept of human-object interaction offers great benefits in activity recognition when various trade workers coexist in a scene.
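
To make the described architecture concrete, the following is a minimal PyTorch-style sketch (an editorial illustration, not the authors' code) in which a CNN extracts per-frame features and an LSTM classifies the activity over the sequence; the layer sizes, input resolution, and 12-class output are assumptions.

```python
# Minimal sketch (not the authors' code) of the described architecture: a CNN
# extracts per-frame features and an LSTM classifies the activity over the
# sequence. Layer sizes and the 12-class output are illustrative assumptions.
import torch
import torch.nn as nn

class ActivityRecognizer(nn.Module):
    def __init__(self, num_classes: int = 12, feat_dim: int = 128):
        super().__init__()
        # Per-frame CNN feature extractor (posture/object/spatial cues).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # Temporal model over the sequence of frame features.
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])          # (batch, num_classes) logits

# Toy forward pass: 2 clips of 8 frames at 64x64 resolution.
logits = ActivityRecognizer()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 12])
```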


Honeypot game-theoretical model for defending against APT attacks with limited resources in cyber-physical systems

  • Tian, Wen;Ji, Xiao-Peng;Liu, Weiwei;Zhai, Jiangtao;Liu, Guangjie;Dai, Yuewei;Huang, Shuhua
    • ETRI Journal / Vol. 41, No. 5 / pp. 585-598 / 2019
  • A cyber-physical system (CPS) is a new mechanism controlled or monitored by computer algorithms that intertwine physical and software components. Advanced persistent threats (APTs) represent stealthy, powerful, and well-funded attacks against CPSs that integrate physical processes, and they have recently become an active research area. Existing offensive and defensive processes for APTs in CPSs are usually modeled with incomplete-information game theory. However, honeypots, which are effective security vulnerability defense mechanisms, have not been widely adopted or modeled for defense against APT attacks in CPSs. In this study, a honeypot game-theoretical model considering both low- and high-interaction modes is used to investigate the offensive and defensive interactions, so that defensive strategies against APTs can be optimized. In this model, human analysis and honeypot allocation costs are introduced as limited resources. We prove the existence of Bayesian Nash equilibrium strategies and obtain the optimal defensive strategy under limited resources. Finally, numerical simulations demonstrate that the proposed method is effective in obtaining the optimal defensive effect.
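
As a toy illustration of the trade-off the model formalizes, the sketch below compares the defender's expected utility of low- and high-interaction honeypots once allocation and human-analysis costs are paid; the payoffs, costs, and attack probability are invented numbers, not the paper's parameters or its equilibrium computation.

```python
# Minimal numeric sketch (payoffs, costs, and attack probability are invented,
# not the paper's parameters): the defender picks the honeypot mode whose
# expected utility is highest once allocation and human-analysis costs are paid.
ATTACK_PROB = 0.3                     # defender's belief that the APT attacker acts

STRATEGIES = {
    # mode: (detection gain if attacked, allocation cost, analysis cost)
    "no_honeypot":      (0.0, 0.0, 0.0),
    "low_interaction":  (6.0, 1.0, 0.5),
    "high_interaction": (9.0, 3.0, 2.5),
}

def expected_utility(gain: float, alloc_cost: float, analysis_cost: float) -> float:
    """Expected defender payoff: the gain is realized only when an attack occurs,
    while allocation and human-analysis costs are always paid."""
    return ATTACK_PROB * gain - alloc_cost - analysis_cost

best = max(STRATEGIES, key=lambda s: expected_utility(*STRATEGIES[s]))
for name, params in STRATEGIES.items():
    print(f"{name:17s} expected utility = {expected_utility(*params):+.2f}")
print("best response under this belief:", best)   # low_interaction with these numbers
```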

A Study on Infra-Technology of RCP Interaction System

  • Kim, Seung-Woo;Choe, Jae-Il
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICROS 2004 ICCAS / pp. 1121-1125 / 2004
  • RT (Robot Technology) has been developed as a next-generation future technology. According to a 2002 technical report from the Mitsubishi R&D center, the fusion of IT (Information Technology) and RT will grow to five times the size of the current IT market by the year 2015. Moreover, a recent IEEE report predicts that most people will have a robot within the next ten years. The RCP (Robotic Cellular Phone), a CP (Cellular Phone) offering personal robot services, will be an intermediate high-tech personal machine between the one-CP-per-person and one-robot-per-person generations. The RCP infrastructure consists of $RCP^{Mobility}$, $RCP^{Interaction}$, and $RCP^{Integration}$ technologies. For $RCP^{Mobility}$, human-friendly motion automation and personal services with walking and arming abilities are developed. $RCP^{Interaction}$ ability is achieved by modeling an emotion-generating engine, and $RCP^{Integration}$, which recognizes environmental and self conditions, is developed. By joining intelligent algorithms and the CP communication network with these three base modules, an RCP system is constructed. This paper focuses in particular on the RCP interaction system. The $RCP^{Interaction}$ (Robotic Cellular Phone for Interaction) is to be developed as an emotional-model CP, as shown in figure 1. $RCP^{Interaction}$ refers to the sensitivity expression and communication link technology of the CP. It is an interface technology between humans and the CP through various emotional models. The interactive emotion functions are designed through differing patterns of vibrator beat frequencies and a feeling system created by a smell-injection switching control. Just as music influences a person, one can feel a variety of emotions from the vibrator's beats by converting musical chord frequencies into vibrator beat frequencies. This paper presents the definition, basic theory, and experimental results of the RCP interaction system, and the experiments confirm its good performance.
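
The conversion of musical chord frequencies into vibrator beat frequencies can be pictured with the following minimal sketch; the mapping, frequency ranges, and example chord are assumptions for illustration and do not reproduce the paper's actual design.

```python
# Minimal sketch (the paper's actual mapping is not reproduced here): scale
# musical chord frequencies in the audible range down into a low beat-frequency
# range that a phone vibrator could plausibly render.
C_MAJOR_HZ = {"C4": 261.63, "E4": 329.63, "G4": 392.00}      # example chord

VIB_MIN_HZ, VIB_MAX_HZ = 2.0, 12.0            # assumed vibrator beat range
AUDIO_MIN_HZ, AUDIO_MAX_HZ = 130.81, 523.25   # roughly C3..C5

def to_beat_frequency(f_audio: float) -> float:
    """Linearly map an audible note frequency onto the vibrator beat range."""
    ratio = (f_audio - AUDIO_MIN_HZ) / (AUDIO_MAX_HZ - AUDIO_MIN_HZ)
    return VIB_MIN_HZ + ratio * (VIB_MAX_HZ - VIB_MIN_HZ)

for note, f in C_MAJOR_HZ.items():
    print(f"{note}: {f:7.2f} Hz  ->  vibrator beat {to_beat_frequency(f):5.2f} Hz")
```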


Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / Vol. 2, No. 4 / pp. 285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. This work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through the utterance of words and the pointing of the hand. The system utilizes another stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands provided by the user to fine-tune the location and change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must utilize in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
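
A minimal sketch of the interaction flow described above, in which a coarse hand-pointing estimate sets the initial pan/tilt angles and spoken words then fine-tune them and the zoom; the command vocabulary and step sizes are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a coarse pointing direction
# fixes the initial pan/tilt angles, and recognized spoken words then nudge
# the angles and the zoom. Vocabulary and step sizes are assumptions.
pan, tilt, zoom = 0.0, 0.0, 1.0   # degrees, degrees, zoom factor

def point_to(target_pan_deg: float, target_tilt_deg: float) -> None:
    """Coarse positioning from the hand-pointing estimate."""
    global pan, tilt
    pan, tilt = target_pan_deg, target_tilt_deg

def voice_command(word: str) -> None:
    """Fine adjustment from recognized words."""
    global pan, tilt, zoom
    steps = {"left": (-5, 0), "right": (5, 0), "up": (0, 5), "down": (0, -5)}
    if word in steps:
        dpan, dtilt = steps[word]
        pan, tilt = pan + dpan, tilt + dtilt
    elif word == "zoom":
        zoom *= 1.5

# Example interaction: the user points roughly, then says "left", "left", "zoom".
point_to(32.0, -10.0)
for word in ["left", "left", "zoom"]:
    voice_command(word)
print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg, zoom={zoom:.2f}x")
```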


Sensory Feedback for High Dissymmetric Master-Slave Dexterity

  • Cotsaftis, Michel;Keskinen, Erno
    • Transactions on Control, Automation and Systems Engineering / Vol. 4, No. 1 / pp. 38-42 / 2002
  • Conditions are discussed for operating a dissymmetric human master-small (or micro) slave system under the best conditions (large position gain, small velocity gain) allowing higher operator dexterity when real effects (joint compliance, link flexion delay, and transmission distortion) are taken into account. It is shown that the advantage of the position PD feedback law in the ideal case no longer holds, and that a more complicated feedback law depending on the real effects has to be implemented over an adapted transmission line. The drawback is a slowdown of the master-slave interaction, suggesting the use of more advanced predictive methods for the master and a more intelligent control law for the slave.
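
For reference, the sketch below simulates the ideal-case position PD feedback law that the abstract takes as a starting point; the gains and the one-mass slave dynamics are toy values, and the corrections for compliance, delay, and distortion that the paper argues for are not modeled.

```python
# Minimal sketch of the ideal-case position PD law: the slave is driven toward
# the master position. Gains and the one-mass slave dynamics are toy values;
# the real-effect corrections discussed in the paper are not modeled here.
KP, KD = 50.0, 8.0          # position and velocity gains (large KP, small KD)
DT, MASS = 0.001, 0.2       # integration step [s], slave inertia [kg]

def simulate(master_pos: float, steps: int = 2000) -> float:
    x, v = 0.0, 0.0                               # slave position and velocity
    for _ in range(steps):
        force = KP * (master_pos - x) - KD * v    # PD feedback on position error
        v += (force / MASS) * DT
        x += v * DT
    return x

print(f"slave position after 2 s: {simulate(master_pos=0.05):.4f} m (target 0.05 m)")
```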

Key Technologies in Robot Assistants: Motion Coordination Between a Human and a Mobile Robot

  • Prassler, Erwin;Bank, Dirk;Kluge, Boris
    • Transactions on Control, Automation and Systems Engineering / Vol. 4, No. 1 / pp. 56-61 / 2002
  • In this paper we describe an approach to coordinating the motion of a human with a mobile robot moving in a populated, continuously changing, natural environment. Our test application is a wheelchair accompanying a person through the concourse of a railway station, moving side by side with the person. Our approach is based on a method for motion planning amongst moving obstacles known as the Velocity Obstacle approach. We extend this method with a method for tracking a virtual target, which allows us to vary the robot's heading and velocity with the locomotion of the accompanied person and the state of the surrounding environment.
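
The core Velocity Obstacle test that the approach builds on can be sketched as follows; the geometry check is the standard cone test, while the positions, velocities, and radii are toy values, and the paper's virtual-target extension is not included.

```python
# Minimal sketch of the Velocity Obstacle test: a candidate robot velocity is
# unsafe if the relative velocity points into the collision cone of a
# constant-velocity obstacle. Radii and positions are toy values.
import numpy as np

def in_velocity_obstacle(v_robot, p_robot, v_obst, p_obst, combined_radius):
    """True if choosing v_robot leads to collision with the moving obstacle."""
    rel_p = np.asarray(p_obst, float) - np.asarray(p_robot, float)
    rel_v = np.asarray(v_robot, float) - np.asarray(v_obst, float)
    dist = np.linalg.norm(rel_p)
    if dist <= combined_radius:
        return True                          # already overlapping
    # Half-angle of the collision cone as seen from the robot.
    cone_half_angle = np.arcsin(combined_radius / dist)
    speed = np.linalg.norm(rel_v)
    if speed < 1e-9:
        return False                         # no relative motion
    angle_to_center = np.arccos(np.clip(rel_p @ rel_v / (dist * speed), -1.0, 1.0))
    return angle_to_center <= cone_half_angle

# A velocity aimed straight at the obstacle is rejected; a sidestep is accepted.
print(in_velocity_obstacle([1.0, 0.0], [0, 0], [0.0, 0.0], [3, 0], 0.6))  # True
print(in_velocity_obstacle([1.0, 0.8], [0, 0], [0.0, 0.0], [3, 0], 0.6))  # False
```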

Analysis of the Human Performance and Communication Effects on the Operator Tasks of Military Robot Vehicles by Using Extended Petri Nets

  • 최상영;양지현
    • Korean Journal of Computational Design and Engineering / Vol. 22, No. 2 / pp. 162-171 / 2017
  • Unmanned military vehicles (UMVs) are most commonly characterized as dealing with dull, dirty, and dangerous tasks through automation. Although most UMVs are designed for a high degree of autonomy, the human operator will still intervene in the robot's operation and teleoperate it to achieve his or her mission. Thus, operator capacity, together with robot autonomy and the user interface, is one of the most important design factors in the research and development of UMVs. Furthermore, communication may affect operator task performance. In this paper, we analyze operator performance and the effects of communication on operator performance by using extended Petri nets, called OTSim nets. The OTSim nets were designed by the authors as an extension of pure Petri nets.
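
As background for readers unfamiliar with the formalism, the sketch below shows the token-firing mechanics of a pure Petri net, which OTSim nets extend; the places and transitions are illustrative and are not the paper's operator-task model.

```python
# Minimal sketch of pure Petri net mechanics (which OTSim nets extend); the
# places and transitions here are illustrative, not the paper's operator-task model.
marking = {"task_waiting": 1, "operator_idle": 1, "task_done": 0}

TRANSITIONS = {
    # name: (input places consumed, output places produced)
    "start_teleoperation": ({"task_waiting": 1, "operator_idle": 1}, {"task_done": 1}),
}

def enabled(name: str) -> bool:
    inputs, _ = TRANSITIONS[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name: str) -> None:
    """Consume input tokens and produce output tokens if the transition is enabled."""
    if not enabled(name):
        raise ValueError(f"transition {name!r} is not enabled")
    inputs, outputs = TRANSITIONS[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

fire("start_teleoperation")
print(marking)   # {'task_waiting': 0, 'operator_idle': 0, 'task_done': 1}
```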