• Title/Summary/Keyword: Human Tracking


Design of Fuzzy Model-based Multi-objective Controller and Its Application to MAGLEV ATO system (퍼지 모델 기반 다목적 제어기의 설계와 자기부상열차 자동운전시스템에의 적용)

  • 강동오;양세현;변증남
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.10a
    • /
    • pp.211-217
    • /
    • 1998
  • Many practical control problems for complex, uncertain, or large-scale plants require the simultaneous achievement of a number of objectives, which may conflict or compete with each other. If conventional optimization methods are applied to such problems, the solution process may be time-consuming and the resulting solution may often lose its original meaning of optimality. Nevertheless, human operators usually produce satisfactory results based on their qualitative and heuristic knowledge. In this paper, we investigate the control strategies of human operators and propose a fuzzy model-based multi-objective satisfactory controller. We also apply it to the automatic train operation (ATO) system for magnetically levitated vehicles (MAGLEV). One of the human operator's strategies is to predict the control result in order to find a meaningful solution; a Takagi-Sugeno fuzzy model is used to simulate this prediction procedure. Another strategy is to evaluate the multiple objectives with respect to their own standards. To realize this strategy, we propose the concept of a satisfactory solution and a satisfactory control scheme. The MAGLEV train is a typical example of an uncertain, complex, and large-scale plant. Moreover, the ATO system has to satisfy multiple objectives, such as speed pattern tracking, stop gap accuracy, safety, and riding comfort. In this paper, the speed pattern tracking controller and the automatic stop controller of the ATO system are designed based on the proposed control scheme. The effectiveness of the ATO system based on the proposed scheme is shown by experiments with a rotary test bed and a real MAGLEV train.

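The prediction step described above rests on a Takagi-Sugeno fuzzy model, which blends local linear models by rule-firing strength. A minimal one-input sketch is given below; the membership functions and local consequents are illustrative assumptions, not values from the paper:

```python
def ts_predict(x, rules):
    """Takagi-Sugeno fuzzy model: weighted average of local linear models.

    rules: list of (membership_fn, a, b), where each rule's consequent
    is the local linear model y = a*x + b.
    """
    weights = [mu(x) for mu, _, _ in rules]
    total = sum(weights)
    if total == 0:
        raise ValueError("no rule fires at x=%r" % x)
    return sum(w * (a * x + b) for w, (_, a, b) in zip(weights, rules)) / total

# Two hypothetical memberships over a 0-50 speed range: "speed LOW" / "speed HIGH"
low = lambda x: max(0.0, min(1.0, (50.0 - x) / 50.0))
high = lambda x: max(0.0, min(1.0, x / 50.0))
rules = [(low, 0.5, 0.0),    # if speed is LOW:  y = 0.5*x
         (high, 0.2, 15.0)]  # if speed is HIGH: y = 0.2*x + 15

print(ts_predict(25.0, rules))  # both rules fire equally -> 16.25
```

At the midpoint both rules fire with weight 0.5, so the output is the average of the two local models, which is what gives the model its smooth interpolation between operating regions.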

A Study for Detecting a Gazing Point Based on Reference Points (참조점을 이용한 응시점 추출에 관한 연구)

  • Kim, S.I.;Lim, J.H.;Cho, J.M.;Kim, S.H.;Nam, T.W.
    • Journal of Biomedical Engineering Research
    • /
    • v.27 no.5
    • /
    • pp.250-259
    • /
    • 2006
  • Information on eye movement is used in various fields such as psychology, ophthalmology, physiology, rehabilitation medicine, web design, human-machine interfaces (HMI), and so on. Various devices to detect eye movement have been developed, but they are too expensive. The general methods of eye movement tracking are the electro-oculograph (EOG), the Purkinje image tracker, the scleral search coil technique, and the video-oculograph (VOG). The purpose of this study is to implement an algorithm that tracks the location of the gazing point from the pupil. Two kinds of location data were compared to track the gazing point: reference points (infrared LEDs) reflected from the eyeball, and the center point of the pupil obtained with a CCD camera. The reference points were captured with the CCD camera under infrared light, which is not visible to the human eye. Images taken both with and without infrared illumination of the eyeball were captured and saved, and the reflected reference points were detected from the brightness difference between the two saved images. The circumcenter of a triangle was used to find the center of the pupil, and the location of the gazing point was then expressed relative to the pupil center and the reference points.
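The circumcenter construction used to locate the pupil center can be sketched directly: given three points sampled on the pupil boundary (the sampling itself is outside this sketch), the circumcenter is the point equidistant from all three, i.e. the center of the circular pupil contour:

```python
def circumcenter(p1, p2, p3):
    """Circumcenter of the triangle p1-p2-p3: the point equidistant
    from all three vertices, via the standard closed-form solution."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Three points on a circle of radius sqrt(2) centered at (1, 1)
print(circumcenter((0, 0), (2, 0), (0, 2)))  # -> (1.0, 1.0)
```

With three well-separated boundary points the estimate is exact for a circular contour; in practice one would average over several point triples to suppress pixel noise.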

B-COV:Bio-inspired Virtual Interaction for 3D Articulated Robotic Arm for Post-stroke Rehabilitation during Pandemic of COVID-19

  • Allehaibi, Khalid Hamid Salman;Basori, Ahmad Hoirul;Albaqami, Nasser Nammas
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.2
    • /
    • pp.110-119
    • /
    • 2021
  • The Coronavirus, or COVID-19, is a contagious virus that has reached almost every part of the world. The pandemic forced many countries to impose lockdowns and stay-at-home policies to reduce the spread of the virus and the number of victims. Interactions between humans and robots form a popular subject of research worldwide. In medical robotics, the primary challenge is to implement natural interactions between robots and human users. Human communication consists of dynamic processes that involve joint attention and mutual engagement, and coordinated care involves agents sharing behaviours, events, interests, and contexts over time. Because a robotic arm is an expensive and complicated system, robot simulators are widely used instead for rehabilitation purposes in medicine, and natural interaction is necessary for disabled persons to work with a robot simulator. This article proposes a low-cost rehabilitation system built on an arm gesture tracking system based on a depth camera, which captures and interprets human gestures and uses them as interactive commands for a robot simulator to perform specific tasks on a 3D block. The results show that the proposed system can help patients control the rotation and movement of the 3D arm using their hands. Pilot testing with healthy subjects yielded encouraging results: they could synchronize their actions with the 3D robotic arm to perform several repetitive tasks, exerting 19,920 J of energy (kg·m²·s⁻²). This average energy consumption is in the medium range; we therefore relate it to rehabilitation performance as an initial stage, which can be improved further with additional repetitive exercise to speed up the recovery process.

A Hand Gesture Recognition System using 3D Tracking Volume Restriction Technique (3차원 추적영역 제한 기법을 이용한 손 동작 인식 시스템)

  • Kim, Kyung-Ho;Jung, Da-Un;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.6
    • /
    • pp.201-211
    • /
    • 2013
  • In this paper, we propose a hand tracking and gesture recognition system. Our system employs a depth capture device to obtain 3D geometric information of the user's bare hand. In particular, we build a flexible tracking volume and restrict the hand tracking area, so that we can avoid diverse problems caused by conventional object detection/tracking systems. The proposed system computes a running average of the hand position, and the tracking volume is actively adjusted according to statistical information computed from the uncertainty of the user's hand motion in 3D space. Once the position of the user's hand is obtained, the system attempts to detect stretched fingers to recognize finger gestures. To test the proposed framework, we built an NUI system using the proposed technique and verified that it performs very stably even when multiple objects exist simultaneously in a crowded environment, as well as when the scene is temporarily occluded. We also verified that our system runs at 24-30 frames per second throughout the experiments.
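The core idea of the abstract, a tracking volume that follows a running average of the hand position and widens with motion uncertainty, can be sketched as follows. The window size and the scaling of the radius by the motion spread are my own illustrative choices, not parameters from the paper:

```python
import math
from collections import deque

class AdaptiveTrackingVolume:
    """Spherical tracking volume centered on a running average of recent
    hand positions; the radius grows with the spread of recent motion."""

    def __init__(self, window=10, base_radius=0.1, scale=2.0):
        self.history = deque(maxlen=window)  # recent 3D positions
        self.base_radius = base_radius       # minimum volume radius (m)
        self.scale = scale                   # uncertainty scaling factor

    def update(self, pos):
        """Add a new hand position; return (center, radius) of the volume."""
        self.history.append(pos)
        n = len(self.history)
        center = tuple(sum(p[i] for p in self.history) / n for i in range(3))
        # mean squared distance from the running average = motion uncertainty
        var = sum(sum((p[i] - center[i]) ** 2 for i in range(3))
                  for p in self.history) / n
        radius = self.base_radius + self.scale * math.sqrt(var)
        return center, radius

    def contains(self, pos, center, radius):
        """Detections outside the volume are rejected as clutter."""
        return math.dist(pos, center) <= radius
```

Restricting detection to this sphere is what lets the system ignore other objects in a crowded scene: a candidate blob far from the recent hand trajectory simply falls outside the volume.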

Navigation Trajectory Control of Security Robots to Restrict Access to Potential Falling Accident Areas for the Elderly (노약자의 낙상가능지역 진입방지를 위한 보안로봇의 주행경로제어)

  • Jin, Taeseok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.6
    • /
    • pp.497-502
    • /
    • 2015
  • One of the goals in the field of mobile robotics is the development of personal service robots for the elderly that behave in populated environments. In this paper, we describe a security robot system and ongoing research results that minimize the risk of the elderly and the infirm entering restricted areas with a high potential for falls, such as stairs, steps, and wet floors. The proposed robot system surveys a potential falling area with an on-board laser scanner. When it detects an elderly or infirm person walking toward a restricted area, the robot calculates the person's velocity vector, plans its own path to forestall them and prevent them from entering the restricted area, and starts to move along the estimated trajectory. The walking human is modeled as a point object and projected onto a scanning plane to form a geometric constraint equation that provides position data of the human based on the kinematics of the mobile robot. While moving, the robot repeats these processes in order to adapt to the changing situation. After arriving at a position opposite the human's walking direction, the robot advises them to change course. Simulation and experimental results of estimating and tracking a person heading in the wrong direction with the mobile robot are presented.
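The forestalling step amounts to predicting where the walker will cross the restricted-area boundary and sending the robot there first. A minimal sketch under a constant-velocity assumption (the boundary-as-vertical-line simplification and all names are mine, not the paper's formulation):

```python
def intercept_point(person_pos, person_vel, boundary_x):
    """Predict where a walker at person_pos moving with constant velocity
    person_vel reaches the boundary line x = boundary_x.

    Returns ((x, y), time_to_reach), or None if the person is not
    heading toward the boundary. The robot would navigate to this
    point to block entry into the restricted area.
    """
    px, py = person_pos
    vx, vy = person_vel
    dx = boundary_x - px
    if vx == 0 or dx * vx < 0:
        return None  # walking parallel to, or away from, the boundary
    t = dx / vx
    return (boundary_x, py + vy * t), t

# Walker at the origin heading toward the boundary at x = 4
print(intercept_point((0.0, 0.0), (1.0, 0.5), 4.0))  # -> ((4.0, 2.0), 4.0)
```

In the paper the robot re-runs this estimation continuously while moving, so the predicted intercept point is refreshed as the person's velocity estimate changes.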

Work chain-based inverse kinematics of robot to imitate human motion with Kinect

  • Zhang, Ming;Chen, Jianxin;Wei, Xin;Zhang, Dezhou
    • ETRI Journal
    • /
    • v.40 no.4
    • /
    • pp.511-521
    • /
    • 2018
  • The ability to realize human-motion imitation using robots is closely related to developments in the field of artificial intelligence. However, it is not easy to imitate human motions entirely owing to the physical differences between the human body and robots. In this paper, we propose a work-chain-based inverse kinematics method to enable a robot to imitate the human motion of the upper limbs in real time. Two work chains are built on each arm to ensure motion similarity in terms of the end-effector trajectory and the joint-angle configuration. In addition, a two-phase filter is used to remove interference and noise, together with a self-collision avoidance scheme to maintain the stability of the robot during the imitation. Experimental results verify the effectiveness of our solution on the humanoid robot Nao-H25 in terms of accuracy and real-time performance.
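The paper's work-chain formulation for the Nao's arms is considerably richer than this, but the kernel of any such scheme, solving joint angles from a desired end-effector position, can be illustrated with the closed-form inverse kinematics of a planar two-link arm (link lengths and the elbow-down convention are illustrative choices):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics of a planar two-link arm (elbow-down).

    Given target (x, y) and link lengths l1, l2, return the shoulder and
    elbow angles (theta1, theta2) via the law of cosines.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow angle
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)  # shoulder angle
    return theta1, theta2

def two_link_fk(theta1, theta2, l1, l2):
    """Forward kinematics, used here only to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Mapping a tracked human wrist position through a solver like this, once per chain and per frame, is what lets the robot reproduce the end-effector trajectory even though its limb proportions differ from the human's.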

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.1
    • /
    • pp.24-29
    • /
    • 2014
  • This paper proposes a Kinect-based human motion recognition model for controlling 3D contents, which tracks the body gesture of the user through the Kinect's infrared camera. The proposed model computes the distance variation of body movement from the shoulder to the left and right hand, wrist, arm, and elbow. The motions are classified into movement commands such as left, right, up, down, enlargement, downsizing, and selection. The proposed Kinect-based human motion recognition model is very natural and low-cost compared with contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware.
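The classification into the seven commands listed above can be sketched as a threshold on the hand's displacement relative to the shoulder, taking the dominant axis as the command direction; the 0.15 m threshold and the axis conventions are illustrative assumptions, not values from the paper:

```python
def classify_motion(dx, dy, dz, threshold=0.15):
    """Map hand displacement relative to the shoulder (metres) to a
    3D-content control command; depth motion maps to zoom."""
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if max(ax, ay, az) < threshold:
        return "selection"  # hand held (almost) still
    if ax >= ay and ax >= az:
        return "right" if dx > 0 else "left"
    if ay >= az:
        return "up" if dy > 0 else "down"
    return "enlargement" if dz > 0 else "downsizing"

print(classify_motion(0.3, 0.0, 0.0))   # dominant +x motion -> "right"
print(classify_motion(0.01, 0.02, 0.0)) # below threshold -> "selection"
```

Because Kinect skeletal joints arrive at roughly 30 Hz, a real system would smooth the displacement over several frames before classifying, so that tracking jitter does not trigger spurious commands.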

Hybrid Silhouette Extraction Using Color and Gradient Informations (색상 및 기울기 정보를 이용한 인간 실루엣 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.7
    • /
    • pp.913-918
    • /
    • 2007
  • Human motion analysis is an important research subject in human-robot interaction (HRI). However, before analyzing human motion, the silhouette of the human body should be extracted from sequential images obtained by a CCD camera. An intelligent robot system requires a more robust silhouette extraction method because it has internal vibration and low resolution. In this paper, we discuss a hybrid silhouette extraction method for detecting and tracking human motion. The proposed method combines and optimizes temporal and spatial gradient information. We also propose some compensation methods so as not to miss silhouette information due to poor image quality. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
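A hybrid of the two cues described above can be sketched as follows: a pixel belongs to the silhouette when it both changed between frames (temporal gradient) and sits on an intensity edge (spatial gradient). The AND combination, the central-difference gradient, and the thresholds are my own illustrative simplifications of the paper's optimized combination:

```python
def hybrid_silhouette(prev, curr, t_thresh=20, s_thresh=30):
    """Binary silhouette mask from two grayscale frames (lists of rows).

    A pixel is marked when its temporal gradient (frame difference) and
    its spatial gradient (central differences) both exceed thresholds.
    """
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            temporal = abs(curr[y][x] - prev[y][x])
            spatial = (abs(curr[y][x + 1] - curr[y][x - 1])
                       + abs(curr[y + 1][x] - curr[y - 1][x]))
            if temporal > t_thresh and spatial > s_thresh:
                mask[y][x] = 1
    return mask
```

Requiring both cues is what makes the result robust to the camera vibration the abstract mentions: vibration raises the temporal gradient everywhere, but only true object boundaries also carry a strong spatial gradient.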

A methodology for evaluating human operator's fitness for duty in nuclear power plants

  • Choi, Moon Kyoung;Seong, Poong Hyun
    • Nuclear Engineering and Technology
    • /
    • v.52 no.5
    • /
    • pp.984-994
    • /
    • 2020
  • It is reported that about 20% of accidents at nuclear power plants in Korea and abroad are caused by human error. One of the main factors contributing to human error is fatigue, so it is necessary to prevent errors that may occur when a task is performed in an unfit state by grasping the status of the operator in advance. In this study, we propose a method of evaluating an operator's fitness for duty (FFD) using various parameters, including eye movement data, subjective fatigue ratings, and operator performance. Parameters for evaluating FFD were selected through a literature survey. We performed experiments in which subjects experiencing various levels of fatigue monitored indicators and diagnosed a system malfunction. To find meaningful characteristics in the measured data, hierarchical clustering analysis, an unsupervised machine-learning technique, was used. The characteristics of each cluster were analyzed, and the fitness for duty of each cluster was evaluated. The appropriateness of the number of clusters obtained through clustering analysis was assessed using both the Elbow and Silhouette methods. Finally, it was statistically shown that the suggested methodology does not generate additional fatigue in subjects. Relevance to industry: a methodology for evaluating an operator's fitness for duty in advance is proposed; it can prevent human errors that might be caused by an unfit condition in the nuclear industry.
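The hierarchical clustering step can be sketched with a bare-bones agglomerative (single-linkage) merge loop over feature vectors; single linkage and the stop-at-k criterion are illustrative choices, as the paper does not specify its linkage here:

```python
import math

def single_linkage(points, k):
    """Agglomerative clustering: start with singleton clusters and
    repeatedly merge the closest pair (single linkage) until k remain.

    A minimal stand-in for the hierarchical clustering used to group
    operator-state measurements; the number k would be chosen with the
    Elbow or Silhouette method.
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters
```

Because the merges form a full dendrogram, the same run can be cut at several values of k, which is exactly what the Elbow and Silhouette checks in the study exploit when validating the cluster count.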

Learning Spatio-Temporal Topology of a Multiple Cameras Network by Tracking Human Movement (사람의 움직임 추적에 근거한 다중 카메라의 시공간 위상 학습)

  • Nam, Yun-Young;Ryu, Jung-Hun;Choi, Yoo-Joo;Cho, We-Duke
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.7
    • /
    • pp.488-498
    • /
    • 2007
  • This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs) in Ubiquitous Smart Space (USS). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people successfully across multiple camera views, we used the Merge-Split (MS) approach to handle object occlusion in a single camera and a grid-based approach to extract accurate object features. In addition, we considered the appearance of people and the transition time between entry and exit zones to track objects across the blind regions of multiple cameras with non-overlapping FOVs. The main contribution of this paper is to estimate transition times between various entry and exit zones, and to represent the camera topology graphically as an undirected weighted graph using the transition probabilities.
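The final graph construction can be sketched as follows: accumulate observed zone-to-zone transits, weight each undirected edge by its mean transit time, and normalize counts per zone into transition probabilities. The data layout and normalization are my own illustrative simplification of the paper's estimation procedure:

```python
from collections import defaultdict

def build_topology(observations):
    """Undirected weighted graph over camera entry/exit zones.

    observations: (zone_a, zone_b, transit_seconds) tuples from tracked
    people crossing blind regions. Returns (mean_time, prob), where
    mean_time[edge] is the average transit time and prob[edge][zone]
    is the probability of that transition as seen from each endpoint.
    """
    times = defaultdict(list)
    for a, b, t in observations:
        edge = tuple(sorted((a, b)))  # undirected: (A,B) == (B,A)
        times[edge].append(t)
    mean_time = {e: sum(ts) / len(ts) for e, ts in times.items()}
    deg = defaultdict(int)  # total observations touching each zone
    for e, ts in times.items():
        for z in e:
            deg[z] += len(ts)
    prob = {e: {z: len(ts) / deg[z] for z in e} for e, ts in times.items()}
    return mean_time, prob

obs = [("A", "B", 2.0), ("A", "B", 4.0), ("A", "C", 3.0)]
print(build_topology(obs))
```

With enough observations, low-probability edges can be pruned, leaving a graph whose weights predict both where and when a person leaving one camera's view should reappear in another's.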