• Title/Abstract/Keyword: Human Tracking

Search results: 652 items (processing time: 0.033 sec)

퍼지 모델 기반 다목적 제어기의 설계와 자기부상열차 자동운전시스템에의 적용 (Design of Fuzzy Model-based Multi-objective Controller and Its Application to MAGLEV ATO system)

  • 강동오;양세현;변증남
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1998년도 추계학술대회 학술발표 논문집
    • /
    • pp.211-217
    • /
    • 1998
  • Many practical control problems for complex, uncertain, or large-scale plants need to achieve a number of objectives simultaneously, and these objectives may conflict or compete with each other. If conventional optimization methods are applied to such control problems, the solution process may be time-consuming and the resulting solution often loses its original meaning of optimality. Nevertheless, human operators usually achieve satisfactory results based on their qualitative and heuristic knowledge. In this paper, we investigate the control strategies of human operators and propose a fuzzy model-based multi-objective satisfactory controller. We also apply it to the automatic train operation (ATO) system for magnetically levitated vehicles (MAGLEV). One of the human operator's strategies is to predict the control result in order to find a meaningful solution; a Takagi-Sugeno fuzzy model is used to simulate this prediction procedure. Another strategy is to evaluate the multiple objectives with respect to their own standards; to realize this strategy, we propose the concept of a satisfactory solution and a satisfactory control scheme. The MAGLEV train is a typical example of an uncertain, complex, and large-scale plant. Moreover, the ATO system has to satisfy multiple objectives such as speed pattern tracking, stop gap accuracy, safety, and riding comfort. In this paper, the speed pattern tracking controller and the automatic stop controller of the ATO system are designed based on the proposed control scheme. The effectiveness of the ATO system based on the proposed scheme is shown by experiments with a rotary test bed and a real MAGLEV train.

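The Takagi-Sugeno prediction step mentioned in the abstract can be illustrated with a small sketch: each fuzzy rule holds a local linear model of the train dynamics, and the prediction is the firing-strength-weighted average of the local outputs. The rule centers, widths, and local model coefficients below are illustrative assumptions, not values identified in the paper.

```python
import numpy as np

# Minimal Takagi-Sugeno fuzzy predictor sketch. All numeric parameters are
# hypothetical; they only illustrate the structure of the prediction step.

def gaussian_mf(x, center, sigma):
    """Gaussian membership degree of x in a fuzzy set."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def ts_predict(speed, notch, rules):
    """Predict the next speed as the firing-strength-weighted average of
    local linear consequents: speed_next = a*speed + b*notch + c."""
    weights, outputs = [], []
    for center, sigma, (a, b, c) in rules:
        w = gaussian_mf(speed, center, sigma)
        weights.append(w)
        outputs.append(a * speed + b * notch + c)
    weights = np.asarray(weights)
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

# Three hypothetical rules covering low / medium / high speed regions.
rules = [
    (10.0, 8.0, (0.98, 0.40, 0.0)),    # low speed: strong acceleration effect
    (40.0, 12.0, (0.99, 0.25, 0.0)),   # medium speed
    (70.0, 10.0, (0.995, 0.15, 0.0)),  # high speed: weaker acceleration effect
]

print(ts_predict(speed=35.0, notch=2.0, rules=rules))
```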

참조점을 이용한 응시점 추출에 관한 연구 (A Study for Detecting a Gazing Point Based on Reference Points)

  • 김성일;임재홍;조종만;김수홍;남태우
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 27, No. 5
    • /
    • pp.250-259
    • /
    • 2006
  • Information about eye movement is used in various fields such as psychology, ophthalmology, physiology, rehabilitation medicine, web design, and human-machine interfaces (HMI). Various devices for detecting eye movement have been developed, but they are expensive. Common eye-movement tracking methods include electro-oculography (EOG), Purkinje image trackers, the scleral search coil technique, and video-oculography (VOG). The purpose of this study is to implement an algorithm that tracks the location of the gazing point from the pupil. Two kinds of location data were compared to track the gazing point: the reference points (infrared LEDs) reflected from the eyeball, and the center point of the pupil obtained with a CCD camera. The reference points were captured with the CCD camera under infrared light, which is not visible to the human eye. Images were captured and saved both with and without infrared light projected onto the eyeball, and the reflected reference points were detected from the brightness difference between the two saved images. The circumcenter of a triangle was used to locate the center of the pupil, and the location of the gazing point was expressed relative to the pupil center and the reference points.
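
The two geometric steps described above lend themselves to a short sketch: isolate the reflected reference points by differencing the frames taken with and without IR illumination, and recover the pupil center as the circumcenter of three points sampled on the pupil boundary. The brightness threshold and the boundary sample points are assumptions.

```python
import numpy as np

def reference_points_mask(frame_ir_on, frame_ir_off, threshold=40):
    """Bright spots that appear only under IR illumination (assumed threshold)."""
    diff = frame_ir_on.astype(np.int16) - frame_ir_off.astype(np.int16)
    return diff > threshold

def circumcenter(p1, p2, p3):
    """Center of the circle through three non-collinear pupil-boundary points."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Hypothetical samples on a pupil edge; the true center is (120, 100).
print(circumcenter((120, 80), (140, 100), (120, 120)))
```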

B-COV:Bio-inspired Virtual Interaction for 3D Articulated Robotic Arm for Post-stroke Rehabilitation during Pandemic of COVID-19

  • Allehaibi, Khalid Hamid Salman;Basori, Ahmad Hoirul;Albaqami, Nasser Nammas
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 2
    • /
    • pp.110-119
    • /
    • 2021
  • The Coronavirus (COVID-19) is a contagious virus that has infected almost every part of the world. The pandemic forced major countries into lockdowns and stay-at-home policies to reduce the spread of the virus and the number of victims. Interactions between humans and robots form a popular subject of research worldwide. In medical robotics, the primary challenge is to implement natural interactions between robots and human users. Human communication consists of dynamic processes that involve joint attention and mutual engagement. Coordinated care involves agents sharing behaviours, events, interests, and contexts in the world over time. Because a robotic arm is an expensive and complicated system, robot simulators are widely used instead for rehabilitation purposes in medicine, and natural interaction is necessary for disabled persons to work with a robot simulator. This article proposes a low-cost rehabilitation system built around an arm-gesture tracking system based on a depth camera that can capture and interpret human gestures and use them as interactive commands for a robot simulator to perform specific tasks on a 3D block. The results show that the proposed system can help patients control the rotation and movement of the 3D arm using their hands. Pilot testing with healthy subjects yielded encouraging results: they could synchronize their actions with the 3D robotic arm to perform several repetitive tasks, exerting 19,920 J of energy (kg·m²·s⁻²). This average energy expenditure is in the medium range; we therefore relate it to rehabilitation performance as an initial stage, which can be improved further with additional repetitive exercise to speed up the recovery process.
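
As a rough illustration of how a per-session energy figure such as the 19,920 J reported above could be accumulated from tracked hand positions, the sketch below sums the absolute changes in kinetic energy between depth-camera frames. The effective arm mass and frame rate are assumptions; the paper's exact computation is not reproduced here.

```python
import numpy as np

# Hypothetical energy accumulation: kinetic energy 0.5*m*|v|^2 is computed per
# frame and the absolute frame-to-frame changes are summed. The effective mass
# (kg) and frame interval (s) are assumed values, not the paper's.
def session_energy(positions, dt=1 / 30.0, effective_mass=2.0):
    """positions: (N, 3) array of hand positions in metres; returns joules."""
    velocities = np.diff(positions, axis=0) / dt             # (N-1, 3) m/s
    kinetic = 0.5 * effective_mass * np.sum(velocities**2, axis=1)
    return float(np.abs(np.diff(kinetic)).sum())

# Synthetic example: a hand oscillating 0.3 m side to side at ~1 Hz for 60 s.
t = np.arange(0.0, 60.0, 1 / 30.0)
positions = np.stack(
    [0.3 * np.sin(2 * np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
print(round(session_energy(positions), 1), "J")
```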

3차원 추적영역 제한 기법을 이용한 손 동작 인식 시스템 (A Hand Gesture Recognition System using 3D Tracking Volume Restriction Technique)

  • 김경호;정다운;이석한;최종수
    • 전자공학회논문지
    • /
    • Vol. 50, No. 6
    • /
    • pp.201-211
    • /
    • 2013
  • In this paper, we propose a hand tracking and gesture recognition system. The proposed system uses a dedicated device to acquire three-dimensional geometric information about the user's hand. In particular, to avoid the tracking problems raised in conventional object detection and tracking systems, it builds an adaptive ellipsoidal volume and restricts the hand-tracking region to the inside of this ellipsoid. The system computes a moving average of the hand position over a predefined period, and the tracking volume is adaptively controlled from statistical data by estimating the uncertainty of the user's hand movement from the covariance organized in 3D space. In addition, once the hand position is obtained, extended fingers are detected to recognize hand gestures. A user-interface-oriented implementation shows that the system operates very stably and accurately even when multiple objects are present simultaneously in a complex environment or when temporary occlusion occurs, and that it can run at a frame rate of about 24-30 fps.
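
A minimal sketch of the adaptive tracking-volume idea: the gate center is a moving average of recent hand positions and its shape follows their covariance, so a candidate detection is accepted only if its squared Mahalanobis distance is below a threshold. The window length and gate threshold are assumed values.

```python
import numpy as np

class EllipsoidGate:
    """Adaptive ellipsoidal tracking volume (sketch with assumed parameters)."""

    def __init__(self, window=30, gate=9.0):
        self.window = window          # number of recent frames in the average
        self.gate = gate              # squared Mahalanobis distance threshold
        self.history = []

    def update(self, position):
        """Add the latest confirmed 3-D hand position (metres)."""
        self.history.append(np.asarray(position, dtype=float))
        if len(self.history) > self.window:
            self.history.pop(0)

    def accepts(self, candidate):
        """True if the candidate lies inside the current tracking ellipsoid."""
        if len(self.history) < 5:
            return True               # not enough statistics yet: accept all
        pts = np.stack(self.history)
        mean = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)   # regularize for inversion
        diff = np.asarray(candidate, dtype=float) - mean
        d2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
        return d2 <= self.gate

gate = EllipsoidGate()
for i in range(20):
    gate.update([0.1 * i, 0.0, 1.0])             # hand drifting along x
print(gate.accepts([2.1, 0.0, 1.0]), gate.accepts([2.1, 1.5, 1.0]))
```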

노약자의 낙상가능지역 진입방지를 위한 보안로봇의 주행경로제어 (Navigation Trajectory Control of Security Robots to Restrict Access to Potential Falling Accident Areas for the Elderly)

  • 진태석
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 6
    • /
    • pp.497-502
    • /
    • 2015
  • One of the goals in the field of mobile robotics is the development of personal service robots for the elderly that behave in populated environments. In this paper, we describe a security robot system and ongoing research results aimed at minimizing the risk that elderly and infirm people enter restricted areas with a high potential for falls, such as stairs, steps, and wet floors. The proposed robot system surveys a potential falling area with an onboard laser scanner. When it detects an elderly or infirm person walking toward a restricted area, the robot calculates the person's velocity vector, plans its own path to forestall the person and prevent them from heading into the restricted area, and starts to move along the estimated trajectory. The walking human is assumed to be a point object and is projected onto the scanning plane to form a geometrical constraint equation that provides position data of the human based on the kinematics of the mobile robot. While moving, the robot repeats these processes in order to adapt to the changing situation. After arriving at a position facing the human's walking direction, the robot advises them to change course. Simulation and experimental results of estimating and tracking a human walking in the wrong direction with the mobile robot are presented.
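
The forestalling behaviour described above can be sketched as a constant-velocity prediction plus an interception search: estimate the person's velocity from successive scan positions, extrapolate the walking path, and pick the earliest point on that path the robot can reach before the person. The robot speed, time horizon, and step size below are assumptions.

```python
import numpy as np

def estimate_velocity(p_prev, p_curr, dt):
    """Constant-velocity estimate from two consecutive person positions."""
    return (np.asarray(p_curr, float) - np.asarray(p_prev, float)) / dt

def interception_point(person_pos, person_vel, robot_pos, robot_speed,
                       horizon=10.0, step=0.1):
    """Earliest point on the person's predicted path the robot reaches first."""
    person_pos = np.asarray(person_pos, dtype=float)
    robot_pos = np.asarray(robot_pos, dtype=float)
    for t in np.arange(step, horizon, step):
        target = person_pos + person_vel * t
        if np.linalg.norm(target - robot_pos) / robot_speed <= t:
            return target, t
    return None, None

vel = estimate_velocity([0.0, 0.0], [0.1, 0.0], dt=0.1)      # 1 m/s along x
point, t = interception_point([0.1, 0.0], vel, robot_pos=[2.0, 2.0],
                              robot_speed=1.5)
print(point, t)
```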

Work chain-based inverse kinematics of robot to imitate human motion with Kinect

  • Zhang, Ming;Chen, Jianxin;Wei, Xin;Zhang, Dezhou
    • ETRI Journal
    • /
    • Vol. 40, No. 4
    • /
    • pp.511-521
    • /
    • 2018
  • The ability to realize human-motion imitation using robots is closely related to developments in the field of artificial intelligence. However, it is not easy to imitate human motions entirely owing to the physical differences between the human body and robots. In this paper, we propose a work chain-based inverse kinematics method to enable a robot to imitate the human motion of the upper limbs in real time. Two work chains are built on each arm to ensure motion similarity, such as the end-effector trajectory and the joint-angle configuration. In addition, a two-phase filter is used to remove interference and noise, together with a self-collision avoidance scheme to maintain the stability of the robot during the imitation. Experimental results verify the effectiveness of our solution on the humanoid robot Nao-H25 in terms of accuracy and real-time performance.
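
One ingredient of the joint-angle-configuration similarity discussed above is mapping depth-sensor joint positions to arm angles. The sketch below estimates elbow flexion and shoulder elevation from shoulder/elbow/wrist positions; it does not reproduce the paper's work-chain partitioning or the Nao-H25 joint conventions, and the joint coordinates are hypothetical.

```python
import numpy as np

def angle_between(u, v):
    """Angle (rad) between two 3-D vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def arm_angles(shoulder, elbow, wrist, torso_down):
    """Elbow flexion and shoulder elevation estimated from joint positions."""
    upper = np.subtract(elbow, shoulder)      # shoulder -> elbow vector
    fore = np.subtract(wrist, elbow)          # elbow -> wrist vector
    elbow_flexion = np.pi - angle_between(upper, fore)
    shoulder_elevation = angle_between(upper, torso_down)
    return elbow_flexion, shoulder_elevation

# Hypothetical joint positions (metres) from a depth sensor.
flex, elev = arm_angles(shoulder=[0.0, 1.4, 0.0], elbow=[0.25, 1.4, 0.0],
                        wrist=[0.25, 1.15, 0.0], torso_down=[0.0, -1.0, 0.0])
print(np.degrees(flex), np.degrees(elev))
```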

3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델 (Kinect-based Motion Recognition Model for the 3D Contents Control)

  • 최한석
    • 한국콘텐츠학회논문지
    • /
    • Vol. 14, No. 1
    • /
    • pp.24-29
    • /
    • 2014
  • In this paper, we propose a technique that tracks human movement using a depth-sensing camera based on the Kinect infrared projector and controls 3D contents through the proposed body-motion recognition model. The proposed motion recognition model computes the movement distances of the wrist, elbow, and shoulder of the right and left arms to recognize seven motion states: left, right, up, down, zoom-in, zoom-out, and select. Compared with conventional contact-type interfaces, the proposed Kinect-based motion recognition model removes the inconvenience of attaching specific sensors or equipment and does not rely on expensive hardware, demonstrating a low-cost 3D content control technique driven by natural body movements.
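
A rule-based sketch in the spirit of the seven-state model described above: the command is decided from the wrists' offsets relative to the shoulders and the distance between the two hands. All thresholds are assumptions, not the paper's calibrated values.

```python
import numpy as np

def classify_gesture(r_wrist, r_shoulder, l_wrist, l_shoulder,
                     reach=0.35, zoom_near=0.25, zoom_far=0.80):
    """Return one of the seven commands, or None (assumed thresholds)."""
    r = np.subtract(r_wrist, r_shoulder)       # right wrist offset (x, y, z)
    l = np.subtract(l_wrist, l_shoulder)       # left wrist offset
    hands_apart = np.linalg.norm(np.subtract(r_wrist, l_wrist))
    both_raised = r[1] > reach and l[1] > reach

    if both_raised and hands_apart < zoom_near:
        return "zoom-out"
    if both_raised and hands_apart > zoom_far:
        return "zoom-in"
    if both_raised:
        return "select"                        # both hands up, neutral spread
    if r[0] > reach:
        return "right"
    if r[0] < -reach:
        return "left"
    if r[1] > reach:
        return "up"
    if r[1] < -reach:
        return "down"
    return None

print(classify_gesture(r_wrist=[0.5, 0.0, 0.0], r_shoulder=[0.0, 0.0, 0.0],
                       l_wrist=[-0.2, -0.4, 0.0], l_shoulder=[-0.3, 0.0, 0.0]))
```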

색상 및 기울기 정보를 이용한 인간 실루엣 추출 (Hybrid Silhouette Extraction Using Color and Gradient Informations)

  • 주영훈;소제윤
    • 한국지능시스템학회논문지
    • /
    • Vol. 17, No. 7
    • /
    • pp.913-918
    • /
    • 2007
  • In this paper, we propose a human-body silhouette extraction technique for human-robot interaction that uses color and gradient information obtained from consecutive images. For the RGB image information obtained from the consecutive frames, the color-based background subtraction obtains a mean color image over the motion regions extracted from body-proportion information and convexly combines it with the clothing-color information. The gradient-based background subtraction is computed as a convex combination of spatial and temporal information. Finally, the human-body silhouette is extracted by taking a convex combination of the color and gradient information. Experiments confirm the performance of the proposed technique.
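
The convex-combination step can be illustrated directly: normalize a color-difference map and a gradient-difference map, blend them with a weight alpha in [0, 1], and threshold the result. The weight and threshold are assumptions, and the paper's body-proportion motion regions and clothing-color model are not reproduced.

```python
import numpy as np

def gradient_magnitude(gray):
    """Per-pixel gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def silhouette(frame_rgb, bg_rgb, alpha=0.6, threshold=0.3):
    """Binary silhouette from a convex combination of color and gradient cues."""
    color_diff = np.abs(frame_rgb.astype(float) - bg_rgb.astype(float)).mean(axis=2)
    frame_gray = frame_rgb.astype(float).mean(axis=2)
    bg_gray = bg_rgb.astype(float).mean(axis=2)
    grad_diff = np.abs(gradient_magnitude(frame_gray) - gradient_magnitude(bg_gray))

    # Normalize each cue to [0, 1] before the convex combination.
    color_diff /= color_diff.max() + 1e-6
    grad_diff /= grad_diff.max() + 1e-6
    fused = alpha * color_diff + (1.0 - alpha) * grad_diff
    return fused > threshold

# Synthetic example: a bright rectangular "person" on a dark background.
bg = np.zeros((64, 64, 3), dtype=np.uint8)
frame = bg.copy()
frame[20:44, 24:40] = 200
print(silhouette(frame, bg).sum(), "foreground pixels")
```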

A methodology for evaluating human operator's fitness for duty in nuclear power plants

  • Choi, Moon Kyoung;Seong, Poong Hyun
    • Nuclear Engineering and Technology
    • /
    • Vol. 52, No. 5
    • /
    • pp.984-994
    • /
    • 2020
  • It is reported that about 20% of accidents at nuclear power plants in Korea and abroad are caused by human error. One of the main factors contributing to human error is fatigue, so it is necessary to prevent errors that may occur when a task is performed in an unfit state by assessing the operator's status in advance. In this study, we propose a method for evaluating an operator's fitness-for-duty (FFD) using various parameters including eye-movement data, subjective fatigue ratings, and operator performance. The parameters for evaluating FFD were selected through a literature survey. We performed experiments in which test subjects experiencing various levels of fatigue monitored indicator information and diagnosed a system malfunction. To find meaningful characteristics in the measured multi-parameter data, hierarchical clustering analysis, an unsupervised machine-learning technique, is used. The characteristics of each cluster were analyzed and the fitness-for-duty of each cluster was evaluated. The appropriateness of the number of clusters obtained through the clustering analysis was evaluated using both the elbow and silhouette methods. Finally, it was statistically shown that the suggested methodology for evaluating FFD does not induce additional fatigue in subjects. Relevance to industry: a methodology for evaluating an operator's fitness for duty in advance is proposed; it can prevent human errors that might be caused by working in an unfit condition in the nuclear industry.
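
A compact sketch of the clustering step: hierarchically cluster standardized multi-parameter measurements and compare candidate cluster counts with the silhouette score. The feature names and synthetic data below stand in for the paper's experimental measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

# Hypothetical feature rows: [fixation duration, blink rate, self-rated fatigue]
rng = np.random.default_rng(0)
alert = rng.normal([0.25, 12.0, 2.0], [0.05, 2.0, 0.5], size=(20, 3))
fatigued = rng.normal([0.45, 22.0, 7.0], [0.05, 2.0, 0.5], size=(20, 3))
features = np.vstack([alert, fatigued])
features = (features - features.mean(axis=0)) / features.std(axis=0)

Z = linkage(features, method="ward")              # agglomerative clustering
for k in (2, 3, 4):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, round(silhouette_score(features, labels), 3))
```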

사람의 움직임 추적에 근거한 다중 카메라의 시공간 위상 학습 (Learning Spatio-Temporal Topology of a Multiple Cameras Network by Tracking Human Movement)

  • 남윤영;류정훈;최유주;조위덕
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터
    • /
    • Vol. 13, No. 7
    • /
    • pp.488-498
    • /
    • 2007
  • This paper proposes a new method for representing the spatio-temporal topology of a camera network with both overlapping and non-overlapping fields of view (FOV) in a ubiquitous smart space. Using the proposed method, we recognized and tracked moving objects across multiple cameras and thereby determined the topology of the camera network. Several techniques are used to track multiple objects in multi-camera video. First, a merge-split method is used to resolve occlusion between objects within a single camera, and a grid-based partial extraction method is used to obtain more accurate object features. In addition, to track objects through the unobserved regions between cameras with non-overlapping FOVs, the transition times between exit and entry zones and the appearance of the people are considered. We estimate the transition times between the various entry and exit zones and visualize the camera topology as an undirected weighted graph using the transition probabilities.
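
The transition statistics described above can be summarized into a weighted topology graph as sketched below: count exit-to-entry transitions, and attach a transition probability and mean transit time to each edge. Zone names and observations are hypothetical.

```python
from collections import defaultdict

def build_topology(transitions):
    """transitions: list of (exit_zone, entry_zone, transit_time_sec)."""
    counts = defaultdict(int)
    times = defaultdict(list)
    out_totals = defaultdict(int)
    for src, dst, dt in transitions:
        counts[(src, dst)] += 1
        times[(src, dst)].append(dt)
        out_totals[src] += 1
    graph = {}
    for (src, dst), n in counts.items():
        graph[(src, dst)] = {
            "probability": n / out_totals[src],          # relative edge weight
            "mean_transit_time": sum(times[(src, dst)]) / n,
        }
    return graph

# Hypothetical observations of people leaving one camera and entering another.
observations = [
    ("cam1_exit_E", "cam2_entry_W", 4.2),
    ("cam1_exit_E", "cam2_entry_W", 5.1),
    ("cam1_exit_E", "cam3_entry_N", 9.8),
    ("cam2_exit_S", "cam3_entry_N", 3.7),
]
for edge, stats in build_topology(observations).items():
    print(edge, stats)
```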