• Title/Summary/Keyword: Human Tracking

An Innovative Approach to Track Moving Object based on RFID and Laser Ranging Information

  • Liang, Gaoli; Liu, Ran; Fu, Yulu; Zhang, Hua; Wang, Heng; Rehman, Shafiq ur; Guo, Mingming
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.1, pp.131-147, 2020
  • RFID (Radio Frequency Identification) identifies a specific object by radio signals. Because each tag provides a unique ID for identification, RFID technology effectively solves the ambiguity and occlusion problems that challenge laser- or camera-based approaches. This paper proposes an approach to track a moving object by integrating RFID and laser ranging information with a particle filter. Specifically, we split the laser scan points into clusters that contain the potential moving objects and calculate the radial velocity of each cluster. This velocity is compared with the radial velocity estimated from the RFID phase difference. To localize the moving object, we select the K best-matching clusters to update the weights of the particle filter. To further improve the positioning accuracy, we incorporate RFID signal strength information into the particle filter using a pre-trained sensor model. The proposed approach is tested on a SCITOS service robot with different types of tags and various human walking speeds. The results show that fusing signal strength and laser ranging information significantly increases the positioning accuracy compared with radial velocity matching-based or signal strength-based approaches. The proposed approach provides a solution for human-machine interaction and object tracking, with potential applications in fields such as supermarkets, libraries, shopping malls, and exhibitions.
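
A minimal illustrative sketch (not the authors' implementation) of the kind of particle-filter weight update the abstract describes: each particle's radial velocity toward the reader is compared with the radial velocity derived from RFID phase differences, and a simple log-distance path-loss model stands in for the pre-trained signal-strength sensor model. The motion model, the RSSI model, and all parameters are assumptions made for illustration.

```python
# Sketch of fusing phase-derived radial velocity and RSSI in a particle filter.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt, motion_noise=0.05):
    """Propagate 2-D particle positions with a random-walk motion model."""
    return particles + rng.normal(0.0, motion_noise, particles.shape) * dt

def radial_velocity(prev, curr, reader_xy, dt):
    """Radial velocity of each particle toward the reader antenna."""
    r_prev = np.linalg.norm(prev - reader_xy, axis=1)
    r_curr = np.linalg.norm(curr - reader_xy, axis=1)
    return (r_curr - r_prev) / dt

def update_weights(weights, v_particles, v_rfid, rssi_meas, particles,
                   reader_xy, sigma_v=0.1, sigma_rssi=3.0):
    """Weight particles by agreement with the phase-derived radial velocity
    and with a simple log-distance RSSI model (stand-in for a trained model)."""
    lik_v = np.exp(-0.5 * ((v_particles - v_rfid) / sigma_v) ** 2)
    dist = np.linalg.norm(particles - reader_xy, axis=1)
    rssi_pred = -40.0 - 20.0 * np.log10(np.maximum(dist, 0.1))  # assumed path-loss model
    lik_rssi = np.exp(-0.5 * ((rssi_pred - rssi_meas) / sigma_rssi) ** 2)
    w = weights * lik_v * lik_rssi
    return w / np.sum(w)

# One illustrative filter step with placeholder measurements.
reader = np.array([0.0, 0.0])
particles = rng.uniform(-2.0, 2.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
prev = particles.copy()
particles = predict(particles, dt=0.1)
v_p = radial_velocity(prev, particles, reader, dt=0.1)
weights = update_weights(weights, v_p, v_rfid=0.02, rssi_meas=-55.0,
                         particles=particles, reader_xy=reader)
estimate = weights @ particles  # weighted mean position of the tracked object
```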

A Study on Relative Type of Events and Place relevant to Change of Spatial View (공간관의 변화에 따른 사건과 장소의 관계 유형에 관한 연구)

  • Hwang, Yong-Seup; Kim, Joo-Yun
    • Korean Institute of Interior Design Journal, v.18 no.2, pp.25-32, 2009
  • To human beings, space has always had a dual character: it is an object of fundamental thought about 'Being', and at the same time it is the place of actual experience that contains life and lived experience. These dual characteristics of space have produced the various spatial viewpoints that persist to the present. In space understood as an object of thought and interpretation, the emphasis is now shifting from being toward existence, and at the core of this change lies the event. Human recognition of space originates in observed events, which take on rich meaning as they accumulate as experience. The event therefore becomes increasingly important as another viewpoint from which to think about space. This study traces how the viewpoint on space has changed in order to grasp the meaning carried by the event and the resulting tendency toward place-orientation. It focuses on clarifying the relationship between place and event that comes into relief when the transition of the spatial viewpoint in the history of Western philosophy is traced. Through this process, the study identifies four relationships in which a sense of place clearly stands out: as the result of an event, as an emblem of an event, as a condition of an event, and as an event itself. These four patterns are not mutually independent phenomena; rather, by crisscrossing one another they form a discourse on modern space.

A Study on Intelligent Control of Real-Time Working Motion Generation of Biped Robot (2족 보행로봇의 실시간 작업동작 생성을 위한 지능제어에 관한 연구)

  • Kim, Min-Seong; Jo, Sang-Young; Koo, Young-Mok; Jeong, Yang-Gun; Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence, v.19 no.1, pp.1-9, 2016
  • In this paper, we propose a new learning control scheme that uses a single neural-network learning base to control various walking motions of a biped robot. We show that a learning control algorithm based on a neural network is a significantly more attractive approach to intelligent controller design than traditional control systems. A multilayer back-propagation neural network is trained in simulation to identify a dynamic model of the biped robot. Once this network has learned, a second neural-network controller sharing the same learning base is designed for tracking various trajectories. Biped robots have received increasing attention because of properties such as their human-like mobility and high-order dynamics, which enable them to perform dangerous work in place of human beings. Stable walking control of biped robots is therefore a fundamental issue that has been studied by many researchers. Legged locomotion nevertheless makes biped robots difficult to control; unlike a robot manipulator, the biped robot has an uncontrollable degree of freedom that plays a dominant role in the stability of its locomotion. Simulations and experiments illustrate the reliability of the iterative learning control.
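
As a rough sketch of the identification step described above, the following code trains a small multilayer back-propagation network to map a (state, control) pair to the next state on toy data; the network sizes, learning rate, and the stand-in "plant" are assumptions for illustration, not the paper's setup.

```python
# Sketch: identifying a dynamic model with a multilayer back-propagation network.
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset standing in for logged biped joint data: x_next = f(x, u)
X = rng.uniform(-1, 1, size=(1000, 6))          # 4 state variables + 2 control inputs
Y = np.tanh(X @ rng.normal(size=(6, 4)) * 0.5)  # unknown "plant" to be identified

W1 = rng.normal(0, 0.3, (6, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.3, (16, 4)); b2 = np.zeros(4)
lr = 0.05

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # predicted next state
    err = pred - Y
    # Back-propagation of the mean-squared identification error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("identification MSE:",
      float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)))
```

Once such an identification network has converged, a second network (the controller) can be trained against it, which is the "same learning base" idea the abstract refers to.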

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications, v.9 no.1, pp.57-64, 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that require no physical devices. One of them is vision-based gesture recognition, which this paper addresses. We define a set of gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from low-resolution hand images captured by a webcam. Skin color is modeled by two Gaussian distributions in RGB color space, and a blob-matching method is used to detect and track the hands. Applying a flood-fill algorithm, we extract the hand silhouettes and recognize the Thumb-Up, Palm, and Cross hand shapes by detecting and analyzing their modes. Then, by analyzing the context of the hand movement, we recognize five predefined one-hand or two-hand gestures. Assuming that a single main user appears, so that the hands can be detected reliably, the proposed gesture recognition method has demonstrated its efficiency and accuracy in many real-time demonstrations.
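
The preprocessing pipeline in the abstract (two-Gaussian skin-color model in RGB, thresholding, and flood-fill silhouette extraction) could look roughly like the sketch below; the mixture parameters, threshold, and frame size are placeholders, not values from the paper.

```python
# Sketch: skin-color likelihood from a two-component Gaussian model in RGB,
# thresholding, and flood fill to extract a hand silhouette.
import numpy as np
from collections import deque

# Assumed pre-trained skin model: two Gaussian components (means, diagonal covariances, weights)
MEANS = np.array([[180.0, 120.0, 100.0], [200.0, 150.0, 130.0]])
VARS = np.array([[400.0, 300.0, 300.0], [500.0, 400.0, 400.0]])
WEIGHTS = np.array([0.6, 0.4])

def skin_likelihood(img):
    """Per-pixel skin likelihood under a 2-component diagonal Gaussian mixture."""
    h, w, _ = img.shape
    x = img.reshape(-1, 3).astype(float)
    lik = np.zeros(len(x))
    for mu, var, pi in zip(MEANS, VARS, WEIGHTS):
        norm = 1.0 / np.sqrt((2 * np.pi) ** 3 * np.prod(var))
        lik += pi * norm * np.exp(-0.5 * np.sum((x - mu) ** 2 / var, axis=1))
    return lik.reshape(h, w)

def flood_fill(mask, seed):
    """Extract the connected silhouette containing `seed` from a binary mask."""
    sil = np.zeros_like(mask, dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
            continue
        if sil[r, c] or not mask[r, c]:
            continue
        sil[r, c] = True
        q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return sil

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in webcam frame
mask = skin_likelihood(frame) > 1e-7          # threshold chosen only for illustration
if mask.any():
    seed = tuple(np.argwhere(mask)[0])
    silhouette = flood_fill(mask, seed)       # hand silhouette for shape analysis
```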

Development of Adaptive Ground Control System for Multi-UAV Operation and Operator Overload Analysis (복수 무인기 운용을 위한 적응형 지상체 개발 및 운용자 과부하 분석)

  • Oh, Jangjin; Choi, Seong-Hwan; Lim, Hyung-Jin; Kim, Seungkeun; Yang, Ji Hyun; Kim, Byoung Soo
    • Journal of Advanced Navigation Technology, v.21 no.6, pp.529-536, 2017
  • A typical ground control system provides control and information display functions for operating a single unmanned aerial vehicle. Recently, the role of the single ground control system has been extended to the operation of multiple UAVs. As a result, operators are exposed to more diverse tasks and are subject to task overload caused by various factors during a mission. This study proposes an adaptive ground control system that reflects the operator's condition by measuring the task overload of multiple-UAV operators. To this end, ground control software is developed to control multiple UAVs simultaneously, and a simulator with six-degree-of-freedom aircraft dynamics is constructed for realistic human-machine-interface experiments with operators.

A Best View Selection Method in Videos of Interested Player Captured by Multiple Cameras (다중 카메라로 관심선수를 촬영한 동영상에서 베스트 뷰 추출방법)

  • Hong, Hotak; Um, Gimun; Nang, Jongho
    • Journal of KIISE, v.44 no.12, pp.1319-1332, 2017
  • In recent years, the number of video cameras used to record and broadcast live sporting events has increased, and selecting the shot with the best view from multiple cameras has become an actively researched topic. Existing approaches assume that the background in the video is fixed. This paper proposes a best view selection method for cases in which the background is not fixed. In our study, an athlete of interest was recorded in motion with multiple cameras, and every frame from all cameras was analyzed to establish rules for selecting the best view. The frames selected by our system were then compared with those that human viewers indicated as most desirable. For the evaluation, we asked each of 20 non-specialists to pick the best and worst views. The views most often chosen as best by the viewers coincided with 54.5% of the frames selected by the proposed method, whereas the views most often chosen as worst coincided with only 9% of them, demonstrating the efficacy of the proposed method.
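
A small sketch of how the reported agreement figures could be computed, under the assumption that, at each time step, the system's chosen camera is compared against the camera most voted best (or worst) by the viewers; the vote data and camera counts below are toy placeholders, not the paper's evaluation data.

```python
# Sketch: agreement between system-selected best views and viewer votes.
from collections import Counter

def most_voted(votes_per_viewer, t):
    """Camera index receiving the most votes from the viewers at time step t."""
    return Counter(v[t] for v in votes_per_viewer).most_common(1)[0][0]

def agreement(system_picks, votes_per_viewer):
    """Fraction of time steps where the system pick matches the viewers' top choice."""
    hits = sum(1 for t, pick in enumerate(system_picks)
               if pick == most_voted(votes_per_viewer, t))
    return hits / len(system_picks)

# Toy data: 3 cameras, 4 time steps, 2 viewers (stand-ins for 20 non-specialists)
system_best = [0, 2, 1, 1]
best_votes = [[0, 2, 0, 1], [0, 1, 0, 1]]   # per-viewer "best view" votes
worst_votes = [[2, 0, 2, 0], [2, 0, 2, 2]]  # per-viewer "worst view" votes

print("best-view agreement:", agreement(system_best, best_votes))
print("overlap with worst views:", agreement(system_best, worst_votes))
```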

A Novel Two-Level Pitch Detection Approach for Speaker Tracking in Robot Control

  • Hejazi, Mahmoud R.; Oh, Han; Kim, Hong-Kook; Ho, Yo-Sung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference (제어로봇시스템학회 학술대회논문집), 2005.06a, pp.89-92, 2005
  • Using natural speech commands to control a robot is an interesting topic in the field of robotics. In this paper, our main focus is on verifying the speaker who gives a command, to decide whether he or she is authorized to issue it. Among the dynamic features of natural speech, the pitch period is one of the most important for characterizing speech signals, and it usually differs from person to person. However, current pitch detection techniques have not yet reached the desired level of accuracy and robustness: when the signal is noisy or contains multiple pitch streams, the performance of most techniques degrades. In this paper, we propose a two-level approach to pitch detection that, compared with standard pitch detection algorithms, not only increases accuracy but is also more robust to noise. In the first level, we discriminate voiced from unvoiced signals with a neural classifier that uses cepstrum sequences of the speech as its input feature set. Voiced signals are then processed further in the second level by a modified standard AMDF-based pitch detection algorithm to determine their pitch periods precisely. Experimental results show that the accuracy of the proposed system is better than that of conventional pitch detection algorithms for speech signals in both clean and noisy environments.
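
The second level relies on the AMDF; a minimal sketch of a plain (unmodified) AMDF pitch detector applied to a synthetic voiced frame is shown below. The frame length, sampling rate, and pitch search range are assumptions for illustration.

```python
# Sketch: AMDF-based pitch estimation on a frame already classified as voiced.
# The AMDF dips at lags equal to the pitch period; the deepest valley wins.
import numpy as np

def amdf_pitch(frame, fs, f_min=60.0, f_max=400.0):
    """Estimate the pitch of a voiced frame via the Average Magnitude Difference Function."""
    lag_min = int(fs / f_max)
    lag_max = int(fs / f_min)
    amdf = np.array([
        np.mean(np.abs(frame[lag:] - frame[:-lag]))
        for lag in range(lag_min, lag_max + 1)
    ])
    best_lag = lag_min + int(np.argmin(amdf))
    return best_lag, fs / best_lag   # pitch period in samples, pitch in Hz

# Synthetic voiced frame: 120 Hz waveform with one harmonic, sampled at 16 kHz
fs = 16000
t = np.arange(0, 0.04, 1.0 / fs)
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
period, f0 = amdf_pitch(frame, fs)
print(f"estimated pitch: {f0:.1f} Hz (true 120 Hz)")
```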

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.22 no.3, pp.241-246, 2016
  • Facial expressions and human behavior attract wide interest today, and human-robot interaction (HRI) researchers study them using digital image processing, pattern recognition, and machine learning. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, we propose an algorithm that augments the cascade facial feature point detector with a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. The network's input data are the color and grayscale image patches output by the cascade facial feature point detector, resized to $32{\times}32$; in addition, the grayscale images are converted to the YUV format. These gray and color images form the basis of the convolutional neural network. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm corrects the results produced by the cascade detector.
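
A minimal sketch of a LeNet-5-style classifier for 32x32 patches, as the abstract describes; the number of classes, the single-channel input, and the training details are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: LeNet-5-style network classifying 32x32 patches from a cascade
# facial point detector into hypothetical classes (e.g., eye, nose, mouth, non-feature).
import torch
import torch.nn as nn

class FacialPointNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5),  # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                           # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),           # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                           # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One illustrative training step on random stand-in patches
model = FacialPointNet()
optim = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
patches = torch.randn(8, 1, 32, 32)            # grayscale 32x32 detector outputs
labels = torch.randint(0, 5, (8,))
loss = loss_fn(model(patches), labels)
optim.zero_grad(); loss.backward(); optim.step()
```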

Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi; Choi, Jong-Suk; Yoon, Sang-Suk; Choi, Mun-Taek; Kim, Mun-Sang; Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.35 no.7, pp.737-743, 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

Study of an Optical Goniometer Using a Multi-Photodiode Sensor

  • Kim, Ji-Sun; Kim, A-Hee; Oh, Han-Byeol; Kim, Jun-Sik; Goh, Bong-Jun; Lee, Eun-Suk; Choi, Ju-Hyeon; Baek, Jin-Young; Jun, Jae-Hoon
    • Journal of the Optical Society of Korea, v.20 no.1, pp.22-28, 2016
  • Monitoring and measuring the motion of human joints is very important in screening for degenerative brain diseases and in tracking the rehabilitation process. Since various medical fields can benefit from angular motion measurement, the need to monitor human joint movement is increasing. In this study, the optical sensor consists of a light-emission unit with a red LED and an optical fiber, and a reception unit with an arrangement of three photodiodes. The angular detection range was widened by the use of multiple photodiodes and the developed algorithm. The results will be useful for designing an effective, low-cost, and compact angular sensor.