• Title/Summary/Keyword: vision-based tracking


Robust Gaze-Fixing of an Active Vision System under Variation of System Parameters (시스템 파라미터의 변동 하에서도 강건한 능동적인 비전의 시선 고정)

  • Han, Youngmo
    • KIPS Transactions on Software and Data Engineering / v.1 no.3 / pp.195-200 / 2012
  • Camera steering is performed based on the system parameters of the vision system. However, the parameter values at the time of use may differ from those at the time of measurement. To compensate for this problem, this research proposes a gaze-steering method based on LMI (Linear Matrix Inequality) that is robust to variations in the system parameters of the vision system. Simulation results show that the proposed method produces smaller gaze-tracking error than a contemporary linear method and more stable gaze-tracking error than a contemporary nonlinear method. Moreover, the proposed method is fast enough for real-time processing.
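The abstract does not spell out the specific LMI it solves; as general background, robustness to parameter variation is commonly posed as an LMI feasibility problem over a polytope of system matrices (a standard textbook formulation, not necessarily the one used in the paper):

```latex
% Quadratic stability over a polytopic uncertainty set
% A(t) \in \mathrm{Co}\{A_1, \dots, A_N\}
\text{find } P = P^{\top} \succ 0
\quad \text{s.t.} \quad
A_i^{\top} P + P A_i \prec 0, \qquad i = 1, \dots, N
```

A single matrix P that certifies all N vertex systems certifies every system inside the polytope, which is what makes such a design robust to parameter variation.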

Visual Cell OOK Modulation : A Case Study of MIMO CamCom (시각 셀 OOK 변조 : MIMO CamCom 연구 사례)

  • Le, Nam-Tuan;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.9 / pp.781-786 / 2013
  • Multiplexing information over parallel data channels, based on the RF MIMO concept, makes it possible to achieve considerable data rates over large transmission ranges with just a single transmitting element. Visual multiplexing MIMO techniques send independent bit streams using the multiple elements of a light-transmitter array, and recording over a group of camera pixels can further enhance the data rates. The proposed system combines computer vision algorithms for tracking with OOK cell-frame modulation. An LED array is controlled to transmit digital messages using ON-OFF signaling (ON = bit 1, OFF = bit 0). A camera captures image frames of the array, which are then individually processed and sequentially decoded to retrieve the data. To demodulate the transmission, a motion-tracking algorithm implemented in OpenCV (Open source Computer Vision library) classifies the transmission pattern. One of the main advantages of the proposed architecture is that Computer Vision (CV) based image analysis can spatially separate signals and remove interference from ambient light. This points to future challenges and opportunities for mobile communication networking research.
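The ON-OFF mapping described above can be sketched as a simple per-cell brightness threshold; the function name and threshold value below are illustrative assumptions, not taken from the paper:

```python
def demodulate_ook(cell_intensities, threshold=128):
    """Map per-LED-cell brightness to bits: ON (bright) -> 1, OFF (dark) -> 0.

    cell_intensities: mean pixel intensity of each tracked cell in one frame.
    """
    return [1 if v >= threshold else 0 for v in cell_intensities]

# One captured frame of a 4-cell array, after tracking and cell extraction.
bits = demodulate_ook([250, 12, 200, 30])
```

In a full receiver this runs once per captured frame, after the tracking stage has located the array and segmented it into cells; the decoded bits are then concatenated across frames.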

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target;a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for detecting and tracking targets in real-world computer vision systems, e.g., visual surveillance systems and intelligent transport systems (ITSs). In particular, a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor in the system can perform exact segmentation of a target using color and motion information, and visual tracking of multiple targets in real time. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent), and solve the target-identity matching problem during handover with a protocol-based approach. For this we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We apply the ICN protocol in the ubiquitous vision system we constructed; through several experiments, the system shows reliable results and the ICN protocol operates successfully.
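The ICN protocol itself is not detailed in the abstract; a minimal contract-net-style handover, with hypothetical class and function names, might look like this:

```python
class VisionAgent:
    """A vision sensor treated as an agent; it holds the target IDs it tracks."""
    def __init__(self, name):
        self.name = name
        self.targets = set()

def handover(announcer, target_id, bids):
    """Contract-net-style award: the announcing agent offers a target that is
    leaving its view, each candidate agent bids its confidence that the target
    entered its own view, and the identity is handed to the best bidder."""
    winner = max(bids, key=lambda bid: bid[0])[1]
    announcer.targets.discard(target_id)
    winner.targets.add(target_id)
    return winner

a, b, c = VisionAgent("A"), VisionAgent("B"), VisionAgent("C")
a.targets.add(7)                              # agent A currently tracks target 7
best = handover(a, 7, [(0.3, b), (0.9, c)])   # B and C bid; C wins
```

Note how the award step needs only the bids, not any geometric calibration between cameras, which matches the scalability claim in the abstract.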


Multi-pedestrian tracking using deep learning technique and tracklet assignment

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.808-810 / 2018
  • Pedestrian tracking is a particular problem of object tracking, and an important component in various vision-based applications, such as autonomous cars or surveillance systems. After several years of development, pedestrian tracking in videos is still a challenging problem because of the varied visual properties of objects and the surrounding environment. In this research, we propose a tracking-by-detection system for pedestrian tracking, which incorporates a Convolutional Neural Network (CNN) and color information. Pedestrians in video frames are localized by a CNN, then the detected pedestrians are assigned to their corresponding tracklets based on similarities in color distributions. The experimental results show that our system was able to overcome various difficulties and produce highly accurate tracking results.
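The color-based assignment step can be sketched with histogram intersection as the similarity measure; the greedy strategy, threshold, and names below are assumptions for illustration, not details from the paper:

```python
def hist_intersection(h1, h2):
    """Similarity between two normalized color histograms (1.0 = identical)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def assign_detections(tracklet_hists, detection_hists, min_sim=0.5):
    """Greedy assignment: each detection joins the tracklet with the most
    similar color distribution, if the similarity exceeds min_sim;
    otherwise it maps to None (candidate for a new tracklet)."""
    assignments = {}
    for d, dh in enumerate(detection_hists):
        best_t, best_s = None, min_sim
        for t, th in enumerate(tracklet_hists):
            s = hist_intersection(th, dh)
            if s > best_s:
                best_t, best_s = t, s
        assignments[d] = best_t
    return assignments

# Two existing tracklets (red-ish, blue-ish) and one new red-ish detection.
assignments = assign_detections([[1.0, 0.0], [0.0, 1.0]], [[0.9, 0.1]])
```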

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and in this paper we deal with both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered regardless of light variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
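The RBF deformation step can be sketched as Gaussian-kernel scattered-data interpolation; the kernel choice and the sigma parameter are assumptions, since the abstract does not specify the basis function beyond "RBF":

```python
import numpy as np

def rbf_deform(control_pts, displacements, query_pts, sigma=1.0):
    """Solve for Gaussian-RBF weights so the control (feature) points move
    exactly by their tracked displacements, then evaluate the resulting
    deformation field at arbitrary query vertices of the face mesh."""
    def kernel(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-(d / sigma) ** 2)
    weights = np.linalg.solve(kernel(control_pts, control_pts), displacements)
    return kernel(query_pts, control_pts) @ weights

ctrl = np.array([[0.0, 0.0], [1.0, 0.0]])   # tracked major feature points
disp = np.array([[0.1, 0.0], [0.0, 0.1]])   # their measured motion
moved = rbf_deform(ctrl, disp, ctrl)        # interpolation is exact here
```

Evaluating at non-control vertices gives the smooth, locally weighted deformation of the regional feature points described in the abstract.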

A Study on Human Body Tracking Method for Application of Smartphones (스마트폰 적용을 위한 휴먼 바디 추적 방법에 대한 연구)

  • Kim, Beom-yeong;Choi, Yu-jin;Jang, Seong-wook;Kim, Yoon-sang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.465-469 / 2017
  • In this paper, we propose a human body tracking method for application to smartphones. Conventional human body tracking methods are divided into sensor-based and vision-based methods. Sensor-based methods suffer from low tracking accuracy due to the cumulative error of position information. Vision-based methods have no cumulative error, but their computational complexity must be reduced for use on smartphones. In this paper, we use an improved HOG algorithm as a human body tracking method for smartphones. The improved HOG algorithm is implemented through downsampling and frame sampling: a Gaussian pyramid is applied for downsampling, and uniform sampling for frame sampling. We measured the proposed algorithm on two devices, four resolutions, and four frame-sampling intervals, and derive the downsampling and frame-sampling parameters that give the best detection rate while remaining applicable in real time.
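The two cost-reduction steps can be sketched as follows; note that the 2x2 box filter here is a simple stand-in for the Gaussian pyramid named in the abstract, and all names are illustrative:

```python
def sample_frames(frames, interval):
    """Uniform frame sampling: keep every `interval`-th frame."""
    return frames[::interval]

def downsample(image, levels):
    """Halve image resolution `levels` times by averaging 2x2 blocks
    (a box-filter stand-in for the Gaussian pyramid)."""
    for _ in range(levels):
        image = [[sum(image[2 * r + i][2 * c + j]
                      for i in (0, 1) for j in (0, 1)) / 4.0
                  for c in range(len(image[0]) // 2)]
                 for r in range(len(image) // 2)]
    return image
```

Both steps run before HOG detection, trading detection rate for speed; the paper's experiments sweep exactly these two parameters (pyramid level and sampling interval).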


Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.547-554 / 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omnidirectional images obtained through fisheye lenses mounted on the robots. Conventional multi-robot formation methods often use a stereo vision system, or a vision system with a reflector instead of a general-purpose camera with a small angle of view, to enlarge the camera's viewing angle. In addition, to make up for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the formation of the robots based on their moving directions and velocities, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multiple robots through both simulation and experiment.
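Once per-pixel flow vectors are available inside a detected robot region (e.g. from Lucas-Kanade), direction and speed can be summarized as below; this aggregation step is a plausible sketch, not the paper's stated procedure:

```python
import math

def motion_from_flow(flow_vectors):
    """Estimate a robot's moving direction (radians) and speed from the
    (dx, dy) optical-flow vectors inside its detected image region,
    by averaging the vectors."""
    n = len(flow_vectors)
    mx = sum(dx for dx, _ in flow_vectors) / n
    my = sum(dy for _, dy in flow_vectors) / n
    return math.atan2(my, mx), math.hypot(mx, my)

direction, speed = motion_from_flow([(1.0, 0.0), (1.0, 0.0)])
```

The follower's formation controller would then compare these estimates against the desired relative pose of each neighbor.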

Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object (스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.3 / pp.143-150 / 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for a moving object. The system consists of stereo matching, obstacle detection, and distance estimation. In the stereo matching part, depth maps are obtained from real road images, captured with an adjustable-baseline stereo vision system, using the belief propagation (BP) algorithm. In the detection part, various obstacles are detected using only the depth map, with both the v-disparity and column detection methods, under real road conditions. Finally, in the estimation part, asymmetric parabola fitting with the NCC method improves the distance estimation of the detected obstacles. This stereo vision system can be applied to many applications such as unmanned vehicles and robots.
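The v-disparity representation mentioned above is just a per-row histogram of the disparity map; a minimal sketch (names assumed):

```python
def v_disparity(disparity_map, max_disp):
    """Accumulate, for each image row v, a histogram over disparity values.
    In the resulting image the road surface projects to a slanted line,
    while obstacles (near-constant disparity across rows) appear as
    vertical segments, which makes them easy to separate."""
    hist = [[0] * (max_disp + 1) for _ in disparity_map]
    for v, row in enumerate(disparity_map):
        for d in row:
            if 0 <= d <= max_disp:
                hist[v][d] += 1
    return hist

vd = v_disparity([[2, 2, 0], [1, 2, 2]], 3)
```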


Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques (컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트)

  • Jo, Ik Hyun;Park, Geo Tae;Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.21 no.1 / pp.51-60 / 2018
  • We present real-time interactive particle art driven by human motion, based on computer vision techniques. We use computer vision to reduce the amount of equipment required for media art appreciation, and analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction is applied to find the audience. The audience image is changed into particles on grid cells. Optical flow is used to detect the motion of the audience and create particle effects, and we also define a virtual button for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
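The background-subtraction and grid-particle steps can be sketched as below; the per-pixel differencing and one-particle-per-cell rule are illustrative assumptions about the pipeline:

```python
def background_subtract(frame, background, thresh=30):
    """Per-pixel absolute difference against a stored background frame;
    pixels that changed enough become foreground (the audience)."""
    return [[1 if abs(f - b) > thresh else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def grid_particles(mask, cell=2):
    """Spawn one particle at the center of every grid cell that contains
    any foreground pixel."""
    pts = []
    for r in range(0, len(mask), cell):
        for c in range(0, len(mask[0]), cell):
            if any(mask[rr][cc]
                   for rr in range(r, min(r + cell, len(mask)))
                   for cc in range(c, min(c + cell, len(mask[0])))):
                pts.append((r + cell // 2, c + cell // 2))
    return pts

mask = background_subtract([[100, 10], [10, 10]], [[10, 10], [10, 10]])
particles = grid_particles(mask, cell=2)
```

In the installation, the optical-flow stage would then push these particles along the audience's motion.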

Fuzzy Screen Detector for a Vision Based Pointing Device (비젼 기반의 포인팅 기기를 위한 퍼지 스크린 검출기)

  • Kho, Jae-Won
    • The Transactions of the Korean Institute of Electrical Engineers P / v.58 no.3 / pp.297-302 / 2009
  • In this paper, we propose an advanced screen detector, a tool for selecting the object to track and estimating its distance from the screen, using fuzzy logic in a vision-based pointing device. Our system classifies the line components of the input image into horizontal and vertical lines and applies fuzzy rules to obtain the best line pairs constituting the peripheral framework of the screen. The proposed system improves the screen detection ratio relative to the detectors used in previous works on hand-held vision-based pointing devices. It can also detect the screen even when a small part of it is hidden behind another object.
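The abstract does not give the fuzzy rules themselves; a generic sketch of how a line pair might be scored with triangular memberships (the features, breakpoints, and min-combination are all assumptions):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pair_score(length_ratio, angle_gap_deg):
    """Score a candidate horizontal/vertical line pair for the screen border:
    the lines should span most of the frame (length_ratio near 1) and be
    near-perpendicular (angle gap near 90 degrees); min acts as fuzzy AND."""
    return min(tri(length_ratio, 0.3, 1.0, 1.7),
               tri(angle_gap_deg, 80.0, 90.0, 100.0))
```

The detector would keep the highest-scoring pairs as the screen's peripheral framework, which also explains the tolerance to partial occlusion: a partly hidden border line still earns a nonzero membership.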