• Title/Summary/Keyword: visual tracking


Scaling-Translation Parameter Estimation using Genetic Hough Transform for Background Compensation

  • Nguyen, Thuy Tuong;Pham, Xuan Dai;Jeon, Jae-Wook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.8
    • /
    • pp.1423-1443
    • /
    • 2011
  • Background compensation plays an important role in detecting and isolating object motion in visual tracking. Here, we propose a Genetic Hough Transform, which combines the Hough Transform and the Genetic Algorithm, as a method for eliminating background motion. Our method can handle cases in which the background contains only a few, if any, feature points that can be used to estimate the motion between two successive frames. In addition to dealing with featureless backgrounds, our method can successfully handle motion blur. Experimental comparisons of the proposed method with other methods show that it yields a satisfactory estimate of background motion.
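The idea of combining a Hough-style parameter search with a genetic algorithm can be sketched in a few lines. The toy example below is not the authors' implementation; the parameter ranges, population size, and GA operators are all invented for illustration. It evolves a population of (scale, tx, ty) hypotheses toward the transform that best aligns background feature points between two frames:

```python
import random

def fitness(params, src, dst):
    """Sum of squared alignment errors for a (scale, tx, ty) hypothesis."""
    s, tx, ty = params
    err = 0.0
    for (x, y), (u, v) in zip(src, dst):
        err += (s * x + tx - u) ** 2 + (s * y + ty - v) ** 2
    return err

def estimate_scaling_translation(src, dst, pop_size=60, generations=120, seed=0):
    """Genetic-algorithm search for the scaling-translation mapping src -> dst."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.5, 2.0), rng.uniform(-20, 20), rng.uniform(-20, 20))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, src, dst))
        elite = pop[:pop_size // 4]                 # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)             # crossover: average two parents
            child = tuple((ai + bi) / 2 + rng.gauss(0, 0.05)  # small mutation
                          for ai, bi in zip(a, b))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: fitness(p, src, dst))

# Synthetic background points moved by scale 1.2 and translation (5, -3)
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0), (5.0, 7.0)]
dst = [(1.2 * x + 5, 1.2 * y - 3) for x, y in src]
s, tx, ty = estimate_scaling_translation(src, dst)
```

In the paper the search runs over a Hough parameter space voted on by feature correspondences; this sketch simply minimizes the alignment error directly.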

A Privacy-protection Device Using a Directional Backlight and Facial Recognition

  • Lee, Hyeontaek;Kim, Hyunsoo;Choi, Hee-Jin
    • Current Optics and Photonics
    • /
    • v.4 no.5
    • /
    • pp.421-427
    • /
    • 2020
  • A novel privacy-protection device to prevent visual hacking is realized by using a directional backlight and facial recognition. The proposed method overcomes the limitations of previous privacy-protection methods, which simply restrict the viewing angle to a narrow range. Accurate user tracking is achieved by combining a time-of-flight sensor with facial recognition, with no restriction on detection range. In addition, an experimental demonstration is provided to verify the proposed scheme.

A Study on the Characteristics of Pilots' Eye Movements (Visual Search) Considering Flight Safety (비행안전을 고려한 조종사 안구움직임(visual search)의 특성에 관한 연구)

  • 최정현;김영준
    • Proceedings of the ESK Conference
    • /
    • 1997.10a
    • /
    • pp.487-497
    • /
    • 1997
  • This study examined pilots performing simulated flight in an F-16 simulator. The relative movement of the eyes with respect to objects in the subject's visual field was analyzed, and the eye positions of pilots performing cross-checks during flight were precisely measured using EL-MAR's image-processing-based eye-tracking system to characterize their eye movements. Flight conditions were divided into normal and emergency situations, and the pilots were grouped into experienced and inexperienced pilots for measurement and analysis. In normal flight, experienced pilots dwelled on each instrument for a shorter time and observed the instruments more frequently than inexperienced pilots, while in emergency flight the two groups concentrated on different instruments. Among the most frequently observed instruments, the ranking by dwell time was HSI-ADI-ASI-ALT, and the ranking by observation count was ADI-HSI-ASI-ALT.


A Survey of Research on Human-Vehicle Interaction in Defense Area (국방 분야의 인간-차량 인터랙션 연구)

  • Yang, Ji Hyun;Lee, Sang Hun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.18 no.3
    • /
    • pp.155-166
    • /
    • 2013
  • We present recent human-vehicle interaction (HVI) research conducted in the area of defense and military applications. Research topics discussed in this paper include: training simulation for overland navigation tasks; expertise effects in overland navigation performance and scan patterns; pilots' perception and confidence on an overland navigation task; effects of UAV (Unmanned Aerial Vehicle) supervisory control on F-18 formation flight performance in a simulator environment; autonomy balancing in a manned-unmanned teaming (MUT) swarm attack; enabling visual detection of IED (Improvised Explosive Device) indicators through Perceptual Learning Assessment and Training; a usability test of DaViTo (Data Visualization Tool); and modeling peripheral vision for moving-target search and detection. These diverse, leading HVI studies in the defense domain suggest future research directions in other emerging HVI areas, such as the automotive and aviation domains.

Development of a 3D Simulator and Intelligent Control of a Track Vehicle (궤도차량의 지능제어 및 3D 시뮬레이터 개발)

  • 장영희;신행봉;정동연;서운학;한성현;고희석
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.03a
    • /
    • pp.107-111
    • /
    • 1998
  • This paper presents a new approach to the design of an intelligent control system for a track vehicle, using fuzzy logic based on a neural network. The proposed control scheme uses a Gaussian function as the unit function in the neural-network fuzzy system and a back-propagation algorithm to train the fuzzy-neural network controller within the framework of the specialized learning architecture. Moreover, we develop a Windows 95 dynamic simulator that can simulate a track vehicle model in 3D graphics space. A learning controller is proposed that consists of two neural-network fuzzy modules based on independent reasoning and a connection net with fixed weights, to simplify the neural-network fuzzy system. The dynamic simulator for the track vehicle was developed with Microsoft Visual C++, and the OpenGL graphics library from Silicon Graphics, Inc. was used for 3D graphics. The performance of the proposed controller is illustrated by simulation of trajectory tracking of the track vehicle's speed.
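The Gaussian unit function mentioned above is the standard choice for fuzzy membership functions. Here is a minimal, self-contained illustration of Sugeno-style inference with Gaussian memberships; the rule centers, widths, and outputs below are invented for the example and are not from the paper:

```python
import math

def gaussian(x, center, width):
    """Gaussian membership: degree to which x belongs to a fuzzy set."""
    return math.exp(-((x - center) / width) ** 2)

def fuzzy_speed_correction(error):
    """Weighted-average (Sugeno-style) defuzzification over three rules.
    Rule parameters (center, width, output) are illustrative values only."""
    rules = [(-1.0, 0.5, +0.8),   # error "negative" -> speed up
             ( 0.0, 0.5,  0.0),   # error "zero"     -> hold
             (+1.0, 0.5, -0.8)]   # error "positive" -> slow down
    num = den = 0.0
    for center, width, out in rules:
        w = gaussian(error, center, width)   # rule firing strength
        num += w * out
        den += w
    return num / den
```

A back-propagation step, as in the paper, would additionally tune the centers, widths, and rule outputs from training data.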


Vision-based AGV Parking System (비젼 기반의 무인이송차량 정차 시스템)

  • Park, Young-Su;Park, Jee-Hoon;Lee, Je-Won;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.473-479
    • /
    • 2009
  • This paper proposes an efficient method to guide an automated guided vehicle (AGV) into a specific parking position using an artificial visual landmark and a vision-based algorithm. The landmark has corner features and an HSI color arrangement for robustness against illumination variation, and is attached to the left of a parking spot under a crane. For parking, the AGV detects the landmark with a CCD camera fixed to the AGV, using the Harris corner detector and matching of the corner-feature descriptors. After detecting the landmark, the AGV tracks it using the pyramidal Lucas-Kanade feature tracker and a refinement process. The AGV then decreases its speed and aligns its longitudinal position with the center of the landmark. Experiments showed that the AGV parked accurately at the parking spot with a small standard deviation of error under both bright and dark illumination.
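The Harris corner detector named in the abstract scores each pixel by the structure of its local gradients: two strong gradient directions mean a corner. A compact pure-Python sketch of the response computation (illustrative only; the paper's pipeline also includes descriptor matching and pyramidal Lucas-Kanade tracking):

```python
def harris_response(img, k=0.04):
    """Harris corner response for interior pixels of a grayscale image
    given as a list of rows. Central-difference gradients, 3x3 window."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = sxy = syy = 0.0                 # structure tensor sums
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            R[y][x] = det - k * trace * trace     # Harris score
    return R

# A bright square on a dark background: its corners score highest
img = [[1.0 if 3 <= y <= 6 and 3 <= x <= 6 else 0.0 for x in range(10)]
       for y in range(10)]
R = harris_response(img)
```

Corners of the bright square give the largest positive responses, flat regions score zero, and edge midpoints score lower than corners.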

Interactive visual knowledge acquisition for hand-gesture recognition (손 제스쳐 인식을 위한 상호작용 시각정보 추출)

  • 양선옥;최형일
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.9
    • /
    • pp.88-96
    • /
    • 1996
  • Computer-vision-based gesture recognition systems consist of image segmentation, object tracking, and decision stages. However, it is difficult to segment an object for gesture recognition from an image because of various illuminations and backgrounds. In this paper, we describe a method to learn features for segmentation, which improves the performance of computer-vision-based hand-gesture recognition systems. The system interacts with a user to acquire exact training data and segmentation information according to a predefined plan: it provides some models to the user, takes pictures of the user's responses, and then analyzes the pictures using the models and prior knowledge. The system sends messages to the user and operates a learning module to extract information from the analyzed results.


An HMM-Based Segmentation Method for Traffic Monitoring (HMM 분할에 기반한 교통모니터링)

  • 남기환;배철수;정주병;나상동
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.587-590
    • /
    • 2004
  • This paper proposes an HMM (Hidden Markov Model)-based segmentation method that can model shadows as well as foreground and background regions. Shadows of moving objects often obstruct visual tracking. The proposed HMM-based segmentation method classifies objects in real time. In the case of traffic-monitoring videos, the effectiveness of the proposed method has been demonstrated through experimental results.
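An HMM with background, shadow, and foreground states can be decoded per pixel with the Viterbi algorithm. The sketch below is illustrative only: the emission means, variances, and transition probabilities are invented, not taken from the paper. It labels one pixel's intensity sequence over time:

```python
import math

STATES = ["background", "shadow", "foreground"]
# Illustrative Gaussian emission models (mean intensity, std dev) per state
EMIT = {"background": (200.0, 20.0),
        "shadow": (120.0, 20.0),
        "foreground": (60.0, 20.0)}
# Sticky transitions: a pixel tends to keep its current label
TRANS = {s: {t: (0.8 if s == t else 0.1) for t in STATES} for s in STATES}

def log_gauss(x, mu, sigma):
    """Log of the Gaussian density, used as the emission score."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi(intensities):
    """Most likely background/shadow/foreground sequence for one pixel."""
    scores = {s: math.log(1.0 / 3.0) + log_gauss(intensities[0], *EMIT[s])
              for s in STATES}
    back = []
    for x in intensities[1:]:
        new, ptr = {}, {}
        for t in STATES:
            best = max(STATES, key=lambda s: scores[s] + math.log(TRANS[s][t]))
            new[t] = scores[best] + math.log(TRANS[best][t]) + log_gauss(x, *EMIT[t])
            ptr[t] = best
        back.append(ptr)
        scores = new
    path = [max(STATES, key=lambda s: scores[s])]
    for ptr in reversed(back):       # trace the best path backwards
        path.append(ptr[path[-1]])
    return list(reversed(path))

# One pixel over seven frames: bright, then shadowed, then occluded, then bright
labels = viterbi([205, 198, 130, 115, 70, 55, 190])
```

In a real traffic-monitoring system this decoding would run for every pixel, and the model parameters would be learned from data rather than fixed by hand.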


Tracking Control of a Moving Target Using a Robot Vision System

  • Kim, Dong-Hwan;Cheon, Gyung-Il
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.77.5-77
    • /
    • 2001
  • A robot vision system with the visual capability to acquire information about an arbitrary target or object has been applied to auto-inspection and assembly systems. The system catches a moving target with the manipulator by using information from the vision system. The robot needs information about where the moving object will be after a certain time. A camera is fixed on the robot manipulator rather than on a fixed support outside the robot; this secures a wider working area than a fixed camera and lends itself to automatic scanning of the object. The system computes the object's center, angle, and speed from the vision data, and can estimate the grasping spot from the arrival time. When the location ...
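Guessing the grabbing spot from the arrival time amounts to predicting the object's future position from its estimated velocity. A minimal constant-velocity sketch (illustrative only; the paper's estimate also involves the object's angle and the manipulator's timing):

```python
def predict_position(p0, p1, dt, t_ahead):
    """Constant-velocity prediction of the object's center.
    p0, p1: centers observed dt seconds apart; t_ahead: look-ahead time."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * t_ahead, p1[1] + vy * t_ahead)

# Object center moved from (10, 5) to (12, 6) in 0.1 s -> velocity (20, 10)/s;
# predict where to grab it 0.5 s from now
grasp_point = predict_position((10.0, 5.0), (12.0, 6.0), 0.1, 0.5)
```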


Development of Low Cost Autonomous-Driving Delivery Robot System Using SLAM Technology (SLAM 기술을 활용한 저가형 자율주행 배달 로봇 시스템 개발)

  • Donghoon Lee;Jehyun Park;Kyunghoon Jung
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.5
    • /
    • pp.249-257
    • /
    • 2023
  • This paper discusses the increasing need for autonomous delivery robots due to the current growth of the delivery market, rising delivery fees, the high cost of hiring delivery personnel, and the need for contactless services. The hardware and complex software systems required to build and operate autonomous delivery robots are also expensive. To provide a low-cost alternative, this paper proposes an autonomous delivery robot platform that uses a low-cost sensor combination of a 2D LIDAR, a depth camera, and a tracking camera to replace an expensive 3D LIDAR. The proposed robot was developed using the RTAB-Map SLAM open-source package for 2D mapping and overcomes the limitations of the low-cost sensors by using the convex hull algorithm. The paper details the hardware and software configuration of the robot and presents the results of driving experiments. The proposed platform has significant potential for various industries, including delivery.
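The convex hull mentioned above is a standard way to summarize sparse sensor hits as a single obstacle footprint. A self-contained sketch using Andrew's monotone-chain algorithm (illustrative; how the authors apply the hull inside their RTAB-Map pipeline is not detailed in the abstract):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        """Z-component of (a-o) x (b-o): positive for a left turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

# Sparse depth-camera hits around an obstacle, plus interior clutter
hits = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 2)]
hull = convex_hull(hits)
```

Interior points are discarded, so the robot can treat the hull polygon as the obstacle boundary even when the low-cost sensors return only scattered hits.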