• Title/Summary/Keyword: CAMShift


Active Object Tracking based on hierarchical application of Region and Color Information (지역정보와 색 정보의 계층적 적용에 의한 능동 객체 추적)

  • Jeong, Joon-Yong;Lee, Kyu-Won
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.633-636 / 2010
  • This paper proposes a technique for active object tracking with a pan-tilt camera: the object is first detected using initial region information, and the detected object's color information is then used to track it. To suppress noise from the external environment, adaptive Gaussian mixture modeling separates the background from the object. Once the object is determined, it is tracked in real time with the CAMShift algorithm, which continues tracking even while the camera moves. Because CAMShift estimates the object's size, the object can be discriminated flexibly even as its size changes. The pan and tilt positions are computed in a spherical coordinate system, and these positions are sent via the pan-tilt protocol to keep the object centered in the frame, enabling reliable tracking.
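The pan/tilt positioning step above maps an object's pixel offset to spherical angles. A minimal pinhole-camera sketch of that mapping (the function name and focal length are illustrative assumptions, not taken from the paper):

```python
import math

def pan_tilt_from_pixel(cx, cy, img_w, img_h, focal_px):
    """Map an object's pixel position to pan/tilt angles (radians).

    Treats the camera as a pinhole: the ray through pixel (cx, cy)
    is expressed in spherical coordinates, whose azimuth/elevation
    become the pan/tilt commands that re-center the object.
    """
    x = cx - img_w / 2.0            # horizontal offset from image center
    y = img_h / 2.0 - cy            # vertical offset (image y grows downward)
    pan = math.atan2(x, focal_px)                   # azimuth about the vertical axis
    tilt = math.atan2(y, math.hypot(x, focal_px))   # elevation above the optical plane
    return pan, tilt
```

A centered object yields zero pan and tilt, so the commanded move is exactly the correction needed to keep the target at the frame center.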

Robust Eye Region Discrimination and Eye Tracking to the Environmental Changes (환경변화에 강인한 눈 영역 분리 및 안구 추적에 관한 연구)

  • Kim, Byoung-Kyun;Lee, Wang-Heon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1171-1176 / 2014
  • Eye tracking (ET) is used in human-computer interaction (HCI) to analyze the movement of the eye and to find its gaze direction by tracking the pupil on a human face. Nowadays ET is widely used not only in market analysis, which takes advantage of pupil tracking, but also in inferring intention, and it has attracted a great deal of research. Although vision-based ET is convenient from an application point of view, it is not robust to environmental changes such as illumination, geometric rotation, occlusion, and scale changes. This paper proposes a two-step ET method: first, the face and eye regions are discriminated by a Haar classifier, and then the pupils in the discriminated eye regions are tracked by CAMShift together with template matching. We demonstrate the usefulness of the proposed algorithm through extensive real experiments under changing illumination, rotation, and scale.
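The CAMShift stage above rests on the mean shift iteration: repeatedly moving a search window to the centroid of a back-projected probability map. A pure-Python sketch of that core loop (real CAMShift additionally adapts the window size and orientation from the image moments):

```python
def mean_shift(prob, window, n_iter=20):
    """Shift a search window toward the centroid of a probability map.

    `prob` is a 2-D list of weights (e.g. a histogram back-projection),
    `window` is (x, y, w, h). Minimal sketch of the iteration inside
    CAMShift; the full algorithm also resizes/rotates the window.
    """
    x, y, w, h = window
    H, W = len(prob), len(prob[0])
    for _ in range(n_iter):
        m00 = m10 = m01 = 0.0
        for j in range(max(0, y), min(H, y + h)):
            for i in range(max(0, x), min(W, x + w)):
                p = prob[j][i]
                m00 += p          # zeroth moment: total mass in window
                m10 += p * i      # first moments give the centroid
                m01 += p * j
        if m00 == 0:
            break                 # window fell on empty ground
        nx = int(m10 / m00 - (w - 1) / 2.0 + 0.5)
        ny = int(m01 / m00 - (h - 1) / 2.0 + 0.5)
        if (nx, ny) == (x, y):
            break                 # converged
        x, y = nx, ny
    return x, y, w, h
```

Starting anywhere that overlaps the target, the window walks onto the mass of the probability map and stops when the centroid no longer moves.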

Comparative Performance Evaluations of Eye Detection algorithm (눈 검출 알고리즘에 대한 성능 비교 연구)

  • Gwon, Su-Yeong;Cho, Chul-Woo;Lee, Won-Oh;Lee, Hyeon-Chang;Park, Kang-Ryoung;Lee, Hee-Kyung;Cha, Ji-Hun
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.722-730 / 2012
  • Recently, eye image information has been widely used for iris recognition and gaze detection in biometrics and human-computer interaction. As long-distance camera-based systems become more common for the user's convenience, the captured images include noise such as eyebrow, forehead, and skin areas that can degrade the accuracy of eye detection; fast processing is also required in addition to high accuracy. We therefore compared the most widely used eye detection algorithms: the AdaBoost eye detector, adaptive template matching combined with AdaBoost, CAMShift combined with AdaBoost, and a rapid eye detection method. The methods were compared, in terms of accuracy and processing speed, on images including illumination changes, the naked eye, and subjects wearing contact lenses or eyeglasses.
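A comparison on the paper's two criteria needs a common harness that measures both at once. A minimal sketch; the detector interface, the hit tolerance, and all names are illustrative assumptions, not the paper's protocol:

```python
import time

def evaluate(detector, samples, tol=5):
    """Score an eye detector on accuracy and per-image speed.

    `detector` maps an image to an (x, y) eye-center estimate;
    `samples` is a list of (image, ground_truth_xy). A detection
    counts as correct when it lands within `tol` pixels of truth.
    Returns (accuracy, seconds_per_image).
    """
    hits, start = 0, time.perf_counter()
    for image, (gx, gy) in samples:
        x, y = detector(image)
        if abs(x - gx) <= tol and abs(y - gy) <= tol:
            hits += 1
    elapsed = time.perf_counter() - start
    return hits / len(samples), elapsed / len(samples)
```

Running each candidate (AdaBoost alone, template matching + AdaBoost, CAMShift + AdaBoost, rapid detection) through the same harness yields directly comparable accuracy/speed pairs.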

Efficient Fingertip Tracking and Mouse Pointer Control for Implementation of a Human Mouse (휴먼마우스 구현을 위한 효율적인 손끝좌표 추적 및 마우스 포인트 제어기법)

  • 박지영;이준호
    • Journal of KIISE:Software and Applications / v.29 no.11 / pp.851-859 / 2002
  • This paper discusses the design of a working system that visually recognizes hand gestures for controlling a window-based user interface. We present a method for tracking the fingertip of the index finger using a single camera. Our method is based on the CAMShift algorithm and outperforms it in that it tracks well the particular hand poses used in the system against complex backgrounds. We describe how the fingertip location is mapped to a location on the monitor, and how it is both necessary and possible to smooth the fingertip's path using a physical model of the mouse pointer. Our method tracks in real time without absorbing a major share of computational resources. The performance of our system shows great promise that this methodology can be used to control computers in the near future.
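The smoothing step can be illustrated by treating the pointer as a damped mass pulled toward each fingertip measurement by a spring. The constants below are illustrative stand-ins, not the paper's physical model:

```python
def smooth_pointer(targets, dt=1 / 30, stiffness=60.0, damping=12.0):
    """Smooth a noisy fingertip trajectory with a spring-damper model.

    The pointer is a unit mass; each fingertip measurement pulls it
    via a spring (stiffness) while velocity is damped (damping), so
    jitter is absorbed but deliberate motion is followed.
    `targets` is a sequence of (x, y) fingertip estimates.
    """
    px, py = targets[0]
    vx = vy = 0.0
    path = [(px, py)]
    for tx, ty in targets[1:]:
        # spring force toward the measurement, damped by current velocity
        ax = stiffness * (tx - px) - damping * vx
        ay = stiffness * (ty - py) - damping * vy
        vx += ax * dt           # semi-implicit Euler keeps this stable
        vy += ay * dt
        px += vx * dt
        py += vy * dt
        path.append((px, py))
    return path
```

With these constants the pointer settles on a stationary target within a couple of seconds of frames while suppressing frame-to-frame jitter.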

Hybrid Approach of Texture and Connected Component Methods for Text Extraction in Complex Images (복잡한 영상 내의 문자영역 추출을 위한 텍스춰와 연결성분 방법의 결합)

  • Jung, Kee-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.175-186 / 2004
  • We present a hybrid approach combining a texture-based method and a connected component (CC)-based method for text extraction in complex images. The two primary methods used in this area are merged sequentially to compensate for each other's weak points. An automatically constructed MLP-based texture classifier increases the recall rate for complex images with a small amount of user intervention and without explicit feature extraction. CC-based filtering on shape information using NMF enhances the precision rate without affecting overall performance. As a result, the combination of texture and CC-based methods yields text extraction that is not only robust but also efficient. We also improve processing speed by adopting appropriate region-marking methods for each input image category.
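The CC-based filtering stage can be sketched in pure Python: label connected components in a binary text mask, then keep only components whose shape looks text-like. The area and aspect thresholds below are illustrative; the paper filters shape with an NMF-based model instead:

```python
from collections import deque

def text_like_components(mask, min_area=4, max_aspect=8.0):
    """Label 4-connected components and keep text-like ones.

    `mask` is a 2-D binary list (1 = text candidate pixel). Returns
    the bounding boxes (x, y, w, h) of components that pass the
    area and aspect-ratio filters.
    """
    H, W = len(mask), len(mask[0])
    seen = [[False] * W for _ in range(H)]
    keep = []
    for y in range(H):
        for x in range(W):
            if mask[y][x] and not seen[y][x]:
                # BFS flood fill collects one component
                q = deque([(x, y)])
                seen[y][x] = True
                xs, ys = [], []
                while q:
                    cx, cy = q.popleft()
                    xs.append(cx); ys.append(cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < W and 0 <= ny < H and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                w = max(xs) - min(xs) + 1
                h = max(ys) - min(ys) + 1
                aspect = max(w, h) / min(w, h)
                if len(xs) >= min_area and aspect <= max_aspect:
                    keep.append((min(xs), min(ys), w, h))
    return keep
```

Specks and thin streaks from the texture classifier fail the filters, which is how the CC pass raises precision without touching recall.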

Objects Tracking of the Mobile Robot Using the Hybrid Visual Servoing (혼합 비주얼 서보잉을 통한 모바일 로봇의 물체 추종)

  • Park, Kang-IL;Woo, Chang-Jun;Lee, Jangmyung
    • Journal of Institute of Control, Robotics and Systems / v.21 no.8 / pp.781-787 / 2015
  • This paper proposes a hybrid visual servoing algorithm for object tracking by a mobile robot with a stereo camera. The robot performs object recognition and tracking using the SIFT and CAMShift algorithms for the hybrid visual servoing, and the CAMShift algorithm applied to the stereo images provides the three-dimensional position and orientation of the mobile robot. With the hybrid visual servoing, stable balance control is realized by a control system that calculates a desired angle for the center of gravity, whose location depends on variations of the manipulator's link rotation angles. A PID controller has been adopted for the manipulator since it is simple to design and does not require unnecessarily complex dynamics. To demonstrate the control performance of the hybrid visual servoing, real experiments are performed using the mobile manipulator system developed for this research.
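The PID controller mentioned above is the standard discrete textbook form; the gains, time step, and the first-order plant in the usage loop below are illustrative, not the paper's values:

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*∫e dt + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        # accumulate the integral; difference the error for the D term
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


# Usage: drive a simple first-order joint model toward a 1.0 rad setpoint.
pid, angle = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 0.0
for _ in range(2000):
    angle += pid.update(1.0 - angle) * 0.01   # plant integrates the command
```

The integral term removes steady-state error while the derivative term damps the approach, which is why the simple loop settles on the setpoint without a model of the full dynamics.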

Tracking of Moving Ball for Ball-Plate System (Ball-Plate 시스템을 위한 움직이는 공의 추적)

  • Park, Yi-Keun;Park, Ju-Youn;Park, Seong-Mo
    • Proceedings of the Korea Multimedia Society Conference / 2012.05a / pp.143-146 / 2012
  • A balance-control robot based on a ball-plate system consists of two main parts: one that estimates the state of the ball and one that maintains balance. In this paper, the state of the ball is obtained by tracking it with the CAMShift algorithm using a single camera. We then propose a method that reduces the resulting error with a Kalman filter and describe the experimental results.
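The Kalman correction can be sketched as a constant-velocity filter along one axis (the same filter runs per coordinate of the ball position reported by CAMShift); the process and measurement noise levels are illustrative assumptions:

```python
def kalman_1d(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter along one axis.

    `measurements` are noisy positions (e.g. CAMShift window centers);
    state is (position, velocity). Returns the filtered positions.
    """
    x, v = measurements[0], 0.0           # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = [x]
    for z in measurements[1:]:
        # predict with the constant-velocity model x' = x + v*dt
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with the measured position (H = [1, 0])
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S   # Kalman gains
        y = z - x                            # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out
```

Because the model also estimates velocity, the filter both smooths measurement noise and supplies a usable prediction when a frame's detection is poor.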


Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications / v.12 no.1 / pp.101-106 / 2017
  • Emergency exit signs are installed in buildings such as shopping malls, hospitals, industrial sites, and government complexes to mark escape routes and help people get out easily during emergencies. Under conditions such as smoke, fire, poor lighting, or a crowded stampede, it is difficult for people to recognize the exit signs and emergency doors and escape the building. This paper proposes automatic emergency exit sign recognition on a smart device to find the exit direction. The proposed approach is a computer vision-based smartphone application that detects exit signs with the device camera and gives the escape direction as visible and audible output. A CAMShift object tracking approach is used to detect the exit sign, and the direction information is extracted with a template matching method. The direction is stored as text and synthesized into an audible acoustic signal using text-to-speech, which is rendered on the device speaker as escape guidance. The results are analyzed and discussed from the viewpoints of visual element selection, exit sign appearance design, and sign placement in the building, and can serve as a common reference for wayfinder systems.
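The direction-extraction step by template matching can be illustrated with toy binary arrow templates: score the detected arrow patch against a left and a right template and pick the better match. The 3x3 patterns here are stand-ins, not the paper's templates:

```python
LEFT = ["100",
        "111",
        "100"]                        # coarse left-pointing arrow (illustrative)
RIGHT = [row[::-1] for row in LEFT]   # mirrored right-pointing arrow

def match_score(patch, template):
    """Count agreeing cells between a binary patch and a template."""
    return sum(p == t for pr, tr in zip(patch, template) for p, t in zip(pr, tr))

def exit_direction(patch):
    """Classify a detected exit-sign arrow patch as 'left' or 'right'."""
    return "left" if match_score(patch, LEFT) >= match_score(patch, RIGHT) else "right"
```

The winning label is what would then be handed to the text-to-speech stage as the guidance string.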

Text Cues-based Image Matching Method for Navigation (네비게이션을 위한 문자영상기반의 영상매칭 방법)

  • Park, An-Jin;Jung, Kee-Chul
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.631-633 / 2005
  • As the ubiquitous era approaches, many people will want to know their location in an unfamiliar place and the route to their destination. Existing vision techniques for navigation use high-level or low-level features. Low-level features such as texture information and color histograms have difficulty representing an image accurately, while high-level information such as markers makes the experimental environment hard to set up. Instead of these, we use text images: a mid-level feature that is widely distributed in real environments and carries much useful information for representing and indexing images. Text regions are extracted by a method combining an MLP (multi-layer perceptron) with the CAMShift algorithm, and, to recognize different places that contain the same text, image matching is performed over a search space based on the size and slope of the text region. In experiments, the rectangular search space containing the text region yields a high recognition rate across various sizes and slopes, and the simple computation gives fast execution times.
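Matching over a search space built on the size and slope of text regions can be sketched as a simple attribute comparison between a query region and an indexed region; the tolerances and the (width, height, slope) representation are illustrative assumptions:

```python
import math

def regions_match(r1, r2, scale_tol=0.3, slope_tol_deg=10.0):
    """Compare two text regions by size and slope.

    Each region is (width_px, height_px, slope_deg). Sizes are compared
    in log space so the tolerance is symmetric under scaling; slopes
    are compared directly in degrees.
    """
    w1, h1, s1 = r1
    w2, h2, s2 = r2
    scale_ok = (abs(math.log(w1 / w2)) <= scale_tol
                and abs(math.log(h1 / h2)) <= scale_tol)
    slope_ok = abs(s1 - s2) <= slope_tol_deg
    return scale_ok and slope_ok
```

Only regions passing this cheap test would proceed to full image matching, which is what keeps the search space small and the execution time low.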


Detecting pedestrians from depth images using Kinect (키넥트를 이용한 깊이 영상에서 보행자 탐지)

  • Cho, Jae-hyeon;Moon, Nam-me
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.40-42 / 2019
  • Creating 3-D video from a color image and its corresponding depth image has come into wide use in various applications as inexpensive yet high-performance depth cameras such as the Kinect have reached the market [1]. This study detects pedestrians in depth images using a Kinect, which combines a TOF (time-of-flight) camera with an RGB camera. As pre-processing, a background depth map is stored in advance, and the presence of a pedestrian is determined from depth differences. The CAMShift algorithm is used for labeling and continuous pedestrian tracking, and Dense Optical Flow is used to store each pedestrian's vector information so that the direction and speed of movement can be detected. When a pedestrian leaves the depth map, tracking of that pedestrian is terminated.
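The pre-processing step above stores a background depth map and flags pedestrians from depth differences. A minimal sketch of that differencing (depth units in millimeters and both thresholds are assumptions; the subsequent CAMShift labeling and Dense Optical Flow stages are not shown):

```python
def detect_pedestrians(background, frame, min_diff=300, min_pixels=3):
    """Flag foreground pixels by differencing against a stored depth map.

    `background` and `frame` are 2-D lists of depth values (mm).
    A pixel is foreground when its depth moved by more than `min_diff`;
    a detection is reported only when at least `min_pixels` pixels
    changed, which suppresses isolated sensor noise.
    Returns the list of changed (x, y) pixels, or [] if none qualify.
    """
    changed = [
        (x, y)
        for y, (brow, frow) in enumerate(zip(background, frame))
        for x, (b, f) in enumerate(zip(brow, frow))
        if abs(b - f) > min_diff
    ]
    return changed if len(changed) >= min_pixels else []
```

The returned pixel set is what a labeling stage would group into per-pedestrian regions before tracking begins.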