• Title/Abstract/Keywords: real-time visual tracking

Search results: 109 items (processing time 0.021 s)

Macro-Micro Manipulation with Visual Tracking and its Application to Wheel Assembly

  • Cho Changhyun;Kang Sungchul;Kim Munsang;Song Jae-Bok
    • International Journal of Control, Automation, and Systems / Vol. 3, No. 3 / pp.461-468 / 2005
  • This paper proposes a wheel-assembly automation system that assembles a wheel onto the hub of a vehicle hung on a moving hanger in a car manufacturing line. A macro-micro manipulator control strategy is introduced to increase the system bandwidth and tracking accuracy so that the insertion tolerance is met. A camera is mounted on the newly designed wheel gripper, which is attached at the center of the end-effector of the macro-micro manipulator, and is used to measure the position error of the vehicle's hub in real time. The redundancy problem of the macro-micro manipulator is solved without complicated calculation by assigning a proper function to each part: the macro part tracks the velocity error while the micro part regulates the fine position error. Experimental results indicate that the tracking error satisfies the assembly insertion tolerance (±1 mm), verifying that the proposed system can be applied to the wheel-assembly task on a moving hanger in the manufacturing line.
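The macro-micro division of labor described in the abstract (the macro part tracks the hanger's velocity, the micro part nulls the residual position error) can be sketched as a toy 1-D control loop. All gains, the hanger speed, and the simulation horizon are illustrative assumptions, not values from the paper.

```python
def simulate(steps=2000, dt=0.001):
    hanger_pos, hanger_vel = 0.0, 0.05   # moving hanger (m, m/s)
    macro_pos, micro_off = 0.0, 0.0      # macro arm position, micro fine offset
    kv, kp = 5.0, 50.0                   # macro velocity gain, micro position gain
    for _ in range(steps):
        hanger_pos += hanger_vel * dt
        # macro part: coarse velocity tracking of the moving hanger
        macro_pos += kv * (hanger_pos - macro_pos) * dt
        # micro part: regulate the remaining fine position error
        err = hanger_pos - (macro_pos + micro_off)
        micro_off += kp * err * dt
    return abs(hanger_pos - (macro_pos + micro_off))

residual = simulate()
print(f"final tracking error: {residual * 1000:.4f} mm")  # well inside ±1 mm
```

The macro loop alone would lag the hanger by a constant offset (velocity divided by its gain); the micro integrator absorbs exactly that offset, which is the point of the split.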

한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구 (A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development)

  • 김정환;홍승권;명노해
    • 대한인간공학회지 / Vol. 26, No. 2 / pp.149-155 / 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTV, PDAs, and UMPCs, are emerging, and the frequency of text entry on them is increasing. This study focused on the evaluation of Korean text entry interfaces. Various models for evaluating text entry interfaces have been proposed, most of them based on the human cognitive process for text input. That process is divided into two components: visual scanning and finger movement. The time spent on visual scanning is modeled by the Hick-Hyman law, while the time for finger movement is determined by Fitts' law. Three questions arise in model-based evaluation of text entry interfaces. First, do the two cognitive processes (visual scanning and finger movement) occur sequentially during text entry, as the models assume? Second, can previous models predict real text input time? Third, does the cognitive process for text input vary with users' text entry speed? There was a gap between the measured text input time and the predicted time, and the gap was larger for participants who entered text quickly. The reason was found by investigating eye-hand coordination during the text input process. Contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival lead time, the interval between the eye fixation on the target button and the button click, was investigated to measure the extent of overlap between the two processes. In addition, the experienced group used fewer fixations during text entry than the novice group. These results will contribute to improving evaluation models for text entry interfaces.
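The serial model that the abstract questions (visual-search time from the Hick-Hyman law plus movement time from Fitts' law, per keystroke) can be sketched as follows; the coefficients and key geometry are illustrative assumptions, not values from the study.

```python
import math

def hick_hyman(n_keys, a=0.05, b=0.2):
    """Visual-search time grows with log2 of the number of alternatives."""
    return a + b * math.log2(n_keys)

def fitts(distance, width, a=0.1, b=0.15):
    """Movement time from Fitts' law (Shannon formulation)."""
    return a + b * math.log2(distance / width + 1)

def predicted_entry_time(n_chars, n_keys=12, distance=30.0, width=10.0):
    """Serial model: every keystroke pays for one scan plus one movement."""
    per_key = hick_hyman(n_keys) + fitts(distance, width)
    return n_chars * per_key

print(f"{predicted_entry_time(10):.2f} s for 10 keystrokes")
```

Since the experienced group overlapped the two stages, a parallel variant of this model might replace the sum with something closer to `max(scan, move)`, consistent with the arrival-lead-time finding.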

Sector Based Multiple Camera Collaboration for Active Tracking Applications

  • Hong, Sangjin;Kim, Kyungrog;Moon, Nammee
    • Journal of Information Processing Systems / Vol. 13, No. 5 / pp.1299-1319 / 2017
  • This paper presents a scalable multiple-camera collaboration strategy for active tracking applications in large areas. The proposed approach is based on a distributed mechanism but emulates a master-slave mechanism. The master and slave cameras are not designated in advance but are determined adaptively depending on the object dynamics and density distribution; moreover, the number of cameras emulating the master is not fixed. The collaboration among the cameras uses global and local sectors in which the visual correspondences among different cameras are determined. The proposed method combines local information to construct the global information needed to emulate the master-slave operations. Based on the global information, load balancing of active tracking operations is performed to maximize active tracking coverage of highly dynamic objects. The dynamics of all objects visible in the local camera views are estimated for effective coverage scheduling of the cameras. The active tracking synchronization timing is chosen to maximize the overall monitoring time for general surveillance while minimizing active tracking misses. Real-time simulation results demonstrate the effectiveness of the proposed method.

광류를 사용한 빠른 자연특징 추적 (Fast Natural Feature Tracking Using Optical Flow)

  • 배병조;박종승
    • 정보처리학회논문지B / Vol. 17B, No. 5 / pp.345-354 / 2010
  • Tracking methods for vision-based augmented reality fall into two categories: marker tracking, which assumes markers with a fixed pattern, and natural feature tracking, which extracts and tracks image feature points. Marker tracking allows fast marker extraction and recognition, so real-time processing is possible even on mobile devices. Natural feature tracking, on the other hand, must cope with the variability of input images and therefore requires computationally heavy processing, which makes fast real-time processing difficult on low-end mobile devices. Conventional natural feature tracking extracts feature points and performs pattern matching for every frame of the camera input. Both extracting a large number of natural features and the pattern-matching step are computationally expensive, severely constraining real-time applications; in particular, the pattern-matching time grows as the number of registered patterns increases. To overcome these drawbacks, this paper applies optical flow to the natural feature tracking stage so that real-time operation on mobile devices becomes possible. The feature points used in pattern matching are tracked into the following frames with optical flow, which finds their correspondences quickly. In addition, new feature points are added in proportion to the number of features lost during tracking, so the total number of features is kept at a constant level. Experimental results show that the proposed method not only substantially reduces natural feature tracking time but also further stabilizes camera pose estimation.
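The feature-maintenance policy above (track features with optical flow, then add new ones in proportion to those lost so the total stays constant) can be sketched as follows. The tracker here is a stand-in that randomly drops features; a real implementation would use pyramidal Lucas-Kanade optical flow (e.g. OpenCV's calcOpticalFlowPyrLK) for the survival step and a corner detector for replenishment.

```python
import random

TARGET = 100  # keep roughly this many live features

def track_step(features, next_id, loss_rate=0.1, rng=random):
    """One frame: the optical-flow stand-in keeps each feature with
    probability 1 - loss_rate, then new features are detected in
    proportion to the number lost, holding the total constant."""
    survived = [f for f in features if rng.random() > loss_rate]
    lost = len(features) - len(survived)
    replenished = survived + list(range(next_id, next_id + lost))
    return replenished, next_id + lost

rng = random.Random(0)
feats, next_id = list(range(TARGET)), TARGET
for _ in range(50):
    feats, next_id = track_step(feats, next_id, rng=rng)
print(len(feats))  # total feature count held at TARGET
```

The point of the policy is that full detection plus pattern matching runs only once; subsequent frames pay only for flow tracking and a small, loss-proportional detection.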

메뉴 구조의 평가 방법론으로서 활성화 확산 모델의 타당성 검증: Eye-Tracking 접근 방법 (The Validation of Spreading Activation Model as Evaluation Methodology of Menu Structure: Eye Tracking Approach)

  • 박종순;명노해
    • 대한인간공학회지 / Vol. 26, No. 2 / pp.103-112 / 2007
  • This study was designed to validate Spreading Activation Theory (SAT) as an evaluation methodology for menu structure through an eye-tracking approach. During visual search, more eye fixations and more time are needed to visually process complex or ambiguous areas. From the aspect of recognition, well-designed menu structures were hypothesized to produce fewer fixations and shorter durations, because a menu structure reflecting the users' mental model would match the product's menu structure well, reducing the number of fixations and the duration time. The results show that menu structures with shorter SAT reaction times had significantly fewer fixations and shorter durations, as hypothesized. In conclusion, SAT proved to be an effective evaluation methodology for menu structure when combined with eye-tracking equipment. In addition, using SAT instead of a real performance experiment would be useful for designing user-centered systems and convenient information structures, because SAT was shown to provide the theoretical background for the design and evaluation of menu structures.

Adaptive Weight Collaborative Complementary Learning for Robust Visual Tracking

  • Wang, Benxuan;Kong, Jun;Jiang, Min;Shen, Jianyu;Liu, Tianshan;Gu, Xiaofeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 1 / pp.305-326 / 2019
  • Discriminative correlation filter (DCF) based tracking algorithms have recently shown impressive performance on benchmark datasets. However, many recent approaches remain vulnerable to heavy occlusion, irregular deformation, and similar challenges. In this paper, we aim to solve these problems and to balance accuracy against real-time performance within the tracking-by-detection framework. First, we propose a strategy that combines the template-based and color-based models, relying on the strengths of both to improve accuracy rather than using a simple linear superposition. Second, to enhance the discriminative power of the learned template model, spatial regularization is introduced in the learning stage to penalize features near the object boundary that correspond to the background. Third, we use a discriminative multi-scale estimation method to handle scale variations. Finally, we investigate strategies to limit the computational complexity of our tracker. Extensive experiments demonstrate that our tracker performs favorably against several advanced algorithms on both the OTB2013 and OTB2015 datasets while maintaining high frame rates.
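A minimal single-channel correlation filter in the DCF family (MOSSE-style, trained in closed form in the Fourier domain) illustrates the general idea behind such trackers: learn a filter whose response to the target patch is a sharp Gaussian peak. The paper's color model, spatial regularization, and scale estimation are not reproduced here, and all sizes and constants are assumptions.

```python
import numpy as np

def gaussian_peak(h, w, sigma=2.0):
    """Desired filter response: a Gaussian centered on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, lam=1e-2):
    """Closed-form ridge solution in the Fourier domain (MOSSE-style)."""
    G = np.fft.fft2(gaussian_peak(*patch.shape))
    F = np.fft.fft2(patch)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlation response; its argmax localizes the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))   # stand-in for the target appearance
H = train_filter(patch)
resp = respond(H, patch)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # the response peaks at the patch center
```

The regularizer `lam` plays the role of the ridge term; the paper's spatial regularization additionally weights this penalty by position to suppress background near the boundary.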

HMM 분할에 기반한 교통모니터링에 관한 연구 (A Study on HMM-Based Segmentation Method for Traffic Monitoring)

  • 황선기;강용석;김태우;김현열;박영철;배철수
    • 한국정보전자통신기술학회논문지 / Vol. 5, No. 1 / pp.1-6 / 2012
  • This paper proposes a traffic-monitoring method based on a Hidden Markov Model (HMM) that segments not only foreground and background regions but also shadows. Because the shadow of a moving object hinders visual tracking, each pixel or region is classified into one of three categories: shadow, foreground, and background. Experimental results on traffic-monitoring video demonstrate the effectiveness of the proposed method.
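The three-way labeling described above can be sketched as a per-pixel HMM decoded with the Viterbi algorithm; the emission model (shadow as a darkened version of the background intensity) and all probabilities are illustrative assumptions, not the paper's parameters.

```python
import math

STATES = ("background", "shadow", "foreground")
# sticky transitions: a pixel tends to keep its label between frames
TRANS = {s: {t: (0.8 if s == t else 0.1) for t in STATES} for s in STATES}

def emission(state, pixel, bg=0.8, shadow_gain=0.5):
    """Gaussian likelihood of an intensity (0..1) under each state:
    shadow is modeled as the background darkened by shadow_gain."""
    mean = {"background": bg, "shadow": bg * shadow_gain,
            "foreground": 0.1}[state]
    return math.exp(-((pixel - mean) ** 2) / (2 * 0.05 ** 2))

def viterbi(pixels):
    """Most likely state sequence for one pixel's intensity over time."""
    score = {s: emission(s, pixels[0]) / 3 for s in STATES}
    path = {s: [s] for s in STATES}
    for p in pixels[1:]:
        new_score, new_path = {}, {}
        for t in STATES:
            best = max(STATES, key=lambda s: score[s] * TRANS[s][t])
            new_score[t] = score[best] * TRANS[best][t] * emission(t, p)
            new_path[t] = path[best] + [t]
        score, path = new_score, new_path
    return path[max(STATES, key=score.get)]

# one pixel over time: background, then shadowed, then occupied by a vehicle
labels = viterbi([0.8, 0.8, 0.4, 0.4, 0.1, 0.1])
print(labels)
```

Decoding over time, rather than thresholding each frame independently, is what lets the model keep a briefly darkened pixel labeled as shadow instead of flickering into foreground.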

멀티모드 커널 가중치 기반 객체 추적 (Multi-mode Kernel Weight-based Object Tracking)

  • 김은섭;김용구;최유주
    • 한국컴퓨터그래픽스학회논문지 / Vol. 21, No. 4 / pp.11-17 / 2015
  • Recently, as the need for real-time object tracking in video has grown in areas such as surveillance systems, games, and film, interest in kernel-based mean-shift tracking has increased. One of the main problems in kernel-based mean-shift object tracking is tracking a target under partial or full occlusion. This paper proposes a real-time mean-shift tracking method that remains stable under partial occlusion by applying multi-mode local kernel weights. Instead of a single kernel, the proposed method uses a kernel composed of multiple sub-kernels and applies a local kernel weight according to the position of each sub-kernel. Experiments comparing the proposed method with an existing multi-mode kernel-based method show that it tracks objects more stably.
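The sub-kernel idea above can be illustrated with a single mean-shift step: the tracking window is split into quadrant sub-kernels, each with its own weight that would be lowered when that quadrant appears occluded. The synthetic scene and the weights below are assumptions for the sketch.

```python
import numpy as np

def mean_shift_step(img, cx, cy, r, quad_w):
    """One weighted-centroid (mean-shift) step over a (2r)x(2r) window,
    with a separate weight for each quadrant sub-kernel."""
    num, den = np.zeros(2), 0.0
    for y in range(cy - r, cy + r):
        for x in range(cx - r, cx + r):
            q = (0 if y < cy else 2) + (0 if x < cx else 1)  # quadrant index
            w = img[y, x] * quad_w[q]
            num += w * np.array([x, y])
            den += w
    return num / den  # new window center (x, y)

img = np.zeros((60, 60))
img[28:38, 30:40] = 1.0        # bright target blob, center near (34.5, 32.5)
cx, cy = 30.0, 30.0            # initial window center, offset from the blob
for _ in range(5):
    cx, cy = mean_shift_step(img, int(round(cx)), int(round(cy)), 12,
                             quad_w=[1.0, 1.0, 1.0, 1.0])
print(round(float(cx), 1), round(float(cy), 1))
```

Under partial occlusion, the corresponding entries of `quad_w` would be reduced so that the occluded quadrant contributes less to the shift, which is the stabilizing effect the paper targets.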

이동 물체 포착을 위한 비젼 서보 제어 시스템 개발 (Development of Visual Servo Control System for the Tracking and Grabbing of Moving Object)

  • 최규종;조월상;안두성
    • 동력기계공학회지 / Vol. 6, No. 1 / pp.96-101 / 2002
  • In this paper, we address the problem of controlling an end-effector to track and grab a moving target using visual servoing. A visual servo mechanism based on the image-based servoing principle is proposed, using visual feedback to control the end-effector without calibrated robot and camera models. First, we formulate the control problem as a nonlinear least-squares optimization and update the joint angles through a Taylor-series expansion. Then, to track a moving target in real time, a Jacobian estimation scheme (dynamic Broyden's method) is used to estimate the combined robot-image Jacobian. Using this algorithm, we can drive the objective function value to a neighborhood of zero. To show the effectiveness of the proposed algorithm, simulation results for a six-degree-of-freedom robot are presented.
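The Broyden update mentioned above, which estimates the combined robot-image Jacobian online from observed joint and image-feature increments, can be sketched as a rank-one correction; the "true" Jacobian here is a fixed synthetic 2×2 matrix rather than a real robot-camera model.

```python
import numpy as np

J_true = np.array([[2.0, 0.5],
                   [-0.3, 1.5]])          # unknown robot-image Jacobian
J = np.eye(2)                             # initial estimate

rng = np.random.default_rng(1)
for _ in range(50):
    dq = rng.standard_normal(2) * 0.01    # small joint-angle increment
    ds = J_true @ dq                      # observed image-feature change
    # Broyden rank-one update: correct J only along the explored direction
    J += np.outer(ds - J @ dq, dq) / (dq @ dq)

print(np.round(J, 3))  # approaches J_true
```

Each update makes the estimate consistent with the latest (dq, ds) pair while leaving it unchanged on the orthogonal directions, so no analytic camera or robot calibration is required.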


지능형 로보트 시스템을 위한 영역기반 Q-learning (Region-based Q-learning for intelligent robot systems)

  • 김재현;서일홍
    • 제어로봇시스템학회논문지 / Vol. 3, No. 4 / pp.350-356 / 1997
  • It is desirable for autonomous robot systems to behave in a smooth and continuous fashion when interacting with an unknown environment. Because Q-learning requires a great deal of memory and time to optimize a series of actions in a continuous state space, it is not easy to apply the method directly to such a real environment. In this paper, for continuous state-space applications, we propose a region-based Q-learning method that uses a triangular Q-value model. The method estimates the Q-value of the current state from its relationship with neighboring states, and it learns actions in a manner similar to standard Q-learning, enabling robots to move smoothly in a real environment. To show the validity of the method, a navigation comparison with Q-learning is given, and visual tracking simulation results involving a 2-DOF SCARA robot are also presented.
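The region-based idea above, estimating a Q-value from neighboring states, can be sketched in a 1-D continuous state space where Q-values live on grid nodes and are linearly interpolated between them, with each TD update spread over the two nearest nodes. The task (walk right to reach x = 1) and all constants are illustrative assumptions.

```python
import random

N_NODES = 11                    # grid nodes over the state space [0, 1]
ACTIONS = (-0.1, +0.1)          # move left / move right
GAMMA = 0.9
Q = [[0.0, 0.0] for _ in range(N_NODES)]

def _nodes(x):
    """Left grid node index and interpolation weight of the right node."""
    g = x * (N_NODES - 1)
    i = min(int(g), N_NODES - 2)
    return i, g - i

def q_value(x, a):
    i, w = _nodes(x)
    return (1 - w) * Q[i][a] + w * Q[i + 1][a]

def update(x, a, target, lr=0.5):
    i, w = _nodes(x)
    td = target - q_value(x, a)
    Q[i][a] += lr * (1 - w) * td      # spread the TD error over both
    Q[i + 1][a] += lr * w * td        # neighboring nodes by their weights

rng = random.Random(0)
for _ in range(3000):
    x = rng.random()
    a = rng.randrange(2)
    x2 = min(max(x + ACTIONS[a], 0.0), 1.0)
    if x2 >= 1.0:                      # goal reached at the right edge
        update(x, a, 1.0)
    else:
        update(x, a, GAMMA * max(q_value(x2, 0), q_value(x2, 1)))

print(q_value(0.85, 1) > q_value(0.85, 0))
```

Because the value function is interpolated, the learned policy varies continuously with the state, which is what allows smooth motion compared with a coarse tabular discretization.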
