• Title/Summary/Keyword: model-based marker tracking

Search results: 9

Dynamic Manipulation of a Virtual Object in Marker-less AR system Based on Both Human Hands

  • Chun, Jun-Chul;Lee, Byung-Sung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.4
    • /
    • pp.618-632
    • /
    • 2010
  • This paper presents a novel approach to robustly controlling augmented reality (AR) objects in a marker-less AR system through fingertip tracking and hand pattern recognition. One promising way to build a marker-less AR system is to use parts of the human body, such as the hand or face, in place of traditional fiducial markers. This paper introduces a real-time method to dynamically manipulate the overlaid virtual objects in a marker-less AR system using both hands and a single camera. The bare left hand serves as a virtual marker, while the right hand is used as a hand mouse. To build the marker-less system, we utilize a skin-color model for hand shape detection and curvature-based fingertip detection on the input video image. From the detected fingertips, the camera pose is estimated to overlay virtual objects on the hand coordinate system. To manipulate the rendered virtual objects dynamically, a vision-based hand control interface is developed, which exploits fingertip tracking to move the objects and pattern matching to initiate hand commands. Experiments show that the proposed system can control the objects dynamically and conveniently.
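
The curvature-based fingertip detection mentioned in this abstract can be sketched with a k-curvature test on the hand contour; the paper does not publish its exact parameters, so the neighbor offset `k` and the angle threshold below are illustrative assumptions:

```python
import numpy as np

def k_curvature_fingertips(contour, k=2, angle_thresh_deg=60.0):
    """Flag contour points whose k-curvature angle is sharp -- a common
    heuristic for fingertip candidates on a hand silhouette.
    `contour` is an (n, 2) array of ordered boundary points."""
    n = len(contour)
    tips = []
    for i in range(n):
        p = contour[i]
        v1 = contour[(i - k) % n] - p      # vector to the point k steps back
        v2 = contour[(i + k) % n] - p      # vector to the point k steps ahead
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_thresh_deg:       # sharp turn -> fingertip candidate
            tips.append(i)
    return tips

# A 12-point circle with one point pushed outward acts as a toy "finger".
contour = np.array([[np.cos(t), np.sin(t)]
                    for t in np.linspace(0, 2 * np.pi, 12, endpoint=False)])
contour[0] *= 3.0
print(k_curvature_fingertips(contour))     # the spike at index 0 is detected
```

A real system would run this on a contour extracted from the skin-color mask and also check convexity, to separate fingertips from the valleys between fingers.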

Occlusion-Robust Marker-Based Augmented Reality Using Particle Swarm Optimization (파티클 집단 최적화를 이용한 가려짐에 강인한 마커 기반 증강현실)

  • Park, Hanhoon;Choi, Junyeong;Moon, Kwang-Seok
    • Journal of the HCI Society of Korea
    • /
    • v.11 no.1
    • /
    • pp.39-45
    • /
    • 2016
  • Effective and efficient camera pose estimation is at the core of implementing augmented reality systems and applications. The most common approach uses markers, e.g., ARToolKit, but markers are notoriously vulnerable to occlusion. To overcome this, this paper proposes a top-down method that iteratively estimates the current camera pose using particle swarm optimization. Experiments confirmed that the proposed method enables augmented reality even on severely occluded markers.
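
The abstract does not give the optimizer's settings; a generic global-best particle swarm loop of the kind used for such top-down pose search can be sketched as follows, with a simple quadratic standing in for the real reprojection-error objective (particle count, inertia `w`, and acceleration constants `c1`, `c2` are illustrative assumptions):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic global-best particle swarm.
    In a pose-tracking setting, f would be the reprojection error of the
    marker under a candidate camera pose."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Toy objective: squared distance to a hidden 3-vector (stand-in for pose error).
target = np.array([1.0, 2.0, 0.5])
best, err = pso_minimize(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

Because the swarm evaluates whole candidate poses rather than individual marker corners, the objective degrades gracefully when parts of the marker are occluded, which is what makes this top-down formulation attractive.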

Resolving Grammatical Marking Ambiguities of Korean: An Eye-tracking Study (안구운동 추적을 통한 한국어 중의성 해소과정 연구)

  • Kim Youngjin
    • Korean Journal of Cognitive Science
    • /
    • v.15 no.4
    • /
    • pp.49-59
    • /
    • 2004
  • An eye-tracking experiment was conducted to examine how grammatical marking ambiguities of Korean are resolved, and to evaluate predictions from the garden-path model and constraint-based models on the processing of Korean morphological information. A complex NP clause structure that can be parsed according to the minimal attachment principle was compared with embedded relative clause structures whose first NPs carry the nominative marker (-ka), the delimiter (-man, roughly corresponding to the English word 'only'), or the topic marker (-nun). The results clearly showed that Korean marking ambiguities are resolved by the minimal attachment principle and that the topic marker affects reparsing procedures. The pattern of eye fixation times was more compatible with the garden-path model and was not consistent with the predictions of the constraint-based accounts. Suggestions for further studies are made.


Robust 3D visual tracking for moving object using pan/tilt stereo cameras (Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적)

  • Cho, Che-Seung;Chung, Byeong-Mook;Choi, In-Su;Nho, Sang-Hyun;Lim, Yoon-Kyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.9 s.174
    • /
    • pp.77-84
    • /
    • 2005
  • In most vision applications we are frequently confronted with determining the position of an object continuously. Target tracking generally involves two intertwined processes, tracking and control, and while each can be studied independently, an actual implementation must consider their interaction to achieve robust performance. In this paper, robust real-time visual tracking against a complex background is considered. A common way to increase the robustness of a tracking system is to use a known geometric model (e.g., a CAD model) or to attach a marker. When an object has an arbitrary shape or it is difficult to attach a marker to it, we present a method to track the target easily by specifying the color and shape of a part of the object in advance. Robust detection is achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that fusing cues and motion estimation in the tracking system yields robust performance.
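
The abstract's Kalman-filter motion estimation can be illustrated with a minimal constant-velocity filter in one dimension (the 3D case stacks three such filters or uses a six-state model); the noise covariances `q` and `r` below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def kalman_cv_track(zs, dt=1.0, q=0.01, r=0.25):
    """Constant-velocity Kalman filter over scalar position measurements.
    State is [position, velocity]; returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])          # initial state
    P = np.eye(2)                           # initial state covariance
    estimates = []
    for z in zs:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return estimates

# Track a target moving at constant velocity 1.0 under noisy observation.
rng = np.random.default_rng(1)
true = np.arange(30.0)
est = kalman_cv_track(true + rng.normal(0.0, 0.5, 30))
```

The prediction step also supplies a search window for the next frame, which is what lets a tracker of this kind ride out brief detection failures against a cluttered background.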

Improved Tracking System and Realistic Drawing for Real-Time Water-Based Sign Pen (향상된 트래킹 시스템과 실시간 수성 사인펜을 위한 사실적 드로잉)

  • Hur, Hyejung;Lee, Ju-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.2
    • /
    • pp.125-132
    • /
    • 2014
  • In this paper, we present a marker-less fingertip and brush tracking system that uses an inexpensive web camera. Parallel computation using CUDA is applied to the tracking system, so it can run in inexpensive environments such as a laptop or desktop and support real-time applications. We also present a realistic water-based sign pen drawing model and its implementation. The realistic drawing application, together with our inexpensive real-time fingertip and brush tracking system, offers a glimpse of the art class of the future and could be utilized as a test bed for future high-technology education environments.
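
The paper's water-based drawing model is not detailed in the abstract; as a hedged illustration of the general idea, ink deposited on the canvas can spread into neighboring cells through a simple explicit diffusion step on a grid (the grid size, `rate`, and periodic boundary here are illustrative assumptions, and the per-cell updates are exactly the kind of data-parallel work the paper offloads to CUDA):

```python
import numpy as np

def diffuse_ink(canvas, steps=1, rate=0.2):
    """Toy wet-ink spread: each step, every cell shares `rate` of its ink
    equally with its four neighbours (periodic boundary via np.roll)."""
    c = canvas.astype(float).copy()
    for _ in range(steps):
        up    = np.roll(c,  1, axis=0)
        down  = np.roll(c, -1, axis=0)
        left  = np.roll(c,  1, axis=1)
        right = np.roll(c, -1, axis=1)
        c = (1 - rate) * c + rate * 0.25 * (up + down + left + right)
    return c

# Drop a unit of ink at the centre and let it bleed for a few steps.
canvas = np.zeros((5, 5))
canvas[2, 2] = 1.0
out = diffuse_ink(canvas, steps=3)
```

Note that the total ink is conserved by construction, which keeps repeated strokes from brightening or fading artificially.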

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services
    • /
    • v.12 no.3
    • /
    • pp.119-129
    • /
    • 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent increasing demand for mobile augmented reality requires efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. In place of a traditional marker, the human hand is used as the interface for the marker-less mobile AR system. Because mobile devices have limited resources compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. Optimal hand region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the rotating calipers algorithm; the extracted rectangle then takes the role of a traditional marker. The proposed method resolves the problem of losing track of fingertips when the hand is rotated or occluded in hand-marker systems. Experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
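
The rotating calipers step named in this abstract finds the minimum-area rectangle enclosing the hand region's convex hull; it relies on the fact that the optimal rectangle is aligned with one of the hull's edges. A compact sketch over a convex polygon (hull computation omitted, and the square used below is only a toy stand-in for a hand contour):

```python
import numpy as np

def min_area_rect(pts):
    """Minimum-area enclosing rectangle of a convex polygon, rotating-calipers
    style: try the orientation of each hull edge, keep the smallest box.
    Returns (area, (width, height, angle))."""
    pts = np.asarray(pts, float)
    n = len(pts)
    best = (np.inf, None)
    for i in range(n):
        e = pts[(i + 1) % n] - pts[i]
        theta = np.arctan2(e[1], e[0])          # edge orientation
        c, s = np.cos(-theta), np.sin(-theta)
        R = np.array([[c, -s], [s, c]])
        rot = pts @ R.T                         # rotate edge onto the x-axis
        w = rot[:, 0].max() - rot[:, 0].min()
        h = rot[:, 1].max() - rot[:, 1].min()
        if w * h < best[0]:
            best = (w * h, (w, h, theta))
    return best

# A unit square rotated by 30 degrees: the tight rectangle has area 1.
th = np.pi / 6
Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float) @ Rm.T
area, (w, h, theta) = min_area_rect(square)
```

The recovered rectangle's corners then serve the same geometric role as the four corners of a printed fiducial marker.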

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.5
    • /
    • pp.444-450
    • /
    • 2015
  • The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a human-designated target by using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in both known and unknown environments, and their localization should occur naturally. Estimating the UAV's attitude through environment recognition is one of the most important problems in hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information from a marker on the floor. The method combines the position observed from GPS sensors with the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the floor marker to the estimated UAV attitude. Since the equations are based on the estimated position, measurement error is always present; the proposed method utilizes the error between the observed and estimated image coordinates to localize the UAV. A Kalman filter scheme is applied, and its performance is verified by image-processing results and experiments.
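
The paper's geometric constraint uses a full perspective model; as a deliberately simplified, hedged example of how a floor marker constrains the vehicle's pose, consider a camera looking straight down from height h, where the pinhole projection u = f * (Xm - Xc) / h can be inverted directly (the straight-down assumption and all numbers below are illustrative):

```python
def camera_xy_from_floor_marker(uv, marker_xy, f, h):
    """Toy down-looking pinhole model: u = f * (Xm - Xc) / h, likewise for v.
    Given the marker's image coordinates (u, v), its known world position
    (Xm, Ym), focal length f (pixels), and camera height h, recover the
    camera's ground-plane position (Xc, Yc)."""
    u, v = uv
    Xm, Ym = marker_xy
    return Xm - u * h / f, Ym - v * h / f

# Camera at (1, 2), marker at (4, 6), height 10 m, focal length 500 px:
# the marker projects to (150, 200), and inverting recovers (1, 2).
x, y = camera_xy_from_floor_marker((150.0, 200.0), (4.0, 6.0), f=500.0, h=10.0)
```

In the paper's setting, residuals between such predicted and observed image coordinates are what feed the Kalman filter's update step.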

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung;Lee, Bum-Jong
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.499-506
    • /
    • 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature-tracking results of a marker on a single frame, we estimate the camera rotation matrix and the translation vector. For camera pose estimation, we use the shape factorization method based on the scaled orthographic projection model. In scaled orthographic factorization, all feature points of an object are assumed to lie at roughly the same distance from the camera, so the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable method for selecting the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results showed that the proposed camera pose estimation method is fast and robust compared with previous methods and is applicable to various augmented reality applications.
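
Under the scaled orthographic model this abstract relies on, an image point is approximately x = s * R2 * X + t, where R2 holds the first two rows of the rotation matrix; after centering on a reference point, those scaled rows can be recovered by least squares. A minimal sketch of that core factorization step (the paper's specific reference-point selection strategy is not reproduced here, and the test points are made up):

```python
import numpy as np

def scaled_orthographic_pose(X, x):
    """Recover the scaled rotation rows of a scaled-orthographic camera from
    3D model points X (n x 3) and image points x (n x 2), by least squares
    about the centroids. Returns (M, s) with M = s * R2 (2 x 3)."""
    Xc = X - X.mean(axis=0)                       # centre on the shape centroid
    xc = x - x.mean(axis=0)                       # translation drops out
    M, *_ = np.linalg.lstsq(Xc, xc, rcond=None)   # 3x2: columns are s*r1, s*r2
    s = 0.5 * (np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1]))
    return M.T, s

# Synthetic check: project made-up model points with scale 2 and a known
# rotation about the z-axis, then recover the scale.
th = np.pi / 4
R2 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0]])
X = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 2], [1, 1, 0], [2, 0, 1]], float)
x = 2.0 * (X @ R2.T)
M, s = scaled_orthographic_pose(X, x)
```

Because the centroid stands in for the reference point here, the quality of that choice directly controls how well the same-depth assumption holds, which is exactly the sensitivity the paper's selection method addresses.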

Measurement and Algorithm Calculation of Maxillary Positioning Change by Use of an Optoelectronic Tracking System Marker in Orthognathic Surgery (악교정수술에서 광전자 포인트 마커를 이용한 상악골 위치 변화의 계측 및 계산 방법 연구)

  • Park, Jong-Woong;Kim, Soung-Min;Eo, Mi-Young;Park, Jung-Min;Myoung, Hoon;Lee, Jong-Ho;Kim, Myung-Jin
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.33 no.3
    • /
    • pp.233-240
    • /
    • 2011
  • Purpose: To apply a computer-assisted navigation system to orthognathic surgery, a simple and efficient measuring algorithm based on the affine transformation was designed, and a method of improving accuracy and reducing errors in orthognathic surgery using an optical tracking camera was studied. Methods: A total of 5 points on one surgical splint were measured and tracked by the Polaris Vicra® (Northern Digital Inc., Ontario, Canada) optical tracking system in two cases. In the first case the transformation matrix was applied to both the pre- and postoperative situations; in the second case an affine transformation was applied only to the postoperative situation. In each situation, the predicted measurement was converted to the final measurement via the affine transformation algorithm, and the expected coordinates calculated from the model were compared with those of the patient in the operating room. Results: The mean measurement error was 1.027±0.587 when the affine transformation was applied to the pre- and postoperative situations, and 0.928±0.549 when it was applied only after the postoperative situation. The farther a coordinate region was from the reference coordinates that constitute the transformation matrices, the larger the measurement error calculated from the affine transformation algorithm. Conclusion: Since most errors arose mainly from the measuring process and from lack of reproducibility, the affine transformation computed from postoperative measurements with the optical tracking system, relating model surgery to patient surgery, can be selected to minimize the error. To reduce coordinate calculation errors, as few transformation matrices as possible should be used, and the reference points that determine the affine transformation should lie close to, and be scattered around, the area where coordinates are measured and calculated.
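
An affine transformation relating model-surgery coordinates to intra-operative tracker coordinates can be estimated by least squares from point correspondences; a minimal 3D sketch of that fit (the matrix, offset, and points below are illustrative, not the paper's data):

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine transform mapping src -> dst, from n >= 4
    non-coplanar point correspondences. Returns (A, t) with dst ~ A @ src + t."""
    n = len(src)
    Xh = np.hstack([src, np.ones((n, 1))])        # homogeneous source points
    M, *_ = np.linalg.lstsq(Xh, dst, rcond=None)  # 4x3 stacked [A.T; t]
    return M[:3].T, M[3]

# Synthetic check: transform made-up "model" points by a known affine map,
# then recover that map from the correspondences.
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.9, 0.1],
              [0.3, 0.0, 1.1]])
t = np.array([5.0, -2.0, 1.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = src @ A.T + t
A_rec, t_rec = fit_affine_3d(src, dst)
```

The paper's observation about error growing away from the reference points follows from this formulation: the fit is only constrained near the correspondences, so extrapolated coordinates magnify any measurement noise in `dst`.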