• Title/Summary/Keyword: Orthographic-Projection


Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung;Lee, Bum-Jong
    • The KIPS Transactions: Part B, v.13B no.5 s.108, pp.499-506, 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature tracking results of a marker on a single frame, we estimate the camera rotation matrix and the translation vector. For the camera pose estimation, we use the shape factorization method based on the scaled orthographic projection model. In the scaled orthographic factorization method, all feature points of an object are assumed to lie at roughly the same distance from the camera, which means the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable selection method for the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results showed that the proposed camera pose estimation method is fast and robust relative to previous methods and is applicable to various augmented reality applications.
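The scaled orthographic (weak-perspective) projection model underlying the factorization method can be sketched as follows. This is an illustrative minimal implementation, not the paper's code; the function name, reference-depth parameter `z_ref`, and the toy points are assumptions.

```python
import numpy as np

def weak_perspective_project(points_3d, R, t, f, z_ref):
    """Project 3D points assuming they all lie near depth z_ref.

    points_3d : (N, 3) object points
    R         : (3, 3) camera rotation matrix
    t         : (3,)   camera translation vector
    f         : focal length
    z_ref     : depth of the reference point (one scale for the whole object)
    """
    cam = points_3d @ R.T + t   # points in camera coordinates
    s = f / z_ref               # single scale factor shared by all points
    return s * cam[:, :2]       # drop per-point depth: u = s*X, v = s*Y

# Example: a near-planar square at roughly constant depth 10
pts = np.array([[0, 0, 10], [1, 0, 10], [1, 1, 10.1], [0, 1, 9.9]], float)
uv = weak_perspective_project(pts, np.eye(3), np.zeros(3), f=1.0, z_ref=10.0)
```

Because the per-point depth is replaced by the single reference depth, the choice of reference point directly sets the common scale `s`, which is why the abstract emphasizes its selection.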

View Variations and Recognition of 2-D Objects (화상에서의 각도 변화를 이용한 3차원 물체 인식)

  • Whangbo, Taeg-Keun
    • The Transactions of the Korea Information Processing Society, v.4 no.11, pp.2840-2848, 1997
  • Recognition of 3D objects using computer vision is complicated by the fact that geometric features vary with view orientation. An important factor in designing recognition algorithms in such situations is understanding the variation of certain critical features. The features selected in this paper are the angles between landmarks in a scene. In a class of polyhedral objects, the angles at certain vertices may form a distinct and characteristic alignment of faces. For many other classes of objects, it may be possible to identify distinctive spatial arrangements of some readily identifiable landmarks. In this paper, given an isotropic view orientation and an orthographic projection, the two-dimensional joint density function of two angles in a scene is derived. The joint density of all defining angles of a polygon in an image is also derived. The analytic expressions for the densities are useful in determining statistical decision rules to recognize surfaces and objects. Experiments to evaluate the usefulness of the proposed methods are reported; the results indicate that the method is useful and powerful.
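The basic phenomenon the densities describe, how a fixed 3D angle at a landmark changes with view orientation under orthographic projection, can be sketched numerically. This is an illustrative helper, not the paper's density derivation; the function name and the example rotations are assumptions.

```python
import numpy as np

def projected_angle(vertex, p1, p2, R):
    """Angle (degrees) at `vertex` between rays to p1 and p2 after
    rotating the scene by R and orthographically projecting (drop z)."""
    a = (R @ (p1 - vertex))[:2]
    b = (R @ (p2 - vertex))[:2]
    cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

v, p, q = np.zeros(3), np.array([1., 0., 0.]), np.array([0., 1., 0.])
head_on = projected_angle(v, p, q, np.eye(3))   # a right angle viewed head-on stays 90°

# An oblique view (45° about the optical axis, then a 60° tilt) distorts it
cz, sz = np.cos(np.radians(45)), np.sin(np.radians(45))
cx, sx = np.cos(np.radians(60)), np.sin(np.radians(60))
Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
oblique = projected_angle(v, p, q, Rx @ Rz)
```

Sampling `R` over isotropically distributed orientations and histogramming `projected_angle` would empirically approximate the joint density the paper derives analytically.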


Ortho-rectification of Satellite-based Linear Pushbroom-type CCD Camera Images (선형 CCD카메라 영상의 정사투영 알고리즘 개발)

  • 곽성희;이영란;신동석
    • Korean Journal of Remote Sensing, v.15 no.1, pp.31-38, 1999
  • In this paper, we introduce an algorithm for the ortho-rectification of high resolution pushbroom-type satellite images. The generation of ortho-images is the ultimate level of satellite image preprocessing, which also includes systematic geocoding and precision geocoding. It is also essential for the mapping of satellite images because topographic maps are based on the orthographic projection. The newly developed ortho-image generation algorithm introduced in this paper builds on previously developed algorithms (Shin and Lee, 1997; Shin et al., 1998). Various experimental results are shown in this paper. The results show that the algorithm completely eliminates the disparities in the perspectively viewed images caused by the terrain height. The absolute accuracy of the developed algorithm depends on the accuracy of the camera model and the digital elevation model used.
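The general indirect ortho-rectification loop, for each output map-grid pixel, look up the DEM height, project the ground point into the raw image through the sensor model, and resample, can be sketched as below. All names are illustrative; `camera_project` is a stand-in for a pushbroom sensor model, not the paper's actual model.

```python
import numpy as np

def orthorectify(raw, dem, camera_project):
    """raw: raw perspective image; dem: height per ortho pixel (same grid
    as the output); camera_project(x, y, h) -> (row, col) in the raw image."""
    h_out, w_out = dem.shape
    ortho = np.zeros((h_out, w_out), raw.dtype)
    for y in range(h_out):
        for x in range(w_out):
            r, c = camera_project(x, y, dem[y, x])
            ri, ci = int(round(r)), int(round(c))  # nearest-neighbour resampling
            if 0 <= ri < raw.shape[0] and 0 <= ci < raw.shape[1]:
                ortho[y, x] = raw[ri, ci]
    return ortho

# Toy sensor model: terrain height shifts the apparent column
# (relief displacement proportional to height)
raw = np.arange(25, dtype=float).reshape(5, 5)
dem = np.zeros((5, 5)); dem[:, 3] = 2.0          # a ridge along column 3
toy = lambda x, y, h: (y, x + 0.5 * h)
ortho = orthorectify(raw, dem, toy)
```

The ridge pixels are resampled from a displaced column of the raw image, which is exactly the terrain-induced disparity the algorithm removes.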

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 1998.06b, pp.147-153, 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics and augmented reality, 3D information about a model object is usually necessary to generate 2D views of the model, which are then inserted into or overlaid on environmental views or real video frames. However, we do not require any three-dimensional model, only images of the model object at some locations, to render views according to the motion of the video camera, which is calculated by an SFM algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view interpolation algorithm is applied, rather than a 3D ray-tracing method, to obtain views of the model at viewpoints different from those of the model views. In order to generate novel views in a way that agrees with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, on the basis of 3D information recovered from the video images and model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and the rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, the limitations of the method and subjects for future research are discussed.
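The core of linear view interpolation can be sketched very compactly: under weak-perspective projection, an in-between view of matched points can be approximated by linearly blending their image coordinates in two reference views. This is an illustrative sketch; the paper additionally ties the blend weight to the camera motion recovered by SFM, which is not reproduced here.

```python
import numpy as np

def interpolate_view(pts_a, pts_b, alpha):
    """pts_a, pts_b: (N, 2) corresponding image points in two model views;
    alpha in [0, 1] selects the in-between view (0 -> view A, 1 -> view B)."""
    return (1.0 - alpha) * pts_a + alpha * pts_b

view_a = np.array([[0.0, 0.0], [10.0, 0.0]])
view_b = np.array([[2.0, 0.0], [12.0, 4.0]])
mid = interpolate_view(view_a, view_b, 0.5)   # halfway between the two views
```

For weak-perspective cameras this blend produces geometrically valid intermediate views of the point set, which is why no explicit 3D model or ray tracing is needed.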


3DTIP: 3D Stereoscopic Tour-Into-Picture of Korean Traditional Paintings (3DTIP: 한국 고전화의 3차원 입체 Tour-Into-Picture)

  • Jo, Cheol-Yong;Kim, Man-Bae
    • Journal of Broadcast Engineering, v.14 no.5, pp.616-624, 2009
  • This paper presents a 3D stereoscopic TIP (Tour Into Picture) for Korean classical paintings composed of persons, boats, and landscape. Unlike conventional TIP methods, which produce a 2D image or video, the proposed TIP provides users with 3D stereoscopic content; navigating a picture with stereoscopic viewing delivers a more realistic and immersive perception. The method first prepares input data consisting of a foreground mask, a background image, and a depth map. The second step is to navigate the picture and obtain rendered images by orthographic or perspective projection. Then, two depth enhancement schemes, a depth template and Laws depth, are utilized to reduce the cardboard effect and thus enhance the perceived 3D depth of the foreground objects. In experiments, the proposed method was tested on 'Danopungjun' and 'Muyigido', famous paintings from the Chosun Dynasty. The stereoscopic animation was shown to deliver a new 3D perception compared with 2D video.
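The rendering step offers a choice between the two projections mentioned above; the contrast can be shown in two lines. This is a generic illustration of the two models, not the paper's renderer.

```python
import numpy as np

def ortho_project(p):
    """Orthographic projection: depth is simply dropped."""
    return p[:2]

def perspective_project(p, f=1.0):
    """Perspective projection: image position scales with 1/depth."""
    return f * p[:2] / p[2]

p = np.array([2.0, 1.0, 4.0])
u_ortho = ortho_project(p)        # (2, 1) regardless of depth
u_persp = perspective_project(p)  # (0.5, 0.25), shrunk by the depth z = 4
```

Perspective projection is what makes objects grow as the virtual camera tours into the picture; the orthographic option keeps their size constant.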

Two-dimensional Automatic Transformation Template Matching for Image Recognition (영상 인식을 위한 2차원 자동 변형 템플릿 매칭)

  • Han, Young-Mo
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.9, pp.1-6, 2019
  • One method for image recognition is template matching. In conventional template matching, the block matching algorithm (BMA) is performed while varying the two-dimensional translational displacement of the template within a given matching image. The template size and shape do not change during the BMA. Since only two-dimensional translational displacement is considered, the success rate decreases if the size and direction of the object differ between the template and the matching image. In this paper, a variable is added to adjust the two-dimensional direction and size of the template, and the optimal value of the variable is automatically calculated in the block corresponding to each two-dimensional translational displacement. Using the calculated optimal value, the template is automatically transformed into an optimal template for each block, and the matching error of each block is then computed against the automatically deformed template. Therefore, a more stable result can be obtained despite differences in direction and size. For ease of use, this study focuses on designing the algorithm in a closed form that does not require information beyond the template image, such as distance information.
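The idea can be sketched with a brute-force variant: at each candidate displacement, try several template sizes and keep the transform with the lowest matching error. The paper derives the optimal transform in closed form, which this grid search does not reproduce; all names and the scale set are illustrative.

```python
import numpy as np

def resize_nn(t, s):
    """Nearest-neighbour resize of template t by geometric scale s."""
    h, w = t.shape
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    ys = np.minimum((np.arange(nh) / s).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) / s).astype(int), w - 1)
    return t[np.ix_(ys, xs)]

def match_with_transform(image, template, scales=(1.0, 2.0)):
    """For each displacement and each template scale, compute the
    sum-of-absolute-differences error; return the best (error, (y, x), s)."""
    best = (np.inf, None, None)
    for s in scales:
        t = resize_nn(template, s)
        th, tw = t.shape
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                err = float(np.abs(image[y:y+th, x:x+tw] - t).sum())
                if err < best[0]:
                    best = (err, (y, x), s)
    return best

# The image contains a 2x-enlarged copy of the template; a fixed-size BMA
# would miss it, while the scale-adaptive search reaches zero error.
template = np.array([[1., 2.], [3., 4.]])
image = np.zeros((6, 6))
image[1:5, 1:5] = resize_nn(template, 2.0)
err, pos, scale = match_with_transform(image, template)
```

The closed-form method in the paper replaces the inner grid search with a direct computation of the optimal size and direction per block, avoiding the combinatorial cost shown here.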