• Title/Abstract/Keywords: Orthographic Image


Hologram Generation of 3D Objects Using Multiple Orthographic View Images

  • Kim, Min-Su;Baasantseren, Ganbat;Kim, Nam;Park, Jae-Hyeung
    • Journal of the Optical Society of Korea
    • /
    • Vol. 12, No. 4
    • /
    • pp.269-274
    • /
    • 2008
  • We propose a new synthesis method for the hologram of 3D objects using incoherent multiple orthographic view images. The 3D objects are captured and their multiple orthographic view images are generated from the captured image. Each orthographic view image is numerically multiplied by the plane wave propagating in the direction of the corresponding view angle and integrated to form a point in the hologram plane. By repeating this process for all orthographic view images, we can generate the Fourier hologram of the 3D objects.
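The synthesis step the abstract describes — multiply each orthographic view by a plane wave tilted to the corresponding view angle, then integrate to obtain one hologram sample — can be sketched in NumPy. The function name, the paraxial form of the plane-wave phase, and the wavelength/pitch defaults are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fourier_hologram(views, angles_x, angles_y, wavelength=633e-9, pitch=10e-6):
    """Synthesize a Fourier hologram point by point from orthographic views.

    views[(i, j)] is the 2D orthographic view image for view angles
    (angles_x[i], angles_y[j]).  Each view is multiplied by the plane wave
    tilted to the corresponding view direction (paraxial form) and summed
    over all pixels, yielding ONE complex sample of the hologram.
    """
    H = np.zeros((len(angles_x), len(angles_y)), dtype=complex)
    for i, tx in enumerate(angles_x):
        for j, ty in enumerate(angles_y):
            img = np.asarray(views[(i, j)], dtype=float)
            h, w = img.shape
            x = (np.arange(w) - w / 2) * pitch
            y = (np.arange(h) - h / 2) * pitch
            X, Y = np.meshgrid(x, y)
            carrier = np.exp(1j * 2 * np.pi / wavelength
                             * (X * np.tan(tx) + Y * np.tan(ty)))
            H[i, j] = (img * carrier).sum()   # integrate -> one hologram point
    return H
```

Repeating the multiply-and-integrate step over the whole grid of view angles fills in the full Fourier hologram, mirroring the loop structure the abstract implies.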

An Efficient Perspective Projection using VolumePro™ Hardware (볼륨프로 하드웨어를 이용한 효율적인 투시투영 방법)

  • 임석현;신병석
    • Journal of KIISE:Computer Systems and Theory
    • /
    • Vol. 31, No. 3-4
    • /
    • pp.195-203
    • /
    • 2004
  • VolumePro is a real-time volume rendering hardware for consumer PCs. However, it cannot be used for applications requiring perspective projection, such as virtual endoscopy, since it provides only orthographic projection. Several methods have been presented to approximate perspective projection by decomposing a volume into slabs and applying successive parallel projections to them. But these take a lot of time, since the entire region of every slab must be processed, including parts that do not contribute to the final image. In this paper, we propose an efficient perspective projection method that makes use of several sub-volumes with the cropping feature of VolumePro. It reduces the rendering time in comparison to the slab-based method without image quality deterioration, since it processes only the parts contained in the view frustum.
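The slab-based approximation that the abstract contrasts against can be sketched in plain NumPy: split the volume into slabs along the view axis, parallel-project each slab, and rescale its projection by a depth-dependent perspective factor before compositing. The function names, the max-intensity projection, and the nearest-neighbour resampling are illustrative assumptions; the paper's own contribution (cropping sub-volumes to the view frustum in VolumePro hardware) is not reproduced here:

```python
import numpy as np

def approx_perspective(volume, num_slabs, eye_dist):
    """Approximate perspective projection by successive parallel projections.

    The volume (z, y, x) is split into slabs along the view axis; each slab
    is parallel-projected (here: max-intensity along z), rescaled by a
    perspective factor depending on the slab's depth, and composited.
    """
    nz, ny, nx = volume.shape
    image = np.zeros((ny, nx))
    for idx in np.array_split(np.arange(nz), num_slabs):
        slab = volume[idx]                      # one slab of the volume
        proj = slab.max(axis=0)                 # parallel (orthographic) projection
        z_mid = idx.mean()
        scale = eye_dist / (eye_dist + z_mid)   # farther slabs appear smaller
        # resample the projection by the perspective scale (nearest neighbour)
        yy = np.clip(np.rint((np.arange(ny) - ny / 2) / scale + ny / 2).astype(int), 0, ny - 1)
        xx = np.clip(np.rint((np.arange(nx) - nx / 2) / scale + nx / 2).astype(int), 0, nx - 1)
        image = np.maximum(image, proj[np.ix_(yy, xx)])
    return image
```

Note how every voxel of every slab is touched even when it falls outside the view frustum — exactly the inefficiency the paper's sub-volume cropping removes.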

Recovering Surface Orientation from Texture Gradient by Monocular View (단안시에 의한 무늬그래디언트로부터 면 방향 복구)

  • 정성칠;최연성;최종수
    • Proceedings of the Korean Institute of Communication Sciences Conference
    • /
    • Proceedings of the 1987 KICS Spring Conference
    • /
    • pp.22-26
    • /
    • 1987
  • Texture provides an important source of information about the three-dimensional structure of visible surfaces, particularly for stationary monocular views. To recover three-dimensional information, the distorting effects of projection must be distinguished from properties of the texture on which the distortion acts. In this paper, we present an approximated maximum likelihood estimation method that finds the orientation of the visible surface on the Gaussian sphere using local analysis of the texture. In addition, assuming that the image formation system is an orthographic projection and that the texel (texture element) is a circle, we derive the surface orientation from the distribution of the variation, under orthographic projection, of the tangent direction, which is uniform over the arc length of a circle. We present the orientation parameters of the textured surface as slant and tilt, and the surface normals of the resulting surface orientation as a needle map. The algorithm was applied to geographic contours and synthetic textures.

  • PDF
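The geometric fact underlying the abstract — under orthographic projection a circular texel projects to an ellipse whose minor/major axis ratio is cos(slant) and whose minor-axis direction gives the tilt — can be sketched as follows. The principal-axes estimator here is a simplified stand-in for the paper's maximum likelihood method, and the function name is an assumption:

```python
import numpy as np

def slant_tilt_from_ellipse(points):
    """Estimate slant and tilt of a plane from the orthographic projection
    of a circular texel, given 2D boundary points of the projected ellipse.

    Under orthographic projection a circle maps to an ellipse; the ratio
    of minor to major axis is cos(slant), and the tilt is the image
    direction of the minor axis.
    """
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)            # center the ellipse
    # principal axes of the boundary points via the covariance matrix
    cov = pts.T @ pts / len(pts)
    w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
    minor, major = np.sqrt(w[0]), np.sqrt(w[1])
    slant = np.arccos(minor / major)
    tilt = np.arctan2(v[1, 0], v[0, 0])     # direction of the minor axis
    return slant, tilt
```

Applying this to every detected texel and drawing the recovered normals yields the needle-map output the abstract mentions.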

Automatic Estimation of Spatially Varying Focal Length for Correcting Distortion in Fisheye Lens Images

  • Kim, Hyungtae;Kim, Daehee;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 2, No. 6
    • /
    • pp.339-344
    • /
    • 2013
  • This paper presents an automatic focal length estimation method to correct the fisheye lens distortion in a spatially adaptive manner. The proposed method estimates the focal length of the fisheye lens by generating two reference focal lengths. The distorted fisheye lens image is finally corrected using the orthographic projection model. The experimental results showed that the proposed focal length estimation method is more accurate than existing methods in terms of the loss rate.

  • PDF
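The orthographic projection model the abstract relies on maps an incidence angle θ to the distorted image radius r_d = f·sin(θ), while an ideal pinhole camera gives r_u = f·tan(θ). A minimal sketch of the radial correction follows; the paper's two-reference focal length estimation itself is not reproduced, and the function name is an assumption:

```python
import numpy as np

def undistort_radius(r_d, f):
    """Map a distorted fisheye radius to its rectilinear (pinhole) radius
    using the orthographic fisheye model r_d = f * sin(theta).

    theta = arcsin(r_d / f)   (invert the orthographic model)
    r_u   = f * tan(theta)    (re-project with an ideal pinhole)
    """
    theta = np.arcsin(np.clip(r_d / f, -1.0, 1.0))
    return f * np.tan(theta)
```

A full undistortion would apply this radial map to every pixel around the image center, using the focal length the paper estimates.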

3-D shape and motion recovery using SVD from image sequence (동영상으로부터 3차원 물체의 모양과 움직임 복원)

  • 정병오;김병곤;고한석
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • Vol. 35S, No. 3
    • /
    • pp.176-184
    • /
    • 1998
  • We present a sequential factorization method using singular value decomposition (SVD) for recovering both the three-dimensional shape of an object and the motion of the camera from a sequence of images. We employ paraperspective projection [6] as the camera model to handle significant translational motion toward the camera or across the image. The proposed method not only quickly gives robust and accurate results, but also provides results at each frame because it is a sequential method. These properties make our method practically applicable to real-time applications. Considerable research has been devoted to the problem of recovering the motion and shape of an object from images [2][3][4][5][6][7][8][9]. Among many different approaches, we adopt a factorization method using SVD because of its robustness and computational efficiency. The factorization method, originally proposed by Tomasi and Kanade [1], is based on batch-type computation and factors the feature-trajectory matrix using singular value decomposition (SVD). Morita and Kanade [10] extended [1] to a sequential-type solution. However, both methods use an orthographic projection and cannot be applied to image sequences containing significant translational motion toward the camera or across the image. Poelman and Kanade [11] developed a batch-type factorization method using a paraperspective camera model; although it is a useful technique, the method cannot be employed for real-time applications because it is based on batch-type computation. This work presents a sequential factorization method using SVD for paraperspective projection. Initial experimental results show that the performance of our method is almost equivalent to that of [11], although it is sequential.

  • PDF
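The batch orthographic factorization of Tomasi and Kanade [1], which this paper extends to a sequential paraperspective form, reduces to a rank-3 SVD of the registered measurement matrix of feature trajectories. A minimal sketch, omitting the metric (Euclidean) upgrade and the sequential update; the function name is an assumption:

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a measurement matrix W (2F x P: F frames of
    P tracked feature points) into camera motion M (2F x 3) and object
    shape S (3 x P), as in batch orthographic factorization.

    The affine ambiguity between M and S (the metric upgrade) is not
    resolved in this sketch.
    """
    W = W - W.mean(axis=1, keepdims=True)     # register: subtract per-row centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])             # motion (affine)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]      # shape (affine)
    return M, S
```

The sequential variants the abstract discusses update this decomposition frame by frame instead of decomposing the full trajectory matrix at once.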

View Variations and Recognition of 2-D Objects (화상에서의 각도 변화를 이용한 3차원 물체 인식)

  • Whangbo, Taeg-Keun
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 4, No. 11
    • /
    • pp.2840-2848
    • /
    • 1997
  • Recognition of 3D objects using computer vision is complicated by the fact that geometric features vary with view orientation. An important factor in designing recognition algorithms in such situations is understanding the variation of certain critical features. The features selected in this paper are the angles between landmarks in a scene. In a class of polyhedral objects, the angles at certain vertices may form a distinct and characteristic alignment of faces. For many other classes of objects, it may be possible to identify distinctive spatial arrangements of some readily identifiable landmarks. In this paper, given an isotropic view orientation and an orthographic projection, the two-dimensional joint density function of two angles in a scene is derived. Also, the joint density of all defining angles of a polygon in an image is derived. The analytic expressions for the densities are useful in determining statistical decision rules to recognize surfaces and objects. Experiments to evaluate the usefulness of the proposed methods are reported. Results indicate that the method is useful and powerful.

  • PDF
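The densities the abstract derives analytically can be approximated by Monte Carlo: draw isotropic view orientations, orthographically project the landmark directions, and collect the resulting image angles. The names and the QR-based rotation sampling below are illustrative assumptions; the paper gives closed-form densities instead:

```python
import numpy as np

def projected_angles(v1, v2, n_views=10000, seed=0):
    """Monte Carlo sample of the image angle between two 3D landmark
    directions under isotropic view orientations and orthographic
    projection.  Random orientations come from QR of Gaussian matrices
    with column signs fixed, which is uniform on the orthogonal group.
    """
    rng = np.random.default_rng(seed)
    angles = np.empty(n_views)
    for k in range(n_views):
        Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
        Q = Q * np.sign(np.diag(R))         # fix column signs for uniformity
        a, b = Q @ v1, Q @ v2
        # orthographic projection: drop the z (viewing) coordinate
        a2, b2 = a[:2], b[:2]
        cosang = a2 @ b2 / (np.linalg.norm(a2) * np.linalg.norm(b2))
        angles[k] = np.arccos(np.clip(cosang, -1.0, 1.0))
    return angles
```

Histogramming such samples gives an empirical check of the analytic density used in the decision rules.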

Two-dimensional Automatic Transformation Template Matching for Image Recognition (영상 인식을 위한 2차원 자동 변형 템플릿 매칭)

  • Han, Young-Mo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • Vol. 20, No. 9
    • /
    • pp.1-6
    • /
    • 2019
  • One method for image recognition is template matching. In conventional template matching, the block matching algorithm (BMA) is performed while changing the two-dimensional translational displacement of the template within a given matching image. The template size and shape do not change during the BMA. Since only two-dimensional translational displacement is considered, the success rate decreases if the size and direction of the object differ between the template and the matching image. In this paper, a variable is added to adjust the two-dimensional direction and size of the template, and the optimal value of the variable is automatically calculated in the block corresponding to each two-dimensional translational displacement. Using the calculated optimal value, the template is automatically transformed into an optimal template for each block. The matching error value of each block is then calculated based on the automatically deformed template. Therefore, a more stable result can be obtained against differences in direction and size. For ease of use, this study focuses on designing the algorithm in a closed form that does not require additional information beyond the template image, such as distance information.
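The idea of the abstract — block matching in which each translational offset also optimises the template's scale and rotation — can be sketched by brute force. The paper derives the optimal transform in closed form; the exhaustive search over a few candidate scales and angles below, and all names in it, are only an illustration:

```python
import numpy as np

def transform_template(tpl, scale, angle):
    """Resample a template at a given scale and rotation (nearest neighbour)."""
    h, w = tpl.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    c, s = np.cos(angle), np.sin(angle)
    # inverse map: for each output pixel, find its source pixel in tpl
    u = ((xs - cx) * c + (ys - cy) * s) / scale + cx
    v = (-(xs - cx) * s + (ys - cy) * c) / scale + cy
    u = np.clip(np.rint(u).astype(int), 0, w - 1)
    v = np.clip(np.rint(v).astype(int), 0, h - 1)
    return tpl[v, u]

def match(image, tpl, scales=(0.8, 1.0, 1.2), angles=(-0.2, 0.0, 0.2)):
    """BMA where every translational block also searches over template
    scale and rotation; returns the best (y, x, scale, angle, error)."""
    h, w = tpl.shape
    H, W = image.shape
    best = (0, 0, 1.0, 0.0, np.inf)
    for sc in scales:
        for an in angles:
            t = transform_template(tpl, sc, an)   # deformed template
            for y in range(H - h + 1):
                for x in range(W - w + 1):
                    err = np.abs(image[y:y + h, x:x + w] - t).sum()
                    if err < best[4]:
                        best = (y, x, sc, an, err)
    return best
```

Replacing the inner brute-force search with the paper's closed-form optimum is what makes the published method practical.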

Video-Based Augmented Reality without Euclidean Camera Calibration (유클리드 카메라 보정을 하지 않는 비디오 기반 증강현실)

  • Seo, Yong-Deuk
    • Journal of the Korea Computer Graphics Society
    • /
    • Vol. 9, No. 3
    • /
    • pp.15-21
    • /
    • 2003
  • An algorithm is developed for augmenting a real video with virtual graphics objects without computing Euclidean information. The real motion of the camera is obtained in affine space by a direct linear method using image matches. A virtual camera is then provided by determining the locations of four basis points in two input images as an initialization process. The four pairs of 2D locations and their 3D affine coordinates provide a Euclidean orthographic projection camera through the whole video sequence. Our method has the capability of generating views of objects shaded by virtual light sources, because we can make use of all the functions of the graphics library written on the basis of Euclidean geometry. Our novel formulation and experimental results with real video sequences are presented.

  • PDF
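The initialization the abstract describes — four basis points whose 2D image locations and 3D affine coordinates determine a projection camera for the whole sequence — amounts to solving a small linear system. A sketch under assumed names (one basis point per column, homogeneous affine coordinates):

```python
import numpy as np

def affine_camera(basis_img, basis_affine_h):
    """Recover a 2x4 affine camera P from the image positions of four basis
    points (2x4) and their homogeneous affine coordinates (4x4, one point
    per column), by solving P @ A = x exactly."""
    return basis_img @ np.linalg.inv(basis_affine_h)

def project(P, X_affine):
    """Project 3D affine points (3xN) with the affine camera P (2x4)."""
    X_h = np.vstack([X_affine, np.ones((1, X_affine.shape[1]))])
    return P @ X_h
```

With the camera fixed this way per frame, any point given in the same affine coordinate frame can be rendered into the video without Euclidean calibration.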

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Hong, Ki-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998 Proceedings of International Workshop on Advanced Image Technology, Korean Society of Broadcast Engineers
    • /
    • pp.147-153
    • /
    • 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information of a model object is necessary to generate 2D views of the model, which are then inserted into or overlaid on environmental views or real video frames. However, we do not require any three-dimensional model, only images of the model object at some locations, to render views according to the motion of the video camera, which is calculated by an SFM algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view interpolation algorithm is applied, rather than a 3D ray-tracing method, to get a view of the model at viewpoints different from those of the model views. In order to get novel views in a way that agrees with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time on the basis of 3D information recovered from the video images and model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, a discussion of the limitations of the method and subjects for future research is provided.

  • PDF
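The linear view interpolation step mentioned in the abstract reduces, for weak-perspective (scaled-orthographic) cameras, to a convex combination of corresponding image points in two basis model views, since intermediate views then lie in the span of the basis views. A minimal sketch with assumed names:

```python
import numpy as np

def interpolate_view(pts_a, pts_b, alpha):
    """Linear view interpolation: image points of an in-between view are a
    convex combination of corresponding points in two basis model views
    (valid under weak-perspective / scaled-orthographic projection).

    pts_a, pts_b: (N, 2) arrays of corresponding image points;
    alpha in [0, 1] selects the position between view A and view B.
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    return (1.0 - alpha) * pts_a + alpha * pts_b
```

In the paper's pipeline, alpha would come from the motion parameters estimated from the video, so the interpolated model view agrees with the real camera motion before being overlaid on the frame.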

Evaluation of Geospatial Information Construction Characteristics and Usability According to Type and Sensor of Unmanned Aerial Vehicle (무인항공기 종류 및 센서에 따른 공간정보 구축의 활용성 평가)

  • Chang, Si Hoon;Yun, Hee Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • Vol. 39, No. 6
    • /
    • pp.555-562
    • /
    • 2021
  • Recently, in the field of geospatial information construction, unmanned aerial vehicles have been increasingly used because they enable rapid data acquisition and utilization. In this study, photogrammetry was performed using fixed-wing, rotary-wing, and VTOL (Vertical Take-Off and Landing) unmanned aerial vehicles, and geospatial information was constructed using two types of unmanned aerial vehicle LiDAR (Light Detection And Ranging) sensors. In addition, the accuracy was evaluated to present the utility of the spatial information constructed through unmanned aerial photogrammetry and LiDAR. As a result of the accuracy evaluation, the orthoimage constructed through unmanned aerial photogrammetry showed accuracy within 2 cm. Considering that the GSD (Ground Sample Distance) of the constructed orthoimage is about 2 cm, the accuracy of the unmanned aerial photogrammetry results is judged to be within the GSD. The spatial information constructed through the unmanned aerial vehicle LiDAR showed accuracy within 6 cm in the height direction, and data on the ground was obtained even in the vegetation area. A DEM (Digital Elevation Model) derived from the LiDAR data can be used in various ways, such as for construction work, urban planning, disaster prevention, and topographic analysis.