• Title/Summary/Keyword: Multiview images

Model-Based Three-dimensional Multiview Object Implementation by OpenGL (OpenGL을 이용한 모델 기반 3차원 다시점 객체 구현)

  • Oh, Won-Sik; Kim, Dong-Uk; Kim, Hwa-Sung; Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.13 no.3 / pp.299-309 / 2008
  • In this paper, we propose an algorithm for generating model-based three-dimensional multi-viewpoint objects using OpenGL rendering. In the first step, we preprocess a depth map to obtain three-dimensional coordinates, which are sampled as OpenGL vertex data with the depth value stored as the z-coordinate. Next, the Delaunay triangulation algorithm constructs polygons for texture mapping from this vertex information. Finally, by mapping a texture image onto the constructed polygons and computing the three-dimensional coordinates in OpenGL, we generate a viewpoint-adaptive object.
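
A minimal sketch of the first two steps described in this abstract (back-projecting a depth map into OpenGL-style vertex data and building polygons with Delaunay triangulation) is given below in Python with NumPy/SciPy. The intrinsic parameters, the sampling stride, and the function name depth_to_mesh are illustrative assumptions, and the final texture-mapped rendering is left to an OpenGL pipeline rather than reproduced here.

```python
# Sketch: back-project a depth map to 3D vertices and build a triangle mesh
# with Delaunay triangulation, as a stand-in for the vertex/polygon
# construction step described in the abstract. Intrinsics (fx, fy, cx, cy)
# and the sampling stride are assumed values.
import numpy as np
from scipy.spatial import Delaunay

def depth_to_mesh(depth, fx=525.0, fy=525.0, cx=None, cy=None, stride=8):
    """Convert a depth image (H x W, metric depth) into vertices, UVs, and triangles."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy

    # Subsample the pixel grid; each sampled pixel becomes one vertex.
    vs, us = np.mgrid[0:h:stride, 0:w:stride]
    us, vs = us.ravel(), vs.ravel()
    z = depth[vs, us]

    # Pinhole back-projection: X = (u - cx) * z / fx, Y = (v - cy) * z / fy.
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    vertices = np.stack([x, y, z], axis=1)                 # (N, 3) vertex array
    texcoords = np.stack([us / w, 1.0 - vs / h], axis=1)   # (N, 2) UVs for texture mapping

    # 2D Delaunay triangulation in image space yields the polygon (triangle) list.
    triangles = Delaunay(np.stack([us, vs], axis=1)).simplices  # (M, 3) vertex indices
    return vertices, texcoords, triangles

if __name__ == "__main__":
    depth = 1.0 + 0.1 * np.random.rand(480, 640)   # synthetic depth map for the demo
    v, uv, tri = depth_to_mesh(depth)
    print(v.shape, uv.shape, tri.shape)
```

Triangulating in image space (rather than in 3D) keeps the mesh consistent with the pixel grid, so each triangle can be textured directly from the corresponding image region.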

Object VR-based 2.5D Virtual Textile Wearing System : Viewpoint Vector Estimation and Textile Texture Mapping (오브젝트 VR 기반 2.5D 가상 직물 착의 시스템 : 시점 벡터 추정 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan; Kwak, No-Yoon
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.19-26 / 2008
  • This paper presents an object VR (Virtual Reality)-based 2.5D virtual textile wearing system that uses viewpoint vector estimation and textile texture mapping, allowing a user to view the virtually dressed object from a full 360-degree range of viewpoints. The proposed system virtually applies a textile pattern selected by the user to the clothing region segmented from multiview 2D images of a clothes model captured for object VR, and displays the resulting virtual wearing appearance three-dimensionally over the 360-degree viewpoint range. Regardless of the color or intensity of the model clothes, the system can change the textile pattern while preserving the illumination and shading properties of the selected clothing region, and it can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual garments or entire outfits. The system offers high practicality and an easy-to-use interface: it runs in real time in various digital environments, produces comparatively natural and realistic virtual wearing results, and its semi-automatic processing reduces manual work to a minimum. By showing the effect of a textile pattern design on the appearance of clothes without manufacturing physical garments, the system can stimulate designers' creativity and, by supporting purchasers' decision-making, promote B2B and B2C e-commerce.
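
The shading-preserving texture replacement described above can be illustrated with a small sketch: a common approach (not necessarily the authors' exact method) is to derive a shading map from the luminance of the segmented clothing region and modulate the new textile pattern by it. The function swap_textile and all parameter choices below are assumptions for illustration; the viewpoint vector estimation step is not reproduced here.

```python
# Hedged sketch of shading-preserving textile replacement: modulate a new
# textile pattern by the luminance of the original clothing region so that
# illumination and shading survive the texture swap. This illustrates the
# general idea only; names and constants are assumptions.
import numpy as np

def swap_textile(image, mask, textile):
    """image: H x W x 3 float in [0, 1]; mask: H x W bool clothing region;
    textile: H x W x 3 float pattern already warped/resized to the image size."""
    # Luminance of the original clothing pixels acts as the shading map.
    lum = image @ np.array([0.299, 0.587, 0.114])
    shading = lum / (lum[mask].mean() + 1e-6)   # ~1.0 at average brightness

    out = image.copy()
    out[mask] = np.clip(textile[mask] * shading[mask][:, None], 0.0, 1.0)
    return out

if __name__ == "__main__":
    h, w = 240, 320
    img = np.random.rand(h, w, 3)
    mask = np.zeros((h, w), dtype=bool)
    mask[60:180, 80:240] = True                 # toy clothing-region mask
    tile = np.tile(np.random.rand(8, 8, 3), (h // 8, w // 8, 1))
    result = swap_textile(img, mask, tile)
    print(result.shape)
```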

Robust Semi-auto Calibration Method for Various Cameras and Illumination Changes (다양한 카메라와 조명의 변화에 강건한 반자동 카메라 캘리브레이션 방법)

  • Shin, Dong-Won; Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.21 no.1 / pp.36-42 / 2016
  • Recently, much 3D content has been produced with multiview camera systems. In such a system, a viewpoint difference between the color and depth cameras is inevitable, so the camera parameters play an important role in aligning the viewpoints as a preprocessing step. The conventional camera calibration method is inconvenient because the user must select pattern features manually after capturing a planar chessboard in various poses. Therefore, we propose a semi-automatic camera calibration method based on circular sampling and homography estimation. First, the proposed method extracts candidate pattern features from the images with the FAST corner detector. Next, we reduce the number of candidates by circular sampling and obtain the complete point cloud by homography estimation. Lastly, we compute the position of each pattern feature with sub-pixel accuracy by approximation with a hyperbolic paraboloid surface. We also investigated which factors affect the pattern feature detection at each step. Compared to the conventional method, the proposed method removes the inconvenience of manual operation while maintaining the accuracy of the camera parameters.
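
The multi-step pipeline in this abstract can be sketched with OpenCV as follows, under clearly stated assumptions: cv2.cornerSubPix stands in for the paper's paraboloid-surface fit, the circular-sampling filter is omitted, and the board size, FAST threshold, and four user-clicked correspondences are hypothetical inputs rather than the authors' settings.

```python
# Hedged sketch of the semi-automatic calibration flow described above:
# FAST corner candidates -> homography from four user-clicked corners ->
# predicted positions of every chessboard corner -> sub-pixel refinement.
import cv2
import numpy as np

def predict_and_refine_corners(gray, clicked_corners, board_size=(9, 6), square=1.0):
    """gray: 8-bit grayscale image; clicked_corners: four image points matching
    the four outer corners of the board pattern, given in board order."""
    cols, rows = board_size

    # 1) FAST gives a cheap set of corner candidates (not filtered further in this sketch).
    fast = cv2.FastFeatureDetector_create(threshold=20)
    candidates = fast.detect(gray)

    # 2) Homography between the ideal board plane and the image, from four correspondences.
    board_corners = np.float32([[0, 0], [cols - 1, 0],
                                [cols - 1, rows - 1], [0, rows - 1]]) * square
    H, _ = cv2.findHomography(board_corners, np.float32(clicked_corners))

    # 3) Map every ideal grid point into the image to get the complete point set.
    grid = np.float32([[c * square, r * square] for r in range(rows) for c in range(cols)])
    predicted = cv2.perspectiveTransform(grid.reshape(-1, 1, 2), H)

    # 4) Sub-pixel refinement around each predicted corner position.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = cv2.cornerSubPix(gray, predicted.astype(np.float32), (5, 5), (-1, -1), criteria)
    return len(candidates), refined.reshape(-1, 2)
```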