• Title/Summary/Keyword: Image 2D to 3D Model

Search Results: 509

Study on the 3D Modeling Data Conversion Algorithm from 2D Images (2D 이미지에서 3D 모델링 데이터 변환 알고리즘에 관한 연구)

  • Choi, Tea Jun;Lee, Hee Man;Kim, Eung Soo
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.479-486 / 2016
  • In this paper, an algorithm that converts a 2D image into a 3D model is discussed. The 2D picture drawn by a user is scanned for image processing. The Canny algorithm is employed to find the contour, and the waterfront algorithm is proposed to find the foreground image area. The foreground area is segmented to decompose complex shapes into simple shapes; each simple segmented foreground region is then converted into a 3D model, and the pieces are combined into a complex 3D model. The 3D conversion formula used in this paper is also discussed. The generated 3D model data will be useful for 3D animation and other 3D content creation.
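The contour-finding step above can be sketched in plain Python. This is a minimal gradient-magnitude edge detector, a simplified stand-in for the Canny detector the paper uses (real Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding); the image is assumed to be a list of rows of gray levels, and the threshold value is illustrative.

```python
def sobel_edges(img, thresh=100):
    """Mark pixels whose Sobel gradient magnitude exceeds a threshold.

    A simplified stand-in for Canny edge detection: compute horizontal
    (gx) and vertical (gy) Sobel responses and threshold the magnitude.
    Border pixels are left unmarked.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges
```

A uniform image yields no edges, while a vertical intensity step is marked along the step, which is the contour the pipeline then segments.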

A Research on AI Generated 2D Image to 3D Modeling Technology

  • Ke Ma;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.81-86 / 2024
  • Advancements in generative AI are reshaping graphic and 3D content design landscapes, where AI not only enriches graphic design but extends its reach to 3D content creation. Though 3D texture mapping through AI is advancing, AI-generated 3D modeling technology in this realm remains nascent. This paper presents AI 2D image-driven 3D modeling techniques, assessing their viability in 3D content design by scrutinizing various algorithms. Initially, four OBJ model-exporting AI algorithms are screened, and two are further evaluated. Results indicate that while AI-generated 3D models may not be directly usable, they effectively capture reference object structures, offering substantial time savings and enhanced design efficiency through manual refinements. This endeavor pioneers new avenues for 3D content creators, anticipating a dynamic fusion of AI and 3D design.
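Since the screened algorithms export OBJ models, one practical first step in assessing their output is simply loading and counting the geometry. The following is a minimal Wavefront OBJ reader (an illustrative sketch that handles only `v` and `f` records; real OBJ files also carry normals, texture coordinates and materials):

```python
def parse_obj(text):
    """Collect vertices and faces from Wavefront OBJ text.

    Faces keep only the vertex index of each 'v/vt/vn' triple;
    comments and unknown record types are skipped.
    """
    verts, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue
        if parts[0] == 'v':
            verts.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == 'f':
            faces.append(tuple(int(p.split('/')[0]) for p in parts[1:]))
    return verts, faces
```

Vertex and face counts give a rough sense of how much manual refinement an AI-generated model will need before use.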

A 3D Face Generation Method using Single Frontal Face Image for Game Users (단일 정면 얼굴 영상을 이용한 게임 사용자의 3차원 얼굴 생성 방법)

  • Jeong, Min-Yi;Lee, Sung-Joo;Park, Kang-Ryong;Kim, Jai-Hie
    • Proceedings of the IEEK Conference / 2008.06a / pp.1013-1014 / 2008
  • In this paper, we propose a new method of generating a 3D face from a single frontal face image and a 3D generic face model. Using an active appearance model (AAM), control points among the facial feature points were localized in the 2D input face image. The transform parameters of the 3D generic face model were then found so as to minimize the error between the 2D control points and the corresponding 2D points projected from the 3D facial model. Finally, the 3D face was generated using the obtained model parameters. We applied this 3D face to a 3D game framework and found that the proposed method could produce a realistic 3D face of the game user.
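The core fitting step, minimizing the error between 2D landmarks and projected model points, can be illustrated with a much-reduced version of the problem. The sketch below solves in closed form for a scale and translation aligning projected generic-model points to AAM landmarks (the paper optimizes the full 3D model transform; this 2D scale-plus-translation case is an assumption made for brevity):

```python
def fit_scale_translation(model_pts, image_pts):
    """Least-squares scale s and translation (tx, ty) minimizing
    sum ||s * m + t - q||^2 over paired 2D points m (projected model)
    and q (AAM landmarks). Closed form via centroids."""
    n = len(model_pts)
    mx = sum(p[0] for p in model_pts) / n
    my = sum(p[1] for p in model_pts) / n
    ix = sum(p[0] for p in image_pts) / n
    iy = sum(p[1] for p in image_pts) / n
    num = sum((p[0] - mx) * (q[0] - ix) + (p[1] - my) * (q[1] - iy)
              for p, q in zip(model_pts, image_pts))
    den = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in model_pts)
    s = num / den
    return s, ix - s * mx, iy - s * my
```

With landmarks that are an exact scaled-and-shifted copy of the model points, the recovered parameters match the true transform.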


Robust Watermarking Algorithm for 3D Mesh Models (3차원 메쉬 모델을 위한 강인한 워터마킹 기법)

  • 송한새;조남익;김종원
    • Journal of Broadcast Engineering / v.9 no.1 / pp.64-73 / 2004
  • A robust watermarking algorithm is proposed for 3D mesh models. The watermark is inserted into a 2D image extracted from the target 3D model. Each pixel value of the extracted 2D image represents the distance from predefined reference points to the surface of the given 3D model; this extracted image is defined as the "range image" in this paper. The watermark is embedded into the range image, and the watermarked 3D mesh is obtained by modifying the vertices using the watermarked range image. The extraction procedure requires the original model: after registration between the original and watermarked models, two range images are extracted from the two 3D models, and the embedded watermark is extracted from these images. Experimental results show that the proposed algorithm is robust against attacks such as rotation, translation, uniform scaling, mesh simplification, AWGN and quantization of vertex coordinates.
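The embed/extract pattern the abstract describes, additive embedding in the range image and non-blind extraction against the original, can be sketched as follows. The range image is modeled here as a flat list of float distances, and the ±alpha modulation strength is an assumed illustrative choice:

```python
def embed(range_img, bits, alpha=2.0):
    """Additively embed watermark bits into range-image samples:
    +alpha for a 1 bit, -alpha for a 0 bit."""
    return [p + (alpha if b else -alpha) for p, b in zip(range_img, bits)]

def extract(original, marked):
    """Non-blind extraction (the original model is required, as in the
    paper): recover each bit from the sign of the difference."""
    return [1 if m - o > 0 else 0 for o, m in zip(original, marked)]
```

Because extraction compares against the original, the scheme tolerates distortions that perturb both images similarly after registration.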

Geocoding of the Free Stereo Mosaic Image Generated from Video Sequences (비디오 프레임 영상으로부터 제작된 자유 입체 모자이크 영상의 실좌표 등록)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, Jun-Ku;Kim, Jung-Sub;Koh, Jin-Woo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.3 / pp.249-255 / 2011
  • A free-stereo mosaic image can be generated without GPS/INS or ground control data by using relative orientation parameters in a 3D model coordinate system whose origin lies in one reference frame image. A 3D coordinate calculated from conjugate points on the free-stereo mosaic images is therefore expressed in the 3D model coordinate system. To determine 3D coordinates in the absolute coordinate system from conjugate points on the free-stereo mosaic images, a method of transforming 3D model coordinates into 3D absolute coordinates is required. Generally, a 3D similarity transformation is used to convert between such coordinate systems. However, the error of the 3D model coordinates used in the free-stereo mosaic images grows non-linearly with distance from the origin, so these coordinates are difficult to transform into 3D absolute coordinates with a linear transformation; a non-linear transformation is needed. A method of resampling the free-stereo mosaic image into a geo-stereo mosaic image is also needed so that a digital map in absolute coordinates can be overlaid on the stereo mosaic images. In this paper, we propose a 3D non-linear transformation that converts 3D model coordinates of the free-stereo mosaic image into 3D absolute coordinates, and, based on it, a 2D non-linear transformation that converts the free-stereo mosaic image into a geo-stereo mosaic image.
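The baseline the paper extends is the linear 3D similarity transform X' = s·R·X + T. The sketch below restricts the rotation to the z-axis for brevity (the full transform uses a general 3D rotation); it is exactly this linear model whose residuals grow with distance from the reference-frame origin, motivating the proposed non-linear transformation:

```python
import math

def similarity_3d(pt, s, yaw, t):
    """Apply a 3D similarity transform X' = s * Rz(yaw) * X + T.

    s    : uniform scale
    yaw  : rotation about the z-axis (radians); a simplification of
           the general 3D rotation used in practice
    t    : translation (tx, ty, tz)
    """
    x, y, z = pt
    c, si = math.cos(yaw), math.sin(yaw)
    return (s * (c * x - si * y) + t[0],
            s * (si * x + c * y) + t[1],
            s * z + t[2])
```

Seven parameters (scale, three rotation angles, three translations) define the full transform; the paper's contribution is replacing this linear mapping with a non-linear one fitted to the distance-dependent error.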

Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering / v.20 no.3 / pp.368-378 / 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV) because it can easily provide 3D content. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. However, except in some particular images, such depth cues are rare, so consistent depth-map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video content. From this viewpoint, this paper proposes a novel method applicable to general types of images, utilizing saliency as well as edges. To generate the depth map, geometric perspective, an affinity model and a binomial filter are used. In the experiments, the proposed method was performed on 24 video clips with a variety of contents. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D content.
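The DIBR step mentioned above can be sketched for a single scanline: each pixel is shifted horizontally by a disparity proportional to its depth to synthesize the second view, and the resulting holes are filled. This is a deliberately minimal sketch (real DIBR handles sub-pixel warping, occlusion ordering and more careful hole inpainting); the depth range and maximum disparity are assumed values:

```python
def dibr_row(row, depth, max_disp=3):
    """Synthesize one scanline of a virtual view via depth-based shift.

    depth values are assumed in [0, 255]; nearer pixels (larger depth
    value here) shift further left. Holes left after warping are filled
    by propagating the nearest value from the left.
    """
    w = len(row)
    out = [None] * w
    for x in range(w):
        d = (depth[x] * max_disp) // 255   # integer disparity
        nx = x - d
        if 0 <= nx < w:
            out[nx] = row[x]               # later writes overwrite earlier
    last = row[0]
    for x in range(w):                     # simple left-to-right hole fill
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out
```

Pairing the original row (left view) with the shifted row (right view) yields the stereoscopic image the abstract describes.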

Dimensional Error of the Computer Aided Rapid Prototyping Model (CARP Model) Used in Autotransplantation Relative to the Real Teeth (자가 치아 이식술에 사용되는 Computer Aided Rapid Prototyping model(CARP model)의 실제 치아에 대한 오차)

  • Lee, Seong-Jae;Kim, Ui-Seong;Kim, Gi-Deok;Lee, Seung-Jong
    • The Journal of the Korean dental association / v.44 no.2 s.441 / pp.115-122 / 2006
  • Objective: The purpose of this study was to evaluate the dimensional errors between the real teeth, 3D CT image models and CARP models. Materials and Methods: Two maxillary and two mandibular block bones with intact teeth were taken from two cadavers. Computed tomography was taken both in the dry state and in the wet state. All teeth were then extracted, and the dimensions of the real teeth were measured with a digital caliper at the mesio-distal and bucco-lingual widths of both the crown and the cervical portion. The 3D CT image was generated using the V-works 4.0™ (Cybermed Inc., Seoul, Korea) software. Twelve teeth were randomly selected for CARP model fabrication. All measurements of the 3D CT images and CARP models were made in the same manner as for the real teeth, and the dimensional errors between the real teeth, the 3D CT image models and the CARP models were calculated. Results: 1) The average absolute error was 0.199 mm between real teeth and 3D CT image models, 0.169 mm between 3D CT image models and CARP models, and 0.291 mm between real teeth and CARP models. 2) On average, the 3D CT images were smaller than the real teeth by 0.149 mm, and the CARP models were smaller than the 3D CT image models by 0.067 mm. Conclusion: Within the scope of this study, the CARP model, with an average absolute error of 0.291 mm, can help enhance the success rate of autogenous tooth transplantation through the increased accuracy of the recipient bone and donor tooth.
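The error figures reported above are averages of absolute differences between paired caliper measurements. A trivial sketch of that computation (measurement values below are made up for illustration, not taken from the study):

```python
def mean_abs_error(a, b):
    """Average absolute difference (e.g. in mm) between two sets of
    paired dimensional measurements."""
    assert len(a) == len(b) and a
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```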


An Algorithm for Converting a 2D Face Image into a 3D Model (얼굴 2D 이미지의 3D 모델 변환 알고리즘)

  • Choi, Tae-Jun;Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information / v.20 no.4 / pp.41-48 / 2015
  • Recently, the spread of 3D printers has been increasing the demand for 3D models. However, creating a 3D model requires a trained specialist using specialized software. This paper describes an algorithm that produces a 3D model from a single two-dimensional frontal face photograph, so that ordinary people can easily create 3D models. The background and foreground are separated from the photo, and a predetermined number of vertices are placed on the separated foreground 2D image at equal intervals. The arranged vertex locations are extended into three dimensions by using the gray level of the pixel at each vertex and the characteristics of the eyebrows and nose of a normal human face. The foreground/background separation method uses the edge information of the silhouette. The AdaBoost algorithm with Haar-like features is employed to find the locations of the eyes and nose. The 3D models obtained with this algorithm are good enough for 3D printing, although a little manual treatment may be required. The algorithm will be useful for providing 3D content in conjunction with the spread of 3D printers.
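The gray-level-to-height extension of the arranged vertices can be sketched as follows. This shows only the basic idea of lifting 2D samples into 3D from intensity; the face-specific eyebrow/nose shaping from the normal-face prior, and the vertex sampling interval, are omitted, and the z-scale factor is an assumed parameter:

```python
def extrude_heights(gray, z_scale=0.5):
    """Lift each foreground sample to a 3D vertex (x, y, z), taking z
    from the pixel's gray level scaled by z_scale. `gray` is a list of
    rows of intensities for the separated foreground region."""
    return [(x, y, g * z_scale)
            for y, row in enumerate(gray)
            for x, g in enumerate(row)]
```

Triangulating the resulting vertex grid would then give a printable surface mesh.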

3D Object Modeling and Feature Points using Octree Model (8진트리 모델을 사용한 3D 물체 모델링과 특징점)

  • 이영재
    • Journal of Korea Multimedia Society / v.5 no.5 / pp.599-607 / 2002
  • The octree model, a hierarchical volume description of 3D objects, may be utilized to generate projected images from arbitrary viewing directions, thereby providing an efficient database for 3D object recognition and other applications. We generate 2D projected images and pseudo-gray images of an object using the octree model and a multi-level boundary search algorithm. We also present algorithms for finding feature points in 2D and 3D images and for finding matched points using geometric transformations. Built on this database, the approach can be widely applied to 3D object modeling and to efficient feature-point processing in basic 3D object research.
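The projection idea can be illustrated at the level of filled leaf voxels: an orthographic projected image is obtained by dropping one coordinate of each occupied voxel. This is a minimal sketch of the projection mechanism only; the octree hierarchy, arbitrary viewing directions and the pseudo-gray rendering from the paper are not reproduced:

```python
def project_voxels(voxels, axis=2):
    """Orthographically project filled voxel centers onto an image
    plane by dropping the coordinate along `axis` (default: z).
    Returns the set of occupied 2D cells."""
    return {tuple(c for i, c in enumerate(v) if i != axis)
            for v in voxels}
```

Voxels stacked along the projection axis collapse to a single image cell, which is why the projected silhouette has fewer cells than there are voxels.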


3D Reconstruction of a Single Clothing Image and Its Application to Image-based Virtual Try-On (의상 이미지의 3차원 의상 복원 방법과 가상착용 응용)

  • Ahn, Heejune;Minar, Matiur Rahman
    • Journal of Korea Society of Industrial Information Systems / v.25 no.5 / pp.1-11 / 2020
  • Image-based virtual try-on (VTON) is becoming popular for online apparel shopping, mainly because it does not require 3D information about the try-on clothes or the target humans. However, existing 2D algorithms, even when utilizing advanced non-rigid deformation algorithms, cannot handle large spatial transformations for complex target human poses. In this study, we propose a 3D clothing reconstruction method using a 3D human body model. The resulting 3D models of try-on clothes can be deformed more easily when applied to rest-posed standard human models. The poses and shapes of the 3D clothing models can then be transferred to the target human models estimated from 2D images. Finally, the deformed clothing models can be rendered and blended with the target human representations. Experimental results with the VITON dataset used in previous works show that the shapes of the reconstructed clothing are significantly more natural, compared to the 2D image-based deformation results, when human poses and shapes are estimated accurately.
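The final "rendered and blended" step above is a standard per-pixel alpha composite of the rendered clothing layer over the target-person image. A minimal single-channel sketch (images as lists of rows; the actual pipeline operates on RGB renders with a clothing mask as the alpha):

```python
def alpha_blend(fg, alpha, bg):
    """Composite foreground over background per pixel:
    out = alpha * fg + (1 - alpha) * bg, with alpha in [0, 1].
    All three inputs are same-shaped lists of rows of floats."""
    return [[a * f + (1.0 - a) * b for f, a, b in zip(fr, ar, br)]
            for fr, ar, br in zip(fg, alpha, bg)]
```

Pixels where the clothing mask is 1 show the rendered garment, where it is 0 the target person, and fractional values soften the garment boundary.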