• Title/Summary/Keyword: 3D Conversion of 2D Images

Technology Trends in Modeling 3D Images from 2D Images (2차원 영상으로부터 3차원 영상을 모델링하는 기술 동향)

  • Jo, Hyeong-Rae;Park, Gu-Man
    • Broadcasting and Media Magazine / v.26 no.4 / pp.23-39 / 2021
  • Methods for converting 2D images into 3D model images have been developing in diverse directions. Among advances in deep learning, research on GANs in particular has made progress not only in generating 2D images but also in generating various kinds of 3D imagery. Based on the need for research on converting 2D images into 3D, this article reviews the content and trends of related work. The main topics include deep learning-based 3D object recognition, neural networks for 2D-to-3D conversion, studies applying generative techniques, and 3D modeling tools. Considering the overall flow of related research, we expect more refined expressiveness in 3D modeling, fast high-resolution rendering, and convenient online accessibility in the future. Practitioners in related industries can expect shorter generation times, and non-experts are expected to be able to create and use high-quality 3D models without specialized 3D skills.

A Study on the space analysis algorithm for 3D TV image conversion (TV영상의 3차원 변환을 위한 공간분석 알고리즘에 관한 연구)

  • 신강호;김계국
    • Journal of the Korea Society of Computer and Information / v.7 no.4 / pp.121-126 / 2002
  • Compared with a 2D image, a stereoscopic image lets the viewer perceive the scene as if it were closer to the real thing, and it affects human visual perception because it is a more natural way to sense the spatial relationship between the image and the viewer. Several methods exist for converting 2D images into 3D. In this paper, however, we propose an image-separation algorithm for a continuous input sequence based on spatial analysis, rather than working on a single 2D still image. In addition, we adapt the motion vectors used in MPEG. In our experiments, a convincing 3D effect was obtained. A minimal depth-from-motion sketch is given below.
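
The abstract gives no implementation details, so the following is only an illustrative sketch of the general idea of turning frame-to-frame motion (as MPEG motion vectors would provide) into a rough depth cue via motion parallax. Dense optical flow stands in for block motion vectors here, and the normalization is an assumption, not the authors' method.

```python
import numpy as np
import cv2

def motion_depth_cue(prev_gray, curr_gray):
    """Rough per-pixel depth cue from motion parallax between two frames.

    Dense optical flow stands in for MPEG motion vectors (an assumption);
    larger motion is heuristically treated as closer to the camera.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)          # pixel displacement per frame
    depth_cue = magnitude / (magnitude.max() + 1e-6)  # 0 = far, 1 = near (heuristic)
    return depth_cue
```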

Feature Point Matching Technique using Adjustment of Distortion between Correlation Windows (상관 윈도우사이의 왜곡을 보정한 특징점 정합 기법)

  • Ha, Seung-Tae;Han, Jun-Hui
    • Journal of KIISE:Software and Applications / v.28 no.5 / pp.440-447 / 2001
  • This paper proposes a new matching technique that infers an initial 3D transformation from 3D information associated with the images and warps the correlation window accordingly before matching. Specifically, 3D information is extracted through initial stereo matching, a 3D transformation is obtained from manually supplied correspondences of initial feature points, and this transformation is then used to apply a 3D warp to the correlation window. Warping the correlation window in 3D models the actual camera motion better than existing matching methods, which rely on 2D constraints on image flow. The 3D transformation also minimizes the search range for candidate matching points and adds reliability to the matching results. Experiments demonstrate the superiority of the proposed matching method through matching results on various images and a comparison of correlation coefficients with existing methods. A sketch of warping a correlation window before computing normalized cross-correlation is given below.
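
As an illustration only (the paper's exact warp and matching criterion are not given in the abstract), the sketch below warps a correlation window with a projective transform and scores it against a target patch using normalized cross-correlation; the 3x3 matrix H and the window size are hypothetical placeholders.

```python
import numpy as np
import cv2

def warped_ncc(img_ref, img_tgt, center, H, win=11):
    """Warp a correlation window from the reference image with a projective
    transform H (a stand-in for the inferred 3D transformation), then compare
    it to the window at `center` in the target image with normalized
    cross-correlation."""
    half = win // 2
    x, y = center
    # Warp the whole reference image once, then cut out the window (simple but wasteful).
    warped = cv2.warpPerspective(img_ref, H, (img_ref.shape[1], img_ref.shape[0]))
    patch_ref = warped[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    patch_tgt = img_tgt[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    a = patch_ref - patch_ref.mean()
    b = patch_tgt - patch_tgt.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)   # 1.0 = perfect correlation
```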

Geocoding of the Free Stereo Mosaic Image Generated from Video Sequences (비디오 프레임 영상으로부터 제작된 자유 입체 모자이크 영상의 실좌표 등록)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, Jun-Ku;Kim, Jung-Sub;Koh, Jin-Woo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.3 / pp.249-255 / 2011
  • A free-stereo mosaic image can be generated without GPS/INS or ground control data by using relative orientation parameters defined on a 3D model coordinate system whose origin lies in one reference frame image. 3D coordinates computed from conjugate points on the free-stereo mosaic images are therefore expressed in that model coordinate system, so determining coordinates in an absolute 3D system requires a transformation from model coordinates to absolute coordinates. The 3D similarity transformation is generally used for this purpose, but the error of the model coordinates in a free-stereo mosaic grows non-linearly with distance from the origin of the model coordinate system, which makes a linear transformation inadequate. A method for transforming the non-linear model coordinates into absolute coordinates is therefore needed, together with a method for resampling the free-stereo mosaic into a geo-stereo mosaic so that it can be overlaid with a digital map in absolute coordinates. In this paper, we propose a 3D non-linear transformation that converts model coordinates in the free-stereo mosaic image to absolute coordinates, and a 2D non-linear transformation, derived from the 3D one, that converts the free-stereo mosaic image into a geo-stereo mosaic image. The standard linear transformation that the paper departs from is written out below.
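
For reference (this is the standard formulation of the linear transformation mentioned above, not an equation taken from the paper), the 3D similarity (Helmert) transformation maps a model coordinate to an absolute coordinate with a scale s, a rotation matrix R, and a translation T:

```latex
% Standard 3D similarity (Helmert) transformation; s, R, T are the seven parameters
% (one scale, three rotation angles, three translations) estimated from control points.
\mathbf{X}_{a} = s \, \mathbf{R} \, \mathbf{X}_{m} + \mathbf{T}
```

Because the model-coordinate error grows non-linearly with distance from the origin, the paper replaces this single global parameter set with a position-dependent (non-linear) transformation.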

3D Library Platform Construction using Drone Images and its Application to Kangwha Dolmen (드론 촬영 영상을 활용한 3D 라이브러리 플랫폼 구축 및 강화지석묘에의 적용)

  • Kim, Kyoung-Ho;Kim, Min-Jung;Lee, Jeongjin
    • Cartoon and Animation Studies / s.48 / pp.199-215 / 2017
  • Although drones were originally built for military purposes, they are now used for general-purpose applications and are actively employed for content creation and image acquisition. In this paper, we develop a 3D library module platform using 3D mesh model data generated from drone images and the resulting point cloud. First, a large set of 2D images is captured by a drone, a point cloud is generated from these images, and a 3D mesh is reconstructed from the point cloud. We then build a service library platform around the converted 3D data for multi-purpose reuse. The platform can reduce the cost and time of creating special-effects content during the production of a movie, drama, or documentary, and can help train experts in digital content production for realistic media, special imagery, and exhibitions. A sketch of the point-cloud-to-mesh step is shown below.
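
The abstract does not name the reconstruction software, so the following is only a hedged sketch of the point-cloud-to-mesh step using the open-source Open3D library; the file names and the Poisson depth parameter are placeholders, not values from the paper.

```python
import open3d as o3d

# Load a point cloud reconstructed from drone photos (e.g. by a photogrammetry tool).
pcd = o3d.io.read_point_cloud("dolmen_points.ply")   # hypothetical file name
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

# Poisson surface reconstruction turns the oriented point cloud into a triangle mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("dolmen_mesh.ply", mesh)
```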

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.12 no.12 / pp.1-13 / 2012
  • 3D video is regarded as next-generation content in numerous applications, and 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a mature 3D video era. In 2D-to-3D conversion, the depth image of each scene in the 2D video is estimated and stereoscopic video is then synthesized using DIBR (Depth Image Based Rendering). This paper proposes a novel depth fusion algorithm that integrates multiple depth cues contained in 2D video to generate stereoscopic video. For proper depth fusion, the reliability of each cue is checked in the current scene; based on these reliability tests, the scene is classified into one of four scene types, and scene-adaptive depth fusion combines the reliable cues into the final depth information. Simulation results show that each depth cue is used appropriately according to the scene type and that the final depth is generated from cues that effectively represent the current scene. A minimal reliability-weighted fusion sketch follows below.
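
The paper's actual reliability tests and the four scene types are not specified in the abstract, so the sketch below only illustrates the general shape of reliability-gated depth fusion; the variance-based reliability test and the equal weighting are assumptions.

```python
import numpy as np

def fuse_depth_cues(cues, min_spread=0.01):
    """Fuse several per-pixel depth-cue maps (values in [0, 1]) into one depth map.

    A cue is treated as unreliable for the scene if it is nearly flat
    (low spatial variance) -- a stand-in for the paper's reliability tests.
    Reliable cues are averaged; if none pass the test, all cues are used.
    """
    reliable = [c for c in cues if np.var(c) >= min_spread]
    selected = reliable if reliable else cues
    return np.mean(selected, axis=0)

# Example with two synthetic cues: a vertical-position cue and a flat (unreliable) cue.
h, w = 4, 5
position_cue = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))
flat_cue = np.full((h, w), 0.5)
depth = fuse_depth_cues([position_cue, flat_cue])
```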

3D/2D convertible color display based on modified integral imaging (집적 영상에 기반한 2차원 3차원 변환 가능 컬러 디스플레이)

  • Kim, Yun-Hee;Cho, Seong-Woo;Lee, Byoung-Ho
    • Proceedings of the KIEE Conference / 2006.07c / pp.1605-1606 / 2006
  • This paper proposes a method for realizing a color display in a 2D/3D convertible display based on integral imaging, by using an LCD (liquid crystal display) panel, instead of the conventional optical modulator, as the transmissive display device that shows the elemental images. We explain the principle of the proposed method and present experimental results. We also examine the expected color dispersion problem, analyze its cause, and propose a solution, and on this basis discuss how to implement a 2D/3D convertible color display.

Fast Geometric Transformations of 3D Images Represented by an Octree (8진트리로 표현된 3차원 영상의 빠른 기학학적 변환)

  • Heo, Yeong-Nam;Park, Seung-Jin;Kim, Eung-Gon
    • The Transactions of the Korea Information Processing Society / v.2 no.6 / pp.831-838 / 1995
  • Geometric transformations require many operations when moving 3D objects are displayed on screen, so fast computation is an important problem in CAD and animation applications. The general method for computing the transformed coordinates of an object represented by an octree must perform the operations on every node. To speed up geometric transformations of octree-represented 3D images, this paper proposes an efficient method that converts the rectangular coordinates of the vertices of octree nodes into world-space coordinates using basis vectors. The coordinates of the vertices of each octant are computed with the formula presented in the paper, which requires only additions and multiplications by powers of two. The method runs very quickly and is compared against the general computation method. A sketch of the underlying idea appears below.
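
As a rough illustration only (the paper's exact formula is not reproduced in the abstract), the sketch below exploits the fact that a child octant's origin differs from its parent's by half-size offsets along the transformed basis vectors, so each child's transformed corner is obtained from a few additions of precomputed, power-of-two-scaled vectors instead of a full matrix multiply per node.

```python
import numpy as np

def transformed_octree_corners(origin_world, basis_world, level, max_level, out):
    """Recursively accumulate transformed corner positions of octree nodes.

    origin_world : transformed position of this node's minimum corner
    basis_world  : 3x3 matrix whose rows are the transformed edge vectors of this node
                   (already scaled to the node's size); children use basis_world / 2,
                   i.e. only shifts/halvings, never a repeated full transformation.
    """
    out.append(origin_world)
    if level == max_level:
        return
    child_basis = basis_world / 2.0            # halving = the "powers of 2" step
    for i in range(8):                         # 8 child octants
        offset = ((i >> 0) & 1) * child_basis[0] \
               + ((i >> 1) & 1) * child_basis[1] \
               + ((i >> 2) & 1) * child_basis[2]
        transformed_octree_corners(origin_world + offset, child_basis,
                                   level + 1, max_level, out)

# Example: a cube of side 8 under a placeholder rotation R, subdivided twice.
R = np.eye(3)
corners = []
transformed_octree_corners(np.zeros(3), 8.0 * R, 0, 2, corners)
```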

Convert 2D Video Frames into 3D Video Frames (2차원 동영상의 3차원 동영상 변화)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.117-123 / 2009
  • This paper proposes an algorithm that converts 2D video frames into 3D video frames for a parallel-looking stereo camera. The proposed algorithm finds the disparity information between two consecutive video frames and generates 3D frames from the obtained disparity maps; the disparity information is computed with a modified iterative convergence algorithm. A method for generating 3D video frames from the disparity information is also proposed. The algorithm uses a coherence method that overcomes the limitations of video-pattern-based algorithms. A minimal disparity-shift rendering sketch is given below.
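
The abstract does not describe the view-synthesis step in detail; the sketch below only illustrates the common idea of shifting pixels horizontally by their disparity to form the second view of a parallel stereo pair, with a naive hole fill. The scaling factor and fill strategy are placeholders, not the paper's method.

```python
import numpy as np

def synthesize_right_view(frame, disparity, scale=1.0):
    """Create a right-eye view by shifting each pixel left by its disparity.

    frame     : H x W x 3 image (the original frame serves as the left view)
    disparity : H x W map; larger values = nearer objects = larger shift
    Holes left by disocclusion are filled with the nearest valid pixel on the row.
    """
    h, w = disparity.shape
    right = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x - int(round(scale * disparity[y, x]))
            if 0 <= nx < w:
                right[y, nx] = frame[y, x]
                filled[y, nx] = True
        last = None                       # naive hole filling along the row
        for x in range(w):
            if filled[y, x]:
                last = right[y, x].copy()
            elif last is not None:
                right[y, x] = last
    return right
```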

Three-Dimensional Conversion of Two-Dimensional Movie Using Optical Flow and Normalized Cut (Optical Flow와 Normalized Cut을 이용한 2차원 동영상의 3차원 동영상 변환)

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Joo-Hwan;Kang, Jin-Mo;Lee, Byoung-Ho
    • Korean Journal of Optics and Photonics / v.20 no.1 / pp.16-22 / 2009
  • We propose a method to convert a two-dimensional movie into a three-dimensional movie using normalized cut and optical flow. An image of the two-dimensional movie is first segmented into objects, and the depth of each object is then estimated. Normalized cut is an image segmentation algorithm; to improve its speed and accuracy, we use a watershed algorithm and a weight function based on optical flow. The depth of the objects segmented by the improved normalized cut is estimated from optical flow. Ordinal depth is estimated from the change of the segmented object label in an occluded region, which is detected from the difference of the absolute values of optical flow. To compensate the ordinal depth, we generate a relational depth, the absolute value of the optical flow, as motion parallax. The final depth map is obtained by multiplying the ordinal depth by the relational depth and dividing by the average optical flow. The proposed two-dimensional/three-dimensional conversion method is applicable to all three-dimensional display devices and all two-dimensional movie formats, and we present experimental results on sample two-dimensional movies. The depth-combination step is sketched below.
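
Only the final combination rule is stated in the abstract; the sketch below implements just that rule (final depth = ordinal depth x relational depth / average flow magnitude), with Farneback optical flow standing in for the paper's flow estimator, which is an assumption.

```python
import numpy as np
import cv2

def combine_depth(prev_gray, curr_gray, ordinal_depth):
    """Combine a per-pixel ordinal depth map with motion-parallax depth.

    relational depth = |optical flow| (motion parallax)
    final depth      = ordinal * relational / mean(|optical flow|),
    following the combination rule quoted in the abstract.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    relational = np.linalg.norm(flow, axis=2)
    return ordinal_depth * relational / (relational.mean() + 1e-6)
```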