• Title/Summary/Keyword: 3차원 동영상 (3D video)

Search results: 285

2D Image-Based Individual 3D Face Model Generation and Animation (2차원 영상 기반 3차원 개인 얼굴 모델 생성 및 애니메이션)

  • 김진우;고한석;김형곤;안상철
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.11b / pp.15-20 / 1999
  • In this paper, we present a method that extracts feature points for each facial component from color video of a person's frontal face, generates an individual 3D face model from them, and animates the model according to the facial expression motion. The proposed method extracts facial feature points from the first frame of a 2D video captured with a head-mounted camera designed to film only the front of the face, and computes the coordinates of the 3D facial feature points from these points and a generic 3D face model. Changes of expression are detected from the differences between the feature point positions in the initial frame and those in the following frames. To enable a wider range of applications, the extracted feature points and facial motion are represented in the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) formats of MPEG-4 SNHC, whose phase-1 standardization was recently completed, and the individual face model generation and animation were carried out using these parameters. Since the proposed method works on video captured by a single camera, it can be usefully applied to MPEG-4-based video telephony and video conferencing systems. (A minimal FAP-style displacement sketch follows this entry.)

  • PDF
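The abstract describes expressing feature-point motion, measured against the first (neutral) frame, as MPEG-4 FAP values. The following is a minimal sketch of that idea only: the point names and the choice of FAPU-like normalizers (eye separation, mouth-nose separation) are illustrative assumptions, not the paper's exact FDP/FAP mapping.

```python
import numpy as np

def fap_like_displacements(neutral_pts, current_pts):
    """Express feature-point motion as FAP-style normalized displacements.

    neutral_pts / current_pts: dicts of 2D feature positions (pixels) from the
    first (neutral) frame and a later frame. Names and normalizers are
    illustrative assumptions, not the paper's FDP/FAP definition.
    """
    # FAPU-like normalizers measured on the neutral face:
    # eye separation (ES) and mouth-nose separation (MNS), as in MPEG-4.
    es = np.linalg.norm(neutral_pts["right_eye"] - neutral_pts["left_eye"])
    mns = np.linalg.norm(neutral_pts["mouth_top"] - neutral_pts["nose_tip"])

    faps = {}
    for name in current_pts:
        dx, dy = current_pts[name] - neutral_pts[name]
        # Horizontal motion normalized by eye separation,
        # vertical motion by mouth-nose separation.
        faps[name] = (dx / es, dy / mns)
    return faps

neutral = {k: np.array(v, float) for k, v in {
    "left_eye": (120, 140), "right_eye": (200, 140),
    "nose_tip": (160, 190), "mouth_top": (160, 225),
    "mouth_corner_l": (135, 235)}.items()}
current = dict(neutral, mouth_corner_l=np.array([132.0, 230.0]))  # slight smile
print(fap_like_displacements(neutral, current)["mouth_corner_l"])
```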

Multi-View Video Generation from 2 Dimensional Video (2차원 동영상으로부터 다시점 동영상 생성 기법)

  • Baek, Yun-Ki;Choi, Mi-Nam;Park, Se-Whan;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.1C / pp.53-61 / 2008
  • In this paper, we propose an algorithm for generating multi-view video from conventional 2D video. The color and motion information of an object are used for segmentation, and multi-view video is generated from the segmented objects. In particular, color information is used to extract object boundaries that are hard to obtain from motion information alone. Luminance and chrominance components are used to classify regions that are homogeneous in color, and a pixel-based motion estimation with a measurement window is performed to obtain motion information. We then combine the results of motion estimation and color segmentation and obtain depth information by assigning a motion-intensity value to each segmented region. Finally, we generate multi-view video by applying a rotation transformation to the 2D input images with the depth obtained for each object. The experimental results show that the proposed algorithm outperforms conventional conversion methods. (The depth-assignment step is sketched after this entry.)
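A minimal sketch of the depth-assignment step described above: each segmented region receives a depth value proportional to its mean motion magnitude. The second function synthesizes a neighboring view by a simple horizontal pixel shift; this is a simplified stand-in for the paper's rotation transformation, and the segmentation labels and motion field are assumed to be given.

```python
import numpy as np

def depth_from_motion(labels, flow):
    """Assign one depth value per segmented region from its mean motion magnitude.

    labels: (H, W) int array of segment ids (from the color/motion segmentation).
    flow:   (H, W, 2) array of per-pixel motion vectors.
    Larger motion is treated as closer to the camera (larger depth value).
    """
    mag = np.linalg.norm(flow, axis=2)
    depth = np.zeros(labels.shape, dtype=np.float32)
    for seg in np.unique(labels):
        depth[labels == seg] = mag[labels == seg].mean()
    return depth / (depth.max() + 1e-6)          # normalize to [0, 1]

def shift_view(image, depth, max_disp=8):
    """Simplified stand-in for the rotation transformation: synthesize a
    neighboring view by shifting pixels horizontally in proportion to depth."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disp = (depth * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            nx = min(w - 1, x + disp[y, x])
            out[y, nx] = image[y, x]
    return out
```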

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we suggest a method for synthesizing a 3D face from the motion observed in 2D facial image sequences. An optical flow-based method is used for motion estimation. We extract parameterized motion vectors using the optical flow between adjacent frames of the image sequence in order to estimate the facial features and facial motion in the 2D images. We then combine the parameters of the parameterized motion vectors to estimate the facial motion information, using a parameterized vector model for each facial feature: the eye area, the lip-eyebrow area, and the face area. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize and animate the 3D face. (The optical-flow step is sketched after this entry.)

  • PDF
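A small sketch of the per-region optical-flow step: track features between two adjacent frames with OpenCV's pyramidal Lucas-Kanade tracker and average the vectors inside one facial region into a single motion vector. The rectangular region box and the particular OpenCV calls are assumptions for illustration; the paper defines its own eye, lip-eyebrow, and face areas and its own parameterized models.

```python
import cv2
import numpy as np

def region_motion(prev_gray, next_gray, region_box):
    """Average optical-flow vector inside one facial region (e.g. the lip area).

    prev_gray, next_gray: consecutive grayscale frames.
    region_box: (x, y, w, h) rectangle around the region; an illustrative
    assumption, not the paper's region definition.
    """
    x, y, w, h = region_box
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255

    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    if pts is None:
        return np.zeros(2)

    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    vectors = (nxt[good] - pts[good]).reshape(-1, 2)
    return vectors.mean(axis=0) if len(vectors) else np.zeros(2)
```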

3DTV System Adaptive to User's Environment (사용자 환경에 적응적인 3DTV 시스템)

  • Baek, Yun-Ki;Choi, Mi-Nam;Park, Se-Whan;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.10C / pp.982-989 / 2007
  • In this paper, we propose a 3DTV system that takes the user's viewpoint and display environment into account. The proposed system consists of three parts: a multi-view encoder/decoder, a face tracker, and a 2D/3D converter. The system encodes the multi-view sequence and decodes it according to the user's viewpoint, and it also gives stereopsis to the multi-view image through 2D/3D conversion, which turns the decoded two-dimensional (2D) image into a three-dimensional (3D) one. Experimental results show that the system correctly reconstructs a stereoscopic view that exactly corresponds to the user's viewpoint. (A minimal view-selection sketch follows this entry.)
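The abstract does not spell out how the tracked face position selects the displayed view, so the following is only an assumed minimal mapping: the horizontal position of the viewer's face in the tracking-camera image picks one of the decoded views.

```python
def select_view(face_center_x, frame_width, num_views):
    """Map the tracked face position to a view index.

    An assumed minimal mapping, not the paper's method: the horizontal face
    position in the tracking camera image picks one of `num_views` views.
    """
    t = min(max(face_center_x / frame_width, 0.0), 1.0)   # normalize to [0, 1]
    return min(int(t * num_views), num_views - 1)

# Example: a face tracked at x=480 in a 640-pixel-wide camera image, 8 views.
print(select_view(480, 640, 8))   # -> 6
```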

A technique for Auto find the way of 3-D spatial aviation images contents environment (3차원 공간 동영상 콘텐츠 환경에서의 자동 길 찾기 기법연구)

  • Yeon, Sang-Ho
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.417-420 / 2006
  • Recently, 3D simulation images can be generated from various kinds of image content, so this work investigates methods that lead users easily to a target location in a GIS environment. Aerial photographs and satellite sensor images are used as the basic data. To build the 3D space, the imagery is matched to map coordinates using elevation data from digital topographic files and composed into 3D spatial video content through perspective views that move along fixed roads until the destination is reached. Through this new system, which combines various kinds of spatial data with multimedia content, tourists can simulate paths or locations of interest and visit cultural heritage sites. The system provides guidance for locating cultural assets in a Web environment, is convenient for delivering information to tourists, and takes them to a location automatically. In the future, visitors will be able to easily use the 3D image content on the Internet or at public tour information desks by means of these simulation images.

  • PDF

Design of 3D Stereoscopic Electronic Book Authoring Tool Based on DirectX (DirectX기반 3차원 입체 eBook 영상 및 이미지 저작 도구 설계)

  • Park, Jinwoo;Lee, Keunhyoung;Kim, Jinmo;Hwang, Soyoung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.171-173 / 2015
  • This paper proposes a design method for an authoring tool that builds 3D stereoscopic e-books using DirectX development tools. The proposed authoring tool provides functions such as generation and modification of 3D objects, modification of textures, stereoscopic modes and pictures, and video export. To support these functions, we propose design schemes such as data structures for generating 3D objects, an anaglyph method based on color differences, and a video export method using the BandiCap library. (The anaglyph idea is sketched after this entry.)

  • PDF
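The color-difference anaglyph mentioned in the abstract combines a stereo pair into one image by taking the red channel from the left view and the green/blue (cyan) channels from the right view. The sketch below shows that idea with NumPy; it is an illustration of the anaglyph principle, not the tool's DirectX implementation.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Compose a red-cyan anaglyph from a stereo pair.

    left_rgb, right_rgb: (H, W, 3) uint8 arrays.
    R comes from the left view, G and B (cyan) from the right view.
    """
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]       # R from the left-eye image
    out[..., 1:] = right_rgb[..., 1:]    # G, B from the right-eye image
    return out
```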

Implementation of 3D Video using Time-Shortening Algorithm (시간단축 알고리즘을 통한 3D 동영상 구현)

  • Shin, Jin-Seob;Jeong, Chan-Woong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.123-128 / 2020
  • In this paper, we present a new cone-beam computerized tomography (CBCT) system for reconstructing 3D dynamic images. A system using a cone beam exposes the patient to relatively less radiation than one using a fan beam. In this system, the 3D image is reconstructed in the image processing unit from the X-ray projections at each radiation angle and transmitted to the monitor. In the image processing unit, the three-pass shear-matrix method, a rotation-based technique, is applied to the reconstruction because it involves fewer transcendental functions than a one-pass shear matrix, which reduces the computation time needed to reconstruct the 3D image in the processor. The new system can obtain three to five 3D images per second and reconstructs dynamic 3D images in real time. We also showed that the rotation-based method performs better than the existing reconstruction technique for 3D images, and identified a weakness and a solution for it. (The three-shear rotation identity is sketched after this entry.)
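The three-pass shear idea rests on the standard decomposition of a rotation into three shears, each of which moves whole rows or columns of an image and is therefore cheap to apply. The snippet below only verifies that identity numerically; it is not the paper's CBCT reconstruction pipeline.

```python
import numpy as np

def three_shear_rotation(theta):
    """Compose a 2D rotation from three shear matrices.

    R(theta) = Sx(a) @ Sy(b) @ Sx(a) with a = -tan(theta/2), b = sin(theta).
    """
    a = -np.tan(theta / 2.0)
    b = np.sin(theta)
    sx = np.array([[1.0, a], [0.0, 1.0]])   # shear along x
    sy = np.array([[1.0, 0.0], [b, 1.0]])   # shear along y
    return sx @ sy @ sx

theta = np.deg2rad(30)
direct = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
print(np.allclose(three_shear_rotation(theta), direct))   # True
```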

High Performance Reflection Effect Processing for Moving Pictures in 3 Dimensional Graphics (3차원 그래픽스의 동영상에 대한 반사 효과의 고속처리)

  • Lee, Seung-Hee;Lee, Keon-Myung
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.3 / pp.444-449 / 2009
  • With the advance of high-performance computing hardware, many applications have emerged that exploit real-time computer graphics capabilities. This paper is concerned with an effective way to realize reflection effects when moving pictures are played inside a 3D computer graphics world. The method determines, in a geometric way, the location of the projection plane onto which the playing areas of the moving pictures are mapped, and then realizes the reflection effect with texture mapping. Compared with the traditional stencil-buffer-based reflection method, the processing time of the proposed method does not deteriorate significantly for models containing moving pictures and reflective surfaces, and its throughput improved by at least 30% and at most 127% for the models used in the comparative study. (A minimal plane-reflection helper is sketched after this entry.)
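A minimal geometric helper related to the description above: mirroring points across a planar reflector, the kind of mirroring needed before the video frame is texture-mapped onto the reflection plane. The plane parameters and the use of plain NumPy are assumptions for illustration; the paper's own projection-plane construction and rendering pipeline are not reproduced.

```python
import numpy as np

def reflect_points(points, plane_point, plane_normal):
    """Mirror 3D points across a plane (the reflective surface).

    points: (N, 3) array of vertex positions.
    plane_point, plane_normal: any point on the plane and its normal.
    """
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = (points - plane_point) @ n          # signed distance of each point
    return points - 2.0 * d[:, None] * n    # subtract twice the normal offset

quad = np.array([[0.0, 1.0, 0.0], [1.0, 2.0, 0.0]])
print(reflect_points(quad, plane_point=[0, 0, 0], plane_normal=[0, 1, 0]))
# [[ 0. -1.  0.]
#  [ 1. -2.  0.]]
```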

Compression of 3D color integral images using 2D referencing technique (2차원 참조 기법을 이용한 3D 컬러 집적 영상의 압축)

  • Kim, Jong-Ho;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.12 / pp.2693-2700 / 2009
  • This paper proposes an effective compression method so that 3D integral images, which contain a large amount of data captured through a lens array, can be used in various applications. Conventional still-image compression methods show low coding efficiency and visual quality because they cannot remove the correlation between elemental images. Among moving-picture compression methods, 1D scanning techniques that turn the elemental images into a sequence are not sufficient to remove the directional correlation between elemental images. The proposed method orders the elemental images of an integral image with a 2D referencing technique and compresses them using the multi-frame referencing of H.264/AVC. The proposed 2D referencing technique selects the optimal reference image according to the vertical, horizontal, and diagonal correlation between elemental images. Experimental results show that compressing the sequence of elemental images gives better coding efficiency than still-image compression, and that the proposed 2D referencing technique is superior to the 1D scanning methods in both objective performance and visual quality. (The reference-selection idea is sketched after this entry.)
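A small sketch of the reference-selection idea: for each elemental image in the 2D grid, pick the best-matching neighbor (left, top, or top-left diagonal) as its reference. SAD is used as the similarity score here as an assumption; the actual encoding with H.264/AVC multi-frame referencing is not reproduced.

```python
import numpy as np

def choose_references(elemental, metric=None):
    """Pick, for each elemental image, its best 2D reference among neighbors.

    elemental: (rows, cols, H, W) array of elemental images from the lens array.
    Returns, for every (r, c), the offset of the chosen reference
    (left, top, or top-left diagonal), or None for the first image.
    """
    if metric is None:
        metric = lambda a, b: np.abs(a.astype(int) - b.astype(int)).sum()  # SAD

    rows, cols = elemental.shape[:2]
    refs = {}
    for r in range(rows):
        for c in range(cols):
            candidates = [(dr, dc) for dr, dc in [(0, -1), (-1, 0), (-1, -1)]
                          if r + dr >= 0 and c + dc >= 0]
            if not candidates:
                refs[(r, c)] = None           # first elemental image: intra-coded
                continue
            refs[(r, c)] = min(candidates,
                               key=lambda o: metric(elemental[r, c],
                                                    elemental[r + o[0], c + o[1]]))
    return refs
```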

Feature-Based Light and Shadow Estimation for Video Compositing and Editing (동영상 합성 및 편집을 위한 특징점 기반 조명 및 그림자 추정)

  • Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.1 / pp.1-9 / 2012
  • Video-based modeling and rendering, developed to produce photo-realistic video content, has been one of the important research topics in computer graphics and computer vision. To smoothly combine original input video clips with 3D graphic models, geometric information about the light sources and cameras used to capture the real-world scene is essential. In this paper, we present a simple technique to estimate the position and orientation of an optimal light source from the topology of objects and the silhouettes of shadows that appear in the original video clips. The technique can generate well-matched shadows as well as render the inserted models under the estimated light sources. Shadows are an important visual cue that empirically indicates the relative location of objects in 3D space. Thus, our method can enhance the realism of the final composited videos through the proposed real-time shadow generation and rendering algorithms. (A standard planar-shadow projection is sketched after this entry.)
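Once a point light has been estimated, a standard planar-shadow projection matrix can drop the inserted model's geometry onto a ground plane to draw its shadow. The sketch below is that textbook construction used as an illustration; the plane and light values are made up, and it is not the paper's shadow renderer.

```python
import numpy as np

def planar_shadow_matrix(plane, light):
    """4x4 matrix that projects geometry onto a plane away from a point light.

    plane: (a, b, c, d) with plane equation a*x + b*y + c*z + d = 0.
    light: (lx, ly, lz, 1) homogeneous position of the estimated point light.
    Standard construction: M = (plane . light) * I - outer(light, plane).
    """
    plane = np.asarray(plane, float)
    light = np.asarray(light, float)
    dot = plane @ light
    return dot * np.eye(4) - np.outer(light, plane)

# Example: ground plane y = 0, estimated light at (2, 10, 1).
M = planar_shadow_matrix((0, 1, 0, 0), (2, 10, 1, 1))
vertex = np.array([1.0, 3.0, 0.0, 1.0])          # a point on the inserted model
shadow = M @ vertex
print(shadow[:3] / shadow[3])                    # lands on the y = 0 plane
```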