• Title/Summary/Keyword: 2D 영상 (2D image)


3D Conversion of 2D H.264 Video (2D H.264 동영상의 3D 입체 변환)

  • Hong, Ho-Ki;Baek, Yun-Ki;Lee, Seung-Hyun;Kim, Dong-Wook;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.12C
    • /
    • pp.1208-1215
    • /
    • 2006
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded with H.264, instead of using the conventional stereo-camera process. The motion information of each frame can be obtained from the motion vectors provided in most videos encoded with MPEG standards. In particular, H.264 streams provide accurate motion vectors because a variety of block sizes is available. The 2D-to-3D video conversion algorithm proposed in this paper creates left and right images corresponding to the original image by using a cut-detection method, delay factors, motion types, and image types. The motion type and direction are usually consistent within a given cut because the frames in the same cut are highly correlated. We show the improved performance of the proposed algorithm through experimental results.
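
As a rough illustration of the delay-based idea described in this abstract, the following Python snippet (not the authors' implementation) picks a delayed frame per eye from decoder motion vectors; the frame list, the per-frame mean motion vectors, and the delay heuristic are assumptions made for the example.

```python
# A minimal sketch of delay-based 2D-to-3D conversion driven by motion vectors.
# `frames` and `motion_vectors` are hypothetical inputs taken from a decoder.
import numpy as np

def stereo_from_motion(frames, motion_vectors, max_delay=3):
    """frames: list of HxW arrays; motion_vectors: list of per-frame mean (dx, dy)."""
    left, right = [], []
    for t, frame in enumerate(frames):
        dx, _dy = motion_vectors[t]
        # Assumed heuristic: faster horizontal motion -> smaller frame delay.
        speed = abs(dx) + 1e-6
        delay = int(np.clip(max_delay / speed, 1, max_delay))
        delayed = frames[max(t - delay, 0)]
        # Assumed convention: motion direction decides which eye gets the delayed frame.
        if dx >= 0:
            left.append(frame)
            right.append(delayed)
        else:
            left.append(delayed)
            right.append(frame)
    return left, right
```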

Geometric analysis and anti-aliasing filter for stereoscopic 3D image scaling (스테레오 3D 영상 스케일링에 대한 기하학적 분석 및 anti-aliasing 필터)

  • Kim, Wook-Joong;Hur, Nam-Ho;Kim, Jin-Woong
    • Journal of Broadcast Engineering
    • /
    • v.14 no.5
    • /
    • pp.638-649
    • /
    • 2009
  • Image resizing (or scaling) is one of the most essential issues for the success of visual services because image data has to be adapted to a variety of display characteristics. For 2D imaging, scaling is generally accomplished by 2D image re-sampling (i.e., up-/down-sampling). However, for stereoscopic 3D images, 2D re-sampling methods are inadequate because they do not take the third dimension, depth, into account. In practice, stereoscopic 3D image scaling is performed on the left/right images rather than on the stereoscopic 3D image itself, because the left/right images are the only tangible data. In this paper, we analyze stereoscopic 3D image scaling from two aspects: geometric deformation and frequency-domain aliasing. A number of 3D displays with various screen dimensions are available on the market, and as the variety of displays grows, efficient stereoscopic 3D image scaling becomes more important. We present recommendations for 3D scaling derived from the geometric analysis and propose a disparity-adaptive filter against the aliasing that can occur during the image scaling process.
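
The disparity-adaptive anti-aliasing idea can be illustrated with a small sketch; the snippet below is an assumed heuristic, not the paper's filter: each view is pre-filtered before subsampling, with stronger low-pass filtering where the disparity magnitude is larger.

```python
# A minimal sketch of disparity-adaptive pre-filtering before downscaling one view.
# Parameter values (base_sigma, k) are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_view_antialiased(view, disparity, scale=0.5, base_sigma=0.8, k=2.0):
    """view: HxW float image of one eye; disparity: HxW map; scale: downscaling factor < 1."""
    # Baseline low-pass filter matched to the scale factor, plus a stronger variant.
    weak = gaussian_filter(view, sigma=base_sigma / scale)
    strong = gaussian_filter(view, sigma=k * base_sigma / scale)
    # Blend per pixel: larger |disparity| -> stronger pre-filtering before subsampling.
    w = np.abs(disparity) / (np.abs(disparity).max() + 1e-6)
    filtered = (1 - w) * weak + w * strong
    step = int(round(1 / scale))
    return filtered[::step, ::step]
```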

Usefulness of Three-Dimensional Maximal Intensity Projection (MIP) Reconstruction Image in Breast MRI (유방자기공명영상에서 3 차원 최대 강도 투사 재건 영상의 유용성)

  • Kim, Hyun-Sung;Kang, Bong-Joo;Kim, Sung-Hun;Choi, Jae-Jeong;Lee, Ji-Hye
    • Investigative Magnetic Resonance Imaging
    • /
    • v.13 no.2
    • /
    • pp.183-189
    • /
    • 2009
  • Purpose : To evaluate the usefulness of the three-dimensional (3D) maximal intensity projection (MIP) reconstruction method in breast MRI. Materials and Methods : A total of 54 breasts of 27 consecutive patients were examined by breast MRI. Breast MRI was performed using a GE Signa Excite Twin Speed (GE Medical Systems, Wisconsin, USA) 1.5T scanner. We obtained routine breast MR images including axial T2WI, T1WI, sagittal T1FS, dynamic contrast-enhanced T1FS, and subtraction images. 3D MIP reconstruction images were obtained as follows: subtraction images were generated from the T1FS and early-phase contrast-enhanced T1FS images, and 3D MIP images were then obtained from the subtraction images on an Advantage Workstation (GE Medical Systems). We detected and analyzed the lesions in the 3D MIP and routine MR images according to the ACR BI-RADS® MRI lexicon, compared the findings of 3D MIP with those of routine breast MR images, and evaluated whether 3D MIP provided additional information over routine MR images. Results : 3D MIP images detected 43 of the 56 masses found on routine MR images (76.8%). For non-mass-like enhancement, 3D MIP detected 17 of 20 lesions (85%). There were 169 foci on 3D MIP images and 109 foci on routine MR images. 3D MIP images detected 14 of 23 category 3 lesions (60.9%), 11 of 16 category 4 lesions (68.8%), and 28 of 28 category 5 lesions (100%). In analyzing the enhancing lesions on 3D MIP images, the assessment categories of the lesions correlated with the results on routine MR images (p-value < 0.0001). 3D MIP detected two additional daughter nodules that were described as foci on routine MR images and one additional nodule that was not detected on routine MR images. Conclusion : The 3D MIP image has some limitations but is useful as an additional image alongside routine breast MR images.
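
For reference, maximal intensity projection itself reduces to a per-ray maximum over a subtraction volume; the short sketch below illustrates only this step, with illustrative array names rather than the study's actual data.

```python
# A minimal sketch of MIP over a contrast-enhanced subtraction volume.
import numpy as np

def mip_from_subtraction(pre_volume, post_volume, axis=0):
    """pre_volume, post_volume: 3D arrays (slices x rows x cols) from the same geometry."""
    subtraction = np.clip(post_volume.astype(float) - pre_volume.astype(float), 0, None)
    return subtraction.max(axis=axis)   # 2D MIP image along the chosen projection axis
```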


Method for Applying Wavefront Parallel Processing on Cubemap Video (큐브맵 영상에 Wavefront 병렬 처리를 적용하는 방법)

  • Hong, Seok Jong;Park, Gwang Hoon
    • Journal of Broadcast Engineering
    • /
    • v.22 no.3
    • /
    • pp.401-404
    • /
    • 2017
  • 360 VR video is represented in projection formats such as the equirectangular or cubemap layout. Although these formats have different characteristics, they have in common that the resolution is higher than that of normal 2D video. Coding/decoding 360 VR video therefore takes much longer than 2D video, so parallel processing techniques are essential when coding 360 VR video. HEVC, the state-of-the-art 2D video codec, standardizes Wavefront Parallel Processing (WPP) for parallelization. This technique is optimized for 2D video and does not show optimal performance when used on 3D video, so a WPP method suited to 3D video is required. In this paper, we propose a WPP coding/decoding method that improves WPP performance on cubemap-format 3D video. The experiments were conducted on the HEVC reference software HM 12.0. The experimental results show no significant PSNR loss compared with the existing WPP, while the coding complexity is further reduced by 15% to 20%. The proposed method is expected to be included in future 3D VR video codecs.
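
The standard HEVC WPP dependency rule that the paper builds on can be illustrated with a small scheduling sketch; the cubemap-specific modification proposed above is not reproduced here.

```python
# A minimal sketch of the WPP scheduling order in HEVC: a CTU (r, c) may start once
# its left neighbour and the CTU two columns ahead in the row above are finished,
# so row r lags row r-1 by two CTUs.
def wavefront_schedule(rows, cols):
    """Return CTU coordinates grouped by the earliest wave in which they can run."""
    waves = {}
    for r in range(rows):
        for c in range(cols):
            wave = c + 2 * r          # classic WPP offset of two CTUs per row
            waves.setdefault(wave, []).append((r, c))
    return [waves[w] for w in sorted(waves)]

# Example: in a 4x6 CTU grid, CTUs (0, 2) and (1, 0) fall in the same wave and can
# be processed in parallel.
```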

Depth Image Interpolation using Fusion of color and depth Information (고품질의 고해상도 깊이 영상을 위한 컬러 영상과 깊이 영상을 결합한 깊이 영상 보간법)

  • Kim, Ji-Hyun;Choi, Jin-Wook;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2011.11a
    • /
    • pp.8-10
    • /
    • 2011
  • Among the various ways of acquiring 3D content, the 2D-plus-Depth structure has recently been studied actively because of its advantage of enabling multi-view image generation. To obtain high-quality 3D video with this structure, it is most important to produce a high-quality depth image. Time-of-Flight (ToF) depth sensors are used to acquire depth images; they can capture depth information in real time, but suffer from low resolution and noise. Therefore, an up-conversion that preserves the characteristics of the depth image is needed to produce high-quality 3D content. Joint Bilateral Upsampling (JBU) is commonly used to increase the resolution of the depth image, but it is not suitable for obtaining depth images upscaled by a factor of four or more. Hence, to obtain a high-resolution depth image, a guide image is created by interpolation and then Bilateral Filtering (BF) is applied to improve image quality. In this paper, we propose a method that builds a guide image preserving the characteristics of the depth image through an interpolation that fuses the color and depth images obtained from the 2D-plus-Depth structure. Experimental results show that the proposed method preserves the characteristics of the depth image better than conventional interpolation methods in both edge and smooth regions.
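
A minimal (and deliberately slow) sketch of the Joint Bilateral Upsampling baseline mentioned in the abstract is shown below; the window radius and sigma values are illustrative assumptions, not the paper's settings, and the proposed fusion-based guide image is not reproduced.

```python
# A minimal sketch of joint bilateral upsampling: low-resolution depth is upsampled
# using a high-resolution grayscale guide image for the range term.
import numpy as np

def jbu(depth_lr, color_hr, scale, radius=2, sigma_s=1.0, sigma_r=0.1):
    """depth_lr: (h, w) depth; color_hr: (h*scale, w*scale) grayscale guide; scale: int."""
    H, W = color_hr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # Low-resolution depth sample corresponding to this neighbour.
                    ly = min(max((y + dy * scale) // scale, 0), depth_lr.shape[0] - 1)
                    lx = min(max((x + dx * scale) // scale, 0), depth_lr.shape[1] - 1)
                    # Guide sample at the neighbour's high-resolution position.
                    gy = min(max(y + dy * scale, 0), H - 1)
                    gx = min(max(x + dx * scale, 0), W - 1)
                    ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    wr = np.exp(-((color_hr[y, x] - color_hr[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lr[ly, lx]
                    den += ws * wr
            out[y, x] = num / (den + 1e-8)
    return out
```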


Image Coding Using DCT Map and Binary Tree-structured Vector Quantizer (DCT 맵과 이진 트리 구조 벡터 양자화기를 이용한 영상 부호화)

  • Jo, Seong-Hwan;Kim, Eung-Seong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.81-91
    • /
    • 1994
  • A DCT map and a new codebook design algorithm based on the two-dimensional discrete cosine transform (2D DCT) are presented for an image vector quantizer coder. We divide the image into smaller subblocks and, using the 2D DCT, separate them into blocks that are hard to code but carry most of the visual information and blocks that are easy to code but carry little visual information, from which a DCT map is made. According to this map, the significant features of the training images are extracted using the 2D DCT. A codebook is generated by partitioning the training set into a binary tree; each training vector at a nonterminal node of the tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the 'Lenna' and 'Boat' images, the new algorithm reduces computation time and shows better picture quality, by 0.45 dB and 0.33 dB relative to PNN and by 0.05 dB and 0.1 dB relative to CVQ, respectively.
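
The DCT-map classification step can be sketched as follows; the block size and energy threshold are illustrative assumptions, and the binary tree-structured codebook design is not reproduced.

```python
# A minimal sketch of a DCT map: each block's 2D DCT AC energy decides whether it is
# "hard to code" (visually significant) or "easy to code".
import numpy as np
from scipy.fft import dctn

def dct_map(image, block=8, threshold=1000.0):
    """image: HxW array with H, W multiples of `block`. Returns a boolean map per block."""
    H, W = image.shape
    bmap = np.zeros((H // block, W // block), dtype=bool)
    for by in range(0, H, block):
        for bx in range(0, W, block):
            coeffs = dctn(image[by:by + block, bx:bx + block], norm='ortho')
            ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2   # exclude the DC term
            bmap[by // block, bx // block] = ac_energy > threshold
    return bmap
```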


A 3D Face Reconstruction and Tracking Method using the Estimated Depth Information (얼굴 깊이 추정을 이용한 3차원 얼굴 생성 및 추적 방법)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • The KIPS Transactions:PartB
    • /
    • v.18B no.1
    • /
    • pp.21-28
    • /
    • 2011
  • A 3D face shape derived from 2D images may be useful in many applications, such as face recognition, face synthesis, and human-computer interaction. To this end, we develop a fast 3D Active Appearance Model (3D-AAM) method using depth estimation. The training images include specific 3D face poses that are extremely different from one another. The depth information of the landmarks is estimated from the training image sequence using an approximated Jacobian matrix, and it is added in the test phase to deal with the 3D pose variations of the input face. Our experimental results show that the proposed method can fit the face shape, including facial expression and 3D pose variations, more efficiently than the typical AAM and can estimate an accurate 3D face shape from images.
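
A minimal sketch of the underlying idea, assuming per-landmark depths have already been estimated: the 2D landmarks are lifted to 3D, rotated to model pose, and re-projected. This is not the paper's 3D-AAM fitting procedure.

```python
# A minimal sketch: estimated depths turn 2D landmarks into a 3D shape that can
# explain pose variation. The simple yaw rotation and orthographic projection are
# illustrative assumptions.
import numpy as np

def project_with_depth(landmarks_2d, depths, yaw_rad):
    """landmarks_2d: (N, 2) array; depths: (N,) estimated per-landmark depth."""
    shape_3d = np.column_stack([landmarks_2d, depths])        # (N, 3) lifted shape
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, 0, s],
                  [0, 1, 0],
                  [-s, 0, c]])                                # rotation about the y axis
    rotated = shape_3d @ R.T
    return rotated[:, :2]                                     # orthographic projection
```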

3D Image Conversion of 2D Still Image based-on Differential Area-Moving Scheme (차등적 영역 이동기법을 이용한 2차원 정지영상의 3차원 입체영상 변환)

  • 이종호;김은수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.11A
    • /
    • pp.1938-1945
    • /
    • 2001
  • In this paper, a new scheme for converting 2D input images into stereoscopic 3D images using a differential shifting method is proposed. First, the relative depth information is estimated from disparity and occlusion information of the input stereo images, and each image object is then segmented by gray level using the estimated information. Finally, by differentially shifting the segmented objects according to the horizontal parallax, a stereoscopic 3D image with optimal stereopsis is reconstructed. Experimental results show that the horizontal disparity of the stereo image reconstructed with the proposed scheme is improved by about 1.6 dB in PSNR compared with the given input image. In experiments using a commercial stereo viewer, the reconstructed stereoscopic 3D images, in which each segmented object is horizontally shifted within a range of 4 to 5 pixels, are also found to have the most improved stereopsis.
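
The differential region-shifting step can be sketched as below; the segment labels and per-segment relative depths are assumed inputs, and the 4 to 5 pixel parallax range from the abstract is used only as a default.

```python
# A minimal sketch of differential region shifting: pixels in nearer segments are
# shifted horizontally by more than farther ones to synthesise a second view.
import numpy as np

def shift_segments(image, labels, depth_per_label, max_shift=5):
    """image: HxW array; labels: HxW int segment map; depth_per_label: label -> depth in [0, 1]."""
    right_view = np.zeros_like(image)
    # Paint far segments first so nearer (larger-depth) segments overwrite them.
    for label, depth in sorted(depth_per_label.items(), key=lambda kv: kv[1]):
        shift = int(round(depth * max_shift))        # nearer segment -> larger parallax
        mask = labels == label
        shifted = np.roll(np.where(mask, image, 0), shift, axis=1)
        shifted_mask = np.roll(mask, shift, axis=1)
        right_view[shifted_mask] = shifted[shifted_mask]
    return right_view
```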


Exploring the Immersion Degree Difference Between 3D and 2D: Focus on Action-Adventure Game (2D영상과 3D 입체영상에서의 액션 어드벤처 게임 몰입도 비교)

  • Kwon, Hyeog-In;Rhee, Hyun-Jung;Park, Jin-Wan
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.1
    • /
    • pp.157-164
    • /
    • 2011
  • Since the movie "Avatar" became a worldwide success, people's interest in 3D stereoscopic vision has increased explosively. However, it is hard to predict how long this tremendous attention to stereoscopic 3D will last, because consumers have accumulated experience and prevailing perceptions shaped by various social and cultural environmental factors. In this paper, we examine how people interact with stereoscopic 3D through an empirical study. Using Jannett (2009)'s immersion questionnaire, we measure how differently people become immersed while playing a game in stereoscopic 3D and in 2D.

Performance Improvement for 2-D Scattering Center Extraction and ISAR Image Formation for a Target in Radar Target Recognition (레이다 표적 인식에서 표적에 대한 2차원 산란점 추출 및 ISAR 영상 형성에 대한 성능 개선)

  • Shin, Seung-Yong;Lim, Ho;Myung, Noh-Hoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.8
    • /
    • pp.984-996
    • /
    • 2007
  • This paper presents techniques for 2-D scattering center extraction and 2-D ISAR (Inverse SAR) image formation from the wave scattered by a target. In general, the 2-D IFFT is widely used to obtain the 2-D scattering centers and the ISAR image of a target, but this method has the drawback of poor resolution. To overcome these shortcomings of the FT (Fourier transform)-based method, various high-resolution signal processing techniques have been developed. In this paper, 2-D scattering center extraction and ISAR image formation algorithms such as 2-D MEMP (Matrix Enhancement and Matrix Pencil) and 2-D ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) are described. To show the performance of each algorithm, we use the scattered waves of ideal point scatterers and an F-18 aircraft to estimate the 2-D scattering centers and obtain 2-D ISAR images.
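
For contrast with the high-resolution estimators (2-D MEMP, 2-D ESPRIT), the 2-D IFFT baseline mentioned in the abstract can be sketched as follows; the windowing and zero-padding choices are illustrative assumptions.

```python
# A minimal sketch of 2-D IFFT ISAR image formation: backscattered data sampled over
# frequency and aspect angle are windowed, zero-padded, and transformed to a
# range/cross-range image.
import numpy as np

def isar_ifft(scattered_field, zero_pad=4):
    """scattered_field: (num_freqs, num_angles) complex samples."""
    nf, na = scattered_field.shape
    window = np.outer(np.hanning(nf), np.hanning(na))      # taper to reduce sidelobes
    padded_shape = (zero_pad * nf, zero_pad * na)
    image = np.fft.fftshift(np.fft.ifft2(scattered_field * window, s=padded_shape))
    return 20 * np.log10(np.abs(image) + 1e-12)             # ISAR image in dB
```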