• Title/Summary/Keyword: Stereoscopic Images

From Broken Visions to Expanded Abstractions (망가진 시선으로부터 확장된 추상까지)

  • Hattler, Max
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.697-712
    • /
    • 2017
  • In recent years, film and animation for cinematic release have embraced stereoscopic vision and the three-dimensional depth it creates for the viewer. The maturation of consumer-level virtual reality (VR) technology simultaneously spurred a wave of media productions set within 3D space, ranging from computer games to pornographic videos, to the Academy Award-nominated animated VR short film Pearl. All of these works rely on stereoscopic fusion through stereopsis, that is, the perception of depth produced by the brain from left and right images with the amount of binocular parallax that corresponds to our eyes. They aim to emulate normal human vision. Within more experimental practices, however, a fully rendered 3D space might not always be desirable. In my own abstract animation work, I tend to favour 2D flatness and the relative obfuscation of spatial relations it affords, as this underlines the visual abstraction I am pursuing. Not being able to immediately understand what is in front and what is behind can strengthen the desired effects. In 2015, Jeffrey Shaw challenged me to create a stereoscopic work for Animamix Biennale 2015-16, which he co-curated. This prompted me to question how stereoscopy, rather than hyper-defining space within three dimensions, might itself be used to achieve a confusion of spatial perception. And in turn, how abstract and experimental moving image practices can benefit from stereoscopy to open up new visual and narrative opportunities, if used in ways that break with, or go beyond, stereoscopic fusion. Noteworthy works which exemplify a range of non-traditional, expanded approaches to binocular vision will be discussed below, followed by a brief introduction of the stereoscopic animation loop III=III which I created for Animamix Biennale. The techniques employed in these works might serve as a toolkit for artists interested in exploring a more experimental, expanded engagement with stereoscopy.
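
As a rough illustration of how left/right images drive (or can be made to frustrate) stereoscopic fusion, the sketch below builds a red-cyan anaglyph from a stereo pair and, in a second variant, feeds each eye mismatched content. This is not Hattler's technique; the file names and the flip used to break fusion are hypothetical.

```python
# Sketch: build a red-cyan anaglyph from a stereo pair, with an optional
# mismatch between the eyes to frustrate (rather than support) stereopsis.
# File names and the "mismatch" transform are hypothetical illustrations.
import numpy as np
from PIL import Image

def load_gray(path):
    return np.asarray(Image.open(path).convert("L"), dtype=np.uint8)

def anaglyph(left, right):
    """Left eye -> red channel, right eye -> green + blue (cyan)."""
    h, w = left.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    out[..., 0] = left          # red
    out[..., 1] = right         # green
    out[..., 2] = right         # blue
    return out

left = load_gray("frame_left.png")
right = load_gray("frame_right.png")

# Conventional fused view: matching left/right images.
fused = anaglyph(left, right)

# "Broken" view: feed each eye unrelated content (here, a flipped copy),
# so binocular rivalry replaces clean stereoscopic fusion.
broken = anaglyph(left, np.fliplr(left))

Image.fromarray(fused).save("anaglyph_fused.png")
Image.fromarray(broken).save("anaglyph_broken.png")
```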

Design and Implementation of High-Resolution Integral Imaging Display System using Expanded Depth Image

  • Song, Min-Ho;Lim, Byung-Muk;Ryu, Ga-A;Ha, Jong-Sung;Yoo, Kwan-Hee
    • International Journal of Contents
    • /
    • v.14 no.3
    • /
    • pp.1-6
    • /
    • 2018
  • For 3D display applications, auto-stereoscopic display methods that can provide 3D images without glasses have been actively developed. This paper is concerned with developing a display system for elemental images of real space using integral imaging. Unlike the conventional method, which downsamples the color image to the resolution of the generated depth image, we minimize the loss of original color image data by generating an enlarged depth image with interpolation methods. Our method is implemented efficiently with a GPU parallel processing technique in OpenCL to rapidly generate the large amount of elemental image data. Experimental results show that the proposed system displays higher-quality integral images than those generated by previous methods.
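
The central step described above, enlarging the low-resolution depth map to the color image's resolution instead of shrinking the color image, can be sketched as follows. This is a plain NumPy approximation of the idea; the bilinear kernel is an assumption, and the paper's GPU/OpenCL parallelization is omitted.

```python
# Sketch: enlarge a low-resolution depth map to the color image's resolution
# by bilinear interpolation, so no color detail has to be thrown away.
# The interpolation kernel is an assumption; the paper's GPU/OpenCL
# implementation is replaced here by a plain NumPy version.
import numpy as np

def enlarge_depth(depth, out_h, out_w):
    in_h, in_w = depth.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    d = depth.astype(np.float64)
    top = d[np.ix_(y0, x0)] * (1 - wx) + d[np.ix_(y0, x1)] * wx
    bot = d[np.ix_(y1, x0)] * (1 - wx) + d[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Example: a 240x320 depth map enlarged to a 1080x1920 color resolution.
low_depth = np.random.rand(240, 320)
big_depth = enlarge_depth(low_depth, 1080, 1920)
print(big_depth.shape)   # (1080, 1920)
```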

Performance Test of 2-Dimensional PIV and 3-Dimensional PIV using Standard Images (표준화상을 이용한 2차원 PIV와 3차원 PIV계측 및 성능비교검정)

  • Doh, D.H.;Hwang, T.G.;Song, J.S.;Baek, T.S.;Pyun, Y.B.
    • Proceedings of the KSME Conference
    • /
    • 2003.11a
    • /
    • pp.646-651
    • /
    • 2003
  • A quantitative performance test of the conventional 2D-PIV and the hybrid angular 3D-PIV (stereoscopic PIV) was carried out. LES data sets of an impinging jet, provided on the webpage (http://www.vsj.or.jp/piv) for the PIV Standard Project, were used to generate virtual images. The generated virtual images were used for the 2D-PIV and 3D-PIV measurements. The results showed that the average values obtained by 2D-PIV are closer to the LES data than those obtained by 3D-PIV, but that the turbulent properties obtained by 2D-PIV are much more strongly underestimated than those obtained by 3D-PIV.
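
For context on what a 2D-PIV measurement of such virtual images involves, the sketch below estimates the displacement of a single interrogation window by FFT-based cross-correlation. The window size and the synthetic shift are arbitrary; this is not the Standard Project's evaluation code.

```python
# Sketch: estimate the displacement of one interrogation window between two
# PIV frames by FFT-based cross-correlation (the core of 2D PIV). Window
# size and the synthetic particle images are arbitrary, for illustration only.
import numpy as np

def window_displacement(win_a, win_b):
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so displacements can be negative.
    dy = peak[0] if peak[0] <= corr.shape[0] // 2 else peak[0] - corr.shape[0]
    dx = peak[1] if peak[1] <= corr.shape[1] // 2 else peak[1] - corr.shape[1]
    return dx, dy

rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))  # known shift: dy=2, dx=3
print(window_displacement(frame_a, frame_b))           # expected (3, 2)
```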

Performance Test on 2-Dimensional PIV and 3-Dimensional PIV Using Standard Images (표준영상을 이용한 2차원 PIV와 3차원 PIV 성능시험)

  • Hwang, Tae-Gyu;Doh, Deog-Hee
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.28 no.11
    • /
    • pp.1315-1321
    • /
    • 2004
  • A quantitative performance test of the conventional 2D-PIV and the hybrid angular 3D-PIV (stereoscopic PIV) was carried out. LES data sets of an impinging jet, provided on the webpage (http://www.vsj.or.jp/piv) for the PIV Standard Project, were used to generate virtual images. The generated virtual images were used for the 2D-PIV and 3D-PIV measurement tests. It has been shown that the average values obtained by 2D-PIV are slightly closer to the LES data than those obtained by 3D-PIV, but that the turbulent properties obtained by 2D-PIV are much more strongly underestimated than those obtained by 3D-PIV.
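
The step that distinguishes the hybrid angular (stereoscopic) 3D-PIV from the 2D method is the reconstruction of the out-of-plane velocity component from the two cameras' in-plane displacements. A simplified sketch for two symmetric cameras at an assumed half-angle follows; the paper's actual camera arrangement and calibration are not reproduced here.

```python
# Sketch: reconstruct the three velocity components from two symmetric
# angular-stereoscopic PIV cameras placed at +/- theta about the surface
# normal. The angle and the measured 2D displacements are placeholders.
import math

def reconstruct_3c(d1, d2, theta_deg):
    """d1, d2: (dx, dy) displacements seen by camera 1 and camera 2,
    already dewarped to the light-sheet plane."""
    t = math.tan(math.radians(theta_deg))
    dx = (d1[0] + d2[0]) / 2.0          # in-plane, along the camera baseline
    dy = (d1[1] + d2[1]) / 2.0          # in-plane, normal to the baseline
    dz = (d2[0] - d1[0]) / (2.0 * t)    # out-of-plane component
    return dx, dy, dz

# Example: cameras at +/- 30 degrees, slightly different x-displacements.
print(reconstruct_3c((2.0, 1.0), (3.0, 1.0), 30.0))
```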

Development of a Multi-view Image Generation Simulation Program Using Kinect (키넥트를 이용한 다시점 영상 생성 시뮬레이션 프로그램 개발)

  • Lee, Deok Jae;Kim, Minyoung;Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.818-819
    • /
    • 2014
  • Recently, many studies have been conducted on utilizing DIBR (depth-image-based rendering) to generate intermediate images for three-dimensional displays that do not require stereoscopic glasses. However, prior works have used expensive depth cameras to obtain high-resolution depth images, since the DIBR-based intermediate image generation method requires accurate depth information. In this study, we developed a simulation program that generates multi-view intermediate images from the depth and color images of a Microsoft Kinect. The simulation supports the acquisition of multi-view intermediate images from the Kinect's low-resolution depth and color images and provides an integrated service for evaluating the quality of the intermediate images. This paper describes the architecture and implementation of the simulation program.
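
A minimal sketch of the DIBR idea the simulation is built around, shifting each color pixel horizontally by a disparity derived from its depth to synthesize an intermediate view, is shown below. Rectified horizontal cameras are assumed, the disparity scale is a made-up constant, and holes are simply left empty rather than filled.

```python
# Sketch: naive depth-image-based rendering (DIBR). Each color pixel is
# shifted horizontally by a disparity proportional to its depth to form an
# intermediate view. The disparity scale is a made-up constant and holes
# are left as zeros; real systems fill them (e.g. by inpainting).
import numpy as np

def render_intermediate(color, depth, baseline_frac, max_disp=16):
    """color: HxWx3 uint8, depth: HxW in [0, 1] (1 = near),
    baseline_frac: position of the virtual camera in [0, 1]."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    disp = (depth * max_disp * baseline_frac).astype(int)
    xs = np.arange(w)
    for y in range(h):
        tx = xs - disp[y]                    # target columns after the shift
        ok = (tx >= 0) & (tx < w)
        out[y, tx[ok]] = color[y, xs[ok]]
    return out

# Example with random data standing in for Kinect color/depth frames.
rng = np.random.default_rng(1)
color = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
depth = rng.random((480, 640))
views = [render_intermediate(color, depth, k / 8.0) for k in range(9)]
print(len(views), views[0].shape)
```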

An Input/Output Technology for 3-Dimensional Moving Image Processing (3차원 동영상 정보처리용 영상 입출력 기술)

  • Son, Jung-Young;Chun, You-Seek
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.8
    • /
    • pp.1-11
    • /
    • 1998
  • One of the desired features for the realization of high-quality information and telecommunication services in the future is "the sensation of reality". This will be achieved only with visual communication based on 3-dimensional (3-D) moving images. The main difficulties in realizing 3-D moving image communication are that no transmission technology has been developed for the huge amount of data involved in 3-D images, and that no technologies have been established for recording and displaying 3-D images in real time. The currently known stereoscopic imaging technologies present only depth, without motion parallax, so they are not effective in creating the sensation of reality without the viewer wearing glasses. More effective 3-D imaging technologies for achieving the sensation of reality are those based on multiview 3-D images, in which the image of the object changes as the eyes move in different directions. In this paper, a multiview 3-D imaging system composed of 8 CCD cameras in a single housing, an RGB (red, green, blue) beam projector, and a holographic screen is introduced. In this system, the 8 view images are recorded by the 8 CCD cameras and transmitted to the beam projector in sequence by a signal converter. The signal converter separates each camera signal into its three color signals (R, G, B), multiplexes each color signal from the 8 cameras into a serial signal train, and drives the corresponding color channel of the beam projector at a 480 Hz frame rate. The beam projector projects the images onto the holographic screen through an LCD shutter consisting of 8 LCD strips. The image of each LCD strip, created by the holographic screen, forms a sub-viewing zone. Since the ON period and sequence of the LCD strips are synchronized with the camera image sampling and the beam projector's image projection, multiview 3-D moving images can be viewed in the viewing zone.
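
The timing described above (8 views multiplexed onto one projector at 480 Hz, each view synchronized with one LCD strip) amounts to a simple round-robin schedule, sketched below. The 480 Hz figure and the view count come from the abstract; the data structures are illustrative, not the actual signal converter.

```python
# Sketch: round-robin time multiplexing for an 8-view display driven at
# 480 Hz. Each projector frame slot shows one camera view while the matching
# LCD shutter strip is open, so every view refreshes at 480 / 8 = 60 Hz.
# Illustrative only; this is not the paper's signal converter.
NUM_VIEWS = 8
PROJECTOR_HZ = 480
PER_VIEW_HZ = PROJECTOR_HZ // NUM_VIEWS   # 60 Hz per view

def schedule(num_slots):
    """Yield (slot, view index, open LCD strip index) for consecutive slots."""
    for slot in range(num_slots):
        view = slot % NUM_VIEWS           # camera whose image is projected
        yield slot, view, view            # strip index matches the view

for slot, view, strip in schedule(16):
    t_ms = 1000.0 * slot / PROJECTOR_HZ
    print(f"t = {t_ms:6.2f} ms  project view {view}  open LCD strip {strip}")
print(f"each view refreshes at {PER_VIEW_HZ} Hz")
```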

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation methods using depth data have been employed to support advanced stereoscopic and auto-stereoscopic displays. Although the depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at both sides. Since depth data is needed for both color cameras, we obtain the two side views' depth data by 3D-warping the center depth map. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
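
A compact, unoptimized version of the joint bilateral filtering step, in which the color image guides the smoothing of the warped depth map so depth edges align with color edges, might look as follows. Kernel radius and sigmas are arbitrary choices, and the test data are random stand-ins for real frames.

```python
# Sketch: joint (cross) bilateral filtering of a warped depth map, guided by
# the corresponding color (here grayscale) image, to pull depth edges onto
# color edges. Unvectorized for clarity; radius and sigmas are arbitrary.
import numpy as np

def joint_bilateral(depth, guide, radius=3, sigma_s=2.0, sigma_r=12.0):
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(np.float64), radius, mode="edge")
    pad_g = np.pad(guide.astype(np.float64), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((g_win - guide[y, x])**2) / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[y, x] = (wgt * d_win).sum() / wgt.sum()
    return out

# Example: grayscale guide + noisy warped depth (stand-ins for real frames).
rng = np.random.default_rng(2)
guide = rng.integers(0, 256, (60, 80)).astype(np.float64)
depth = rng.random((60, 80))
print(joint_bilateral(depth, guide).shape)   # (60, 80)
```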

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility
    • /
    • v.16 no.1
    • /
    • pp.29-42
    • /
    • 2013
  • In this paper, we analyze the effects on depth perception, volume perception, and visual discomfort of changes in the quality of the depth map and in the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth maps generated from different depth positions of the major object and background, each represented at three levels of detail. A subjective test was carried out using the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects according to the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted in three levels, and the detail of the depth map was also represented in three levels. Experimental results showed that the quality of the depth map affected depth perception, volume perception, and visual discomfort differently depending on the absolute and relative depth position of the major object. The cardboard depth map severely damaged volume perception regardless of the depth position of the major object. In particular, depth perception deteriorated more severely with the cardboard depth map when the major object was located inside the screen than when it was located outside the screen. Furthermore, the subjects did not perceive a difference in depth perception, volume perception, or visual comfort between the 3D images generated with the detailed depth map and those generated with the rough depth map. As a result, an excessively detailed depth map is not necessary to enhance stereoscopic perception in 2D-to-3D conversion.
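
To make the depth-map detail levels concrete, the sketch below quantizes a continuous depth map into a small number of flat planes, which is essentially what separates a "cardboard" depth map from a detailed one. The level counts are illustrative and not the study's exact stimuli.

```python
# Sketch: reduce a continuous depth map to a few discrete planes, mimicking
# a "cardboard" style depth map as the coarsest detail level.
# The number of levels per condition is an illustrative choice.
import numpy as np

def quantize_depth(depth, levels):
    """Quantize depth values in [0, 1] into `levels` flat planes."""
    bins = np.floor(depth * levels).clip(0, levels - 1)
    return (bins + 0.5) / levels          # plane centers back in [0, 1]

rng = np.random.default_rng(3)
depth = rng.random((4, 6))                # stand-in for an estimated depth map
detailed = quantize_depth(depth, 64)      # near-continuous depth
cardboard = quantize_depth(depth, 2)      # foreground/background only
print(np.unique(cardboard))               # typically two planes: 0.25 and 0.75
```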

Design and Implementation of Real-time three dimensional Tracking system of gazing point (삼차원 응시 위치의 실 시간 추적 시스템 구현)

  • Kim, Jae-Han
    • Proceedings of the IEEK Conference
    • /
    • 2003.07c
    • /
    • pp.2605-2608
    • /
    • 2003
  • This paper presents the design and implementation of a real-time three-dimensional tracking system for the gazing point. The proposed method is based on three-dimensional processing of eye images in 3D world coordinates. The system hardware consists of two conventional CCD cameras for acquiring stereoscopic images and a computer for processing. The advantages of the proposed algorithm and the test results are described.
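
The core geometric step, recovering a 3D point from its pixel coordinates in two calibrated cameras, can be sketched with the standard parallel-camera disparity relation. The focal length, baseline, and sample coordinates below are placeholders, not the paper's hardware values.

```python
# Sketch: triangulate a 3D point from its pixel coordinates in two parallel,
# rectified CCD cameras using the standard disparity relation
#   Z = f * B / (xl - xr),  X = Z * xl / f,  Y = Z * yl / f.
# Baseline, focal length and the sample pixel coordinates are placeholders.
def triangulate(xl, yl, xr, f_px=800.0, baseline_m=0.12):
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = f_px * baseline_m / disparity      # depth along the optical axis [m]
    x = z * xl / f_px                      # lateral position [m]
    y = z * yl / f_px                      # vertical position [m]
    return x, y, z

# Example: image coordinates (relative to each principal point) in pixels.
print(triangulate(xl=42.0, yl=-10.0, xr=30.0))
```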

Analysis on the optimized depth of 3D displays without an accommodation error

  • Choi, Hee-Jin;Kim, Joo-Hwan;Park, Jae-Byung;Lee, Byoung-Ho
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2007.08b
    • /
    • pp.1811-1814
    • /
    • 2007
  • Accommodation error is one of the main factors that degrade viewer comfort while watching stereoscopic 3D images. We analyze the limit of the 3D depth that can be expressed without an accommodation error, using human factor information and a wave-optical calculation under the Fresnel approximation.
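
The paper's analysis is wave-optical, but the underlying constraint can be illustrated with a plain geometric bound: if accommodation may deviate from the display distance by at most a given number of diopters, the expressible depth range is limited to distances whose dioptric difference from the screen stays within that tolerance. The tolerance and viewing distance in the sketch are assumed example values.

```python
# Sketch: depth range expressible around a display at distance d_m when the
# accommodation error must stay within +/- delta_D diopters. A geometric
# bound only, not the paper's wave-optical (Fresnel) analysis; the example
# tolerance and viewing distance are assumptions.
def expressible_depth_range(d_m, delta_D):
    near = 1.0 / (1.0 / d_m + delta_D)    # closest expressible point [m]
    far = float("inf") if 1.0 / d_m <= delta_D else 1.0 / (1.0 / d_m - delta_D)
    return near, far

near, far = expressible_depth_range(d_m=0.6, delta_D=0.3)  # 0.6 m display
print(f"near limit: {near:.3f} m")
print("far limit: unlimited" if far == float("inf") else f"far limit: {far:.3f} m")
```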
