• Title/Summary/Keyword: Depth Information

Visual Fatigue Reduction Based on Depth Adjustment for DIBR System

  • Liu, Ran; Tan, Yingchun; Tian, Fengchun; Xie, Hui; Tai, Guoqin; Tan, Weimin; Liu, Junling; Xu, Xiaoyan; Kadri, Chaibou; Abakah, Naana
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.4 / pp.1171-1187 / 2012
  • A depth adjustment method for visual fatigue reduction in a depth-image-based rendering (DIBR) system is proposed. One important aspect of the method is that no calibration parameters are needed for adjustment. By analyzing 3D image warping, the perceived depth is expressed as a function of three adjustable parameters: the virtual view number, a scale factor, and the depth value of the zero parallax setting (ZPS) plane. Adjusting these three parameters according to the proposed parameter modification algorithm when performing 3D image warping can effectively change the perceived depth of stereo pairs generated by the DIBR system. Because the depth adjustment is performed in simple 3D image warping equations, the proposed method is well suited to hardware implementation. Experimental results show that the proposed depth adjustment method improves the visual comfort of stereo pairs and can generate comfortable stereoscopic images with the different perceived depths that viewers desire.
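
The abstract does not give the warping equations themselves, so the sketch below only illustrates how the three adjustable parameters could enter a shift-sensor style warp; the function and parameter names (warp_view, scale_factor, zps_depth) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def warp_view(depth, view_index, scale_factor, zps_depth, num_views=8):
    """Hypothetical per-pixel horizontal shift for one virtual view.

    depth      : 8-bit depth map (0 = far, 255 = near), HxW numpy array
    view_index : which virtual view to generate (0 .. num_views-1)
    scale_factor, zps_depth : two of the adjustable parameters from the abstract
    Returns the signed shift (in pixels) applied during 3D image warping.
    """
    # Pixels at the ZPS depth get zero parallax; nearer pixels shift one way,
    # farther pixels the other.  The view index spreads the shift across views.
    baseline = view_index - (num_views - 1) / 2.0
    shift = scale_factor * baseline * (depth.astype(np.float32) - zps_depth) / 255.0
    return shift

# Example: increasing zps_depth pushes the whole scene "into" the screen,
# while reducing scale_factor compresses the perceived depth range.
depth = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(warp_view(depth, view_index=0, scale_factor=10.0, zps_depth=128))
```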

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만; 박영민; 윤영우
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.96-105 / 2004
  • Recovering a 3D image from a 2D image requires depth information for each picture element. The manual creation of such 3D models is time consuming and expensive. The goal of this paper is to estimate the relative depth of every region from a single-view image captured under camera translation. The paper is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates the depth of each region relative to the average frame depth. Simulation results show that the estimated depth of regions belonging to near or far objects is consistent with the relative depth that humans perceive.
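
As a rough illustration of the underlying idea (under pure camera translation, apparent motion is inversely proportional to depth), the following sketch paraphrases the region-depth step under stated assumptions; the full-search motion estimation and the rotation/zoom compensation described in the abstract are taken as given.

```python
import numpy as np

def relative_region_depth(motion_vectors, region_masks, eps=1e-6):
    """Estimate relative depth of regions from compensated motion vectors.

    motion_vectors : HxWx2 array of motion vectors (already compensated for
                     camera rotation and zooming, as in the abstract)
    region_masks   : list of HxW boolean masks, one per segmented region
    Returns depths relative to the average frame depth
    (values > 1 mean farther than average, < 1 nearer).
    """
    magnitude = np.linalg.norm(motion_vectors, axis=2)
    # Under pure camera translation, apparent motion is inversely
    # proportional to depth, so 1/|v| serves as a depth proxy.
    frame_depth = 1.0 / (magnitude.mean() + eps)
    region_depths = []
    for mask in region_masks:
        region_motion = magnitude[mask].mean()
        region_depths.append((1.0 / (region_motion + eps)) / frame_depth)
    return region_depths
```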

A Study on Create Depth Map using Focus/Defocus in single frame (단일 프레임 영상에서 초점을 이용한 깊이정보 생성에 관한 연구)

  • Han, Hyeon-Ho; Lee, Gang-Seong; Lee, Sang-Hun
    • Journal of Digital Convergence / v.10 no.4 / pp.191-197 / 2012
  • In this paper we present a method for creating a 3D image from a 2D image by extracting initial depth values from focus measures. The initial depth values are derived from focus information computed by comparing the original image with a Gaussian-filtered version. This initial depth information is allocated to the object segments obtained by the normalized-cut technique. The depth of each object is then set to the average of the depth values within it, so that a single object has a uniform depth. The generated depth map is used to convert the image to 3D using DIBR (depth-image-based rendering), and the resulting 3D image is compared with images generated by other techniques.
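
A minimal sketch of the focus-based initial depth step, assuming a grayscale input and a precomputed segmentation label map; the Gaussian blur comparison mirrors the abstract's description, but the normalization and depth convention are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def initial_depth_from_focus(gray, segments, sigma=2.0):
    """Assign a per-segment depth from a simple focus measure.

    gray     : 2-D grayscale image (float or uint8)
    segments : 2-D integer label map (e.g. from a normalized-cut segmentation)
    Sharper (in-focus) regions differ more from their blurred version,
    so they receive larger depth values here (interpreted as 'closer').
    """
    gray = gray.astype(np.float32)
    blurred = gaussian_filter(gray, sigma=sigma)
    focus = np.abs(gray - blurred)          # per-pixel focus measure
    depth = np.zeros_like(gray)
    for label in np.unique(segments):
        mask = segments == label
        depth[mask] = focus[mask].mean()    # one depth per object segment
    # Normalize to 0..255 so it can feed a DIBR renderer
    depth = 255.0 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
    return depth.astype(np.uint8)
```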

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop; Sim, Chang Hun; Park, In Kyu
    • Journal of Broadcast Engineering / v.23 no.5 / pp.614-621 / 2018
  • This paper proposes a method of restoring corrupted depth images captured by a depth camera through unsupervised learning using a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN) with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained with the Wasserstein distance as the loss function in a minimax game. The DCGAN then restores the corrupted regions of captured facial depth images by performing an additional learning procedure using the trained generator and a new loss function.
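
A minimal sketch of the Wasserstein losses used in the minimax game; the network architectures (3DMM CNN, DCGAN) and the paper's additional restoration loss are not reproduced here, and the critic interface is an assumption.

```python
import torch

def wgan_losses(critic, real_depth, fake_depth):
    """Wasserstein critic/generator losses for the adversarial game.

    critic     : network mapping a batch of depth images to one score per item
    real_depth : batch of uncorrupted depth images
    fake_depth : batch of generator outputs (restored depth images)
    """
    # The critic tries to maximize the score gap between real and generated
    # samples; the generator tries to minimize it.
    critic_loss = -(critic(real_depth).mean() - critic(fake_depth).mean())
    generator_loss = -critic(fake_depth).mean()
    return critic_loss, generator_loss
```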

Efficient Filtering for Depth Sensors under Infrared Light Emitting Sources (적외선 방출 조명 조건 하에서 깊이 센서의 효율적인 필터링)

  • Park, Tae-Jung
    • Journal of Digital Contents Society / v.13 no.3 / pp.271-278 / 2012
  • Recently, infrared (IR)-based depth sensors have proliferated as consumer electronics thanks to decreasing prices, leading to various applications including gesture recognition in television virtual studios. However, depth sensors fail to capture depth information correctly under strong lighting that emits infrared light, which is very common in television studios. This paper analyzes the mechanism of such interference between depth sensors relying on certain IR frequencies and infrared-emitting light sources, and provides methods to obtain correct depth information by applying filters. It also describes the experimental setup and presents the results of applying multiple combinations of filters with different cut-off frequencies. Finally, it demonstrates experimentally that the IR interference can be filtered out in practice using the proposed filtering method.

Reduced Reference Quality Metric for Synthesized Virtual Views in 3DTV

  • Le, Thanh Ha; Long, Vuong Tung; Duong, Dinh Trieu; Jung, Seung-Won
    • ETRI Journal / v.38 no.6 / pp.1114-1123 / 2016
  • Multi-view video plus depth (MVD) has been widely used owing to its effectiveness in three-dimensional data representation. Using MVD, color videos with only a limited number of real viewpoints are compressed and transmitted along with captured or estimated depth videos. Because the synthesized views are generated from decoded real views, their original reference views do not exist at either the transmitter or the receiver. Therefore, it is challenging to define an efficient metric to evaluate the quality of synthesized images. We propose a novel reduced-reference quality metric. First, the effects of depth distortion on the quality of synthesized images are analyzed. We then exploit the high correlation between the local depth distortions and the local color characteristics of the decoded depth and color images, respectively, to achieve an efficient depth quality metric for each real view. Finally, the objective quality metric of the synthesized views is obtained by combining all the depth quality metrics obtained from the decoded real views. The experimental results show that the proposed quality metric correlates very well with full-reference image and video quality metrics.
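
The abstract only outlines the metric, so the sketch below shows just the final pooling step (combining the per-view depth quality scores into one number); the weighted average and all names are assumptions, not the paper's formulation.

```python
import numpy as np

def pool_view_metrics(per_view_depth_quality, weights=None):
    """Combine per-view depth quality scores into one synthesized-view metric.

    per_view_depth_quality : one score per decoded real view (higher = better)
    weights                : optional per-view weights (e.g. the distance of
                             each real view from the synthesized viewpoint)
    """
    q = np.asarray(per_view_depth_quality, dtype=np.float32)
    if weights is None:
        return float(q.mean())
    w = np.asarray(weights, dtype=np.float32)
    return float((w * q).sum() / w.sum())
```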

Augmented Reality system Using Depth-map (Depth-Map을 이용한 객체 증강 시스템)

  • Ban, Kyeong-Jin; Kim, Jong-Chan; Kim, Kyoung-Ok; Kim, Eung-Kon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.343-344 / 2010
  • Because a stereo vision system requires expensive equipment to estimate a depth map, a markerless system based on two-dimensional images is used instead. We estimate the depth map from a single monocular image: objects are extracted and their depth is estimated relative to the vanishing point. To achieve better virtual immersion, augmented objects should be drawn at different sizes depending on their distance. In this paper, the vanishing point is obtained from the input image, depth information is assigned to the augmented object, and the object is rendered at a size matching its depth, improving the engagement of inter-object interaction.
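
A minimal sketch of the size cue described above: an augmented object is drawn smaller the farther its assigned depth, assuming a simple inverse-proportional scaling that is not taken from the paper.

```python
def augmented_object_scale(object_depth, reference_depth, reference_scale=1.0):
    """Scale an augmented object according to its depth-map value.

    object_depth    : depth assigned to the insertion point (larger = farther,
                      e.g. taken from a vanishing-point-based depth map)
    reference_depth : depth at which the object is drawn at reference_scale
    Objects farther from the viewer are drawn smaller, which is the
    size cue the abstract relies on for better immersion.
    """
    if object_depth <= 0:
        raise ValueError("depth must be positive")
    return reference_scale * (reference_depth / object_depth)

# e.g. an object placed twice as far as the reference is drawn at half size
print(augmented_object_scale(object_depth=4.0, reference_depth=2.0))
```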

Development of a Multi-view Image Generation Simulation Program Using Kinect (키넥트를 이용한 다시점 영상 생성 시뮬레이션 프로그램 개발)

  • Lee, Deok Jae; Kim, Minyoung; Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.818-819 / 2014
  • Recently, much work has been conducted on utilizing DIBR (depth-image-based rendering) intermediate images for three-dimensional displays that do not require stereoscopic glasses. However, prior works have used expensive depth cameras to obtain high-resolution depth images, since DIBR-based intermediate image generation requires accurate depth information. In this study, we developed a simulation program that generates multi-view intermediate images from the depth and color images of a Microsoft Kinect. The simulation aims to support the acquisition of multi-view intermediate images utilizing the low-resolution depth and color images from the Kinect, and provides an integrated service for evaluating the quality of the intermediate images. This paper describes the architecture and implementation of the simulation program.
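
The abstract does not detail the warping pipeline; the sketch below only shows the standard depth-to-disparity conversion a DIBR-style intermediate-view generator would start from, with focal_px and baseline_mm as assumed parameters.

```python
import numpy as np

def kinect_depth_to_disparity(depth_mm, focal_px, baseline_mm, invalid=0):
    """Convert a Kinect depth frame (millimetres) to per-pixel disparity.

    depth_mm    : HxW depth image from the Kinect (0 marks invalid pixels)
    focal_px    : focal length of the target view in pixels (assumed)
    baseline_mm : virtual camera baseline for the intermediate view (assumed)
    The disparity is the horizontal pixel shift used by a DIBR-style
    warp when synthesizing an intermediate view.
    """
    depth = depth_mm.astype(np.float32)
    disparity = np.zeros_like(depth)
    valid = depth > invalid
    disparity[valid] = focal_px * baseline_mm / depth[valid]
    return disparity
```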

Active Focusing Technique for Extracting Depth Information (액티브 포커싱을 이용한 3차원 물체의 깊이 계측)

  • 이용수; 박종훈; 최종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.2 / pp.40-49 / 1992
  • In this paper, a new approach is proposed that uses the linear movement of the lens position in a camera and the focal distance at each position to measure the depth of a 3-D object from several 2-D images. Sharply focused edges are extracted from images obtained by moving the camera lens, that is, by varying the distance between the lens and the image plane within the range allowed by the lens system. The depth information of the edges is then obtained from the lens position. Our method requires neither an accurate and complicated camera control system nor a special algorithm for tracking the exact focus point, and it has the advantage that the depth of all objects in a scene can be measured by only the linear movement of the camera lens. The accuracy of the extracted depth information is approximately 5% of the object distance for distances between 1 and 2 m. These results show the method's potential for measuring the depth of 3-D objects.
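
The abstract does not state the depth formula; assuming the standard thin-lens relation 1/f = 1/u + 1/v, the mapping from the in-focus lens position to object depth can be sketched as follows.

```python
def depth_from_focus(lens_to_image_mm, focal_length_mm):
    """Object distance from the lens position at which an edge is sharpest.

    lens_to_image_mm : lens-to-image-plane distance v when the edge is in focus
    focal_length_mm  : focal length f of the lens
    Uses the thin-lens equation 1/f = 1/u + 1/v  =>  u = f*v / (v - f).
    """
    v, f = lens_to_image_mm, focal_length_mm
    if v <= f:
        raise ValueError("image distance must exceed the focal length")
    return f * v / (v - f)

# e.g. a 50 mm lens focused with v = 52 mm puts the object at ~1.3 m
print(depth_from_focus(52.0, 50.0))  # ~1300 mm
```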

Stereoscopic Conversion of Monoscopic Video using Edge Direction Histogram

  • Kim, Jee-Hong; Kim, Dong-Wook; Yoo, Ji-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.67-70 / 2009
  • This paper proposes an algorithm for creating stereoscopic video from a monoscopic video. A viewer uses a depth-perception cue called the vanishing point, the point farthest from the viewer's viewpoint, to perceive the depth of objects and their surroundings. The vanishing point can be estimated from geometrical features in monoscopic images, and depth information can be perceived from the relationship between the position of the vanishing point and the viewer's viewpoint. In this paper, we propose a method to estimate the vanishing point in a general monoscopic image with an edge-direction histogram and to create a depth map depending on the position of the vanishing point. Experimental results show that the proposed method achieves stable stereoscopic conversion of a given monoscopic video.
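
The paper creates the depth map "depending on the position of the vanishing point"; the radial version below is a naive illustration of that idea under assumed conventions (0 = far, 255 = near), not the paper's model.

```python
import numpy as np

def depth_map_from_vanishing_point(height, width, vp_xy):
    """Create a simple depth map from the vanishing-point position.

    vp_xy : (x, y) position of the estimated vanishing point in pixels
    Pixels near the vanishing point are treated as farthest (depth -> 0),
    pixels far from it as nearest (depth -> 255), following the cue that
    the vanishing point is the farthest location from the viewer.
    """
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    dist = np.hypot(xs - vp_xy[0], ys - vp_xy[1])
    depth = 255.0 * dist / (dist.max() + 1e-6)
    return depth.astype(np.uint8)

# e.g. a 640x480 frame with the vanishing point slightly above centre
dm = depth_map_from_vanishing_point(480, 640, vp_xy=(320, 200))
```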
