• Title/Summary/Keyword: Light field

Dictionary Learning based Superresolution on 4D Light Field Images

  • Lee, Seung-Jae; Park, In Kyu
    • Journal of Broadcast Engineering / v.20 no.5 / pp.676-686 / 2015
  • A 4D light field image is represented in the traditional 2D spatial domain plus an additional 2D angular domain. The 4D light field has a resolution limitation in both the spatial and angular domains, since the 4D signal is captured by a 2D CMOS sensor with limited resolution. In this paper, we propose a dictionary learning-based superresolution algorithm in the 4D light field domain to overcome this limitation. The proposed algorithm performs dictionary learning using a large number of extracted 4D light field patches. A high-resolution light field image is then reconstructed from a low-resolution input using the learned dictionary. In this paper, we reconstruct a 4D light field image with double the resolution in both the spatial and angular domains. Experimental results show that the proposed method outperforms the traditional method on test images captured by a commercial light field camera (Lytro).
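
As a rough illustration of the pipeline this abstract describes (learn a dictionary from light field patches, then reconstruct a high-resolution output by sparse coding), the sketch below uses a coupled low/high-resolution dictionary in scikit-learn. The patch layout, dictionary size, and solver settings are assumptions for illustration, not the authors' implementation; in the 4D case each patch would be a flattened spatial-angular block.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def train_coupled_dictionary(lr_patches, hr_patches, n_atoms=256):
    """Jointly learn LR/HR dictionaries from concatenated patch pairs.
    lr_patches: (n, d_lr); hr_patches: (n, d_hr); flattened 4D patches."""
    joint = np.hstack([lr_patches, hr_patches])
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
    learner.fit(joint)
    d_lr = lr_patches.shape[1]
    D = learner.components_
    return D[:, :d_lr], D[:, d_lr:]          # (D_lr, D_hr)

def superresolve_patch(lr_patch, D_lr, D_hr, k=8):
    """Sparse-code an LR patch over D_lr; rebuild HR with the same code."""
    code = sparse_encode(lr_patch[None, :], D_lr,
                         algorithm="omp", n_nonzero_coefs=k)
    return (code @ D_hr).ravel()
```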

Full-color Non-hogel-based Computer-generated Hologram from Light Field without Color Aberration

  • Min, Dabin; Min, Kyosik; Park, Jae-Hyeung
    • Current Optics and Photonics / v.5 no.4 / pp.409-420 / 2021
  • We propose a method to synthesize a color non-hogel-based computer-generated hologram (CGH) from light field data of a three-dimensional scene, with a hologram pixel pitch shared by all color channels. The non-hogel-based CGH technique generates a continuous wavefront with an arbitrary carrier wave from given light field data by interpreting each ray angle in the light field as the spatial frequency of a plane wavefront. The relation between ray angle and spatial frequency is, however, wavelength dependent, which leads to a different spatial-frequency sampling grid for each color channel and results in color aberration in the hologram reconstruction. The proposed method sets a hologram pixel pitch common to all color channels such that the diffraction angle of blue, the smallest among the channels, covers the field of view of the light field. A spatial-frequency sampling grid common to all color channels is then established by interpolating the light field with the spatial-frequency range of the blue wavelength and the sampling interval of the red wavelength. The common hologram pixel pitch and light field spatial-frequency sampling grid ensure the synthesis of a color hologram without color aberration in the reconstructions and without loss of the information contained in the light field. The proposed method is successfully verified using color light field data of various test and natural 3D scenes.
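
The wavelength dependence described here follows from the standard plane-wave relations; a hedged sketch (standard diffraction theory, not taken verbatim from the paper) is:

```latex
% Ray angle \theta maps to plane-wave spatial frequency; a pitch p can
% represent |f| \le 1/(2p), so blue (smallest \lambda) diffracts least
% and fixes the common pitch that still covers the field of view.
f = \frac{\sin\theta}{\lambda}, \qquad
\sin\theta_{\max} = \frac{\lambda}{2p}
\quad\Rightarrow\quad
p \le \frac{\lambda_B}{2\,\sin(\theta_{\mathrm{FOV}}/2)}
```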

Spatio-Angular Consistent Edit Propagation for 4D Light Field Image

  • Williem, Williem; Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.11a / pp.180-181 / 2015
  • In this paper, we present a consistent and efficient edit propagation method for light field data. Unlike conventional sparse edit propagation, the coherency between light field sub-aperture images is fully considered by incorporating light field consistency into the optimization framework. Instead of directly solving the optimization function on all light field sub-aperture images, the proposed framework performs sparse edit propagation in the extended focus image domain. The extended focus image is a representative image that contains implicit depth information and the well-focused regions of all sub-aperture images. The edit results in the extended focus image are then propagated back to each light field sub-aperture image. Experimental results on test images captured by an off-the-shelf Lytro light field camera confirm that the proposed method produces robust and consistent edits across the light field sub-aperture images.
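
As a rough sketch of sparse edit propagation of the kind described above, the snippet below solves a scribble-constrained graph-Laplacian system on a single guide image (standing in for the paper's extended focus image); the affinity kernel and weights are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate_edits(feat, user_edit, user_mask, lam=1.0):
    """feat: (H, W, C) guide features; user_edit/user_mask: (H, W) scribbles."""
    H, W, _ = feat.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    rows, cols, vals = [], [], []
    # 4-neighbour affinities from feature similarity (Gaussian kernel).
    for di, dj in [(0, 1), (1, 0)]:
        a = idx[: H - di, : W - dj].ravel()
        b = idx[di:, dj:].ravel()
        d2 = ((feat[: H - di, : W - dj] - feat[di:, dj:]) ** 2).sum(-1).ravel()
        w = np.exp(-d2 / 0.1)
        rows += [a, b]; cols += [b, a]; vals += [w, w]
    Wm = sp.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(n, n))
    L = sp.diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm   # graph Laplacian
    M = sp.diags(user_mask.ravel().astype(float))           # scribble constraints
    rhs = user_mask.ravel() * user_edit.ravel()
    e = spla.spsolve((M + lam * L).tocsc(), rhs)
    return e.reshape(H, W)
```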

Improving the quality of light-field data extracted from a hologram using deep learning

  • Dae-youl Park; Joongki Park
    • ETRI Journal / v.46 no.2 / pp.165-174 / 2024
  • We propose a method to suppress the speckle noise and blur effects of the light field extracted from a hologram using a deep-learning technique. The light field can be extracted by bandpass filtering in the hologram's frequency domain. The extracted light field has reduced spatial resolution, owing to the limited passband size of the bandpass filter and the blurring that occurs when the object is far from the hologram plane; it also contains speckle noise caused by the random phase distribution of the three-dimensional object surface. These limitations degrade the reconstruction quality of a hologram resynthesized from the extracted light field. In the proposed method, a deep-learning model based on a generative adversarial network is designed to suppress speckle noise and blurring, improving the quality of the light field extracted from the hologram. The model is trained using pairs of original two-dimensional images and the corresponding light-field data extracted from the complex fields generated by those images. The proposed method is validated using light-field data extracted from holograms of objects with single and multiple depths, as well as from mesh-based computer-generated holograms.
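
The bandpass-filtering step this abstract mentions can be sketched as follows: each band in the hologram's spectrum corresponds to a ray direction, i.e. one light-field view. The circular window shape and its placement are assumptions for illustration, and the GAN-based restoration stage is not shown.

```python
import numpy as np

def extract_view(hologram, fx0, fy0, bandwidth):
    """hologram: complex field (H, W); (fx0, fy0): passband centre, cycles/px."""
    H, W = hologram.shape
    fx = np.fft.fftfreq(W)[None, :]
    fy = np.fft.fftfreq(H)[:, None]
    spectrum = np.fft.fft2(hologram)
    # Keep only the band around (fx0, fy0), then invert the transform.
    mask = ((fx - fx0) ** 2 + (fy - fy0) ** 2) <= bandwidth ** 2
    view = np.fft.ifft2(spectrum * mask)
    return np.abs(view) ** 2   # intensity image of the extracted view
```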

Learning-Based Superresolution for 4D Light Field Images

  • Lee, Seung-Jae; Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.497-498 / 2015
  • A 4D light field image, which after acquisition can be extended to a variety of applications, consists of the conventional 2D spatial domain and an additional 2D angular domain. However, because the 4D light field is captured with a 2D CMOS sensor, a corresponding resolution limitation exists. In this paper, to overcome this resolution constraint of 4D light field images, we propose a learning-based superresolution algorithm suited to them. The proposed algorithm doubles the resolution in both the spatial domain and the angular domain. The test images are acquired with a commercial light field camera, Lytro, and the superiority of the proposed method is verified through comparison with conventional bicubic linear interpolation.
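
A minimal sketch of the bicubic baseline the abstract compares against: upsample a low-resolution sub-aperture view 2× with bicubic interpolation and score it by PSNR against ground truth. The use of OpenCV and an 8-bit peak value are illustrative assumptions.

```python
import cv2
import numpy as np

def bicubic_psnr(lr_img, hr_img):
    """lr_img: (H, W) low-res view; hr_img: (2H, 2W) ground-truth view."""
    up = cv2.resize(lr_img, (hr_img.shape[1], hr_img.shape[0]),
                    interpolation=cv2.INTER_CUBIC)
    mse = np.mean((up.astype(np.float64) - hr_img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)   # assumes 8-bit images
```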

Waveguide-type Multidirectional Light Field Display

  • Rah, Hyungju; Lee, Seungmin; Ryu, Yeong Hwa; Park, Gayeon; Song, Seok Ho
    • Current Optics and Photonics / v.6 no.4 / pp.375-380 / 2022
  • We demonstrate two types of light field displays based on waveguide grating coupler arrays: a line-beam type and a point-source type. Ultraviolet imprinting of an array of diffractive nanograting cells on the top surface of a 50-㎛-thick slab waveguide can deliver a line beam or a point beam out of the waveguide slab into a multidirectional light field. By controlling the grating vectors of the nanograting cells, the waveguide modes are coupled out at specific viewing angles to create a multidirectional light field display. Nanograting cells with periods of 300-518 nm and slant angles from -8.5° to +8.5° are fabricated by two-beam interference lithography on a 40 mm × 40 mm slab waveguide for seven different viewpoints. These results suggest that a very thin, flexible panel displaying multidirectional light field images through waveguide-type diffraction can be realized.
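
For intuition on how a nanograting period maps to a viewing direction, the sketch below evaluates the standard first-order grating-coupler equation sin θ = n_eff − λ/Λ; the effective index and wavelength used here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def out_coupling_angle_deg(period_nm, wavelength_nm=532.0, n_eff=1.55):
    """Out-coupling angle of a slab mode through a grating of the given period."""
    s = n_eff - wavelength_nm / period_nm
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# Period range quoted in the abstract; each period steers one viewpoint.
for period in (300.0, 400.0, 518.0):
    print(f"{period:.0f} nm -> {out_coupling_angle_deg(period):+.1f} deg")
```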

5D Light Field Synthesis from a Monocular Video

  • Bae, Kyuho; Ivan, Andre; Park, In Kyu
    • Journal of Broadcast Engineering / v.24 no.5 / pp.755-764 / 2019
  • Currently available commercial light field cameras make it difficult to acquire 5D light field video, since they either capture only still images or are prohibitively expensive. To solve these problems, we propose a deep learning-based method for synthesizing light field video from a monocular video. To address the difficulty of obtaining light field video training data, we use UnrealCV to acquire synthetic light field data by realistic rendering of 3D graphics scenes and use these data for training. The proposed deep learning framework synthesizes light field video with 9×9 sub-aperture images (SAIs) from the input monocular video. The proposed network consists of a network that predicts the appearance flow from the input image converted to a luminance image, and a network that predicts the optical flow between adjacent light field video frames obtained from the appearance flow.
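
A minimal sketch of appearance-flow warping, the mechanism the abstract relies on to synthesize each sub-aperture image from the monocular frame. The flow is assumed to come from the prediction network, one flow field per SAI; tensor shapes follow PyTorch conventions and are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def warp_with_appearance_flow(frame, flow):
    """frame: (N, C, H, W) input image; flow: (N, 2, H, W) per-pixel offsets
    in pixels, predicted separately for each of the 9x9 SAIs."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow
    # Normalize to [-1, 1], the coordinate range grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)           # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)
```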

Object detection using a light field camera

  • Jeong, Mingu; Kim, Dohun; Park, Sanghyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.109-111 / 2021
  • Recently, computer vision research using light field cameras has been actively conducted. Since light field cameras capture both spatial and angular information, they are being studied in fields such as depth map estimation, super resolution, and 3D object detection. In this paper, we propose a method for detecting objects in blurred images using the 7×7 array of sub-images acquired by a light field camera. Blurred images, a weakness of conventional cameras, are handled by detecting objects through the light field camera. The proposed method applies the SSD algorithm and evaluates its performance on blurred images acquired from the light field camera.
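
A minimal sketch of running an off-the-shelf SSD detector over each of the 7×7 sub-aperture views, in the spirit of the abstract; the pretrained torchvision model and the simple per-view loop are illustrative assumptions, not the authors' pipeline.

```python
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()

def detect_on_light_field(sub_aperture_images):
    """sub_aperture_images: iterable of 49 (C, H, W) tensors from the camera."""
    results = []
    with torch.no_grad():
        for view in sub_aperture_images:
            out = model([preprocess(view)])[0]   # dict: boxes, labels, scores
            results.append(out)
    return results
```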

CAttNet: A Compound Attention Network for Depth Estimation of Light Field Images

  • Dingkang Hua; Qian Zhang; Wan Liao; Bin Wang; Tao Yan
    • Journal of Information Processing Systems / v.19 no.4 / pp.483-497 / 2023
  • Depth estimation is one of the most complicated and difficult problems in light field processing. In this paper, a compound attention convolutional neural network (CAttNet) is proposed to extract depth maps from light field images. To make more effective use of the sub-aperture images (SAIs) of the light field and reduce the redundancy among SAIs, we use a compound attention mechanism to weight the channel and spatial dimensions of the feature map after extracting the primary features, so the network can more efficiently select the required views and the important areas within each view. We modified various feature-extraction layers to make them more efficient and useful without adding parameters. Exploiting the characteristics of the light field, we increased the network depth and optimized the network structure to reduce the adverse impact of this change. CAttNet can efficiently utilize the correlations and features of different SAIs to generate a high-quality light field depth map. The experimental results show that CAttNet has advantages in both accuracy and running time.
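
The compound (channel + spatial) attention this abstract describes can be sketched in the style of CBAM as below; this illustrates the mechanism only and is not CAttNet's actual architecture.

```python
import torch
import torch.nn as nn

class CompoundAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, weight each channel
        # (i.e. select which views/features matter).
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: weight each location from pooled channel stats
        # (i.e. select the important areas within a view).
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)
```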

Through-field Investigation of Stray Light for the Fore-optics of an Airborne Hyperspectral Imager

  • Cha, Jae Deok; Lee, Jun Ho; Kim, Seo Hyun; Jung, Do Hwan; Kim, Young Soo; Jeong, Yumee
    • Current Optics and Photonics / v.6 no.3 / pp.313-322 / 2022
  • Remote-sensing optical payloads, especially hyperspectral imagers, have particular issues with stray light because they often encounter high-contrast target/background conditions, such as sun glint. While developing an optical payload, we usually apply several stray-light analysis methods, including forward and backward analyses, separately or in combination, to support lens design and optomechanical design. In addition, we often characterize the stray-light response over the full field to support calibration, or when developing an algorithm to correct stray-light errors. For this purpose, forward analysis across the entire field is typically used, but it requires a tremendous amount of computation time. In this paper, we propose a sequence of forward-backward-forward analyses to investigate the through-field response of stray light more effectively, combining the advantages of the individual methods. The application is an airborne hyperspectral imager that creates hyperspectral maps from 900 to 1700 nm in continuous 5-nm bands. With the proposed method, we investigated the through-field response of stray light to an effective accuracy of 0.1°, while reducing computation time to 1/17th of that of a conventional, forward-only stray-light analysis.