• Title/Summary/Keyword: Light field image


Spatio-Angular Consistent Edit Propagation for 4D Light Field Image (4 차원 Light Field 영상에서의 일관된 각도-공간적 편집 전파)

  • Williem, Williem; Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.11a / pp.180-181 / 2015
  • In this paper, we present a consistent and efficient edit propagation method for light field data. Unlike conventional sparse edit propagation, the coherence between light field sub-aperture images is fully taken into account by exploiting light field consistency in the optimization framework. Instead of directly solving the optimization on all light field sub-aperture images, the proposed framework performs sparse edit propagation in the extended focus image domain. The extended focus image is a representative image that contains implicit depth information and the well-focused regions of all sub-aperture images. The edit results in the extended focus image are then propagated back to each light field sub-aperture image. Experimental results on test images captured by an off-the-shelf Lytro light field camera confirm that the proposed method produces robust and consistent edits across the light field sub-aperture images.

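The key step described in the abstract is propagating edits computed on the extended focus image back to every sub-aperture view. Below is a minimal sketch of that warp-back step, assuming a per-pixel disparity map and a 9×9 angular grid; the function name, the nearest-neighbour lookup, and the grid size are illustrative assumptions, not the paper's actual optimization.

```python
import numpy as np

def propagate_edit_to_view(edit_efi, disparity, u, v, u0=4, v0=4):
    """Warp an edit map defined on the extended-focus image (EFI) into the
    sub-aperture view at angular position (u, v).

    edit_efi  : (H, W) or (H, W, C) edit strength computed on the EFI
    disparity : (H, W) per-pixel disparity in pixels per unit angular step,
                e.g. estimated while building the EFI
    (u0, v0)  : angular coordinates of the central view (9x9 grid assumed)
    """
    H, W = disparity.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # backward mapping: a pixel in view (u, v) looks up the EFI at a
    # disparity-dependent offset from its own location
    src_x = np.clip(np.round(xs + disparity * (u - u0)).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + disparity * (v - v0)).astype(int), 0, H - 1)
    return edit_efi[src_y, src_x]
```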

Dictionary Learning based Superresolution on 4D Light Field Images (4차원 Light Field 영상에서 Dictionary Learning 기반 초해상도 알고리즘)

  • Lee, Seung-Jae; Park, In Kyu
    • Journal of Broadcast Engineering / v.20 no.5 / pp.676-686 / 2015
  • A 4D light field image is represented in the traditional 2D spatial domain plus an additional 2D angular domain. The 4D light field has resolution limitations in both the spatial and angular domains, since the 4D signal is captured by a 2D CMOS sensor of limited resolution. In this paper, we propose a dictionary learning-based superresolution algorithm in the 4D light field domain to overcome this resolution limitation. The proposed algorithm performs dictionary learning using a large number of extracted 4D light field patches. A high-resolution light field image is then reconstructed from a low-resolution input using the learned dictionary. In this paper, we reconstruct a 4D light field image with double the resolution in both the spatial and angular domains. Experimental results show that the proposed method outperforms the conventional method on test images captured by a commercial light field camera (Lytro).
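The abstract describes patch-based dictionary learning followed by reconstruction with the learned dictionary. The sketch below illustrates that general recipe with scikit-learn's DictionaryLearning and sparse_encode using a joint low-/high-resolution dictionary; the joint-dictionary formulation, atom count, and sparsity level are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_joint_dictionary(lr_patches, hr_patches, n_atoms=256):
    """lr_patches, hr_patches: (N, d_lr) and (N, d_hr) vectorised 4D
    light-field patches extracted at corresponding positions."""
    joint = np.hstack([lr_patches, hr_patches])            # (N, d_lr + d_hr)
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                            max_iter=20, random_state=0).fit(joint)
    d_lr = lr_patches.shape[1]
    # split each learned atom into its LR and HR halves
    return dl.components_[:, :d_lr], dl.components_[:, d_lr:]

def super_resolve(lr_patches, D_lr, D_hr, n_nonzero=5):
    """Sparse-code LR patches against D_lr, reconstruct HR patches with D_hr."""
    codes = sparse_encode(lr_patches, D_lr, algorithm="omp",
                          n_nonzero_coefs=n_nonzero)
    return codes @ D_hr   # HR patch estimates, to be tiled back into the 4D LF
```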

LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim; Ga-Bin Nam; Young-Seop Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.149-154 / 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only effectively fuses a single all-in-focus image from multiple multi-focus images but also offers more efficient and robust focus fusion compared with existing methods.

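The feature-extraction module is said to incorporate SPP to handle multiple scales. Below is a generic SPP block of that kind in PyTorch, pooling at a few scales and concatenating the upsampled results with the input; the pool sizes, channel reduction, and layer layout are assumptions, since the paper's exact architecture is not given in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    """Spatial Pyramid Pooling block: pools features at several scales and
    concatenates the upsampled results with the input, so the extractor sees
    context at multiple receptive fields."""
    def __init__(self, channels, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # 1x1 convs to keep the channel count manageable after concatenation
        self.reduce = nn.ModuleList(
            nn.Conv2d(channels, channels // len(pool_sizes), kernel_size=1)
            for _ in pool_sizes)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for size, conv in zip(self.pool_sizes, self.reduce):
            p = F.adaptive_avg_pool2d(x, output_size=size)
            p = F.interpolate(conv(p), size=(h, w), mode="bilinear",
                              align_corners=False)
            feats.append(p)
        return torch.cat(feats, dim=1)
```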

5D Light Field Synthesis from a Monocular Video (단안 비디오로부터의 5차원 라이트필드 비디오 합성)

  • Bae, Kyuho; Ivan, Andre; Park, In Kyu
    • Journal of Broadcast Engineering / v.24 no.5 / pp.755-764 / 2019
  • Commercially available light field cameras make it difficult to acquire 5D light field video, since they either capture only still images or are prohibitively expensive. To address these problems, we propose a deep learning based method for synthesizing light field video from monocular video. To solve the problem of obtaining light field video training data, we use UnrealCV to acquire synthetic light field data by realistic rendering of 3D graphics scenes and use it for training. The proposed deep learning framework synthesizes light field video with a 9×9 grid of sub-aperture images (SAIs) from the input monocular video. The proposed network consists of a network that predicts the appearance flow from the luminance version of the input image, and a network that predicts the optical flow between adjacent light field video frames obtained from the appearance flow.
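The pipeline predicts an appearance flow that maps the monocular frame to each of the 9×9 sub-aperture views. A minimal sketch of the warping step that such a flow implies is shown below using PyTorch's grid_sample; the offset convention and bilinear sampling are assumptions, and the flow itself would come from the paper's prediction network.

```python
import torch
import torch.nn.functional as F

def warp_with_appearance_flow(frame, flow):
    """frame: (B, C, H, W) input monocular frame
    flow : (B, 2, H, W) predicted per-pixel offsets (in pixels) for one
           target sub-aperture view; the network would predict one such
           field per view of the 9x9 grid."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().to(frame.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                               # (B, 2, H, W)
    # normalise to [-1, 1] for grid_sample, which expects (B, H, W, 2)
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack([coords_x, coords_y], dim=-1)
    return F.grid_sample(frame, grid, mode="bilinear", align_corners=True)
```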

Preprocessing for High Quality Real-time Imaging Systems by Low-light Stretch Algorithm

  • Ngo, Dat; Kang, Bongsoon
    • Journal of IKEEE / v.22 no.3 / pp.585-589 / 2018
  • Consumer demand for high-quality image/video services has led to a growing interest in image quality enhancement, and recent years have seen substantial progress in this research field. Through careful observation of image quality after processing by image enhancement algorithms, we found that the dark regions of an image usually suffer a certain loss of contrast. In this paper, we therefore propose a low-light stretch preprocessing algorithm to resolve this issue. The proposed approach is evaluated qualitatively and quantitatively against the well-known histogram equalization and Photoshop curve adjustment. The evaluation results validate the efficiency and superiority of the low-light stretch over the benchmark methods. In addition, we propose a hardware implementation capable of running at 255 MHz to ease the incorporation of the low-light stretch into real-time imaging systems, such as aerial surveillance and monitoring with drones and driver-assistance systems.
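The abstract does not give the transfer curve of the low-light stretch, so the sketch below shows one plausible piecewise-linear mapping that expands the dark range while keeping the output within [0, 1]; the knee point and gain are purely illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def low_light_stretch(img, knee=0.25, gain=1.8):
    """Piecewise-linear stretch: expand the dark range [0, knee] by `gain`
    and linearly compress the remaining range so the output stays in [0, 1].
    img: float array in [0, 1] (luminance or per-channel)."""
    img = np.clip(img, 0.0, 1.0)
    y_knee = gain * knee                         # output level at the knee
    dark = gain * img                            # stretched dark segment
    bright = y_knee + (img - knee) * (1.0 - y_knee) / (1.0 - knee)
    return np.where(img < knee, dark, bright)
```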

Use of a Prism to Compensate the Image-shifting Error of the Acousto-optic Tunable Filter (음향광학변조필터의 이미지 이동 보상을 위한 프리즘 제안)

  • Ryu, Sung-Yoon; You, Jang-Woo; Kwak, Yoon-Keun; Kim, Soo-Hyun
    • Journal of the Korean Society for Precision Engineering / v.25 no.5 / pp.89-95 / 2008
  • The Acousto-Optic Tunable Filter (AOTF) is a high-speed full-field monochromator which generates two spectrally filtered light beams with ordinary and extraordinary polarization states. AOTFs are therefore widely used to build full-field spectral imaging systems and spectral-domain interferometers. However, the AOTF has a significant problem: the angle of the diffracted light changes as the wavelength is scanned, which shifts the image on the CCD plane. In this paper, we propose an analytic prism design to compensate for this image shift. A detailed analysis of the light paths in a prism and basic experimental results are presented to verify the proposed compensation method. The experimental results agree with simulation results based on the suggested prism model, and the image shift is minimized at the optimal condition. The design can also be extended to compensate the image shift for the ordinary and extraordinary polarizations simultaneously.
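To first order, the compensation idea can be written with the thin-prism approximation: the prism's deviation is δ(λ) ≈ (n(λ) − 1)·A, so an apex angle A can be chosen such that dδ/dλ cancels the AOTF's wavelength-dependent diffraction-angle change. The sketch below encodes that first-order balance only; the example numbers are hypothetical, and the paper's full analytic design is more detailed.

```python
import numpy as np

def prism_apex_for_compensation(d_theta_aotf_dlambda, dn_dlambda):
    """Thin-prism approximation: deviation delta(lambda) ~ (n(lambda) - 1) * A,
    so d(delta)/d(lambda) = A * dn/dlambda. Choosing A so that this slope
    cancels the AOTF diffraction-angle change keeps the image position fixed
    to first order. Inputs in rad/nm and 1/nm respectively."""
    return -d_theta_aotf_dlambda / dn_dlambda

# hypothetical numbers: AOTF shift of 5e-6 rad/nm, glass dispersion of
# -4e-5 per nm in the visible range
A = prism_apex_for_compensation(5e-6, -4e-5)
print(np.degrees(A), "degrees apex angle")
```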

A Technique for Interpreting and Adjusting Depth Information of each Plane by Applying an Object Detection Algorithm to Multi-plane Light-field Image Converted from Hologram Image (Light-field 이미지로 변환된 다중 평면 홀로그램 영상에 대해 객체 검출 알고리즘을 적용한 평면별 객체의 깊이 정보 해석 및 조절 기법)

  • Young-Gyu Bae; Dong-Ha Shin; Seung-Yeol Lee
    • Journal of Broadcast Engineering / v.28 no.1 / pp.31-41 / 2023
  • Directly converting the focal depth and image size of a computer-generated hologram (CGH), which is obtained by calculating the interference pattern of light from the 3D image, is known to be quite difficult because the CGH bears little resemblance to the original image. This paper proposes a method for separately converting the focal depth of each plane of a given CGH composed of multi-depth images. First, the proposed technique converts the 3D image reproduced from the CGH into a light field (LF) image composed of a set of 2D images observed from various angles, and the positions of the objects in each observed view are determined using the object detection algorithm YOLOv5 (You-Only-Look-Once version 5). Then, by adjusting the positions of the objects, the depth-transformed LF image and CGH are generated. Numerical simulations and experimental results show that the proposed technique can change the focal depth over a range of about 3 cm without significant loss of image quality when applied to an image with an original depth of 10 cm, using a spatial light modulator with a pixel pitch of 3.6 ㎛ and a resolution of 3840×2160.
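Changing an object's reconstructed depth amounts to changing its disparity across the sub-aperture views. The sketch below shows that re-targeting step for YOLO-style boxes on a 9×9 grid, shifting each view's object in proportion to its angular offset; the data layout, horizontal-parallax-only shift, and function name are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np

def retarget_object_disparity(boxes, view_coords, new_disparity, center=(4, 4)):
    """Shift a detected object in each sub-aperture view so that its
    disparity (and hence its reconstructed depth plane) changes.

    boxes        : dict {(u, v): (x, y, w, h)} YOLO-style boxes per view
    view_coords  : iterable of (u, v) angular indices (e.g. a 9x9 grid)
    new_disparity: target shift in pixels per angular step
    Returns the per-view translation to apply to the object region before
    re-synthesising the LF image and CGH.
    """
    u0, v0 = center
    shifts = {}
    for (u, v) in view_coords:
        x, y, w, h = boxes[(u, v)]
        target_x = boxes[(u0, v0)][0] + new_disparity * (u - u0)
        shifts[(u, v)] = (target_x - x, 0.0)  # horizontal-parallax-only sketch
    return shifts
```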

Investigation of light stimulated mouse brain activation in high magnetic field fMRI using image segmentation methods

  • Kim, Wook; Woo, Sang-Keun; Kang, Joo Hyun; Lim, Sang Moo
    • Journal of the Korea Society of Computer and Information / v.21 no.12 / pp.11-18 / 2016
  • Magnetic resonance imaging (MRI) is widely used in brain research and medical imaging. In particular, functional magnetic resonance imaging (fMRI), a non-invasive technique for imaging brain activation, is used in brain studies. In this study, we investigate brain activation evoked by LED light stimulation. To image activation in a small experimental animal, a Balb/c mouse, we used a high-field 9.4 T MRI scanner with echo planar imaging (EPI). EPI takes far less time than other MRI acquisition methods; the drawback, however, is that EPI data have low contrast, which makes image pre-processing difficult and inaccurate. We designed the study protocol as a block design, as is common in fMRI research. The block design has 8 LED light stimulation sessions and 8 rest sessions. Each block consists of 6 EPI images, and acquiring one EPI image takes 16 seconds. During a light session, LED light stimulation was applied for 1 minute 36 seconds; during a rest session, no stimulation was applied and the light remained off for 1 minute 36 seconds. These sessions were repeated over the whole EPI scan, so the total EPI scan time was almost 26 minutes. The acquired EPI data were analyzed with the statistical parametric mapping (SPM) software, with pre-processing steps of realignment, co-registration, normalization, and smoothing. Pre-processing of fMRI data in this software also requires segmentation, for which three different methods are available: Gaussian non-parametric, warped modulated, and tissue probability map. We applied these three methods and compared how they change the fMRI analysis results. The results show that LED light stimulation activated the superior colliculus region of the mouse brain, and the highest activation values were obtained with the tissue probability map segmentation. This study may help improve brain activation studies using EPI and SPM analysis.
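The block design stated in the abstract (8 stimulation and 8 rest blocks, 6 EPI images per block, 16 s per image) translates directly into the boxcar regressor that SPM convolves with its canonical HRF and enters into the design matrix. A small sketch of that regressor is shown below; only the timing comes from the abstract, the rest of the GLM setup is SPM's standard procedure.

```python
import numpy as np

TR = 16.0               # seconds to acquire one EPI image (from the abstract)
vols_per_block = 6      # 6 images ~ 96 s = 1 min 36 s per block
n_cycles = 8            # 8 LED-stimulation blocks alternating with 8 rest blocks

# boxcar regressor: 1 during stimulation, 0 during rest, one value per image
block = np.concatenate([np.ones(vols_per_block), np.zeros(vols_per_block)])
boxcar = np.tile(block, n_cycles)        # 96 images, ~25.6 min total scan

# In SPM this boxcar is convolved with the canonical HRF and used as a column
# of the design matrix; voxel-wise GLM t-maps then localise the activation.
print(boxcar.shape, boxcar.sum() * TR, "s of stimulation")
```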

Focal Stack Based Light Field Coding for Refocusing Applications

  • Duong, Vinh Van; Canh, Thuong Nguyen; Huu, Thuc Nguyen; Jeon, Byeungwoo
    • Journal of Broadcast Engineering / v.24 no.7 / pp.1246-1258 / 2019
  • Since a light field (LF) image has a huge data volume, it requires a high-performance compression technique for efficient transmission and storage. Camera users may wish to view parts of an image at different levels of focus of their choice at any time. To address this refocusing functionality, in this paper we first render a focal stack consisting of multi-focus images and then compress it instead of the original LF data. The proposed method has the advantage of minimizing the amount of LF data needed to realize the targeted refocusing applications. Our experimental results show that the proposed method outperforms the state-of-the-art LF image compression method.
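Rendering a focal stack from a light field is typically done by shift-and-add refocusing: each sub-aperture image is shifted in proportion to its angular offset and the results are averaged. The sketch below shows that standard operation; whether the paper uses exactly this renderer is not stated in the abstract, and the integer-pixel np.roll shift is a simplification.

```python
import numpy as np

def refocus(sub_apertures, alpha):
    """Shift-and-add refocusing over a grid of sub-aperture views.

    sub_apertures : (U, V, H, W) or (U, V, H, W, C) array of views
    alpha         : refocus parameter; shift in pixels per unit angular step
    Returns one slice of the focal stack focused at the plane given by alpha.
    """
    U, V = sub_apertures.shape[:2]
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros_like(sub_apertures[0, 0], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            acc += np.roll(sub_apertures[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (U * V)

# the focal stack is this function evaluated over a range of alpha values;
# the paper then compresses that stack instead of the full 4D light field
```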

Optimize KNN Algorithm for Cerebrospinal Fluid Cell Diseases

  • Soobia Saeed; Afnizanfaizal Abdullah; NZ Jhanjhi
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.43-52 / 2024
  • Medical imaging plays an important part in the analysis of tumors and cerebrospinal fluid (CSF) leaks. Magnetic resonance imaging (MRI), combined with image segmentation, provides cross-sectional views of the body that make it convenient for medical specialists to examine patients. The images generated by MRI are detailed, enabling medical specialists to identify affected areas and diagnose disease; MRI is usually a basic part of diagnosis and treatment. In this research, we propose new techniques using a 4D MRI image segmentation process to detect brain tumors in the skull. We address issues related to the quality of brain disease images and of CSF leakage (fluid detected inside the brain). The aim of this research is to construct a framework that can separate cancer-damaged areas from non-tumor tissue. We use 4D light field image segmentation, followed by MATLAB modeling techniques, and measure the size of brain-damaged cells deep inside the CSF. Data are collected with a support vector machine (SVM) tool and MATLAB's built-in K-Nearest Neighbor (KNN) algorithm. We propose a 4D light field tool (LFT) modulation method that can be used for light field editing applications. Depending on the user's input, each ray is evaluated objectively using KNN so as to maintain the 4D redundancy. These light field approaches can help increase the efficiency of segmentation and of the light field editing pipeline, as they minimize boundary artifacts.
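The classification step mentioned in the abstract can be illustrated with a standard k-nearest-neighbour classifier from scikit-learn; the feature vectors and labels below are random placeholders standing in for whatever descriptors the MATLAB/SVM pipeline exports, and k = 5 is an arbitrary choice, so this is a generic sketch rather than the paper's optimized KNN.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# features: one row of descriptors per segmented region (intensity statistics,
# volume, etc.); labels: 1 = tumour / CSF-leak region, 0 = normal tissue.
# Both arrays are placeholders for the real exported data.
features = np.random.rand(200, 8)
labels = np.random.randint(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("held-out accuracy:", knn.score(X_test, y_test))
```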