• Title/Abstract/Keyword: Elemental images


Resolution enhancement of 3D images using computational integral imaging reconstruction method based on scale-variant magnification (크기가변 확대 기법 기반의 컴퓨터적 집적 영상 방법을 이용한 3D 영상의 해상도 개선)

  • Shin, Dong-Hak;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.12 / pp.2271-2276 / 2008
  • In this paper, we propose a computational integral imaging reconstruction (CIIR) method based on a scale-variant magnification technique for resolution-enhanced 3D images. First, we introduce an interference problem among elemental images in CIIR: magnification by a large factor causes interference among elemental images when they are applied to the superposition process, so the resolution of the reconstructed images is limited. To overcome this interference problem, we propose a method to calculate the minimum magnification factor for which CIIR remains valid. Magnification by this new factor enables the proposed method to reconstruct resolution-enhanced images. In addition, the computational load of the proposed method is less than that of the previous method. To confirm the feasibility of the proposed method, some experiments are carried out and the results are presented.
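
A minimal sketch of the CIIR superposition step described in this abstract, assuming a simple model in which each elemental image is magnified by a common factor, shifted by the lens pitch on the output plane, and averaged where copies overlap. The "minimum magnification" rule at the end is only an illustration of the overlap argument, not the authors' exact formula.

```python
import numpy as np
from scipy.ndimage import zoom  # bilinear magnification of elemental images

def ciir_plane(eia, lens_pitch_px, magnification):
    """Plain CIIR: magnify every elemental image, shift it by the lens pitch
    (in output-plane pixels), then average the overlapping contributions.

    eia : ndarray of shape (rows, cols, h, w), one elemental image per lens.
    """
    rows, cols, h, w = eia.shape
    mags = [[zoom(eia[r, c], magnification, order=1) for c in range(cols)]
            for r in range(rows)]
    mh, mw = mags[0][0].shape
    acc = np.zeros((mh + lens_pitch_px * (rows - 1),
                    mw + lens_pitch_px * (cols - 1)))
    cnt = np.zeros_like(acc)
    for r in range(rows):
        for c in range(cols):
            y, x = r * lens_pitch_px, c * lens_pitch_px
            acc[y:y + mh, x:x + mw] += mags[r][c]
            cnt[y:y + mh, x:x + mw] += 1
    return acc / np.maximum(cnt, 1)

# Hypothetical illustration of a "minimum magnification": the smallest factor
# for which adjacent magnified images still overlap on the output plane,
# i.e. M * w >= w + pitch  =>  M >= 1 + pitch / w.
def minimum_magnification(w, lens_pitch_px):
    return 1.0 + lens_pitch_px / w
```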

3D Visualization for Extremely Dark Scenes Using Merging Reconstruction and Maximum Likelihood Estimation

  • Lee, Jaehoon;Cho, Myungjin;Lee, Min-Chul
    • Journal of information and communication convergence engineering / v.19 no.2 / pp.102-107 / 2021
  • In this paper, we propose a new three-dimensional (3D) photon-counting integral imaging reconstruction method using a merging reconstruction process and maximum likelihood estimation (MLE). The conventional 3D photon-counting reconstruction method extracts photons from elemental images using a Poisson random process and estimates the scene using statistical methods such as MLE. However, its averaging overlap calculation can reduce the photon levels, so it may not visualize 3D objects in severely low-light environments. In addition, it may not generate high-quality reconstructed 3D images when the number of elemental images is insufficient. To solve these problems, we propose a new 3D photon-counting merging reconstruction method using MLE. It can visualize 3D objects without photon-level loss through the proposed overlapping calculation during the reconstruction process. We confirmed the image quality of our proposed method by performing optical experiments.
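
A rough sketch of the contrast the abstract draws between averaging and merging overlapping photon counts, assuming the standard model in which each elemental image records Poisson-distributed counts. The summing ("merging") step and the MLE normalization shown here are simplified illustrations, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_photons(elemental_image, expected_photons):
    """Simulate photon-counting detection: scale the normalized irradiance so
    it sums to the photon budget, then draw per-pixel Poisson counts."""
    irradiance = elemental_image / elemental_image.sum()
    return rng.poisson(expected_photons * irradiance)

def reconstruct(counts_stack, merge=True):
    """counts_stack: (N, H, W) photon-count images already shifted to a common
    depth plane.  merge=True sums the counts (keeps every detected photon);
    merge=False averages them, lowering the per-pixel photon level."""
    summed = counts_stack.sum(axis=0)
    return summed if merge else summed / counts_stack.shape[0]

# For Poisson data, the maximum-likelihood estimate of the underlying rate
# from N independent observations is simply their sample mean.
def mle_rate(counts_stack):
    return counts_stack.mean(axis=0)
```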

Nonlinear 3D Image Correlator Using Fast Computational Integral Imaging Reconstruction Method (고속 컴퓨터 집적 영상 복원 방법을 이용한 비선형 3D 영상 상관기)

  • Shin, Donghak;Lee, Joon-Jae
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.10 / pp.2280-2286 / 2012
  • In this paper, we propose a novel nonlinear 3D image correlator using a fast computational integral imaging reconstruction (CIIR) method. To implement the fast CIIR method, the magnification process is eliminated. In the proposed correlator, elemental images of the reference and target objects are picked up by lenslet arrays. Using these elemental images, reference and target plane images are reconstructed on the output plane by means of the proposed fast CIIR method. Then, through nonlinear cross-correlations between the reconstructed reference and target plane images, pattern recognition can be performed from the correlation outputs. The nonlinear correlation operation improves the recognition of 3D objects. To show the feasibility of the proposed method, some preliminary experiments are carried out and the results are presented in comparison with the conventional method.
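
A small sketch of a kth-law nonlinear cross-correlation in the Fourier domain, which is one common way to realize the "nonlinear cross-correlation" step mentioned above (k = 1 reduces to the ordinary matched filter). The specific nonlinearity used by the authors is not reproduced here.

```python
import numpy as np

def kth_law_correlation(reference, target, k=0.3):
    """Nonlinear (kth-law) cross-correlation of two equally sized plane images:
    the Fourier magnitude of the matched-filter product is raised to the power
    k while its phase is kept, then transformed back to the spatial domain."""
    R = np.fft.fft2(reference)
    T = np.fft.fft2(target)
    product = R * np.conj(T)                       # linear matched-filter term
    nonlinear = np.abs(product) ** k * np.exp(1j * np.angle(product))
    return np.abs(np.fft.ifft2(nonlinear))

# A correlation peak well above the output-plane mean suggests the target
# matches the reference at that reconstruction depth.
def peak_to_mean(corr):
    return corr.max() / corr.mean()
```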

Transformations and Their Analysis from a RGBD Image to Elemental Image Array for 3D Integral Imaging and Coding

  • Yoo, Hoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2273-2286 / 2018
  • This paper describes transformations between elemental image arrays and an RGBD image for three-dimensional integral imaging and transmission systems. Two transformations are introduced and analyzed in the proposed method. Normally, an RGBD image is utilized for efficient 3D data transmission even though 3D imaging and display are restricted. Thus, a pixel-to-pixel mapping is required to obtain an elemental image array from an RGBD image. However, such transformations and their analysis have received little attention in computational integral imaging and transmission. In this paper, we therefore introduce two different mapping methods, called the forward and backward mapping methods. The two mappings are analyzed and compared in terms of complexity and visual quality. In addition, a special condition, named the hole-free condition in this paper, is proposed to understand the methods analytically. To verify our analysis, we carry out experiments on test images, and the results indicate that the proposed methods and their analysis perform well in terms of computational cost and visual quality.
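
A highly simplified sketch of a forward (pixel-to-pixel) mapping from an RGBD image into an elemental image array under a pinhole-array model. The disparity formula, lens geometry, and all parameter names here are assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np

def rgbd_to_eia_forward(rgb, depth, lenses=(10, 10), ei_size=64, k=500.0):
    """Forward mapping: project every RGBD pixel into every elemental image.

    rgb   : (H, W, 3) color image (central view).
    depth : (H, W) depth values (larger = farther).
    k     : assumed constant combining gap and lens pitch; the per-lens
            disparity of a pixel at depth z is modeled as k / z pixels.
    Holes may remain wherever no source pixel lands, which is exactly the
    situation the hole-free condition is meant to rule out."""
    H, W, _ = rgb.shape
    rows, cols = lenses
    eia = np.zeros((rows, cols, ei_size, ei_size, 3))
    for r in range(rows):
        for c in range(cols):
            # lens index offsets measured from the array center
            du, dv = r - (rows - 1) / 2.0, c - (cols - 1) / 2.0
            for y in range(H):
                for x in range(W):
                    d = k / max(depth[y, x], 1e-6)     # depth-dependent shift
                    ey = int(y * ei_size / H + du * d)
                    ex = int(x * ei_size / W + dv * d)
                    if 0 <= ey < ei_size and 0 <= ex < ei_size:
                        eia[r, c, ey, ex] = rgb[y, x]
    return eia
```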

Three-dimensional Display of Microscopic Specimen using Integral Imaging Microscope and Display (집적 영상 현미경과 집적 영상 디스플레이를 이용한 미세시료의 3차원 영상 재생)

  • Lim, Young-Tae;Park, Jae-Hyeung;Kwon, Ki-Chul;Kim, Nam
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11B / pp.1311-1319 / 2009
  • A microscopic specimen was captured by an integral imaging microscope and displayed as a three-dimensional image by an integral imaging display system. We applied the generalized relationship between pickup and display using two different lens arrays to our integral imaging microscope and display system. In order to display the three-dimensional microscopic image, scaling of the captured elemental images is required. We analyzed the effect of the scaling coefficient in terms of the distortion of the displayed three-dimensional image and the loss of the captured elemental images. In our experiment, the microscopic specimen is picked up by an integral imaging microscope with a 125 μm elemental lens pitch and displayed as a three-dimensional image by an integral imaging display system with a 1 mm elemental lens pitch. The scaling coefficient was chosen to minimize the elemental image loss.
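
A tiny sketch of the scaling step implied above, assuming the elemental images are simply resampled by the ratio of the display lens pitch to the pickup lens pitch (1 mm / 125 μm = 8 for the reported setup). How the authors actually trade displayed-image distortion against elemental-image loss is not reproduced here.

```python
import numpy as np
from scipy.ndimage import zoom

PICKUP_PITCH_UM = 125.0    # microscope lens-array pitch (from the abstract)
DISPLAY_PITCH_UM = 1000.0  # display lens-array pitch (1 mm)

def rescale_elemental_images(eia, coefficient=None):
    """Resample every elemental image by a common scaling coefficient.
    eia: ndarray of shape (rows, cols, h, w)."""
    if coefficient is None:
        coefficient = DISPLAY_PITCH_UM / PICKUP_PITCH_UM   # = 8 here
    rows, cols = eia.shape[:2]
    return np.stack([
        np.stack([zoom(eia[r, c], coefficient, order=1) for c in range(cols)])
        for r in range(rows)
    ])
```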

Three-Dimensional Automatic Target Recognition System Based on Optical Integral Imaging Reconstruction

  • Lee, Min-Chul;Inoue, Kotaro;Cho, Myungjin
    • Journal of information and communication convergence engineering / v.14 no.1 / pp.51-56 / 2016
  • In this paper, we present a three-dimensional (3-D) automatic target recognition system based on optical integral imaging reconstruction. In integral imaging, elemental images of the reference and target 3-D objects are obtained through a lenslet array or a camera array. Reconstructed 3-D images at various reconstruction depths can then be optically generated on the output plane by back-projecting these elemental images onto a display panel. 3-D automatic target recognition can be implemented using computational integral imaging reconstruction and digital nonlinear correlation filters, but these methods require non-trivial computation time for reconstruction and recognition. Instead, we implement 3-D automatic target recognition using optical cross-correlation between the reconstructed 3-D reference and target images at the same reconstruction depth. Our method relies on an all-optical structure to realize a real-time 3-D automatic target recognition system. In addition, we use a nonlinear correlation filter to improve recognition performance. To verify the proposed method, we carry out optical experiments and report the recognition results.

Defocusing image generation corresponding to focusing plane by using spatial information of 3D objects (3차원 물체의 공간정보를 이용한 임의의 집속면에 대응하는 디포커싱 영상 구현)

  • Jang, Jae-Young;Kim, Young-Il;Shin, Donghak;Lee, Byung-Gook;Lee, Joon-Jae
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.4 / pp.981-988 / 2013
  • In this paper, we propose a method to generate defocused images corresponding to a chosen focusing plane, using the 3D spatial information of an object obtained through the pickup process of the integral imaging technique. In the proposed method, the focused and defocused images are generated by a convolution operation between the elemental images and a δ-function array. We observed how the degree of defocus changes with the distance of the focusing plane. To show the feasibility of the proposed method, some preliminary experiments are carried out and the results are presented.
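
A compact sketch of generating a plane-focused (and elsewhere defocused) image by convolving the elemental image mosaic with a periodic δ-function array, as described above. Here the δ spacing is simply treated as the disparity of the desired focusing plane, so the geometry is illustrative rather than the authors' exact derivation.

```python
import numpy as np
from scipy.signal import fftconvolve

def delta_array(shape, period):
    """Periodic 2D delta-function array with the given pixel period."""
    d = np.zeros(shape)
    d[::period, ::period] = 1.0
    return d

def refocus(eia_mosaic, period):
    """Convolve the full elemental-image mosaic with a delta array whose
    period matches the disparity of the chosen focusing plane: points on that
    plane add up in register, everything else appears defocused."""
    kernel = delta_array(eia_mosaic.shape, period)
    out = fftconvolve(eia_mosaic, kernel, mode='same')
    return out / kernel.sum()   # normalize by the number of superposed copies
```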

Neighboring Elemental Image Exemplar Based Inpainting for Computational Integral Imaging Reconstruction with Partial Occlusion

  • Ko, Bumseok;Lee, Byung-Gook;Lee, Sukho
    • Journal of the Optical Society of Korea / v.19 no.4 / pp.390-396 / 2015
  • We propose a partial occlusion removal method for computational integral imaging reconstruction (CIIR) based on the exemplar-based inpainting technique. The proposed method is an improved version of the original linear inpainting based CIIR (LI-CIIR), which uses an inpainting technique to fill in the data-missing region. The LI-CIIR shows good results for images that contain objects with smooth surfaces. However, if the object has a textured surface, the result of the LI-CIIR deteriorates, since linear inpainting cannot recover the textured data in the data-missing region well. In this work, we utilize exemplar-based inpainting to fill in the textured data in the data-missing region. We call the proposed method the neighboring elemental image exemplar-based inpainting (NEI-exemplar inpainting) method, since it uses sources from neighboring elemental images to fill in the data-missing region. Furthermore, we also propose an automatic occluding-region extraction method based on a mutual constraint using depth estimation (MC-DE) and level-set-based bimodal segmentation. Experimental results show the validity of the proposed system.
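
A bare-bones sketch of the exemplar-style idea of filling an occluded (data-missing) patch with the best-matching patch taken from a neighboring elemental image. The patch priority ordering, boundary blending, and automatic occlusion-mask extraction of the actual method are omitted, so this is only an assumption-laden illustration.

```python
import numpy as np

def fill_from_neighbor(image, mask, neighbor, patch=9):
    """Replace each masked patch in `image` with the neighbor patch that best
    matches the surrounding valid pixels (sum of squared differences).

    image, neighbor : 2D float arrays of equal shape (current and neighboring
                      elemental image); mask is True where data are missing."""
    half = patch // 2
    out = image.copy()
    H, W = image.shape
    for y in range(half, H - half, patch):
        for x in range(half, W - half, patch):
            sl = np.s_[y - half:y + half + 1, x - half:x + half + 1]
            if not mask[sl].any():
                continue                      # nothing missing in this patch
            valid = ~mask[sl]
            best, best_err = None, np.inf
            # exhaustive search over neighbor patches (slow but simple)
            for ny in range(half, H - half, 2):
                for nx in range(half, W - half, 2):
                    cand = neighbor[ny - half:ny + half + 1,
                                    nx - half:nx + half + 1]
                    err = ((cand - image[sl]) ** 2)[valid].sum()
                    if err < best_err:
                        best, best_err = cand, err
            out[sl] = np.where(mask[sl], best, out[sl])
    return out
```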

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea / v.14 no.4 / pp.388-394 / 2010
  • In this paper, the object recognition performance of a photon-counting integral imaging system is quantitatively compared with that of a conventional gray-scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using an integral imaging setup. We assume that the elemental image detection follows a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator are applied to the photon-counting elemental image set in order to reconstruct the original 3D scene. To evaluate the photon-counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated for both varied and fixed total numbers of photons in the reconstructed sectional image while the total number of image channels in the integral imaging system is changed. It is quantitatively illustrated that the recognition performance of the photon-counting integral imaging system can approach that of a conventional gray-scale imaging system as the number of image viewing channels in the photon-counting integral imaging (PCII) system is increased up to a threshold point. We also present experiments to find the threshold point on the total number of image channels in the PCII system that guarantees recognition performance comparable to that of a gray-scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon-counting and gray-scale images.
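
A small sketch of the Poisson photon-counting detection model and of a normalized correlation peak of the kind used for the comparison above. The photon budget, the back-projection details, and the channel count are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def photon_count_image(irradiance, n_photons):
    """Model photon-limited detection of one elemental image: the per-pixel
    Poisson rate is the normalized irradiance scaled by the photon budget."""
    rate = n_photons * irradiance / irradiance.sum()
    return rng.poisson(rate).astype(float)

def normalized_correlation_peak(recon_a, recon_b):
    """Peak of the normalized (circular) cross-correlation between two
    reconstructed sectional images; equals 1 for identical inputs."""
    a = (recon_a - recon_a.mean()) / (recon_a.std() + 1e-12)
    b = (recon_b - recon_b.mean()) / (recon_b.std() + 1e-12)
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    return np.abs(corr).max() / a.size
```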