• Title/Summary/Keyword: depth image


Simplified Integral Imaging Pickup Method for Real Objects Using a Depth Camera

  • Li, Gang;Kwon, Ki-Chul;Shin, Gwan-Ho;Jeong, Ji-Seong;Yoo, Kwan-Hee;Kim, Nam
    • Journal of the Optical Society of Korea
    • /
    • v.16 no.4
    • /
    • pp.381-385
    • /
    • 2012
  • In this paper, we present a novel integral imaging pickup method. We extract each pixel's actual depth from a real object's surface using a depth camera, then generate elemental images based on the depth map. Since the proposed method generates elemental images without a lens array, it simplifies the pickup process and overcomes some disadvantages of the conventional optical pickup process that uses a lens array. As a result, we can display a three-dimensional (3D) image in integral imaging. An experiment is presented to show the usefulness of the proposed method. Although the pickup process is simplified, the experimental results show that the method can still display a full-motion-parallax image equivalent to one reconstructed by the conventional method. In addition, with improved calculation speed, it would be useful in a real-time integral imaging display system.
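
As a rough illustration of lens-free pickup, the sketch below (the function name, geometry, and parameters are my own simplification, not the authors' code) shifts each pixel by a disparity proportional to `gap / depth` to form one elemental image per virtual pinhole:

```python
import numpy as np

def elemental_images(color, depth, n_lens=3, gap=8.0):
    """Toy computational pickup: one elemental image per virtual pinhole,
    each object pixel shifted horizontally by a depth-dependent disparity."""
    h, w = color.shape[:2]
    out = np.zeros((n_lens,) + color.shape, dtype=color.dtype)
    xs = np.arange(w)[None, :].repeat(h, axis=0)   # source column of each pixel
    ys = np.arange(h)[:, None].repeat(w, axis=1)   # source row of each pixel
    for k in range(n_lens):
        offset = k - n_lens // 2                   # pinhole index relative to centre
        xp = xs + np.round(offset * gap / depth).astype(int)  # disparity shift
        ok = (xp >= 0) & (xp < w)                  # keep shifts inside the image
        out[k][ys[ok], xp[ok]] = color[ys[ok], xs[ok]]
    return out
```

The centre elemental image (offset 0) reproduces the input; off-centre ones shift near pixels more than far ones, which is what produces motion parallax on display.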

A Study on Super Resolution Image Reconstruction for Acquired Images from Naval Combat System using Generative Adversarial Networks (생성적 적대 신경망을 이용한 함정전투체계 획득 영상의 초고해상도 영상 복원 연구)

  • Kim, Dongyoung
    • Journal of Digital Contents Society
    • /
    • v.19 no.6
    • /
    • pp.1197-1205
    • /
    • 2018
  • In this paper, we perform Single Image Super Resolution (SISR) on images acquired from the EOTS or IRST of a naval combat system. To conduct super resolution, we use Generative Adversarial Networks (GANs), which consist of a generative model that creates a super-resolution image from a given low-resolution image and a discriminative model that determines whether the generated super-resolution image qualifies as a high-resolution image. We adjust various learning parameters: the crop size of the input image, the depth of the sub-pixel layer, and the type of training images. For evaluation, we apply not only general image quality metrics but also feature descriptor methods. As a result, a larger crop size, a deeper sub-pixel layer, and high-resolution training images yield good performance.
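
The "sub-pixel layer" tuned above is the depth-to-space rearrangement commonly used as the upscaling step in SISR networks. A minimal NumPy version, assuming the usual (C·r², H, W) channel layout (the function name is my own):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel (depth-to-space) layer: rearrange a (C*r*r, H, W) feature
    map into (C, H*r, W*r), upscaling spatial resolution by factor r."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split channels into the r x r sub-grid
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Stacking more convolutions before this rearrangement is what "depth of the sub-pixel layer" varies in the experiments.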

New Finger-vein Recognition Method Based on Image Quality Assessment

  • Nguyen, Dat Tien;Park, Young Ho;Shin, Kwang Yong;Park, Kang Ryoung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.2
    • /
    • pp.347-365
    • /
    • 2013
  • The performance of finger-vein recognition methods is limited by camera optical defocusing, the light-scattering effect of skin, and individual variations in the skin depth, density, and thickness of vascular patterns. Consequently, all of these factors may affect the image quality, but few studies have conducted quality assessments of finger-vein images. Therefore, we developed a new finger-vein recognition method based on image quality assessment. This research is novel compared with previous methods in four respects. First, the vertical cross-sectional profiles are extracted to detect the approximate positions of vein regions in a given finger-vein image. Second, the accurate positions of the vein regions are detected by checking the depth of the vein's profile using various depth thresholds. Third, the quality of the finger-vein image is measured by using the number of detected vein points in relation to the depth thresholds, which allows individual variations of vein density to be considered for quality assessment. Fourth, by assessing the quality of input finger-vein images, inferior-quality images are not used for recognition, thereby enhancing the accuracy of finger-vein recognition. Experiments confirmed that the performance of finger-vein recognition systems that incorporated the proposed quality assessment method was superior to that of previous methods.
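
A toy version of the quality idea, counting profile "valley" points against depth thresholds (the per-column baseline and scoring details here are my simplification, not the paper's exact procedure):

```python
import numpy as np

def vein_quality(image, depth_thresholds):
    """Along each vertical cross-sectional profile, a vein appears as an
    intensity valley; count how many points dip below the profile baseline
    by at least each threshold, and use the counts as quality scores
    (more detected vein points -> better image quality)."""
    scores = {}
    for t in depth_thresholds:
        count = 0
        for col in image.T:                 # one vertical profile per column
            depths = col.max() - col        # valley depth below the baseline
            count += int(np.sum(depths >= t))
        scores[t] = count
    return scores
```

Images whose scores stay low across all thresholds would be rejected before recognition, which is the filtering step the abstract credits for the accuracy gain.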

Generation of an eye-contacted view using color and depth cameras (컬러와 깊이 카메라를 이용한 시점 일치 영상 생성 기법)

  • Hyun, Jee-Ho;Han, Jae-Young;Won, Jong-Pil;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.8
    • /
    • pp.1642-1652
    • /
    • 2012
  • Generally, the camera in a tele-presence system is not located at the center of the display, which causes incorrect eye contact between speakers and reduces the sense of realism during conversation. To solve this problem, we propose a new intermediate-view reconstruction algorithm that uses both a color camera and a depth camera and applies depth image based rendering (DIBR). The proposed algorithm includes an efficient hole-filling method using the arithmetic mean of neighboring pixels and an efficient boundary-noise removal method that expands the edge region of the depth image. Experiments show that the generated eye-contacted image has good quality.
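
The hole-filling step, taking the arithmetic mean of valid neighbors, can be sketched as follows (a minimal version; the iterative sweep and 4-neighborhood are assumptions on my part):

```python
import numpy as np

def fill_holes_mean(img, hole_mask):
    """Iteratively fill hole pixels with the arithmetic mean of their valid
    4-neighbours; holes shrink inward until none remain."""
    img = img.astype(float).copy()
    mask = hole_mask.copy()
    h, w = img.shape
    while mask.any():
        progress = False
        for y, x in zip(*np.nonzero(mask)):
            vals = [img[yy, xx]
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= yy < h and 0 <= xx < w and not mask[yy, xx]]
            if vals:                        # at least one valid neighbour exists
                img[y, x] = sum(vals) / len(vals)
                mask[y, x] = False
                progress = True
        if not progress:                    # fully isolated region: give up
            break
    return img
```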

Image Synthesis and Multiview Image Generation using Control of Layer-based Depth Image (레이어 기반의 깊이영상 조절을 이용한 영상 합성 및 다시점 영상 생성)

  • Seo, Young-Ho;Yang, Jung-Mo;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.8
    • /
    • pp.1704-1713
    • /
    • 2011
  • This paper proposes a method to generate multiview images from a synthesized image consisting of layered objects. A camera system consisting of a depth camera and an RGB camera captures the objects and extracts 3-dimensional information. Considering the position and distance of the image being synthesized, the objects are composited into a layered image. The synthesized image is then expanded to multiview images using multiview generation tools. In this paper, we synthesized two images consisting of objects and a human, and generated multiview images with 37 viewpoints from the synthesized images.
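
Layer-based synthesis reduces to a per-pixel depth test: the nearest layer wins. A minimal sketch (the `(color, depth, mask)` layer format is an assumption, not the paper's data structure):

```python
import numpy as np

def composite_layers(layers):
    """Synthesize one colour image and one depth map from a list of
    (color, depth, mask) layers: at every pixel the layer with the
    smallest depth (closest to the camera) is visible."""
    h, w = layers[0][0].shape[:2]
    out_c = np.zeros((h, w), dtype=float)
    out_d = np.full((h, w), np.inf)        # start infinitely far away
    for color, depth, mask in layers:
        closer = mask & (depth < out_d)    # visible where this layer is nearer
        out_c[closer] = color[closer]
        out_d[closer] = depth[closer]
    return out_c, out_d
```

The resulting colour image and depth map pair is exactly what multiview generation tools then warp into the individual viewpoints.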

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.1-21
    • /
    • 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image representation that carries information along three-dimensional axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, and much work has been done to calculate depth maps. We review the status of depth map estimation across techniques, study areas, and models from papers published over the last 20 years, surveying both traditional depth-mapping techniques and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. The review covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics, and also discusses the subdomains within each method, such as supervised, unsupervised, and semi-supervised approaches. We further elaborate on the challenges of the different methods and conclude with ideas for future research in depth map estimation.

A Study of Generating Depth map for 3D Space Structure Recovery

  • Ban, Kyeong-Jin;Kim, Jong-Chan;Kim, Eung-Kon;Kim, Chee-Yong
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.12
    • /
    • pp.1855-1862
    • /
    • 2010
  • In virtual reality, service technologies for real-time interaction systems, 3-dimensional contents, 3D TV, and augmented reality are developing rapidly in quality. These services have difficulty generating the depth values essential for recovering 3D space and giving solidity to existing contents. Hence, research on effective depth-map generation from 2D images is necessary. This paper describes the shortcomings of existing depth-map generation methods for recovering 3D space from a 2D image and proposes an enhanced depth-map generation algorithm that complements those shortcomings by defining the depth direction based on the vanishing point within the image.

Extraction of Snow Cover Area and Depth Using MODIS Image for 5 River Basins South Korea (MODIS 위성영상을 이용한 국내 5대강 유역 적설분포 및 적설심 추출)

  • Hong, U-Yong;Sin, Hyeong-Jin;Kim, Seong-Jun
    • KCID journal
    • /
    • v.14 no.2
    • /
    • pp.225-235
    • /
    • 2007
  • The shape of the streamflow hydrograph during early spring is largely controlled by the area and depth of snow cover, especially in mountainous areas. To simulate the snowmelt streamflow of a watershed, we need information on snow cover extent and snow depth distribution as parameters and input data for hydrological models. The purpose of this study is to suggest a method for extracting snow cover area and snow depth distribution using Terra MODIS images. Snow cover extent for South Korea was extracted for the period from December 2000 to April 2006. Within the snow cover area, snow depth was interpolated using data from 69 meteorological observation stations. With these data, the next step is to run a hydrological model that incorporates the snow-related data, compare the simulated streamflow with observed data, and check the applicability for snowmelt simulation.
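
The abstract does not name its interpolator; inverse-distance weighting is one common choice for spreading sparse station depths over a snow-covered area, sketched here under that assumption (function name and arguments are mine):

```python
import numpy as np

def idw_snow_depth(stations, values, grid_pts, power=2.0):
    """Inverse-distance-weighted interpolation: each grid point's snow depth
    is a weighted mean of station depths, weights = 1 / distance**power."""
    out = np.empty(len(grid_pts))
    for i, p in enumerate(grid_pts):
        d = np.hypot(stations[:, 0] - p[0], stations[:, 1] - p[1])
        if np.any(d == 0):                 # grid point coincides with a station
            out[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```

In practice the interpolation would be restricted to cells flagged as snow-covered in the MODIS product, so bare-ground cells keep zero depth.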

The Analysis of Amplitude and Phase Image for Acoustic Microscope Using Quadrature Technique (쿼드러춰 방식에 의한 초음파현미경의 진폭과 위상영상 분석)

  • Kim, Hyun;Jun, Kye-Suk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.3
    • /
    • pp.55-61
    • /
    • 1999
  • In this study, we constructed an acoustic microscope using the quadrature technique and analyzed the relative variation of image intensity and image quality by reconstructing amplitude and phase images of surface defects with tiny height variations. We built a scanning acoustic microscope using a focused transducer with a 3 MHz center frequency and a quadrature detector, fabricated aluminum samples with round defects of differing depths, and reconstructed amplitude and phase images of the samples. One sample has round defects with a 2 mm diameter and 100 µm depth; the other has round defects with a 4 mm diameter and 5 mm depth. Line scans of the sample with 100 µm defects show that the intensity variation of the amplitude image is 7% while that of the phase image is 89%, so the phase image has better contrast for this sample. In contrast, the amplitude image has better contrast for the sample with 5 mm deep defects. Accordingly, amplitude and phase images differ greatly for defects whose depth variation is around one wavelength. Consequently, an acoustic microscope using a quadrature detector detects defects with height variations of less than one wavelength more effectively than one using an envelope detector, and the phase and amplitude images can be used in a complementary manner to detect defects with tiny height variations.
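
The quadrature detection itself is standard I/Q demodulation: mix the echo with cosine and sine at the transducer frequency, low-pass, then take magnitude and angle. A minimal sketch (the averaging low-pass and scaling are simplifications; per-pixel values like these form the amplitude and phase images):

```python
import numpy as np

def quadrature_detect(signal, fc, fs):
    """Quadrature (I/Q) detection of an echo at carrier frequency fc:
    amplitude = sqrt(I^2 + Q^2), phase = atan2(Q, I)."""
    t = np.arange(len(signal)) / fs
    i = signal * np.cos(2 * np.pi * fc * t)    # in-phase mixing
    q = -signal * np.sin(2 * np.pi * fc * t)   # quadrature mixing
    I, Q = i.mean(), q.mean()                  # crude low-pass (DC average)
    return 2.0 * np.hypot(I, Q), np.arctan2(Q, I)
```

Because phase wraps every wavelength, the phase image resolves sub-wavelength height steps (the 100 µm defects) while the amplitude image handles steps of a wavelength or more, matching the complementary behaviour reported above.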

Depth-map Preprocessing Algorithm Using Two Step Boundary Detection for Boundary Noise Removal (경계 잡음 제거를 위한 2단계 경계 탐색 기반의 깊이지도 전처리 알고리즘)

  • Pak, Young-Gil;Kim, Jun-Ho;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.12
    • /
    • pp.555-564
    • /
    • 2014
  • The boundary noise in image synthesis using DIBR consists of noisy pixels that are separated from foreground objects into the background region. It is generated mainly by edge misalignment between the reference image and the depth map, or by blurred edges in the reference image. Since hole areas are generally filled with neighboring pixels, boundary noise adjacent to a hole is the main cause of quality degradation in synthesized images. To solve this problem, this paper proposes a new boundary noise removal algorithm based on preprocessing of the depth map. The most common way to eliminate boundary noise caused by misalignment is to modify the depth map so that its boundary matches that of the reference image. Most conventional methods, however, show poor boundary detection performance, especially on blurred edges, because they rely on a simple boundary search that exploits the signal gradient. The proposed method adopts a two-step hierarchical approach to boundary detection that effectively detects the boundary between the transition and background regions. Experimental results show that the proposed method outperforms conventional ones both subjectively and objectively.
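
A toy two-step alignment in the spirit of the abstract: first a coarse gradient search locates the depth boundary in each row, then it is refined toward a nearby color-image edge so the two boundaries coincide before DIBR (all names and the per-row, single-boundary scheme are my simplification, not the paper's algorithm):

```python
import numpy as np

def align_depth_edges(depth, color_edges, search=2):
    """Snap each row's depth boundary (largest horizontal depth jump) onto
    the nearest colour-image edge within a small search window."""
    out = depth.copy()
    h, w = depth.shape
    for y in range(h):
        grad = np.abs(np.diff(depth[y].astype(float)))
        b = int(np.argmax(grad))                  # step 1: coarse depth boundary
        lo, hi = max(0, b - search), min(w - 1, b + search + 1)
        cand = [x for x in range(lo, hi) if color_edges[y, x]]
        if cand:                                  # step 2: refine to colour edge
            e = min(cand, key=lambda x: abs(x - b))
            if e > b:                             # extend foreground depth right
                out[y, b + 1:e + 1] = depth[y, b]
            elif e < b:                           # pull background depth left
                out[y, e:b + 1] = depth[y, b + 1]
    return out
```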