• Title/Summary/Keyword: depth image-based


Depth Extraction of Partially Occluded 3D Objects Using Axially Distributed Stereo Image Sensing

  • Lee, Min-Chul;Inoue, Kotaro;Konishi, Naoki;Lee, Joon-Jae
    • Journal of Information and Communication Convergence Engineering, v.13 no.4, pp.275-279, 2015
  • There are several methods to record three-dimensional (3D) information of objects, such as lens-array-based integral imaging, synthetic aperture integral imaging (SAII), computer-synthesized integral imaging (CSII), axially distributed image sensing (ADS), and axially distributed stereo image sensing (ADSS). The ADSS method is capable of recording partially occluded 3D objects and reconstructing high-resolution slice plane images. In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental stereo image pairs are recorded by simply moving the stereo camera along the optical axis, and the recorded elemental image pairs are used to reconstruct 3D slice images with a computational reconstruction algorithm. To extract the depth information of a partially occluded 3D object, we apply edge enhancement and a simple block matching algorithm to the reconstructed slice image pairs. To demonstrate the proposed method, we carry out preliminary experiments and present the results.
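The block matching step this abstract relies on can be sketched in a few lines; the following is a minimal sum-of-absolute-differences (SAD) illustration, not the authors' implementation, and the block size and disparity search range are assumed values:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Estimate per-pixel disparity between a stereo pair by SAD block
    matching (illustrative sketch; parameters are assumptions)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
                sad = np.abs(ref - cand).sum()   # matching cost
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d                  # disparity is inverse to depth
    return disp
```

Once disparity is known, depth follows from the usual baseline-times-focal-length-over-disparity relation for the camera geometry in use.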

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing, v.39 no.1, pp.1-21, 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle such problems, depth maps are generated for the respective image planes. Depth maps, or depth images, are single-image representations that carry information along three axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. For many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, depth estimation is a fundamental task. Much work has been done to calculate depth maps. We reviewed the status of depth map estimation using different techniques from several papers, study areas, and models applied over the last 20 years. We surveyed different depth-mapping techniques based on both traditional approaches and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of the state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. This study covers the critical points of each method from different perspectives, such as datasets, procedures performed, types of algorithms, loss functions, and well-known evaluation metrics. Similarly, this paper also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches. We also elaborate on the challenges of the different methods. We conclude by discussing new ideas for future research in depth map estimation.

Creating Architectural Scenes from Photographs Using Model-based Stereo and Image Subregioning

  • Aphiboon, Jitti;Papasratorn, Borworn
    • Proceedings of the IEEK Conference, 2002.07c, pp.1666-1669, 2002
  • In the process of creating architectural scenes from photographs using Model-based Stereo [1], the geometric model is used as prior information to solve correspondence problems and recover the depth or disparity of real scenes. This paper presents an Image Subregioning algorithm that divides the left and right images into several rectangular sub-images. The division is done according to the estimated depth of the real scene using a heuristic approach. The depth difference between the reality and the model can be partitioned into depth levels, which reduces the disparity search range in the similarity function. For architectural scenes with complex depth, experiments show that the proposed algorithm achieves accurate disparity maps and better results when rendering scenes.
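The core idea of subregioning, narrowing the disparity search per rectangular sub-image from a model-predicted depth, can be sketched as follows; the grid partition and safety margin are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def subregion_disparity_windows(model_disp, rows, cols, margin=2):
    """Partition a model-predicted disparity map into a rows x cols grid of
    rectangular subregions and return a narrowed [lo, hi] disparity search
    window per region (illustrative sketch of the subregioning idea)."""
    h, w = model_disp.shape
    windows = {}
    for r in range(rows):
        for c in range(cols):
            sub = model_disp[r*h//rows:(r+1)*h//rows,
                             c*w//cols:(c+1)*w//cols]
            lo = max(0, int(sub.min()) - margin)   # narrowed lower bound
            hi = int(sub.max()) + margin           # narrowed upper bound
            windows[(r, c)] = (lo, hi)
    return windows
```

A stereo matcher can then scan only `[lo, hi]` inside each rectangle instead of the full range, which is where the reported speed and accuracy gains would come from.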


GPGPU based Depth Image Enhancement Algorithm (GPGPU 기반의 깊이 영상 화질 개선 기법)

  • Han, Jae-Young;Ko, Jin-Woong;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.12, pp.2927-2936, 2013
  • In this paper, we propose a noise reduction and hole removal algorithm to improve the quality of depth images used for creating 3D contents. The proposed algorithm uses both the depth image and the corresponding color image. First, an intensity image is generated by converting the RGB color space into the HSI color space. Noise is then removed using the differences in spatial distance and depth between a reference pixel and its neighbors from the depth image, together with the differences in intensity values from the color image. Next, the proposed hole filling method fills the detected holes using the differences in Euclidean distance and intensity values between reference and neighbor pixels from the color image. Finally, we apply a parallel GPGPU structure to the proposed algorithm to speed up its processing time for real-time applications. The experimental results show that the proposed algorithm performs better than other conventional algorithms. In particular, it is more effective in reducing edge blurring and removing noise and holes.
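The joint weighting of spatial distance, depth difference, and intensity difference described above resembles a cross/joint bilateral filter; the following is a minimal CPU sketch of that idea, with hypothetical sigma values and none of the paper's hole-filling or GPGPU mapping:

```python
import numpy as np

def joint_depth_filter(depth, intensity, radius=2,
                       sigma_s=2.0, sigma_d=4.0, sigma_i=10.0):
    """Denoise a depth map with Gaussian weights on spatial distance,
    depth difference, and intensity difference of the aligned color
    image (illustrative sketch; all sigmas are assumed values)."""
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    dd = float(depth[y+dy, x+dx]) - float(depth[y, x])
                    di = float(intensity[y+dy, x+dx]) - float(intensity[y, x])
                    wgt = np.exp(-(dx*dx + dy*dy) / (2*sigma_s**2)
                                 - dd*dd / (2*sigma_d**2)
                                 - di*di / (2*sigma_i**2))
                    acc += wgt * depth[y+dy, x+dx]
                    wsum += wgt
            out[y, x] = acc / wsum   # normalized weighted average
    return out
```

Because neighbors across a large depth or intensity difference receive near-zero weight, flat regions are smoothed while object edges are preserved, which is exactly the property needed before DIBR-style synthesis.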

Depth Image Distortion Correction Method according to the Position and Angle of Depth Sensor and Its Hardware Implementation (거리 측정 센서의 위치와 각도에 따른 깊이 영상 왜곡 보정 방법 및 하드웨어 구현)

  • Jang, Kyounghoon;Cho, Hosang;Kim, Geun-Jun;Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.5, pp.1103-1109, 2014
  • Motion recognition systems have been broadly studied in the digital image and video processing fields, and methods based on depth images have recently proven very useful. However, the recognition accuracy of depth-image-based methods degrades because the size and shape of objects are distorted depending on the angle of the depth sensor. Therefore, distortion correction of the depth sensor is essential for a recognition system to perform well. In this paper, we propose a pre-processing algorithm to improve such motion recognition systems: depth data from the sensor are converted to real-world coordinates, corrected for the sensor angle, and then inverse-converted back to projective coordinates. The proposed system was developed as a Windows program using OpenCV and tested in real time with a Kinect. In addition, it was designed in Verilog-HDL and verified on a Xilinx Zynq-7000 FPGA board.
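The convert/rotate/reproject pipeline described in the abstract can be sketched with a pinhole camera model; the intrinsics and the single tilt axis below are assumptions for illustration, not the paper's sensor parameters:

```python
import numpy as np

def correct_depth_tilt(depth, fx, fy, cx, cy, tilt_deg):
    """Convert a depth image to camera-space coordinates, undo a known
    sensor tilt about the horizontal axis, and reproject to a corrected
    depth image (illustrative sketch; intrinsics are hypothetical)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx              # back-project to real-world coords
    y = (v - cy) * z / fy
    t = np.deg2rad(tilt_deg)           # rotate about x-axis to undo tilt
    y2 = y * np.cos(t) - z * np.sin(t)
    z2 = y * np.sin(t) + z * np.cos(t)
    corrected = np.zeros_like(z)
    valid = z2 > 0                     # keep points in front of the camera
    u2 = np.clip(np.round(x[valid] * fx / z2[valid] + cx), 0, w - 1).astype(int)
    v2 = np.clip(np.round(y2[valid] * fy / z2[valid] + cy), 0, h - 1).astype(int)
    corrected[v2, u2] = z2[valid]      # forward-map into projective space
    return corrected
```

A hardware version would pipeline these per-pixel multiply/rotate/divide stages, which is what makes the algorithm a good fit for the FPGA implementation the paper reports.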

FPGA Implementation of Differential CORDIC-based high-speed phase calculator for 3D Depth Image Extraction (3차원 Depth Image 추출용 Differential CORDIC 기반 고속 위상 연산기의 FPGA 구현)

  • Koo, Jung-youn;Shin, Kyung-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2013.10a, pp.350-353, 2013
  • In this paper, a hardware implementation of a phase calculator for extracting 3D depth images from a TOF (Time-Of-Flight) sensor is proposed. The designed phase calculator, which adopts a redundant binary number system and a pipelined architecture to improve throughput and speed, performs the arctangent operation using the vectoring mode of the DCORDIC algorithm. Fixed-point MATLAB simulations were carried out to determine the optimized bit-widths and number of iterations. The designed phase calculator is verified by emulating the restoration of virtual 3D data using MATLAB/Simulink and FPGA-in-the-loop verification, and the estimated performance is about 7.5 Gbps at a 469 MHz clock frequency.
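The vectoring-mode arctangent at the heart of this design can be illustrated with plain (non-redundant) CORDIC; the DCORDIC variant in the paper adds redundant-digit arithmetic for pipelining, which this sketch does not reproduce:

```python
import math

def cordic_atan2(y, x, iterations=16):
    """Vectoring-mode CORDIC arctangent: rotate (x, y) toward the x-axis
    by decreasing micro-angles atan(2^-i) and accumulate the total
    rotation. Assumes x > 0, so the result lies in (-pi/2, pi/2)."""
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0          # drive y toward zero
        x, y = x - d * y * 2**-i, y + d * x * 2**-i
        angle -= d * math.atan(2**-i)       # accumulate micro-rotation
    return angle
```

In a TOF pipeline this arctangent converts the four-phase correlation samples into a phase value, which is then scaled to a distance; in hardware the `math.atan(2**-i)` constants become a small lookup table and the multiplications by `2**-i` become shifts.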


Depth-map Preprocessing Algorithm Using Two Step Boundary Detection for Boundary Noise Removal (경계 잡음 제거를 위한 2단계 경계 탐색 기반의 깊이지도 전처리 알고리즘)

  • Pak, Young-Gil;Kim, Jun-Ho;Lee, Si-Woong
    • The Journal of the Korea Contents Association, v.14 no.12, pp.555-564, 2014
  • The boundary noise in image synthesis using DIBR consists of noisy pixels that are separated from foreground objects into the background region. It is generated mainly by edge misalignment between the reference image and the depth map, or by blurred edges in the reference image. Since hole areas are generally filled with neighboring pixels, boundary noise adjacent to a hole is the main cause of quality degradation in synthesized images. To solve this problem, a new boundary noise removal algorithm based on preprocessing of the depth map is proposed in this paper. The most common way to eliminate boundary noise caused by boundary misalignment is to modify the depth map so that its boundaries match those of the reference image. Most conventional methods, however, perform poorly at boundary detection, especially on blurred edges, because they rely on a simple boundary search that exploits only the signal gradient. In the proposed method, a two-step hierarchical approach to boundary detection is adopted, which enables effective detection of the boundary between the transition and background regions. Experimental results show that the proposed method outperforms conventional ones both subjectively and objectively.
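The two-step idea, first finding where a strong depth transition begins and then walking to where it ends, can be illustrated on a single scanline; this is a simplified 1D sketch with an assumed gradient threshold, not the paper's exact algorithm:

```python
import numpy as np

def find_depth_boundary(scanline, grad_thresh=5):
    """Two-step boundary search on a depth scanline (illustrative sketch):
    step 1 locates where a strong depth gradient begins (foreground to
    transition); step 2 walks to where the gradient dies out (transition
    to background), i.e., the boundary the depth map should be snapped to."""
    grad = np.abs(np.diff(scanline.astype(np.int32)))
    start = None
    for i, g in enumerate(grad):
        if start is None and g >= grad_thresh:
            start = i                # step 1: transition begins
        elif start is not None and g < grad_thresh:
            return start, i          # step 2: transition ends at background
    return start, len(scanline) - 1
```

A single-threshold gradient search would stop at `start` and misplace the boundary inside the blurred transition; continuing to the end of the transition is what lets the preprocessing align the depth edge with the true background boundary.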

Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul;Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing, v.3 no.4, pp.195-204, 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern distortion analysis from a structured light system identifies the object with the greatest depth relative to its background. An automated assembling robot should preferentially select and pick the object with the greatest depth to reduce physical harm during the picking action of the robot arm. Object detection is then combined with the depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to the automated assembling robot, which is equipped with a laser-based proximity sensor, for picking up and placing an object in the intended place. The depth evaluation process using structured light is accelerated so that an image frame can be used for computation with the simplest experimental setup, consisting of a single camera and projector. The depth evaluation experiments required 31 ms to 32 ms per frame, which is suitable for a robot vision system equipped with a 30-frames-per-second camera.

Genetic Algorithm Based Feature Reduction For Depth Estimation Of Image (이미지의 깊이 추정을 위한 유전 알고리즘 기반의 특징 축소)

  • Shin, Sung-Sik;Gwun, Ou-Bong
    • Journal of the Institute of Electronics Engineers of Korea CI, v.48 no.2, pp.47-54, 2011
  • This paper describes a method to reduce the time cost of estimating the depth of an image by learning the image's features on the basis of a genetic algorithm. The depth information is estimated from the relationship among features such as the energy value of the image and the gradient of its texture. The estimation time increases with the large dimension of the image features used in the estimation process, and using features without considering their importance can degrade performance. It is therefore necessary to reduce the dimension of the image features based on the significance of each feature. Evaluation of the proposed method on benchmark data provided by Stanford University found that the time cost of feature extraction and depth estimation improved by 60%, while accuracy increased by 0.4% on average and up to 2.5%.
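A genetic algorithm for feature reduction of this kind can be sketched as follows; the bitmask encoding, the linear-fit fitness function, and all hyperparameters below are illustrative assumptions, not the paper's depth-estimation model:

```python
import numpy as np

def ga_select_features(X, y, pop_size=20, gens=25, rate=0.1, seed=0):
    """Select a feature subset with a simple genetic algorithm
    (illustrative sketch of GA-based feature reduction). Each individual
    is a 0/1 mask over features; fitness is the negative least-squares
    error of a linear fit on the chosen features, minus a small penalty
    per selected feature to favor smaller subsets."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.integers(0, 2, (pop_size, n))

    def fitness(mask):
        if mask.sum() == 0:
            return -np.inf
        Xs = X[:, mask.astype(bool)]
        coef = np.linalg.lstsq(Xs, y, rcond=None)[0]
        err = np.sum((Xs @ coef - y) ** 2)
        return -err - 0.01 * mask.sum()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in pop])
        pop = pop[np.argsort(scores)[::-1]]       # rank by fitness
        elite = pop[: pop_size // 2]              # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < rate           # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([elite] + children)
    scores = np.array([fitness(m) for m in pop])
    return pop[np.argmax(scores)].astype(bool)
```

Only the surviving mask's features are then extracted per image, which is where the reported reduction in feature-extraction and estimation time would come from.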

Depth Image-Based Human Action Recognition Using Convolution Neural Network and Spatio-Temporal Templates (시공간 템플릿과 컨볼루션 신경망을 사용한 깊이 영상 기반의 사람 행동 인식)

  • Eum, Hyukmin;Yoon, Changyong
    • The Transactions of The Korean Institute of Electrical Engineers, v.65 no.10, pp.1731-1737, 2016
  • In this paper, a method is proposed to recognize human actions as nonverbal expressions; the proposed method is composed of two steps, action representation and action recognition. First, MHI (Motion History Image) is used in the action representation step. This step includes segmentation based on depth information and generates spatio-temporal templates that describe actions. Second, a CNN (Convolutional Neural Network), which performs feature extraction and classification, is employed in the action recognition step. It extracts convolutional feature vectors and then uses a classifier to recognize actions. The recognition performance of the proposed method is demonstrated by comparing it with other action recognition methods in the experimental results.
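The MHI template in the representation step can be sketched as a simple per-frame update; the timestamp, decay, and motion threshold below are illustrative values, and the paper's depth-based segmentation is not reproduced:

```python
import numpy as np

def update_mhi(mhi, prev, curr, tau=255, decay=32, thresh=30):
    """Update a Motion History Image from two consecutive (depth) frames
    (illustrative sketch; parameter values are assumptions). Pixels with
    motion are set to the maximum timestamp tau; all others decay toward
    zero, so recent motion appears brighter than old motion."""
    motion = np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > thresh
    mhi = np.where(motion, tau, np.maximum(mhi.astype(np.int32) - decay, 0))
    return mhi.astype(np.uint8)
```

Accumulated over a clip, the resulting single-channel template encodes where and how recently motion occurred, and it is this image that would be fed to the CNN classifier.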