• Title/Summary/Keyword: Depth/Color Information


Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2013.10a / pp.763-764 / 2013
  • A three-dimensional image capturing device and its signal processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It carries the depth information of a scene together with the conventional color image, so that the full information of real life that human eyes experience can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The proposed optical resonator enables capturing of a full-HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously (Figures 2, 3). The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design and fabrication, the 3D camera system prototype, and the signal processing algorithms.
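The TOF principle described above can be illustrated with the standard four-bucket phase demodulation: the phase shift of the 20 MHz modulated IR signal is converted to distance. A minimal sketch, assuming the generic demodulation form and function names (not the paper's implementation):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Recover depth from four correlation samples taken at phase
    offsets 0, 90, 180 and 270 degrees (generic 4-bucket ToF
    demodulation, used here only to illustrate the TOF principle)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)  # metres

def unambiguous_range(f_mod=20e6):
    """Maximum range before the phase wraps around: c / (2 f)."""
    return C / (2 * f_mod)
```

At 20 MHz modulation the unambiguous range is about 7.5 m; raising the modulation frequency improves depth precision at the cost of range.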


Low Resolution Depth Interpolation using High Resolution Color Image (고해상도 색상 영상을 이용한 저해상도 깊이 영상 보간법)

  • Lee, Gyo-Yoon;Ho, Yo-Sung
    • Smart Media Journal / v.2 no.4 / pp.60-65 / 2013
  • In this paper, we propose a high-resolution disparity map generation method using a low-resolution time-of-flight (TOF) depth camera and a color camera. The TOF depth camera is efficient since it measures the range information of objects using an infrared (IR) signal in real time. It also quantizes the range information and provides a depth image. However, the TOF depth camera has some problems, such as noise and lens distortion. Moreover, its output resolution is too small for 3D applications. Therefore, it is essential not only to reduce the noise and distortion but also to enlarge the output resolution of the TOF depth image. Our proposed method generates a depth map for a color image using the TOF camera and the color camera simultaneously. We warp the depth value at each pixel to the color image position. The color image is segmented using the mean-shift segmentation method. We define a cost function that consists of color values and segmented color values. We then apply a weighted average filter whose weighting factor is defined by the random walk probability using the defined cost function of the block. Experimental results show that the proposed method generates the depth map efficiently and that we can reconstruct good virtual view images.
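The weighted-average filtering step can be sketched as a color-guided filter over the warped depth map. This is a simplified stand-in for the paper's random-walk-probability weights: the Gaussian color weight, `sigma_c`, and the window radius are assumptions.

```python
import numpy as np

def color_guided_depth_filter(depth, color, radius=1, sigma_c=10.0):
    """Refine a warped depth map with a weighted average whose weights
    come from color similarity to the center pixel. Zero depth marks a
    hole; hole pixels never contribute to the average."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0:
                        diff = color[ny, nx].astype(float) - color[y, x].astype(float)
                        wgt = np.exp(-float(diff @ diff) / (2 * sigma_c ** 2))
                        num += wgt * depth[ny, nx]
                        den += wgt
            if den > 0:
                out[y, x] = num / den
    return out
```

Pixels whose whole neighborhood is holes keep their original value; everything else is pulled toward depths seen through similar colors.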


Hole-Filling Methods Using Depth and Color Information for Generating Multiview Images

  • Nam, Seung-Woo;Jang, Kyung-Ho;Ban, Yun-Ji;Kim, Hye-Sun;Chien, Sung-Il
    • ETRI Journal / v.38 no.5 / pp.996-1007 / 2016
  • This paper presents new hole-filling methods for generating multiview images by using depth image based rendering (DIBR). Holes appear in a depth image captured from 3D sensors and in the multiview images rendered by DIBR. The holes are often found around the background regions of the images because the background is prone to occlusion by foreground objects. Background-oriented priority and gradient-oriented priority are introduced to determine the order of hole-filling after the DIBR process. In addition, to obtain a sample to fill the hole region, we propose fusing depth and color information to obtain a weighted sum of two patches for the depth (or rendered depth) images, and a new distance measure to find the best-matched patch for the rendered color images. The conventional method produces jagged edges and blurring in the final results, whereas the proposed method minimizes them, which is quite important for high fidelity in stereo imaging. The experimental results show that, by reducing these errors, the proposed methods significantly improve the hole-filling quality of the generated multiview images.
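The fused color/depth patch matching described above might look like the following sketch, where a weighted sum blends a color SSD and a depth SSD over the known pixels of a target patch. The blend factor `alpha` and the SSD form are assumptions, not the paper's exact measure.

```python
import numpy as np

def fused_patch_distance(cand_color, cand_depth, tgt_color, tgt_depth,
                         valid, alpha=0.7):
    """Distance between a candidate patch and a target patch with
    holes, fusing a color SSD and a depth SSD. `valid` masks the known
    pixels of the target; alpha weights color against depth."""
    v = valid.astype(float)
    n = v.sum()
    if n == 0:
        return np.inf
    d_color = (v[..., None] * (cand_color - tgt_color) ** 2).sum() / n
    d_depth = (v * (cand_depth - tgt_depth) ** 2).sum() / n
    return alpha * d_color + (1 - alpha) * d_depth

def best_match(candidates, tgt_color, tgt_depth, valid):
    """Index of the (color, depth) candidate patch nearest the target."""
    dists = [fused_patch_distance(c, d, tgt_color, tgt_depth, valid)
             for c, d in candidates]
    return int(np.argmin(dists))
```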

Color-Depth Combined Semantic Image Segmentation Method (색상과 깊이정보를 융합한 의미론적 영상 분할 방법)

  • Kim, Man-Joung;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.687-696 / 2014
  • This paper presents a semantic object extraction method using a user's stroke input, color, and depth information. It is assumed that a semantically meaningful object is surrounded by a few strokes from the user and has similar depths throughout. In the proposed method, the region of interest (ROI) is decided from the stroke input, and the semantically meaningful object is extracted using color and depth information. Specifically, the proposed method consists of two steps. The first step is over-segmentation inside the ROI using color and depth information. The second step is semantically meaningful object extraction, where over-segmented regions are classified into the object region and the background region according to the depth of each region. For the over-segmentation step, we propose a new marker extraction method with two components: an adaptive thresholding scheme that maximizes the number of segmented regions, and an adaptive weighting scheme for the color and depth components in the computation of the morphological gradients required for marker extraction. For the object extraction step, we classify over-segmented regions into the object region and the background region in order from the boundary regions to the inner regions, comparing the average depth of each region with the average depth of all regions already classified into the object region. Experimental results demonstrate that the proposed method yields reasonable object extraction results.
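The morphological-gradient fusion that precedes marker extraction can be sketched as follows. The paper adapts the color/depth weight per image, whereas this sketch fixes it at an assumed value.

```python
import numpy as np

def morph_gradient(img):
    """3x3 morphological gradient: dilation (local max) minus
    erosion (local min), with edge replication at the borders."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    grad = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            grad[y, x] = win.max() - win.min()
    return grad

def fused_gradient(gray, depth, w_color=0.5):
    """Weighted fusion of color and depth morphological gradients.
    w_color=0.5 is a fixed assumption standing in for the paper's
    adaptive weighting scheme."""
    return w_color * morph_gradient(gray) + (1 - w_color) * morph_gradient(depth)
```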

GPGPU based Depth Image Enhancement Algorithm (GPGPU 기반의 깊이 영상 화질 개선 기법)

  • Han, Jae-Young;Ko, Jin-Woong;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.12 / pp.2927-2936 / 2013
  • In this paper, we propose a noise reduction and hole removal algorithm to improve the quality of depth images when they are used for creating 3D content. The proposed algorithm uses both the depth image and the corresponding color image. First, an intensity image is generated by converting the RGB color space into the HSI color space. Noise is then removed by estimating the differences in spatial distance and depth between reference and neighbor pixels from the depth image, together with the difference in intensity values from the color image. Next, the proposed hole filling method fills the detected holes using the Euclidean distance and the difference in intensity values between reference and neighbor pixels from the color image. Finally, we apply a parallel GPGPU structure to the proposed algorithm to speed up its processing time for real-time applications. The experimental results show that the proposed algorithm performs better than conventional algorithms. In particular, it is more effective in reducing edge blurring and removing noise and holes.
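The HSI intensity conversion and the combined distance/depth/intensity weighting can be sketched as below. The Gaussian kernel form and the sigma values are illustrative assumptions, not the paper's exact filter.

```python
import math

def rgb_to_intensity(r, g, b):
    """Intensity component of the HSI color space: I = (R + G + B) / 3."""
    return (r + g + b) / 3.0

def joint_weight(ds, dd, di, sigma_s=3.0, sigma_d=10.0, sigma_i=10.0):
    """Weight for a neighbor pixel combining spatial distance (ds),
    depth difference (dd) and intensity difference (di). Each factor
    is an assumed Gaussian; similar pixels get weight near 1."""
    return (math.exp(-ds * ds / (2 * sigma_s ** 2))
            * math.exp(-dd * dd / (2 * sigma_d ** 2))
            * math.exp(-di * di / (2 * sigma_i ** 2)))
```

A noisy depth value would be replaced by the weighted average of its neighbors under these weights; a GPGPU port parallelizes the per-pixel loop.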

A Color Navigation System for Effective Perceived Structure: Focused on Hierarchical Menu Structure in Small Display (지각된 정보구조의 효과적 형성을 위한 색공간 네비게이션 시스템 연구 - 작은 디스플레이 화면상의 위계적 정보구조를 중심으로 -)

  • 경소영;박경욱;박준아;김진우
    • Archives of design research / v.15 no.3 / pp.167-180 / 2002
  • This study investigates effective ways to help users form a correct mental model of a hierarchical information space (HIS) on a small display. The focus is the effect of color cues on understanding the structure and navigating the information space. The concept of color space (CS) corresponds well to the HIS: one color has a unique position in the CS, just as a piece of information does in the HIS. In this study, we empirically examined two types of color cue, namely categorical and depth cues. Hue was used as a categorical cue and tone as a depth cue. In our experiment, we evaluated the effectiveness of the color cues in a mobile internet system. Subjects were asked to perform four searching tasks and four comparison tasks. The results reveal that the categorical cues significantly improve the user's mental model while decreasing navigation performance, and that the depth cues neither aid understanding of the HIS nor improve navigation performance. The study concludes with its limitations and directions for future work.


Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won;Jeong, Chi-Seo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence / v.9 no.3 / pp.192-198 / 2020
  • The 3D point cloud is a key technology for object detection in virtual and augmented reality. To apply object detection in various areas, 3D information and even color information must be obtainable more easily. In general, a 3D point cloud is acquired using an expensive scanner device. However, 3D and characteristic information such as RGB and depth can be easily obtained with a mobile device. A GNN (Graph Neural Network) can be used for object detection based on these characteristics. In this paper, we generate RGB and RGBD representations by extracting basic and characteristic information from the KITTI dataset, which is often used in 3D point cloud object detection. We generated an RGB-GNN using intensity, the most widely used LiDAR characteristic, together with the color information obtainable from mobile devices, and we compared and analyzed object detection accuracy against an RGBD-GNN that additionally characterizes depth information.
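The graph construction that typically precedes such a GNN can be sketched as a k-nearest-neighbour graph whose nodes carry RGB or RGB-D feature vectors. This is a generic sketch of the input-graph step, not the paper's network.

```python
import numpy as np

def knn_graph(xyz, feats, k=2):
    """Build a k-nearest-neighbour graph over a point cloud. `xyz` is
    an (N, 3) array of positions, `feats` an (N, F) array of per-point
    features (e.g. RGB or RGB-D vectors). Edges connect each point to
    its k nearest neighbours in 3D Euclidean distance."""
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # no self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]
    edges = [(i, int(j)) for i in range(len(xyz)) for j in nbrs[i]]
    return edges, feats
```

A GNN would then pass messages along `edges`, aggregating neighbour features at each node.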

Low-Resolution Depth Map Upsampling Method Using Depth-Discontinuity Information (깊이 불연속 정보를 이용한 저해상도 깊이 영상의 업샘플링 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.875-880 / 2013
  • When we generate 3D video that provides an immersive and realistic experience to users, depth information of the scene is essential. Since the resolution of the depth map captured by a depth sensor is lower than that of the color image, we need to upsample the low-resolution depth map for high-resolution 3D video generation. In this paper, we propose a depth upsampling method using depth-discontinuity information. Using the high-resolution color image and the low-resolution depth map, we detect depth-discontinuity regions. Then, we define an energy function for depth map upsampling and optimize it using the belief propagation method. Experimental results show that the proposed method outperforms other depth upsampling methods in terms of the bad pixel rate.
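The energy minimised by belief propagation in such upsampling methods usually combines a data term at the sparse low-resolution samples with a smoothness term between neighbouring pixels. A minimal sketch with assumed constants (`lam`, `tau`) and a truncated-linear smoothness cost:

```python
def upsampling_energy(labels, samples, lam=1.0, tau=5.0):
    """Energy of a candidate high-resolution depth labeling on a 2D
    grid: an L1 data term at pixels carrying a low-resolution sample,
    plus a truncated-linear smoothness term between 4-neighbours.
    `labels` is a 2D list of depths; `samples` maps (y, x) -> depth.
    Belief propagation would search for the labeling minimising this."""
    h, w = len(labels), len(labels[0])
    e = 0.0
    for (y, x), d in samples.items():           # data term
        e += abs(labels[y][x] - d)
    for y in range(h):                           # smoothness term
        for x in range(w):
            if x + 1 < w:
                e += lam * min(abs(labels[y][x] - labels[y][x + 1]), tau)
            if y + 1 < h:
                e += lam * min(abs(labels[y][x] - labels[y + 1][x]), tau)
    return e
```

Truncation at `tau` is what lets depth discontinuities survive: a large jump costs no more than `lam * tau`, so edges are not smoothed away.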

Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.194-202 / 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information on the horizontal centerline of the image, through which the optical axis passes, is used. The benefit of this choice is that a measure between model and sensing data can easily be built on the horizontal centerline alone, because the vertical working volume between model and sensing data changes with robot motion. Therefore, we can build a compact and efficient map of the indoor environment. Based on such nodes and sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Through basic real-world experiments, we show that the proposed method can serve as an effective visual navigation algorithm.
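The centerline feature extraction can be sketched directly; the matching measure shown is an illustrative L1 blend, not the paper's exact formulation (which feeds a random-sampling localization scheme).

```python
import numpy as np

def centerline_features(depth_img, color_img):
    """Take depth and color along the horizontal centerline of the
    image — the row the optical axis passes through — as the compact
    per-node environment feature."""
    row = depth_img.shape[0] // 2
    return depth_img[row, :], color_img[row, :]

def match_score(model, sensed, w_depth=0.5):
    """Dissimilarity between a model centerline and a sensed one, as a
    weighted sum of mean absolute depth and color differences (the
    weighting and the L1 form are assumptions for illustration)."""
    md, mc = model
    sd, sc = sensed
    e_depth = np.abs(md - sd).mean()
    e_color = np.abs(mc.astype(float) - sc.astype(float)).mean()
    return w_depth * e_depth + (1 - w_depth) * e_color
```

A particle-filter-style localizer would score each sampled pose by comparing its predicted centerline against the sensed one with such a measure.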


Development of a Multi-view Image Generation Simulation Program Using Kinect (키넥트를 이용한 다시점 영상 생성 시뮬레이션 프로그램 개발)

  • Lee, Deok Jae;Kim, Minyoung;Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.818-819 / 2014
  • Recently, much work has been conducted on utilizing DIBR (Depth-Image-Based Rendering) based intermediate images for three-dimensional displays that do not require stereoscopic glasses. However, prior works have used expensive depth cameras to obtain high-resolution depth images, since the DIBR-based intermediate image generation method requires accurate depth information. In this study, we developed a simulation program that generates multi-view intermediate images from the depth and color images of a Microsoft Kinect. The simulation supports the acquisition of multi-view intermediate images from the Kinect's low-resolution depth and color images and provides an integrated service for evaluating the quality of the intermediate images. This paper describes the architecture and system implementation of the simulation program.
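The DIBR warping at the heart of such a simulation shifts each pixel horizontally by its disparity, leaving holes where no source pixel lands. A minimal forward-warp sketch; the focal length and baseline are assumed Kinect-like values, not figures from the paper.

```python
import numpy as np

def dibr_warp(color, depth, baseline=0.05, focal=525.0):
    """Forward-warp a color image to a horizontally shifted virtual
    view: each pixel moves by disparity = focal * baseline / depth.
    Destinations that receive no source pixel stay 0 — these are the
    holes the intermediate-view pipeline must fill afterwards."""
    h, w = depth.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            if depth[y, x] <= 0:
                continue                         # skip invalid depth
            disp = int(round(focal * baseline / depth[y, x]))
            nx = x - disp
            if 0 <= nx < w:
                out[y, nx] = color[y, x]
    return out
```

Because disparity grows as depth shrinks, foreground pixels shift farther than background pixels, which is exactly what exposes background holes at depth discontinuities.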
