Title/Summary/Keyword: 2D depth map

A Technique for Building Occupancy Maps Using Stereo Depth Information and Its Application (스테레오 깊이 정보를 이용한 점유맵 구축 기법과 응용)

  • Kim, Nak-Hyun; Oh, Se-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.3 / pp.1-10 / 2008
  • An occupancy map is a representation describing the regions occupied by objects in 3D space, which can be utilized for autonomous navigation and object recognition. In this paper, we describe a technique for building an occupancy map using depth data extracted from stereo images, and propose techniques for utilizing the occupancy map to segment object regions. After the geometric information of the ground plane is extracted from a disparity image, the occupancy map is constructed by projecting each matched point into a 3D space referenced to the ground plane. We explain techniques for extracting moving-object regions using the occupancy map and present experimental results on real stereo images.
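
The projection step this abstract describes can be sketched as follows; a minimal Python illustration assuming a rectified stereo pair with focal length f (pixels), baseline B (meters), and a camera already aligned with the ground plane (the paper instead estimates the ground plane from the disparity image), with all names illustrative:

```python
import numpy as np

def occupancy_from_disparity(disparity, f, B, cx,
                             cell=0.05, grid=(200, 200)):
    """Project stereo-matched points onto a ground-plane grid.

    disparity : HxW disparity image (pixels)
    f, B, cx  : focal length (px), stereo baseline (m), principal x (px)
    cell      : grid resolution in meters; grid : (rows, cols) cells
    Assumes the image y-axis is already the height axis; the paper
    instead estimates the ground plane from the disparity image.
    """
    H, W = disparity.shape
    _, u = np.mgrid[0:H, 0:W]
    valid = disparity > 0                        # matched points only
    Z = f * B / disparity[valid]                 # depth from disparity
    X = (u[valid] - cx) * Z / f                  # lateral offset

    # Drop the height coordinate: count hits per ground-plane cell.
    gx = (X / cell).astype(int) + grid[1] // 2   # center the x axis
    gz = (Z / cell).astype(int)
    occ = np.zeros(grid, dtype=np.int32)
    inside = (gx >= 0) & (gx < grid[1]) & (gz >= 0) & (gz < grid[0])
    np.add.at(occ, (gz[inside], gx[inside]), 1)
    return occ                                   # high count = occupied
```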

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa; Kim, Hyongjin; Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D-RANSAC (RANdom SAmple Consensus) algorithm applied to 2D image features and their depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud map.
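
A minimal sketch of this style of depth-assisted visual odometry, using OpenCV's RANSAC-based PnP as a stand-in for the paper's 3D-RANSAC step (the exact variant may differ; function and variable names are illustrative):

```python
import cv2
import numpy as np

def rgbd_odometry(gray_a, depth_a, gray_b, K):
    """Estimate the 6-DOF motion between two RGB-D frames (sketch).

    Features matched in 2D are lifted to 3D with the depth image of
    frame A; RANSAC-based PnP then recovers the relative pose.
    K is the 3x3 camera intrinsic matrix; depth is in meters.
    """
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp_a[m.queryIdx].pt
        z = depth_a[int(v), int(u)]
        if z > 0:                              # keep valid depth only
            pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
            pts2d.append(kp_b[m.trainIdx].pt)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K, None)
    return rvec, tvec, inliers                 # 6-DOF motion estimate
```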

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee; Yoo, Sae-Woung; Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect® RGB-depth camera for acquiring 3D images and spatial information of a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters, i.e. focal length, principal point, and distortion coefficients, are calculated through a checkerboard experiment. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projected images are converted into 3D images on the basis of the depth and RGB information, yielding spatial information. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
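
The pixel-to-space relationship the abstract formulates reduces to standard pinhole backprojection plus the stereo extrinsics; a minimal sketch, omitting distortion correction:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into 3D camera-frame points
    using the calibrated intrinsics (focal lengths fx, fy and
    principal point cx, cy); lens distortion is omitted for brevity."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.dstack([X, Y, depth])            # HxWx3 point map

def to_second_camera(points, R, t):
    """Apply the extrinsics (rotation R, translation t) relating the
    two cameras: p' = R @ p + t for every pixel."""
    return points @ R.T + t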

A Study on Synthetic Techniques Utilizing Map of 3D Animation - A Case of Occlusion Properties (오클루전 맵(Occlusion Map)을 활용한 3D애니메이션 합성 기법 연구)

  • Park, Sung-Won
    • Cartoon and Animation Studies / s.40 / pp.157-176 / 2015
  • This research describes render-pass compositing techniques and their effective use in 3D animation. Because render passes are separated by property and composited after rendering, elaborate and rapid compositing can be achieved. In particular, the occlusion pass renders the screen as if lit with soft shading, expressing a sense of depth and softness at boundaries. An animation project created in 3D space is converted into 2D images through pass rendering and then completed in compositing software; that is, 3D animation achieves the look originally planned through compositing, the synthesis stage in the latter half of production. To complete an image with depth, a scene built in 3D software can be rendered by layer and by property and sent to a compositing program. Since the occlusion pass can express depth even without full GI rendering of the 3D graphic output, it is an important map that should not be omitted in post-production. Nonetheless, despite its importance, there is little research or literature that summarizes and analyzes its properties, principles, and usage. Hence, this research summarizes the principles and usage of the occlusion map and analyzes the differences in compositing results. It also summarizes how to designate renderers and property-based maps, and how to use compositing software. It is hoped that effective and diverse post-production expression techniques will be studied in the future, moving beyond the current limits of graphic expression.
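
The compositing operation typically applied to an occlusion pass is a per-pixel multiply over the beauty pass; a minimal sketch (compositing packages expose the same operation as a Multiply node; the strength parameter is illustrative):

```python
import numpy as np

def composite_occlusion(beauty, occlusion, strength=1.0):
    """Multiply an occlusion pass over a beauty pass (sketch).

    beauty    : HxWx3 float image in [0, 1] (rendered color pass)
    occlusion : HxW float image in [0, 1], 1 = unoccluded
    strength  : blend factor; 0 leaves the beauty pass untouched

    Multiplying darkens contact areas and crevices, producing the
    sense of depth and boundary softness described above.
    """
    ao = 1.0 - strength * (1.0 - occlusion)    # fade the AO effect
    return beauty * ao[..., None]              # per-pixel multiply
```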

View Selection Algorithm for Texturing Using Depth Maps (Depth 정보를 이용한 Texturing 의 View Selection 알고리즘)

  • Han, Hyeon-Deok; Han, Jong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1207-1210 / 2022
  • Structure-from-Motion (SfM), which estimates camera pose information from 2D images, and Multi-view Stereo (MVS), which estimates dense depth maps, can be used to obtain 3D data such as point clouds from 2D images. 3D data is a key element of content such as VR, AR, and the metaverse. To be used in these fields, a point cloud is usually converted into a mesh and then textured. Existing texturing methods use only color information to remove outliers among the images used for each mesh face. Color-based methods are effective when enough images correspond to each face and against outliers caused by moving objects, but they perform poorly when few images are available or when outliers stem from inaccurate camera parameters. This paper proposes a method that additionally uses depth information in the view selection step of texturing to compensate for these weaknesses.
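
One common way to realize such depth-based view selection is a depth-consistency test per mesh face; a hypothetical sketch, not the paper's actual algorithm, assuming each view carries R, t, K and an aligned depth map:

```python
import numpy as np

def select_views(face_center, views, tol=0.02):
    """Depth-consistency test for texturing view selection (sketch).

    For each candidate view, the face center is projected into the
    image and the projected distance is compared with the view's MVS
    depth map; views whose depth disagrees (occlusion, inaccurate
    camera parameters) are rejected. The per-view dict layout is an
    illustrative assumption, not the paper's data format.
    """
    selected = []
    for view in views:
        p_cam = view["R"] @ face_center + view["t"]   # to camera frame
        if p_cam[2] <= 0:
            continue                                   # behind camera
        uv = view["K"] @ (p_cam / p_cam[2])            # project
        u, v = int(uv[0]), int(uv[1])
        H, W = view["depth"].shape
        if not (0 <= u < W and 0 <= v < H):
            continue                                   # outside image
        if abs(view["depth"][v, u] - p_cam[2]) < tol:  # depths agree
            selected.append(view)
    return selected
```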

Bit-plane based Lossless Depth Map Coding Method (비트평면 기반 무손실 깊이정보 맵 부호화 방법)

  • Kim, Kyung-Yong; Park, Gwang-Hoon; Suh, Doug-Young
    • Journal of Broadcast Engineering / v.14 no.5 / pp.551-560 / 2009
  • This paper proposes a method for efficient lossless depth map coding for MPEG 3D-Video coding. In general, conventional video coding methods such as H.264 have been used for depth map coding. However, these methods do not consider the image characteristics of the depth map. Therefore, this paper proposes a bit-plane based lossless depth map coding method that uses the MPEG-4 Part 2 shape-coding scheme. Simulation results show that the proposed method achieves a compression ratio of 28.91:1. In intra-only coding, the proposed method reduces the bitrate by 24.84% in comparison with JPEG-LS, by 39.35% in comparison with JPEG-2000, by 30.30% in comparison with H.264 (CAVLC mode), and by 16.65% in comparison with H.264 (CABAC mode). In intra and inter coding, the proposed method reduces the bitrate by 36.22% in comparison with H.264 (CAVLC mode) and by 23.71% in comparison with H.264 (CABAC mode).
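
The bit-plane decomposition underlying such a coder can be sketched as follows (the decomposition only; the MPEG-4 Part 2 shape coding applied to each binary plane is omitted):

```python
import numpy as np

def to_bitplanes(depth):
    """Split an 8-bit depth map into 8 binary bit-planes (MSB first),
    the representation a bit-plane coder feeds to binary shape coding."""
    planes = [(depth >> b) & 1 for b in range(7, -1, -1)]
    return np.stack(planes)                     # 8 x H x W binary arrays

def from_bitplanes(planes):
    """Lossless reconstruction: re-weight and sum the bit-planes."""
    weights = 2 ** np.arange(7, -1, -1)         # 128, 64, ..., 1
    return (planes * weights[:, None, None]).sum(axis=0).astype(np.uint8)
```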

3D Stereoscopic Image Generation of a 2D Medical Image (2D 의료영상의 3차원 입체영상 생성)

  • Kim, Man-Bae; Jang, Seong-Eun; Lee, Woo-Keun; Choi, Chang-Yeol
    • Journal of Broadcast Engineering / v.15 no.6 / pp.723-730 / 2010
  • Recently, diverse 3D image processing technologies have been applied in industry. Among them, stereoscopic conversion is a technology that generates a stereoscopic image from a conventional 2D image. It can be applied to movie and broadcasting content, letting viewers watch 3D stereoscopic content, and there is demand for applying it to other fields as well. Following this trend, the aim of this paper is to apply stereoscopic conversion to the medical field, where a stereoscopic image can deliver more detailed 3D information than a 2D planar image. This paper presents a novel methodology for converting a 2D medical image into a 3D stereoscopic image. For this, mean-shift segmentation, edge detection, intensity analysis, etc. are utilized to generate a final depth map. From the image and the depth map, left and right images are constructed. In the experiment, the proposed method is performed on medical images such as CT (Computed Tomography). The stereoscopic image displayed on a 3D monitor shows satisfactory performance.
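
A minimal DIBR-style sketch of the final conversion step: each pixel is shifted horizontally in proportion to its depth to synthesize the left and right views (sign conventions and hole filling are deliberately simplified; max_shift is an illustrative parameter):

```python
import numpy as np

def dibr_stereo(image, depth, max_shift=12):
    """Render a left/right pair from one image plus a depth map.

    depth is float in [0, 1] (1 = near); near pixels receive larger
    horizontal disparity. Disocclusion holes are crudely filled by
    propagating the previous pixel along the row.
    """
    H, W = image.shape[:2]
    shift = (depth * max_shift / 2).astype(int)
    left, right = np.zeros_like(image), np.zeros_like(image)
    cols = np.arange(W)
    for y in range(H):
        lx = np.clip(cols - shift[y], 0, W - 1)   # toward left eye
        rx = np.clip(cols + shift[y], 0, W - 1)   # toward right eye
        left[y, lx] = image[y]
        right[y, rx] = image[y]
        for view in (left, right):                # crude hole filling
            for x in range(1, W):
                if not view[y, x].any():
                    view[y, x] = view[y, x - 1]
    return left, right
```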

3D Integral Imaging Display using Axially Recorded Multiple Images

  • Cho, Myungjin; Shin, Donghak
    • Journal of the Optical Society of Korea / v.17 no.5 / pp.410-414 / 2013
  • In this paper, we propose a 3D display method combining a pickup process using axially recorded multiple images and an integral imaging display process. First, we extract the color and depth information of 3D objects for displaying 3D images from axially recorded multiple 2D images. Next, using the extracted depth map and color images, elemental images are computationally synthesized based on a ray mapping model between 3D space and an elemental image plane. Finally, we display 3D images optically by an integral imaging system with a lenslet array. To show the usefulness of the proposed system, we carry out optical experiments for 3D objects and present the experimental results.
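
The ray-mapping synthesis of elemental images can be sketched for the simplified case of a single object plane (the paper uses the extracted per-pixel depth map instead; all parameters and names here are illustrative):

```python
import numpy as np

def elemental_images(color, z, pitch_px, gap, n=(10, 10)):
    """Synthesize an elemental-image array by pinhole ray mapping
    (sketch, single object plane at distance z). Each elemental-image
    pixel is traced through its lenslet center to the object plane
    and samples the color image there. pitch_px = elemental image
    size in pixels; gap = lenslet-to-display distance in the same
    pixel units.
    """
    H, W = color.shape[:2]
    out = np.zeros((n[0] * pitch_px, n[1] * pitch_px, 3), color.dtype)
    for i in range(n[0]):
        for j in range(n[1]):
            # lenslet center mapped onto the color image
            cy = (i + 0.5) * H / n[0]
            cx = (j + 0.5) * W / n[1]
            for dv in range(pitch_px):
                for du in range(pitch_px):
                    # ray from display pixel through the lenslet
                    # center, intersected with the object plane
                    # (image is magnified and inverted)
                    sy = cy - (dv - pitch_px / 2) * z / gap
                    sx = cx - (du - pitch_px / 2) * z / gap
                    if 0 <= int(sy) < H and 0 <= int(sx) < W:
                        out[i * pitch_px + dv, j * pitch_px + du] = \
                            color[int(sy), int(sx)]
    return out
```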

Three-dimensional Map Construction of Indoor Environment Based on RGB-D SLAM Scheme

  • Huang, He; Weng, FuZhou; Hu, Bo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.2 / pp.45-53 / 2019
  • RGB-D SLAM (Simultaneous Localization and Mapping) refers to the technology of using a depth camera as the visual sensor for SLAM. Given the high cost of laser sensors and the scale ambiguity of traditional monocular and binocular cameras in map construction, a method for creating a three-dimensional map of an indoor environment that combines depth data with an RGB-D SLAM scheme is studied. The method uses a mobile robot system equipped with a consumer-grade RGB-D sensor (Kinect) to acquire depth data, and then creates an indoor three-dimensional point-cloud map in real time through key techniques such as positioning-point generation, loop-closure detection, and map construction. Field experiment results show that the average error of the point-cloud map created by the algorithm is 0.0045 m, which confirms the stability of construction using depth data and shows that the method can accurately create real-time three-dimensional maps of unknown indoor environments.
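
The map-construction step can be sketched as fusing posed RGB-D frames into a single point cloud; a minimal illustration assuming each frame carries its SLAM-estimated pose (an illustrative data layout, not the paper's):

```python
import numpy as np

def accumulate_map(frames, fx, fy, cx, cy):
    """Fuse posed RGB-D frames into one point-cloud map (sketch).

    Each frame is assumed to be a dict with 'depth' (HxW, meters),
    'color' (HxWx3), and its SLAM-estimated camera-to-world pose
    'R' (3x3), 't' (3,).
    """
    pts, cols = [], []
    for f in frames:
        H, W = f["depth"].shape
        v, u = np.mgrid[0:H, 0:W]
        z = f["depth"]
        ok = z > 0                                 # valid depth only
        p = np.stack([(u[ok] - cx) * z[ok] / fx,   # backproject
                      (v[ok] - cy) * z[ok] / fy,
                      z[ok]], axis=1)
        pts.append(p @ f["R"].T + f["t"])          # camera -> world
        cols.append(f["color"][ok])
    return np.vstack(pts), np.vstack(cols)         # Nx3 points, colors
```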

2D/3D image Conversion Method using Simplification of Level and Reduction of Noise for Optical Flow and Information of Edge (Optical flow의 레벨 간소화 및 노이즈 제거와 에지 정보를 이용한 2D/3D 변환 기법)

  • Han, Hyeon-Ho; Lee, Gang-Seong; Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.2 / pp.827-833
    • 2012
  • In this paper, we propose an improved optical flow algorithm which reduces computational complexity as well as noise level. This algorithm reduces computational time by applying level simplification technique and removes noise by using eigenvectors of objects. Optical flow is one of the accurate algorithms used to generate depth information from two image frames using the vectors which track the motions of pixels. This technique, however, has disadvantage of taking very long computational time because of the pixel-based calculation and can cause some noise problems. The level simplifying technique is applied to reduce the computational time, and the noise is removed by applying optical flow only to the area of having eigenvector, then using the edge image to generate the depth information of background area. Three-dimensional images were created from two-dimensional images using the proposed method which generates the depth information first and then converts into three-dimensional image using the depth information and DIBR(Depth Image Based Rendering) technique. The error rate was obtained using the SSIM(Structural SIMilarity index).