• Title/Summary/Keyword: 3D Image Map

Search Results: 457

Application Study on the View Points Analysis for National Roads Route using Digital Elevation Data

  • Yeon, Sang-Ho;Hong, Ill-Hwa
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.292-296
    • /
    • 2002
  • This study was carried out as an experimental study on the field application of 3D perspective image map creation using digital topographical maps, based on ortho-projection images generated from overlaid satellite images and on precise relative coordinates of longitude, latitude, and altitude corrected by GCPs (Ground Control Points). Starting from the contour-line map produced by coordinate conversion of the 1:5,000 topographical map, we first created a satellite image map to substitute for the digital topographical map by overlapping the original images on each generated ortho-projection image and checking the accuracy. Beyond 3D image map creation for 3D terrain analysis of a target district, this approach supports slope gradient analysis, aspect analysis, terrain elevation model generation, and multidirectional 3D image generation from the DEM. The aim is to develop a mapping technology that generates 3D satellite images of a target district by combining digital maps and facility blueprints, and that can create 3D perspective images of the district from any viewpoint.

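
The slope gradient and aspect analyses mentioned in this abstract can be sketched from a DEM grid. The central-difference scheme, the toy 3x3 DEM, and the 30 m cell size below are illustrative assumptions, not the study's data or implementation:

```python
import math

# Toy 3x3 DEM (elevations in meters); illustrative values only.
dem = [
    [100.0, 105.0, 110.0],
    [100.0, 106.0, 112.0],
    [100.0, 107.0, 114.0],
]
cell = 30.0  # grid spacing (m), an assumed value

def slope_aspect(dem, r, c, cell):
    """Slope (degrees) and aspect (degrees clockwise from north)
    at an interior DEM cell via central differences."""
    dz_dx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell)
    dz_dy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cell)
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    aspect = math.degrees(math.atan2(dz_dx, -dz_dy)) % 360.0
    return slope, aspect

s, a = slope_aspect(dem, 1, 1, cell)
print(f"slope={s:.2f} deg, aspect={a:.1f} deg")
```

Applied per cell over a full DEM, the same stencil yields the slope and aspect rasters used in terrain analysis.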

A Study on 2D/3D image Conversion Method using Create Depth Map (2D/3D 변환을 위한 깊이정보 생성기법에 관한 연구)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.4
    • /
    • pp.1897-1903
    • /
    • 2011
  • This paper discusses 2D/3D image conversion using techniques such as object extraction and depth-map creation. The general procedure for converting a 2D image into a 3D image is to extract objects from the 2D image, estimate the distance of each point, generate the 3D image, and correct it to reduce noise. This paper proposes modified methods for creating a depth map from a 2D image and for estimating the distances of the objects in it. Depth-map information, which encodes the distance of each object, is the key data for creating a 3D image from 2D images. To obtain more accurate depth-map data, noise filtering is applied to the optical flow. With the proposed method, better depth-map information is calculated and a better 3D image is constructed.
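
As an illustrative sketch (not the authors' code), a per-pixel depth estimate can be derived from optical-flow magnitude, with a median filter standing in for the noise filtering the abstract mentions; the 3x3 window, border clamping, and 0-255 normalization are assumptions:

```python
def flow_to_depth(flow_mag):
    """Turn an optical-flow magnitude grid into a 0..255 depth map:
    3x3 median filtering suppresses flow noise, then larger motion
    (assumed closer) is mapped to larger depth values."""
    h, w = len(flow_mag), len(flow_mag[0])
    filt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 window with border clamping
            win = sorted(flow_mag[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            filt[y][x] = win[4]  # median of 9 values
    lo = min(min(r) for r in filt)
    hi = max(max(r) for r in filt)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[int((v - lo) * scale) for v in row] for row in filt]

# Noisy flow field: one spurious spike that the median filter removes.
flow = [
    [1.0, 1.0, 1.0, 1.0],
    [1.0, 9.0, 1.0, 1.0],  # 9.0 is an outlier
    [4.0, 4.0, 4.0, 4.0],
    [4.0, 4.0, 4.0, 4.0],
]
depth = flow_to_depth(flow)
```

The outlier at (1, 1) is replaced by the local median before normalization, so it no longer dominates the depth range.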

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents
    • /
    • v.10 no.3
    • /
    • pp.9-16
    • /
    • 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
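
The first step of the scheme above, dividing a video sequence into cuts with a color histogram, can be sketched as follows; the grayscale histogram, bin count, and L1 threshold are assumed simplifications of the paper's method:

```python
def histogram(frame, bins=4):
    """Coarse intensity histogram of a frame (flat list of pixel
    values 0..255), normalized to sum to 1."""
    h = [0] * bins
    for p in frame:
        h[min(p * bins // 256, bins - 1)] += 1
    n = len(frame)
    return [c / n for c in h]

def detect_cuts(frames, threshold=0.5):
    """Indices where a new cut starts: the L1 distance between
    consecutive frame histograms exceeds `threshold` (assumed value)."""
    cuts = [0]
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two dark frames followed by two bright frames -> one cut at index 2.
dark = [10] * 16
bright = [240] * 16
print(detect_cuts([dark, dark, bright, bright]))
```

Each detected cut would then receive its own hypothesized depth-gradient model before per-frame refinement.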

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.11
    • /
    • pp.2175-2190
    • /
    • 2011
  • This paper presents the effects of depth-map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied pre-processing of the depth map, but little is known about depth-map quantization. In this paper, we conduct an experiment to estimate the degree of depth-map quantization that still affords acceptable image quality for DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which multiview images captured directly from the scene are compared to multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed no significant effect of quantizing the depth map from 16 bits down to 7 bits (more specifically, 96 levels) on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
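
The quantization being varied in the experiment can be sketched as a uniform quantizer; the mid-rise reconstruction below is an assumed form, not necessarily the one used in the paper:

```python
def quantize_depth(depth, src_bits=16, levels=96):
    """Uniformly quantize depth values from a `src_bits` range down to
    `levels` discrete steps, then re-expand each value to the center of
    its bin so the quantization error is visible in the source units."""
    max_in = (1 << src_bits) - 1
    step = (max_in + 1) / levels
    return [min(int(d // step) * step + step / 2, max_in) for d in depth]

depth16 = [0, 1000, 32768, 65535]
q = quantize_depth(depth16, levels=96)
# With 96 levels each bin spans 65536/96 ~= 682.7 depth units, so the
# worst-case rounding error is about half a bin.
```

Re-running DIBR with `levels` swept from 2**16 down toward 96 reproduces the shape of the experiment described above.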

2D/3D conversion method using depth map based on haze and relative height cue (실안개와 상대적 높이 단서 기반의 깊이 지도를 이용한 2D/3D 변환 기법)

  • Han, Sung-Ho;Kim, Yo-Sup;Lee, Jong-Yong;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.10 no.9
    • /
    • pp.351-356
    • /
    • 2012
  • This paper presents a 2D/3D conversion technique using a depth map generated from the haze and relative height cues. When only conventional haze information is used, errors can occur in images without haze. To reduce such errors, a new approach is proposed that combines the haze information with a depth map constructed from the relative height cue. The gray-scale image obtained from mean-shift segmentation is also combined with the haze-based depth map to sharpen object contours, improving the quality of the 3D image. Left- and right-view images are generated by DIBR (Depth Image Based Rendering) using the input image and the final depth map, and they are combined into a red-cyan 3D image; the result is verified by measuring the PSNR between the depth maps.
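
The DIBR step, shifting pixels horizontally in proportion to depth to synthesize the left and right views, can be sketched per scanline; the disparity scaling and the crude hole-filling (copying the last valid pixel, or 0 at the border) are assumed simplifications:

```python
def dibr_views(image_row, depth_row, max_disp=3):
    """Synthesize left/right view rows from one image row and its depth
    row (0..255, larger = closer). Pixels shift by +/- disparity/2;
    holes are filled with the last valid pixel (0 at the border)."""
    w = len(image_row)
    left = [None] * w
    right = [None] * w
    for x in range(w):
        d = depth_row[x] * max_disp // 255  # disparity in pixels
        xl, xr = x + d // 2, x - d // 2
        if 0 <= xl < w:
            left[xl] = image_row[x]
        if 0 <= xr < w:
            right[xr] = image_row[x]
    for view in (left, right):
        last = 0
        for x in range(w):
            if view[x] is None:
                view[x] = last
            else:
                last = view[x]
    return left, right

left, right = dibr_views([10, 20, 30, 40], [255, 255, 0, 0])
```

Packing `left` into the red channel and `right` into the green/blue channels then gives the red-cyan anaglyph mentioned in the abstract.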

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map. The disparity map resulting from this compensation process is the multi-sensor fusion disparity map, with which the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is presented in four parts: virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.

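
The projection step described in this abstract, mapping LRF 3D points to pixel coordinates with the extrinsics and K, can be sketched with a pinhole model; the intrinsic values (fx = fy = 500, cx = 320, cy = 240) and identity extrinsics are assumptions for illustration:

```python
def project_points(points, K, R, t):
    """Project 3D points into pixel coordinates with a pinhole model:
    p_cam = R p + t, then (u, v) = (fx x/z + cx, fy y/z + cy).
    Points behind the camera (z <= 0) are skipped."""
    out = []
    for p in points:
        pc = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
        if pc[2] <= 0:
            continue
        u = (K[0][0] * pc[0] + K[0][2] * pc[2]) / pc[2]
        v = (K[1][1] * pc[1] + K[1][2] * pc[2]) / pc[2]
        out.append((u, v, pc[2]))
    return out

K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]  # assumed intrinsics
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # identity extrinsics
t = [0, 0, 0]
pts = [(0.0, 0.0, 2.0), (1.0, 0.5, 5.0), (0.0, 0.0, -1.0)]
print(project_points(pts, K, R, t))
```

Interpolating the projected depths over the image lattice then yields the LRF disparity map used to patch invalid stereo matches.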

Localization of a Monocular Camera using a Feature-based Probabilistic Map (특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법)

  • Kim, Hyungjin;Lee, Donghwa;Oh, Taekjun;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.367-371
    • /
    • 2015
  • In this paper, a novel localization method for a monocular camera is proposed using a feature-based probabilistic map. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, a camera pose is estimated by probabilistic approaches even with few features; however, an extra sensor is needed because a camera alone cannot estimate the full state of the robot pose. Therefore, we propose an accurate localization method for a monocular camera that uses a probabilistic approach when the image dataset is insufficient, without any extra system. In our system, features from the probabilistic map are projected onto the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, the pose of the monocular camera is refined from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in 3D space.
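
The cost being minimized is a Mahalanobis distance between projected and extracted features; the 2D form with a diagonal covariance below is a simplified stand-in for the full feature covariance of a probabilistic map:

```python
import math

def mahalanobis2d(a, b, cov_diag):
    """Mahalanobis distance between 2D image points a and b under a
    diagonal covariance (var_x, var_y): pixel errors along uncertain
    axes are down-weighted relative to well-localized ones."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    return math.sqrt(dx * dx / cov_diag[0] + dy * dy / cov_diag[1])

# The same 4-pixel error costs less for an uncertain (high-variance) feature.
d_tight = mahalanobis2d((100, 100), (104, 100), (1.0, 1.0))
d_loose = mahalanobis2d((100, 100), (104, 100), (16.0, 16.0))
```

Summing this distance over all matched features gives the objective that refines the initial PnP pose.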

3D Map Generation System for Indoor Autonomous Navigation (실내 자율 주행을 위한 3D Map 생성 시스템)

  • Moon, SungTae;Han, Sang-Hyuck;Eom, Wesub;Kim, Youn-Kyu
    • Aerospace Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.140-148
    • /
    • 2012
  • Autonomous navigation requires a map, pose tracking, and shortest-path finding. Because there is no GPS signal in indoor environments, the current position must be recognized within the 3D map, for example by image processing. In this paper, we explain a 3D map creation technique using a depth camera such as Kinect, and pose tracking in the 3D map using 2D images taken by a camera. In addition, a mechanism for avoiding obstacles is discussed.
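
The shortest-path requirement can be sketched with breadth-first search on an occupancy grid (a 2D slice of the 3D map); the grid encoding and 4-connectivity are assumptions, not details from the paper:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on an occupancy grid (0 = free, 1 = obstacle),
    4-connected. Returns the path as a list of (row, col) cells,
    or None if the goal is unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],  # wall forces a detour
    [0, 0, 0],
]
path = shortest_path(grid, (0, 0), (2, 0))
```

Obstacle avoidance then amounts to marking occupied cells from the depth-camera map and re-planning when the path is blocked.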

Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.20 no.3
    • /
    • pp.368-378
    • /
    • 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV), because it can easily provide 3D content. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. However, except in some particular images, such depth cues are rare, so consistent depth-map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video content. From this viewpoint, this paper proposes a novel method applicable to general types of image, utilizing saliency as well as edges. To generate a depth map, geometric perspective, an affinity model, and a binomial filter are used. In the experiments, the proposed method was applied to 24 video clips with a variety of content. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D content.
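
The geometric-perspective cue mentioned above, lower image rows assumed closer to the viewer, can seed a depth map as a simple vertical gradient; this is an assumed baseline for illustration, not the paper's full model:

```python
def gradient_depth(h, w):
    """Depth map from the geometric-perspective cue: bottom rows
    (assumed closer) get larger depth values, scaled 0..255 from
    the top row to the bottom row."""
    return [[(255 * y) // (h - 1)] * w for y in range(h)]

dm = gradient_depth(4, 3)
```

Saliency and edge information would then locally reshape this gradient so foreground objects detach from the planar background.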

1D FN-MLCA and 3D Chaotic Cat Map Based Color Image Encryption (1차원 FN-MLCA와 3차원 카오틱 캣 맵 기반의 컬러 이미지 암호화)

  • Choi, Un Sook
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.406-415
    • /
    • 2021
  • The worldwide spread of the Internet and the digital information revolution, driven by the rapid development of communication technologies, have led to a rapid increase in the use and transmission of multimedia information. It is important to protect images in order to prevent problems such as piracy and illegal distribution. To address this, I propose a new digital color image encryption algorithm in this paper. I design a new pseudo-random number generator based on 1D five-neighborhood maximum-length cellular automata (FN-MLCA) to change the pixel values of the plain image into unpredictable values, and then use a 3D chaotic cat map to effectively shuffle the positions of the image pixels. This paper proposes a method to construct a new MLCA by modeling 1D FN-MLCA; the result extends 1D three-neighborhood CA and shows that more 1D MLCAs can be synthesized. The security of the proposed algorithm is verified through various statistical analyses.
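
The position-shuffling stage can be illustrated with the classical 2D Arnold cat map, (x, y) -> (x + y, x + 2y) mod N; the paper uses a 3D chaotic variant, so this 2D form only demonstrates the permutation idea:

```python
def cat_map_shuffle(img, rounds=1):
    """Shuffle pixel positions of an N x N image with the classical 2D
    Arnold cat map, (r, c) -> (r + c, r + 2c) mod N. The matrix
    [[1, 1], [1, 2]] has determinant 1, so the map is a bijection on
    the pixel grid and is invertible (and eventually periodic)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for r in range(n):
            for c in range(n):
                out[(r + c) % n][(r + 2 * c) % n] = img[r][c]
        img = out
    return img

img = [[1, 2], [3, 4]]
once = cat_map_shuffle(img)
```

Because the map is periodic, iterating it enough times restores the original image; for a 2x2 grid the period is 3.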