• Title/Summary/Keyword: Global depth image

Search Results: 23

Multi-view Synthesis Algorithm for the Better Efficiency of Codec (부복호화기 효율을 고려한 다시점 영상 합성 기법)

  • Choi, In-kyu;Cheong, Won-sik;Lee, Gwangsoon;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.2
    • /
    • pp.375-384
    • /
    • 2016
  • In this paper, given a stereo image pair, satellite views and their corresponding depth maps as input, we propose a new method that converts these data into a format suitable for compression and then synthesizes intermediate views from that format. At the transmitter, the depth maps are merged into a global depth map, and the satellite views are converted into residual images covering the hole regions, i.e., out-of-frame areas and occlusion regions. These images are subsampled to reduce the amount of data and, together with the stereo images of the main views, encoded with an HEVC codec and transmitted. At the receiver, intermediate views between the stereo images, and between the stereo and satellite views, are synthesized using the decoded global depth map, residual images and stereo images. Through experiments, we confirm that the intermediate views synthesized from the proposed format have good subjective and objective quality, compared with views synthesized from the MVD format, relative to the total bit-rate.
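The transmitter-side conversion described in the abstract can be sketched as follows. This is a minimal illustration under assumptions: the residual is taken to keep only the hole pixels that synthesis from the global depth map could not predict, and subsampling is plain decimation; the paper's exact operations may differ.

```python
import numpy as np

def residual_image(satellite_view, hole_mask):
    """Keep only the pixels that view synthesis from the global depth map
    could not predict (out-of-frame and occlusion regions); everything
    else is set to zero so it compresses cheaply."""
    residual = np.zeros_like(satellite_view)
    residual[hole_mask] = satellite_view[hole_mask]
    return residual

def subsample(img, factor=2):
    """Drop rows/columns before encoding to reduce the amount of data."""
    return img[::factor, ::factor]
```

At the receiver, the decoded residual would be upsampled and pasted back into the holes of the synthesized view.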

Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park Young-Min;Chang Chu-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.247-256
    • /
    • 2005
  • One of the classic research problems in computer vision is stereo, i.e., the reconstruction of three-dimensional shape from two or more images. This paper deals with the problem of extracting depth information of non-rigid dynamic 3D scenes from general 2D video sequences taken by a monocular camera, such as movies, documentaries, and dramas. Depths of the blocks are extracted from the resultant block motions through the following two steps: (i) calculation of the global parameters concerned with camera translations and focal length, using the locations of blocks and their motions; (ii) calculation of each block's depth relative to the average image depth, using the global parameters together with the block's location and motion. Both singular and non-singular cases are experimented with on various video sequences. The resultant relative depths and ego-motion object shapes are virtually identical to those of human vision.
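Step (ii) above rests on the standard relation that, under lateral camera translation, image motion is inversely proportional to depth. A minimal sketch of that relation, assuming pure horizontal translation and treating the product of focal length and translation as a single known global parameter (the paper estimates such parameters in step (i)):

```python
import numpy as np

def relative_block_depths(motions, f_tx=1.0):
    """Relative depth of each block under lateral camera translation.

    For a camera translating by T_x with focal length f, a point at depth Z
    moves by u = f * T_x / Z in the image, so Z = f * T_x / u.  The result
    is normalised by the average image depth, as in the paper.
    motions: horizontal block motions in pixels (all non-zero)."""
    depths = f_tx / np.asarray(motions, dtype=float)
    return depths / depths.mean()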

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.2
    • /
    • pp.96-105
    • /
    • 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth information of every region from a single-view image with camera translation. The work is based on the fact that the motion of every point within an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors, and then calculates each region's depth relative to the average frame depth. Simulation results show that the estimated depth of regions belonging to near or far objects is consistent with the relative depth that humans perceive.
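The compensation step can be illustrated with a deliberately simplified motion model (an assumption for illustration, not the paper's exact formulation): observed motion = (zoom − 1) · position + pan + depth parallax. Removing the zoom and pan components leaves the parallax, whose magnitude is inversely related to depth:

```python
import numpy as np

def compensate_camera(mv, pos, zoom, pan):
    """Remove the zoom/pan component of block motion vectors.

    mv:  (n, 2) observed block motion vectors
    pos: (n, 2) block positions in frame-centred coordinates
    What remains after compensation is the translation parallax used
    for depth estimation."""
    mv = np.asarray(mv, float)
    pos = np.asarray(pos, float)
    return mv - (zoom - 1.0) * pos - np.asarray(pan, float)
```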


An efficient multi-view video coding using correlation between multi-view video and depth map (다시점 비디오와 깊이 정보의 상관도를 이용한 효율적인 다시점 비디오 부호화 기법)

  • Bae, Byung-Kyu;Yun, Jung-Hwan;Kim, Dong-Wook;Yoo, Ji-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2008.11a
    • /
    • pp.259-262
    • /
    • 2008
  • In this paper, we propose an efficient multi-view video compression method based on JMVM (joint multi-view video model), the reference software for multi-view video coding (MVC) standardized by the JVT (joint video team), exploiting the correlation between multi-view video and depth information. Because conventional video coding schemes encode a single view, transmitting multi-view video would require a separate transmission channel per view; with multi-view video coding, transmission over a single channel becomes possible. The proposed method uses the input multi-view video and its corresponding depth information to improve the efficiency of inter-view prediction. By exploiting the correlation between the global disparity vectors (GDV) of the multi-view video and of the depth information, the complexity can be reduced when multi-view video and depth information must be transmitted together, and a PSNR gain of about 0.01 to 0.1 dB is obtained.
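A global disparity vector summarizes the dominant shift between two views with a single vector. A minimal sketch of how such a GDV could be found (horizontal search only, by minimising the mean absolute difference over the overlap; the paper's search and its colour/depth correlation step are not reproduced here):

```python
import numpy as np

def global_disparity(left, right, max_d=8):
    """Estimate a single global horizontal disparity between two views by
    minimising the mean absolute difference over the overlapping columns."""
    best_d, best_cost = 0, np.inf
    for d in range(-max_d, max_d + 1):
        if d >= 0:
            a, b = left[:, d:], right[:, :left.shape[1] - d]
        else:
            a, b = left[:, :d], right[:, -d:]
        cost = np.mean(np.abs(a.astype(float) - b.astype(float)))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Because the colour views and the depth maps share scene geometry, their GDVs are strongly correlated, which is what the paper exploits to save one search.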


The I-MCTBoost Classifier for Real-time Face Detection in Depth Image (깊이영상에서 실시간 얼굴 검출을 위한 I-MCTBoost)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.3
    • /
    • pp.25-35
    • /
    • 2014
  • This paper proposes a boosting-based classification method for real-time face detection. The proposed method uses depth images to ensure robust face detection under changes in lighting and face size, and uses depth difference features for learning and recognition through the I-MCTBoost classifier. I-MCTBoost performs recognition by cascading strong classifiers constructed from weak classifiers. The learning process for the weak classifiers is as follows: first, depth difference features are generated, eight of these features are combined to form a weak classifier, and each feature is expressed as a binary bit. Strong classifiers are learned by repeatedly selecting a specified number of weak classifiers, and become capable of strong classification through a learning process in which the weights of the learning samples are renewed and learning data is added. This paper explains the depth difference features and proposes a learning method for the weak and strong classifiers of I-MCTBoost. Lastly, the paper compares the proposed classifiers with classifiers using conventional MCT through qualitative and quantitative analyses to establish their feasibility and efficiency.
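The "eight binary depth-difference bits" construction can be sketched as a lookup-table weak classifier, in the spirit of MCT-style boosting. The pixel-pair layout and table values below are illustrative assumptions, not the paper's learned ones:

```python
import numpy as np

def depth_difference_feature(patch, pairs):
    """Encode 8 depth-difference tests as one binary index (0..255).

    Each pair (p, q) of pixel coordinates contributes one bit: 1 if the
    depth at p is greater than at q.  The eight bits index a lookup table
    holding the confidence learned for each binary pattern."""
    bits = 0
    for i, ((y1, x1), (y2, x2)) in enumerate(pairs):
        if patch[y1, x1] > patch[y2, x2]:
            bits |= 1 << i
    return bits

def weak_classify(patch, pairs, table):
    """Look up the learned confidence for this patch's binary pattern."""
    return table[depth_difference_feature(patch, pairs)]
```

A strong classifier would sum the confidences of many such weak classifiers and threshold the total.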

Illumination Compensation Algorithm based on Segmentation with Depth Information for Multi-view Image (깊이 정보를 이용한 영역분할 기반의 다시점 영상 조명보상 기법)

  • Kang, Keunho;Ko, Min Soo;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.4
    • /
    • pp.935-944
    • /
    • 2013
  • In this paper, a new illumination compensation algorithm based on segmentation with depth information is proposed to improve the coding efficiency of multi-view images. In the proposed algorithm, a reference image is first segmented into several layers, where each layer is composed of objects with similar depth values. Objects within the same layer are then separated from each other by labeling each connected region in the layered image. Next, the labeled reference depth image is converted to the position of the distorted view by using a 3D warping algorithm. Finally, we apply an illumination compensation algorithm to each of the matched regions in the converted reference view and the distorted view. The occlusion regions that arise from 3D warping are compensated by a global compensation method. Through experimental results, we confirm that the proposed algorithm improves coding efficiency.
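The per-region compensation step can be sketched as a mean/variance match between corresponding regions. The gain/offset form below is an assumption for illustration; in the paper the region labels come from depth-layer segmentation followed by 3D warping:

```python
import numpy as np

def compensate_regions(distorted, reference, labels):
    """Per-region illumination compensation by matching mean and standard
    deviation of each labeled region of `distorted` to `reference`.
    `labels` assigns every pixel of both (already aligned) views to a region."""
    out = distorted.astype(float).copy()
    for lab in np.unique(labels):
        m = labels == lab
        d_mu, r_mu = out[m].mean(), reference[m].mean()
        d_sd, r_sd = out[m].std(), reference[m].std()
        gain = r_sd / d_sd if d_sd > 0 else 1.0
        out[m] = (out[m] - d_mu) * gain + r_mu
    return out
```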

Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.2
    • /
    • pp.31-39
    • /
    • 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a good solution to the problem of 3D content shortage during the transition to a fully mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.
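The idea of a fixed global-depth pattern plus local refinement can be sketched as below. The vertical "bottom is near" gradient and the additive object boost are illustrative assumptions; the paper's actual pattern set and refinement rule are not reproduced:

```python
import numpy as np

def pattern_global_depth(h, w, pattern="bottom_near"):
    """A fixed global-depth pattern: a vertical gradient where lower rows
    are nearer (a common prior for upright scenes).  Using one fixed
    pattern per shot keeps the global depth temporally consistent at
    almost no computational cost."""
    col = np.linspace(0.0, 1.0, h)[:, None]  # 0 = far (top) .. 1 = near
    depth = np.repeat(col, w, axis=1)
    return depth if pattern == "bottom_near" else 1.0 - depth

def refine_with_objects(global_depth, object_mask, boost=0.2):
    """Push detected object regions slightly toward the viewer."""
    return np.clip(global_depth + boost * object_mask, 0.0, 1.0)
```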

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.182-189
    • /
    • 2023
  • Depth estimation is a key technology in 3D map generation for the autonomous driving of vehicles, robots, and drones. Existing sensor-based methods are accurate but expensive and low-resolution, while camera-based methods are more affordable and offer higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and we reduce the weight size of the self-attention operation to keep the computational load low. The estimated depth and camera pose are transformed into a point cloud, which is mapped into a 3D map using an octree-structured occupancy grid. The proposed network is evaluated on synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies.
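The global receptive field that self-attention adds to the depth network can be illustrated with a single-head sketch over a flattened feature map. This is the generic attention operation, not the paper's lightweight variant:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over a flattened feature map.

    x: (n, c) features (n = H*W positions); wq/wk/wv: (c, d) projections.
    Every position attends to every other position, giving the network a
    global receptive field."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows are softmax weights
    return attn @ v
```

Reducing the projection width d (or factorising the attention) is one way to cut the weight size, in the spirit of the paper's efficiency reduction.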

Depth Generation using Bifocal Stereo Camera System for Autonomous Driving (자율주행을 위한 이중초점 스테레오 카메라 시스템을 이용한 깊이 영상 생성 방법)

  • Lee, Eun-Kyung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1311-1316
    • /
    • 2021
  • In this paper, we present a bifocal stereo camera system that combines two cameras with different focal lengths to generate stereoscopic images and their corresponding depth map. To obtain depth data with this system, we perform camera calibration to extract the internal and external parameters of each camera, compute a common image plane, and perform image rectification using the camera parameters of the bifocal stereo pair. Finally, we use the SGM (semi-global matching) algorithm to generate the depth map. The proposed bifocal stereo camera system not only performs each camera's own function but also generates distance information about vehicles, pedestrians, and obstacles in the current driving environment, making it possible to design safer autonomous vehicles.
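After rectification, stereo matching reduces to a per-row disparity search. The sketch below shows only a winner-take-all search over a pixelwise absolute-difference cost volume; real SGM additionally aggregates this cost along several image paths with smoothness penalties P1/P2, which is omitted here for brevity:

```python
import numpy as np

def wta_disparity(left, right, max_d=4):
    """Winner-take-all disparity from a per-pixel absolute-difference
    cost volume over rectified images (same rows correspond)."""
    h, w = left.shape
    cost = np.full((max_d + 1, h, w), np.inf)
    for d in range(max_d + 1):
        cost[d, :, d:] = np.abs(left[:, d:].astype(float)
                                - right[:, :w - d].astype(float))
    return cost.argmin(axis=0)  # (h, w) disparity map
```

Depth then follows from disparity as Z = f·B/d, with f the focal length and B the baseline from calibration.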

Camera Motion Estimation using Geometrically Symmetric Points in Subsequent Video Frames (인접 영상 프레임에서 기하학적 대칭점을 이용한 카메라 움직임 추정)

  • Jeon, Dae-Seong;Mun, Seong-Heon;Park, Jun-Ho;Yun, Yeong-U
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.39 no.2
    • /
    • pp.35-44
    • /
    • 2002
  • Camera translation and rotation cause global motion that affects the entire frame of a video sequence. In video sequences containing global motion, it is practically impossible to extract exact video objects and to compute genuine object motions, and a high compression ratio cannot be achieved because of the large motion vectors. This problem can be solved when globally motion-compensated frames are used. The existing camera motion estimation methods for global motion compensation all require a large amount of computation. In this paper, we propose a simple global motion estimation algorithm that consists of linear equations without any iteration. The algorithm uses information of symmetric points in the frames of the video sequence. Discriminant conditions are presented to distinguish distant-view regions from the foreground in the frame, and the linear equations for the panning, tilting, and zooming parameters are applied only to the distant view satisfying these conditions. Experimental results on MPEG test sequences confirm that the proposed algorithm estimates correct global motion parameters. Moreover, the real-time capability of the proposed technique makes it applicable to many MPEG-4 and MPEG-7 related areas.
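The appeal of symmetric points is that the global-motion parameters fall out of linear expressions with no iteration. A minimal sketch under an assumed pan-plus-zoom model (mv = (zoom − 1) · position + pan, in frame-centred coordinates; the paper's exact formulation, including tilt, may differ): adding the motions of two points symmetric about the centre cancels the zoom term, and subtracting them cancels the pan.

```python
import numpy as np

def global_motion_from_symmetric_pair(p, mv_p, mv_q):
    """Pan and zoom from the motions of two points placed symmetrically
    about the frame centre (q = -p in centred coordinates).

    mv = (zoom - 1) * position + pan, so:
      mv_p + mv_q = 2 * pan            (zoom cancels)
      mv_p - mv_q = 2 * (zoom - 1) * p (pan cancels)"""
    p = np.asarray(p, float)
    mv_p, mv_q = np.asarray(mv_p, float), np.asarray(mv_q, float)
    pan = (mv_p + mv_q) / 2.0
    zoom = 1.0 + (mv_p - mv_q) @ p / (2.0 * p @ p)
    return pan, zoom
```

In practice one would average these estimates over many symmetric pairs drawn from the distant-view regions that pass the discriminant conditions.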