• Title/Summary/Keyword: Depth extraction


Depth Map Extraction from the Single Image Using Pix2Pix Model (Pix2Pix 모델을 활용한 단일 영상의 깊이맵 추출)

  • Gang, Su Myung;Lee, Joon Jae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.5
    • /
    • pp.547-557
    • /
    • 2019
  • A number of CNN-based deep learning methods have recently been proposed to extract a depth map from a single image. In this study, the GAN structure of Pix2Pix is retained; the generator-discriminator pair allows the model to converge well, but its standard convolutions are slow to compute. We therefore replace the convolutions in the generator with depthwise convolutions to improve speed while preserving quality: the seven down-sampling convolutional hidden layers of the generator's U-Net are changed to depthwise convolutions. This type of convolution reduces the number of parameters and speeds up computation. The proposed model produces depth map predictions similar to those of the original structure, while inference time is reduced by 64%.
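The paper itself contains no code; the following is a minimal PyTorch sketch of the kind of substitution it describes, comparing a standard strided down-sampling convolution with a depthwise-separable one. The channel counts and kernel parameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not from the paper): replacing a standard down-sampling
# convolution with a depthwise-separable one, as done in the U-Net
# generator's encoder. Channel counts and kernel size are assumptions.
import torch
import torch.nn as nn

def standard_down(in_ch, out_ch):
    # ordinary strided convolution: in_ch * out_ch * 4 * 4 weights
    return nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)

def depthwise_down(in_ch, out_ch):
    # depthwise conv (one filter per channel) followed by a 1x1 pointwise conv;
    # the parameter count drops to roughly in_ch * 4 * 4 + in_ch * out_ch
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=4, stride=2, padding=1, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )

x = torch.randn(1, 64, 128, 128)
print(standard_down(64, 128)(x).shape)   # torch.Size([1, 128, 64, 64])
print(depthwise_down(64, 128)(x).shape)  # same output shape, far fewer parameters
```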

Fast Extraction of Objects of Interest from Images with Low Depth of Field

  • Kim, Chang-Ick;Park, Jung-Woo;Lee, Jae-Ho;Hwang, Jenq-Neng
    • ETRI Journal
    • /
    • v.29 no.3
    • /
    • pp.353-362
    • /
    • 2007
  • In this paper, we propose a novel unsupervised video object extraction algorithm for individual images or image sequences with low depth of field (DOF). Low DOF is a popular photographic technique that conveys the photographer's intention by placing a clear focus only on an object of interest (OOI). We first describe a fast and efficient scheme for extracting OOIs from individual low-DOF images and then extend it to image sequences with low DOF. The basic algorithm unfolds into three modules. In the first module, a higher-order statistics map, which represents the spatial distribution of the high-frequency components, is obtained from an input low-DOF image. The second module locates a block-based OOI for further processing; using the block-based OOI, the final OOI is obtained with pixel-level accuracy. We also present an algorithm that extends the extraction scheme to image sequences with low DOF. The proposed system does not require any user assistance to determine the initial OOI, which is possible because low-DOF images are used. The experimental results indicate that the proposed algorithm can serve as an effective tool for applications such as 2D-to-3D conversion and photo-realistic video scene generation.
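A minimal sketch (assumed, not the authors' implementation) of the first two modules: a higher-order statistics (HOS) map built from the high-frequency components of a low-DOF image, followed by a coarse block-level OOI mask. Window size, block size, and the threshold are illustrative placeholders.

```python
# HOS map + block-level OOI localization, sketched with NumPy/SciPy.
import numpy as np
from scipy import ndimage

def hos_map(gray, win=5):
    highpass = gray - ndimage.uniform_filter(gray, size=3)   # high-frequency part
    mean = ndimage.uniform_filter(highpass, size=win)
    # local fourth-order central moment: large only where the image is in sharp focus
    return ndimage.uniform_filter((highpass - mean) ** 4, size=win)

def block_ooi(hos, block=16, thresh=None):
    h, w = hos.shape
    hb, wb = h // block, w // block
    blocks = hos[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    if thresh is None:
        thresh = blocks.mean()          # simple global threshold as a placeholder
    return blocks > thresh              # True for blocks likely inside the OOI
```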


Effect of process parameters on the recovery of thorium tetrafluoride prepared by hydrofluorination of thorium oxide, and their optimization

  • Kumar, Raj;Gupta, Sonal;Wajhal, Sourabh;Satpati, S.K.;Sahu, M.L.
    • Nuclear Engineering and Technology
    • /
    • v.54 no.5
    • /
    • pp.1560-1569
    • /
    • 2022
  • Liquid-fueled molten salt reactors (MSRs) have seen renewed interest because of their inherent safety features, higher thermal efficiency, and potential for efficient thorium utilisation for power generation. Thorium fluoride is one of the salts used in liquid-fueled MSRs employing the Th-U cycle. In the present study, ThF4 was prepared by hydrofluorination of ThO2 using anhydrous HF gas. Process parameters, viz. bed depth, hydrofluorination time, and hydrofluorination temperature, were optimized for the preparation of ThF4 in a static-bed reactor setup. The products were characterized by X-ray diffraction, and experimental conditions for complete conversion to ThF4 were established, which also corroborated the yield values. Hydrofluorination of ThO2 at 450 ℃ for half an hour at a bed depth of 6 mm gave the best result, with a yield of about 99.36% ThF4. No unconverted oxide or any other impurity was observed. Rietveld refinement was performed on the XRD data of this ThF4, and a χ² value of 3.54 indicated good agreement between the observed and calculated profiles.

Depth Extraction of Partially Occluded 3D Objects Using Axially Distributed Stereo Image Sensing

  • Lee, Min-Chul;Inoue, Kotaro;Konishi, Naoki;Lee, Joon-Jae
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.4
    • /
    • pp.275-279
    • /
    • 2015
  • There are several methods for recording three-dimensional (3D) information about objects, such as lens-array-based integral imaging, synthetic aperture integral imaging (SAII), computer-synthesized integral imaging (CSII), axially distributed image sensing (ADS), and axially distributed stereo image sensing (ADSS). The ADSS method is capable of recording partially occluded 3D objects and reconstructing high-resolution slice plane images. In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental stereo image pairs are recorded by simply moving the stereo camera along the optical axis, and the recorded elemental image pairs are used to reconstruct 3D slice images with the computational reconstruction algorithm. To extract the depth of a partially occluded 3D object, we apply edge enhancement and a simple block-matching algorithm to the two reconstructed slice images of a pair. To demonstrate the proposed method, we carry out preliminary experiments and present the results.
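A minimal sketch (assumed, not the paper's code) of the final stage: edge enhancement followed by block matching between a reconstructed slice-image pair to estimate per-block disparity, from which depth follows. The block size and search range are illustrative assumptions.

```python
# Edge-enhanced block matching between two reconstructed slice images.
import numpy as np
from scipy import ndimage

def edge_enhance(img):
    return np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))

def block_disparity(left, right, block=16, max_disp=32):
    L, R = edge_enhance(left), edge_enhance(right)
    h, w = L.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = L[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):        # search along the baseline
                cand = R[y:y + block, x - d:x - d + block]
                cost = np.abs(ref - cand).sum()          # SAD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d                        # larger disparity = closer object
    return disp
```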

Study on light extraction efficiency of a side-etched LED (측면 식각된 LED의 광추출 효율에 관한 연구)

  • Noh, Y.K.;Kwon, K.Y.
    • Korean Journal of Optics and Photonics
    • /
    • v.14 no.2
    • /
    • pp.122-129
    • /
    • 2003
  • For an AlGaInP/GaP rectangular-parallelepiped high-brightness LED whose side walls are etched to slant away from the vertical, we have studied the effects of lossy electrodes, material absorption, and the etching depth and angle of the side walls on its light extraction efficiency. If the LEDs have no electrodes, then to obtain 80% of the light extraction efficiency of a TIP (truncated inverted pyramid) LED, the side-etched LEDs should have an etching angle of 22°~45°, an etching depth of 8~17% of the die height, and an absorption coefficient of less than 1 cm⁻¹; with an etching depth of 16~39% of the die height, 90% of the light extraction efficiency of a TIP LED can be obtained. When the LEDs have two electrodes and no absorption loss, obtaining 80% of the light extraction efficiency of a TIP LED requires an etching angle of 25°~45° and an etching depth of 30~36% of the die height; with an etching depth of 57~71% of the die height, 90% of the light extraction efficiency of a TIP LED can be obtained.

Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar;Jalal, Ahmad;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.6
    • /
    • pp.1857-1862
    • /
    • 2016
  • Human activity recognition using depth information is an emerging and challenging technology in computer vision, having attracted considerable attention from practical applications such as smart home/office systems, personal health care, and 3D video games. This paper presents a novel framework for 3D human body detection, tracking, and recognition from depth video sequences using spatiotemporal features and a modified HMM. To detect the human silhouette, raw depth data are examined, exploiting spatial continuity and constraints on human motion, while frame differencing is used to track human movements. The feature extraction mechanism combines spatial depth shape features with temporal joint features to improve classification performance; both kinds of features are fused and used to recognize different activities with the modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, our system is robust to rotation of the subject's body parts and to missing body parts, which is a major contribution to human activity recognition.
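A minimal sketch of the classification stage only, under the assumption of the standard per-activity HMM scheme the paper builds on (the paper's modified HMM is not reproduced here). Feature dimensions and state counts are illustrative placeholders; hmmlearn's GaussianHMM stands in for the M-HMM.

```python
# One HMM per activity, trained on fused spatiotemporal feature sequences;
# a test sequence is assigned the label whose model scores it highest.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_activity_models(train_data, n_states=5):
    """train_data: {activity_label: list of (T_i, D) fused feature sequences}."""
    models = {}
    for label, seqs in train_data.items():
        X = np.vstack(seqs)                      # hmmlearn expects stacked sequences
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    # maximum-likelihood decision over the per-activity HMMs
    return max(models, key=lambda label: models[label].score(seq))
```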

An extraction of depth information in pattern using directions and slopes (방향과 경사도 분포를 이용한 패턴의 굴곡 성분 추출)

  • Jeon, H.J.;Cho, D.S.;Kim, B.C.
    • Proceedings of the KIEE Conference
    • /
    • 1992.07a
    • /
    • pp.462-464
    • /
    • 1992
  • In this paper, an extraction of depth information from a pattern using a neural network is presented. All of the 3D images represent the depth information as grey-level pixels; these analog-valued pixels are translated into digital values. Because of noise and distortion in the patterns, normalization is used when learning and recalling the patterns. Our method assigns eight direction vectors and slopes to a pattern, and a potential function is used to obtain the mean slope and direction vectors of the given 3D patches. A higher level of deduction, which finds the global depth information, is also carried out using a neural network.
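A minimal sketch (assumed; the authors' neural network is not reproduced) of the kind of local description the abstract refers to: quantizing the gradient of a grey-level depth patch into eight direction bins and computing a mean slope.

```python
# Eight-direction gradient distribution and mean slope of a depth patch.
import numpy as np

def direction_and_slope(patch):
    gy, gx = np.gradient(patch.astype(float))         # depth gradients
    angle = np.arctan2(gy, gx)                         # -pi..pi
    slope = np.hypot(gx, gy)                           # local steepness
    bins = ((angle + np.pi) / (2 * np.pi) * 8).astype(int) % 8   # 8 direction bins
    hist = np.bincount(bins.ravel(), weights=slope.ravel(), minlength=8)
    return hist / (hist.sum() + 1e-9), slope.mean()    # direction distribution, mean slope
```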


An Image Coding Algorithm for the Representation of the Set of the Zoom Images (Zoom 영상 표현을 위한 영상 코딩 알고리듬)

  • Jang, Bo-Hyeon;Kim, Do-Hyeon;Yang, Yeong-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.5
    • /
    • pp.498-508
    • /
    • 2001
  • In this paper, we propose an efficient coding algorithm for zoom images that finds the optimal depth and texture information. The proposed algorithm is an area-based method consisting of two consecutive steps: i) a depth extraction step and ii) a texture extraction step. The X-Y plane of the object space is divided into triangular patches; the depth value of each node is determined in the first step, and the texture of each patch is extracted in the second step. In the depth extraction step, the depth of a node is determined by applying a block-based disparity compensation method to the windowed area centered at the node. In the second step, the texture of the triangular patches is extracted from the zoom images by applying an affine-transformation-based disparity compensation method to the triangular patches, using the depth values extracted in the first step. To improve image quality, interpolation is performed in the object space instead of on the image plane.
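A minimal sketch (assumed, not the paper's implementation) of the first step: block-based disparity compensation for one node on the X-Y grid. For each candidate depth, the node is projected into a second zoom image with a user-supplied projection function, and the depth minimizing the windowed matching error is kept. The `project` function and the window size are hypothetical placeholders standing in for the paper's camera model.

```python
# Depth of a grid node by block-based disparity compensation between two zoom images.
import numpy as np

def window(img, cx, cy, half=8):
    return img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)

def node_depth(img_a, img_b, node_xy, depth_candidates, project):
    """project(node_xy, depth) -> (x, y) pixel position of the node in img_b (assumed)."""
    xa, ya = node_xy
    ref = window(img_a, xa, ya)
    best_depth, best_cost = None, np.inf
    for d in depth_candidates:
        xb, yb = project(node_xy, d)
        cand = window(img_b, int(round(xb)), int(round(yb)))
        if cand.shape != ref.shape:                  # window fell outside img_b
            continue
        cost = ((ref - cand) ** 2).sum()             # SSD matching error
        if cost < best_cost:
            best_depth, best_cost = d, cost
    return best_depth
```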
