• Title/Summary/Keyword: 3D scene reconstruction

Deep Window Detection in Street Scenes

  • Ma, Wenguang; Ma, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.855-870 / 2020
  • Windows are key components of building facades. Detecting windows, which is crucial to 3D semantic reconstruction and scene parsing, is a challenging task in computer vision. Early methods try to solve window detection with hand-crafted features and traditional classifiers; however, they are unable to handle the diversity of window instances in real scenes and suffer from heavy computational costs. Recently, object detection algorithms based on convolutional neural networks have attracted much attention due to their good performance. Unfortunately, directly training them for the challenging window detection task does not achieve satisfactory results. In this paper, we propose an approach for window detection. It involves an improved Faster R-CNN architecture featuring a window region proposal network, an RoI feature fusion module, and a context enhancement module. In addition, a post-optimization process that exploits the regular distribution of windows is designed to refine the detection results of the improved deep architecture. Furthermore, we present a newly collected dataset, the largest to date for window detection in real street scenes. Experimental results on both existing datasets and the new dataset show that the proposed method has outstanding performance.
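
The window region proposal network, RoI feature fusion, and context enhancement modules mentioned in the abstract are specific to the paper and are not reproduced here. As a hedged baseline sketch only, the snippet below wires up a generic two-class (background/window) Faster R-CNN in torchvision, with anchor settings loosely biased toward window-like rectangles; every name, size, and ratio is an illustrative assumption, not the authors' implementation.

```python
# Minimal baseline sketch (not the paper's architecture): a two-class
# Faster R-CNN detector for windows, with anchor aspect ratios biased
# toward upright rectangular shapes. Anchor sizes/ratios are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.rpn import AnchorGenerator

# One anchor size per FPN level; ratios include tall, upright shapes.
anchor_sizes = ((32,), (64,), (128,), (256,), (512,))
aspect_ratios = ((0.5, 1.0, 2.0),) * len(anchor_sizes)
anchor_generator = AnchorGenerator(sizes=anchor_sizes,
                                   aspect_ratios=aspect_ratios)

model = fasterrcnn_resnet50_fpn(
    weights=None,                # no pretrained detector weights
    weights_backbone=None,       # keep the example fully offline
    num_classes=2,               # background + window
    rpn_anchor_generator=anchor_generator,
)

# Training step on a dummy image with one annotated window box.
images = [torch.rand(3, 600, 800)]
targets = [{
    "boxes": torch.tensor([[100.0, 150.0, 180.0, 300.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([1]),                            # 1 = window
}]
model.train()
losses = model(images, targets)  # dict of RPN and RoI head losses
print({k: float(v) for k, v in losses.items()})

# Inference returns a list of dicts with "boxes", "labels", "scores".
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 600, 800)])
print(detections[0]["boxes"].shape)
```

A detector set up this way corresponds only to the plain Faster R-CNN baseline the abstract argues is insufficient; the paper's contribution lies precisely in the modules this sketch omits.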

Robust Features and Accurate Inliers Detection Framework: Application to Stereo Ego-motion Estimation

  • Min, Haigen; Zhao, Xiangmo; Xu, Zhigang; Zhang, Licheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.302-320 / 2017
  • In this paper, an innovative robust feature detection and matching strategy for visual odometry based on stereo image sequences is proposed. First, AKAZE, a sparse multiscale 2D local invariant feature detection and description algorithm, is adopted to extract interest points, and a robust matching strategy is introduced to match the AKAZE descriptors. To remove outliers, i.e., mismatched features or features on dynamic objects, an improved random sample consensus (RANSAC) outlier rejection scheme is presented, so the proposed method can be applied to dynamic environments. Then, geometric constraints are incorporated into the motion estimation without time-consuming 3-dimensional scene reconstruction. Last, an iterated sigma-point Kalman filter is adopted to refine the motion results. The presented ego-motion scheme is applied to benchmark datasets and compared with state-of-the-art approaches on data captured on campus in a considerably cluttered environment, where its superiority is demonstrated.
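
As a rough, hedged illustration of the kind of front end the abstract describes (AKAZE features, ratio-test matching, and RANSAC-style rejection of mismatches before pose estimation), the snippet below estimates a two-frame relative pose with plain OpenCV calls. The intrinsic matrix, file names, and thresholds are placeholders, and the authors' improved outlier rejection, stereo geometric constraints, and iterated sigma-point Kalman filter are not reproduced.

```python
# Sketch of a two-frame feature-based relative pose estimate with OpenCV:
# plain AKAZE + ratio-test matching + RANSAC on the essential matrix,
# not the paper's improved rejection scheme or stereo/Kalman stages.
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.19],   # assumed pinhole intrinsics
              [0.0, 718.856, 185.22],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# AKAZE descriptors are binary, so Hamming distance + Lowe ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [pair[0] for pair in pairs
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC on the essential matrix rejects mismatches and features on
# independently moving objects insofar as they violate epipolar geometry.
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)

print("inliers:", int(inlier_mask.sum()))
print("rotation:\n", R)
print("unit translation:", t.ravel())
```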

Full-color Non-hogel-based Computer-generated Hologram from Light Field without Color Aberration

  • Min, Dabin; Min, Kyosik; Park, Jae-Hyeung
    • Current Optics and Photonics / v.5 no.4 / pp.409-420 / 2021
  • We propose a method to synthesize a color non-hogel-based computer-generated hologram (CGH) from light field data of a three-dimensional scene with a hologram pixel pitch shared by all color channels. The non-hogel-based CGH technique generates a continuous wavefront with an arbitrary carrier wave from given light field data by interpreting the ray angle in the light field as the spatial frequency of a plane wavefront. The relation between ray angle and spatial frequency is, however, wavelength dependent, which leads to different spatial frequency sampling grids in the light field data and results in color aberrations in the hologram reconstruction. The proposed method sets a hologram pixel pitch common to all color channels such that the smallest diffraction angle, that of blue, covers the field of view of the light field. Then a spatial frequency sampling grid common to all color channels is established by interpolating the light field with the spatial frequency range of the blue wavelength and the sampling interval of the red wavelength. The common hologram pixel pitch and light field spatial frequency sampling grid ensure the synthesis of a color hologram without any color aberrations in the hologram reconstructions and without any loss of information contained in the light field. The proposed method is successfully verified using color light field data of various test and natural 3D scenes.
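
To make the pitch and sampling relations in the abstract concrete, the short sketch below computes, under assumed RGB wavelengths, field of view, and resolution, a hologram pixel pitch common to all channels from the standard relation sin(theta_max) = lambda / (2p), so that the smallest (blue) diffraction angle still covers the field of view, and then prints the per-channel angle ranges behind the shared spatial-frequency grid. The numbers and the exact resampling rule are illustrative assumptions, not the authors' formulation.

```python
# Illustrative numbers only (assumed wavelengths, FOV, and resolution):
# pick a hologram pixel pitch shared by R/G/B so that even the blue
# channel's maximum diffraction angle covers the light-field FOV, then
# inspect the per-channel angle ranges behind the common frequency grid.
import numpy as np

wavelengths = {"red": 638e-9, "green": 520e-9, "blue": 450e-9}  # meters (assumed)
half_fov = np.radians(5.0)   # assumed half field of view of the light field
N = 2048                     # assumed hologram pixels per axis

# A pixelated hologram of pitch p diffracts up to sin(theta_max) = lambda / (2p).
# Requiring theta_max >= half FOV for the smallest wavelength (blue) gives
# the largest admissible pitch common to all channels.
p = wavelengths["blue"] / (2 * np.sin(half_fov))
print(f"common pixel pitch: {p * 1e6:.2f} um")

f_nyquist = 1 / (2 * p)      # max representable spatial frequency (all channels)
df = 1 / (N * p)             # spatial-frequency sample spacing on the hologram

for name, lam in wavelengths.items():
    theta_max = np.degrees(np.arcsin(lam / (2 * p)))
    # A ray at angle theta carries spatial frequency f = sin(theta) / lambda,
    # so the same frequency axis spans a different angular range per color,
    # which is why the light field is resampled onto one common grid.
    print(f"{name:5s}: theta_max = {theta_max:5.2f} deg, "
          f"f range = +/-{f_nyquist / 1e3:6.1f} cyc/mm, "
          f"df = {df / 1e3:.3f} cyc/mm")
```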

A Study on the Reproducibility of 3D Shape Models of Garden Cultural Heritage using Photogrammetry with SNS Photographs - Focused on Soswaewon Garden, Damyang (Scenic Site No. 40) -

  • Kim, Choong-Sik; Lee, Sang-Ha
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.36 no.4 / pp.94-104 / 2018
  • This study examined photogrammetric reconstruction techniques that can recover the original form of a cultural property from photographs taken in the past. During the research, past photographs of Soswaewon Garden in Damyang (Scenic Site No. 40), including photographs collected from the internet, were used. The landscape structures Maedae, Aeyangdan, the Ogokmun wall, and Yakjak, and the natural feature Gwangseok, all of which can be photographed from any direction at close or far range without obstruction, were selected to test the possibility of reproducing their three-dimensional shapes. For these five subjects, the photography methods of 151 landscape photographs (58.6%) from internet portal sites that contained the shooting date, focal length, and exposure were analyzed. The analysis revealed that most photographs concentrate on the important parts of each subject, and that internet users prefer two or three shooting positions for each subject. For the experiment, photographs in which a single scene of each subject appears consistently and whose shooting method showed a high level of preference were selected, and a three-dimensional mesh model was produced with the PhotoScan program to analyze the reproducibility of the three-dimensional shapes. Built elements such as the Ogokmun wall, Maedae, and Aeyangdan could be reproduced relatively well, but natural features and objects with uniform texture, such as Yakjak and Gwangseok, could not. When additional photographs were taken on site using shooting methods similar to those of the selected photographs, the three-dimensional shapes of Yakjak and Gwangseok, which could not be reconstructed from the previously collected photographs, were successfully reproduced. In addition, by comparing past and present models, exact sizes could be measured and changes over time could be identified. If past photographs of cultural properties taken by tourists or landscape architects can be obtained, the three-dimensional shape of a particular period can be reproduced. If this technology becomes widespread, it will increase the accuracy and reliability of measuring the past shapes of cultural landscape properties and of examining changes to them.
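
The study's selection step relies on shooting metadata (date, focal length, exposure) embedded in the collected photographs before they are fed to a photogrammetry tool such as PhotoScan. As a small, hedged illustration of that screening step only, the snippet below reads EXIF tags with Pillow; the folder name is a placeholder and the tag handling is a simplification, not the authors' workflow.

```python
# Sketch: collect the EXIF fields the study analyzed (shooting date,
# focal length, exposure time) from a folder of downloaded photographs.
# Paths are placeholders; photos without EXIF are simply skipped.
from pathlib import Path
from PIL import Image
from PIL.ExifTags import TAGS

WANTED = {"DateTimeOriginal", "FocalLength", "ExposureTime", "Model"}

def read_exif(path):
    """Return a dict of the wanted EXIF tags, or an empty dict if none."""
    with Image.open(path) as img:
        raw = img._getexif() or {}          # legacy flattened EXIF accessor
    return {TAGS.get(tag_id, tag_id): value
            for tag_id, value in raw.items()
            if TAGS.get(tag_id) in WANTED}

records = []
for photo in sorted(Path("soswaewon_photos").glob("*.jpg")):  # placeholder folder
    exif = read_exif(photo)
    if exif:                                 # keep only photos with metadata
        records.append((photo.name, exif))

for name, exif in records:
    print(name, exif.get("DateTimeOriginal"),
          exif.get("FocalLength"), exif.get("ExposureTime"))
```

The reconstruction itself was performed in PhotoScan in the study; the sketch covers only the metadata screening that precedes it.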