• Title/Summary/Keyword: 3D Scene Reconstruction


Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.290-293
    • /
    • 2009
  • In this paper, we propose a method for estimating the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction that extends from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face and another from the user's fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified through experiments on several real video sequences.
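
The geometric core of the abstract above is the intersection of the face-to-fingertip ray with a known planar target. The following is a minimal sketch of that step, assuming the 3D positions of the face and fingertip have already been recovered (e.g. by triangulating the two camera views); the function name and the plane parameterization are illustrative, not taken from the paper.

```python
import numpy as np

def pointing_target(face_3d, fingertip_3d, plane_point, plane_normal):
    """Intersect the face-to-fingertip pointing ray with a planar target surface.

    All inputs are 3D vectors in the world frame. face_3d and fingertip_3d are
    assumed to come from triangulating the two camera views (hypothetical names).
    """
    direction = fingertip_3d - face_3d               # pointing direction
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                                  # ray is parallel to the plane
    t = np.dot(plane_normal, plane_point - face_3d) / denom
    if t < 0:
        return None                                  # target plane is behind the user
    return face_3d + t * direction                   # pointed-at position on the plane

# Example: pointing toward a screen lying in the plane z = 0.
face = np.array([0.0, 1.6, 3.0])
fingertip = np.array([0.1, 1.4, 2.4])
target = pointing_target(face, fingertip,
                         plane_point=np.array([0.0, 0.0, 0.0]),
                         plane_normal=np.array([0.0, 0.0, 1.0]))
```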


3D Shape Reconstruction of Non-Lambertian Surface (Non-Lambertian면의 형상복원)

  • 김태은;이말례
    • Journal of Korea Multimedia Society
    • /
    • v.1 no.1
    • /
    • pp.26-36
    • /
    • 1998
  • How to obtain 3D information from a 2D image is a very important research field in computer vision. For this purpose, we must know the camera position, the light source direction, and the surface reflectance properties, which are intrinsic properties of the object in the scene, before the image is taken. Among these, the surface reflectance property provides particularly important clues. Most previous studies assume that objects exhibit only Lambertian reflectance, but many real-world objects are non-Lambertian. In this paper, a new method is proposed for analyzing surface reflectance properties and reconstructing the shape of an object by estimating its reflectance parameters. We focus on non-Lambertian surfaces that exhibit both specular and diffuse reflection, which can be described by the Torrance-Sparrow model. The photometric matching method proposed in this paper is robust because it matches the reference image and the object image while taking the neighboring brightness distribution into account. A neural-network-based shape reconstruction method is also proposed, which can operate in the absence of reflectance information: given the brightness values obtained under each light source as input, the network is trained on surface normals and can thereby determine the surface shape of the object.
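
The abstract models reflected brightness as a Lambertian diffuse term plus a Torrance-Sparrow specular term. The sketch below shows one commonly used simplified form of such a hybrid model; the parameter names (k_d, k_s, sigma) are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np

def hybrid_intensity(n, l, v, k_d, k_s, sigma):
    """Diffuse (Lambertian) + simplified Torrance-Sparrow specular intensity.

    n, l, v  : surface normal, light direction and viewing direction.
    k_d, k_s : diffuse and specular reflectance coefficients (illustrative names).
    sigma    : surface roughness, i.e. spread of the microfacet angle (radians).
    """
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    diffuse = k_d * max(np.dot(n, l), 0.0)                    # Lambertian term
    h = (l + v) / np.linalg.norm(l + v)                       # halfway vector
    alpha = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))       # microfacet angle
    specular = k_s * np.exp(-alpha**2 / (2.0 * sigma**2)) / max(np.dot(n, v), 1e-6)
    return diffuse + specular
```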


An effective indoor video surveillance system based on wide baseline cameras (Wide baseline 카메라 기반의 효과적인 실내공간 감시시스템)

  • Kim, Woong-Chang;Kim, Seung-Kyun;Choi, Kang-A;Jung, June-Young;Ko, Sung-Jea
    • Journal of IKEEE
    • /
    • v.14 no.4
    • /
    • pp.317-323
    • /
    • 2010
  • The video surveillance system is adopted in many places due to its efficiency and constancy in monitoring a specific area over a long period of time. However, surveillance systems composed of a single static camera often produce unsatisfactory results because of their limited field of view. In this paper, we present a video surveillance system based on wide-baseline stereo cameras to overcome this limitation. We adopt the codebook algorithm and mathematical morphology to robustly model the foreground pixels of the moving object in the scene, and calculate the trajectory of the moving object via 3D reconstruction. The experimental results show that the proposed system detects a moving object and successfully generates a top-view trajectory to track the location of the object in world coordinates.
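
Once the foreground blob of the moving object has been extracted in both views (codebook background subtraction followed by morphological clean-up, per the abstract), its world position can be obtained by stereo triangulation. A minimal sketch with OpenCV is shown below, assuming both wide-baseline cameras are already calibrated; the function name is illustrative.

```python
import numpy as np
import cv2

def object_world_position(P1, P2, centroid1, centroid2):
    """Triangulate the 3D world position of a detected foreground object.

    P1, P2               : 3x4 projection matrices of the two wide-baseline cameras.
    centroid1, centroid2 : (x, y) image centroid of the foreground blob in each view.
    """
    pts1 = np.asarray(centroid1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(centroid2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
    X = (X_h[:3] / X_h[3]).ravel()                    # Euclidean world coordinates
    return X                                          # plot (X[0], X[1]) for a top view
```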

AI-Based Object Recognition Research for Augmented Reality Character Implementation (증강현실 캐릭터 구현을 위한 AI기반 객체인식 연구)

  • Seok-Hwan Lee;Jung-Keum Lee;Hyun Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1321-1330
    • /
    • 2023
  • This study addresses the problem of 3D pose estimation for multiple human objects from a single image generated during the character development process, for use in augmented reality. In the existing top-down approach, all objects in the image are first detected and then each is reconstructed independently; this can produce inconsistent results due to overlap or depth-order mismatch between the reconstructed objects. The goal of this study is to solve these problems and develop a single network that provides consistent 3D reconstruction of all humans in a scene. A key choice is the integration of a human body model based on the SMPL parametric system into the top-down framework. On top of this, two losses are introduced: a distance-field-based collision loss and a loss that considers depth order. The first loss prevents overlap between reconstructed people, and the second adjusts their depth ordering so that occlusion reasoning and the annotated instance segmentation are rendered consistently. This allows depth information to be provided to the network without explicit 3D annotation of the images. Experimental results show that the proposed methodology performs better than existing methods on standard 3D pose benchmarks, and that the proposed losses enable more consistent reconstruction from natural images.
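
The abstract does not give the exact form of the two losses. The toy sketch below only illustrates the idea: a collision penalty that discourages interpenetration of two reconstructed bodies (a simple pairwise distance threshold standing in for the paper's distance field) and a depth-ordering penalty for a person annotated as occluding another; all names and the threshold value are hypothetical.

```python
import numpy as np

def collision_penalty(verts_a, verts_b, radius=0.02):
    """Toy collision penalty between two reconstructed bodies.

    verts_a, verts_b : (N, 3) and (M, 3) arrays of body-mesh vertices (e.g. SMPL).
    radius           : minimum allowed separation in metres (illustrative value).

    The paper uses a distance field of the body surface; here a simple pairwise
    distance threshold stands in for it.
    """
    d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=-1)  # (N, M)
    return np.maximum(radius - d, 0.0).sum()

def depth_order_penalty(z_front, z_back):
    """Penalty when a person annotated as being in front is placed behind the other."""
    return max(z_front - z_back, 0.0)
```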

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented for registering partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much registration time, and they are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined based on a temporal property, by excluding 3D points with a large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors based on an adaptive search range. Finally, we calculate the final colors by referring to the colors of corresponding points and reconstruct the indoor environment by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
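
After the modified KLT step provides correspondences between the two projected point clouds, the rigid motion that registers them can be estimated in closed form. The following is a standard SVD-based (Kabsch) sketch of that sub-step only, not the paper's full adaptive-search refinement.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form rigid registration: find R, t such that R @ src[i] + t ~= dst[i].

    src, dst : (N, 3) arrays of corresponding 3D points from the two views.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```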

Real-time Virtual Volumetric Scene Reconstruction System from Multiple Video Streaming (다중 비디오 영상을 이용한 실시간 가상공간 영상 재구성 시스템)

  • Choi, Hyok-S.;Han, Tae-Woo;Lee, Ju-Ho;Yang, Hyun-S.
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.11a
    • /
    • pp.7-10
    • /
    • 2003
  • One of the major goals of recent computer graphics research is to reproduce a dynamically changing 3D virtual space. In general, spatial information acquisition techniques are divided into active methods, which require special hardware or a controlled environment, and passive methods, which assume no particular environment but have relatively high computational complexity. In this paper, we design and implement a system that improves the computational complexity of a passive algorithm so that, using relatively simple hardware and without special environmental or physical assumptions, a virtual scene image can be reconstructed with as short a latency as possible after the information is acquired.
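
The abstract does not name the reconstruction algorithm it accelerates. For context, a common passive approach to volumetric reconstruction from multiple synchronized video streams is silhouette-based voxel carving; the toy sketch below illustrates that general idea only, under the assumption that foreground silhouettes and calibrated projection matrices are available.

```python
import numpy as np

def carve_voxels(voxels, silhouettes, projections):
    """Toy silhouette-based carving: keep voxels whose projections fall inside
    every camera's foreground mask.

    voxels      : (N, 3) candidate voxel centres in world coordinates.
    silhouettes : list of HxW boolean foreground masks, one per camera.
    projections : list of 3x4 projection matrices, one per camera.
    Assumes every voxel lies in front of every camera.
    """
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])      # homogeneous coordinates
    for mask, P in zip(silhouettes, projections):
        uvw = hom @ P.T                                       # (N, 3) projected points
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep &= inside
        keep[inside] &= mask[v[inside], u[inside]]
    return voxels[keep]
```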


The Transmission Electron Microscopic Study on the Alteration of Filtration Barrier in Aged Rat Kidney (흰쥐 콩팥여과관문의 노화 변화에 관한 투과전자현미경적 연구)

  • Lee, Se-Jung;Lim, Hyoung-Soo;Lim, Do-Seon;Hwang, Douk-Ho
    • Applied Microscopy
    • /
    • v.38 no.2
    • /
    • pp.107-115
    • /
    • 2008
  • The filtration barrier of the kidney consists of the endothelial cells, glomerular capillaries, glomerular basement membrane, mesangial matrix, and podocytes. In aged rats, morphological changes have been observed in various structures, including the glomerulus: thickening of the basement membrane and mesangial matrix, crescent formation of glomerular capillaries, deformity of the foot processes, glomerular sclerosis, and obsolescence. However, these glomerular morphologies have mostly been analyzed from partial images or only a few serial images. In this study, we examined the morphological alterations of the glomerulus in young and aged rats by light microscopy, transmission electron microscopy, and three-dimensional reconstruction. In the aged rat glomerulus we found expansion of the urinary space and mesangial matrix, thickening and degradation of the glomerular basement membrane, a decrease in podocyte foot processes, and fragmentation of the podocyte nuclear membrane. These observations may provide useful data for investigating the pathogenesis of age-related kidney dysfunction.

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.63-70
    • /
    • 2007
  • Since the fisheye lens has a wide field of view, it can capture the scene and the illumination from all directions with a far smaller number of omnidirectional images. Because of these advantages, the omnidirectional camera is widely used in surveillance and in reconstruction of the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that takes the inlier distribution into account. First, a parametric non-linear projection model of the omnidirectional camera is estimated using known rotation and translation parameters. After deriving the projection model, we can compute the essential matrix of the camera under unknown motion, and then determine the camera information: rotation and translation. The standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we can achieve a precise estimation of the omnidirectional camera model and of the extrinsic parameters, including rotation and translation.
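
Once the essential matrix has been computed from the calibrated (back-projected) correspondences, the relative rotation and translation are recovered from its standard decomposition. A minimal sketch of that decomposition step is given below; selecting among the four candidates, and the paper's inlier-distribution criterion, are not shown.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into its four candidate (R, t) motions.

    The correct candidate is normally selected afterwards by a cheirality check,
    i.e. triangulating a few points and keeping the solution with positive depths.
    """
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:                  # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                               # translation, known only up to scale and sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```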


Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the surrounding scene from only a few images, various studies on calibration and 3D reconstruction from omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, corresponding contours across image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method for estimating the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between epipolar planes and the back-projected vectors of each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
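
One plausible form of the coarse-step angular error described above measures, for each corresponding contour point, the deviation of the back-projected ray from its epipolar plane. The sketch below illustrates this idea; the exact error function used in the paper may differ.

```python
import numpy as np

def mean_angular_epipolar_error(E, rays1, rays2):
    """Mean angular deviation of back-projected rays from their epipolar planes.

    E            : 3x3 essential matrix encoding the relative camera motion.
    rays1, rays2 : (N, 3) unit viewing rays back-projected from corresponding
                   contour points in the two omnidirectional images.
    """
    errors = []
    for r1, r2 in zip(rays1, rays2):
        n = E @ r1                           # normal of the epipolar plane in camera 2
        n_norm = np.linalg.norm(n)
        if n_norm < 1e-12:
            continue
        s = np.clip(abs(np.dot(r2, n)) / (n_norm * np.linalg.norm(r2)), 0.0, 1.0)
        errors.append(np.arcsin(s))          # angle between the ray and the plane
    return float(np.mean(errors))
```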

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo;Lee, Hyunjoon;Kim, Junho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.31-38
    • /
    • 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system of an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collect about 155,000 images satisfying the Manhattan world assumption using the Google Street View APIs and compute their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains per-scene CNNs, our method learns from images under the Manhattan world assumption and can therefore estimate Manhattan coordinate systems for new images that were not seen during training. Experimental results show that our method estimates Manhattan coordinate systems with a median error of $3.157^{\circ}$ on a test set of Google Street View images from non-trained scenes. In addition, compared to an existing calibration method [3], the proposed method shows lower median errors on the test set.
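
The median-error figure quoted above is an angular distance between the estimated and ground-truth Manhattan frames. A minimal sketch of such a metric, using the geodesic angle between two rotation matrices and ignoring the axis-permutation ambiguity of a Manhattan frame, is shown below; the paper's exact evaluation protocol may differ.

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between an estimated and a ground-truth
    Manhattan frame, each represented as a 3x3 rotation matrix."""
    cos_theta = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Median error over a test set, in the spirit of the figure reported in the abstract:
# median_err = np.median([rotation_error_deg(Re, Rg) for Re, Rg in zip(estimates, ground_truth)])
```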