• Title/Summary/Keyword: 3차원 장면 복원 (3D scene reconstruction)

3D Reconstruction using a Moving Planar Mirror (움직이는 평면거울을 이용한 3차원 물체 복원)

  • 장경호;이동훈;정순기
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.11
    • /
    • pp.1543-1550
    • /
    • 2004
  • Modeling from images is a cost-effective means of obtaining 3D geometric models. These models can be constructed effectively with the classical Structure from Motion (SfM) algorithm. However, it is difficult to reconstruct a whole scene with SfM alone, since real sites contain very complex shapes and brilliant colors. To overcome this difficulty, the current paper proposes a new reconstruction method based on a moving planar mirror. We use the mirror posture, rather than the scene itself, as a cue for reconstructing the geometry; in other words, geometric cues are deliberately inserted into the scene. With this method, we can obtain geometric details regardless of the scene complexity. For this purpose, we first capture image sequences through the moving mirror containing the scene of interest, and then calibrate the camera from the mirror's posture. Since the calibration results are still inaccurate due to detection error, the camera pose is refined using frame-to-frame correspondences of corner points, which are easily obtained from the initial camera posture. Finally, 3D information is computed from the set of calibrated image sequences. We validate our approach with experiments on several complex objects. (An illustrative code sketch of the triangulation step follows below.)
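
The following is a minimal, illustrative sketch (not the authors' code) of the final step described above: triangulating 3D points from corner correspondences once the camera poses, e.g. those recovered from the mirror posture, and the intrinsics K are known. It assumes OpenCV and NumPy; all names are placeholders.

```python
import numpy as np
import cv2

def triangulate_pair(K, R1, t1, R2, t2, pts1, pts2):
    """Triangulate Nx2 pixel correspondences seen in two calibrated frames."""
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])   # 3x4 projection matrices
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                  # Nx3 Euclidean points
```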

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented to register partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much registration time, and they are not robust to 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied to enable a modified KLT (Kanade-Lucas-Tomasi) tracker to find correspondences. Then, fine registration is carried out by minimizing distance errors based on an adaptive search range. Finally, we calculate a final color by referring to the colors of corresponding points, and reconstruct the indoor environment by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment. (A sketch of the rigid-transform estimation step follows below.)
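
As an illustration of the registration step only, the sketch below estimates a rigid transform from already-found 3D correspondences with the standard SVD (Kabsch) solution; the paper's projection-based correspondence search and adaptive search range are not reproduced here, and the array names are placeholders.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ~= R @ src + t, for Nx3 arrays."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid a reflection solution
        Vt[2] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```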

Realistic 3D Scene Reconstruction from an Image Sequence (연속적인 이미지를 이용한 3차원 장면의 사실적인 복원)

  • Jun, Hee-Sung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.3
    • /
    • pp.183-188
    • /
    • 2010
  • A factorization-based 3D reconstruction system is realized to recover a 3D scene from an image sequence. The image sequence is captured with an uncalibrated perspective camera from several views. Matched feature points across all images are obtained by a feature tracking method, and these data are supplied to the 3D reconstruction module to obtain a projective reconstruction. The projective reconstruction is converted to a Euclidean reconstruction by enforcing several metric constraints. After triangular meshes are obtained, realistic 3D models are completed by texture mapping. The developed system is implemented in C++; the Qt library is used for the user interface, and the OpenGL graphics library is used for the texture-mapping routine and the model visualization program. Experimental results using synthetic and real image data demonstrate the effectiveness of the developed system. (An illustrative sketch of the factorization step follows below.)
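
The sketch below illustrates the factorization idea with the classical affine (Tomasi-Kanade) variant rather than the paper's projective formulation: a 2F x N measurement matrix of tracked points is split by SVD into a motion part and a structure part, leaving an ambiguity that the metric constraints would later remove.

```python
import numpy as np

def factorize(W):
    """W: 2F x N matrix of tracked image coordinates (x rows then y rows)."""
    W = W - W.mean(axis=1, keepdims=True)        # register to the point centroid
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(S[:3])                # 2F x 3 camera (motion) matrix
    X = np.sqrt(S[:3])[:, None] * Vt[:3]         # 3 x N structure, up to an
    return M, X                                  # affine ambiguity
```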

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.4
    • /
    • pp.187-194
    • /
    • 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement and to make full use of the rich information in color and depth images, our system adopts a feature-based, sparse odometry estimation method. After matching features extracted across image frames, it repeats additional inlier-set refinement and motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is not sufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and an implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method. (A minimal frame-to-frame sketch follows below.)
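
A minimal sketch of a single frame-to-frame step of feature-based RGB-D odometry is given below, assuming OpenCV; the paper's iterated inlier-set refinement and the inlier-proportional weighting of the final estimate are omitted, and the function and parameter names are placeholders.

```python
import numpy as np
import cv2

def frame_to_frame_pose(gray0, depth0, gray1, K):
    """Estimate the motion between two RGB-D frames from sparse ORB features."""
    orb = cv2.ORB_create(1000)
    kp0, des0 = orb.detectAndCompute(gray0, None)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)

    obj, img = [], []
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for m in matches:
        u, v = kp0[m.queryIdx].pt
        z = depth0[int(v), int(u)]               # depth of the frame-0 feature (meters)
        if z <= 0:
            continue
        obj.append([(u - cx) * z / fx, (v - cy) * z / fy, z])   # back-projection
        img.append(kp1[m.trainIdx].pt)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj), np.float32(img), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers      # maps frame-0 camera coordinates into frame 1
```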

The Design of Object-based 3D Audio Broadcasting System (객체기반 3차원 오디오 방송 시스템 설계)

  • 강경옥;장대영;서정일;정대권
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.7
    • /
    • pp.592-602
    • /
    • 2003
  • This paper describes the basic structure of a novel object-based 3D audio broadcasting system. To overcome the limitations of current uni-directional audio broadcasting services, the object-based 3D audio broadcasting system is designed to provide the ability to interact with important audio objects as well as realistic 3D effects, based on the MPEG-4 standard. The system is composed of six sub-modules. The audio input module collects the background sound object, which is recorded with a 3D microphone, and audio objects, which are recorded with monaural microphones or extracted by a source separation method. The sound scene authoring module edits the 3D information of audio objects, such as acoustical characteristics, location, and directivity. It also defines the final sound scene, including a 3D background sound, which the producer intends to deliver to a receiving terminal. The encoder module encodes scene descriptors and audio objects for effective transmission. The decoder module extracts scene descriptors and audio objects by decoding the received bitstreams. The sound scene composition module reconstructs the 3D sound scene from the scene descriptors and audio objects. The 3D sound renderer module maximizes the 3D sound effects by adapting the final sound to the listener's acoustic environment. It also receives the user's controls on audio objects and sends them to the scene composition module to change the sound scene. (A toy scene-description sketch follows below.)
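
Purely as an illustration of what an object-based scene description might contain, the toy structure below mixes interactive audio objects over a 3D background with simple distance attenuation. The field names and the mixing rule are assumptions for illustration only; the actual system follows the far richer MPEG-4 scene description.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AudioObject:
    samples: np.ndarray        # mono PCM samples
    position: tuple            # (x, y, z) in meters, listener at the origin
    gain: float = 1.0          # user-controllable interaction parameter

def compose_scene(background: np.ndarray, objects: list) -> np.ndarray:
    """Mix objects over the 3D background with simple distance attenuation."""
    mix = background.copy()
    for obj in objects:
        dist = max(np.linalg.norm(obj.position), 1.0)
        n = min(len(mix), len(obj.samples))
        mix[:n] += obj.gain * obj.samples[:n] / dist
    return mix
```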

3D Scene Reconstruction Using a Noise-Robust Surface Normal Vector Acquisition Method (잡음에 강건한 표면 법선 벡터 획득 방법을 이용한 3차원 장면 복원)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.4-5
    • /
    • 2016
  • Augmented reality content, in which virtual information is overlaid on the real world and users interact with it, has recently become very popular. Because such content is grounded in the real world, it is important to reconstruct the actual 3D space accurately. KinectFusion, which uses an RGB-D camera, was proposed as an early 3D reconstruction method and has been studied by many researchers. However, the existing method suffers from an object-drift problem: the 3D model is not reconstructed accurately because of errors that accumulate over time. This problem stems from inaccurate surface normal vectors computed from noisy depth-camera measurements. In this paper, we propose a method for computing surface normal vectors that is robust to noise. Experimental results show that, compared with the existing method, the proposed method reduces the absolute trajectory error and estimates the camera trajectory accurately. (An illustrative normal-estimation sketch follows below.)

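One common way to make depth-map normals robust to sensor noise is to smooth the depth before differentiating, as sketched below with a bilateral filter; this choice is an assumption for illustration and may differ from the paper's actual acquisition method.

```python
import numpy as np
import cv2

def robust_normals(depth, K):
    """depth: HxW float32 depth map in meters; K: 3x3 camera intrinsics."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    d = cv2.bilateralFilter(depth.astype(np.float32), 5, 0.05, 5.0)  # denoise
    h, w = d.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = np.dstack([(u - cx) * d / fx, (v - cy) * d / fy, d])  # back-projection
    dx = np.roll(X, -1, axis=1) - X              # tangent along image x
    dy = np.roll(X, -1, axis=0) - X              # tangent along image y
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-8
    return n                                     # HxWx3 unit surface normals
```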

Adaptive Image Pair Resection for Incremental Structure from Motion (점진적 움직임 기반 구조를 위한 적응적인 영상 켤레 제거 방법)

  • Ko, Jaeryun;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2017.06a
    • /
    • pp.188-189
    • /
    • 2017
  • Incremental Structure from Motion reconstructs a 3D scene by adding, one at a time, images captured from various viewpoints. Among the image pairs used for 3D reconstruction, a fair number are unnecessary, which can cause instability in the reconstructed structure and performance loss from processing redundant pairs. This paper proposes a method that adaptively removes relatively unnecessary image pairs according to the input image set. Image pairs are removed in two passes, before and after the geometric verification step of correspondence search, with thresholds determined by a statistical method and by the ratio of geometrically verified correspondences. Experimental results show that the number of image pairs required for reconstruction can be reduced effectively without harming the 3D reconstruction result. (A sketch of the adaptive thresholding idea follows below.)

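A hedged sketch of the adaptive rejection idea follows: an image pair is kept only if its verified-inlier ratio exceeds a threshold derived from the statistics of all pairs. The exact statistical rule and the two-pass structure of the paper are not reproduced, and the names are placeholders.

```python
import numpy as np

def select_pairs(pairs, inlier_counts, match_counts, k=1.0):
    """pairs: list of (i, j); counts: same-length sequences of match statistics."""
    ratios = np.asarray(inlier_counts, float) / np.maximum(match_counts, 1)
    thresh = max(ratios.mean() - k * ratios.std(), 0.0)   # adaptive threshold
    return [p for p, r in zip(pairs, ratios) if r >= thresh]
```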

A Constrained Self-Calibration Technique (제약 조건을 적용한 셀프 캘리브레이션 방법)

  • Kim, Seong-Yong;Han, Jun-Hui
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.4
    • /
    • pp.358-368
    • /
    • 2001
  • Self-calibration is a technique that computes the camera's intrinsic parameters from feature-point matches over an image sequence. It can be applied to Euclidean reconstruction from image sequences captured by a freely moving camera. To obtain stable 3D reconstruction results, this paper uses two kinds of constraints: a constraint on the number of camera intrinsic parameters, and a constraint derived from the geometric structure of the scene to be reconstructed. The constraint on the intrinsic parameters reflects hardware characteristics of the camera, and applying it improves the convergence of the nonlinear optimization in self-calibration. The geometric constraint exploits right-angled structures in the target scene: the conditions they impose are analyzed, the corresponding equations are derived, and they are included in the optimization. Experiments on synthetic images and on various kinds of real images show that the proposed method yields improved Euclidean reconstruction results. (A constrained-optimization sketch follows below.)

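The sketch below only illustrates how such constraints can enter a nonlinear least-squares problem: the intrinsics are reduced to a single focal length (zero skew, unit aspect ratio, principal point at the origin of centered pixel coordinates), and one extra residual pushes two reconstructed scene directions toward a right angle. The helpers `project` and `scene_dirs`, the observations `uv_obs`, and the initial guess `x0` are hypothetical placeholders, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def constrained_residuals(x, project, uv_obs, scene_dirs):
    """x[0]: unknown focal length; x[1:]: pose/structure parameters understood
    by the caller-supplied helper functions."""
    f = x[0]
    K = np.diag([f, f, 1.0])                       # constrained intrinsics
    reproj = (project(K, x[1:]) - uv_obs).ravel()  # reprojection residuals
    d1, d2 = scene_dirs(x[1:])                     # two reconstructed scene edges
    cos_angle = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.append(reproj, 10.0 * cos_angle)     # drive the edges toward 90 degrees

# result = least_squares(constrained_residuals, x0,
#                        args=(project, uv_obs, scene_dirs))
```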

Video Augmentation of Virtual Object by Uncalibrated 3D Reconstruction from Video Frames (비디오 영상에서의 비보정 3차원 좌표 복원을 통한 가상 객체의 비디오 합성)

  • Park Jong-Seung;Sung Mee-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.4
    • /
    • pp.421-433
    • /
    • 2006
  • This paper proposes a method to insert virtual objects into a real video stream, based on feature tracking and camera pose estimation from a set of single-camera video frames. To insert or modify 3D shapes in target video frames, the transformation from the 3D objects to their projections onto the video frames must be recovered. It is shown that, without a camera calibration process, 3D reconstruction is possible from multiple images taken by a single camera with fixed internal parameters. The proposed approach is based on a simplification of the intrinsic camera matrix and the use of projective geometry. The method is particularly useful for augmented reality applications that insert or modify models in a real video stream. The proposed method uses a linear parameter estimation approach for the auto-calibration step, which enhances stability and reduces execution time. Several experimental results on real-world video streams demonstrate the usefulness of our method for augmented reality applications. (A sketch of the projection step follows below.)

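The sketch below covers only the augmentation step: once intrinsics and a per-frame pose are available from the reconstruction, a virtual object's vertices can be projected into the frame with OpenCV and drawn. The uncalibrated reconstruction itself is not shown, and the function name is a placeholder.

```python
import numpy as np
import cv2

def draw_virtual_object(frame, vertices, rvec, tvec, K):
    """Project Nx3 model vertices with the estimated pose and mark them."""
    pts, _ = cv2.projectPoints(np.float32(vertices), rvec, tvec, K, None)
    for p in pts.reshape(-1, 2):
        cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 255, 0), -1)
    return frame
```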

Recent Trends of Weakly-supervised Deep Learning for Monocular 3D Reconstruction (단일 영상 기반 3차원 복원을 위한 약교사 인공지능 기술 동향)

  • Kim, Seungryong
    • Journal of Broadcast Engineering
    • /
    • v.26 no.1
    • /
    • pp.70-78
    • /
    • 2021
  • Estimating 3D information from a single image is one of the essential problems in numerous applications. Since a 2D image can originate from an infinite number of different 3D scenes, 3D reconstruction from a single image is notoriously challenging. This challenge has been addressed by recent deep convolutional neural networks (CNNs), which model the mapping function between a 2D image and 3D information. However, training such deep CNNs demands massive amounts of training data, which are difficult or even impossible to obtain. Recent trends therefore aim at deep learning techniques that can be trained in a weakly-supervised manner, using meta-data and without relying on ground-truth depth data. In this article, we introduce recent developments in weakly-supervised deep learning techniques, categorized into scene 3D reconstruction and object 3D reconstruction, and discuss their limitations and further directions. (A sketch of a typical self-supervision loss follows below.)
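
As a rough illustration of the self-supervision signal used by many such weakly-supervised methods, the sketch below computes a photometric consistency loss between a target image and a source image assumed to have been warped into the target view using the predicted depth and relative pose; the warping step itself is not shown, and the names are placeholders.

```python
import numpy as np

def photometric_loss(target, warped_source, valid_mask):
    """Mean L1 difference over pixels where the warp landed inside the image.

    target, warped_source: HxWxC images; valid_mask: HxW boolean mask."""
    diff = np.abs(target.astype(np.float32) - warped_source.astype(np.float32))
    return float((diff * valid_mask[..., None]).sum()
                 / (valid_mask.sum() * target.shape[-1] + 1e-8))
```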