• Title/Summary/Keyword: 3D Depth Camera


ROI-Based 3D Video Stabilization Using Warping (관심영역 기반 와핑을 이용한 3D 동영상 안정화 기법)

  • Lee, Tae-Hwan;Song, Byung-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.76-82 / 2012
  • As portable camcorders have become popular, various video stabilization algorithms for removing camera shake have been developed. In the past, most video stabilization algorithms were based on 2-dimensional camera motion, but recent algorithms achieve much better performance by considering 3-dimensional camera motion. Among these, 3D video stabilization using content-preserving warps is regarded as the state of the art owing to its superior performance, but its major demerit is high computational complexity. We therefore present a computationally light full-frame warping algorithm based on an ROI (region of interest) that provides visual quality comparable to the state of the art within the ROI. First, a proper ROI with a target depth is chosen for each frame, and then full-frame warping based on the selected ROI is applied.
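The two steps named in the abstract (pick an ROI by target depth, then warp the full frame from the ROI's motion) can be sketched minimally. This is an illustrative reconstruction, not the paper's implementation: the content-preserving warp is simplified here to a pure translation, and the depth map and ROI size are made up.

```python
def select_roi(depth_map, target_depth, roi_size):
    """Pick the roi_size x roi_size window whose mean depth is closest to target_depth."""
    h, w = len(depth_map), len(depth_map[0])
    best, best_diff = (0, 0), float("inf")
    for y in range(h - roi_size + 1):
        for x in range(w - roi_size + 1):
            block = [depth_map[y + dy][x + dx]
                     for dy in range(roi_size) for dx in range(roi_size)]
            diff = abs(sum(block) / len(block) - target_depth)
            if diff < best_diff:
                best, best_diff = (y, x), diff
    return best  # top-left corner of the chosen ROI

def warp_translate(frame, dy, dx, fill=0):
    """Full-frame warp driven by the motion estimated from the ROI (translation only here)."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out
```

The point of the simplification is the same as in the paper: motion is estimated only inside the ROI, but the compensating warp is applied to the whole frame.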

Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In;Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.1 / pp.31-43 / 2014
  • A depth map is an important component for stereoscopic image generation. Since the depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied for the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric. In addition, subjective quality is compared using virtual views generated by DIBR (depth image based rendering). However, works analyzing the relation between depth map upsampling and stereoscopic image quality are relatively few. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross-correlation and linear regression. Experimental results demonstrate that edge PSNR has the highest correlation with visual fatigue and the blur metric has the lowest. Further, from the linear regression we obtain relative weights of the objective measurements, and we introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.
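The correlation analysis described above can be sketched with synthetic data. Pearson correlation stands in for the paper's cross-correlation, and the metric names and scores below are illustrative, not the paper's results.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_correlated_metric(metrics, fatigue):
    """Objective metric whose scores correlate most strongly (in magnitude) with fatigue."""
    return max(metrics, key=lambda m: abs(pearson(metrics[m], fatigue)))
```

Running `best_correlated_metric` over per-method metric scores and the matching subjective fatigue scores identifies the most predictive metric, mirroring the paper's finding that edge PSNR tracks visual fatigue best.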

Development of Remote Measurement Method for Reinforcement Information in Construction Field Using 360 Degrees Camera (360도 카메라 기반 건설현장 철근 배근 정보 원격 계측 기법 개발)

  • Lee, Myung-Hun;Woo, Ukyong;Choi, Hajin;Kang, Su-min;Choi, Kyoung-Kyu
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.6 / pp.157-166 / 2022
  • Structural supervision on construction sites has been performed by visual inspection, which is highly labor-intensive and subjective. In this study, a remote technique was developed to improve the efficiency of rebar-spacing measurements using a 360° camera and reconstructed 3D models. The proposed method was verified by measuring the spacings in a reinforced concrete structure: twelve locations in the construction site (265 m2) were scanned within 20 seconds per location, for a total of 15 minutes. SLAM, consisting of SIFT, RANSAC, and general framework graph optimization algorithms, produces an RGB-based 3D model and a 3D point cloud model. The minimum resolution of the 3D point cloud was 0.1 mm, while that of the RGB-based 3D model was 10 mm. Based on the results from both 3D models, the measurement error ranged from 0.3% to 10.8% for the 3D point cloud and from 3.1% to 28.4% for the RGB-based 3D model. The results demonstrate that the proposed method has great potential for remote structural supervision with respect to accuracy and objectivity.
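As a toy illustration of the spacing measurement itself (not the SLAM reconstruction), rebar centerline positions extracted from a 3D model can be turned into spacings and compared against a design value. The positions and design spacing below are made up.

```python
def rebar_spacings(positions_mm):
    """Spacing between consecutive detected rebar centerlines (positions in mm)."""
    pos = sorted(positions_mm)
    return [b - a for a, b in zip(pos, pos[1:])]

def spacing_error_pct(measured_mm, design_mm):
    """Relative measurement error against the design spacing, in percent."""
    return abs(measured_mm - design_mm) / design_mm * 100.0
```

With centerlines at 0, 198, 405, and 600 mm and a 200 mm design spacing, the first measured spacing (198 mm) deviates by 1.0%, the kind of per-spacing error the study reports per model type.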

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.347-352 / 2012
  • In this paper, a 2D spatial-map construction method for worker identification and avoidance by an AGV is proposed, using a spatial-coordinate detection scheme based on a stereo camera. In the proposed system, the face area of a moving person is detected in the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, using the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system, together with the perspective transformation between a 3-D scene and an image plane, a depth map can be computed. Experiments on AGV driving with 240 frames of stereo images show that the error ratio between the calculated and measured values of the worker's width is very low: 2.19% and 1.52% on average.
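The centroid and depth computations named above can be sketched as follows. This is a simplified reconstruction: the binary mask is assumed to come from the YCbCr skin-color thresholding step, and the pinhole-stereo relation Z = f·B/d stands in for the paper's perspective transformation; the numbers are illustrative.

```python
def centroid(mask):
    """Centroid (cx, cy) of True pixels in a binary mask, e.g. a detected face region."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d (focal length in pixels, baseline in meters)."""
    return f_px * baseline_m / disparity_px
```

The centroid steers the pan/tilt tracking, and the disparity at the tracked region yields the worker's depth for the 2D spatial map.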

A Study on the 3D Shape Reconstruction Algorithm of an Indoor Environment Using Active Stereo Vision (능동 스테레오 비젼을 이용한 실내환경의 3차원 형상 재구성 알고리즘)

  • Byun, Ki-Won;Joo, Jae-Heum;Nam, Ki-Gon
    • Journal of the Institute of Convergence Signal Processing / v.10 no.1 / pp.13-22 / 2009
  • In this paper, we propose a 3D shape reconstruction method that combines a mosaic method with active stereo matching using a laser beam. The active stereo matching method detects the position of the laser beam irradiated on the object by analyzing the color and brightness variation of the left and right images, and acquires depth information along the epipolar line. The mosaic method extracts feature points using Harris corner detection, matches the same keypoints between successive images using a keypoint descriptor indexing method, and infers the correlation between the images. The depth information of the image sequence is calculated by the active stereo matching and mosaic methods, and the merged depth information is reconstructed into a 3D shape by warping and blending with the image color and texture. The proposed method acquires robust 3D distance information and overcomes constraints of place and distance by using a laser slit beam and a stereo camera.
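The per-epipolar-line stripe detection can be sketched minimally. This is illustrative only: it assumes a rectified stereo pair (so epipolar lines are image rows) and takes the brightest pixel per row as the laser stripe; focal length and baseline are made-up values.

```python
def stripe_column(row):
    """Column of the brightest pixel in one image row (the laser stripe position)."""
    return max(range(len(row)), key=lambda x: row[x])

def stripe_depths(left, right, f_px, baseline_m):
    """Per-row depth from the disparity of the stripe detected in a rectified pair."""
    depths = []
    for lrow, rrow in zip(left, right):
        d = stripe_column(lrow) - stripe_column(rrow)
        depths.append(f_px * baseline_m / d if d > 0 else None)
    return depths
```

Because the stripe constrains the match to a single bright point per epipolar line, the correspondence search collapses to a per-row argmax, which is what makes the active approach robust.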


Occluded Object Motion Tracking Method based on Combination of 3D Reconstruction and Optical Flow Estimation (3차원 재구성과 추정된 옵티컬 플로우 기반 가려진 객체 움직임 추적방법)

  • Park, Jun-Heong;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.537-542 / 2011
  • A mirror neuron is a neuron that fires both when an animal acts and when it observes the same action performed by another. We propose a 3D reconstruction method for tracking the motion of occluded objects, analogous to the way a mirror neuron system fires even when the action is hidden. To model intention recognition through this firing effect, we calculate depth information from the images of a stereo camera and reconstruct three-dimensional data. The movement direction of an object is estimated by optical flow over the reconstructed three-dimensional image data, and the optical-flow result is made robust to noise by a Kalman filter estimation algorithm. The reconstructed 3D image data are saved as a history during motion tracking. When the whole or part of an object disappears from the stereo camera's view behind other objects, it is restored by retrieving image data from the saved history, and its motion continues to be tracked.
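The Kalman-filter smoothing of the optical-flow result can be illustrated with a scalar filter. This is a minimal sketch: the abstract does not specify the state model, so a random-walk model with assumed process noise `q` and measurement noise `r` is used on one flow component.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter smoothing one noisy optical-flow component.

    q: assumed process-noise variance, r: assumed measurement-noise variance.
    """
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the measurement
        p *= (1.0 - k)            # posterior variance shrinks
        out.append(x)
    return out
```

Applied to each component of the flow field, the filter damps frame-to-frame jitter so the estimated movement direction stays stable while the object is partially occluded.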

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim, Sehwan;Woo, Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.39-52 / 2005
  • In this paper, a registration method is presented for partial 3D point clouds acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much registration time, and they are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Then, fine registration is carried out by minimizing distance errors over an adaptive search range. Finally, a final color is computed with reference to the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
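The projection-onto-an-image-plane step can be sketched with a pinhole model and assumed intrinsics; the paper's two-step integer mapping is reduced here to simple rounding onto the pixel grid, which is what lets a 2D tracker like KLT search for correspondences.

```python
def project(points, f_px, cx, cy):
    """Project 3D points (camera coordinates) onto the image plane and snap them
    to integer pixels, so 2D correspondence search can operate on the result."""
    pixels = []
    for X, Y, Z in points:
        u = round(f_px * X / Z + cx)
        v = round(f_px * Y / Z + cy)
        pixels.append((u, v))
    return pixels
```

Searching on this 2D grid instead of in 3D space is the source of the complexity reduction the abstract claims.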

Real-time Virtual-viewpoint Image Synthesis Algorithm Using Kinect Camera

  • Lee, Gyu-Cheol;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.9 no.3 / pp.1016-1022 / 2014
  • Kinect is a motion-sensing camera released by Microsoft in November 2010 for the Xbox 360 that produces depth and color images. Because Kinect uses an infrared pattern, holes and noise appear around object boundaries in the obtained images, and flickering and unmatched edges also occur. In this paper, we propose a real-time virtual-view video synthesis algorithm that produces a high-quality virtual view by solving these problems. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
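Hole filling of the kind Kinect depth maps need can be sketched per scanline. This is an illustrative nearest-valid-neighbor fill, not the paper's algorithm; zero is assumed to mark a hole, as in raw Kinect depth data.

```python
def fill_holes(depth_row, hole=0):
    """Fill hole pixels with the nearest valid depth to the left, then sweep
    right-to-left to cover holes at the start of the row."""
    out = list(depth_row)
    last = None
    for i, d in enumerate(out):          # left-to-right pass
        if d != hole:
            last = d
        elif last is not None:
            out[i] = last
    last = None
    for i in range(len(out) - 1, -1, -1):  # right-to-left pass for leading holes
        if out[i] != hole:
            last = out[i]
        elif last is not None:
            out[i] = last
    return out
```

Repairing the depth map first is what keeps the synthesized virtual view free of the boundary holes the sensor introduces.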

Implementation of an Underwater ROV for Detecting Foreign Objects in Water

  • Lho, Tae-Jung
    • Journal of Information and Communication Convergence Engineering / v.19 no.1 / pp.61-66 / 2021
  • An underwater remotely operated vehicle (ROV) has been implemented. It can inspect foreign substances through a CCD camera while the ROV is running in water. The maximum thrust of the ROV's running thruster is 139.3 N, allowing the ROV to move forward and backward at a running speed of 1.03 m/s underwater. The structural strength of the guard frame was analyzed when the ROV collided with a wall while traveling at a speed of 1.03 m/s underwater, and found to be safe. The maximum running speed of the ROV is 1.08 m/s and the working speed is 0.2 m/s in a 5.8-m deep-water wave pool, which satisfies the target performance. As the ROV traveled underwater at a speed of 0.2 m/s, the inspection camera was able to read characters that were 3 mm in width at a depth of 1.5 m, which meant it could sufficiently identify foreign objects in the water.

A Study on Sound Synchronized Out-Focusing Techniques for 3D Animation (음원 데이터를 활용한 3D 애니메이션 카메라 아웃포커싱 표현 연구)

  • Lee, Junsang;Lee, Imgeun
    • Journal of the Korea Society of Computer and Information / v.19 no.2 / pp.57-65 / 2014
  • Sound is one of the important factors in producing a 3D animation clip, maximizing the immersive effect of a scene. In particular, the interaction between video and sound makes scene expression more apparent, and it is applied in diverse ways in video production. Among these interaction techniques, out-focusing is frequently used in both live-action video and 3D animation. In 3D animation, however, out-focusing is not as easily implemented as in music videos or explosion scenes in live-action shots. This paper analyzes sound data to synchronize the depth of field with it. A novel out-focusing technique is proposed in which the depth of field around an object is controlled by the beat rhythm of the sound data.
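The beat-to-depth-of-field mapping can be sketched as follows. This is illustrative only: a windowed amplitude envelope is a crude proxy for the paper's beat analysis, and the shallow/deep depth-of-field bounds are made-up parameters.

```python
def amplitude_envelope(samples, window):
    """Mean absolute amplitude per fixed-size window of audio samples."""
    env = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        env.append(sum(abs(s) for s in chunk) / len(chunk))
    return env

def dof_from_beats(envelope, shallow=0.5, deep=8.0):
    """Map envelope energy to depth of field: a loud beat narrows the depth of
    field (strong out-focus), silence widens it (everything in focus)."""
    peak = max(envelope) or 1.0   # avoid division by zero on silent input
    return [deep - (deep - shallow) * (e / peak) for e in envelope]
```

Feeding the resulting per-window depth-of-field value to the virtual camera makes the out-focusing pulse in sync with the beat rhythm, which is the effect the paper describes.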