• Title/Summary/Keyword: 3D Depth Camera

Search results: 299

Obtaining 3-D Depth from a Monochrome Shaded Image (단시안 명암강도를 이용한 물체의 3차원 거리측정)

  • Byung Il Kim
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.29B no.7
    • /
    • pp.52-61
    • /
    • 1992
  • An iterative scheme for computing the three-dimensional position and the surface orientation of an opaque object from a single shaded image is proposed. This method demonstrates that the depth (distance) between the camera and the object can be calculated from one shaded video image. Most previous work on the 'Shape from Shading' problem, even the 'Photometric Stereo' method, involved determining surface orientation only. Measuring the depth of an object requires relating the measured image intensity to the depth of the object and the reflectance properties of the surface. Assuming that the object surface is uniformly Lambertian, the measured intensity at a given image pixel (x, y) becomes a function of the surface orientation and the depth component of the object. The derived image irradiance equation cannot be solved without further information, since three unknowns (p, q, and D) appear in one nonlinear equation. As an additional constraint, we assume that the surface satisfies a smoothness condition; the equation can then be solved by relaxation using standard methods from the calculus of variations. After checking the sensitivity of the algorithm to errors in the input parameters, the theoretical results are tested by experiments on three objects (a plane, a cylinder, and a sphere). These initial results are encouraging, since they match the theoretical calculations to within 20% error in simple experiments.

  • PDF
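
As a reference for the formulation this abstract sketches, a standard shape-from-shading statement is given below in generic textbook notation; the symbols (p_s, q_s for the light direction, λ for the regularization weight) are conventional choices, not taken from the paper.

```latex
% Image irradiance equation for a uniform Lambertian surface lit from
% direction (p_s, q_s), with p = \partial Z/\partial x, q = \partial Z/\partial y:
E(x,y) \;=\; R(p,q) \;=\;
  \frac{1 + p\,p_s + q\,q_s}{\sqrt{1+p^2+q^2}\,\sqrt{1+p_s^2+q_s^2}} .
% With the depth D also unknown (one equation, three unknowns p, q, D),
% a smoothness regularizer makes the variational problem well posed:
\min_{p,\,q,\,D} \iint \big(E - R\big)^2
  + \lambda \big(p_x^2 + p_y^2 + q_x^2 + q_y^2\big)\, dx\, dy .
```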

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal
    • /
    • v.31 no.4
    • /
    • pp.429-437
    • /
    • 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The technique is based on a deep-framebuffer framework for fast relighting computation, which adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep-framebuffer pixel and then computes the illumination at each pixel from the cached values. All the relighting computations except the deep-framebuffer pre-computation are carried out at interactive rates on the GPU.
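
A minimal sketch of the per-pixel relighting pass described above, assuming a Blinn-Phong shading model and illustrative buffer names (the paper's actual shader interface is not specified in the abstract):

```python
# Re-evaluate shading per pixel from cached deep-framebuffer values.
# Only this pass reruns when a light changes; the buffers stay fixed.
import numpy as np

def relight(albedo, normal, position, kd, ks, roughness,
            light_pos, light_rgb, cam_pos):
    """albedo: (H, W, 3) cached surface color; normal: (H, W, 3) unit normals;
    position: (H, W, 3) world positions recovered from cached depth;
    kd, ks, roughness: (H, W) cached material coefficients;
    light_pos, light_rgb, cam_pos: (3,) arrays for the edited light/camera."""
    L = light_pos - position                        # direction to the light
    L = L / np.linalg.norm(L, axis=-1, keepdims=True)
    V = cam_pos - position                          # direction to the camera
    V = V / np.linalg.norm(V, axis=-1, keepdims=True)
    H = L + V                                       # Blinn-Phong half vector
    H = H / np.linalg.norm(H, axis=-1, keepdims=True)

    ndotl = np.clip(np.sum(normal * L, axis=-1), 0.0, None)
    ndoth = np.clip(np.sum(normal * H, axis=-1), 0.0, None)

    diffuse = kd[..., None] * albedo * ndotl[..., None]
    specular = ks[..., None] * (ndoth ** roughness)[..., None]
    return (diffuse + specular) * light_rgb         # (H, W, 3) relit image
```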

Layered Depth Image Representation and H.264 Encoding of Multi-view Video for Free Viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.91-100
    • /
    • 2011
  • Free viewpoint TV can provide images from many viewing angles according to the viewer's needs. In practice, however, not every viewing angle can be captured by a camera; only a limited number of viewpoints are captured, one per camera, and the group of captured images is called a multi-view image. Free viewpoint TV therefore needs to synthesize virtual in-between viewpoints from the captured viewpoint images, and interpolation methods are the usual solution to this problem. Producing a correctly interpolated viewpoint image requires the depth images of the multi-view image. Unfortunately, multi-view video that includes depth images amounts to a huge volume of data, so a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a single data structure by synthesizing the multi-view color and depth images. This paper proposes an enhanced compression method that combines the layered depth image representation with H.264/AVC video coding. The experimental results confirm high compression performance and good quality of the reconstructed images.
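
The layered depth image mentioned above can be pictured as a per-pixel list of depth-sorted color samples merged from all views. The sketch below is a generic LDI container with illustrative field names, not the paper's data structure:

```python
# Minimal layered depth image: each pixel holds (depth, color) samples
# gathered by warping every view into one reference camera.
from dataclasses import dataclass, field

@dataclass
class LayeredDepthImage:
    width: int
    height: int
    # layers[y][x] is a list of (depth, (r, g, b)) tuples, front to back
    layers: list = field(default=None)

    def __post_init__(self):
        self.layers = [[[] for _ in range(self.width)]
                       for _ in range(self.height)]

    def insert(self, x, y, depth, color, eps=1e-3):
        """Merge a warped sample; near-equal depths collapse into one layer."""
        samples = self.layers[y][x]
        for i, (d, _) in enumerate(samples):
            if abs(d - depth) < eps:   # same surface seen from another view
                return
            if depth < d:              # keep the list sorted front to back
                samples.insert(i, (depth, color))
                return
        samples.append((depth, color))
```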

Depth Map Using New Single Lens Stereo (단안렌즈 스테레오를 이용한 깊이 지도)

  • Ku, Changwun;Jeon, Junghee;Kim, Choongwon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.5
    • /
    • pp.1157-1163
    • /
    • 2000
  • In this paper, we present a novel and practical stereo vision system that uses only one camera and four mirrors placed in front of it. The equivalent of a stereo pair of images is formed as the left and right halves of a single CCD image by the four mirrors placed in front of the lens of the CCD camera. An arbitrary object point in 3D space is transformed into two virtual points by the four mirrors. As in a conventional stereo system, the displacement between the two conjugate image points of the two virtual points is directly related to the depth of the object point. This system has the following advantages over traditional two-camera stereo: identical system parameters, easy calibration, and easy acquisition of stereo data.

  • PDF
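
The depth computation this system relies on is ordinary stereo triangulation, since the mirrors emulate two virtual cameras separated by a baseline. A minimal sketch, with assumed example values for the focal length and baseline:

```python
# Depth from the mirror-split stereo pair: the four mirrors create two
# virtual cameras with baseline B, so the usual relation Z = f * B / d
# applies to disparities measured between the two image halves.
def depth_from_disparity(d_pixels, focal_px=1400.0, baseline_m=0.06):
    """d_pixels: disparity between the conjugate points found in the
    left and right halves of the single CCD image (rectified)."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / d_pixels   # depth in meters

# e.g. a 35-pixel disparity -> depth_from_disparity(35.0) == 2.4 (meters)
```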

A Study on Control of Drone Swarms Using Depth Camera (Depth 카메라를 사용한 군집 드론의 제어에 대한 연구)

  • Lee, Seong-Ho;Kim, Dong-Han;Han, Kyong-Ho
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.8
    • /
    • pp.1080-1088
    • /
    • 2018
  • General methods of controlling a drone are divided into manual control and automatic control, in which the drone moves along a route by itself. In the case of manual control, an operator must be able to determine the location and status of the drone and have a controller to steer it remotely. When people control a drone, they gather information about its location and attitude with their eyes and receive internal information, such as battery voltage and atmospheric pressure, through telemetry; they then decide on the drone's movement based on the gathered information and control it with a radio device. Automatic control, in which the drone finds its route itself, is not very different from manual control by a human: attitude information is collected with gyroscope and accelerometer sensors, internal information is delivered to the CPU digitally, and location information is collected with GPS, atmospheric pressure sensors, camera sensors, and ultrasonic sensors. This paper presents an investigation into drone control by a remote computer. Instead of using the drone's automatic control functions, a computer observes the drones, determines their movement from the observations, and controls them with a radio device. A computer equipped with a depth camera collects information, makes decisions, and controls the drones in much the way a human operator does, which makes the approach applicable to various fields. Its usability is further enhanced because it can control common commercial drones rather than drones specially manufactured for swarm flight. It can also be used to prevent drones from colliding with each other, to control drone access, and to control drones flying without a permit.
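
A rough sketch of the observe-decide-command loop described above, assuming pinhole intrinsics for the depth camera and a simple repulsive rule for keeping drones apart; the parameter values and the radio interface are placeholders:

```python
# The PC localizes each drone in the depth image, back-projects it to 3D,
# and radios velocity commands computed from the observed positions.
import numpy as np

def pixel_to_3d(u, v, depth_m, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project a depth-image pixel into camera-frame coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def separation_commands(positions, min_sep=1.0, gain=0.5):
    """Velocity commands that push drones apart when closer than min_sep."""
    cmds = [np.zeros(3) for _ in positions]
    for i, pi in enumerate(positions):
        for j, pj in enumerate(positions):
            if i == j:
                continue
            delta = pi - pj
            dist = np.linalg.norm(delta)
            if 0 < dist < min_sep:     # too close: repel proportionally
                cmds[i] += gain * (min_sep - dist) * delta / dist
    return cmds  # each command is sent to its drone over the radio link
```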

Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue;Wang, Guangchao;Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1595-1613
    • /
    • 2017
  • An IR camera combined with a laser-based IR projector provides an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework for describing human activities based on this optical capturing method, namely spatial-temporal texture features for 3D human activity recognition. Spatial-temporal texture features with depth information are insensitive to illumination and occlusion and are efficient for describing fine motions. Our proposed algorithm begins with video acquisition based on laser projection and video preprocessing with visual background extraction to obtain spatial-temporal key images. Texture features encoded from the key images are then used to generate discriminative features for human activity recognition. Experimental results on different databases and practical scenarios demonstrate the effectiveness of the proposed algorithm on large-scale data sets.
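
A rough sketch of the pipeline outlined above, with openly substituted components: OpenCV's MOG2 subtractor stands in for the paper's visual background extraction, and a local binary pattern histogram stands in for its spatial-temporal texture feature. Assumes 8-bit depth frames.

```python
# Background subtraction isolates the mover, masked depth frames become
# "key images", and a texture descriptor is pooled over the clip.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

subtractor = cv2.createBackgroundSubtractorMOG2(history=200)

def frame_descriptor(depth_frame):
    """LBP histogram of the foreground region of one 8-bit depth frame."""
    mask = subtractor.apply(depth_frame)
    fg = np.where(mask > 0, depth_frame, 0)
    lbp = local_binary_pattern(fg, P=8, R=1, method="uniform")  # values 0..9
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

def activity_descriptor(depth_frames):
    """Average per-frame texture histograms over the clip (temporal pooling)."""
    return np.mean([frame_descriptor(f) for f in depth_frames], axis=0)
```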

A Low Cost 3D Skin Wrinkle Reconstruction System Based on Stereo Semi-Dense Matching (반 밀집 정합에 기반한 저가형 3차원 주름 데이터 복원)

  • Zhang, Qian;WhangBo, Taeg-Keun
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.25-33
    • /
    • 2009
  • In this paper, we propose a new system to recover 3D wrinkle data from stereo images. 3D reconstruction from stereo images or video is a popular research focus and has been applied to cultural heritage, buildings, and other scenes; the goals are object measurement, scene depth calculation, and 3D data acquisition. There are several challenges in our research. First, it is hard to capture fully informative wrinkle images because of lighting effects, the non-rigidity of skin, and camera performance; we therefore design a dedicated computer vision setup that captures wrinkle images with a long-focal-length camera lens. Second, it is difficult to obtain dense stereo data because segmentation and corner detection in skin-texture images are hard, so we focus on a semi-dense stereo matching algorithm for wrinkle depth. Compared with a 3D scanner, our system is much cheaper, and compared with physical-modeling-based methods, it is more flexible while achieving high performance.

  • PDF
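
The semi-dense matching described above can be illustrated by correlating image patches only at detected corner points rather than at every pixel. A minimal sketch using normalized cross-correlation on a rectified pair, with assumed window and search-range values:

```python
# Match a corner of the rectified left image against candidates along the
# same scanline of the right image; depth then follows from Z = f * B / d.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return (a * b).sum() / denom

def match_point(left, right, x, y, win=7, max_disp=64):
    """Disparity at corner (x, y); the corner must lie win//2 px from borders."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    best_d, best_score = 0, -1.0
    for d in range(0, min(max_disp, x - h)):   # scan along the epipolar line
        cand = right[y - h:y + h + 1,
                     x - d - h:x - d + h + 1].astype(np.float64)
        score = ncc(patch, cand)
        if score > best_score:
            best_score, best_d = score, d
    return best_d, best_score
```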

3D SCENE EDITING BY RAY-SPACE PROCESSING

  • Lv, Lei;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.732-736
    • /
    • 2009
  • In this paper, we focus on the EPI (Epipolar-Plane Image), the horizontal cross section of Ray-Space, and propose a novel method that selects desired objects and edits scenes using multi-view images. On an EPI acquired by a camera array uniformly distributed along a line, every object is represented as a straight line, and the slope of each line is determined by the distance between the object and the camera plane. Detecting and removing a straight line of a specific slope therefore detects and removes an object at a specific depth. We propose a scheme that makes the layer of a specific slope compete with the other layers, instead of extracting layers sequentially from front to back. This enables effective removal of obstacles, object manipulation, and the production of a clearer 3D scene showing what we want to see.

  • PDF
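
The depth-to-slope relation the method exploits is standard EPI geometry and can be stated in generic notation (f, Z, and s below are conventional symbols, not necessarily the paper's):

```latex
% A camera at position s on a linear array with focal length f sees a
% scene point at lateral position X and depth Z at image coordinate
u(s) \;=\; \frac{f\,(X - s)}{Z} ,
% so on the (s, u) plane the point traces a straight line of slope
\frac{du}{ds} \;=\; -\,\frac{f}{Z} .
% Detecting and removing all lines of one slope thus removes exactly the
% scene layer at depth Z, which is the operation the abstract describes.
```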

3D Object Shape and Motion Recovery Using Stereo Images and a Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.135-142
    • /
    • 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. An inherent depth-recovery error due to the paraperspective camera model is removed by stereo image analysis. A 3D synthetic object with 21 features at various positions was designed and tested to show the performance of the proposed algorithm by comparing the recovered shape and motion data with the measured values.
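
The factorization step referred to above follows the familiar rank-3 SVD recipe; the sketch below shows that step only, in generic Tomasi-Kanade form, omitting the paraperspective normalization and the stereo depth correction that are the paper's contribution:

```python
# Stack registered 2D feature tracks into a measurement matrix W and
# split it into motion M and shape S via a rank-3 SVD: W ~ M @ S.
import numpy as np

def factorize(W):
    """W: (2F, P) matrix of P feature tracks over F frames, mean-centered
    per row so translation is removed. Returns (motion, shape)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]     # rank-3 approximation
    M = U3 * np.sqrt(s3)                         # camera motion, (2F, 3)
    S = np.sqrt(s3)[:, None] * Vt3               # 3D shape, (3, P)
    return M, S                                  # defined up to an affine transform
```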

Intermediate Image Generation of Stereo Image Using Depth Information and Block-based Matching Method (깊이정보와 블록기반매칭을 이용한 스테레오 영상의 중간영상 생성)

  • 양광원;허경무;김장기
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.10
    • /
    • pp.874-880
    • /
    • 2002
  • A number of techniques have been proposed for 3D display using the view difference between the two eyes. These methods do not convey sufficient realism compared with the real world; to improve realism, the displayed images have to change according to the viewer's position. In this paper, we present an approach for generating an intermediate image between two images taken from different viewpoints by applying a new image interpolation algorithm designed to cope with complex shapes. The proposed interpolation algorithm generates, from the base images, images rotated about the vertical axis by an arbitrary angle. The base images, obtained from a CCD camera, have view-angle differences of 3°, 5.5°, …°, 22°, and 45°. The proposed intermediate image generation method uses geometric analysis of the images and depth information obtained through a block-based matching method.
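
Intermediate-view synthesis of the kind described above can be sketched as a forward warp by a fraction of the disparity followed by blending; the disparity map from the block-based matching step is assumed given, and hole filling from the second view is omitted:

```python
# Warp each left-image pixel toward the virtual viewpoint by alpha times
# its disparity; alpha = 0 reproduces the left view, alpha = 1 the right.
import numpy as np

def intermediate_view(left, disp, alpha=0.5):
    """left: (H, W) rectified image; disp: (H, W) left-image disparities."""
    h, w = left.shape
    mid = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            xm = int(round(x - alpha * disp[y, x]))  # forward-warp the pixel
            if 0 <= xm < w:
                mid[y, xm] += left[y, x]
                weight[y, xm] += 1.0
    filled = weight > 0
    mid[filled] /= weight[filled]   # average pixels that land on the same spot
    return mid                      # holes (weight == 0) need the right view
```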