• Title/Summary/Keyword: depth information


Touch Pen Using Depth Information

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.18 no.11 / pp.1313-1318 / 2015
  • Current touch pens require special equipment to detect a touch, and the cost of this equipment increases in proportion to the screen size. In this paper, we propose a method for detecting a touch and implementing a pen using depth information. The proposed method obtains a background depth image with a depth camera and extracts an object by comparing each captured depth image with the background depth image. A touch is determined when the depth value of the object equals that of the background, and a pen event is then generated. Using this method, a cheaper and more convenient touch pen can be implemented.
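
A minimal sketch of this touch-detection idea, assuming depth frames arrive as NumPy arrays in millimeters; the threshold values and the synthetic frames are placeholders, not values from the paper.

```python
import numpy as np

OBJECT_THRESH_MM = 10   # assumed: minimum depth difference that marks the pen/finger
TOUCH_THRESH_MM = 25    # assumed: pen tip within this distance of the surface counts as a touch

def detect_touch(background, frame):
    """Return (touched, (row, col)) using background subtraction on depth images (mm)."""
    diff = background.astype(np.int32) - frame.astype(np.int32)
    object_mask = diff > OBJECT_THRESH_MM          # pixels closer to the camera than the surface
    if not object_mask.any():
        return False, None
    # Take the object pixel whose depth is closest to the background: the pen-tip candidate.
    rows, cols = np.nonzero(object_mask)
    tip = np.argmin(diff[rows, cols])
    touched = diff[rows[tip], cols[tip]] <= TOUCH_THRESH_MM
    return bool(touched), (int(rows[tip]), int(cols[tip]))

# Synthetic example: a flat surface 1000 mm away and a pen whose tip is 15 mm above it.
bg = np.full((480, 640), 1000, dtype=np.uint16)
cur = bg.copy()
cur[200:240, 320] = np.linspace(900, 985, 40).astype(np.uint16)  # pen shaft down to the tip
print(detect_touch(bg, cur))  # -> (True, (239, 320)) with the thresholds above
```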

Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo;Seo, Kiyoung;Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.7 / pp.2434-2448 / 2014
  • This research proposed a domain-division depth map quantization for multiview intermediate image generation using Depth Image-Based Rendering (DIBR). The technique quantizes depth per pixel according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment was conducted to investigate the potential benefits of the proposed method against linear depth quantization in DIBR multiview intermediate image generation. The experiment evaluated three quantization methods on computer-generated 3D scenes of various complexities and backgrounds while varying the depth resolution. The results showed that the proposed domain-division depth quantization method outperformed the linear method on 7-bit or lower depth maps, especially in scenes with a large object.
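
The difference between linear and domain-division quantization can be sketched as follows; the domain boundaries and the bit-share percentages are hypothetical placeholders chosen only to show the idea of spending more code values on a region of interest.

```python
import numpy as np

def linear_quantize(depth, bits, d_min, d_max):
    """Uniformly map depth values in [d_min, d_max] to 2**bits levels."""
    levels = 2 ** bits
    q = np.round((depth - d_min) / (d_max - d_min) * (levels - 1))
    return q.astype(np.int32)

def domain_quantize(depth, bits, domains):
    """Quantize per domain: `domains` is a list of (lo, hi, share), where `share`
    is the fraction of the 2**bits code values assigned to that depth range."""
    levels = 2 ** bits
    q = np.zeros_like(depth, dtype=np.int32)
    offset = 0
    for lo, hi, share in domains:
        n = max(1, int(round(levels * share)))
        mask = (depth >= lo) & (depth < hi)
        q[mask] = offset + np.round((depth[mask] - lo) / (hi - lo) * (n - 1)).astype(np.int32)
        offset += n
    return q

depth = np.random.uniform(0.0, 10.0, size=(64, 64))
# Hypothetical split: give half of the code values to the near region of interest (0..2).
domains = [(0.0, 2.0, 0.5), (2.0, 10.0, 0.5)]
print(linear_quantize(depth, 7, 0.0, 10.0).max(), domain_quantize(depth, 7, domains).max())
```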

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo;Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.1 / pp.141-150 / 2013
  • This study proposes a technique that uses an Active Shape Model (ASM) to track an object after separating it with a depth sensor. Unlike a common visual camera, a depth sensor is not affected by the intensity of illumination, so the object can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component. Morphology and labeling operations are then applied to correct the image and extract the object more efficiently. By applying the Active Shape Model to the extracted object information, the object can be tracked more robustly; the ASM is also robust to object occlusion. Compared with visual camera-based object tracking algorithms, the proposed technique using the depth sensor is more efficient and robust at object tracking. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.
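
A minimal sketch of the depth-based extraction stage (thresholding, morphology, labeling), assuming OpenCV and NumPy; the depth range and kernel size are placeholders, and the ASM fitting itself is not shown.

```python
import cv2
import numpy as np

def extract_object_mask(depth_mm, near_mm=500, far_mm=1500):
    """Segment pixels in an assumed working depth range, then clean the mask
    with morphology and keep the largest connected component (labeling)."""
    mask = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num < 2:
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])    # skip the background label 0
    return (labels == largest).astype(np.uint8) * 255

# The resulting mask (or its contour points) would then initialize the ASM landmarks
# that are fitted and tracked frame by frame.
```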

Fast Depth Video Coding with Intra Prediction on VVC

  • Wei, Hongan;Zhou, Binqian;Fang, Ying;Xu, Yiwen;Zhao, Tiesong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.7 / pp.3018-3038 / 2020
  • In stereoscopic or multiview displays, the depth video represents the visual distances between objects and the camera. To improve the computational efficiency of the depth video encoder, we examine the intra prediction of depth videos under Versatile Video Coding (VVC) and observe a diverse distribution of intra prediction modes across different coding unit (CU) sizes. We propose a hybrid scheme to further speed up depth video coding. In the first stage, we adaptively predict the Hadamard (HAD) costs of intra prediction modes and initialize a candidate list according to these costs. The candidate list is then refined by considering the probability distribution of candidate modes for different CU sizes. Finally, early termination of CU splitting is performed at each CU depth level based on the Bayesian theorem. Our proposed method is incorporated into VVC intra prediction for fast coding of depth videos. Experiments with 7 standard sequences and 4 quantization parameters (QPs) validate the efficiency of our method.
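
The Bayesian early-termination step at each CU depth level can be illustrated with a toy sketch. The feature (a binned cost ratio), the likelihood tables, and the priors are hypothetical placeholders; the abstract does not specify how these statistics are obtained.

```python
def stop_splitting(feature_bin, prior_no_split, likelihood_no_split, likelihood_split,
                   threshold=0.5):
    """Decide whether to terminate CU splitting via Bayes' theorem.

    feature_bin: index of the observed feature bin (e.g. a binned RD-cost ratio).
    prior_no_split: P(no split) at the current CU depth level.
    likelihood_*: P(feature_bin | class) tables for the two classes.
    """
    p_ns = prior_no_split * likelihood_no_split[feature_bin]
    p_s = (1.0 - prior_no_split) * likelihood_split[feature_bin]
    posterior_no_split = p_ns / (p_ns + p_s)
    return posterior_no_split > threshold

# Hypothetical statistics for three feature bins at one CU depth level.
likelihood_no_split = [0.7, 0.2, 0.1]
likelihood_split = [0.1, 0.3, 0.6]
print(stop_splitting(0, prior_no_split=0.6,
                     likelihood_no_split=likelihood_no_split,
                     likelihood_split=likelihood_split))   # True: splitting is skipped
```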

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera (깊이 카메라를 이용한 객체 분리 및 고해상도 깊이 맵 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.9 / pp.751-756 / 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, its output suffers from inherent noise and distortion. We therefore perform several preprocessing steps such as image enhancement, segmentation, and 3D warping, and use the TOF depth data to identify depth-discontinuity regions. We then extract the foreground object and generate a depth map corresponding to the color image. Experimental results show that the proposed method efficiently generates the depth map even around object boundaries and in textureless regions.
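
The 3D warping preprocessing step mentioned above can be sketched as a standard back-project/transform/re-project pass, assuming pinhole intrinsics for both cameras and a known rigid transform between them; the matrices and millimeter units are assumptions, not calibration data from the paper.

```python
import numpy as np

def warp_depth_to_color(depth_mm, K_depth, K_color, R, t, color_size):
    """Warp a TOF depth image into the color camera's image plane.

    depth_mm: HxW depth image (mm); K_depth, K_color: 3x3 intrinsics;
    R, t: rotation and translation from the depth camera to the color camera.
    """
    v, u = np.nonzero(depth_mm > 0)                 # only pixels with a valid depth measurement
    z = depth_mm[v, u].astype(np.float64)
    # Back-project to 3D points in the depth camera frame.
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z], axis=-1)
    # Rigid transform into the color camera frame, then project with its intrinsics.
    pts_c = pts @ R.T + t
    uc = K_color[0, 0] * pts_c[:, 0] / pts_c[:, 2] + K_color[0, 2]
    vc = K_color[1, 1] * pts_c[:, 1] / pts_c[:, 2] + K_color[1, 2]
    warped = np.zeros(color_size)
    inside = (pts_c[:, 2] > 0) & (uc >= 0) & (uc < color_size[1]) & (vc >= 0) & (vc < color_size[0])
    warped[vc[inside].astype(int), uc[inside].astype(int)] = pts_c[inside, 2]
    return warped  # holes and occlusions would still need filling or filtering afterwards
```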

Accurate depth extraction in 3D integral imaging using sub-pixel registration information

  • Hong, Kee-Hoon;Hong, Ji-Soo;Park, Jae-Hyeung;Lee, Byoung-Ho
    • Proceedings of the Korean Information Display Society Conference (한국정보디스플레이학회 학술대회논문집) / 2009.10a / pp.1350-1353 / 2009
  • Conventional depth extraction in integral imaging is based on the disparity information between elemental images. Since the disparity is measured in pixel units, however, the extracted depth is discrete, resulting in quantization error. Moreover, the quantization error grows as the object depth increases, which limits the accuracy of depth extraction for distant objects. In this paper, we propose a new method for depth extraction in integral imaging that uses sub-pixel registration information between sub-images to obtain linear and accurate depth.
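
The quantization argument can be made concrete with the standard depth-from-disparity relation z = f·b/d; the focal length, baseline, and disparities below are arbitrary illustrations, not the paper's integral imaging geometry.

```python
def depth_from_disparity(focal_px, baseline, disparity):
    """Classic triangulation: depth grows as disparity shrinks."""
    return focal_px * baseline / disparity

f_px, b = 800.0, 0.01          # assumed focal length (pixels) and baseline (m)
true_disparity = 2.35          # sub-pixel disparity of a distant object

z_subpixel = depth_from_disparity(f_px, b, true_disparity)        # ~3.40 m
z_integer = depth_from_disparity(f_px, b, round(true_disparity))  # 4.00 m: quantized to d = 2

# The same one-pixel rounding error on a nearby object (d ~ 20) changes the depth by only a few
# percent, which is why pixel-unit disparity limits accuracy mainly for distant objects.
print(z_subpixel, z_integer)
```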


Optimized Multiple Description Lattice Vector Quantization Coding for 3D Depth Image

  • Zhang, Huiwen;Bai, Huihui;Liu, Meiqin;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.3 / pp.1140-1154 / 2015
  • Multiple Description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. Lattice vector quantization (LVQ) is a prominent MD technique for designing an MD image coder. However, unlike the traditional 2D texture image, the 3D depth image has its own special characteristics, which should be taken into account for efficient compression. In this paper, an optimized MDLVQ scheme is proposed in view of the characteristics of the 3D depth image. First, owing to the sparsity of the depth image, the image blocks are classified into edge blocks and smooth blocks, which are encoded with different modes. Furthermore, according to the boundary content of edge blocks, the step size of the LVQ is regulated adaptively for each block. Experimental results validate the effectiveness of the proposed scheme, which shows better rate-distortion performance than conventional MDLVQ.
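
One way to picture the edge/smooth block classification and the adaptive step size is the sketch below; the gradient-based classifier, the thresholds, and the choice of a finer step on edge blocks are illustrative assumptions, and the lattice vector quantizer itself is not reproduced.

```python
import numpy as np

def classify_block(block, edge_thresh=5.0):
    """Label a depth block as 'edge' or 'smooth' from its mean absolute gradient."""
    gy, gx = np.gradient(block.astype(np.float64))
    return "edge" if np.mean(np.abs(gx) + np.abs(gy)) > edge_thresh else "smooth"

def step_size_for(block, base_step=4.0):
    """Assumed policy: a finer quantization step on edge blocks, a coarser one on smooth blocks."""
    return base_step * (0.5 if classify_block(block) == "edge" else 2.0)

depth = np.zeros((16, 16), dtype=np.float64)
depth[:, 8:] = 100.0                                   # a sharp depth boundary -> edge block
print(classify_block(depth), step_size_for(depth))     # edge 2.0
print(classify_block(np.full((16, 16), 50.0)))         # smooth
```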

Accelerated Generation Algorithm for an Elemental Image Array Using Depth Information in Computational Integral Imaging

  • Piao, Yongri;Kwon, Young-Man;Zhang, Miao;Lee, Joon-Jae
    • Journal of information and communication convergence engineering / v.11 no.2 / pp.132-138 / 2013
  • In this paper, an accelerated algorithm for effectively generating an elemental image array in a computational integral imaging system is proposed. In the proposed method, the depth information of a 3D object is extracted from images picked up by a stereo camera or a depth camera. The elemental image array is then generated by the proposed accelerated algorithm using this depth information. The resultant 3D image generated by the proposed accelerated algorithm was compared with that of the conventional direct algorithm to verify the efficiency of the proposed method. The experimental results confirm the accuracy of the elemental images generated by the proposed method.
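
A very simplified picture of generating elemental images from a depth-augmented image, assuming an ideal pinhole lens array parallel to the object image and ignoring occlusion ordering; the pitch, gap, and mapping are placeholders and this is not the paper's accelerated algorithm.

```python
import numpy as np

def generate_eia(color, depth, pitch=16, gap=3.0, lens_grid=(8, 8)):
    """Pinhole-array forward mapping: each object pixel is projected through every
    pinhole onto the elemental image plane located `gap` units behind the array.

    color, depth: HxW arrays (intensity and depth, depth in the same units as gap).
    Returns the elemental image array of size (lens_grid[0]*pitch, lens_grid[1]*pitch).
    """
    eia = np.zeros((lens_grid[0] * pitch, lens_grid[1] * pitch))
    ys, xs = np.nonzero(depth > 0)
    z = depth[ys, xs].astype(np.float64)
    for ly in range(lens_grid[0]):
        for lx in range(lens_grid[1]):
            cy, cx = (ly + 0.5) * pitch, (lx + 0.5) * pitch   # pinhole center
            v = cy + gap * (cy - ys) / z                      # projected sensor row
            u = cx + gap * (cx - xs) / z                      # projected sensor column
            # Keep only rays that land inside this pinhole's own elemental image.
            keep = (np.abs(v - cy) < pitch / 2) & (np.abs(u - cx) < pitch / 2)
            eia[v[keep].astype(int), u[keep].astype(int)] = color[ys[keep], xs[keep]]
    return eia
```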

Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul;Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing / v.3 no.4 / pp.195-204 / 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern distortion analysis from a structured light system identifies the object with the greatest depth from its background. An automated assembling robot should select and pick the object with the greatest depth first to reduce physical harm during the picking action of the robot arm. Object detection is then combined with the depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to an automated assembling robot, which is equipped with a laser-based proximity sensor, for picking up an object and placing it in the intended place. The depth evaluation process using structured light for an automated electronics assembling robot is accelerated so that a single image frame can be used for computation with the simplest experimental setup, which consists of a single camera and a projector. The depth evaluation process required 31 ms to 32 ms per frame, which is suitable for a robot vision system equipped with a 30-frames-per-second camera.
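
The step of choosing the most protruding object and handing its contour to the picking routine could look roughly like the sketch below, assuming OpenCV, a background depth map from the structured-light system, and placeholder thresholds.

```python
import cv2
import numpy as np

def pick_target_contour(depth, background, min_height=5.0, min_area=50):
    """Return the contour of the object that protrudes most from the background.

    depth, background: float32 depth maps from the structured-light system.
    min_height: assumed minimum protrusion that counts as an object.
    """
    height = background - depth                     # how far each pixel rises above the background
    mask = (height > min_height).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    best_label, best_height = None, 0.0
    for lbl in range(1, num):                       # label 0 is the background
        if stats[lbl, cv2.CC_STAT_AREA] < min_area:
            continue
        h = height[labels == lbl].max()
        if h > best_height:
            best_label, best_height = lbl, h
    if best_label is None:
        return None
    obj_mask = (labels == best_label).astype(np.uint8)
    contours, _ = cv2.findContours(obj_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)       # edge points passed to the picking routine
```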

Depth location extraction and three-dimensional image recognition by use of holographic information of an object (홀로그램 정보를 이용한 깊이위치 추출과 3차원 영상인식)

  • 김태근
    • Korean Journal of Optics and Photonics / v.14 no.1 / pp.51-57 / 2003
  • The hologram of an object contains information on the object's depth distribution as well as the depth location of the object. However, these pieces of information are blended together in the form of a fringe pattern, which makes it hard to extract the depth location of the object directly from the hologram. In this paper, I propose a numerical method that separates the depth location information from a single-sideband hologram by Gaussian low-pass filtering. The depth location of the object is extracted by numerical analysis of the filtered hologram, and the hologram at the object's depth location is then recovered using the extracted depth location.
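
The Gaussian low-pass filtering idea can be sketched in the frequency domain; the synthetic hologram and the filter width below are arbitrary placeholders, so this only shows the separation of the slowly varying component, not the paper's depth-location analysis.

```python
import numpy as np

def gaussian_lowpass(hologram, sigma_frac=0.05):
    """Suppress the high-frequency fringe content of a (single-sideband) hologram,
    keeping the slowly varying component tied to the object's depth location."""
    h, w = hologram.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    gauss = np.exp(-(fx**2 + fy**2) / (2.0 * sigma_frac**2))  # low-pass in normalized frequency
    spectrum = np.fft.fft2(hologram)
    return np.real(np.fft.ifft2(spectrum * gauss))

# Synthetic example: a low-frequency envelope plus fine fringes.
y, x = np.mgrid[0:256, 0:256] / 256.0
holo = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02) * (1 + np.cos(2 * np.pi * 40 * x))
filtered = gaussian_lowpass(holo)   # fringes suppressed, the envelope (depth-location cue) remains
```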