• Title/Summary/Keyword: Depth camera

Presentation Method Using Depth Information (깊이 정보를 이용한 프레젠테이션 방법)

  • Kim, Ho-Seung;Kwon, Soon-Kak
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.409-415
    • /
    • 2013
  • Recently, various devices have been developed to make presentations more convenient. Typical presentation devices add keyboard and mouse functions to a laser pointer, but they restrict the presenter's movement and support only a few events. In this paper, we propose a method that increases the freedom of the presentation by letting the hand control it through a depth camera. The proposed method recognizes the horizontal and vertical position of the hand pointer and the distance between the hand and the camera from both the depth and RGB cameras, and then performs a presentation event according to the location and pattern with which the hand touches the screen. In the experiments, with the camera fixed on the left side of the screen, nine presentation events were performed correctly.
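
The event mapping described above can be illustrated with a minimal sketch: given a depth frame, pixels lying within a small band around an assumed screen distance are treated as a touching hand, and the hand centroid selects one of nine screen regions. The frame source, screen distance, tolerances, and event names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

SCREEN_DEPTH_MM = 1500      # assumed distance from camera to the screen plane
TOUCH_TOLERANCE_MM = 40     # hand counts as "touching" within this band
EVENTS = [["prev", "up", "next"],        # hypothetical 3x3 event grid
          ["left", "select", "right"],
          ["start", "down", "end"]]

def detect_touch_event(depth_frame: np.ndarray):
    """Return the event for the 3x3 region the hand touches, or None.

    depth_frame: HxW array of depth values in millimetres (0 = no data).
    """
    valid = depth_frame > 0
    near_screen = valid & (np.abs(depth_frame - SCREEN_DEPTH_MM) < TOUCH_TOLERANCE_MM)
    if near_screen.sum() < 50:            # too few pixels: no touch
        return None
    ys, xs = np.nonzero(near_screen)
    cy, cx = ys.mean(), xs.mean()         # centroid of the touching hand
    h, w = depth_frame.shape
    row = min(int(cy / h * 3), 2)         # which third of the screen, vertically
    col = min(int(cx / w * 3), 2)         # and horizontally
    return EVENTS[row][col]

if __name__ == "__main__":
    # synthetic frame: background at 2500 mm, a "hand" patch at screen depth
    frame = np.full((480, 640), 2500, dtype=np.float32)
    frame[50:90, 500:560] = 1500
    print(detect_touch_event(frame))      # -> "next" (top-right region)
```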

3D Reconstruction Using a Single Camera (단일 카메라를 이용한 3차원 공간 정보 생성)

  • Kwon, Oh-Young;Seo, Kyoung-Taek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.12
    • /
    • pp.2943-2948
    • /
    • 2015
  • We perform 3D reconstruction using a single camera and, based on this information, study a driving-assistance device that can tell the driver how to pass an obstacle ahead. Although the resulting depth accuracy is limited, the method can still indicate whether an obstacle straight ahead can be passed. For the 3D reconstruction, the intrinsic parameters are measured, the fundamental matrix is computed, feature points are matched, and triangulation is performed on the matched points. Experiments confirm that, although the depth information contains some error, the X- and Y-axis information used to decide whether an obstacle can be passed is reliable.
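
The pipeline outlined in the abstract (known intrinsics, epipolar geometry, feature matching, triangulation) can be sketched with OpenCV as follows; since the intrinsics are assumed known, the sketch estimates the essential matrix rather than the fundamental matrix. The image paths and the intrinsic matrix K are placeholders, and this is not the authors' code.

```python
import cv2
import numpy as np

# Assumed intrinsic matrix (fx, fy, cx, cy would come from a prior calibration).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def reconstruct_two_frames(img1_path: str, img2_path: str) -> np.ndarray:
    """Sparse 3D points from two frames of a single moving camera."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # 1. Detect and match feature points.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Epipolar geometry: essential matrix and relative camera pose.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 3. Triangulate the inlier correspondences.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (pts4d[:3] / pts4d[3]).T        # Nx3 points, up to scale
```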

Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk;Park, Se-Ho;Kim, Tae-Gon;Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.1
    • /
    • pp.378-390
    • /
    • 2015
  • Image projectors can turn any surface into a display. Integrating surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems, but hand detection through color image processing is affected by the surrounding environment: a lack of illumination and color detail degrades the detection process and lowers the recognition success rate, and the projected image itself can also interfere with detection. In order to overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the hand region from the scene. A hand detection and finger tracking method based on depth images is proposed, and based on this method a touch interface for the projected surface is implemented and evaluated.
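
A minimal sketch of depth-based touch detection on a projected surface is shown below: a per-pixel depth model of the empty surface is built first, and pixels that are slightly closer than the surface are treated as a touching fingertip. The thresholds and the fingertip rule (topmost pixel of the touching blob) are illustrative assumptions, not the method proposed in the paper.

```python
import cv2
import numpy as np

TOUCH_BAND_MM = (5, 25)     # fingertip is this much closer than the surface when touching
MIN_HAND_AREA = 200         # ignore blobs smaller than this (noise)

def build_surface_model(depth_frames):
    """Median of several empty-scene depth frames = per-pixel surface depth."""
    return np.median(np.stack(depth_frames), axis=0)

def find_touch_point(depth_frame, surface_depth):
    """Return (x, y) of a fingertip touching the surface, or None."""
    diff = surface_depth - depth_frame            # > 0 where something is in front
    touch_mask = ((diff > TOUCH_BAND_MM[0]) &
                  (diff < TOUCH_BAND_MM[1]) &
                  (depth_frame > 0)).astype(np.uint8)
    touch_mask = cv2.morphologyEx(touch_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(touch_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < MIN_HAND_AREA:
        return None
    # crude fingertip rule: the topmost point of the touching blob
    tip = min(hand.reshape(-1, 2), key=lambda p: p[1])
    return int(tip[0]), int(tip[1])
```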

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1307-1312
    • /
    • 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and LiDAR (Light Detection and Ranging) sensors to address the core components of autonomous driving perception: object recognition and distance measurement. Using the proposed hybrid camera system, we extract objects within the scene and generate precise location and distance information for them. We first employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, to recognize objects within the scene. We then use the multi-focal cameras to create depth maps from which object positions and distances are derived. To improve distance accuracy, the 3D distance information obtained from the LiDAR sensors is integrated with the generated depth maps. Based on the proposed hybrid camera system, we introduce an autonomous vehicle platform that perceives its surroundings more accurately during operation and provides precise 3D spatial location and distance information, which we expect to improve the safety and efficiency of autonomous vehicles.
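
One fusion step described in the abstract, refining a per-object distance by combining a camera-derived depth map with LiDAR points projected into the image, can be sketched as follows. The detection boxes, calibration matrices, and fusion weight are placeholders, and the object detector itself (YOLOv7 in the paper) is not reproduced here.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_lidar):
    """Project Nx3 LiDAR points into the image; returns pixel coords and depths."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]              # 3xN points in the camera frame
    in_front = cam[2] > 0.1                        # keep points in front of the camera
    uv = K @ cam[:, in_front]
    uv = (uv[:2] / uv[2]).T                        # Nx2 pixel coordinates
    return uv, cam[2, in_front]

def object_distance(box, depth_map, lidar_uv, lidar_depth, lidar_weight=0.7):
    """Fuse camera depth-map and LiDAR depths inside a detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    cam_depths = depth_map[y1:y2, x1:x2]
    cam_est = np.median(cam_depths[cam_depths > 0]) if (cam_depths > 0).any() else None

    inside = ((lidar_uv[:, 0] >= x1) & (lidar_uv[:, 0] < x2) &
              (lidar_uv[:, 1] >= y1) & (lidar_uv[:, 1] < y2))
    lidar_est = np.median(lidar_depth[inside]) if inside.any() else None

    if cam_est is None:
        return lidar_est
    if lidar_est is None:
        return cam_est
    # weighted fusion: trust the metrically accurate LiDAR measurement more
    return lidar_weight * lidar_est + (1 - lidar_weight) * cam_est
```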

Realtime Implementation Method for Perspective Distortion Correction (원근 왜곡 보정의 실시간 구현 방법)

  • Lee, Dong-Seok;Kim, Nam-Gyu;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.4
    • /
    • pp.606-613
    • /
    • 2017
  • When a planar area is captured by a depth camera, the shape of the plane in the captured image suffers perspective projection distortion that depends on the position of the camera. The distorted image can be corrected using the depth information of the plane in the captured area, but previous depth-based correction methods cannot satisfy the real-time requirement because of their large amount of computation. In this paper, we propose a method that applies a conversion table selectively by measuring the motion of the plane and performs the correction through parallel processing. With the proposed method, the system corrects a distorted image with a resolution of 640x480 in 22.52 ms per frame, which satisfies the real-time requirement.
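
The core of such a correction can be expressed as a single homography warp once the four corners of the planar region have been located (for example, with the help of the depth image). The paper's contribution, selectively reusing conversion tables and parallelizing the per-pixel mapping, is not reproduced in this sketch, and the corner coordinates below are placeholders.

```python
import cv2
import numpy as np

def correct_perspective(image, plane_corners, out_size=(640, 480)):
    """Warp the distorted planar region back to a fronto-parallel rectangle.

    plane_corners: four (x, y) image points of the plane, ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    w, h = out_size
    src = np.float32(plane_corners)
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(src, dst)      # the per-frame "conversion" mapping
    return cv2.warpPerspective(image, H, (w, h))

if __name__ == "__main__":
    img = cv2.imread("distorted_plane.png")                  # placeholder input image
    corners = [(120, 80), (590, 60), (620, 420), (90, 450)]  # assumed corner estimates
    cv2.imwrite("corrected.png", correct_perspective(img, corners))
```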

Depth error calibration of maladjusted stereo cameras for translation of instrumented image information in dynamic objects (동영상 정보의 계측정보 전송을 위한 비선형 스테레오 카메라의 오차 보정)

  • Kim, Jong-Man;Kim, Yeong-Min;Hwang, Jong-Sun;Lim, Byung-Hyun
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2003.05b
    • /
    • pp.109-114
    • /
    • 2003
  • A depth error correction effect for maladjusted stereo cameras using a calibrated pixel distance parameter is presented. Camera calibration is a necessary procedure for stereo vision-based depth computation: the intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty is aligning the cameras for parallel installation, that is, placing the two CCD arrays in a single plane; no effective method for such alignment has been presented before, so some depth error caused by the non-parallel installation of the cameras is inevitable. If the pixel distance parameter, one of the intrinsic parameters, is calibrated with known points, this error can be partially compensated. The compensation effect of the calibrated pixel distance parameter is demonstrated with various experimental results.
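
A small numerical sketch of the idea: stereo depth follows Z = f·B/(d·p) for focal length f, baseline B, disparity d in pixels, and pixel distance p, so re-estimating p from points of known depth partially compensates the systematic error of a slightly misaligned rig. All numbers below are made up for illustration.

```python
import numpy as np

F_MM = 8.0              # lens focal length (assumed)
BASELINE_MM = 60.0      # camera baseline (assumed)
P_NOMINAL_MM = 0.0074   # nominal pixel distance from the sensor datasheet

def depth_from_disparity(disparity_px, pixel_distance_mm):
    """Z = f * B / (d * p), everything in millimetres."""
    return F_MM * BASELINE_MM / (disparity_px * pixel_distance_mm)

def calibrate_pixel_distance(known_depths_mm, measured_disparities_px):
    """Estimate p from reference points with known depth (mean over the points)."""
    ps = F_MM * BASELINE_MM / (np.asarray(measured_disparities_px) *
                               np.asarray(known_depths_mm))
    return float(np.mean(ps))

if __name__ == "__main__":
    # reference points: true depths and the disparities actually measured
    true_z = np.array([500.0, 800.0, 1200.0])
    meas_d = np.array([123.0, 77.0, 51.5])       # slightly off due to misalignment
    p_cal = calibrate_pixel_distance(true_z, meas_d)
    print("calibrated p:", p_cal)
    print("depth with nominal p   :", depth_from_disparity(meas_d, P_NOMINAL_MM))
    print("depth with calibrated p:", depth_from_disparity(meas_d, p_cal))
```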

Surface Rendering using Stereo Images (스테레오 영상을 이용한 Surface Rendering)

  • Lee, S.J.;Yoon, S.W.;Cho, Y.B.;Lee, M.H.
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2818-2820
    • /
    • 2001
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, feature-point-based stereo matching is performed to obtain depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) algorithm. The final image helps the viewer understand the depth information visually.
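
As a rough stand-in for the NURBS fitting step (no NURBS library is assumed here), the sketch below fits a smoothing bicubic B-spline surface, of which NURBS are a generalization, to scattered depth points such as those produced by feature-based stereo matching. The synthetic data and grid size are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_depth_surface(points_xyz, grid_size=(64, 64)):
    """Fit a smooth surface to scattered (x, y, z) depth points and
    resample it on a regular grid (B-spline stand-in for NURBS)."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)
    gx = np.linspace(x.min(), x.max(), grid_size[0])
    gy = np.linspace(y.min(), y.max(), grid_size[1])
    return gx, gy, spline(gx, gy)      # grid of reconstructed depth values

if __name__ == "__main__":
    # synthetic scattered depth points standing in for stereo-matched features
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, (500, 2))
    z = 50 + 10 * np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])
    gx, gy, surface = fit_depth_surface(np.column_stack([pts, z]))
    print(surface.shape)               # (64, 64)
```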

Solving the Correspondence Problem by Multiple Stereo Image and Error Analysis of Computed Depth (다중 스테레오영상을 이용한 대응문제의 해결과 거리오차의 해석)

  • 이재웅;이진우;박광일
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.19 no.6
    • /
    • pp.1431-1438
    • /
    • 1995
  • In this paper, we present a multiple-view stereo matching method for a stereo camera moving in the direction of its optical axis, and we analyze the obtainable depth precision to show that multiple-view stereo increases the effective baseline compared with single-view stereo. The method determines candidate correspondence points in each image pair and then searches for the correct combination of correspondences among them using the geometric consistency they must satisfy. Its advantages are higher matching accuracy, obtained by using the multiple stereo images, and lower computation, due to local processing. The 3D depth is computed by averaging the depths obtained from each view pair, and we show that when the position of an image feature is uncertain due to image noise, the resulting depth is more precise than the depth obtainable from each stereo pair independently. The experimental results demonstrate both the removal of incorrect matching candidates and the improvement in precision.
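
The precision claim can be checked with a toy simulation: several stereo pairs observe the same point, each disparity is perturbed by feature-localization noise, and the spread of the averaged depth is compared with that of a single pair. The focal length, baseline, depth, and noise level below are assumptions, not values from the paper.

```python
import numpy as np

F_PX = 700.0        # focal length in pixels (assumed)
BASELINE_M = 0.12   # stereo baseline (assumed)
TRUE_Z_M = 5.0      # true depth of the observed point
PIXEL_NOISE = 0.5   # std-dev of feature localisation error, in pixels

def simulate(n_views, n_trials=20000, rng=np.random.default_rng(1)):
    """Depth std-dev from a single stereo pair vs. averaging n_views pairs."""
    true_disp = F_PX * BASELINE_M / TRUE_Z_M
    noisy_disp = true_disp + rng.normal(0, PIXEL_NOISE, (n_trials, n_views))
    depths = F_PX * BASELINE_M / noisy_disp        # per-pair depth estimates
    return depths[:, 0].std(), depths.mean(axis=1).std()

if __name__ == "__main__":
    single, averaged = simulate(n_views=5)
    print(f"single-pair depth std : {single:.4f} m")
    print(f"5-view averaged std   : {averaged:.4f} m")   # roughly 1/sqrt(5) smaller
```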

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.614-621
    • /
    • 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning using a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN) together with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained as a minimax game with the Wasserstein distance as the loss function. The trained DCGAN then restores the missing regions of the captured facial depth images through a further learning procedure that uses the trained generator and a new loss function.
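
A compact PyTorch sketch of the adversarial setup named in the abstract, a DCGAN-style generator and critic trained with the Wasserstein loss via weight clipping, is given below. The architectures, the 64x64 single-channel depth format, and the training constants are illustrative; the paper's 3DMM-CNN face model and its restoration objective are not reproduced.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator and critic for 64x64 single-channel depth images.
class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())           # -> 1x64x64

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 8, 1, 0))                              # -> scalar score

    def forward(self, x):
        return self.net(x).view(-1)

def train_step(G, D, opt_g, opt_d, real_depth, z_dim=100, clip=0.01):
    """One simplified WGAN step: the critic maximises D(real) - D(fake),
    the generator maximises D(fake). (WGAN normally updates the critic
    several times per generator step.)"""
    z = torch.randn(real_depth.size(0), z_dim)
    fake = G(z)

    d_loss = -(D(real_depth).mean() - D(fake.detach()).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    for p in D.parameters():            # weight clipping keeps D roughly 1-Lipschitz
        p.data.clamp_(-clip, clip)

    g_loss = -D(fake).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```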

Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.91-100
    • /
    • 2011
  • Free viewpoint TV can provide images from whichever viewpoint the viewer wants. In the real world, however, not every viewpoint can be captured by a camera; only a limited set of viewpoints is captured, and the group of captured images is called a multi-view image. Free viewpoint TV therefore needs to synthesize virtual intermediate viewpoints from the captured ones, and interpolation methods are the usual solution to this problem. Interpolating a view at the correct angle requires the depth images of the multi-view image. However, multi-view video that includes depth images requires a new compression encoding technique for storage and transmission because of its huge amount of data. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that merges the multi-view color and depth images. This paper proposes an enhanced compression method using the layered depth image representation and H.264/AVC video coding. The experimental results confirm high compression performance and good quality of the reconstructed images.
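
The layered-depth-image structure itself can be sketched briefly: assuming the multi-view color and depth images have already been warped into the reference viewpoint, each pixel stores a front-to-back list of (depth, color) layers, merging samples that fall on the same surface. The depth tolerance and data layout below are assumptions for illustration, not the paper's encoder.

```python
import numpy as np

DEPTH_TOL = 5.0     # two samples closer than this (in depth units) are merged

def build_layered_depth_image(colors, depths):
    """Merge several views (already warped to the reference viewpoint) into an LDI.

    colors: list of HxWx3 arrays; depths: list of HxW arrays (0 = hole).
    Returns an HxW object array of per-pixel layer lists, each layer being
    a (depth, color) pair sorted front to back.
    """
    h, w = depths[0].shape
    ldi = np.empty((h, w), dtype=object)
    for y in range(h):
        for x in range(w):
            layers = []
            for color, depth in zip(colors, depths):
                d = depth[y, x]
                if d <= 0:
                    continue                      # hole in this view
                if any(abs(d - ld) < DEPTH_TOL for ld, _ in layers):
                    continue                      # same surface already stored
                layers.append((d, color[y, x]))
            layers.sort(key=lambda layer: layer[0])
            ldi[y, x] = layers
    return ldi
```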