• Title/Summary/Keyword: 3D Depth Camera


A Shadow Mapping Technique Separating Static and Dynamic Objects in Games using Multiple Render Targets (다중 렌더 타겟을 사용하여 정적 및 동적 오브젝트를 분리한 게임용 그림자 매핑 기법)

  • Lee, Dongryul;Kim, Youngsik
    • Journal of Korea Game Society / v.15 no.5 / pp.99-108 / 2015
  • To convey object locations and improve realism in 3D games, shadow mapping is widely used to compute the depth values of vertices as seen from the light position. Since the depth values of the shadow map are computed in world coordinates, the depth values of static objects do not need to be updated. In this paper, (1) to improve rendering speed, multiple render targets are used so that the depth values of static objects, stored only once, are separated from those of dynamic objects, which are stored every frame; and (2) to improve shadow quality in quarter-view 3D games, the light is placed close to the dynamic objects and moves along with the camera. The effectiveness of the proposed method is verified by experiments with different static and dynamic object configurations in a 3D game.
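
The static/dynamic separation described above can be illustrated with a minimal sketch (numpy pseudocode; `render_depth`, the scene lists, and the buffer resolution are hypothetical stand-ins, not the authors' engine code): the static-object depth target is filled once and cached, only the dynamic objects are re-rendered each frame, and the two targets are merged with a per-texel minimum.

```python
import numpy as np

SHADOW_MAP_SIZE = (1024, 1024)          # light-space depth buffer resolution (illustrative)

def render_depth(objects, light_view_proj):
    """Stand-in for a depth-only render pass from the light's point of view."""
    depth = np.full(SHADOW_MAP_SIZE, 1.0, dtype=np.float32)   # initialized to the far plane
    for obj in objects:
        # ... rasterize obj with light_view_proj, keeping the nearest depth per texel ...
        pass
    return depth

# Placeholder scene data; in a real engine these come from the scene graph.
static_objects, dynamic_objects, light_view_proj = [], [], np.eye(4)

# Pass 1 (once): static objects never move in world space, so their light-space
# depths are rendered a single time into their own render target and cached.
static_depth = render_depth(static_objects, light_view_proj)

def shadow_map_for_frame():
    # Pass 2 (every frame): only the dynamic objects are re-rendered.
    dynamic_depth = render_depth(dynamic_objects, light_view_proj)
    # The combined shadow map keeps the nearest occluder from either render target.
    return np.minimum(static_depth, dynamic_depth)
```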

A System for Measuring 3D Human Bodies Using the Multiple 2D Images (다중 2D 영상을 이용한 3D 인체 계측 시스템)

  • 김창우;최창석;김효숙;강인애;전준현
    • Journal of the Korean Society of Costume / v.53 no.5 / pp.1-12 / 2003
  • This paper proposes a system for measuring 3D human bodies using multiple 2D images. The system establishes a multiple-image capture environment using digital cameras for image measurement. An algorithm based on perspective projection estimates the 3D human body from multiple 2D images such as frontal, side, and rear views. The results of the image measurement are compared with those of direct measurement and a 3D scanner for a total of 40 items (12 heights, 15 widths, and 13 depths). Three persons measured the 40 items using the three measurement methods. Comparing the results across methods and subjects, the image measurement and the 3D scanner agree closely, whereas the errors of direct measurement are relatively larger. For example, the maximum differences between the image measurement and the 3D scanner are 0.41 cm in height, 0.39 cm in width, and 0.95 cm in depth, which are acceptable for body measurement. The image measurement outperforms direct measurement because the algorithm estimates 3D positions using perspective projection. Given this comparison, image measurement is expected to serve as a new method for measuring the 3D body, since it combines the advantages of direct measurement and 3D scanning in measurement performance as well as in equipment, cost, portability, and manpower.
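
The measurement rests on the basic perspective-projection relation between a pixel span and a real-world length; the snippet below is only a sketch of that relation under an assumed pinhole model with made-up numbers, not the paper's multi-view algorithm.

```python
def pixel_span_to_length_mm(pixel_span, distance_mm, focal_length_px):
    """Pinhole relation: a span of pixel_span pixels viewed from distance_mm away
    corresponds to pixel_span * distance_mm / focal_length_px millimetres."""
    return pixel_span * distance_mm / focal_length_px

# Illustrative numbers only: a 300-pixel shoulder span imaged from 2.5 m with a
# 1500-pixel focal length corresponds to roughly a 50 cm shoulder width.
print(pixel_span_to_length_mm(300, 2500, 1500))   # -> 500.0 (mm)
```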

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. Feeding a single RGB image into an HPE framework reduces computation cost by relying on depth-independent devices such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges because of the inherent characteristics of color and texture. On the other hand, depth information fed into an HPE framework to detect human body parts in 3D coordinates can help solve these challenges. However, depth-based HPE requires a depth sensor, which imposes space constraints and additional cost. In particular, the results of depth-based HPE are less reliable because of the need for pose initialization and less stable frame tracking. Therefore, this paper proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity (OKS) metric. By taking advantage of both the RGB-based and the depth-based HPE methods, our RGB-D based HPE achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both the RGB-based and the depth-based HPE.
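
For reference, the COCO Object Keypoint Similarity used for evaluation can be computed as below (a minimal sketch of the standard OKS formula; argument names are illustrative, not taken from the paper).

```python
import numpy as np

def object_keypoint_similarity(pred, gt, visibility, area, kappa):
    """COCO OKS: mean of exp(-d_i^2 / (2 * s^2 * kappa_i^2)) over labeled keypoints,
    where d_i is the pixel error of joint i, s^2 is the ground-truth object area,
    and kappa_i is the per-joint falloff constant."""
    d2 = np.sum((np.asarray(pred, float) - np.asarray(gt, float)) ** 2, axis=1)
    e = d2 / (2.0 * area * np.asarray(kappa, float) ** 2 + np.spacing(1))
    labeled = np.asarray(visibility) > 0
    return float(np.sum(np.exp(-e)[labeled]) / max(labeled.sum(), 1))
```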

MultiView-Based Hand Posture Recognition Method Based on Point Cloud

  • Xu, Wenkai;Lee, Ick-Soo;Lee, Suk-Kwan;Lu, Bo;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.7 / pp.2585-2598 / 2015
  • Hand posture recognition has played a very important role in Human-Computer Interaction (HCI) and Computer Vision (CV) for many years. The challenge arises mainly from self-occlusions caused by the limited view of the camera. In this paper, a robust hand posture recognition approach based on 3D point clouds from two RGB-D sensors (Kinect) is proposed to make maximum use of the 3D information in the depth maps. Through noise reduction and registration of the two point sets obtained from the two designed viewpoints, a multi-view hand posture point cloud carrying most of the 3D information can be acquired. Moreover, we use the accurate reconstruction and classify each point cloud by directly matching the normalized point set against the templates of the different classes in the dataset, which reduces training time and computation. Experimental results on a posture dataset captured by Kinect sensors (digits 1 to 10) demonstrate the effectiveness of the proposed method.
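
A minimal sketch of the normalize-then-match idea (numpy only; the unit-RMS normalization and the symmetric nearest-neighbour distance are assumed choices, not necessarily the exact measures used in the paper):

```python
import numpy as np

def normalize(points):
    """Center the cloud and scale it to unit RMS radius (an assumed normalization)."""
    centered = points - points.mean(axis=0)
    return centered / (np.sqrt((centered ** 2).sum(axis=1).mean()) + 1e-9)

def chamfer_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two (N,3) / (M,3) clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def classify(query_cloud, templates):
    """templates: dict mapping class label -> template cloud; returns the nearest label."""
    q = normalize(query_cloud)
    return min(templates, key=lambda label: chamfer_distance(q, normalize(templates[label])))
```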

HEVC Encoder Optimization using Depth Information (깊이정보를 이용한 HEVC의 인코더 고속화 방법)

  • Lee, Yoon Jin;Bae, Dong In;Park, Gwang Hoon
    • Journal of Broadcast Engineering / v.19 no.5 / pp.640-655 / 2014
  • Many of today's video systems have an additional depth camera to provide extra features such as 3D support. Thanks to these changes in multimedia systems, it is now much easier to obtain depth information for a video. Depth information can be used in various areas such as object classification and background-area recognition. With depth information, we can achieve even higher coding efficiency than with conventional methods alone. Thus, in this paper, we propose a 2D video coding algorithm that uses depth information on top of the next-generation 2D video codec HEVC. The background area can be recognized from the depth information, and by running HEVC with it, coding complexity can be reduced. If the current CU belongs to the background area, we apply the following three methods: 1) early termination of the CU split structure with the PU SKIP mode, 2) limiting the CU split structure using the CU information at the co-located temporal position, and 3) limiting the motion search range. We implemented our proposal in the HEVC HM 12.0 reference software. The results show that encoding complexity is reduced by more than 40% with only a 0.5% BD-bitrate loss. In particular, for video acquired with the Kinect developed by Microsoft Corp., encoding complexity is reduced by up to 53% without any loss of quality. These techniques are therefore expected to be applicable to real-time online communication, mobile or handheld video services, and so on.
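
The three background-CU relaxations can be summarized in a short decision sketch (Python pseudocode, not HM 12.0 source; the depth threshold, the search ranges, and the larger-means-farther depth convention are assumptions for illustration only):

```python
import numpy as np

MAX_CU_DEPTH = 3                       # HEVC: 64x64 CUs may split down to 8x8 (depth 3)
BACKGROUND_DEPTH_THRESHOLD = 200       # illustrative 8-bit threshold; assumed convention:
                                       # larger depth-map value = farther from the camera

def is_background(cu, depth_map):
    """Treat a CU as background when its co-sited depth-map block is far away."""
    block = depth_map[cu.y:cu.y + cu.size, cu.x:cu.x + cu.size]
    return float(np.mean(block)) >= BACKGROUND_DEPTH_THRESHOLD

def constrain_cu(cu, depth_map, colocated_cu):
    """Apply the three background-CU relaxations before the usual RD-cost search."""
    if is_background(cu, depth_map):
        cu.try_skip_first = True                          # (1) test PU SKIP and stop splitting early
        cu.max_split_depth = colocated_cu.split_depth     # (2) cap splitting by the temporal co-located CU
        cu.motion_search_range = 8                        # (3) shrink the motion search window
    else:
        cu.max_split_depth = MAX_CU_DEPTH
        cu.motion_search_range = 64
    return cu
```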

Development of a Robot arm capable of recognizing 3-D object using stereo vision

  • Kim, Sungjin;Park, Seungjun;Park, Hongphyo;Won, Sangchul
    • 제어로봇시스템학회: 학술대회논문집 (Conference Proceedings of the Institute of Control, Robotics and Systems) / 2001.10a / pp.128.6-128 / 2001
  • In this paper, we present a sensing and control methodology for a robot system designed to grasp an object and move it to a target point. A stereo vision system is employed to determine the depth map, which represents distances from the camera. In the stereo vision system we use a center-referenced projection to represent the discrete match space for stereo correspondence. This center-referenced disparity space contains new occlusion points in addition to the match points, which we exploit to create a concise representation of correspondence and occlusion. From the depth map we then find the target object's pose and position in 3D space using a model-based recognition method.
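
Once correspondences are resolved, depth follows from the usual pinhole stereo relation; the snippet below is only that textbook relation with illustrative numbers, not the center-referenced matching itself.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Textbook pinhole stereo relation Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: f = 700 px, baseline B = 0.12 m, disparity d = 35 px -> Z = 2.4 m.
print(depth_from_disparity(35, 700, 0.12))
```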


A Relative Depth Estimation Algorithm Using Focus Measure (초점정보를 이용한 패턴간의 상대적 깊이 추정알고리즘 개발)

  • Jeong, Ji-Seok;Lee, Dae-Jong;Shin, Yong-Nyuo;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.527-532 / 2013
  • Depth estimation is an essential capability for robot vision, 3D scene modeling, and motion control. Depth-from-focus methods estimate depth from focus values calculated over a series of images captured by a single camera at different distances between the lens and the object. In this paper, we propose a relative depth estimation method using focus measures. The proposed method computes a focus value for each image obtained at a different lens position, and the depth is then estimated by considering the relative distance between two patterns. We performed various experiments on effective focus measures for depth estimation using various patterns and showed their usefulness.
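
A minimal depth-from-focus sketch, assuming variance of the Laplacian as the focus measure (the paper studies which focus measures are effective; this is just one common choice): the lens position that maximizes the measure for each pattern gives the patterns' relative depth order.

```python
import cv2
import numpy as np

def focus_measure(gray):
    """Variance of the Laplacian, a common sharpness measure (one of many choices)."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_focus_position(image_stack, lens_positions):
    """Given images of the same pattern taken at different lens positions, return the
    position of best focus; comparing these positions for two patterns gives their
    relative depth order."""
    scores = [focus_measure(img) for img in image_stack]
    return lens_positions[int(np.argmax(scores))]
```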

Image Analysis for Surveillance Camera Based on 3D Depth Map (3차원 깊이 정보 기반의 감시카메라 영상 분석)

  • Lee, Subin;Seo, Yongduek
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.286-289 / 2012
  • This paper proposes a method for detecting and tracking moving people in surveillance camera footage using 3D depth information. The proposed method separates moving people from the background with a Gaussian mixture model (GMM), divides the separated regions into blobs via connected-component labeling (CCL), and tracks those blobs. When two blobs have merged during blob segmentation, we propose splitting them apart using the 3D depth information. Experiments demonstrate the results of the proposed method.
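
A minimal sketch of the GMM + CCL front end using OpenCV (parameter values are illustrative; the depth-based splitting of merged blobs is only hinted at by keeping a per-blob depth statistic):

```python
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2()            # GMM background model

def detect_blobs(frame, depth, min_area=500):
    """Foreground blobs via GMM background subtraction + connected-component labeling.
    A per-blob depth statistic is kept so that a merged blob could later be split by
    clustering its depth values (the splitting step itself is not sketched here)."""
    fg = bg_model.apply(frame)
    fg = (fg == 255).astype(np.uint8)                      # drop the shadow label (127)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    blobs = []
    for i in range(1, n):                                  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blobs.append((stats[i], float(np.median(depth[labels == i]))))
    return blobs
```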


3D Depth Estimation by Using a Single Smart Phone Camera (단일 스마트폰 카메라를 이용한 3D 거리 추정 방법)

  • Bae, Chul Kyun;Ko, Young Min;Kim, Seung Gi;Kim, Dae Jin
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.240-243 / 2018
  • With recent advances in VR (Virtual Reality) and AR (Augmented Reality), techniques for estimating the distance between the camera and objects in video or images are being actively studied. In this paper, among such distance estimation methods, we study an algorithm that estimates 3D distance by analyzing the degree of blur in images captured with a single camera. In particular, for smartphone camera images rather than DSLR cameras with expensive lenses, we study two depth-from-defocus (DFD) approaches, estimating 3D distance from a single image and from a combination of two images with different focus settings, and investigate the optimal subject range for each. Single-image estimation has the widest distance estimation range when the camera focus distance is set to 200 mm, and two-image estimation when the focus distances of the two images are set to 150 mm and 250 mm, respectively. In addition, both methods turn out to be more effective at estimating the distance of nearby objects when the focus distance is shorter.
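
The two-image variant can be sketched as a relative-blur comparison (an assumed, simplified formulation, not the paper's DFD model): wherever the near-focused shot is locally sharper than the far-focused one, the surface lies closer to the near focus plane.

```python
import cv2
import numpy as np

def local_sharpness(gray, ksize=15):
    """Local Laplacian energy as a per-pixel sharpness map (an assumed blur measure)."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return cv2.blur(np.abs(lap), (ksize, ksize))

def relative_focus_map(img_focus_near, img_focus_far):
    """Values near 1 mean the pixel is sharper in the near-focused shot, i.e. the
    surface lies closer to the near focus plane; values near 0 favor the far plane."""
    s_near = local_sharpness(img_focus_near)
    s_far = local_sharpness(img_focus_far)
    return s_near / (s_near + s_far + 1e-9)
```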


Up-Sampling Method of Depth Map Using Weighted Joint Bilateral Filter (가중치 결합 양방향 필터를 이용한 깊이 지도의 업샘플링 방법)

  • Oh, Dong-ryul;Oh, Byung Tae;Shin, Jitae
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.6 / pp.1175-1184 / 2015
  • A depth map is an image that contains 3D distance information. In general, it is difficult to acquire a high-resolution, noise-free, good-quality depth map directly from a camera. Therefore, much research has focused on obtaining high-resolution, good-quality depth maps through up-sampling and pre/post-processing of low-resolution depth maps. However, much of this work lacks effective up-sampling of edge regions, which have a large impact on perceptual image quality. In this paper, we propose an up-sampling method based on the joint bilateral filter that improves the up-sampling of edge regions and the visual quality of synthesized images by adopting different weights for edge parts, to which human perception is sensitive. The proposed method shows gains in PSNR and subjective video quality compared with previous work.
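
For context, a minimal sketch of plain joint bilateral up-sampling, the baseline that the proposed weighted filter builds on (unoptimized numpy; parameter names and values are illustrative, and the extra edge-adaptive weighting of the paper is not reproduced):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Every high-res depth value is a weighted average of nearby low-res depth samples,
    weighted by spatial distance in the low-res grid and by intensity similarity in the
    high-resolution (grayscale) guide image."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale                     # matching low-res position
            num = den = 0.0
            for qy in range(max(int(yl) - radius, 0), min(int(yl) + radius + 1, h)):
                for qx in range(max(int(xl) - radius, 0), min(int(xl) + radius + 1, w)):
                    gy = min(int(qy * scale), H - 1)           # guide pixel nearest to sample q
                    gx = min(int(qx * scale), W - 1)
                    w_s = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_s ** 2))
                    w_r = np.exp(-(float(guide_hr[y, x]) - float(guide_hr[gy, gx])) ** 2
                                 / (2 * sigma_r ** 2))
                    num += w_s * w_r * depth_lr[qy, qx]
                    den += w_s * w_r
            out[y, x] = num / den if den > 0 else depth_lr[min(int(yl), h - 1), min(int(xl), w - 1)]
    return out
```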