• Title/Summary/Keyword: 3D Depth Camera

Depth Image Poselets via Body Part-based Pose and Gesture Recognition (신체 부분 포즈를 이용한 깊이 영상 포즈렛과 제스처 인식)

  • Park, Jae Wan;Lee, Chil Woo
    • Smart Media Journal, v.5 no.2, pp.15-23, 2016
  • In this paper, we propose depth poselets based on body-part poses and a method for recognizing gestures with them. Since a gesture is composed of a sequence of poses, gesture recognition requires reliable estimation of the pose over time. Because of distortion and the high degree of freedom of the human body, it is difficult to recognize the full-body pose correctly, so we use partial poses to obtain pose features without relying on the full-body pose. We define 16 gestures and generate depth training images based on these definitions. A depth poselet, as proposed here, consists of the depth image of a body part and the principal three-dimensional coordinates of that part. In the training stage, the defined gestures are captured with a depth camera, depth poselets are generated from the extracted 3D joint coordinates, and part-gesture HMMs are constructed from them. In the testing stage, the foreground of the input depth image is extracted, the body parts are located by matching against the depth poselets, and the part gestures obtained by applying the HMMs are combined to recognize the gesture. The HMM-based approach recognizes the gestures efficiently, with a recognition rate of about 89%.
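A minimal sketch of the HMM classification step described in this abstract, assuming the hmmlearn library and that each training sequence is a (T, D) array of 3D joint coordinates for one body part; this is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn import hmm

def train_part_gesture_hmms(sequences_by_gesture, n_states=5):
    """Fit one Gaussian HMM per defined gesture (16 gestures in the paper)."""
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)                      # stack all frames
        lengths = [len(s) for s in seqs]         # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=30, random_state=0)
        m.fit(X, lengths)
        models[gesture] = m
    return models

def recognize(models, test_seq):
    """Pick the gesture whose HMM gives the highest log-likelihood."""
    scores = {g: m.score(test_seq) for g, m in models.items()}
    return max(scores, key=scores.get)
```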

A Study on Intuitive IoT Interface System using 3D Depth Camera (3D 깊이 카메라를 활용한 직관적인 사물인터넷 인터페이스 시스템에 관한 연구)

  • Park, Jongsub;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies, v.22 no.2, pp.137-152, 2017
  • The decline in the price of IT devices and the development of the Internet have created a new field called the Internet of Things (IoT). IoT, which creates new services by connecting everyday objects to the Internet, is opening up new forms of business, particularly in combination with Big Data, and its potential applications are nearly unlimited. Standardization bodies are also working actively on the smooth interconnection of IoT devices. However, one aspect is easily overlooked: to control an IoT device or acquire information from it, interworking issues (IP address, Wi-Fi, Bluetooth, NFC, etc.) must be resolved and a dedicated application or app must be developed separately. Existing approaches to this problem use augmented reality based on GPS or markers, but they have drawbacks: a separate marker is required and is recognized only at close range, and GPS-based methods using a 2D camera cannot measure the distance to the target device, which makes an active interface difficult to implement. In this study, we use a 3D depth camera mounted on a smartphone and compute spatial coordinates automatically, without a separate marker, by combining its distance measurements with the phone's sensor information. A coordinate lookup then identifies the IoT device and enables information acquisition and control of that device. From the user's point of view, this reduces the burden of device interworking and app installation. Furthermore, if this technology is applied to public services and smart glasses, it can reduce duplicated investment in software development and expand public services.
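An illustrative sketch of the coordinate computation and device lookup described above, under assumed conventions (east-north-up coordinates, azimuth measured from north); the function names, registry structure, and tolerance are hypothetical, not from the paper.

```python
import math

def target_coordinates(phone_pos, azimuth_deg, elevation_deg, distance_m):
    """phone_pos: (east, north, up) in meters; distance from the depth camera."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    dx = distance_m * math.cos(el) * math.sin(az)   # east offset
    dy = distance_m * math.cos(el) * math.cos(az)   # north offset
    dz = distance_m * math.sin(el)                  # up offset
    return (phone_pos[0] + dx, phone_pos[1] + dy, phone_pos[2] + dz)

def find_device(registry, point, tolerance_m=0.5):
    """registry: {device_id: (x, y, z)}; return the closest device within range."""
    best, best_d = None, tolerance_m
    for dev, pos in registry.items():
        d = math.dist(pos, point)
        if d <= best_d:
            best, best_d = dev, d
    return best
```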

The Development of Multi-view point Image Interpolation Method Using Real-image

  • Yang, Kwang-Won;Park, Young-Bin;Huh, Kyung-Bin
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings, 2001.10a, pp.129.1-129, 2001
  • In this paper, we present an approach for matching images by finding interest points and applying a new image interpolation algorithm. The algorithms automatically align the input images, match them, and reconstruct 3-D surfaces. The interpolation algorithm is designed to cope with simple shapes: it generates an image rotated about the vertical axis by an arbitrary angle from four base images, each captured with a CCD camera at 90° intervals. The proposed algorithm uses geometric analysis of the images together with depth information.
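A very rough stand-in for the idea in Python: it only blends the two neighbouring base views by angular weight, whereas the paper's algorithm uses geometric analysis and depth information; the blending rule and names are illustrative assumptions.

```python
import numpy as np

def interpolate_view(base_images, angle_deg):
    """base_images: list of 4 equal-sized arrays taken at 0, 90, 180, 270 degrees."""
    angle = angle_deg % 360.0
    i = int(angle // 90) % 4            # lower neighbouring base view
    j = (i + 1) % 4                     # upper neighbouring base view
    w = (angle - 90.0 * i) / 90.0       # blend weight toward view j
    a = base_images[i].astype(np.float32)
    b = base_images[j].astype(np.float32)
    return ((1.0 - w) * a + w * b).astype(base_images[i].dtype)
```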

3D Display Method for Moving Viewers (움직이는 관찰자용 3차원 디스플레이 방법)

  • Heo, Gyeong-Mu;Kim, Myeong-Sin
    • Journal of the Institute of Electronics Engineers of Korea CI, v.37 no.4, pp.37-45, 2000
  • In this paper, we suggest a method for detecting the positions of the two eyes of a moving viewer from images obtained with a color CCD camera, together with a method for rendering a view-dependent 3D image that consists of depth estimation, image-based 3D object modeling, and a stereoscopic display process. In experiments, the proposed methods located the two eyes with a success rate of 97.5% in a processing time of 0.39 seconds on a personal computer, and displayed a view-dependent 3D image using an F16 flight model. By measuring the similarity between the stereo image rendered in the z-buffer with Open Inventor and the image captured by a robot-mounted stereo camera, we confirmed that the view-dependent 3D picture obtained by the proposed method is well matched to the viewer.
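As a rough illustration of the view-dependent step only (eye detection, depth estimation, and modeling are omitted), the sketch below maps detected eye pixel positions to virtual camera positions under an assumed pinhole geometry; the field of view, viewing distance, and function names are hypothetical.

```python
import math

def eye_to_world(eye_px, image_size, fov_deg, viewer_distance_m):
    """Approximate one eye's offset from the screen centre, assuming a
    pinhole camera aligned with the display."""
    w, h = image_size
    fx = (w / 2) / math.tan(math.radians(fov_deg) / 2)   # focal length in pixels
    x = (eye_px[0] - w / 2) * viewer_distance_m / fx
    y = (eye_px[1] - h / 2) * viewer_distance_m / fx
    return (x, y, viewer_distance_m)

def stereo_cameras(left_eye_px, right_eye_px, image_size, fov_deg, dist_m):
    """Return one virtual camera position per eye for view-dependent rendering."""
    return (eye_to_world(left_eye_px, image_size, fov_deg, dist_m),
            eye_to_world(right_eye_px, image_size, fov_deg, dist_m))
```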

Three-Dimensional Automatic Target Recognition System Based on Optical Integral Imaging Reconstruction

  • Lee, Min-Chul;Inoue, Kotaro;Cho, Myungjin
    • Journal of information and communication convergence engineering, v.14 no.1, pp.51-56, 2016
  • In this paper, we present a three-dimensional (3-D) automatic target recognition system based on optical integral imaging reconstruction. In integral imaging, elemental images of the reference and target 3-D objects are obtained through a lenslet array or a camera array. Reconstructed 3-D images at various reconstruction depths can then be generated optically on the output plane by back-projecting these elemental images onto a display panel. 3-D automatic target recognition can be implemented with computational integral imaging reconstruction and digital nonlinear correlation filters, but these methods require non-trivial computation time for reconstruction and recognition. Instead, we implement 3-D automatic target recognition using optical cross-correlation between the reconstructed 3-D reference and target images at the same reconstruction depth. Our method relies on an all-optical structure to realize a real-time 3-D automatic target recognition system, and a nonlinear correlation filter is used to improve recognition performance. To validate the proposed method, we carry out optical experiments and report the recognition results.
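A numpy stand-in for the nonlinear correlation idea mentioned above (a kth-law nonlinear cross-correlation between a reconstructed reference image and a target image); the paper realizes the correlation optically, so this digital sketch and the parameter k are illustrative assumptions only.

```python
import numpy as np

def nonlinear_correlation(reference, target, k=0.3):
    """kth-law nonlinear cross-correlation of two equal-sized grayscale images."""
    R = np.fft.fft2(reference)
    T = np.fft.fft2(target)
    cross = R * np.conj(T)
    # apply the nonlinearity to the magnitude, keep the phase
    nl = (np.abs(cross) ** k) * np.exp(1j * np.angle(cross))
    c = np.fft.fftshift(np.fft.ifft2(nl))
    return np.abs(c)

# A sharp, high peak in the correlation plane indicates that the target
# matches the reference at that reconstruction depth.
```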

3D Particle Image Detection by Using Color Encoded Illumination System

  • Kawahashi M.;Hirahara H.
    • Korean Society of Visualization: Conference Proceedings, 2001.12a, pp.100-107, 2001
  • A simple new technique for measuring the depth position of particles, applicable to three-dimensional velocity measurement of fluid flows, is proposed. A two-color illumination system whose intensity is encoded as a function of the z-coordinate is introduced. A calibration procedure is described, and as a preliminary test the profile of a small sphere is measured with the present method. The method is then applied to three-dimensional velocity-field measurement of simple flows seeded with tracer particles. The motion of the particles is recorded with a color 3CCD camera: the particle position in the image plane is read directly from the recorded image, and the depth of each particle is obtained from the intensity ratio of the two encoded illumination colors, from which the three-dimensional velocity components are reconstructed. Although the results contain some error, the feasibility of the present technique for three-dimensional velocity measurement was confirmed.
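An illustrative calibration-and-lookup sketch of the intensity-ratio idea, assuming a linear encoding of the ratio along z; the exact encoding, color channels, and calibration procedure in the paper may differ.

```python
import numpy as np

def fit_ratio_to_depth(calib_z, calib_ratio):
    """Fit ratio -> z from calibration points (e.g. a traversed small sphere)."""
    a, b = np.polyfit(calib_ratio, calib_z, 1)        # linear model z = a*r + b
    return lambda ratio: a * np.asarray(ratio) + b

def particle_depth(red_intensity, green_intensity, ratio_to_depth):
    """Depth of a detected particle from its two-colour intensity ratio."""
    ratio = red_intensity / (green_intensity + 1e-9)  # avoid division by zero
    return ratio_to_depth(ratio)
```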

Distance measurement system compensated parameters for extraction of 3D distance (원거리 물체의 3차원거리 측정시의 파라미터 보정된 거리측정시스템)

  • Kim, Jeong-Man;Kim, Young-Min;Kim, Won-Sup;Hwang, Jong-Sun
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference, 2005.07a, pp.605-606, 2005
  • A depth-error correction effect for maladjusted stereo cameras, obtained by calibrating the pixel-distance parameter, is presented. Intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty lies in aligning the cameras for a parallel installation, that is, placing the two CCD arrays in a single plane. If the pixel-distance parameter, one of the intrinsic parameters, is calibrated with known points, such errors can be compensated to a considerable extent. This compensation effect with the calibrated pixel-distance parameter is demonstrated with various experimental results.
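A small worked example of why the pixel-distance (pixel-pitch) parameter matters for stereo depth, using the standard relation Z = fB / (d * pitch); the numbers below are illustrative, not taken from the paper.

```python
def depth_from_disparity(focal_mm, baseline_mm, disparity_px, pixel_pitch_mm):
    # Z = f * B / (d * pixel_pitch), all lengths in millimetres
    return focal_mm * baseline_mm / (disparity_px * pixel_pitch_mm)

nominal    = depth_from_disparity(8.0, 120.0, 10.0, 0.0100)  # assumed pitch
calibrated = depth_from_disparity(8.0, 120.0, 10.0, 0.0098)  # calibrated pitch
print(nominal, calibrated)  # a 2% pitch error shifts the estimated depth by about 2%
```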

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun;Song, Jae-Bok;Kang, Sin-Cheon
    • The Journal of Korea Robotics Society, v.3 no.4, pp.263-269, 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme using a stereo camera for robust estimation of 6-DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera generate outliers that deteriorate the estimation; these outliers are removed by analyzing the magnitude histogram of the motion vectors of corresponding features and by applying the RANSAC algorithm. Features extracted from a dynamic object such as a human also make the motion estimate inaccurate. To eliminate the effect of dynamic objects, candidate dynamic objects are generated by clustering the 3D positions of features, and each candidate is checked, based on the standard deviation of its features, as to whether it is a real dynamic object. The accuracy and practicality of the proposed scheme are verified by several experiments and by comparison with both IMU- and wheel-based odometry. The proposed scheme is shown to work well when wheel slip occurs or dynamic objects are present.
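A sketch of the magnitude-histogram outlier filter mentioned above, with an assumed bin width and keep ratio; the RANSAC pose estimation and dynamic-object clustering steps are omitted, and the details here are not the authors' exact implementation.

```python
import numpy as np

def filter_by_magnitude_histogram(prev_pts, curr_pts, bin_width=2.0, keep_ratio=0.7):
    """Keep correspondences whose motion-vector length falls in the densest bins."""
    mags = np.linalg.norm(curr_pts - prev_pts, axis=1)
    edges = np.arange(0.0, mags.max() + bin_width, bin_width)
    hist, edges = np.histogram(mags, bins=edges)
    order = np.argsort(hist)[::-1]                  # densest bins first
    keep_bins, covered = [], 0
    for b in order:
        keep_bins.append(b)
        covered += hist[b]
        if covered >= keep_ratio * len(mags):
            break
    idx = np.clip(np.digitize(mags, edges) - 1, 0, len(hist) - 1)
    mask = np.isin(idx, keep_bins)
    return prev_pts[mask], curr_pts[mask], mask
```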

A New Depth and Disparity Visualization Algorithm for Stereoscopic Camera Rig

  • Ramesh, Rohit;Shin, Heung-Sub;Jeong, Shin-Il;Chung, Wan-Young
    • Journal of information and communication convergence engineering, v.8 no.6, pp.645-650, 2010
  • In this paper, we present the effect of binocular cues, which play a crucial role in the visualization of a stereoscopic or 3D image. The study is useful for extracting depth and disparity information with image processing techniques. A linear relation between the object distance and the image distance is presented to discuss the cause of cybersickness. In the experimental results, a three-dimensional view of the depth map between the 2D images is shown. A median filter is used to reduce the noise in the disparity-map image; afterwards, two filters, a Gabor filter and a Canny filter, are tested for visualizing the disparity between the two images. The Gabor filter estimates the disparity through texture extraction and discrimination of the two images, while the Canny filter visualizes the disparity by detecting edges in the two color images obtained from the stereoscopic cameras. The Canny filter is the better choice for estimating the disparity because it is far more efficient at detecting edges: it converts the color images directly into color edges without first converting them to grayscale, so clearer edges of the stereo images are obtained than with the Gabor filter. Since the main goal of the research is to estimate the horizontal disparity of all possible regions or edges of the images, the Canny filter is proposed for a decipherable visualization of the disparity.
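A hedged OpenCV sketch of the edge-based disparity visualization: Canny is run per color channel (so no grayscale conversion step) and the left/right edge maps are overlaid so the horizontal shift is visible. This is one interpretation of the described approach, not the authors' code, and the thresholds are assumptions.

```python
import cv2
import numpy as np

def color_edges(img_bgr, low=50, high=150):
    """Per-channel Canny, recombined into a colour edge image."""
    channels = [cv2.Canny(img_bgr[:, :, c], low, high) for c in range(3)]
    return cv2.merge(channels)

def disparity_overlay(left_bgr, right_bgr):
    """Left edges in red, right edges in green; the offset shows the disparity."""
    le = cv2.cvtColor(color_edges(left_bgr), cv2.COLOR_BGR2GRAY)
    re = cv2.cvtColor(color_edges(right_bgr), cv2.COLOR_BGR2GRAY)
    overlay = np.zeros_like(left_bgr)
    overlay[:, :, 2] = le   # red channel (BGR order)
    overlay[:, :, 1] = re   # green channel
    return overlay
```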

Real-time Gaussian Hole-Filling Algorithm using Reverse-Depth Image (반전된 Depth 영상을 이용한 실시간 Gaussian Hole-Filling Algorithm)

  • Ahn, Yang-Keun;Hong, Ji-Man
    • Journal of the Korea Society of Computer and Information, v.17 no.7, pp.53-65, 2012
  • The existing method of creating a stereoscopic image captures left and right viewpoint images of a single object with two lenses separated by a fixed distance. For a 3D TV based on stereoscopic cameras, however, the two left and right viewpoint images must be transmitted simultaneously, which increases the required bandwidth, and various more effective alternatives are under discussion. Among them, DIBR (Depth Image Based Rendering) creates the left and right viewpoint images from a single image and its depth information, thereby reducing the transmitted bandwidth; various algorithms for creating DIBR images of static scenes have therefore been studied. In this paper, we propose a Gaussian hole-filling method that uses a reversed depth image to fill holes naturally while minimizing distortion of the background. In addition, we analyze the effectiveness of each algorithm by comparing and measuring their performance.
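A rough sketch of Gaussian hole filling after DIBR warping, under the assumption that the reversed depth image is used to favor background pixels when filling; this is an interpretation of the abstract, not the paper's exact algorithm, and the kernel size and threshold are illustrative.

```python
import cv2
import numpy as np

def fill_holes(warped_bgr, hole_mask, depth, ksize=21, depth_thresh=128):
    """hole_mask: uint8, 255 where the warped view has no pixel.
    depth: 8-bit depth map aligned with the warped view (255 = near)."""
    background = (depth < depth_thresh) & (hole_mask == 0)      # valid far pixels
    weights = background.astype(np.float32)
    src = warped_bgr.astype(np.float32) * weights[..., None]
    # normalized Gaussian convolution restricted to background pixels
    blurred = cv2.GaussianBlur(src, (ksize, ksize), 0)
    norm = cv2.GaussianBlur(weights, (ksize, ksize), 0)[..., None] + 1e-6
    fill = (blurred / norm).astype(np.uint8)
    out = warped_bgr.copy()
    out[hole_mask == 255] = fill[hole_mask == 255]
    return out
```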