• Title/Summary/Keyword: Depth/Color Information

246 search results

Hand Segmentation Using Depth Information and Adaptive Threshold by Histogram Analysis with Color Clustering

  • Fayya, Rabia;Rhee, Eun Joo
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.547-555 / 2014
  • This paper presents a method for hand segmentation using depth information and an adaptive threshold obtained by histogram analysis and color clustering in the HSV color model. Based on the depth information, the hand is taken to be the region nearer to the camera than the background. The threshold for the hand color is then determined adaptively by clustering, matching the color values of the input image against the regions of the hue histogram. Experimental results demonstrate a 95% accuracy rate, confirming that the proposed method is effective for hand segmentation under variations in hand color, scale, rotation, and pose, different lighting conditions, and arbitrarily colored backgrounds.
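
The nearest-object depth cue and the adaptive hue threshold described above can be illustrated with a minimal OpenCV sketch. This is not the authors' implementation: the 100 mm depth margin and the ±10 hue band around the histogram peak are simplifying assumptions standing in for the paper's histogram-analysis and clustering steps.

```python
import cv2
import numpy as np

def segment_hand(bgr, depth_mm):
    """Rough sketch: keep the region nearest to the camera, then refine it
    with an adaptively chosen hue band (simplified assumptions)."""
    # 1) Depth cue: pixels close to the minimum valid depth are taken as the hand.
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()
    depth_mask = (valid & (depth_mm < nearest + 100)).astype(np.uint8)  # 100 mm margin (assumption)

    # 2) Color cue: pick the dominant hue inside the depth mask from the hue histogram.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    hist = cv2.calcHist([hsv], [0], depth_mask, [180], [0, 180]).ravel()
    peak = int(hist.argmax())
    lo, hi = max(0, peak - 10), min(179, peak + 10)  # adaptive band around the peak

    color_mask = ((hue >= lo) & (hue <= hi)).astype(np.uint8)
    return cv2.bitwise_and(depth_mask, color_mask) * 255
```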

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel arrangement and ToF depth sensors are used to capture the 3D scene. Although each ToF depth sensor can measure scene depth in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF depth sensors, we apply post-processing to resolve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching based on belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
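
For a rectified, parallel camera pair, warping ToF depth into initial disparity values rests on the relation d = f·B / Z. The sketch below shows only that conversion, with hypothetical focal length and baseline; the paper's warping to the color camera positions and its depth-discontinuity map are not reproduced here.

```python
import numpy as np

def depth_to_initial_disparity(depth_mm, focal_px, baseline_mm):
    """Map measured depth Z to disparity d = f * B / Z for a rectified
    parallel stereo pair (illustrative seed for stereo matching)."""
    disparity = np.zeros_like(depth_mm, dtype=np.float32)
    valid = depth_mm > 0
    disparity[valid] = focal_px * baseline_mm / depth_mm[valid]
    return disparity

# Hypothetical calibration values, for illustration only.
depth = np.array([[1000.0, 2000.0], [0.0, 4000.0]])  # mm; 0 marks an invalid measurement
print(depth_to_initial_disparity(depth, focal_px=1000.0, baseline_mm=65.0))
```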

A Recognition Method for Moving Objects Using Depth and Color Information (깊이와 색상 정보를 이용한 움직임 영역의 인식 방법)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.19 no.4 / pp.681-688 / 2016
  • In intelligent video surveillance, recognizing moving objects is an important issue. However, conventional moving object recognition methods suffer from problems such as sensitivity to lighting and difficulty distinguishing similar colors. Recognition methods for moving objects using depth information have also been studied, but their accuracy is limited because depth cameras cannot measure depth values precisely. In this paper, we propose a recognition method for moving objects that uses both depth and color information: the depth information is used to extract the areas of moving objects, and the color information is then used to correct the extracted areas. Through tests on typical videos containing moving objects, we confirmed that the proposed method extracts the areas of moving objects more accurately than methods using only one of the two types of information. The proposed method can be applied not only to CCTV but also to other fields that require the recognition of moving objects.
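
The two-stage idea of the abstract, depth for extraction and color for correction, might look roughly like the sketch below. The frame-difference motion test, the thresholds, and the mean-color correction are illustrative assumptions rather than the authors' method.

```python
import cv2
import numpy as np

def moving_region(depth_prev, depth_curr, bgr_curr, depth_thresh=50, color_thresh=40):
    """Sketch: extract candidate motion from the depth stream, then trim the
    result with color similarity to the candidate region (assumed thresholds)."""
    # 1) Depth cue: pixels whose depth changed noticeably are candidate motion.
    diff = cv2.absdiff(depth_curr, depth_prev)
    mask = (diff > depth_thresh).astype(np.uint8)

    # 2) Color cue: keep candidate pixels whose color is close to the mean
    #    color of the candidate region (a simple correction step).
    if mask.any():
        mean_color = cv2.mean(bgr_curr, mask=mask)[:3]
        dist = np.linalg.norm(bgr_curr.astype(np.float32) - np.array(mean_color, np.float32), axis=2)
        mask &= (dist < color_thresh).astype(np.uint8)
    return mask * 255
```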

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao;Ke Wang;Jinjing Zhang;Jialong Zhang;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. We therefore propose a color-image-guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features; the LF color features are not used. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition is combined with the updated HF component. After decomposing and reorganizing the recursively updated features, we combine all the depth LF features with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into a multistage depth map fusion reconstruction block, in which a cross-enhancement module is introduced to fully exploit the spatial correlation of the depth map by interleaving features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods in terms of both root mean square error and mean absolute deviation.
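
The high/low-frequency split that drives the enhancement can be pictured with a simple filtering sketch: the low-frequency part is a smoothed copy of a feature map, the high-frequency part is the residual, and only the color HF component guides the depth HF component. The Gaussian filter and the fixed fusion weight below are stand-ins for the paper's learned decomposition and enhancement modules.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_hf_lf(feat, sigma=2.0):
    """Split a 2D feature map into high- and low-frequency parts (sketch)."""
    lf = gaussian_filter(feat, sigma=sigma)
    return feat - lf, lf  # (HF, LF)

def enhance_depth(depth_feat, color_feat, alpha=0.5):
    """Guide depth HF features with color HF features only; alpha is an
    illustrative weight, not the paper's learned module."""
    d_hf, d_lf = split_hf_lf(depth_feat)
    c_hf, _ = split_hf_lf(color_feat)  # LF color features are not used
    return d_lf + d_hf + alpha * c_hf
```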

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.13-18 / 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (ToF) depth camera measures scene depth in real time, its output depth images contain noise and lens distortion, and their correlation with the multi-view color images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
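
The post-processing step mentioned above, correcting lens distortion and suppressing noise in the ToF depth image, could be sketched as follows. The intrinsics and distortion coefficients are placeholders, and the median filter stands in for whatever noise reduction the authors actually used.

```python
import cv2
import numpy as np

# Hypothetical depth-camera intrinsics and distortion coefficients.
K = np.array([[570.0, 0.0, 320.0],
              [0.0, 570.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.array([-0.2, 0.1, 0.0, 0.0, 0.0])

def correct_depth(depth_mm):
    """Undistort the ToF depth image and suppress noise (illustrative)."""
    undistorted = cv2.undistort(depth_mm.astype(np.float32), K, dist)
    return cv2.medianBlur(undistorted, 5)
```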

Face Detection Algorithm using Kinect-based Skin Color and Depth Information for Multiple Faces Detection (Kinect 디바이스에서 피부색과 깊이 정보를 융합한 여러 명의 얼굴 검출 알고리즘)

  • Yun, Young-Ji;Chien, Sung-Il
    • The Journal of the Korea Contents Association / v.17 no.1 / pp.137-144 / 2017
  • Face detection remains a challenging task under severe face pose variations in complex backgrounds. This paper proposes an effective algorithm that detects single or multiple faces based on skin color detection and depth information. We introduce a Gaussian mixture model (GMM) for skin color detection in a color image. The depth information, obtained from the three-dimensional depth sensor of the Kinect V2 device, is useful for segmenting the human body from the background. A labeling process then removes non-face regions using several features. Experimental results show that the proposed face detection algorithm provides robust detection performance even under variable conditions and complex backgrounds.
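
A GMM skin model combined with a depth-derived body mask might be assembled roughly as below. The scikit-learn mixture, the number of components, and the log-likelihood threshold are assumptions; the paper's labeling of non-face regions is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_gmm(skin_pixels_rgb, n_components=4):
    """Fit a GMM to known skin pixels of shape (N, 3) (sketch)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    return gmm.fit(skin_pixels_rgb.reshape(-1, 3))

def skin_mask(image_rgb, body_mask, gmm, log_thresh=-15.0):
    """Score pixels with the skin GMM, restricted to the depth-segmented body."""
    h, w, _ = image_rgb.shape
    scores = gmm.score_samples(image_rgb.reshape(-1, 3).astype(np.float64)).reshape(h, w)
    return (body_mask > 0) & (scores > log_thresh)
```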

Performance Improvement of Camshift Tracking Algorithm Using Depth Information (Depth 정보를 이용한 CamShift 추적 알고리즘의 성능 개선)

  • Joo, Seong-UK;Choi, Han-Go
    • Journal of the Institute of Convergence Signal Processing / v.18 no.2 / pp.68-75 / 2017
  • This study deals with color-based tracking of a moving object whose color is the same as or similar to that of the background. The CamShift algorithm, the representative color-based tracking method, becomes unstable when the color of the moving object also appears in the background. To overcome this drawback, this paper proposes a CamShift algorithm merged with depth information of the object. The depth information is obtained from a Kinect device, which measures the distance of every pixel in the image. Experimental results show that the proposed method compensates for the unstable tracking of the existing CamShift algorithm and achieves improved tracking performance compared with CamShift alone.
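
One way to picture the fusion is to mask the hue back-projection with a depth band around the target before CamShift runs, so background pixels of a similar color stop attracting the search window. The 200 mm band and the per-frame structure below are illustrative assumptions, not the paper's exact algorithm.

```python
import cv2
import numpy as np

def camshift_depth_step(hsv, depth_mm, roi_hist, track_window, target_depth):
    """One tracking step of CamShift with a depth-band mask (sketch)."""
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

    # Keep only pixels within +/- 200 mm of the target's depth (assumption).
    depth_band = cv2.inRange(depth_mm, target_depth - 200, target_depth + 200)
    backproj = cv2.bitwise_and(backproj, backproj, mask=depth_band)

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, criteria)
    return rot_rect, track_window
```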


Human Skin Region Detection Utilizing Depth Information (깊이 정보를 활용한 사람의 피부영역 검출)

  • Jang, Seok-Woo;Park, Young-Jae;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.6 / pp.29-36 / 2012
  • In this paper, we suggest a new method for detecting human skin-color regions from three-dimensional static or dynamic stereoscopic images by effectively integrating depth and color features. The suggested method first extracts depth information, which represents the distance between the camera and an object, from the input left and right stereoscopic images through a stereo matching technique. It then labels pixels with similar depth features and determines the labeled regions having human skin color to be actual skin-color regions. Our experimental results show that the suggested skin region extraction method outperforms existing skin detection methods in terms of skin-color region extraction accuracy.
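
The order of operations described above, disparity from stereo matching followed by labeling of depth-similar regions, can be sketched with OpenCV's block matcher and connected-component labeling; a per-region skin-color test would then run on each label. The band count and matcher parameters are assumptions.

```python
import cv2
import numpy as np

def depth_similar_labels(left_gray, right_gray, n_bands=8):
    """Compute disparity, then label connected regions of similar disparity
    (a simplified stand-in for the paper's depth-based labeling)."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    labels = np.zeros(disp.shape, np.int32)
    next_label = 1
    band_edges = np.linspace(1, 64, n_bands + 1)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = ((disp >= lo) & (disp < hi)).astype(np.uint8)
        n, comp = cv2.connectedComponents(mask)
        labels[comp > 0] = comp[comp > 0] + next_label - 1
        next_label += n - 1
    return disp, labels
```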

View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of Multimedia Information System / v.1 no.1 / pp.1-10 / 2014
  • In this paper, we propose a new view synthesis technique for coding multi-view color and depth data in arbitrary camera arrangements. We treat each camera position as a 3D point in world coordinates and build clusters of those vertices. Color and depth data within a cluster are gathered at one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range. We can therefore enhance the visual quality of the views reconstructed from multiple LDIs compared with those from a single LDI. Experimental results show that the proposed scheme achieves better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality.
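
The first step, treating camera centers as 3D points and grouping them so that each cluster gets one reference camera for its LDI, could be sketched with an ordinary clustering routine. The camera positions, the number of clusters, and the choice of the camera closest to each centroid as the reference are all hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Eight hypothetical camera centers on a line, grouped into three clusters.
cameras = np.array([[x, 0.0, 0.0] for x in np.linspace(0.0, 2.0, 8)])
kmeans = KMeans(n_clusters=3, n_init=10).fit(cameras)

for c in range(3):
    members = np.where(kmeans.labels_ == c)[0]
    centroid = kmeans.cluster_centers_[c]
    # Pick the member camera closest to the centroid as the LDI reference view.
    reference = members[np.argmin(np.linalg.norm(cameras[members] - centroid, axis=1))]
    print(f"cluster {c}: cameras {members.tolist()}, reference camera {reference}")
```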


Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1121-1141 / 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker to track objects for immersive audio mixing. Our algorithm addresses the occlusion and large-movement problems of the CNN-based GOTURN generic object tracker. The key idea is the offline training of a binary classifier on the color histogram similarity values estimated by both trackers, which selects the appropriate tracker for the target and updates both trackers with the predicted bounding box position of the target to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with one of the prominent unsupervised monocular depth estimation algorithms to obtain the 3D position needed to mix the immersive audio onto that object. The proposed algorithm demonstrates about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Our tracker can also track multiple objects by applying the single object tracker to each target, although it has not been evaluated on any MOT benchmark.
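
The tracker-selection idea, comparing the color histogram of each tracker's predicted box with the target model, can be reduced to the sketch below. The paper learns this decision with an offline-trained binary classifier; the direct similarity comparison here, the hue-only histogram, and the (x, y, w, h) box format are simplifying assumptions.

```python
import cv2
import numpy as np

def hue_hist(bgr, box):
    """Normalized hue histogram of a bounding box, box = (x, y, w, h)."""
    x, y, w, h = box
    hsv = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def select_box(frame, target_hist, cnn_box, hist_box):
    """Keep the prediction whose histogram is most similar to the target model."""
    sims = [cv2.compareHist(target_hist, hue_hist(frame, b), cv2.HISTCMP_CORREL)
            for b in (cnn_box, hist_box)]
    return (cnn_box, hist_box)[int(np.argmax(sims))]
```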