• Title/Summary/Keyword: depth information


A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia; Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.3, pp.1390-1403, 2016
  • Facial expression recognition (FER) plays a significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, since it provides rich information about people's emotions. For video-based FER, depth cameras can be better candidates than RGB cameras: a person's identity cannot easily be recovered from distance-based depth video, so depth cameras also mitigate the privacy issues that arise with RGB faces. A good FER system relies heavily on both robust feature extraction and the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, Local Binary Pattern (LBP) features are extracted from the time-sequential depth faces; these are then projected by Generalized Discriminant Analysis (GDA) to make them more robust, and finally the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based approach is compared with conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and achieves better recognition rates.
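
A minimal sketch of the first stage described above: per-frame LBP histogram features from a sequence of depth faces, using scikit-image's local_binary_pattern. The GDA projection and HMM classification stages of the paper are not reproduced, and the radius/point settings are illustrative assumptions.

```python
# Sketch: uniform LBP histogram features from a sequence of depth frames.
# Only the feature-extraction stage is shown; GDA and HMMs are omitted.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(depth_frame, radius=1, n_points=8):
    """Uniform LBP histogram of a single depth face image."""
    lbp = local_binary_pattern(depth_frame, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def sequence_features(depth_frames):
    """Stack per-frame LBP histograms into an observation sequence (T x D)."""
    return np.vstack([lbp_histogram(f) for f in depth_frames])

# Example with synthetic depth frames (30 frames of 64x64 depth faces, in mm).
frames = (np.random.rand(30, 64, 64) * 4000).astype(np.uint16)
obs = sequence_features(frames)
print(obs.shape)   # (30, 10)
```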

Scalable Coding of Depth Images with Synthesis-Guided Edge Detection

  • Zhao, Lijun; Wang, Anhong; Zeng, Bing; Jin, Jian
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.10, pp.4108-4125, 2015
  • This paper presents a scalable coding method for depth images that takes into account the quality of the images synthesized at virtual views. First, we design a new edge detection algorithm based on the depth difference between two neighboring pixels within the depth map. By choosing a set of thresholds, this algorithm generates a scalable bit stream that places larger depth differences first, followed by smaller ones. A scalable scheme is also designed for coding the depth pixels through a layered sampling structure. At the receiver side, the full-resolution depth image is reconstructed from the received bits by solving a partial differential equation (PDE). Experimental results show that the proposed method improves the rate-distortion performance of the synthesized images at virtual views and achieves better visual quality.
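
As a rough illustration of the threshold-layered edge idea above, the sketch below computes neighbor depth differences and peels off edge layers from the largest threshold down. The threshold values are illustrative, and the PDE-based reconstruction at the decoder is not shown.

```python
# Sketch: threshold-layered edge detection on a depth map, standing in for the
# paper's synthesis-guided scalable edge coding. Thresholds are illustrative.
import numpy as np

def depth_difference_edges(depth, thresholds=(64, 32, 16, 8)):
    """Return one boolean edge layer per threshold, largest differences first."""
    d = depth.astype(np.int32)
    # Depth difference between each pixel and its left / upper neighbour.
    diff_h = np.abs(np.diff(d, axis=1, prepend=d[:, :1]))
    diff_v = np.abs(np.diff(d, axis=0, prepend=d[:1, :]))
    diff = np.maximum(diff_h, diff_v)

    layers, covered = [], np.zeros_like(diff, dtype=bool)
    for t in sorted(thresholds, reverse=True):     # big jumps first -> base layer
        layer = (diff >= t) & ~covered             # only newly revealed edges
        covered |= layer
        layers.append(layer)
    return layers

depth = (np.random.rand(96, 128) * 255).astype(np.uint8)
print([int(layer.sum()) for layer in depth_difference_edges(depth)])
```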

Smoke Detection Based on RGB-Depth Camera in Interior (RGB-Depth 카메라 기반의 실내 연기검출)

  • Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences, v.9 no.2, pp.155-160, 2014
  • In this paper, an algorithm using an RGB-depth camera is proposed to detect smoke indoors. The RGB-depth camera, the Kinect, provides an RGB color image together with depth information. The Kinect sensor consists of an infrared laser emitter, an infrared camera, and an RGB camera. A specific speckle pattern radiated from the laser source is projected onto the scene; this pattern is captured by the infrared camera and analyzed to obtain depth information. The distance of each speckle in the pattern is measured and the depth of the object is estimated. When the depth of an object changes rapidly, the Kinect cannot determine the depth of that surface. The depth of smoke likewise cannot be determined, because the density of smoke fluctuates and the intensity of the infrared image varies from pixel to pixel. Exploiting this characteristic of the Kinect, the proposed algorithm takes the regions where depth cannot be determined as candidate smoke regions. If the intensity of a candidate region in the color image is larger than a threshold, the region is confirmed as a smoke region. Simulation results show that the proposed method is effective for detecting smoke indoors.
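
A minimal sketch of the two-stage rule described above: pixels whose Kinect depth is undetermined (assumed here to read 0) form smoke candidates, and candidates whose gray intensity exceeds a threshold are kept. The threshold and minimum-area values are illustrative assumptions.

```python
# Sketch: smoke candidates = pixels with undetermined depth; confirm by intensity.
import numpy as np
import cv2

def detect_smoke(color_bgr, depth_mm, intensity_threshold=140, min_area=500):
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    candidate = depth_mm == 0                       # depth could not be measured
    smoke = candidate & (gray > intensity_threshold)

    # Keep only reasonably large connected regions to suppress speckle noise.
    mask = smoke.astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n):                           # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep |= labels == i
    return keep

color = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.zeros((480, 640), dtype=np.uint16)
print(detect_smoke(color, depth).sum())
```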

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul; Kang, Won-Young; Jeong, Yeong-Hu; Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.10, pp.920-927, 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike general chroma keying, image composition for stereoscopic 3D display requires a natural composition method in 3D space. This paper attempts to composite images in 3D space using a depth keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was converted into a point cloud in 3D space after being separated from its background according to the depth information. Finally, the object was composited with a 3D virtual background, and stereoscopic 3D images were obtained and played back using a virtual camera.
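
A minimal sketch of the depth-keying step described above: cut out the foreground by a depth threshold and composite it over a new background. The DSLR-Kinect calibration, mesh model, and stereoscopic rendering are omitted, and the cut-off distance is an illustrative value.

```python
# Sketch: depth keying, i.e. replacing everything beyond a cut-off distance
# with a virtual background.
import numpy as np

def depth_key(color, depth_mm, background, max_distance_mm=1500):
    """Keep pixels closer than max_distance_mm; fill the rest from background."""
    fg = (depth_mm > 0) & (depth_mm < max_distance_mm)   # 0 = invalid depth
    out = background.copy()
    out[fg] = color[fg]
    return out

color = np.full((480, 640, 3), 200, dtype=np.uint8)
depth = np.full((480, 640), 1200, dtype=np.uint16)        # subject at 1.2 m
virtual_bg = np.zeros_like(color)
print(depth_key(color, depth, virtual_bg).mean())
```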

Performance Improvement of Camshift Tracking Algorithm Using Depth Information (Depth 정보를 이용한 CamShift 추적 알고리즘의 성능 개선)

  • Joo, Seong-UK; Choi, Han-Go
    • Journal of the Institute of Convergence Signal Processing, v.18 no.2, pp.68-75, 2017
  • This study deals with color-based tracking of a moving object in the case where the object's color is the same as, or similar to, that of the background. The CamShift algorithm, the representative color-based tracking method, becomes unstable when the moving object's color is also present in the background. To overcome this drawback, this paper proposes a CamShift algorithm merged with depth information about the object. The depth information is obtained from a Kinect device, which measures the distance of every pixel in the image. Experimental results show that the proposed method, CamShift merged with the depth of the tracked object, compensates for the instability of the original CamShift algorithm and achieves improved tracking performance compared with CamShift alone.

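As a rough illustration of how depth can be merged into CamShift, the sketch below gates OpenCV's hue back-projection with a depth window around the target's last known distance before calling cv2.CamShift, so background pixels of a similar color are suppressed. The abstract does not specify the fusion rule, so the gating scheme and parameter values here are assumptions.

```python
# Sketch: depth-gated CamShift step for one frame.
import numpy as np
import cv2

def depth_gated_camshift(frame_bgr, depth_mm, roi_hist, track_window,
                         target_depth_mm, depth_tol_mm=300):
    # roi_hist: hue histogram of the initial ROI (cv2.calcHist, normalized to 0..255).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

    # Suppress pixels whose depth is far from the tracked object's depth.
    gate = np.abs(depth_mm.astype(np.int32) - target_depth_mm) < depth_tol_mm
    backproj[~gate] = 0

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, criteria)

    # Update the target depth from the new window's median valid depth.
    x, y, w, h = track_window
    win_depth = depth_mm[y:y + h, x:x + w]
    valid = win_depth[win_depth > 0]
    if valid.size:
        target_depth_mm = int(np.median(valid))
    return rot_rect, track_window, target_depth_mm
```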

Volumetric Visualization using Depth Information of Stereo Images (스테레오 영상에서의 깊이정보를 이용한 3차원 입체화)

  • Lee, S.J.; Kim, J.H.; Lee, J.W.; Ahn, J.S.; Kim, H.S.; Lee, M.H.
    • Proceedings of the KIEE Conference, 1999.11c, pp.839-841, 1999
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, feature-point-based stereo matching is performed to obtain depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) algorithm. The final result helps the viewer understand the depth information visually.

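A sketch of the feature-point stereo step above, assuming rectified images with known focal length and baseline. The paper does not name its feature detector, so ORB with brute-force matching is an illustrative choice here; camera modeling and the NURBS surface fitting are not shown.

```python
# Sketch: sparse depth from feature-point stereo matching on rectified images.
import numpy as np
import cv2

def sparse_stereo_depth(left_gray, right_gray, f_px, baseline_mm):
    orb = cv2.ORB_create(500)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    points = []
    for m in matcher.match(des_l, des_r):
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        disparity = xl - xr
        if disparity > 0.5 and abs(yl - yr) < 2:      # rectified: rows should agree
            z = f_px * baseline_mm / disparity         # depth from disparity
            points.append((xl, yl, z))
    return points
```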

Reconstruction of 3D Virtual Reality Using Depth Information of Stereo Image (스테레오 영상에서의 깊이정보를 이용한 3D 가상현실 구현)

  • Lee, S.J.; Kim, J.H.; Lee, J.W.; Ahn, J.S.; Lee, D.J.; Lee, M.H.
    • Proceedings of the KIEE Conference, 1999.07g, pp.2950-2952, 1999
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, feature-point-based stereo matching is performed to obtain depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) method and OpenGL. The final result helps the viewer understand the depth information visually.

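To illustrate the reconstruction step that both entries above build on, the sketch below back-projects a dense depth map to 3D points with a pinhole camera model. The NURBS fitting and OpenGL rendering are omitted, and the intrinsic parameters are illustrative values.

```python
# Sketch: back-project a depth map to camera-frame 3D points (pinhole model).
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth map to an (N, 3) array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                      # drop pixels with no depth

depth = np.full((240, 320), 800.0)                 # flat surface 800 mm away
pts = depth_to_points(depth, fx=500, fy=500, cx=160, cy=120)
print(pts.shape)
```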

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk; Yu, Yong-Hyun; Park, Sung-Jun; Hwang, Seung-Jun; Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.3, pp.384-390, 2020
  • The reverberation applied to sound when producing movies or VR content is a very important factor for realism and liveliness. The appropriate reverberation time for a space is specified by the RT60 (Reverberation Time 60 dB) standard. In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification is limited when training on color information alone because of the similarity of interior structures, so a deep-learning-based depth estimation technique is used to exploit spatial depth information. Ten scene classes were constructed based on RT60, and model training and evaluation were conducted. The proposed SCR+DNet (Scene Classification for Reverb + Depth Net) classifier achieves 92.4% accuracy, higher than conventional CNN classifiers.
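
The abstract describes a model that processes the color image and a predicted depth map in parallel within one network. The PyTorch sketch below is a generic two-branch classifier in that spirit; the actual SCR+DNet architecture is not given in the abstract, so every layer size here is an assumption.

```python
# Sketch: two-branch classifier fusing RGB and predicted-depth features
# for 10 RT60-based scene classes. Layer sizes are illustrative.
import torch
import torch.nn as nn

def branch(in_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoBranchSceneClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.rgb_branch = branch(3)      # color image
        self.depth_branch = branch(1)    # predicted depth map
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(feats)

model = TwoBranchSceneClassifier()
rgb = torch.randn(2, 3, 224, 224)
depth = torch.randn(2, 1, 224, 224)
print(model(rgb, depth).shape)           # torch.Size([2, 10])
```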

The measurement of p-n junction depth by SEM

  • Hur, Chang-Wu; Lee, Kyu-Chung
    • Journal of information and communication convergence engineering, v.5 no.4, pp.324-327, 2007
  • In this paper, the p-n junction depth is determined and confirmed by a nondestructive method using scanning electron microscopy (SEM). By measuring the critical short-circuit current induced in the p-n junction by the electron beam and calculating the electron generation range, the diffusion depth can be obtained. The values measured destructively by constant-angle lapping and nondestructively by this method are in close agreement. This result shows that the diffusion depth of a p-n junction can be measured easily and nondestructively, and the method can be highly recommended for industrial analysis.
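
The abstract relates a critical electron-beam condition to a generation range. As a hedged illustration only, the sketch below evaluates the Kanaya-Okayama range formula, one common model relating beam energy to electron penetration depth; the abstract does not state which range model the paper uses, and the EBIC measurement itself is not reproduced.

```python
# Sketch: electron penetration range vs. beam energy (Kanaya-Okayama, 1972).
def kanaya_okayama_range_um(energy_kev, atomic_weight, atomic_number, density_g_cm3):
    """Electron penetration range in micrometres."""
    return (0.0276 * atomic_weight * energy_kev ** 1.67
            / (atomic_number ** 0.89 * density_g_cm3))

# Silicon: A = 28.09 g/mol, Z = 14, rho = 2.33 g/cm^3.
for e_kev in (5, 10, 15, 20):
    r = kanaya_okayama_range_um(e_kev, 28.09, 14, 2.33)
    print(f"{e_kev:2d} keV -> range ~ {r:.2f} um")

# If the induced-current signal first appears at a critical beam energy E_c,
# the range at E_c approximates the junction depth.
```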

Single Camera Based Robot Localization (단일카메라기반의 로봇 위치추정)

  • Yi, Chong-Ho; Ahn, Chang-Hwan; Park, Chang-Woo
    • Proceedings of the IEEK Conference, 2008.06a, pp.1173-1174, 2008
  • In this paper, we propose a depth estimation and robot localization method based on a single front-mounted camera. The advantage of a front-mounted camera is a reduction of redundancy while the robot moves. The robot computes depth information from the captured images as it moves around, and its location is corrected using this depth information.

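One way a single forward-facing camera can recover depth while the robot drives straight ahead is to track a point between two frames: for a pure forward translation along the optical axis, the point's depth follows from its radial expansion in the image. The abstract does not detail the paper's estimation method, so the sketch below is an illustrative model under that assumption.

```python
# Sketch: depth of a static point from its radial expansion under pure forward motion.
import math

def depth_from_forward_motion(p_before, p_after, principal_point, travel_m):
    """Depth (m) of a static point before the move.

    p_before, p_after: pixel coordinates of the tracked point in the two frames.
    principal_point:   image centre (cx, cy).
    travel_m:          forward distance driven between the frames (from odometry).
    """
    cx, cy = principal_point
    r0 = math.hypot(p_before[0] - cx, p_before[1] - cy)
    r1 = math.hypot(p_after[0] - cx, p_after[1] - cy)
    if r1 <= r0:
        raise ValueError("point did not expand outward; model assumptions violated")
    return travel_m * r1 / (r1 - r0)

# A point seen at radius 100 px that expands to 105 px after driving 0.10 m
# was roughly 2.1 m away before the move.
print(depth_from_forward_motion((420, 240), (425, 240), (320, 240), 0.10))
```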