• Title/Abstract/Keyword: Depth Feature

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui; Li, Fei
    • Journal of Information Processing Systems / Vol. 18, No. 6 / pp. 794-802 / 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. For generating depth maps with better details, we present an effective monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module refines features using coordinate attention to enhance the prediction, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module that combines the heterogeneous features to generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors from the perspective of depth and scale-invariant gradients, which contributes to preserving rich details. We conducted experiments on public RGB-D datasets, and the evaluation results show that the proposed scheme considerably enhances the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
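  • As a rough illustration of the kind of hybrid loss described above, the sketch below combines a pixel-wise log-depth term with a scale-invariant gradient term in PyTorch. The exact formulation and weighting used by the authors are not given in the abstract, so the log-depth L1 term, the multi-stride gradient differences, and the weight `lam` are illustrative assumptions.

```python
import torch

def hybrid_depth_loss(pred, target, lam=0.5, strides=(1, 2, 4)):
    """pred, target: (B, 1, H, W) positive depth maps."""
    eps = 1e-6
    # Pixel-wise error in log-depth space.
    d = torch.log(pred + eps) - torch.log(target + eps)
    depth_term = d.abs().mean()

    # Scale-invariant gradient term: differences of the log-depth error
    # at several strides, so both fine and coarse edges are penalized.
    grad_term = 0.0
    for s in strides:
        dx = d[:, :, :, s:] - d[:, :, :, :-s]
        dy = d[:, :, s:, :] - d[:, :, :-s, :]
        grad_term = grad_term + dx.abs().mean() + dy.abs().mean()

    return depth_term + lam * grad_term
```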

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il; Ahn, Hyun-Sik; Jeong, Gu-Min; Kim, Do-Hyun
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005 ICCAS / pp. 383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected in the images and estimates depth from information on their motion. Approaches that use motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. For this, we first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, to explain the light and optical properties, with a perspective projection camera model, to explain depth from lens translation. Depth from lens translation is then computed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the width of the blur. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments have been performed on sequences of real and synthetic images, comparing the presented method with conventional depth from lens translation. The results demonstrate the validity of the proposed method and show its applicability to depth estimation.
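  • For reference, the core of the factorization step can be sketched as a rank-3 truncated SVD of the feature-track matrix. This is a plain batch Tomasi-Kanade-style factorization rather than the authors' sequential variant, and the defocus-based feature detection is omitted; it is only meant to illustrate how motion and shape fall out of the SVD.

```python
import numpy as np

def factorize_tracks(W):
    """W: (2F, P) matrix of x/y image coordinates of P feature points
    tracked over F frames."""
    # Register the measurements to their per-row centroid.
    W_centered = W - W.mean(axis=1, keepdims=True)
    # Rank-3 truncated SVD: W ~ M @ S.
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # camera motion, (2F, 3)
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]    # 3D shape up to an affine ambiguity, (3, P)
    return M, S
```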

융합형 필터를 이용한 깊이 영상 기반 특징점 검출 기법 (Depth Image Based Feature Detection Method Using Hybrid Filter)

  • 전용태; 이현; 최재성
    • 대한임베디드공학회논문지 / Vol. 12, No. 6 / pp. 395-403 / 2017
  • Image processing for object detection and identification has been studied for supply chain management applications with various approaches. Among them, feature point detection algorithms are used to track objects or recognize positions in automated supply chain systems, and depth image based feature point detection has recently been highlighted in this application. Feature point detection results are easily influenced by image noise, and the depth image contains noise of its own, which also degrades the accuracy of the detection results. To solve these problems, we propose a novel hybrid filtering mechanism for depth image based feature point detection; it shows better performance than a conventional hybrid filtering mechanism.
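  • The abstract does not spell out which filters the hybrid scheme combines, so the sketch below is only an assumed example of the general pipeline: denoise the depth image with a combination of filters (here median plus bilateral), then run a feature detector (here ORB) on the result.

```python
import cv2
import numpy as np

def detect_depth_features(depth_u16):
    """depth_u16: (H, W) uint16 depth image, e.g. from an RGB-D sensor."""
    # Median filter to suppress impulse noise and small holes.
    filtered = cv2.medianBlur(depth_u16, 5)
    # Normalize to 8 bit and apply an edge-preserving bilateral filter.
    depth_8u = cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    smoothed = cv2.bilateralFilter(depth_8u, d=9, sigmaColor=50, sigmaSpace=50)
    # Detect feature points on the filtered depth image.
    keypoints = cv2.ORB_create().detect(smoothed, None)
    return keypoints
```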

인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출 (Facial Feature Localization from 3D Face Image using Adjacent Depth Differences)

  • 김익동; 심재창
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 31, No. 5 / pp. 617-624 / 2004
  • This study proposes a method for extracting the main facial features from 3D face data using the depth differences between adjacent regions. When humans perceive the depth of a particular part of an object, they compare its depth with that of neighboring regions and judge it to be relatively deep or shallow according to how pronounced the depth contrast is. Applying this perceptual principle to facial feature extraction enables reliable and fast feature localization with only simple computations. The adjacent depth differences are generated as the depth difference between two points separated by a fixed distance in the horizontal and in the vertical direction, respectively. By analyzing these horizontal and vertical adjacent depth differences together with the input 3D face image, the nose region, the most prominent feature of the 3D face image, was extracted.
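  • A minimal sketch of the adjacent-depth-difference computation is given below: depth values are compared against values a fixed offset away horizontally and vertically, and the point of strongest contrast (typically on the nose) is located. The 10-pixel offset and the simple argmax are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def adjacent_depth_differences(depth, offset=10):
    """depth: (H, W) array of facial depth values."""
    depth = np.asarray(depth, dtype=float)
    dh = np.zeros_like(depth)
    dv = np.zeros_like(depth)
    dh[:, :-offset] = depth[:, offset:] - depth[:, :-offset]  # horizontal difference
    dv[:-offset, :] = depth[offset:, :] - depth[:-offset, :]  # vertical difference
    return dh, dv

def locate_nose(depth, offset=10):
    """Return the (row, col) with the strongest adjacent-depth contrast."""
    dh, dv = adjacent_depth_differences(depth, offset)
    response = np.abs(dh) + np.abs(dv)
    return np.unravel_index(np.argmax(response), response.shape)
```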

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao; Ke Wang; Jinjing Zhang; Jialong Zhang; Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp. 2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. Therefore, we propose a color-image guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features, without using the LF color features. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, all the depth LF features are combined with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into a multistage depth map fusion reconstruction block, in which a cross enhancement module is introduced to fully exploit the spatial correlation of the depth map by interleaving features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods in terms of two objective measures, root mean square error and mean absolute deviation.
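  • The high/low-frequency split and the HF-only guidance can be sketched roughly as below. The pooling-based decomposition and the simple additive fusion are stand-ins chosen for illustration; the paper's actual decomposition and enhancement modules are learned and more elaborate.

```python
import torch.nn.functional as F

def split_hf_lf(feat, kernel=5):
    """feat: (B, C, H, W). LF = local average, HF = residual."""
    lf = F.avg_pool2d(feat, kernel, stride=1, padding=kernel // 2)
    return feat - lf, lf  # (HF, LF)

def enhance_depth_features(depth_feat, color_feat):
    d_hf, d_lf = split_hf_lf(depth_feat)
    c_hf, _ = split_hf_lf(color_feat)   # only the HF color features are used
    d_hf = d_hf + c_hf                  # inject structural detail from the color branch
    return d_hf + d_lf                  # recombine into enhanced depth features
```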

3D Face Recognition using Local Depth Information

  • 이영학; 심재창; 이태홍
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 29, No. 11 / pp. 818-825 / 2002
  • Depth information of the face is one of the most important cues in face recognition. Because a 3D face image represents depth well, it is very useful for comparing facial depth values. Processing the whole face, however, involves a large amount of computation and data. This paper therefore performs recognition using the 3D depth values of local facial regions. Contour (iso-depth) regions at a given depth are extracted from a 3D face image captured with a 3D laser scanner; taken region by region, they reflect the local depth characteristics of the face well. Using the nose, the most central part of the face, as the reference point, contour regions are extracted for each depth range, and the face is recognized from this local depth information using multiple feature vectors. Since the multiple feature vectors represent the local depth characteristics of the face well with only a small number of vectors, a high recognition rate was obtained with a simple method.
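  • To make the idea of iso-depth contour regions concrete, the sketch below extracts regions whose depth lies within successive ranges of the nose-tip depth and reduces each region to a small feature vector. The depth levels and the area/extent descriptors are assumptions for illustration and may differ from the paper's actual feature vectors.

```python
import numpy as np

def contour_region_features(depth, nose_rc, levels=(5, 10, 15, 20)):
    """depth: (H, W) face depth map; nose_rc: (row, col) of the nose tip."""
    depth = np.asarray(depth, dtype=float)
    nose_depth = depth[nose_rc]
    features = []
    for level in levels:
        # Points whose depth lies within `level` units of the nose-tip depth.
        region = np.abs(depth - nose_depth) <= level
        rows, cols = np.nonzero(region)
        area = int(region.sum())
        height = int(rows.ptp()) + 1 if area else 0
        width = int(cols.ptp()) + 1 if area else 0
        features.append([area, height, width])
    return np.asarray(features, dtype=float)  # one small feature vector per contour level
```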

3차원 특징볼륨을 이용한 깊이영상 생성 모델 (Depth Map Estimation Model Using 3D Feature Volume)

  • 신수연; 김동명; 서재원
    • 한국콘텐츠학회논문지 / Vol. 18, No. 11 / pp. 447-454 / 2018
  • This paper proposes a depth map estimation algorithm for stereo images using a learning model composed of convolutional neural networks. The proposed algorithm consists of a feature extraction stage, which takes the left and right views as input and extracts the main features of each view, and a depth learning stage, which learns disparity information from the extracted features. The feature extraction stage first extracts a feature map for each view through an Xception module and an ASPP (atrous spatial pyramid pooling) module built from 2D CNN layers. The feature maps of the two views are then stacked along the disparity dimension into a 3D volume, the depth learning stage learns depth estimation weights through a 3D CNN, and the depth map is estimated. The proposed algorithm estimated depth more accurately than other existing learning algorithms, particularly in object regions.
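  • The disparity-wise stacking of the two feature maps into a 3D volume can be sketched as below (a concatenation-style cost volume of the kind commonly used in stereo networks). The disparity range and the concatenation scheme are assumptions; the Xception/ASPP extractor and the 3D CNN that consumes the volume are omitted.

```python
import torch

def build_feature_volume(left_feat, right_feat, max_disp=48):
    """left_feat, right_feat: (B, C, H, W) feature maps; returns (B, 2C, D, H, W)."""
    B, C, H, W = left_feat.shape
    volume = left_feat.new_zeros(B, 2 * C, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            volume[:, :C, d] = left_feat
            volume[:, C:, d] = right_feat
        else:
            # Pair each left pixel with the right pixel shifted by disparity d.
            volume[:, :C, d, :, d:] = left_feat[:, :, :, d:]
            volume[:, C:, d, :, d:] = right_feat[:, :, :, :-d]
    return volume
```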

주의 기반 시각정보처리체계 시스템 구현을 위한 스테레오 영상의 변위도를 이용한 새로운 특징맵 구성 및 통합 방법 (A Novel Feature Map Generation and Integration Method for Attention Based Visual Information Processing System using Disparity of a Stereo Pair of Images)

  • 박민철; 최경주
    • 정보처리학회논문지B / Vol. 17B, No. 1 / pp. 55-62 / 2010
  • Rather than processing an entire visual scene at once, the human visual attention system instantaneously selects small regions where attention is focused and processes only those regions sequentially, which simplifies complex scenes and makes them easy to analyze. This paper proposes a novel feature map generation and integration method for implementing an attention-based visual information processing system. In addition to color, intensity, orientation, and form, the proposed system uses depth information obtained from a stereo pair of images as a visual feature. Experimental results confirmed that using the depth information improves the detection rate of attention regions.
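  • A rough sketch of the map-integration step is shown below: each feature map, including the disparity-derived depth map, is normalized and the maps are averaged into a single attention map. The min-max normalization and the equal weighting are illustrative assumptions; the paper's construction and integration of the maps may differ.

```python
import numpy as np

def normalize_map(m):
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def integrate_feature_maps(color, intensity, orientation, form, disparity):
    """All inputs: (H, W) feature maps; disparity supplies the depth cue."""
    maps = [color, intensity, orientation, form, disparity]
    saliency = sum(normalize_map(m) for m in maps) / len(maps)
    return saliency  # peaks mark candidate attention regions
```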

Depth-hybrid speeded-up robust features (DH-SURF) for real-time RGB-D SLAM

  • Lee, Donghwa; Kim, Hyungjin; Jung, Sungwook; Myung, Hyun
    • Advances in Robotics Research / Vol. 2, No. 1 / pp. 33-44 / 2018
  • This paper presents a novel feature detection algorithm called depth-hybrid speeded-up robust features (DH-SURF), which augments the speeded-up robust features (SURF) algorithm with depth information. In the keypoint detection part of classical SURF, the standard deviation of the Gaussian kernel is varied for scale invariance, which increases the computational complexity. We propose a keypoint detection method that requires less variation of the standard deviation by using depth data from a red-green-blue depth (RGB-D) sensor. Our approach maintains the scale-invariance property while reducing computation time. An RGB-D simultaneous localization and mapping (SLAM) system uses a feature extraction method and depth data concurrently, so it is well suited for demonstrating the performance of DH-SURF. DH-SURF was implemented on a central processing unit (CPU) and on a graphics processing unit (GPU), and was validated through real-time RGB-D SLAM.
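  • The central idea, deriving the filter scale from depth instead of searching over many Gaussian scales, can be sketched as below. The inverse-proportional depth-to-scale mapping and its constants are assumptions made for illustration; they are not the authors' exact formulation.

```python
import numpy as np

def scale_from_depth(depth_m, base_scale=1.2, reference_depth_m=1.0,
                     min_scale=1.2, max_scale=9.6):
    """depth_m: (H, W) depth in meters; returns a per-pixel filter scale."""
    depth = np.clip(depth_m, 1e-3, None)
    # Scale shrinks as depth grows: objects twice as far get half the scale.
    scale = base_scale * reference_depth_m / depth
    return np.clip(scale, min_scale, max_scale)
```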

Distance Measurement Using the Kinect Sensor with Neuro-image Processing

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 6 / pp. 379-383 / 2015
  • This paper presents an approach to estimating object distance using the recently developed low-cost Kinect sensor. The technique is based on processing the Kinect color and depth images and can be used in various computer vision applications, such as object recognition, video surveillance, and autonomous path finding. The proposed technique detects keypoint features in the Kinect depth image and takes advantage of the depth pixels to obtain the feature distances directly from the depth image. This greatly reduces the computational overhead while still providing the pixel distances in the Kinect-captured images.
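  • A minimal sketch of the distance-lookup step is given below: feature points are detected and the depth value at each keypoint is read directly from the depth image. The ORB detector and millimeter depth units are assumptions for illustration; the paper's own keypoint detection and neuro-image processing stages are omitted.

```python
import cv2

def keypoint_distances(depth_mm, gray):
    """depth_mm: (H, W) uint16 depth in millimeters; gray: (H, W) uint8 image."""
    keypoints = cv2.ORB_create().detect(gray, None)
    distances = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        d = depth_mm[y, x]
        if d > 0:                                   # zero means no depth reading
            distances.append((kp.pt, d / 1000.0))   # distance in meters
    return distances
```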