• Title/Abstract/Keywords: Feature map correlation

Search results: 34 items (processing time: 0.034 sec)

컨볼루션 신경망의 특징맵을 사용한 객체 추적 (Object Tracking using Feature Map from Convolutional Neural Network)

  • 임수창;김도연
    • 한국멀티미디어학회논문지 / Vol.20 No.2 / pp.126-133 / 2017
  • Conventional hand-crafted features used for object tracking have limitations in representing objects. Convolutional neural networks (CNNs), which achieve good results in many areas of computer vision, are emerging as a way to overcome the limitations of hand-crafted feature extraction. A CNN extracts image features through multiple stacked layers and learns the kernels used for feature extraction on its own. In this paper, we use the feature maps extracted from the convolution layers of a CNN to build an outline model of the object and use it for tracking. We also propose a method that adaptively updates the outline model to cope with the various environmental changes that affect tracking performance. The proposed algorithm was evaluated against the 11 environmental-change attributes of the CVPR2013 tracking benchmark and showed excellent results on six of them.
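
As a rough illustration of the idea above (not the authors' implementation), the sketch below extracts a convolutional feature map from a pretrained CNN, collapses it into a 2D response map that can serve as a simple outline model of the object, and updates that model with an exponential moving average so it adapts to appearance change. The backbone, the layer cut-off, and the update rate are assumptions.

```python
# Hedged sketch: feature-map-based outline model with adaptive update.
# Backbone, layer cut-off, and update rate are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Truncate a pretrained VGG16 after an early conv block to keep spatial detail.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def outline_model(patch_rgb):
    """Collapse the conv feature maps of an object patch into a 2D outline map."""
    with torch.no_grad():
        feat = backbone(preprocess(patch_rgb).unsqueeze(0))  # (1, C, H, W)
    response = feat.abs().mean(dim=1)[0]                      # average over channels
    return response / (response.max() + 1e-8)                 # normalize to [0, 1]

def update_model(model, new_patch_rgb, lr=0.05):
    """Adaptively blend the stored outline model with the current observation."""
    return (1.0 - lr) * model + lr * outline_model(new_patch_rgb)
```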

수치지도 Ver.2.0을 이용한 종이지도제작기법 개발 (Topographic mapping using digital map Ver.2.0)

  • 황창섭;정성혁;함창학;이재기
    • 한국측량학회:학술대회논문집 / 한국측량학회 2003년도 추계학술발표회 논문집 / pp.281-286 / 2003
  • Since the National Geographic Information System project was launched, paper maps have been produced by computer-aided editing of digital maps instead of by etching map-size negative film. The need for an automated paper-mapping system keeps growing, because the digital map has moved to Ver.2.0, which includes feature attributes. In this study, we therefore analyze the correlation between digital map feature codes and the 1/5,000 topographic map specifications, which is required for automating paper mapping with digital map Ver.2.0, and develop the fundamental modules that will play a core role in an automated paper-mapping system.
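
As a minimal sketch of the kind of correlation (lookup) module this abstract describes, the snippet below maps digital map Ver.2.0 feature codes to 1/5,000 topographic map symbol specifications; the codes and symbol attributes are purely hypothetical placeholders, not the actual national specification.

```python
# Hedged sketch: looking up 1/5,000 map symbol specs from digital map feature codes.
# The codes and symbol attributes below are hypothetical placeholders.
FEATURE_CODE_TO_SPEC = {
    "A0010000": {"symbol": "building_outline", "line_width_mm": 0.15},
    "B0010000": {"symbol": "road_centerline",  "line_width_mm": 0.25},
    "F0010000": {"symbol": "contour_index",    "line_width_mm": 0.20},
}

def spec_for_feature(code: str):
    """Return the cartographic spec for a feature code, or None if unmapped."""
    return FEATURE_CODE_TO_SPEC.get(code)

if __name__ == "__main__":
    print(spec_for_feature("B0010000"))  # hypothetical road centerline spec
```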

1/5,000 지형도제작을 위한 수치지도 Ver.2.0 자료변환 시스템 개발 (Development of Digital map Ver.2.0 representation conversion system for 1/5,000 Topographic mapping)

  • 황창섭;이재기
    • 한국측량학회:학술대회논문집 / 한국측량학회 2004년도 춘계학술발표회논문집 / pp.321-328 / 2004
  • Since the National Geographic Information System project was launched, topographic maps have been produced by computer-aided editing of digital maps instead of by etching map-size negative film. The need for a topographic mapping system keeps growing, because the digital map has moved to Ver.2.0, which includes feature attributes. Building on our previous study, which analyzed the correlation between digital map feature codes and the 1/5,000 topographic map specifications and developed the fundamental modules that play a core role in a topographic mapping system, in this study we apply several 1/5,000 digital maps Ver.2.0 to the implemented topographic mapping system and analyze the results.

디지털 항공영상의 도화성과를 이용한 소축척 수치지도 제작 (Small Scale Digital Mapping using Airborne Digital Camera Image Map)

  • 최석근;오유진
    • 한국측량학회지 / Vol.29 No.2 / pp.141-147 / 2011
  • This study analyzes the problems and the usefulness of producing small-scale digital maps from large-scale digital maps generated from the high-resolution digital aerial imagery that is now widely acquired. To this end, we analyzed the correlation among digital map features and, based on these data, carried out the scale-reduction editing workflow: data input, organizing and deleting feature items, data editing, and inspection. As a result, 18 unnecessary feature classes were deleted, the accuracy requirement of the 1/5,000 digital map was satisfied, and although the data size and the number of features increased, this was attributed to the superior representational capability of the digital aerial imagery. Producing small-scale digital maps from large-scale digital maps derived from digital aerial imagery can therefore provide high-quality digital map information owing to this superior representational capability.
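
To make the scale-reduction editing step concrete, here is a heavily simplified sketch (not the authors' workflow) that drops feature classes judged unnecessary at the smaller target scale and reports what remains; the feature classes and the dropped set are assumptions.

```python
# Hedged sketch: dropping feature classes during scale-reduction editing.
# Feature classes and the dropped-class set are illustrative assumptions.
from collections import Counter

# Hypothetical feature records from the large-scale map: (feature_class, geometry)
features = [
    ("building", None), ("road", None), ("streetlight", None),
    ("manhole", None), ("contour", None), ("streetlight", None),
]

# Classes judged unnecessary at the smaller target scale (assumption).
DROP_AT_SMALL_SCALE = {"streetlight", "manhole"}

kept = [f for f in features if f[0] not in DROP_AT_SMALL_SCALE]

print("dropped classes:", sorted(DROP_AT_SMALL_SCALE))
print("kept per class:", Counter(cls for cls, _ in kept))
```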

야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법 (Confidence Measure of Depth Map for Outdoor RGB+D Database)

  • 박재광;김선옥;손광훈;민동보
    • 한국멀티미디어학회논문지 / Vol.19 No.9 / pp.1647-1658 / 2016
  • RGB+D databases are widely used in object recognition, object tracking, and robot control, to name a few applications. While the rapid advance of active depth-sensing technologies has enabled widespread indoor RGB+D databases, there are only a few outdoor RGB+D databases, largely because of an inherent limitation of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire a pair of stereo images with a high-resolution stereo camera and then obtain a depth map by applying a stereo matching algorithm. To deal with the estimation errors that inevitably exist in depth maps obtained from stereo matching, we develop an approach that estimates the confidence of depth maps based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map. Specifically, we focus on refining the confidence feature under the assumption that the confidence feature and the resulting confidence map vary smoothly in the spatial domain and are highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence-measure-based approaches on various benchmark datasets.
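
The sketch below illustrates the overall pipeline in heavily simplified form: a disparity map is computed with off-the-shelf stereo matching, a left-right consistency check provides a raw confidence feature, and that feature is smoothed spatially as a crude stand-in for the paper's unsupervised, correlation-aware refinement. The matcher settings and smoothing parameters are assumptions.

```python
# Hedged sketch: stereo depth + spatially smoothed confidence (a simple stand-in
# for the paper's unsupervised refinement). All parameter values are assumptions.
import cv2
import numpy as np

def depth_and_confidence(left_gray, right_gray, num_disp=128, block=5):
    # Disparity from semi-global block matching (raw values are in 1/16-pixel units).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                 blockSize=block)
    disp_l = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Right-image disparity (negative range) for a left-right consistency check.
    sgbm_r = cv2.StereoSGBM_create(minDisparity=-num_disp, numDisparities=num_disp,
                                   blockSize=block)
    disp_r = sgbm_r.compute(right_gray, left_gray).astype(np.float32) / 16.0

    # Raw confidence: penalize pixels whose left/right disparities disagree.
    h, w = disp_l.shape
    xs = np.clip(np.arange(w)[None, :] - disp_l.astype(np.int32), 0, w - 1)
    lr_diff = np.abs(disp_l + disp_r[np.arange(h)[:, None], xs])
    raw_conf = np.exp(-lr_diff / 2.0).astype(np.float32)

    # Spatial smoothing of the confidence feature (crude substitute for the
    # learned, spatially correlated refinement described in the abstract).
    conf = cv2.bilateralFilter(raw_conf, d=9, sigmaColor=0.2, sigmaSpace=9)
    return disp_l, conf
```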

가우시안 가중치 거리지도를 이용한 PET-CT 뇌 영상정합 (Co-registration of PET-CT Brain Images using a Gaussian Weighted Distance Map)

  • 이호;홍헬렌;신영길
    • 한국정보과학회논문지:소프트웨어및응용 / Vol.32 No.7 / pp.612-624 / 2005
  • In this paper, we propose a surface-based registration method using a Gaussian weighted distance map for PET-CT brain image fusion. The proposed method consists of three main steps: surface feature point extraction, Gaussian weighted distance map generation, and weight-based similarity evaluation. First, the head region is segmented from the PET and CT images using three-dimensional inverse region growing; noise regions segmented together with the head are removed by comparing region sizes with region-growing-based labeling, and a sharpening filter is then applied to extract head surface feature points. Second, a Gaussian weighted distance map is generated from the surface feature points extracted from the CT image so that registration converges robustly to the optimal position even under large displacements. Third, the weight-based cross-correlation searches for the optimal position using the surface feature points extracted from the PET image and the corresponding Gaussian weighted distance map of the CT image. We use synthetic data to test the accuracy and robustness of the proposed method, and clinical data to measure execution time and perform visual evaluation. Accuracy is evaluated by applying the proposed method to arbitrarily transformed synthetic data and measuring the error between the recovered optimal transformation vector and the ground truth using root mean square error. Robustness is evaluated by checking whether the weight-based cross-correlation reaches its maximum at the optimal position on synthetic data with large displacements and noise. Experimental results show that the proposed surface-based registration converges more accurately and robustly than conventional surface-based registration.
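
To make the Gaussian weighted distance map concrete, the sketch below (a simplification, not the authors' implementation) builds a distance transform from a binary CT surface mask, turns it into a Gaussian weight that decays with distance from the surface, and scores a set of transformed PET surface points by summing the weights at their locations; sigma and the point-sampling scheme are assumptions.

```python
# Hedged sketch: Gaussian-weighted distance map scoring for surface-based
# registration. Sigma and the point-sampling scheme are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_weighted_distance_map(ct_surface_mask, sigma=5.0):
    """ct_surface_mask: boolean volume, True on CT head-surface voxels."""
    dist = distance_transform_edt(~ct_surface_mask)    # distance to nearest surface voxel
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))   # weight ~1 on the surface, decaying away

def similarity(weight_map, pet_surface_points):
    """Sum the weight map at (transformed) PET surface point locations."""
    idx = np.round(pet_surface_points).astype(int)
    idx = np.clip(idx, 0, np.array(weight_map.shape) - 1)
    return weight_map[idx[:, 0], idx[:, 1], idx[:, 2]].sum()

# Usage idea: evaluate similarity() over candidate rigid transforms of the PET
# surface points and keep the transform with the maximum score.
```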

컨볼루션 특징 맵의 상관관계를 이용한 영상물체추적 (Visual object tracking using inter-frame correlation of convolutional feature maps)

  • 김민지;김성찬
    • 대한임베디드공학회논문지 / Vol.11 No.4 / pp.219-225 / 2016
  • Visual object tracking is one of the key tasks in computer vision. Robust trackers should address challenging issues such as fast motion, deformation, and occlusion. In this paper, we therefore propose a visual object tracking method that exploits the inter-frame correlations of convolutional feature maps in a convolutional neural network (ConvNet). The proposed method predicts the location of a target by considering the inter-frame spatial correlation between target location proposals in the current frame and the target location in the previous frame. The experimental results show that the proposed algorithm outperforms state-of-the-art methods, especially on hard-to-track sequences.
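
As a rough sketch of the core matching idea (not the paper's exact model), the code below cross-correlates the convolutional feature map of the previous target patch with the feature map of the current search region and takes the response peak as the predicted location; the backbone and the layer cut-off are assumptions.

```python
# Hedged sketch: locating a target by correlating conv feature maps across frames.
# Backbone choice and layer cut-off are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feat_extractor = torch.nn.Sequential(*list(backbone.children())[:-4]).eval()  # early conv blocks

def feature_map(img_tensor):
    """img_tensor: (1, 3, H, W), already normalized."""
    with torch.no_grad():
        return feat_extractor(img_tensor)

def predict_location(prev_target_feat, search_feat):
    """Cross-correlate the previous target template with the current search features."""
    # Treat the previous target feature map as a correlation kernel (SiamFC-style).
    response = F.conv2d(search_feat, prev_target_feat)   # (1, 1, h, w) response map
    flat_idx = int(response.view(-1).argmax())
    _, w = response.shape[-2:]
    return divmod(flat_idx, w)                           # (row, col) of the peak
```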

A robust Correlation Filter based tracker with rich representation and a relocation component

  • Jin, Menglei;Liu, Weibin;Xing, Weiwei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.10 / pp.5161-5178 / 2019
  • Correlation Filters were recently demonstrated to have good characteristics in the field of video object tracking. The advantages of Correlation Filter based trackers are the high accuracy and robustness they provide while maintaining high speed. However, some improvements are still necessary. First, most trackers cannot handle multi-scale problems. To solve this problem, our algorithm combines position estimation with scale estimation. In contrast to the traditional approach to scale estimation, the proposed method can track the scale of the object more quickly and effectively. Additionally, the feature representation used in the feature extraction module of traditional algorithms is relatively simple, so their tracking performance is easily degraded in complex scenarios. In this paper, we design a novel and powerful feature that significantly improves tracking performance. Finally, traditional trackers often suffer from model drift caused by occlusion and other complex scenarios. We introduce a relocation component that detects the object at other locations, such as the secondary peak of the response map, which partly alleviates the model drift problem.
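
The snippet below is a minimal, generic discrete correlation filter (MOSSE-style) in the Fourier domain, together with a helper that also returns the secondary peak of the response map, which is the kind of cue a relocation component can use. It sketches the general technique, not this paper's tracker; the regularization term and peak-suppression radius are assumptions.

```python
# Hedged sketch: a MOSSE-style correlation filter plus secondary-peak lookup.
# Regularization and the peak-suppression radius are assumptions.
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Learn a filter (conjugate spectrum) so that patch correlated with it ~= target_response."""
    F_p = np.fft.fft2(patch)
    F_g = np.fft.fft2(target_response)
    return (F_g * np.conj(F_p)) / (F_p * np.conj(F_p) + lam)

def response_peaks(filter_hat, patch, suppress_radius=5):
    """Return the primary and secondary peaks of the correlation response."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * filter_hat))
    p1 = np.unravel_index(np.argmax(resp), resp.shape)

    # Suppress a window around the primary peak, then take the next maximum.
    masked = resp.copy()
    y0, x0 = p1
    masked[max(0, y0 - suppress_radius):y0 + suppress_radius + 1,
           max(0, x0 - suppress_radius):x0 + suppress_radius + 1] = -np.inf
    p2 = np.unravel_index(np.argmax(masked), masked.shape)
    return p1, p2
```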

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao;Ke Wang;Jinjing Zhang;Jialong Zhang;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.17 No.8 / pp.2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. We therefore propose a color-image-guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features; the LF color features are not used. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition is combined with the updated HF component. After decomposing and reorganizing the recursively updated features, we combine all the depth LF features with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into the multi-stage depth map fusion reconstruction block, in which a cross-enhancement module is introduced to fully exploit the spatial correlation of the depth map by interleaving various features between different convolution groups. Experimental results show that the proposed method is superior to many of the latest DMSR methods in terms of two objective measures, root mean square error and mean absolute deviation.
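
At the image level (the paper operates on learned features), the decomposition idea can be sketched as follows: low-pass filter the depth map to get its LF component, take the residual as the HF component, and enhance the depth HF with the color image's HF component while leaving the LF untouched. The blur kernel, blending weight, and iteration count are assumptions, and this is not the authors' network.

```python
# Hedged sketch: image-level HF/LF decomposition with color-HF-guided enhancement
# of the depth HF component. Kernel size, weight, and iteration count are assumptions.
import cv2
import numpy as np

def decompose(img, ksize=9):
    """Split an image into low-frequency (blur) and high-frequency (residual) parts."""
    lf = cv2.GaussianBlur(img, (ksize, ksize), 0)
    hf = img - lf
    return lf, hf

def enhance_depth(depth, color_gray, alpha=0.3, iters=3):
    """Iteratively inject color HF detail into the depth HF component."""
    depth = depth.astype(np.float32)
    color = color_gray.astype(np.float32)
    _, color_hf = decompose(color)
    for _ in range(iters):
        depth_lf, depth_hf = decompose(depth)
        depth_hf = depth_hf + alpha * color_hf   # guide depth HF with color HF only
        depth = depth_lf + depth_hf              # recombine LF with the updated HF
    return depth
```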

인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정 (Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction)

  • 박성기;박민용;이태근
    • 제어로봇시스템학회논문지 / Vol.11 No.1 / pp.50-57 / 2005
  • We present a simple and effective method for detecting the face and facial features under pose variation of the user's face in complex backgrounds for human-robot interaction. Our approach is flexible in that it can be applied to both color and gray facial images, and it is feasible for detecting facial features in quasi real time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. Applying this template to the input facial image produces a novel edge-like blob map (EBM) with multiple intensity strengths. Regardless of the color information of the input image, we show that the locations of the face and its features (two eyes and a mouth) can be successfully estimated using this map together with conditions on facial characteristics. Without information about the facial area boundary, the final candidate face region is determined from both the obtained facial feature locations and the weighted correlation values with standard facial templates. Experimental results on many color images and on well-known gray-level face database images confirm the usefulness of the proposed algorithm.
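
A rough sketch of the two ingredients described above, under assumed kernels: a simple directional template applied by convolution to produce an edge-like blob map, and normalized cross-correlation against a standard face template to score candidate regions. The kernel values and sizes are placeholders, not the paper's directional template.

```python
# Hedged sketch: edge-like blob map via a directional template, then template
# matching against a standard face template. Kernels and sizes are placeholders.
import cv2
import numpy as np

def edge_like_blob_map(gray):
    """Emphasize dark horizontal blobs (eyes/mouth are darker than their surroundings)."""
    # Placeholder directional kernel: bright rows above/below, dark middle row band.
    kernel = np.array([[ 1,  1,  1,  1,  1],
                       [-2, -2, -2, -2, -2],
                       [ 1,  1,  1,  1,  1]], dtype=np.float32) / 10.0
    response = cv2.filter2D(gray.astype(np.float32), -1, kernel)
    return cv2.normalize(response, None, 0, 1, cv2.NORM_MINMAX)

def face_region_score(gray, face_template):
    """Normalized cross-correlation with a standard face template (same dtype assumed)."""
    result = cv2.matchTemplate(gray, face_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val, max_loc   # best correlation score and its top-left location
```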