• Title/Abstract/Keyword: feature scale

Search results: 963 items, processing time: 0.022 seconds

스케일-스페이스 필터링을 통한 특징점 추출 및 질감도 비교를 적용한 추적 알고리즘 (Feature point extraction using scale-space filtering and Tracking algorithm based on comparing texturedness similarity)

  • 박용희;권오석
    • Journal of Internet Computing and Services
    • /
    • Vol. 6, No. 5
    • /
    • pp.85-95
    • /
    • 2005
  • In this paper, we propose a feature-point tracking algorithm for image sequences that extracts feature points via scale-space filtering and compares texturedness. A defined operator is used to extract feature points; the scale parameter set at this stage affects feature selection and localization, and is also related to the performance of the tracking algorithm. We present a method for selecting and localizing feature points through scale-space filtering. In an image sequence, camera viewpoint changes or object motion induce affine transformations inside the feature tracking window, which makes similarity measurement for correspondence tracking difficult. Based on the Shi-Tomasi-Kanade tracking algorithm, we propose an optimal correspondence search method that compares the texturedness of feature points, a measure relatively robust to affine transformations.

  • PDF
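The operator-based feature selection sketched in the abstract builds on the Shi-Tomasi "good features to track" criterion: smooth the image at a chosen scale, form the structure tensor, and score each pixel by the tensor's smaller eigenvalue. A minimal numpy sketch of that criterion follows (illustrative names and parameters, not the paper's exact operator):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing: the scale-space filtering step."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    t = np.apply_along_axis(np.convolve, 1, img.astype(float), k, mode="same")
    return np.apply_along_axis(np.convolve, 0, t, k, mode="same")

def min_eigenvalue_response(img, sigma=1.5):
    """Shi-Tomasi corner score at scale sigma: the smaller eigenvalue of
    the Gaussian-windowed structure tensor. Changing sigma shifts which
    points are selected and where they localize, as the abstract notes."""
    g = gaussian_blur(img, sigma)
    gy, gx = np.gradient(g)
    ixx = gaussian_blur(gx * gx, sigma)
    iyy = gaussian_blur(gy * gy, sigma)
    ixy = gaussian_blur(gx * gy, sigma)
    tr = ixx + iyy
    det = ixx * iyy - ixy**2
    # Smaller eigenvalue: tr/2 - sqrt((tr/2)^2 - det)
    return tr / 2 - np.sqrt(np.maximum((tr / 2)**2 - det, 0.0))
```

Pixels whose response is a strong local maximum are the candidate points a KLT-style tracker would then follow from frame to frame.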

Size, Scale and Rotation Invariant Proposed Feature vectors for Trademark Recognition

  • Faisal Zafa, Muhammad;Mohamad, Dzulkifli
    • Proceedings of the IEEK Conference
    • /
    • IEEK ITC-CSCC 2002 -3
    • /
    • pp.1420-1423
    • /
    • 2002
  • The classification and recognition of two-dimensional trademark patterns, independent of their position, orientation, size, and scale, is discussed through two proposed feature vectors. The paper presents experiments on the two feature vectors, showing size invariance and scale invariance respectively; both are equally invariant to rotation as well. Feature extraction is based on local as well as global statistics of the image. These feature vectors have appealing mathematical simplicity and are versatile. The results so far have shown the best performance of the developed system to be based on these unique sets of features. The goal is achieved by segmenting the image using a connected-component (nearest-neighbour) algorithm. The second part of this work considers the possibility of using back-propagation neural networks (BPN) for the learning and matching tasks by simply feeding in the feature vectors. The effectiveness of the proposed feature vectors is tested on various trademarks not used in the learning phase.

  • PDF
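A standard way to obtain statistics that are invariant to position, scale, and rotation is through normalized central moments; the sketch below uses them as a stand-in to illustrate the idea, since the paper's actual feature vectors are not specified in the abstract:

```python
import numpy as np

def normalized_central_moments(img, orders=((2, 0), (0, 2), (1, 1))):
    """Central moments mu_pq normalized by m00^(1 + (p+q)/2), which removes
    dependence on translation and scale; the combination eta20 + eta02 is
    additionally rotation invariant (the first Hu moment)."""
    img = img.astype(float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx = (x * img).sum() / m00            # centroid removes translation
    cy = (y * img).sum() / m00
    eta = {}
    for p, q in orders:
        mu = ((x - cx)**p * (y - cy)**q * img).sum()
        eta[(p, q)] = mu / m00**(1 + (p + q) / 2)   # scale normalization
    return eta
```

The same binary shape rendered at two different sizes yields (up to discretization error) the same invariant values, which is exactly the property a size- and scale-invariant trademark descriptor needs.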

SIFT 와 SURF 알고리즘의 성능적 비교 분석 (Comparative Analysis of the Performance of SIFT and SURF)

  • 이용환;박제호;김영섭
    • Journal of the Semiconductor & Display Technology
    • /
    • Vol. 12, No. 3
    • /
    • pp.59-64
    • /
    • 2013
  • Accurate and robust image registration is an important task in many applications, such as image retrieval and computer vision. Image registration requires several essential steps: feature detection, extraction, matching, and image reconstruction. Among these, feature extraction not only plays a key role but also has a large effect on overall performance. Two representative algorithms for extracting image features are the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). In this paper, we present and evaluate the two methods, focusing on a comparative analysis of their performance. Experiments on accurate and robust feature detection are conducted under various conditions, such as scale changes, rotation, and affine transformation. The experimental trials reveal that the SURF algorithm yields significantly better results than SIFT in both feature-point extraction and matching time.
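The detection step that the comparison revolves around can be sketched directly: SIFT locates extrema of a difference-of-Gaussians (DoG) stack, which SURF then accelerates by replacing Gaussian derivatives with box-filter approximations of the Hessian. The minimal DoG detector below (illustrative parameters, not a full SIFT implementation) shows the shared core idea:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur used to build the scale-space stack."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    t = np.apply_along_axis(np.convolve, 1, img.astype(float), k, mode="same")
    return np.apply_along_axis(np.convolve, 0, t, k, mode="same")

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.02):
    """Keypoints as extrema of the DoG stack: each interior sample is
    compared against its full 3x3x3 scale-space neighbourhood."""
    gauss = [gaussian_blur(img, s) for s in sigmas]
    d = np.stack([b - a for a, b in zip(gauss, gauss[1:])])
    kps = []
    for s in range(1, d.shape[0] - 1):
        for yy in range(1, d.shape[1] - 1):
            for xx in range(1, d.shape[2] - 1):
                v = d[s, yy, xx]
                cube = d[s - 1:s + 2, yy - 1:yy + 2, xx - 1:xx + 2]
                if abs(v) > thresh and (v == cube.max() or v == cube.min()):
                    kps.append((yy, xx, sigmas[s + 1]))
    return kps
```

A blob-like structure produces an extremum near its center at the scale matching its size, which is what makes the detected points repeatable under the scale changes tested in the paper.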

Real-time Object Recognition with Pose Initialization for Large-scale Standalone Mobile Augmented Reality

  • Lee, Suwon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 10
    • /
    • pp.4098-4116
    • /
    • 2020
  • Mobile devices such as smartphones are very attractive targets for augmented reality (AR) services, but their limited resources make it difficult to increase the number of objects to be recognized. When the recognition process is scaled to a large number of objects, it typically requires significant computation time and memory. Therefore, most large-scale mobile AR systems rely on a server, outsourcing the recognition process to a high-performance PC, but this limits the scenarios available to AR services. As a step toward realizing large-scale standalone mobile AR, this paper presents a solution to the problems of accuracy, memory, and speed in large-scale object recognition. To this end, we design our own basic feature and realize spatial locality, selective feature extraction, rough pose estimation, and selective feature matching. Experiments verify the appropriateness of the proposed method for realizing large-scale standalone mobile AR in terms of efficiency and accuracy.
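The spatial-locality idea can be pictured as a grid index over keypoint positions, so that after a rough pose estimate only candidates in nearby cells need to be matched. This is a minimal sketch with illustrative names; the paper's actual data structure may differ:

```python
from collections import defaultdict

def build_grid(points, cell=32):
    """Bucket keypoint positions (x, y) into square grid cells, giving
    O(1) lookup of the candidates near any query location."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x) // cell, int(y) // cell)].append(i)
    return grid

def candidates_near(grid, x, y, cell=32, radius=1):
    """Indices of keypoints whose cell lies within `radius` cells of
    (x, y); only these are fed to descriptor matching."""
    cx, cy = int(x) // cell, int(y) // cell
    out = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            out.extend(grid.get((cx + dx, cy + dy), []))
    return out
```

Restricting matching to a local neighbourhood is what keeps the per-frame cost flat as the number of stored objects grows, trading a small amount of memory for the index.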

Mid-level Feature Extraction Method Based Transfer Learning to Small-Scale Dataset of Medical Images with Visualizing Analysis

  • Lee, Dong-Ho;Li, Yan;Shin, Byeong-Seok
    • Journal of Information Processing Systems
    • /
    • Vol. 16, No. 6
    • /
    • pp.1293-1308
    • /
    • 2020
  • In fine-tuning-based transfer learning, the size of the dataset may affect learning accuracy. When the dataset is small, fine-tuning-based transfer-learning methods still incur computing costs similar to those for a large-scale dataset. We propose a mid-level feature extractor that retrains only the mid-level convolutional layers, resulting in increased efficiency and reduced computing costs. This mid-level feature extractor is likely to provide an effective alternative for training on a small-scale medical image dataset. Its performance is compared with that of low- and high-level feature extractors, as well as the fine-tuning method. First, the mid-level feature extractor takes a shorter time to converge than the other methods. Second, it shows good accuracy in validation-loss evaluation. Third, it achieves an area under the ROC curve (AUC) of 0.87 on an untrained test dataset that is very different from the training dataset. Fourth, it extracts clearer feature maps of the shape and parts of the chest in X-ray images than the fine-tuning method does.
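The AUC figure quoted above can be reproduced from raw classifier scores with the rank-sum identity; the small self-contained implementation below is equivalent in result to library routines such as scikit-learn's `roc_auc_score`:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties averaged)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):            # midranks for tied scores
        tied = scores == v
        ranks[tied] = ranks[tied].mean()
    n_pos = (labels == 1).sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Because it depends only on the ranking of scores, AUC is a natural choice for the paper's setting, where the test dataset's score distribution differs substantially from the training dataset's.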

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui;Li, Fei
    • Journal of Information Processing Systems
    • /
    • Vol. 18, No. 6
    • /
    • pp.794-802
    • /
    • 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. To generate depth maps with better details, we present an effective monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module refines features based on coordinate attention to improve prediction, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features and generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors in terms of depth and scale-invariant gradients, which helps preserve rich details. We conducted experiments on public RGBD datasets, and the evaluation results show that the proposed scheme considerably enhances the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
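The reported metrics and the scale-invariant component of the loss can be written down directly; the sketch below uses the standard formulations for these quantities (assumed, not taken from the paper) and shows why a global rescaling of the prediction leaves the scale-invariant term unchanged:

```python
import numpy as np

def scale_invariant_log_error(pred, gt):
    """Scale-invariant error: with d = log(pred) - log(gt), return
    mean(d^2) - mean(d)^2. A global scale factor on pred adds a constant
    to d, which this variance form cancels exactly."""
    d = np.log(pred) - np.log(gt)
    return np.mean(d**2) - np.mean(d)**2

def log10_error(pred, gt):
    """The log10 metric: mean absolute difference of base-10 log depths."""
    return np.mean(np.abs(np.log10(pred) - np.log10(gt)))

def delta_accuracy(pred, gt, power=3):
    """Threshold accuracy: fraction of pixels whose depth ratio
    max(pred/gt, gt/pred) is below 1.25**power (power=3 gives δ < 1.25³)."""
    ratio = np.maximum(pred / gt, gt / pred)
    return np.mean(ratio < 1.25**power)
```

These are the quantities behind the 0.051 (log10) and 0.992 (δ < 1.25³) figures; lower is better for the first two, higher for the third.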

EDMFEN: Edge detection-based multi-scale feature enhancement Network for low-light image enhancement

  • Canlin Li;Shun Song;Pengcheng Gao;Wei Huang;Lihua Bi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 4
    • /
    • pp.980-997
    • /
    • 2024
  • The main objective of low-light image enhancement (LLIE) is to improve the brightness of images and reveal information hidden in dark areas. LLIE methods based on deep learning show good performance, but they have some limitations: complex network models require highly configured environments, deficient enhancement of edge details blurs the target content, and single-scale feature extraction leads to insufficient recovery of the hidden content of the enhanced images. This paper proposes an edge-detection-based multi-scale feature enhancement network for LLIE (EDMFEN). To reduce the loss of edge details in the enhanced images, an edge extraction module based on the Sobel operator is introduced to obtain edge information by computing image gradients. In addition, a multi-scale feature enhancement module (MSFEM), consisting of multi-scale feature extraction blocks (MSFEBs) and a spatial attention mechanism, is proposed to thoroughly recover the hidden content of the enhanced images and obtain richer features. The MSFEB yields image features with different receptive fields; because the fused features may contain some useless information, the spatial attention module retains the key features after multi-scale fusion and improves model performance. Experimental results on two datasets and five baseline datasets show that EDMFEN performs well compared with state-of-the-art LLIE methods.
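The edge extraction module reduces to computing Sobel gradient magnitudes; a dependency-free sketch of that single step follows (the network built around it is, of course, far more involved):

```python
import numpy as np

def sobel_edges(img):
    """Edge map from Sobel gradients: correlate with the horizontal and
    vertical 3x3 Sobel kernels and take the gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):                       # accumulate the 3x3 window
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)                   # gradient magnitude
```

Flat regions produce zero response while intensity steps light up, which is the edge information the network uses to keep target contours sharp after enhancement.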

자기 위치 결정을 위한 SIFT 기반의 특징 지도 갱신 알고리즘 (An Algorithm of Feature Map Updating for Localization using Scale-Invariant Feature Transform)

  • 이재광;허욱열;김학일
    • Proceedings of the KIEE Conference
    • /
    • KIEE 2004 Symposium Proceedings, Information and Control Section
    • /
    • pp.141-143
    • /
    • 2004
  • This paper presents an algorithm in which a feature map is built and localization of a mobile robot is carried out in indoor environments. The algorithm extracts scale-invariant features of natural landmarks from a pair of stereo images. The feature map is built using these features and is updated by merging new landmarks into the map and removing transient landmarks over time. The position of the robot in the map is estimated by comparison with the map in a database by means of an extended Kalman filter. The algorithm is implemented and tested on a Pioneer 2-DXE, and preliminary results are presented in this paper.

  • PDF
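The merge/remove map-maintenance policy described in the abstract can be sketched as simple bookkeeping. Class and parameter names here are illustrative; the paper pairs this with an extended Kalman filter for pose estimation, which is omitted:

```python
class FeatureMap:
    """Landmark map: new landmarks are merged in, and landmarks not
    re-observed for `max_misses` consecutive updates are dropped as
    transient."""

    def __init__(self, max_misses=3):
        self.landmarks = {}        # id -> (descriptor, consecutive misses)
        self.max_misses = max_misses
        self._next = 0

    def update(self, observed):
        """`observed` is a list of (landmark_id or None, descriptor);
        None marks a newly seen landmark to merge into the map."""
        seen = set()
        for lid, desc in observed:
            if lid is None:                       # new landmark: merge
                lid = self._next
                self._next += 1
            self.landmarks[lid] = (desc, 0)       # reset miss counter
            seen.add(lid)
        for lid in list(self.landmarks):
            if lid not in seen:                   # not re-observed this frame
                desc, misses = self.landmarks[lid]
                if misses + 1 >= self.max_misses:
                    del self.landmarks[lid]       # transient: remove
                else:
                    self.landmarks[lid] = (desc, misses + 1)
```

Keeping only repeatedly observed landmarks is what makes the map stable enough to serve as the measurement source for EKF-based localization.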

단일카메라를 사용한 특징점 기반 물체 3차원 윤곽선 구성 (Constructing 3D Outlines of Objects based on Feature Points using Monocular Camera)

  • 박상현;이정욱;백두권
    • The KIPS Transactions: Part B
    • /
    • Vol. 17B, No. 6
    • /
    • pp.429-436
    • /
    • 2010
  • This paper proposes a method for constructing the 3D outline of an object from images acquired with a single camera. The MOPS (Multi-Scale Oriented Patches) algorithm detects the rough outline of the object and obtains the spatial coordinates of the feature points distributed along it. At the same time, the SIFT (Scale-Invariant Feature Transform) algorithm obtains the spatial coordinates of feature points lying inside the outline. Merging this information yields the complete 3D outline of the object. Because the proposed method constructs only a rough outline, it allows fast computation, and because the interior is supplemented with SIFT feature points, detailed 3D information about the object can be obtained.
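Obtaining the spatial coordinates of a feature point comes down to pinhole back-projection once a depth estimate is available; a minimal sketch follows (the intrinsics and the depth source are assumptions for illustration, not values from the paper):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover 3D camera-frame coordinates of pixel (u, v) with known
    depth Z, using pinhole intrinsics: X = (u - cx) Z / fx, etc."""
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])

def object_points_3d(outline_uvd, interior_uvd, fx, fy, cx, cy):
    """Merge outline points (MOPS) and interior points (SIFT), each given
    as (u, v, depth) triples, into one 3D point set for the object."""
    pts = list(outline_uvd) + list(interior_uvd)
    return np.array([backproject(u, v, d, fx, fy, cx, cy)
                     for u, v, d in pts])
```

The two detectors contribute complementary points to the same 3D set: MOPS supplies the coarse contour, while SIFT fills in interior detail.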

초음파 영상에서의 특징점 추출 방법 (Methods for Extracting Feature Points from Ultrasound Images)

  • 김성중;유재천
    • Proceedings of the KSCI Conference
    • /
    • Proceedings of the 61st KSCI Winter Conference (2020), Vol. 28, No. 1
    • /
    • pp.59-60
    • /
    • 2020
  • This paper proposes a method for extracting meaningful feature points using the SIFT (Scale-Invariant Feature Transform) algorithm, one of several feature-point extraction algorithms. Performance is verified by displaying the extracted feature points on actual images.

  • PDF