• Title/Abstract/Keyword: Visual Feature Extraction

Search results: 141 items (processing time: 0.026 seconds)

Adaptive Processing for Feature Extraction: Application of Two-Dimensional Gabor Function

  • Lee, Dong-Cheon
    • 대한원격탐사학회지
    • /
    • Vol. 17, No. 4
    • /
    • pp.319-334
    • /
    • 2001
  • Extracting primitives from imagery plays an important role in visual information processing, since primitives provide useful information about the characteristics of objects and patterns. The human visual system uses such features without difficulty for image interpretation, scene analysis and object recognition; however, extracting and analyzing features automatically is a difficult process. The ultimate goal of digital image processing is to extract information and reconstruct objects automatically, and the objective of this study is to develop a robust method toward that goal. In this study, an adaptive strategy was developed by implementing Gabor filters to extract feature information and to segment images. Gabor filters are conceived as hypothetical structures of the retinal receptive fields in the human visual system, so a method that resembles the performance of human visual perception can be developed using them. A method to compute appropriate parameters of the Gabor filters without human visual inspection is proposed, and the entire framework is based on the theory of human visual perception. Digital images were used to evaluate the performance of the proposed strategy. The results show that the proposed adaptive approach improves the performance of the Gabor filters for feature extraction and segmentation.
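As an illustration of Gabor-based feature extraction of this kind, the sketch below builds a small filter bank with OpenCV and clusters the per-pixel filter responses into segments. The fixed kernel parameters and the k-means step are assumptions made for the example; the paper's adaptive parameter computation is not reproduced.

```python
import cv2
import numpy as np

def gabor_features(gray, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    """Stack Gabor filter responses over several orientations as per-pixel features."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations: 0, 45, 90, 135 deg
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi=0, ktype=cv2.CV_32F)
        kernel /= np.abs(kernel).sum()            # normalize filter gain
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)           # H x W x 4 feature image

if __name__ == "__main__":
    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    feats = gabor_features(img).reshape(-1, 4)
    # Segment by clustering the per-pixel Gabor feature vectors (k = 3 regions).
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(feats, 3, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
    segmentation = labels.reshape(img.shape)
```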

CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ
    • /
    • pp.2108-2111
    • /
    • 2003
  • This paper introduces a new local feature extraction method and image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of each block. Each pattern has its own eigenvectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching method is verified by face localization and FLIR-vehicle-image classification tests.
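The abstract does not spell out how the classified eigen-blocks are constructed, so the following is only a hedged sketch: training blocks are grouped by dominant gradient direction, each group's eigenvectors are obtained by PCA, and a block's local feature is its projection onto its group's basis. The block size, the four direction classes, and the number of eigenvectors are illustrative choices, not the paper's.

```python
import numpy as np

def block_direction(block, n_classes=4):
    """Classify a block by its dominant gradient orientation (n_classes bins)."""
    gy, gx = np.gradient(block.astype(np.float64))
    angles = np.arctan2(gy, gx)
    hist, _ = np.histogram(angles, bins=n_classes, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return int(hist.argmax())

def fit_classified_eigen_blocks(images, bsize=8, n_eig=6):
    """Group training blocks by direction class and compute each class's eigenvectors."""
    buckets = {c: [] for c in range(4)}
    for img in images:
        for y in range(0, img.shape[0] - bsize + 1, bsize):
            for x in range(0, img.shape[1] - bsize + 1, bsize):
                b = img[y:y + bsize, x:x + bsize]
                buckets[block_direction(b)].append(b.ravel())
    bases = {}
    for c, blocks in buckets.items():
        if not blocks:
            continue
        X = np.asarray(blocks, dtype=np.float64)
        X -= X.mean(axis=0)
        _, _, vt = np.linalg.svd(X, full_matrices=False)  # rows of vt = eigenvectors
        bases[c] = vt[:n_eig]
    return bases

def block_feature(block, bases):
    """Local feature of one block: projection onto its class's eigen-basis."""
    return bases[block_direction(block)] @ block.ravel().astype(np.float64)
```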


보로노이-테셀레이션 알고리즘을 이용한 NUI를 위한 비주얼 터치 인식 (Visual Touch Recognition for NUI Using Voronoi-Tessellation Algorithm)

  • 김성관;주영훈
    • 전기학회논문지
    • /
    • Vol. 64, No. 3
    • /
    • pp.465-472
    • /
    • 2015
  • This paper presents visual touch recognition for an NUI (Natural User Interface) using the Voronoi-tessellation algorithm. The proposed method consists of three parts: hand region extraction, hand feature point extraction, and visual-touch recognition. To improve the robustness of hand region extraction, we propose using an RGB/HSI color model, the Canny edge detection algorithm, and spatial frequency information. To improve the accuracy of hand feature point extraction, we propose the use of the Douglas-Peucker algorithm, and to recognize the visual touch we propose the use of the Voronoi-tessellation algorithm. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
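A rough sketch of the three stages named above, using common OpenCV/SciPy building blocks. The HSV skin range, Canny thresholds, and the simplification tolerance are placeholders; the paper's RGB/HSI model, spatial-frequency step, and touch decision rule are not reproduced exactly.

```python
import cv2
import numpy as np
from scipy.spatial import Voronoi

def hand_region(bgr):
    """Coarse skin-color segmentation followed by Canny edge extraction."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))      # rough skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    edges = cv2.Canny(mask, 50, 150)
    return mask, edges

def hand_feature_points(mask, epsilon_ratio=0.01):
    """Simplify the largest hand contour with the Douglas-Peucker algorithm."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    eps = epsilon_ratio * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, eps, True).reshape(-1, 2)

def voronoi_over_features(points):
    """Voronoi tessellation of the simplified feature points (cells used for touch)."""
    return Voronoi(points.astype(np.float64))
```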

Visual Feature Extraction Technique for Content-Based Image Retrieval

  • Park, Won-Bae;Song, Young-Jun;Kwon, Heak-Bong;Ahn, Jae-Hyeong
    • 한국멀티미디어학회논문지
    • /
    • Vol. 7, No. 12
    • /
    • pp.1671-1679
    • /
    • 2004
  • This study proposes visual-feature extraction methods for each band in the wavelet domain, capturing both spatial frequency features and multi-resolution features. In addition, it presents a similarity measurement method using fuzzy theory and a new color feature representation that uses the frequency of each color after color quantization, reducing the quantization error that is a disadvantage of the existing color histogram intersection method. Experiments were performed on a database of 1,000 color images. The proposed method gives better performance than the conventional method in both objective and subjective evaluations.
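The following sketch illustrates the two ingredients in simplified form: band-wise wavelet statistics (using PyWavelets) and a quantized-color histogram compared by intersection. The statistics chosen per band and the plain intersection measure are assumptions; the paper's fuzzy similarity is only approximated here.

```python
import numpy as np
import pywt

def wavelet_band_features(gray, wavelet="db2", level=2):
    """Mean absolute value and standard deviation of every wavelet sub-band."""
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)
    bands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    feats = []
    for band in bands:
        feats += [np.mean(np.abs(band)), np.std(band)]
    return np.asarray(feats)                      # 2 * (3 * level + 1) values

def quantized_color_histogram(rgb, bins=8):
    """Normalized histogram over coarsely quantized colors."""
    q = (rgb.astype(np.int64) // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection, standing in for the paper's fuzzy measure."""
    return float(np.minimum(h1, h2).sum())
```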


Efficient Content-Based Image Retrieval Methods Using Color and Texture

  • Lee, Sang-Mi;Bae, Hee-Jung;Jung, Sung-Hwan
    • ETRI Journal
    • /
    • Vol. 20, No. 3
    • /
    • pp.272-283
    • /
    • 1998
  • In this paper, we propose efficient content-based image retrieval methods using automatic extraction of low-level visual features as image content. Two new feature extraction methods are presented. The first is an advanced color feature extraction derived from a modification of Stricker's method. The second is a texture feature extraction using DCT coefficients that represent dominant directions and gray-level variations of the image. In an experiment with an image database of 200 natural images, the proposed methods show higher performance than other methods, and they can be combined into an efficient hierarchical retrieval method.
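An illustrative sketch of the two feature types: Stricker-style color moments (mean, standard deviation, skewness per channel) and low-frequency block-DCT magnitudes as a texture descriptor. The exact coefficient selection used in the paper is not stated in the abstract, so the AC coefficients picked here are an assumption.

```python
import numpy as np
from scipy.fftpack import dct

def color_moments(rgb):
    """Mean, standard deviation, and skewness per channel (9 values in total)."""
    x = rgb.reshape(-1, 3).astype(np.float64)
    mean = x.mean(axis=0)
    std = x.std(axis=0)
    skew = np.cbrt(((x - mean) ** 3).mean(axis=0))
    return np.concatenate([mean, std, skew])

def dct_texture(gray, block=8):
    """Average magnitude of a few low-frequency 2-D DCT coefficients over 8x8 blocks."""
    g = gray.astype(np.float64)
    acc, count = np.zeros(6), 0
    for y in range(0, g.shape[0] - block + 1, block):
        for x in range(0, g.shape[1] - block + 1, block):
            b = g[y:y + block, x:x + block]
            c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            # The first few AC coefficients capture dominant directions and
            # gray-level variation within the block.
            acc += np.abs([c[0, 1], c[1, 0], c[1, 1], c[0, 2], c[2, 0], c[2, 2]])
            count += 1
    return acc / max(count, 1)
```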


객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템 (A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map)

  • 최경주
    • 한국멀티미디어학회논문지
    • /
    • Vol. 18, No. 4
    • /
    • pp.460-472
    • /
    • 2015
  • Most previous visual attention systems find attention regions based on a saliency map formed by combining multiple extracted features; these systems differ mainly in their feature extraction and combination methods. This paper presents a new system with improved feature extraction for color and motion and an improved weight decision method for spatial and temporal features. The system dynamically extracts the one color that has the strongest response among two opponent colors, and detects moving objects rather than moving pixels. To combine the spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration method improves the detection rate of attention regions.
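The sketch below shows the fusion idea in a minimal form: a color-opponency map and a frame-difference motion map are combined with weights set by each map's relative activity. The object-based grouping and the paper's exact cues and weighting rule are simplified; the formulas here are illustrative only.

```python
import cv2
import numpy as np

def color_opponency_map(bgr):
    """Keep the stronger of the two opponent-color responses (red-green vs blue-yellow)."""
    b, g, r = cv2.split(bgr.astype(np.float64) / 255.0)
    rg = np.abs(r - g)
    by = np.abs(b - (r + g) / 2.0)
    return rg if rg.mean() > by.mean() else by

def motion_map(prev_gray, gray):
    """Simple temporal response from frame differencing."""
    return np.abs(gray.astype(np.float64) - prev_gray.astype(np.float64)) / 255.0

def fused_saliency(spatial, temporal):
    """Weight each cue by its relative activity (mean response) before summing."""
    a_s, a_t = spatial.mean(), temporal.mean()
    w_s = a_s / (a_s + a_t + 1e-8)
    sal = w_s * spatial + (1.0 - w_s) * temporal
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```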

표고 외관 특징점의 자동 추출 및 측정 (Automatic Extraction and Measurement of Visual Features of Mushroom (Lentinus edodes L.))

  • 황헌;이용국
    • 생물환경조절학회지
    • /
    • Vol. 1, No. 1
    • /
    • pp.37-51
    • /
    • 1992
  • Quantizing and extracting visual features of mushrooms (Lentinus edodes L.) is crucial to sorting and grading automation, growth state measurement, and dried-performance indexing. A computer image processing system was used for the extraction and measurement of visual features of the front and back sides of the mushroom. The system is composed of an IBM PC compatible 386DX, an ITEX PCVISION Plus frame grabber, a B/W CCD camera, a VGA color graphic monitor, and an image output RGB monitor. In this paper, an automatic thresholding algorithm was developed to yield a segmented binary image representing the skin state of the front and back sides. An eight-directional Freeman chain coding was modified to solve edge disconnectivity by gradually expanding the mask size from 3×3 to 9×9. Real-scaled geometric quantities of the object were extracted directly from the 8-directional chain elements. The external shape of the mushroom was analyzed and converted into quantitative feature patterns. Efficient algorithms for extracting the selected feature patterns and recognizing the front and back sides were developed. The algorithms were coded in a menu-driven way using the MS-C language Ver. 6.0, PC VISION PLUS library functions, and VGA graphic functions.
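Two of the steps described above lend themselves to a short sketch: automatic (Otsu) thresholding of the gray-level image and an 8-directional Freeman chain code of the resulting boundary, from which real-scaled quantities such as the perimeter follow. The adaptive 3×3-to-9×9 mask expansion for reconnecting broken edges is omitted, and OpenCV stands in for the original frame-grabber library.

```python
import cv2
import numpy as np

# Freeman chain-code directions in (row, col) steps: 0 = east, counter-clockwise.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def segment(gray):
    """Automatic (Otsu) thresholding into a binary object mask (gray must be uint8)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def chain_code(binary):
    """Freeman chain code of the largest external contour."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)[:, ::-1]   # (row, col)
    code = []
    for p, q in zip(pts, np.roll(pts, -1, axis=0)):
        step = (int(q[0] - p[0]), int(q[1] - p[1]))
        if step in DIRS:
            code.append(DIRS.index(step))
    return code

def perimeter_from_chain(code):
    """Real-scaled perimeter: diagonal chain elements count as sqrt(2)."""
    return sum(np.sqrt(2.0) if c % 2 else 1.0 for c in code)
```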


다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM (A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation)

  • 박근형;조형기
    • 대한임베디드공학회논문지
    • /
    • Vol. 19, No. 1
    • /
    • pp.65-71
    • /
    • 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as key-points and line edges. SLAM performance may degrade under various environmental changes, and the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, using multi-channel dynamic object estimation. An optical-flow-based algorithm and a deep-learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method combines the two kinds of dynamic object information to create multi-channel dynamic masks, so that both actually moving dynamic objects and potentially dynamic objects can be identified. Finally, dynamic objects included in the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. Its superiority over the conventional ORB-SLAM was verified by experiments using the KITTI odometry dataset.
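A simplified sketch of the multi-channel masking idea: one mask from dense optical flow (regions that are actually moving) and one from detector bounding boxes (potentially dynamic objects), with ORB keypoints extracted only outside the combined mask. The detector, thresholds, and the SLAM back-end integration are placeholders, not the paper's implementation.

```python
import cv2
import numpy as np

def flow_mask(prev_gray, gray, mag_thresh=2.0):
    """Mask of pixels whose dense optical-flow magnitude exceeds a threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return (np.linalg.norm(flow, axis=2) > mag_thresh).astype(np.uint8)

def detection_mask(shape, boxes):
    """Mask covering bounding boxes of potentially dynamic object classes."""
    mask = np.zeros(shape, np.uint8)
    for x1, y1, x2, y2 in boxes:                  # boxes from any object detector
        mask[y1:y2, x1:x2] = 1
    return mask

def static_orb_features(gray, dynamic_mask, n_features=2000):
    """Extract ORB keypoints/descriptors only outside the combined dynamic mask."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keep = np.where(dynamic_mask == 0, 255, 0).astype(np.uint8)
    return orb.detectAndCompute(gray, keep)
```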

건표고의 외관특징 인식 및 추출 알고리즘 개발 (Development of Robust Feature Recognition and Extraction Algorithm for Dried Oak Mushrooms)

  • 이충호;황헌
    • Journal of Biosystems Engineering
    • /
    • Vol. 21, No. 3
    • /
    • pp.325-335
    • /
    • 1996
  • The visual features of oak mushrooms play an important role as quantitative measures of the growth state during cultivation, as quantitative indices of drying performance during drying, and as factors in judging the quality of dried oak mushrooms. In this paper, an algorithm was developed that applies computer vision and neural network techniques to quantitatively extract the visual features distributed over the cap and inner skin of the mushroom. Grading based on empirical decision rules or fixed numerical decision conditions derived from conventional image processing is prone to errors caused by missing or ambiguous input data. By introducing neural-network-based image recognition, the diverse and ambiguous visual features of the mushroom are processed efficiently, reducing the errors produced by conventional image processing algorithms. The proposed algorithm includes recognition of the cap and inner-skin surfaces, feature segmentation, and detection, removal, and restoration of the stem region. Based on the proposed algorithm, the quality factors important for grading dried oak mushrooms were extracted and quantified. The algorithm was developed using gray-level (multi-valued) monochrome input images.
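As a minimal sketch of the recognition stage, the code below trains a small neural network (an MLP) that maps a vector of quantified visual features to a class label such as cap side versus inner side. The feature list, network size, and the use of scikit-learn are assumptions; the abstract does not specify the paper's network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_surface_classifier(features, labels):
    """features: N x D array of quantified visual features per mushroom image
    (e.g. area, perimeter, roundness, mean gray level); labels: class per sample
    (e.g. 0 = cap side, 1 = inner side, or grade classes)."""
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(np.asarray(features, dtype=float), np.asarray(labels))
    return clf

def classify_surface(clf, feature_vector):
    """Predict the class of one mushroom image from its feature vector."""
    x = np.asarray(feature_vector, dtype=float).reshape(1, -1)
    return int(clf.predict(x)[0])
```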


열화상 이미지 히스토그램의 가우시안 혼합 모델 근사를 통한 열화상-관성 센서 오도메트리 (Infrared Visual Inertial Odometry via Gaussian Mixture Model Approximation of Thermal Image Histogram)

  • 신재호;전명환;김아영
    • 로봇학회논문지
    • /
    • Vol. 18, No. 3
    • /
    • pp.260-270
    • /
    • 2023
  • We introduce a novel Visual Inertial Odometry (VIO) algorithm designed to improve the performance of thermal-inertial odometry. Thermal infrared images, though advantageous for feature extraction in low-light conditions, typically suffer from a high noise level and significant information loss during 8-bit conversion. Our algorithm overcomes these limitations by approximating the 14-bit raw pixel histogram with a Gaussian mixture model. The conversion method effectively emphasizes image regions where texture for visual tracking is abundant while reducing unnecessary background information. We incorporate robust learning-based feature extraction and matching methods, SuperPoint and SuperGlue, and a zero-velocity detection module to further reduce the uncertainty of visual odometry. Tested across various datasets, the proposed algorithm shows improved performance compared to other state-of-the-art VIO algorithms, paving the way for robust thermal-inertial odometry.
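A hedged sketch of the histogram-approximation idea: fit a Gaussian mixture to the raw 14-bit thermal values and use the mixture's cumulative density as a tone curve, so that value ranges where the density (and hence usable texture) concentrates receive a wider share of the 8-bit output range. The component count, subsampling, and lookup-table mapping are assumptions, not the paper's exact conversion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_rescale(raw14, n_components=3, n_samples=20000):
    """Map 14-bit thermal values to 8 bits with a GMM-based tone curve."""
    pixels = raw14.reshape(-1, 1).astype(np.float64)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(pixels), size=min(n_samples, len(pixels)), replace=False)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels[idx])

    # Cumulative density of the fitted mixture over the 14-bit value range acts
    # as a tone curve: dense (texture-rich) value ranges get more of [0, 255].
    grid = np.arange(0, 2 ** 14, dtype=np.float64).reshape(-1, 1)
    cdf = np.cumsum(np.exp(gmm.score_samples(grid)))
    cdf /= cdf[-1]
    lut = np.round(cdf * 255.0).astype(np.uint8)
    return lut[np.clip(raw14, 0, 2 ** 14 - 1).astype(np.int64)]
```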