• Title/Summary/Keyword: Texture Feature

Search results: 435 items, processing time 0.035 seconds

축소변환된 의료 이미지의 질감 특징 추출과 인덱싱 (An Extracting and Indexing Schema of Compressed Medical Images)

  • 위희정;엄기현
    • 한국멀티미디어학회:학술대회논문집
    • /
    • 한국멀티미디어학회 2000년도 춘계학술발표논문집
    • /
    • pp.328-331
    • /
    • 2000
  • In this paper, we propose a texture feature extraction method that reduces the massive computational time of extracting texture features from large-sized medical images such as MRI and CT scans, together with an index structure, called GLTFT, to speed up retrieval performance. For these, the original image is transformed into a compressed image by the Wavelet transform, and textural features of the compressed image such as contrast, energy, entropy, and homogeneity are extracted using the GLCM (Gray Level Co-occurrence Matrix). The proposed index structure is organized from these textural features. Processing in the compressed domain saves storage space and reduces the computational time of feature extraction, and the GLTFT index structure can be expected to improve image retrieval performance by reducing the retrieval range. Our experiment on an image database of 270 MRIs shows that this expectation holds. (A sketch of the GLCM feature-extraction step follows this entry.)

  • PDF
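
A minimal sketch of the compressed-domain GLCM feature extraction described in this entry, assuming the PyWavelets and scikit-image APIs; the GLTFT index structure itself is not shown, and the single decomposition level and 32 gray levels are illustrative assumptions.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image, levels=32):
    """Wavelet-compress an image, then extract GLCM texture features."""
    # One-level 2-D wavelet transform; keep the approximation sub-band
    # as the "compressed" image.
    approx, _ = pywt.dwt2(image.astype(float), "haar")

    # Quantize the sub-band to a small number of gray levels for the GLCM.
    q = np.digitize(approx, np.linspace(approx.min(), approx.max(), levels)) - 1
    q = np.clip(q, 0, levels - 1).astype(np.uint8)

    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "energy": graycoprops(glcm, "energy")[0, 0],
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
        "entropy": entropy,
    }
```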

무인차량 적용을 위한 영상 기반의 지형 분류 기법 (Vision Based Outdoor Terrain Classification for Unmanned Ground Vehicles)

  • 성기열;곽동민;이승연;유준
    • 제어로봇시스템학회논문지
    • /
    • Vol. 15, No. 4
    • /
    • pp.372-378
    • /
    • 2009
  • For effective mobility control of unmanned ground vehicles in outdoor off-road environments, terrain-cover classification technology using passive sensors is vital. This paper presents a novel method for terrain classification based on the color and texture information of off-road images. It uses a neural network classifier and wavelet features. We exploit the wavelet mean and energy features extracted from multi-channel wavelet-transformed images and also utilize the spatial coordinates of the terrain classes in the images as additional features. By comparing the classification performance across the applied features, the experimental results show that the proposed algorithm is promising and has potential for autonomous navigation. (A sketch of the wavelet feature extraction appears below.)
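
A hedged sketch of the wavelet mean/energy features named in this entry, computed per color channel with PyWavelets; the neural network classifier and the spatial-coordinate features are omitted, and the wavelet and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_mean_energy(rgb_patch, wavelet="db1", level=2):
    """Return the mean and energy of every wavelet sub-band, per color channel."""
    feats = []
    for c in range(rgb_patch.shape[2]):  # iterate over the R, G, B channels
        coeffs = pywt.wavedec2(rgb_patch[:, :, c].astype(float), wavelet, level=level)
        # coeffs[0] is the approximation; the rest are (cH, cV, cD) tuples.
        bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
        for band in bands:
            feats.append(band.mean())         # sub-band mean
            feats.append(np.mean(band ** 2))  # sub-band energy
    return np.array(feats)
```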

Sparse Representation based Two-dimensional Bar Code Image Super-resolution

  • Shen, Yiling;Liu, Ningzhong;Sun, Han
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 11, No. 4
    • /
    • pp.2109-2123
    • /
    • 2017
  • This paper presents a super-resolution reconstruction method based on sparse representation for two-dimensional bar code images. Considering the characteristics of two-dimensional bar code images, the Kirsch and LBP (local binary pattern) operators are used to extract edge gradient and texture features. The feature representation is built from these two features together with two additional second-order derivatives. Through joint dictionary learning over low-resolution and high-resolution image patch pairs, corresponding patches share the same sparse representation. In addition, a global constraint is imposed on the initial estimate of the high-resolution image, which brings the reconstructed result closer to the real one. The experimental results demonstrate the effectiveness of the proposed algorithm for two-dimensional bar code images in comparison with other reconstruction algorithms. (A sketch of the Kirsch and LBP feature extraction appears below.)
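
A minimal sketch of the Kirsch and LBP feature extraction mentioned in this entry, using SciPy and scikit-image; the second-order derivatives, dictionary learning, and reconstruction stages are not shown, and the kernel rotation scheme is one common way to generate the eight compass masks.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import local_binary_pattern

# One Kirsch compass kernel; the other seven are produced by rotating its outer ring.
BASE = np.array([[ 5,  5,  5],
                 [-3,  0, -3],
                 [-3, -3, -3]], dtype=float)

def kirsch_edge(image):
    """Maximum response over the eight Kirsch compass directions."""
    k = BASE.copy()
    responses = []
    for _ in range(8):
        responses.append(convolve(image.astype(float), k, mode="reflect"))
        # Rotate the outer ring of the 3x3 kernel by one position (45 degrees).
        ring = [k[0, 0], k[0, 1], k[0, 2], k[1, 2], k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
        ring = ring[-1:] + ring[:-1]
        (k[0, 0], k[0, 1], k[0, 2], k[1, 2], k[2, 2], k[2, 1], k[2, 0], k[1, 0]) = ring
    return np.max(responses, axis=0)

def lbp_texture(image, points=8, radius=1):
    """Plain LBP codes used as a per-pixel texture map."""
    return local_binary_pattern(image, points, radius, method="default")
```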

변형된 지역 Gabor Feature를 이용한 VQ 기반의 영상 검색 (Image Retrieval using VQ based Local Modified Gabor Feature)

  • 신대규;김현술;박상희
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2001년도 하계학술대회 논문집 D
    • /
    • pp.2634-2636
    • /
    • 2001
  • This paper proposes a new method for retrieving images from large image databases. The method is based on VQ (Vector Quantization) of local texture information at interest points automatically detected in an image. The texture features are extracted by a Gabor wavelet filter bank and rearranged to account for rotation. These features are classified by VQ and then used to construct a pattern histogram. Retrieval is performed simply by comparing pattern histograms between images. Experimental results have shown that the proposed method is robust to image rotation, small scale changes, noise addition, and brightness changes, and that retrieval with a partial image is also possible. (A sketch of the Gabor filter bank and VQ pattern histogram follows this entry.)

  • PDF
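
A hedged sketch of Gabor filter-bank features quantized into a pattern histogram, loosely following the retrieval scheme in this entry; interest-point detection and the rotation rearrangement are omitted, and the frequencies, orientation count, and codebook size are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Stack per-pixel Gabor magnitude responses into feature vectors."""
    responses = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            responses.append(np.hypot(real, imag))
    # Shape: (n_pixels, n_filters)
    return np.stack(responses, axis=-1).reshape(-1, len(responses))

def pattern_histogram(features, codebook):
    """Quantize feature vectors with a trained codebook and histogram the codes."""
    codes = codebook.predict(features)
    hist, _ = np.histogram(codes, bins=np.arange(codebook.n_clusters + 1))
    return hist / hist.sum()

# Usage sketch: fit the codebook on features pooled from the database images,
# then compare normalized histograms (e.g. by L1 distance) at query time.
# codebook = KMeans(n_clusters=64, random_state=0).fit(pooled_features)
```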

A Classification Technique for Panchromatic Imagery Using Independent Component Analysis Feature Extraction

  • Byoun, Seung-Gun;Lee, Ho-Yong;Kim, Min;Lee, Kwae-Hi
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2002년도 Proceedings of International Symposium on Remote Sensing
    • /
    • pp.23-28
    • /
    • 2002
  • Among effective feature extraction methods for small-patched image sets, independent component analysis (ICA) is a recently well-known stochastic method for finding informative basis images. ICA learns the basis images and the independent components simultaneously using higher-order statistics, because the information underlying the relationships between pixels is sensitive to higher-order statistical models. The topographic ICA model is adopted in our experiment. This paper presents an unsupervised classification strategy using the learned ICA basis images. The experimental results show that the proposed classification technique outperforms classic texture analysis techniques on panchromatic KOMPSAT imagery. (A sketch of ICA basis learning follows this entry.)

  • PDF
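
A minimal sketch of learning ICA basis images from small image patches with scikit-learn's FastICA; this is plain ICA rather than the topographic variant used in the paper, and the patch size, component count, and patch count are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def learn_ica_bases(image, patch_size=(8, 8), n_components=32, n_patches=5000):
    """Learn independent basis images from random patches of one image."""
    patches = extract_patches_2d(image.astype(float), patch_size,
                                 max_patches=n_patches, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=0)                      # remove the mean (DC) component
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    ica.fit(X)
    # Each row of mixing_.T is one learned basis image (flattened patch).
    return ica.mixing_.T.reshape(n_components, *patch_size)
```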

신경망 기반의 텍스춰 분석을 이용한 효율적인 문자 추출 (Efficient Text Localization using MLP-based Texture Classification)

  • 정기철;김광인;한정현
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 29, No. 3
    • /
    • pp.180-191
    • /
    • 2002
  • This paper proposes a texture-based method for localizing text in images using an MLP and the MultiCAMShift algorithm. The MLP-based texture analyzer effectively generates a text probability image for input images from diverse environments without a separate feature extraction step, and the MultiCAMShift algorithm, operating on the text probability image, can extract text regions efficiently using only local search. (A sketch of the MLP texture classification appears below.)
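
A hedged sketch of an MLP that maps raw gray-level windows directly to a text probability image, in the spirit of the method in this entry; the MultiCAMShift stage is not shown, and the window size, stride, and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

WIN = 15  # assumed window size in pixels

def train_text_classifier(windows, labels):
    """windows: (N, WIN*WIN) raw gray-level patches; labels: 1 = text, 0 = non-text."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(windows / 255.0, labels)
    return clf

def text_probability_image(gray, clf, stride=4):
    """Slide a window over the image and fill a text probability map."""
    h, w = gray.shape
    prob = np.zeros((h, w))
    for y in range(0, h - WIN, stride):
        for x in range(0, w - WIN, stride):
            patch = gray[y:y + WIN, x:x + WIN].reshape(1, -1) / 255.0
            prob[y:y + WIN, x:x + WIN] = clf.predict_proba(patch)[0, 1]
    return prob
```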

A TRUS Prostate Segmentation using Gabor Texture Features and Snake-like Contour

  • Kim, Sung Gyun;Seo, Yeong Geon
    • Journal of Information Processing Systems
    • /
    • Vol. 9, No. 1
    • /
    • pp.103-116
    • /
    • 2013
  • Prostate cancer is one of the most frequent cancers in men and a major cause of mortality in most countries. Many diagnostic and treatment procedures for prostate disease require accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images. This is a challenging and difficult task due to weak prostate boundaries, speckle noise, and the short range of gray levels. In this paper, a method for automatic prostate segmentation in TRUS images using Gabor feature extraction and a snake-like contour is presented. The method involves preprocessing, Gabor feature extraction, training, and prostate segmentation. Speckle reduction in the preprocessing step is achieved with a stick filter, and a top-hat transform is applied to smooth the contour. A Gabor filter bank is implemented to extract rotation-invariant texture features. A support vector machine (SVM) is used in the training step to learn prostate and non-prostate features. Finally, the prostate boundary is extracted by the snake-like contour algorithm. A number of experiments were conducted to validate this method, and the results showed that the new algorithm extracted the prostate boundary with less than 10.2% error relative to the boundary provided manually by experts. (A sketch of the Gabor-feature/SVM classification step appears below.)
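
A rough sketch of the Gabor-feature/SVM step from this entry: rotation invariance is approximated here by pooling Gabor magnitudes over orientations, and an SVM separates prostate from non-prostate samples; the stick filter, top-hat smoothing, and snake-like contour stages are not reproduced, and the frequencies and orientation count are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def rot_invariant_gabor(image, frequencies=(0.1, 0.2), n_orient=6):
    """Per-pixel features: Gabor magnitude pooled (max) over orientations."""
    feats = []
    for f in frequencies:
        mags = [np.hypot(*gabor(image, frequency=f, theta=k * np.pi / n_orient))
                for k in range(n_orient)]
        feats.append(np.max(mags, axis=0))  # orientation pooling for rotation invariance
    return np.stack(feats, axis=-1).reshape(-1, len(frequencies))

# Usage sketch: label a subset of pixels as prostate / non-prostate and train
# an SVM on their pooled Gabor features.
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
```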

An Improved Texture Feature Extraction Method for Recognizing Emphysema in CT Images

  • Peng, Shao-Hu;Nam, Hyun-Do
    • 조명전기설비학회논문지
    • /
    • Vol. 24, No. 11
    • /
    • pp.30-41
    • /
    • 2010
  • In this study, we propose a new texture feature extraction method based on an estimation of the brightness and structural uniformity of CT images, which are important characteristics for emphysema recognition. The Center-Symmetric Local Binary Pattern (CS-LBP) is first combined with the gray level to describe the brightness uniformity of the CT image. A gradient orientation difference is then proposed to generate another CS-LBP code, again combined with the gray level, to represent the structural uniformity of the CT image. The use of gray level, CS-LBP, and gradient orientation differences enables the proposed method to extract rich and distinctive information from CT images in multiple directions. Experimental results showed that the performance of the proposed method is more stable with respect to sensitivity and specificity than the SGLDM, GLRLM, and GLDM. The proposed method outperformed these three conventional methods by 7.85%, 22.87%, and 16.67%, respectively, in average diagnostic accuracy, as demonstrated by the Receiver Operating Characteristic (ROC) curves. (A sketch of the CS-LBP code appears below.)
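
A minimal sketch of the plain Center-Symmetric LBP (CS-LBP) code referred to in this entry; the gradient-orientation-difference variant and the joint coding with gray level are not shown, and the threshold is an assumption.

```python
import numpy as np

def cs_lbp(gray, threshold=0.0):
    """4-bit CS-LBP codes: compare the four center-symmetric neighbor pairs."""
    g = gray.astype(float)
    # The four center-symmetric pairs of the 8-neighborhood, as interior slices.
    pairs = [
        (g[:-2, 1:-1], g[2:, 1:-1]),   # N  vs S
        (g[:-2, 2:],   g[2:, :-2]),    # NE vs SW
        (g[1:-1, 2:],  g[1:-1, :-2]),  # E  vs W
        (g[2:, 2:],    g[:-2, :-2]),   # SE vs NW
    ]
    code = np.zeros(g[1:-1, 1:-1].shape, dtype=int)
    for bit, (a, b) in enumerate(pairs):
        code += ((a - b) > threshold).astype(int) << bit
    return code.astype(np.uint8)  # values in 0..15, undefined at the image border
```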

그래픽 하드웨어 가속을 이용한 실시간 색상 인식 (Real-time Color Recognition Based on Graphic Hardware Acceleration)

  • 김구진;윤지영;최유주
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터
    • /
    • Vol. 14, No. 1
    • /
    • pp.1-12
    • /
    • 2008
  • This paper presents a GPU (Graphics Processing Unit) based algorithm that recognizes vehicle color in real time from vehicle images captured outdoors and indoors. In the preprocessing stage, feature vectors are computed from sample images of each vehicle color, combined per color, and stored as a reference texture to be used on the GPU. When a vehicle image is given as input, its feature vector is computed and transferred to the GPU; the GPU measures the per-color similarity by comparing the input against the sample feature vectors in the reference texture and returns the results to the CPU, which identifies the corresponding color name. The classification targets are seven colors chosen from the most common vehicle colors: three achromatic colors (black, silver, white) and four chromatic colors (red, yellow, blue, green). The feature vector of a vehicle image is constructed by applying the HSI (Hue-Saturation-Intensity) color model, building color histograms over the hue-saturation and hue-intensity combinations, and weighting the saturation values. By using a large number of sample feature vectors captured in various environments, constructing feature vectors that clearly reflect the characteristics of each color, and applying a suitable similarity (likelihood) function, the proposed algorithm achieved a color recognition success rate of 94.67%. In addition, the GPU was used to parallelize the similarity measurement between the large set of sample feature vectors and the input image's feature vector, as well as the color recognition process. In the experiments, the reference texture used on the GPU was built from 7,168 vehicle sample images, 1,024 per color. The time required to construct a feature vector depends on the size of the input image; for an input image of resolution 150×113, it took 0.509 ms on average. Color recognition using the computed feature vector took 2.316 ms on average, which is 5.47 times faster than running the same algorithm on the CPU. Although the experiments in this study targeted only vehicles, the proposed algorithm can be extended to color recognition of general objects. (A sketch of the hue-saturation/hue-intensity histogram feature appears below.)
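
A hedged CPU-side sketch of the color feature described in this entry: joint hue-saturation and hue-intensity histograms with a saturation weight. HSV is used here as a stand-in for the paper's HSI model, and the bin count and weight are assumptions; the GPU reference-texture matching is not reproduced.

```python
import numpy as np
from skimage.color import rgb2hsv

def color_feature(rgb, bins=16, sat_weight=2.0):
    """Concatenate weighted hue-saturation and hue-value 2-D histograms."""
    hsv = rgb2hsv(rgb)                       # each channel in [0, 1]
    h, s, v = hsv[..., 0].ravel(), hsv[..., 1].ravel(), hsv[..., 2].ravel()
    hs, _, _ = np.histogram2d(h, s, bins=bins, range=[[0, 1], [0, 1]])
    hv, _, _ = np.histogram2d(h, v, bins=bins, range=[[0, 1], [0, 1]])
    feat = np.concatenate([sat_weight * hs.ravel(), hv.ravel()])
    return feat / feat.sum()                 # normalize before similarity comparison
```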