• Title/Abstract/Keyword: Texture Feature

Search results: 436 items (processing time: 0.024 seconds)

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen;Wang, Minjuan;Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 14, No. 11, pp. 4395-4412, 2020
  • Multisource image fusion has become an active research topic in recent years owing to the higher segmentation rates it enables. To improve the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot preserve strong contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, was presented. Firstly, the multisource images were decomposed into a set of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) were used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients were reconstructed into a fusion image using the inverse NSST. Finally, the shape feature was extracted using an automatic thresholding algorithm and refined by morphological operations, and the highest pig-body temperature was obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieved a 2.102-4.066% higher average accuracy rate than the traditional algorithms while also improving efficiency.
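  • A minimal illustration of the decompose-fuse-reconstruct pipeline sketched in this abstract is given below. It is not the authors' NSST-GF-IPCNN implementation: a plain wavelet transform (PyWavelets) stands in for NSST, and simple averaging / maximum-absolute rules stand in for the Gabor-filter and IPCNN fusion steps; all names and parameters are illustrative assumptions.

```python
# Minimal multiscale fusion sketch (NOT the paper's NSST-GF-IPCNN method):
# a wavelet transform stands in for NSST, and simple rules stand in for
# the Gabor-filter / IPCNN fusion of low- and high-frequency subbands.
import numpy as np
import pywt

def fuse_multiscale(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered single-channel images of equal size."""
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=levels)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=levels)

    # Low-frequency (approximation) subband: average the two sources.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

    # High-frequency (detail) subbands: keep the coefficient with the
    # larger magnitude, a common stand-in for activity-based rules.
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))

    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    visible = rng.random((128, 128))   # stand-in visible-band image
    thermal = rng.random((128, 128))   # stand-in thermal image
    fused_image = fuse_multiscale(visible, thermal)
    print(fused_image.shape)
```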

Feature-Based Image Retrieval using SOM-Based R*-Tree

  • Shin, Min-Hwa;Kwon, Chang-Hee;Bae, Sang-Hyun
    • 한국산학기술학회:학술대회논문집, 2003 Proceedings, pp. 223-230, 2003
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects (e.g., documents, images, video, music scores, etc.). For example, images are represented by their color histograms, texture vectors, and shape descriptors, and are usually high-dimensional data. The performance of conventional multidimensional data structures (e.g., the R-tree family, K-D-B-tree, grid file, TV-tree) tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-tree is the most successful variant of the R-tree. In this paper, we propose an SOM-based R*-tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-tree combines the SOM and the R*-tree to achieve search performance that scales better to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space that preserves the topology of the feature vectors. The resulting map, called a topological feature map, preserves the mutual relationships (similarity) in the feature space of the input data, clustering mutually similar feature vectors into neighboring nodes. Each node of the topological feature map holds a codebook vector, and a best-matching-image list (BMIL) holds the similar images closest to each codebook vector. A topological feature map contains empty nodes into which no image is classified. When we build the R*-tree, we use only the codebook vectors of the topological feature map, which eliminates the empty nodes that cause unnecessary disk accesses and degrade retrieval performance. We experimentally compare the retrieval time cost of the SOM-based R*-tree with that of an SOM and an R*-tree using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-tree outperforms both the SOM and the R*-tree, owing to the reduced number of nodes required to build the R*-tree and the lower retrieval time cost.
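  • The SOM stage and the best-matching-image list (BMIL) described above can be sketched roughly as follows; the R*-tree built over the non-empty codebook vectors is omitted, and the grid size, learning schedule, and stand-in color features are assumptions rather than the authors' settings.

```python
# Toy sketch of the SOM + BMIL stage: train a small 2-D map on feature
# vectors, then list the images closest to each codebook vector.
import numpy as np

def train_som(features, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = features.shape
    codebook = rng.random((grid[0], grid[1], d))
    ys, xs = np.indices(grid)
    for t in range(iters):
        x = features[rng.integers(n)]
        # Best-matching unit (BMU) for this sample.
        dist = np.linalg.norm(codebook - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dist), grid)
        lr = lr0 * (1.0 - t / iters)
        sigma = sigma0 * (1.0 - t / iters) + 0.5
        # Gaussian neighborhood pulls nearby codebook vectors toward x.
        h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        codebook += lr * h[..., None] * (x - codebook)
    return codebook

def build_bmil(features, codebook):
    """Map each image (row of `features`) to its closest codebook node."""
    flat = codebook.reshape(-1, codebook.shape[-1])
    bmil = {i: [] for i in range(len(flat))}
    for img_id, f in enumerate(features):
        node = int(np.argmin(np.linalg.norm(flat - f, axis=1)))
        bmil[node].append(img_id)
    return bmil  # empty lists mark the "empty nodes" the paper prunes

if __name__ == "__main__":
    feats = np.random.default_rng(1).random((500, 32))  # stand-in color features
    bmil = build_bmil(feats, train_som(feats))
    print(sum(1 for v in bmil.values() if not v), "empty nodes")
```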


EVALUATION OF SPEED AND ACCURACY FOR COMPARISON OF TEXTURE CLASSIFICATION IMPLEMENTATION ON EMBEDDED PLATFORM

  • Tou, Jing Yi;Khoo, Kenny Kuan Yew;Tay, Yong Haur;Lau, Phooi Yee
    • 한국방송∙미디어공학회:학술대회논문집, 한국방송공학회 2009 IWAIT, pp. 89-93, 2009
  • Embedded systems are becoming more popular as many embedded platforms have become more affordable. They offer a compact solution for many different problems, including computer vision applications. Texture classification can be used to solve various problems, and implementing it on embedded platforms will help in deploying these applications to the market. This paper proposes deploying texture classification algorithms onto an embedded computer vision (ECV) platform. Two algorithms are compared: grey level co-occurrence matrices (GLCM) and Gabor filters. Experimental results show that raw GLCM in MATLAB achieves 50 ms, making it the fastest algorithm on the PC platform. The classification speed achieved in C on the PC and ECV platforms is 43 ms and 3708 ms, respectively. Raw GLCM achieves only 90.86% accuracy, compared to 91.06% for the combined feature (GLCM and Gabor filters). Overall, evaluating all results in terms of classification speed and accuracy, raw GLCM is more suitable for implementation on the ECV platform.
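  • A minimal sketch of the GLCM feature extraction compared in this paper, using scikit-image; the distances, angles, and property set are illustrative choices, not the exact configuration evaluated here.

```python
# GLCM texture features with scikit-image (graycomatrix/graycoprops in
# scikit-image >= 0.19; older releases spell them greycomatrix/greycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, distances=(1,), angles=(0, np.pi / 2)):
    """Return a small GLCM descriptor for an 8-bit grayscale patch."""
    glcm = graycomatrix(gray_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(glcm_features(patch))  # 4 properties x (1 distance x 2 angles) = 8 values
```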


슈퍼픽셀의 밀집도 및 텍스처정보를 이용한 DBSCAN기반 칼라영상분할 (A Method of Color Image Segmentation Based on DBSCAN(Density Based Spatial Clustering of Applications with Noise) Using Compactness of Superpixels and Texture Information)

  • 이정환
    • 디지털산업정보학회논문지, Vol. 11, No. 4, pp. 89-97, 2015
  • In this paper, a method of color image segmentation based on DBSCAN (Density-Based Spatial Clustering of Applications with Noise) using the compactness of superpixels and texture information is presented. The DBSCAN algorithm can generate clusters in large data sets by looking at the local density of data samples, using only two input parameters: the minimum number of points and the neighborhood distance. Superpixel algorithms group pixels into perceptually meaningful atomic regions, which can be used to replace the rigid structure of the pixel grid. Each superpixel consists of pixels with similar features such as luminance, color, and texture, which makes superpixels more efficient than individual pixels for large-scale image processing. In this paper, superpixels are generated by the widely used SLIC (simple linear iterative clustering) algorithm. Superpixel characteristics are described by compactness, uniformity, boundary precision, and recall, with compactness being an especially important descriptor. Each superpixel is represented by its Lab color values, compactness, and texture information, and the DBSCAN clustering method is applied to this feature space to segment the color image. To evaluate the performance of the proposed method, computer simulations are carried out on several outdoor images. The experimental results show that the proposed algorithm provides good segmentation results on various images.
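  • A rough sketch of the superpixel-plus-DBSCAN pipeline described above, using scikit-image SLIC and scikit-learn DBSCAN; the per-superpixel texture proxy and the clustering parameters are illustrative assumptions, not the paper's exact features.

```python
# SLIC superpixels -> per-superpixel features (mean Lab color plus a crude
# texture proxy) -> DBSCAN over those features -> segmentation map.
import numpy as np
from skimage import color, data, segmentation
from sklearn.cluster import DBSCAN

def segment_by_superpixel_dbscan(rgb, n_segments=300, eps=6.0, min_samples=3):
    lab = color.rgb2lab(rgb)
    gray = color.rgb2gray(rgb)
    labels = segmentation.slic(rgb, n_segments=n_segments, compactness=10,
                               start_label=0)
    feats = []
    for sp in range(labels.max() + 1):
        mask = labels == sp
        mean_lab = lab[mask].mean(axis=0)      # color feature
        texture = gray[mask].std() * 100.0     # crude texture proxy (assumption)
        feats.append(np.append(mean_lab, texture))
    clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.array(feats))
    # Map each superpixel label to its DBSCAN cluster to get a segmentation map.
    return clusters[labels]

if __name__ == "__main__":
    img = data.astronaut()
    seg = segment_by_superpixel_dbscan(img)
    print(seg.shape, len(np.unique(seg)), "regions (including noise label -1)")
```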

내용기반 영상검색에서 색과 질감을 나타내는 채널색에너지 (Channel Color Energy Feature Representing Color and Texture in Content-Based Image Retrieval)

  • 정재웅;권태완;박섭형
    • 대한전자공학회논문지SP, Vol. 41, No. 1, pp. 21-28, 2004
  • In content-based image retrieval, many quantified features have been proposed to represent the visual content of an image, such as color, texture, and shape. Because these features are all assumed to be independent, the correlation with other features is not considered at all when a given feature vector is extracted. In this paper, we propose a new CCE (channel color energy) feature that takes the relationship between color and texture into account. An analysis of experimental results on natural images confirms that the proposed method outperforms the normalized weighted-distance comparison method and a color-texture method based on the SCFT (sequential chromatic Fourier transform).
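  • The abstract does not define CCE precisely, so the sketch below only illustrates the general idea of a joint color-texture descriptor: per-channel Gabor response energies. The filter bank and the energy definition are assumptions, not the paper's CCE.

```python
# Illustrative joint color-texture descriptor: energy of Gabor filter
# responses computed separately on each color channel.
import numpy as np
from skimage.filters import gabor

def channel_energy_descriptor(rgb, frequencies=(0.1, 0.2, 0.4)):
    """Per-channel Gabor response energies (a stand-in, not the paper's CCE)."""
    rgb = rgb.astype(float) / 255.0
    feats = []
    for c in range(3):                  # R, G, B channels
        for f in frequencies:
            real, imag = gabor(rgb[..., c], frequency=f)
            feats.append(np.mean(real ** 2 + imag ** 2))  # channel energy
    return np.array(feats)              # 3 channels x 3 frequencies = 9 values

if __name__ == "__main__":
    img = (np.random.default_rng(0).random((64, 64, 3)) * 255).astype(np.uint8)
    print(channel_energy_descriptor(img))
```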

Identification of Transformed Image Using the Composition of Features

  • Yang, Won-Keun;Cho, A-Young;Cho, Ik-Hwan;Oh, Weon-Geun;Jeong, Dong-Seok
    • 한국멀티미디어학회논문지, Vol. 11, No. 6, pp. 764-776, 2008
  • Image identification is the process of checking whether a query image is a transformed version of a specific original image. In this paper, an image identification method based on feature composition is proposed. The features used include color distance, texture information, and average pixel intensity. We extract color characteristics using the color distance, texture information using the Modified Generalized Symmetry Transform, and the average intensity of each pixel. Each feature is adaptively quantized and used as the bins of a histogram. The histogram is normalized according to the data type and used as a signature when comparing the query image with database images. In the matching stage, the Manhattan distance is used to measure the distance between two signatures. To evaluate the performance of the proposed method, an independence test and an accuracy test are performed. In the independence test, 60,433 images are used to evaluate the ability to discriminate between different images. In the accuracy test, 4,002 original images and 29 transformed versions of each are used to evaluate whether the proposed algorithm can correctly find the original image when transformations have been applied to it. Experimental results show that the proposed identification method performs well in the accuracy test. The proposed method is also very useful in real environments because of its high accuracy and fast matching capacity.
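  • A toy sketch of the signature-and-matching idea described above: quantize per-pixel values into histogram bins, normalize, and compare signatures with the Manhattan (L1) distance. The single grayscale feature and the bin count are placeholders for the paper's composed features (color distance, symmetry-based texture, average intensity).

```python
# Histogram signatures compared with the Manhattan (L1) distance.
import numpy as np

def make_signature(gray, bins=32):
    """Normalized histogram signature of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def manhattan(sig_a, sig_b):
    return float(np.abs(sig_a - sig_b).sum())

def identify(query, database, threshold=0.3):
    """Return ids of database images whose signature is close to the query."""
    q = make_signature(query)
    return [i for i, img in enumerate(database)
            if manhattan(q, make_signature(img)) < threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = (np.tile(np.arange(100), (100, 1)) * 2).astype(np.uint8)
    brighter = np.clip(original.astype(int) + 10, 0, 255).astype(np.uint8)
    unrelated = rng.integers(0, 256, (100, 100), dtype=np.uint8)
    # The brightened transform still matches the original; the unrelated image does not.
    print(identify(brighter, [original, unrelated]))
```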


Change Detection of the Tonle Sap Floodplain, Cambodia, using ALOS PALSAR Data

  • Trung, Nguyen Van;Choi, Jung-Hyun;Won, Joong-Sun
    • 대한원격탐사학회지, Vol. 26, No. 3, pp. 287-295, 2010
  • The water level of the Tonle Sap is largely influenced by the Mekong River. During the wet season, the lacustrine landforms and vegetated areas are covered with water. Change detection in this area provides information required for human activities and sustainable development around the Tonle Sap. In order to detect changes in the Tonle Sap floodplain, fifteen ALOS PALSAR L-band scenes acquired from January 2007 to January 2009 were examined in this study. Since L-band radar is able to penetrate vegetation cover, it enables us to study changes related to the water level of the floodplain developed under the rainforest. Four types of images were constructed and studied: 1) ratio images, 2) correlation coefficient images, 3) texture feature ratio images, and 4) multi-color composite images. Change images at 46-day intervals, extracted from the ratio images, coherence images, and texture feature ratio images, were formed for detecting land cover change. Two RGB images were also obtained by compositing three images acquired early, in the middle, and at the end of the rainy seasons of 2007 and 2008. Combining these methods shows that the change images capture the relationship between vegetation and water level, leaf-fall forest, and crop cultivation and harvest.
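  • Of the four image types listed above, the ratio image is the simplest to illustrate. The sketch below thresholds a log-ratio of two co-registered SAR intensity images; the threshold and the simulated data are illustrative assumptions, and the coherence and texture-ratio products are not reproduced here.

```python
# Log-ratio change detection between two co-registered SAR intensity images.
import numpy as np

def log_ratio_change_map(intensity_t1, intensity_t2, threshold=1.0, eps=1e-6):
    """Return a boolean change mask from two SAR intensity images."""
    log_ratio = np.log((intensity_t2 + eps) / (intensity_t1 + eps))
    return np.abs(log_ratio) > threshold   # True where backscatter changed strongly

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t1 = rng.gamma(shape=4.0, scale=0.25, size=(200, 200))  # speckled scene
    t2 = t1.copy()
    t2[50:100, 50:100] *= 5.0        # simulated flooding-induced backscatter change
    change = log_ratio_change_map(t1, t2)
    print(change.mean())              # fraction of pixels flagged as changed
```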

Image-Based Maritime Obstacle Detection Using Global Sparsity Potentials

  • Mou, Xiaozheng;Wang, Han
    • Journal of information and communication convergence engineering, Vol. 14, No. 2, pp. 129-135, 2016
  • In this paper, we present a novel algorithm for image-based maritime obstacle detection using global sparsity potentials (GSPs), in which "global" refers to the entire sea area. The horizon line is detected first to segment the sea area as the region of interest (ROI). Considering the geometric relationship between the camera and the sea surface, variable-size image windows are adopted to sample patches in the ROI. Each patch is then represented by its texture feature, and its average distance to all the other patches is taken as the value of its GSP. Thereafter, patches with lower GSP values are clustered as the sea surface, and patches with higher GSP values are taken as obstacle candidates. Finally, the candidates far from the mean feature of the sea surface are selected and aggregated as obstacles. Experimental results verify that the proposed approach is highly accurate compared to other methods, such as the traditional feature-space reclustering method and a state-of-the-art saliency detection method.
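  • A compact sketch of the global sparsity potential computation described above: each patch's GSP is its average feature distance to all other patches in the sea ROI, and the highest-GSP patches become obstacle candidates. The patch size, the simple mean/std texture feature, and the candidate cutoff are assumptions.

```python
# GSP per patch = average feature distance to all other patches in the ROI.
import numpy as np
from scipy.spatial.distance import cdist

def patch_features(gray, patch=16):
    """Mean/std texture features over a regular grid of patches in the ROI."""
    h, w = gray.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = gray[y:y + patch, x:x + patch]
            feats.append([p.mean(), p.std()])
            coords.append((y, x))
    return np.array(feats), coords

def global_sparsity_potentials(feats):
    d = cdist(feats, feats)            # pairwise feature distances
    return d.sum(axis=1) / (len(feats) - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sea = rng.normal(0.5, 0.02, (128, 256))                  # homogeneous sea ROI
    sea[40:70, 100:140] = rng.normal(0.2, 0.15, (30, 40))    # a simulated obstacle
    feats, coords = patch_features(sea)
    gsp = global_sparsity_potentials(feats)
    candidates = [coords[i] for i in np.argsort(gsp)[-4:]]   # highest-GSP patches
    print(candidates)
```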

랜덤 변환에 대한 컨볼루션 뉴럴 네트워크를 이용한 특징 추출 (Feature Extraction Using Convolutional Neural Networks for Random Translation)

  • 진태석
    • 한국산업융합학회 논문집, Vol. 23, No. 3, pp. 515-521, 2020
  • Deep learning methods have been effectively used to provide great improvement in various research fields such as machine learning, image processing and computer vision. One of the most frequently used deep learning methods in image processing is the convolutional neural networks. Compared to the traditional artificial neural networks, convolutional neural networks do not use the predefined kernels, but instead they learn data specific kernels. This property makes them to be used as feature extractors as well. In this study, we compared the quality of CNN features for traditional texture feature extraction methods. Experimental results demonstrate the superiority of the CNN features. Additionally, the recognition process and result of a pioneering CNN on MNIST database are presented.
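  • A minimal sketch of using a CNN as a feature extractor, as discussed above, written in PyTorch; the tiny architecture is an arbitrary illustration, not the network evaluated in the paper.

```python
# A small convolutional stack whose flattened activations serve as the
# feature vector; learned kernels replace hand-crafted texture filters.
import torch
import torch.nn as nn

class TinyCNNFeatures(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 14x14 -> 7x7
        )

    def forward(self, x):
        # The flattened activations act as the image's feature vector.
        return torch.flatten(self.features(x), start_dim=1)

if __name__ == "__main__":
    model = TinyCNNFeatures().eval()
    batch = torch.randn(4, 1, 28, 28)              # MNIST-sized inputs
    with torch.no_grad():
        feats = model(batch)
    print(feats.shape)                              # (4, 16 * 7 * 7)
```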

Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk;Kim, Hyeon-Joong;Choi, In-Ho;Kim, Jin-Seo;Choi, Soo-Mi
    • ETRI Journal, Vol. 34, No. 5, pp. 791-794, 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.
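  • A rough sketch of the feature-preserving simplification idea: keep all points near detected facial-feature locations and randomly thin the rest. The landmark detection itself (the paper's active appearance model) is not reproduced; the radius and keep ratio are assumptions.

```python
# Feature-preserving point-set simplification: dense near landmarks,
# randomly thinned elsewhere.
import numpy as np

def simplify_points(points, feature_points, radius=5.0, keep_ratio=0.2, seed=0):
    """Subsample a 3D point set while preserving points near feature points."""
    rng = np.random.default_rng(seed)
    # Distance from each point to its nearest feature point.
    d = np.min(np.linalg.norm(points[:, None, :] - feature_points[None, :, :],
                              axis=2), axis=1)
    near_feature = d < radius
    keep_random = rng.random(len(points)) < keep_ratio
    return points[near_feature | keep_random]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    face = rng.random((20000, 3)) * 100.0          # stand-in 3D face points
    landmarks = rng.random((10, 3)) * 100.0        # stand-in facial feature points
    reduced = simplify_points(face, landmarks)
    print(len(face), "->", len(reduced))
```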