• Title/Summary/Keyword: Texture Image Segmentation


An Optimal 2D Quadrature Polar Separable Filter for Texture Analysis (조직분석을 위한 최적 2차원 Quadrature Polar Separable 필터)

  • 이상신;문용선;박종안
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.3 / pp.288-296 / 1992
  • This paper describes an improved 2D QPS (quadrature polar separable) filter design and its applications to texture processing. The filter kernel pair is the product of a radial weighting function based on the finite PSS (prolate spheroidal sequences) and an exponential attenuation function of the orientation angle, and it is quadrature and polar separable in the frequency domain. It is near optimal in energy loss because the orientation-angle function is made to approximate the radial weighting function. The filter's frequency characteristics are easy to control, since they depend only on design specifications such as the bandwidth, the directional angle, the attenuation constant, and the shift constant of the central frequency. Applications of the filter to texture processing are considered, including texture image generation, orientation-angle estimation, and segmentation of synthetic texture images. The results show that the wide-bandwidth filter can be used for generating and discriminating strongly oriented textures, and that the segmentation results are good.

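The polar-separable construction the abstract describes (a radial weighting function times an angular attenuation function, defined in the frequency domain) can be sketched as follows. This is a minimal illustration, not the paper's design: a Gaussian radial band stands in for the PSS-based radial weighting, and the parameter names are assumptions.

```python
import numpy as np

def polar_separable_filter(size, f0, bw, theta0, k):
    # Radial part: Gaussian band around centre frequency f0 (a simple
    # stand-in for the paper's PSS-based radial weighting).
    # Angular part: exponential attenuation exp(-k * |theta - theta0|).
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing="ij")
    r = np.hypot(fx, fy)
    theta = np.arctan2(fy, fx)
    radial = np.exp(-((r - f0) ** 2) / (2.0 * bw ** 2))
    dtheta = np.angle(np.exp(1j * (theta - theta0)))  # wrap to [-pi, pi]
    angular = np.exp(-k * np.abs(dtheta))
    return radial * angular  # frequency-domain filter magnitude

# Example: 64x64 filter tuned to 0.25 cycles/pixel at orientation 0
H = polar_separable_filter(64, f0=0.25, bw=0.05, theta0=0.0, k=4.0)
```

The bandwidth `bw`, orientation `theta0`, and attenuation constant `k` correspond to the design specifications the abstract says control the frequency characteristics.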

Object-Based Integral Imaging Depth Extraction Using Segmentation (영상 분할을 이용한 객체 기반 집적영상 깊이 추출)

  • Kang, Jin-Mo;Jung, Jae-Hyun;Lee, Byoung-Ho;Park, Jae-Hyeung
    • Korean Journal of Optics and Photonics / v.20 no.2 / pp.94-101 / 2009
  • A novel method for the reconstruction of 3D shape and texture from elemental images has been proposed. Using this method, we can estimate a full 3D polygonal model of objects with seamless triangulation, but in the triangulation process all the objects are stitched together. This generates phantom surfaces that bridge depth discontinuities between different objects. To solve this problem, we need to connect points only within a single object, and we adopt a segmentation process to this end. The entire process of the proposed method is as follows. First, the central pixel of each elemental image is computed to extract the spatial positions of objects by correspondence analysis. Second, the object points of central pixels from neighboring elemental images are projected onto a specific elemental image. Then, the center sub-image is segmented and each object is labeled. We use the normalized cut algorithm for segmentation of the center sub-image, and to speed up segmentation we apply the watershed algorithm before the normalized cut. Using the segmentation results, the subdivision process is applied only to pixels within the same object. The refined grid is filtered with median and Gaussian filters to improve reconstruction quality. Finally, the vertices are connected and an object-based triangular mesh is formed. We conducted experiments using real objects and verified the proposed method.
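The two-way normalized cut step used on the center sub-image can be illustrated on a toy affinity matrix. This is a minimal sketch of Shi–Malik spectral bipartition only, not the paper's full pipeline (which also includes the watershed pre-step and operates on image pixels rather than a hand-built graph):

```python
import numpy as np

def ncut_bipartition(W):
    # Two-way normalized cut: take the eigenvector of the second-smallest
    # eigenvalue of the generalized problem (D - W) x = lambda D x,
    # then threshold it at its median.
    d = W.sum(axis=1)
    inv_sqrt_D = np.diag(1.0 / np.sqrt(d))
    L_sym = inv_sqrt_D @ (np.diag(d) - W) @ inv_sqrt_D
    _, vecs = np.linalg.eigh(L_sym)       # eigenvalues ascending
    fiedler = inv_sqrt_D @ vecs[:, 1]     # map back to generalized problem
    return (fiedler > np.median(fiedler)).astype(int)

# Toy affinity matrix: two tight triangles joined by one weak edge
W = np.array([[0, 1, 1, .1, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [.1, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labels = ncut_bipartition(W)  # separates nodes {0,1,2} from {3,4,5}
```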

Region-based Image Retrieval Algorithm Using Image Segmentation and Multi-Feature (영상분할과 다중 특징을 이용한 영역기반 영상검색 알고리즘)

  • Noh, Jin-Soo;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.3 / pp.57-63 / 2009
  • With the rapid growth of computer-based image databases, the necessity of a system that can manage image information is increasing. This paper presents a region-based image retrieval method using a combination of color (autocorrelogram), texture (CWT moments), and shape (Hu invariant moments) features. As the color feature, a color autocorrelogram is extracted from the hue and saturation components of the HSV color image; the texture, shape, and position features are extracted from the value component. For efficient similarity computation, the extracted features (color autocorrelogram, Hu invariant moments, and CWT moments) are combined, and precision and recall are measured. Experimental results on the Corel and VisTex databases show that the proposed image retrieval algorithm achieves 94.8% precision and 90.7% recall and can successfully be applied to an image retrieval system.
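The color-autocorrelogram feature can be sketched as below. This simplified version estimates, for each quantized color and distance, the probability that a pixel at that offset shares the color; it samples only horizontal and vertical offsets, a cheap stand-in for the full distance-d neighbourhood of the original correlogram definition, and is not the paper's exact implementation.

```python
import numpy as np

def autocorrelogram(img, levels, dists):
    # For each quantized colour c and distance d, estimate
    # P(pixel at offset d also has colour c), using only
    # horizontal/vertical offsets (a simplification).
    feats = np.zeros((levels, len(dists)))
    for di, d in enumerate(dists):
        same = np.zeros(levels)
        count = np.zeros(levels)
        pairs = ((img[:, :-d], img[:, d:]),   # horizontal offset d
                 (img[:-d, :], img[d:, :]))   # vertical offset d
        for a, b in pairs:
            for c in range(levels):
                m = (a == c)
                count[c] += m.sum()
                same[c] += (m & (b == c)).sum()
        feats[:, di] = same / np.maximum(count, 1)
    return feats

# A checkerboard never matches at distance 1, always at distance 2
board = (np.indices((8, 8)).sum(axis=0) % 2)
feats = autocorrelogram(board, levels=2, dists=[1, 2])
```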

Fast Simulated Annealing Algorithm (Simulated Annealing의 수렴속도 개선에 관한 연구)

  • 정철곤;김중규
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.3A / pp.284-289 / 2002
  • In this paper, we propose a fast simulated annealing algorithm to reduce the convergence time of MRF-based image segmentation. Simulated annealing performs well on noisy or textured images, but it suffers from slow convergence. To address this problem, we label each pixel adaptively according to its intensity before running simulated annealing. Experimental results show the superiority of the proposed method.
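The overall scheme (Potts-style MRF energy minimized by simulated annealing, initialized from pixel intensities) can be sketched as follows. The energy form, cooling schedule, and all parameters here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sa_segment(img, means, beta=1.0, T0=2.0, cool=0.9, sweeps=40, rng=0):
    # Per-pixel energy: (intensity - class mean)^2
    #                   + beta * (# of 4-neighbours with a different label).
    # Labels start from the nearest class mean, mirroring the paper's idea
    # of intensity-adaptive initialisation before annealing.
    rng = np.random.default_rng(rng)
    means = np.asarray(means, dtype=float)
    labels = np.argmin((img[..., None] - means) ** 2, axis=-1)
    h, w = img.shape
    T = T0
    for _ in range(sweeps):
        for _ in range(h * w):
            y, x = int(rng.integers(h)), int(rng.integers(w))
            new = int(rng.integers(len(means)))
            old = labels[y, x]
            dE = (img[y, x] - means[new]) ** 2 - (img[y, x] - means[old]) ** 2
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    dE += beta * (int(labels[ny, nx] != new)
                                  - int(labels[ny, nx] != old))
            if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis rule
                labels[y, x] = new
        T *= cool  # geometric cooling
    return labels
```

With a good initialization, the annealer starts near a low-energy state, which is the source of the claimed speed-up.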

Modified Pyramid Scene Parsing Network with Deep Learning based Multi Scale Attention (딥러닝 기반의 Multi Scale Attention을 적용한 개선된 Pyramid Scene Parsing Network)

  • Kim, Jun-Hyeok;Lee, Sang-Hun;Han, Hyun-Ho
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.45-51 / 2021
  • With the development of deep learning, semantic segmentation methods are being studied in various fields. Segmentation accuracy drops in fields that require high accuracy, such as medical image analysis. In this paper, we improve PSPNet, a deep learning based segmentation method, to minimize the loss of features during semantic segmentation. Conventional deep learning based segmentation methods lower the resolution and lose object features during feature extraction and compression. Because of these losses, the edges and internal information of objects are lost, which lowers accuracy at segmentation time. To solve these problems, we improved the PSPNet semantic segmentation model by adding the proposed multi-scale attention to prevent the loss of object features. A feature refinement step applies the attention method to the conventional PPM module; by suppressing unnecessary feature information, edge and texture information is improved. The proposed method was trained on the Cityscapes dataset and evaluated quantitatively with the segmentation index MIoU. Experiments show that segmentation accuracy improved by about 1.5% compared to the conventional PSPNet.
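The idea of attention-based feature refinement (gating channels to suppress unnecessary feature information) can be illustrated with a minimal squeeze-and-excitation style gate in plain numpy. The abstract does not specify the multi-scale attention module's form, so this exact structure and the weights `w1`, `w2` are assumptions, not the paper's module.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # Generic channel gate: global average pool -> 2-layer MLP -> sigmoid,
    # then reweight each channel. A stand-in illustration only.
    z = feat.mean(axis=(1, 2))               # squeeze: (C,) global pool
    h = np.maximum(w1 @ z, 0.0)              # excitation MLP, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid gate in (0, 1)
    return feat * s[:, None, None]           # reweight channels

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 5, 5))            # (channels, H, W) feature map
out = channel_attention(feat,
                        rng.normal(size=(2, 4)),
                        rng.normal(size=(4, 2)))
```

Because the gate lies in (0, 1), each channel's response is attenuated in proportion to how informative the pooled statistics deem it.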

Content-based Image Retrieval using Feature Extraction in Wavelet Transform Domain (웨이브릿 변환 영역에서 특징추출을 이용한 내용기반 영상 검색)

  • 최인호;이상훈
    • Journal of Korea Multimedia Society / v.5 no.4 / pp.415-425 / 2002
  • In this paper, we present a content-based image retrieval method based on feature extraction in the wavelet transform domain. To overcome the drawbacks of feature-vector construction methods that use the global wavelet coefficients of subbands, we use the energy values of the wavelet coefficients, and shape-based retrieval of objects is performed with moments that are invariant to translation, scaling, and rotation of the objects. The proposed method reduces the feature-vector size and improves classification and retrieval performance, providing fast retrieval times. To support region-based image retrieval, we discuss an image segmentation method that can reduce the effect of irregular light sources. The segmentation method uses region merging; the candidate regions to be merged are selected by the energy values of the high-frequency bands of the discrete wavelet transform. Region-based retrieval is executed using the segmented region information, and images are retrieved by color, texture, and shape feature vectors.

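The subband-energy feature described above can be sketched with a one-level separable Haar transform, repeated per level. Haar is chosen here purely for simplicity; the paper's wavelet basis may differ.

```python
import numpy as np

def haar_dwt2(img):
    # One level of a separable 2D Haar transform (with a 1/2 scaling):
    # rows first (averages a, differences d), then columns.
    a = (img[:, ::2] + img[:, 1::2]) / 2.0
    d = (img[:, ::2] - img[:, 1::2]) / 2.0
    LL = (a[::2] + a[1::2]) / 2.0
    LH = (a[::2] - a[1::2]) / 2.0
    HL = (d[::2] + d[1::2]) / 2.0
    HH = (d[::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def subband_energies(img, levels=2):
    # Mean energy of each detail subband per level, plus the final
    # approximation energy: a compact texture feature vector that avoids
    # storing the raw global coefficients.
    feats = []
    for _ in range(levels):
        img, LH, HL, HH = haar_dwt2(img)
        feats += [np.mean(b ** 2) for b in (LH, HL, HH)]
    feats.append(np.mean(img ** 2))
    return np.array(feats)
```

For `levels=2` this yields a 7-element vector (three detail energies per level plus the approximation energy), much smaller than the coefficient arrays themselves.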

Segmentation and estimation of surfaces from statistical probability of texture features

  • Terauchi, Mutsuhiro;Nagamachi, Mitsuo;Koji-Ito;Tsuji, Toshio
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1988.10b / pp.826-831 / 1988
  • This paper presents an approach to segmenting an image into surface areas and computing surface properties from a gray-scale image, in order to describe the surfaces for reconstruction of the 3-D shape of objects. In general, a rigid body has several surfaces and many edges, but if it is not a polyhedron, it is necessary not only to describe the relations between surfaces, i.e. its line drawings, but also to represent the surfaces' equations themselves. To compute the surface equations, we use a probability of edge distribution. First, as many edges as possible are extracted from the gray-level image: not only the points that maximize the change of image intensity, but also candidates that appear to be edges. Next, other characteristics of a surface (color, coordinates, and image intensity) are extracted. In our study, we call all the features of a surface its "texture", for example color, intensity level, orientation of an edge, shape of a surface, and so on. The features of a surface at each pixel of the image plane are mapped to a point in the feature space and segmented into groups by cluster analysis in this space. These groups are considered to represent object surfaces in the image plane. Finally, the states of the object surfaces in 3-D space are computed from the distributional probability of local and overall statistical features of a surface, and from the shape of a surface.

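The cluster-analysis step (mapping per-pixel texture features into a feature space and grouping them into candidate surfaces) can be sketched with plain k-means. The abstract does not name its clustering algorithm, so k-means is a generic stand-in here.

```python
import numpy as np

def kmeans(X, k, iters=20, rng=0):
    # Plain k-means on (n, d) feature vectors: alternate between
    # assigning points to the nearest centre and recomputing centres.
    rng = np.random.default_rng(rng)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        a = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(axis=-1), axis=1)
        C = np.array([X[a == j].mean(axis=0) if np.any(a == j) else C[j]
                      for j in range(k)])
    return a, C

# Two well-separated "surface" groups in a 2-D feature space
X = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
assignments, centers = kmeans(X, 2)
```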

Segmentation and Visualization of Human Anatomy using Medical Imagery (의료영상을 이용한 인체장기의 분할 및 시각화)

  • Lee, Joon-Ku;Kim, Yang-Mo;Kim, Do-Yeon
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.1 / pp.191-197 / 2013
  • Conventional CT and MRI scans produce cross-sectional slices of the body that are viewed sequentially by radiologists, who must imagine or extrapolate from these views what the three-dimensional anatomy should be. Using sophisticated algorithms and high-performance computing, these cross-sections can be rendered as direct 3D representations of human anatomy. 2D medical image analysis forces the use of time-consuming, subjective, error-prone manual techniques, such as slice tracing and region painting, for extracting regions of interest. To overcome these drawbacks, 3D visualization combined with medical image processing is essential for extracting anatomical structures and making measurements. We used gray-level thresholding, region growing, contour following, and deformable models to segment human organs, and used feature vectors from texture analysis to detect harmful cancer. We used perspective projection and the marching cubes algorithm to render surfaces from volumetric MR and CT image data. The 3D visualization of human anatomy and segmented organs provides valuable benefits for radiation treatment planning, surgical planning, surgery simulation, image-guided surgery, and interventional imaging applications.
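Of the segmentation methods the abstract lists, region growing is the simplest to sketch: a breadth-first flood from a seed pixel, absorbing neighbours whose intensity is close enough to the seed's. The tolerance criterion below is an illustrative choice, not the paper's exact rule.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    # BFS region growing: absorb 4-connected neighbours whose intensity
    # lies within `tol` of the seed pixel's intensity.
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    ref = img[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                    and abs(img[ny, nx] - ref) <= tol):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

# Seed in the dark half of a two-intensity image
img = np.zeros((6, 6))
img[:, 3:] = 10.0
mask = region_grow(img, (0, 0), tol=1.0)
```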

Implementation of Digital Image Processing for Coastline Extraction from Synthetic Aperture Radar Imagery

  • Lee, Dong-Cheon;Seo, Su-Young;Lee, Im-Pyeong;Kwon, Jay-Hyoun;Tuell, Grady H.
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.6_1 / pp.517-528 / 2007
  • Extraction of the coastal boundary is important because the boundary serves as a reference in the demarcation of maritime zones such as the territorial sea, contiguous zone, and exclusive economic zone. Accurate nautical charts also depend on well-established, accurate, consistent, and current coastline delineation. However, identifying the precise location of the coastal boundary is difficult due to tidal and wave motions. This paper presents an efficient way to extract coastlines by applying digital image processing techniques to Synthetic Aperture Radar (SAR) imagery. Over the past few years, satellite-based SAR and high-resolution airborne SAR images have become available, and SAR has been evaluated as a new mapping technology. Using remotely sensed data offers several benefits: in particular, SAR is largely unaffected by weather, is operational at night over large areas, and provides high contrast between water and land. Various image processing techniques, including region growing, texture-based image segmentation, a local entropy method, and refinement with an image pyramid, were implemented to extract the coastline in this study. Finally, the results were compared with existing coastline data derived from aerial photographs.
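The local entropy method mentioned above exploits the fact that water appears smooth (low entropy) while land texture is varied (high entropy) in SAR imagery. A minimal per-window implementation can be sketched as follows; the window size and bin count are illustrative choices, not the paper's settings.

```python
import numpy as np

def local_entropy(img, win=5, bins=16):
    # Shannon entropy of the grey-level histogram in a win x win window
    # around each pixel; `img` is assumed to be scaled to [0, 1).
    pad = win // 2
    q = np.minimum((np.pad(img, pad, mode="reflect") * bins).astype(int),
                   bins - 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            counts = np.bincount(q[y:y + win, x:x + win].ravel(),
                                 minlength=bins)
            p = counts / counts.sum()
            nz = p[p > 0]
            out[y, x] = -(nz * np.log2(nz)).sum()
    return out
```

Thresholding the resulting entropy map gives a coarse water/land mask that the region-growing and pyramid-refinement steps can then clean up.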

Pavement Crack Detection and Segmentation Based on Deep Neural Network

  • Nguyen, Huy Toan;Yu, Gwang Hyun;Na, Seung You;Kim, Jin Young;Seo, Kyung Sik
    • The Journal of Korean Institute of Information Technology / v.17 no.9 / pp.99-112 / 2019
  • Cracks on pavement surfaces are critical signs and symptoms of the degradation of pavement structures. Image-based pavement crack detection is a challenging problem due to the intensity inhomogeneity, topology complexity, low contrast, and noisy texture background. In this paper, we address the problem of pavement crack detection and segmentation at pixel-level based on a Deep Neural Network (DNN) using gray-scale images. We propose a novel DNN architecture which contains a modified U-net network and a high-level features network. An important contribution of this work is the combination of these networks afforded through the fusion layer. To the best of our knowledge, this is the first paper introducing this combination for pavement crack segmentation and detection problem. The system performance of crack detection and segmentation is enhanced dramatically by using our novel architecture. We thoroughly implement and evaluate our proposed system on two open data sets: the Crack Forest Dataset (CFD) and the AigleRN dataset. Experimental results demonstrate that our system outperforms eight state-of-the-art methods on the same data sets.