• Title/Summary/Keyword: multi-scale segmentation

Search results: 56

Texture segmentation using Neural Networks and multi-scale Bayesian image segmentation technique (신경회로망과 다중스케일 Bayesian 영상 분할 기법을 이용한 결 분할)

  • Kim Tae-Hyung;Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.39-48 / 2005
  • This paper proposes a novel texture segmentation method that combines Bayesian estimation with neural networks. Multi-scale wavelet coefficients and the context information of neighboring wavelet coefficients are used as the network input, and the network output is modeled as a posterior probability. The context information is obtained from an HMT (Hidden Markov Tree) model. The proposed method outperforms ML (Maximum Likelihood) segmentation based on the HMT model. When the segmentations produced by HMT and by the proposed method are each post-processed with the multi-scale Bayesian image segmentation technique HMTseg, the proposed method again proves superior to the HMT-based approach. A minimal sketch of the underlying pipeline follows below.
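The pipeline described here (multi-scale wavelet features fed to a network whose softmax output is treated as a posterior, followed by MAP labeling) can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: the Haar wavelet, power-of-two image size, network size, and the omission of the HMT context term are all assumptions.

```python
# Minimal sketch: multi-scale wavelet features -> MLP -> softmax "posterior" -> MAP label.
# Assumptions: Haar wavelet, power-of-two image dimensions, illustrative network size;
# the HMT-derived context features of the paper are not modeled here.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(img, levels=3):
    """Per-pixel magnitudes of Haar detail coefficients from `levels` scales,
    upsampled back to image size (assumes power-of-two image dimensions)."""
    coeffs = pywt.wavedec2(img.astype(float), "haar", level=levels)
    feats = []
    for detail in coeffs[1:]:                       # coarsest -> finest detail bands
        for band in detail:                         # (cH, cV, cD)
            scale = img.shape[0] // band.shape[0]
            feats.append(np.kron(np.abs(band), np.ones((scale, scale))))
    return np.stack(feats, axis=-1)                 # (H, W, 3 * levels)

class PosteriorNet(nn.Module):
    """Small MLP whose softmax output is interpreted as P(class | features)."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

def map_segment(model, feats):
    """MAP labeling: pick the class with the largest network 'posterior' per pixel."""
    x = torch.from_numpy(feats.reshape(-1, feats.shape[-1])).float()
    with torch.no_grad():
        post = model(x)
    return post.argmax(dim=-1).numpy().reshape(feats.shape[:2])
```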

Multi-scale Image Segmentation Using MSER and its Application (MSER을 이용한 다중 스케일 영상 분할과 응용)

  • Lee, Jin-Seon;Oh, Il-Seok
    • The Journal of the Korea Contents Association / v.14 no.3 / pp.11-21 / 2014
  • Multi-scale image segmentation is important in many applications such as image stylization and medical diagnosis. This paper proposes a novel segmentation algorithm based on MSER (maximally stable extremal region), which captures multi-scale structure and is both stable and efficient. The algorithm collects MSERs and then partitions the image plane by redrawing the MSERs in a specific order. To denoise and smooth the region boundaries, hierarchical morphological operations are developed. To illustrate the effectiveness of the algorithm's multi-scale structure, the effects of various types of LOD (level-of-detail) control are shown for image stylization; the proposed technique achieves this without time-consuming multi-level Gaussian smoothing. Comparisons of segmentation quality and timing efficiency with the mean shift-based EDISON system are presented. A minimal sketch of the detect-and-redraw idea follows below.
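The "collect MSERs, then redraw them in order" idea can be illustrated with OpenCV's MSER detector. The drawing order (largest first, so that smaller nested regions overwrite coarser ones) and the per-label morphological cleanup below are simplified stand-ins for the paper's specific ordering and hierarchical operators; the input path is a hypothetical placeholder.

```python
# Minimal sketch of MSER-based partitioning: detect MSERs, paint largest-first so that
# smaller (finer-scale) nested regions overwrite coarser ones, then crudely smooth labels.
import cv2
import numpy as np

def mser_partition(gray):
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)           # list of point sets, each point is (x, y)
    labels = np.zeros(gray.shape, dtype=np.int32)   # 0 = unassigned background
    for lab, pts in enumerate(sorted(regions, key=len, reverse=True), start=1):
        labels[pts[:, 1], pts[:, 0]] = lab          # later (smaller) regions overwrite earlier ones
    return labels

def smooth_labels(labels, ksize=3):
    """Illustrative boundary cleanup: per-label morphological closing."""
    kernel = np.ones((ksize, ksize), np.uint8)
    out = labels.copy()
    for lab in np.unique(labels):
        if lab == 0:
            continue
        mask = (labels == lab).astype(np.uint8)
        closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        out[closed > 0] = lab
    return out

# gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input path
# seg = smooth_labels(mser_partition(gray))
```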

Image Segmentation using Multi-scale Normalized Cut (다중스케일 노멀라이즈 컷을 이용한 영상분할)

  • Lee, Jae-Hyun;Lee, Ji Eun;Park, Rae-Hong
    • Journal of Broadcast Engineering / v.18 no.4 / pp.609-618 / 2013
  • This paper proposes a fast image segmentation method that achieves segmentation performance comparable to graph-cut based methods. Graph-cut based image segmentation methods show high segmentation performance, but their computational cost is high because a computationally intensive eigen-system must be solved, and the size of that eigen-system is determined by the square matrix of similarities between all pairs of pixels in the input image. The proposed method therefore uses a small square matrix built from the similarities among regions obtained by first segmenting the image locally into several regions with a graph-based method. Experimental results show that the proposed multi-scale image segmentation method using algebraic multigrid outperforms existing methods. A minimal region-level sketch follows below.
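The key saving is that normalized cut operates on an R x R region similarity matrix instead of a pixel-level one. The sketch below assumes a precomputed over-segmentation with mean color (or other) features per region, a Gaussian similarity kernel, and k-means on the eigenvectors; none of these choices are taken from the paper itself.

```python
# Minimal sketch of region-level normalized cut: the eigen-system is only R x R,
# where R is the number of pre-segmented regions.  Features, kernel, and clustering
# step are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def region_ncut(region_feats, n_segments, sigma=1.0):
    """region_feats: (R, D) mean feature (e.g., color) per pre-segmented region."""
    d2 = ((region_feats[:, None, :] - region_feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))            # small R x R similarity matrix
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Generalized eigenproblem L v = lambda D v; only a few eigenvectors are needed.
    vals, vecs = eigh(L, D)
    emb = vecs[:, 1:n_segments]                     # skip the trivial constant eigenvector
    return KMeans(n_clusters=n_segments, n_init=10).fit_predict(emb)

# labels_per_region = region_ncut(mean_color_per_region, n_segments=4)
# pixel_labels = labels_per_region[region_index_map]   # map region labels back to pixels
```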

Multi-scale U-SegNet architecture with cascaded dilated convolutions for brain MRI Segmentation

  • Dayananda, Chaitra;Lee, Bumshik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.25-28 / 2020
  • Automatic segmentation of brain tissues such as WM, GM, and CSF from brain MRI scans is helpful for the diagnosis of many neurological disorders. Accurate segmentation of these brain structures is very challenging due to low tissue contrast, bias field, and partial volume effects. To improve brain MRI segmentation accuracy, we propose an end-to-end convolutional U-SegNet architecture designed with multi-scale kernels and cascaded dilated convolutions. The multi-scale convolution kernels extract rich semantic features and capture context information at different scales, while the cascaded dilated convolution scheme helps alleviate the vanishing gradient problem in the proposed model. Experimental results indicate that the proposed architecture is superior to traditional deep-learning methods such as SegNet, U-Net, and U-SegNet, achieving an average DSC of 93% and a JI of 86% for brain MRI segmentation. A minimal sketch of the two building blocks follows below.
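The two ingredients named in the abstract, multi-scale convolution kernels and cascaded dilated convolutions, can be sketched in PyTorch as below. Channel counts, kernel sizes, dilation rates, and the residual connections are illustrative assumptions; this is not the authors' full U-SegNet architecture.

```python
# Minimal PyTorch sketch of a multi-scale convolution block (parallel kernel sizes)
# and a cascade of dilated convolutions with residual paths.
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, 5, padding=2)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class CascadedDilatedConv(nn.Module):
    """Sequential 3x3 convolutions with increasing dilation to widen the receptive field."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=r, dilation=r),
                          nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
            for r in rates)
    def forward(self, x):
        for blk in self.blocks:
            x = x + blk(x)                          # residual path eases gradient flow
        return x

# x = torch.randn(1, 1, 128, 128)                   # e.g., one MRI slice
# y = CascadedDilatedConv(32)(MultiScaleConv(1, 32)(x))
```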


Multi-scale context fusion network for melanoma segmentation

  • Zhenhua Li;Lei Zhang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.7 / pp.1888-1906 / 2024
  • To address the problems that melanoma image edges are fuzzy, contrast with the background is low, and hair occlusion makes accurate segmentation difficult, this paper proposes MSCNet, a melanoma segmentation model based on the U-Net framework. First, a multi-scale pyramid fusion module is designed to reconstruct the skip connections and transmit global information to the decoder. Second, a contextual information conduction module is added at the top of the encoder; it provides different receptive fields for the segmentation target by using dilated (atrous) convolutions with different dilation rates, so as to better fuse multi-scale contextual information. In addition, to suppress redundant information in the input image and focus on melanoma features, a global channel attention mechanism is introduced into the decoder. Finally, to address lesion class imbalance, a combined loss function is used. The algorithm is validated on the ISIC 2017 and ISIC 2018 public datasets, and the experimental results indicate better melanoma segmentation accuracy than other CNN-based image segmentation algorithms. A minimal sketch of the channel attention idea follows below.
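A global channel attention block of the squeeze-and-excitation kind illustrates how a decoder can re-weight channels to emphasize lesion features and suppress redundant ones. The reduction ratio and placement below are assumptions; MSCNet's exact attention design may differ.

```python
# Minimal PyTorch sketch of global channel attention (SE-style) for a decoder feature map.
import torch
import torch.nn as nn

class GlobalChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)         # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                # channel-wise re-weighting

# feat = torch.randn(2, 64, 56, 56)                 # a decoder feature map
# out = GlobalChannelAttention(64)(feat)
```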

Texture Segmentation Using Statistical Characteristics of SOM and Multiscale Bayesian Image Segmentation Technique (SOM의 통계적 특성과 다중 스케일 Bayesian 영상 분할 기법을 이용한 텍스쳐 분할)

  • Kim Tae-Hyung;Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.43-54 / 2005
  • This paper proposes a novel texture segmentation method using Bayesian image segmentation and SOM (Self-Organizing feature Map). Multi-scale wavelet coefficients are used as the SOM input, and the likelihood and a posteriori probability of each observation are obtained from the trained SOMs. Texture segmentation is performed by MAP (Maximum A Posteriori) classification using these posterior probabilities, and the result is further improved with context information. The proposed method outperforms segmentation based on the HMT (Hidden Markov Tree) model, and when combined with the multi-scale Bayesian image segmentation technique HMTseg it also outperforms the combination of HMT and HMTseg. A minimal sketch of the SOM-based MAP step follows below.
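One way to read "likelihood and posterior from trained SOMs" is: train one SOM per texture class, approximate the class likelihood of a feature vector from its distance to the best-matching unit, and pick the class maximizing prior times likelihood. The Gaussian likelihood approximation and the use of the MiniSom library below are assumptions for illustration only, not the paper's statistical model.

```python
# Minimal sketch: one SOM per class, distance-based likelihood, MAP classification.
import numpy as np
from minisom import MiniSom

def train_class_soms(feats_per_class, grid=8):
    """feats_per_class: dict class_id -> (N, D) training feature vectors."""
    soms = {}
    for cls, feats in feats_per_class.items():
        som = MiniSom(grid, grid, feats.shape[1], sigma=1.0, learning_rate=0.5)
        som.train_random(feats, num_iteration=5000)
        soms[cls] = som
    return soms

def map_classify(soms, x, priors, tau=1.0):
    """Return argmax over classes of prior(c) * exp(-||x - BMU_c(x)||^2 / tau)."""
    best_cls, best_post = None, -np.inf
    for cls, som in soms.items():
        bmu = som.get_weights()[som.winner(x)]      # weight vector of the best-matching unit
        likelihood = np.exp(-np.sum((x - bmu) ** 2) / tau)
        post = priors[cls] * likelihood
        if post > best_post:
            best_cls, best_post = cls, post
    return best_cls
```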

A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.1 / pp.16-25 / 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automatic precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image, and a thresholding technique such as the Otsu method is then applied to distinguish vegetation parts from the background. An obvious drawback of threshold-based segmentation is that each pixel is classified as vegetation or background solely by its own color feature, without taking into account the color features of its neighboring pixels. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input for each pixel is the 2-dimensional window of gray-level values surrounding it. To generate a gray-scale image from a raw RGB color image, the well-known Excess Green minus Excess Red Index was used. Experimental results on 80 field images of 4 vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and harmonic mean. A minimal sketch of this pixel-window classification follows below.
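The pixel-neighborhood idea can be sketched as: compute the ExG minus ExR index image, take a small window of index values around each pixel, and classify the window with a small MLP. Window size, network size, and the use of scikit-learn are assumptions for illustration; `train_rgb`, `labels`, and `test_rgb` in the usage note are hypothetical placeholders.

```python
# Minimal sketch: ExG - ExR gray-scale index, per-pixel windows, MLP classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

def exg_exr_index(rgb):
    """rgb: (H, W, 3) float image in [0, 1].  ExG = 2g - r - b, ExR = 1.4r - g."""
    s = rgb.sum(axis=2) + 1e-8
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return (2 * g - r - b) - (1.4 * r - g)

def pixel_windows(index_img, half=2):
    """Return one flattened (2*half+1)^2 window per interior pixel."""
    H, W = index_img.shape
    wins = [index_img[i - half:i + half + 1, j - half:j + half + 1].ravel()
            for i in range(half, H - half) for j in range(half, W - half)]
    return np.array(wins)

# X = pixel_windows(exg_exr_index(train_rgb))        # hypothetical training image and labels
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X, labels)
# pred = clf.predict(pixel_windows(exg_exr_index(test_rgb)))
```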

MEDU-Net+: a novel improved U-Net based on multi-scale encoder-decoder for medical image segmentation

  • Zhenzhen Yang;Xue Sun;Yongpeng Yang;Xinyi Wu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.7 / pp.1706-1725 / 2024
  • The unique U-shaped structure of the U-Net network allows it to achieve good segmentation performance as a lightweight network with few parameters on small image segmentation datasets. However, when the medical image to be segmented contains a lot of detailed information, its segmentation results cannot fully meet practical requirements. To achieve higher medical image segmentation accuracy, a novel improved U-Net architecture called multi-scale encoder-decoder U-Net+ (MEDU-Net+) is proposed in this paper. A GoogLeNet-style design is adopted to extract richer information in the encoder, and multi-scale feature extraction is presented for fusing semantic information of different scales in both the encoder and decoder. Layer-by-layer skip connections are also introduced to connect the information of each layer, so that there is no need to encode the last layer and pass the information back. The proposed MEDU-Net+ divides the network of unknown depth into deconvolution-layer parts that replace the direct encoder-decoder connection of U-Net. In addition, a new combined loss function is proposed to extract more edge information by combining the advantages of the generalized Dice and focal loss functions. Finally, we validate the proposed MEDU-Net+ and other classic medical image segmentation networks on three medical image datasets; the experimental results show that MEDU-Net+ has prominently superior performance compared with the other networks. A minimal sketch of such a combined loss follows below.
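A combined loss of the kind described (a generalized Dice term plus a focal term) can be written compactly in PyTorch. The weighting factor, focal gamma, and exact Dice formulation below are assumptions, not MEDU-Net+'s published loss.

```python
# Minimal PyTorch sketch of a generalized Dice + focal combined loss.
import torch
import torch.nn.functional as F

def generalized_dice_loss(logits, target_onehot, eps=1e-6):
    """logits: (B, C, H, W); target_onehot: (B, C, H, W)."""
    probs = torch.softmax(logits, dim=1)
    w = 1.0 / (target_onehot.sum(dim=(0, 2, 3)) ** 2 + eps)   # per-class weights
    inter = (w * (probs * target_onehot).sum(dim=(0, 2, 3))).sum()
    union = (w * (probs + target_onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * inter / (union + eps)

def focal_loss(logits, target, gamma=2.0):
    """target: (B, H, W) integer class labels."""
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)    # log-prob of the true class
    return (-(1.0 - logp_t.exp()) ** gamma * logp_t).mean()

def combined_loss(logits, target, n_classes, alpha=0.5):
    onehot = F.one_hot(target, n_classes).permute(0, 3, 1, 2).float()
    return alpha * generalized_dice_loss(logits, onehot) + (1 - alpha) * focal_loss(logits, target)
```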

Black Ice Detection Platform and Its Evaluation using Jetson Nano Devices based on Convolutional Neural Network (CNN)

  • Sun-Kyoung KANG;Yeonwoo LEE
    • Korean Journal of Artificial Intelligence / v.11 no.4 / pp.1-8 / 2023
  • In this paper, we propose a black ice detection platform framework using Convolutional Neural Networks (CNNs). To address the black ice problem, we introduce a real-time early warning platform based on a CNN architecture and, to enhance detection accuracy, apply a multi-scale dilated convolution feature fusion (MsDC-FF) technique. We then establish a specialized experimental platform using a comprehensive dataset of thermal road black ice images for training and evaluation. Experimental results show that the proposed network outperforms conventional image segmentation models and that the platform achieves real-time segmentation of road black ice areas by deploying the segmentation network on Jetson Nano edge devices. Using multi-scale dilated convolutions with different dilation rates in parallel yields faster segmentation thanks to the smaller number of model parameters; the proposed MsDC-FF Net(2) model is the fastest at 5.53 frames per second (FPS). This encourages safe driving for motorists and provides decision support for road surface management in road traffic monitoring departments. A minimal sketch of the parallel dilated fusion idea follows below.
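A multi-scale dilated convolution fusion block runs several 3x3 convolutions with different dilation rates in parallel, concatenates their outputs, and fuses them with a 1x1 convolution. Rates and channel counts below are illustrative; this is not the exact MsDC-FF module from the paper.

```python
# Minimal PyTorch sketch of parallel multi-scale dilated convolution feature fusion.
import torch
import torch.nn as nn

class DilatedFusion(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r),
                          nn.ReLU(inplace=True))
            for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# x = torch.randn(1, 16, 120, 160)                  # e.g., a thermal-image feature map
# y = DilatedFusion(16, 32)(x)
```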

Modified Pyramid Scene Parsing Network with Deep Learning based Multi Scale Attention (딥러닝 기반의 Multi Scale Attention을 적용한 개선된 Pyramid Scene Parsing Network)

  • Kim, Jun-Hyeok;Lee, Sang-Hun;Han, Hyun-Ho
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.45-51 / 2021
  • With the development of deep learning, semantic segmentation methods are being studied in various fields, but segmentation accuracy drops in fields that require high precision, such as medical image analysis. In this paper, we improve PSPNet, a deep learning based segmentation method, to minimize the loss of features during semantic segmentation. Conventional deep learning based segmentation methods reduce resolution and lose object features during feature extraction and compression; these losses remove edge and internal information of objects and lower segmentation accuracy. To solve this, the proposed multi-scale attention is added to the conventional PSPNet to prevent the loss of object features. A feature refinement step is performed by applying the attention method to the conventional PPM module, and by suppressing unnecessary feature information, edge and texture information is better preserved. The proposed method was trained on the Cityscapes dataset and evaluated quantitatively with the MIoU segmentation index. Experimental results show that segmentation accuracy improved by about 1.5% compared to the conventional PSPNet. A minimal sketch of an attention-gated PPM follows below.
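The idea of applying attention inside the pyramid pooling module (PPM) can be sketched by gating each pooled branch with a simple attention map before fusion. Bin sizes, the gating form, and channel counts below are assumptions, not the paper's exact module.

```python
# Minimal PyTorch sketch of a pyramid pooling module with attention-gated branches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePPM(nn.Module):
    def __init__(self, in_ch, branch_ch=64, bins=(1, 2, 3, 6)):
        super().__init__()
        self.bins = bins
        self.reduces = nn.ModuleList(nn.Conv2d(in_ch, branch_ch, 1) for _ in bins)
        self.gates = nn.ModuleList(nn.Conv2d(branch_ch, 1, 1) for _ in bins)
        self.fuse = nn.Conv2d(in_ch + branch_ch * len(bins), in_ch, 3, padding=1)
    def forward(self, x):
        h, w = x.shape[2:]
        outs = [x]
        for bin_size, reduce_conv, gate_conv in zip(self.bins, self.reduces, self.gates):
            y = reduce_conv(F.adaptive_avg_pool2d(x, bin_size))          # pooled context
            y = F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)
            outs.append(torch.sigmoid(gate_conv(y)) * y)                 # attention-gated branch
        return self.fuse(torch.cat(outs, dim=1))

# feat = torch.randn(1, 256, 64, 128)               # a backbone feature map
# out = AttentivePPM(256)(feat)
```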