• Title/Summary/Keyword: classification-based segmentation

Search results: 294

Development of ResNet-based WBC Classification Algorithm Using Super-pixel Image Segmentation

  • Lee, Kyu-Man; Kang, Soon-Ah
    • Journal of the Korea Society of Computer and Information / v.23 no.4 / pp.147-153 / 2018
  • In this paper, we propose an efficient WBC 14-Diff classification method built on WBC-ResNet-152, a type of CNN model. The key idea is to use super-pixels for the segmentation of WBC images and ResNet for the classification of WBCs. A total of 136,164 blood image samples (224x224) were grouped for image segmentation, training, training verification, and final performance testing. Because super-pixel segmentation yields a different number of images for each class, a weighted average was applied, giving a low image segmentation error of 7.23%. After training 50 times on the training data-set with a softmax classifier, an average TPR of 80.3% was achieved on the training set of 8,827 images. Based on this, on a verification data-set of 21,437 images, 14-Diff classification reached an average TPR of 93.4% for normal WBCs and 83.3% for abnormal WBCs. These results and this methodology demonstrate the usefulness of artificial intelligence in blood cell image classification: the WBC-ResNet-152-based morphology approach is shown to be a meaningful and worthwhile method. Building on stored medical data, in-depth diagnosis and early detection of curable diseases are expected to improve the quality of treatment.
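The class-size-weighted averaging mentioned above can be illustrated with a small NumPy sketch; the three classes, TPR values, and counts below are made up for illustration (the paper's 14-Diff setup has 14 classes):

```python
import numpy as np

# Hypothetical per-class TPRs and class sizes for a 3-class toy example.
tpr = np.array([0.95, 0.80, 0.60])     # per-class true-positive rates
counts = np.array([1000, 300, 50])     # images per class (imbalanced)

# An unweighted mean treats every class equally ...
plain_mean = tpr.mean()

# ... while a size-weighted average, as used to handle the unequal
# number of images per class, weights each class by its data share.
weighted = np.average(tpr, weights=counts)

print(round(float(plain_mean), 4))   # 0.7833
print(round(float(weighted), 4))     # 0.9037
```

With imbalanced classes the two summaries can differ substantially, which is why the choice of averaging matters when reporting per-class rates.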

Object oriented classification using Landsat images

  • Yoon, Geun-Won; Cho, Seong-Ik; Jeong, Soo; Park, Jong-Hyun
    • Proceedings of the KSRS Conference / 2003.11a / pp.204-206 / 2003
  • In order to utilize remotely sensed images effectively, many image classification methods have been proposed over the years, but the accuracy of traditional pixel-based classification is generally not high. In this study, object-oriented classification based on image segmentation is used to classify Landsat images. A necessary prerequisite for object-oriented image classification is successful image segmentation. Object-oriented classification, which is based on fuzzy logic, allows the integration of a broad spectrum of different object features, such as spectral values, shape, and texture. In this paper, Landsat images of Sochon-gun, Chungcheongnam-do are divided into urban, agriculture, forest, grassland, wetland, barren, and water classes using object-oriented classification algorithms. These preliminary results will help enable automatic image classification in the future.
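The fuzzy-logic combination of spectral and shape features can be sketched as follows. The feature names, membership ranges, and the min (AND) combination rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# One image object (segment) described by two hypothetical features.
segment = {"nir": 0.62, "compactness": 0.35}

# Fuzzy class rules: spectral AND shape membership, combined with min.
membership = {
    "forest": min(tri(segment["nir"], 0.4, 0.7, 1.0),
                  tri(segment["compactness"], 0.0, 0.2, 0.6)),
    "urban":  min(tri(segment["nir"], 0.0, 0.2, 0.5),
                  tri(segment["compactness"], 0.4, 0.8, 1.0)),
}
label = max(membership, key=membership.get)
print(label)  # forest
```

Because each class score is a graded membership rather than a hard yes/no, heterogeneous features (spectral, shape, texture) can be integrated in one rule base.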


Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion

  • Anibou, Chaimae; Saidi, Mohammed Nabil; Aboutajdine, Driss
    • Journal of Information Processing Systems / v.11 no.3 / pp.421-437 / 2015
  • This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. Feature extraction is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation, two strategies based on information fusion were used. The first integrates decision-level fusion by combining the decisions made by the SVM classifier within a sliding window. The second uses fuzzy set theory and rules based on probability theory to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images, showing that the data fusion improves classification accuracy compared to applying an SVM classifier alone: the overall accuracy of SVM classification of textured images is 88%, while the fusion methodology reaches up to 96%, depending on the size of the database.
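One simple form of the decision-level fusion described above is a majority vote over per-pixel SVM labels inside a sliding window; the 3x3 window below is an assumed size, and the initial labels are toy data standing in for SVM output:

```python
import numpy as np
from collections import Counter

def majority_fuse(labels, win=3):
    """Relabel each pixel with the majority label in a win x win neighborhood."""
    h, w = labels.shape
    pad = win // 2
    padded = np.pad(labels, pad, mode="edge")   # replicate borders
    fused = np.empty_like(labels)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win].ravel()
            fused[i, j] = Counter(window.tolist()).most_common(1)[0][0]
    return fused

# A noisy 2-texture label map: the isolated wrong label is voted away.
noisy = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]])
print(majority_fuse(noisy))
```

This is the decision-level variant; the score-level variant in the paper instead combines the SVM's class scores before taking the final decision.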

Object-oriented Classification of Urban Areas Using Lidar and Aerial Images

  • Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.3 / pp.173-179 / 2015
  • In this paper, object-based classification of urban areas using a combination of information from lidar and aerial images is introduced. High resolution images are frequently used for automatic classification, making use of the spectral characteristics of the features under study. In urban areas, however, pixel-based classification can be difficult, since building colors differ and building shadows can obscure building segmentation. Therefore, if the boundaries of buildings can be extracted from lidar, this information can improve the accuracy of urban area classification. In the data processing stage, the lidar data and the aerial image are co-registered into the same coordinate system, and a local maxima filter is used for building segmentation of the lidar data, which is then converted into an image containing only building information. Multiresolution segmentation is then performed using a scale parameter, color and shape factors, a compactness factor, and layer weights, and classification is carried out using a class hierarchy. The results indicate that lidar can provide useful additional data when combined with high resolution images in the object-oriented hierarchical classification of urban areas.
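A local maxima filter of the kind used above for building segmentation can be sketched on a toy lidar height grid; the 3x3 neighborhood and the 3 m height threshold are assumptions for illustration:

```python
import numpy as np

def local_maxima(height, thresh=3.0):
    """Mark cells that are the maximum of their 3x3 neighborhood and above thresh."""
    pad = np.pad(height, 1, mode="constant", constant_values=-np.inf)
    h, w = height.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 3, j:j + 3]
            out[i, j] = height[i, j] >= window.max() and height[i, j] > thresh
    return out

# Toy height grid (meters): one tall structure surrounded by ground returns.
grid = np.array([[1.0, 1.2, 1.1],
                 [1.0, 9.5, 1.3],
                 [0.9, 1.1, 1.0]])
print(local_maxima(grid).astype(int))
```

In practice the detected maxima seed the building regions, which are then rasterized into the building-only image that enters the object-oriented classification.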

Background Subtraction for Moving Cameras Based on Trajectory-Controlled Segmentation and Label Inference

  • Yin, Xiaoqing; Wang, Bin; Li, Weili; Liu, Yu; Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.10 / pp.4092-4107 / 2015
  • We propose a background subtraction method for moving cameras based on trajectory classification, image segmentation, and label inference. In the trajectory classification step, a PCA-based outlier detection strategy removes outliers from the foreground trajectories. Combining optical flow trajectories with the watershed algorithm, we propose a trajectory-controlled watershed segmentation algorithm that effectively improves edge preservation and prevents over-smoothing. Finally, label inference based on a Markov random field assigns labels to the unlabeled pixels. Experimental results on the motionseg database demonstrate the promising performance of the proposed approach compared with other competing methods.
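PCA-based outlier detection on trajectories can be sketched as flagging trajectories whose reconstruction error from the top principal components is large; the 4-D descriptors, component count, and threshold below are made-up illustrations, not the paper's values:

```python
import numpy as np

def pca_outliers(X, k=1, thresh=2.0):
    """Flag rows whose reconstruction error from the top-k PCs exceeds thresh."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    recon = Xc @ Vt[:k].T @ Vt[:k]          # reconstruction from k components
    err = np.linalg.norm(Xc - recon, axis=1)
    return err > thresh

# Five collinear 4-D trajectory descriptors plus one clear outlier.
X = np.array([[0.0, 0.0, 0.0, 0.0],
              [2.0, 2.0, 2.0, 2.0],
              [4.0, 4.0, 4.0, 4.0],
              [6.0, 6.0, 6.0, 6.0],
              [8.0, 8.0, 8.0, 8.0],
              [7.0, 1.0, 7.0, 1.0]])
print(pca_outliers(X))  # only the last trajectory is flagged
```

The intuition: consistent foreground trajectories lie near a low-dimensional subspace, so trajectories far from that subspace are likely mismatches to be removed before segmentation.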

Enhanced CNN Model for Brain Tumor Classification

  • Kasukurthi, Aravinda; Paleti, Lakshmikanth; Brahmaiah, Madamanchi; Sree, Ch. Sudha
    • International Journal of Computer Science & Network Security / v.22 no.5 / pp.143-148 / 2022
  • Brain tumor classification is an important process that allows doctors to plan treatment for patients based on the stage of the tumor. Various CNN-based architectures have been used to improve classification performance, but existing methods for brain tumor segmentation suffer from overfitting and poor efficiency when dealing with large datasets. The enhanced CNN architecture proposed in this study uses U-Net for brain tumor segmentation, RefineNet for pattern analysis, and the SegNet architecture for brain tumor classification. The model's efficiency was evaluated on a brain tumor benchmark dataset. U-Net provides good segmentation based on the local and contextual information of the MRI image, while SegNet selects the most important features for classification and reduces the number of trainable parameters. The enhanced CNN method outperforms existing methods in brain tumor classification, achieving an accuracy of 96.85 percent versus 94.82 percent for an existing CNN with transfer learning.

Classification Strategies for High Resolution Images of Korean Forests: A Case Study of Namhansansung Provincial Park, Korea

  • Park, Chong-Hwa; Choi, Sang-Il
    • Proceedings of the KSRS Conference / 2002.10a / pp.708-708 / 2002
  • Recent developments in sensor technology have provided remotely sensed data with very high spatial resolution. In order to fully utilize the potential of high resolution images, new image classification strategies are necessary. Unfortunately, high resolution increases the spectral within-field variability, and the classification accuracy of traditional pixel-based algorithms such as the maximum-likelihood method may decrease (Schiewe 2001). Recent developments in object-oriented classification based on image segmentation can be applied to the classification of forest patches on the rugged terrain of Korea. The objectives of this paper are as follows. First, to compare the pros and cons of pixel-based and object-oriented classification algorithms for forest patch classification, using Landsat ETM+ and IKONOS data. Second, to investigate ways to increase the classification accuracy of forest patches; supplemental data such as a DTM and a 1:25,000-scale Forest Type Map are used for topographic correction and image segmentation. Third, to propose the best classification strategy for forest patches in terms of accuracy and data requirements. The research site is Namhansansung Provincial Park, located in the eastern suburbs of Seoul Metropolitan City, chosen for its diverse forest patch types and data availability. Preliminary results can be summarized as follows. First, topographic correction of reflectance is essential for the classification of forest patches on rugged terrain. Second, object-oriented classification of IKONOS data enables higher classification accuracy than Landsat ETM+ and pixel-based classification. Third, multi-stage segmentation is very useful for investigating landscape-ecological aspects of Korean forest communities.
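Topographic correction of reflectance, noted above as essential on rugged terrain, is commonly done with a cosine-style correction. The sketch below shows the standard cosine correction as an illustration; the paper does not state which correction variant it used, and the angle values are arbitrary:

```python
import numpy as np

def cosine_correction(radiance, slope, aspect, sun_zenith, sun_azimuth):
    """Cosine topographic correction; all angles in radians."""
    # cosine of the local solar incidence angle on the tilted surface
    cos_i = (np.cos(sun_zenith) * np.cos(slope)
             + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))
    # rescale so a flat pixel is unchanged; clip avoids division blow-up
    return radiance * np.cos(sun_zenith) / np.clip(cos_i, 1e-3, None)

flat = cosine_correction(100.0, 0.0, 0.0, 0.5, 1.0)   # flat terrain: unchanged
sunny = cosine_correction(100.0, 0.3, 1.0, 0.5, 1.0)  # sun-facing slope: darkened
print(round(float(flat), 2), round(float(sunny), 2))
```

The slope and aspect inputs come from a DTM, which is why the supplemental data listed above are needed before classification.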


Document Image Segmentation and Classification using Texture Features and Structural Information (텍스쳐 특징과 구조적인 정보를 이용한 문서 영상의 분할 및 분류)

  • Park, Kun-Hye; Kim, Bo-Ram; Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing / v.11 no.3 / pp.215-220 / 2010
  • In this paper, we propose a new texture-based page segmentation and classification method that automatically identifies the table, background, image, and text regions in a given document image. The proposed method consists of two stages: document segmentation, followed by content classification. The classification is based on texture analysis: each content type in the document is treated as a region with a distinct texture, so classifying document contents can be posed as a texture segmentation and analysis problem. Two-dimensional Gabor filters are used to extract texture features for each of these regions. Our method assumes no a priori knowledge about the content or language of the document. Experimental results show that the method performs well in both document segmentation and content classification. The proposed system is expected to be applicable to areas such as multimedia data search and real-time image processing.
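A minimal sketch of Gabor-based texture features follows; the kernel size, wavelength, and orientations are illustrative choices, not the paper's settings. The idea is that oriented, periodic content (here, vertical stripes standing in for text-like texture) responds most strongly to the kernel whose carrier matches its direction and period:

```python
import numpy as np

def gabor_kernel(ksize, theta, lam, sigma, gamma=0.5):
    """Real 2-D Gabor kernel: oriented Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def texture_features(region, ksize=7, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute filter response per orientation (a simple texture signature)."""
    h, w = region.shape
    feats = []
    for th in thetas:
        k = gabor_kernel(ksize, th, lam=4.0, sigma=2.0)
        resp = [np.sum(region[i:i + ksize, j:j + ksize] * k)
                for i in range(h - ksize + 1) for j in range(w - ksize + 1)]
        feats.append(np.mean(np.abs(resp)))
    return np.array(feats)

# Vertical stripes of period 4 match the theta=0 kernel's carrier best.
stripes = np.tile(np.array([1.0, 1.0, 0.0, 0.0]), (16, 4))
f = texture_features(stripes)
print(int(np.argmax(f)))  # 0
```

A filter bank over several orientations and scales gives each pixel a feature vector, which is what the segmentation stage then clusters into table, text, image, and background regions.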

Segmentation and Classification of Lidar data

  • Tseng, Yi-Hsing; Wang, Miao
    • Proceedings of the KSRS Conference / 2003.11a / pp.153-155 / 2003
  • Laser scanning has become a viable technique for collecting large amounts of accurate 3D point data densely distributed over the scanned object surface. The inherently 3D nature of the sub-randomly distributed point cloud provides abundant spatial information, and extracting valuable spatial information from laser-scanned data has become an active research topic, for instance for digital elevation models, building models, and vegetation volumes. The point cloud should be segmented and classified before spatial information is extracted. This paper investigates some existing segmentation methods and then proposes an octree-based split-and-merge segmentation method that divides lidar data into clusters belonging to 3D planes. Classification of the lidar data can then be performed based on the derived attributes of the extracted 3D planes. Test results on both ground-based and airborne lidar data show the potential of applying this method to extract spatial features from lidar data.
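Fitting a 3D plane to a segmented point cluster, the operation behind the derived plane attributes, can be sketched with an SVD-based least-squares fit. This is a standard technique shown as an assumption; the paper does not state its exact fitting method:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n.x + d = 0 to an (N, 3) point cluster via SVD."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]            # direction of least variance = plane normal
    d = -normal @ centroid
    return normal, d

# Points sampled (noise-free here) from the plane z = 2.
pts = np.array([[0.0, 0.0, 2.0],
                [1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0],
                [1.0, 1.0, 2.0],
                [0.5, 0.5, 2.0]])
n, d = fit_plane(pts)
print(np.round(np.abs(n), 3))   # normal is +/-(0, 0, 1)
```

In a split-and-merge scheme, octree cells whose points fit a plane poorly are split, and neighboring cells with compatible normals and offsets are merged into one planar cluster.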


Semantic Image Segmentation Combining Image-level and Pixel-level Classification (영상수준과 픽셀수준 분류를 결합한 영상 의미분할)

  • Kim, Seon Kuk; Lee, Chil Woo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1425-1430 / 2018
  • In this paper, we propose a CNN-based deep learning algorithm for semantic segmentation of images. To improve the accuracy of semantic segmentation, we combine pixel-level object classification with image-level object classification: the image-level branch accurately detects which objects the image contains, and the pixel-level branch indicates which object region each pixel belongs to. The proposed network consists of three parts: a part that extracts image features, a part that outputs the final result at the resolution of the original image, and a part that performs image-level object classification. Separate loss functions are used for the two levels: image-level classification uses KL divergence, and pixel-level classification uses cross-entropy. In addition, feature-extraction layers are combined with decoder layers of matching resolution to recover the positional information and object-boundary detail lost through pooling operations.
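The two loss terms described above, KL divergence at the image level and cross-entropy at the pixel level, can be sketched numerically. The toy probabilities, the 2x2 image, and the unweighted sum of the two terms are assumptions for illustration:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (image-level loss)."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def pixel_cross_entropy(probs, labels, eps=1e-12):
    """Mean per-pixel cross-entropy; probs is (H, W, C), labels is (H, W)."""
    h, w, _ = probs.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.mean(np.log(np.clip(picked, eps, 1.0))))

# Toy 2x2 image with 2 classes.
probs = np.array([[[0.9, 0.1], [0.8, 0.2]],
                  [[0.3, 0.7], [0.6, 0.4]]])   # per-pixel softmax output
labels = np.array([[0, 0], [1, 0]])            # per-pixel ground truth
img_target = np.array([0.5, 0.5])              # image-level class distribution
img_pred = np.array([0.6, 0.4])                # predicted distribution

loss = kl_div(img_target, img_pred) + pixel_cross_entropy(probs, labels)
print(round(loss, 4))
```

In training, the two terms would be backpropagated jointly so the shared feature extractor serves both the image-level and pixel-level heads.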