• Title/Abstract/Keywords: classification-based segmentation

Search results: 294

Development of ResNet-based WBC Classification Algorithm Using Super-pixel Image Segmentation

  • Lee, Kyu-Man;Kang, Soon-Ah
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 23, No. 4
    • /
    • pp.147-153
    • /
    • 2018
  • In this paper, we propose an efficient WBC 14-Diff classification method based on WBC-ResNet-152, a type of CNN model. The key idea is to use super-pixels for the segmentation of WBC images and ResNet for the classification of WBCs. A total of 136,164 blood image samples (224x224) were grouped for image segmentation, training, training verification, and final test performance analysis. Because super-pixel segmentation yields a different number of images for each class, a weighted average was applied, keeping the image segmentation error low at 7.23%. Training 50 times on the training data-set and using a soft-max classifier, an average TPR of 80.3% was achieved on the training set of 8,827 images. On the verification data-set of 21,437 images, the 14-Diff classification reached an average TPR of 93.4% for normal WBCs and 83.3% for abnormal WBCs. The results and methodology of this research demonstrate the usefulness of artificial intelligence in the field of blood cell image classification, and the WBC-ResNet-152 based morphology approach is shown to be a meaningful and worthwhile method. Building on stored medical data, in-depth diagnosis and early detection of curable diseases are expected to improve the quality of treatment.
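The pipeline above pairs super-pixel segmentation with a ResNet classifier. The sketch below is an illustrative reconstruction of that idea, not the authors' WBC-ResNet-152: it uses SLIC super-pixels from scikit-image to isolate the cell region at the image centre and a stock torchvision ResNet-152 with a 14-class head; the class count, the centre-pixel cropping heuristic, and the preprocessing are assumptions.

```python
# Illustrative sketch: super-pixel segmentation followed by ResNet classification.
# Not the paper's WBC-ResNet-152; class count and cropping heuristic are assumed.
import numpy as np
import torch
from skimage import io, segmentation
from torchvision import models, transforms

NUM_CLASSES = 14  # assumed: one class per WBC type in a 14-Diff count

def segment_cell(image: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """Return a boolean mask for the super-pixel closest to the image centre."""
    labels = segmentation.slic(image, n_segments=n_segments, compactness=10, start_label=0)
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    return labels == labels[cy, cx]

def build_classifier() -> torch.nn.Module:
    model = models.resnet152(weights=None)  # backbone only; no pretrained weights assumed
    model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
])

model = build_classifier()

def classify(path: str) -> int:
    image = io.imread(path)                     # RGB blood-cell image assumed
    mask = segment_cell(image)
    cell_only = image * mask[..., None]         # keep only the segmented cell pixels
    batch = preprocess(cell_only.astype(np.uint8)).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return int(logits.softmax(dim=1).argmax())
```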

Object oriented classification using Landsat images

  • Yoon, Geun-Won;Cho, Seong-Ik;Jeong, Soo;Park, Jong-Hyun
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS
    • /
    • pp.204-206
    • /
    • 2003
  • To utilize remotely sensed images effectively, many image classification methods have been proposed over the years, but the accuracy of traditional pixel-based classification is generally not high. In this study, object-oriented classification based on image segmentation is used to classify Landsat images. A necessary prerequisite for object-oriented image classification is successful image segmentation. Object-oriented image classification, which is based on fuzzy logic, allows the integration of a broad spectrum of object features, such as spectral values, shape, and texture. In this paper, Landsat images of Sochon-gun, Chungcheongnam-do are divided into urban, agriculture, forest, grassland, wetland, barren, and water classes using object-oriented classification algorithms. These preliminary results will help enable automatic image classification in the future.
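As a generic illustration of the segment-then-label idea above (not the paper's fuzzy rule base), the sketch below segments a multiband image with Felzenszwalb segmentation from scikit-image and assigns each segment to the nearest class centroid of its mean band values; the class names and centroid signatures are placeholders.

```python
# Illustrative object-oriented classification: segment first, then label whole segments.
# A nearest-centroid rule stands in for the paper's fuzzy-logic integration;
# class centroids below are placeholders, not trained values.
import numpy as np
from skimage.segmentation import felzenszwalb

CLASS_CENTROIDS = {          # hypothetical mean band signatures (scaled to 0-1)
    "water":  np.array([0.05, 0.08, 0.04]),
    "forest": np.array([0.10, 0.30, 0.15]),
    "urban":  np.array([0.40, 0.40, 0.38]),
}

def classify_objects(image: np.ndarray) -> dict:
    """image: float array (H, W, bands) scaled to [0, 1]; returns {segment_id: class}."""
    segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
    result = {}
    for seg_id in np.unique(segments):
        mean_spectrum = image[segments == seg_id].mean(axis=0)   # object-level feature
        distances = {name: np.linalg.norm(mean_spectrum - c)
                     for name, c in CLASS_CENTROIDS.items()}
        result[int(seg_id)] = min(distances, key=distances.get)
    return result

if __name__ == "__main__":
    demo = np.random.rand(64, 64, 3)             # stand-in for a small Landsat subset
    print(classify_objects(demo))
```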


Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion

  • Anibou, Chaimae;Saidi, Mohammed Nabil;Aboutajdine, Driss
    • Journal of Information Processing Systems
    • /
    • Vol. 11, No. 3
    • /
    • pp.421-437
    • /
    • 2015
  • This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two strategies based on information fusion were used. We first applied decision-level fusion by combining the decisions made by the SVM classifier within a sliding window. In the second strategy, fuzzy set theory and rules based on probability theory were used to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images, showing that the proposed data fusion method improves classification accuracy compared to applying an SVM classifier alone. The results revealed that the overall accuracy of SVM classification of textured images is 88%, while our fusion methodology reached an accuracy of up to 96%, depending on the size of the database.
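A rough sketch of the first strategy follows: per-pixel features from a single-level 2-D DWT (PyWavelets), an initial SVM labelling (scikit-learn), and a sliding-window majority vote as a simple decision-level fusion step. The wavelet, window size, and the toy two-texture image are assumptions, not the paper's settings.

```python
# Illustrative DWT-feature + SVM pipeline with majority-vote decision fusion.
# Wavelet, window size, and the toy training data are assumptions.
import numpy as np
import pywt
from scipy import ndimage, stats
from sklearn.svm import SVC

def dwt_features(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Per-pixel feature vector: magnitudes of the level-1 detail sub-bands."""
    _, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    feats = [ndimage.zoom(np.abs(band),
                          np.array(image.shape) / np.array(band.shape), order=1)
             for band in (lh, hl, hh)]
    return np.stack(feats, axis=-1)                      # (H, W, 3)

def majority_filter(labels: np.ndarray, size: int = 7) -> np.ndarray:
    """Decision-level fusion: replace each label by the most frequent one in a window."""
    vote = lambda window: stats.mode(window, keepdims=False).mode
    return ndimage.generic_filter(labels, vote, size=size)

# Toy example: two synthetic textures, half flat and half noisy.
image = np.concatenate([np.zeros((32, 32)), np.random.rand(32, 32)], axis=1)
features = dwt_features(image).reshape(-1, 3)
truth = np.concatenate([np.zeros((32, 32)), np.ones((32, 32))], axis=1).ravel()

svm = SVC(kernel="rbf").fit(features, truth)             # initial per-pixel labelling
initial = svm.predict(features).reshape(image.shape)
fused = majority_filter(initial)                          # smoothed final segmentation
```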

Object-oriented Classification of Urban Areas Using Lidar and Aerial Images

  • Lee, Won Hee
    • 한국측량학회지
    • /
    • Vol. 33, No. 3
    • /
    • pp.173-179
    • /
    • 2015
  • In this paper, object-based classification of urban areas based on a combination of information from lidar and aerial images is introduced. High resolution images are frequently used in automatic classification, making use of the spectral characteristics of the features under study. In urban areas, however, pixel-based classification can be difficult because building colors differ and building shadows can obscure building segmentation. Therefore, if the boundaries of buildings can be extracted from lidar, this information can improve the accuracy of urban area classifications. In the data processing stage, the lidar data and the aerial image are co-registered into the same coordinate system, and a local maxima filter is used for the building segmentation of the lidar data, which is then converted into an image containing only building information. Multiresolution segmentation is then performed using a scale parameter together with color and shape factors, and a compactness factor and layer weights are applied in the classification using a class hierarchy. Results indicate that lidar can provide useful additional data when combined with high resolution images in the object-oriented hierarchical classification of urban areas.
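One concrete piece of the workflow above is turning lidar heights into a building layer that can be stacked with the aerial image. The sketch below applies a height threshold plus a local-maximum filter to a rasterized normalized height grid with SciPy; the 2 m threshold, window size, and minimum object size are assumptions rather than the paper's parameters.

```python
# Illustrative extraction of a building layer from a lidar height grid, ready to be
# stacked with aerial imagery for object-based classification. Thresholds are assumed.
import numpy as np
from scipy import ndimage

def building_mask(ndsm: np.ndarray, min_height: float = 2.0, win: int = 5) -> np.ndarray:
    """ndsm: normalized digital surface model (height above ground, metres)."""
    local_max = ndimage.maximum_filter(ndsm, size=win)
    candidates = (ndsm > min_height) & (ndsm >= local_max - 0.5)  # near local peaks, above threshold
    # Clean the mask and keep connected components large enough to be buildings.
    filled = ndimage.binary_closing(candidates, structure=np.ones((3, 3)))
    labels, _ = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, index=np.arange(labels.max() + 1))
    return np.isin(labels, np.flatnonzero(sizes >= 25))

# Toy nDSM: flat ground with one 4 m "building" block.
ndsm = np.zeros((50, 50))
ndsm[10:20, 15:30] = 4.0
layer = building_mask(ndsm).astype(np.uint8)   # extra band for object-based classification
```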

Background Subtraction for Moving Cameras based on trajectory-controlled segmentation and Label Inference

  • Yin, Xiaoqing;Wang, Bin;Li, Weili;Liu, Yu;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9, No. 10
    • /
    • pp.4092-4107
    • /
    • 2015
  • We propose a background subtraction method for moving cameras based on trajectory classification, image segmentation, and label inference. In the trajectory classification step, a PCA-based outlier detection strategy is used to remove outliers from the foreground trajectories. Combining optical-flow trajectories with the watershed algorithm, we propose a trajectory-controlled watershed segmentation algorithm that effectively improves edge preservation and avoids over-smoothing. Finally, label inference based on a Markov random field is conducted to label the unlabeled pixels. Experimental results on the motionseg database demonstrate the promising performance of the proposed approach compared with other competing methods.
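The first stage above, removing outliers from the foreground trajectories with PCA, can be illustrated as follows: trajectories are stacked as rows, projected onto a low-rank PCA subspace, and those with large reconstruction error are discarded. The number of components and the keep ratio are assumptions; the watershed and MRF stages are not shown.

```python
# Illustrative PCA-based outlier removal for point trajectories
# (stage one of the pipeline above); component count and threshold are assumed.
import numpy as np
from sklearn.decomposition import PCA

def filter_trajectories(traj: np.ndarray, n_components: int = 3, keep_ratio: float = 0.9):
    """traj: (n_trajectories, 2 * n_frames) array of stacked x/y coordinates."""
    pca = PCA(n_components=n_components).fit(traj)
    recon = pca.inverse_transform(pca.transform(traj))
    error = np.linalg.norm(traj - recon, axis=1)           # reconstruction error per trajectory
    threshold = np.quantile(error, keep_ratio)              # keep the best 90%
    return traj[error <= threshold], error

# Toy data: 100 smooth trajectories plus 5 erratic ones over 30 frames.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
smooth = np.stack([np.concatenate([t * a, t * b]) for a, b in rng.normal(size=(100, 2))])
erratic = rng.normal(size=(5, 60)) * 5.0
kept, err = filter_trajectories(np.vstack([smooth, erratic]))
print(f"kept {len(kept)} of 105 trajectories")
```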

Enhanced CNN Model for Brain Tumor Classification

  • Kasukurthi, Aravinda;Paleti, Lakshmikanth;Brahmaiah, Madamanchi;Sree, Ch.Sudha
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 5
    • /
    • pp.143-148
    • /
    • 2022
  • Brain tumor classification is an important process that allows doctors to plan treatment for patients based on the stages of the tumor. To improve classification performance, various CNN-based architectures are used for brain tumor classification. Existing methods for brain tumor segmentation suffer from overfitting and poor efficiency when dealing with large datasets. The enhanced CNN architecture proposed in this study is based on U-Net for brain tumor segmentation, RefineNet for pattern analysis, and SegNet architecture for brain tumor classification. The brain tumor benchmark dataset was used to evaluate the enhanced CNN model's efficiency. Based on the local and context information of the MRI image, the U-Net provides good segmentation. SegNet selects the most important features for classification while also reducing the trainable parameters. In the classification of brain tumors, the enhanced CNN method outperforms the existing methods. The enhanced CNN model has an accuracy of 96.85 percent, while the existing CNN with transfer learning has an accuracy of 94.82 percent.
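As a schematic of the segment-then-classify design described above (not the authors' U-Net/RefineNet/SegNet configuration), the sketch below runs a tiny encoder-decoder to predict a tumor mask, masks the input MRI slice with it, and feeds the result to a small CNN classifier; all layer sizes and the four-class head are assumptions.

```python
# Schematic two-stage pipeline: segmentation network -> masked image -> classifier.
# Layer sizes and the 4-class head are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):                      # stand-in for the U-Net stage
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.decode = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x):
        return torch.sigmoid(self.decode(self.encode(x)))   # per-pixel tumor probability

class TinyClassifier(nn.Module):                  # stand-in for the classification stage
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, n_classes)
    def forward(self, x):
        return self.head(self.features(x))

def classify_slice(mri: torch.Tensor) -> torch.Tensor:
    """mri: (N, 1, H, W). Segment first, then classify only the segmented region."""
    mask = (TinySegNet()(mri) > 0.5).float()
    return TinyClassifier()(mri * mask).softmax(dim=1)

probs = classify_slice(torch.rand(2, 1, 64, 64))   # toy batch of two slices
```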

Classification Strategies for High Resolution Images of Korean Forests: A Case Study of Namhansansung Provincial Park, Korea

  • Park, Chong-Hwa;Choi, Sang-Il
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2002년도 Proceedings of International Symposium on Remote Sensing
    • /
    • pp.708-708
    • /
    • 2002
  • Recent developments in sensor technologies have provided remotely sensed data with very high spatial resolution. In order to fully utilize the potential of high resolution images, new image classification strategies are necessary. Unfortunately, high resolution images increase the spectral within-field variability, so the classification accuracy of traditional pixel-based algorithms such as the maximum-likelihood method may decrease (Schiewe 2001). Recent developments in object-oriented classification based on image segmentation algorithms can be used for the classification of forest patches on the rugged terrain of Korea. The objectives of this paper are as follows. First, to compare the pros and cons of pixel-based and object-oriented classification algorithms for forest patch classification; Landsat ETM+ and IKONOS data are used for the classification. Second, to investigate ways to increase the classification accuracy of forest patches; supplemental data such as a DTM and a 1:25,000-scale Forest Type Map are used for topographic correction and image segmentation. Third, to propose the best classification strategy for forest patch classification in terms of accuracy and data requirements. The research site is Namhansansung Provincial Park, located in the eastern suburbs of Seoul Metropolitan City, chosen for its diverse forest patch types and data availability. Both Landsat ETM+ and IKONOS data are used for the classification. Preliminary results can be summarized as follows. First, topographic correction of reflectance is essential for the classification of forest patches on rugged terrain. Second, object-oriented classification of IKONOS data achieves higher classification accuracy than Landsat ETM+ data and pixel-based classification. Third, multi-stage segmentation is very useful for investigating landscape-ecological aspects of forest communities in Korea.
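The first preliminary result above stresses topographic correction of reflectance. A standard way to do this (used here purely as an illustration; the paper does not state which variant it applied) is cosine correction, which scales each pixel by the ratio of the cosine of the solar zenith angle to the cosine of the local illumination angle computed from slope and aspect.

```python
# Illustrative cosine topographic correction of reflectance using slope/aspect
# from a DTM; the paper does not specify its correction method, so this is generic.
import numpy as np

def cosine_correction(reflectance, slope, aspect, solar_zenith, solar_azimuth):
    """All angles in radians; slope/aspect arrays match the reflectance grid.

    cos(i) = cos(sz)*cos(slope) + sin(sz)*sin(slope)*cos(saz - aspect)
    corrected = reflectance * cos(sz) / cos(i)
    """
    cos_i = (np.cos(solar_zenith) * np.cos(slope)
             + np.sin(solar_zenith) * np.sin(slope) * np.cos(solar_azimuth - aspect))
    cos_i = np.clip(cos_i, 0.1, None)            # avoid division blow-ups in deep shadow
    return reflectance * np.cos(solar_zenith) / cos_i

# Toy example: a 3x3 reflectance patch on a 20-degree slope facing away from the sun.
refl = np.full((3, 3), 0.25)
slope = np.deg2rad(np.full((3, 3), 20.0))
aspect = np.deg2rad(np.full((3, 3), 180.0))
corrected = cosine_correction(refl, slope, aspect,
                              solar_zenith=np.deg2rad(40.0),
                              solar_azimuth=np.deg2rad(135.0))
```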


텍스쳐 특징과 구조적인 정보를 이용한 문서 영상의 분할 및 분류 (Document Image Segmentation and Classification using Texture Features and Structural Information)

  • 박근혜;김보람;김욱현
    • 융합신호처리학회논문지
    • /
    • Vol. 11, No. 3
    • /
    • pp.215-220
    • /
    • 2010
  • This paper proposes a new texture-based image segmentation and classification method for automatically classifying the components of a document image, such as tables, figures, and text. The proposed method consists of a document image segmentation stage and a stage that classifies the components within the document image. Segmentation is performed first, and the components of the document image are then classified over the segmented regions, exploiting the fact that each component corresponds to a region with a distinct texture. To extract the texture features used to classify the segmented regions, two-dimensional Gabor filters, which are widely used in texture analysis, are employed. The proposed method shows good performance in document image segmentation and component classification without any prior knowledge of the components or the language used. It can be applied to a variety of fields such as multimedia data retrieval and real-time image processing.
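Since the method above distinguishes components by their distinct textures using 2-D Gabor filters, the sketch below shows a generic Gabor filter bank (scikit-image) that turns a grayscale document image into per-pixel texture features, which could then feed the segmentation and classification stages; the frequencies, orientations, and smoothing width are assumptions, not the paper's values.

```python
# Illustrative 2-D Gabor filter bank for texture features of a document image.
# Frequencies, orientations, and smoothing width are assumptions.
import numpy as np
from scipy import ndimage
from skimage.filters import gabor

def gabor_features(image: np.ndarray,
                   frequencies=(0.1, 0.2, 0.4),
                   n_orientations: int = 4,
                   smooth_sigma: float = 3.0) -> np.ndarray:
    """image: 2-D grayscale array. Returns an (H, W, n_filters) texture-energy stack."""
    channels = []
    for freq in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=freq, theta=theta)
            energy = np.sqrt(real ** 2 + imag ** 2)          # local Gabor energy
            channels.append(ndimage.gaussian_filter(energy, smooth_sigma))
    return np.stack(channels, axis=-1)

# Toy document patch: striped, text-like rows next to a flat margin.
page = np.zeros((64, 64))
page[:, :32] = (np.arange(64) % 4 < 2)[:, None]
features = gabor_features(page)                    # (64, 64, 12) feature stack
```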

Segmentation and Classification of Lidar data

  • Tseng, Yi-Hsing;Wang, Miao
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS
    • /
    • pp.153-155
    • /
    • 2003
  • Laser scanning has become a viable technique for collecting a large amount of accurate 3D point data densely distributed on the scanned object surface. The inherently 3D nature of the sub-randomly distributed point cloud provides abundant spatial information. Exploring valuable spatial information from laser-scanned data has become an active research topic, for instance extracting digital elevation models, building models, and vegetation volumes. The sub-randomly distributed point cloud should be segmented and classified before spatial information is extracted. This paper reviews some existing segmentation methods and then proposes an octree-based split-and-merge segmentation method to divide lidar data into clusters belonging to 3D planes. The classification of lidar data can then be performed based on the derived attributes of the extracted 3D planes. Test results on both ground and airborne lidar data show the potential of applying this method to extract spatial features from lidar data.
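The core idea above, recursively splitting the point cloud until each cell is well fitted by a 3-D plane, can be sketched as follows: fit a plane to the points in a cell by PCA/SVD, and if the residual is too large, split the cell into eight octants and recurse. The residual threshold, minimum point count, and maximum depth are assumptions, and the merge stage of the split-and-merge method is omitted.

```python
# Illustrative octree-style split step: recursively divide a point cloud until
# each cell is approximately planar. Thresholds/depth are assumptions; the merge
# stage of the paper's split-and-merge method is omitted.
import numpy as np

def plane_residual(points: np.ndarray) -> float:
    """RMS distance of points to their best-fit (least-squares) plane."""
    centered = points - points.mean(axis=0)
    *_, vt = np.linalg.svd(centered, full_matrices=False)   # vt[-1] is the plane normal
    return float(np.sqrt(np.mean((centered @ vt[-1]) ** 2)))

def octree_split(points, tol=0.05, min_points=30, depth=0, max_depth=6):
    """Return a list of point subsets, each approximately planar."""
    if len(points) < min_points or depth >= max_depth or plane_residual(points) <= tol:
        return [points]
    center = points.mean(axis=0)
    clusters = []
    for code in range(8):                                    # eight octants around the centroid
        sel = np.ones(len(points), dtype=bool)
        for axis in range(3):
            side = bool((code >> axis) & 1)
            sel &= (points[:, axis] >= center[axis]) if side else (points[:, axis] < center[axis])
        if sel.any():
            clusters.extend(octree_split(points[sel], tol, min_points, depth + 1, max_depth))
    return clusters

# Toy cloud: two perpendicular planes with mild noise.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, size=(500, 2))
plane_a = np.column_stack([xy, 0.01 * rng.normal(size=500)])                   # z ~ 0
plane_b = np.column_stack([xy[:, 0], 0.01 * rng.normal(size=500), xy[:, 1]])   # y ~ 0
segments = octree_split(np.vstack([plane_a, plane_b]))
```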


영상수준과 픽셀수준 분류를 결합한 영상 의미분할 (Semantic Image Segmentation Combining Image-level and Pixel-level Classification)

  • 김선국;이칠우
    • 한국멀티미디어학회논문지
    • /
    • Vol. 21, No. 12
    • /
    • pp.1425-1430
    • /
    • 2018
  • In this paper, we propose a CNN-based deep learning algorithm for semantic segmentation of images. To improve the accuracy of semantic segmentation, we combine pixel-level object classification with image-level object classification: the image-level classification captures the overall characteristics of an image, while the pixel-level classification indicates which object region each pixel belongs to. The proposed network consists of three parts: a part that extracts image features, a part that outputs the final result at the resolution of the original image, and a part that performs image-level object classification. Separate loss functions are used for the image-level and pixel-level classifications; image-level object classification uses the KL divergence and pixel-level object classification uses the cross-entropy. In addition, feature-extraction layers are combined with decoder layers of matching resolution to recover the positional and object-boundary information lost in the pooling operations.
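The two losses described above can be combined as in the sketch below: KL divergence between the predicted and target image-level class distributions plus cross-entropy over the per-pixel class map. The weighting factor and the tensor shapes are assumptions; the feature-extraction network itself is not shown.

```python
# Illustrative combined loss: KL divergence for image-level classification plus
# cross-entropy for pixel-level classification. The 0.5 weight and the shapes
# below are assumptions; the backbone network is not shown.
import torch
import torch.nn.functional as F

def combined_loss(image_logits,      # (N, C)        image-level class scores
                  pixel_logits,      # (N, C, H, W)  per-pixel class scores
                  image_target,      # (N, C)        target class distribution
                  pixel_target,      # (N, H, W)     per-pixel class indices
                  alpha: float = 0.5):
    image_loss = F.kl_div(F.log_softmax(image_logits, dim=1), image_target,
                          reduction="batchmean")
    pixel_loss = F.cross_entropy(pixel_logits, pixel_target)
    return pixel_loss + alpha * image_loss

# Toy shapes: batch of 2, 5 classes, 32x32 output map.
loss = combined_loss(torch.randn(2, 5, requires_grad=True),
                     torch.randn(2, 5, 32, 32, requires_grad=True),
                     torch.softmax(torch.randn(2, 5), dim=1),
                     torch.randint(0, 5, (2, 32, 32)))
loss.backward()
```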