• Title/Summary/Keyword: gray-level co-occurrence matrix


Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.93-101
    • /
    • 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the Gray Level Co-occurrence Matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV image captures spanned March-October 2021, while field surveys on three dates provided ground truth data. We focused on data from the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using visual bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showcased the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently demonstrated the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models.
Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of utilizing farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key for optimizing performance in real-world applications.
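The three GLCM features this study found most impactful (homogeneity, entropy, and correlation) are standard Haralick-style statistics. As a rough illustration of how such features are computed, here is a minimal NumPy sketch; the symmetric counting, the single-pixel offset, and the quantization level are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    counted symmetrically and normalized to a joint probability."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            m[i, j] += 1
            m[j, i] += 1  # symmetric counting
    return m / m.sum()

def glcm_features(p):
    """Homogeneity, entropy, and correlation of a normalized GLCM p."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(((i - mu_i) ** 2) * p))
    sd_j = np.sqrt(np.sum(((j - mu_j) ** 2) * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
    return homogeneity, entropy, correlation
```

In practice such features are computed per image patch and concatenated with the RGB bands before training the SVC.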

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung;Park, Jueon;Lee, Won Hee;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_2
    • /
    • pp.989-1006
    • /
    • 2020
  • Building change monitoring based on building detection is one of the most important fields for monitoring artificial structures using high-resolution multi-temporal images such as those from CAS500-1 and 2, which are scheduled to be launched. However, not only the various shapes and sizes of buildings on the Earth's surface, but also the shadows and trees around them, make it difficult to detect buildings accurately. In addition, a large number of misdetections are caused by relief displacement, which depends on the azimuth and elevation angles of the platform. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection results. The platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was performed on high-resolution imagery, shadow objects were classified through shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and the area of each object was calculated for building candidate detection. The final buildings were then detected using the direction and distance relationship between the center of each building candidate object and its shadow according to the azimuth angle of the Sun. A total of three methods were proposed for change detection between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV).
Experimental results showed that the F1-scores of building detection using feature information alone were 0.488 and 0.696 for the KOMPSAT-3 and UAV images, respectively, whereas the F1-scores of building detection considering shadows were 0.876 and 0.867, indicating that the shadow-aware building detection method is more accurate. Among the three proposed building change detection methods, the one considering the direction between objects according to the platform's azimuth angle achieved the highest F1-score, 0.891.
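The geometric core of the shadow check — relating a building candidate to its shadow via the Sun's azimuth — can be sketched as follows. The coordinate convention (north-up image, x toward the east, y downward toward the south) and the 30° tolerance are illustrative assumptions, not values from the paper:

```python
import math

def shadow_direction(sun_azimuth_deg):
    """Unit vector (dx, dy) of the expected shadow direction in a
    north-up image (x east, y downward/south). A shadow is cast
    opposite the sun, i.e. at azimuth + 180 degrees."""
    az = math.radians((sun_azimuth_deg + 180.0) % 360.0)
    return math.sin(az), -math.cos(az)  # image y grows southward

def is_consistent(building_xy, shadow_xy, sun_azimuth_deg, tol_deg=30.0):
    """True if the vector from a building candidate's centroid to its
    shadow's centroid agrees with the sun azimuth within tol_deg."""
    vx = shadow_xy[0] - building_xy[0]
    vy = shadow_xy[1] - building_xy[1]
    ex, ey = shadow_direction(sun_azimuth_deg)
    norm = math.hypot(vx, vy)
    if norm == 0:
        return False
    cos_ang = max(-1.0, min(1.0, (vx * ex + vy * ey) / norm))
    return math.degrees(math.acos(cos_ang)) <= tol_deg
```

For example, with the sun due south (azimuth 180°), a valid shadow object should lie to the north of its building candidate.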

Classification of Breast Tumor Cell Tissue Section Images (유방 종양 세포 조직 영상의 분류)

  • 황해길;최현주;윤혜경;남상희;최흥국
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.4
    • /
    • pp.22-30
    • /
    • 2001
  • In this paper, we propose three classification algorithms to classify breast tumors occurring in the duct into Benign, DCIS (ductal carcinoma in situ), and NOS (invasive ductal carcinoma). The general approach to creating a classifier is composed of two steps: feature extraction and classification. Above all, feature extraction is very important for a good classifier, because classification performance depends on the extracted features. Therefore, in the feature extraction step, we extracted morphology features describing the size of nuclei, and texture features reflecting the internal structures of the tumor from wavelet-transformed images at 10× and 40× magnification. In particular, to find the correlation between correct classification rates and wavelet depths, we applied 1-, 2-, 3-, and 4-level wavelet transforms to the images and extracted texture features from the transformed images. The morphology features used are area, perimeter, width of the X axis, width of the Y axis, and circularity. The texture features used are entropy, energy, contrast, and homogeneity. In the classification step, we created three classifiers from each set of extracted features using discriminant analysis. The first classifier was made from the morphology features; the second and third classifiers were made from the texture features of wavelet-transformed images at 10× and 40× magnification. Finally, we analyzed and compared the correct classification rates of the three classifiers. In this study, we found that the best classifier was the one built from texture features of 3-level wavelet-transformed images.
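The multi-level wavelet decomposition used here can be illustrated with a one-level 2-D Haar transform in NumPy; applying it recursively to the LL band yields the 1- to 4-level decompositions the paper compares. The averaging normalization and the energy feature below are common textbook choices, not the authors' exact settings:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = np.asarray(img, dtype=np.float64)
    # transform along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform along columns
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def subband_energy(band):
    """Mean squared coefficient, a common wavelet texture feature."""
    b = np.asarray(band, dtype=np.float64)
    return float(np.mean(b ** 2))
```

A flat image has zero energy in every detail subband, so detail-band features capture exactly the texture variation the classifiers rely on.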

Image Analysis of Computer Aided Diagnosis using Gray Level Co-occurrence Matrix in the Ultrasonography for Benign Prostate Hyperplasia (전립선비대증 초음파 영상에서 GLCM을 이용한 컴퓨터보조진단의 영상분석)

  • Cho, Jin-Young;Kim, Chang-Soo;Kang, Se-Sik;Ko, Seong-Jin;Ye, Soo-Young
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.3
    • /
    • pp.184-191
    • /
    • 2015
  • Prostate ultrasound is used to diagnose prostate cancer, BPH, and prostatitis, to guide biopsy of prostate cancer, and to determine the size of the prostate. BPH is one of the most common diseases in elderly men. The prostate is divided into four zones: the peripheral zone, central zone, transition zone, and anterior fibromuscular stroma. Histologically, BPH is characterized by progressive nodular hyperplasia of the transition zone surrounding the urethra, which causes lower urinary tract symptoms (LUTS) through urethral obstruction. Therefore, in this study, transition zone images of BPH and normal transition zone images were analyzed quantitatively using a computer algorithm. We applied GLCM texture features to 60 normal-tissue cases and 60 BPH-tissue cases, with an analysis area of 50 × 50 pixels, and compared six parameters for each partial image. Consequently, the disease recognition rates of the autocorrelation, cluster prominence, entropy, and sum average parameters were high, at 92-98%. Quantitative image analysis thus confirmed the nodular hyperplastic changes of the transition zone of the prostate. This is expected to serve as a secondary means of diagnosing BPH, and the database is expected to be useful in various prostate examinations.
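Three of the high-performing parameters named here (autocorrelation, cluster prominence, sum average) are less common than the Haralick features in the other abstracts; their usual definitions over a normalized GLCM p can be sketched as follows. The 1-based gray-level indexing is a convention assumed here, not taken from the paper:

```python
import numpy as np

def glcm_bph_features(p):
    """Autocorrelation, cluster prominence, and sum average of a
    normalized GLCM p, with gray levels indexed 1..N."""
    n = p.shape[0]
    i, j = np.indices((n, n)) + 1          # 1-based level indices
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    autocorrelation = np.sum(i * j * p)
    cluster_prominence = np.sum(((i + j - mu_i - mu_j) ** 4) * p)
    sum_average = np.sum((i + j) * p)
    return autocorrelation, cluster_prominence, sum_average
```

For a uniform 2×2 GLCM (all entries 0.25) these evaluate to 2.25, 0.5, and 3.0, which is a quick sanity check on the formulas.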

Hierarchical Land Cover Classification using IKONOS and AIRSAR Images (IKONOS와 AIRSAR 영상을 이용한 계층적 토지 피복 분류)

  • Yeom, Jun-Ho;Lee, Jeong-Ho;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.4
    • /
    • pp.435-444
    • /
    • 2011
  • Land cover maps derived from the spectral features of high-resolution optical images suffer from low spectral resolution and high heterogeneity within the same land cover class. For this reason, pixels belonging to the same land cover class can be classified into various classes, especially in vegetated areas. To overcome these problems, detailed vegetation classification is applied to integrated optical satellite and SAR (Synthetic Aperture Radar) data within the vegetation area obtained by pre-classification of the optical image. Both the pre-classification and the detailed vegetation classification were performed with the MLC (Maximum Likelihood Classification) method. The proposed hierarchical land cover classification fuses the detailed vegetation classes with the non-vegetation classes of the pre-classification. We verified that the proposed method achieves higher accuracy than not only general SAR data and GLCM (Gray Level Co-occurrence Matrix) texture integration methods but also a hierarchical GLCM integration method. In particular, the proposed method is highly accurate for both vegetation and non-vegetation classes.
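The MLC step used at both levels of this hierarchy is, in essence, a per-class Gaussian log-likelihood classifier over the pixel feature vectors. A minimal sketch follows; the class names, the sample values in the usage test, and the small covariance regularizer are illustrative assumptions:

```python
import numpy as np

def train_mlc(samples_by_class):
    """Estimate a mean vector and covariance matrix per class from
    training samples (rows = pixels, columns = bands/features)."""
    params = {}
    for cls, X in samples_by_class.items():
        X = np.asarray(X, dtype=np.float64)
        mu = X.mean(axis=0)
        # small diagonal term keeps the covariance invertible
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[cls] = (mu, cov)
    return params

def classify_mlc(x, params):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    x = np.asarray(x, dtype=np.float64)
    best, best_ll = None, -np.inf
    for cls, (mu, cov) in params.items():
        d = x - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if ll > best_ll:
            best, best_ll = cls, ll
    return best
```

In a hierarchical scheme such as this paper's, a first MLC pass separates vegetation from non-vegetation, and a second pass with the SAR-augmented feature vector refines the vegetation classes.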