• Title/Summary/Keyword: Gray Level Co-occurrence Matrix (GLCM)

A Calf Disease Decision Support Model (송아지 질병 결정 지원 모델)

  • Choi, Dong-Oun;Kang, Yun-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1462-1468 / 2022
  • Among the data used to diagnose calf disease, feces play an important role. In images of calf feces, health status can be inferred from shape, color, and texture. Fecal images capable of identifying health status were collected from 207 normal calves and 158 calves with diarrhea, pre-processed according to fecal condition, and used. In this paper, images of fecal variables are detected among the collected calf data, and the images are trained with GLCM-CNN, which combines the properties of a convolutional neural network (CNN) with GLCM texture features, on a dataset containing disease symptoms. There was a significant difference between the CNN's 89.9% accuracy and the GLCM-CNN's 91.7% accuracy, with GLCM-CNN being 1.8 percentage points more accurate.
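
Since GLCM texture features recur throughout these results, a minimal sketch of extracting such a feature vector with scikit-image is given below; in a GLCM-CNN style hybrid this vector could be concatenated with CNN features before the classifier, though the exact fusion used by the authors is not described in the abstract, and the distances, angles, and property list here are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_feature_vector(gray_u8, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Compute a small GLCM texture feature vector from an 8-bit grayscale image.

    The distances/angles and the six Haralick-style properties below are
    illustrative defaults, not the settings used in the cited paper.
    """
    glcm = graycomatrix(gray_u8, distances=list(distances), angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM")
    # graycoprops returns one value per (distance, angle) pair; average over them
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Example on a random test image; in a GLCM-CNN these features could be
# concatenated with the CNN's flattened feature maps before the classifier head.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(glcm_feature_vector(image))
```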

Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.93-101 / 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the Gray Level Co-occurrence Matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV image captures spanned March-October 2021, while field surveys on three dates provided ground truth data. We focused on data from August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using visual bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showcased the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O trained on October data achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently demonstrated the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models. Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of utilizing farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key for optimizing performance in real-world applications.
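
A hedged sketch of the SVM side of such a pipeline: an RBF-kernel SVC tuned over C and gamma with cross-validated grid search, as the abstract describes. The feature matrix (RGB bands plus eight GLCM features) is simulated with random data, and the grid values are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-sample RGB bands plus eight GLCM features (11 columns is an assumption);
# y: integer crop labels. Random placeholders stand in for real UAV-derived data.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 11))
y = rng.integers(0, 4, size=600)

pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC(kernel="rbf"))])
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1, 1]}

search = GridSearchCV(pipe, param_grid, cv=StratifiedKFold(n_splits=5), scoring="accuracy")
search.fit(X, y)
print("best C/gamma:", search.best_params_, "| CV accuracy:", round(search.best_score_, 3))
```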

Cone-beam computed tomography texture analysis can help differentiate odontogenic and non-odontogenic maxillary sinusitis

  • Andre Luiz Ferreira Costa;Karolina Aparecida Castilho Fardim;Isabela Teixeira Ribeiro;Maria Aparecida Neves Jardini;Paulo Henrique Braz-Silva;Kaan Orhan;Sergio Lucio Pereira de Castro Lopes
    • Imaging Science in Dentistry / v.53 no.1 / pp.43-51 / 2023
  • Purpose: This study aimed to assess texture analysis (TA) of cone-beam computed tomography (CBCT) images as a quantitative tool for the differential diagnosis of odontogenic and non-odontogenic maxillary sinusitis (OS and NOS, respectively). Materials and Methods: CBCT images of 40 patients diagnosed with OS (N=20) and NOS (N=20) were evaluated. Gray level co-occurrence matrix (GLCM) parameters and gray level run length matrix (GLRLM) texture parameters were extracted using manually placed regions of interest on lesion images. Seven texture parameters were calculated using the GLCM and 4 parameters using the GLRLM. The Mann-Whitney test was used for comparisons between the groups, and the Levene test was performed to confirm the homogeneity of variance (α=5%). Results: The results showed statistically significant differences (P<0.05) between the OS and NOS patients regarding 3 TA parameters. NOS patients presented higher values for contrast, while OS patients presented higher values for correlation and inverse difference moment. Greater textural homogeneity was observed in the OS patients than in the NOS patients, with statistically significant differences in standard deviations between the groups for correlation, sum of squares, sum of entropy, and entropy. Conclusion: TA enabled quantitative differentiation between OS and NOS on CBCT images by using the parameters of contrast, correlation, and inverse difference moment.
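
A minimal sketch of the statistical comparison described here, using SciPy's Mann-Whitney U and Levene tests on one texture parameter; the per-group values are simulated placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu, levene

# Hypothetical per-patient GLCM contrast values for the two groups (20 each);
# real values would come from ROIs drawn on the CBCT slices.
rng = np.random.default_rng(1)
os_contrast  = rng.normal(40, 8, 20)   # odontogenic sinusitis
nos_contrast = rng.normal(55, 9, 20)   # non-odontogenic sinusitis

u_stat, p_value = mannwhitneyu(os_contrast, nos_contrast, alternative="two-sided")
w_stat, p_levene = levene(os_contrast, nos_contrast)   # homogeneity of variance

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
print(f"Levene W = {w_stat:.2f}, p = {p_levene:.4f}")
```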

A Study on the Detection and Statistical Feature Analysis of Red Tide Area in South Coast Using Remote Sensing (원격탐사를 이용한 남해안의 적조영역 검출과 통계적 특징 분석에 관한 연구)

  • Sur, Hyung-Soo;Lee, Chil-Woo
    • The KIPS Transactions: Part B / v.14B no.2 / pp.65-70 / 2007
  • Red tide has become a serious environmental issue worldwide since the 1990s. Advanced countries have been conducting studies that detect red tide areas at an early stage using ocean satellites. However, most of Korea's coastline is highly indented, and many turbid streams flow into the coastal waters, so small red tide areas are hard to detect with low-resolution ocean satellites. In addition, most existing red tide detection methods rely on a single feature of the ocean satellite image, sea color; because sea color alone provides only a few image features, it can lead to errors in detecting red tide areas. Therefore, in this paper, texture information was obtained from six GLCM (Gray Level Co-occurrence Matrix) texture features of high-resolution land satellite imagery of the South Coast. Unnecessary components were removed by reducing the dimensionality of this information through principal component analysis, and the result was converted into two accumulated principal component images. In the experiments, the two accumulated principal component images accounted for 94.6% of the eigenvalues. Detection using all principal component images produced more accurate results for red tide areas than using the sea color image alone. Finally, statistical feature analysis of the texture was used to quantitatively distinguish red tide areas from turbid streams and from sea without red tide.
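
A minimal sketch, under the assumption that the six GLCM texture measures have already been computed as per-pixel bands, of reducing them to two principal component images with scikit-learn; the cumulative explained variance printed at the end corresponds to the 94.6% figure reported for the real data.

```python
import numpy as np
from sklearn.decomposition import PCA

# texture_stack: six GLCM texture bands computed over the scene (H, W, 6).
# Random data stands in for the real texture images here.
rng = np.random.default_rng(7)
H, W = 120, 160
texture_stack = rng.normal(size=(H, W, 6))

pixels = texture_stack.reshape(-1, 6)                 # one row per pixel, one column per band
pca = PCA(n_components=2).fit(pixels)
pc_images = pca.transform(pixels).reshape(H, W, 2)    # two principal component images

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3),
      "| cumulative:", round(pca.explained_variance_ratio_.sum(), 3))
```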

Liver Tumor Detection Using Texture PCA of CT Images (CT영상의 텍스처 주성분 분석을 이용한 간종양 검출)

  • Sur, Hyung-Soo;Chong, Min-Young;Lee, Chil-Woo
    • The KIPS Transactions: Part B / v.13B no.6 s.109 / pp.601-606 / 2006
  • With the great development of medical technology, the amount of image data used in medical institutions is increasing rapidly. Automated methods based on image processing are therefore needed, rather than relying only on visual inspection by doctors, to analyze the many medical images. In this paper, we propose a method that obtains texture information from the liver region of abdominal CT images using the GLCM and automatically detects liver tumors by applying PCA to these data. Most existing liver tumor detection relies on a single feature, intensity, whereas we converted eight GLCM texture features into four accumulated principal component images. In the experiments, the four accumulated principal component images accounted for 89.9% of the variance. This is comparable to the roughly 92% obtained when detecting liver tumors using only intensity, which means that liver tumors can still be detected even when the dimensionality of the image data is reduced from eight dimensions to four, that is, by half.
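
A related sketch for the liver CT case: running PCA on per-region GLCM feature vectors and choosing the number of components from the cumulative explained variance, which is how an 8-feature representation could be cut to 4 dimensions as described. The feature matrix and the 89% threshold are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix of 8 GLCM texture features per liver ROI (rows = ROIs).
rng = np.random.default_rng(3)
features = rng.normal(size=(200, 8))

pca = PCA().fit(features)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cumulative, 0.89) + 1)   # smallest k reaching ~89% variance

print("cumulative explained variance:", np.round(cumulative, 3))
print("components kept:", n_keep)
reduced = PCA(n_components=n_keep).fit_transform(features)   # e.g. 8 -> 4 dimensions
```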

Ultrasound Image Classification of Diffuse Thyroid Disease using GLCM and Artificial Neural Network (GLCM과 인공신경망을 이용한 미만성 갑상샘 질환 초음파 영상 분류)

  • Eom, Sang-Hee;Nam, Jae-Hyun;Ye, Soo-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.7 / pp.956-962 / 2022
  • Diffuse thyroid disease has ambiguous diagnostic criteria, and many errors occur because diagnosis depends on the subjective judgment of skilled practitioners. If image processing technology is applied to ultrasound images, quantitative data can be extracted and applied to a computer-aided diagnosis system, enabling a more accurate and objective diagnosis. In this paper, 19 parameters were extracted by applying the gray level co-occurrence matrix (GLCM) algorithm to ultrasound images classified as normal, mild, and moderate in patients with thyroid disease. Using these parameters, an artificial neural network (ANN) was applied to analyze the diffuse thyroid ultrasound images. The final classification rate using the ANN was 96.9%. These results are expected to reduce the errors caused by visual reading in the diagnosis of thyroid diseases and to serve as a secondary means of diagnosing diffuse thyroid disease.
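
A hedged sketch of the classification stage: a small scikit-learn multilayer perceptron trained on 19 texture parameters with three classes (normal, mild, moderate). The data are simulated and the hidden-layer size is an assumption, not the network used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 19 GLCM parameters per ultrasound image, labels 0/1/2
# for normal / mild / moderate.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 19))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)
print("test accuracy:", round(ann.score(X_te, y_te), 3))
```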

Image Analysis of Computer Aided Diagnosis using Gray Level Co-occurrence Matrix in the Ultrasonography for BPH (전립선비대증 초음파 영상에서 GLCM을 이용한 컴퓨터보조진단의 영상분석)

  • Cho, Jin-Young;Kim, Chang-Soo;Kang, Se-Sik;Ko, Seong-Jin;Ye, Soo-Young
    • Proceedings of the Korea Contents Association Conference / 2015.05a / pp.191-192 / 2015
  • Benign prostatic hyperplasia (BPH) is characterized by nodular proliferation of the transition zone of the prostate and hyperplasia around the urethra. For diagnosis using transrectal ultrasonography (TRUS), the image differences between normal tissue and hypertrophied tissue were compared and quantified. In the image analysis, lesions could be recognized with four GLCM statistical parameters, Autocorrelation, Cluster Prominence, Entropy, and Sum Average, with a recognition efficiency of 92-98%. Computer-aided image processing analysis of ultrasound images of prostatic hyperplasia is proposed, and it is expected to serve as reference material in diagnosis.
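
The four GLCM statistics named above are not among scikit-image's built-in graycoprops properties, so a sketch computing them directly from a normalized GLCM is shown below, using the standard Haralick-style definitions with 0-based gray-level indices; the distance and angle choices are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix

def extended_glcm_stats(gray_u8):
    """Autocorrelation, cluster prominence, entropy, and sum average from a
    normalized GLCM; definitions follow the usual Haralick/Soh-Tsatsoulis forms."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    i, j = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    return {
        "autocorrelation": (i * j * p).sum(),
        "cluster_prominence": (((i + j - mu_i - mu_j) ** 4) * p).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
        "sum_average": ((i + j) * p).sum(),
    }

rng = np.random.default_rng(9)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a TRUS ROI
print(extended_glcm_stats(roi))
```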

Implementation of the System Converting Image into Music Signals based on Intentional Synesthesia (의도적인 공감각 기반 영상-음악 변환 시스템 구현)

  • Bae, Myung-Jin;Kim, Sung-Ill
    • Journal of IKEEE / v.24 no.1 / pp.254-259 / 2020
  • This paper describes the implementation of a system that converts images into music based on intentional synesthesia. The color, texture, and shape of the input image were converted into the melody, harmony, and rhythm of the music, respectively. Depending on the color histogram, notes were selected probabilistically to form the melody. The texture of the image was mapped to harmony and to the key of the music using seven GLCM features, a statistical texture feature extraction method. Finally, the shape of the image was extracted from the edge image, and line components were detected with the Hough transform; the rhythm was then selected according to the distribution of line angles.
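
A minimal sketch of the shape-to-rhythm step as described: edge detection followed by a Hough line transform, with the detected line angles binned into a histogram. The mapping from the dominant angle bin to a rhythm pattern is a placeholder assumption.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

# Hypothetical grayscale image in [0, 1]; a real system would use the photo
# being "translated" into music.
rng = np.random.default_rng(11)
image = rng.random((128, 128))

edges = canny(image, sigma=2.0)               # shape information from the edge image
h, theta, d = hough_line(edges)               # accumulate straight-line evidence
_, angles, _ = hough_line_peaks(h, theta, d)  # dominant line orientations (radians)

# Simple stand-in for the rhythm choice: bin the detected angles and pick a
# rhythm pattern index from the most populated bin.
hist, _ = np.histogram(angles, bins=8, range=(-np.pi / 2, np.pi / 2))
rhythm_index = int(np.argmax(hist)) if angles.size else 0
print("detected line angles (deg):", np.degrees(angles), "-> rhythm pattern", rhythm_index)
```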

Accuracy Assessment of Forest Degradation Detection in Semantic Segmentation based Deep Learning Models with Time-series Satellite Imagery

  • Woo-Dam Sim;Jung-Soo Lee
    • Journal of Forest and Environmental Science / v.40 no.1 / pp.15-23 / 2024
  • This research aimed to assess the possibility of detecting forest degradation using time-series satellite imagery and three different deep learning-based change detection techniques. The dataset used for the deep learning models was composed of two sets: one based on surface reflectance (SR) spectral information from satellite imagery, and one that combined the SR data with texture information (GLCM; Gray-Level Co-occurrence Matrix) and terrain information. The deep learning models employed for land cover change detection included image differencing using the U-Net semantic segmentation model, a multi-encoder U-Net model, and a multi-encoder U-Net++ model. The study found that there was no significant difference in accuracy between the deep learning models for forest degradation detection. Training and validation accuracies were approximately 89% and 92%, respectively. Among the three deep learning models, the multi-encoder U-Net model showed the most efficient analysis time with comparable accuracy. Moreover, models that incorporated texture and terrain information in addition to spectral information had a higher classification accuracy than models that used only spectral information. Overall, the accuracy of forest degradation extraction was outstanding, achieving 98%.
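
A hedged sketch of assembling the second dataset type: surface reflectance bands stacked with a GLCM texture band and a terrain band into one multi-channel input tensor for a segmentation model. The block-wise homogeneity computation and the channel choices are illustrative, not the study's exact preprocessing.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def block_homogeneity(gray_u8, block=16):
    """Coarse GLCM homogeneity band computed on non-overlapping blocks.
    Real pipelines often use sliding windows; blocks keep the sketch short."""
    H, W = gray_u8.shape
    out = np.zeros((H // block, W // block), dtype=np.float32)
    for bi in range(out.shape[0]):
        for bj in range(out.shape[1]):
            patch = gray_u8[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            g = graycomatrix(patch, [1], [0], levels=256, symmetric=True, normed=True)
            out[bi, bj] = graycoprops(g, "homogeneity")[0, 0]
    return np.kron(out, np.ones((block, block), dtype=np.float32))  # back to image size

# Hypothetical inputs: four SR bands, one texture band, one terrain band.
rng = np.random.default_rng(21)
sr_bands = rng.random((4, 256, 256)).astype(np.float32)      # e.g. blue/green/red/NIR
gray = (sr_bands[2] * 255).astype(np.uint8)
texture = block_homogeneity(gray)
terrain = rng.random((256, 256)).astype(np.float32)           # e.g. slope from a DEM

model_input = np.concatenate([sr_bands, texture[None], terrain[None]], axis=0)
print("segmentation model input (channels, H, W):", model_input.shape)   # (6, 256, 256)
```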

Feature Extraction of Forest Fire by Using High Resolution Image (고해상도 위성영상을 이용한 산화피해림의 특징추출)

  • Yoon Bo-Yeol;Kim Choen
    • Proceedings of the KSRS Conference / 2006.03a / pp.275-278 / 2006
  • This study examined fire-damaged and undamaged forest stands by tree species using panchromatic high-resolution satellite imagery. The proposed method fuses textural images generated through the Gray Level Co-occurrence Matrix (GLCM) with wavelet decomposition images, in order to obtain both the information that can be extracted from the texture images and the information obtained through wavelet decomposition. As a result, for stands of the same species or for forests with a similar degree of fire damage, the distribution of image brightness values fell within a consistent range, so that species classification and grading of fire damage were possible, although edge effects appeared in some images.
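
A minimal sketch of producing the two representations the study fuses, assuming PyWavelets and scikit-image are available: a level-1 wavelet decomposition of a panchromatic scene and whole-scene GLCM texture measures stacked into a simple feature cube. Per-window texture images and the actual fusion rule used by the authors are not reproduced here.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from skimage.transform import resize

# Hypothetical panchromatic scene standing in for the high-resolution satellite image.
rng = np.random.default_rng(17)
pan = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# Level-1 wavelet decomposition: approximation plus horizontal/vertical/diagonal detail.
cA, (cH, cV, cD) = pywt.dwt2(pan.astype(np.float32), "haar")
wavelet_bands = [resize(b, pan.shape, preserve_range=True) for b in (cA, cH, cV, cD)]

# Whole-scene GLCM texture values; per-window texture images would replace these
# scalars in a full implementation.
glcm = graycomatrix(pan, [1], [0], levels=256, symmetric=True, normed=True)
texture = {p: graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")}

# Simple stand-in for fusion: stack the wavelet bands with the original image as a
# feature cube that a classifier could combine with the texture measures.
feature_cube = np.stack([pan.astype(np.float32)] + wavelet_bands, axis=0)
print("feature cube shape:", feature_cube.shape,
      "| scene texture:", {k: round(float(v), 2) for k, v in texture.items()})
```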
