• Title/Summary/Keyword: gray-level co-occurrence matrix

Forensic Image Classification using Data Mining Decision Tree (데이터 마이닝 결정나무를 이용한 포렌식 영상의 분류)

  • RHEE, Kang Hyeon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.7
    • /
    • pp.49-55
    • /
    • 2016
  • In digital forensics, images are distributed in a wide variety of types, which poses a serious classification problem. To address this problem, this paper proposes an algorithm for classifying forensic image types. The proposed algorithm extracts a 21-dim. feature vector consisting of the contrast and energy of the GLCM (Gray Level Co-occurrence Matrix) together with the entropy of each image type. The classification test of the forensic images is performed over an exhaustive combination of the image types, and TP (True Positive) and FN (False Negative) are measured for each combination in the experiments. The class evaluation of the proposed algorithm is rated 'Excellent (A)' because the AUROC (Area Under the Receiver Operating Characteristic Curve) computed from the sensitivity and 1-specificity is 0.9980, with a minimum average decision error of 0.1349. When all forensic image types are involved, the minimum average decision error is 0.0179, so the classification effectiveness of the proposed method is high.
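
The abstract above describes a concrete pipeline (GLCM contrast and energy plus an entropy term, a decision tree, and AUROC evaluation) but does not give the exact layout of the 21-dimensional feature vector. The following is a minimal sketch of that kind of pipeline, assuming standard scikit-image/scikit-learn calls and synthetic stand-in images in place of the forensic dataset.

```python
# Minimal sketch of a GLCM-feature + decision-tree pipeline in the spirit of the
# abstract above; offsets, angles, the feature layout and the synthetic images
# are assumptions, not the paper's exact 21-dim. configuration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

def glcm_features(image_u8, distances=(1, 2), angles=(0, np.pi / 4)):
    """Contrast and energy from the GLCM plus a global entropy term."""
    glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").ravel()
    energy = graycoprops(glcm, "energy").ravel()
    p = glcm.sum(axis=(2, 3)) / glcm.sum()            # pooled, normalized GLCM
    entropy = -np.sum(p * np.log2(p + 1e-12))         # image-level entropy
    return np.concatenate([contrast, energy, [entropy]])

# Synthetic stand-ins for the forensic images and their (binary) type labels.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.array([glcm_features(img) for img in images])
clf = DecisionTreeClassifier(max_depth=5).fit(X, labels)
print("AUROC:", roc_auc_score(labels, clf.predict_proba(X)[:, 1]))
```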

WAVELET-BASED FOREST AREAS CLASSIFICATION BY USING HIGH RESOLUTION IMAGERY

  • Yoon Bo-Yeol;Kim Choen
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.698-701
    • /
    • 2005
  • This paper examines whether specific information can be extracted from forest areas in high-resolution imagery based on the wavelet transform. First, study areas are selected as spots where one or more species are distributed, with reference to the forest type map. Next, each study area is cut to 256 x 256 pixels because of the image-processing burden of large data volumes. Prior to the wavelet transform, five texture parameters (contrast, dissimilarity, entropy, homogeneity, and Angular Second Moment (ASM)) are calculated using the Gray Level Co-occurrence Matrix (GLCM). The five texture images are computed with a 3x3 shifting window, a distance of 1 pixel, and an angle of 45 degrees. The Daubechies-4 wavelet basis function is selected as the wavelet function. The results are summarized in three points. First, the wavelet-transformed images derived from the contrast and dissimilarity texture parameters are effective for detecting edge elements and can probably be used for forest road detection. Second, wavelet-fused images derived from the texture parameters and the original image can be applied to forest area classification because of the clustering within homogeneous forest type structures. Third, for grading the evaluation of forest fire damaged areas, highly accurate results can be obtained if the established classification methods, the GLCM texture extraction concept, and the wavelet transform technique are effectively combined and applied to forest areas (and other areas as well).
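
The texture-image step above (per-pixel GLCM statistics in a 3x3 shifting window at distance 1 and 45 degrees, followed by a Daubechies-4 decomposition) can be sketched as below; the 32-level quantization, the small synthetic band, and the use of scikit-image/PyWavelets are assumptions.

```python
# Minimal sketch of the texture-image + wavelet step described above; the
# 32-level quantization and the small synthetic band are assumptions (the
# paper uses 256 x 256 study-area subsets).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def texture_image(band_u8, prop="contrast", win=3, levels=32):
    """Per-pixel GLCM property at distance 1, angle 45 deg, in a win x win window."""
    q = (band_u8.astype(np.uint32) * levels // 256).astype(np.uint8)   # quantize
    pad = win // 2
    padded = np.pad(q, pad, mode="edge")
    out = np.zeros(band_u8.shape, dtype=float)
    for r in range(band_u8.shape[0]):
        for c in range(band_u8.shape[1]):
            glcm = graycomatrix(padded[r:r + win, c:c + win], [1], [np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
            out[r, c] = graycoprops(glcm, prop)[0, 0]
    return out

band = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in band
contrast_img = texture_image(band, "contrast")
# dissimilarity, homogeneity and ASM follow the same pattern; entropy can be
# computed directly from the normalized GLCM. Then the Daubechies-4 transform:
cA, (cH, cV, cD) = pywt.dwt2(contrast_img, "db4")
```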

Image Analysis of Computer Aided Diagnosis using Gray Level Co-occurrence Matrix in the Ultrasonography for BPH (전립선비대증 초음파 영상에서 GLCM을 이용한 컴퓨터보조진단의 영상분석)

  • Cho, Jin-Young;Kim, Chang-Soo;Kang, Se-Sik;Ko, Seong-Jin;Ye, Soo-Young
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2015.05a
    • /
    • pp.191-192
    • /
    • 2015
  • Benign Prostatic Hyperplasia (BPH) is characterized by nodular proliferation in the transition zone of the prostate tissue and hyperplasia around the urethra. For diagnosis using transrectal ultrasonography (TRUS), the image differences between normal tissue and hypertrophied tissue were compared and quantified. In the image analysis, lesion recognition was possible with four GLCM statistical parameters, Autocorrelation, Cluster Prominence, Entropy, and Sum average, with a recognition efficiency of 92-98%. We propose this computer-aided image processing analysis of ultrasound images of benign prostatic hyperplasia and expect it to serve as reference material for diagnosis.
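
The four GLCM statistics named in the abstract (Autocorrelation, Cluster Prominence, Entropy, Sum average) are not all available as ready-made properties in common toolkits; the sketch below computes them directly from a normalized co-occurrence matrix using the usual Haralick-style definitions, with a synthetic region of interest standing in for the TRUS image.

```python
# Minimal sketch of the four GLCM statistics mentioned above, computed from a
# normalized co-occurrence matrix P; the Haralick-style formulas, 64-level
# quantization and the synthetic ROI are assumptions.
import numpy as np
from skimage.feature import graycomatrix

def bph_glcm_stats(roi_u8, levels=64):
    q = (roi_u8.astype(np.uint32) * levels // 256).astype(np.uint8)
    P = graycomatrix(q, [1], [0], levels=levels, symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    autocorrelation = (i * j * P).sum()
    cluster_prominence = (((i + j - mu_i - mu_j) ** 4) * P).sum()
    entropy = -(P * np.log2(P + 1e-12)).sum()
    k = np.arange(2 * levels - 1)                       # possible values of i + j
    p_sum = np.array([P[(i + j) == n].sum() for n in k])
    sum_average = (k * p_sum).sum()
    return autocorrelation, cluster_prominence, entropy, sum_average

roi = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in ROI
print(bph_glcm_stats(roi))
```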

Implementation of the System Converting Image into Music Signals based on Intentional Synesthesia (의도적인 공감각 기반 영상-음악 변환 시스템 구현)

  • Bae, Myung-Jin;Kim, Sung-Ill
    • Journal of IKEEE
    • /
    • v.24 no.1
    • /
    • pp.254-259
    • /
    • 2020
  • This paper presents the implementation of a system that converts images into music based on intentional synesthesia. The color, texture, and shape of the input image were converted into the melody, harmony, and rhythm of the music, respectively. Melody notes were selected probabilistically according to the color histogram to form the melody. The texture of the image was expressed as harmony and the minor key using seven characteristics of the GLCM, a statistical texture feature extraction method. Finally, the shape of the image was extracted from the edge image; the line components were detected using the Hough transform, and the rhythm was selected according to the distribution of line angles to produce the music.
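
The shape-to-rhythm step above (edge image, Hough transform, distribution of line angles) can be sketched as follows; the Canny parameters, the angle binning, and the synthetic input are assumptions, and the mapping from the angle distribution to a rhythm pattern is left abstract.

```python
# Minimal sketch of the shape-to-rhythm step described above: edge detection,
# Hough line detection and a histogram of line angles; thresholds, binning and
# the synthetic input are assumptions.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

def line_angle_histogram(gray_float, bins=8):
    edges = canny(gray_float, sigma=2.0)
    h, theta, d = hough_line(edges)
    _, angles, _ = hough_line_peaks(h, theta, d)
    hist, _ = np.histogram(np.degrees(angles), bins=bins, range=(-90, 90))
    return hist / max(hist.sum(), 1)          # normalized distribution of line angles

gray = np.random.default_rng(3).random((128, 128))   # stand-in for the input image
print(line_angle_histogram(gray))
# A rhythm pattern would then be selected according to this angle distribution.
```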

Classification of Livestock Diseases Using GLCM and Artificial Neural Networks

  • Choi, Dong-Oun;Huan, Meng;Kang, Yun-Jeong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.4
    • /
    • pp.173-180
    • /
    • 2022
  • By naked-eye observation, the health of livestock can be monitored through their range of activity, temperature, pulse, coughing, nasal discharge, eye discharge, ears, and feces. To confirm the health of livestock, this paper uses calf face image data to classify health status by image shape, color, and texture. A set of pre-processed images from which the health status of calves can be judged was used in the study, consisting of 177 images of normal calves and 130 images of abnormal calves. Six GLCM texture attributes were extracted from this dataset and combined with a Convolutional Neural Network that detects the calf in each image and learns from the composite image. The resulting GLCM-CNN achieved a classification rate of 91.3%, and subsequent research will make further use of the GLCM texture attributes. It is hoped that this study can help assess health conditions of livestock that cannot be observed with the naked eye.
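
As a hedged sketch of the GLCM half of this pipeline, the block below extracts six texture attributes per image and fits a small classifier; the choice of the six standard scikit-image properties, the classifier, and the synthetic images are assumptions (the paper additionally combines the features with a convolutional network, which is omitted here).

```python
# Minimal sketch of the GLCM side of the pipeline above: six texture attributes
# per calf face image plus a simple classifier; the six standard skimage
# properties, the classifier choice and the synthetic data are assumptions, and
# the CNN branch is omitted.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM")

def six_glcm_attrs(img_u8):
    glcm = graycomatrix(img_u8, [1], [0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

# Synthetic stand-ins for the 177 normal / 130 abnormal calf face images.
rng = np.random.default_rng(4)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(60)]
health = rng.integers(0, 2, size=60)

X = np.array([six_glcm_attrs(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, health, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```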

Determination of Absorbed Dose for Gafchromic EBT3 Film Using Texture Analysis of Scanning Electron Microscopy Images: A Feasibility Study

  • So-Yeon Park
    • Progress in Medical Physics
    • /
    • v.33 no.4
    • /
    • pp.158-163
    • /
    • 2022
  • Purpose: We subjected scanning electron microscope (SEM) images of the active layer of EBT3 film to texture analysis to determine the dose-response curve. Methods: Uncoated Gafchromic EBT3 films were prepared for direct surface SEM scanning. Absorbed doses of 0-20 Gy were delivered to the film's surface using a 6 MV TrueBeam STx photon beam. The film's surface was scanned using an SEM at 100× and 3,000× magnification. Four textural features (Homogeneity, Correlation, Contrast, and Energy) were calculated based on the gray level co-occurrence matrix (GLCM) using the SEM images corresponding to each dose. We used R-square to evaluate the linear relationship between delivered doses and textural features of the film's surface. Results: Correlation yielded higher linearity and dose-response curve sensitivity than Homogeneity, Contrast, or Energy. The R-square value was 0.964 for Correlation using 3,000× magnified SEM images with 9-pixel offsets. Dose verification showed differences between the prescribed and measured doses of 0.09, 1.96, -2.29, 0.17, and 0.08 Gy for 0, 5, 10, 15, and 20 Gy, respectively. Conclusions: Texture analysis can be used to accurately convert microscopic structural changes in the EBT3 film's surface into absorbed doses. Our proposed method is feasible and may improve the accuracy of film dosimetry used to protect patients from excess radiation exposure.
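
A hedged sketch of the dose-response fit follows: GLCM Correlation at a 9-pixel offset against the delivered doses, with R-square from a linear regression; synthetic images stand in for the 3,000× SEM scans.

```python
# Minimal sketch of the dose-response fit described above: GLCM Correlation at
# a 9-pixel offset versus delivered dose, with R-square from a linear fit; the
# synthetic images stand in for the 3,000x SEM scans.
import numpy as np
from scipy.stats import linregress
from skimage.feature import graycomatrix, graycoprops

def glcm_correlation(img_u8, offset=9):
    glcm = graycomatrix(img_u8, [offset], [0], levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "correlation")[0, 0]

doses = [0, 5, 10, 15, 20]                                                      # Gy
rng = np.random.default_rng(5)
sem_images = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in doses]  # stand-ins

features = [glcm_correlation(img) for img in sem_images]
fit = linregress(doses, features)
print("R-square:", fit.rvalue ** 2)
# Inverting the fit, (feature - intercept) / slope, converts a measured texture
# value on a new film back into an absorbed dose.
```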

Accuracy Assessment of Forest Degradation Detection in Semantic Segmentation based Deep Learning Models with Time-series Satellite Imagery

  • Woo-Dam Sim;Jung-Soo Lee
    • Journal of Forest and Environmental Science
    • /
    • v.40 no.1
    • /
    • pp.15-23
    • /
    • 2024
  • This research aimed to assess the possibility of detecting forest degradation using time-series satellite imagery and three different deep learning-based change detection techniques. The dataset used for the deep learning models was composed of two sets: one based only on surface reflectance (SR) spectral information from satellite imagery, and the other combining the SR information with texture information (GLCM; Gray-Level Co-occurrence Matrix) and terrain information. The deep learning models employed for land cover change detection were image differencing with the Unet semantic segmentation model, a multi-encoder Unet model, and a multi-encoder Unet++ model. The study found no significant difference in accuracy between the deep learning models for forest degradation detection; training and validation accuracies were approximately 89% and 92%, respectively. Among the three deep learning models, the multi-encoder Unet model showed the most efficient analysis time with comparable accuracy. Moreover, models that incorporated both texture and gradient information in addition to spectral information had a higher classification accuracy than models that used only spectral information. Overall, the accuracy of forest degradation extraction was outstanding, reaching 98%.
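
A brief sketch of how the two dataset variants described above could be assembled (spectral-only versus spectral plus GLCM texture and terrain) is given below; array names, shapes, and the stand-in data are assumptions, and the Unet, multi-encoder Unet, and Unet++ networks themselves are omitted.

```python
# Minimal sketch of assembling the two input variants described above; shapes,
# names and the synthetic stand-in arrays are assumptions, and the segmentation
# networks themselves are omitted.
import numpy as np

rng = np.random.default_rng(6)
sr_bands = rng.random((256, 256, 4))       # stand-in surface-reflectance stack (H, W, B)
glcm_texture = rng.random((256, 256))      # stand-in per-pixel GLCM texture layer
terrain = rng.random((256, 256))           # stand-in terrain layer (e.g., slope)

spectral_only = sr_bands                                                  # first variant
spectral_texture_terrain = np.dstack([sr_bands, glcm_texture, terrain])  # second variant
print(spectral_only.shape, spectral_texture_terrain.shape)
# Each variant is then fed to the semantic segmentation models for change detection.
```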

Copyright Protection for Fire Video Images using an Effective Watermarking Method (효과적인 워터마킹 기법을 사용한 화재 비디오 영상의 저작권 보호)

  • Nguyen, Truc;Kim, Jong-Myon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.579-588
    • /
    • 2013
  • This paper proposes an effective watermarking approach for copyright protection of fire video images. The proposed approach efficiently exploits the inherent color and texture characteristics of fire data by using a gray level co-occurrence matrix (GLCM) and fuzzy c-means (FCM) clustering. The GLCM is used to generate a texture feature dataset by computing the energy and homogeneity properties of each candidate fire image block. FCM is used to segment the color of the fire image and to select fire texture blocks for embedding watermarks. Each selected block is then decomposed into a one-level wavelet structure with four subbands [LL, LH, HL, HH] using a discrete wavelet transform (DWT), and the LH subband coefficients are selected for embedding the watermark with a gain factor, chosen so that the visual quality of the image is not affected. Experimental results show that the proposed watermarking approach achieves a high peak signal-to-noise ratio (PSNR) of about 48 dB and low M-singular value decomposition (M-SVD) values of 1.6 to 2.0. In addition, the proposed approach outperforms a conventional image watermarking approach in terms of normalized correlation (NC) values against several image processing attacks, including noise addition, filtering, cropping, and JPEG compression.
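
A hedged sketch of the block-level embedding step (one-level DWT, watermark added to the LH subband with a gain factor) follows; the GLCM/FCM block selection is omitted, and the gain value, the synthetic block and watermark, and the use of PyWavelets' horizontal-detail band as "LH" are assumptions.

```python
# Minimal sketch of the embedding step described above: one-level DWT of a
# selected fire-texture block and additive watermark insertion into the LH
# subband with a gain factor; GLCM/FCM block selection is omitted, and the gain
# value, synthetic data and the use of pywt's horizontal-detail band as "LH"
# are assumptions.
import numpy as np
import pywt

def embed_block(block, watermark_bits, gain=0.05):
    cA, (cH, cV, cD) = pywt.dwt2(block.astype(float), "haar")   # [LL, LH, HL, HH]
    cH_marked = cH + gain * watermark_bits.reshape(cH.shape)    # embed in LH coefficients
    return pywt.idwt2((cA, (cH_marked, cV, cD)), "haar")

rng = np.random.default_rng(7)
block = rng.integers(0, 256, (64, 64)).astype(float)            # stand-in fire-texture block
watermark = rng.integers(0, 2, size=(32, 32)).astype(float)     # stand-in watermark bits
marked = embed_block(block, watermark)
print("max pixel change:", np.abs(marked - block).max())
# Extraction would compare the marked and original LH coefficients block by block.
```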

Counterfeit Money Detection Algorithm using Non-Local Mean Value and Support Vector Machine Classifier (비지역적 특징값과 서포트 벡터 머신 분류기를 이용한 위변조 지폐 판별 알고리즘)

  • Ji, Sang-Keun;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.1
    • /
    • pp.55-64
    • /
    • 2013
  • Due to the popularization of high-performance digital capturing equipment and the emergence of powerful image-editing software, it is easy for anyone to make high-quality counterfeit money. However, the probability that the general public will detect counterfeit money is extremely low. In this paper, we propose a counterfeit money detection algorithm using a general-purpose scanner. The algorithm identifies counterfeit money based on features that differ according to the printing process. After the non-local means value is used to analyze the noise of each banknote, statistical features are extracted from this noise by calculating a gray level co-occurrence matrix. These features are then used to train and test a support vector machine classifier that identifies genuine or counterfeit money. In the experiments, a total of 324 images of genuine and counterfeit money are used. We also compare against noise features from previous research based on the Wiener filter and the discrete wavelet transform. The accuracy of the algorithm for identifying counterfeit money was over 94%, and the accuracy for identifying the printing source was over 93%. The presented algorithm performs better than previous approaches.
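
A hedged sketch of this pipeline follows: non-local means denoising, a noise residual, GLCM features of the residual, and an SVM classifier; all parameter values and the synthetic banknote images are assumptions.

```python
# Minimal sketch of the pipeline described above: non-local means denoising, a
# noise residual, GLCM features of the residual and an SVM classifier; all
# parameter values and the synthetic banknote images are assumptions.
import numpy as np
from skimage.restoration import denoise_nl_means
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def noise_features(gray_u8):
    g = gray_u8.astype(float) / 255.0
    residual = g - denoise_nl_means(g, h=0.05, patch_size=5, patch_distance=6)
    res_u8 = np.clip((residual - residual.min()) / (np.ptp(residual) + 1e-12) * 255,
                     0, 255).astype(np.uint8)
    glcm = graycomatrix(res_u8, [1], [0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel()
                           for p in ("contrast", "energy", "homogeneity", "correlation")])

# Synthetic stand-ins for the 324 genuine/counterfeit banknote scans.
rng = np.random.default_rng(8)
notes = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
genuine = rng.integers(0, 2, size=40)

X = np.array([noise_features(img) for img in notes])
X_tr, X_te, y_tr, y_te = train_test_split(X, genuine, test_size=0.3, random_state=0)
print("accuracy:", SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te))
```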

A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

  • Xu, Yi;Chen, Quansheng;Liu, Yan;Sun, Xin;Huang, Qiping;Ouyang, Qin;Zhao, Jiewen
    • Food Science of Animal Resources
    • /
    • v.38 no.2
    • /
    • pp.362-375
    • /
    • 2018
  • This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and the freshness of the samples was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired with the HMI system and processed by the following steps for further analysis. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with the linear discriminant analysis (LDA) and support vector machine (SVM) models, the back-propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms can evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control.
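
The analysis chain above (PCA on the hyperspectral cube, GLCM texture features, FDA reduction, BP-ANN classification) can be sketched as below; scikit-learn's LinearDiscriminantAnalysis stands in for FDA and MLPClassifier for the BP-ANN, and the cube shapes, parameters, and synthetic data are assumptions.

```python
# Minimal sketch of the analysis chain described above: PCA on the hyperspectral
# cube, GLCM features from the first principal-component image, FDA (here
# scikit-learn's LDA) for dimensionality reduction and a BP-ANN (MLP) classifier;
# cube shapes, parameter choices and the synthetic data are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

def pc1_glcm_features(cube):                          # cube: (H, W, bands)
    H, W, B = cube.shape
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, B)).reshape(H, W)
    pc1_u8 = np.clip((pc1 - pc1.min()) / (np.ptp(pc1) + 1e-12) * 255, 0, 255).astype(np.uint8)
    glcm = graycomatrix(pc1_u8, [1, 2], [0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel()
                           for p in ("contrast", "correlation", "energy", "homogeneity")])

# Synthetic stand-ins for the hyperspectral microscopic cubes and 3-level labels.
rng = np.random.default_rng(9)
cubes = [rng.random((32, 32, 20)) for _ in range(15)]
freshness = np.arange(15) % 3                         # three freshness levels

X = np.array([pc1_glcm_features(c) for c in cubes])
X_fda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, freshness)
bp_ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000).fit(X_fda, freshness)
print("training accuracy:", bp_ann.score(X_fda, freshness))
```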