• Title/Summary/Keyword: Gray-Level Co-Occurrence Matrix (GLCM)


Camera Model Identification Based on Deep Learning (딥러닝 기반 카메라 모델 판별)

  • Lee, Soo Hyeon;Kim, Dong Hyun;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering / v.8 no.10 / pp.411-420 / 2019
  • Camera model identification has been a subject of steady study in the field of digital forensics. As cameras become smaller, crimes such as illegal filming account for a growing share of offenses because they are hard to detect; technology that can determine which camera a particular image was taken with could therefore serve as evidence when a suspect denies the act. This paper proposes a deep learning model to identify the camera model used to acquire an image. The proposed model consists of four convolution layers and two fully connected layers, and a high-pass filter is applied for data pre-processing. To verify the performance of the proposed model, the Dresden Image Database was used and the dataset was generated by applying the sequential partition method. The proposed model is compared with existing studies that use a 3-layer model or a model with GLCM features, and it achieves 98% accuracy, similar to that of the latest technology.
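
The high-pass pre-processing step described above suppresses image content so that a forensic network sees sensor-noise patterns rather than scene detail. A minimal sketch follows; the 3×3 Laplacian-style kernel is a generic high-pass choice for illustration, not necessarily the filter used in the paper.

```python
def high_pass_residual(img):
    """Convolve a 2-D grid of pixel values with a 3x3 high-pass kernel.

    The kernel is an illustrative Laplacian-style filter (coefficients
    sum to zero, so flat regions produce zero residual). Border pixels
    are left at zero for simplicity.
    """
    kernel = [[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[i][j] * img[y - 1 + i][x - 1 + j]
                            for i in range(3) for j in range(3))
    return out
```

Because the kernel coefficients sum to zero, any uniform region maps to zero, leaving only edges and noise for the network to learn from.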

Bone Microarchitecture at the Femoral Attachment of the Posterior Cruciate Ligament (PCL) by Texture Analysis of Magnetic Resonance Imaging (MRI) in Patients with PCL Injury: an Indirect Reflection of Ligament Integrity

  • Kim, Hwan;Shin, YiRang;Kim, Sung-Hwan;Lee, Young Han
    • Investigative Magnetic Resonance Imaging / v.25 no.2 / pp.93-100 / 2021
  • Purpose: (1) To evaluate the trabecular pattern at the femoral attachment of the posterior cruciate ligament (PCL) in patients with a PCL injury; (2) to analyze bone microarchitecture by applying gray level co-occurrence matrix (GLCM)-based texture analysis; and (3) to determine whether there is a significant relationship between bone microarchitecture and posterior instability. Materials and Methods: The study included 96 patients with PCL tears. Trabecular patterns were evaluated qualitatively on T2-weighted MRI and quantitatively by GLCM texture analysis. The grades of the posterior drawer test (PDT) and the degrees of posterior displacement on stress radiographs were recorded. The 96 patients were classified into two groups, acute and chronic injury, and 27 patients with no PCL injury were enrolled as controls. Pearson's correlation coefficient and one-way ANOVA with the Bonferroni test were used for statistical analyses. This protocol was approved by the Institutional Review Board. Results: A thick and anisotropic trabecular bone pattern was apparent in normal or acute injury (n = 57/61; 93.4%), but was not prominent in chronic injury and posterior instability (n = 31/35; 88.6%). Grades of PDT and degrees of posterior displacement on stress radiographs were not correlated with texture parameters. However, the texture analysis parameters of chronic injury were significantly different from those of the acute injury and control groups (P < 0.05). Conclusion: The trabecular pattern and texture analysis parameters are useful in predicting posterior instability in patients with PCL injury. Evaluation of the bone microarchitecture resulting from altered biomechanics could advance the understanding of PCL function and improve the detection of PCL injury.

Image Analysis of Computer Aided Diagnosis using Gray Level Co-occurrence Matrix in the Ultrasonography for Benign Prostate Hyperplasia (전립선비대증 초음파 영상에서 GLCM을 이용한 컴퓨터보조진단의 영상분석)

  • Cho, Jin-Young;Kim, Chang-Soo;Kang, Se-Sik;Ko, Seong-Jin;Ye, Soo-Young
    • The Journal of the Korea Contents Association / v.15 no.3 / pp.184-191 / 2015
  • Prostate ultrasound is used to diagnose prostate cancer, BPH, and prostatitis, to guide biopsy of prostate cancer, and to determine prostate size. BPH is one of the most common diseases in elderly men. The prostate is divided into four zones: the peripheral zone, central zone, transition zone, and anterior fibromuscular stroma. Histologically, BPH is excessive nodular hyperplasia of the transition zone around the urethra; the progressively enlarging hyperplastic nodules close the urethra and cause lower urinary tract symptoms (LUTS). In this study, transition zone images of hyperplastic and normal prostates were therefore analyzed quantitatively using a computer algorithm. GLCM texture features were applied to 60 normal-tissue cases and 60 BPH-tissue cases, with an analysis region of 50 × 50 pixels, and six parameters were compared for each partial image. As a result, the disease recognition rates of the autocorrelation, cluster prominence, entropy, and sum average parameters were high, at 92-98%. The nodular hyperplastic change of the transition zone of the prostate could thus be confirmed by quantitative image analysis. This is expected to serve as a secondary means of diagnosing BPH and as a database for various prostate examinations.
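
Two of the GLCM parameters named above, entropy and sum average, follow the standard Haralick definitions and can be computed directly from a normalized co-occurrence matrix. A sketch, assuming 0-based gray levels and a base-2 logarithm (some formulations index levels from 1 or use the natural log):

```python
import math

def glcm_entropy(P):
    """Shannon entropy of a normalized GLCM: -sum p * log2(p) over nonzero cells."""
    return -sum(p * math.log2(p) for row in P for p in row if p > 0)

def glcm_sum_average(P):
    """Haralick sum average: sum over (i, j) of (i + j) * P[i][j],
    i.e. the mean of the gray-level-sum distribution p_{x+y}."""
    L = len(P)
    return sum((i + j) * P[i][j] for i in range(L) for j in range(L))
```

Entropy is highest for a flat (maximally disordered) co-occurrence distribution, which is what makes it sensitive to the textural change introduced by hyperplastic nodules.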

Hierarchical Land Cover Classification using IKONOS and AIRSAR Images (IKONOS와 AIRSAR 영상을 이용한 계층적 토지 피복 분류)

  • Yeom, Jun-Ho;Lee, Jeong-Ho;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.27 no.4 / pp.435-444 / 2011
  • A land cover map derived from the spectral features of high-resolution optical images suffers from low spectral resolution and heterogeneity within the same land cover class. For this reason, pixels of the same class, especially in vegetated areas, can be assigned to various classes. To overcome this problem, detailed vegetation classification is applied to integrated optical satellite image and SAR (Synthetic Aperture Radar) data within the vegetation area obtained from a pre-classification of the optical image. Both the pre-classification and the vegetation classification were performed with the MLC (Maximum Likelihood Classification) method. The hierarchical land cover classification was produced by fusing the detailed vegetation classes with the non-vegetation classes of the pre-classification. The proposed method achieves higher accuracy than general SAR-data and GLCM (Gray Level Co-occurrence Matrix) texture integration methods, as well as the hierarchical GLCM integration method. In particular, the proposed method is accurate for both vegetation and non-vegetation classes.
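
The MLC step used in both classification stages assigns each sample to the class whose fitted Gaussian model gives it the highest likelihood. A 1-D sketch under that assumption (real MLC on imagery is multivariate, and the class names here are hypothetical):

```python
import math

def gaussian_mlc_fit(samples_by_class):
    """Estimate a per-class (mean, variance) Gaussian model from training samples."""
    params = {}
    for label, xs in samples_by_class.items():
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        params[label] = (m, v)
    return params

def classify(x, params):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    def log_lik(m, v):
        return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
    return max(params, key=lambda c: log_lik(*params[c]))
```

In the hierarchical scheme above, a classifier like this would first separate vegetation from non-vegetation on the optical bands, then run again on the optical+SAR features within the vegetation mask.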

A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

  • Xu, Yi;Chen, Quansheng;Liu, Yan;Sun, Xin;Huang, Qiping;Ouyang, Qin;Zhao, Jiewen
    • Food Science of Animal Resources / v.38 no.2 / pp.362-375 / 2018
  • This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and sample freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed as follows. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with the linear discriminant analysis (LDA) and support vector machine (SVM) models, the back propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms can accurately evaluate the freshness of pork at the microscopic level, which plays an important role in animal food quality control.
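
The first step of the pipeline above, PCA, finds the direction of greatest variance in the spectral data. A minimal pure-Python sketch of extracting the leading principal component by power iteration (a generic illustration, not the authors' implementation):

```python
def top_principal_component(data, iters=200):
    """Leading eigenvector of the sample covariance matrix via power iteration.

    data is a list of equal-length numeric rows; returns a unit vector.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(d)]
    centered = [[row[k] - means[k] for k in range(d)] for row in data]
    # d x d sample covariance matrix
    cov = [[sum(r[a] * r[b] for r in centered) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting each hyperspectral pixel onto the top few such components yields the characteristic images from which the GLCM texture features are then computed.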

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung;Park, Jueon;Lee, Won Hee;Han, Youkyung
    • Korean Journal of Remote Sensing / v.36 no.5_2 / pp.989-1006 / 2020
  • Building change monitoring based on building detection is one of the most important applications of high-resolution multi-temporal images, such as those from CAS500-1 and -2, which are scheduled to be launched, for monitoring artificial structures. However, the various shapes and sizes of buildings on the Earth's surface, as well as the shadows and trees around them, make accurate building detection difficult. In addition, many misdetections are caused by relief displacement, which depends on the azimuth and elevation angles of the platform. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection; the platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was performed on the high-resolution imagery, shadow objects were classified through shadow intensity, and feature information such as the rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and area of each object was calculated to detect building candidates. The final buildings were then detected using the direction and distance relationship between the center of each building candidate and its shadow, according to the azimuth angle of the Sun. Three methods were proposed for detecting changes between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV). Experimental results show that the F1-scores of building detection using feature information alone were 0.488 and 0.696 in the KOMPSAT-3 and UAV images respectively, whereas the F1-scores of building detection considering shadows were 0.876 and 0.867, indicating the higher accuracy of the shadow-based method. Among the three proposed building change detection methods, the method considering the direction between objects according to the platform azimuth angle achieved the highest F1-score, 0.891.
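
The building-to-shadow geometry used above can be sketched with simple trigonometry, assuming the sun azimuth is measured clockwise from north, image axes align with east/north, and the shadow falls on the side opposite the sun (the function names are illustrative, not from the paper):

```python
import math

def shadow_direction(sun_azimuth_deg):
    """Unit vector (east, north) pointing from a building toward its shadow.

    Azimuth is measured clockwise from north; the shadow direction is
    opposite the direction toward the sun.
    """
    a = math.radians(sun_azimuth_deg)
    return (-math.sin(a), -math.cos(a))

def shadow_length(height, sun_elevation_deg):
    """Ground length of the shadow of a vertical structure of given height."""
    return height / math.tan(math.radians(sun_elevation_deg))
```

For example, with the sun due south the shadow points north, and at 45° elevation the shadow length equals the building height; a candidate object whose nearby shadow lies along this predicted direction is accepted as a building.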

Classification of Breast Tumor Cell Tissue Section Images (유방 종양 세포 조직 영상의 분류)

  • 황해길;최현주;윤혜경;남상희;최흥국
    • Journal of the Institute of Convergence Signal Processing / v.2 no.4 / pp.22-30 / 2001
  • In this paper we propose three classification algorithms to classify breast tumors occurring in the duct into benign, DCIS (ductal carcinoma in situ), and NOS (invasive ductal carcinoma). The general approach to creating a classifier consists of two steps: feature extraction and classification. Feature extraction is critical, because classification performance depends on the extracted features. In the feature extraction step, we extracted morphology features describing the size of nuclei, and texture features reflecting the internal structure of the tumor from wavelet-transformed images at 10× and 40× magnification. In particular, to examine the correlation between correct classification rate and wavelet depth, we applied 1-, 2-, 3-, and 4-level wavelet transforms to the images and extracted texture features from the transformed images. The morphology features used were area, perimeter, width along the X axis, width along the Y axis, and circularity; the texture features were entropy, energy, contrast, and homogeneity. In the classification step, we created three classifiers from the extracted features using discriminant analysis: the first from the morphology features, and the second and third from the texture features of the wavelet-transformed images at 10× and 40× magnification. Finally, we analyzed and compared the correct classification rates of the three classifiers and found that the best classifier was built from the texture features of 3-level wavelet-transformed images.
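
The wavelet texture features described above are computed on the subbands of a 2-D wavelet transform. A minimal one-level Haar decomposition with an energy feature (the paper does not state its wavelet basis, so Haar is an assumption here; deeper levels, such as the 3-level transform found best, recurse on the LL band):

```python
def haar2d(img):
    """One level of the 2-D Haar wavelet transform (averages/differences).

    Returns the four subbands LL, LH, HL, HH; image sides must be even.
    """
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            LL[y // 2][x // 2] = (a + b + c + d) / 4.0  # smooth approximation
            LH[y // 2][x // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HL[y // 2][x // 2] = (a + b - c - d) / 4.0  # vertical detail
            HH[y // 2][x // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def energy(band):
    """Energy texture feature: mean of squared subband coefficients."""
    n = len(band) * len(band[0])
    return sum(v * v for row in band for v in row) / n
```

Texture statistics such as energy, entropy, contrast, and homogeneity are then taken per subband and per level, giving the feature vectors fed into the discriminant-analysis classifiers.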
