• Title/Summary/Keyword: gray-level co-occurrence matrix


Cone-beam computed tomography texture analysis can help differentiate odontogenic and non-odontogenic maxillary sinusitis

  • Andre Luiz Ferreira Costa;Karolina Aparecida Castilho Fardim;Isabela Teixeira Ribeiro;Maria Aparecida Neves Jardini;Paulo Henrique Braz-Silva;Kaan Orhan;Sergio Lucio Pereira de Castro Lopes
    • Imaging Science in Dentistry
    • /
    • v.53 no.1
    • /
    • pp.43-51
    • /
    • 2023
  • Purpose: This study aimed to assess texture analysis (TA) of cone-beam computed tomography (CBCT) images as a quantitative tool for the differential diagnosis of odontogenic and non-odontogenic maxillary sinusitis (OS and NOS, respectively). Materials and Methods: CBCT images of 40 patients diagnosed with OS (N=20) and NOS (N=20) were evaluated. Gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GLRLM) texture parameters were extracted using manually placed regions of interest on lesion images. Seven texture parameters were calculated using the GLCM and 4 parameters using the GLRLM. The Mann-Whitney test was used for comparisons between the groups, and the Levene test was performed to confirm the homogeneity of variance (α=5%). Results: The results showed statistically significant differences (P<0.05) between the OS and NOS patients regarding 3 TA parameters. NOS patients presented higher values for contrast, while OS patients presented higher values for correlation and inverse difference moment. Greater textural homogeneity was observed in the OS patients than in the NOS patients, with statistically significant differences in standard deviations between the groups for correlation, sum of squares, sum of entropy, and entropy. Conclusion: TA enabled quantitative differentiation between OS and NOS on CBCT images by using the parameters of contrast, correlation, and inverse difference moment.
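The GLCM parameters named in this abstract (contrast, correlation, inverse difference moment) can be illustrated with a minimal sketch using scikit-image; the ROI handling, quantization level, and offsets below are assumptions, not the authors' protocol.

```python
# Hypothetical sketch: extracting a few GLCM texture parameters from a CBCT ROI
# with scikit-image (not the software used in the study).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, levels=32):
    """Quantize an ROI and compute a symmetric, normalized GLCM plus three parameters."""
    roi = np.asarray(roi, dtype=float)
    # Rescale gray values into `levels` bins so the co-occurrence matrix stays small.
    quantized = np.digitize(roi, np.linspace(roi.min(), roi.max() + 1e-9, levels)) - 1
    glcm = graycomatrix(quantized.astype(np.uint8),
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {
        "contrast": graycoprops(glcm, "contrast").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
        # scikit-image's 'homogeneity' corresponds to the inverse difference moment.
        "inverse_difference_moment": graycoprops(glcm, "homogeneity").mean(),
    }

# Example: features = glcm_features(cbct_roi_array)
```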

Counterfeit Money Detection Algorithm based on Morphological Features of Color Printed Images and Supervised Learning Model Classifier (컬러 프린터 영상의 모폴로지 특징과 지도 학습 모델 분류기를 활용한 위변조 지폐 판별 알고리즘)

  • Woo, Qui-Hee;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.12
    • /
    • pp.889-898
    • /
    • 2013
  • Due to the popularization of high-performance capturing equipment and the emergence of powerful image-editing software, it is easy to make high-quality counterfeit money. However, the ability of the general public to detect counterfeit money is extremely limited, and dedicated detection devices are expensive. In this paper, a counterfeit money detection algorithm using a general-purpose scanner and computer system is proposed. First, the printing features of color printers are calculated using morphological operations and the gray-level co-occurrence matrix. Then, these features are used to train a support vector machine classifier. The trained classifier is applied to identify money as either original or counterfeit. In the experiment, we measured the detection rate between original and counterfeit money, and the printing source was also identified. The proposed algorithm was compared with an algorithm using a Wiener filter to identify the color printing source. The accuracy for identifying counterfeit money was 91.92%, and the accuracy for identifying the printing source was over 94.5%. The results show that the proposed algorithm performs better than previous studies.
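A minimal sketch of the pipeline shape described here (morphological and GLCM features feeding an SVM) is given below; the specific feature set, window handling, and labels are illustrative assumptions rather than the authors' exact design.

```python
# Hypothetical sketch: morphological + GLCM features from a scanned banknote patch,
# classified with an SVM. Feature choices are illustrative only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.morphology import disk, erosion, dilation
from sklearn.svm import SVC

def banknote_features(gray_patch):
    g = (gray_patch / (gray_patch.max() + 1e-9) * 255).astype(np.uint8)
    # Morphological gradient emphasizes the dot patterns left by color printers.
    grad = dilation(g, disk(1)).astype(int) - erosion(g, disk(1)).astype(int)
    glcm = graycomatrix(g, [1], [0, np.pi / 2], levels=256, symmetric=True, normed=True)
    return [grad.mean(), grad.std(),
            graycoprops(glcm, "contrast").mean(),
            graycoprops(glcm, "energy").mean()]

# X: feature rows for training patches, y: 0 = genuine, 1 = counterfeit (labels assumed).
# clf = SVC(kernel="rbf").fit(X, y); clf.predict([banknote_features(test_patch)])
```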

Changes Detection of Ice Dimension in Cheonji, Baekdu Mountain Using Sentinel-1 Image Classification (Sentinel-1 위성의 영상 분류 기법을 이용한 백두산 천지의 얼음 면적 변화 탐지)

  • Park, Sungjae;Eom, Jinah;Ko, Bokyun;Park, Jeong-Won;Lee, Chang-Wook
    • Journal of the Korean earth science society
    • /
    • v.41 no.1
    • /
    • pp.31-39
    • /
    • 2020
  • Cheonji, the largest caldera lake in Asia, is located at the summit of Baekdu Mountain. Cheonji is covered with snow and ice for about six months of the year due to its high altitude and surrounding environment. Since most of its water comes from groundwater, the water temperature is closely related to volcanic activity, and much volcanic activity has been monitored on the mountain since the 2000s. In this study, we analyzed the area of ice formed during winter on Baekdu Mountain using Sentinel-1 satellite image data provided by the European Space Agency (ESA). To calculate the ice area from the backscatter images of the Sentinel-1 satellite, 20 Gray-Level Co-occurrence Matrix (GLCM) layers were generated from the two polarization images using texture analysis. The ice area was then calculated by classifying the GLCM layers with the Support Vector Machine (SVM) algorithm. The calculated area was also correlated with temperature data obtained from the Samjiyeon weather station. This study could serve as a basis for a new method of calculating ice area prior to a full-scale, long-term time-series analysis.
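The idea of deriving GLCM texture layers from SAR backscatter and classifying them with an SVM can be sketched as below; the window size, the two texture measures, and the 10 m pixel spacing are assumptions, and the authors' 20-layer setup is not reproduced.

```python
# Hypothetical sketch: sliding-window GLCM texture layers from a Sentinel-1
# backscatter band, to be classified into ice/water with an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_layers(band, win=9, levels=16):
    rng = band.max() - band.min() + 1e-9
    q = np.floor((band - band.min()) / rng * (levels - 1)).astype(np.uint8)
    h, w = q.shape
    pad = win // 2
    qp = np.pad(q, pad, mode="edge")
    out = np.zeros((h, w, 2), dtype=float)  # contrast and entropy layers
    for i in range(h):
        for j in range(w):
            glcm = graycomatrix(qp[i:i + win, j:j + win], [1], [0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            out[i, j, 0] = graycoprops(glcm, "contrast")[0, 0]
            out[i, j, 1] = -np.sum(p[p > 0] * np.log(p[p > 0]))  # GLCM entropy
    return out

# Stack layers from both polarizations, train SVC on labelled ice/water pixels, then:
# ice_area_km2 = (prediction == ICE_LABEL).sum() * (10 * 10) / 1e6  # assuming 10 m pixels
```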

Study on evaluating the significance of 3D nuclear texture features for diagnosis of cervical cancer (자궁경부암 진단을 위한 3차원 세포핵 질감 특성값 유의성 평가에 관한 연구)

  • Choi, Hyun-Ju;Kim, Tae-Yun;Malm, Patrik;Bengtsson, Ewert;Choi, Heung-Kook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.10
    • /
    • pp.83-92
    • /
    • 2011
  • The aim of this study is to evaluate whether 3D nuclear chromatin texture features are significant in recognizing the progression of cervical cancer. In particular, we assessed whether our method could detect subtle differences in the chromatin pattern of seemingly normal cells on specimens with malignancy. We extracted nuclear texture features based on the 3D GLCM (Gray-Level Co-occurrence Matrix) and the 3D wavelet transform from 100 cell volume data sets for each group (Normal, LSIL, and HSIL). To evaluate the feasibility of 3D chromatin texture analysis, we compared the correct classification rates of classifiers using these features. In addition, we compared the correct classification rates of classifiers using the proposed 3D nuclear texture features and 2D nuclear texture features extracted in the same way. The results showed that the classifier using the 3D nuclear texture features performed better, which means our method could improve the accuracy and reproducibility of the quantification of cervical cells.
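As a rough illustration of the 3D co-occurrence idea (not the authors' full 3D GLCM and wavelet feature set), a single-offset 3D gray-level co-occurrence matrix over a nuclear volume could be computed as follows; quantization, offset, and feature choices are assumptions.

```python
# Hypothetical sketch: minimal 3D gray-level co-occurrence matrix for a voxel volume.
import numpy as np

def glcm_3d(volume, offset=(0, 0, 1), levels=16):
    """Co-occurrence probabilities of quantized voxel pairs separated by `offset` (dz, dy, dx >= 0)."""
    v = np.asarray(volume, dtype=float)
    q = np.floor((v - v.min()) / (v.max() - v.min() + 1e-9) * (levels - 1)).astype(int)
    dz, dy, dx = offset
    a = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
    b = q[dz:, dy:, dx:]
    glcm = np.zeros((levels, levels), dtype=float)
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # count voxel pairs
    glcm += glcm.T          # make symmetric
    glcm /= glcm.sum()      # normalize to joint probabilities
    return glcm

# Example texture features from the matrix:
# energy = (glcm ** 2).sum()
# entropy = -(glcm[glcm > 0] * np.log(glcm[glcm > 0])).sum()
```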

A Study on the Detection and Statistical Feature Analysis of Red Tide Area in South Coast Using Remote Sensing (원격탐사를 이용한 남해안의 적조영역 검출과 통계적 특징 분석에 관한 연구)

  • Sur, Hyung-Soo;Lee, Chil-Woo
    • The KIPS Transactions:PartB
    • /
    • v.14B no.2
    • /
    • pp.65-70
    • /
    • 2007
  • Red tide has become a serious worldwide environmental issue since the 1990s. Advanced nations have been conducting studies to detect red tide areas at an early stage using ocean-observation satellites. However, most of Korea's coastline is severely indented, and many turbid streams flow into the coast, so it is hard to detect small red tide areas with low-resolution ocean satellites. In addition, most existing red tide detection methods rely on sea color, a single feature of ocean satellite images; using sea color alone provides only a few image features and can cause false negatives in detecting red tide areas. Therefore, in this paper, texture information was acquired by computing six GLCM (Gray-Level Co-occurrence Matrix) texture features from high-resolution land-observation satellite images of the south coast. Needless components were removed from this information by reducing its dimensionality through principal component analysis, and the result was converted into two principal component accumulation images; in the experiment, the two principal components accounted for 94.6% of the total eigenvalues. Compared with detection using only the sea color image, the principal component images produced more accurate results. In addition, the red tide area was quantitatively distinguished from turbid streams and red-tide-free seawater using statistical analysis of the texture features.
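The combination of windowed GLCM texture features and PCA-based dimensionality reduction described here can be sketched as follows; the six scikit-image properties and the window handling are assumptions standing in for the authors' exact feature set.

```python
# Hypothetical sketch: six per-window GLCM texture features reduced to two
# principal components, mirroring the steps described in the abstract.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]

def window_glcm_features(window, levels=32):
    rng = window.max() - window.min() + 1e-9
    w = np.floor((window - window.min()) / rng * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(w, [1], [0, np.pi / 2], levels=levels, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in PROPS]

# features: (n_windows, 6) array computed over the satellite image
# pca = PCA(n_components=2).fit(features)
# print(pca.explained_variance_ratio_.sum())   # the abstract reports ~94.6% for 2 PCs
# pc_images = pca.transform(features)           # two principal-component images
```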

Detection of Collapse Buildings Using UAV and Bitemporal Satellite Imagery (UAV와 다시기 위성영상을 이용한 붕괴건물 탐지)

  • Jung, Sejung;Lee, Kirim;Yun, Yerin;Lee, Won Hee;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.3
    • /
    • pp.187-196
    • /
    • 2020
  • In this study, collapsed building detection using UAV (Unmanned Aerial Vehicle) and PlanetScope satellite images was carried out, suggesting the possibility of using heterogeneous sensors to detect objects located on the surface. To this end, an area where about 20 buildings had collapsed due to forest fire damage was selected as the study site. First, object features such as ExG (Excess Green), GLCM (Gray-Level Co-occurrence Matrix), and DSM (Digital Surface Model) were generated from high-resolution UAV images on which object-based segmentation had been performed, and these features were then used to detect candidate collapsed buildings. In this process, the results of change detection using PlanetScope images were used together to improve detection accuracy. More specifically, the changed pixels obtained from the bitemporal PlanetScope images were used as seed pixels to correct the misdetected and overdetected areas in the candidate group of collapsed buildings. The accuracy of the detection results using only the UAV image and that of the results using the UAV and PlanetScope images together were analyzed against a manually digitized reference image. The UAV-only results had an F1-score of 0.4867, which improved to 0.8064 when the UAV and PlanetScope images were used together. Moreover, the Kappa coefficient also improved dramatically, from 0.3674 to 0.8225.
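The per-pixel accuracy measures quoted in this abstract (F1-score and Kappa coefficient against a digitized reference) can be computed as in the short sketch below; the array names and binary labels are assumptions.

```python
# Hypothetical sketch: scoring a collapsed-building detection map against a
# manually digitized reference map with the F1-score and Kappa coefficient.
import numpy as np
from sklearn.metrics import f1_score, cohen_kappa_score

def evaluate(reference, detected):
    """reference, detected: 2D binary arrays (1 = collapsed building, 0 = background)."""
    y_true = np.asarray(reference).ravel()
    y_pred = np.asarray(detected).ravel()
    return {"f1": f1_score(y_true, y_pred), "kappa": cohen_kappa_score(y_true, y_pred)}

# e.g. evaluate(reference_map, uav_only_map) vs. evaluate(reference_map, uav_planetscope_map)
```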

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1407-1422
    • /
    • 2022
  • This study was conducted to classify tree species and assess the classification accuracy using SE-Inception, a classification-based deep learning model. The input images of the dataset were Worldview-3 and GeoEye-1 images, and the input image size was divided into 10 × 10 m, 30 × 30 m, and 50 × 50 m to compare and evaluate the species classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images, and labeling was then performed manually. The dataset consisted of a total of 2,429 images, of which about 85% were used as training data and about 15% as validation data. As a result of classification using the deep learning model, an overall accuracy of up to 78% was achieved with the Worldview-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus showed a high F1-score of more than 85% regardless of the input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, had many errors. Therefore, there may be limitations in extracting features using only the spectral information of satellite images, and classification accuracy may be improved by using images containing various pattern information such as a vegetation index and the Gray-Level Co-occurrence Matrix (GLCM).
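The improvement suggested in the conclusion, adding a vegetation index and GLCM texture as extra input channels, could look roughly like the sketch below; the band order, the choice of the NIR band for texture, and the patch-level homogeneity channel are all assumptions.

```python
# Hypothetical sketch: stacking an NDVI channel and a GLCM homogeneity channel
# onto a 4-band spectral patch before feeding a classification network.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def augment_patch(patch):
    """patch: (H, W, 4) array with assumed band order blue, green, red, NIR."""
    red, nir = patch[..., 2].astype(float), patch[..., 3].astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    g = np.floor((nir - nir.min()) / (nir.max() - nir.min() + 1e-9) * 31).astype(np.uint8)
    glcm = graycomatrix(g, [1], [0], levels=32, symmetric=True, normed=True)
    homogeneity = np.full(patch.shape[:2], graycoprops(glcm, "homogeneity")[0, 0])
    return np.dstack([patch, ndvi, homogeneity])  # (H, W, 6) network input
```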

A Novel System for Detecting Adult Images on the Internet

  • Park, Jae-Yong;Park, Sang-Sung;Shin, Young-Geun;Jang, Dong-Sik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.5
    • /
    • pp.910-924
    • /
    • 2010
  • As Internet usage has increased, the risk of adolescents being exposed to adult content and harmful information on the Internet has also risen. To help prevent adolescents from accessing such content, a novel detection method for adult images is proposed. The proposed method involves three steps. First, the Image Of Interest (IOI) is extracted from the image background. Second, the IOI is distinguished from the segmented image using a novel weighting mask and determined to be acceptable or unacceptable. Finally, the features (color and texture) of the IOI or original image are compared to a critical value; if they exceed that value, the image is deemed to be an adult image. A Receiver Operating Characteristic (ROC) curve analysis was performed to define this optimal critical value, and the textural features are identified using a gray-level co-occurrence matrix. The proposed method increased the precision of detection by applying the novel weighting mask and the ROC curve analysis. To demonstrate the effectiveness of the proposed method, 2,850 adult and non-adult images were used for experimentation.
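Selecting a critical value from an ROC curve, as described in this abstract, can be done as in the sketch below; the use of Youden's J statistic and the score/label names are assumptions, since the abstract does not specify the selection rule.

```python
# Hypothetical sketch: choosing the critical value for the adult-image decision
# from an ROC curve via Youden's J statistic.
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(labels, scores):
    """labels: 1 = adult, 0 = non-adult; scores: combined color/texture score per image."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    j = tpr - fpr                      # Youden's J statistic
    return thresholds[np.argmax(j)]

# An image is flagged as adult when its score exceeds the returned critical value.
```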

Framework for Content-Based Image Identification with Standardized Multiview Features

  • Das, Rik;Thepade, Sudeep;Ghosh, Saurav
    • ETRI Journal
    • /
    • v.38 no.1
    • /
    • pp.174-184
    • /
    • 2016
  • Information identification with image data by means of low-level visual features has evolved as a challenging research domain. Conventional text-based mapping of image data has been gradually replaced by content-based techniques of image identification. Feature extraction from image content plays a crucial role in facilitating content-based detection processes. In this paper, the authors have proposed four different techniques for multiview feature extraction from images. The efficiency of extracted feature vectors for content-based image classification and retrieval is evaluated by means of fusion-based and data standardization-based techniques. It is observed that the latter surpasses the former. The proposed methods outclass state-of-the-art techniques for content-based image identification and show an average increase in precision of 17.71% and 22.78% for classification and retrieval, respectively. Three public datasets - Wang; Oliva and Torralba (OT-Scene); and Corel - are used for verification purposes. The research findings are statistically validated by conducting a paired t-test.
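The two evaluation schemes named in this abstract, fusion of multiview feature vectors and data standardization, can be contrasted generically as below; the feature extractors and the z-score choice are assumptions, not the authors' specific formulation.

```python
# Hypothetical sketch: fusion (concatenation) of multiview features versus
# per-feature z-score standardization before a common classifier.
import numpy as np
from sklearn.preprocessing import StandardScaler

def fuse(views):
    """views: list of (n_samples, n_features_i) arrays from different feature extractors."""
    return np.hstack(views)                       # fusion-based representation

def standardize(views):
    fused = np.hstack(views)
    return StandardScaler().fit_transform(fused)  # data-standardization-based representation

# Either representation can then be fed to the same classifier for comparison.
```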

Depth-based Correction of Side Scan Sonar Image Data and Segmentation for Seafloor Classification (수심을 고려한 사이드 스캔 소나 자료의 보정 및 해저면 분류를 위한 영상분할)

  • 서상일;김학일;이광훈;김대철
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.2
    • /
    • pp.133-150
    • /
    • 1997
  • The purpose of this paper is to develop an algorithm for the classification and interpretation of the seafloor based on side scan sonar data. The algorithm consists of mosaicking the sonar data using navigation data, correcting and compensating the acoustic amplitude data considering the characteristics of the side scan sonar system, and segmenting the seafloor using digital image processing techniques. The correction and compensation process is essential because there is usually a difference between the acoustic amplitudes recorded at the same distance on the port and starboard sides, and the amplitudes become attenuated as the distance increases. In this paper, an algorithm for compensating the side scan sonar data is proposed, and its result is compared with the mosaicking result obtained without any compensation. The algorithm considers the amplitude characteristics according to the tow-fish's depth as well as the attenuation trend of the side scan sonar along the beam positions. This paper also proposes a texture-based image segmentation algorithm, where the criterion is the maximum occurrence related to gray level. A preliminary experiment has been carried out with the side scan sonar data, and its result is demonstrated.
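One reading of the "maximum occurrence related to gray level" criterion is the maximum co-occurrence probability in a local window, which can be sketched as below; the window size, quantization, and threshold are assumptions rather than the paper's stated parameters.

```python
# Hypothetical sketch: texture-based seafloor segmentation using the maximum
# co-occurrence probability computed in a sliding window over the sonar mosaic.
import numpy as np
from skimage.feature import graycomatrix

def max_cooccurrence_map(mosaic, win=15, levels=16):
    rng = mosaic.max() - mosaic.min() + 1e-9
    q = np.floor((mosaic - mosaic.min()) / rng * (levels - 1)).astype(np.uint8)
    pad = win // 2
    qp = np.pad(q, pad, mode="edge")
    out = np.zeros(q.shape, dtype=float)
    for i in range(q.shape[0]):
        for j in range(q.shape[1]):
            glcm = graycomatrix(qp[i:i + win, j:j + win], [1], [0],
                                levels=levels, symmetric=True, normed=True)
            out[i, j] = glcm.max()   # maximum co-occurrence probability in the window
    return out

# segmentation = max_cooccurrence_map(sonar_mosaic) > threshold  # threshold chosen empirically
```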