• Title/Summary/Keyword: Texture Classification

Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.93-101
    • /
    • 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the gray-level co-occurrence matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV images were captured between March and October 2021, while field surveys on three dates provided ground-truth data. We focused on the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using the visible bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showed the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently made the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models. Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of combining farm maps and GLCM features with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on the specific scenario are key to optimizing performance in real-world applications.
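
A minimal sketch of the kind of GLCM + SVM pipeline this abstract describes, using scikit-image and scikit-learn (not the authors' code): each patch is quantized, the co-occurrence matrix yields homogeneity, correlation, and entropy, and C/gamma are tuned by grid search. Patch size, gray-level count, and parameter grids are illustrative assumptions, and the toy data stands in for the UAV crop patches and labels.

```python
# Minimal sketch, not the authors' code: GLCM texture features + RBF SVC with a
# C/gamma grid search. Patch size, gray-level count, and parameter grids are
# illustrative assumptions; the toy data stands in for UAV crop patches/labels.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def glcm_features(patch, levels=32):
    """Homogeneity, correlation, and entropy of a quantized grayscale patch."""
    q = (patch // (256 // levels)).astype(np.uint8)      # quantize 0-255 -> 0-31
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, :]                                  # (levels, levels, n_angles)
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1)).mean()
    return np.array([graycoprops(glcm, "homogeneity").mean(),
                     graycoprops(glcm, "correlation").mean(),
                     entropy])

# Toy stand-ins for per-field image patches and crop labels.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = np.arange(40) % 3                                # three hypothetical crops

X = np.vstack([glcm_features(p) for p in patches])
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}, cv=5)
grid.fit(X, labels)
print(grid.best_params_)
```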

Automated Classification of Ground-glass Nodules using GGN-Net based on Intensity, Texture, and Shape-Enhanced Images in Chest CT Images (흉부 CT 영상에서 결절의 밝기값, 재질 및 형상 증강 영상 기반의 GGN-Net을 이용한 간유리음영 결절 자동 분류)

  • Byun, So Hyun;Jung, Julip;Hong, Helen;Song, Yong Sub;Kim, Hyungjin;Park, Chang Min
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.5
    • /
    • pp.31-39
    • /
    • 2018
  • In this paper, we propose an automated method for ground-glass nodule (GGN) classification using GGN-Net based on intensity-, texture-, and shape-enhanced images in chest CT. First, we propose using input images that enhance intensity, texture, and shape information so that the input encodes the presence and size of the solid component in the GGN. Second, we propose GGN-Net, which integrates and trains the feature maps obtained from the various input images through multiple convolution modules inside the network. To evaluate the classification accuracy of the proposed method, we used 90 pure GGNs, 38 part-solid GGNs with a solid component smaller than 5 mm, and 23 part-solid GGNs with a solid component larger than 5 mm. To evaluate the effect of the input images, various input image sets were composed and the classification results were compared. The results showed that the proposed method, using the combination of intensity-, texture-, and shape-enhanced images, achieved the best result with an accuracy of 82.75%.
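
A rough PyTorch sketch of the multi-input idea described above, not GGN-Net itself: one small convolution branch per enhanced image (intensity, texture, shape), with the resulting feature maps fused by channel concatenation before classification. Layer widths and the 64x64 patch size are illustrative assumptions.

```python
# Rough PyTorch sketch of the multi-input idea (not GGN-Net itself): one small
# convolution branch per enhanced image, fused by channel concatenation.
import torch
import torch.nn as nn

class MultiInputCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.branches = nn.ModuleList([branch() for _ in range(3)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32 * 3, n_classes))

    def forward(self, intensity, texture, shape):
        feats = [b(x) for b, x in zip(self.branches, (intensity, texture, shape))]
        return self.head(torch.cat(feats, dim=1))          # fuse feature maps channel-wise

# Three 64x64 single-channel enhanced images per nodule patch (batch of 4).
model = MultiInputCNN()
x = [torch.randn(4, 1, 64, 64) for _ in range(3)]
print(model(*x).shape)                                     # torch.Size([4, 3])
```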

Effect of light illumination and camera moving speed on soil image quality (조명 및 카메라 이동속도가 토양 영상에 미치는 영향)

  • Chung, Sun-Ok;Cho, Ki-Hyun;Jung, Ki-Yuol
    • Korean Journal of Agricultural Science
    • /
    • v.39 no.3
    • /
    • pp.407-412
    • /
    • 2012
  • Soil texture has an important influence on agriculture, affecting crop selection, the movement of nutrients and water, soil electrical conductivity, and crop growth. Conventionally, soil texture has been determined in the laboratory using pipette and hydrometer methods that require a significant amount of time, labor, and cost. Recently, in-situ soil texture classification systems using optical diffuse reflectometry or mechanical resistance have been reported, especially for precision agriculture, which needs more data than conventional agriculture. This paper is part of overall research to develop an in-situ soil texture classification system using image processing. The issues investigated in this study were the effects of sensor travel speed and of light source and intensity on image quality. When the travel speed of the image sensor increased from 0 to 10 mm/s, the travel distance during image capture and the corresponding number of pixels increased to 3.30 mm and 9.4, respectively. The travel distance was not negligible even at a speed of 2 mm/s (0.66 mm and 1.4 pixels), and image degradation was significant. Tests of illumination intensity showed that 7 to 11 lux was a good condition, minimizing shade and reflection. When soil water content increased, illumination intensity had to be increased to compensate for the decrease in brightness. The results of this paper should be useful for the construction, testing, and application of the sensor.
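
A back-of-envelope check of the motion-smear figures quoted above. The ~0.33 s effective exposure used here is inferred from the reported 3.30 mm of travel at 10 mm/s; it is an assumption, not a value stated by the authors.

```python
# Back-of-envelope check of the motion-smear figures quoted above. The ~0.33 s
# effective exposure is an inference from "3.30 mm at 10 mm/s", not a value
# stated by the authors.
exposure_s = 3.30 / 10          # inferred effective exposure time (s)
for speed_mm_s in (0, 2, 5, 10):
    smear_mm = speed_mm_s * exposure_s
    print(f"{speed_mm_s:>2} mm/s -> {smear_mm:.2f} mm smear during exposure")
# reproduces the reported 0.66 mm at 2 mm/s and 3.30 mm at 10 mm/s
```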

Highly Flexible Piezoelectric Tactile Sensor based on PZT/Epoxy Nanocomposite for Texture Recognition (텍스처 인지를 위한 PZT/Epoxy 나노 복합소재 기반 유연 압전 촉각센서)

  • Yulim Min;Yunjeong Kim;Jeongnam Kim;Saerom Seo;Hye Jin Kim
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.2
    • /
    • pp.88-94
    • /
    • 2023
  • Recently, piezoelectric tactile sensors have garnered considerable attention in the field of texture recognition owing to their high sensitivity and high-frequency detection capability. Despite their remarkable potential, improving their mechanical flexibility so that they can attach to complex surfaces remains challenging. In this study, we present a flexible piezoelectric sensor that can be bent to an extremely small radius of 2.5 mm while maintaining good electrical performance. The proposed sensor was fabricated by controlling the thickness, which determines the internal stress under external deformation. The fabricated piezoelectric sensor exhibited a high sensitivity of 9.3 nA/kPa over the 0-10 kPa range and a wide frequency range of up to 1 kHz. To demonstrate real-time texture recognition by rubbing the surface of an object with the sensor, nine fabric plates differing in material properties and surface roughness were prepared. To extract features of the objects from the sensing data, we converted the analog signals into short-time Fourier transform (STFT) images. Texture recognition was then performed using a convolutional neural network, achieving a classification accuracy of 97%.
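
A minimal sketch of the preprocessing step this abstract describes: converting a 1-D piezoelectric signal into an STFT magnitude image that a CNN can classify. The sampling rate, window length, and synthetic test signal are illustrative assumptions (SciPy is used here).

```python
# Minimal sketch of the preprocessing described above: a 1-D piezoelectric
# signal is converted into an STFT magnitude image for a CNN. The sampling
# rate, window length, and synthetic test signal are illustrative assumptions.
import numpy as np
from scipy.signal import stft

fs = 2000                                    # assumed sampling rate (Hz), covers the ~1 kHz band
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)   # stand-in rubbing signal

f, frames, Z = stft(signal, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.log1p(np.abs(Z))            # log-magnitude "image": (freq bins, time frames)
print(spectrogram.shape)                     # resize/normalize this and feed it to a CNN
```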

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.4
    • /
    • pp.27-39
    • /
    • 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation, including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study explores an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis, and images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on the RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using digital image processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the confusion matrix and F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. Because it relies on an easily accessible tool, significant time and cost savings can be expected compared with traditional methods.
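
A small sketch of an RGB colour-distribution analysis with OpenCV, in the spirit of the preprocessing described above (the CNN training itself is omitted). The file name is hypothetical, and a random image is substituted so the snippet runs stand-alone.

```python
# Small sketch of an RGB colour-distribution analysis with OpenCV, in the
# spirit of the preprocessing above (CNN training omitted). "soil_sample.jpg"
# is hypothetical; a random image is substituted so the snippet runs stand-alone.
import cv2
import numpy as np

img_bgr = cv2.imread("soil_sample.jpg")
if img_bgr is None:
    img_bgr = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

totals = img_rgb.reshape(-1, 3).sum(axis=0).astype(float)
ratios = totals / totals.sum()               # share of R, G, B in the overall signal
print({c: round(float(r), 3) for c, r in zip("RGB", ratios)})

hist_r = cv2.calcHist([img_rgb], [0], None, [32], [0, 256]).ravel()   # 32-bin red histogram
```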

Face Representation and Face Recognition using Optimized Local Ternary Patterns (OLTP)

  • Raja, G. Madasamy;Sadasivam, V.
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.402-410
    • /
    • 2017
  • For many years, researchers in the face description area have represented and recognized faces using methods that include subspace discriminant analysis, statistical learning, and non-statistical approaches. Automatic face recognition nevertheless remains an interesting but challenging problem. This paper presents a novel and efficient face image representation method based on Optimized Local Ternary Pattern (OLTP) texture features. The face image is divided into several regions, from which the OLTP texture feature distributions are extracted and concatenated into a feature vector that acts as a face descriptor. Recognition is performed using nearest-neighbor classification with the chi-square distance as the similarity measure. Extensive experimental results on the Yale B, ORL, and AR face databases show that OLTP consistently performs much better than other well-recognized texture models for face recognition.
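
A rough sketch of the pipeline described above, using a plain local ternary pattern (LTP) rather than the paper's optimized OLTP: regional code histograms are concatenated into a face descriptor and matched by chi-square nearest neighbour. The threshold, region grid, and histogram binning are illustrative assumptions.

```python
# Rough sketch of the pipeline above with a plain local ternary pattern (LTP),
# not the paper's optimized OLTP: regional code histograms are concatenated
# into a face descriptor and matched by chi-square nearest neighbour.
import numpy as np

def ltp_histograms(gray, t=5, grid=(4, 4)):
    """Concatenated upper/lower LTP code histograms over a grid of regions."""
    c = gray[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx].astype(int)
        upper += (n >= c + t).astype(int) << bit     # "brighter than centre + t" pattern
        lower += (n <= c - t).astype(int) << bit     # "darker than centre - t" pattern
    feats = []
    for code in (upper, lower):
        for rows in np.array_split(code, grid[0], axis=0):
            for cell in np.array_split(rows, grid[1], axis=1):
                h, _ = np.histogram(cell, bins=256, range=(0, 256))
                feats.append(h / max(h.sum(), 1))
    return np.concatenate(feats)

def chi_square(a, b):
    return float(np.sum((a - b) ** 2 / (a + b + 1e-12)))

# gallery = {person_id: ltp_histograms(face)}; probe = ltp_histograms(test_face)
# predicted = min(gallery, key=lambda pid: chi_square(gallery[pid], probe))
```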

Efficient Text Localization using MLP-based Texture Classification (신경망 기반의 텍스춰 분석을 이용한 효율적인 문자 추출)

  • Jung, Kee-Chul;Kim, Kwang-In;Han, Jung-Hyun
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.180-191
    • /
    • 2002
  • We present a new text localization method for images using a multi-layer perceptron (MLP) and a multiple continuously adaptive mean shift (MultiCAMShift) algorithm. An automatically constructed MLP-based texture classifier generates a text probability image for various types of images without explicit feature extraction. The MultiCAMShift algorithm, which operates on the text probability image produced by the MLP, can place bounding boxes efficiently without analyzing the texture properties of the entire image.
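
A loose sketch of the first stage described above: a small MLP (scikit-learn's MLPClassifier here, as an assumption) scores fixed-size windows of raw pixels, and the scores are assembled into a text-probability image; the MultiCAMShift stage is not reproduced. The window size and step are illustrative.

```python
# Loose sketch of the first stage above: a small MLP (scikit-learn's
# MLPClassifier, as an assumption) scores raw-pixel windows and the scores are
# assembled into a text-probability image; MultiCAMShift is not reproduced.
import numpy as np
from sklearn.neural_network import MLPClassifier

WIN = 16   # window size in pixels (illustrative)

def sliding_windows(gray, step=8):
    for y in range(0, gray.shape[0] - WIN + 1, step):
        for x in range(0, gray.shape[1] - WIN + 1, step):
            yield y, x, gray[y:y + WIN, x:x + WIN].ravel() / 255.0

def probability_image(gray, clf, step=8):
    """Paint each window with the MLP's probability of the 'text' class."""
    prob = np.zeros(gray.shape, dtype=float)
    for y, x, feat in sliding_windows(gray, step):
        prob[y:y + WIN, x:x + WIN] = clf.predict_proba([feat])[0, 1]
    return prob

# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_train, y_train)
# where X_train holds flattened text/non-text training windows and y_train their labels.
```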

Texture-based PCA for Analyzing Document Image (텍스처 정보 기반의 PCA를 이용한 문서 영상의 분석)

  • Kim, Bo-Ram;Kim, Wook-Hyun
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.283-284
    • /
    • 2006
  • In this paper, we propose a novel segmentation and classification method using texture features for document images. First, we extract the local entropy and then segment the document image into background and foreground using Otsu's method. Finally, we classify the segmented regions into their components using a PCA (principal component analysis) algorithm based on texture features extracted from the co-occurrence matrix of the entropy image. The entropy-based segmentation is robust not only to noise and changes in lighting but also to skew and rotation. The texture features are not restricted to any particular form of document image and discriminate well between components. In addition, the PCA algorithm used as the classifier can classify the components more robustly than a neural network.
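
A minimal sketch of the segmentation step described above, using scikit-image's local entropy and Otsu thresholding; the co-occurrence/PCA classification stage is omitted, and the synthetic page stands in for a real document scan.

```python
# Minimal sketch of the segmentation step above with scikit-image: local
# entropy followed by Otsu thresholding (the co-occurrence/PCA stage is
# omitted). The synthetic page stands in for a real document scan.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk

rng = np.random.default_rng(0)
page = np.full((200, 300), 220, dtype=np.uint8)            # bright, uniform "paper"
page[60:90, 40:260] = rng.integers(0, 255, (30, 220))      # textured, text-like band

ent = entropy(page, disk(5))                 # per-pixel local entropy
mask = ent > threshold_otsu(ent)             # foreground = high-entropy regions
print(f"foreground fraction: {mask.mean():.2f}")
```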

Seafloor Classification Based on the Texture Analysis of Sonar Images Using the Gabor Wavelet

  • Sun, Ning;Shim, Tae-Bo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.3E
    • /
    • pp.77-83
    • /
    • 2008
  • In the production of sonar image textures, orientation and scale factors are very significant. However, most related methods ignore directional information and scale invariance, or pay attention to only one of them. To overcome this problem, we apply the Gabor wavelet to extract features from sonar images, combining the advantages of the Gabor filter and the traditional wavelet function. The mother wavelet is designed with constrained parameters, and the optimal parameters are selected at each orientation with the help of bandwidth parameters based on the Fisher criterion. The Gabor wavelet thus has the properties of both multi-scale and multi-orientation analysis. Based on our experiments, this method is more appropriate than a traditional wavelet or a single Gabor filter, as it provides better discrimination of the textures and effectively improves the recognition rate. Meanwhile, compared with other fusion methods, it reduces complexity and improves computational efficiency.
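
An illustrative Gabor filter-bank feature extractor using scikit-image's plain gabor filter, not the constrained Gabor wavelet design or Fisher-criterion parameter selection used in the paper; the frequencies and orientation count are assumptions.

```python
# Illustrative Gabor filter-bank features using scikit-image's plain gabor
# filter, not the paper's constrained Gabor wavelet or Fisher-criterion
# parameter selection; the frequencies and orientation count are assumptions.
import numpy as np
from skimage.filters import gabor

def gabor_features(gray, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Mean/std of filter-response magnitudes over scales and orientations."""
    feats = []
    for freq in frequencies:
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=freq, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)                    # 3 scales x 4 orientations x 2 stats = 24 values

# e.g. on a random stand-in for a sonar image patch:
print(gabor_features(np.random.rand(64, 64)).shape)        # (24,)
```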

Texture Classification Based on Morphological Subband Decomposition (모폴로지컬 부대역 분할에 기초한 질감영상 분류)

  • 김기석;도경훈;권갑현;하영호
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.12
    • /
    • pp.51-58
    • /
    • 1994
  • Mathematical morphology, based on set theory, is easy to implement in parallel and can be applied to various fields of image analysis. In particular, the morphological pattern spectrum can detect critical scales in an image object and quantify various aspects of its shape-size content. In this paper, texture classification using the pattern spectrum based on morphological subband decomposition is proposed. The low-low band provides the pattern spectrum features, while the high-low, low-high, and high-high bands extract structural information. This approach has the advantages of efficient information extraction, low time consumption, high accuracy, low computation, and parallel implementation.
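
A small sketch of a morphological pattern spectrum (granulometry) computed with OpenCV openings of increasing structuring-element size; the paper's subband decomposition is not reproduced, and the ellipse element and size range are assumptions.

```python
# Small sketch of a morphological pattern spectrum (granulometry) computed with
# OpenCV openings of increasing structuring-element size; the paper's subband
# decomposition is not reproduced, and the ellipse element is an assumption.
import cv2
import numpy as np

def pattern_spectrum(gray, max_size=10):
    """Image 'mass' removed between openings at successive structuring-element sizes."""
    areas = []
    for r in range(max_size + 1):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
        areas.append(float(cv2.morphologyEx(gray, cv2.MORPH_OPEN, se).sum()))
    return -np.diff(areas) / max(areas[0], 1.0)     # normalized pattern spectrum

# e.g. on a random texture patch standing in for one subband:
texture = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(pattern_spectrum(texture))
```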
