• Title/Summary/Keyword: Feature Classification

Improved Bag of Visual Words Image Classification Using the Process of Feature, Color and Texture Information (특징, 색상 및 텍스처 정보의 가공을 이용한 Bag of Visual Words 이미지 자동 분류)

  • Park, Chan-hyeok;Kwon, Hyuk-shin;Kang, Seok-hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.79-82 / 2015
  • Bag of visual words (BoVW) is an image classification and retrieval method that automatically sorts and searches images using feature-point vectors stored in a database. Methods that rely on feature points alone may retrieve or classify images the user did not want. To address this weakness, the proposed approach builds the visual words not only from feature points but also from color information, which expresses the overall mood of an image, and texture information, which expresses repeated patterns, making more varied searches possible. In the experiments, images classified with words built from feature points alone are compared against images classified with the added color and texture information; the new method reaches an accuracy of 80~90%.
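
The sketch below is a rough, hedged illustration of the kind of descriptor this abstract describes, not the authors' code: a BoVW histogram is concatenated with a coarse color histogram and an LBP texture histogram so that an SVM can classify the combined vector. The choice of ORB keypoints, a KMeans vocabulary, 8 color bins, and uniform LBP is my assumption.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from skimage.feature import local_binary_pattern

def bovw_histogram(gray, vocab, k):
    """Quantize ORB descriptors against a learned visual vocabulary."""
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(gray, None)
    hist = np.zeros(k)
    if desc is not None:
        for w in vocab.predict(desc.astype(np.float32)):
            hist[w] += 1
    return hist / max(hist.sum(), 1)

def color_histogram(bgr, bins=8):
    """Coarse per-channel histogram capturing the overall color mood."""
    hist = np.concatenate([np.histogram(bgr[..., c], bins=bins, range=(0, 256))[0]
                           for c in range(3)]).astype(float)
    return hist / max(hist.sum(), 1)

def texture_histogram(gray, points=8, radius=1):
    """Uniform LBP histogram as a simple stand-in for repeated-pattern texture."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist.astype(float) / max(hist.sum(), 1)

def describe(bgr, vocab, k):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return np.concatenate([bovw_histogram(gray, vocab, k),
                           color_histogram(bgr),
                           texture_histogram(gray)])

# vocab = KMeans(n_clusters=K).fit(all_training_descriptors)   # visual vocabulary (hypothetical data)
# clf = SVC().fit(np.stack([describe(img, vocab, K) for img in images]), labels)
```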

An effective classification method for TFT-LCD film defect images using intensity distribution and shape analysis (명암도 분포 및 형태 분석을 이용한 효과적인 TFT-LCD 필름 결함 영상 분류 기법)

  • Noh, Chung-Ho;Lee, Seok-Lyong;Zo, Moon-Shin
    • Journal of Korea Multimedia Society / v.13 no.8 / pp.1115-1127 / 2010
  • In order to increase productivity in manufacturing TFT-LCDs (thin film transistor-liquid crystal displays), it is essential to classify the defects that occur during production and make an appropriate decision on whether a defective product should be scrapped. That decision depends mainly on classifying the defects accurately. In this paper, we present an effective classification method for film defects acquired in the panel production line, based on the intensity distribution and shape features of the defects. We first generate a binary image for each defect by separating defect regions from background (non-defect) regions. We then extract various features from the defect regions, such as the linearity of the defect, the intensity distribution, and shape characteristics that take intensity into account, and construct a referential image database that stores those feature values. Finally, we determine the type of a defect by matching the defect image against a referential image in the database through a matching cost function between the two images. To verify the effectiveness of our method, we conducted a classification experiment using defect images acquired from real TFT-LCD production lines. The experimental results show that the classification is effective enough to be used in the production line.
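
As a hedged sketch of the general pipeline only (Otsu thresholding, a handful of region properties, and a Euclidean matching cost are my simplifications, not the paper's specific features or cost function):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def defect_features(gray):
    """Binary defect mask -> a small shape + intensity-distribution feature vector."""
    mask = gray > threshold_otsu(gray)            # crude split; assumes bright defects
    regions = regionprops(label(mask), intensity_image=gray)
    if not regions:
        return np.zeros(4)
    r = max(regions, key=lambda p: p.area)        # largest connected defect blob
    return np.array([
        r.eccentricity,                           # proxy for defect linearity
        r.area / gray.size,                       # relative defect size
        r.mean_intensity / 255.0,                 # brightness of the defect region
        gray.std() / 255.0,                       # global intensity spread
    ])

def classify_defect(gray, reference_db):
    """reference_db: {defect_type: feature_vector}; smallest matching cost wins."""
    f = defect_features(gray)
    costs = {t: np.linalg.norm(f - ref) for t, ref in reference_db.items()}
    return min(costs, key=costs.get)
```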

A Robust Fingerprint Classification using SVMs with Adaptive Features (지지벡터기계와 적응적 특징을 이용한 강인한 지문분류)

  • Min, Jun-Ki;Cho, Sung-Bae
    • Journal of KIISE: Software and Applications / v.35 no.1 / pp.41-49 / 2008
  • Fingerprint classification is useful for reducing the matching time of a large fingerprint identification system by categorizing fingerprints into predefined classes according to their global features. Although global features are distributed diversely because of the uniqueness of each fingerprint, previous fingerprint classification methods extract global features non-adaptively from a fixed region for every fingerprint. We propose a novel method that extracts features adaptively for each fingerprint in order to classify various fingerprints effectively. It locates the feature region by calculating the variation of ridge directions, extracts ridge directional values from that region as feature vectors, and classifies them using support vector machines. Experimental results on the NIST4 database show a classification accuracy of 90.3% for the five-class problem and 93.7% for the four-class problem, and a comparison with non-adaptively extracted feature vectors confirms the validity of the proposed adaptive method.
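
A minimal sketch of the adaptive idea under assumptions of my own (Sobel-based block orientations and a fixed-size search window); this is not the authors' implementation:

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def ridge_orientations(gray, block=16):
    """Per-block ridge direction estimated from averaged gradient products."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    h, w = gray.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            theta[i, j] = 0.5 * np.arctan2(2 * (sx * sy).sum(),
                                           (sx ** 2 - sy ** 2).sum())
    return theta

def adaptive_feature(theta, win=8):
    """Flatten the window where the variation of ridge directions peaks."""
    best, best_var = None, -1.0
    for i in range(theta.shape[0] - win + 1):
        for j in range(theta.shape[1] - win + 1):
            patch = theta[i:i + win, j:j + win]
            if patch.var() > best_var:
                best, best_var = patch, patch.var()
    return best.ravel()

# clf = SVC(kernel="rbf").fit(
#     np.stack([adaptive_feature(ridge_orientations(f)) for f in prints]), classes)
```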

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei WEI;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3383-3397 / 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of Scene Graph Generation is often compromised when working with biased data in real-world situations. While many existing systems focus on a single stage of learning for both feature extraction and classification, some employ Class-Balancing strategies, such as Re-weighting, Data Resampling, and Transfer Learning from head to tail. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification. This function enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a Distribution Alignment technique that effectively balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our Distribution Alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome and several popular models, we achieved significant improvements over the previous state-of-the-art methods with our model. Compared to the TDE model, our model improved mR@100 by 70.5% for PredCls, by 84.0% for SGCls, and by 97.6% for SGDet tasks.
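
As a hedged illustration of the general family of techniques, the sketch below applies class-frequency-based logit adjustment, which is related to, but not necessarily identical to, the paper's Distribution Alignment: predicate scores are rebalanced by the log of the training-class prior before the argmax.

```python
import numpy as np

def align_logits(logits, class_counts, tau=1.0):
    """logits: (N, C) raw predicate scores; class_counts: (C,) training frequencies."""
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior + 1e-12)     # penalize head classes, boost tail

# Toy example: a tail predicate overtakes a head predicate after alignment.
logits = np.array([[2.0, 1.6]])                     # head class slightly ahead
counts = np.array([9000.0, 100.0])                  # heavily imbalanced training set
print(align_logits(logits, counts).argmax(axis=1))  # -> [1]
```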

Robust Feature Parameter for Implementation of Speech Recognizer Using Support Vector Machines (SVM음성인식기 구현을 위한 강인한 특징 파라메터)

  • 김창근;박정원;허강인
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.195-200 / 2004
  • In this paper we propose an effective speech recognizer based on two recognition experiments. In general, an SVM is a classification method that separates two classes by finding an arbitrary nonlinear boundary in the feature space, and it achieves high classification performance even with few training samples. We compare the recognition performance of an HMM and an SVM as the amount of training data varies, and we investigate the recognition performance of each feature parameter while transforming the MFCC feature space using Independent Component Analysis (ICA) and Principal Component Analysis (PCA). The experiments show that the SVM outperforms the HMM when training data are scarce, and that the ICA-based feature parameters achieve the highest recognition performance owing to their superior linear separability.
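
A minimal sketch under assumed inputs (16 kHz mono signals summarized by utterance-level mean MFCC vectors), not the authors' exact setup: the MFCC space is transformed with PCA or ICA before an SVM classifier.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def mfcc_vector(signal, sr=16000, n_mfcc=13):
    """Utterance-level feature: mean MFCC over frames."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# X: (n_utterances, n_mfcc) matrix of the vectors above, y: word labels (hypothetical data).
pca_svm = make_pipeline(PCA(n_components=8), SVC(kernel="rbf"))
ica_svm = make_pipeline(FastICA(n_components=8, max_iter=1000), SVC(kernel="rbf"))
# pca_svm.fit(X_train, y_train); ica_svm.fit(X_train, y_train)
```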

Melanoma Classification Algorithm using Gray-level Conversion Matrix Feature and Support Vector Machine (회색도 변환 행렬 특징과 SVM을 이용한 흑색종 분류 알고리즘)

  • Koo, Jung Mo;Na, Sung Dae;Cho, Jin-Ho;Kim, Myoung Nam
    • Journal of Korea Multimedia Society / v.21 no.2 / pp.130-137 / 2018
  • Recently, human life expectancy has increased due to changes in the living environment and advances in medical technology, and medical technology for the elderly has been in the limelight. Geriatric skin disease is difficult to detect early, and when it is missed it can progress to a malignant disease that is difficult to treat. Melanoma is one of the most common geriatric skin diseases and initially has an appearance similar to a nevus. To address this problem, we perform a feature analysis aimed at the automatic detection of melanoma-like lesions. The first analysis is a first-order analysis that uses pixel information among the radiomic features. The others are the gray-level co-occurrence matrix and the gray-level run-length matrix, feature extraction methods that convert image information into a matrix. Features are extracted through these analyses, and classification is implemented with an SVM.
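
A minimal sketch of such a feature pipeline under assumptions of my own (scikit-image GLCM properties plus simple first-order statistics; the run-length-matrix step is omitted because scikit-image provides no built-in GLRLM):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def lesion_features(gray):
    """gray: uint8 lesion patch -> first-order + GLCM texture feature vector."""
    first_order = [gray.mean(), gray.std(), np.median(gray)]
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(first_order + texture)

# clf = SVC(kernel="rbf").fit(np.stack([lesion_features(p) for p in patches]), labels)
```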

Vehicle Detection and Classification Using Textural Similarity in Wavelet Domain (웨이브렛 영역에서의 질감 유사성을 이용한 차량검지 및 차종분류)

  • 임채환;박종선;이창섭;김남철
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.6B / pp.1191-1202 / 1999
  • We propose an efficient vehicle detection and classification algorithm for electronic toll collection, using a feature that is robust to abrupt intensity changes between consecutive frames. The local correlation coefficient between the wavelet-transformed input and reference images is used as this feature, which takes advantage of textural similarity. The usefulness of the proposed feature is analyzed qualitatively by comparing it with the local variance of a difference image, and is verified by measuring the improvement in the separability of vehicles from roads with or without shadows on a real test image. Experimental results from field tests show that the proposed vehicle detection and classification algorithm performs well even under abrupt intensity changes caused by sensor characteristics and the occurrence of shadows.
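
To illustrate the core idea in a hedged way (this is not the authors' system; the Haar wavelet, the block size, and the thresholds are arbitrary choices), the sketch below computes a block-wise correlation coefficient between the wavelet approximation coefficients of an input frame and an empty-road reference.

```python
import numpy as np
import pywt

def local_correlation(img, ref, block=8):
    """Per-block correlation between wavelet approximation coefficients."""
    a_img, _ = pywt.dwt2(img.astype(float), "haar")
    a_ref, _ = pywt.dwt2(ref.astype(float), "haar")
    h, w = a_img.shape
    corr = np.zeros((h // block, w // block))
    for i in range(corr.shape[0]):
        for j in range(corr.shape[1]):
            x = a_img[i * block:(i + 1) * block, j * block:(j + 1) * block].ravel()
            y = a_ref[i * block:(i + 1) * block, j * block:(j + 1) * block].ravel()
            denom = x.std() * y.std()
            corr[i, j] = 0.0 if denom == 0 else np.corrcoef(x, y)[0, 1]
    return corr   # low-correlation blocks suggest a vehicle over the road texture

# vehicle_present = (local_correlation(frame, empty_road) < 0.5).mean() > 0.2  # toy thresholds
```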

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal / v.32 no.5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese female facial expression databases. The improved classification accuracy shows the superiority of the LDP descriptor over other appearance-based feature descriptors.
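
A minimal sketch of the LDP computation as the abstract describes it, with assumed details (k = 3 strongest responses; the eight Kirsch masks generated by rotating the east mask); this is not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

# East Kirsch mask; the other seven directions are rotations of it.
KIRSCH_EAST = np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]])

def rotate45(m):
    """Shift the 8 border entries of a 3x3 mask by one position (a 45-degree rotation)."""
    out = m.copy()
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [m[p] for p in order]
    for p, v in zip(order, vals[-1:] + vals[:-1]):
        out[p] = v
    return out

def kirsch_masks():
    masks, m = [], KIRSCH_EAST
    for _ in range(8):
        masks.append(m)
        m = rotate45(m)
    return masks

def ldp_histogram(gray, k=3):
    """Edge responses in 8 directions -> 8-bit codes -> normalized code histogram."""
    responses = np.stack([convolve(gray.astype(float), m) for m in kirsch_masks()])
    topk = np.argsort(np.abs(responses), axis=0)[-k:]   # k strongest directions per pixel
    codes = np.zeros(gray.shape, dtype=np.uint8)
    for level in topk:
        codes |= (1 << level).astype(np.uint8)          # set one bit per strong direction
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```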

Framework for Content-Based Image Identification with Standardized Multiview Features

  • Das, Rik;Thepade, Sudeep;Ghosh, Saurav
    • ETRI Journal / v.38 no.1 / pp.174-184 / 2016
  • Information identification with image data by means of low-level visual features has evolved as a challenging research domain. Conventional text-based mapping of image data has been gradually replaced by content-based techniques of image identification. Feature extraction from image content plays a crucial role in facilitating content-based detection processes. In this paper, the authors have proposed four different techniques for multiview feature extraction from images. The efficiency of extracted feature vectors for content-based image classification and retrieval is evaluated by means of fusion-based and data standardization-based techniques. It is observed that the latter surpasses the former. The proposed methods outclass state-of-the-art techniques for content-based image identification and show an average increase in precision of 17.71% and 22.78% for classification and retrieval, respectively. Three public datasets - Wang; Oliva and Torralba (OT-Scene); and Corel - are used for verification purposes. The research findings are statistically validated by conducting a paired t-test.
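
As a hedged sketch of the standardization idea only (the four multiview extraction techniques themselves are not reproduced; the view names below are hypothetical), per-view z-scoring before concatenation might look like this:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

def combine_views(views, standardize=True):
    """views: list of (n_samples, d_i) arrays from different feature extractors."""
    if standardize:
        views = [StandardScaler().fit_transform(v) for v in views]  # per-view z-score
    return np.hstack(views)

# X = combine_views([color_view, texture_view, shape_view])   # hypothetical feature views
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```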

Efficient Tire Wear and Defect Detection Algorithm Based on Deep Learning (심층학습 기법을 활용한 효과적인 타이어 마모도 분류 및 손상 부위 검출 알고리즘)

  • Park, Hye-Jin;Lee, Young-Woon;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1026-1034 / 2021
  • Tire wear and defects are important factors for safe driving. These defects are generally inspected by specialized experts or with very expensive equipment such as a stereo depth camera and a depth gauge. In this paper, we propose a tire safety vision inspector based on a deep neural network (DNN). The status of tire wear is categorized into three classes based on the depth of the tire tread: 'safety', 'warning', and 'danger'. We propose an attention mechanism for emphasizing the features of the tread area. The attention-based feature is concatenated with the output feature maps of the last convolution layer of ResNet-101 to extract more robust features. In experiments, the proposed tire wear classification model improves accuracy by 1.8% compared to the existing ResNet-101 model. For tire defect detection, the developed model achieves up to 91% accuracy using the Mask R-CNN model. These results show that the proposed models are useful for checking the safety condition of tires in use in real environments.
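
A minimal sketch of an architecture in this spirit (PyTorch; the 1x1-convolution attention head and the exact way features are concatenated are my assumptions, not the paper's design):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101   # torchvision >= 0.13

class TireWearNet(nn.Module):
    def __init__(self, num_classes=3):      # safety / warning / danger
        super().__init__()
        backbone = resnet101(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # -> (B, 2048, H, W)
        self.attention = nn.Sequential(nn.Conv2d(2048, 1, kernel_size=1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048 * 2, num_classes)

    def forward(self, x):
        f = self.features(x)
        a = self.attention(f)                    # (B, 1, H, W) spatial attention weights
        fused = torch.cat([f, f * a], dim=1)     # plain + attention-emphasized feature maps
        return self.classifier(self.pool(fused).flatten(1))

# logits = TireWearNet()(torch.randn(2, 3, 224, 224))   # -> shape (2, 3)
```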