• Title/Summary/Keyword: Visual Feature Extraction

Content-Based Image Retrieval Using Visual Features and Fuzzy Integral (시각 특징과 퍼지 적분을 이용한 내용기반 영상 검색)

  • Song Young-Jun; Kim Nam; Kim Mi-Hye; Kim Dong-Woo
    • The Journal of the Korea Contents Association / v.6 no.5 / pp.20-28 / 2006
  • This paper proposes visual-feature extraction for each band in the wavelet domain, exploiting both the spatial-frequency and multi-resolution properties of the wavelet transform, and combines the resulting visual features using a fuzzy integral. In addition, it expresses color features by the frequency of identical colors after color quantization, which reduces the quantization error that afflicts the conventional color-histogram-intersection method. It is also shown that, when the individual factors (homogram, color, energy) are independent of one another, the final similarity can be represented as a linear combination of them. A fuzzy measure is defined over the combination patterns and the fuzzy integral is taken with respect to it. Experiments performed on a database of 1,000 color images show that the proposed method outperforms the conventional method in both objective and subjective evaluation.
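
A minimal sketch of the fuzzy-integral combination described above, assuming a Sugeno λ-measure built from per-factor importances (densities) and a Choquet integral over three hypothetical similarity scores for homogram, color, and energy; the authors' exact measure is not given in the abstract.

```python
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure.
    Assumes every density g_i lies in (0, 1)."""
    if abs(sum(densities) - 1.0) < 1e-9:
        return 0.0                                   # measure is already additive
    f = lambda lam: np.prod([1.0 + lam * g for g in densities]) - (1.0 + lam)
    if sum(densities) > 1.0:
        return brentq(f, -1.0 + 1e-9, -1e-9)         # sub-additive: lam in (-1, 0)
    return brentq(f, 1e-9, 1e6)                      # super-additive: lam > 0

def choquet_integral(scores, densities):
    """Fuse per-factor similarities (e.g. homogram, color, energy) with a
    Choquet integral over the Sugeno lambda-measure of the densities."""
    lam = sugeno_lambda(densities)
    order = np.argsort(scores)[::-1]                 # scores in descending order
    h = np.asarray(scores, float)[order]
    g = np.asarray(densities, float)[order]
    total, g_acc = 0.0, 0.0
    for i in range(len(h)):
        g_acc += g[i] + lam * g_acc * g[i]           # g(A_i) set-measure recursion
        h_next = h[i + 1] if i + 1 < len(h) else 0.0
        total += (h[i] - h_next) * g_acc
    return total

# hypothetical factor similarities and importances for one database image
print(choquet_integral([0.8, 0.6, 0.9], [0.4, 0.3, 0.2]))
```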

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia; Meili Zhou; Wei WEI; Dong Wang; Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3383-3397 / 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of Scene Graph Generation (SGG) is often compromised when working with biased data in real-world situations. While many existing systems use a single stage of learning for both feature extraction and classification, some employ class-balancing strategies such as re-weighting, data resampling, and transfer learning from head to tail. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification, which lets us dynamically adjust the classification scores for each predicate category. Additionally, we introduce a Distribution Alignment technique that balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our Distribution Alignment strategy is model-independent and requires no additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome with several popular models, we achieve significant improvements over previous state-of-the-art methods. Compared to the TDE model, our model improves mR@100 by 70.5% for PredCls, by 84.0% for SGCls, and by 97.6% for SGDet.
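
The abstract does not spell out the adaptive calibration function, but a common post-hoc form of distribution alignment for long-tailed predicate classification is logit adjustment by the log class prior; the sketch below shows that generic technique, with `tau` and the predicate counts as assumed inputs rather than the paper's exact formulation.

```python
import numpy as np

def adjust_predicate_logits(logits, class_counts, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) so that
    head predicates no longer dominate tail predicates at decision time."""
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior + 1e-12)

# toy example: 3 predicates with a heavily skewed training distribution
counts = np.array([9000.0, 900.0, 100.0])
logits = np.array([2.1, 1.9, 1.8])                       # raw classifier scores
print(adjust_predicate_logits(logits, counts).argmax())  # tail class can now win
```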

Visual Feature Extraction for Image Retrieval using Wavelet Coefficient’s Fuzzy Homogeneity and High Frequency Energy (웨이브릿 계수의 퍼지 동질성과 고주파 에너지를 이용한 영상 검색용 특징벡터 추출)

  • 박원배; 류은주; 송영준
    • The Journal of the Korea Contents Association / v.4 no.1 / pp.18-23 / 2004
  • In this paper, we propose a new visual feature extraction method for content-based image retrieval (CBIR) based on the wavelet transform, which has both spatial-frequency and multi-resolution characteristics. We extract visual features for each frequency band of the wavelet transform and use them for CBIR. The lowest frequency band carries the spatial information of the original image. We extract L feature vectors using fuzzy homogeneity in the wavelet domain, which considers both the wavelet coefficients and the spatial information of each coefficient. We also extract 3 feature vectors using the energy values of the high-frequency bands, and store them in the image database. Given a query, we retrieve the most similar image from the database according to the 10 largest homograms (normalized fuzzy homogeneity vectors) and the 3 energy values. Simulation results on 90 texture images show that the proposed method achieves good retrieval accuracy.
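
A rough sketch of band-wise feature extraction in the spirit of the abstract: a one-level Haar DWT, energies of the three high-frequency bands, and a homogeneity-weighted histogram ("homogram") over the low band. The fuzzy membership the authors use is not given, so a simple 1/(1+spread) proxy stands in for it here.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_retrieval_features(gray, bins=10):
    """One-level Haar DWT; energy of each high-frequency band plus a
    crude homogram (homogeneity-weighted histogram) of the low band."""
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(float), "haar")
    energies = [float(np.mean(b ** 2)) for b in (LH, HL, HH)]
    # local spread of LL coefficients as a stand-in homogeneity membership
    spread = np.abs(LL - uniform_filter(LL, size=3))
    homogeneity = 1.0 / (1.0 + spread)               # in (0, 1], high = homogeneous
    # accumulate homogeneity per quantized coefficient level
    edges = np.linspace(LL.min(), LL.max(), bins + 1)[1:-1]
    levels = np.digitize(LL, edges)                  # levels in 0..bins-1
    homogram = np.bincount(levels.ravel(),
                           weights=homogeneity.ravel(), minlength=bins)
    homogram /= homogram.sum() + 1e-12               # normalized, as in the paper
    return homogram, energies
```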

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam; Ravichandran, Suban
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.230-240 / 2022
  • Sharing videos online is an emerging and important concept in applications such as surveillance and mobile video search. There is therefore a need for a personalized web video retrieval system that explores relevant videos and helps people searching for videos related to specific big-data content. To this end, features with reduced dimensionality are computed from videos to capture discriminative aspects of each scene, based on shape, histogram, texture, object annotation, coordinates, color, and contour data. Dimensionality reduction depends mainly on feature extraction and feature selection in multi-labeled retrieval of multimedia data. Many researchers have implemented techniques to reduce dimensionality based on the visual features of video data, but each technique has its own advantages and disadvantages. In this research, we present a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction for exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns the projection matrix by increasing the dependence between the enlarged data and the features of the projected space. The proposed approach also addresses video segmentation with frame selection using low-level and high-level features, together with efficient object annotation for video representation. Experiments on a synthetic dataset demonstrate the efficiency of the proposed approach compared with traditional state-of-the-art video retrieval methodologies.
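
The abstract says NIDRSLA "learns the projection matrix by increasing the dependence between the enlarged data and the features of the projected space". One standard way to realize a dependence-maximizing linear projection is the HSIC / supervised-PCA eigenproblem sketched below; treat it as an illustration of the idea, not the paper's algorithm.

```python
import numpy as np

def dependence_max_projection(X, y, d):
    """Linear projection W maximizing trace(W^T X^T H L H X W), the HSIC
    between projected features and a label kernel L (supervised PCA)."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    L = (y[:, None] == y[None, :]).astype(float)      # delta label kernel
    M = X.T @ H @ L @ H @ X                           # symmetric objective matrix
    vals, vecs = np.linalg.eigh(M)                    # ascending eigenvalues
    return vecs[:, -d:]                               # top-d directions

# usage with assumed (n_videos, n_features) descriptors and intent labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)
Z = X @ dependence_max_projection(X, y, d=8)          # reduced representation
```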

Lip Feature Extraction using Contrast of YCbCr (YCbCr 농도 대비를 이용한 입술특징 추출)

  • Kim, Woo-Sung; Min, Kyung-Won; Ko, Han-Seok
    • Proceedings of the IEEK Conference / 2006.06a / pp.259-260 / 2006
  • Since audio-only speech recognition is degraded by noise in real environments, visual speech recognition is used to support it. For visual speech recognition, this paper proposes lip-feature extraction using two types of image segmentation and a reduced ASM. Input images are transformed to the YCbCr space, and the lips are segmented using the contrast of Y/Cb/Cr between the lips and the face. Subsequently, a lip-shape model trained by PCA is placed on the segmented lip region, and lip features are then extracted using the ASM.
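
A minimal sketch of the chrominance-contrast segmentation step: lips are redder than skin, so Cr is high and Cb is low over the lip region. The thresholding strategy is an assumption (Otsu on the Cr-Cb difference), and the reduced-ASM fitting stage is omitted.

```python
import cv2
import numpy as np

def segment_lip_region(bgr_face):
    """Rough lip mask from chrominance contrast, assuming the input is
    already a face crop."""
    ycrcb = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)  # OpenCV order: Y, Cr, Cb
    _, cr, cb = cv2.split(ycrcb)
    contrast = cv2.subtract(cr, cb)                      # saturating uint8 difference
    _, mask = cv2.threshold(contrast, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((3, 3), np.uint8))   # remove speckle noise
    return mask
```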

Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석자 SVM을 이용한 입술 위치 검출)

  • 정지년; 양현승
    • Journal of KIISE: Software and Applications / v.31 no.4 / pp.403-410 / 2004
  • Bimodal speech recognition systems have been proposed to enhance the recognition rate of ASR in noisy environments. Visual feature extraction is very important in developing such systems, and extracting visual features requires detecting the exact lip position. This paper proposes a method that detects the lip position using a color distribution model and an SVM. The face/lip color distribution is learned, and an initial lip position is found from it. The exact lip position is then detected by scanning the neighboring area with the SVM. Experiments show that the method detects the lip position accurately and quickly.
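
A sketch of the coarse-to-fine idea: an SVM trained on lip/non-lip patches rescores windows around a color-based initial guess. The patch size, search radius, and training data are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def train_lip_classifier(patches, labels):
    """patches: (n, h, w) grey crops; labels: 1 = lip, 0 = non-lip (assumed data)."""
    X = patches.reshape(len(patches), -1).astype(float) / 255.0
    return SVC(kernel="rbf").fit(X, labels)

def refine_lip_position(clf, gray, init_xy, patch=(16, 32), search=8):
    """Scan the neighbourhood of the colour-based initial guess and keep
    the window the SVM scores highest."""
    (x0, y0), (h, w) = init_xy, patch
    best, best_score = init_xy, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0:
                continue                      # window fell off the image
            win = gray[y:y + h, x:x + w]
            if win.shape != (h, w):
                continue
            score = clf.decision_function(
                win.reshape(1, -1).astype(float) / 255.0)[0]
            if score > best_score:
                best, best_score = (x, y), score
    return best
```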

Framework for Content-Based Image Identification with Standardized Multiview Features

  • Das, Rik; Thepade, Sudeep; Ghosh, Saurav
    • ETRI Journal / v.38 no.1 / pp.174-184 / 2016
  • Information identification from image data by means of low-level visual features has evolved into a challenging research domain. Conventional text-based mapping of image data has gradually been replaced by content-based techniques of image identification, and feature extraction from image content plays a crucial role in facilitating content-based detection. In this paper, the authors propose four different techniques for multiview feature extraction from images. The efficiency of the extracted feature vectors for content-based image classification and retrieval is evaluated by means of fusion-based and data standardization-based techniques, and the latter is observed to surpass the former. The proposed methods outclass state-of-the-art techniques for content-based image identification, showing an average increase in precision of 17.71% and 22.78% for classification and retrieval, respectively. Three public datasets - Wang; Oliva and Torralba (OT-Scene); and Corel - are used for verification, and the findings are statistically validated with a paired t-test.
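
The abstract does not specify the standardization recipe; a plausible minimal version is per-view z-scoring before concatenation, so that no single feature-extraction technique dominates by scale:

```python
import numpy as np

def standardize_and_fuse(views):
    """views: list of (n_samples, d_i) feature matrices, one per multiview
    extraction technique. Z-score each view independently, then concatenate.
    This is an assumed recipe, not the authors' exact procedure."""
    fused = []
    for V in views:
        mu, sd = V.mean(axis=0), V.std(axis=0) + 1e-12   # avoid divide-by-zero
        fused.append((V - mu) / sd)
    return np.hstack(fused)
```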

Facial Features Extraction for Sasang Constitution Classification (사상채질 분류를 위한 안면부내 특징 요소 추출)

  • Bae, Na-Yeong; An, Taek-Won; Jo, Dong-Uk; Lee, Hwa-Seop
    • Journal of Sasang Constitutional Medicine / v.17 no.2 / pp.46-51 / 2005
  • 1. Objectives: The purpose of this study is to objectify the diagnosis of Sasang Constitution; the proposed methods should improve Sasang Constitution classification. 2. Methods: (1) automatic feature extraction from human frontal faces for Sasang Constitution classification; (2) color feature extraction from human frontal faces, using erosion filtering (skin rendered white, everything else black) followed by median filtering. 3. Results and Conclusions: Observing a person's body shape has been the major method of Sasang Constitution classification, and to this day it usually depends on a doctor's intuition. We are developing an automatic system that provides objective basic data for the classification. In this paper, signal processing techniques are first applied to automatic feature extraction from human frontal faces. An experiment is conducted to verify the effectiveness of the proposed system.
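
The erosion-plus-median preprocessing named in the Methods could look like the following; the Cr/Cb skin range and kernel sizes are common textbook choices, not the authors' calibrated values.

```python
import cv2
import numpy as np

def frontal_face_skin_mask(bgr):
    """Binarize skin (white) vs everything else (black), then clean up with
    the erosion and median steps the abstract names."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # skin -> 255
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8))         # erosion filtering
    mask = cv2.medianBlur(mask, 5)                            # median filtering
    return mask
```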

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk; Hong, Kwang-Jin; Jung, Kee-Chul
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.836-850 / 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous computing, the performing arts, and robot control, so many researchers in pattern recognition and computer vision are now working on efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the first step of a general posture recognition system is influenced by these variations. To alleviate these variations and improve recognition results, this paper presents feature extraction methods based on the 3D Star Skeleton and Principal Component Analysis (PCA) in a multi-view environment. The proposed system uses 8 projection maps, a kind of depth map extracted during visual hull generation, as input data. From these data, the system constructs the 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In the experiments, we extract the feature from the 3D Star Skeleton, recognize human postures with it, and show that the proposed method is robust to human variations.
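
The paper builds a 3D Star Skeleton from 8 projection maps; the classic 2D construction below shows the core idea (local maxima of the centroid-to-boundary distance mark extremities such as head, hands, and feet). Feeding such descriptors to PCA, e.g. sklearn.decomposition.PCA, would then yield the rotation-invariant feature the abstract describes.

```python
import numpy as np

def star_skeleton_2d(contour, smooth=5):
    """Classic 2D star skeleton sketch. contour: (N, 2) ordered boundary
    points of a silhouette. Returns extremal points and the centroid."""
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)        # distance signal
    pad = np.concatenate([d[-smooth:], d, d[:smooth]])    # circular padding
    d_s = np.convolve(pad, np.ones(smooth) / smooth,
                      "same")[smooth:-smooth]             # smoothed signal
    n = len(d_s)
    peaks = [i for i in range(n)
             if d_s[i] > d_s[i - 1] and d_s[i] > d_s[(i + 1) % n]]
    return contour[peaks], centroid
```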

Automatic Extraction of Stable Visual Landmarks for a Mobile Robot under Uncertainty (이동로봇의 불확실성을 고려한 안정한 시각 랜드마크의 자동 추출)

  • Moon, In-Hyuk
    • Journal of Institute of Control, Robotics and Systems / v.7 no.9 / pp.758-765 / 2001
  • This paper proposes a method to automatically extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features that are distinct and lie on vertical planar surfaces, because such features can be expected to be observed reliably from various viewpoints. Since feature information such as position and length carries uncertainty due to vision and motion errors, the robot then reduces this uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled, stable visual landmarks from the extracted features. The extraction process is performed on-line, to adapt to actual changes of lighting and scene that depend on the robot's view. Experimental results in various real scenes show the validity of the proposed method.
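
A minimal sketch of the first step, extracting near-vertical line segments as landmark candidates with a probabilistic Hough transform; the edge and Hough thresholds are assumed defaults, and the uncertainty-reduction matching stage is not shown.

```python
import cv2
import numpy as np

def extract_vertical_lines(gray, angle_tol_deg=5.0, min_len=40):
    """Return near-vertical line segments from a grayscale camera image,
    as (p1, p2) endpoint pairs."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=min_len, maxLineGap=5)
    verticals = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if abs(angle - 90.0) <= angle_tol_deg:        # near-vertical segment
                verticals.append(((int(x1), int(y1)), (int(x2), int(y2))))
    return verticals
```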
