• Title/Summary/Keyword: Image Feature Vector

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.784-794
    • /
    • 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, these are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese female facial expression databases. The better classification accuracy obtained shows the superiority of the LDP descriptor over other appearance-based feature descriptors.
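
The LDP encoding step is concrete enough to sketch. The code below is a minimal illustration, assuming the eight Kirsch compass masks as the directional edge operator, k = 3 strongest responses per pixel, and an 8×8 patch grid for the histogram descriptor; the abstract itself does not fix these details, so they are assumptions following the common LDP configuration.

```python
import numpy as np
from scipy.ndimage import convolve

# Eight Kirsch compass masks (assumed edge operator; the abstract only
# says "edge response values in 8 directions").
KIRSCH = [
    np.array([[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]]),   # E
    np.array([[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]]),   # NE
    np.array([[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]]),   # N
    np.array([[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]]),   # NW
    np.array([[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]]),   # W
    np.array([[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]]),   # SW
    np.array([[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]]),   # S
    np.array([[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]]),   # SE
]

def ldp_code_image(gray, k=3):
    """Encode each pixel as an 8-bit LDP code: the bits of the k strongest
    absolute edge responses are set to 1."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in KIRSCH])
    order = np.argsort(responses, axis=0)      # ascending rank of the 8 directions
    codes = np.zeros(gray.shape, dtype=np.uint8)
    for idx in order[-k:]:                     # indices of the k strongest responses
        codes |= (1 << idx).astype(np.uint8)
    return codes

def ldp_descriptor(gray, grid=(8, 8), k=3):
    """Concatenate per-patch histograms of LDP codes into one descriptor."""
    codes = ldp_code_image(gray, k)
    h, w = codes.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = codes[i*h//grid[0]:(i+1)*h//grid[0],
                          j*w//grid[1]:(j+1)*w//grid[1]]
            hists.append(np.bincount(patch.ravel(), minlength=256))
    return np.concatenate(hists).astype(float)
```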

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.754-759
    • /
    • 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using facial images and speech signals together. We extract feature vectors from the speech signal using acoustic features, without language features, and classify the emotional pattern with a neural network. We also select the mouth, eyes, and eyebrows as features from the facial image, and the extracted feature vectors are reduced to low-dimensional feature vectors by principal component analysis (PCA). Finally, we propose a method that fuses the emotion recognition results obtained from the facial image and the speech.
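
The PCA reduction and the fusion of the two modalities are the core of the pipeline above. The sketch below is illustrative only: the fusion rule (a weighted sum of per-emotion scores), the weights, and the dimensions are assumptions, not the paper's exact design.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on rows of X (one facial feature vector per sample)."""
    mean = X.mean(axis=0)
    # Singular vectors of the centered data give the principal axes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(x, mean, components):
    """Map a facial feature vector to the low-dimensional PCA subspace."""
    return components @ (x - mean)

def fuse_scores(face_probs, speech_probs, w_face=0.5):
    """Illustrative late fusion: weighted sum of per-emotion scores from the
    facial-image and speech classifiers (the weight is an assumption)."""
    return w_face * face_probs + (1.0 - w_face) * speech_probs
```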

Implementation of a Feature Extraction Chip for High Speed OCR (고속 문자 인식을 위한 특정 추출용 칩의 구현)

  • 김형구;강선미;김덕진
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.6
    • /
    • pp.104-110
    • /
    • 1994
  • We propose a high-speed feature extraction algorithm and have developed a feature vector extraction chip for high-speed character recognition. It is hard to implement a high-speed OCR by software alone with a statistical method. Thus, the whole recognition process is divided into functional steps and then pipeline-processed, so that high-speed processing is possible through the temporal parallelism of the steps. In this paper, we discuss the feature extraction step. To extract a feature vector, a character image is normalized to 40×40 pixels and then divided into 5×5 and 4×4 grids of subregions, constructing 41 overlapping subregions (10×10 pixels each). Extracting the feature vector of a subregion by software requires more than 500 instructions. The proposed algorithm, however, requires only 10 cycles, since its array structure can extract the feature vector of a column of subregions in one cycle. Thus, it is possible to process 12,000 characters per second with the proposed algorithm. The chip is implemented using an EPLD, and its effectiveness is proved by developing an OCR with it.
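
The 41-subregion layout (a 40×40 normalized character covered by a 5×5 and a 4×4 grid of 10×10 windows) can be reproduced in a few lines. The sketch below assumes evenly spaced integer window offsets and uses a black-pixel count as a stand-in per-subregion feature, since the abstract specifies neither.

```python
import numpy as np

def subregion_positions(size=40, win=10, n=4):
    """Top-left offsets of an n-by-n grid of win-sized windows over a
    size-by-size image, spaced as evenly as integer offsets allow."""
    return np.linspace(0, size - win, n).round().astype(int)

def extract_features(char_img):
    """char_img: 40x40 binary character image (1 = ink).
    Returns 41 per-subregion features (here black-pixel counts, an assumed
    stand-in for the chip's actual subregion feature)."""
    feats = []
    for n in (5, 4):               # 5x5 grid (25) + 4x4 grid (16) = 41 subregions
        offs = subregion_positions(n=n)
        for y in offs:
            for x in offs:
                feats.append(char_img[y:y+10, x:x+10].sum())
    return np.asarray(feats, dtype=float)
```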

Region and Global-Specific PatchCore based Anomaly Detection from Chest X-ray Images

  • Hyunbin Kim;Junchul Chun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2298-2315
    • /
    • 2024
  • This paper introduces a method aimed at diagnosing the presence or absence of lesions by detecting anomalies in Chest X-ray images. The proposed approach is based on the PatchCore anomaly detection method, which extracts a feature vector containing location information of an image patch from normal image data and calculates the anomaly distance from the normal vector. However, applying PatchCore directly to medical image processing presents challenges due to the possibility of diseases occurring only in specific organs and the presence of image noise unrelated to lesions. In this study, we present an image alignment method that utilizes affine transformation parameter prediction to standardize already captured X-ray images into a specific composition. Additionally, we introduce a region-specific abnormality detection method that requires affine-transformed chest X-ray images. Furthermore, we propose a method to enhance application efficiency and performance through feature map hard masking. The experimental results demonstrate that our proposed approach achieved a maximum AUROC (Area Under the Receiver Operating Characteristic) of 0.774. Compared to a previous study conducted on the same dataset, our method shows a 6.9% higher performance and improved accuracy.
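
A minimal sketch of the PatchCore-style scoring this work builds on: patch feature vectors from normal (lesion-free) images form a memory bank, and a test patch is scored by its distance to the nearest normal feature. The feature extractor, coreset subsampling, and the affine alignment step are omitted; the optional mask argument only illustrates the hard-masking idea.

```python
import numpy as np

class PatchMemoryBank:
    """Memory bank of patch features from normal (lesion-free) images."""

    def __init__(self):
        self.bank = None

    def fit(self, normal_patch_feats):
        # normal_patch_feats: (N, D) array of patch feature vectors.
        # (PatchCore additionally coreset-subsamples the bank; omitted here.)
        self.bank = np.asarray(normal_patch_feats, dtype=float)

    def anomaly_scores(self, test_patch_feats, mask=None):
        """Distance of each test patch to its nearest normal patch.
        `mask` (boolean per patch, optional) illustrates hard masking:
        masked-out patches are excluded from scoring."""
        feats = np.asarray(test_patch_feats, dtype=float)
        # Dense pairwise distances; fine for a sketch, not for large banks.
        d2 = ((feats[:, None, :] - self.bank[None, :, :]) ** 2).sum(-1)
        scores = np.sqrt(d2.min(axis=1))
        if mask is not None:
            scores = scores[np.asarray(mask, dtype=bool)]
        return scores   # an image-level score is typically scores.max()
```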

Statistical Image Feature Based Block Motion Estimation for Video Sequences (비디오 영상에서 통계적 영상특징에 의한 블록 모션 측정)

  • Bae, Young-Lae;Cho, Dong-Uk;Chun, Byung-Tae
    • The Journal of the Korea Contents Association
    • /
    • v.3 no.1
    • /
    • pp.9-13
    • /
    • 2003
  • We propose a block motion estimation algorithm based on a statistical image feature for video sequences. The statistical feature of the reference block is obtained and then used to select candidate starting points (SPs) from the regular starting-point pattern (SPP), by comparing the statistical feature of the reference block with that of the blocks spread over the regular SPP. The final SPs are selected from the candidate SPs by their mean absolute difference (MAD) values. Finally, a conventional fast search algorithm, such as BRGDS, DS, or three-step search (TSS), is applied to generate the motion vector of the reference block using the final SPs as its starting points. The experimental results showed that the starting points obtained from the final SPs were as close to the global minimum as expected.
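
The candidate selection and MAD ranking are straightforward to sketch. Below, the block mean is used as a stand-in for the unspecified statistical feature and frame-boundary checks are omitted; this is an illustration of the idea, not the paper's exact algorithm.

```python
import numpy as np

def mad(block_a, block_b):
    """Mean absolute difference between two equally sized blocks."""
    return np.abs(block_a.astype(float) - block_b.astype(float)).mean()

def select_starting_point(ref_block, prev_frame, top_left, sp_pattern,
                          n_candidates=3):
    """Keep the starting points whose blocks are statistically closest to the
    reference block (block mean is an assumed stand-in for the statistical
    feature), then pick the final starting point by lowest MAD.
    A fast search (e.g. TSS) is then run from the returned point."""
    y0, x0 = top_left
    h, w = ref_block.shape
    ref_stat = ref_block.mean()
    ranked = sorted(
        sp_pattern,
        key=lambda p: abs(prev_frame[y0+p[0]:y0+p[0]+h,
                                     x0+p[1]:x0+p[1]+w].mean() - ref_stat))
    candidates = ranked[:n_candidates]
    return min(candidates,
               key=lambda p: mad(ref_block,
                                 prev_frame[y0+p[0]:y0+p[0]+h,
                                            x0+p[1]:x0+p[1]+w]))
```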

A Comparison of Global Feature Extraction Technologies and Their Performance for Image Identification (영상 식별을 위한 전역 특징 추출 기술과 그 성능 비교)

  • Yang, Won-Keun;Cho, A-Young;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.1
    • /
    • pp.1-14
    • /
    • 2011
  • As the circulation of images becomes more active, various requirements for managing the growing databases arise. Content-based technology is one way to satisfy these requirements; in it, an image is represented by feature vectors extracted by various methods. Global feature methods ensure fast matching because the extracted feature vector has a fixed, standard form. Global feature extraction methods fall into two categories, spatial feature extraction and statistical feature extraction, and each group is further divided by the kind of information used, color features or gray-scale features. In this paper, we introduce various global feature extraction technologies and compare their performance in terms of accuracy, recall-precision graphs, ANMRR, feature vector size, and matching time. According to the experiments, spatial features show good performance against non-geometrical modifications, and the extraction technologies that use color and histogram features show the best performance.
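
As a concrete example of the kind of fixed-length global feature compared in this survey, the sketch below computes a joint RGB histogram and matches it with an L1 distance; the bin count and the distance measure are illustrative choices, not the paper's configurations.

```python
import numpy as np

def color_histogram(rgb_img, bins=8):
    """Fixed-length global feature: a joint RGB histogram, L1-normalized,
    so every image maps to a vector of the same size and matching reduces
    to a fast vector comparison."""
    pixels = rgb_img.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def l1_distance(h1, h2):
    """Simple histogram matching score; smaller means more similar."""
    return np.abs(h1 - h2).sum()
```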

AUTOMATIC SELECTION AND ADJUSTMENT OF FEATURES FOR IMAGE CLASSIFICATION

  • Saiki, Kenji;Nagao, Tomoharu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.525-528
    • /
    • 2009
  • Recently, image classification has become an important task in various fields. In general, the performance of image classification is poor without adjustment of the image features, so a method for automatic feature extraction is desired. In this paper, we propose an image classification method that adjusts image features automatically. We assume that texture features are useful in image classification tasks because natural images are composed of several types of texture, and thus the classification accuracy can be improved by using the distribution of texture features. We obtain texture features by calculating image features from the pixel under consideration and its neighboring pixels, and then calculate image features from the distribution of those texture features. These image features are adjusted to the image classification task using a genetic algorithm. We apply the proposed method to classifying images into "head" or "non-head" and "male" or "female".
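
One way to realize the "adjusted using a genetic algorithm" step is to evolve per-feature weights against classification accuracy, as sketched below; the chromosome encoding, fitness function, and GA operators are all assumptions, not the paper's design.

```python
import numpy as np

def ga_adjust_weights(features, labels, classify_acc,
                      n_pop=20, n_gen=50, mut_sigma=0.1, seed=0):
    """Evolve a weight vector applied to the texture features so that the
    weighted features maximize classification accuracy.
    classify_acc(weighted_features, labels) -> accuracy in [0, 1]."""
    rng = np.random.default_rng(seed)
    n_feat = features.shape[1]
    pop = rng.random((n_pop, n_feat))                      # random initial weights
    for _ in range(n_gen):
        fitness = np.array([classify_acc(features * w, labels) for w in pop])
        parents = pop[np.argsort(fitness)[::-1][:n_pop // 2]]   # truncation selection
        # Crossover: average random parent pairs; mutation: Gaussian noise.
        idx = rng.integers(0, len(parents), (n_pop - len(parents), 2))
        children = parents[idx].mean(axis=1) + \
            rng.normal(0, mut_sigma, (n_pop - len(parents), n_feat))
        pop = np.vstack([parents, children.clip(0, None)])
    fitness = np.array([classify_acc(features * w, labels) for w in pop])
    return pop[fitness.argmax()]
```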

Downscaling Forgery Detection using Pixel Value's Gradients of Digital Image (디지털 영상 픽셀값의 경사도를 이용한 Downscaling Forgery 검출)

  • RHEE, Kang Hyeon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.2
    • /
    • pp.47-52
    • /
    • 2016
  • Digital images used in smart devices and on small displays are typically downscaled. In this paper, detection of downscaling image forgery is proposed using a feature vector based on the gradients of pixel values. In the proposed algorithm, autoregressive (AR) coefficients are computed from the pixel-value gradients of the image. These coefficients are used as feature vectors to train an SVM (support vector machine) classifier for the downscaling forgery detector. Regarding performance, the proposed algorithm is excellent at detecting 90% downscaling forgery compared with the MFR (median filter residual) scheme, which uses the same 10-dim. feature vector, and with the 686-dim. SPAM (subtractive pixel adjacency matrix) scheme. On averaging-filtered (3×3) and median-filtered (3×3) images, it has a higher detection ratio. In particular, for all measured items on averaging and median filtering (3×3), the AUC (area under curve) given by sensitivity and 1-specificity approaches 1. Thus, it is confirmed that the grade evaluation of the proposed algorithm is 'Excellent (A)'.
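
A hedged sketch of the feature pipeline described above: pixel-value gradients are reduced to a 1-D profile, an AR model of order 10 is fitted by least squares, and the coefficients feed an SVM. The gradient statistic, the profile reduction, and the estimation method are assumptions where the abstract is not specific.

```python
import numpy as np
from sklearn.svm import SVC

def ar_coefficients(signal, order=10):
    """Least-squares AR(order) fit: predict x[t] from its previous samples."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def gradient_ar_feature(img, order=10):
    """10-dim. feature vector: AR coefficients of the row-averaged horizontal
    pixel-value gradient (the exact gradient statistic is an assumption)."""
    grad = np.diff(img.astype(float), axis=1)   # horizontal pixel-value gradients
    profile = grad.mean(axis=0)                 # 1-D gradient profile
    return ar_coefficients(profile, order)

def train_detector(images, labels, order=10):
    """Fit the SVM forgery detector on labelled images
    (labels: 0 = original, 1 = downscaled/forged)."""
    feats = np.array([gradient_ar_feature(im, order) for im in images])
    return SVC(kernel='rbf').fit(feats, labels)
```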

Language Identification by Fusion of Gabor, MDLC, and Co-Occurrence Features (Gabor, MDLC, Co-Occurrence 특징의 융합에 의한 언어 인식)

  • Jang, Ick-Hoon;Kim, Ji-Hong
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.3
    • /
    • pp.277-286
    • /
    • 2014
  • In this paper, we propose texture-feature-based language identification by fusion of Gabor, MDLC (multi-lag directional local correlation), and co-occurrence features. In the proposed method, Gabor magnitude images are first obtained from a test image by the Gabor transform followed by the magnitude operator, and moments of the Gabor magnitude images are computed and vectorized. MDLC images are next obtained by the MDLC operator, and their moments are computed and vectorized. The GLCM (gray-level co-occurrence matrix) is then calculated from the test image, co-occurrence features are computed from the GLCM, and these features are also vectorized. The three vectors of Gabor, MDLC, and co-occurrence features are fused into one feature vector. In classification, the WPCA (whitened principal component analysis) classifier, which is usually adopted in face identification, searches for the training feature vector most similar to the test feature vector. We evaluate the performance of our method by examining averaged identification rates on a test document-image DB obtained by scanning documents in 15 languages. Experimental results show that the proposed method yields excellent language identification with a rather low feature dimension for the test DB.
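
Of the three fused feature families, the co-occurrence branch is the simplest to sketch: a GLCM for one pixel offset yields contrast, energy, homogeneity, and correlation statistics. The offset, quantization level, and the chosen statistics below are assumptions, and the Gabor and MDLC branches are omitted.

```python
import numpy as np

def glcm_features(gray, levels=16):
    """Co-occurrence features for the (0, 1) offset: contrast, energy,
    homogeneity, correlation (offset and quantization are assumptions)."""
    q = np.clip((gray.astype(float) * levels / 256).astype(int), 0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # horizontally adjacent pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)                         # accumulate co-occurrences
    P /= P.sum()                                    # normalize to probabilities
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12)
    return np.array([contrast, energy, homogeneity, correlation])
```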

Emotion Recognition by Vision System (비젼에 의한 감성인식)

  • 이상윤;오재흥;주영훈;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.12a
    • /
    • pp.203-207
    • /
    • 2001
  • In this paper, we propose a neural-network-based emotion recognition method for intelligently recognizing human emotion from CCD color images. We first acquire a color image from a CCD camera and then propose a method for recognizing the expression represented by the structural correlation of the facial feature points (eyebrows, eyes, nose, mouth); extracting, separating, and recognizing the correct data in the image is the central technology. In the proposed method, human emotion is divided into four categories (surprise, anger, happiness, sadness). The background and the face are separated robustly against changes such as external illumination, and the skin-color region is segmented using the color differences of the color space. For this, we propose an algorithm that extracts the four feature points from the face image acquired by the color CCD camera and derives a normalized face image and feature vectors from them. We then apply the back-propagation algorithm to the secondary feature vector. Finally, we show the practical applicability of the proposed method.
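
The final classification step, feeding a feature-point-derived vector to a back-propagation network over the four emotions, can be sketched as a tiny two-layer network; the layer sizes, activations, and input encoding below are assumptions, not the paper's configuration.

```python
import numpy as np

class EmotionMLP:
    """Tiny back-propagation network: feature-point vector -> 4 emotions
    (surprise, anger, happiness, sadness). Biases omitted for brevity."""

    def __init__(self, n_in, n_hidden=16, n_out=4, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        h = self._sigmoid(x @ self.W1)
        return h, self._sigmoid(h @ self.W2)

    def train_step(self, x, target_onehot):
        """One back-propagation update of the squared error for one sample."""
        h, out = self.forward(x)
        d_out = (out - target_onehot) * out * (1 - out)
        d_h = (d_out @ self.W2.T) * h * (1 - h)
        self.W2 -= self.lr * np.outer(h, d_out)
        self.W1 -= self.lr * np.outer(x, d_h)

    def predict(self, x):
        return int(np.argmax(self.forward(x)[1]))
```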
