• Title/Summary/Keyword: Face Recognition


The Effect of Emotional Expression Change, Delay, and Background at Retrieval on Face Recognition (얼굴자극의 검사단계 표정변화와 검사 지연시간, 자극배경이 얼굴재인에 미치는 효과)

  • Youngshin Park
    • Korean Journal of Culture and Social Issue
    • /
    • v.20 no.4
    • /
    • pp.347-364
    • /
    • 2014
  • The present study was conducted to investigate how a change in emotional expression, test delay, and background influence face recognition. In Experiment 1, participants studied negative faces and then took a standard old-new recognition test whose targets showed the same faces with negative or neutral expressions. In Experiment 2, participants studied negative faces and were tested on an old-new recognition test whose targets showed negative or positive expressions. In Experiment 3, participants studied neutral faces and had to identify the same faces at test regardless of whether they showed negative or neutral expressions. In all three experiments, participants were assigned to either an immediate or a delayed test, and target faces were presented on both white and black backgrounds. The results of Experiments 1 and 2 indicated higher recognition rates for negative faces than for neutral or positive faces, and consistency of facial expression enhanced face recognition memory. In Experiment 3, the advantage of expression consistency was demonstrated by higher recognition rates for neutral faces at test. When facial expressions were consistent across encoding and retrieval, face recognition performance was enhanced in all three experiments, and the effect of expression change differed across background conditions. The findings suggest that a change in facial expression makes face identification harder, and that delay and background also affect face recognition.

  • PDF

Real-time Face Detection and Recognition using Classifier Based on Rectangular Feature and AdaBoost (사각형 특징 기반 분류기와 AdaBoost 를 이용한 실시간 얼굴 검출 및 인식)

  • Kim, Jong-Min;Lee, Woong-Ki
    • Journal of Integrative Natural Science
    • /
    • v.1 no.2
    • /
    • pp.133-139
    • /
    • 2008
  • Face recognition technologies using PCA (principal component analysis) recognize faces by determining representative facial features from the model images, extracting feature vectors from the faces in an input image, and measuring the distance between those vectors and the face representations. Because the point-to-point distance approach frequently causes recognition problems, this study adopted a class-to-class K-nearest-neighbor technique in which a group of face models of the same class is used as the recognition unit for images taken from a continuous input stream. This paper proposes a new PCA-based recognition method built on such a database of face models (a minimal sketch of this projection-and-matching scheme follows this entry).

  • PDF
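The following is a minimal, hedged sketch of the eigenface projection and class-to-class nearest-neighbor matching described above, not the paper's implementation. The names `train_images`, `labels`, and the parameter values are placeholders for real data.

```python
# Eigenface (PCA) projection with class-to-class nearest-neighbor matching.
# train_images: (N, H*W) array of flattened grayscale faces (placeholder).
# labels: length-N array of person IDs (placeholder).
import numpy as np

def fit_eigenfaces(train_images, num_components=50):
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # SVD of the centered data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_components]          # (H*W,), (k, H*W)

def project(images, mean_face, eigenfaces):
    # Feature vectors in eigenspace.
    return (images - mean_face) @ eigenfaces.T

def recognize(probe, gallery_feats, labels, k=3):
    # Class-to-class flavor: score each person by the mean of its k nearest
    # gallery distances instead of a single point-to-point distance.
    dists = np.linalg.norm(gallery_feats - probe, axis=1)
    scores = {}
    for person in np.unique(labels):
        nearest = np.sort(dists[labels == person])[:k]
        scores[person] = nearest.mean()
    return min(scores, key=scores.get)
```

In use, `gallery_feats = project(train_images, mean_face, eigenfaces)` would be computed once, and each probe image projected the same way before calling `recognize`.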

Face Recognition using Vector Quantizer in Eigenspace (아이겐공간에서 벡터 양자기를 이용한 얼굴인식)

  • 임동철;이행세;최태영
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.185-192
    • /
    • 2004
  • This paper presents face recognition using vector quantization in the eigenspace of faces. The existing eigenface method is not sufficient to represent the variations among faces. To make up for this shortcoming, the proposed method clusters feature vectors by vector quantization in the eigenspace of the faces. In the training stage, the face images are transformed into points in the eigenspace using the eigenfaces (eigenvectors), and the set of points for each person is represented by the centroids of a vector quantizer. In the recognition stage, the vector quantizer finds the centroid with the minimum quantization error between the feature vector of the input image and the centroids in the database (a minimal sketch of this codebook matching follows this entry). The experiments were performed on 600 faces from the Faces94 database. The existing eigenface method produced at least 64 misrecognitions, while the proposed method produced at least 20 misrecognitions when 4 codevectors were used. In conclusion, the proposed method effectively improves the recognition rate by overcoming variation among faces.
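Below is a hedged sketch of the per-person codebook idea: each person's eigenspace features are reduced to a few centroids (codevectors), and a probe is assigned to the person whose codebook gives the smallest quantization error. `person_feats` and the codebook size are illustrative assumptions, not the paper's settings.

```python
# Per-person codebooks in eigenspace via plain k-means; recognition by
# minimum quantization error over the codebooks.
import numpy as np

def kmeans(points, k=4, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids.
        assign = np.argmin(
            np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = points[assign == j].mean(axis=0)
    return centroids

def build_codebooks(person_feats, k=4):
    # person_feats: dict of person id -> (n_i, d) eigenspace features (placeholder).
    return {pid: kmeans(feats, k) for pid, feats in person_feats.items()}

def recognize(probe, codebooks):
    # Quantization error = distance to the closest codevector in each codebook.
    errors = {pid: np.min(np.linalg.norm(cb - probe, axis=1))
              for pid, cb in codebooks.items()}
    return min(errors, key=errors.get)
```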

Face Detection Algorithm for Automatic Teller Machine(ATM) (현금 인출기 적용을 위한 얼굴인식 알고리즘)

  • 이혁범;유지상
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.6B
    • /
    • pp.1041-1049
    • /
    • 2000
  • A face recognition algorithm for the user identification procedure of an automatic teller machine (ATM), as an application of still-image processing techniques, is proposed in this paper. The proposed algorithm includes face recognition techniques, in particular face region detection and eye and mouth detection schemes, that can distinguish abnormal faces from normal faces. We define a normal (acceptable) face as one without sunglasses or a mask, and an abnormal (non-acceptable) face as one wearing either or both of them. The proposed algorithm consists of three stages: face region detection, preprocessing for facial feature detection, and eye and mouth detection (a rough sketch of such a screening pipeline follows this entry). Experimental results show that the proposed algorithm can accurately distinguish abnormal faces from normal faces on a restricted set of sample images.

  • PDF
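The sketch below illustrates the same kind of screening in the spirit of the pipeline above, using OpenCV's bundled Haar cascades rather than the paper's detectors; the smile cascade standing in as a crude mouth-visibility check is an assumption for illustration only.

```python
# Accept a user only if a single face is found and both the eye region and the
# mouth region yield detections (i.e., no sunglasses and no mask).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def is_acceptable_face(gray):
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) != 1:
        return False                              # require exactly one face in view
    x, y, w, h = faces[0]
    upper = gray[y:y + h // 2, x:x + w]           # eye region (sunglasses check)
    lower = gray[y + h // 2:y + h, x:x + w]       # mouth region (mask check)
    eyes = eye_cascade.detectMultiScale(upper, 1.1, 5)
    mouth = mouth_cascade.detectMultiScale(lower, 1.5, 11)
    return len(eyes) >= 2 and len(mouth) >= 1

# Usage (hypothetical file name):
# img = cv2.imread("atm_frame.png")
# ok = is_acceptable_face(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
```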

Masked Face Recognition via a Combined SIFT and DLBP Features Trained in CNN Model

  • Aljarallah, Nahla Fahad;Uliyan, Diaa Mohammed
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.319-331
    • /
    • 2022
  • The recent global COVID-19 pandemic has made the use of face masks an important part of our lives. People are advised to cover their faces in public spaces to discourage the illness from spreading. The use of these face masks raised significant concern about the accuracy of the face identification methods used to search for people and to unlock phones at school or at the office. Many companies have already built the requisite in-house data to deploy such a scheme, using face recognition for authentication. Unfortunately, masked faces hinder detection and recognition by these facial identity schemes and threaten to invalidate the internal data collections. Biometric systems that use the face for authentication therefore run into detection or recognition problems (of faces or persons). In this research, a novel model has been developed to detect and recognize faces and persons for authentication, using scale-invariant feature transform (SIFT) features for the whole segmented face together with efficient local binary pattern texture features (DLBP) in the eye region of the masked face. Fuzzy C-means clustering is used to segment the image. These combined features are then trained in a convolutional neural network (CNN) model (a hedged sketch of the feature extraction step follows this entry). The main advantage of this model is that it can detect and recognize faces by assigning weights to the selected features, granting or revoking permissions with high accuracy.
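The sketch below covers only the feature side of such a model: SIFT descriptors pooled over the whole face plus a basic LBP histogram over the eye region, concatenated into one vector that a CNN or other classifier could be trained on. The plain 8-neighbor LBP, the pooling scheme, and the region inputs are assumptions; the paper's DLBP variant and Fuzzy C-means segmentation are not reproduced here.

```python
# SIFT features for the whole face + LBP texture histogram for the eye region.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def lbp_histogram(gray):
    # Basic 8-neighbor LBP (not the paper's DLBP); offsets are (dy, dx).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = gray[1:-1, 1:-1].astype(np.int32)
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:gray.shape[0] - 1 + dy,
                        1 + dx:gray.shape[1] - 1 + dx].astype(np.int32)
        codes |= (neighbor >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist

def masked_face_features(face_gray, eye_region_gray, num_sift=64):
    # Pool the strongest SIFT descriptors of the whole face into one 128-d vector.
    _, desc = sift.detectAndCompute(face_gray, None)
    if desc is None:
        desc = np.zeros((1, 128), dtype=np.float32)
    pooled = desc[:num_sift].mean(axis=0)
    # Concatenate with the eye-region texture histogram.
    return np.concatenate([pooled, lbp_histogram(eye_region_gray)])
```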

Real-time multiple face recognition system based on one-shot panoramic scanning (원샷 파노라믹 스캐닝 기반 실시간 다수 얼굴 인식 시스템)

  • Kim, Daehwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.553-555
    • /
    • 2022
  • This paper describes a real-time automatic face recognition system based on one-shot panoramic scanning. It detects multiple faces in real time during a single panoramic scanning pass and recognizes pre-registered faces. Instead of recognizing multiple faces within a single stitched panoramic image, the system recognizes them using the multiple images obtained during the scanning process (a minimal sketch of this per-frame matching idea follows this entry). This reduces the panorama creation time and stitching error, and at the same time can improve face recognition performance by using the information accumulated over multiple images. It is expected to be usable in various applications, such as a multi-person smart attendance system, with only a simple image acquisition device.

  • PDF
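A minimal sketch of the "recognize during the scan, not in the stitched panorama" idea: faces are matched frame by frame while the camera sweeps and identities are accumulated across frames. The third-party face_recognition package is used purely for illustration, and `frames`, `registered_encodings`, `registered_names`, and the thresholds are placeholders, not the paper's system.

```python
# Per-frame recognition during a panoramic sweep, with evidence accumulated
# across frames to suppress spurious matches.
import numpy as np
import face_recognition

def scan_and_recognize(frames, registered_encodings, registered_names, tol=0.6):
    # registered_encodings: non-empty list of 128-d encodings of enrolled people.
    seen = {}
    for frame in frames:                                  # RGB frames from the sweep
        for enc in face_recognition.face_encodings(frame):
            dists = face_recognition.face_distance(registered_encodings, enc)
            best = int(np.argmin(dists))
            if dists[best] <= tol:
                name = registered_names[best]
                seen[name] = seen.get(name, 0) + 1        # accumulate over frames
    # Keep identities supported by at least two frames.
    return [name for name, count in seen.items() if count >= 2]
```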

Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.280-286
    • /
    • 2022
  • A facial recognition system is a biometric application. It is simpler to apply, and its working range is broader, than fingerprints, iris scans, signatures, and so on. The system utilizes two technologies: face detection and face recognition. This study aims to develop a facial recognition system that recognizes people's faces. A facial recognition system can map facial characteristics from photos or videos and compare that information against a given facial database to find a match, which helps identify a face. The proposed system can thus assist in face recognition. The developed system records several images, processes the recorded images, checks for a match in the database, and returns the result. The developed technology can recognize multiple faces in live recordings.

Three-dimensional Face Recognition based on Feature Points Compression and Expansion

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Park, Sang-min;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk;Son, Byounghee
    • Journal of Multimedia Information System
    • /
    • v.6 no.2
    • /
    • pp.91-98
    • /
    • 2019
  • Many researchers have attempted to recognize three-dimensional faces using feature points extracted from two-dimensional facial photographs. However, because of the limitations of flat photographs, it is very difficult to recognize faces rotated more than 15 degrees using the feature points originally extracted from those photographs, which makes it hard to design an algorithm that recognizes faces at multiple angles. In this paper, a new algorithm is proposed for three-dimensional face recognition based on feature points extracted from a flat photograph. The method divides the face into six feature-point vector zones. The vector values are then compressed and expanded according to the rotation angle of the face so that the facial feature points can be recognized in a three-dimensional form (an illustrative sketch of this angle-dependent scaling follows this entry). For this purpose, the average compression and expansion rates per angle and facial zone were obtained from the face data of 100 persons, and the face angle was estimated from the distance between the middle of the forehead and the outer corner of the eye. As a result, much improved recognition performance was obtained at a face rotation angle of 30 degrees.
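The sketch below illustrates the angle-dependent compression/expansion idea only: per-zone scale factors, which the paper derives as averages over 100 subjects, are applied to frontal feature points before comparison with a rotated probe. The zone names, scale values, and distance measure here are illustrative placeholders.

```python
# Angle-dependent compression/expansion of feature-point vectors per facial zone.
import numpy as np

# Hypothetical table: zone -> {rotation angle in degrees: horizontal scale factor}.
SCALE_TABLE = {
    "left_eye":  {0: 1.00, 15: 0.88, 30: 0.72},
    "right_eye": {0: 1.00, 15: 1.05, 30: 1.12},
}

def adjust_zone(points, zone, angle):
    # points: (n, 2) array of (x, y) feature points for one facial zone.
    scale_x = SCALE_TABLE[zone][angle]
    adjusted = points.astype(float).copy()
    adjusted[:, 0] *= scale_x            # compress/expand along the horizontal axis
    return adjusted

def zone_distance(frontal_points, probe_points, zone, angle):
    # Compare the angle-adjusted frontal zone against the rotated probe zone.
    return float(np.linalg.norm(adjust_zone(frontal_points, zone, angle) - probe_points))
```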

Neural correlations of familiar and Unfamiliar face recognition by using Event Related fMRI

  • Kim, Jeong-Seok;Jeun, Sin-Soo;Kim, Bum-Soo;Choe, Bo-Young;Lee, Hyoung-Koo;Suh, Tae-Suk
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2003.09a
    • /
    • pp.78-78
    • /
    • 2003
  • Purpose: This event-related fMRI study was intended to further our understanding of how different brain regions contribute to effective access to specific information stored in long-term memory. The experiment allowed us to determine the brain regions involved in recognizing familiar faces among unfamiliar faces. Materials and Methods: Twelve right-handed, healthy adult volunteers participated in the face recognition experiment. The paradigm consisted of 40 familiar faces, 40 unfamiliar faces, and a control baseline of scrambled faces presented in randomized order, with null events. Volunteers were instructed to press one of two buttons on a response box to indicate whether a face was familiar or not; incorrect answers were ignored. A 1.5T MRI system (GMENS) was employed to evaluate brain activity using blood oxygen level dependent (BOLD) contrast. A gradient-echo EPI sequence with TR/TE = 2250/40 msec was used for 17 contiguous axial slices of 7 mm thickness covering the whole brain volume (240 mm field of view, 64 × 64 in-plane resolution). The acquired data were processed in SPM99, including realignment, normalization, smoothing, statistical ANOVA, and statistical inference. Results/Discussion: The comparison of familiar versus unfamiliar faces yielded significant activations in the medial temporal regions, the occipito-temporal regions, and the frontal regions. These results suggest that when volunteers are asked to recognize familiar faces among unfamiliar faces, they tend to activate several regions frequently involved in face perception. The medial temporal regions were also activated for both familiar and unfamiliar faces; this interesting result suggests a contribution of this structure to matching perceived faces with pre-existing semantic representations stored in long-term memory.

  • PDF

Global Feature Extraction and Recognition from Matrices of Gabor Feature Faces

  • Odoyo, Wilfred O.;Cho, Beom-Joon
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.2
    • /
    • pp.207-211
    • /
    • 2011
  • This paper presents a method for facial feature representation and recognition based on the covariance matrices of Gabor-filtered images. Gabor filters are a very powerful tool for processing images: they respond to different local orientations and wave numbers around points of interest, especially local features on the face. This is a unique attribute well suited to extracting features around facial components such as the eyebrows, eyes, mouth, and nose. The covariance matrices computed on the Gabor-filtered faces are adopted as the feature representation for face recognition. The geodesic distance is used as the matching measure and is preferred for its global consistency over other methods; it takes into account the positions of the data points in addition to the geometric structure of the given face images (a hedged sketch of this descriptor and distance follows this entry). The proposed method is invariant and robust under rotation, pose variation, and boundary distortion. Tests run on random images, as well as on the publicly available JAFFE and FRAV3D face recognition databases, yield an impressively high recognition rate.
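The following is a hedged sketch of the descriptor and matching step described above: per-pixel Gabor responses are stacked into feature vectors, their covariance matrix describes the face, and two faces are compared with the affine-invariant geodesic distance between covariance matrices. The filter bank parameters and the small regularization term are assumptions, not the paper's settings.

```python
# Covariance descriptor over a Gabor filter bank, matched by the
# affine-invariant geodesic distance between covariance matrices.
import numpy as np
import cv2
from scipy.linalg import eigh

def gabor_covariance(gray, orientations=4, scales=(5, 9)):
    feats = []
    for ksize in scales:
        for i in range(orientations):
            kern = cv2.getGaborKernel((ksize, ksize), sigma=2.0,
                                      theta=np.pi * i / orientations,
                                      lambd=8.0, gamma=0.5)
            feats.append(cv2.filter2D(gray.astype(np.float32), -1, kern).ravel())
    F = np.stack(feats, axis=1)                    # (num_pixels, num_filters)
    # Small ridge keeps the covariance matrix positive definite.
    return np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])

def geodesic_distance(c1, c2):
    # sqrt of the sum of squared logs of the generalized eigenvalues of (c1, c2).
    eigvals = eigh(c1, c2, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

# Usage (hypothetical inputs): d = geodesic_distance(gabor_covariance(face_a),
#                                                    gabor_covariance(face_b))
```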