• Title/Summary/Keyword: Face expression


Development of an intelligent camera for multiple body temperature detection (다중 체온 감지용 지능형 카메라 개발)

  • Lee, Su-In;Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE / v.26 no.3 / pp.430-436 / 2022
  • In this paper, we propose an intelligent camera for multiple body temperature detection. The proposed camera combines an optical sensor (4056×3040) and a thermal sensor (640×480), and detects abnormal symptoms by analyzing a person's facial expression and body temperature in the acquired images. The optical and thermal cameras operate simultaneously: objects are detected in the optical image, and the facial region and expression are analyzed for each detected object. The coordinates of the facial region in the optical image are then mapped onto the thermal image, where the maximum temperature within the region is measured and displayed on the screen. Abnormal symptoms are determined from the three analyzed facial expressions (neutral, happy, sadness) together with the measured body temperature. To evaluate the performance of the proposed camera, the optical image processing stages (object detection, facial region detection, and expression analysis) were tested on the Caltech, WIDER FACE, and CK+ datasets, yielding accuracies of 91%, 91%, and 84%, respectively.
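The optical-to-thermal coordinate mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the image resolutions come from the abstract, but the box values, function names, and the toy thermal frame are assumptions.

```python
import numpy as np

def map_box(box, opt_shape, thr_shape):
    """Scale a face box (x, y, w, h) from optical to thermal image coordinates."""
    x, y, w, h = box
    sx = thr_shape[1] / opt_shape[1]  # horizontal scale factor
    sy = thr_shape[0] / opt_shape[0]  # vertical scale factor
    return (int(x * sx), int(y * sy), max(1, int(w * sx)), max(1, int(h * sy)))

def max_temp_in_box(thermal, box):
    """Maximum temperature value inside the mapped face region."""
    x, y, w, h = box
    return float(thermal[y:y + h, x:x + w].max())

# toy example: a face box in a 4056x3040 optical frame mapped onto a 640x480 thermal frame
opt_shape, thr_shape = (3040, 4056), (480, 640)
thermal = np.full(thr_shape, 36.5)
thermal[100:110, 200:210] = 38.2          # simulated hot spot inside the face
tbox = map_box((1200, 600, 300, 300), opt_shape, thr_shape)
t = max_temp_in_box(thermal, tbox)
```

In the paper this per-region maximum would then be combined with the expression label to decide whether the symptom is abnormal.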

Facial Local Region Based Deep Convolutional Neural Networks for Automated Face Recognition (자동 얼굴인식을 위한 얼굴 지역 영역 기반 다중 심층 합성곱 신경망 시스템)

  • Kim, Kyeong-Tae;Choi, Jae-Young
    • Journal of the Korea Convergence Society / v.9 no.4 / pp.47-55 / 2018
  • In this paper, we propose a novel face recognition (FR) method that combines weighted deep local features extracted from multiple Deep Convolutional Neural Networks (DCNNs), each learned on a particular facial local region. In the proposed method, the so-called weighted deep local features are generated from multiple DCNNs, each trained on one face local region, and the corresponding weight represents the importance of that local region for improving FR performance. The weighted deep local features are then used in Joint Bayesian metric learning in conjunction with a Nearest Neighbor (NN) classifier for FR. Systematic and comparative experiments show that the proposed method is robust to variations in pose, illumination, and expression, and that it is effective for improving face recognition performance.
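The weighting-and-fusion idea can be sketched as below. This is only an illustration under assumptions: the region weights, feature dimensions, and random vectors are hypothetical stand-ins for real per-region DCNN features, and the paper's Joint Bayesian metric is replaced here by a plain Euclidean nearest-neighbor step.

```python
import numpy as np

def fuse_local_features(region_feats, weights):
    """Normalize each region's feature vector, scale it by its importance
    weight, and concatenate into one 'weighted deep local feature'."""
    parts = [w * f / (np.linalg.norm(f) + 1e-12)
             for f, w in zip(region_feats, weights)]
    return np.concatenate(parts)

def nn_classify(query, gallery, labels):
    """Nearest-neighbor classification in the fused feature space."""
    d = [np.linalg.norm(query - g) for g in gallery]
    return labels[int(np.argmin(d))]

rng = np.random.default_rng(0)
weights = [0.5, 0.3, 0.2]                 # hypothetical region importances
gallery_raw = [[rng.normal(size=8) for _ in range(3)] for _ in range(2)]
gallery = [fuse_local_features(r, weights) for r in gallery_raw]
# query: a slightly perturbed copy of subject B's region features
query = fuse_local_features(
    [f + 0.01 * rng.normal(size=8) for f in gallery_raw[1]], weights)
pred = nn_classify(query, gallery, ["A", "B"])
```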

Robust Face Recognition under Limited Training Sample Scenario using Linear Representation

  • Iqbal, Omer;Jadoon, Waqas;ur Rehman, Zia;Khan, Fiaz Gul;Nazir, Babar;Khan, Iftikhar Ahmed
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3172-3193 / 2018
  • Recently, several studies have shown that linear-representation-based approaches are very effective and efficient for image classification. One such approach is the collaborative representation (CR) method. Existing CR-based algorithms suffer from two major problems that degrade their classification performance. The first arises from the limited number of available training samples: large variations between query and training samples, caused by illumination and expression changes, lead to poor classification performance. The second occurs when an image is partially noised (contiguous occlusion); when part of the given image is corrupted, classification performance also degrades. We aim to extend the collaborative representation framework to the face recognition problem with limited training samples. The proposed solution generates virtual samples and intra-class variations from the training data to effectively model the variations between query and training samples. For robust classification, image patches are used to compute the representation, addressing partial occlusion and leading to more accurate classification results. The proposed method computes the representation from local regions of the images, as opposed to CR, which computes a global solution over entire images. Furthermore, the proposed solution integrates the locality structure into CR using the Euclidean distance between query and training samples: intuitively, if the query sample can be represented by its nearest neighbors lying on the same linear subspace, the resulting representation is more discriminative and classifies the query sample more accurately. Hence, the proposed framework converts the limited-sample face recognition problem into a sufficient-sample problem using virtual samples and intra-class variations generated from the training samples, which improves classification accuracy, as the experimental results show. Moreover, it computes the representation from local image patches for robust classification, which is expected to greatly increase classification performance for the face recognition task.
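The baseline collaborative representation classifier that this work extends can be sketched in a few lines. This is the generic CR scheme, not the paper's extended method; the regularization value, toy data, and the additive-noise "virtual samples" are illustrative assumptions.

```python
import numpy as np

def crc_classify(A, labels, y, lam=0.01):
    """Collaborative representation classification: code the query y over
    ALL training columns with l2-regularized least squares, then assign
    the class whose portion of the code reconstructs y best."""
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    best, best_r = None, np.inf
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        r = np.linalg.norm(y - A[:, idx] @ x[idx])  # class-wise residual
        if r < best_r:
            best, best_r = c, r
    return best

# toy gallery: columns are vectorized face samples, one real sample per
# class plus one synthetic "virtual" sample per class
rng = np.random.default_rng(1)
a, b = rng.normal(size=16), rng.normal(size=16)
A = np.column_stack([a, a + 0.05 * rng.normal(size=16),
                     b, b + 0.05 * rng.normal(size=16)])
labels = [0, 0, 1, 1]
pred = crc_classify(A, labels, b + 0.02 * rng.normal(size=16))
```

The paper's variant additionally restricts the coding to local image patches and to nearby (small Euclidean distance) training samples.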

Face Recognition Using Fisherface Algorithm and Fixed Graph Matching (Fisherface 알고리즘과 Fixed Graph Matching을 이용한 얼굴 인식)

  • Lee, Hyeong-Ji;Jeong, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.6 / pp.608-616 / 2001
  • This paper proposes a face recognition technique that effectively combines fixed graph matching (FGM) and the Fisherface algorithm. EGM, a form of dynamic link architecture, uses not only face shape but also the gray-level information of the image, while the Fisherface algorithm, as a class-specific method, is robust to variations such as lighting direction and facial expression. In the proposed face recognition method, which adopts both techniques, a linear projection at each node of the image graph reduces the dimensionality of the labeled graph vector and provides a feature space that can be used effectively for classification. Compared with conventional EGM, the proposed approach obtains satisfactory results in terms of recognition speed. In particular, using the hold-out method in experiments on the Yale Face Database and the Olivetti Research Laboratory (ORL) Database, it achieved a higher average recognition rate, 90.1%, than the conventional methods.
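The class-specific linear projection at the heart of the Fisherface idea can be illustrated with a two-class Fisher discriminant. This is only a sketch under assumptions: the real Fisherface pipeline first applies PCA and handles many classes, and the toy 2-D data below is invented for illustration.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant direction w = Sw^{-1} (m1 - m0):
    the projection maximizing between-class over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter = sum over classes of (n * biased covariance)
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    return np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)

# toy node features: class 1 is class 0 shifted along the first axis
X0 = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
X1 = X0 + np.array([3.0, 0.0])
w = fisher_direction(X0, X1)
proj0, proj1 = X0 @ w, X1 @ w   # 1-D projections of both classes
```

After projection the two classes are linearly separated along w, which is what makes the reduced node features effective for classification.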


Main Region and Color Extraction of Face for Heart Disease Diagnosis (심장 질환 진단을 위한 얼굴 주요 영역 및 색상 추출)

  • Cho Dong-Uk
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.215-222 / 2006
  • Combining oriental-medicine diagnostic theory with IT technology is becoming a new approach to improving people's health. To do this, diagnostic data must first be visualized, objectified, and quantified. In particular, if ocular inspection in oriental medicine can be made more objective and visual, it offers great opportunities in the diagnosis field. In this study, I propose a diagnostic method for checking the symptoms of heart disease. The aim is to visualize the diagnosis using an image processing system that can actually analyze the symptoms of heart disease. To this end, using color information obtained by face image processing, the face area is segmented, the face shape is analyzed, and the facial feature points relevant to heart disease diagnosis are extracted, following the ocular inspection method of oriental medicine. The usefulness of the proposed method is demonstrated by experiment.

An Efficient Face Recognition by Using Centroid Shift and Mutual Information Estimation (중심이동과 상호정보 추정에 의한 효과적인 얼굴인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.4 / pp.511-518 / 2007
  • This paper presents an efficient face recognition method that uses both centroid shift and mutual information estimation of images. The centroid shift moves an image to the center coordinate calculated from its first moment, which improves recognition performance by excluding needless background in the face image. Mutual information, a measure of correlation, is applied to efficiently measure the similarity between images. In particular, adaptive partition mutual information (AP-MI) estimation is applied to find accurate dependence information by equally partitioning the samples of the input image when calculating the probability density function (PDF). The proposed method has been applied to the problem of recognizing 48 face images (12 persons × 4 scenes) of 64×64 pixels. The experimental results show that the proposed method achieves better recognition performance (speed and rate) than a conventional method without centroid shift, and is also robust to changes in facial expression, position, and angle.
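The two building blocks above can be sketched as follows. This is a minimal illustration: the fixed-bin histogram MI below is a simple stand-in for the paper's adaptive-partition (AP-MI) estimator, and the toy image is invented.

```python
import numpy as np

def centroid_shift(img):
    """Shift the image so its intensity centroid (first moment) moves to
    the geometric center, pushing off-center background to the edges."""
    h, w = img.shape
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    return np.roll(np.roll(img, int(round(h / 2 - cy)), axis=0),
                   int(round(w / 2 - cx)), axis=1)

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two equal-sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

img = np.zeros((16, 16))
img[2:6, 2:6] = 1.0                     # off-center "face" blob
centered = centroid_shift(img)
mi_self = mutual_information(centered, centered)  # MI of an image with itself
```

Similarity between a probe and each gallery image would then be scored by their mutual information, with higher MI meaning a better match.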

Content analysis of embroidery patterns of Korean traditional Beoseonbongips (한국 전통 버선본집 자수문양 콘텐츠 분석)

  • Hong, Heesook
    • The Research Journal of the Costume Culture / v.23 no.4 / pp.705-725 / 2015
  • A Beoseonbongip is a pouch that holds the patterns used for making Beoseons. This study aimed to identify the aesthetic and symbolic content of the embroidery patterns by analyzing the kinds, combination types, expression types, and arrangement types of the patterns. In total, 140 Beoseonbongip artifacts, mostly made in the Joseon Dynasty, were quantitatively and qualitatively analyzed. The results indicated that about 83% of the total had flower patterns. Various kinds of embroidery patterns used for Beoseonbongips were newly identified. About 73% of the total had different kinds of patterns. Pattern combination types were identified by the kinds of patterns, the number of paired patterns, and the traditional painting styles used. The patterns of Beoseonbongips were expressed schematically more often than realistically or abstractly. Beoseonbongips with different patterns on the four triangular tips of the front face, and Beoseonbongips with the same or similar patterns on two opposite tips of the front face, were observed more often than the other types. On the back face, the embroidery patterns were symmetrically arranged, showing various division structures. It was inferred that wishes (e.g., marital harmony, fertility, good health and longevity, happiness, and wealth and fame) were expressed through the symbolic patterns embroidered on the Beoseonbongips. In terms of traditional Korean beauty, union with nature, the harmony of yin and yang, symmetric balance, and neatness were also emphasized as aesthetic characteristics of Beoseonbongips.

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • Among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems are face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods lack the accuracy, robustness, or processing speed required in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning on small grayscale images. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination changes and large pose changes. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose mean error of less than $4.5^{\circ}$ in real time.

Discriminant Metric Learning Approach for Face Verification

  • Chen, Ju-Chin;Wu, Pei-Hsun;Lien, Jenn-Jier James
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.2 / pp.742-762 / 2015
  • In this study, we propose a distance metric learning approach called discriminant metric learning (DML) for face verification, which addresses the binary-class problem of classifying whether or not two input images are of the same subject. The critical issue in solving this problem is how to measure the distance between two images. Among various methods, the large margin nearest neighbor (LMNN) method is a state-of-the-art algorithm. However, to compensate for LMNN's entangled data distribution, caused by high levels of appearance variation in unconstrained environments, DML penalizes violations of the distance relationship of negative pairs (i.e., images with different labels), while being integrated with LMNN to model the distance relationship between positive pairs (i.e., images with the same label). The likelihoods of the input images, estimated using the DML and LMNN metrics, are then weighted and combined for further analysis. Additionally, rather than using the k-nearest neighbor (k-NN) classification mechanism, we propose a verification mechanism that measures the correlation of the class label distribution of neighbors, in order to reduce the false negative rate of positive pairs. The experimental results show that DML can modify the relationship of negative pairs in the original LMNN space and compensate for LMNN's performance on faces with large variations, such as pose and expression.
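The core object both LMNN and DML learn is a Mahalanobis-style metric, which can be sketched as below. This is a generic illustration, not the paper's learned metric: the diagonal matrix, feature vectors, and threshold are invented to show how a learned metric can down-weight a nuisance dimension such as expression.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared distance d^T M d under a learned metric M (positive
    semidefinite); Euclidean distance is the special case M = I."""
    d = x - y
    return float(d @ M @ d)

def verify(x, y, M, threshold):
    """Same-subject decision: accept the pair if the metric distance
    falls below a validation-chosen threshold."""
    return mahalanobis_sq(x, y, M) < threshold

# toy metric that down-weights the third (expression-like) dimension
M = np.diag([1.0, 1.0, 0.1])
anchor = np.zeros(3)
same = np.array([0.0, 0.0, 2.0])   # same face, large expression change
othr = np.array([1.5, 1.5, 0.0])   # different face, same expression
d_same = mahalanobis_sq(anchor, same, M)
d_othr = mahalanobis_sq(anchor, othr, M)
```

Under plain Euclidean distance the two pairs would be equally far apart; the metric makes the same-subject pair much closer, which is the effect metric learning aims for.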

Patch based Semi-supervised Linear Regression for Face Recognition

  • Ding, Yuhua;Liu, Fan;Rui, Ting;Tang, Zhenmin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.3962-3980 / 2019
  • To deal with single-sample face recognition, this paper presents a patch-based semi-supervised linear regression (PSLR) algorithm, which draws facial variation information from unlabeled samples. Each facial image is divided into overlapping patches, and a regression model with a mapping matrix is constructed on each patch. These matrices are then adjusted by mapping unlabeled patches to $[1,1,{\cdots},1]^T$. The solutions of all the mapping matrices are integrated into an overall objective function, which uses ${\ell}_{2,1}$-norm minimization constraints to improve the discrimination ability of the mapping matrices and reduce the impact of noise. After the mapping matrices are computed, a majority-voting strategy is adopted to classify the probe samples. To further learn the discriminative information between probe samples and obtain more robust mapping matrices, we also propose a multistage PSLR (MPSLR) algorithm, which iteratively updates the training dataset by adding reliably labeled probe samples to it. The effectiveness of our approaches is evaluated on three public face databases. Experimental results show that our approaches are robust to illumination, expression, and occlusion.
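The patch-wise decision plus majority vote can be sketched with a simple per-patch linear-regression classifier. This is a simplified stand-in, not PSLR itself: it omits the unlabeled-sample mapping and the ${\ell}_{2,1}$ constraints, and the patch vectors and regularization value below are invented for illustration.

```python
import numpy as np
from collections import Counter

def patch_lrc(train_patches, labels, probe_patch, lam=1e-3):
    """Per-patch linear-regression classification: represent the probe
    patch with each class's training patches (ridge least squares) and
    pick the class with the smallest reconstruction residual."""
    best, best_r = None, np.inf
    for c in set(labels):
        A = np.column_stack([p for p, l in zip(train_patches, labels) if l == c])
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ probe_patch)
        r = np.linalg.norm(probe_patch - A @ x)
        if r < best_r:
            best, best_r = c, r
    return best

def vote(patch_preds):
    """Majority vote over the per-patch decisions."""
    return Counter(patch_preds).most_common(1)[0][0]

# toy data: two subjects, each patch a 9-dim vector with small variations
rng = np.random.default_rng(2)
base = {c: rng.normal(size=9) for c in (0, 1)}
train = [base[c] + 0.05 * rng.normal(size=9) for c in (0, 0, 1, 1)]
labels = [0, 0, 1, 1]
probe_patches = [base[1] + 0.05 * rng.normal(size=9) for _ in range(3)]
pred = vote([patch_lrc(train, labels, p) for p in probe_patches])
```

Because each patch votes independently, an occluded or corrupted patch can be outvoted by the clean ones, which is what makes the patch-based scheme robust to contiguous occlusion.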