• Title/Summary/Keyword: face feature


A Resampling Method for Small Sample Size Problems in Face Recognition using LDA (LDA를 이용한 얼굴인식에서의 Small Sample Size문제 해결을 위한 Resampling 방법)

  • Oh, Jae-Hyun;Kwak, Jo-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.2
    • /
    • pp.78-88
    • /
    • 2009
  • In many face recognition problems, the number of available images is limited compared to the dimension of the input space, which is usually equal to the number of pixels. This is known as the 'small sample size' problem, and regularization methods are typically used to address it in feature extraction methods such as LDA. With regularization, the modified within-class scatter matrix becomes nonsingular and LDA can be performed in its original form. However, when a scaled version of the identity matrix is added to the original within-class scatter matrix, the scale factor has to be set heuristically, and the performance of the recognition system depends highly on its value. The proposed resampling method instead generates a set of images similar to, but slightly different from, the original images. With the increased number of images, the small sample size problem is alleviated and the classification performance increases. Unlike the regularization method, the resampling method does not suffer from the heuristic setting of a parameter and produces better performance.
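
The contrast can be illustrated with a short sketch. The Python fragment below shows the regularized within-class scatter (identity matrix scaled by a heuristic factor) next to a naive resampling step; the Gaussian jitter used to produce "similar but slightly different" images is only a placeholder assumption, not the paper's actual resampling scheme.

```python
# Minimal sketch (not the paper's exact method): regularized LDA scatter vs.
# naive image resampling by small perturbations.
import numpy as np

def within_class_scatter(X, y):
    """Within-class scatter matrix S_w for samples X (n, d) with labels y."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)
        Sw += diff.T @ diff
    return Sw

def regularized_sw(X, y, alpha):
    # Regularization: add alpha * I so S_w becomes nonsingular (alpha is heuristic).
    return within_class_scatter(X, y) + alpha * np.eye(X.shape[1])

def resample_images(X, y, n_copies=3, sigma=0.01, seed=0):
    # Placeholder resampling: jitter each image slightly to enlarge the sample set.
    rng = np.random.default_rng(seed)
    Xr, yr = [X], [y]
    for _ in range(n_copies):
        Xr.append(X + sigma * rng.standard_normal(X.shape))
        yr.append(y)
    return np.vstack(Xr), np.concatenate(yr)
```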

The Size Correction Method of Eyes Region using Morphing (모핑을 이용한 눈 영역 크기 보정 기법)

  • Goo, Eun-jin;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.83-86
    • /
    • 2013
  • In this paper, we propose a morphing-based method to correct the size of the eye regions when the two eyes are not the same size. First, the face and eye regions are detected from the input image using Haar-like features. One of the detected eye regions is mirrored, and correspondences between the two eye shapes are set using control lines derived from the Canny edges of the detected eyes. Each eye region is then warped so that it matches the correspondences established in the previous step, and the warped eye regions are merged back into the original image at the detected eye positions. Experimental results on frontal face test images show that the proposed method is more effective than a method that only adjusts the size of one eye.
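
A rough sketch of the detection stage described above is given below, using OpenCV's stock Haar cascades and Canny edge detector; the mirroring, correspondence lines, warping and merging steps of the morphing pipeline are omitted.

```python
# Haar-like face/eye detection plus Canny edges around each detected eye.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            patch = roi[ey:ey + eh, ex:ex + ew]
            edges = cv2.Canny(patch, 50, 150)   # eye contour edges
            eyes.append(((x + ex, y + ey, ew, eh), edges))
    return eyes
```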


On Optimizing Dissimilarity-Based Classifier Using Multi-level Fusion Strategies (다단계 퓨전기법을 이용한 비유사도 기반 식별기의 최적화)

  • Kim, Sang-Woon;Duin, Robert P. W.
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.5
    • /
    • pp.15-24
    • /
    • 2008
  • For high-dimensional classification tasks such as face recognition, the number of samples is smaller than the dimensionality of the samples. In such cases, a problem encountered in linear discriminant analysis-based methods for dimension reduction is what is known as the small sample size (SSS) problem. Recently, to solve the SSS problem, a way of employing dissimilarity-based classification (DBC) has been investigated. In DBC, an object is represented by its dissimilarities to representatives extracted from the training samples instead of by the feature vector itself. In this paper, we propose a new method of optimizing DBCs using multi-level fusion strategies (MFS), in which fusion strategies are employed both to represent features and to design classifiers. Our experimental results for benchmark face databases demonstrate that the proposed scheme achieves further improved classification accuracies.
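
The dissimilarity representation itself is easy to sketch: each sample is described by its distances to a small prototype set, and a standard classifier is trained in that space. The random prototype selection and 1-NN classifier below are generic placeholders; the multi-level fusion strategies of the paper are not reproduced.

```python
# Minimal dissimilarity-based classification (DBC) sketch.
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.neighbors import KNeighborsClassifier

def fit_dbc(X_train, y_train, n_prototypes=20, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_train), size=n_prototypes, replace=False)
    prototypes = X_train[idx]
    D_train = pairwise_distances(X_train, prototypes)  # dissimilarity features
    clf = KNeighborsClassifier(n_neighbors=1).fit(D_train, y_train)
    return prototypes, clf

def predict_dbc(prototypes, clf, X_test):
    return clf.predict(pairwise_distances(X_test, prototypes))
```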

Gender Classification System Based on Deep Learning in Low Power Embedded Board (저전력 임베디드 보드 환경에서의 딥 러닝 기반 성별인식 시스템 구현)

  • Jeong, Hyunwook;Kim, Dae Hoe;Baddar, Wisam J.;Ro, Yong Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.1
    • /
    • pp.37-44
    • /
    • 2017
  • As the IoT (Internet of Things) industry spreads, it becomes very important for devices to recognize user information by themselves without any explicit control. Among such information, gender (male/female) is a dominant factor for analyzing users because of the social and biological differences between males and females. However, since each gender exhibits diverse facial features, face-based gender classification remains a challenging research field. Moreover, to apply a gender classification system to IoT, the size of the device should be reduced and the device should operate at low power. To port a real-world gender classification function, this paper therefore makes two contributions: a new gender classification algorithm based on deep learning, and a real-time gender classification system implemented on a low-power embedded board. In our experiments, we measured the frames per second of gender classification processing and the power consumption in a PC environment and in a mobile GPU environment, verifying that the deep learning-based gender classification system works well at low power in the mobile GPU environment compared to the PC environment.
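
A minimal PyTorch sketch of a small CNN for two-class (male/female) face classification is shown below; the architecture and the 64x64 grayscale input are illustrative assumptions, not the network proposed in the paper.

```python
# Tiny CNN for binary gender classification on face crops.
import torch
import torch.nn as nn

class SmallGenderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes

    def forward(self, x):                 # x: (N, 1, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallGenderNet().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 1, 64, 64))  # dummy face crop
    pred = logits.argmax(dim=1)                # 0 or 1
```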

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul;Kim, Yoon-Kyoung;Bea, Min-Kyoung;Kim, Han-Sol
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.10
    • /
    • pp.1-9
    • /
    • 2015
  • In this paper, facial movements are analyzed in terms of opposite emotion stimuli through image processing of Kinect facial images. To induce two opposite emotion pairs, "Sad - Excitement" and "Contentment - Angry", which lie opposite each other on Russell's 2D emotion model, both visual and auditory stimuli are given to subjects. Firstly, 31 main points are chosen among the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points are analyzed. Here, a local minimum shift matching method is used to handle the problem of non-linear facial movement. In the results, right- and left-side facial movements occurred for the "Sad" and "Excitement" emotions, respectively. Left-side facial movement occurred comparatively more for the "Contentment" emotion. In contrast, both left- and right-side movements occurred for the "Angry" emotion.
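
A minimal sketch of the pixel-change measurement around feature points is given below; it simply averages absolute intensity differences in a window around each landmark between two frames, and does not reproduce the paper's local minimum shift matching.

```python
# Per-landmark movement score: mean absolute pixel change in a small window.
import numpy as np

def landmark_motion(frame_prev, frame_next, points, half=5):
    """frame_*: 2D grayscale arrays; points: iterable of (row, col) landmarks."""
    scores = []
    for r, c in points:
        a = frame_prev[r - half:r + half + 1, c - half:c + half + 1].astype(float)
        b = frame_next[r - half:r + half + 1, c - half:c + half + 1].astype(float)
        scores.append(np.abs(a - b).mean())
    return np.array(scores)   # one movement score per landmark
```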

2D-MELPP: A two dimensional matrix exponential based extension of locality preserving projections for dimensional reduction

  • Xiong, Zixun;Wan, Minghua;Xue, Rui;Yang, Guowei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.9
    • /
    • pp.2991-3007
    • /
    • 2022
  • Two dimensional locality preserving projections (2D-LPP) is an improved algorithm that operates directly on 2D images to solve the small sample size (SSS) problem that locality preserving projections (LPP) meets. It is able to find a low-dimensional manifold mapping that not only preserves local information but also detects the manifold embedded in the original data space. Although 2D-LPP is simple and elegant, comparison experiments between two dimensional linear discriminant analysis (2D-LDA) and linear discriminant analysis (LDA) have indicated that matrix-based methods do not always perform better even when training samples are limited. We therefore surmise that 2D-LPP may meet the same limitation as 2D-LDA and propose a novel matrix exponential method to enhance its performance. 2D-MELPP is equivalent to employing distance diffusion mapping to transform the original images into a new space in which the margins between classes are broadened, which is beneficial for classification. Nonetheless, the computational time complexity of 2D-MELPP is extremely high. In this paper, we replace some of the matrix multiplications with multiple multiplications to reduce the memory cost and provide an efficient way of solving 2D-MELPP. We test it on public databases (a random 3D data set, the ORL and AR face databases, and the PolyU palmprint database) and compare it with other 2D methods such as 2D-LDA and 2D-LPP, as well as 1D methods such as LPP and exponential locality preserving projections (ELPP), finding that it outperforms the others in recognition accuracy. We also compare different projection dimensions and record the computation time on the ORL, AR and PolyU databases. These experimental results show that our algorithm performs better on the three independent public databases.
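
The matrix-exponential idea can be sketched in a few lines. For brevity, the fragment below uses the simpler 1D exponential LPP (ELPP) formulation rather than the paper's 2D variant, and its k-nearest-neighbour heat-kernel graph is a common default assumption, not necessarily the construction used by the authors; it also assumes a modest feature dimension so that the matrix exponential is feasible.

```python
# Sketch of exponential LPP: take matrix exponentials of the LPP scatter
# matrices before solving the generalized eigenproblem.
import numpy as np
from scipy.linalg import expm, eigh
from sklearn.neighbors import kneighbors_graph

def elpp(X, n_components=10, k=5, t=1.0):
    """X: (n_samples, d). Returns a (d, n_components) projection matrix."""
    W = kneighbors_graph(X, k, mode="distance").toarray()
    W = np.exp(-W ** 2 / t) * (W > 0)          # heat-kernel weights on k-NN graph
    W = np.maximum(W, W.T)                     # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian
    A = expm(X.T @ L @ X)                      # exponential "locality" scatter
    B = expm(X.T @ D @ X)                      # exponential "degree" scatter
    vals, vecs = eigh(A, B)                    # generalized eigenproblem
    return vecs[:, :n_components]              # smallest eigenvalues preserve locality
```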

Machine Learning-Based Malicious URL Detection Technique (머신러닝 기반 악성 URL 탐지 기법)

  • Han, Chae-rim;Yun, Su-hyun;Han, Myeong-jin;Lee, Il-Gu
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.3
    • /
    • pp.555-564
    • /
    • 2022
  • Recently, cyberattacks have used hacking techniques that exploit intelligent, advanced malicious code in non-face-to-face environments such as telecommuting, telemedicine, and automated industrial facilities, and the damage is increasing. Traditional information protection systems such as anti-virus detect known malicious URLs based on signature patterns, so they cannot detect unknown malicious URLs. In addition, the conventional static analysis-based malicious URL detection method is vulnerable to dynamic loading and cryptographic attacks. This study proposes a technique for efficiently detecting malicious URLs by dynamically learning malicious URL data. In the proposed detection technique, malicious code is classified using machine learning-based feature selection algorithms, and the accuracy is improved by removing obfuscation elements after preprocessing with a Weighted Euclidean Distance (WED). According to the experimental results, the proposed machine learning-based malicious URL detection technique shows an accuracy of 89.17%, an improvement of 2.82% over the conventional method.
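
As a rough illustration of the general approach (not the paper's feature set, WED preprocessing, or model), the sketch below extracts a few lexical features from a URL and trains a standard classifier on them; the example URLs and labels are made up.

```python
# Lexical URL features fed to a generic classifier (illustrative only).
from urllib.parse import urlparse
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    p = urlparse(url)
    host = p.netloc
    return [
        len(url),                          # overall length
        len(host),                         # host length
        host.count("."),                   # subdomain depth
        url.count("-") + url.count("@"),   # suspicious punctuation
        sum(ch.isdigit() for ch in url),   # digit count
        int(p.scheme == "https"),          # scheme flag
    ]

# Toy usage with made-up labels (1 = malicious, 0 = benign).
urls = ["https://example.com/login", "http://192.0.2.7/free-money@pay"]
X = np.array([url_features(u) for u in urls])
y = np.array([0, 1])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```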

Frontal Face Video Analysis for Detecting Fatigue States

  • Cha, Simyeong;Ha, Jongwoo;Yoon, Soungwoong;Ahn, Chang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.6
    • /
    • pp.43-52
    • /
    • 2022
  • We can sense when somebody feels fatigued, which means that fatigue can be detected by sensing human biometric signals. Most existing research on assessing fatigue focuses on diagnosing disease-level fatigue. In this study, we adapt quantitative analysis approaches to estimating qualitative data and propose video analysis models for measuring the fatigue state. The three proposed deep learning-based classification models selectively include the stages of video analysis (object detection, feature extraction, and time-series frame analysis) to evaluate each stage's effect on discriminating the fatigue state. Using frontal face videos collected in various fatigue situations, our CNN model achieves an accuracy of 0.67, empirically showing that the video analysis models can meaningfully detect the fatigue state. We also suggest how to adapt the models when training and validating video data for fatigue classification.
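
The analysis stages mentioned above can be sketched as a single model: a small per-frame CNN for feature extraction followed by a recurrent layer over the frame sequence and a two-class head. The layer sizes, 64x64 grayscale input, and the CNN+LSTM pairing are illustrative assumptions, not the authors' architectures.

```python
# Per-frame CNN features followed by an LSTM over the frame sequence.
import torch
import torch.nn as nn

class FatigueVideoNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.frame_cnn = nn.Sequential(              # per-frame features
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            nn.Flatten(), nn.Linear(8 * 8 * 8, feat_dim), nn.ReLU(),
        )
        self.temporal = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, 2)                 # fatigued / not fatigued

    def forward(self, clips):                        # clips: (N, T, 1, H, W)
        n, t = clips.shape[:2]
        feats = self.frame_cnn(clips.flatten(0, 1)).view(n, t, -1)
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])                 # logits from last time step

logits = FatigueVideoNet()(torch.randn(2, 16, 1, 64, 64))
```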

Illumination Robust Feature Descriptor Based on Exact Order (조명 변화에 강인한 엄격한 순차 기반의 특징점 기술자)

  • Kim, Bongjoe;Sohn, Kwanghoon
    • Journal of Broadcast Engineering
    • /
    • v.18 no.1
    • /
    • pp.77-87
    • /
    • 2013
  • In this paper, we present a novel local image descriptor called the exact order based descriptor (EOD), which is robust to illumination changes and Gaussian noise. The exact order of an image patch is obtained by converting each discrete intensity value into a k-dimensional continuous vector, which resolves the ordering ambiguity among pixels with the same intensity. The EOD is generated from the overall distribution of exact orders in the patch. The proposed local descriptor is compared with several state-of-the-art descriptors over a number of images. Experimental results show that the proposed method outperforms many state-of-the-art descriptors in the presence of illumination changes, blur and viewpoint changes. The proposed method can also be used for many computer vision applications such as face recognition, texture recognition and image analysis.
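
A much-simplified sketch of an order-based descriptor is shown below: intensities are replaced by their ranks, which are invariant to monotonic illumination changes, and mean ranks are pooled over a spatial grid. The paper's k-dimensional continuous vectors for breaking ties between equal intensities are not implemented here.

```python
# Rank-transform a patch and pool mean ranks over a grid (simplified order descriptor).
import numpy as np

def order_descriptor(patch, grid=4):
    flat = patch.astype(float).ravel()
    ranks = flat.argsort().argsort().reshape(patch.shape) / (flat.size - 1)
    h, w = patch.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = ranks[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            cells.append(cell.mean())
    return np.array(cells)   # grid*grid values, invariant to monotonic lighting changes
```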

A New Intermediate View Reconstruction Scheme based-on Stereo Image Rectification Algorithm (스테레오 영상 보정 알고리즘에 기반한 새로운 중간시점 영상합성 기법)

  • 박창주;고정환;김은수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.632-641
    • /
    • 2004
  • In this paper, a new intermediate view reconstruction method employing a stereo image rectification algorithm, by which an uncalibrated input stereo image pair can be transformed into a calibrated one, is suggested and its performance is analyzed. In the proposed method, feature points are extracted from the stereo image pair through corner detection and pixel-wise similarity between the stereo images. Then, using these detected feature points, the motion vectors between the stereo images and the epipolar lines are extracted. Finally, the input stereo pair is rectified by horizontally aligning the extracted epipolar lines, and intermediate views are reconstructed from the rectified stereo images. Experiments on intermediate-view synthesis with three stereo image sets (CCETT's 'Man' stereo image and the 'Face' and 'Car' stereo images captured with a real camera) show that the PSNRs of intermediate views reconstructed from images rectified by the proposed algorithm are improved by 2.5 dB for 'Man', 4.26 dB for 'Face' and 3.85 dB for 'Car' over those of the uncalibrated images. These results suggest the practical applicability of the proposed rectification-based intermediate view reconstruction scheme to uncalibrated stereo images.
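
For the rectification stage, a rough OpenCV sketch is given below. ORB feature matches stand in for the paper's corner- and similarity-based feature points, and only the uncalibrated rectification is shown; intermediate-view synthesis itself is omitted.

```python
# Uncalibrated stereo rectification from matched feature points.
import cv2
import numpy as np

def rectify_uncalibrated(img_left, img_right):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]
    h, w = img_left.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(inl1, inl2, F, (w, h))
    # Warp both images so that corresponding epipolar lines become horizontal.
    return (cv2.warpPerspective(img_left, H1, (w, h)),
            cv2.warpPerspective(img_right, H2, (w, h)))
```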