• Title/Summary/Keyword: Real-Time Face Region Recognition (실시간 얼굴영역인식)


3D Face Recognition using Wavelet Transform Based on Fuzzy Clustering Algorithm (퍼지 군집화 알고리즘 기반의 웨이블릿 변환을 이용한 3차원 얼굴 인식)

  • Lee, Yeung-Hak
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1501-1514 / 2008
  • The face shape extracted from depth values, being the most important facial information, differs in appearance from person to person, and face images decomposed into frequency subbands express personal features in detail. In this paper, we develop a method for recognizing range face images across multiple frequency domains for each depth image using a modified fuzzy c-means algorithm. In the first step, the proposed approach locates the nose tip, which protrudes from the extracted face area, and in the second step it normalizes the face with respect to the frontal orientation. Multiple contour-line areas, which have a different shape for each person, are extracted by applying depth threshold values measured from the reference point, the nose tip, and the frequency components extracted from the wavelet subbands are adopted as feature information for the authentication problem. The third step applies eigenfaces to reduce the dimensionality, and linear discriminant analysis (LDA) is adopted to improve the separability between similar features. In the last step, individual classifiers using the modified fuzzy c-means method, with membership degrees initialized by K-NN, are applied to the coefficients extracted at each resolution level. In the experiments, the region extracted with depth threshold value 60 (DT60) showed the highest recognition rate, and the proposed classification method achieved a 98.3% recognition rate with fuzzy clustering.

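To illustrate the subband-feature idea described above, the following is a minimal sketch (not the paper's code) of decomposing a depth face image with a 2-D wavelet transform and reducing the coarse subband with eigenface-style PCA followed by LDA. The image sizes, wavelet choice (db1), component counts, and the synthetic data are all illustrative assumptions; the fuzzy c-means classifier of the paper is not reproduced here.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Stand-in depth (range) face images: 20 subjects x 5 images of 64x64 depth values.
faces = rng.random((100, 64, 64))
labels = np.repeat(np.arange(20), 5)

def subband_features(depth_face, wavelet="db1", level=2):
    # Keep only the coarse approximation subband of the 2-D wavelet decomposition.
    approx = pywt.wavedec2(depth_face, wavelet, level=level)[0]
    return approx.ravel()

X = np.stack([subband_features(f) for f in faces])
X_pca = PCA(n_components=40).fit_transform(X)                     # eigenface-style reduction
X_lda = LinearDiscriminantAnalysis(n_components=10).fit_transform(X_pca, labels)
print(X_lda.shape)   # (100, 10) features, ready for a nearest-neighbour or fuzzy classifier
```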

Real-Time Hand Gesture Tracking & Recognition (실시간 핸드 제스처 추적 및 인식)

  • Ha, Jeong-Yo;Kim, Gye-Young;Choi, Hyung-Il
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.141-144 / 2010
  • In this paper, we propose a computer-vision-based algorithm that recognizes the shape of a person's hand in real time. After basic preprocessing and skin-value detection, the user's skin color is extracted, the arm and face regions are removed, and only the hand region is kept, from which the center of gravity of the hand is computed. A Kalman filter is then used to track the trajectory of the hand, and a Hidden Markov Model, trained on six hand shapes of the user, is used to recognize the hand shape. Experiments demonstrate the effectiveness of the proposed method.

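As a rough illustration of the Kalman-filter tracking step mentioned in the abstract, the sketch below sets up OpenCV's Kalman filter with a constant-velocity model for a 2-D hand centroid. The noise covariances and the hard-coded centroid measurements are placeholder assumptions; skin segmentation and the HMM shape classifier are not shown.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

# Stand-in centroids of the detected hand region, one per frame.
for cx, cy in [(100, 120), (104, 125), (109, 131), (115, 138)]:
    prediction = kf.predict()                       # predicted centroid for this frame
    kf.correct(np.array([[cx], [cy]], np.float32))  # update with the measured centroid
    print(prediction[:2].ravel())
```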

Real-Time Face Recognition System using PDA (PDA를 이용한 실시간 얼굴인식 시스템 구현)

  • Kwon Man-Jun;Yang Dong-Hwa;Go Hyoun-Joo;Kim Jin-Whan;Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.649-654 / 2005
  • In this paper, we describe the implementation of a real-time face recognition system for ubiquitous computing environments. First, a face image is captured by a PDA with a CMOS camera; this image, together with the user ID and name, is transmitted via WLAN (Wireless LAN) to the server, and finally the PDA receives the verification result from the server. The proposed system consists of server and client parts. At enrollment, the server applies the PCA and LDA algorithms to compute eigenvector and eigenvalue matrices from the face images received from the PDA; at verification, it returns the recognition result computed with the Euclidean distance. The captured image is first compressed by the wavelet transform and sent in JPG format for real-time processing. The implemented system improves speed and performance by comparing the Euclidean distance against the eigenvector and eigenvalue matrices previously calculated in the learning process.
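
The enrollment/verification flow described above (PCA and LDA on the server, Euclidean-distance matching) could look roughly like the following sketch. The synthetic gallery, image size, and component counts are assumptions for illustration; the PDA/WLAN transport and the wavelet/JPG compression are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Stand-in enrollment images received from the PDA: 10 users x 5 images of 32x32 pixels.
gallery = rng.random((50, 32 * 32)).astype(np.float32)
labels = np.repeat(np.arange(10), 5)

# Server side (enrollment): PCA followed by LDA gives a compact template per image.
pca = PCA(n_components=30).fit(gallery)
lda = LinearDiscriminantAnalysis(n_components=9).fit(pca.transform(gallery), labels)
templates = lda.transform(pca.transform(gallery))

# Server side (verification): project the probe and compare with the Euclidean distance.
probe_img = gallery[:1] + 0.01 * rng.random((1, 32 * 32), dtype=np.float32)
probe = lda.transform(pca.transform(probe_img))
dists = np.linalg.norm(templates - probe, axis=1)
print("best match is user", labels[np.argmin(dists)])
```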

Implementation of Immersive Interactive Content Using Face Recognition Technology - (Exhibition of ReneMagritte) Focused on 'ARPhotoZone' (얼굴 인식 기술을 활용한 실감형 인터랙티브 콘텐츠의 구현 - (르네마그리트 특별전) AR포토존을 중심으로)

  • Lee, Eun-Jin;Sung, Jung-Hwan
    • Journal of Korea Game Society / v.20 no.5 / pp.13-20 / 2020
  • Biometric technology, with the advance of deep learning, has enabled new types of content. In particular, face recognition can provide immersion in terms of convenience and non-compulsiveness, but most commercial content is limited in its application areas. In this paper, we attempt to overcome these limitations and implement content that utilizes face recognition technology on a real-time video feed. We used the Unity engine for high-quality graphics, but performance degradation and frame drops occurred; to solve them, we augmented the Dlib toolkit and adjusted the image resolution.
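
A plausible way to mitigate the frame drops mentioned above is to run the Dlib frontal face detector on a downscaled copy of each frame and rescale the boxes afterwards, as sketched below. The 0.5 scale factor and the blank stand-in frame are assumptions, and this is not the exhibit's actual Unity/Dlib integration.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()

# Stand-in camera frame; in the installation this would come from the live video feed.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# Detect on a downscaled copy to keep the frame rate up, then rescale the boxes.
scale = 0.5
small = cv2.resize(frame, None, fx=scale, fy=scale)
for det in detector(small, 0):                        # 0 = no upsampling, cheapest setting
    x1, y1 = int(det.left() / scale), int(det.top() / scale)
    x2, y2 = int(det.right() / scale), int(det.bottom() / scale)
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```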

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions: Part B / v.9B no.6 / pp.853-862 / 2002
  • Face detection can be defined as follows: given an arbitrary digitized image or image sequence, the goal is to determine whether or not any human face is present and, if so, to return its location, direction, size, and so on. This technique underlies many applications such as face recognition, facial expression analysis, and head gesture interpretation, and is one of their important quality factors. However, detecting faces in a given image is considerably difficult because facial expression, pose, facial size, lighting conditions, and so on change the overall appearance of faces, making it hard to detect them rapidly and exactly. Therefore, this paper proposes a fast and exact face detection method that overcomes these restrictions by using a neural network. The proposed system detects faces rapidly regardless of facial expression, background, and pose. Face detection is performed by a neural network, and the detection response time is shortened by reducing the search region and decreasing the computation time of the network. The search region is reduced by using skin color segmentation and frame differencing, and the computation time is decreased by reducing the size of the network's input vector with Principal Component Analysis (PCA), which lowers the dimensionality of the data. In addition, the pose is estimated from the extracted facial image and the eye region is located, which provides further information about the face. The experiments measured success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; for skin color segmentation, the success rate differed depending on the camera setting. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses produced different results in eye region detection. The experimental results show a satisfactory detection rate and processing time for a real-time system.
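
The search-region reduction described above (skin-color segmentation combined with frame differencing) might be prototyped as follows. The YCrCb skin bounds and the difference threshold are common illustrative values, not the paper's, and the neural-network detector itself is not included.

```python
import cv2
import numpy as np

def candidate_face_mask(prev_gray, frame_bgr, diff_thresh=25):
    """Combine skin-color segmentation with frame differencing to shrink the search region."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # common Cr/Cb skin bounds
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    moving = cv2.threshold(cv2.absdiff(gray, prev_gray), diff_thresh, 255,
                           cv2.THRESH_BINARY)[1]
    return cv2.bitwise_and(skin, moving), gray

# Stand-in frames; real input would come from a camera.
prev = np.zeros((240, 320), np.uint8)
frame = np.zeros((240, 320, 3), np.uint8)
mask, prev = candidate_face_mask(prev, frame)
print(cv2.countNonZero(mask), "candidate pixels to feed the neural-network detector")
```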

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates the face direction from a sequence of input video images in real time. The proposed method detects the facial region and the major facial features, namely both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to lighting variation, within the detected facial area. It then tracks the feature points in every frame using optical flow in real time and determines the direction of the face from the tracked feature points. Furthermore, to prevent falsely recognized feature positions when the coordinates of the features are lost during optical-flow tracking, the proposed method validates the locations of the facial features in real time using template matching against the detected features. Depending on the correlation obtained when re-checking the features by template matching, the face direction estimation process either detects the facial features again or continues tracking them while determining the direction of the face. The template matching initially stores the location information of four facial features, the left and right eyes, the tip of the nose, and the mouth, in the facial feature detection phase, and re-evaluates this information by detecting the facial features anew from the input image when the similarity between the stored information and the facial information traced by optical flow exceeds a certain threshold. The proposed approach automatically alternates between the facial feature detection phase and the feature tracking phase, and enables stable face pose estimation in real time. The experiments show that the proposed method estimates the face direction efficiently.
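
The alternation between feature detection and optical-flow tracking could be organized along the lines of the sketch below, which re-detects with a Haar cascade when no tracked points remain and otherwise propagates the points with pyramidal Lucas-Kanade. The cascade file, the corner detector, and the thresholds are stand-ins; the template-matching validation and the face-direction computation itself are not shown.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_then_track(prev_gray, gray, points):
    """Re-detect feature points if none are available, otherwise track them with optical flow."""
    if points is None or len(points) == 0:
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        points = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], 20, 0.01, 5)
        if points is None:
            return None
        return points + np.array([x, y], np.float32)       # shift back to full-frame coords
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    return new_pts[status.ravel() == 1].reshape(-1, 1, 2)   # keep only successfully tracked points
```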

Improved Real-Time Mean-Shift Face Tracking by Readjusting Detected Face Region Histogram (검출된 얼굴 영역 히스토그램 재조정을 통한 개선된 실시간 평균이동 얼굴 추적 방식)

  • Kim, Gui-sik;Lee, Jae-sung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.195-198 / 2013
  • Recognition and tracking of objects of interest is a significant field in computer vision. The Mean-Shift algorithm has a chronic problem: errors occur when the histogram of the tracking area is similar to that of another area. In this paper, we propose a way to solve this problem. The algorithm blocks, skin color filtering, face detection, and Mean-Shift, are executed in consecutive order so that each stage assists the operation of the next. To avoid the overhead that arises when the tracking area overlaps regions with a similar histogram distribution, only the number of white (skin) pixels is checked before the Viola-Jones algorithm is run, and this simple arithmetic improves the convergence of the Mean-Shift. In the experiments, when the system was configured to run the Viola-Jones face detector only if white pixels made up 78% or more of the Mean-Shift search area, tracking of the object was 100 percent successful.

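One possible reading of the gating rule above is that Viola-Jones re-detection runs only when skin (white) pixels fill at least 78% of the Mean-Shift search window; the sketch below follows that reading using OpenCV's hue back-projection and meanShift. The histogram setup, the skin mask, and the re-detection handling are assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def track_or_redetect(frame_bgr, window, hist, skin_mask, ratio_thresh=0.78):
    """Mean-Shift on a hue back-projection; run Viola-Jones only when the skin ratio is high."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(backproj, window, term)
    x, y, w, h = window
    ratio = cv2.countNonZero(skin_mask[y:y + h, x:x + w]) / float(max(w * h, 1))
    if ratio >= ratio_thresh:                 # enough skin pixels: confirm/re-seed with Viola-Jones
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) > 0:
            window = tuple(int(v) for v in faces[0])
    return window
```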

The Suggestion of LINF Algorithm for a Real-time Face Recognition System (실시간 얼굴인식 시스템을 위한 새로운 LINF 알고리즘의 제안)

  • Jang Hye-Kyoung;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.79-86 / 2005
  • In this paper, we propose a new LINF (Linear Independent Non-negative Factorization) algorithm for a real-time face recognition system. The system consists of two parts: 1) a face extraction part and 2) a face recognition part. In the face extraction part, we apply image subtraction, detection of the eye and mouth regions, and normalization; in the face recognition part, we apply LINF to the extracted face candidate region images. Existing recognition systems using only PCA (Principal Component Analysis) show low recognition rates, and in systems using only LDA (Linear Discriminant Analysis) it is hard to apply LDA directly when the training set is small. To overcome these shortcomings, the proposed system reduces the dimensionality with a matrix of non-negative values, unlike the earlier eigenfaces, and then applies LDA to that matrix. We evaluated the performance of the proposed system using our self-organized DAIJFace database and the ORL database offered by the AT&T Laboratories in Cambridge, U.K. The experimental results show that the proposed method outperforms PCA, LDA, ICA (Independent Component Analysis), and the PLMA (PCA-based LDA mixture algorithm) method in recognition accuracy.
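
The two-stage idea above, a non-negative factorization followed by LDA, can be approximated with scikit-learn's NMF used as a stand-in for the paper's LINF step, as in the sketch below; the synthetic data, component counts, and solver settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# Stand-in face images: 10 subjects x 4 images, flattened 28x28, non-negative pixel values.
X = rng.random((40, 28 * 28))
y = np.repeat(np.arange(10), 4)

# Non-negative factorization keeps the reduced basis non-negative (unlike eigenfaces);
# LDA is then applied to the reduced coefficients, mirroring the two-stage structure.
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
H = nmf.fit_transform(X)                     # 40 x 20 non-negative coefficients
features = LinearDiscriminantAnalysis(n_components=9).fit_transform(H, y)
print(features.shape)                        # (40, 9) discriminant features for matching
```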

Development of Recognition Application of Facial Expression for Laughter Theraphy on Smartphone (스마트폰에서 웃음 치료를 위한 표정인식 애플리케이션 개발)

  • Kang, Sun-Kyung;Li, Yu-Jie;Song, Won-Chang;Kim, Young-Un;Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.494-503 / 2011
  • In this paper, we propose a facial expression recognition application for laughter therapy on a smartphone. It detects the face region from the front camera image of the smartphone using the AdaBoost face detection algorithm, and then detects the lip region within the detected face image. From the next frame onward, it does not re-detect the face but tracks the lip region detected in the previous frame using the three-step block matching algorithm. Because the size of the detected lip image varies with the distance between camera and user, the lip image is scaled to a fixed size. The effect of illumination variation is then minimized by applying bilateral-symmetry and histogram-matching illumination normalization. Finally, the application computes lip eigenvectors using PCA (Principal Component Analysis) and recognizes the laughter expression with a multilayer perceptron neural network. The experimental results show that the proposed method processes 16.7 frames/s and that the proposed illumination normalization reduces illumination variation better than existing methods, yielding better recognition performance.
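
A simplified version of the lip pipeline above (face detection, a fixed-size lip crop, then PCA features fed to a multilayer perceptron) is sketched below. The lower-third lip heuristic, histogram equalization in place of the paper's bilateral-symmetry and histogram-matching normalization, and the random training data are assumptions for illustration, and the block-matching tracker is omitted.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lip_patch(gray, size=(32, 16)):
    """Detect the face, take its lower third as a rough lip region, and scale to a fixed size."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    lips = gray[y + 2 * h // 3: y + h, x: x + w]
    lips = cv2.equalizeHist(cv2.resize(lips, size))   # fixed size + simple illumination fix
    return lips.ravel().astype(np.float32)

# Stand-in training data: labeled lip patches (1 = laughing, 0 = neutral).
rng = np.random.default_rng(3)
X, y = rng.random((60, 32 * 16)), rng.integers(0, 2, 60)
pca = PCA(n_components=20).fit(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(pca.transform(X), y)
```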

Real Time Face Detection in Video Using Progressive Thresholding (순차 임계 설정법을 이용한 비디오에서의 실시간 얼굴검출)

  • Ye Soo-Young;Lee Seon-Bong;Kum Dae-Hyun;Kim Hyo-Sung;Nam Ki-Gon
    • Journal of the Institute of Convergence Signal Processing / v.7 no.3 / pp.95-101 / 2006
  • Face detection plays an important role in face recognition, video surveillance, and human-computer interaction. In this paper, we propose a progressive thresholding method to detect human faces in real time. Consecutive face images are acquired from a camera and transformed into the YCbCr color space. The skin color of the input images is separated using a skin color filter in the YCbCr color space, and candidate face areas are determined by connected component analysis. Intensity equalization is performed to reduce the effect of varying conditions, and a threshold value is applied to obtain binary images. The eye area can be detected because it is clearly distinguished from other regions in the binary image; the progressive thresholding method searches for an optimal eye area by progressively increasing the threshold from low values. After progressive thresholding, the eye area is normalized and verified by a back-propagation algorithm to finalize the face detection.

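The detection stages above, skin-color filtering in YCbCr with connected-component analysis and a progressively increasing threshold for the eye area, might be prototyped as follows. The skin bounds, the area cut-off, and the dark-pixel stopping criterion are illustrative assumptions, and the back-propagation verification step is not included.

```python
import cv2
import numpy as np

def face_candidates(frame_bgr, min_area=400):
    """Skin-color filtering in YCbCr followed by connected-component analysis."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)    # OpenCV stores channels as Y, Cr, Cb
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
    return [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

def find_eye_area(face_gray, start=20, step=10, stop=120, dark_ratio=0.05):
    """Progressively raise the threshold until dark eye blobs emerge in the binary image."""
    face_gray = cv2.equalizeHist(face_gray)                 # intensity equalization (8-bit input)
    for t in range(start, stop, step):
        binary = (face_gray < t).astype(np.uint8)
        if binary.mean() >= dark_ratio:                     # enough dark pixels: candidate eye area
            return t, binary
    return stop, (face_gray < stop).astype(np.uint8)
```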