• Title/Summary/Keyword: Facial Image Processing


Eye Location Algorithm For Natural Video-Conferencing (화상 회의 인터페이스를 위한 눈 위치 검출)

  • Lee, Jae-Jun;Choi, Jung-Il;Lee, Phill-Kyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3211-3218
    • /
    • 1997
  • This paper addresses an eye location algorithm, an essential step in a human face tracking system for natural video-conferencing. In current video-conferencing systems, the user's movements are restricted by a fixed camera, which is inconvenient. We propose an eye location algorithm for automatic face tracking: the locations of other facial features can be estimated from the eye locations, and the scale of the face in the image can be calculated from the inter-ocular distance. Most previous feature extraction methods for face recognition assume that an approximate face region or the location of each facial feature is already known. The algorithm proposed in this paper uses no prior information about the given image and is not sensitive to background or lighting conditions. It uses the valley representation as the primary cue for locating the eyes. Experiments on 213 frames of 17 people show very encouraging results.

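The valley representation mentioned above can be sketched in plain NumPy: a grayscale morphological closing fills dark "valleys" such as the eyes, so subtracting the original image highlights them. This is only an illustrative reading of the abstract; the kernel size and the closing-minus-image formulation are assumptions, not the authors' exact method.

```python
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation with a flat k x k structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def gray_erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def valley_map(img, k=3):
    # Closing (dilate then erode) fills dark valleys; the difference
    # from the original image is large exactly where the valleys were.
    closed = gray_erode(gray_dilate(img, k), k)
    return closed - img
```

On a face image, the strongest responses of `valley_map` tend to fall on dark regions such as the pupils, which is what makes it usable as an eye-location cue.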

A Study on the Feature Point Extraction and Image Synthesis in the 3-D Model Based Image Transmission System (3차원 모델 기반 영상전송 시스템에서의 특징점 추출과 영상합성 연구)

  • 배문관;김동호;정성환;김남철;배건성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.7
    • /
    • pp.767-778
    • /
    • 1992
  • A method to extract feature points and to synthesize human facial images in a 3-D model-based coding system is discussed. Facial feature points are extracted automatically using image processing techniques and prior knowledge of the human face. A wire-frame model matched to the face is transformed according to the motion of the extracted feature points. The synthesized image is produced by mapping the texture of the initial front-view image onto the transformed wire frame. Experimental results show that the synthesized image appears with little unnaturalness.


Facial Contour Extraction in Moving Pictures by using DCM mask and Initial Curve Interpolation of Snakes (DCM 마스크와 스네이크의 초기곡선 보간에 의한 동영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.4 s.310
    • /
    • pp.58-66
    • /
    • 2006
  • In this paper, we apply a DCM (Dilation of Color and Motion information) mask and Active Contour Models (Snakes) to extract the facial outline in moving pictures with complex backgrounds. First, we propose the DCM mask, built by combining facial color and motion information with morphological dilation and an AND operation; the mask isolates the facial region from the complex background and removes noise from the image energy. Second, to overcome the sensitivity of Active Contour Models to their initial curves, the initial curves are set automatically according to the rotation angle estimated from the geometric ratio of the facial features. Both edge intensity and brightness are used as the image energy of the snakes, so that the contour can be extracted even at parts with weak edges. For the experiments, we acquired a total of 480 frames of sixteen people with various head poses and both eyes visible, taken indoors and captured from broadcast video. The results show that more accurate facial contours are extracted, at an average processing time of 0.28 seconds, when the initial curves are interpolated according to the facial rotation angle and the combined edge-intensity and brightness image energy is used.
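A snake driven by an image energy can be sketched with the classic greedy scheme: each contour point moves to the 3x3 neighbour that minimises the image energy plus a continuity (even-spacing) term. The caller would build `energy` from the combined edge-intensity and brightness cue the abstract describes; the greedy search and the 0.3 continuity weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_snake(energy, init_pts, iters=50, cont_w=0.3):
    """Greedy active contour over a precomputed image-energy map.

    energy   -- 2D array, lower is better (e.g. -(edge + brightness) cue)
    init_pts -- list of (row, col) points of a closed initial curve
    """
    pts = np.array(init_pts, dtype=int)
    h, w = energy.shape
    for _ in range(iters):
        moved = False
        # Target spacing: mean distance between neighbouring snake points.
        mean_d = np.mean(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))
        for i in range(len(pts)):
            prev_pt = pts[i - 1]               # wraps around: closed contour
            best, best_e = pts[i].copy(), None
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    y, x = pts[i, 0] + dy, pts[i, 1] + dx
                    if not (0 <= y < h and 0 <= x < w):
                        continue
                    # Continuity term keeps the points evenly spaced.
                    cont = abs(np.hypot(y - prev_pt[0], x - prev_pt[1]) - mean_d)
                    e = energy[y, x] + cont_w * cont
                    if best_e is None or e < best_e:
                        best_e, best = e, np.array([y, x])
            if not np.array_equal(best, pts[i]):
                pts[i] = best
                moved = True
        if not moved:                          # converged
            break
    return pts
```

Interpolating the initial curve toward the true face boundary, as the paper proposes, matters precisely because each point here only ever searches its immediate neighbourhood.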

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) system was developed to assist facial expression practice for expressing emotions. The developed AI feeds multimodal inputs, consisting of sentences and facial images, to deep neural networks (DNNs). The DNNs calculate the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions for the situation given by a sentence, and the AI provides numerical feedback based on this similarity. A ResNet34 model was trained on the public FER2013 data to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was fine-tuned by transfer learning on the conversational speech dataset for emotion classification released publicly by AIHub. The DNN that predicts emotions from facial images achieved 65% accuracy, comparable to human emotion classification ability; the DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary person changed facial expressions.
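The numerical feedback step reduces to comparing two per-emotion score vectors, one from each modality. The abstract does not name the similarity measure, so cosine similarity is an assumption here, chosen only to make the idea concrete.

```python
import numpy as np

def emotion_similarity(p_text, p_face):
    """Cosine similarity between two emotion score vectors, e.g. the
    softmax outputs of the sentence model and the face model over the
    same ordered set of emotion classes."""
    a = np.asarray(p_text, dtype=float)
    b = np.asarray(p_face, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A score near 1.0 tells the user the practiced expression matches the emotion implied by the sentence; a score near 0 means the distributions peak on different emotions.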

Face recognition by using independent component analysis (독립 성분 분석을 이용한 얼굴인식)

  • 김종규;장주석;김영일
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.10
    • /
    • pp.48-58
    • /
    • 1998
  • We present a method that recognizes face images using independent component analysis (ICA), which is used mainly for blind source separation in signal processing. We assume that a face image can be expressed as the sum of a set of statistically independent feature images, which we obtain using ICA. Face recognition is performed by projecting the input image onto the feature-image space and comparing its projection coefficients with those of stored reference images. We carried out face recognition experiments on a database of 400 varied facial images (10 per person) and compared the performance of our method with that of the eigenface method based on principal component analysis. The presented method gave a better recognition rate than the eigenface method and was robust to random noise added to the input facial images.

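The recognition step described above, projection onto a feature-image basis followed by nearest-neighbour comparison of the coefficients, can be sketched as follows. The basis rows would come from ICA in the paper (eigenfaces in the PCA baseline); here the basis is assumed to be given, and Euclidean distance between coefficient vectors is an assumed choice of comparison.

```python
import numpy as np

def project(image, basis):
    """Coefficients of a flattened image in a row-wise feature-image basis."""
    return basis @ image

def recognize(image, basis, references):
    """Index of the stored reference whose projection coefficients are
    closest (in Euclidean distance) to those of the input image."""
    q = project(image, basis)
    dists = [np.linalg.norm(q - project(r, basis)) for r in references]
    return int(np.argmin(dists))
```

Because comparison happens in the low-dimensional coefficient space rather than pixel space, additive noise that is roughly orthogonal to the feature images is largely discarded by the projection, which is consistent with the robustness the abstract reports.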

A Study on Face Image Recognition Using Feature Vectors (특징벡터를 사용한 얼굴 영상 인식 연구)

  • Kim Jin-Sook;Kang Jin-Sook;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.4
    • /
    • pp.897-904
    • /
    • 2005
  • Face recognition has been an active research area because face image data are easy to acquire and applicable to a wide range of real-world problems. Due to the high dimensionality of the face image space, however, face images are not easy to process. In this paper, we propose a method that reduces the dimensionality of facial data and extracts features from holistic face images. The proposed algorithm consists of two parts. First, principal component analysis (PCA) transforms three-dimensional color facial images into one-dimensional gray facial images while enhancing image contrast to raise the recognition rate. Second, integrated linear discriminant analysis (PCA+LDA) combines PCA for dimensionality reduction with LDA for discrimination of the facial vectors; integrating the two steps allows a concise algorithmic expression and prevents the information loss that occurs when they are performed separately. To validate the proposed method, the algorithm was implemented and tested on well-controlled face databases.
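The PCA-then-LDA pipeline can be sketched in NumPy: PCA keeps the top-k directions of variance, and Fisher LDA then finds the direction that best separates the classes in the reduced space. This is a generic two-class sketch of the standard technique, not the paper's integrated single-step formulation, and the small regularisation term is an added assumption for numerical stability.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    W = vecs[:, np.argsort(vals)[::-1][:k]]      # top-k eigenvectors
    return Xc @ W, W

def lda_direction(X, y):
    """Fisher discriminant direction for two classes labelled 0 and 1."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Within-class scatter matrix.
    Sw = sum((x - m).reshape(-1, 1) @ (x - m).reshape(1, -1)
             for m, cls in ((m0, X[y == 0]), (m1, X[y == 1])) for x in cls)
    # Regularised solve; w maximises between-class over within-class scatter.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)
```

Running LDA in the PCA-reduced space is the usual way to keep the within-class scatter matrix well-conditioned when there are far fewer training images than pixels.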

Facial Expression Recognition Using SIFT Descriptor (SIFT 기술자를 이용한 얼굴 표정인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.2
    • /
    • pp.89-94
    • /
    • 2016
  • This paper proposes a facial expression recognition approach using SIFT features and an SVM classifier. SIFT is generally employed as a feature descriptor at key-points in object recognition. In this paper, however, the SIFT descriptor is applied as a feature vector for facial expression recognition: the facial features are extracted by computing the SIFT descriptor on each sub-block of the image without a key-point detection procedure, and facial expression recognition is performed with an SVM classifier. Performance was evaluated against binary-pattern-based approaches such as LBP and LDP on the CK and JAFFE facial expression databases. In the experiments, the proposed method using the SIFT descriptor showed performance improvements of 6.06% and 3.87% over the previous approaches on the CK and JAFFE databases, respectively.
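The core of a SIFT descriptor is a gradient-orientation histogram pooled over a grid of sub-blocks, and computing it on fixed sub-blocks (rather than at detected key-points) is exactly the dense variant the abstract describes. The sketch below implements that histogram core in NumPy; the 4x4 grid and 8 bins mirror the standard 128-dimensional SIFT layout, while the nearest-class use downstream (the paper uses an SVM) is left to the caller.

```python
import numpy as np

def block_orientation_hist(img, grid=4, bins=8):
    """SIFT-style dense descriptor: magnitude-weighted gradient-orientation
    histograms pooled over a grid x grid block layout, concatenated and
    L2-normalised (grid=4, bins=8 gives the familiar 128-D vector)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    h, w = img.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            ys = slice(by * h // grid, (by + 1) * h // grid)
            xs = slice(bx * w // grid, (bx + 1) * w // grid)
            hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                   range=(0.0, 2 * np.pi),
                                   weights=mag[ys, xs])
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)
```

Skipping key-point detection makes sense for aligned face crops: the sub-block grid already provides a consistent spatial layout, so the descriptor positions are comparable across images.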

3D CT Image Processing for 3D Printed Auricular Reconstruction of Unilateral Microtia Patient

  • Roh, Tae Suk;Yun, In Sik
    • Journal of International Society for Simulation Surgery
    • /
    • v.1 no.2
    • /
    • pp.80-82
    • /
    • 2014
  • Purpose: Microtia is a congenital anomaly of the external ear, and reconstruction of the external ear in microtia patients has been based on an autogenous costal cartilage framework. The application of 3D printing in medicine has opened new possibilities for human tissue restoration, and we tried to apply this technique to auricular reconstruction. Materials and Methods: In a unilateral microtia patient, the contralateral ear is normal, and the reconstructive surgeon tries to mimic it when reconstructing the affected ear. We therefore obtained a facial CT scan of a microtia patient and produced a mirror image of the normal-side ear. To build a 3D scaffold based on this mirror image and apply it in auricular reconstruction surgery, we included the auriculocephalic sulcus and an anterior fixation part. Results: We successfully obtained the mirror image of the normal ear, with the auriculocephalic sulcus and anterior fixation part, for 3D scaffold printing. Conclusions: Using this CT image processing and 3D printing technique, we expect to produce scaffolds for auricular reconstruction of unilateral microtia patients and to perform such reconstructions in the near future.
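The mirror-image step on the CT volume is, computationally, a flip along the patient's left-right axis. A minimal sketch, assuming the volume is already loaded as a 3D array and that axis 2 is the left-right axis (which depends on how the scan is oriented):

```python
import numpy as np

def mirror_volume(vol, axis=2):
    """Mirror a CT volume along one axis, e.g. to obtain the mirror image
    of the normal-side ear for scaffold design. The axis index is an
    assumption and must match the scan's left-right orientation."""
    return np.flip(vol, axis=axis)
```

In practice the flipped volume would then be cropped to the ear region and converted to a surface mesh for printing; those steps are not shown here.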

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model projected onto an initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with an optical-flow method. For facial expression cloning we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are displaced using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system creates realistic facial animation with robust head pose estimation and facial variation from input video.
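The RBF step in the fitting process, spreading the control-point displacements to the surrounding non-feature vertices, can be sketched as standard RBF interpolation: solve for per-control-point weights so the interpolant reproduces the known displacements exactly, then evaluate it at the free vertices. The Gaussian kernel, its width, and the tiny regulariser are assumptions; the paper does not specify its RBF in the abstract.

```python
import numpy as np

def rbf_deform(points, ctrl, ctrl_disp, sigma=10.0):
    """Displace free vertices by Gaussian-RBF interpolation of the
    control-point displacements.

    points    -- (n, d) free vertex positions
    ctrl      -- (m, d) control-point positions
    ctrl_disp -- (m, d) known displacements of the control points
    """
    def phi(a, b):
        # Pairwise Gaussian kernel matrix between two point sets.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = phi(ctrl, ctrl) + 1e-8 * np.eye(len(ctrl))   # regularised kernel
    W = np.linalg.solve(K, ctrl_disp)                # one weight column per axis
    return points + phi(points, ctrl) @ W
```

Vertices near a control point follow it almost rigidly, while the influence decays smoothly with distance, which is what keeps the deformed mesh free of creases around the animated feature points.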

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.853-862
    • /
    • 2002
  • Face detection can be defined as follows: given a digitized image or image sequence, determine whether any human face is present and, if so, return its location, direction, size, and so on. This technique underlies many applications, such as face recognition, facial expression analysis, and head gesture recognition, and is an important quality factor for them. Detecting a face in a given image is considerably difficult, however, because facial expression, pose, face size, and lighting conditions change the overall appearance of faces, making rapid and exact detection hard. This paper therefore proposes a fast and exact face detection method that overcomes these restrictions using a neural network. The proposed system detects faces rapidly regardless of facial expression, background, and pose. Detection is performed by the neural network, and the response time is shortened by reducing the search region and decreasing the network's computation time. The search region is reduced using skin color segmentation and frame differencing, and the computation time is decreased by reducing the size of the network's input vector with principal component analysis (PCA), which reduces the dimensionality of the data. The system also estimates the pose of the extracted facial image and locates the eye region, which provides further information about the face. The experiments measured success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; for skin color segmentation, the success rate differed depending on the camera setting. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses affected eye-region detection. The results show a satisfactory detection rate and processing time for a real-time system.
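The search-region reduction step, keeping only pixels that are both skin-coloured and changed since the previous frame, can be sketched as an AND of two masks. The rough RGB box used for skin and the difference threshold are illustrative assumptions; the paper's actual skin model and thresholds are not given in the abstract.

```python
import numpy as np

def search_mask(frame, prev,
                skin_lo=(90, 40, 20), skin_hi=(255, 180, 140),
                diff_thresh=15):
    """Candidate search region for the face detector: pixels that fall
    inside a crude RGB skin box AND moved relative to the previous frame.

    frame, prev -- (H, W, 3) uint8 RGB images of consecutive frames
    """
    # Skin-colour mask: every channel inside its [lo, hi] range.
    skin = np.all((frame >= skin_lo) & (frame <= skin_hi), axis=-1)
    # Frame-difference mask: total absolute channel change above threshold.
    motion = np.abs(frame.astype(int) - prev.astype(int)).sum(axis=-1) > diff_thresh
    return skin & motion
```

Only windows overlapping the resulting mask need to be passed (after PCA projection) to the neural network, which is where the response-time saving comes from.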