• Title/Summary/Keyword: Facial image


A Study on the Improvement of the Facial Image Recognition by Extraction of Tilted Angle (기울기 검출에 의한 얼굴영상의 인식의 개선에 관한 연구)

  • 이지범;이호준;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.7
    • /
    • pp.935-943
    • /
    • 1993
  • In this paper, a robust recognition system for tilted facial images was developed. First, a standard facial image and a tilted facial image are captured by a CCTV camera and transformed into binary images. Each binary image is processed with a Laplacian edge operator to obtain a contour image. We trace and delete the outermost edge line and use the inner contour lines. We label four inner contour lines in order, extract the left and right eyes using the known distance relationship, and calculate slope information from the two eye coordinates. Finally, we rotate the tilted image according to the slope information and calculate ten distance features between facial elements. To make the system invariant to image scale, we normalize these features by the distance between the left and right eyes. Experimental results show an 88% recognition rate for twenty-five face images when the tilt angle is considered and a 60% recognition rate when it is not.

  • PDF
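The slope-correction and scale-normalization steps in the abstract above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function names and the tuple-based point format are assumptions.

```python
import math

def eye_slope_deg(left_eye, right_eye):
    # Angle (degrees) of the line joining the two eye centers;
    # rotating the image by -angle uprights a tilted face.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def normalize_features(distances, left_eye, right_eye):
    # Scale invariance: divide each distance feature by the
    # inter-eye distance, as described in the abstract.
    d = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    return [x / d for x in distances]
```

Rotating by the negated slope and then normalizing by the inter-eye distance makes the ten distance features both rotation- and scale-invariant.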

Convergence of facial image efficacy on job satisfaction of SME workers (중소기업 직장인의 얼굴이미지효능감이 직무만족에 미치는 융합연구)

  • Kim, Jeoung-Yeoul
    • Journal of Convergence for Information Technology
    • /
    • v.8 no.2
    • /
    • pp.227-232
    • /
    • 2018
  • The purpose of this study is to investigate the relationship between facial image efficacy and job satisfaction in a sample of 167 workers at SMEs in Seoul and Gyeonggi province, and to examine the effect of SME workers' facial image efficacy on job satisfaction. The results are as follows. First, there was a statistically significant correlation between management ability, a sub-area of facial image efficacy, and job satisfaction; the higher the management ability, the higher the degree of job satisfaction. There was also a statistically significant correlation between perception attitude, another sub-area of facial image efficacy, and job satisfaction. Second, management ability had a statistically significant effect on job satisfaction; one sub-domain of facial image efficacy had no effect on job satisfaction, while expression confidence had a statistically significant effect on job satisfaction.

Facial Feature Extraction using Genetic Algorithm from Original Image (배경영상에서 유전자 알고리즘을 이용한 얼굴의 각 부위 추출)

  • 이형우;이상진;박석일;민홍기;홍승홍
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.214-217
    • /
    • 2000
  • Much research has recently been performed on human recognition and coding schemes. In this context, we propose an automatic facial feature extraction algorithm with two main steps: evaluating the face region from an original background image, such as an office scene, and extracting the facial features from the evaluated face region. In the first step, a Genetic Algorithm is adopted to search for the face region in backgrounds such as offices and households; in the second step, a Template Matching Method is used to extract the facial features. Using the proposed algorithm, we can extract facial features more quickly and accurately.

  • PDF
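A genetic search over candidate face-region parameters, as in the first step above, can be sketched generically. This is an illustrative toy GA, not the paper's method; the fitness function, gene layout (e.g. an x, y, w, h box), and hyperparameters are all assumptions.

```python
import random

def ga_search(fitness, bounds, pop_size=20, generations=50, seed=0):
    # bounds: list of (lo, hi) per gene, e.g. the (x, y, w, h) of a
    # candidate face box scored against the image by `fitness`.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(bounds))      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                   # mutation
                i = rng.randrange(len(bounds))
                lo, hi = bounds[i]
                child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In a face-search setting, `fitness` would score how face-like the image region under the candidate box looks (e.g. by template or skin-color match).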

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.412-414
    • /
    • 2001
  • In this paper, we describe a real-time facial feature tracker. We used only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support. It tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in YUV 4:2:0 format. We convert the signal into the RGB color model to display the image, interpolate the V channel of the signal for extracting the facial region, and analyze 2D blob features in the Y channel (the luminance of the image) under geometric restrictions to locate each facial feature within the detected facial region. Our method is simple and intuitive enough to run in real time.

  • PDF
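The YUV-to-RGB display conversion mentioned in the abstract can be sketched per pixel. This assumes the common BT.601 full-range coefficients, which the paper does not specify; in 4:2:0 data each U/V sample is additionally shared by a 2x2 block of Y samples.

```python
def yuv_to_rgb(y, u, v):
    # BT.601 full-range YUV -> RGB for one pixel, components in 0..255.
    d = u - 128
    e = v - 128
    r = max(0, min(255, round(y + 1.402 * e)))
    g = max(0, min(255, round(y - 0.344136 * d - 0.714136 * e)))
    b = max(0, min(255, round(y + 1.772 * d)))
    return r, g, b
```

With u = v = 128 (zero chroma) the output is a pure gray level, which is why the Y plane alone can serve as the luminance image for the blob analysis.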

A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.152-158
    • /
    • 2023
  • In recent years, as computer-generated imagery has been applied to more industries, realistic facial animation has become an important research topic. The current solution for realistic facial animation is to create realistically rendered 3D characters, but 3D characters created by traditional methods always differ from the actual person and require high staff and time costs. Deepfake technology can achieve realistic faces and replicate facial animation. The facial details and animations are produced automatically by the computer after the AI model is trained, and the model can be reused, reducing the human and time costs of realistic facial animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates that the new scheme obtains higher-quality images and exchange effects, as evaluated by No-Reference Image Quality Assessment.

Glasses Removal from Facial Images with Recursive PCA Reconstruction (반복적인 PCA 재구성을 이용한 얼굴 영상에서의 안경 제거)

  • 오유화;안상철;김형곤;김익재;이성환
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.35-49
    • /
    • 2004
  • This paper proposes a new method for removing glasses from a color frontal facial image to generate a gray glassless facial image. The proposed method is based on recursive PCA reconstruction. To generate glassless images, the region occluded by the glasses must be found, and a good reconstructed image to compensate with must be obtained. Recursive PCA reconstruction provides both simultaneously and finally produces glassless facial images. Experimental results show the effectiveness of the proposed method. We believe that, with some modification, this method can be applied to removing other types of occlusion besides glasses and to enhancing the performance of face recognition systems.
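The recursive-reconstruction idea can be sketched as follows: project the image onto an eigenface basis, reconstruct, and repeatedly replace only the occluded pixels with their reconstruction. This is an illustrative sketch assuming a precomputed PCA basis and a known occlusion mask, not the authors' exact formulation (which also detects the glasses region).

```python
import numpy as np

def recursive_pca_fill(x, mean, components, mask, n_iter=10):
    # x: flattened face image (d,); mask: True where occluded (glasses region)
    # components: (k, d) orthonormal eigenface rows; mean: (d,) mean face
    filled = x.astype(float).copy()
    filled[mask] = mean[mask]                  # initialize occluded pixels
    for _ in range(n_iter):
        coeffs = components @ (filled - mean)  # project onto eigenfaces
        recon = mean + components.T @ coeffs   # reconstruct the whole face
        filled[mask] = recon[mask]             # keep observed pixels as-is
    return filled
```

Each iteration improves the PCA coefficients because the occluded pixels move toward face-like values, which in turn yields a better reconstruction for the next pass.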

Hybrid Neural Classifier Combined with H-ART2 and F-LVQ for Face Recognition

  • Kim, Do-Hyeon;Cha, Eui-Young;Kim, Kwang-Baek
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1287-1292
    • /
    • 2005
  • This paper presents an effective pattern classification model, designing artificial neural network based pattern classifiers for face recognition. First, an RGB image input from a frame grabber is converted into an HSV image, which is closer to the human visual system. The coarse facial region is then extracted using the hue (H) and saturation (S) components, excluding the intensity (V) component, which is sensitive to environmental illumination. Next, the fine facial region is extracted by matching edge- and gray-based templates. To obtain a light-invariant, high-quality facial image, histogram equalization and intensity compensation using an illumination plane are performed. The extracted and enhanced facial images are then used to train the pattern classification models. The proposed H-ART2 model, which has hierarchical ART2 layers, and the F-LVQ model, which is optimized by fuzzy membership, make it possible to classify facial patterns by optimizing relations between clusters and searching clustered reference patterns effectively. Experimental results show that the proposed face recognition system matches the SVM model, well known in the face recognition field, in recognition rate and surpasses it in classification speed. Moreover, a high recognition rate is achieved by combining the proposed neural classification models.

  • PDF
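The coarse face-region step above classifies pixels by hue and saturation while ignoring the illumination-sensitive V component. A per-pixel sketch of that idea follows; the skin-tone thresholds here are illustrative placeholders, not the paper's values, and would need tuning per dataset.

```python
import colorsys

def is_coarse_face_pixel(r, g, b):
    # Test hue and saturation only; intensity (V) is deliberately ignored
    # because it varies with environmental illumination.
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    # Illustrative skin-tone band (reddish hues, moderate saturation).
    return (hue_deg < 50 or hue_deg > 340) and 0.15 < s < 0.75
```

Running this over every pixel yields a binary mask whose largest connected component approximates the coarse facial region, which the template-matching stage then refines.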

A COMPARATIVE STUDY OF THREE DIMENSIONAL RECONSTRUCTIVE IMAGES USING COMPUTED TOMOGRAMS OF FACIAL BONE INJURIES (안면골 외상환자의 전산화단층상을 이용한 삼차원재구성상의 비교연구)

  • Choi Eun-Suk;Koh Kwang-Joon
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.24 no.2
    • /
    • pp.413-423
    • /
    • 1994
  • The purpose of this study was to clarify spatial relationships in presurgical examination and to aid surgical planning and postoperative evaluation of patients with facial bone injuries. Three-dimensional images of facial bone fractures were reconstructed with a computed image analysis system and with the three-dimensional reconstruction program integrated into the computed tomography unit. The results were as follows: 1. Serial conventional computed tomograms were valuable in accurately depicting the facial bone injuries, and three-dimensional reconstructive images provided an overall view. 2. The degree of deterioration of spatial resolution was proportional to the slice thickness. 3. Facial bone fractures were most distinctly demonstrated on inferoanterior views of the three-dimensional reconstructive images. 4. Although three-dimensional reconstructive images allowed diagnosis of fracture lines, it was difficult to identify maxillary fractures. 5. Zygomatic fractures could be diagnosed equally well with the computed image analysis system and with the integrated three-dimensional reconstruction program. 6. Mandibular fractures could likewise be diagnosed equally well with both methods.

  • PDF

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. The exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the phase of synthesizing the facial expression, the variations of the major facial feature points of the face images are tracked by using optical flow and the variations are retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. From the experiments, we can prove that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
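The RBF-based deformation step above, where tracked feature-point offsets are propagated smoothly to surrounding model vertices, can be sketched as standard Gaussian-RBF scattered-data interpolation. This is a generic illustration under that assumption, not the paper's implementation; the kernel width `sigma` is a made-up parameter.

```python
import numpy as np

def rbf_deform(control_src, control_dst, points, sigma=0.5):
    # Learn a smooth displacement field from the tracked feature-point
    # offsets (control_src -> control_dst) and apply it to `points`,
    # the regional vertices around the major feature points.
    control_src = np.asarray(control_src, float)
    offsets = np.asarray(control_dst, float) - control_src

    def phi(r):  # Gaussian RBF kernel
        return np.exp(-(r / sigma) ** 2)

    # Solve for per-control-point weights so control points map exactly.
    d = np.linalg.norm(control_src[:, None] - control_src[None, :], axis=-1)
    weights = np.linalg.solve(phi(d), offsets)       # (n, dim)

    pts = np.asarray(points, float)
    d2 = np.linalg.norm(pts[:, None] - control_src[None, :], axis=-1)
    return pts + phi(d2) @ weights
```

Because the system is solved exactly at the control points, the major feature points follow the tracked motion directly while nearby vertices are deformed smoothly, matching the direct/indirect estimation described in the abstract.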

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1175-1186
    • /
    • 2017
  • Human emotion recognition is one of the promising applications in the era of artificial super intelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using different and multiple feature representations. Next, the multiple classifiers generate a recognition result (a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, these soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of the given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, extensive comparative experiments were performed on the CK+ FER database. The feasibility of the WSVC algorithm is demonstrated by comparison with recently developed FER algorithms.
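The weighted combination of per-frame soft votes described above can be sketched as a quality-weighted average of class-probability vectors. This is a minimal illustration of the voting rule only; how the per-frame quality weights are derived from expression intensity and frontal-pose degree is the paper's contribution and is not reproduced here.

```python
import numpy as np

def weighted_soft_vote(probs, weights):
    # probs:   (n_frames, n_classes) per-frame class probabilities (soft votes)
    # weights: (n_frames,) per-frame quality scores, e.g. combining peak
    #          expression intensity and frontal-pose degree (stand-ins here)
    probs = np.asarray(probs, float)
    w = np.asarray(weights, float)
    w = w / w.sum()                 # normalize quality weights
    combined = w @ probs            # weighted average of the soft votes
    return int(np.argmax(combined)), combined
```

A frame with a strong, frontal expression thus dominates the sequence-level decision, while low-quality frames contribute little.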