• Title/Summary/Keyword: Facial Feature


Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1310-1318
    • /
    • 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on the existing methods based on 3D scanners or cameras. Without expensive image-input equipment, 3D models can easily be created using only front and side images. The system can animate 3D facial models through an animation server on the WWW that is independent of specific platforms and software; it was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points from the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameter) values. This system can be used for generating avatars on the WWW.
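As a rough illustration of the FAP-driven animation step described above, the sketch below displaces one feature point of a generic mesh by a single FAP value scaled by a FAPU (facial animation parameter unit). This is a hypothetical simplification: a real FAP animator also deforms the vertices surrounding each feature point, and the function name and parameters here are illustrative.

```python
import numpy as np

def apply_fap(vertices, feature_idx, fap_value, fapu, direction):
    # Displace one mesh feature point by a FAP value expressed in FAPU units.
    out = vertices.copy()
    out[feature_idx] = out[feature_idx] + fap_value * fapu * np.asarray(direction, dtype=float)
    return out

# Generic model reduced to three vertices; "open the jaw" by moving one
# feature point downward.
mesh = np.zeros((3, 3))
moved = apply_fap(mesh, feature_idx=1, fap_value=100, fapu=0.002,
                  direction=(0.0, -1.0, 0.0))
```

Here the feature point moves 100 × 0.002 = 0.2 model units downward, while all other vertices stay in place.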

A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model (근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현)

  • Lee, Hyae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.5
    • /
    • pp.932-938
    • /
    • 2012
  • Facial expression is significant in mutual communication: it is the only means of expressing humans' countless inner feelings better than the diverse languages humans use. This paper suggests a muscle-model-based 3D facial expression generation system that produces easy and natural facial expressions. Based on Waters' muscle model, it adds and uses the muscles needed to produce natural expressions. Among the complex elements involved in producing expressions, it focuses on the core feature elements of a face, such as the eyebrows, eyes, nose, mouth, and cheeks, and uses facial muscles and muscle vectors to group the anatomically connected facial muscles. By simplifying and reconstructing the AUs (Action Units), the basic units of facial expression change, it generates easy and natural facial expressions.
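The muscle-vector deformation at the heart of a Waters-style model can be sketched as follows. This is a simplified version that keeps only the radial falloff and omits the angular falloff of the full model; the function name and parameter values are illustrative, not the paper's.

```python
import numpy as np

def waters_linear_muscle(vertices, head, contraction, r_start, r_end):
    # Pull each vertex inside the muscle's zone of influence toward the
    # muscle head (bone attachment), scaled by a linear radial falloff.
    head = np.asarray(head, dtype=float)
    out = vertices.astype(float).copy()
    for i, v in enumerate(out):
        r = np.linalg.norm(v - head)
        if r <= r_start:
            k = 1.0                                  # full influence
        elif r <= r_end:
            k = (r_end - r) / (r_end - r_start)      # linear falloff
        else:
            continue                                 # outside the zone
        out[i] = v + contraction * k * (head - v) / max(r, 1e-9)
    return out

verts = np.array([[1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
moved = waters_linear_muscle(verts, head=(0.0, 0.0, 0.0),
                             contraction=0.3, r_start=0.5, r_end=2.0)
```

The first vertex (distance 1.0) is pulled toward the muscle head; the second (distance 5.0) lies outside the zone of influence and is unchanged.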

Automatic Face Identification System Using Adaptive Face Region Detection and Facial Feature Vector Classification

  • Kim, Jung-Hoon;Do, Kyeong-Hoon;Lee, Eung-Joo
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1252-1255
    • /
    • 2002
  • In this paper, a face recognition algorithm is proposed that uses skin color information in the HSI color space collected from face images, an elliptical mask, facial features including the eyes, nose, and mouth, and geometrical feature vectors of the face and facial angles. The proposed algorithm improves face region extraction by using HSI information, which is relatively similar to the human visual system, along with color tone information about facial skin colors, an elliptical mask, and intensity information. Moreover, it improves face recognition accuracy by using feature information of the eyes, nose, and mouth together with Θ1 (ACRED), Θ2 (AMRED), and Θ3 (ANRED), which are geometrical face angles. The proposed algorithm enables exact face reading by using color tone information, an elliptical mask, brightness information, and structural characteristic angles together, unlike existing algorithms that use only brightness information. Moreover, it uses structurally related characteristic values and certain vectors together in the recognition step.
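A minimal sketch of the skin-color classification step described above, assuming per-pixel thresholds on hue and saturation. It uses the standard-library HSV conversion as a stand-in for HSI, and the threshold values are illustrative, not the authors'.

```python
import colorsys

def is_skin(r, g, b):
    # Classify an RGB pixel as skin by thresholding hue, saturation, and
    # value (HSV used here as an approximation of the paper's HSI space).
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    reddish = hue_deg <= 50.0 or hue_deg >= 340.0   # skin hues cluster near red
    return reddish and 0.10 <= s <= 0.70 and v >= 0.35

print(is_skin(224, 172, 138))  # typical skin tone -> True
print(is_skin(30, 80, 200))    # saturated blue    -> False
```

In a full pipeline, the resulting binary skin map would then be refined with the elliptical mask and intensity information before feature extraction.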


Facial Feature Based Image-to-Image Translation Method

  • Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4835-4848
    • /
    • 2020
  • The recent expansion of the digital content market is increasing the technical demand for various facial image transformations within virtual environments. Recent image translation technology enables changes between various domains. However, current image-to-image translation techniques do not provide stable performance through unsupervised learning, especially for shape learning in the face transition field. This is because the face is a highly sensitive feature, and the quality of the resulting image is significantly affected if the transitions in the eyes, nose, and mouth are not performed effectively. We herein propose a new unsupervised method that can transform an in-the-wild face image into another face style through radical transformation. Specifically, the proposed method applies two face-specific feature loss functions to a generative adversarial network. The proposed technique shows that stable conversion to other domains is possible while maintaining the image characteristics in the eyes, nose, and mouth.
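The role of face-specific feature losses can be illustrated with a hypothetical landmark-consistency term added to the generator objective. The loss definition, function names, and weight below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def landmark_consistency_loss(src_pts, gen_pts):
    # Mean Euclidean distance between corresponding facial landmarks of the
    # source and generated images; small values mean the eyes/nose/mouth
    # geometry survived the translation.
    src = np.asarray(src_pts, dtype=float)
    gen = np.asarray(gen_pts, dtype=float)
    return float(np.linalg.norm(src - gen, axis=1).mean())

def generator_objective(adv_loss, src_pts, gen_pts, w_face=10.0):
    # Total generator objective: adversarial term plus a weighted
    # face-feature term.
    return adv_loss + w_face * landmark_consistency_loss(src_pts, gen_pts)

# Identical landmarks contribute no penalty; displaced ones are penalized.
base = generator_objective(1.0, [[0, 0], [1, 1]], [[0, 0], [1, 1]])
```

A generator minimizing this combined objective is discouraged from distorting the key facial regions even while radically changing the overall style.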

Facial Feature Extraction using Nasal Masks from 3D Face Image (코 형상 마스크를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.1-7
    • /
    • 2004
  • This paper proposes a new method for facial feature extraction that can be used to normalize face images for 3D face recognition. 3D images are much less sensitive to illumination sources than intensity images, so they make it possible to recognize individuals reliably. However, input face images may have variable poses involving rotation, panning, and tilting. If these variations are not considered, incorrect features may be extracted, and the face recognition system will then produce poor matches. It is therefore necessary to normalize an input image in size and orientation. Geometrical facial features such as the nose, eyes, and mouth are generally used in face image normalization. In particular, the nose is the most prominent feature in a 3D face image, so this paper describes a nose feature extraction method using 3D nasal masks that resemble real nasal shapes.
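The mask-based nose localization can be sketched as template matching on a depth image: slide a nasal-shape depth template over the range image and keep the position with the smallest sum of squared differences. The matching criterion and data layout here are assumptions for illustration.

```python
import numpy as np

def find_nose(depth, mask):
    # Exhaustive template match: return the top-left (row, col) position
    # where the nasal mask best fits the depth image (minimum SSD).
    H, W = depth.shape
    h, w = mask.shape
    best, best_pos = None, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = float(((depth[y:y+h, x:x+w] - mask) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Toy range image: flat face with a raised nose-like bump at (2, 3).
depth = np.zeros((6, 6))
bump = np.array([[1.0, 2.0], [2.0, 3.0]])
depth[2:4, 3:5] = bump
print(find_nose(depth, bump))  # -> (2, 3)
```

Once the nose position is found, the pose (rotation, panning, tilting) can be estimated relative to it and the face normalized before recognition.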

Age of Face Classification based on Gabor Feature and Fuzzy Support Vector Machines (Gabor 특징과 FSVM 기반의 연령별 얼굴 분류)

  • Lee, Hyun-Jik;Kim, Yoon-Ho;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.16 no.1
    • /
    • pp.151-157
    • /
    • 2012
  • Recently, owing to technological advances in computer science and image processing, age-based face classification has become a prevalent topic. It is difficult to estimate the age of a facial shape with statistical figures because a person's facial shape changes due not only to biological genes but also to personal habits. In this paper, we propose a robust age-based face classification method using Gabor features and a fuzzy support vector machine (FSVM). A Gabor wavelet function is used to extract the facial feature vector, and to address the intrinsic age ambiguity problem, an FSVM is introduced; using the FSVM, age membership functions are defined. Experiments were conducted to test the proposed approach, and the results showed that the proposed method achieves better age classification precision.
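The Gabor feature extraction step can be sketched by building the real part of a 2D Gabor kernel; convolving the face image with a bank of such kernels (varying wavelength and orientation) yields the texture features fed to the classifier. The parameter values below are illustrative.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # Real part of a 2D Gabor filter: a Gaussian envelope modulating a
    # cosine carrier rotated by angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
```

Each (wavelength, orientation) pair produces one kernel; the filter responses over the face image form the feature vector on which the FSVM's age membership functions operate.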

Gold Nanoshell-Mediated Photothermal Therapy for Facial Pores

  • Lee, Sang Ju;Jung, Jeanne;Seok, Seung Hui;Kim, Dong Hyun
    • Medical Lasers
    • /
    • v.8 no.2
    • /
    • pp.97-100
    • /
    • 2019
  • Facial pores are a visible topographic feature of skin surfaces and are generally the enlarged openings of pilosebaceous follicles. Enlarged facial pores can be a significant cosmetic problem, particularly for women. Recently, gold nanoshell-mediated photothermal therapy (PTT) has been reported to be effective in treating recurrent acne. The treatment of enlarged facial pores with gold nanoshell-mediated PTT produced excellent results with no side effects. The two cases reported here demonstrate the possibility of gold nanoshell-mediated PTT as a safe and effective treatment for enlarged facial pores.

Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.75-83
    • /
    • 2008
  • We propose a welfare interface using multiple facial feature tracking, which can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and accurately tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement and clicks are implemented. To assess the validity of the proposed system, it was applied to an interface system for a web browser and tested on a group of 25 users. The results show that our system has an accuracy of 99% and processes more than 21 frames/sec on a PC for $320{\times}240$ input images; as such, it can provide user-friendly and convenient access to a computer in real time.
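The mean-shift tracking step can be sketched as follows: the search window is moved iteratively to the weighted centroid of the likelihood values inside it until it stops moving. This is a generic mean-shift sketch on a likelihood map; the paper's kernel and likelihood model may differ.

```python
import numpy as np

def mean_shift(weights, center, radius, n_iter=10):
    # Iteratively shift a square window to the weighted centroid of the
    # likelihood map `weights` until convergence.
    cy, cx = center
    H, W = weights.shape
    for _ in range(n_iter):
        y0, y1 = max(0, cy - radius), min(H, cy + radius + 1)
        x0, x1 = max(0, cx - radius), min(W, cx + radius + 1)
        win = weights[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * win).sum() / total))
        nx = int(round((xs * win).sum() / total))
        if (ny, nx) == (cy, cx):
            break                     # converged
        cy, cx = ny, nx
    return cy, cx

# Eye-likelihood blob at (7, 8); start the window nearby and converge onto it.
w = np.zeros((16, 16))
w[6:9, 7:10] = 1.0
track = mean_shift(w, center=(5, 5), radius=3)
```

The tracked eye position per frame can then be mapped to cursor movement, with mouth-shape changes (from template matching) triggering clicks.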

Sasang Constitution Detection Based on Facial Feature Analysis Using Explainable Artificial Intelligence (설명가능한 인공지능을 활용한 안면 특징 분석 기반 사상체질 검출)

  • Jeongkyun Kim;Ilkoo Ahn;Siwoo Lee
    • Journal of Sasang Constitutional Medicine
    • /
    • v.36 no.2
    • /
    • pp.39-48
    • /
    • 2024
  • Objectives The aim was to develop a method for detecting Sasang constitution based on the ratios of facial landmarks and to provide an objective and reliable tool for Sasang constitution classification. Methods Facial images, KS-15 scores, and certainty scores were collected from subjects identified by the Korean Medicine Data Center. Facial landmarks were detected, yielding 2279 facial ratio features. Tree-based models were trained to classify Sasang constitution, and Shapley Additive Explanations (SHAP) analysis was employed to identify important facial features. Additionally, Body Mass Index (BMI) and a personality questionnaire were incorporated as supplementary information to enhance model performance. Results Using the tree-based models, the accuracy for classifying the Taeeum, Soeum, and Soyang constitutions was 81.90%, 90.49%, and 81.90%, respectively. SHAP analysis revealed important facial features, while the inclusion of BMI and the personality questionnaire improved model performance. This demonstrates that facial ratio-based Sasang constitution analysis yields effective and accurate classification results. Conclusions Facial ratio-based Sasang constitution analysis provides rapid and objective results compared to traditional methods. This approach holds promise for enhancing personalized medicine in Korean traditional medicine.
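One plausible way to build ratio features from detected landmarks, of the kind described above, is to take all pairwise landmark distances and form ratios between distance pairs. The exact construction of the paper's 2279 features is not specified here; this is an illustrative assumption.

```python
import numpy as np
from itertools import combinations

def facial_ratio_features(landmarks):
    # All pairwise distances between landmarks, then all ratios between
    # distinct distance pairs; ratios are scale-invariant, so the features
    # do not depend on image resolution.
    pts = np.asarray(landmarks, dtype=float)
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i, j in combinations(range(len(pts)), 2)]
    return [d1 / d2 for d1, d2 in combinations(dists, 2)]

# Four toy landmarks give C(4,2) = 6 distances and C(6,2) = 15 ratio features.
feats = facial_ratio_features([(0, 0), (1, 0), (0, 1), (1, 1)])
print(len(feats))  # -> 15
```

Feature vectors of this kind would then be fed to the tree-based classifiers, with SHAP values attributing the prediction back to individual ratios.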

Face Recognition Network using gradCAM (gradCam을 사용한 얼굴인식 신경망)

  • Chan Hyung Baek;Kwon Jihun;Ho Yub Jung
    • Smart Media Journal
    • /
    • v.12 no.2
    • /
    • pp.9-14
    • /
    • 2023
  • In this paper, we propose a face recognition network that attempts to use more facial features while using a smaller number of training sets. When combining neural networks for face recognition, we want to use networks that rely on different parts of the facial features. However, network training chooses randomly where these facial features are obtained. On the other hand, the judgment basis of a network model can be expressed as a saliency map through gradCAM. Therefore, in this paper, we use gradCAM to visualize where the trained face recognition model has made its observations and recognition judgments, so that the network combination can be constructed based on the different facial features used. Using this approach, we trained a network for a small face recognition problem. In a simple toy face recognition example, the recognition network used in this paper improves the accuracy by 1.79% and reduces the equal error rate (EER) by 0.01788 compared to the conventional approach.
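The core of the Grad-CAM saliency map used above can be sketched directly: each feature map of the last convolutional layer is weighted by its global-average-pooled gradient, the weighted maps are summed over channels, and negatives are clamped with ReLU. The toy tensors below are illustrative.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps, gradients: arrays of shape (channels, height, width).
    weights = gradients.mean(axis=(1, 2))          # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted channel sum
    return np.maximum(cam, 0.0)                    # ReLU keeps positive evidence

# Toy example: two channels of 2x2 feature maps with opposite gradient signs.
fmaps = np.array([[[1.0, 0.0], [0.0, 1.0]],
                  [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],        # mean weight  1.0
                  [[-1.0, -1.0], [-1.0, -1.0]]])   # mean weight -1.0
cam = grad_cam(fmaps, grads)
```

Comparing such maps across candidate networks shows which facial regions each one actually relies on, which is the basis for choosing complementary networks to combine.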