• Title/Summary/Keyword: facial feature point


Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom;Ko, Hyun-kwan;Lee, Seon-Gyeong;Song, Bok-Deuk;Kim, Chae-Kyu;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.5 / pp.535-546 / 2019
  • Recently, there has been much research on whole-face replacement systems, but stable results are hard to obtain because of variation in pose, angle, and facial appearance. To produce a natural synthesis when replacing the face shown in a video image, technologies such as face region detection, feature extraction, face alignment, face region segmentation, 3D pose adjustment, and facial transposition must all operate at a precise level, and each must be able to be combined interdependently with the others. Our analysis shows that, among facial replacement technologies, facial feature point extraction and face alignment carry both the greatest implementation difficulty and the greatest contribution to the system. The facial transposition and 3D pose adjustment techniques, on the other hand, are less difficult to implement but still need further development. In this paper, we compare four facial replacement models suitable for the COX platform: 2D Faceswap, OpenPose, Deepfake, and CycleGAN. Each model suits a different scenario: frontal-pose image conversion, poses with active body movement, faces rotated up to 15 degrees to the left or right, and synthesis with a Generative Adversarial Network.
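
As a rough illustration of how these component technologies chain together, the sketch below combines dlib landmark detection, affine face alignment, convex-hull face segmentation, and Poisson blending. It is a minimal pipeline in the spirit of a 2D face swap, not any of the compared systems, and the predictor model path is an assumption.

```python
# Minimal face-swap sketch (not the paper's implementation): landmark
# detection, alignment, segmentation, and seamless blending.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumed local path to dlib's standard 68-point landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(img):
    """Return the 68 facial feature points of the first detected face."""
    face = detector(img, 1)[0]
    shape = predictor(img, face)
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

def swap(src, dst):
    """Warp the src face onto the dst face and blend it seamlessly."""
    src_pts, dst_pts = landmarks(src), landmarks(dst)
    M, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)        # face alignment
    warped = cv2.warpAffine(src, M, (dst.shape[1], dst.shape[0]))
    hull = cv2.convexHull(dst_pts.astype(np.int32))             # face segmentation
    mask = np.zeros(dst.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    cx, cy = np.mean(dst_pts, axis=0)
    return cv2.seamlessClone(warped, dst, mask, (int(cx), int(cy)),
                             cv2.NORMAL_CLONE)                  # facial transposition
```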

Automatic Extraction of the Facial Feature Points Using Moving Color (색상 움직임을 이용한 얼굴 특징점 자동 추출)

  • Kim, Nam-Ho;Kim, Hyoung-Gon;Ko, Sung-Jea
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.8 / pp.55-67 / 1998
  • This paper presents an automatic facial feature point extraction algorithm for sequential color images. To extract the facial region in a video sequence, a moving color detection technique is proposed that emphasizes moving skin-color regions by applying a motion detection algorithm to skin-color-transformed images. The threshold for pixel-difference detection is also chosen according to the transformed pixel value, which represents the probability of the desired color information. Eye candidate regions are selected using both the black and white color information inside the skin-color region and the valley information of the moving skin region, detected with morphological operators. The eye region is finally decided by the geometrical relationship between the eyes and the color histogram. To locate the exact feature points, PCA (Principal Component Analysis) is applied to the eye and mouth regions. Experimental results show that the feature points of the eyes and mouth are obtained correctly irrespective of background and of face orientation and size.
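
The sketch below illustrates the moving-skin-color idea with OpenCV: a frame difference gated by a skin-color mask and cleaned up with morphological operators. The YCrCb thresholds are common approximations, not the paper's skin-color transform, and the fixed difference threshold stands in for the paper's adaptive one.

```python
# Sketch of moving-skin-color detection: skin mask AND frame difference.
import cv2
import numpy as np

def moving_skin_mask(prev_bgr, curr_bgr, diff_thresh=15):
    """Emphasize skin-colored pixels that also moved between two frames."""
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin range
    motion = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    _, moving = cv2.threshold(motion, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_and(skin, moving)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small holes
```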


Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Gang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tae;Park, Min-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.401-404 / 2007
  • This paper demonstrates that a 2D emotion-recognition database can be applied to 3D by means of mirror projection. Using emotion probabilities, facial expressions are generated on the basis of fuzzy modeling, and a facial expression function is proposed by applying fuzzy theory to the three basic movements that drive expressions. The proposed method applies the feature vectors used for 2D emotion recognition to 3D, using multiple images obtained through mirror projection; the nonlinear facial expressions of the real model's basic emotions, which are the target of the 2D modeling, are thereby modeled on a fuzzy basis. Expressions are represented with the six basic emotions of happiness, sadness, disgust, anger, surprise, and fear; the mean value of each emotion is used for its probability, and dynamic facial expressions are generated from the six emotion probabilities. The proposed method is applied to a 3D humanoid avatar and compared against the expression vectors of the real model.
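
A toy sketch of the final step, blending per-emotion displacements by their probabilities, is given below. The blendshape data and the simple linear mix are assumptions standing in for the paper's fuzzy membership functions.

```python
# Probability-weighted expression blending over six basic emotions.
import numpy as np

EMOTIONS = ["happiness", "sadness", "disgust", "anger", "surprise", "fear"]

def blend_expression(neutral, blendshapes, probs):
    """Mix per-emotion displacement vectors by their emotion probabilities."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()                    # normalize to a distribution
    offset = sum(p * blendshapes[e] for p, e in zip(probs, EMOTIONS))
    return neutral + offset

# Example with random 3D feature points for a hypothetical 10-point face model.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(10, 3))
shapes = {e: rng.normal(scale=0.1, size=(10, 3)) for e in EMOTIONS}
face = blend_expression(neutral, shapes, [0.6, 0.1, 0.05, 0.05, 0.15, 0.05])
```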


Anthropometric Facial Characteristics of Adult Tae-eumin of Northern and Southern Lineage in the Korean Peninsula

  • Kim, Eun-Hee;Cho, Yong-Jin;Jung, Yee-Hong;Seo, Young-Kwang;Kim, Sun-Hyung;Lee, Soo-Kyung;Koh, Byung-Hee;Kim, Dal-Rae
    • The Journal of Korean Medicine / v.30 no.6 / pp.86-95 / 2009
  • Objectives: This study aimed to examine differences in external appearance measurements between subjects of different regional lineages, treated as subgroups within the Tae-eumin Sasang grouping. Methods: We chose 51 Tae-eumin subjects diagnosed by Korean Sasang constitutional medical doctors aided by voice analysis. The subjects were divided into two groups, the northern and southern lineages, by an expert on the facial characteristics of the two lineages. We took pictures of their frontal and lateral views by Martin's method, measured projected facial lengths with the Facial Feature Measurement Program, and analyzed anthropometric facial differences between the northern and southern types. Results: The results show differences between the northern and southern types. First, the northern type of face has larger measurements than the southern type on the frontal face. Second, the northern type has higher "height" measurements, meaning the distance from the pupil to a specific measurement point, than the southern type on the frontal face. Third, on the frontal face, the northern and southern types differ with respect to the eyebrow, the sellion point, and the eye. Fourth, on the lateral face, they differ in the lip, mandible, and ear. Conclusions: We found our anthropometric facial measurements of the northern and southern lineages to be in accordance with the previous literature. Knowledge of the differences between the two lineages can serve as a hint in constitutional diagnosis when differentiation is clinically confusing.
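
For illustration, a projected length between two landmarks might be computed as below; the function and the coordinates are hypothetical stand-ins, not the Facial Feature Measurement Program used in the study.

```python
# Projected-length measurement between two 2D facial landmarks.
import numpy as np

def projected_length(p, q, axis=None):
    """Euclidean distance between two landmarks, or its projection
    onto the x (axis=0) or y (axis=1) image axis."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if axis is None:
        return float(np.linalg.norm(p - q))
    return float(abs(p[axis] - q[axis]))

pupil, sellion = (312.0, 240.0), (318.0, 198.0)    # example pixel coordinates
height = projected_length(pupil, sellion, axis=1)  # vertical "height" measure
```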


Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.15-30 / 2023
  • Recently, various services using artificial intelligence (AI) have been emerging in the media field as well. However, most video editing, which involves finding an editing point and attaching the video, is still carried out manually, requiring a lot of time and human resources. This study therefore proposes a methodology that detects the editing points of a video according to whether the person in it is speaking, using a Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment; through this process, the temporal and spatial changes of the face are captured from the input video data. The behavior of the person in the video is then classified by the Video Swin Transformer-based model proposed in this study. Specifically, the feature map generated by the Video Swin Transformer from the video data is combined with the facial keypoints detected through face alignment, and utterance is classified through convolution layers. In conclusion, the performance of the editing point detection model using facial keypoints proposed in this paper improved from 87.46% to 89.17% relative to the model without facial keypoints.
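
A hedged sketch of the fusion step is shown below: a head that adds projected keypoint embeddings to backbone features and classifies utterance with a temporal convolution. The module shapes and the additive fusion are assumptions, and the Video Swin backbone itself is stubbed out rather than included.

```python
# Sketch of fusing video features with facial keypoints for utterance
# classification (shapes and fusion scheme are illustrative assumptions).
import torch
import torch.nn as nn

class KeypointFusionHead(nn.Module):
    def __init__(self, feat_dim=768, n_keypoints=68, n_classes=2):
        super().__init__()
        self.kp_proj = nn.Linear(n_keypoints * 2, feat_dim)      # embed (x, y) keypoints
        self.classifier = nn.Sequential(
            nn.Conv1d(feat_dim, 256, kernel_size=3, padding=1),  # temporal convolution
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(256, n_classes),
        )

    def forward(self, video_feats, keypoints):
        # video_feats: (B, T, feat_dim) from a video backbone (e.g. Video Swin)
        # keypoints:   (B, T, n_keypoints, 2) from face alignment
        kp = self.kp_proj(keypoints.flatten(2))        # (B, T, feat_dim)
        fused = (video_feats + kp).transpose(1, 2)     # (B, feat_dim, T)
        return self.classifier(fused)                  # speaking / not-speaking logits

head = KeypointFusionHead()
logits = head(torch.randn(2, 16, 768), torch.randn(2, 16, 68, 2))
```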

Automatic Generation of Rule-based Caricature Image (규칙 기반 캐리커쳐 자동 생성 기법)

  • Lee, Eun-Jung;Kwon, Ji-Yong;Lee, In-Kwon
    • Journal of the Korea Computer Graphics Society / v.12 no.4 / pp.17-22 / 2006
  • We present a technique that automatically generates caricatures from input face images. We obtain the mean shape of the training images and extract the input image's feature points using an AAM (Active Appearance Model). Drawing on the literature of caricature artists, we define exaggeration rules; applying these rules to the input feature points yields exaggerated feature points. To turn the result into a cartoon-like image, we apply a cartoon-stylizing method to the input image and combine it with a facial sketch. The input image is warped to the exaggerated feature points for the final result. Our method generates a caricature image automatically while minimizing user interaction.
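
One simple way to realize an exaggeration rule is to push each feature point away from the AAM mean shape in proportion to its deviation, as sketched below. The uniform gain is illustrative; the paper derives its rules from caricature artists' literature.

```python
# Deviation-amplifying exaggeration of feature points (illustrative rule).
import numpy as np

def exaggerate(points, mean_shape, gain=1.5):
    """Amplify deviations of feature points from the training mean shape."""
    points = np.asarray(points, float)
    mean_shape = np.asarray(mean_shape, float)
    return mean_shape + gain * (points - mean_shape)
```

The exaggerated points can then drive an image warp from the original feature positions, as the abstract describes.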


Vector-based Face Generation using Montage and Shading Method (몽타주 기법과 음영합성 기법을 이용한 벡터기반 얼굴 생성)

  • 박연출;오해석
    • Journal of KIISE: Software and Applications / v.31 no.6 / pp.817-828 / 2004
  • In this paper, we propose a vector-based face generation system that uses montage and shading methods while preserving the designer's (artist's) style. The proposed system automatically generates a character face resembling a human face, using facial features extracted from a photograph. In addition, unlike previous face generation systems based on contours, the proposed system is based on color and composes the face from the facial features and shading extracted from a photograph; it can therefore produce a more realistic face, closer to the human face. Since the system is vector-based, the generated character face has no size limits or constraints, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, it differs from other approaches in that the artist's style is preserved intact in the result.

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data and capturing a face image from video is essential for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system proceeds in three steps: face detection, facial feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas in accordance with the FAPs (Facial Animation Parameters) defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can trace the expression data at about 8 fps.
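
The tracing step can be approximated with OpenCV's histogram back-projection and CamShift, as sketched below. The paper's exact color probability distribution model may differ, so this is an assumption-laden stand-in.

```python
# Color-probability tracking of a feature window via back-projection + CamShift.
import cv2

def make_tracker(frame_bgr, roi):                  # roi = (x, y, w, h)
    """Build a tracker seeded with the hue histogram of the initial region."""
    x, y, w, h = roi
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    def track(next_bgr, window):
        """Shift the window to the new color-probability peak in next_bgr."""
        hsv = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.CamShift(backproj, window, crit)
        return window
    return track
```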

Three-dimensional Face Recognition based on Feature Points Compression and Expansion

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Park, Sang-min;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk;Son, Byounghee
    • Journal of Multimedia Information System / v.6 no.2 / pp.91-98 / 2019
  • Many researchers have attempted to recognize three-dimensional faces using feature points extracted from two-dimensional facial photographs. However, owing to the limits of flat photographs, it is very difficult to recognize faces rotated more than 15 degrees from the feature points originally extracted from the photographs, which makes a multi-angle recognition algorithm hard to build. In this paper, we propose a new algorithm for three-dimensional face recognition based on feature points extracted from a flat photograph. The method divides the face into six feature-point vector zones; the vector values are then compressed or expanded according to the rotation angle of the face, so that its feature points can be recognized in three-dimensional form. For this purpose, the average compression and expansion ratios of facial data from 100 persons were obtained for each angle and face zone, and the face angle was estimated from the distance between the middle of the forehead and the outer corner of the eye. As a result, substantially improved recognition performance was obtained at a face rotation angle of 30 degrees.
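
A toy version of the compression/expansion idea scales the horizontal spread of the landmarks by the inverse cosine of the yaw angle, as below. The paper instead uses per-zone ratios averaged over 100 subjects, so this is only a geometric approximation.

```python
# Undo horizontal foreshortening of landmarks at a given yaw angle.
import numpy as np

def compensate_yaw(points_2d, yaw_deg):
    """Expand the x-coordinates of landmarks to counter projection shrink."""
    pts = np.asarray(points_2d, dtype=float).copy()
    scale = 1.0 / np.cos(np.radians(yaw_deg))   # inverse of projected shrink
    center_x = pts[:, 0].mean()
    pts[:, 0] = center_x + (pts[:, 0] - center_x) * scale
    return pts
```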

A Study of Face Feature Tracking and Moving Measure Devices (얼굴 특징점 추적 및 움직임 측정도구)

  • Lee, Jeong-Hee;Lee, Young-Hee;Cha, Eui-Young
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.5 / pp.295-302 / 2011
  • This paper proposes facial feature tracking based on modified ART2 neural networks. We also suggest new measures, a 'Persistence Exponent' and a 'Moving Space Exponent', as criteria for the input vector composed of these features. The proposed methods were applied to classify 48 students into two classes (ADHD positive, ADHD negative). The experimental results show that the proposed methods are effective for image-processing-based ADHD behavior pattern classification.
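
The sketch below gives illustrative trajectory-based proxies for such movement measures. The names `moving_space` and `persistence` and their definitions are assumptions, not the paper's 'Moving Space Exponent' and 'Persistence Exponent'.

```python
# Illustrative trajectory-based movement measures for a tracked feature.
import numpy as np

def moving_space(trajectory):
    """Area of the bounding box covered by a tracked feature trajectory,
    one simple way to quantify how widely a feature moves."""
    t = np.asarray(trajectory, float)              # shape (frames, 2)
    extent = t.max(axis=0) - t.min(axis=0)
    return float(extent[0] * extent[1])

def persistence(trajectory, radius=5.0):
    """Fraction of frames where the feature stays within `radius` pixels
    of its previous position (a stability proxy)."""
    t = np.asarray(trajectory, float)
    steps = np.linalg.norm(np.diff(t, axis=0), axis=1)
    return float((steps <= radius).mean())
```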