• Title/Summary/Keyword: Face Video Synthesis (얼굴 동영상 합성)


Analysis and Synthesis of Facial Expression using Base Faces (기준얼굴을 이용한 얼굴표정 분석 및 합성)

  • Park, Moon-Ho; Ko, Hee-Dong; Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.8 / pp.827-833 / 2000
  • Facial expression is an effective tool for expressing human emotion. In this paper, a facial expression analysis method based on base faces and their blending ratios is proposed. Seven base faces were chosen as the axes for describing and analyzing an arbitrary facial expression: surprise, fear, anger, disgust, happiness, sadness, and the expressionless face. Each facial expression was built by fitting a generic 3D facial model to a facial image. Two comparable methods, Genetic Algorithms and Simulated Annealing, were used to search for the blending ratio of the base faces. The usefulness of the proposed method for facial expression analysis was demonstrated by the facial expression synthesis results.
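The blending-ratio search described above can be illustrated with a short simulated-annealing sketch; the feature vectors, cooling schedule, and step size below are illustrative assumptions, not the authors' implementation (which also compares a Genetic Algorithm).

```python
import numpy as np

# Minimal simulated-annealing sketch for estimating base-face blending ratios.
# `base_faces` (7 x D) and `target` (D,) are hypothetical feature vectors
# (e.g., vertices of the fitted 3D face model); not the authors' code.

def blend_error(weights, base_faces, target):
    """Reconstruction error of the target expression from weighted base faces."""
    return np.linalg.norm(weights @ base_faces - target)

def anneal_blend_ratio(base_faces, target, steps=5000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(base_faces.shape[0], 1.0 / base_faces.shape[0])  # start uniform
    best_w, best_e = w.copy(), blend_error(w, base_faces, target)
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-6               # linear cooling schedule
        cand = np.clip(w + rng.normal(0, 0.05, w.size), 1e-9, None)
        cand /= cand.sum()                                  # keep ratios on the simplex
        e_old = blend_error(w, base_faces, target)
        e_new = blend_error(cand, base_faces, target)
        if e_new < e_old or rng.random() < np.exp((e_old - e_new) / temp):
            w = cand                                        # accept better (or sometimes worse) moves
        if e_new < best_e:
            best_w, best_e = cand.copy(), e_new
    return best_w, best_e

# Example: 7 random "base faces" and a target blended from them.
rng = np.random.default_rng(1)
bases = rng.normal(size=(7, 30))
true_w = rng.dirichlet(np.ones(7))
weights, err = anneal_blend_ratio(bases, true_w @ bases)
print(weights.round(3), round(err, 4))
```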


A Method of Detection of Deepfake Using Bidirectional Convolutional LSTM (Bidirectional Convolutional LSTM을 이용한 Deepfake 탐지 방법)

  • Lee, Dae-hyeon; Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1053-1065 / 2020
  • With recent advances in hardware performance and artificial intelligence, sophisticated fake videos that are difficult to distinguish with the human eye are increasing. Face synthesis technology using artificial intelligence is called Deepfake, and anyone with basic programming skills and deep learning knowledge can produce sophisticated fake videos with it. The number of indiscriminate fake videos has increased significantly, which may lead to problems such as privacy violations, fake news, and fraud. It is therefore necessary to detect fake video clips that cannot be discriminated by the human eye. In this paper, we propose a deepfake detection model that applies a Bidirectional Convolutional LSTM and an Attention Module. Unlike an LSTM, which considers only the forward sequence, the proposed model also processes the frames in reverse order. The Attention Module is combined with a convolutional neural network to exploit the characteristics of each frame during feature extraction. Experiments show that the proposed model achieves 93.5% accuracy, and its AUC is up to 50% higher than the results of pre-existing studies.
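As a rough illustration of this architecture family (not the authors' exact network, which also includes an Attention Module), a minimal Keras sketch of a clip-level classifier built around a Bidirectional ConvLSTM might look like the following; the layer sizes, clip length, and face-crop resolution are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of a deepfake classifier over fixed-length clips of cropped face frames.
# Hyperparameters are illustrative, not the values reported in the paper.

def build_detector(frames=20, height=64, width=64, channels=3):
    inp = layers.Input(shape=(frames, height, width, channels))
    # Per-frame convolutional features, shared across the sequence.
    x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inp)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    # ConvLSTM run over the frame sequence in both directions (forward and reverse order).
    x = layers.Bidirectional(
        layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False)
    )(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # real vs. fake
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
    return model

model = build_detector()
model.summary()
```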

Word-balloon effects on Video (비디오에 대한 말풍선 효과 합성)

  • Lee, Sun-Young; Lee, In-Kwon
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.332-334 / 2012
  • Recently, as media data such as movies and dramas have grown explosively, subtitle data translated into various languages has grown as well. Most subtitles are displayed at a fixed position at the bottom or right side of the screen, but this approach has several limitations: when the subtitle is far from a character's face, the viewer's gaze is dispersed and it becomes hard to concentrate on the video, and hearing-impaired viewers may find it confusing who is speaking from the subtitle alone. In this paper, we propose a new subtitling system that displays video subtitles using the word balloons traditionally used in comics to convey dialogue. A word balloon points to the speaker with its tail and concentrates the viewer's gaze near the speaker's face, mitigating the limitations of conventional subtitles. To validate the results, we conducted a user study, which showed that the proposed method outperforms conventional subtitles in gaze stability, interest, and accuracy.
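A minimal OpenCV sketch of the compositing step, assuming the speaker's face box and the subtitle text are already known (e.g., from a face tracker and the subtitle track); the detection and layout logic of the actual system is not shown.

```python
import cv2
import numpy as np

def draw_word_balloon(frame, face_box, text):
    """Composite a simple word balloon above a face box, tail pointing at the speaker."""
    x, y, w, h = face_box
    bw, bh = max(160, 14 * len(text)), 50                     # balloon size from text length
    bx = int(np.clip(x + w // 2 - bw // 2, 0, frame.shape[1] - bw))
    by = max(10, y - bh - 40)                                  # place the balloon above the face
    # Balloon body.
    cv2.ellipse(frame, (bx + bw // 2, by + bh // 2), (bw // 2, bh // 2),
                0, 0, 360, (255, 255, 255), -1)
    # Tail pointing at the speaker's head.
    tail = np.array([[bx + bw // 2 - 10, by + bh - 5],
                     [bx + bw // 2 + 10, by + bh - 5],
                     [x + w // 2, y]], dtype=np.int32)
    cv2.fillPoly(frame, [tail], (255, 255, 255))
    cv2.putText(frame, text, (bx + 15, by + bh // 2 + 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1, cv2.LINE_AA)
    return frame

# Usage on a dummy frame with a hypothetical face box.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
out = draw_word_balloon(frame, (300, 220, 120, 140), "Hello there")
cv2.imwrite("balloon_demo.png", out)
```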

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택; 전병환
    • Journal of KIISE: Software and Applications / v.30 no.1_2 / pp.153-163 / 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for the pose transformation of a face, using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3D geometric model, we first build a standard mesh set of a reference person for several face poses: front, left, right, half-left, and half-right. For a given person, we compose only the frontal mesh of the frontal face image to be transformed; the other meshes are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed to give different views by the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate opening and closing effects for the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 persons rotating their heads horizontally and measured the location error of 14 main features between the corresponding original and transformed facial images, that is, the average difference between the distances from the center of both eyes to each feature point in the original and transformed images. As a result, the average error in feature location is about 7.0% of the distance from the center of both eyes to the center of the mouth.
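A mesh-based warp of this kind can be sketched with scikit-image's piecewise affine transform; the regular mesh and synthetic displacement below stand in for the paper's standard pose meshes and are not the Invertible Meshwarp Algorithm itself.

```python
import numpy as np
from skimage import data
from skimage.transform import PiecewiseAffineTransform, warp

# Load a sample face image and lay a regular source mesh over it.
image = data.astronaut()
rows, cols = image.shape[:2]
src_rows, src_cols = np.meshgrid(np.linspace(0, rows, 10),
                                 np.linspace(0, cols, 10), indexing="ij")
src = np.dstack([src_cols.ravel(), src_rows.ravel()])[0]

# Destination mesh: shift vertices horizontally to mimic a small pose change.
dst = src.copy()
dst[:, 0] += 20 * np.sin(np.pi * src[:, 1] / rows)

# Each mesh triangle gets its own affine map; warp resamples the image through it.
tform = PiecewiseAffineTransform()
tform.estimate(src, dst)
warped = warp(image, tform)
```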

Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality (가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현)

  • 최장석; 이기영
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.7 / pp.95-102 / 1998
  • This paper proposes a synchronization method for synthetic facial image sequences and synthetic speech. The LP-PSOLA method synthesizes the speech for each demi-syllable, and 3,040 demi-syllables are provided for unlimited synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines 11 fundamental patterns for the lip shapes of the Korean consonants and vowels; these fundamental lip shapes allow all Korean sentences to be pronounced. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, medial, and final sound of each syllable in the Korean input text, and interpolates naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech, which accomplishes synchronization of the facial image sequences and the speech. Speech synthesis requires disk memory to store the 3,040 demi-syllables, whereas synthesis of the facial image sequences requires memory for only one image, because all frames are synthesized from the neutral face. This method realizes a synchronized system that can read Korean sentences with synthetic speech and synthetic facial image sequences.
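The synchronization idea, deriving the in-between frame count from each syllable's synthesized duration and interpolating lip shapes between key frames, can be sketched as follows; the frame rate, durations, and two-dimensional "lip parameters" are illustrative values, not data from the paper.

```python
import numpy as np

FPS = 30  # assumed video frame rate

def inbetween_counts(durations_sec, fps=FPS):
    """Frames allotted to each syllable, derived from its synthesized duration."""
    return [max(1, round(d * fps)) for d in durations_sec]

def interpolate_lips(key_shapes, frame_counts):
    """Linearly interpolate lip-shape vectors between consecutive key frames."""
    frames = []
    for (a, b), n in zip(zip(key_shapes[:-1], key_shapes[1:]), frame_counts):
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            frames.append((1 - t) * np.asarray(a) + t * np.asarray(b))
    frames.append(np.asarray(key_shapes[-1]))  # hold the final lip shape
    return np.vstack(frames)

# Hypothetical per-syllable durations from the speech synthesizer and lip key frames.
durations = [0.18, 0.22, 0.25]
keys = [[0.0, 0.0], [1.0, 0.3], [0.2, 0.8], [0.0, 0.0]]  # e.g., mouth-open / mouth-width
lip_track = interpolate_lips(keys, inbetween_counts(durations))
print(len(lip_track), "frames for", sum(durations), "seconds of speech")
```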
