• Title/Summary/Keyword: Face Synthesis (얼굴 합성)

Search Results: 140

A Study on Face Object Detection System using spatial color model (공간적 컬러 모델을 이용한 얼굴 객체 검출 시스템 연구)

  • Baek, Deok-Soo;Byun, Oh-Sung;Baek, Young-Hyun
    • Journal of the Institute of Electronics Engineers of Korea IE (전자공학회논문지 IE)
    • /
    • v.43 no.2
    • /
    • pp.30-38
    • /
    • 2006
  • This paper uses the HMMD color-space distribution model defined in MPEG-7 to segment and detect the desired image regions in real time, without user intervention, for video object segmentation. Wavelet morphology is applied to remove small regions regarded as noise and regions other than the face. The optimal composition is then obtained with rough set theory. By applying the proposed video object detection algorithm to images of different sizes, it is confirmed to detect the face object more accurately than the conventional algorithm.
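The HMMD color space mentioned above (from the MPEG-7 Visual standard) is built from the per-pixel channel maximum, minimum, their difference, and an HSV-style hue. A minimal sketch of the conversion, assuming 8-bit RGB input (the exact quantization used by the paper is not given here):

```python
def rgb_to_hmmd(r, g, b):
    """Convert an RGB triple (0-255) to MPEG-7 HMMD components.

    Returns (hue, mx, mn, diff): hue in degrees [0, 360),
    mx/mn the max/min channel values, diff their difference.
    """
    mx = max(r, g, b)
    mn = min(r, g, b)
    diff = mx - mn
    if diff == 0:
        hue = 0.0  # achromatic: hue is undefined; use 0 by convention
    elif mx == r:
        hue = (60.0 * (g - b) / diff) % 360.0
    elif mx == g:
        hue = 60.0 * (b - r) / diff + 120.0
    else:
        hue = 60.0 * (r - g) / diff + 240.0
    return hue, mx, mn, diff

# Example: a skin-like RGB color
print(rgb_to_hmmd(220, 180, 140))  # → (30.0, 220, 140, 80)
```

Skin-tone segmentation would then threshold these components per pixel; the thresholds themselves are learned or tuned in the paper and are not reproduced here.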

Mutual Gaze Correction for Videoconferencing using View Morphing (모핑을 이용한 화상회의의 시선 맞춤 보정 방법)

  • Baek, Eu-Tteum;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.4 no.1
    • /
    • pp.9-15
    • /
    • 2015
  • Nonverbal cues such as eye gaze, posture, and gestures send forceful messages. Among them, eye gaze is one of the strongest forms of nonverbal communication an individual can use. However, mutual gaze is lost when we use a videoconferencing system: the displacement between the eyes on screen and the camera prevents eye contact, which can make a speaker seem unapproachable and unpleasant. In this paper, we propose an eye-gaze correction method for videoconferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual viewpoint. We apply view morphing to the detected face and synthesize the morphed face with the warped image. The results show that eye gaze is corrected and preserved, and that the image is synthesized seamlessly.

Word-balloon effects on Video (비디오에 대한 말풍선 효과 합성)

  • Lee, Sun-Young;Lee, In-Kwon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06c
    • /
    • pp.332-334
    • /
    • 2012
  • As media content such as films and dramas has grown explosively, subtitle data translated into many languages has grown as well. These subtitles are usually fixed at the bottom or right side of the screen, which has several limitations: when the subtitle is far from a character's face, the viewer's gaze is split and it is hard to concentrate on the video, and hearing-impaired viewers cannot tell from the subtitle alone who is speaking. In this paper, we propose a new subtitling system that uses word balloons, the device comics use to deliver dialogue, to display video subtitles. A word balloon points to the speaker with its tail and keeps the viewer's gaze near the speaker's face, mitigating the limitations of conventional subtitles. We conducted a user study to validate the results, which showed that the proposed method outperforms conventional subtitling in gaze stability, interest, and accuracy.

Measurement-based Face Rendering reflecting Positional Scattering Properties (위치별 산란특성을 반영한 측정기반 얼굴 렌더링)

  • Park, Sun-Yong;Oh, Kyoung-Su
    • Journal of Korea Game Society
    • /
    • v.9 no.5
    • /
    • pp.137-144
    • /
    • 2009
  • This paper identifies six facial regions that may have sharply different scattering properties and renders the face more realistically based on their diffusion profiles. The scattering properties are acquired as high-dynamic-range images by photographing the pattern formed around a unit ray incident on facial skin. The acquired data are fitted to a linear combination of Gaussian functions, which approximates the original diffusion profile of skin well and has good characteristics as a filter. During fitting, to prevent the solution from converging to a local minimum, we use a genetic algorithm to set the initial values. Each Gaussian term is applied to the irradiance map as a filter, expressing the subsurface-scattering effect. To handle up to 12 Gaussian filtering passes efficiently, we exploit the parallel capacity of CUDA.

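The "linear combination of Gaussians" fit described above can be sketched with a simple evaluator: a diffusion profile is approximated as a weighted sum of normalized 2D Gaussians over radial distance. The weights and variances below are illustrative placeholders, not the paper's measured values:

```python
import math

def gaussian(v, r):
    """Normalized 2D Gaussian with variance v, evaluated at radius r."""
    return math.exp(-r * r / (2.0 * v)) / (2.0 * math.pi * v)

def diffusion_profile(weights, variances, r):
    """Approximate a skin diffusion profile as a weighted sum of Gaussians."""
    return sum(w * gaussian(v, r) for w, v in zip(weights, variances))

# Hypothetical 3-term fit; the paper fits per-region profiles from HDR photos
weights = [0.6, 0.3, 0.1]
variances = [0.05, 0.4, 2.0]  # illustrative variances only
print(diffusion_profile(weights, variances, 0.5))
```

In the rendering pass, each Gaussian term becomes one separable blur of the irradiance map; summing the weighted blurred maps reproduces the profile's subsurface-scattering look, which is why the sum-of-Gaussians form is convenient on the GPU.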

Hardware Implementation of Facial Feature Detection Algorithm (얼굴 특징 검출 알고리즘의 하드웨어 설계)

  • Kim, Jung-Ho;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.1-10
    • /
    • 2008
  • In this paper, we designed facial-feature (eyes, mouth, and nose) detection hardware based on the ICT transform developed earlier for face detection. Our design uses a pipelined architecture for high throughput and also reduces memory size and memory access rate. The algorithm and its hardware implementation were tested on the BioID database, a worldwide face-detection test bed; assuming the face boundary was correctly detected, the facial-feature detection rate was 100% in both software and hardware. After synthesis with the Dongbu 0.18 μm CMOS library, the die size was 376,821 μm² with a maximum operating clock of 78 MHz.

The Key Frame Extraction and Anchor Recognition in News Videos (뉴스 비디오에서 키 프레임 추출과 앵커 인식)

  • 신성윤;임정훈;이양원;표성배
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2001.11a
    • /
    • pp.286-289
    • /
    • 2001
  • In a news video, the first frame in which an anchor appears can be regarded as the key frame that marks the start of a news shot. In this paper, we extract key frames for scene-change detection by combining a color histogram with a χ² histogram, and then recognize the anchor among the extracted key frames through a similarity measure based on prior knowledge of the spatial layout of anchor frames and facial feature information. A frame recognized as an anchor becomes the key frame for one news scene and plays an important role in indexing the news video.

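The χ² histogram comparison used for scene-change detection above can be sketched as follows; the bin counts and threshold are illustrative, and the paper additionally combines this with a plain color-histogram difference:

```python
def chi_square_distance(h1, h2):
    """Chi-square distance between two color histograms (equal-length lists)."""
    d = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:  # skip empty bin pairs to avoid division by zero
            d += (a - b) ** 2 / (a + b)
    return d

def is_shot_change(hist_prev, hist_curr, threshold):
    """Declare a cut when the chi-square distance exceeds a threshold."""
    return chi_square_distance(hist_prev, hist_curr) > threshold

# Two toy 4-bin histograms of consecutive frames
print(chi_square_distance([10, 20, 30, 40], [40, 30, 20, 10]))  # → 40.0
```

Because the squared difference is normalized by the bin sum, the χ² distance emphasizes changes in sparsely populated bins, which helps it catch cuts that a raw histogram difference would miss.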

Face Mask Detection using Neural Network in Real Time Video Surveillance (실시간 영상 기반 신경망을 이용한 마스크 착용 감지 시스템)

  • Go, Geon-Hyeok;Choe, Seong-Jin;Song, Do-Hun;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.208-211
    • /
    • 2021
  • In this paper, we propose a method that uses a convolutional neural network to detect whether people in video are wearing masks. With the spread of COVID-19, wearing a mask properly is required to prevent infection and transmission, but some people do not comply, and current surveillance systems only check mask wearing at the entrance and cannot tell whether people keep their masks on after entering the space. The proposed method detects faces in video with a convolutional neural network and uses the resulting data to determine the mask-wearing status of multiple people.


Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.63-69
    • /
    • 2001
  • A corpus-based lip-sync algorithm for synthesizing natural face animation is proposed in this paper. To obtain the lip parameters, marks were attached to the speaker's face and their positions were extracted with image-processing methods. The spoken utterances were labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information, with the syllable as the basic unit. Based on this audio-visual corpus, lip information represented by the mark positions is synthesized: the best syllable units are selected from the corpus, and the visual information of the selected units is concatenated. Obtaining the best units takes two steps: selecting the N-best candidates for each syllable, and then selecting the smoothest unit sequence with a Viterbi decoding algorithm. For these steps, two distance measures between syllable units are proposed: a phonetic-environment distance and a prosody distance. Computer simulations showed that the proposed algorithm performs well; in particular, pitch and intensity information proved as important as duration information for lip sync.

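The two-step unit selection described above (N-best candidates per syllable, then Viterbi decoding over the sequence) can be sketched as a standard dynamic program. The cost functions here are placeholders for the paper's phonetic-environment and prosody distances:

```python
def viterbi_unit_selection(target_costs, concat_cost):
    """Pick one candidate per syllable minimizing target + concatenation cost.

    target_costs: list over syllables; each entry is a list of per-candidate costs.
    concat_cost(i, a, b): cost of joining candidate a of syllable i with
    candidate b of syllable i + 1.
    Returns (best_total_cost, best_path) with one candidate index per syllable.
    """
    n = len(target_costs)
    cost = [list(target_costs[0])]  # accumulated best cost per candidate
    back = []                       # backpointers for path recovery
    for i in range(1, n):
        row, brow = [], []
        for b, tc in enumerate(target_costs[i]):
            best_a = min(range(len(cost[i - 1])),
                         key=lambda a: cost[i - 1][a] + concat_cost(i - 1, a, b))
            row.append(cost[i - 1][best_a] + concat_cost(i - 1, best_a, b) + tc)
            brow.append(best_a)
        cost.append(row)
        back.append(brow)
    end = min(range(len(cost[-1])), key=lambda b: cost[-1][b])
    path = [end]
    for brow in reversed(back):
        path.append(brow[path[-1]])
    path.reverse()
    return cost[-1][end], path

# Toy example: two syllables, two candidates each; joining matching
# candidate indices is free, switching costs 1.0 (illustrative numbers)
total, path = viterbi_unit_selection(
    [[1.0, 0.5], [0.2, 0.9]],
    lambda i, a, b: 0.0 if a == b else 1.0)
print(total, path)  # → 1.2 [0, 0]
```

The same recurrence handles any number of candidates per syllable; only the two distance measures (used as `target_costs` and `concat_cost`) are specific to the paper.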

The Effect on the Contents of Self-Disclosure Activities using Ubiquitous Home Robots (자기노출 심리를 이용한 유비쿼터스 로봇 콘텐츠의 효과)

  • Kim, Su-Jung;Han, Jeong-Hye
    • Journal of The Korean Association of Information Education
    • /
    • v.12 no.1
    • /
    • pp.57-63
    • /
    • 2008
  • This study uses identification, one of the critical components of psychological mechanisms, which enables a substitute for one's own self arising from the need for self-expression (disclosure) and creation. The study aims to improve educational effects with realistic content by increasing the sense of virtual reality and attention. Computer-based contents were developed, converted to run on a robot, and combined with the student's photo and an avatar through automatic loading; each content item was then applied to the students. The results indicated significant effects of the identification-based contents: they affected the students' attention, but not their academic achievement. The study confirmed the effect of applying identification with an educational robot. We suggest that improving augmented-virtuality techniques such as face detection and affective interaction may lead to concrete educational effects in further research.


Compressed Ensemble of Deep Convolutional Neural Networks with Global and Local Facial Features for Improved Face Recognition (얼굴인식 성능 향상을 위한 얼굴 전역 및 지역 특징 기반 앙상블 압축 심층합성곱신경망 모델 제안)

  • Yoon, Kyung Shin;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.1019-1029
    • /
    • 2020
  • In this paper, we propose a novel knowledge-distillation algorithm to create a compressed deep ensemble network that combines local and global features of face images. To transfer the high recognition performance of the ensemble of deep networks to a single deep network, the class-prediction probabilities, i.e. the softmax output of the ensemble network, are used as soft targets for training the single network. By applying the knowledge-distillation algorithm, the local feature information obtained by training the deep ensemble on facial subregions of the face image is transferred to a single deep network, creating a so-called compressed ensemble DCNN. The experimental results demonstrate that the proposed compressed ensemble network maintains the recognition performance of the complex ensemble of deep networks and outperforms a single deep network. In addition, the proposed method significantly reduces storage (memory) space and execution time compared with conventional ensemble deep networks developed for face recognition.
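The soft-target training described above is the core of knowledge distillation: the teacher's softmax output (often softened with a temperature) supervises the student via cross-entropy. A minimal sketch on raw logit lists, with the temperature value being an assumption rather than the paper's setting:

```python
import math

def softmax_with_temperature(logits, t):
    """Soften class scores; higher temperature t spreads probability mass."""
    scaled = [z / t for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, t):
    """Cross-entropy between teacher soft targets and student soft predictions."""
    p = softmax_with_temperature(teacher_logits, t)   # ensemble (teacher) output
    q = softmax_with_temperature(student_logits, t)   # single network (student)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

In practice this loss is computed per training image and combined with the ordinary hard-label loss; the paper's contribution lies in what the teacher is (an ensemble trained on global and local facial regions), not in the loss form itself.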