• Title/Summary/Keyword: face translation


Facial Feature Based Image-to-Image Translation Method

  • Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4835-4848
    • /
    • 2020
  • The recent expansion of the digital content market is increasing the technical demand for various facial image transformations within virtual environments. Recent image translation technology enables changes between various domains. However, current image-to-image translation techniques do not provide stable performance through unsupervised learning, especially for shape learning in the face transition field. This is because the face is a highly sensitive feature, and the quality of the resulting image suffers significantly if the transitions in the eyes, nose, and mouth are not performed effectively. We herein propose a new unsupervised method that can transform an in-the-wild face image into another face style through radical transformation. Specifically, the proposed method applies two face-specific feature loss functions to a generative adversarial network. The proposed technique shows that stable conversion to other domains is possible while maintaining the image characteristics of the eyes, nose, and mouth.
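
The abstract does not spell out the form of the two face-specific feature losses. As a hedged illustration only, one plausible ingredient is a region-masked reconstruction loss over eye/nose/mouth masks; all names, shapes, and weights below are assumptions, not the paper's method:

```python
import numpy as np

def masked_feature_loss(generated, target, masks, weights):
    """Weighted L1 loss restricted to facial-feature regions.

    generated, target: (H, W) float arrays (images or feature maps)
    masks: dict mapping region name -> (H, W) boolean mask
    weights: dict mapping region name -> float weight
    """
    loss = 0.0
    for region, mask in masks.items():
        if mask.any():
            # Penalize differences only inside this feature region.
            loss += weights[region] * np.abs(generated[mask] - target[mask]).mean()
    return loss

# Toy example: 4x4 "images" with an assumed 2x2 "eye" region.
gen = np.zeros((4, 4))
tgt = np.ones((4, 4))
eye_mask = np.zeros((4, 4), dtype=bool)
eye_mask[:2, :2] = True
loss = masked_feature_loss(gen, tgt, {"eyes": eye_mask}, {"eyes": 2.0})
```

In a GAN training loop such a term would be added to the adversarial loss so that eye, nose, and mouth regions dominate the reconstruction penalty.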

Eating Self-Efficacy: Development of a Korean Version of the Weight Efficacy Life-Style Questionnaire - A Cross-Cultural Translation and Face-Validity Study (식이 자기 효능감: 한국어판 Weight Efficacy Life-Style 설문지 개발 - 횡문화적 번역 및 안면 타당도 검증)

  • Seo, Hee-Yeon;Ok, Ji-Myung;Kim, Seo-Young;Lim, Young-Woo;Park, Young-Bae
    • Journal of Korean Medicine for Obesity Research
    • /
    • v.19 no.1
    • /
    • pp.24-30
    • /
    • 2019
  • Objectives: Eating self-efficacy is an important predictor of successful weight-control behaviors during obesity treatment. The Weight Efficacy Life-Style Questionnaire (WEL) is an internationally used measure of eating self-efficacy. The objective of this study was to develop a Korean version of the WEL (K-WEL) and verify its face validity. Methods: Following previously published guidelines, the cross-cultural translation was conducted by organizing an expert committee and proceeding through translation, back-translation, synthesis, grammar review, and final synthesis. After the WEL was translated into Korean, face validity was assessed with 35 subjects. Results: After all versions of the questionnaire were examined, the translated WEL questionnaire was finalized and licensed by the developer in writing. Seven of the 35 subjects (20%) pointed out ambiguous expressions in the translated questionnaire. All four points raised in the face-validity verification were revised for greater clarity and understanding. Conclusions: We developed the Korean version of the WEL and completed its face-validity assessment. Future research should examine the reliability and validity of the K-WEL.

Face detection using haar-like feature and Tracking with Lucas-Kanade feature tracker (Haar-like feature를 이용한 얼굴 검출과 추적을 위한 Lucas-Kanade특징 추적)

  • Kim, Ki-Sang;Kim, Se-Hoon;Park, Gene-Yong;Choi, Hyung-Il
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.835-838
    • /
    • 2008
  • In this paper, we present automatic face detection and tracking that is robust to rotation and translation. For face detection, we used Haar-like features, which enable fast detection of facial images. For tracking, we applied the Lucas-Kanade (KLT) feature tracker, which is robust to rotated facial images. Experimental results confirmed that the proposed face detection and tracking are robust to rotation and translation.
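
The Lucas-Kanade step the paper relies on can be sketched for the pure-translation case. The synthetic Gaussian blob, grid size, and sub-pixel shift below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def lk_translation(I, J):
    """One Lucas-Kanade step: estimate the translation between images I and J
    from the image gradients and the temporal difference."""
    Iy, Ix = np.gradient(I)              # np.gradient returns d/drow, d/dcol
    It = J - I
    # Structure tensor (summed over the whole window).
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, b)         # (dx, dy): shift in x (cols) and y (rows)

# Synthetic test pattern: a Gaussian blob shifted by a known sub-pixel amount.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)
I = blob(15.0, 15.0)
J = blob(15.4, 15.2)                     # moved +0.4 in x, +0.2 in y
dx, dy = lk_translation(I, J)
```

In practice this step runs per feature point inside a small window and is iterated (and pyramided) for larger motions, which is what the KLT tracker does.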

Facial Image Synthesis by Controlling Skin Microelements (피부 미세요소 조절을 통한 얼굴 영상 합성)

  • Kim, Yujin;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.369-377
    • /
    • 2022
  • Recent deep learning-based face synthesis research can generate realistic faces with overall styles or elements such as hair, glasses, and makeup. However, previous methods cannot create a face at a very detailed level, such as the microstructure of the skin. In this paper, to overcome this limitation, we propose a technique for synthesizing a more realistic facial image from a single face label image by controlling the types and intensity of skin microelements. The proposed technique uses Pix2PixHD, an image-to-image translation method, to convert a label image indicating the facial region and skin elements such as wrinkles, pores, and redness into a facial image with the added microelements. Experimental results show that the method can create a variety of realistic face images reflecting fine skin elements by generating label images with adjusted skin-element regions.
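
Label-conditioned generators such as Pix2PixHD consume semantic label maps as one-hot channel volumes. A minimal sketch of that input encoding follows; the four class IDs (skin, wrinkle, pore, redness) are assumptions for illustration, not the paper's label set:

```python
import numpy as np

def one_hot_label_map(labels, num_classes):
    """Convert an (H, W) integer label map into a (num_classes, H, W) one-hot
    volume, the input format used by label-conditioned generators."""
    onehot = np.zeros((num_classes,) + labels.shape, dtype=np.float32)
    rows, cols = np.indices(labels.shape)
    onehot[labels, rows, cols] = 1.0     # set the channel of each pixel's class
    return onehot

# Assumed class IDs for illustration: 0=skin, 1=wrinkle, 2=pore, 3=redness.
labels = np.array([[0, 1],
                   [2, 3]])
vol = one_hot_label_map(labels, 4)
```

Editing the label map (e.g. enlarging the "wrinkle" region) before encoding is what lets the generator's output be controlled per skin element.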

Registration Error Compensation for Face Recognition Using Eigenface (Eigenface를 이용한 얼굴인식에서의 영상등록 오차 보정)

  • Moon Ji-Hye;Lee Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.5C
    • /
    • pp.364-370
    • /
    • 2005
  • The first step of face recognition is to align an input face image with the database images. We propose a new algorithm for removing registration error in eigenspace. Our algorithm can correct for translation, rotation, and scale changes. Linear matrix modeling of the registration error enables us to compensate for subpixel errors in eigenspace. After calculating the derivative of a weighting vector in eigenspace, we can obtain the amount of translation or rotation without a time-consuming search. We verify that the correction enhances the recognition rate dramatically.
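
The linear modeling of registration error in eigenspace reduces to a small least-squares problem: the observed weight vector is modeled as the reference weights plus a Jacobian times the unknown shift. The numbers below are a toy sketch (a hypothetical 3-coefficient eigenspace and an assumed Jacobian J of weights with respect to (dx, dy)):

```python
import numpy as np

def estimate_shift(w_obs, w_ref, J):
    """Linear registration-error model in eigenspace:
    w_obs ~ w_ref + J @ t, so the shift t is the least-squares solution."""
    t, *_ = np.linalg.lstsq(J, w_obs - w_ref, rcond=None)
    return t

# Toy numbers (assumed): 3 eigenface coefficients, 2 motion parameters.
J = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.5, 0.5]])            # d(weights)/d(dx, dy), e.g. from finite differences
w_ref = np.array([0.2, -0.1, 0.3])
true_t = np.array([0.4, -0.2])
w_obs = w_ref + J @ true_t            # simulate a misregistered observation
t_hat = estimate_shift(w_obs, w_ref, J)
```

Because the estimate is a closed-form solve rather than a search over candidate alignments, subpixel correction comes essentially for free.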

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.2C
    • /
    • pp.16-24
    • /
    • 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometric transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue through different viewpoints and view angles. The algorithm initially detects the viewer's dominant face from the camera using statistical characteristics of face colors and deformable templates, and then tracks it. As a result, we can provide a motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
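
The geometric transformation used for viewpoint synthesis is a rigid rotation plus translation of the 3D points recovered from the depth map. A minimal sketch follows; the y-axis rotation and the toy point are illustrative choices, not values from the paper:

```python
import numpy as np

def transform_points(points, angle_y, t):
    """Rigid transform for viewpoint synthesis: rotate (N, 3) points about the
    vertical (y) axis by angle_y radians, then translate by t."""
    c, s = np.cos(angle_y), np.sin(angle_y)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points @ R.T + t

p = np.array([[0.0, 0.0, 1.0]])       # a point one unit in front of the camera
q = transform_points(p, np.pi / 2, np.array([0.0, 0.0, 0.0]))
```

In the full pipeline the transformed points are reprojected through the camera model to render the view matching the tracked face position.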

Face Recognition Robust to Brightness, Contrast, Scale, Rotation and Translation (밝기, 명암도, 크기, 회전, 위치 변화에 강인한 얼굴 인식)

  • 이형지;정재호
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.149-156
    • /
    • 2003
  • This paper proposes a face recognition method based on modified Otsu binarization, Hu moments, and linear discriminant analysis (LDA). The proposed method is robust to brightness, contrast, scale, rotation, and translation changes. Modified Otsu binarization produces binary images that are invariant to brightness and contrast changes. From the edge and multi-level binary images obtained by the threshold method, we compute the 17-dimensional Hu moments and then extract a feature vector using the LDA algorithm. In particular, our face recognition system is robust to scale, rotation, and translation changes because it uses Hu moments. Experimental results on the Olivetti Research Laboratory (ORL) and AR databases showed that our method outperformed the conventional, well-known principal component analysis (PCA) and the combined PCA-LDA method under brightness, contrast, scale, rotation, and translation changes.
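
Hu moments owe their translation invariance to central moments and their scale invariance to normalization. A minimal sketch of the first Hu moment follows, with a toy binary rectangle standing in for a face silhouette (the paper uses the full 17-dimensional moment set):

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment (eta20 + eta02) of a binary image; invariant to
    translation and scale by construction."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                              # zeroth moment = area
    xbar, ybar = xs.mean(), ys.mean()          # centroid
    mu20 = ((xs - xbar) ** 2).sum()            # central moments
    mu02 = ((ys - ybar) ** 2).sum()
    # Normalized central moments: eta_pq = mu_pq / m00 ** ((p+q)/2 + 1)
    eta20 = mu20 / m00 ** 2
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02

a = np.zeros((20, 20), dtype=np.uint8)
a[4:9, 6:14] = 1                               # a small rectangle
b = np.roll(np.roll(a, 5, axis=0), 3, axis=1)  # the same shape, translated
```

Because the centroid is subtracted before the moments are taken, translating the shape leaves the value unchanged, which is exactly the property the recognition system exploits.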

A Study on Face Recognition Based on Modified Otsu's Binarization and Hu Moment (변형 Otsu 이진화와 Hu 모멘트에 기반한 얼굴 인식에 관한 연구)

  • 이형지;정재호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.11C
    • /
    • pp.1140-1151
    • /
    • 2003
  • This paper proposes a face recognition method based on modified Otsu binarization and Hu moments. The proposed method is robust to brightness, contrast, scale, rotation, and translation changes. The modified Otsu binarization computes additional thresholds from the conventional Otsu binarization, producing two binary images from which a higher-dimensional feature vector can be extracted. This feature vector is robust to brightness and contrast changes because it is based on Otsu binarization, and the face recognition system is robust to scale, rotation, and translation changes because it uses Hu moments. Under brightness, contrast, scale, rotation, and translation changes, experiments with the Olivetti Research Laboratory (ORL) and AR databases showed average recognition rates of 93.2% and 81.4%, respectively, for the conventional, well-known principal component analysis (PCA), while the proposed method achieved superior average recognition rates on the same databases.
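
The conventional Otsu threshold that the modified method builds on maximizes between-class variance over the grayscale histogram. A plain-numpy sketch of that base step follows; the abstract does not specify how the additional thresholds are derived, so only the conventional computation is shown:

```python
import numpy as np

def otsu_threshold(img):
    """Conventional Otsu threshold: pick the gray level that maximizes the
    between-class variance of the 256-bin histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]                  # background weight
        sum0 += t * hist[t]            # background intensity sum
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                 # background mean
        m1 = (sum_all - sum0) / (total - w0)   # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: a dark block of 40s and a bright block of 200s.
img = np.full((10, 10), 40, dtype=np.uint8)
img[:, 5:] = 200
t = otsu_threshold(img)
```

Because the threshold is derived from the histogram's own class statistics rather than a fixed value, the resulting binary image changes little under brightness and contrast shifts, which is the invariance the feature vector inherits.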

Design of Metaverse for Two-Way Video Conferencing Platform Based on Virtual Reality

  • Yoon, Dongeon;Oh, Amsuk
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.189-194
    • /
    • 2022
  • As non-face-to-face activities have become commonplace, online video conferencing platforms have become popular collaboration tools. However, existing video conferencing platforms have a structure in which one side unilaterally delivers information, which can increase the fatigue of meeting participants. In this study, we designed a video conferencing platform utilizing virtual reality (VR), a metaverse technology, to enable various interactions. A virtual conferencing space and a support system for authoring realistic VR video conferencing content were designed using Meta's Oculus Quest 2 hardware, the Unity engine, and 3D Max software. With the Photon software development kit, voice recognition was designed to perform automatic text translation through the Watson application programming interface, allowing online video conferencing participants to communicate smoothly even when using different languages. The proposed video conferencing platform is expected to enable conference participants to interact and improve their work efficiency.

Illumination Robust Face Recognition using Ridge Regressive Bilinear Models (Ridge Regressive Bilinear Model을 이용한 조명 변화에 강인한 얼굴 인식)

  • Shin, Dong-Su;Kim, Dai-Jin;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.70-78
    • /
    • 2007
  • The performance of face recognition is greatly affected by illumination because intra-person variation under different lighting conditions can be much larger than inter-person variation. In this paper, we propose an illumination-robust face recognition method that separates the identity factor and the illumination factor using symmetric bilinear models. The translation procedure in the bilinear model requires repeated matrix inversions to reach the identity and illumination factors, and this computation may fail to converge when the observation contains noisy information. To alleviate this, we suggest a ridge regressive bilinear model that incorporates ridge regression into the bilinear model. This combination provides two advantages: it makes the bilinear model more stable by appropriately shrinking the range of the identity and illumination factors, and it improves recognition performance by effectively suppressing insignificant factors. Experimental results show that the ridge regressive bilinear model significantly outperforms existing methods such as the eigenface, the quotient image, and the plain bilinear model in recognition rate under a variety of illuminations.
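
The ridge-regularized solve at the heart of the proposed model replaces an ill-conditioned matrix inversion with a shrunken, well-conditioned one. A minimal numpy sketch follows; the matrices here are toy numbers, not the paper's bilinear identity/illumination factors:

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Ridge-regularized least squares: minimize ||A x - b||^2 + lam ||x||^2.
    The closed form (A^T A + lam I)^{-1} A^T b stays well-conditioned even
    when A^T A is nearly singular, the failure mode that makes the plain
    bilinear model's repeated inversions diverge on noisy observations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Nearly collinear columns make plain least squares unstable; ridge shrinks
# the solution toward a stable, small-norm answer.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
x = ridge_solve(A, b, 0.1)
```

In the bilinear setting the same regularized solve is applied alternately to the identity and illumination factors, which is what keeps the iteration from diverging.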