• Title/Summary/Keyword: 3D facial expression

Search results: 88

Emotion-based Real-time Facial Expression Matching Dialogue System for Virtual Human (감정에 기반한 가상인간의 대화 및 표정 실시간 생성 시스템 구현)

  • Kim, Kirak;Yeon, Heeyeon;Eun, Taeyoung;Jung, Moonryul
    • Journal of the Korea Computer Graphics Society, v.28 no.3, pp.23-29, 2022
  • Virtual humans are implemented with dedicated modeling tools such as the Unity 3D engine in virtual spaces (virtual reality, mixed reality, the metaverse, etc.). Various human modeling tools have been introduced to give virtual humans an appearance, voice, expressions, and behavior similar to real people, and virtual humans implemented with these tools can communicate with users to some extent. However, most virtual humans so far have remained unimodal, using only text or speech. As AI technologies advance, the outdated machine-centered dialogue system is now changing into a human-centered, natural multi-modal one. Using several pre-trained networks, we implemented an emotion-based multi-modal dialogue system that generates human-like utterances and displays appropriate facial expressions in real time.

The Facial Expression Controller for 3D Avatar Animation working on a Smartphone (스마트폰기반 3D 아바타 애니메이션을 위한 다양한 얼굴표정 제어기 응용)

  • Choi, In-Ho;Lee, Sang-Hoon;Park, Sang-Il;Kim, Yong-Guk
    • Proceedings of the Korean Information Science Society Conference, 2012.06c, pp.323-325, 2012
  • We propose a method and application for synthesizing, controlling, and animating arbitrary facial expressions using a smartphone-based 3D avatar. A data set of expressions for the target avatar is processed with PCA, and controller axes are generated from the six basic human expressions. With the resulting controller, the system lets the user generate and animate arbitrary sequences of expressions at user-specified times. The controller, whose main advantage is fast computation, was ported to the smartphone environment, and we implemented a system that uses it to apply various facial expressions to a model-walking motion.
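The PCA-plus-six-axis controller described in this abstract can be sketched in a few lines. This is a minimal illustration in NumPy; the expression data, the vertex count, and the blending weights below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Hypothetical data set: 50 captured expressions, each a flattened
# vector of 3D vertex positions (100 vertices x 3 coordinates).
rng = np.random.default_rng(0)
expressions = rng.normal(size=(50, 300))

# PCA via SVD on the mean-centered expression data set.
mean_face = expressions.mean(axis=0)
centered = expressions - mean_face
_, _, components = np.linalg.svd(centered, full_matrices=False)

# Keep six principal axes, one controller axis per basic expression
# (anger, disgust, fear, happiness, sadness, surprise).
axes = components[:6]                      # shape (6, 300)

def synthesize(weights):
    """Blend the six controller axes into a new expression vector."""
    return mean_face + np.asarray(weights) @ axes

# Slider-like control: 80% toward axis 0, 30% toward axis 3.
face = synthesize([0.8, 0.0, 0.0, 0.3, 0.0, 0.0])
print(face.shape)  # (300,)
```

Because synthesis is a single small matrix product per frame, this kind of controller is cheap enough for the smartphone setting the paper targets.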

A Study on 3D Character Animation Production Based on Human Body Anatomy (인체 해부학을 바탕으로 한 3D 캐릭터 애니메이션 제작방법에 관한 연구)

  • 백승만
    • Archives of design research, v.17 no.2, pp.87-94, 2004
  • 3D character animation is used in various entertainment fields such as film, advertising, games, and cyber idols, and occupies an important position in the video industry. Although character animation enables diverse productions and realistic expression, it is difficult to model a character like the human body without an anatomical understanding of it. Human anatomy is the basic knowledge for analyzing physical structure, and it is of great help in character modeling and in rendering body movement and facial expressions delicately when character animation is produced. This study therefore examines the structure and proportions of the human body and focuses on character modeling and animation production based on an anatomical understanding of the human body.


Emotional Interface Technologies for Service Robot (서비스 로봇을 위한 감성인터페이스 기술)

  • Yang, Hyun-Seung;Seo, Yong-Ho;Jeong, Il-Woong;Han, Tae-Woo;Rho, Dong-Hyun
    • The Journal of Korea Robotics Society, v.1 no.1, pp.58-65, 2006
  • An emotional interface is essential for a robot to provide proper service to its user. In this research, we developed emotional components for a service robot: a neural-network-based facial expression recognizer; emotion expression technologies based on 3D graphical facial expressions and joint movements that take the user's reaction into account; and behavior selection technology for emotion expression. We used our humanoid robots AMI and AMIET as test-beds for the emotional interface, and studied emotional interaction between a service robot and a user by integrating the developed technologies. Emotional interface technology enhances the friendliness of interaction with a service robot, increases the diversity of its services and its added value for humans, and thereby promotes market growth and contributes to the popularization of robots.


Emotion fusion video communication services for real-time avatar matching technology (영상통신 감성융합 서비스를 위한 실시간 아바타 정합기술)

  • Oh, Dong Sik;Kang, Jun Ku;Sin, Min Ho
    • Journal of Digital Convergence, v.10 no.10, pp.283-288, 2012
  • 3D is in the spotlight as one of the future revenue sectors of the business world. Replacing flat 2D with stereoscopic 3D shape and texture adds a dimension that lets the real world and the virtual world coexist convincingly. Public interest in 3D has spread through films based on 3D avatars, and the 3D TV market has led large companies to pioneer the 3D market and push it into a new era. At the same time, the smartphone boom, a necessity of modern life, has brought new innovation to the IT and mobile phone markets; the smartphone, in effect a small computer, has spread with a speed and impact comparable to the innovations of the telephone and the Internet. A smartphone is a mobile phone that combines several functions; currently the iPhone, Android phones, and a large number of Windows Phone smartphones are on the market. Against this background, and with an eye to future business service models, we develop an application for emotion-fused real-time video communication: the smartphone camera recognizes the user's facial expression, synthesizes it onto a virtual 3D avatar character in real time, and matches and transmits the avatar to other mobile phone users so that they can communicate emotionally in real time.

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.4903-4929, 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For facial recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which hierarchically combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner. The performance figures of the proposed methodology are corroborated by extensive experiments on four benchmark datasets: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. Results show a marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis of the proposed algorithm reveals its superiority in terms of computational efficiency as well.
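The first-pass idea, aligning a face scan to a canonical coordinate system with a single 3D rotation, can be sketched as follows. The point cloud and the choice of target axis here are hypothetical; the paper's actual ICS construction and MNSD refinement are more involved:

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix that rotates unit vector a onto unit vector b
    (Rodrigues' formula; assumes a and b are not anti-parallel)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)                     # rotation axis (unnormalized)
    c = np.dot(a, b)                       # cosine of the rotation angle
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

rng = np.random.default_rng(1)
scan = rng.normal(size=(500, 3))           # hypothetical 3D face scan
nose_dir = np.array([0.3, 0.1, 0.95])      # estimated nose-tip direction

# Single rotation: bring the nose direction onto the +z axis.
R = rotation_aligning(nose_dir, np.array([0.0, 0.0, 1.0]))
aligned = scan @ R.T
```

A rotation preserves distances, so the alignment changes only the pose of the scan, never its shape, which is what makes the subsequent fine pass and matching meaningful.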

Realistic 3D Facial Expression Animation Based on Muscle Model (근육 모델 기반의 자연스러운 3차원 얼굴 표정 애니메이션)

  • Lee, Hye-Jin;Chung, Hyun-Sook;Lee, Yill-Byung
    • Proceedings of the Korea Information Processing Society Conference, 2002.04a, pp.265-268, 2002
  • The face has diverse features depending on gender, age, and race, making individuals easy to distinguish, and it is regarded as an important medium through which internal states can be readily observed. This paper introduces muscle-based modeling, grounded in the anatomical structure of the real face including skin tissue and facial muscles, as an effective method for facial expression animation. The proposed system consists of three stages: constructing the facial wireframe and subdividing the polygon mesh, applying the necessary muscles to the face, and generating facial expressions according to muscle movements. In the wireframe construction and mesh subdivision stage, the face model is based on the face proposed by Waters [1], and each polygon mesh is subdivided into four parts to produce a smooth 3D face model. In the next stage, the 30 muscles required for expression generation are created and applied to the regions most used in actual expressions. In the expression generation stage, Action Units proposed in FACS are combined and the intensity of the required muscles is adjusted according to the facial expression, yielding more natural and realistic facial expression animation.
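The last stage, combining FACS Action Units with per-muscle intensities, can be sketched like this. The mesh, the muscle displacement fields, and the AU-to-muscle table below are hypothetical placeholders, not the paper's 30-muscle model:

```python
import numpy as np

rng = np.random.default_rng(2)
neutral = rng.normal(size=(200, 3))        # hypothetical neutral face mesh

# Per-muscle vertex displacement fields at full contraction.
muscles = {
    "zygomatic_major": rng.normal(scale=0.05, size=(200, 3)),
    "frontalis":       rng.normal(scale=0.05, size=(200, 3)),
    "corrugator":      rng.normal(scale=0.05, size=(200, 3)),
}

# Hypothetical AU-to-muscle mapping (in FACS, AU12 is the lip corner
# puller, AU1 the inner brow raiser, AU4 the brow lowerer).
action_units = {
    "AU12": {"zygomatic_major": 1.0},
    "AU1":  {"frontalis": 0.7},
    "AU4":  {"corrugator": 0.9},
}

def apply_expression(au_intensities):
    """Deform the neutral mesh by blending AU-weighted muscle pulls."""
    out = neutral.copy()
    for au, strength in au_intensities.items():
        for muscle, weight in action_units[au].items():
            out += strength * weight * muscles[muscle]
    return out

# A smile-like expression: full AU12 plus a little AU1.
smile = apply_expression({"AU12": 1.0, "AU1": 0.3})
```

Adjusting the per-AU strengths, as the abstract describes, is what moves the result between subtle and exaggerated versions of the same expression.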


Using a Multi-Faced Technique SPFACS Video Object Design Analysis of The AAM Algorithm Applies Smile Detection (다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management, v.11 no.3, pp.99-112, 2015
  • Digital imaging technology has advanced beyond the limits of the multimedia industry into IT convergence and complex industries; in the field of object recognition in particular, various smartphone face-recognition application technologies are being actively researched. Recently, face recognition has been evolving into intelligent object recognition through image recognition and detection technologies, and face recognition based on 3D image object recognition applied to IP cameras has been actively studied. In this paper, we first examine the essential human factors, technical factors, and trends in human object recognition, and then study smile detection based on SPFACS (Smile Progress Facial Action Coding System) as a multi-faceted object recognition measure. Study method: 1) we design a 3D object imaging system that analyzes the necessary human cognitive skills; 2) we propose parameter identification and an optimal measurement method for 3D object recognition and face detection using the AAM algorithm; and 3) we apply face recognition technology to detect the tooth region of a person's face and demonstrate expression recognition by extracting feature points.

Evaluation of Histograms Local Features and Dimensionality Reduction for 3D Face Verification

  • Ammar, Chouchane;Mebarka, Belahcene;Abdelmalik, Ouamane;Salah, Bourennane
    • Journal of Information Processing Systems, v.12 no.3, pp.468-488, 2016
  • The paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features in the presence of illumination and expression variations. Histograms of efficient local descriptors are used to distinctively represent the facial images. For this purpose, different local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Furthermore, experiments on combinations of the local descriptors at the feature level, using simple histogram concatenation, are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and the combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as a classifier to carry out verification between imposters and customers. The proposed method has been tested on the CASIA-3D face database, and the experimental results show that it achieves high verification performance.
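The basic building block evaluated above, a histogram of 8-neighbour local binary patterns, can be computed in a few lines of NumPy. This is only the plain LBP feature on a hypothetical image patch; the TPLBP/FPLBP/BSIF/LPQ variants, the PCA/OLPP reduction, and the SVM are omitted here:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Pattern codes and their normalized
    256-bin histogram for a 2D intensity image."""
    c = img[1:-1, 1:-1]                    # centre pixels
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set bit if the neighbour is at least as bright as the centre.
        codes |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()               # normalized histogram feature

rng = np.random.default_rng(3)
face = rng.integers(0, 256, size=(64, 64))   # hypothetical face patch
feature = lbp_histogram(face)
```

Concatenating such histograms from several descriptors, as the paper does at the feature level, simply stacks these fixed-length vectors before dimensionality reduction.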

Deep Neural Network Architecture for Video-based Facial Expression Recognition (동영상 기반 감정인식을 위한 DNN 구조)

  • Lee, Min Kyu;Choi, Jun Ho;Song, Byung Cheol
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2019.06a, pp.35-37, 2019
  • With the recent rapid development of deep learning, facial expression recognition technology has advanced considerably. However, because existing facial expression recognition methods have mostly been developed on artificial videos acquired in constrained environments, they may not operate robustly on videos acquired in real "wild" environments. To address this problem, we propose a deep neural network architecture consisting of a novel combination of a 3D CNN, a 2D CNN, and an RNN. From a given video, the proposed network extracts, through the two different CNNs, feature vectors that capture not only spatial but also temporal information. An RNN then performs temporal-domain learning and fuses the feature vectors extracted by these networks. Experiments on AFEW, a representative public "wild" dataset, show that the proposed network, in which these components work together organically, achieves an accuracy of 49.6%, improving on conventional methods.
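The fusion idea, combining a per-frame spatial feature with a whole-clip temporal feature before classification, can be sketched as follows. The "feature extractors" here are stand-in fixed random projections, not trained 2D/3D CNNs or an RNN, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
clip = rng.normal(size=(8, 32, 32))        # hypothetical 8-frame gray clip

# Stand-ins for the two CNN branches: fixed random projections.
W_spatial = rng.normal(size=(32 * 32, 128))
W_temporal = rng.normal(size=(8 * 32 * 32, 128))

spatial = clip.reshape(8, -1) @ W_spatial      # per-frame spatial features
spatial = spatial.mean(axis=0)                 # pool over time -> (128,)
temporal = clip.reshape(-1) @ W_temporal       # whole-clip feature -> (128,)

# Late fusion: concatenate the two feature vectors, then classify
# into the 7 emotion categories with a linear layer plus softmax.
fused = np.concatenate([spatial, temporal])    # shape (256,)
W_cls = rng.normal(size=(256, 7))
logits = fused @ W_cls
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In the actual architecture the RNN, rather than a fixed linear layer, performs this fusion and the temporal-domain learning; the sketch only shows why the two branches yield complementary fixed-length vectors that can be combined.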
