• Title/Summary/Keyword: 3D Face Model

Search Result: 275

A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of information and communication convergence engineering / v.21 no.2 / pp.152-158 / 2023
  • As computer-generated imagery spreads to more industries, realistic facial animation has become an important research topic. The conventional solution is to create realistically rendered 3D characters, but characters built with traditional methods always differ from the real person and demand high costs in staff and time. Deepfake technology can achieve realistic faces and replicate facial animation: once the AI model is trained, facial details and animation are produced automatically by the computer, and the model can be reused, reducing the human and time costs of realistic face animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates that the new scheme yields higher-quality images and face-swap results, evaluated with no-reference image quality assessment.
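The no-reference quality evaluation mentioned in this abstract can be illustrated with a minimal sketch. This is not the paper's metric; it uses the variance of a 4-neighbour Laplacian response, a common no-reference sharpness proxy, implemented with NumPy only:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """No-reference sharpness proxy: variance of a 4-neighbour
    Laplacian response (higher = sharper)."""
    f = img.astype(np.float64)
    lap = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
           - 4.0 * f[1:-1, 1:-1])
    return float(lap.var())

def box_blur(img: np.ndarray, passes: int = 2) -> np.ndarray:
    """Crude 5-point mean filter, repeated `passes` times."""
    out = img.astype(np.float64)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        out = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
               + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return out

# A sharp synthetic checkerboard should score higher than its blurred copy.
sharp = (np.indices((64, 64)).sum(axis=0) % 2 * 255).astype(np.uint8)
assert laplacian_variance(sharp) > laplacian_variance(box_blur(sharp))
```

Real no-reference assessors (e.g. BRISQUE-style models) are learned from natural-scene statistics; this single-feature proxy only captures the sharpness component of perceived quality.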

Computer Vision Platform Design with MEAN Stack Basis (MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계)

  • Hong, Seonhack;Cho, Kyungsoon;Yun, Jinseob
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.1-9 / 2015
  • In this paper, we implemented a computer vision platform based on the MEAN stack on a Raspberry Pi 2, an open-source platform. We experimented with face recognition and with temperature and humidity sensor data logging over WiFi on the Raspberry Pi 2, and fabricated the platform's enclosure with 3D printing. For face recognition we used an OpenCV pipeline based on the Haar-cascade feature extraction machine learning algorithm, and we extended the wireless capability with Bluetooth to interface with Android mobile devices. The platform therefore scans and identifies faces with the Pi camera while gathering temperature and humidity sensor data in an IoT environment. We chose MongoDB to improve the platform's performance because working with MongoDB is more akin to working with objects in a programming language than with a conventional database. In future work, we would enhance the platform with cloud functionality.

3D Face Creation Model Design Using 2D Image (2D 이미지를 이용한 3D 얼굴생성 모델 설계)

  • Youn, Jae-Hong;Lee, Hyun-Chul;Hur, Gi-Tak
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.171-174 / 2002
  • Worldwide, 3D characters are emerging as a specialized new area within animation. The face model, a mathematical abstraction that represents facial shape and function with the accuracy required by a particular application, is far more difficult and complex than other models, yet by its nature attracts ever-growing user interest. In this paper, we design a model that generates a lifelike 3D face from a single 2D image, and design an animation method that uses it to produce natural facial expressions.


Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.212-214 / 2021
  • In a society with Covid-19 as part of daily life, we have had to adapt to a new reality to keep our lifestyles as normal as possible; teleworking and online classes are examples. However, several issues appeared as we settled into this new way of living. One of them is doubt about whether real people are in front of the camera, or whether someone is paying attention during a lecture. We therefore address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, our faces can be represented by an avatar whose bones are fully controlled by rotation and translation parameters. We propose this method as a solution for reconstructing facial expressions during online meetings.

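The landmark-to-expression fitting described in this abstract can be sketched as a linear least-squares problem. The neutral shape and blendshape basis below are random stand-ins, not the authors' model: observed 2D landmarks are modeled as the neutral shape plus a weighted sum of expression offsets, and the coefficients are recovered with `numpy.linalg.lstsq`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_landmarks, n_expr = 68, 10

neutral = rng.normal(size=(n_landmarks * 2,))       # flattened 2D landmarks
basis = rng.normal(size=(n_landmarks * 2, n_expr))  # expression offset basis

def fit_expression(observed: np.ndarray) -> np.ndarray:
    """Solve min_w || neutral + basis @ w - observed ||^2."""
    w, *_ = np.linalg.lstsq(basis, observed - neutral, rcond=None)
    return w

# Synthesize landmarks from known coefficients and recover them.
w_true = rng.uniform(0.0, 1.0, size=n_expr)
observed = neutral + basis @ w_true
assert np.allclose(fit_expression(observed), w_true, atol=1e-8)
```

In a real system the recovered coefficients would be clamped to a valid range and smoothed over time before driving the avatar's bones.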

Development of a 2D isoparametric finite element model based on the layerwise approach for the bending analysis of sandwich plates

  • Belarbi, Mohamed-Ouejdi;Tati, Abdelouahab;Ounis, Houdayfa;Benchabane, Adel
    • Structural Engineering and Mechanics / v.57 no.3 / pp.473-506 / 2016
  • The aim of this work is the development of a 2D quadrilateral isoparametric finite element model, based on a layerwise approach, for the bending analysis of sandwich plates. The face sheets and the core are modeled individually using, respectively, the first-order shear deformation theory and the third-order plate theory. The displacement continuity condition at the face sheet-core interfaces is satisfied. The assumed natural strains method is introduced to avoid a possible shear-locking phenomenon. The developed element is a four-noded isoparametric element with fifty-two degrees of freedom (52 DOF). Each face sheet has only two rotational DOF per node, and the core has nine DOF per node: six rotational degrees and three translation components, which are common to all sandwich layers. The performance of the proposed element model is assessed by six examples, considering symmetric and unsymmetric composite sandwich plates with different aspect ratios, loadings, and boundary conditions. The numerical results obtained are compared with analytical solutions and with numerical results obtained by other authors. The results indicate that the proposed element model is promising in terms of accuracy and convergence speed for both thin and thick plates.
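The element's 52-DOF count follows directly from the layerwise bookkeeping in the abstract; a few lines of Python verify the arithmetic:

```python
# Per-node degrees of freedom, as described in the abstract:
rot_per_face_sheet = 2    # each of the two face sheets: 2 rotational DOF
core_rotations = 6        # core: six rotational DOF
shared_translations = 3   # three translations common to all layers

dof_per_node = 2 * rot_per_face_sheet + core_rotations + shared_translations
nodes = 4                 # four-noded quadrilateral element

assert dof_per_node == 13
assert nodes * dof_per_node == 52  # matches the stated 52 DOF
```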

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions and thus require additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces errors in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region together with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

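The linear skin-chrominance rule described in this abstract can be sketched as follows. The slope, intercept, and tolerance below are illustrative placeholders, not the paper's fitted values: a pixel is labeled skin when its Tint value lies within a band around a linear function of its Hue.

```python
import numpy as np

# Hypothetical linear skin model: tint ~ A * hue + B, within a tolerance band.
A, B, TOL = 0.5, 40.0, 10.0  # placeholder coefficients, not from the paper

def skin_mask(hue: np.ndarray, tint: np.ndarray) -> np.ndarray:
    """Boolean mask: True where (hue, tint) lies near the skin line."""
    predicted = A * hue + B
    return np.abs(tint - predicted) <= TOL

hue = np.array([20.0, 20.0, 100.0])
tint = np.array([50.0, 90.0, 50.0])
# Only the first pixel sits within the band around tint = 0.5*hue + 40.
assert skin_mask(hue, tint).tolist() == [True, False, False]
```

Compared with a Gaussian (parametric) skin model, the band test has no lighting-dependent covariance to re-estimate, which is the robustness argument the abstract makes.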

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the nonparametric HT skin color model and template matching detect the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model projected onto the initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by two fitting processes: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
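The RBF step in this abstract, which moves non-feature vertices along with nearby control points, can be sketched with a Gaussian kernel. The control points, displacements, and kernel width below are synthetic placeholders, not the paper's data:

```python
import numpy as np

def rbf_deform(ctrl, disp, query, eps=1.0):
    """Interpolate control-point displacements at query points
    using Gaussian radial basis functions."""
    d2 = ((ctrl[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / eps**2)       # kernel matrix between control points
    w = np.linalg.solve(phi, disp)   # per-control-point weights
    q2 = ((query[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    return np.exp(-q2 / eps**2) @ w  # interpolated displacements

ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.0, 0.2], [0.0, 0.0]])

# The interpolant reproduces the displacements exactly at the control points.
assert np.allclose(rbf_deform(ctrl, disp, ctrl), disp, atol=1e-8)
```

Vertices far from every control point receive near-zero displacement with this kernel, which is the desired falloff for deforming a mesh around tracked features.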

A Study on Developing a High-Resolution Digital Elevation Model (DEM) of a Tunnel Face (터널 막장면 고해상도 DEM(Digital Elevation Model) 생성에 관한 연구)

  • Kim, Kwang-Yeom;Kim, Chang-Yong;Baek, Seung-Han;Hong, Sung-Wan;Lee, Seung-Do
    • Proceedings of the Korean Geotechnical Society Conference / 2006.03a / pp.931-938 / 2006
  • Using a high-resolution stereoscopic imaging system, a three-dimensional digital elevation model of the tunnel face is acquired. The images, oriented within a given tunnel coordinate system, are brought into a stereoscopic vision system that enables three-dimensional inspection and evaluation. The digital vision system with the 3D model improves the possibilities for prediction ahead of and outside the tunnel face. Interpolating the rock-mass structure between subsequent stereo images makes it possible to model the rock mass surrounding the opening within a short time on site. The models shall be used as input to on-site numerical simulations, for comparison of expected and encountered geological conditions, and for the interpretation of geotechnical monitoring results.

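The stereo reconstruction underlying such a DEM rests on the standard rectified-stereo relation Z = f·B/d. A minimal sketch with placeholder camera parameters (not the rig described in the paper):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation: Z = f * B / d.
    Zero or negative disparities are masked out as invalid."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(d, np.nan)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Placeholder rig: 1000 px focal length, 0.5 m baseline.
disp = np.array([[10.0, 20.0], [0.0, 50.0]])
depth = disparity_to_depth(disp, focal_px=1000.0, baseline_m=0.5)
assert np.isclose(depth[0, 0], 50.0)  # 1000 * 0.5 / 10
assert np.isnan(depth[1, 0])          # zero disparity -> no depth
```

Projecting the per-pixel depths into the tunnel coordinate system and gridding them would then yield the face DEM.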

Web-based 3D Face Modeling System (웹기반 3차원 얼굴 모델링 시스템)

  • 김응곤;송승헌
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.3 / pp.427-433 / 2001
  • This paper proposes a web-based 3-dimensional face modeling system that builds a realistic facial model efficiently without the 3D scanner or camera used in traditional methods. Without expensive image-input equipment, 3D models can be created easily from only front and side images. The system makes 3D facial models available by connecting to a facial modeling server on the WWW, independent of specific platforms and software. It is implemented with the Java 3D API, which provides the functions and conveniences of mature graphics libraries, in a client/server architecture consisting of a user connection module and a 3D facial model creation module. Clients connect to the facial modeling server, input two facial photographic images, detect the feature points, and then create a 3D facial model by modifying a generic facial model with those points, using only a web browser.


A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan;Jeong, Cheol-Hui;Cho, Sun-Young;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society / v.17 no.1 / pp.9-16 / 2011
  • In this paper, we describe the development of a system that generates a 3-dimensional human face and predicts its appearance as it ages over subsequent years, using 3D scanned facial data and photo images. It is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age from a scanned facial model and photo images. The texture mapping uses three photo images: a front and two side images of a face. The facial definition parameter input tool is a user interface required for texture mapping; it matches facial feature points between the photo images and a 3D scanned facial model in order to obtain material values in high resolution. We calculated material values for future facial models and predicted future facial models in high resolution with a statistical analysis using 100 scanned facial models.
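The statistical prediction step in this abstract can be sketched as a per-vertex linear trend fitted across a population of scans. The ages and vertex data below are synthetic stand-ins, not the paper's 100-scan dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
ages = rng.uniform(20, 70, size=100)   # one age per scanned model
n_vertices = 50

# Synthetic ground truth: each vertex coordinate drifts linearly with age.
slope = rng.normal(scale=0.01, size=n_vertices)
intercept = rng.normal(size=n_vertices)
scans = intercept[None, :] + ages[:, None] * slope[None, :]  # (100, 50)

# Fit a degree-1 polynomial per vertex coordinate across the population.
coeffs = np.polyfit(ages, scans, deg=1)  # shape (2, n_vertices)

def predict_face(age: float) -> np.ndarray:
    """Predict vertex coordinates at a target age from the fitted trend."""
    return np.polyval(coeffs, age)

# Noiseless synthetic data: prediction at age 80 extrapolates the trend.
assert np.allclose(predict_face(80.0), intercept + 80.0 * slope, atol=1e-5)
```

A real aging model would fit the trend on registered meshes (and likely use more than a linear term), but the extrapolate-per-vertex structure is the same.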