• Title/Summary/Keyword: 표정 변화 (facial expression change)

Emotional Expression System Based on Dynamic Emotion Space (동적 감성 공간에 기반한 감성 표현 시스템)

  • Sim Kwee-Bo; Byun Kwang-Sub; Park Chang-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.1 / pp.18-23 / 2005
  • It is difficult to define and classify human emotion. These vague human emotions appear not as a single emotion but as a combination of various emotions, among which one salient emotion is expressed. This paper proposes an emotional expression algorithm using a dynamic emotion space, which produces facial expressions similar to vague human emotion. While existing avatars express several predefined emotions drawn from a database, our emotion expression system can produce an unlimited variety of facial expressions by expressing emotion based on a dynamically changing emotion space. To see whether the system practically produces complex and varied human expressions, we implemented it and performed experiments, verifying the efficacy of the emotional expression system based on the dynamic emotion space.
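The abstract gives no implementation details, but the core idea, a current emotional state expressed as a blend of several basis emotions whose positions in the space can themselves shift over time, can be illustrated with a small sketch. The 2D valence/arousal layout and all names below are assumptions for illustration, not the authors' design.

```python
import numpy as np

class DynamicEmotionSpace:
    """Toy 2D emotion space (hypothetical illustration).

    Basis emotions sit at anchor points; the anchors may drift, which is
    what makes the space dynamic. The current state is a point, and
    expression blend weights fall off with distance to the anchors.
    """
    def __init__(self, anchors):
        self.anchors = dict(anchors)  # name -> np.array([valence, arousal])

    def drift(self, name, delta):
        # Move a basis emotion's anchor, reshaping the space over time.
        self.anchors[name] = self.anchors[name] + np.asarray(delta)

    def blend_weights(self, state, temperature=1.0):
        # Softmax over negative distances: nearer anchors dominate the blend.
        names = list(self.anchors)
        d = np.array([np.linalg.norm(state - self.anchors[n]) for n in names])
        w = np.exp(-d / temperature)
        return dict(zip(names, w / w.sum()))

space = DynamicEmotionSpace({
    "joy": np.array([0.8, 0.6]),
    "sadness": np.array([-0.7, -0.4]),
    "surprise": np.array([0.1, 0.9]),
})
state = np.array([0.5, 0.5])
print(space.blend_weights(state))
space.drift("joy", [-0.1, 0.0])       # the space changes over time...
print(space.blend_weights(state))     # ...and the same state blends differently
```

Because the weights vary continuously with both the state and the anchors, the blend is not limited to a fixed set of predefined expressions, which is the contrast with database-driven avatars drawn in the abstract.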

A Comparative Study of the Flexible Moving Block System and the Fixed Block System in Urban Railway (도시철도에 있어 이동폐색방식과 고정폐색방식의 상호비교 연구)

  • Jeong, Gwangseop; Park, Jeongsoo; Won, Jaimu
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.5D / pp.723-730 / 2006
  • Recently, the flexible moving block system for train operation has been introduced to rail transportation markets worldwide. This paper compares the effects of the conventional fixed block system and the flexible moving block system on train operating time savings. Based on a literature review, a new algorithm is developed to calculate the optimum headway time between trains. The proposed algorithm overcomes problems of existing algorithms, such as limits on the available data and insensitivity to rail characteristics. The total travel time saving was analyzed by applying a skip-stop scheduling system to each block system. The results indicate that total travel time decreases by approximately 40% and schedule velocity improves by approximately 24% when the moving block system is applied. These results could be used as a theoretical basis for the selection of the rail signal system for Seoul subway line 2.
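The paper's headway algorithm itself is not reproduced in the abstract. The first-order sketch below only illustrates why a moving block shortens headway relative to a fixed block (separation by actual braking distance versus by whole signal blocks); all parameter values are assumed, not taken from the study.

```python
# Assumed inputs: cruise speed v (m/s), service braking rate b (m/s^2),
# train length L (m), and a safety margin m (m).

def headway_moving_block(v, b, L, m):
    """Moving block: the follower keeps its braking distance plus a margin
    behind the leader's tail, so separation shrinks to what physics needs."""
    braking = v ** 2 / (2 * b)        # distance to stop from speed v
    return (braking + L + m) / v      # seconds between successive trains

def headway_fixed_block(v, block_len, n_blocks, L, m):
    """Fixed block: separation is quantized to whole blocks, so the follower
    must stay n_blocks behind regardless of its actual braking need."""
    return (n_blocks * block_len + L + m) / v

v, b, L, m = 22.2, 1.0, 200.0, 50.0   # about 80 km/h; typical metro values (assumed)
print(f"moving block: {headway_moving_block(v, b, L, m):5.1f} s")
print(f"fixed block : {headway_fixed_block(v, 400.0, 2, L, m):5.1f} s")
```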

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee; Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.464-471 / 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using StyleGAN Encoder. The generation process first creates a face image with StyleGAN Encoder and then changes the expression by applying a boundary, learned with an SVM, to the latent vector. However, when the boundary for a smiling expression is learned, age distortion occurs along with the change in expression: the smile boundary produced by SVM training includes expression-induced wrinkles as learning elements, so age characteristics are judged to have been learned as well. To solve this problem, the proposed method calculates the correlation coefficient between the smile boundary and the age boundary and adjusts the smile boundary by the age boundary in proportion to that coefficient. To confirm the effectiveness of the proposed method, experiments were performed on FFHQ, a publicly available standard face dataset, and FID scores were measured, with the following results. For smile images, the FID between the ground truth and images generated by the proposed method improved by about 0.46 over the existing method, and the FID between the StyleGAN Encoder output and the smile images generated by the proposed method improved by about 1.031. For non-smile images, the FID between the ground truth and images generated by the proposed method improved by about 2.25 over the existing method, and the FID between the StyleGAN Encoder output and the non-smile images generated by the proposed method improved by about 1.908. In addition, estimating the age of each generated expression image and measuring the MSE against the age estimated from the StyleGAN Encoder output, the proposed method improved the result by about 1.5 for smile images and about 1.63 for non-smile images compared with the existing method, proving its effectiveness.
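A rough sketch of the described adjustment, removing the age-correlated component from the smile direction in proportion to the correlation coefficient before editing the latent, is given below. The boundary vectors here are random stand-ins rather than SVM-trained directions, and the edit strength alpha is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for unit-norm SVM boundaries in a 512-D StyleGAN latent space.
smile_boundary = rng.normal(size=512)
smile_boundary /= np.linalg.norm(smile_boundary)
age_boundary = rng.normal(size=512)
age_boundary /= np.linalg.norm(age_boundary)

# Correlation between the two boundaries (cosine similarity of unit vectors).
rho = float(smile_boundary @ age_boundary)

# Adjust the smile boundary by the age boundary in proportion to rho, then
# renormalize, so a smile edit moves less along the age direction.
adjusted = smile_boundary - rho * age_boundary
adjusted /= np.linalg.norm(adjusted)

latent = rng.normal(size=512)        # a latent code from StyleGAN Encoder
alpha = 3.0                          # assumed edit strength
edited = latent + alpha * adjusted   # would be fed back into the generator
print(f"rho = {rho:+.3f}, residual age component = {adjusted @ age_boundary:+.1e}")
```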

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi; Min Kyong-Pil; Chun Jun-Chul; Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness to varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region together with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the expression of the 3D face model.
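The authors' exact Hue-Tint linear model is not given in the abstract. As a generic illustration of chrominance-threshold skin detection of this kind, the sketch below uses common rule-of-thumb HSV ranges with OpenCV; the thresholds are assumptions, not the paper's learned function.

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Crude chrominance-based skin mask (illustrative thresholds only)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Keep low hues (skin tones) with moderate saturation and brightness.
    mask = (((h < 25) | (h > 165)) & (s > 40) & (v > 60)).astype(np.uint8) * 255
    # Morphological opening removes speckle before region extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

frame = cv2.imread("frame.png")      # hypothetical input frame
if frame is not None:
    cv2.imwrite("skin.png", skin_mask(frame))
```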

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from each video frame with a nonparametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked with the optical flow method. For facial expression cloning we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). Experiments show that the developed vision-based animation system creates realistic facial animation with robust head pose estimation and facial variation from the input video.
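The final RBF step can be sketched as follows: given displacements at a few control points, interpolate displacements for the surrounding non-feature vertices. The Gaussian basis and its width are assumptions, since the abstract does not specify the kernel.

```python
import numpy as np

def rbf_deform(control_pts, control_disp, vertices, sigma=0.1):
    """Propagate control-point displacements to nearby mesh vertices."""
    def phi(r):
        return np.exp(-(r / sigma) ** 2)  # Gaussian RBF (assumed kernel)
    # Solve for weights that reproduce the control displacements exactly.
    K = phi(np.linalg.norm(control_pts[:, None] - control_pts[None], axis=-1))
    w = np.linalg.solve(K + 1e-9 * np.eye(len(K)), control_disp)
    # Evaluate the interpolant at every vertex and apply the offset.
    G = phi(np.linalg.norm(vertices[:, None] - control_pts[None], axis=-1))
    return vertices + G @ w

ctrl = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]])        # e.g., mouth corners
disp = np.array([[0.0, 0.05, 0.0], [0.0, -0.02, 0.0]])     # animation offsets
mesh = np.random.default_rng(1).uniform(-0.3, 0.3, (100, 3))
print(rbf_deform(ctrl, disp, mesh).shape)                   # (100, 3)
```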

3D Avatar Messenger Using Lip Shape Change for Face model (얼굴 입모양 변화를 이용한 3D Avatar Messenger)

  • Kim, Myoung-Su; Lee, Hyun-Cheol; Kim, Eun-Serk; Hur, Gi-Taek
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.225-228 / 2005
  • Facial expressions are an important means of communication that indirectly convey one's emotions and thoughts to the other party, and can also serve as a direct means of expression. By using facial expressions in computer-mediated messenger conversation, the other party's state can be perceived not only through text but also through the inner emotions the other party currently feels. This paper designs and implements a 3D avatar messenger system that uses a face model composed of a 3D mesh, analyzes and extracts the Hangul syllables of a Korean message entered by the user, and shows the resulting changes of mouth shape using eight lip control points on the 3D face mesh.
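The syllable-analysis step can exploit the regular layout of precomposed Hangul in Unicode, as the minimal sketch below shows. The vowel-to-viseme table is hypothetical: the paper's mapping from phonemes to its eight lip control points is not given in the abstract.

```python
# Precomposed Hangul syllables occupy U+AC00..U+D7A3 in a regular
# (19 initials x 21 vowels x 28 finals) grid, so jamo fall out arithmetically.
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"

def decompose(syllable):
    """Split one precomposed Hangul syllable into (initial, vowel) jamo."""
    code = ord(syllable) - 0xAC00
    cho, rest = divmod(code, 21 * 28)
    jung, _jong = divmod(rest, 28)
    return CHO[cho], JUNG[jung]

# Hypothetical vowel-to-mouth-shape table; a real system would instead
# drive the lip control points directly.
VOWEL_VISEME = {"ㅏ": "open", "ㅗ": "round", "ㅜ": "round", "ㅣ": "spread"}

for ch in "안녕":
    cho, jung = decompose(ch)
    print(ch, cho, jung, VOWEL_VISEME.get(jung, "neutral"))
```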

Construction of the Facial 3D Textures for Generating Virtual Characters (가상 캐릭터 제작을 위한 얼굴 3D 텍스처의 구성)

  • 최창석; 박상운
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2001.06a / pp.197-200 / 2001
  • In this paper, we propose two methods of constructing a facial 3D texture capable of expression changes for virtual character production. The first fits a standard 3D face model to a facial 3D texture captured with a 3D scanner, enabling expression changes. In this case an accurate 3D shape model is obtained along with the facial 3D texture, but scanning is expensive and the equipment is difficult to move. The second method constructs the facial 3D texture by integrating four 2D images taken from the front, back, left, and right. After fitting the 3D shape model to the four 2D images, the height, width, and depth of the four models are integrated to obtain an approximate 3D shape model, and the four images are merged to obtain the individual's facial 3D texture. Because this method uses 2D face images, it is inexpensive and widely applicable.
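As a toy illustration of the second method's image-integration step, the sketch below blends four view textures around a cylinder with angular weights; a real pipeline would first warp each photo through the fitted 3D shape model, which is omitted here, and the cosine weighting is an assumption.

```python
import numpy as np

W = 360  # one texture column per degree of cylinder angle
view_angles = {"front": 0, "right": 90, "back": 180, "left": 270}
textures = {name: np.random.rand(256, W, 3) for name in view_angles}  # stand-ins

angles = np.arange(W)
merged = np.zeros((256, W, 3))
weight_sum = np.zeros(W)
for name, a0 in view_angles.items():
    # Cosine falloff: a view contributes most where the surface faces it.
    d = np.deg2rad((angles - a0 + 180) % 360 - 180)
    w = np.clip(np.cos(d), 0.0, None)
    merged += textures[name] * w[None, :, None]
    weight_sum += w
merged /= weight_sum[None, :, None]   # every column is covered by some view
print(merged.shape)                   # (256, 360, 3)
```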

Performance Evaluation of Fusion Algorithms Using PCA and LDA for Face Verification (얼굴인증을 위한 PCA와 LDA 융합 알고리즘 구현 및 성능 비교 분석)

  • 정장현; 구은경; 강행봉
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.718-720 / 2004
  • Principal Component Analysis (PCA), the most widely used technique in face verification, shows relatively high performance for feature patterns such as frontal faces. It reduces the amount of data without lowering the recognition rate, so it is useful for compactly representing classes; however, its performance cannot be guaranteed under changes in illumination or facial expression. To compensate, Linear Discriminant Analysis (LDA) is used to ease the separation of classes with different components, but LDA performs poorly when the amount of data is small. Fusing PCA and LDA can therefore yield better performance; Min, Max, Mean, Append, and Majority voting are such fusion methods. Previous studies, however, experimented only on limited databases, so their results lacked objectivity. In this paper we compare and analyze the performance of the Min, Max, and Mean fusion algorithms by experimenting on several databases in a standardized environment. The fusion algorithms do not always yield the best performance, but they guarantee a certain level of verification rate regardless of changes in illumination or expression in face images.
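Score-level fusion with the Min, Max, and Mean rules named above can be sketched as follows; the min-max score normalization and the decision threshold are assumptions, and the matcher outputs are random stand-ins.

```python
import numpy as np

def minmax_norm(s):
    """Map raw matcher scores to [0, 1] so the two matchers are comparable."""
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

rng = np.random.default_rng(0)
pca_scores = minmax_norm(rng.normal(0.6, 0.2, 1000))  # stand-in PCA similarities
lda_scores = minmax_norm(rng.normal(0.5, 0.3, 1000))  # stand-in LDA similarities

fused = {
    "min": np.minimum(pca_scores, lda_scores),   # conservative: both must agree
    "max": np.maximum(pca_scores, lda_scores),   # permissive: either suffices
    "mean": (pca_scores + lda_scores) / 2,       # balanced compromise
}
threshold = 0.5                                   # assumed operating point
for name, s in fused.items():
    print(f"{name:4s}: accept rate = {(s > threshold).mean():.3f}")
```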

Optimization of Effective Malsburg Gabor Wavelet Kernel at Mouth Region for Face Recognition (얼굴인식을 위한 입술영역에 효과적인 말스버그 가보 웨이브렛 커널의 최적화)

  • Yun, Eun-Sil; Rhee, Phill-Kyu
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.431-434 / 2007
  • Face recognition is attracting attention among biometric technologies for its non-intrusive nature, but its performance degrades under changes in illumination and expression. The region most affected by facial expression, and the noisiest, is the mouth: experiments showed that changes in lip shape introduce noise into the extracted Gabor vectors, degrading face recognition performance. This paper therefore proposes a Malsburg Gabor wavelet kernel optimized for the mouth region to reduce the noise caused by lip shape changes. By applying the Malsburg Gabor wavelet to each lip feature point and statistically analyzing the extracted Gabor vectors, the noise could be identified, and a Malsburg Gabor wavelet kernel adapted to the mouth region was proposed to minimize it. Experiments on 1196 FERET gallery images showed improved face recognition performance.
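The Malsburg-style Gabor kernel referred to here is the DC-free complex wavelet commonly used to extract Gabor jets (as in elastic bunch graph matching); a sketch follows, with generic scale and orientation parameters, since the mouth-optimized values are not given in the abstract.

```python
import numpy as np

def malsburg_gabor(size=32, nu=2, mu=3, sigma=2 * np.pi,
                   kmax=np.pi / 2, f=np.sqrt(2)):
    """One kernel of the von der Malsburg Gabor family (generic parameters)."""
    k = kmax / f ** nu                   # scale index nu sets the frequency
    phi = np.pi * mu / 8                 # orientation index mu in 0..7
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    envelope = (k ** 2 / sigma ** 2) * np.exp(
        -k ** 2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    # Complex carrier with the DC term subtracted, per the Malsburg form.
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

jet_kernel = malsburg_gabor()
print(jet_kernel.shape, np.abs(jet_kernel).max())
```

Convolving an image patch (here, around each lip feature point) with such kernels over several scales and orientations yields the Gabor vector whose statistics the paper analyzes for noise.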

Accuracy Analysis of Combined Block Adjustment with GPS/INS Observations Considering Photo Scale (사진축적을 고려한 GPS/INS 항공사진측량 블록조정의 정확도 분석)

  • Lee Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.23 no.3 / pp.323-330 / 2005
  • More than ten years after the era of GPS photogrammetry, which could provide only the three projection-center coordinates among all six exterior orientation parameters, direct georeferencing with GPS/INS is now becoming a standard method for image orientation. Its main advantage is that it skips or reduces the indirect ground control process. This paper describes experimental test results of integrated sensor orientation with a commercial GPS/IMU system to prove its performance in the determination of exterior orientation. For this purpose, two different imaging blocks were planned and the area was photographed at a large photo scale of 1:5,000 and a medium photo scale of 1:20,000. From these data sets a variety of meaningful results was acquired, namely the accuracy potential of exterior orientation from direct georeferencing and from combined block adjustment, considering the different photo scales and conditions.
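The direct-georeferencing relation underlying the study, where GPS/INS supplies all six exterior orientation parameters so an image ray can be intersected with the terrain without indirect ground control, can be sketched as follows; all numeric values are made up for illustration.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Image-to-ground rotation from the three attitude angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

X0 = np.array([200_000.0, 450_000.0, 1_530.0])  # projection center from GPS (m)
R = rotation(*np.deg2rad([0.3, -0.2, 45.0]))    # attitude from INS (made up)
c = 0.153                                        # focal length (m)
xy = np.array([0.012, -0.034])                   # measured image point (m)

ray = R @ np.array([xy[0], xy[1], -c])           # image ray in the ground frame
Z_ground = 30.0                                  # assumed flat terrain height (m)
s = (Z_ground - X0[2]) / ray[2]                  # scale to hit the terrain
print(X0 + s * ray)                              # ground point X, Y, Z
```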