• Title/Abstract/Keyword: Facial Model


Adaptive Particle Filter와 Active Appearance Model을 이용한 얼굴 특징 추적 (Facial Feature Tracking Using Adaptive Particle Filter and Active Appearance Model)

  • 조덕현;이상훈;서일홍
    • 로봇학회논문지, Vol. 8 No. 2, pp.104-115, 2013
  • For natural human-robot interaction, we need to know the location and shape of facial features in real environments. Facial features can be tracked robustly by combining a particle filter with an active appearance model, but the processing speed of this combined method is too slow. In this paper, we propose two ideas to improve its efficiency: first, adapting the number of particles to the situation, and second, switching the prediction model according to the situation. Experimental results show that the proposed method is about three times faster than the plain combination of particle filter and active appearance model, while its tracking performance is maintained.
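The abstract's first idea, adapting the particle count to the tracking situation, can be sketched on a 1-D toy tracking problem. The confidence rule, the random-walk prediction model, and all parameters below are illustrative assumptions, not those of the paper:

```python
import math
import random

random.seed(0)

def likelihood(particle, observation, sigma=1.0):
    # Gaussian observation model: how well a particle explains the measurement.
    return math.exp(-((particle - observation) ** 2) / (2 * sigma ** 2))

def resample(particles, weights, n_out):
    # Multinomial resampling down (or up) to n_out particles -- this is
    # where the particle count is adapted between frames.
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=n_out)

def adapt_count(confidence, n_min=20, n_max=200):
    # Fewer particles when tracking is confident, more when it is not
    # (the exact adaptation rule here is an assumption).
    return max(n_min, int(n_max * (1.0 - confidence)))

def track(observations, n_init=100, motion_noise=0.5):
    particles = [random.gauss(observations[0], 2.0) for _ in range(n_init)]
    estimates = []
    for z in observations:
        # Predict: simple random-walk prediction model.
        particles = [p + random.gauss(0.0, motion_noise) for p in particles]
        weights = [likelihood(p, z) for p in particles]
        confidence = max(weights)  # crude confidence proxy
        est = sum(p * w for p, w in zip(particles, weights)) / sum(weights)
        estimates.append(est)
        particles = resample(particles, weights, adapt_count(confidence))
    return estimates
```

In a real tracker the particle state would be the AAM shape/pose parameters rather than a scalar, but the adapt-resample loop has the same structure.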

Multiple Active Appearance Model을 이용한 얼굴 특징 추출 기법 (Facial Feature Extraction using Multiple Active Appearance Model)

  • 박현준;김광백;차의영
    • 한국전자통신학회논문지, Vol. 8 No. 8, pp.1201-1206, 2013
  • The active appearance model (AAM) is a technique for extracting a face and its facial features from an image. In this paper, we propose a multiple active appearance model (MAAM) technique that extracts facial features using two AAMs. The two AAMs are generated with different training parameters so that their strengths and weaknesses are complementary, allowing each to compensate for the other's shortcomings. To evaluate the proposed method, facial feature extraction experiments were performed on 100 images. The results show that, compared with the conventional single-AAM approach, accurate results are obtained with fewer fitting iterations.
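The complementary two-model idea can be illustrated with a deliberately simplified "fit both, keep the better" scheme. The toy fitter below (a point-wise clamp standing in for AAM fitting) and the two flexibility values are invented for the sketch:

```python
def fit(model_mean, flexibility, observed):
    # Toy "AAM" fit: move each model point toward the observation,
    # but only as far as the model's flexibility allows.
    fitted = []
    for m, o in zip(model_mean, observed):
        step = max(-flexibility, min(flexibility, o - m))
        fitted.append(m + step)
    error = sum(abs(f - o) for f, o in zip(fitted, observed))
    return fitted, error

def maam_fit(model_mean, observed):
    # Two models with opposing trade-offs (values are illustrative):
    # a rigid model that resists noise and a flexible model that
    # captures large variation; keep whichever fits better.
    candidates = [fit(model_mean, flex, observed) for flex in (0.5, 2.0)]
    return min(candidates, key=lambda c: c[1])
```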

Rapid Prototyping 모델을 이용한 골삭제를 위한 외과적 지표;섬유성 골이형성증 치료를 위한 기술적 제안 (SURGICAL INDEX FOR BONE SHAVING USING RAPID PROTOTYPING MODEL;TECHNICAL PROPOSAL FOR TREATMENT OF FIBROUS DYSPLASIA)

  • 김운규
    • Maxillofacial Plastic and Reconstructive Surgery, Vol. 23 No. 4, pp.366-375, 2001
  • Bone shaving is the usual method of surgical correction for facial asymmetry in patients with fibrous dysplasia. However, deciding the amount of bone shaving during preoperative planning is very difficult when aiming for an ideal occlusal relationship and a harmonious face. Preoperative planning for facial asymmetry with fibrous dysplasia is generally confirmed by simulation surgery based on clinical examination, radiographic analysis, and analysis of a facial study model, but this approach cannot accurately predict the postoperative result. Using a computed-tomography-based rapid prototyping (RP) model, the facial skeleton can be duplicated and three-dimensional simulation surgery can be performed. After a postoperative study model was fabricated by preoperative bone shaving, preoperative and postoperative surgical indexes were made with an omnivacuum and clear acrylic resin. The amount of bone shaving is confirmed intraoperatively by superimposition of the surgical index. We performed surgical correction of facial asymmetry in patients with fibrous dysplasia using the surgical index and the rapid prototyping model, and obtained favorable results.


MPEG-4 FAP 기반 세분화된 얼굴 근육 모델 구현 (Subdivided Facial Muscle Modeling based on MPEG-4 FAP)

  • 이인서;박운기;전병우
    • 대한전자공학회:학술대회논문집, 대한전자공학회 2000년도 제13회 신호처리 합동 학술대회 논문집, pp.631-634, 2000
  • In this paper, we propose a method for implementing a system that decodes parameter data based on the Facial Animation Parameters (FAP) developed by the MPEG-4 Synthetic/Natural Hybrid Coding (SNHC) subcommittee. The decoded data drive a human muscle-model animation engine according to the FAP. The proposed model has the basic properties of human skin, specified by an energy functional, for realistic facial animation.

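The general mechanism behind FAP-driven muscle animation can be sketched in a few lines: an MPEG-4 FAP specifies a displacement of one feature point in face-specific units (FAPU), and a muscle model propagates it to nearby skin vertices. The Gaussian falloff and all values below are illustrative assumptions, not the paper's energy-functional model:

```python
import math

def apply_fap(vertices, control_idx, fap_value, fapu, direction, radius=2.0):
    # Minimal sketch of FAP-style deformation on a 2-D vertex list:
    # the FAP displaces the control feature point by fap_value * fapu
    # along `direction`, and nearby vertices follow with a smooth
    # falloff, as a skin/muscle model would.
    cx, cy = vertices[control_idx]
    dx = direction[0] * fap_value * fapu
    dy = direction[1] * fap_value * fapu
    out = []
    for (x, y) in vertices:
        dist = math.hypot(x - cx, y - cy)
        w = math.exp(-(dist / radius) ** 2)  # weight 1 at the feature point
        out.append((x + w * dx, y + w * dy))
    return out
```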

3차원 정서 공간에서 마스코트 형 얼굴 로봇에 적용 가능한 동적 감정 모델 (Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot)

  • 박정우;이희승;조수훈;정명진
    • 로봇학회논문지, Vol. 2 No. 3, pp.282-287, 2007
  • Humanoid and android robots are emerging as the trend shifts from industrial robots to personal robots, so human-robot interaction will increase. The ultimate goal for a humanoid or android is a robot that is like a human, and in this respect implementing facial expressions is necessary for making a human-like robot. This paper proposes a dynamic emotion model for a mascot-type robot that displays human-like and more recognizable facial expressions.

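A common way to make an emotion model "dynamic" in a 3-D affect space is to let the emotional state move smoothly toward a stimulus-driven target instead of jumping between discrete emotions. The spring-damper dynamics below are one such formulation, given here only as a sketch (the axes, gains, and integration scheme are assumptions, not the paper's model):

```python
def step(state, target, velocity, stiffness=4.0, damping=3.0, dt=0.05):
    # One semi-implicit Euler step of a spring-damper in 3-D affect
    # space (e.g. arousal / valence / stance axes); the dynamics smooth
    # the robot's expression over time.
    new_s, new_v = [], []
    for s, t, v in zip(state, target, velocity):
        a = stiffness * (t - s) - damping * v  # pull toward target, damped
        v2 = v + a * dt
        new_v.append(v2)
        new_s.append(s + v2 * dt)
    return new_s, new_v
```

Run repeatedly, the state converges on the target emotion, and an expression renderer can be driven from the state point at every step.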

A Probabilistic Network for Facial Feature Verification

  • Choi, Kyoung-Ho;Yoo, Jae-Joon;Hwang, Tae-Hyun;Park, Jong-Hyun;Lee, Jong-Hoon
    • ETRI Journal, Vol. 25 No. 2, pp.140-143, 2003
  • In this paper, we present a probabilistic approach to determining whether extracted facial features from a video sequence are appropriate for creating a 3D face model. In our approach, the distance between two feature points selected from the MPEG-4 facial object is defined as a random variable for each node of a probability network. To avoid generating an unnatural or non-realistic 3D face model, automatically extracted 2D facial features from a video sequence are fed into the proposed probabilistic network before a corresponding 3D face model is built. Simulation results show that the proposed probabilistic network can be used as a quality control agent to verify the correctness of extracted facial features.


Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk;Kim, Hyeon-Joong;Choi, In-Ho;Kim, Jin-Seo;Choi, Soo-Mi
    • ETRI Journal, Vol. 34 No. 5, pp.791-794, 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.
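The feature-preserving simplification step can be illustrated with a simple scheme: always keep detected feature points, and merge the remaining points per grid cell. Grid clustering here stands in for the paper's adaptive method; cell size and data layout are assumptions:

```python
def simplify(points, feature_idx, cell=1.0):
    # Feature-preserving point reduction: detected facial feature
    # points are kept verbatim, while the remaining points are merged
    # into one centroid per grid cell.
    features = set(feature_idx)
    kept = [points[i] for i in sorted(features)]
    cells = {}
    for i, (x, y, z) in enumerate(points):
        if i in features:
            continue
        key = (int(x // cell), int(y // cell), int(z // cell))
        cells.setdefault(key, []).append((x, y, z))
    for pts in cells.values():
        n = len(pts)
        kept.append(tuple(sum(p[k] for p in pts) / n for k in range(3)))
    return kept
```

An adaptive variant would shrink the cell size near high-curvature regions, which is the spirit of the paper's approach.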

MobileNet과 TensorFlow.js를 활용한 전이 학습 기반 실시간 얼굴 표정 인식 모델 개발 (Development of a Real-time Facial Expression Recognition Model using Transfer Learning with MobileNet and TensorFlow.js)

  • 차주호
    • 디지털산업정보학회논문지, Vol. 19 No. 3, pp.245-251, 2023
  • Facial expression recognition plays a significant role in understanding human emotional states. With the advancement of AI and computer vision technologies, extensive research has been conducted in various fields, including improving customer service, medical diagnosis, and assessing learners' understanding in education. In this study, we develop a model that can infer emotions in real-time from a webcam using transfer learning with TensorFlow.js and MobileNet. While existing studies focus on achieving high accuracy using deep learning models, these models often require substantial resources due to their complex structure and computational demands. Consequently, there is a growing interest in developing lightweight deep learning models and transfer learning methods for restricted environments such as web browsers and edge devices. By employing MobileNet as the base model and performing transfer learning, our study develops a deep learning transfer model utilizing JavaScript-based TensorFlow.js, which can predict emotions in real-time using facial input from a webcam. This transfer model provides a foundation for implementing facial expression recognition in resource-constrained environments such as web and mobile applications, enabling its application in various industries.
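The transfer-learning structure described above (a frozen pretrained base, with only a small head trained on the target task) can be shown in miniature in pure Python. A fixed random projection stands in for MobileNet's pretrained feature extractor; everything here is an assumption for illustration, not the paper's TensorFlow.js implementation:

```python
import math
import random

random.seed(1)

# Frozen "base model": a fixed random ReLU projection standing in for
# MobileNet's pretrained features -- its weights are never updated.
BASE_W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def base_features(x):
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in BASE_W]

def train_head(samples, labels, lr=0.5, epochs=200):
    # Transfer learning: only this small logistic-regression head is
    # trained, on top of the frozen base features.
    w = [0.0] * 8
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = base_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                                  # gradient of log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = base_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
```

In the browser setting of the paper, the same split applies: MobileNet is loaded once, its layers are frozen, and a small dense head is trained in TensorFlow.js on webcam frames.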

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • 제어로봇시스템학회:학술대회논문집, 제어로봇시스템학회 2005년도 ICCAS, pp.2373-2378, 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. In the facial feature extraction stage, features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.

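A fuzzy classifier of the kind mentioned in the final stage can be sketched with triangular membership functions and min/max rule firing. The two geometric features, the rule base, and all breakpoints below are invented for the sketch; the paper's actual features and rules are not specified here:

```python
def tri(x, a, b, c):
    # Triangular fuzzy membership: 0 outside [a, c], peak 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative rule base: each emotion is described by fuzzy sets over
# two features (mouth openness, eyebrow raise), both normalized to [0, 1].
RULES = {
    "neutral":  {"mouth": (0.0, 0.1, 0.3), "brow": (0.0, 0.2, 0.5)},
    "surprise": {"mouth": (0.4, 0.8, 1.0), "brow": (0.5, 0.9, 1.0)},
    "happy":    {"mouth": (0.2, 0.5, 0.8), "brow": (0.1, 0.4, 0.7)},
}

def classify(mouth, brow):
    # Mamdani-style firing: AND the antecedents via min, then pick the
    # emotion whose rule fires most strongly.
    scores = {
        emo: min(tri(mouth, *r["mouth"]), tri(brow, *r["brow"]))
        for emo, r in RULES.items()
    }
    return max(scores, key=scores.get)
```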

웹기반 3차원 얼굴 모델링 시스템 (Web-based 3D Face Modeling System)

  • 김응곤;송승헌
    • 한국정보통신학회논문지, Vol. 5 No. 3, pp.427-433, 2001
  • This study proposes a web-based 3D face modeling system that performs realistic face modeling efficiently while greatly reducing cost and effort compared with existing methods, without using a 3D scanner or camera. A 3D face model can be created from frontal and profile photographs without expensive image-capture equipment. The system is designed so that a 3D face model can be created by connecting to the face modeling server on the web, independent of any particular platform or software. The 3D graphics modules of the face modeler are being developed using the Java 3D API, which offers the features and convenience of established graphics libraries. The face modeling system has a client/server architecture: when a client-side user connects to the system, the face modeler runs as a Java applet, and the user creates a 3D face model through a step-by-step procedure in a web browser, using only two photographs as input.

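The geometric core of two-photo face modeling is simple: the frontal photo supplies each feature point's (x, y) and the aligned profile photo supplies its depth z. The sketch below assumes the two point lists are in matching order and already scale-aligned, which the paper's system would establish during calibration:

```python
def build_3d_points(front_pts, side_pts):
    # front_pts: (x, y) per feature from the frontal photo.
    # side_pts:  (z, y) per feature from the profile photo.
    # The shared y coordinate is averaged across the two views.
    model = []
    for (x, y), (z, y_side) in zip(front_pts, side_pts):
        model.append((x, (y + y_side) / 2.0, z))
    return model
```

The resulting 3D feature points would then drive the deformation of a generic head mesh on the server side.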