• Title/Abstract/Keywords: Facial Images

626 search results

Can a spontaneous smile invalidate facial identification by photo-anthropometry?

  • Pinto, Paulo Henrique Viana; Rodrigues, Caio Henrique Pinke; Rozatto, Juliana Rodrigues; da Silva, Ana Maria Bettoni Rodrigues; Bruni, Aline Thais; da Silva, Marco Antonio Moreira Rodrigues; da Silva, Ricardo Henrique Alves
    • Imaging Science in Dentistry, Vol. 51, No. 3, pp. 279-290, 2021
  • Purpose: Using images in the facial image comparison process poses a challenge for forensic experts due to limitations such as the presence of facial expressions. The aims of this study were to analyze how morphometric changes in the face during a spontaneous smile influence the facial image comparison process and to evaluate the reproducibility of measurements obtained by digital stereophotogrammetry in these situations. Materials and Methods: Three examiners used digital stereophotogrammetry to obtain 3-dimensional images of the faces of 10 female participants (aged between 23 and 45 years). The participants' faces were captured at rest (group 1) and with a spontaneous smile (group 2), resulting in a total of 60 3-dimensional images. The digital stereophotogrammetry device obtained the images with a 3.5-ms capture time, which prevented undesirable movements by the participants. Linear measurements between facial landmarks were made in millimeters, and the data were subjected to multivariate and univariate statistical analyses using Pirouette® version 4.5 (InfoMetrix Inc., Woodinville, WA, USA) and Microsoft Excel® (Microsoft Corp., Redmond, WA, USA), respectively. Results: The measurements that most strongly influenced the separation of the groups were related to the labial/buccal region. In general, the data showed low standard deviations, differing by less than 10% from the measured mean values, which demonstrates that the digital stereophotogrammetry technique was reproducible. Conclusion: The impact of spontaneous smiles on the facial image comparison process should be considered, and digital stereophotogrammetry provided good reproducibility.
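
The reproducibility criterion in this abstract (standard deviation differing from the measured mean by less than 10%) is easy to make concrete. Below is a minimal Python sketch of how repeated inter-landmark distances could be checked against that criterion; the landmark values are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of the reproducibility check described above: repeated
# inter-landmark distance measurements (mm) are compared against the
# 10%-of-mean criterion. All values are hypothetical stand-ins.
import numpy as np

def distance(p, q):
    """Euclidean distance between two 3D facial landmarks (mm)."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

# Hypothetical repeated measurements of one distance (e.g., between the
# labial commissures) taken by the three examiners:
measurements = np.array([52.1, 51.8, 52.4])

mean, sd = measurements.mean(), measurements.std(ddof=1)
reproducible = sd < 0.10 * mean   # SD differs from the mean by less than 10%
print(f"mean={mean:.2f} mm, sd={sd:.2f} mm, reproducible={reproducible}")
```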

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • 김동규; 이소화; 봉재환
    • 한국전자통신학회논문지, Vol. 17, No. 6, pp. 1137-1144, 2022
  • In this study, an artificial intelligence system was developed to assist facial expression practice for expressing emotions. The developed AI feeds a multimodal input, consisting of a descriptive sentence and a facial expression image, into deep neural networks and outputs the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices the facial expression appropriate to the situation given by the descriptive sentence, and the AI gives feedback by outputting the similarity between the sentence and the user's expression as a numerical score. To predict emotion from facial images, a ResNet34 architecture was used and trained on the public FER2013 dataset. To predict emotion from descriptive sentences in natural language, a KoBERT model was fine-tuned by transfer learning on AIHub's conversational speech dataset for emotion classification. The deep neural network predicting emotion from facial images achieved 65% accuracy, demonstrating human-level emotion classification ability. The deep neural network predicting emotion from sentences achieved 90% accuracy. The performance of the developed AI was verified through facial expression practice experiments in which adults with no impairment in emotional expression used the system.
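
The similarity feedback described in this abstract can be sketched compactly. The following Python fragment compares an emotion distribution predicted from a sentence with one predicted from a face image; the label set is hypothetical, and cosine similarity is one plausible measure (the paper does not specify its exact metric).

```python
# A minimal sketch of the similarity feedback described above: cosine
# similarity between two emotion probability distributions, one predicted
# from the sentence (e.g., by KoBERT) and one from the facial image (e.g.,
# by ResNet34). The class order and example values are hypothetical.
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "surprise", "neutral"]  # assumed labels

def similarity(p_text, p_image):
    """Cosine similarity between two softmax outputs, in [0, 1] here."""
    p_text, p_image = np.asarray(p_text), np.asarray(p_image)
    return float(p_text @ p_image /
                 (np.linalg.norm(p_text) * np.linalg.norm(p_image)))

p_from_sentence = [0.05, 0.80, 0.05, 0.05, 0.05]  # a "happy" situation
p_from_face     = [0.10, 0.60, 0.10, 0.10, 0.10]  # user's attempted expression
print(f"expression match: {similarity(p_from_sentence, p_from_face):.2f}")
```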

The Improving Method of Facial Recognition Using the Genetic Algorithm (유전자 알고리즘에 의한 얼굴인식성능의 향상 방안)

  • 배경율
    • 지능정보연구, Vol. 11, No. 1, pp. 95-105, 2005
  • In a security system that controls access by facial recognition, recognition performance is strongly affected by changes in the subject (expression, hairstyle, age, makeup). To compensate for such constantly changing conditions, typical facial recognition systems set a fixed security threshold and update the database by replacing previously registered faces with, or additionally registering, faces that fall within the threshold. However, this approach can produce inaccurate matching results and can easily respond to similar faces. We therefore propose a genetic algorithm, with its strong learning performance, as a method that absorbs the similarity between faces and the changes in the subject while preventing incorrect face registration. Experiments were performed on highly variable and similar facial images (300 varied face images, 10 per person), and the recognition method used eigenfaces based on principal component analysis. Compared with a conventional facial-recognition access control system, the proposed method not only improved the recognition rate for dominant individuals but also reduced the rate of responses to similar faces (recessive individuals).
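
The eigenface back end mentioned in this abstract (PCA-based matching against registered faces with a threshold) can be sketched as follows; the genetic-algorithm registration policy itself is not reproduced here, and all data are random stand-ins.

```python
# A minimal eigenface (PCA) matching sketch in the spirit of the recognition
# back end described above; the GA-based update policy is not shown.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.random((300, 64 * 64))      # 300 enrolled face images, flattened

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
# Top-k principal components ("eigenfaces") via SVD of the centered gallery.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                      # keep 20 components

def project(face):
    """Project a flattened face into the eigenface subspace."""
    return eigenfaces @ (face - mean_face)

gallery_codes = np.vstack([project(f) for f in gallery])

def match(probe, threshold=5.0):
    """Index of the nearest gallery face, or None if outside the threshold."""
    d = np.linalg.norm(gallery_codes - project(probe), axis=1)
    i = int(d.argmin())
    return i if d[i] < threshold else None

print(match(gallery[7]))                  # a gallery face matches itself -> 7
```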


A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 2, No. 2, pp. 120-133, 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation; in this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked by optical flow and retargeted to the 3D face model. At the same time, we exploit an RBF (radial basis function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
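
The RBF-based local deformation step can be illustrated with a short sketch: feature-point displacements recovered by optical flow are interpolated to nearby mesh vertices with a Gaussian radial basis function. The vertex and feature data below are hypothetical, and the Gaussian kernel is one common choice rather than the paper's stated one.

```python
# A minimal sketch of RBF-based local deformation: feature-point
# displacements are interpolated to mesh vertices with a Gaussian RBF.
import numpy as np

def rbf_deform(vertices, centers, displacements, sigma=0.1):
    """Move mesh vertices by RBF-interpolated feature-point displacements."""
    # Solve for RBF weights so the deformation reproduces the displacements
    # exactly at the feature points (centers).
    k = np.exp(-np.linalg.norm(centers[:, None] - centers[None],
                               axis=2)**2 / sigma**2)
    weights = np.linalg.solve(k, displacements)            # (n_centers, 3)
    # Evaluate the RBF field at every vertex and apply the offsets.
    phi = np.exp(-np.linalg.norm(vertices[:, None] - centers[None],
                                 axis=2)**2 / sigma**2)
    return vertices + phi @ weights

rng = np.random.default_rng(1)
verts = rng.random((100, 3))          # face-model vertices near a feature area
feats = rng.random((5, 3))            # tracked major feature points
disp = 0.02 * rng.standard_normal((5, 3))   # their optical-flow displacements
print(rbf_deform(verts, feats, disp).shape)  # -> (100, 3)
```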

Measurement of facial soft tissues thickness using 3D computed tomographic images (3차원 전산화단층촬영 영상을 이용한 얼굴 연조직 두께 계측)

  • 정호걸; 김기덕; 한승호; 신동원; 허경석; 이제범; 박혁; 박창서
    • Imaging Science in Dentistry, Vol. 36, No. 1, pp. 49-54, 2006
  • Purpose: To evaluate the accuracy and reliability of a program that measures facial soft tissue thickness on 3D computed tomographic images, by comparison with direct measurement. Materials and Methods: One cadaver was scanned with a helical CT at 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed at a 1.5 mm reconstruction interval, and the images were transferred to a personal computer. Facial soft tissue thickness was measured on the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed against the cut surface. The facial soft tissues were then photographed with a high-resolution digital camera, the measurements were made on the photographic images, and the procedure was repeated ten times. A repeated-measures analysis of variance was used to compare the measurements from the two methods, and comparisons by area were analyzed with the Mann-Whitney test. Results: There were no statistically significant differences between the direct measurements and those made on the 3D images (p>0.05). There were statistical differences in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. Conclusion: The software program developed to measure facial soft tissue thickness on 3D images was sufficiently accurate to make the measurement of facial soft tissue thickness easier in forensic science and anthropology.
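
The accuracy comparison reported here (mean differences within 0.5 mm at most points) amounts to a simple paired check per landmark. A sketch follows; the landmark names and values are hypothetical stand-ins, not the study's data.

```python
# A minimal sketch of the accuracy comparison described above: paired direct
# and 3D-image measurements at each landmark, summarized by their difference.
import numpy as np

landmarks = ["glabella", "nasion", "pogonion"]          # hypothetical points
direct  = np.array([5.2, 4.1, 9.8])    # caliper measurements on the cadaver (mm)
from_3d = np.array([5.4, 4.0, 10.1])   # measurements on the 3D CT images (mm)

diff = from_3d - direct
for name, d in zip(landmarks, diff):
    flag = "ok" if abs(d) <= 0.5 else "check"   # the 0.5 mm band noted above
    print(f"{name}: difference {d:+.1f} mm ({flag})")
print(f"mean absolute difference: {np.abs(diff).mean():.2f} mm")
```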


Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동; 심재창
    • 한국정보과학회논문지:소프트웨어및응용, Vol. 31, No. 5, pp. 617-624, 2004
  • This study proposes a method for extracting the major facial features from 3D facial data using adjacent depth differences. When recognizing the depth of a particular part of an object, humans compare its depth with that of adjacent regions and, according to how pronounced the depth contrast is, perceive the part as relatively deep or shallow. By applying this perceptual principle to facial feature extraction, reliable and fast feature extraction is possible through a simple computation. The adjacent depth difference is generated as the difference in depth between two points separated by a fixed distance in the horizontal and in the vertical direction, respectively. By analyzing the generated horizontal and vertical adjacent depth differences together with the input 3D facial image, the nose region, the most prominent feature in a 3D facial image, was extracted.
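
The adjacent depth difference itself is a very simple operation: the depth map is differenced against a copy shifted by a fixed offset, horizontally and vertically. A minimal sketch on a synthetic range image (the offset and the bump standing in for the nose are hypothetical):

```python
# A minimal sketch of the adjacent depth difference (ADD) idea described
# above: depth differences at a fixed pixel offset, horizontally and
# vertically; regions of strong contrast indicate prominent features.
import numpy as np

def adjacent_depth_diff(depth, offset=5):
    """Horizontal and vertical depth differences at a fixed pixel offset."""
    dh = np.zeros_like(depth)
    dv = np.zeros_like(depth)
    dh[:, :-offset] = depth[:, offset:] - depth[:, :-offset]
    dv[:-offset, :] = depth[offset:, :] - depth[:-offset, :]
    return dh, dv

# Hypothetical range image: a smooth bump whose peak stands in for the nose.
y, x = np.mgrid[0:128, 0:128]
depth = np.exp(-((x - 64)**2 + (y - 70)**2) / 400.0)

dh, dv = adjacent_depth_diff(depth)
contrast = np.abs(dh) + np.abs(dv)
print(np.unravel_index(contrast.argmax(), contrast.shape))  # on the bump's flank
```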

METHODS OF EYEBROW REGION EXTRACTION AND MOUTH DETECTION FOR FACIAL CARICATURING SYSTEM PICASSO-2 EXHIBITED AT EXPO2005

  • Tokuda, Naoya; Fujiwara, Takayuki; Funahashi, Takuma; Koshimizu, Hiroyasu
    • 한국방송∙미디어공학회 학술대회논문집, IWAIT 2009, pp. 425-428, 2009
  • We have researched and developed the caricature generation system PICASSO. PICASSO outputs a deformed facial caricature by comparing the input face with a prepared mean face. We specialized it as PICASSO-2 to drive a robot exhibited at Aichi EXPO2005. This robot, driven by PICASSO-2, drew facial caricatures on shrimp rice crackers with a laser pen. We have recently been exhibiting another revised robot characterized by brush drawing. The system takes a couple of facial images with a CCD camera, extracts the facial features from the images, and generates the facial caricature in real time. We experimentally evaluated the performance of the caricatures using a large set of data collected at Aichi EXPO2005. The results made it clear that the system was not sufficiently accurate in eyebrow region extraction and mouth detection. In this paper, we propose improved methods for eyebrow region extraction and mouth detection.


Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems, Vol. 17, No. 2, pp. 337-351, 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
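
The fusion step described here, element-wise multiplication of the two streams' features followed by an SVM, can be sketched as follows. The feature vectors below are random stand-ins for the two CNNs' outputs, and the feature dimension and class count are assumptions.

```python
# A minimal sketch of the fusion step described above: element-wise
# multiplication of spatial-stream and temporal-stream features, then an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n, d = 200, 128
spatial_feats = rng.random((n, d))    # per-clip features from the spatial CNN
temporal_feats = rng.random((n, d))   # per-clip features from the flow CNN
labels = rng.integers(0, 6, n)        # six assumed expression classes

fused = spatial_feats * temporal_feats        # multiplicative feature fusion
clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```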

Soft tissue evaluation using 3-dimensional face image after maxillary protraction therapy (3차원 얼굴 영상을 이용한 상악 전방견인 치료 후의 연조직 평가)

  • 최동순; 이경훈; 장인산; 차봉근
    • 대한치과의사협회지, Vol. 54, No. 3, pp. 217-229, 2016
  • Purpose: The aim of this study was to evaluate the soft-tissue change after maxillary protraction therapy using three-dimensional (3D) facial images. Materials and Methods: This study used pretreatment (T1) and posttreatment (T2) 3D facial images from thirteen Class III malocclusion patients (6 boys and 7 girls; mean age, 8.9 ± 2.2 years) who received maxillary protraction therapy. The facial images were taken using an optical scanner (Rexcan III 3D scanner), and the T1 and T2 images were superimposed using the forehead area as a reference. The soft-tissue changes after treatment (T2-T1) were calculated three-dimensionally using 15 soft-tissue landmarks and 3 reference planes. Results: Anterior movements of the soft tissue were observed at the pronasale, subnasale, nasal ala, soft-tissue zygoma, and upper lip. Posterior movements were observed at the lower lip, soft-tissue B-point, and soft-tissue gnathion. Vertically, most soft-tissue landmarks moved downward at T2. In the transverse direction, bilateral landmarks, i.e., the exocanthion, zygomatic point, nasal ala, and cheilion, moved more laterally at T2. Conclusion: The facial soft tissue of Class III malocclusion patients changed three-dimensionally after maxillary protraction therapy. In particular, the facial profile was improved by forward movement of the midface and downward and backward movement of the lower face.
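
Once the T1 and T2 scans are superimposed on the forehead region, the T2-T1 soft-tissue change at each landmark reduces to a coordinate difference along the reference axes. A sketch with hypothetical landmark coordinates (the axis convention and all values are assumptions, not the study's data):

```python
# A minimal sketch of the T2-T1 change computation described above, assuming
# the two scans are already superimposed on the forehead region.
import numpy as np

landmarks = ["subnasale", "upper lip", "lower lip", "soft-tissue B-point"]
t1 = np.array([[0.0, 58.0, 12.0], [0.0, 48.0, 13.0],
               [0.0, 38.0, 12.0], [0.0, 30.0, 10.0]])   # pretreatment (mm)
t2 = np.array([[0.2, 59.5, 13.8], [0.1, 49.0, 14.6],
               [0.1, 36.5, 10.9], [0.0, 28.3,  8.8]])   # posttreatment (mm)

# Assumed axes: x = transverse, y = vertical, z = anteroposterior (+ forward).
for name, d in zip(landmarks, t2 - t1):
    ap = "anterior" if d[2] > 0 else "posterior"
    print(f"{name}: dx={d[0]:+.1f}, dy={d[1]:+.1f}, dz={d[2]:+.1f} mm ({ap})")
```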


Contour Extraction of Facial Features Based on the Enhanced Snake (개선된 스네이크를 이용한 얼굴 특징요소의 윤곽 추출)

  • 이성수; 장종환
    • 정보처리학회논문지:소프트웨어 및 데이터공학, Vol. 4, No. 8, pp. 309-314, 2015
  • The snake is one of the representative methods for extracting the contours of facial features. Snakes are simple and fast, but their performance depends on the initial contour and the shape of the object. To solve this problem, this paper proposes an enhanced snake that extracts contours more accurately by adding a snake point at the middle position of each snake segment. Applied to six test images of mouths and eyes, the proposed method reduced the RSD by about 2.8% to 5.8% compared with the greedy snake. In particular, most of the RSD reduction was obtained in contour regions with severe curvature, confirming through the experiments that the contours were extracted more accurately.
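
The midpoint-insertion refinement proposed here can be sketched directly: a new snake point is placed at the middle of every segment of the closed contour, doubling its resolution before the next snake iteration. A minimal example with a hypothetical initial contour:

```python
# A minimal sketch of the refinement described above: insert a new snake
# point at the midpoint of every segment of a closed contour.
import numpy as np

def insert_midpoints(points):
    """Insert a point at the midpoint of each segment of a closed contour."""
    nxt = np.roll(points, -1, axis=0)            # each point's next neighbor
    mids = (points + nxt) / 2.0
    out = np.empty((2 * len(points), 2))
    out[0::2] = points                           # original snake points
    out[1::2] = mids                             # new midpoint snake points
    return out

contour = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
refined = insert_midpoints(contour)
print(refined)   # 8 points: each original point followed by a segment midpoint
```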