• Title/Abstract/Keywords: Facial Features

Realistic Avatar Face Generation Using Shading Mechanism

  • 박연출
    • 인터넷정보학회논문지 / Vol. 5, No. 5 / pp. 79-91 / 2004
  • This paper proposes an automatic avatar face generation system that uses a shading-composition technique together with the feature-extraction methods of face recognition. The proposed system extracts facial feature information from a photograph and automatically generates an avatar face that resembles the person: shading is extracted from the photograph and composited with the image of each facial component, so the generated face comes closer to a photorealistic likeness. The paper presents a new pupil-extraction technique, a method for extracting feature information for each facial component, a classification scheme that reduces search time, an image-retrieval method based on similarity computation, and finally a method that extracts shading from the photograph and composites it with the retrieved facial components to generate a realistic avatar face.

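To make the composition step concrete, here is a minimal sketch of the shading-composition idea under stated assumptions: a smoothed luminance layer stands in for the extracted shading, and it is multiply-blended onto a pre-composited avatar feature image. The file names, blur size, and blend mode are illustrative, not the paper's.

```python
# Minimal sketch: extract a smooth luminance (shading) layer from the
# photograph and multiply-blend it onto a flat avatar feature image.
import cv2
import numpy as np

photo = cv2.imread("photo.jpg")            # source photograph (hypothetical path)
avatar = cv2.imread("avatar_parts.png")    # composited facial-feature image, same size

# The luminance channel approximates the shading of the face.
gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

# A heavy blur keeps the broad light/shadow structure and drops fine texture.
shading = cv2.GaussianBlur(gray, (51, 51), 0)

# Multiply-blend the shading layer onto the avatar features.
result = (avatar.astype(np.float32) * shading[..., None]).clip(0, 255).astype(np.uint8)
cv2.imwrite("shaded_avatar.png", result)
```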

A Study on Local Facial Features Using LDP

  • 조영탁; 정웅경; 안용학; 채옥삼
    • 융합보안논문지 / Vol. 14, No. 5 / pp. 49-56 / 2014
  • This paper proposes a method for representing local facial features based on the previously proposed LDP (Local Directional Pattern). To effectively represent both permanent facial features, such as the eyes and mouth, and the transient features that arise as expressions change, the proposed method defines overlappable blocks whose size and shape vary per facial feature and builds the facial feature vector from them. The proposed overlapping-block layout and feature representation not only inherit the advantages of geometric-feature-based approaches but also tolerate face-detection errors by exploiting the motion characteristics of each facial feature, and the variable block sizes preserve spatial information, reducing sampling error. Experimental results confirm that the proposed method improves the recognition rate over existing methods, and since its feature vector is shorter than existing facial feature vectors, the computational cost is also reduced.
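
As a sketch of the underlying operator, the following computes a standard LDP code with the eight Kirsch masks (k = 3 strongest directions) and a block-wise histogram over a rectangular region; the paper's overlapping, per-feature variable-size blocks are simplified here to a fixed grid inside one region.

```python
# Minimal sketch of LDP codes and block-wise histogram features.
import cv2
import numpy as np

KIRSCH = [np.array(m, dtype=np.float32) for m in (
    [[-3,-3, 5],[-3, 0, 5],[-3,-3, 5]], [[-3, 5, 5],[-3, 0, 5],[-3,-3,-3]],
    [[ 5, 5, 5],[-3, 0,-3],[-3,-3,-3]], [[ 5, 5,-3],[ 5, 0,-3],[-3,-3,-3]],
    [[ 5,-3,-3],[ 5, 0,-3],[ 5,-3,-3]], [[-3,-3,-3],[ 5, 0,-3],[ 5, 5,-3]],
    [[-3,-3,-3],[-3, 0,-3],[ 5, 5, 5]], [[-3,-3,-3],[-3, 0, 5],[-3, 5, 5]])]

def ldp_code(gray, k=3):
    """Per-pixel LDP code: set the bits of the k strongest edge responses."""
    resp = np.stack([np.abs(cv2.filter2D(gray.astype(np.float32), -1, m))
                     for m in KIRSCH])                  # (8, H, W)
    top = np.argsort(resp, axis=0)[-k:]                 # k strongest directions
    code = np.zeros(gray.shape, dtype=np.uint8)
    for idx in top:
        code |= (1 << idx).astype(np.uint8)
    return code

def block_histogram(code, region, grid=(4, 4)):
    """Concatenate 256-bin LDP histograms over a grid inside `region`."""
    x, y, w, h = region
    patch = code[y:y+h, x:x+w]
    hists = []
    for rows in np.array_split(patch, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hists.append(np.bincount(cell.ravel(), minlength=256))
    return np.concatenate(hists).astype(np.float32)
```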

Cystic Salivary Duct Carcinoma Penetrated by Facial Nerve

  • Kim, Yunghoon; Park, Ji-Ung
    • Archives of Plastic Surgery / Vol. 49, No. 4 / pp. 523-526 / 2022
  • Salivary duct carcinoma is a rare malignant salivary gland tumor that typically presents with solid features. When it occurs in the parotid gland, it can invade the facial nerve and cause facial nerve paralysis. In our case, however, the salivary duct carcinoma exhibited cystic features on computed tomographic imaging, and the facial nerve passed through the cyst. Total parotidectomy with level I to III dissections was performed, and the nerve passing through the tumor was sacrificed. The patient received postoperative radiotherapy and was clinically and radiologically followed up every 3 months; no recurrence or distant metastasis was reported. To the best of our knowledge, this is the first reported case of a salivary duct carcinoma with cystic features and facial nerve invasion. Here, we report a cystic salivary duct carcinoma of the parotid gland, which uncommonly underwent cystic change and was penetrated by the facial nerve, resected successfully without causing facial nerve injury.

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp. 337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks then extract facial features in the temporal and spatial domains: a spatial convolutional neural network extracts spatial information from each frame of the static expression images, while a temporal convolutional neural network extracts dynamic information from the optical flow computed over multiple frames. The spatiotemporal features learned by the two networks are combined by multiplicative fusion. Finally, the fused features are fed to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the proposed method achieves recognition rates of 88.67%, 70.32%, and 63.84%, respectively, and comparative experiments show that it obtains higher recognition accuracy than other recently reported methods.
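
The fusion step itself is simple to state. Below is a minimal sketch assuming the spatial and temporal CNN feature vectors are already extracted and share a dimensionality of 128 (an assumption for brevity): fusion is element-wise multiplication, followed by an SVM classifier.

```python
# Minimal sketch of multiplicative feature fusion followed by an SVM.
import numpy as np
from sklearn.svm import SVC

def fuse(spatial_feats, temporal_feats):
    """Element-wise (multiplicative) fusion of the two CNN feature streams."""
    return spatial_feats * temporal_feats      # shapes: (n_samples, d)

# Hypothetical pre-extracted features and expression labels (random toy
# data, so the printed accuracy is chance-level; real features would come
# from the two trained CNNs).
rng = np.random.default_rng(0)
f_spatial = rng.normal(size=(200, 128))
f_temporal = rng.normal(size=(200, 128))
labels = rng.integers(0, 6, size=200)          # six basic expressions

fused = fuse(f_spatial, f_temporal)
clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("accuracy:", clf.score(fused[150:], labels[150:]))
```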

Study of Emotion Recognition Based on Facial Image for Emotional Rehabilitation Biofeedback

  • 고광은; 심귀보
    • 제어로봇시스템학회논문지 / Vol. 16, No. 10 / pp. 957-962 / 2010
  • To recognize human emotion from a facial image, we must first extract emotional features from the image with a feature-extraction algorithm and then classify the emotional state with a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing non-rigid objects such as the face and facial expressions, and the Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction combines AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The emotion recognition result can then be used in biofeedback-based rehabilitation for the emotionally disabled.
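
As a rough illustration of the AAM-to-FACS step, the sketch below turns tracked landmark positions into simple action-unit-like measurements; the landmark indices and the AU mapping are hypothetical, and a dynamic Bayesian network (not shown, e.g. via pgmpy) would model the temporal phases over these per-frame features.

```python
# Minimal sketch: AAM landmark tracks -> FACS-style action-unit proxies.
import numpy as np

def au_features(landmarks):
    """landmarks: (T, 68, 2) array of AAM fits over T frames, assuming a
    hypothetical 68-point layout. Returns per-frame AU-like measurements."""
    brow = landmarks[:, 19]        # eyebrow point (hypothetical index)
    eye = landmarks[:, 37]         # upper eyelid point (hypothetical index)
    mouth_top = landmarks[:, 51]
    mouth_bot = landmarks[:, 57]
    au1_like = np.linalg.norm(brow - eye, axis=1)              # brow raise
    au25_like = np.linalg.norm(mouth_top - mouth_bot, axis=1)  # lips part
    return np.stack([au1_like, au25_like], axis=1)             # (T, 2)

# A DBN would then model the onset/apex/offset phases of an expression
# as hidden states evolving over these per-frame measurements.
```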

Face Classification and Analysis Based on Geometrical Features of the Face

  • 정광민; 김정훈
    • 한국정보통신학회논문지 / Vol. 16, No. 7 / pp. 1495-1504 / 2012
  • This paper proposes algorithms for classifying and interpreting the facial features (eyebrows, eyes, mouth, and jawline) based on the geometric feature information of the face. First, as a preprocessing step for classification and interpretation, a facial-feature extraction algorithm is applied to extract the eyes, nose, mouth, eyebrows, and jawline. The shape information of the extracted features and the distance ratios between them are computed and turned into evaluation functions, which classify three eye types, nine mouth types, twelve eyebrow types, and four jawline types. The face is then interpreted using these classified features. The face-interpretation algorithm holds, for each feature, the pixel-distribution and gradient information of the feature's interior region, so the face can be interpreted from the information between features.
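
A minimal sketch of the evaluation-function idea follows: shape measurements and distance ratios are scored against per-type prototypes, and the best-scoring type wins. The prototype values and measurements are illustrative stand-ins, not the paper's tables.

```python
# Minimal sketch: classify an eye type via a prototype-distance score.
import numpy as np

# Prototype (aspect ratio, axis slope) pairs per eye type (illustrative).
EYE_PROTOTYPES = {
    "round":    np.array([0.45, 0.05]),
    "almond":   np.array([0.30, 0.15]),
    "upturned": np.array([0.30, 0.30]),
}

def classify_eye(width, height, outer_corner, inner_corner):
    ratio = height / width                       # aspect ratio of the eye
    dy = outer_corner[1] - inner_corner[1]
    dx = outer_corner[0] - inner_corner[0]
    slope = abs(dy / dx)                         # tilt of the eye axis
    measured = np.array([ratio, slope])
    # Evaluation function: negative distance to each type's prototype.
    scores = {t: -np.linalg.norm(measured - p) for t, p in EYE_PROTOTYPES.items()}
    return max(scores, key=scores.get)

print(classify_eye(30.0, 13.5, (60, 40), (32, 41)))   # prints "round"
```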

Exploring the Feasibility of Neural Networks for Criminal Propensity Detection through Facial Features Analysis

  • Amal Alshahrani; Sumayyah Albarakati; Reyouf Wasil; Hanan Farouquee; Maryam Alobthani; Someah Al-Qarni
    • International Journal of Computer Science & Network Security / Vol. 24, No. 5 / pp. 11-20 / 2024
  • While artificial neural networks are adept at identifying patterns, they can struggle to distinguish actual correlations from spurious associations between extracted facial features and criminal behavior in the training data. These associations may not indicate causal connections: socioeconomic factors, ethnicity, or even chance occurrences in the data can influence both facial features and criminal activity, so the network might identify linked features without understanding the underlying cause. This raises concerns about incorrect linkages and the potential misclassification of individuals based on features unrelated to criminal tendencies. To address this challenge, we propose a novel region-based training approach for artificial neural networks focused on criminal propensity detection. Instead of relying solely on overall facial recognition, the network systematically analyzes each facial feature in isolation. This fine-grained approach enables the network to identify which specific features hold the strongest correlations with criminal activity within the training data, and focusing on these key features allows the network to be optimized for more accurate and reliable prediction. This study examines the effectiveness of various algorithms for criminal propensity classification, evaluating YOLOv5 and YOLOv8 alongside VGG-16. Our findings indicate that YOLO achieved the highest accuracy, 0.93, in classifying criminal and non-criminal facial features. While these results are promising, we acknowledge the need for further research on bias and misclassification in criminal justice applications.
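
The region-isolation idea can be sketched independently of any particular network. In the sketch below, `model` is any classifier exposing a `predict` method on a single image (an assumed interface), and the per-image region bounding boxes are hypothetical detector output.

```python
# Minimal sketch: evaluate a classifier on one isolated facial region at a
# time to see which region carries the predictive signal.
import numpy as np

REGIONS = ["eyes", "nose", "mouth", "jaw"]

def isolate_region(image, boxes, region):
    """Zero out everything except the named region's bounding box."""
    x, y, w, h = boxes[region]
    masked = np.zeros_like(image)
    masked[y:y+h, x:x+w] = image[y:y+h, x:x+w]
    return masked

def per_region_accuracy(model, images, boxes_list, labels):
    """Accuracy of `model` when it sees only one facial region at a time."""
    acc = {}
    for region in REGIONS:
        preds = [model.predict(isolate_region(img, boxes, region))
                 for img, boxes in zip(images, boxes_list)]
        acc[region] = float(np.mean(np.array(preds) == np.array(labels)))
    return acc
```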

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
    • Journal of Electrical Engineering and Technology / Vol. 12, No. 4 / pp. 1657-1662 / 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed framework, the overall procedure includes accurate face detection, which removes background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation. The extracted 1D transform features are then reduced by principal component analysis, and the resulting features are trained and tested using a Hidden Markov Model (HMM). Experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved expression recognition rates of 96.75% and 96.92%, respectively, demonstrating the superiority of the proposed approach over state-of-the-art methods.
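
A minimal sketch of the per-class HMM stage, using the hmmlearn library: one GaussianHMM is trained per expression on (already PCA-reduced) per-frame feature sequences, and a test sequence takes the label of the highest-scoring model. The state count and feature dimensions are illustrative assumptions.

```python
# Minimal sketch: per-class HMMs for sequence classification.
import numpy as np
from hmmlearn import hmm

def train_models(sequences_by_class, n_states=4):
    """sequences_by_class: {label: list of (T_i, d) PCA-reduced arrays}."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                   # stack all frames of this class
        lengths = [len(s) for s in seqs]      # sequence boundaries for fit()
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Pick the expression whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(seq))
```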

Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition

  • 강현우; 임길택; 원철호
    • 한국멀티미디어학회논문지 / Vol. 20, No. 5 / pp. 748-757 / 2017
  • To recognize facial expressions, good features that can express the expressions are essential, as is finding the facial regions where expressions appear most discriminatively. In this study, we propose a directional LBP feature for facial expression recognition, together with a method for finding the directional LBP operation and feature regions for expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions found through AdaBoost learning. The facial expression classifier is implemented as an SVM classifier based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
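
As a sketch of the region-selection step, the code below uses scikit-learn's AdaBoost to rank candidate regions by learned feature importance; the single scalar per region (mean gradient magnitude) is a stand-in for the paper's directional LBP responses, not its actual operator.

```python
# Minimal sketch: rank candidate feature regions with AdaBoost importances.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def region_feature(image, box):
    """One illustrative scalar per region: mean gradient magnitude."""
    x, y, w, h = box
    patch = image[y:y+h, x:x+w].astype(np.float32)
    gy, gx = np.gradient(patch)
    return float(np.hypot(gx, gy).mean())

def rank_regions(images, labels, candidate_boxes):
    X = np.array([[region_feature(img, b) for b in candidate_boxes]
                  for img in images])
    clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
    order = np.argsort(clf.feature_importances_)[::-1]
    return [candidate_boxes[i] for i in order]   # most discriminative first
```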

Detection of Facial Direction for Automatic Image Arrangement

  • 동지연; 박지숙; 이환용
    • Journal of Information Technology Applications and Management / Vol. 10, No. 4 / pp. 135-147 / 2003
  • With the development of multimedia and optical technologies, application systems using facial features have increasingly attracted the interest of researchers. Previous research efforts in face processing mainly use frontal images to recognize the human face visually and to extract facial expressions. However, applications such as image database systems that support queries based on facial direction, and image arrangement systems that automatically place facial images in digital albums, must deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, a facial trapezoid is defined by detecting points for the eyes and the lower lip. A facial direction formula, which computes the left/right facial direction, is then derived from statistical data on the ratio of the right and left areas of facial trapezoids. The proposed method estimates the horizontal rotation of a face within an error tolerance of ±1.31 degrees and takes an average execution time of 3.16 seconds.

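The direction estimate can be sketched from the abstract alone. The code below simplifies the facial trapezoid to the eye-eye-lower-lip triangle, splits it at the vertical through the lip point, and maps the left/right area imbalance to degrees; the calibration constant is a placeholder assumption, since the paper derives the mapping from its statistical data.

```python
# Minimal sketch: left/right facial direction from a triangle area ratio.
def triangle_area(p, q, r):
    """Area of the triangle p-q-r via the cross product."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2.0

def facial_direction(left_eye, right_eye, lower_lip, k=90.0):
    """Positive result: face turned right; negative: turned left."""
    split = (lower_lip[0], (left_eye[1] + right_eye[1]) / 2.0)  # on the eye line
    left_area = triangle_area(left_eye, split, lower_lip)
    right_area = triangle_area(split, right_eye, lower_lip)
    ratio = (right_area - left_area) / (right_area + left_area)
    return k * ratio   # degrees; k is a hypothetical calibration constant

print(facial_direction((40, 50), (80, 50), (60, 100)))  # frontal face -> 0.0
```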