• Title/Abstract/Keyword: Facial Feature


스테레오 영상을 이용한 3차원 포즈 추정 (3D Head Pose Estimation Using The Stereo Image)

  • 양욱일;송환종;이용욱;손광훈
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp.1887-1890 / 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a stereo image pair, we automatically extract several important facial feature points using the disparity map, the Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region-dividing method based on the disparity map: in an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so we separate the face region from the background by the divergence of disparity. To estimate the 3D head pose, we propose a 2D-3D Error Compensated-SVD (EC-SVD) algorithm: we estimate the 3D coordinates of the facial features from the stereo correspondence and then estimate the head pose of the input image with the EC-SVD method. Experimental results show that the proposed method estimates pose accurately.

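The SVD step at the core of the pose estimation above is the standard least-squares rigid alignment of corresponding 3D points. The paper's EC-SVD refinement is not reproduced here; the following is a minimal sketch of the underlying SVD alignment, with synthetic feature points standing in for stereo-triangulated facial features.

```python
import numpy as np

def svd_pose(model_pts, observed_pts):
    """Estimate rotation R and translation t mapping model_pts to observed_pts.

    Standard least-squares rigid alignment via SVD (the step EC-SVD builds on);
    both inputs are (N, 3) arrays of corresponding 3D feature points.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                        # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Toy check: rotate some "facial feature" points by a known head pose and recover it.
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
obs = pts @ R_true.T + np.array([0.1, -0.2, 0.5])
R_est, t_est = svd_pose(pts, obs)
print(np.round(R_est, 3))
```

With noise-free correspondences the estimated rotation and translation match the true head pose exactly; the error-compensation stage of EC-SVD addresses the noisy, imperfect correspondences of real stereo data.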

Intraoperative Neurophysiological Monitoring during Microvascular Decompression Surgery for Hemifacial Spasm

  • Park, Sang-Ku;Joo, Byung-Euk;Park, Kwan
    • Journal of Korean Neurosurgical Society / Vol.62 No.4 / pp.367-375 / 2019
  • Hemifacial spasm (HFS) is caused by vascular compression of the facial nerve at its root exit zone (REZ). Microvascular decompression (MVD) of the facial nerve near the REZ is an effective treatment for HFS. In MVD for HFS, intraoperative neurophysiological monitoring (INM) serves two purposes. The first is to prevent injury to neural structures such as the vestibulocochlear and facial nerves during MVD surgery, which is possible through INM of the brainstem auditory evoked potential and facial nerve electromyography (EMG). The second, unique to MVD for HFS, is to assess and optimize the effectiveness of the vascular decompression. This is achieved mainly by monitoring an abnormal facial nerve EMG response called the lateral spread response (LSR), and partially through the Z-L response, the facial F-wave, and facial motor evoked potentials. With the INM techniques described above, MVD for HFS can be considered a safer and more effective treatment.

Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security / Vol.22 No.9 / pp.280-286 / 2022
  • A facial recognition system is a biometric technology. It is simpler to apply, and its working range is broader, than fingerprints, iris scans, signatures, etc. The system combines two technologies: face detection and face recognition. This study aims to develop a facial recognition system that recognizes people's faces. Such a system maps facial characteristics from photos or videos and compares the information with a given facial database to find a match, which helps identify a face. The developed system records several images, processes them, checks for a match in the database, and returns the result; it can recognize multiple faces in live recordings.
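As a rough illustration of the matching step described above — comparing extracted facial characteristics against a database — here is a minimal sketch using cosine similarity between feature vectors. The embeddings, names, and threshold are all stand-ins for illustration, not the paper's actual pipeline.

```python
import numpy as np

def best_match(query, database, threshold=0.6):
    """Return the best-matching identity for a query face embedding.

    `database` maps names to feature vectors; in a real system these would
    come from a face-embedding network, here they are stand-in vectors.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cos(query, vec) for name, vec in database.items()}
    name = max(scores, key=scores.get)
    # Report no match when even the best similarity is below the threshold.
    return (name, scores[name]) if scores[name] >= threshold else (None, scores[name])

db = {
    "alice": np.array([0.9, 0.1, 0.4]),
    "bob":   np.array([0.1, 0.8, 0.2]),
}
query = np.array([0.85, 0.15, 0.45])      # a probe vector close to alice's entry
name, score = best_match(query, db)
print(name, round(score, 3))
```

A production system would additionally handle enrollment, liveness, and multiple faces per frame; the core lookup reduces to this nearest-neighbor search in embedding space.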

심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법 (Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application)

  • 류정탁;양진모;최영숙;박세현
    • 한국산업정보학회논문지 / Vol.20 No.2 / pp.57-63 / 2015
  • Compared with other emotion recognition technologies, facial expression recognition is contactless, non-intrusive, and convenient. To apply vision technology to a psychological robot, the face region must be extracted accurately and quickly before expression recognition. In this paper, to improve face region detection, the background is first removed from the image using YCbCr skin-color information, and the Haar-like feature method is then applied. By removing the background from the input image, we obtained face detection results with improved processing speed that are robust to the background.
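The background-removal step above (skin-color thresholding in YCbCr space) can be sketched with plain NumPy. The Cb/Cr threshold ranges below are common literature values, assumed for illustration rather than taken from the paper.

```python
import numpy as np

# Commonly cited YCbCr skin-color ranges (illustrative assumptions,
# not the paper's exact thresholds).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels for an (H, W, 3) uint8 RGB image."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> Cb/Cr conversion.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) &
            (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]))

# A skin-toned pixel next to a blue background pixel.
img = np.array([[[224, 172, 138], [30, 60, 200]]], dtype=np.uint8)
mask = skin_mask(img)
print(mask)   # first pixel classified as skin, second as background
```

In the paper's pipeline, non-skin pixels would be suppressed before running the Haar-like-feature face detector over the remaining region, which is what yields the speedup.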

얼굴의 다중특징을 이용한 인증 시스템 구현 (A study on the implementation of identification system using facial multi-modal)

  • 정택준;문용선
    • 한국정보통신학회논문지 / Vol.6 No.5 / pp.777-782 / 2002
  • To improve recognition accuracy and user convenience, this study proposes a multimodal biometric method that uses multiple facial features instead of a single biometric. The facial features are obtained as follows: face features are computed by wavelet multi-resolution decomposition and principal component analysis; for the lips, the lip boundary is extracted and the coefficients of its equation are obtained by the least-squares method; and feature values are computed from the distance ratios between facial components. These features are classified with a backpropagation learning algorithm, and experiments confirm the effectiveness of the proposed method.
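The lip-boundary step above — fitting an equation to boundary points by least squares — can be sketched as follows. The synthetic boundary samples and the quadratic model are illustrative assumptions; the paper extracts the points from a real lip contour.

```python
import numpy as np

# Noisy samples along a parabola-like lower-lip boundary (synthetic data).
x = np.linspace(-1.0, 1.0, 11)
true_coeffs = np.array([0.5, 0.0, -0.3])            # a*x^2 + b*x + c
rng = np.random.default_rng(1)
y = np.polyval(true_coeffs, x) + rng.normal(scale=0.01, size=x.size)

# Least-squares fit of the boundary equation; the fitted coefficients
# serve as the lip-shape feature values.
coeffs = np.polyfit(x, y, deg=2)
print(np.round(coeffs, 3))
```

The recovered coefficients stay close to the generating ones despite the noise, which is why a low-order least-squares fit makes a compact, stable shape feature for the classifier.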

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / Vol.17 No.3 / pp.556-570 / 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features: a spatial convolution neural network extracts the spatial features of each static expression image, and a temporal convolution neural network extracts dynamic features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication, and the fused features are fed into a support vector machine for facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, better than that of the compared methods.
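The multiplicative fusion step is essentially an element-wise product of the two networks' feature vectors. A minimal sketch with stand-in random features (the real vectors would come from the spatial and temporal CNNs; the L2 normalization is an assumed preprocessing choice before the SVM):

```python
import numpy as np

rng = np.random.default_rng(0)
spatial_feat  = rng.random(128)   # stand-in for the spatial CNN feature vector
temporal_feat = rng.random(128)   # stand-in for the optical-flow CNN feature vector

# Multiplicative fusion: element-wise product of the two modalities,
# then L2 normalization before feeding the classifier.
fused = spatial_feat * temporal_feat
fused /= np.linalg.norm(fused)
print(fused.shape)
```

Compared with concatenation, the product emphasizes dimensions where both the spatial and the temporal streams respond strongly, which is the intuition behind multiplicative fusion.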

개인아바타 자동 생성을 위한 얼굴 구성요소의 추출에 관한 연구 (A Study on Face Component Extraction for Automatic Generation of Personal Avatar)

  • 최재영;황승호;양영규;황보택근
    • 인터넷정보학회논문지 / Vol.6 No.4 / pp.93-102 / 2005
  • Recently, netizens have widely used virtual characters called 'avatars' to express their identity in cyberspace, and users increasingly want avatars that resemble themselves. This paper studies the extraction of the face region and facial components, the underlying technology for automatic avatar generation; facial components are extracted using an active contour model (ACM) and edge information. In addition, by using the change in the area of the face region as the external energy of the ACM, we could reduce the effects of lighting and image degradation that occur in low-resolution photographs. As a result, the success rate of face region extraction was 92%, and facial component extraction achieved a success rate of 83.4%. By extracting the face region and facial components accurately, this work is expected to enable per-region feature processing in a future automatic avatar generation system.


표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크 (A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability)

  • 김진옥
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol.11 No.3 / pp.284-291 / 2005
  • This paper proposes a framework that detects human faces in video and analyzes and classifies their facial expressions based on learned expression patterns. To represent not only spatial information but also expression patterns that change over time, the framework uses PCA for spatial analysis of expression features and an expression HMM, based on the Hidden Markov Model (HMM), for spatio-temporal analysis. Because spatial feature extraction of expressions is closely tied to the temporal analysis process, the spatio-temporal HMM approach is effective for detecting, tracking, and classifying continuously varying expressions. The recognition framework is completed by a posterior probability method relating the current visual observation to the previous visual result. As a result, the proposed framework shows accurate and robust expression recognition not only for the six representative expressions but also for frames where the expression is weak. The framework is useful for applications such as expression recognition, HCI, and key-frame extraction.
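Combining per-expression HMM likelihoods into a posterior can be sketched with toy models. The two-state HMMs, binary observation alphabet, and uniform prior below are illustrative assumptions, not the paper's trained expression models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under one HMM
    (initial dist pi, transition matrix A, emission matrix B), via the
    scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return logp

# Two toy expression HMMs over a binary observation alphabet:
# the "smile" model favors symbol 1, the "neutral" model symbol 0.
models = {
    "smile":   (np.array([0.5, 0.5]),
                np.array([[0.8, 0.2], [0.2, 0.8]]),
                np.array([[0.3, 0.7], [0.1, 0.9]])),
    "neutral": (np.array([0.5, 0.5]),
                np.array([[0.8, 0.2], [0.2, 0.8]]),
                np.array([[0.7, 0.3], [0.9, 0.1]])),
}
obs = [1, 1, 0, 1, 1]   # quantized expression observations over time

# Posterior P(model | obs) by Bayes' rule with a uniform prior over models.
liks = {m: np.exp(forward_loglik(obs, *p)) for m, p in models.items()}
z = sum(liks.values())
posterior = {m: liks[m] / z for m in liks}
print(max(posterior, key=posterior.get))
```

Classifying by the maximum posterior rather than the raw likelihood is what lets prior (previous-frame) evidence be folded in, which is the role the posterior probability method plays in the framework.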

인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석 (Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX)

  • 전호범;고현관;이선경;송복득;김채규;권기룡
    • 한국멀티미디어학회논문지 / Vol.22 No.5 / pp.535-546 / 2019
  • Recently, there has been much research on whole-face replacement systems, but it is not easy to obtain stable results because of varying postures, angles, and facial diversity. To produce a natural synthesis result when replacing the face shown in a video image, technologies such as face area detection, feature extraction, face alignment, face area segmentation, 3D posture adjustment, and facial transposition must all operate at a precise level, and each must be able to be combined interdependently. Our analysis shows that, among face replacement technologies, facial feature point extraction and facial alignment have high implementation difficulty and contribute most to the system, while facial transposition and 3D posture adjustment have low difficulty but still need development. In this paper, we analyze four face swapping models suitable for the COX platform: 2-D Faceswap, OpenPose, Deepfake, and Cycle GAN. These models cover, respectively, frontal face pose image conversion, face pose images with active body movement, face movement of up to 15 degrees to the left and right, and a generative adversarial network approach.

Exploring the Feasibility of Neural Networks for Criminal Propensity Detection through Facial Features Analysis

  • Amal Alshahrani;Sumayyah Albarakati;Reyouf Wasil;Hanan Farouquee;Maryam Alobthani;Someah Al-Qarni
    • International Journal of Computer Science & Network Security / Vol.24 No.5 / pp.11-20 / 2024
  • While artificial neural networks are adept at identifying patterns, they can struggle to distinguish actual correlations from false associations between extracted facial features and criminal behavior within the training data. These associations may not indicate causal connections: socioeconomic factors, ethnicity, or even chance occurrences in the data can influence both facial features and criminal activity. Consequently, an artificial neural network might identify linked features without understanding the underlying cause, raising concerns about incorrect linkages and the potential misclassification of individuals based on features unrelated to criminal tendencies. To address this challenge, we propose a novel region-based training approach for artificial neural networks focused on criminal propensity detection. Instead of relying solely on overall facial recognition, the network systematically analyzes each facial feature in isolation. This fine-grained approach enables the network to identify which specific features hold the strongest correlations with criminal activity within the training data; by focusing on these key features, the network can be optimized for more accurate and reliable criminal propensity prediction. This study examines the effectiveness of various algorithms for criminal propensity classification. We evaluate YOLOv5 and YOLOv8 alongside VGG-16. Our findings indicate that YOLO achieved the highest accuracy (0.93) in classifying criminal and non-criminal facial features. While these results are promising, we acknowledge the need for further research on bias and misclassification in criminal justice applications.