• Title/Summary/Keyword: 얼굴 탐지 (face detection)

Search results: 81 (processing time: 0.042 seconds)

Affine Invariant Local Descriptors for Face Recognition (얼굴인식을 위한 어파인 불변 지역 서술자)

  • Gao, Yongbin;Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering / v.3 no.9 / pp.375-380 / 2014
  • Under controlled environments, such as fixed viewpoints or consistent illumination, the performance of face recognition is nowadays usually high enough to be acceptable. Face recognition in the real world, however, is still a challenging task. The SIFT (Scale Invariant Feature Transform) algorithm is scale and rotation invariant, but it is effective only for small viewpoint changes and often fails when the viewpoint of a face changes over a wide range. In this paper, we use Affine SIFT (ASIFT), an extension of SIFT designed to overcome this weakness, to detect affine invariant local descriptors for face recognition under wide viewpoint changes. In our scheme, ASIFT is applied only to the gallery face, while the SIFT algorithm is applied to the probe face. ASIFT generates a series of different viewpoints using affine transformations, so viewpoint differences between the gallery face and the probe face are tolerated. Experimental results showed that our framework achieved higher recognition accuracy than the original SIFT algorithm on the FERET database.
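
As a rough illustration of the matching scheme this abstract describes, the sketch below simulates extra gallery viewpoints with simple affine rotations (a simplified stand-in for full ASIFT, which also simulates camera tilts), extracts SIFT descriptors from each simulated view with OpenCV, and counts ratio-test matches against the probe face. The file names, angles, and ratio threshold are illustrative assumptions, not the paper's settings.

```python
import cv2

def simulate_views(img, angles=(-30, -15, 0, 15, 30)):
    """Generate affinely rotated views of the gallery face (simplified ASIFT idea)."""
    h, w = img.shape[:2]
    for angle in angles:
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        yield cv2.warpAffine(img, M, (w, h))

def match_score(gallery, probe, ratio=0.75):
    """Count ratio-test SIFT matches between simulated gallery views and the probe."""
    sift = cv2.SIFT_create()
    _, des_probe = sift.detectAndCompute(probe, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for view in simulate_views(gallery):
        _, des_view = sift.detectAndCompute(view, None)
        if des_view is None or des_probe is None:
            continue
        for pair in matcher.knnMatch(des_view, des_probe, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good += 1
    return good

# Hypothetical file names; the gallery identity with the highest score would be reported.
gallery = cv2.imread("gallery_face.png", cv2.IMREAD_GRAYSCALE)
probe = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)
print("good matches:", match_score(gallery, probe))
```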

Expression Analysis System of Game Player based on Multi-modal Interface (멀티 모달 인터페이스 기반 플레이어 얼굴 표정 분석 시스템 개발)

  • Jung, Jang-Young;Kim, Young-Bin;Lee, Sang-Hyeok;Kang, Shin-Jin
    • Journal of Korea Game Society / v.16 no.2 / pp.7-16 / 2016
  • In this paper, we propose a method for effectively detecting specific behaviors. The proposed method detects outlying behavior based on game players' characteristics, which are captured non-invasively in a general game environment, with keystroke data added on the basis of repeated patterns. Cameras were used to analyze observed data such as facial expressions and player movements, and multimodal data from the game players was used to analyze high-dimensional game-player data for detecting repeated behavior patterns. A support vector machine was used to detect outlying behaviors efficiently. We verified the effectiveness of the proposed method using games from several genres; the recall rate for the outlying behaviors pre-identified by industry experts was approximately 70%, and repeated behavior patterns could also be analyzed. The proposed method can further be used for feedback on and quantification of various interactive content provided in PC environments.
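
As a loose sketch of the SVM-based detection step, the code below fits a one-class SVM on feature vectors aggregated per play session (e.g., expression, movement, and keystroke statistics) and flags sessions that fall outside the learned region. The feature layout, parameters, and the one-class formulation are assumptions for illustration; the abstract does not specify the exact SVM setup.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Hypothetical per-session feature vectors:
# [mean expression intensity, movement variance, keystroke repetition rate, ...]
rng = np.random.default_rng(0)
normal_sessions = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
new_sessions = rng.normal(loc=0.0, scale=1.0, size=(20, 4))
new_sessions[:3] += 4.0  # inject a few outlying behaviour patterns

scaler = StandardScaler().fit(normal_sessions)
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(scaler.transform(normal_sessions))

# -1 marks sessions flagged as outlying behaviour, +1 marks typical play.
labels = detector.predict(scaler.transform(new_sessions))
print("flagged sessions:", np.where(labels == -1)[0])
```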

Face Detection through Implementation of adaptive Saliency map (적응적인 Saliency map 모델 구현을 통한 얼굴 검출)

  • Kim, Gi-Jung;Han, Yeong-Jun;Han, Hyeon-Su
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.153-156 / 2007
  • Through selective attention, the human visual system extracts only the necessary information from the many objects reaching the visual receptors and performs the desired task. Itti and Koch proposed a computational model, inspired by the nervous system, that can control visual attention, but their saliency map is fixed with respect to the illumination environment. This paper therefore presents a technique for constructing a saliency map model that adapts to the illumination environment in order to detect regions of interest (ROI) in an image. To emphasize the desired features in a changing environment, dynamic weights adapted to the situation are assigned. The dynamic weights are implemented by applying the PIM (Picture Information Measure) proposed by S. K. Chang to each conspicuity map to measure its information content, and then assigning normalized values accordingly. The performance of the proposed illumination-robust adaptive saliency map model is verified through face detection experiments.
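
To make the dynamic-weighting idea concrete, here is a minimal sketch: each conspicuity map's information content is measured with one common formulation of S. K. Chang's PIM (the number of pixels outside the dominant gray level), and the normalized PIM values are used as weights when the maps are summed into a saliency map. The conspicuity maps themselves are drastically simplified stand-ins for the Itti-Koch channels, and the file names are hypothetical.

```python
import cv2
import numpy as np

def pim(gray, bins=256):
    """Picture Information Measure: count of pixels outside the dominant gray level
    (one common formulation of S. K. Chang's PIM)."""
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    return float(hist.sum() - hist.max())

def conspicuity_maps(bgr):
    """Very simplified stand-ins for Itti-Koch intensity and edge conspicuity maps."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    intensity = cv2.GaussianBlur(gray, (9, 9), 0)
    edges = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_32F))
    return {"intensity": intensity, "edges": edges}

def adaptive_saliency(bgr):
    maps = conspicuity_maps(bgr)
    # Dynamic weights: normalized PIM of each conspicuity map.
    weights = {k: pim(m) for k, m in maps.items()}
    total = sum(weights.values()) or 1.0
    saliency = np.zeros(bgr.shape[:2], dtype=np.float32)
    for k, m in maps.items():
        saliency += (weights[k] / total) * m.astype(np.float32)
    return cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Bright regions of the resulting map would serve as face-candidate ROIs.
frame = cv2.imread("scene.png")
cv2.imwrite("saliency.png", adaptive_saliency(frame))
```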

Real-time specific object mosaic processing system (실시간 특정 객체 모자이크 처리 시스템)

  • Park, Seong-Hyeon;Ku, Chang-Mo;Park, Gun-Woo;Park, Nam-Seok;Cho, Jung-hwi
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.928-930 / 2019
  • Broadcasters prohibit showing a person's face without their consent or exposing objects judged to be harmful. With existing approaches, an editor either edits the recorded footage directly or a physical cover is used during filming. These methods are cumbersome, and mistakes can leave faces or harmful objects exposed on air. This paper proposes a method that automates the editing process using a deep-learning-based object detection model and a same-person identification model, and additionally introduces an object tracking algorithm to increase processing speed so that the system can be applied not only to post-processing but also to live broadcasts.
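
A minimal sketch of the masking pipeline described above: detect faces, run a same-person check, and pixelate the matching regions. OpenCV's bundled Haar cascade stands in for the paper's deep-learning detector, the identity check is a stub, the real-time tracking stage is only indicated by a comment, and the file name is hypothetical.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_protected_person(face_img):
    """Placeholder for the same-person decision model (e.g., a face-embedding match)."""
    return True  # in this sketch, every detected face gets masked

def pixelate(region, blocks=12):
    h, w = region.shape[:2]
    small = cv2.resize(region, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

cap = cv2.VideoCapture("broadcast.mp4")  # hypothetical input stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # In a real-time system, a tracker would follow faces between detections
    # so the expensive detector does not have to run on every frame.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        if is_protected_person(roi):
            frame[y:y + h, x:x + w] = pixelate(roi)
    cv2.imshow("masked", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```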

Implementation of Real Time Facial Expression and Speech Emotion Analyzer based on Haar Cascade and DNN (Haar Cascade와 DNN 기반의 실시간 얼굴 표정 및 음성 감정 분석기 구현)

  • Yu, Chan-Young;Seo, Duck-Kyu;Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.33-36 / 2021
  • This paper proposes an emotion analyzer based on human facial expressions and voice. The proposed analyzers organize seven facial expressions with distinct characteristics into separate classes and use a modified DNN model. The speech data was augmented to expand the training set, and to prevent overfitting a callback function was used so that training early-stops once the best performance is reached. The facial expression emotion analysis model achieved a validation loss of 0.94 and a validation accuracy of 0.66, and the speech emotion analysis model achieved a validation loss of 0.89 and a validation accuracy of 0.65; model tests using the OpenCV library produced stable results.
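
The training setup sketched in this abstract (seven expression classes, a modified DNN, and an early-stop callback) might look roughly like the Keras snippet below. The layer sizes, the 48x48 grayscale input, and the dummy data are placeholders, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import callbacks, layers, models

NUM_CLASSES = 7  # seven distinct facial expressions

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),      # assumed grayscale face-crop size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early-stop when validation performance stops improving, as in the paper.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)

# Dummy arrays stand in for the real (augmented) training data.
x = np.random.rand(512, 48, 48, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=512)
model.fit(x, y, validation_split=0.2, epochs=50, callbacks=[early_stop])
```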

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.1-15 / 2024
  • The construction of smart communities is a new method and an important measure for ensuring the security of residential areas. To address the low face recognition accuracy caused by facial features being distorted by surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature enhancement residual module is designed to extract facial keypoint features in conjunction with the global graph convolution module. Second, the obtained facial keypoints are constructed as a directed graph, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the two identities are the same. In the experiments, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the public 300W dataset and 88.92% on a self-built dataset. In terms of face recognition accuracy, it achieves 83.41% on the public IBUG dataset and 96.74% on a self-built dataset. These results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance video.
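
To illustrate the graph-based formulation, the toy sketch below encodes facial keypoints as graph nodes and applies one plain graph-convolution step (normalized adjacency times node features times a weight matrix), then pools the result into a single face embedding. It omits the paper's multi-scale residual module and graph attention; the chain-graph connectivity and the dimensions are assumptions made for the example.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize an adjacency matrix with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    """One graph-convolution step: aggregate neighbour features, then project (ReLU)."""
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)
num_keypoints, feat_dim, out_dim = 68, 2, 16   # e.g. 68 (x, y) facial landmarks

X = rng.random((num_keypoints, feat_dim))      # node features: keypoint coordinates
A = np.zeros((num_keypoints, num_keypoints))
for i in range(num_keypoints - 1):             # toy chain graph over the landmarks
    A[i, i + 1] = A[i + 1, i] = 1.0

W = rng.normal(size=(feat_dim, out_dim))
H = gcn_layer(normalize_adjacency(A), X, W)
face_embedding = H.mean(axis=0)                # pooled graph feature for one face
print(face_embedding.shape)                    # (16,)
```

Two such embeddings, one per face, would then be compared, e.g. by a fully connected layer, to decide whether the identities match, as the abstract describes.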

Face Mask Detection using Neural Network in Real Time Video Surveillance (실시간 영상 기반 신경망을 이용한 마스크 착용 감지 시스템)

  • Go, Geon-Hyeok;Choe, Seong-Jin;Song, Do-Hun;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.208-211 / 2021
  • This paper proposes a method for detecting whether masks are being worn in video using a convolutional neural network. With the spread of COVID-19, properly wearing a mask is required to prevent infection and transmission, but some people do not comply, and current surveillance systems only check mask wearing at the entrance, so wearing status cannot be monitored once a person has entered the space. The proposed method is designed to determine the mask-wearing status of multiple people using data obtained by detecting faces in video with a convolutional neural network.
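
A rough sketch of the two-stage idea: detect faces in each frame, then classify every crop as masked or unmasked. Here a Haar cascade stands in for the paper's convolutional face detector, and mask_classifier.h5 is a hypothetical pre-trained binary CNN; neither reflects the authors' actual models.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Hypothetical binary CNN (mask / no-mask) trained separately and saved to disk.
mask_model = load_model("mask_classifier.h5")

cap = cv2.VideoCapture(0)  # surveillance camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64)) / 255.0
        p_mask = float(mask_model.predict(crop[np.newaxis], verbose=0)[0][0])
        label = "mask" if p_mask > 0.5 else "no mask"
        color = (0, 255, 0) if p_mask > 0.5 else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.imshow("mask check", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```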

A time reduction method For the Face Recognition CCTV (CCTV에서 얼굴 탐색 시간 단축 기법)

  • Pak, Seon-Woo;Hong, Ji-Houn;Won, Ill-Yong
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1534-1536 / 2015
  • Detecting a specific person in video with a learned model consumes a great deal of time per frame. It is therefore necessary to reduce the number of frames to be searched while still locating the person. We propose a search technique that skips a certain range of frames by exploiting the characteristics of the observed location, and we measured its performance. Experimental results showed that the proposed method obtained reasonably meaningful results.
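
The abstract only states that a certain range of frames is skipped based on the characteristics of the monitored location. One plausible reading, sketched below under explicit assumptions, is to run the expensive per-frame detector only once per minimum traversal interval of the scene; the constants and the detect_person stub are hypothetical.

```python
import cv2

FPS = 30
MIN_TRAVERSAL_SECONDS = 2.0          # assumed minimum time to cross the monitored area
SKIP_FRAMES = int(FPS * MIN_TRAVERSAL_SECONDS)

def detect_person(frame):
    """Stub for the (expensive) learned detector for the specific person."""
    return False

cap = cv2.VideoCapture("cctv.mp4")   # hypothetical CCTV recording
frame_idx, searched = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % SKIP_FRAMES == 0:  # search only one frame per traversal interval
        searched += 1
        if detect_person(frame):
            print(f"target found near frame {frame_idx}")
            break
    frame_idx += 1
cap.release()
print(f"frames examined: {searched} of {frame_idx}")
```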

Research and Optimization of Face Detection Algorithm Based on MTCNN Model in Complex Environment (복잡한 환경에서 MTCNN 모델 기반 얼굴 검출 알고리즘 개선 연구)

  • Fu, Yumei;Kim, Minyoung;Jang, Jong-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.50-56 / 2020
  • With the rapid development of deep neural network theory and applied research, face detection performance has improved. However, because deep neural networks are computationally expensive and detection environments are highly complex, detecting faces quickly and accurately remains the main problem. This paper builds on the relatively simple MTCNN (Multi-Task Cascaded Convolutional Neural Network) model, using the FDDB (Face Detection Data Set and Benchmark), LFW (Labeled Faces in the Wild), and FaceScrub public datasets as training samples. While reviewing and introducing the MTCNN model, it explores how to improve training speed and performance at the same time. A dynamic image pyramid technique replaces the traditional image pyramid for sampling, and the OHEM (online hard example mining) step of the MTCNN model is removed during training, thereby improving training speed.
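
For context on the pyramid step the paper modifies, the sketch below computes the scale list at which MTCNN's first stage (P-Net) typically scans an image: the image is repeatedly shrunk by a fixed factor until the shorter side falls below the 12-pixel network input. A dynamic pyramid, as proposed here, would adapt these parameters per image; the defaults shown are commonly used values, not the paper's.

```python
import numpy as np

def mtcnn_pyramid_scales(height, width, min_face_size=20, factor=0.709, net_input=12):
    """Scales at which P-Net scans the image; a smaller min_face_size yields more levels."""
    m = net_input / min_face_size
    min_side = min(height, width) * m
    scales = []
    while min_side >= net_input:
        scales.append(m * (factor ** len(scales)))
        min_side *= factor
    return scales

# Example: a 720p surveillance frame.
scales = mtcnn_pyramid_scales(720, 1280)
print(len(scales), "pyramid levels")
print(np.round(scales, 3))
```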

The Uncanny Valley Effect for Celebrity Faces and Celebrity-based Avatars (연예인 얼굴과 연예인 기반 아바타에서의 언캐니 밸리)

  • Jung, Na-ri;Lee, Min-ji;Choi, Hoon
    • Science of Emotion and Sensibility / v.25 no.1 / pp.91-102 / 2022
  • As activities in virtual spaces become more common, virtual human agents such as avatars are increasingly used in place of people, but the uncanny valley effect, in which people feel uncomfortable when they see artifacts that look nearly human, is an obstacle. In this study, we explored the uncanny valley effect for celebrity avatars. We manipulated the degree of atypicality by adjusting the eye size in photos of celebrities, ordinary people, and their avatars, and measured the intensity of the uncanny valley effect. The uncanny valley effect for celebrities and celebrity avatars was stronger than that for ordinary people, consistent with previous findings that more robust facial representations are formed for familiar faces, making facial changes easier to detect. For the real faces of celebrities and ordinary people, as in previous studies, greater atypicality produced a stronger uncanny valley effect, but this pattern was not found for the avatar stimuli. This high tolerance for atypicality in avatars appears to stem from cartoon characters' tendency to have exaggerated facial features such as eyes, nose, and mouth. These results suggest that efforts to reduce the uncanny valley are necessary in virtual-space services that use celebrity avatars.