• Title/Summary/Keyword: 얼굴 이미지 처리 (facial image processing)

A.I supervision system (인공지능 무인 감독 시스템)

  • Kim, Da-Hee; Kim, Han-Na; Jang, Hwa-Yeong; Park, Hye-Won; Cho, Joong-Hwee
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.1043-1046 / 2021
  • Using an AI-based unmanned proctoring system, this work overcomes the situation of the COVID era in which many people cannot take an exam together in one room, and ushers in an era in which exams can be taken anytime and anywhere while avoiding the spread of infectious disease. Faces are identified on the basis of pre-trained images, and a Motion recognition function is used to recognize and analyze movements of the face, pupils, and posture. If such an AI system is used, a variety of services can also be put into practice in other fields, such as real-time management of students in class and crime prevention.
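
The paper above does not specify its implementation, so the following is only a minimal sketch of the face-verification step it describes: checking whether the enrolled examinee appears in a webcam frame by comparing face encodings against a pre-registered image. The open-source face_recognition package, the file names, and the tolerance value are assumptions introduced for illustration.

```python
# Minimal sketch of the face-verification step described above.
# Assumption: the open-source `face_recognition` package stands in for the
# paper's unspecified face model; file paths and tolerance are hypothetical.
import face_recognition

# Pre-register the examinee from an enrollment photo.
enrolled = face_recognition.load_image_file("enrolled_examinee.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]

def verify_frame(frame_path: str, tolerance: float = 0.6) -> bool:
    """Return True if the enrolled examinee appears in the given frame."""
    frame = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(frame)
    if not encodings:
        return False  # no face visible -> flag for the supervisor
    matches = face_recognition.compare_faces(
        [enrolled_encoding], encodings[0], tolerance=tolerance
    )
    return bool(matches[0])

if __name__ == "__main__":
    print(verify_frame("webcam_frame.jpg"))
```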

Recognition of dog's front face using deep learning and machine learning (딥러닝 및 기계학습 활용 반려견 얼굴 정면판별 방법)

  • Kim, Jong-Bok; Jang, Dong-Hwa; Yang, Kayoung; Kwon, Kyeong-Seok; Kim, Jung-Kon; Lee, Joon-Whoan
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.1-9 / 2020
  • As pet dogs rapidly increase in number, abandoned and lost dogs are also increasing. In Korea, animal registration has been in force since 2014, but the registration rate is not high owing to safety and effectiveness issues, so biometrics is attracting attention as an alternative. To increase the recognition rate of biometrics, it is necessary to collect biometric images, in this case of the face, in as consistent a form as possible. This paper proposes a method to determine whether a dog is facing front in a real-time video. The proposed method detects the dog's eyes and nose using deep learning and extracts five types of face-orientation information from the relative sizes and positions of the detections. A machine learning classifier then determines whether the dog is facing front. We used 2,000 dog images for training, validation, and testing. YOLOv3 and YOLOv4 were used to detect the eyes and nose, and a multi-layer perceptron (MLP), random forest (RF), and support vector machine (SVM) were used as classifiers. When YOLOv4 and the RF classifier were used with all five types of the proposed face-orientation information, the recognition rate was highest, at 95.25%, and we found that real-time processing is possible.
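
As a rough illustration of the classification stage described above, the sketch below derives simple geometric orientation features from eye and nose boxes (as a detector such as YOLOv4 would return) and feeds them to a scikit-learn random forest. The five features and the toy training data are illustrative assumptions, not the exact features or data used in the paper.

```python
# Sketch of the classification stage: geometric features computed from
# eye/nose boxes, fed to a Random Forest. The five features below are
# illustrative assumptions, not the exact features defined in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def orientation_features(left_eye, right_eye, nose):
    """left_eye, right_eye, nose: (cx, cy, w, h) boxes in pixel units."""
    (lx, ly, lw, lh), (rx, ry, rw, rh), (nx, ny, nw, nh) = left_eye, right_eye, nose
    eye_dist = np.hypot(rx - lx, ry - ly)
    return np.array([
        (rw * rh) / max(lw * lh, 1e-6),                 # eye-area ratio (symmetry)
        (nx - (lx + rx) / 2.0) / max(eye_dist, 1e-6),   # nose horizontal offset
        (ny - (ly + ry) / 2.0) / max(eye_dist, 1e-6),   # nose vertical offset
        (ry - ly) / max(eye_dist, 1e-6),                # eye-line tilt
        (nw * nh) / max(lw * lh + rw * rh, 1e-6),       # nose-to-eyes area ratio
    ])

# X: one feature row per image, y: 1 = frontal, 0 = non-frontal (toy data here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```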

Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin; Bae, Hyeon; Kim, Sung-Shin; Woo, Kwang-Bang
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.4 / pp.403-408 / 2003
  • In general, a caricature is a representation, especially pictorial or literary, in which the subject's distinctive features or peculiarities are deliberately exaggerated to produce a comic or grotesque effect. In other words, a caricature is a rough sketch (dessin) made by detecting features of a human face and exaggerating or warping them. Many computer-based methods for producing a caricature image from a human face have been developed. In this paper, we propose a new caricaturing system. The system takes a real-time or supplied image as input, processes it in four steps, and finally creates a caricatured image. The four processing steps are as follows. The first step detects a face in the input image. The second step extracts specific coordinate values as facial geometric information. The third step deforms the face image using a local warping method and the coordinate values acquired in the second step. In the fourth step, the system transforms the deformed image into an improved edge image using a fuzzy Sobel method and then creates the final caricatured image. The proposed system is simpler than existing systems in the way it creates a caricatured image and does not require complex algorithms combining many image processing methods such as image recognition, transformation, and edge detection.
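
The sketch below illustrates the two core operations named in the abstract, local warping and edge extraction, using OpenCV: a radial warp that exaggerates the region around one facial landmark, followed by Sobel edge extraction. A plain Sobel operator stands in for the paper's fuzzy Sobel method, and the landmark position, radius, and warp strength are assumptions.

```python
# Sketch of the two core operations in the pipeline above: a local radial
# warp that exaggerates the region around one facial landmark, followed by
# Sobel edge extraction. A plain Sobel operator stands in for the paper's
# fuzzy Sobel method; the landmark, radius and strength are assumptions.
import cv2
import numpy as np

def local_bulge(img, center, radius=60, strength=0.35):
    """Magnify the region around `center` within `radius` (simple local warping)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - center[0], ys - center[1]
    r = np.sqrt(dx * dx + dy * dy)
    factor = np.where(r < radius, 1.0 - strength * (1.0 - r / radius), 1.0)
    map_x = (center[0] + dx * factor).astype(np.float32)
    map_y = (center[1] + dy * factor).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
warped = local_bulge(img, center=(200, 250))          # e.g. around the nose
gx = cv2.Sobel(warped, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(warped, cv2.CV_32F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))    # caricature-style line art
cv2.imwrite("caricature_edges.png", edges)
```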

Learning Algorithm for Multiple Distribution Data using Haar-like Feature and Decision Tree (다중 분포 학습 모델을 위한 Haar-like Feature와 Decision Tree를 이용한 학습 알고리즘)

  • Kwak, Ju-Hyun; Woen, Il-Young; Lee, Chang-Hoon
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.43-48 / 2013
  • AdaBoost is widely used as a boosting algorithm over Haar-like features in face detection. It performs very effectively on data from a single distribution, but when front and side face images must be detected at the same time, AdaBoost shows its limitations on multiple-distribution data because it uses a linear combination of weak classifiers. This paper proposes HDCT, a modified decision tree algorithm for Haar-like features, and compares the performance of HDCT with AdaBoost on multiple-distribution image recognition.
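
To make the setup concrete, the sketch below computes a single two-rectangle Haar-like feature with an integral image and trains a standard decision tree on it. scikit-learn's DecisionTreeClassifier is only a stand-in (HDCT itself is the paper's own modified tree algorithm), and the toy patches and labels are random.

```python
# Sketch of a two-rectangle Haar-like feature computed with an integral
# image, fed to a standard decision tree. DecisionTreeClassifier is a
# stand-in here; HDCT itself is the paper's modified algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in [x, x+w) x [y, y+h) using the integral image."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if (x > 0 and y > 0) else 0
    return a - b - c + d

def haar_two_rect_vertical(img, x, y, w, h):
    """Left-half sum minus right-half sum: a simple edge-like Haar feature."""
    ii = integral_image(img.astype(np.float64))
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy training data: one Haar feature per 24x24 patch (labels are random here).
rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(100, 24, 24))
X = np.array([[haar_two_rect_vertical(p, 0, 0, 24, 24)] for p in patches])
y = rng.integers(0, 2, size=100)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict(X[:5]))
```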

Face Emotion Recognition using ResNet with Identity-CBAM (Identity-CBAM ResNet 기반 얼굴 감정 식별 모듈)

  • Oh, Gyutea; Kim, Inki; Kim, Beomjun; Gwak, Jeonghwan
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.559-561 / 2022
  • In the era of artificial intelligence, technologies that recognize and respond to human emotions are being actively developed to provide personalized environments. Human emotion can be recognized from the face, voice, body movement, bio-signals, and so on, but the most intuitive and accessible of these is facial expression. Accordingly, for highly accurate facial emotion recognition, this paper proposes an Identity-CBAM module that combines each gate of the Convolutional Block Attention Module (CBAM) with a residual block and a skip connection. Each CBAM gate and the residual block emphasize the key features of each expression so that the model becomes more context-aware, and the skip connection makes the module robust against vanishing and exploding gradients. Using the AI-HUB composite video dataset for Korean emotion recognition, the data were divided into six classes; applying the Identity-CBAM module improved the F1-score by 0.4-2.7% and the accuracy by 0.18-2.03% compared with vanilla ResNet50 and ResNet101. In addition, visualization with Guided Backpropagation and Guided Grad-CAM confirmed that the important features are represented in finer detail. Consequently, for facial expression classification of images, using the Identity-CBAM module together with ResNet50 or ResNet101 proved more suitable than using the vanilla networks alone.
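
The following is a minimal PyTorch sketch of a CBAM-style block wrapped in an identity skip connection, in the spirit of the Identity-CBAM module described above. The reduction ratio, spatial kernel size, and gate ordering are assumptions; the paper's exact gate arrangement inside the residual block is not reproduced here.

```python
# Minimal PyTorch sketch of a CBAM-style block with an identity skip
# connection, in the spirit of the Identity-CBAM module described above.
# The reduction ratio, kernel size and gate ordering are assumptions.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # average-pooled channel descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # max-pooled channel descriptor
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialGate(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

class IdentityCBAM(nn.Module):
    """Channel and spatial attention on a feature map, plus an identity skip."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelGate(channels)
        self.spatial = SpatialGate()

    def forward(self, x):
        out = self.spatial(self.channel(x))
        return out + x  # skip connection for gradient stability

if __name__ == "__main__":
    feat = torch.randn(2, 64, 56, 56)     # e.g. a ResNet stage output
    print(IdentityCBAM(64)(feat).shape)   # torch.Size([2, 64, 56, 56])
```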

Adaptation to Baby Schema Features and the Perception of Facial Age (인물 얼굴의 나이 판단과 아기도식 속성에 대한 순응의 잔여효과)

  • Yejin Lee; Sung-Ho Kim
    • Science of Emotion and Sensibility / v.25 no.4 / pp.157-172 / 2022
  • Using the adaptation aftereffect paradigm, this study investigated whether adaptation to baby-schema features of the face and body affects the perception of facial age. In Experiment 1, after adapting to either a baby or an adult face, participants judged whether test faces, morphed at varying ratios between a baby face and an adult face, were perceived as 'baby' or 'adult'. The result of Experiment 1 showed that after adaptation to baby faces, test faces were judged as adult more often than after adaptation to adult faces. In the subsequent experiments, participants carried out the same facial age judgment task after adapting to baby or adult body silhouettes (Experiment 2) or hand images (Experiment 3). The results revealed that age perception was biased in the direction of the adaptors (i.e., an assimilative aftereffect) after adaptation to body silhouettes (Experiment 2) but did not change after adaptation to hands (Experiment 3). The present study showed that contrastive aftereffects in the perception of facial age are induced by adaptation to baby faces, but it did not establish cross-category transfer of age adaptation from hands or body silhouettes to faces.

Real-Time Multiple Face Detection Using Active illumination (능동적 조명을 이용한 실시간 복합 얼굴 검출)

  • 한준희; 심재창; 설증보; 나상동; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.155-160 / 2003
  • This paper presents a multiple-face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity property of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs corresponding to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, which is initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and to combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face is used to control a pan-tilt servo mechanism in real time, moving the camera to keep the tracked face centered in the image.
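
The abstract states that pupil candidates are filtered and grouped into pairs corresponding to faces using heuristic rules. The sketch below shows one such pairing heuristic, accepting candidate pairs that are roughly horizontally aligned and at a plausible interocular distance; the distance and tilt thresholds are assumptions, not the paper's values.

```python
# Sketch of the heuristic grouping step: pair pupil candidates that are
# roughly horizontally aligned and at a plausible interocular distance.
# The thresholds below are assumptions, not the paper's values.
import numpy as np

def pair_pupils(candidates, min_dist=30, max_dist=120, max_tilt_deg=20):
    """candidates: list of (x, y) pupil positions. Returns index pairs (faces)."""
    pairs, used = [], set()
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            if i in used or j in used:
                continue
            (x1, y1), (x2, y2) = candidates[i], candidates[j]
            dist = np.hypot(x2 - x1, y2 - y1)
            tilt = np.degrees(np.arctan2(abs(y2 - y1), abs(x2 - x1) + 1e-6))
            if min_dist <= dist <= max_dist and tilt <= max_tilt_deg:
                pairs.append((i, j))
                used.update((i, j))
    return pairs

# Example: two faces plus one spurious reflection.
candidates = [(100, 200), (160, 205), (400, 180), (455, 178), (300, 400)]
print(pair_pupils(candidates))   # -> [(0, 1), (2, 3)]
```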


Implementation of a Face Authentication Embedded System Using High-dimensional Local Binary Pattern Descriptor and Joint Bayesian Algorithm (고차원 국부이진패턴과 결합베이시안 알고리즘을 이용한 얼굴인증 임베디드 시스템 구현)

  • Kim, Dongju; Lee, Seungik; Kang, Seog Geun
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.9 / pp.1674-1680 / 2017
  • In this paper, an embedded system for face authentication that exploits a high-dimensional local binary pattern (LBP) descriptor and the joint Bayesian algorithm is proposed, together with a feasible embedded implementation on a Raspberry Pi 3 Model B. Computer simulation for performance evaluation of the presented face authentication algorithm is carried out using a face database of 500 persons. The face data of each person consist of two images, one for training and the other for testing. As performance measures, we use the score distribution and the face authentication time with respect to the number of principal component analysis (PCA) dimensions. As a result, it is confirmed that an embedded system with good face authentication performance can be implemented at relatively low cost in an optimized embedded environment.
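
A rough sketch of the descriptor pipeline described above follows: block-wise uniform LBP histograms (scikit-image), PCA dimensionality reduction (scikit-learn), and a verification score. Cosine similarity is used as a simplified stand-in for the joint Bayesian score, and the grid size, feature dimensions, and toy gallery are assumptions.

```python
# Sketch of the descriptor pipeline above: block-wise uniform LBP histograms,
# PCA dimensionality reduction, and a verification score. Cosine similarity is
# a simplified stand-in for the joint Bayesian score; sizes are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_descriptor(gray, grid=8, P=8, R=1):
    """Concatenate uniform-LBP histograms over a grid x grid block layout."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    h, w = lbp.shape
    bins = P + 2  # number of uniform-LBP codes
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = lbp[by * h // grid:(by + 1) * h // grid,
                        bx * w // grid:(bx + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Small toy gallery (random images stand in for the real face database).
rng = np.random.default_rng(0)
gallery = [lbp_descriptor(rng.integers(0, 256, size=(96, 96)).astype(np.uint8))
           for _ in range(50)]
pca = PCA(n_components=32).fit(gallery)

def verification_score(desc_a, desc_b):
    a, b = pca.transform([desc_a])[0], pca.transform([desc_b])[0]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(verification_score(gallery[0], gallery[1]))
```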

Facial Detection using Haar-like Feature and Bezier Curve (Haar-like와 베지어 곡선을 이용한 얼굴 성분 검출)

  • An, Kyeoung-Jun; Lee, Sang-Yong
    • Journal of Digital Convergence / v.11 no.9 / pp.311-318 / 2013
  • In face detection, accuracy decreases under varying lighting and backgrounds, which calls for new methods and techniques. This study aims to obtain data for inferring human emotional information by analyzing the eyes and mouth, the components most critical in expressing emotion. To this end, existing problems in face detection are addressed, and a detection method with a high detection rate and fast processing speed that is robust to environmental factors is proposed. The method first detects the specific parts (the eyes and the mouth) using the Haar-like feature technique with an integral image. The detected regions are then binarized based on color information, separating the facial region from the skin region. To generate an accurate shape, the contour of each detected element is produced with a Bezier curve, a curve generation algorithm. To evaluate the performance of the proposed method, an experiment was conducted using data from the Face Recognition Homepage. The results showed that the Haar-like technique and the Bezier curve method were able to detect facial elements more precisely.
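
As a small illustration of the contour-generation step, the sketch below evaluates a cubic Bezier curve from four control points, as might be extracted from a detected eye or mouth region; the control points here are hypothetical.

```python
# Sketch of the contour-generation step: a cubic Bezier curve evaluated from
# four control points, as would be extracted from a detected eye or mouth
# region. The control points below are hypothetical.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Evaluate a cubic Bezier curve (Bernstein form) at n parameter values."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# e.g. an upper-lip contour: left corner, two interior points, right corner.
contour = cubic_bezier((120, 300), (150, 285), (190, 285), (220, 300))
print(contour[:3])   # sampled (x, y) points along the curve
```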

Development of Tracking Equipment for Real-Time Multiple Face Detection (실시간 복합 얼굴 검출을 위한 추적 장치 개발)

  • 나상동; 송선희; 나하선; 김천석; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.8 / pp.1823-1830 / 2003
  • This paper presents a multiple-face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity property of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs corresponding to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, which is initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and to combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face is used to control a pan-tilt servo mechanism in real time, moving the camera to keep the tracked face centered in the image.