• Title/Summary/Keyword: face landmark


Proposal for AI Video Interview Using Image Data Analysis

  • Park, Jong-Youel;Ko, Chang-Bae
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.212-218 / 2022
  • In this paper, the need for an AI video interview arises when conducting interviews to recruit talented candidates in non-face-to-face situations such as those caused by Covid-19. A shortcoming of typical AI interviews is that reliability and qualitative factors are difficult to evaluate. In addition, the AI interview is conducted not as a two-way Q&A but as a one-sided question-and-answer process. This paper intends to fuse the advantages of existing AI interviews and video interviews. When an interview is conducted using AI image analysis technology, the subjective information used to evaluate interview management is supplemented with quantitative analysis data and HR expert data. In this paper, image-based multi-modal AI image analysis technology, bioanalysis-based HR analysis technology, and WebRTC-based P2P video communication technology are applied. The goal of applying these technologies is to propose a method in which biological analysis results (gaze, posture, voice, gesture, landmark) and HR information (opinions or features based on user propensity) can be presented on a single screen to select the right person for the hire.

Active Shape Model with Directional Profile (방향성 프로파일을 적용한 능동형태 모델)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society / v.20 no.11 / pp.1720-1728 / 2017
  • The active shape model is widely used in image processing, especially for extracting an arbitrary meaningful shape from a single gray-level image. Cootes et al. showed efficient detection of variable shapes in an image by using a mean shape and covariance obtained from training; the method has two stages, training and testing. Hahn applied an enhanced shape-alignment method rather than Cootes's rotation-and-scale scheme, but did not modify the profile itself. In this paper, a directional one-dimensional profile is proposed to enhance Cootes's one-dimensional profile, and it is combined with Hahn's shape-alignment algorithm. The performance of the proposed method was superior to both Cootes's and Hahn's: the average landmark estimation error per image was 27.72 pixels, compared with 39.46 pixels for Cootes's method and 33.73 pixels for Hahn's.
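The paper itself does not include code; the following is a minimal sketch of the core operation such methods build on, sampling a one-dimensional gray-level profile along the normal direction at a landmark (the function name, nearest-neighbour sampling, and derivative normalisation are assumptions, not the authors' implementation).

```python
# Sketch: sample a 1D gray-level profile along the landmark normal (ASM-style).
import numpy as np

def sample_normal_profile(image, point, normal, half_len=6):
    """Return a normalised derivative profile of 2*half_len+1 samples."""
    normal = normal / (np.linalg.norm(normal) + 1e-8)
    offsets = np.arange(-half_len, half_len + 1)
    xs = np.clip(np.round(point[0] + offsets * normal[0]).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(point[1] + offsets * normal[1]).astype(int), 0, image.shape[0] - 1)
    profile = image[ys, xs].astype(float)        # nearest-neighbour sampling for brevity
    grad = np.gradient(profile)                  # derivative profile, Cootes-style
    return grad / (np.sum(np.abs(grad)) + 1e-8)  # scale to unit L1 norm
```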

Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.212-214 / 2021
  • With Covid-19 now part of daily life, we have had to adapt to a new reality to keep our lifestyles as normal as possible, for example through teleworking and online classes. However, several issues have appeared with this new way of living. One is the difficulty of knowing whether real people are in front of the camera, or whether someone is paying attention during a lecture. We address this issue with a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent a face with an avatar and fully control its bones with rotation and translation parameters. We propose these methods as a solution for reconstructing facial expressions during online meetings.
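The fitting step is not published in the abstract; below is a hedged sketch of one common way to fit expression (blendshape) coefficients to detected 2D landmarks with linear least squares. All names and array shapes are assumptions, not the paper's method.

```python
# Sketch: fit expression coefficients to 2D landmarks by linear least squares.
import numpy as np

def fit_expression_coeffs(landmarks_2d, neutral_2d, blendshapes_2d):
    """
    landmarks_2d   : (N, 2) detected 2D facial landmarks
    neutral_2d     : (N, 2) projected neutral face shape
    blendshapes_2d : (K, N, 2) projected blendshape displacement basis
    Returns (K,) expression coefficients clipped to [0, 1].
    """
    K = blendshapes_2d.shape[0]
    A = blendshapes_2d.reshape(K, -1).T          # (2N, K) basis matrix
    b = (landmarks_2d - neutral_2d).reshape(-1)  # (2N,) observed displacement
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(coeffs, 0.0, 1.0)
```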


Estimating Location in Real-world of a Observer for Adaptive Parallax Barrier (적응적 패럴랙스 베리어를 위한 사용자 위치 추적 방법)

  • Kang, Seok-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1492-1499 / 2019
  • This paper proposes a method to track the observer's position in order to control the viewing zone of an adaptive parallax barrier. The face pose is estimated using a Constrained Local Model based on a shape model and landmarks, so that the eye distance can be measured robustly across face poses. Using the camera's geometry, the distance and horizontal location are converted into centimeters. The pixel pitch of the adaptive parallax barrier is adjusted according to the position of the observer's eyes, and the barrier is shifted to adjust the viewing zone. The proposed method tracks the observer in the range of 60 cm to 490 cm, and the error, measurable range, and fps were measured for several camera resolutions. As a result, the observer's position can be measured with a mean absolute error of 3.1642 cm, and the measurable range was about 278 cm at 320×240, about 488 cm at 640×480, and about 493 cm at 1280×960, depending on the image resolution.
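The paper's exact conversion is not reproduced here; the sketch below shows one standard pinhole-camera way to turn the measured eye distance in pixels into a real-world distance and horizontal offset. The focal length and average interpupillary distance are assumed values, not the paper's calibration.

```python
# Sketch: pinhole-camera estimate of observer depth and horizontal offset.
import numpy as np

FOCAL_PX = 800.0     # camera focal length in pixels (assumed calibration)
REAL_IPD_CM = 6.3    # average interpupillary distance in cm (assumption)

def observer_position(left_eye, right_eye, image_width):
    """left_eye, right_eye: (x, y) eye landmark coordinates in pixels."""
    left_eye, right_eye = np.asarray(left_eye), np.asarray(right_eye)
    ipd_px = np.linalg.norm(right_eye - left_eye)
    distance_cm = FOCAL_PX * REAL_IPD_CM / ipd_px              # depth from the camera
    center_x = (left_eye[0] + right_eye[0]) / 2.0
    offset_cm = (center_x - image_width / 2.0) * distance_cm / FOCAL_PX
    return distance_cm, offset_cm
```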

Non-contact Input Method based on Face Recognition and Pyautogui Mouse Control (얼굴 인식과 Pyautogui 마우스 제어 기반의 비접촉식 입력 기법)

  • Park, Sung-jin;Shin, Ye-eun;Lee, Byung-joon;Oh, Ha-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1279-1292 / 2022
  • This study proposes a non-contact input method based on face recognition and Pyautogui mouse control, a system that can help users who have difficulty using input devices such as a conventional mouse because of physical discomfort. The system includes features that make web surfing more convenient, in particular screen zoom and scrolling, and it also addresses the eye fatigue that has been pointed out as a limitation of existing non-contact input systems. In addition, various settings can be adjusted to account for individual physical differences and Internet usage habits. Furthermore, no high-performance CPU or GPU environment is required, and no separate tracker devices or high-performance cameras are needed. Through this study, we aim to contribute to barrier-free access by improving web accessibility for the disabled and the elderly, who find it difficult to use web content.
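A minimal sketch of the general idea, moving the cursor with pyautogui from a facial landmark. It assumes MediaPipe FaceMesh for landmark detection and the nose tip as the pointer; the abstract does not specify either choice.

```python
# Sketch: landmark-driven cursor control (assumed MediaPipe FaceMesh + pyautogui).
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        nose = results.multi_face_landmarks[0].landmark[1]   # nose-tip landmark
        # map the normalised nose position onto screen coordinates
        pyautogui.moveTo(int(nose.x * screen_w), int(nose.y * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:                           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```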

A size analysis in obstructive sleep apnea patients (폐쇄성 수면무호흡 환자의 안면 및 혀의 크기에 대한 연구)

  • Pae, Eung-Kwon;Lowe, Alan A.;Park, Young-Chel
    • The Korean Journal of Orthodontics / v.27 no.6 s.65 / pp.865-870 / 1997
  • The submental region in patients with Obstructive Sleep Apnea (OSA) is perceived to be larger than normal, so neck thickness has become a variable routinely measured during clinical screening of OSA subjects. In general, OSA patients are believed to have a large tongue and a narrow airway. To test whether OSA patients have a larger face and tongue than non-apneics, eighty pairs of upright and supine cephalograms were obtained from four groups of subjects subclassified by severity. The sum of distances between pairs of landmarks was calculated for each subject and used as a pure size variable for the face and tongue. Only tongue size increased with apnea severity in both body positions (P<.01). Tongue size reflects apnea severity, yet it provides only a small fraction of the explanation of severity. We conclude that size may be one of many factors significantly related to OSA severity.
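A small illustrative sketch of the "pure size variable" described above, taken here as the sum of distances over all landmark pairs; the paper may use specific landmark pairs, so this choice is an assumption.

```python
# Sketch: sum of pairwise distances between cephalometric landmarks as a size variable.
import numpy as np
from itertools import combinations

def size_variable(landmarks):
    """landmarks: (N, 2) array of landmark coordinates; returns a scalar size measure."""
    return sum(np.linalg.norm(landmarks[i] - landmarks[j])
               for i, j in combinations(range(len(landmarks)), 2))
```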


Anatomic Description of the Infraorbital Soft Tissues by Three-dimensional Scanning System

  • Peralta, Alonso Andres Hormazabal;Choi, You-Jin;Hu, Hyewon;Hu, Kyung-Seok;Kim, Hee-Jin
    • Journal of Korean Dental Science / v.14 no.2 / pp.101-109 / 2021
  • Purpose: For minimally invasive procedures, three-dimensional (3D) anatomical knowledge of the structures of the face is essential. This study aimed to describe the thickness of the skin and subcutaneous tissue and the depths of the facial muscles in the infraorbital region using a 3D scanner, to provide clinically relevant anatomical guidelines for improving minimally invasive cosmetic procedures. Materials and Methods: The 3D scanning images of 38 Korean cadavers (22 males and 16 females; age range 51~94 years at the time of death) were analyzed. Eight facial landmarks (P1~P8) were marked on the cadaveric faces. The images were scanned in three steps: the undissected face, the hemiface after skinning, and the hemiface with the facial muscles revealed. Student's t-test was used to identify significant differences. Result: The skin and subcutaneous tissue tended to become thicker from the upper to the lower and from the medial to the lateral aspects, and the muscle depths followed the same pattern, from the most superficially located muscle to the most deeply located muscles. No significant sex-related differences were found in the skin at any landmark; however, the muscles tended to be deeper in the female participants. Conclusion: The study data can serve as a basis for creating or enhancing clinical anatomy-based guidelines or improving procedures in the infraorbital region.
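For reference, a brief sketch of the statistical comparison named above, an independent two-sample Student's t-test between male and female measurements at one landmark. The numbers and variable names are purely illustrative, not the study's data.

```python
# Sketch: two-sample Student's t-test on a tissue-thickness measurement (illustrative data).
from scipy import stats

male_thickness = [2.1, 2.4, 2.0, 2.6]      # hypothetical mm values at one landmark
female_thickness = [2.3, 2.7, 2.5, 2.8]
t_stat, p_value = stats.ttest_ind(male_thickness, female_thickness)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```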

Vehicle Start Control System using Facial Recognition Technology (안면인식 기술을 활용한 차량 시동 제어 시스템)

  • Lee, Min-hye;Kang, Sun-kyoung;Shin, Seong-yoon;Lim, Soon-ja
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.425-426 / 2021
  • Recently, there have been frequent accidents caused by young people driving without a license. Unlicensed driving has become an object of curiosity and a dare for some young people, and managing smart keys at home can only go so far in preventing it. Therefore, in this paper, a facial recognition algorithm compares the face of the person sitting in the driver's seat with information stored in advance, and the system is designed to allow the engine to start only when a registered driver is identified. If authentication of a registered driver succeeds, the matching accuracy and an unlock message are shown on the LCD connected to the Raspberry Pi.
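A hedged sketch of the authentication step; it assumes the open-source face_recognition library and a single stored reference image, neither of which the abstract specifies.

```python
# Sketch: compare the driver's face against a registered encoding before allowing ignition.
import face_recognition

registered = face_recognition.face_encodings(
    face_recognition.load_image_file("registered_driver.jpg"))[0]

def authenticate(frame_rgb, tolerance=0.5):
    """Return (is_registered, similarity) for the face found in the driver's seat frame."""
    encodings = face_recognition.face_encodings(frame_rgb)
    if not encodings:
        return False, 0.0
    distance = face_recognition.face_distance([registered], encodings[0])[0]
    return distance <= tolerance, 1.0 - distance   # allow engine start only on a match
```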


Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.23 no.3 / pp.351-360 / 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio data in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is introduced for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, an audio deep-learning mechanism robust to those emotions is proposed. Finally, so-called emotion-adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. On the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
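A minimal sketch of the second idea named above, rasterising 1D landmark coordinates into a 2D image that a CNN-LSTM could consume; the image size and binary encoding are assumptions, not the paper's exact mapping.

```python
# Sketch: convert 1D landmark coordinates into a 2D image for a CNN-based network.
import numpy as np

def landmarks_to_image(landmarks, size=64):
    """landmarks: (N, 2) points normalised to [0, 1]; returns a (size, size) binary map."""
    img = np.zeros((size, size), dtype=np.float32)
    pts = np.clip((np.asarray(landmarks) * (size - 1)).astype(int), 0, size - 1)
    img[pts[:, 1], pts[:, 0]] = 1.0   # mark each landmark position as a bright pixel
    return img
```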

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.171-187 / 2023
  • When a person's face is captured by a recording device such as a low-resolution surveillance camera, it is difficult to recognize the face because of the low image quality. In situations where a face cannot be recognized, problems such as failing to identify a criminal suspect or a missing person may occur. Existing studies on face recognition have used refined datasets, so performance could not be measured in diverse environments. Therefore, to address poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images captured in various environments and then uses the resulting high-quality images to improve facial feature point detection. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small in the overall image. In addition, a facial image dataset that includes mask-wearing was chosen to explore extension to real problems. When the performance of the feature point detection model was measured after image-quality improvement, face detection improved by an average of 3.47 times for images without a mask and 9.92 times for images with a mask. The RMSE of the facial feature points decreased by an average of 8.49 times when a mask was worn and by an average of 2.02 times when it was not. Therefore, the applicability of the proposed method was verified by increasing the recognition rate for facial images captured at low quality through image-quality improvement.
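A hedged sketch of the evaluation logic described above: detect keypoints on the low-quality image and on the enhanced image, then compare the RMSE against ground truth. The `enhance` and `detect_keypoints` callables stand in for the paper's super-resolution and keypoint-detection models and are hypothetical.

```python
# Sketch: measure keypoint RMSE before and after image-quality improvement.
import numpy as np

def keypoint_rmse(pred, truth):
    """pred, truth: (N, 2) arrays of facial keypoint coordinates."""
    return float(np.sqrt(np.mean(np.sum((pred - truth) ** 2, axis=1))))

def evaluate(image_lowres, truth, enhance, detect_keypoints):
    """Compare detection quality on the raw low-quality image vs. the enhanced image."""
    rmse_before = keypoint_rmse(detect_keypoints(image_lowres), truth)
    rmse_after = keypoint_rmse(detect_keypoints(enhance(image_lowres)), truth)
    return rmse_before, rmse_after   # a smaller rmse_after indicates the improvement helped
```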