• Title/Summary/Keyword: Facial development


Design and Realization of Stereo Vision Module For 3D Facial Expression Tracking (3차원 얼굴 표정 추적을 위한 스테레오 시각 모듈 설계 및 구현)

  • Lee, Mun-Hee;Kim, Kyong-Sok
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.533-540
    • /
    • 2006
  • In this study, we propose a facial motion capture technique that tracks facial motions and expressions effectively using a stereo vision module equipped with two CMOS image sensors. The proposed tracking algorithm employs a center-point tracking technique and a correlation tracking technique based on neural networks. Experimental results show that the two tracking techniques using stereo vision motion capture track general facial expressions with success rates of 95.6% and 99.6% at 15 and 30 frames, respectively. However, when the lips trembled, the success rates of the center-point tracking technique (82.7%, 99.1%) were far higher than those of the correlation tracking technique (78.7%, 92.7%).
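The correlation tracking mentioned in this abstract can be illustrated, under assumptions (the paper's actual algorithm and its neural-network stage are not described here), as plain normalized cross-correlation template matching over a small search window around the previous position:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track(frame, template, prev_xy, search=5):
    """Search a small window around the previous position for the
    patch that best correlates with the template (a toy correlation
    tracker, not the paper's method)."""
    h, w = template.shape
    best, best_xy = -2.0, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = prev_xy[0] + dy, prev_xy[1] + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            score = ncc(frame[y:y + h, x:x + w], template)
            if score > best:
                best, best_xy = score, (y, x)
    return best_xy, best
```

In a stereo setup, running such a tracker in both camera views and triangulating the matched positions yields the 3D motion of each facial marker.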

Development of Face Tracking System Using Skin Color and Facial Shape (얼굴의 색상과 모양정보를 이용한 조명 변화에 강인한 얼굴 추적 시스템 구현)

  • Lee, Hyung-Soo
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.711-718
    • /
    • 2003
  • In this paper, we propose a robust face tracking algorithm based on the Condensation algorithm [7], using skin color and facial shape as observation measures. Because it is difficult to integrate a color weight and a shape weight into a single measure, we propose a method with two separate trackers: one tracks the skin-colored region and the other tracks the facial shape. An importance sampling technique limits the sampling region of the two trackers. For the skin-colored-region tracker, we propose an adaptive color model that reduces the effect of illumination changes. The proposed face tracker performs robustly against cluttered backgrounds and illumination changes.
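As a rough illustration of the Condensation (particle filter) idea this abstract builds on, here is a minimal one-dimensional sketch: resample particles by weight, drift them with process noise, and re-weight them with an observation model. The `skin_likelihood` function is a toy stand-in, not the paper's color or shape measure:

```python
import numpy as np

rng = np.random.default_rng(0)

def skin_likelihood(x, target=50.0, sigma=5.0):
    """Toy observation model: how 'skin-like' position x is
    (a Gaussian around a hypothetical skin-colored region)."""
    return np.exp(-0.5 * ((x - target) / sigma) ** 2)

def condensation_step(particles, weights):
    """One Condensation iteration: resample by weight (factored
    sampling), drift with process noise, then re-weight."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, 1.0, len(particles))
    weights = skin_likelihood(particles)
    weights /= weights.sum()
    return particles, weights

# Particles start spread widely and should concentrate near 50.
particles = rng.uniform(0.0, 100.0, 500)
weights = np.ones(500) / 500
for _ in range(20):
    particles, weights = condensation_step(particles, weights)
estimate = float((particles * weights).sum())
```

The paper's importance-sampling step corresponds to constraining where the resampled particles of one tracker may land using the other tracker's estimate.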

Development of Emotional Feature Extraction Method based on Advanced AAM (Advanced AAM 기반 정서특징 검출 기법 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.834-839
    • /
    • 2009
  • Extracting emotional features from facial images is a key element in recognizing a person's emotional state. In this paper, we propose an Advanced AAM, an improved version of previously proposed facial expression recognition systems based on Bayesian networks using FACS and AAM. This study seeks the most efficient method of selecting optimal facial feature areas for emotion recognition of arbitrary users in generalized HCI system environments. To this end, we apply statistical shape analysis to normalized input images, using the Advanced AAM and FACS as the facial expression and emotion analysis framework, and study automatic emotional feature extraction for arbitrary users.
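The statistical shape analysis at the core of an AAM can be sketched with a minimal point-distribution model: a mean landmark shape plus principal modes of variation from training shapes. The toy landmark sets below are illustrative assumptions, not the paper's data, and alignment (Procrustes) is omitted:

```python
import numpy as np

def shape_model(shapes, n_modes=2):
    """Build a simple point-distribution model (the shape half of an
    AAM): mean shape plus principal modes of landmark variation,
    obtained via SVD of the centered landmark matrix."""
    X = np.asarray(shapes, dtype=float).reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes], s[:n_modes]

# Toy landmark sets: 3 points per face, varying only the y-coordinate
# of the third point (e.g., mouth opening).
shapes = [[[0, 0], [1, 0], [0.5, w]] for w in (0.8, 1.0, 1.2, 1.4)]
mean, modes, _ = shape_model(shapes, n_modes=1)
```

New faces are then described compactly by their coefficients along these modes, which is what makes the model useful as an emotion feature extractor.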

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein;Kang, Jeonghun;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.264-272
    • /
    • 2022
  • Human-robot cooperative tasks are increasingly required in daily life with the development of robotics and artificial intelligence. Interactive reinforcement learning strategies let a robot learn a task by receiving feedback from an experienced human trainer during training. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies for a robot that interactively learns a table-balancing task through deep reinforcement learning with human facial expression feedback. In the proposed system, the robot learns the cooperative task using a Deep Q-Network (DQN), a deep reinforcement learning technique, with the human's facial emotion expressions as feedback. In experiments, the proposed system achieved an optimal-policy convergence rate of up to 83.3% in training and a task success rate of up to 91.6% in testing, outperforming the model without facial expression feedback.
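The expression-driven reward idea can be sketched with tabular Q-learning (a deliberate simplification of the paper's DQN; the expression-to-reward mapping and the toy two-state task are assumptions for illustration only):

```python
import numpy as np

# Hypothetical mapping from a recognized facial expression to a
# scalar reward, standing in for the human trainer's feedback.
EXPRESSION_REWARD = {"happy": 1.0, "neutral": 0.0, "angry": -1.0}

def q_update(q, state, action, expression, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning update where the reward comes from the human's
    facial expression instead of the environment."""
    r = EXPRESSION_REWARD[expression]
    td_target = r + gamma * q[next_state].max()
    q[state, action] += alpha * (td_target - q[state, action])
    return q

q = np.zeros((2, 2))  # toy table: 2 states x 2 actions
# The trainer smiles when action 1 is taken in state 0, frowns at action 0.
for _ in range(100):
    q = q_update(q, 0, 1, "happy", 1)
    q = q_update(q, 0, 0, "angry", 1)
```

A DQN replaces the table `q` with a neural network trained on the same temporal-difference target, but the role of the expression-derived reward is identical.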

Trends and Future Directions in Facial Expression Recognition Technology: A Text Mining Analysis Approach (얼굴 표정 인식 기술의 동향과 향후 방향: 텍스트 마이닝 분석을 중심으로)

  • Insu Jeon;Byeongcheon Lee;Subeen Leem;Jihoon Moon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.748-750
    • /
    • 2023
  • Facial expression recognition technology's rapid growth and development have garnered significant attention in recent years. This technology holds immense potential for various applications, making it crucial to stay up-to-date with the latest trends and advancements. Simultaneously, it is essential to identify and address the challenges that impede the technology's progress. Motivated by these factors, this study aims to understand the latest trends, future directions, and challenges in facial expression recognition technology by utilizing text mining to analyze papers published between 2020 and 2023. Our research focuses on discerning which aspects of these papers provide valuable insights into the field's recent developments and issues. By doing so, we aim to present the information in an accessible and engaging manner for readers, enabling them to understand the current state and future potential of facial expression recognition technology. Ultimately, our study seeks to contribute to the ongoing dialogue and facilitate further advancements in this rapidly evolving field.

Development of an Integrated Quarantine System Using Thermographic Cameras (열화상 카메라를 이용한 통합 방역 시스템 개발)

  • Jung, Bum-Jin;Lee, Jung-Im;Seo, Gwang-Deok;Jeong, Kyung-Ok
    • Journal of the Korea Safety Management & Science
    • /
    • v.24 no.1
    • /
    • pp.31-38
    • /
    • 2022
  • The most common symptoms of COVID-19 are fever, cough, and headache. Symptoms vary from person to person, but checking for fever is the government's most basic measure, and many facilities use thermographic cameras for this purpose. Because previously developed thermographic cameras measure body temperature one person at a time, screening takes considerable time in places where many people enter and exit, such as multi-use facilities. To prevent malfunctions and errors, and to avoid collecting sensitive personal information, our research team set out to develop a facial-recognition thermographic camera. The purpose of this study is to compensate for the shortcomings of existing thermographic cameras with a disaster-safety IoT integrated solution and to provide a quarantine system using advanced facial recognition technology. The captured image information must also be protected as sensitive personal data; a recent leak of such data to China underscores the urgency of developing a thermographic camera that addresses this risk. The thermographic camera system based on facial recognition technology developed in this study had received two patents, with one more application pending, as of January 2022. In the COVID-19 infectious disease disaster, quarantine is an essential preventive measure, and we hope this development proves useful in the quarantine management field.

A Study of Esthetic Facial Profile Preference In Korean (한국인의 연조직측모 선호경향에 대한 연구)

  • Choi, Jun-Gyu;Lee, Ki-Soo
    • The korean journal of orthodontics
    • /
    • v.32 no.5 s.94
    • /
    • pp.327-342
    • /
    • 2002
  • Soft tissue profile is a critical area of interest in orthodontic diagnosis and treatment planning. The purpose of this study was to determine the facial profile preferences of diversified groups and to investigate the relationship between the most preferred facial profile and existing soft tissue reference lines. A survey instrument of constructed facial silhouettes was evaluated by 894 laypersons. The silhouettes varied in nose, lips, chin, and soft tissue subnasale point. Seven sets of facial types were computer-generated by an orthodontist to represent distinct facial types, and the varied facial profiles were graded from most preferred to least preferred. Every facial profile was measured against soft tissue reference lines (Ricketts E-line, Burstone B-line) to characterize the most preferred profile. The results were as follows: 1. In the reliability test, the childhood group showed lower values than the other groups, indicating little concern with facial profile preference. 2. Sex and age differences made no significant difference in profile selection. 3. Agreement on the least preferred facial profile was higher than agreement on the most preferred profile. 4. The coefficient of concordance (Kendall's W) was higher in the group in their twenties, meaning their profile preference is distinct. 5. The lip protrusion (relative to the Ricketts E-line and Burstone B-line) of the most preferred profile was similar to measurements from a previous study of the skeletal and soft tissue features of esthetic facial profiles of young Koreans, so these reference lines can be used valuably in clinics. 6. Profiles with excessive lip protrusion or retrusion relative to the E-line and B-line were least preferred. 7. The most preferred profile across all respondent groups was a straight profile; convex profiles were not preferred, and the least preferred profile was a concave profile.
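Lip protrusion relative to the Ricketts E-line, as measured in this study, is the perpendicular distance of a lip landmark from the line through the nose tip and soft-tissue pogonion. A minimal geometric sketch (the coordinates and the sign convention below are illustrative assumptions):

```python
import math

def eline_distance(nose, chin, lip):
    """Signed perpendicular distance of a lip landmark from the
    Ricketts E-line through the nose tip and soft-tissue pogonion.
    With y increasing downward along the profile, positive values
    here mean the lip lies on the protrusive side of the line."""
    (x1, y1), (x2, y2), (x0, y0) = nose, chin, lip
    num = (x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den
```

The same function applies to the Burstone B-line by substituting its defining landmarks (subnasale and soft-tissue pogonion).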

A Study on Real-time Graphic Workflow For Achieving The Photorealistic Virtual Influencer

  • Haitao Jiang
    • International journal of advanced smart convergence
    • /
    • v.12 no.1
    • /
    • pp.130-139
    • /
    • 2023
  • With the increasing popularity of computer-generated virtual influencers, the trend is rising, especially on social media. The famous virtual influencer characters Lil Miquela and Imma were both created with CGI graphics workflows. That process is typically linear: iteration is challenging and costly, development efforts are frequently siloed from one another, and it does not provide a real-time interactive experience. In a previous study, a real-time graphic workflow was proposed for the Digital Actor Hologram project, but its output quality fell short of the results obtained from the CGI workflow. Therefore, this paper proposes a real-time engine graphic workflow for virtual influencers that enables real-time interactive functions and realistic graphic quality. The workflow comprises four processes: Facial Modeling, Facial Texture, Material Shader, and Look-Development. A performance analysis of the real-time graphic workflow for the Digital Actor Hologram demonstrates the usefulness of this result, and our research should make the production of virtual influencers more efficient.

Development of Comprehensive Oro-Facial Function Scale (포괄적 구강안면기능척도(Comprehensive Oro-Facial Function Scale; COFFS)의 개발)

  • Son, Yeong Soo;Min, Kyoung Chul;Woo, Hee-Soon
    • Therapeutic Science for Rehabilitation
    • /
    • v.11 no.1
    • /
    • pp.69-85
    • /
    • 2022
  • Objective: This study aimed to develop a Comprehensive Oro-Facial Function Scale (COFFS) that can evaluate oro-facial function in patients with dysphagia. Methods: To verify the item composition and reliability of the COFFS, preliminary items were collected by selecting and analyzing four previous studies, and the Content Validity Ratio (CVR) was derived through a second expert survey. Cronbach's α was calculated for the internal consistency of the evaluation items, and test-retest reliability and inter-rater reliability were calculated using intraclass correlation coefficients (ICC). Results: The content validity ratio of all items was 0.67. Cronbach's α was 0.849 for the communication domain, -0.224 for oro-facial structure and shape, 0.831 for the ability to perform oro-facial movements, and 0.946 for mastication and swallowing function. Test-retest reliability was 0.974 and inter-rater reliability was 0.937, both high. Conclusion: The COFFS was finalized with 34 items in four areas, scored on 3-5 point scales depending on the item. Future studies should examine its validity through correlation with other evaluation tools that measure oro-facial function.
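Cronbach's α, used above for internal consistency, follows directly from the item variances and the variance of the total scores: α = k/(k−1) · (1 − Σ itemvariances / total-score variance). A minimal implementation (the example score matrix is illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of
    per-subject total scores), using sample variances (ddof=1)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Perfectly parallel items yield α = 1, while a negative α (as reported for the structure-and-shape domain above) indicates items that vary in opposite directions within the domain.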