• Title/Summary/Keyword: facial trustworthiness


Difference in visual attention during the assessment of facial attractiveness and trustworthiness

  • Sung, Young-Shin; Cho, Kyung-Jin; Kim, Do-Yeon; Kim, Hack-Jin
    • Science of Emotion and Sensibility, v.13 no.3, pp.533-540, 2010
  • This study was designed to examine the difference in visual attention between the evaluations of facial attractiveness and facial trustworthiness, which may be the two most fundamental social evaluations for forming first impressions across various types of social interaction. In Study 1, participants evaluated the attractiveness and trustworthiness of 40 unfamiliar faces while their gaze directions were recorded with an eye-tracker. The analysis revealed that participants fixated significantly longer on certain facial features, such as the eyes and nose, when evaluating facial trustworthiness than when evaluating facial attractiveness. In Study 2, participants performed the same face evaluation tasks, except that a word was briefly displayed on a particular facial feature in each trial; these trials were followed by unexpected recall tests of the previously viewed words. The analysis demonstrated that the recognition rate for words that had been presented on the nose was significantly higher during facial trustworthiness evaluation than during facial attractiveness evaluation. These findings suggest that the evaluation of facial trustworthiness may be distinguished from that of facial attractiveness in terms of the allocation of attentional resources.
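The fixation-time analysis in Study 1 reduces to summing fixation durations inside areas of interest (AOIs) drawn around facial features. The sketch below illustrates that computation; the AOI coordinates, data layout, and function names are hypothetical assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch of an AOI dwell-time analysis: given eye-tracker
# fixations and rectangular AOIs for facial features, sum total fixation
# time per feature. All coordinates and data are illustrative assumptions.

from collections import defaultdict

# AOIs as (x_min, y_min, x_max, y_max) in screen pixels (assumed layout)
AOIS = {
    "eyes":  (120, 140, 380, 220),
    "nose":  (200, 220, 300, 320),
    "mouth": (180, 330, 320, 400),
}

def dwell_times(fixations):
    """Sum fixation durations (ms) falling inside each AOI.

    fixations: iterable of (x, y, duration_ms) tuples.
    """
    totals = defaultdict(float)
    for x, y, dur in fixations:
        for feature, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[feature] += dur
                break  # AOIs assumed non-overlapping
    return dict(totals)

# Example: compare per-feature dwell time across the two tasks
trust_fix = [(250, 270, 310), (150, 180, 220), (240, 260, 190)]
attract_fix = [(250, 350, 280), (140, 170, 200)]
print("trustworthiness:", dwell_times(trust_fix))
print("attractiveness:", dwell_times(attract_fix))
```

Comparing the per-feature totals between the two task conditions corresponds to the longer gaze fixation on the eyes and nose that the study reports for trustworthiness evaluation.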


Trends of Artificial Intelligence Product Certification Programs

  • Yejin SHIN; Joon Ho KWAK; KyoungWoo CHO; JaeYoung HWANG; Sung-Min WOO
    • Korean Journal of Artificial Intelligence, v.11 no.3, pp.1-5, 2023
  • With recent advancements in artificial intelligence (AI) technology, more AI-based products are being launched and used. However, using AI safely requires awareness of the potential risks it can pose; these risks must be evaluated by experts, and users must be informed of the results. In response to this need, many countries have implemented certification programs for AI-based products. In this study, we analyze trends and differences in AI product certification programs across several countries and emphasize the importance of such programs in ensuring the safety and trustworthiness of products that include AI. To this end, we examine four international AI product certification programs and suggest methods for improving and promoting them. The programs either target AI products built for specific purposes, such as autonomous intelligence systems and facial recognition technology, or extend conventional software quality certification based on the ISO/IEC 25000 standard. Our analysis shows that companies aim to differentiate their products strategically in the market by certifying the quality and trustworthiness of their AI technologies. Based on these results, we propose methods to improve and promote the certification programs. These findings provide knowledge and insights that contribute to the development of certification programs for AI-based products.

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar; Surjeet Kumar; Arnav Bhavsar; Kotiba Hamad; Yang-Sae Moon; Dae Ho Yoon
    • Journal of Information Processing Systems, v.20 no.4, pp.558-573, 2024
  • Given factors such as illumination, variation in camera quality, and background-specific variation, identifying a face with a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Quality assessment techniques typically categorize images with deep learning methods, but deep learning models act as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust: they provide visual evidence of the regions within an image on which a deep learning model bases its prediction. Here, we developed a technique for the reliable prediction of facial image quality ahead of medical analysis and security operations. A combination of gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) was used to explain the model. This approach has been applied to the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that combined explanations provide better visual explanations for the model, with both the saliency map and the perturbation-based technique verifying its predictions.
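The Grad-CAM-plus-LIME combination described above can be reproduced in outline with standard tools. Below is a minimal sketch, assuming a torchvision ResNet-18 as a stand-in for the paper's face-quality classifier and skipping ImageNet normalization; the target layer, preprocessing, and the random stand-in image are illustrative assumptions, not the authors' implementation. Grad-CAM is hand-rolled with forward/backward hooks; LIME comes from the `lime` package.

```python
# Sketch: combining a gradient-based saliency map (Grad-CAM) with a
# perturbation-based explanation (LIME) for one prediction. The model,
# layer choice, and input handling are assumptions for illustration.

import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4[-1]  # assumed: last conv block

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(x, class_idx=None):
    """x: (1, 3, H, W) tensor in [0, 1]. Returns an HxW saliency map."""
    logits = model(x)
    idx = class_idx if class_idx is not None else logits.argmax(1).item()
    model.zero_grad()
    logits[0, idx].backward()
    w = grads["v"].mean(dim=(2, 3), keepdim=True)   # channel weights
    cam = F.relu((w * acts["v"]).sum(dim=1))        # weighted activation sum
    cam = F.interpolate(cam[None], x.shape[2:], mode="bilinear")[0, 0]
    return (cam / cam.max().clamp(min=1e-8)).detach().numpy()

# LIME side: perturbation-based explanation of the same prediction.
# Requires `pip install lime`; classifier_fn must map (N, H, W, 3)
# image batches to class probabilities.
from lime import lime_image

def classifier_fn(batch):
    t = torch.tensor(batch).permute(0, 3, 1, 2).float() / 255.0
    with torch.no_grad():
        return torch.softmax(model(t), dim=1).numpy()

img = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # stand-in image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, classifier_fn,
                                         top_labels=1, num_samples=100)
cam_map = grad_cam(torch.tensor(img).permute(2, 0, 1)[None].float() / 255.0)
# Checking whether cam_map highlights the same regions as LIME's top
# superpixels is the kind of combined-explanation agreement check the
# abstract describes.
```

The design point is that the two methods fail differently: Grad-CAM depends on the model's internal gradients, while LIME only queries the model as a black box, so agreement between them is stronger visual evidence than either map alone.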