• Title/Abstract/Keywords: facial trustworthiness

Search results: 3

얼굴 매력도와 신뢰성 평가에서 시각적 주의의 차이 (Difference in visual attention during the assessment of facial attractiveness and trustworthiness)

  • 성영신; 조경진; 김도연; 김학진
    • 감성과학 / Vol. 13, No. 3 / pp. 533-540 / 2010
  • This study was designed to examine whether visual attention differs when people evaluate attractiveness and trustworthiness, arguably the two most representative judgments in facial impression formation, a process essential to social interaction. In Experiment 1, participants' eye movements were recorded with an eye-tracker while they rated facial trustworthiness and attractiveness, and differences between the two rating tasks were then examined through heatmap analysis. The results showed that when judging trustworthiness, compared with attractiveness, participants fixated more on the eye and nose regions, which are key facial features. In Experiment 2, words were briefly presented on each facial feature while participants performed the same face rating tasks as in Experiment 1. A recall test administered after the experiment showed that words presented on the nose region during the trustworthiness task were recalled at a significantly higher rate than during the attractiveness task. These findings indicate that judgments of facial trustworthiness involve an information-processing stream distinct from facial attractiveness in terms of how visual attentional resources are allocated. (An illustrative sketch of this kind of area-of-interest fixation analysis follows this entry.)

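The gaze analysis described in this abstract essentially amounts to counting fixations inside facial areas of interest (AOIs) for each rating task and comparing the distributions. The following is a minimal illustrative sketch in Python; the AOI rectangles, column names, and task labels are assumptions made for illustration and are not taken from the paper.

```python
import pandas as pd

# Hypothetical areas of interest (AOIs) in normalized face coordinates;
# the rectangles below are illustrative, not from the paper.
AOIS = {
    "eyes": (0.25, 0.30, 0.75, 0.45),   # (x_min, y_min, x_max, y_max)
    "nose": (0.40, 0.45, 0.60, 0.65),
    "mouth": (0.35, 0.65, 0.65, 0.80),
}

def aoi_of(x: float, y: float):
    """Return the name of the AOI containing a fixation point, or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def fixation_counts(fixations: pd.DataFrame) -> pd.DataFrame:
    """Count fixations per AOI for each rating task.

    `fixations` is assumed to have columns: 'task' (e.g. 'trustworthiness'
    or 'attractiveness') and 'x', 'y' (normalized fixation coordinates).
    """
    labeled = fixations.assign(
        aoi=[aoi_of(px, py) for px, py in zip(fixations["x"], fixations["y"])]
    )
    return (labeled.dropna(subset=["aoi"])
                   .groupby(["task", "aoi"])
                   .size()
                   .unstack(fill_value=0))

# A heatmap-style comparison would then contrast, e.g., the 'eyes' and 'nose'
# columns between the two tasks, mirroring the pattern reported in Experiment 1.
```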

Trends of Artificial Intelligence Product Certification Programs

  • Yejin SHIN; Joon Ho KWAK; KyoungWoo CHO; JaeYoung HWANG; Sung-Min WOO
    • 한국인공지능학회지 / Vol. 11, No. 3 / pp. 1-5 / 2023
  • With recent advancements in artificial intelligence (AI) technology, more products based on AI are being launched and used. However, using AI safely requires an awareness of the potential risks it can pose. These risks must be evaluated by experts, and users must be informed of the results. In response to this need, many countries have implemented certification programs for products based on AI. In this study, we analyze trends and differences in AI product certification programs across several countries and emphasize the importance of such programs in ensuring the safety and trustworthiness of products that include AI. To this end, we examine four international AI product certification programs and suggest methods for improving and promoting these programs. The certification programs either target AI products built for specific purposes, such as autonomous intelligence systems and facial recognition technology, or extend a conventional software quality certification based on the ISO/IEC 25000 standard. The results of our analysis show that companies aim to strategically differentiate their products in the market by ensuring the quality and trustworthiness of AI technologies. Based on these results, we additionally propose methods to improve and promote the certification programs. These findings provide new knowledge and insights that contribute to the development of AI-based product certification programs.

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar; Surjeet Kumar; Arnav Bhavsar; Kotiba Hamad; Yang-Sae Moon; Dae Ho Yoon
    • Journal of Information Processing Systems / Vol. 20, No. 4 / pp. 558-573 / 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face with a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Typically, quality assessment techniques use deep learning methods to categorize images. Deep learning models, however, generally operate as black boxes, which raises questions about their trustworthiness. Several explainability techniques have gained importance in building this trust. Explainability techniques provide visual evidence of the regions within an image on which a deep learning model bases its prediction. Here, we developed a technique for reliably assessing facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) was used to explain the model. This approach has been applied to the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the combined explanations provide better visual explanations for the model, with both the saliency-map and perturbation-based explainability techniques verifying the predictions. (A hedged sketch of this kind of combined Grad-CAM/LIME explanation follows this entry.)
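The explanation pipeline described in this abstract combines a gradient-based saliency map (Grad-CAM) with a perturbation-based explanation (LIME). Below is a hedged sketch of such a combination: it uses a stock torchvision ResNet-18 as a stand-in classifier and simply intersects the two explanation masks, since the paper's actual face-quality model, preprocessing, and fusion rule are not specified here.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models
from lime import lime_image

# Stand-in classifier; the paper's face-quality model would replace this.
# Real use would also apply the model's own input normalization.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# --- Grad-CAM via forward/backward hooks on the last convolutional block ---
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0].detach()))

def grad_cam(x: torch.Tensor, class_idx: int) -> np.ndarray:
    """Return a [0, 1] saliency map of shape (H, W) for one image tensor (1, 3, H, W)."""
    model.zero_grad()
    model(x)[0, class_idx].backward()
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)        # channel importance
    cam = F.relu((weights * acts["v"]).sum(dim=1))              # weighted activations
    cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
    return (cam / cam.max().clamp(min=1e-8)).numpy()

# --- LIME explanation over superpixel perturbations ---
def classifier_fn(batch: np.ndarray) -> np.ndarray:
    """LIME passes (N, H, W, 3) images; return class probabilities."""
    t = torch.tensor(batch, dtype=torch.float32).permute(0, 3, 1, 2) / 255.0
    with torch.no_grad():
        return torch.softmax(model(t), dim=1).numpy()

def combined_explanation(img: np.ndarray) -> np.ndarray:
    """img: (H, W, 3) uint8 face image. Keep Grad-CAM saliency only where LIME agrees."""
    x = torch.tensor(img, dtype=torch.float32).permute(2, 0, 1)[None] / 255.0
    class_idx = int(model(x).argmax())
    cam = grad_cam(x, class_idx)

    explainer = lime_image.LimeImageExplainer()
    exp = explainer.explain_instance(img, classifier_fn, top_labels=1, num_samples=500)
    _, lime_mask = exp.get_image_and_mask(exp.top_labels[0], positive_only=True,
                                          num_features=5, hide_rest=False)
    # Intersect: saliency restricted to superpixels LIME marks as positive evidence.
    return cam * (lime_mask > 0)
```

The intersection of the two masks is only one simple way to "combine" explanations; it illustrates the idea that a prediction is more convincing when the gradient-based and perturbation-based methods highlight the same facial regions.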