• Title/Summary/Keyword: Reliability of artificial intelligence


The Impact of Artificial Intelligence Adoption in Candidates Screening and Job Interview on Intentions to Apply (채용 전형에서 인공지능 기술 도입이 입사 지원의도에 미치는 영향)

  • Lee, Hwanwoo;Lee, Saerom;Jung, Kyoung Chol
    • The Journal of Information Systems
    • /
    • v.28 no.2
    • /
    • pp.25-52
    • /
    • 2019
  • Purpose Despite the recent increase in the use of selection tools based on artificial intelligence (AI), far less is known about their effectiveness in recruitment and selection research. Design/methodology/approach This paper tests the impact of AI-based initial screening and AI-based interviews on intentions to apply. We also examine the moderating role of an individual difference (i.e., reliability on technology) in this relationship. Findings Using policy-capturing with undergraduate students at a large university in South Korea, this study showed that AI-based interviews have a negative effect on intentions to apply, whereas AI-based initial screening has no effect. These results suggest that applicants may feel negatively about AI-based interviews but not about AI-based initial screening; in other words, AI-based interviews can reduce application rates, but AI-based screening does not. Results also indicated that the relationship between AI-based initial screening and intentions to apply is moderated by the applicant's level of reliability on technology: respondents with high levels of reliability are more likely than those with low levels to apply to firms using AI-based initial screening. However, this moderating role was not significant in the relationship between the AI interview and intentions to apply. Employing uncertainty reduction theory, this study indicates that the relationship between AI-based selection tools and intentions to apply is dynamic, suggesting that organizations should carefully manage their AI-based selection techniques throughout the recruitment and selection process.
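
The moderation test described in this abstract amounts to a regression with an interaction term between the selection condition and reliability on technology. The abstract does not give the exact model, so the following is a minimal sketch on synthetic data with assumed variable names:

```python
# Minimal moderation-test sketch; variable names and data are illustrative,
# not the paper's actual policy-capturing design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "ai_screening": rng.integers(0, 2, n),        # 0 = human, 1 = AI screening
    "tech_reliability": rng.normal(3.5, 1.0, n),  # self-reported reliability scale
})
df["intent_to_apply"] = (
    3 + 0.1 * df["ai_screening"]
    + 0.3 * df["tech_reliability"]
    + 0.2 * df["ai_screening"] * df["tech_reliability"]  # moderation effect
    + rng.normal(0, 1, n)
)

# The interaction term carries the moderation hypothesis: a significant
# ai_screening:tech_reliability coefficient indicates moderation.
model = smf.ols("intent_to_apply ~ ai_screening * tech_reliability", data=df).fit()
print(model.summary())
```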

A Study on Policy Acceptance Intention to Use Artificial Intelligence-Based Public Services: Focusing on the Influence of Individual Perception & Digital Literacy Level (인공지능 기반 공공서비스 정책수용 의도에 관한 연구: 개인의 인식과 디지털 리터러시 수준이 미치는 영향을 중심으로)

  • Jang, Changki;Sung, WookJoon
    • Informatization Policy
    • /
    • v.29 no.1
    • /
    • pp.60-83
    • /
    • 2022
  • The purpose of this study is to empirically analyze the effect of individual perceptions of artificial intelligence and of the level of digital literacy on the acceptance of artificial intelligence-based public services. For the empirical analysis, a research model was set up based on the technology acceptance model and the theory of planned behavior using 2017 survey data and analyzed through structural equation modeling. To summarize the results: first, individuals' positive perceptions of artificial intelligence technology reinforce attitudes toward the benefits of, and reduce concerns about, public services into which artificial intelligence technology has been introduced. Second, the level of digital literacy reinforces both the perceived benefits of and concerns about artificial intelligence technology, but the intention to use public services was reinforced through the benefits individuals perceive in the technology rather than through privacy concerns about it. Third, it was confirmed that individuals' perceived benefits of artificial intelligence technology reinforce the intention to use public services, while privacy concerns negatively influence it, and that the influence of perceived ease of use and usefulness, as opposed to privacy concerns, further reinforces the intention to use. These findings suggest that citizens' positive perceptions of the accuracy and reliability of information provided through artificial intelligence technology should be strengthened, that institutional arrangements for responsibility for errors caused by artificial intelligence technology should be complemented, and that technical problems related to privacy protection should be solved.
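
The structural-equation analysis can be sketched in Python with the semopy package; the path specification and variable names below are assumptions for illustration, not the study's actual model:

```python
# Illustrative SEM sketch with semopy; the paths and column names are
# assumptions, and the data are synthetic stand-ins for the survey.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(1)
n = 500
ai_perception = rng.normal(size=n)
digital_literacy = rng.normal(size=n)
perceived_benefit = 0.5 * ai_perception + 0.3 * digital_literacy + rng.normal(size=n)
privacy_concern = -0.2 * ai_perception + 0.3 * digital_literacy + rng.normal(size=n)
intention_to_use = 0.6 * perceived_benefit - 0.3 * privacy_concern + rng.normal(size=n)
df = pd.DataFrame({
    "ai_perception": ai_perception,
    "digital_literacy": digital_literacy,
    "perceived_benefit": perceived_benefit,
    "privacy_concern": privacy_concern,
    "intention_to_use": intention_to_use,
})

desc = """
intention_to_use ~ perceived_benefit + privacy_concern
perceived_benefit ~ ai_perception + digital_literacy
privacy_concern ~ ai_perception + digital_literacy
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```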

Concurrent Validity and Test-retest Reliability of the Core Stability Test Using Ultrasound Imaging and Electromyography Measurements

  • Yoo, Seungju;Lee, Nam-Gi;Park, Chanhee;You, Joshua (Sung) Hyun
    • Physical Therapy Korea
    • /
    • v.28 no.3
    • /
    • pp.186-193
    • /
    • 2021
  • Background: While the formal test has been used to provide a quantitative measurement of core stability, studies have reported inconsistent results regarding its test-retest and intraobserver reliabilities. Furthermore, the validity of the formal test has never been established. Objects: This study aimed to establish the concurrent validity and test-retest reliability of the formal test. Methods: Twenty-two young adults with and without core instability (23.1 ± 2.0 years) were recruited. Concurrent validity was determined by comparing the muscle thickness changes of the external oblique, internal oblique, and transverse abdominal muscles to changes in core stability pressure during the formal test using ultrasound (US) imaging and pressure biofeedback, respectively. For the test-retest reliability, muscle thickness and pressure changes were repeatedly measured approximately 24 hours apart. Electromyography (EMG) was used to monitor trunk muscle activity during the formal test. Results: Pearson's correlation analysis showed an excellent correlation between transverse abdominal thickness and pressure biofeedback unit (PBU) pressure, as well as between internal oblique thickness and PBU pressure, ranging from r = 0.856 to 0.980 (p < 0.05). The test-retest reliability was good: intraclass correlation coefficient ICC(1,2) = 0.876 for the core stability pressure measure and ICC(1,2) = 0.939 to 0.989 for the abdominal muscle thickness measures. Conclusion: Our results provide clinical evidence that the formal test is valid and reliable when used concurrently with EMG and US measurements.
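
A minimal sketch of the two statistics reported above (Pearson's r for concurrent validity, ICC for test-retest reliability), using synthetic stand-in data and the pingouin package:

```python
# Validity/reliability sketch; the data and column names are synthetic
# stand-ins for the US thickness and PBU pressure measurements.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 22
thickness = rng.normal(1.0, 0.2, n)                    # TrA thickness change
pressure = 40 + 25 * thickness + rng.normal(0, 1, n)   # PBU pressure (mmHg)

# Concurrent validity: thickness change vs. pressure change.
r, p = pearsonr(thickness, pressure)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Test-retest reliability: long format, one row per subject x session.
day2 = pressure + rng.normal(0, 1.5, n)
long_df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "session": np.repeat(["day1", "day2"], n),
    "pbu_pressure": np.concatenate([pressure, day2]),
})
icc = pg.intraclass_corr(data=long_df, targets="subject",
                         raters="session", ratings="pbu_pressure")
print(icc)  # report the ICC type matching the study design
```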

Damage Detection and Damage Quantification of Temporary Works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • Journal of Internet Computing and Services
    • /
    • v.25 no.2
    • /
    • pp.11-19
    • /
    • 2024
  • This paper studies a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum and, owing to the characteristics of these materials, is reused several times. However, because regulations and restrictions on its reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Currently, safety rules such as related government laws, standards, and regulations for the quality control of temporary works equipment have not been established, and inspection results often differ depending on the inspector's level of training. To overcome these limitations, a method based on AI and image-processing technology was developed. In addition, explainable artificial intelligence (XAI) technology was applied so that inspectors can make more accurate decisions from the image-analysis damage-detection results produced by the developed AI model for analyzing temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K camera, the artificial intelligence model was trained on 610 labeled images, and accuracy was tested by analyzing recorded images of temporary works equipment. As a result, the damage-detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI model needs to be trained further on real data, and its damage-detection performance must be maintained or improved when real data are applied.
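
The abstract does not name the XAI technique used; for visual inspection tasks like this one, a common choice is a CNN classifier paired with Grad-CAM heatmaps that show which image regions drove the damage prediction. A minimal sketch under that assumption (backbone, layer name, and class layout are illustrative):

```python
# Hypothetical damage-classification + Grad-CAM sketch; Grad-CAM is assumed
# here as one common XAI choice, not the paper's confirmed method.
import tensorflow as tf

base = tf.keras.applications.ResNet50V2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(2, activation="softmax")(x)  # damaged / intact
model = tf.keras.Model(base.input, out)

def grad_cam(model, image, conv_layer_name, class_idx):
    """Return a Grad-CAM heatmap highlighting regions driving the prediction."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))           # channel importance
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalize to [0, 1]

# Usage (layer name is the final conv activation of the chosen backbone):
# heatmap = grad_cam(model, img_array, "post_relu", class_idx=0)
```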

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.142-151
    • /
    • 2022
  • In this paper, a model combining machine-learning-based sentiment analysis with explainable artificial intelligence (XAI) is presented to secure the reliability of sentiment analysis and prediction. The applicability of the proposed model was tested and described using the IMDB dataset. This approach has the advantage that it can explain, from various perspectives, how the data affect the model's prediction results. In various applications of sentiment analysis, such as recommendation systems, emotion analysis through facial-expression recognition, and opinion analysis, presenting more specific, evidence-based analysis results to users makes it possible to gain their trust in the system.
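
The abstract does not specify the classifier or XAI method; one minimal, commonly used combination for this task is TF-IDF features, logistic regression, and SHAP attributions. A sketch under those assumptions:

```python
# Hypothetical sentiment + XAI sketch; TF-IDF + logistic regression + SHAP
# are assumed here, and the training texts are tiny stand-ins for IMDB.
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["a wonderful, moving film", "dull and lifeless",
               "great acting and story", "a boring waste of time"]
train_labels = [1, 0, 1, 0]  # 1 = positive sentiment

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts).toarray()
clf = LogisticRegression().fit(X_train, train_labels)

# Explain one prediction: which words pushed it positive or negative.
X_test = vec.transform(["a wonderful but boring film"]).toarray()
explainer = shap.LinearExplainer(clf, X_train)
vals = explainer.shap_values(X_test)[0]
ranked = sorted(zip(vec.get_feature_names_out(), vals),
                key=lambda t: abs(t[1]), reverse=True)
print(ranked[:5])  # top evidence words behind the sentiment prediction
```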

Study on the Application of Artificial Intelligence Model for CT Quality Control (CT 정도관리를 위한 인공지능 모델 적용에 관한 연구)

  • Ho Seong Hwang;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.3
    • /
    • pp.182-189
    • /
    • 2023
  • CT is a medical device that acquires medical images based on the X-ray attenuation coefficients of human organs; using this principle, it can also acquire sagittal and coronal planes and 3D images of the human body, making it an essential device for universal diagnostic testing. However, because the radiation exposure of a CT scan is high, CT is regulated and managed as special medical equipment, and as special medical equipment it must undergo quality control. Within quality control, the spatial resolution and contrast resolution of the existing phantom imaging tests and the clinical image evaluation are qualitative tests; because these tests are not objective, they undermine trust in the reliability of the CT system. Therefore, by applying artificial intelligence classification models, we sought to confirm the possibility of quantitatively evaluating the qualitative parts of the phantom test. We used six classification models (VGG19, DenseNet201, EfficientNetB2, InceptionResNetV2, ResNet50V2, and Xception), and a fine-tuning process was additionally performed during training. As a result, across all classification models, the accuracy for spatial resolution was 0.9562 or higher, the precision was 0.9535, the recall was 1, the loss value was 0.1774, and the training time ranged from a maximum of 14 minutes to a minimum of 8 minutes 10 seconds. From these experimental results, we concluded that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.
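
The two-stage fine-tuning procedure mentioned above (train a new head on a frozen pretrained backbone, then unfreeze and retrain at a low learning rate) can be sketched with one of the backbones named in the abstract; the data pipeline and class count are assumptions:

```python
# Fine-tuning sketch with EfficientNetB2, one of the backbones named in the
# abstract; class layout and datasets are hypothetical.
import tensorflow as tf

base = tf.keras.applications.EfficientNetB2(
    weights="imagenet", include_top=False, input_shape=(260, 260, 3)
)
base.trainable = False  # stage 1: train only the new classification head

inputs = tf.keras.Input(shape=(260, 260, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)  # pass / fail phantom
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets

# Stage 2 (fine-tuning): unfreeze the backbone, retrain at a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```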

The validity and reliability of the Korean version of the General Attitudes towards Artificial Intelligence Scale for nursing students (한국어판 간호대학생의 인공지능에 대한 태도 측정도구 신뢰도 및 타당도 검증)

  • Seo, Yon Hee;Ahn, Jung-Won
    • The Journal of Korean Academic Society of Nursing Education
    • /
    • v.28 no.4
    • /
    • pp.357-367
    • /
    • 2022
  • Purpose: The aim of the study was to verify the validity and reliability of the Korean version of the General Attitudes towards Artificial Intelligence Scale (GAAIS-K) for nursing students. Methods: Data from 235 participants were collected from April 12 to April 26, 2022, and a total of 230 participants' data were analyzed. The data were analyzed for content, discriminant, known-groups, and construct validity using the content validity index, correlation coefficients, and confirmatory factor analysis. The reliability of the GAAIS-K was examined using internal consistency and test-retest analyses. Results: The expert-rated content validity index was ≥.80. The sub-scales of the GAAIS-K were moderately correlated with attitude toward accepting technology, indicative of its discriminant validity. The male students' positive attitude score was significantly higher than that of the female students, satisfying the known-groups validity. Cronbach's α for the scale was .86 (positive sub-scale) and .74 (negative sub-scale), and the intraclass correlation coefficient for the two-week test-retest reliability was .86 (positive) and .60 (negative). The mean scores for positive and negative attitudes were 3.68±0.46 and 3.05±0.55, respectively. Conclusion: This study shows that the GAAIS-K is a valid and reliable instrument for assessing nursing students' general attitudes toward artificial intelligence. Additional research is recommended to continue the evaluation of the GAAIS-K with a focus on healthcare settings.
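
The reliability and known-groups analyses reported above can be sketched as follows; the data and column names are synthetic stand-ins, not the study's data:

```python
# Internal-consistency + known-groups sketch; all values are synthetic
# stand-ins for the GAAIS-K survey responses.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n = 230
latent = rng.normal(3.7, 0.5, n)  # latent positive attitude
df = pd.DataFrame(
    {f"pos_item_{i}": latent + rng.normal(0, 0.4, n) for i in range(1, 7)}
)
df["positive_score"] = df.mean(axis=1)
df["sex"] = rng.choice(["M", "F"], n)

# Internal consistency of the positive sub-scale.
alpha, ci = pg.cronbach_alpha(data=df[[c for c in df if c.startswith("pos_item_")]])
print(f"Cronbach's alpha (positive) = {alpha:.2f}")

# Known-groups validity: do male and female scores differ as expected?
t, p = ttest_ind(df.loc[df["sex"] == "M", "positive_score"],
                 df.loc[df["sex"] == "F", "positive_score"])
print(f"t = {t:.2f}, p = {p:.4f}")
```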

The Expectation of Medical Artificial Intelligence of Students Majoring in Health in Convergence Era (융복합 시대에 일부 보건계열 전공 학생들의 의료용 인공지능에 대한 기대도)

  • Moon, Ja-Young;Sim, Seon-Ju
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.9
    • /
    • pp.97-104
    • /
    • 2018
  • The purpose of this study was to investigate the expectations toward medical artificial intelligence (AI) of 500 students majoring in health science in Cheonan city and to provide basic data for the widespread use of medical AI. The awareness of AI was 18.6%, the reliability of AI was 24.8%, and agreement with the use of medical AI was 38%. Also, the higher the awareness and reliability of AI, the higher the expectation of AI. Consequently, education on medical AI within health-related majors should serve as a cornerstone for developing an effective healthcare environment that utilizes medical AI by raising awareness, reliability, and expectations of AI.

The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI) (설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안)

  • Jung Il Ok;Choi Woo Bin;Kim Su Chul
    • Convergence Security Journal
    • /
    • v.22 no.3
    • /
    • pp.101-110
    • /
    • 2022
  • As cases of using artificial intelligence in various fields increase, attempts to solve various issues through artificial intelligence in the intrusion-detection field are also increasing. However, the black-box nature of machine learning, which cannot explain or trace the reasons for its predicted results, presents difficulties for the security professionals who must use it. To solve this problem, research on explainable AI (XAI), which helps interpret and understand decisions in machine learning, is increasing in various fields. Therefore, in this paper, we propose explainable AI to enhance the reliability of machine-learning-based intrusion-detection prediction results. First, the intrusion-detection model is implemented with XGBoost, and the explanation of the model is implemented using SHAP. Comparing and analyzing the existing feature importances against the SHAP results then gives security experts a reliable basis for decision-making. For this experiment, the PKDD2007 dataset was used; the association between the existing feature importances and the SHAP values was analyzed, and it was verified that SHAP-based explainable AI is valid for giving security experts confidence in the prediction results of intrusion-detection models.
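
The XGBoost-plus-SHAP pipeline described above maps directly onto the standard APIs of both libraries; the sketch below uses synthetic data as a stand-in for the preprocessed PKDD2007 features:

```python
# XGBoost + SHAP sketch matching the pipeline described above; the synthetic
# features stand in for the preprocessed PKDD2007 data.
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for PKDD2007 features and attack/benign labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = xgb.XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)

# Built-in global importances vs. SHAP per-prediction attributions.
print(model.feature_importances_)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, show=False)  # global per-feature impact
```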

How to Review a Paper Written by Artificial Intelligence (인공지능으로 작성된 논문의 처리 방안)

  • Dong Woo Shin;Sung-Hoon Moon
    • Journal of Digestive Cancer Research
    • /
    • v.12 no.1
    • /
    • pp.38-43
    • /
    • 2024
  • Artificial Intelligence (AI) is the intelligence of machines or software, in contrast to human intelligence. Generative AI technologies, such as ChatGPT, have emerged as valuable research tools that facilitate brainstorming ideas for research, analyzing data, and writing papers. However, their application has raised concerns regarding authorship, copyright, and ethical considerations. Many organizations of medical journal editors, including the International Committee of Medical Journal Editors and the World Association of Medical Editors, do not recognize AI technology as an author. Instead, they recommend that researchers explicitly acknowledge the use of AI tools in their research methods or acknowledgments. Similarly, international journals do not recognize AI tools as authors and insist that human authors should be accountable for the research findings. Therefore, when integrating AI-generated content into papers, it should be disclosed under the responsibility of human authors, and the details of the AI tools employed should be specified to ensure transparency and reliability.