• Title/Summary/Keyword: Item response theory

Item Response Analysis on Items Related to Statistical Unit in the National Academic Aptitude Test -Empirical Study for Jellabuk-do Preliminary Testee- (대학수학능력시험의 통계단원 문제에 대한 문항반응분석 - 전북지역 예비 수험생을 대상으로 한 탐색연구 -)

  • Choi, Kyoung-Ho
    • Communications for Statistical Applications and Methods, v.17 no.3, pp.327-335, 2010
  • Item response theory yields stable estimates of student ability regardless of item difficulty and discrimination; it is an item-analysis method that assigns the same ability score to a student even when different tests are taken repeatedly. In this paper, we examined item difficulty and item discrimination and, through item response theory, analyzed the items on the statistics unit of the national academic aptitude test administered over the past ten years, from 2000 to 2009. As a result, we found that about 60 percent of the items were too difficult for high school students to solve; item discrimination, however, proved to be high.
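
As background for this and several of the following entries, the two-parameter logistic (2PL) model underlying this kind of difficulty/discrimination analysis can be sketched in a few lines. This is a minimal illustration; the parameter values are invented, not taken from the paper.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct answer
    at ability theta, with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Invented parameters: a hard but highly discriminating item,
# the pattern the abstract reports for most of the statistics items.
theta = np.linspace(-3, 3, 7)                      # ability grid
print(np.round(icc_2pl(theta, a=2.0, b=1.5), 3))   # low until theta > 1.5
```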

Item Analysis using Classical Test Theory and Item Response Theory, Validity and Reliability of the Korean version of a Pressure Ulcer Prevention Knowledge (한국어판 욕창예방지식도구의 고전검사이론과 문항반응이론을 적용한 문항분석, 타당도와 신뢰도)

  • Kang, Myung Ja; Kim, Myoung Soo
    • Journal of Korean Biological Nursing Science, v.20 no.1, pp.11-19, 2018
  • Purpose: The purposes of this study were to perform item analysis using classical test theory (CTT) and item response theory (IRT), and to establish the validity and reliability of the Korean version of a pressure ulcer prevention knowledge instrument. Methods: The 26-item pressure ulcer prevention knowledge instrument was translated into Korean, and item analysis was conducted on the 22 items with an adequate content validity index (CVI). A total of 240 registered nurses in 2 university hospitals completed the questionnaire. Each item was analyzed applying CTT and IRT according to a 2-parameter logistic model. The quality of response alternatives, item difficulty, and item discrimination were evaluated. For testing validity and reliability, the Pearson correlation coefficient and Kuder-Richardson 20 (KR-20) were used. Results: The scale CVI was .90 (item-CVI range = .75-1.00). The total correct-answer rate for this study population was relatively low at 52.5%. The quality of the response alternatives was relatively good (range = .02-.83). Item difficulty ranged from .10 to .86 according to CTT and from -12.19 to 29.92 according to IRT. Under IRT, 12 items were of low, 2 of medium, and 8 of high difficulty. Item discrimination ranged from .04 to .57 applying CTT and from .00 to 1.47 applying IRT. Overall internal consistency (KR-20) was .62 and stability (test-retest) was .82. Conclusion: The instrument had relatively weak construct validity and item discrimination according to IRT. Therefore, cautious use of the Korean version of this instrument is recommended for discrimination purposes, given the many attractive response alternatives and the low internal consistency.
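
The KR-20 statistic reported above has a closed form that is easy to compute directly. A minimal sketch with simulated responses follows; the data are made up (independent random answers), so the resulting reliability will be near zero rather than the study's .62.

```python
import numpy as np

def kr20(X):
    """Kuder-Richardson 20 for a 0/1 response matrix X
    (rows = examinees, columns = items)."""
    k = X.shape[1]
    p = X.mean(axis=0)                    # per-item proportion correct
    var_total = X.sum(axis=1).var(ddof=1) # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

rng = np.random.default_rng(0)
X = (rng.random((240, 22)) < 0.525).astype(int)  # 240 nurses, 22 items, invented
print(round(kr20(X), 3))                 # near 0 for independent random data
```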

A Unifying Model for Hypothesis Testing Using Legislative Voting Data: A Multilevel Item-Response-Theory Model

  • Jeong, Gyung-Ho
    • Analyses & Alternatives, v.5 no.1, pp.3-24, 2021
  • This paper introduces a multilevel item-response-theory (IRT) model as a unifying model for hypothesis testing using legislative voting data. This paper shows that a probit or logit model is a special type of multilevel IRT model. In particular, it is demonstrated that, when a probit or logit model is applied to multiple votes, it makes unrealistic assumptions and produces incorrect coefficient estimates. The advantages of a multilevel IRT model over a probit or logit model are illustrated with a Monte Carlo experiment and an example from the U.S. House. Finally, this paper provides a practical guide to fitting this model to legislative voting data.
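
The paper's core point, that a pooled probit on stacked votes is an IRT model with its vote-specific parameters constrained to be equal, can be illustrated by simulation. Below is a sketch in the spirit of the paper's Monte Carlo setup; the sizes and parameter distributions are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_leg, n_votes = 100, 50
theta = rng.normal(size=n_leg)            # legislator ideal points
a = rng.normal(1.0, 0.5, size=n_votes)    # vote-specific discrimination
b = rng.normal(0.0, 1.0, size=n_votes)    # vote-specific difficulty

# Probit IRT: P(yea on vote j by legislator i) = Phi(a_j * theta_i - b_j).
# A pooled probit on the stacked votes is the special case a_j = a, b_j = b,
# i.e., the restriction the multilevel IRT model relaxes.
p = norm.cdf(np.outer(theta, a) - b)
votes = (rng.random(p.shape) < p).astype(int)
print(votes.shape, round(votes.mean(), 2))  # 100 x 50 vote matrix
```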

Study on the herbology test items in Korean medicine education using Item Response Theory (문항반응이론을 활용한 한의학 교육에서 본초학 시험문항에 대한 연구)

  • Chae, Han; Han, Sang Yun; Yang, GiYoung; Kim, Hyungwoo
    • The Korea Journal of Herbology, v.37 no.2, pp.13-21, 2022
  • Objectives: The evaluation of academic achievement is pivotal for establishing an accurate direction and adequate level of medical education. The purpose of this study was to establish, for the first time, an item-analysis technique based on Item Response Theory (IRT) for multiple-choice herbology tests in traditional Korean medicine education, where such analysis has previously been unavailable owing to the difficulty of test theory and statistical calculation. Methods: The answers of 390 students (2012-2018) to a 14-item herbology test in a college of Korean medicine were used for the item analysis. For a multidimensional analysis of item characteristics, difficulty, discrimination, and guessing parameters, along with item-total correlation and percentage of correct answers, were calculated using Classical Test Theory (CTT) and IRT. Results: The validity parameters of strong and weak items were illustrated from multiple perspectives. There were 4 items with six acceptable index scores and 5 items with only one acceptable index score. The IRT item discrimination showed no significant correlation with the CTT difficulty and discrimination indices, a finding that calls for the attention of medical-education professionals with regard to test credibility. Conclusion: Critical suggestions for the development, use, and revision of test items in the e-learning and evidence-based teaching era were made based on the results of the item analysis using IRT. This study provides a first foundation for upgrading the quality of Korean medicine education using test theory.
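
The guessing parameter mentioned above comes from the three-parameter logistic (3PL) model, which adds a lower asymptote to the 2PL curve. A minimal sketch with invented values (a five-option multiple-choice item suggests a guessing floor near 0.2):

```python
import numpy as np

def icc_3pl(theta, a, b, c):
    """3PL item characteristic curve: discrimination a, difficulty b,
    and guessing (lower asymptote) c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
# Invented item: even very low-ability examinees answer correctly
# about 20% of the time by guessing among five options.
print(np.round(icc_3pl(theta, a=1.2, b=0.0, c=0.2), 3))
```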

A study of the adequate number of questions in a mock test for the paramedic national examination using item response theory (문항반응이론을 적용한 1급 응급구조사 국가시험 대비 모의시험의 적정성 연구)

  • Jung Eun Lee; Jundong Moon; Ajung Kim
    • The Korean Journal of Emergency Medical Services, v.28 no.2, pp.7-19, 2024
  • Purpose: To adjust the number of items on a national test, this study used item response theory to examine changes in average scores, reliability, difficulty, and discrimination as the number of items was adjusted. Methods: We analyzed the dichotomously coded correct and incorrect answers of 473 examinees in a mock test conducted in 2023. Additionally, as an exploratory pilot study, we used an online questionnaire to survey experts on their perceptions of the appropriate number of items for each test subject from January 18, 2024, to February 15, 2024. Results: Regarding the number of items on the national exam, experts preferred to reduce the number of items on management of emergency patients (33.14±6.09, p<.05) and on advanced emergency medical care: subtopics (104.49±11.55, p<.05), as well as the total number of questions (217.82±20.95, p<.05). In a simulation in which items with low item fit were removed after fitting a two-parameter item response theory model, reliability was maintained at .910 through the 5th test, consisting of 185 questions, with little loss of difficulty, discrimination, or average score, and there was no correlation between the number of items and the average score. Conclusion: Experts responded that reducing the number of items on the national exam was appropriate. In the item-reduction simulation, there was no significant loss in average score, difficulty, discrimination, or reliability. More reliable results could be obtained if the analysis were based on a validity analysis and conducted using actual national exams.
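
Fitting a 2PL model and screening item fit requires a dedicated IRT package, so the sketch below substitutes a simple CTT stand-in: repeatedly drop the item with the lowest corrected item-total correlation and track KR-20 reliability. The sample size mimics the study's 473 examinees; the test length and data are invented.

```python
import numpy as np

def kr20(X):
    """KR-20 reliability for a 0/1 response matrix (rows = examinees)."""
    k = X.shape[1]
    p = X.mean(axis=0)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / X.sum(axis=1).var(ddof=1))

def drop_worst(X):
    """Drop the item with the lowest corrected item-total correlation,
    a CTT stand-in for the paper's 2PL item-fit criterion."""
    total = X.sum(axis=1)
    r = [np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])]
    return np.delete(X, int(np.argmin(r)), axis=1)

rng = np.random.default_rng(2)
theta = rng.normal(size=473)                 # mimic the 473 examinees
b = rng.normal(size=230)                     # invented test length
X = (rng.random((473, 230)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
for _ in range(5):                           # shorten the test stepwise
    X = drop_worst(X)
    print(X.shape[1], round(kr20(X), 3))     # items remaining, reliability
```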

Development of Parallel Short Forms of the Convergent Thinking and Problem Solving Inventory Utilizing Item Response Theory : A Case Study of Students in H University (문항반응이론을 적용한 융합적 사고 및 문제해결 역량진단 도구의 병렬 단축형 개발 : H 대학교를 중심으로)

  • You, Hyunjoo; Nam, Na-Ra
    • Journal of Engineering Education Research, v.26 no.3, pp.35-41, 2023
  • The study was conducted to develop two parallel short forms of the Convergent Thinking and Problem Solving questionnaires, which are part of H University's core competency diagnostic tools, based on item response theory. Item responses of 2,580 students were analyzed using the Graded Response Model (GRM) to determine the difficulty and discrimination of each item. The research results are as follows. Two parallel short tests of 12 items each were developed for the Convergent Thinking questionnaire, which originally comprised 17 items. Likewise, the Problem Solving questionnaire, which originally consisted of 15 items, was divided into two parallel short forms of 9 items each. The reliability of the shortened parallel tests was confirmed through internal-consistency analysis, and their similarity to the original tests was established through correlation analysis. By developing the shortened tests, this study contributes to the quality management of competency-based education and programs at H University. Based on the results, implications are presented, along with limitations and discussion.
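
In the Graded Response Model used above, each boundary curve P(X >= k) is a 2PL function, and category probabilities are successive differences of those curves. A minimal sketch with an invented five-point Likert item:

```python
import numpy as np

def grm_category_probs(theta, a, bs):
    """Graded Response Model: P*(X >= k) = logistic(a * (theta - b_k));
    category probabilities are successive differences of the P* curves."""
    theta = np.atleast_1d(theta)
    p_star = 1 / (1 + np.exp(-a * (theta[:, None] - np.asarray(bs))))
    p_star = np.hstack([np.ones((len(theta), 1)),   # P(X >= 0) = 1
                        p_star,
                        np.zeros((len(theta), 1))])  # P(X >= max+1) = 0
    return p_star[:, :-1] - p_star[:, 1:]

# Invented 5-point Likert item (4 thresholds); probabilities sum to 1.
print(np.round(grm_category_probs([0.0], a=1.5, bs=[-2, -0.5, 0.5, 2]), 3))
```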

Vocabulary Size of Korean EFL University Learners: Using an Item Response Theory Model

  • Lee, Yongsang; Chon, Yuah V.; Shin, Dongkwang
    • English Language & Literature Teaching, v.18 no.1, pp.171-195, 2012
  • Noticing insufficient interest in the assessment of EFL learners' vocabulary levels or sizes, the researchers developed two tests identical in form (Forms A and B) to assess the lexical knowledge of Korean university learners at the 1st-10th 1,000-word bands by adapting a pre-established vocabulary levels test (VLT). Of equal concern was to investigate whether the VLT was an equally valid and reliable instrument for measuring the lexical knowledge of EFL learners. The participants were 804 university freshmen enrolled in a General Education English course at four different colleges. The learners were asked to respond to either Form A or Form B. While scores generally fell towards the lower frequency bands, multiple regression found the Korean College Scholastic Ability Test (CSAT) to be a significant variable for predicting the learners' vocabulary sizes. From a methodological perspective, however, noticeable differences between Forms A and B were found through item response theory analysis. The findings of the study offer suggestions on how future VLTs for testing EFL learners might be redesigned.
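
The regression step described above, predicting vocabulary size from the CSAT, can be sketched with ordinary least squares. Everything below (scores, coefficients, the Form A/B indicator) is simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 804                                    # mimic the 804 freshmen
csat = rng.normal(100, 10, n)              # invented CSAT-like scores
form_b = rng.integers(0, 2, n)             # which form each learner took
vocab = 20 * csat + 50 * form_b + rng.normal(0, 300, n)  # invented sizes

# OLS: vocabulary size on an intercept, CSAT, and the form indicator.
X = np.column_stack([np.ones(n), csat, form_b])
beta, *_ = np.linalg.lstsq(X, vocab, rcond=None)
print(np.round(beta, 1))                   # intercept, CSAT slope, form effect
```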

Some Asymptotic Properties of Conditional Covariance in the Item Response Theory

  • Kim, Hae-Rim
    • Communications for Statistical Applications and Methods, v.7 no.3, pp.959-966, 2000
  • The dimensionality-assessment procedure DETECT uses near-zero conditional covariances as an indication of unidimensionality. This study establishes the convergence to zero of conditional covariances when the data are unidimensional, thereby extending the theoretical grounds of DETECT.
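
The conditional covariances in question are covariances of item pairs computed within groups of examinees sharing the same rest score. A minimal sketch; under simulated unidimensional data the estimate should indeed be near zero, as the paper's asymptotics predict.

```python
import numpy as np

def conditional_cov(X, i, j):
    """Weighted average covariance of items i and j, conditioning on the
    rest score (total excluding items i and j) -- the quantity DETECT
    expects to be near zero under unidimensionality."""
    rest = X.sum(axis=1) - X[:, i] - X[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        grp = X[rest == s]
        if len(grp) > 1:
            covs.append(np.cov(grp[:, i], grp[:, j])[0, 1])
            weights.append(len(grp))
    return np.average(covs, weights=weights)

# Simulated unidimensional (Rasch-like) data: one latent trait drives all items.
rng = np.random.default_rng(4)
theta = rng.normal(size=5000)
b = np.linspace(-1.5, 1.5, 20)
X = (rng.random((5000, 20)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
print(round(conditional_cov(X, 0, 1), 4))  # close to zero
```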

Characteristics of Problem on the Area of Probability and Statistics for the Korean College Scholastic Aptitude Test

  • Lee, Kang-Sup; Kim, Jong-Gyu; Hwang, Dong-Jou
    • Research in Mathematical Education, v.11 no.4, pp.275-283, 2007
  • In this study, we gave 132 high school students fifteen probability and nine statistics problems from the Korean College Scholastic Aptitude Test and then analyzed their answers using classical test theory and item response theory. Using classical test theory (Testian 1.0), we obtained item reliabilities of 0.730-0.765; using item response theory (Bayesian 1.0), we obtained item difficulties of -2.32 to 0.83 and discriminations of 0.55 to 2.71. From these results, we identify what the students did not understand well, and why.
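
The CTT side of such an analysis, difficulty as proportion correct and discrimination as corrected item-total correlation, is straightforward to reproduce. The sketch below uses simulated data sized like the study (132 students, 24 items); it is a generic stand-in, not the Testian or Bayesian software named above.

```python
import numpy as np

def ctt_item_stats(X):
    """Classical item statistics for a 0/1 matrix: difficulty = proportion
    correct; discrimination = corrected item-total correlation."""
    total = X.sum(axis=1)
    diff = X.mean(axis=0)
    disc = np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
    return diff, disc

rng = np.random.default_rng(5)
theta = rng.normal(size=132)               # mimic the 132 students
b = rng.normal(size=24)                    # 24 items, as in the paper
X = (rng.random((132, 24)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
d, r = ctt_item_stats(X)
print(np.round(d, 2))                      # per-item difficulty
print(np.round(r, 2))                      # per-item discrimination
```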

Development of an Item Selection Method for Test-Construction by using a Relationship Structure among Abilities

  • Kim, Sung-Ho; Jeong, Mi-Sook; Kim, Jung-Ran
    • Communications for Statistical Applications and Methods, v.8 no.1, pp.193-207, 2001
  • When designing a test set, we need to consider constraints on items that are deemed important by item developers or test specialists. The constraints essentially concern the components of the test domain, or the abilities, relevant to a given test set; if the test domain could be represented in a more refined form, test construction could be carried out more efficiently. We assume that the relationships among task abilities are representable by a causal model and that item response theory (IRT) is not fully available for them. In such a case we cannot apply traditional item selection methods that are based on IRT. In this paper, we use entropy as an uncertainty measure for making inferences about task abilities, and we develop an optimal item-selection algorithm that most reduces the entropy of the task abilities when items are selected from an item pool.
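
The entropy-reduction criterion can be illustrated in a deliberately simplified setting: a single discrete ability state and items characterized only by their correct-response probability in each state. All numbers below are invented, and the paper's causal-model machinery is omitted.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.clip(p, 1e-12, 1)
    return -(p * np.log2(p)).sum()

def expected_posterior_entropy(prior, p_correct_given_state):
    """Expected entropy of the ability posterior after observing one item's
    response, for a discrete ability state with the given prior."""
    p1 = (prior * p_correct_given_state).sum()          # P(correct)
    post1 = prior * p_correct_given_state / p1          # posterior if correct
    post0 = prior * (1 - p_correct_given_state) / (1 - p1)  # if incorrect
    return p1 * entropy(post1) + (1 - p1) * entropy(post0)

# Two-state ability (non-master, master) and three candidate items;
# the probabilities are illustrative, not from the paper.
prior = np.array([0.5, 0.5])
items = np.array([[0.2, 0.9],      # sharp item
                  [0.4, 0.6],      # weak item
                  [0.5, 0.5]])     # uninformative item
scores = [expected_posterior_entropy(prior, q) for q in items]
print(int(np.argmin(scores)))      # selects the item that most reduces entropy
```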
