• Title/Abstract/Keyword: Item response theory

Search results: 95

대학수학능력시험의 통계단원 문제에 대한 문항반응분석 - 전북지역 예비 수험생을 대상으로 한 탐색연구 - (Item Response Analysis on Items Related to Statistical Unit in the National Academic Aptitude Test -Empirical Study for Jellabuk-do Preliminary Testee-)

  • 최경호
    • Communications for Statistical Applications and Methods / Vol. 17, No. 3 / pp. 327-335 / 2010
  • Item response theory is an item-analysis framework in which item difficulty and discrimination remain invariant with respect to the group that took the test, and in which examinees receive their own ability scores even when they take a different test each time. In this study, we analyzed the statistics items that appeared on the College Scholastic Ability Test over the ten years from 2000 to 2009 using item response theory, and examined item discrimination and item difficulty. The results showed that nearly 60% of the items were difficult, whereas item discrimination turned out to be relatively good.
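
The invariance claim in this abstract rests on modeling each item with its own difficulty and discrimination parameters. As a reference point only (the abstract does not state which logistic model was fitted), here is the common two-parameter logistic item characteristic curve:

```latex
% Two-parameter logistic (2PL) item characteristic curve: the probability that
% an examinee with ability \theta answers item i correctly.
P_i(\theta) = \frac{1}{1 + \exp[-a_i(\theta - b_i)]}
% a_i : item discrimination (slope), b_i : item difficulty (ability at which
% P_i = 0.5), \theta : examinee ability. Because a_i and b_i are parameters of
% the item rather than of the examinee group, they are (in principle) invariant
% across groups -- the property the abstract cites.
```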

한국어판 욕창예방지식도구의 고전검사이론과 문항반응이론을 적용한 문항분석, 타당도와 신뢰도 (Item Analysis using Classical Test Theory and Item Response Theory, Validity and Reliability of the Korean version of a Pressure Ulcer Prevention Knowledge)

  • 강명자;김명수
    • Journal of Korean Biological Nursing Science / Vol. 20, No. 1 / pp. 11-19 / 2018
  • Purpose: The purposes of this study were to perform item analysis using classical test theory (CTT) and item response theory (IRT), and to establish the validity and reliability of the Korean version of a pressure ulcer prevention knowledge instrument. Methods: The 26-item pressure ulcer prevention knowledge instrument was translated into Korean, and item analysis was conducted on the 22 items with an adequate content validity index (CVI). A total of 240 registered nurses at 2 university hospitals completed the questionnaire. Each item was analyzed with CTT and with IRT under a 2-parameter logistic model. The quality of the response alternatives, item difficulty, and item discrimination were evaluated. For validity and reliability testing, the Pearson correlation coefficient and Kuder-Richardson 20 (KR-20) were used. Results: The scale CVI was .90 (item-CVI range = .75-1.00). The total correct answer rate in this population was relatively low at 52.5%. The quality of the response alternatives was relatively good (range = .02-.83). Item difficulty ranged from .10 to .86 under CTT and from -12.19 to 29.92 under IRT; by IRT, 12 items were of low, 2 of medium, and 8 of high difficulty. Item discrimination ranged from .04 to .57 under CTT and from .00 to 1.47 under IRT. Overall internal consistency (KR-20) was .62 and stability (test-retest) was .82. Conclusion: The instrument showed relatively weak construct validity and item discrimination under IRT. Cautious use of the Korean version is therefore recommended, given the many attractive response alternatives and the low internal consistency.
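
For readers unfamiliar with the CTT quantities reported above, the sketch below shows how item difficulty (proportion correct), corrected item-total discrimination, and KR-20 are typically computed from a binary response matrix; the data are simulated for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated binary response matrix: 240 respondents x 22 items (1 = correct).
X = (rng.random((240, 22)) < rng.uniform(0.1, 0.9, size=22)).astype(int)

n_persons, n_items = X.shape
total = X.sum(axis=1)

# CTT item difficulty: proportion of correct responses per item.
difficulty = X.mean(axis=0)

# CTT item discrimination: corrected item-total (point-biserial) correlation,
# i.e. correlation of each item with the total score excluding that item.
discrimination = np.array([
    np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(n_items)
])

# KR-20 internal consistency for dichotomous items.
p = difficulty
q = 1.0 - p
kr20 = (n_items / (n_items - 1)) * (1.0 - (p * q).sum() / total.var(ddof=1))

print(difficulty.round(2), discrimination.round(2), round(kr20, 2))
```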

A Unifying Model for Hypothesis Testing Using Legislative Voting Data: A Multilevel Item-Response-Theory Model

  • Jeong, Gyung-Ho
    • 분석과 대안 / Vol. 5, No. 1 / pp. 3-24 / 2021
  • This paper introduces a multilevel item-response-theory (IRT) model as a unifying model for hypothesis testing using legislative voting data. This paper shows that a probit or logit model is a special type of multilevel IRT model. In particular, it is demonstrated that, when a probit or logit model is applied to multiple votes, it makes unrealistic assumptions and produces incorrect coefficient estimates. The advantages of a multilevel IRT model over a probit or logit model are illustrated with a Monte Carlo experiment and an example from the U.S. House. Finally, this paper provides a practical guide to fitting this model to legislative voting data.
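
To make the probit connection concrete, a common way of writing the two-parameter IRT model for roll-call data is sketched below; the notation is illustrative and not taken from the paper.

```latex
% Probability that legislator i votes "yea" on roll call j:
\Pr(y_{ij} = 1) = \Phi(\beta_j \theta_i - \alpha_j)
% \theta_i : legislator i's ideal point, \beta_j : item discrimination,
% \alpha_j : item difficulty, \Phi : standard normal CDF.
% For a single roll call j (so \alpha_j and \beta_j are constants) this is an
% ordinary probit in \theta_i; a multilevel IRT model lets the item parameters
% vary across the multiple votes, which is the sense in which the probit/logit
% model is a special case.
```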

문항반응이론을 활용한 한의학 교육에서 본초학 시험문항에 대한 연구 (Study on the herbology test items in Korean medicine education using Item Response Theory)

  • 채한;한상윤;양기영;김형우
    • 대한본초학회지 / Vol. 37, No. 2 / pp. 13-21 / 2022
  • Objectives: The evaluation of academic achievement is pivotal for setting an accurate direction and an adequate level of medical education. The purpose of this study was to introduce item analysis based on item response theory (IRT) for multiple-choice herbology tests in traditional Korean medicine education, where such analysis has not been used because of the difficulty of the test theory and its statistical computation. Methods: The answers of 390 students (2012-2018) to a 14-item herbology test at a college of Korean medicine were used for the item analysis. For a multidimensional view of item characteristics, difficulty, discrimination, and guessing parameters, along with item-total correlation and percentage of correct answers, were calculated using classical test theory (CTT) and IRT. Results: The validity parameters of strong and weak items were illustrated from multiple perspectives. There were 4 items with six acceptable index scores and 5 items with only one acceptable index score. The IRT item discrimination showed no significant correlation with the CTT difficulty and discrimination indices, which calls for attention from medical education professionals regarding test credibility. Conclusion: Based on the item analysis with IRT, suggestions were made for the development, utilization, and revision of test items in the era of e-learning and evidence-based teaching. This study provides a foundation for upgrading the quality of Korean medicine education using test theory.
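
The difficulty, discrimination, and guessing parameters mentioned above correspond to a three-parameter logistic (3PL) specification; the abstract does not name the model explicitly, so the standard form is sketched here for reference:

```latex
% Three-parameter logistic (3PL) item characteristic curve:
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-a_i(\theta - b_i)]}
% a_i : discrimination, b_i : difficulty, c_i : (pseudo-)guessing parameter,
% i.e. the lower asymptote -- the probability that an examinee of very low
% ability still answers item i correctly.
```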

문항반응이론을 적용한 융합적 사고 및 문제해결 역량진단 도구의 병렬 단축형 개발 : H 대학교를 중심으로 (Development of Parallel Short Forms of the Convergent Thinking and Problem Solving Inventory Utilizing Item Response Theory : A Case Study of Students in H University)

  • 유현주;남나라
    • 공학교육연구 / Vol. 26, No. 3 / pp. 35-41 / 2023
  • This study was conducted to develop two parallel short forms of the Convergent Thinking and Problem Solving questionnaires, which are part of H University's core competency diagnostic tools, based on item response theory. Item responses of 2,580 students were analyzed with the Graded Response Model (GRM) to estimate the difficulty and discrimination of each item. The results are as follows. For the Convergent Thinking questionnaire, originally 17 items, two parallel short forms of 12 items were developed. Likewise, the Problem Solving questionnaire, originally 15 items, was divided into two parallel short forms of 9 items each. The reliability of the shortened parallel tests was confirmed through internal consistency analysis, and their similarity to the original tests was established through correlation analysis. By developing the shortened tests, this study contributes to the quality management of competency-based education and programs at H University. Implications, limitations, and discussion are presented based on the results.
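
A brief sketch of the Graded Response Model named above, in standard notation for polytomous (e.g. Likert-type) items; the symbols are illustrative and not taken from the paper:

```latex
% Graded Response Model (GRM) for item j with ordered categories k = 0, ..., m:
P^{*}_{jk}(\theta) = \Pr(X_j \ge k \mid \theta)
                   = \frac{1}{1 + \exp[-a_j(\theta - b_{jk})]}, \qquad k = 1, \dots, m
% with boundary conditions P^{*}_{j0}(\theta) = 1 and P^{*}_{j,m+1}(\theta) = 0.
% The probability of responding exactly in category k is the difference of
% adjacent cumulative probabilities:
\Pr(X_j = k \mid \theta) = P^{*}_{jk}(\theta) - P^{*}_{j,k+1}(\theta)
% a_j : item discrimination, b_{jk} : category threshold (difficulty) parameters.
```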

Vocabulary Size of Korean EFL University Learners: Using an Item Response Theory Model

  • Lee, Yongsang;Chon, Yuah V.;Shin, Dongkwang
    • 영어어문교육 / Vol. 18, No. 1 / pp. 171-195 / 2012
  • Noting insufficient interest in the assessment of EFL learners' vocabulary levels or sizes, the researchers developed two tests identical in form (Forms A and B) to assess the lexical knowledge of Korean university learners at the 1st to 10th 1,000-word frequency bands by adapting a pre-established vocabulary levels test (VLT). An equal concern was to investigate whether the VLT was a valid and reliable instrument for measuring the lexical knowledge of EFL learners. The participants were 804 university freshmen from four different colleges enrolled in a General Education English course. The learners were asked to respond to either Form A or Form B. While scores generally fell toward the lower frequency bands, multiple regression found the Korean College Scholastic Ability Test (CSAT) to be a significant predictor of the learners' vocabulary sizes. From a methodological perspective, however, noticeable differences between Forms A and B were found with item response theory analysis. The findings provide suggestions on how future VLTs for testing EFL learners might be redesigned.

Some Asymptotic Properties of Conditional Covariance in the Item Response Theory

  • Kim, Hae-Rim
    • Communications for Statistical Applications and Methods / Vol. 7, No. 3 / pp. 959-966 / 2000
  • The dimensionality assessment procedure DETECT uses near-zero conditional covariances as an indication of unidimensionality. This study provides conditions under which conditional covariances converge to zero when the data are unidimensional, thereby extending the theoretical grounds of DETECT.
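
The near-zero property the abstract refers to can be stated compactly; a sketch in standard notation (not quoted from the paper):

```latex
% Local independence under a unidimensional latent trait \Theta implies, for
% any two items i \ne j,
\operatorname{Cov}(X_i, X_j \mid \Theta = \theta) = 0 \quad \text{for all } \theta.
% DETECT works with estimates of these conditional covariances (conditioning on
% an observable proxy for \theta, such as the rest score) and flags
% multidimensionality when their suitably signed average moves away from zero.
```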

Characteristics of Problem on the Area of Probability and Statistics for the Korean College Scholastic Aptitude Test

  • Lee, Kang-Sup;Kim, Jong-Gyu;Hwang, Dong-Jou
    • 한국수학교육학회지시리즈D:수학교육연구 / Vol. 11, No. 4 / pp. 275-283 / 2007
  • In this study, we gave 132 high school students fifteen probability and nine statistics problems from the Korean College Scholastic Aptitude Test and then analyzed their answers using classical test theory and item response theory. Using classical test theory (Testian 1.0), we obtained item reliabilities of 0.730-0.765; using item response theory (Bayesian 1.0), we obtained item difficulties of -2.32 to 0.83 and discriminations of 0.55 to 2.71. From these results, we identify what the students did not understand well and why.

Development of an Item Selection Method for Test-Construction by using a Relationship Structure among Abilities

  • Kim, Sung-Ho;Jeong, Mi-Sook;Kim, Jung-Ran
    • Communications for Statistical Applications and Methods / Vol. 8, No. 1 / pp. 193-207 / 2001
  • When designing a test set, we need to consider constraints on items that are deemed important by item developers or test specialists. The constraints essentially concern the components of the test domain, that is, the abilities relevant to a given test set, so if the test domain can be represented in a more refined form, test construction can be carried out more efficiently. We assume that the relationships among task abilities are representable by a causal model and that item response theory (IRT) is not fully available for them. In such a case, we cannot apply traditional item selection methods based on IRT. In this paper, we use entropy as an uncertainty measure for making inferences about task abilities and develop an optimal item selection algorithm that, at each selection from the item pool, chooses the item that most reduces the entropy of the task abilities.
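
A toy sketch of the greedy entropy-reduction idea described above: with a discrete distribution over ability states and known probabilities of a correct response in each state, choose the unused item whose expected posterior entropy is smallest. All names and numbers below are hypothetical, and the paper's actual algorithm operates on a causal model of task abilities rather than this single-variable simplification.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_posterior_entropy(prior, p_correct):
    """Expected entropy of the ability distribution after observing one item.

    prior     : (S,) distribution over ability states
    p_correct : (S,) probability of a correct answer in each state
    """
    result = 0.0
    for outcome_prob in (p_correct, 1.0 - p_correct):    # correct / incorrect
        marginal = float((prior * outcome_prob).sum())    # P(outcome)
        if marginal > 0:
            posterior = prior * outcome_prob / marginal   # Bayes update
            result += marginal * entropy(posterior)
    return result

def select_item(prior, item_bank, used):
    """Return the index of the unused item that most reduces expected entropy."""
    scores = {
        j: expected_posterior_entropy(prior, p)
        for j, p in enumerate(item_bank) if j not in used
    }
    return min(scores, key=scores.get)

# Hypothetical pool: 3 ability states, 4 candidate items.
prior = np.array([1 / 3, 1 / 3, 1 / 3])
item_bank = np.array([
    [0.2, 0.5, 0.8],   # discriminates across states
    [0.5, 0.5, 0.5],   # uninformative
    [0.1, 0.2, 0.9],
    [0.4, 0.6, 0.7],
])
print(select_item(prior, item_bank, used=set()))  # picks an informative item
```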

문항반응이론에서 피험자 능력 및 문항모수 추정 알고리즘 개발 (Development of Estimation Algorithm of Latent Ability and Item Parameters in IRT)

  • 최항석;차경준;김성훈;박정;박영선
    • Communications for Statistical Applications and Methods / Vol. 15, No. 3 / pp. 465-481 / 2008
  • In item response theory (IRT), examinee ability is estimated from the characteristics of the items, and at the same time the item parameters are estimated using each item's item characteristic curve (ICC). With maximum likelihood estimation, however, various problems can arise, such as sensitivity to the starting values. This study proposes the asymptotic approximation method (AAM) as an alternative for resolving these estimation problems; it is an effective estimation method when the data are scarce or show local fluctuations. The reliability of the 'Any Assess' system developed for this purpose was tested through simulation.
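
The abstract does not spell out the asymptotic approximation method itself, so the sketch below shows only the standard maximum-likelihood ability update under a 2PL model that such refinements aim to stabilize; the item parameters, responses, and starting value are made up for illustration.

```python
import numpy as np

def estimate_theta_2pl(responses, a, b, theta0=0.0, iters=20):
    """Newton-Raphson ML estimate of ability theta under the 2PL model.

    responses : (n,) 0/1 answers to n items
    a, b      : (n,) item discrimination and difficulty parameters
    theta0    : starting value -- ML estimation can be sensitive to it,
                which is one of the issues the paper addresses.
    """
    theta = theta0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(correct | theta)
        grad = np.sum(a * (responses - p))            # d logL / d theta
        info = np.sum(a**2 * p * (1.0 - p))           # Fisher information
        if info < 1e-8:                               # e.g. all-correct patterns
            break                                     # the ML estimate diverges here
        theta += grad / info                          # Newton/Fisher-scoring step
    return theta

# Hypothetical 5-item example.
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])
print(round(estimate_theta_2pl(responses, a, b), 3))
```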