http://dx.doi.org/10.23095/ETI.2022.23.2.157

Application of AIG Implemented within CLASS Software for Generating Cognitive Test Item Models  

SA, Seungyeon (Yonsei University)
RYOO, Hyun Suk (University of Virginia)
RYOO, Ji Hoon (Yonsei University)
Publication Information
Educational Technology International / v.23, no.2, 2022, pp. 157-181
Abstract
Scale scores for cognitive domains have been used as an important indicator of both academic achievement and clinical diagnosis. For example, in education, the Cognitive Abilities Test (CogAT) has been used to measure students' capability in academic learning. In a clinical setting, the Cognitive Impairment Screening Test uses items measuring cognitive ability as a dementia screening test. We demonstrate a procedure for generating cognitive ability test items similar to those in CogAT, although the theory underlying the generation is entirely different. When creating the cognitive test items, we applied automatic item generation (AIG), which reduces errors in predictions of cognitive ability while attaining higher reliability. We selected two cognitive ability test items: a time-estimation item measuring quantitative reasoning and a paper-folding item measuring visualization. Because CogAT has been widely used as a cognitive measurement test, developing AIG-based cognitive test items will contribute greatly to the field of education. Since CLASS is the only LMS that includes AIG technology, we used it as the AIG software to construct the item models. The purpose of this study is to demonstrate the item generation process using the AIG implemented within CLASS, along with establishing the quantitative and qualitative strengths of AIG. As a result, we confirmed, on the quantitative side, that more than 10,000 items could be generated from a single item model and, on the qualitative side, that the validity of the items could be assured by a procedure based on evidence-centered design (ECD) and assessment engineering (AE). This reliable item generation process based on item models would be the key to developing accurate cognitive measurement tests.
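To make the item-model mechanism concrete, the sketch below shows in Python how a single item model can be expanded combinatorially into many items. It illustrates the general AIG technique only, not the CLASS implementation: the stem template, variable ranges, and distractor rule for a time-estimation item type are all illustrative assumptions of ours.

```python
import itertools
import random

def to_clock(total_minutes):
    """Render a count of minutes as a 12-hour clock string, e.g. '3:05'."""
    total_minutes %= 12 * 60
    hour = (total_minutes // 60) or 12
    return f"{hour}:{total_minutes % 60:02d}"

def generate_items():
    """Enumerate every item defined by one time-estimation item model.

    An item model is a fixed stem template plus constrained variable
    ranges; each admissible combination of variable values instantiates
    one concrete item (stem, key, and distractors).
    """
    items = []
    for hour, minute, delta in itertools.product(
            range(1, 13),       # starting hour on the clock face
            range(0, 60, 5),    # starting minute, in 5-minute steps
            range(5, 60, 5)):   # elapsed time the examinee must estimate
        start = (hour % 12) * 60 + minute
        items.append({
            "stem": (f"A clock reads {to_clock(start)}. "
                     f"What time will it show {delta} minutes later?"),
            "key": to_clock(start + delta),
            # Distractors: near-miss answers 5 minutes off the key
            # (an illustrative rule; real models constrain these further).
            "distractors": [to_clock(start + delta + off) for off in (-5, 5)],
        })
    return items

items = generate_items()
print(len(items))           # 12 * 12 * 11 = 1,584 items from a single model
print(random.choice(items))
```

Even this deliberately small model yields 1,584 items; widening the variable ranges or adding variables multiplies the count, which is how a single model can exceed the 10,000 items reported in the study.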
Keywords
Automatic item generation; CLASS; Cognitive test; Item model; Validity; CogAT