• Title/Summary/Keyword: test item


A Development of the Test for Mathematical Creative Problem Solving Ability

  • Lee, Kang-Sup;Hwang, Dong jou;Seo, Jong-Jin
    • Research in Mathematical Education
    • /
    • v.7 no.3
    • /
    • pp.163-189
    • /
    • 2003
  • The purpose of this study is to develop a test of creative problem-solving ability in mathematics that can be used with both mathematically gifted and regular students. The test is composed of three categories that are factors of creativity: fluency (number of responses), flexibility (number of different kinds of responses), and originality (degree of uniqueness of responses). After the test was administered to 462 middle school students, an item analysis was carried out. The analysis showed the test to be sound, with all indices above the standard criteria: reliability 0.80; validity of item 1 (1.05), item 2 (1.10), item 3 (0.85), item 4 (0.90), item 5 (1.08); item difficulty of item 1 (-0.22), item 2 (-0.41), item 3 (0.23), item 4 (0.40), item 5 (-0.01); and item discriminating power of item 1 (0.73), item 2 (0.73), item 3 (0.67), item 4 (0.51), item 5 (0.56). This means that the test is useful for assessing creative problem-solving ability in mathematics. (A brief item-analysis sketch along these lines appears below.)

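Below is a minimal sketch of the kind of classical item analysis described in the abstract above (item difficulty, corrected item-total discrimination, and a reliability estimate). It is not the authors' scoring procedure; the data, the 0-4 rating scale, and all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical creativity ratings: 462 students x 5 items, each item scored 0-4.
    scores = rng.integers(0, 5, size=(462, 5)).astype(float)
    total = scores.sum(axis=1)

    # Item difficulty as the mean item score rescaled to [0, 1].
    difficulty = scores.mean(axis=0) / 4.0

    # Item discrimination as the corrected item-total correlation
    # (item score vs. total score excluding that item).
    discrimination = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

    # Cronbach's alpha as a rough reliability estimate.
    k = scores.shape[1]
    alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / total.var(ddof=1))

    print("difficulty:", difficulty.round(2))
    print("discrimination:", discrimination.round(2))
    print("alpha:", round(alpha, 2))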

A blueprint for designing and developing the listening and the reading test of National English Ability Test (NEAT): Item-types decision-making model (국가영어능력평가시험(NEAT)의 검사지 구성의 원칙과 절차: 문항 유형 확정 모델)

  • Kim, Yong-Myeong
    • English Language & Literature Teaching
    • /
    • v.16 no.4
    • /
    • pp.153-184
    • /
    • 2010
  • On the basis of the five principles and four criteria for designing and developing the listening and reading tests of the National English Ability Test (NEAT), this study presents an Item-Types Decision-Making Model as a blueprint for designing and constructing the two tests. It sets up criteria for validating item types, designs a modular type of test specification, constructs an item-types bank, and specifies a complementary type of test specification for the two tests. Drawing these threads together, it constructs the Item-Types Decision-Making Model from components such as the item-type pool, the validity criteria and procedures for testing item types, the item-types bank, and the modular and complementary test specifications. It then shows how the Model works in developing and constructing the two level-differentiated listening and reading tests (the 2nd and 3rd rank) of NEAT. Finally, it discusses implications and applications of the Model for the two level-differentiated forms (the A and B type) of the 2014 CSAT (College Scholastic Ability Test) system, the National Assessment of Educational Achievement (NAEA), and classroom testing. In conclusion, the Item-Types Decision-Making Model functions as a testing template in an item development system and as a matrix in an item-types bank system.


Application of AIG Implemented within CLASS Software for Generating Cognitive Test Item Models

  • SA, Seungyeon;RYOO, Hyun Suk;RYOO, Ji Hoon
    • Educational Technology International
    • /
    • v.23 no.2
    • /
    • pp.157-181
    • /
    • 2022
  • Scale scores for cognitive domains have been used as an important indicator for both academic achievement and clinical diagnosis. In education, for example, the Cognitive Abilities Test (CogAT) has been used to measure students' capability in academic learning; in clinical settings, the Cognitive Impairment Screening Test uses items measuring cognitive ability as a dementia screening test. We demonstrate a procedure for generating cognitive ability test items similar to those in CogAT, although the theory underlying the generation is entirely different. When creating the items, we applied automatic item generation (AIG), which reduces errors in predictions of cognitive ability while attaining higher reliability. We selected two cognitive ability item types: a time-estimation item measuring quantitative reasoning and a paper-folding item measuring visualization. Because CogAT is widely used as a cognitive measurement test, developing AIG-based cognitive test items can contribute greatly to the education field. Since CLASS is the only LMS that includes AIG technology, we used it as the AIG software to construct item models. The purpose of this study is to demonstrate the item generation process using AIG implemented within CLASS and to show the quantitative and qualitative strengths of AIG. As a result, we confirmed that more than 10,000 items could be generated from a single item model (the quantitative aspect) and that the validity of the items could be assured through a procedure based on ECD and AE (the qualitative aspect). This reliable, item-model-based generation process would be key to developing accurate cognitive measurement tests. (A small template-based generation sketch follows below.)
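
As a rough illustration of how a single item model can yield thousands of items, here is a hypothetical template-based generation sketch. The stem, slot values, and answer rule are invented for illustration and are not the CLASS item models or the actual CogAT-style items used in the study.

    from itertools import product

    # Hypothetical item model: a stem with variable slots plus a rule for the key.
    stem = "A machine starts a job at {start} and runs for {minutes} minutes. When does it finish?"
    slots = {
        "start": [f"{h}:{m:02d}" for h in range(1, 13) for m in (0, 15, 30, 45)],  # 48 values
        "minutes": list(range(5, 60, 5)),                                           # 11 values
    }

    def key(start, minutes):
        # Compute the correct answer on a 12-hour clock.
        h, m = map(int, start.split(":"))
        t = h * 60 + m + minutes
        return f"{(t // 60 - 1) % 12 + 1}:{t % 60:02d}"

    items = [
        {"stem": stem.format(start=s, minutes=mi), "key": key(s, mi)}
        for s, mi in product(slots["start"], slots["minutes"])
    ]

    # 48 * 11 = 528 items from one model; adding more slots (names, distractor
    # rules, units) multiplies the count quickly past 10,000.
    print(len(items))
    print(items[0])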

Development of an Item Selection Method for Test-Construction by using a Relationship Structure among Abilities

  • Kim, Sung-Ho;Jeong, Mi-Sook;Kim, Jung-Ran
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.1
    • /
    • pp.193-207
    • /
    • 2001
  • When designing a test set, we need to consider constraints on items that are deemed important by item developers or test specialists. The constraints essentially concern the components of the test domain, that is, the abilities relevant to a given test set, so if the test domain could be represented in a more refined form, test construction could be carried out more efficiently. We assume that relationships among task abilities are representable by a causal model and that item response theory (IRT) is not fully available for them. In such a case we cannot apply traditional item selection methods based on IRT. In this paper, we use entropy as an uncertainty measure for making inferences on task abilities and develop an optimal item selection algorithm that selects, from an item pool, the items that most reduce the entropy of the task abilities. (A minimal greedy sketch of this idea follows below.)

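A minimal greedy sketch of entropy-based item selection, assuming a single discrete ability variable and a made-up response model; the paper's causal-model formulation over several task abilities is more general than this.

    import numpy as np

    rng = np.random.default_rng(1)
    n_items, n_levels = 20, 3                  # item pool size, ability levels
    prior = np.full(n_levels, 1.0 / n_levels)  # uniform prior over ability

    # Hypothetical P(correct on item i | ability level a), increasing in ability.
    p_correct = np.sort(rng.uniform(0.2, 0.9, size=(n_items, n_levels)), axis=1)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def expected_posterior_entropy(item, belief):
        # Average the posterior entropy over the two possible responses.
        h = 0.0
        for like in (p_correct[item], 1.0 - p_correct[item]):
            joint = like * belief
            marginal = joint.sum()
            h += marginal * entropy(joint / marginal)
        return h

    # Greedily assemble a 5-item test; the belief stays at the prior here, since
    # no responses are observed at design time (with responses it would be updated).
    selected, belief = [], prior.copy()
    for _ in range(5):
        remaining = [i for i in range(n_items) if i not in selected]
        best = min(remaining, key=lambda i: expected_posterior_entropy(i, belief))
        selected.append(best)

    print("selected items:", selected)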

The Effect of Test Anxiety, Intelligence, and Item Arrangement Order on Test Performance in Earth Science (시험불안(試驗不安) 지능(知能) 및 문항배렬(問項配列) 방식(方式)이 지구과학(地球科學) 의험수행(議驗遂行)에 미치는 효과(效果))

  • Kim, Sang-Dal;Yi, Hyang-Sun;Hwang, In-Ho
    • Journal of The Korean Association For Science Education
    • /
    • v.11 no.2
    • /
    • pp.161-178
    • /
    • 1991
  • This study was designed to investigate the effect of test anxiety, intelligence, and item arrangement order on test performance in Earth Science. The main purposes were to investigate: (1) the effect of test anxiety components on test performance in Earth Science; (2) the effect of item arrangement order on test performance in Earth Science; (3) the effect of test anxiety components on test performance in Earth Science according to learners' intelligence levels; and (4) the effect of item arrangement order according to learners' intelligence levels. The hypotheses were that there are differences among test achievement scores according to (1) test anxiety-worry levels, (2) item arrangement orders, (3) item arrangement orders by test anxiety-worry levels, (4) test anxiety-worry levels by intelligence levels, (5) test anxiety-emotionality levels, (6) item arrangement orders by test anxiety-emotionality levels, (7) test anxiety-emotionality levels by intelligence levels, and (8) item arrangement orders by intelligence levels. The test items were drawn from the first-year high school text Science (Part 1). The subjects were 164 first-grade high school boys in Pusan. They were assigned to one of three groups according to test anxiety level: a high group (upper 25% of subjects), a middle group (middle 50%), and a low group (lower 25%); they were grouped in the same way by I.Q. Analysis of variance was used to examine the hypotheses. The dependent variable was the Earth Science achievement score, and the independent variables were test anxiety (worry, emotionality) level, I.Q. level, and item arrangement order. The principal findings are as follows: (1) Test achievement scores tend to decrease as test anxiety (worry, emotionality) increases, although the result is not statistically significant. (2) There is no significant difference among achievement scores according to item arrangement orders. (3) The higher the I.Q., the stronger the effect of test anxiety, and I.Q. has a significant interaction effect with test anxiety. (4) There is a significant interaction effect between I.Q. level and item arrangement order. (An illustrative two-way ANOVA sketch follows below.)

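For reference, a two-way ANOVA of achievement scores by anxiety level and item arrangement order could be set up as in the sketch below. The data frame, factor levels, and scores are hypothetical, and the actual study also included I.Q. and the worry/emotionality split.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(2)
    n = 164
    df = pd.DataFrame({
        "anxiety": rng.choice(["high", "middle", "low"], size=n, p=[0.25, 0.5, 0.25]),
        "arrangement": rng.choice(["easy_to_hard", "hard_to_easy"], size=n),
        "score": rng.normal(60, 10, size=n),   # hypothetical Earth Science scores
    })

    # Two-way ANOVA with interaction: score ~ anxiety level * arrangement order.
    model = smf.ols("score ~ C(anxiety) * C(arrangement)", data=df).fit()
    print(anova_lm(model, typ=2))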

A study on the improvement of the test items in Korean scholastic ability test (English test) (대학수학능력시험(영어시험)의 문항개선에 대한 연구)

  • Jeon, Sung-Ae
    • English Language & Literature Teaching
    • /
    • v.18 no.2
    • /
    • pp.189-211
    • /
    • 2012
  • The purpose of the study was to explore ways to improve the test items on the Korean scholastic ability test. More specifically, the researchers investigated whether use of the target language in test items would make a difference in total scores, discriminatory power, and item difficulty. A total of 288 high school seniors participated in the study. The subjects were divided into the experimental group (N=145) and the control group (N=143). A 25-item test resembling the Korean scholastic ability test was administered to both groups. The experimental group was given items whose questions and alternatives were all presented in English, whereas the control group was given items whose questions and alternatives were presented in Korean only. Statistical analyses revealed that use of English vs. Korean in the questions and alternatives made a significant difference in total scores, item discrimination, and item difficulty level. The findings strongly suggest that use of English is one way to improve the quality of the Korean scholastic ability test by enhancing item discrimination and face validity. Considering that the test in question is a high-stakes exam in Korea, further research on how to improve the Korean scholastic ability test is urgently called for.


Estimating the regression equations for predicting item difficulty of mathematics in the College Scholastic Ability Test (대학수학능력시험 수리 영역 문항 난이도 예측을 위한 회귀모형 추정)

  • Lee, Sang-Ha;Lee, Bong-Ju;Son, Hong-Chan
    • The Mathematical Education
    • /
    • v.46 no.4
    • /
    • pp.407-421
    • /
    • 2007
  • The purpose of this study is to identify the item characteristics that are thought to affect item difficulty and to estimate regression equations for predicting the item difficulty of mathematics items in the College Scholastic Ability Test (CSAT). We selected six variables related to item characteristics based on learning theories: content area, cognitive domain, novelty, item type, number of concepts, and amount of computation. Using data from the CSAT mathematics tests administered in 2004-2006, item difficulty was regressed on the six variables, the location of an item, and the item writer's judgment of difficulty. The novelty of an item was found to be statistically insignificant in explaining item difficulty. Four regression equations with different sets of independent variables explained 70~80% of the variance in item difficulty and were validated by predicting the item difficulty of a mock CSAT in 2006. (A minimal regression sketch in this spirit follows below.)

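A minimal sketch of regressing item difficulty on item characteristics, in the spirit of the study above. The predictor names, coding, and data here are hypothetical, not the CSAT data or the authors' variable definitions.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 90   # hypothetical pool of CSAT-style mathematics items
    df = pd.DataFrame({
        "difficulty": rng.uniform(0.2, 0.9, n),          # proportion correct
        "cognitive_domain": rng.choice(["compute", "understand", "apply"], n),
        "item_type": rng.choice(["multiple_choice", "short_answer"], n),
        "n_concepts": rng.integers(1, 5, n),
        "computation": rng.integers(1, 4, n),             # amount of computation
        "location": rng.integers(1, 31, n),               # position in the test
    })

    # Ordinary least squares: difficulty regressed on item characteristics.
    model = smf.ols(
        "difficulty ~ C(cognitive_domain) + C(item_type) + n_concepts"
        " + computation + location",
        data=df,
    ).fit()
    print(model.summary())
    print("R-squared:", round(model.rsquared, 2))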

Item Response Analysis on Items Related to Statistical Unit in the National Academic Aptitude Test -Empirical Study for Jellabuk-do Preliminary Testee- (대학수학능력시험의 통계단원 문제에 대한 문항반응분석 - 전북지역 예비 수험생을 대상으로 한 탐색연구 -)

  • Choi, Kyoung-Ho
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.3
    • /
    • pp.327-335
    • /
    • 2010
  • Item response theory provides invariant results about students' ability, regardless of item difficulty and discrimination, and it is an item analysis method that yields comparable ability scores even when students repeatedly take different tests. In this paper, we examined item difficulty and item discrimination and, using item response theory, analyzed the items related to the statistics unit in the national academic aptitude tests administered from 2000 to 2009. As a result, we found that about 60 percent of the items were too difficult for high school students to solve, although item discrimination proved to be good. (A minimal 2PL item-characteristic-curve sketch follows below.)
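
A minimal sketch of the two-parameter logistic (2PL) model that underlies this kind of item response analysis: it computes the probability of a correct response as a function of ability for a few hypothetical difficulty and discrimination values; the actual items and parameter estimates are in the paper.

    import numpy as np

    def p_correct(theta, a, b):
        # 2PL model: P(correct | ability theta) with discrimination a, difficulty b.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)          # ability grid
    items = [                              # hypothetical (a, b) pairs
        {"a": 1.2, "b": 1.5},              # hard item with strong discrimination
        {"a": 0.6, "b": -0.5},             # easier item with weaker discrimination
    ]

    for i, it in enumerate(items, 1):
        probs = p_correct(theta, it["a"], it["b"])
        print(f"item {i}:", np.round(probs, 2))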

Study on the herbology test items in Korean medicine education using Item Response Theory (문항반응이론을 활용한 한의학 교육에서 본초학 시험문항에 대한 연구)

  • Chae, Han;Han, Sang Yun;Yang, GiYoung;Kim, Hyungwoo
    • The Korea Journal of Herbology
    • /
    • v.37 no.2
    • /
    • pp.13-21
    • /
    • 2022
  • Objectives: The evaluation of academic achievement is pivotal for setting an accurate direction and an adequate level for medical education. The purpose of this study was to establish, for the first time, an item analysis technique based on Item Response Theory (IRT) for analyzing multiple-choice herbology tests in traditional Korean medicine education, where such analysis has not been available because of the difficulty of test theory and statistical calculation. Methods: The answers of 390 students (2012-2018) to a 14-item herbology test in a college of Korean medicine were used for the item analysis. For a multidimensional analysis of item characteristics, difficulty, discrimination, and guessing parameters, along with item-total correlation and percentage of correct answers, were calculated using Classical Test Theory (CTT) and IRT. Results: The validity parameters of strong and weak items were illustrated from multiple perspectives. There were 4 items with six acceptable index scores and 5 items with only one acceptable index score. The IRT item discrimination was found to have no significant correlation with the CTT difficulty and discrimination indices, which calls for the attention of medical education professionals with regard to test credibility. Conclusion: Critical suggestions for the development, utilization, and revision of test items in the era of e-learning and evidence-based teaching were made based on the results of the item analysis using IRT. The current study provides a first foundation for upgrading the quality of Korean medicine education using test theory.

A Preliminary Study for Development of the Aphasia Screening Test (실어증 선별검사 도구개발을 위한 예비연구)

  • Kim, Hyang-Hee;Lee, Hyun-Joung;Kim, Deog-Yong;Heo, Ji-Hoe;Kim, Yong-Wook
    • Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.7-18
    • /
    • 2006
  • An aphasia screening test serves the main purpose of differentiating aphasic patients from non-aphasic patients in a quick and efficient manner. As a preliminary study for developing a standardized aphasia screening test for Korean patients, we constructed an aphasia screening test consisting of items from the Paradise Korean version of the Western Aphasia Battery (PK-WAB). All test items were analyzed in order to extract items with optimal item discrimination and adequate item difficulty indices. From the results, we were able to select, from each subtest, items that gave optimal results in a discriminant function analysis separating the aphasic and normal control groups. It is thus expected that this item-analysis information can be used in developing a Korean aphasia screening test. (A minimal discriminant-analysis sketch follows below.)

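A minimal sketch of using linear discriminant analysis to check how well a set of screening items separates aphasic from non-aphasic groups, as the abstract describes. The item scores and group labels are simulated, not PK-WAB data, and the study's actual discriminant procedure may differ.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_per_group, n_items = 60, 8

    # Hypothetical item scores in [0, 1]; the aphasic group scores lower on average.
    aphasic = rng.normal(0.5, 0.2, size=(n_per_group, n_items)).clip(0, 1)
    control = rng.normal(0.8, 0.15, size=(n_per_group, n_items)).clip(0, 1)
    X = np.vstack([aphasic, control])
    y = np.array([1] * n_per_group + [0] * n_per_group)   # 1 = aphasic

    # Cross-validated accuracy of the discriminant classifier over the item scores.
    lda = LinearDiscriminantAnalysis()
    acc = cross_val_score(lda, X, y, cv=5).mean()
    print("cross-validated classification accuracy:", round(acc, 2))

    # Items with larger absolute discriminant coefficients separate the groups more.
    lda.fit(X, y)
    print("discriminant coefficients:", np.round(lda.coef_[0], 2))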