• Title/Summary/Keyword: test item


Item Analysis of the 'Basic course of Information Technology' - Vocational Education Section in the College Scholastic Ability Test- ('정보 기술 기초' 교과의 문항 분석 - 대학수학능력시험 직업탐구영역을 중심으로-)

  • Kim, Jong-Hye; Kim, Ji-Hyun; Kim, Yong; Lee, Won-Gyu
    • The Journal of Korean Association of Computer Education / v.10 no.4 / pp.39-49 / 2007
  • The purpose of this study is to provide analysis resources for developing high-standard questions by analyzing the item characteristics and item usability of the 'Basic course of Information Technology' section of the College Scholastic Ability Test. For the qualitative analysis, this paper examined content validity; for the quantitative analysis, it examined item difficulty, item discrimination, item reliability, and distractors. The analysis of the 2005 and 2006 tests showed that questions were drawn evenly from the educational content, but the standard of the questions was in need of revision. The development of high-quality content in the Vocational Education Section is needed to meet the standards of the College Scholastic Ability Test. Therefore, questions with a range of difficulties and acceptable discrimination need to be developed.

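The item statistics named in the abstract above (item difficulty, discrimination, and distractor behavior) follow standard classical test theory definitions. Below is a minimal sketch, not taken from the paper, of how such statistics are typically computed from a scored response matrix; the data are synthetic and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 0/1 scores for 200 examinees on 20 items (illustrative only).
scores = (rng.random((200, 20)) < np.linspace(0.35, 0.85, 20)).astype(float)

total = scores.sum(axis=1)          # each examinee's total score

# Classical item difficulty: proportion of examinees answering the item correctly.
difficulty = scores.mean(axis=0)

# Item discrimination: point-biserial correlation between the item score and
# the total score on the remaining items (corrected item-total correlation).
discrimination = np.array([
    np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
    for j in range(scores.shape[1])
])

print("difficulty range:", difficulty.min().round(2), "-", difficulty.max().round(2))
print("discrimination range:", discrimination.min().round(2), "-", discrimination.max().round(2))
```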

A Study of Variables Related to Item Difficulty in College Scholastic Ability Test (대학수학능력시험 난이도 관련 변인 탐색)

  • 박문환
    • Journal of Educational Research in Mathematics / v.14 no.1 / pp.71-88 / 2004
  • The purpose of this study was to examine particular variables that play a significant role in the difficulty of mathematics test items in the College Scholastic Ability Test (CSAT). The study also aimed to develop a model for measuring item difficulty. Variables correlated with item difficulty were drawn from a review of the related literature and an analysis of the content and difficulty of past CSAT items. A first instrument was designed using the correlated variables. Based on the results of a correlation analysis, a second instrument was made by deleting the variables that showed relatively low correlation with item difficulty and by refining some variables. Several models were proposed using the revised instrument. A comparison of the R-squared and cross-validity of each model revealed that the integrated regression model was the most stable and accurate among the proposed models. The study also showed that the statistically significant predictors were choice format, content domain, behavior domain, and degree of item familiarity, in order of the proportion of variance accounted for by each predictor. Despite the limited scope of the present research, its findings provide useful insights into predicting the difficulty of mathematics test items.

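As a rough illustration of the modeling approach described above (regressing item difficulty on item features and comparing models by R-squared and cross-validity), here is a minimal sketch on synthetic data; the predictor columns are hypothetical stand-ins for variables such as choice format or item familiarity, not the study's actual variables or results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic item features (hypothetical stand-ins for the difficulty-related variables).
n_items = 120
X = rng.normal(size=(n_items, 4))
true_coef = np.array([0.5, 0.3, 0.2, 0.1])
difficulty = X @ true_coef + rng.normal(scale=0.3, size=n_items)  # simulated difficulty index

model = LinearRegression()
model.fit(X, difficulty)

# In-sample R^2 and 5-fold cross-validated R^2: the kind of comparison used
# to pick the most stable and accurate model among several candidates.
print("R^2 (in-sample):", round(model.score(X, difficulty), 3))
print("R^2 (5-fold CV):", round(cross_val_score(model, X, difficulty, cv=5).mean(), 3))
```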

Analysis of the Characteristics of Multiple-Choice Test Items Used in Integrated Science Assessment: Focused on the Case of Four High School (융합형 '과학' 평가에 사용된 선다형 문항의 특성 분석 : 4개 고등학교의 사례)

  • Lee, Ki-Young; Cho, Hee-Hyung; Kwon, Suk-Min; Kim, Hee-Kyong; Yoon, Heesook
    • Journal of Science Education / v.37 no.2 / pp.278-293 / 2013
  • The purpose of this study was to analyze the characteristics of multiple-choice test items used in the assessment of high school integrated science under the 2009 revised curriculum. To analyze the tendency of item setting, we devised an analytic framework specific to integrated science and analyzed the characteristics of the items by applying the devised framework and item response theory. The analysis of item-setting tendencies revealed that most items ran counter to the intent of integrated science in terms of item resource, integration extent, and cognitive level, which suggests that teachers still stick to a subject-separated approach in the teaching-learning and assessment of integrated science. The analysis applying item response theory showed that item difficulty was appropriate and item discrimination was considerably high. However, there was no clear relationship between the tendency of item setting and the qualitative characteristics of the items. We also discuss several agendas for improving the teaching-learning and assessment of integrated science based on these results.

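The abstract above reports item difficulty and discrimination estimated with item response theory. For orientation only, here is a minimal sketch of the two-parameter logistic (2PL) model that underlies such estimates; the parameter values are invented for illustration and are not the study's results.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability of a correct response
    given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative parameters (not from the paper): an easy, well-discriminating
# item versus a harder, weakly discriminating one.
abilities = np.linspace(-3, 3, 7)
print("theta:", abilities)
print("item A (a=1.8, b=-0.5):", p_correct_2pl(abilities, 1.8, -0.5).round(2))
print("item B (a=0.6, b= 1.0):", p_correct_2pl(abilities, 0.6, 1.0).round(2))
```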

A development of the test of creativity level for science field (과학 창의성 검사지 개발)

  • Kim, Hee-Soo; Kim, Jong-Heon; Yuk, Geun-Cheol; Lee, Hui-Gwon; Kim, Jeong-Min; Lee, Bong-Jae
    • Journal of Gifted/Talented Education / v.12 no.4 / pp.26-44 / 2002
  • We developed a tool for testing creativity level in the science field. The test tool considered seven creativity elements. During development, it was verified for content validity, clarity of the items, and so on. The test was subjected to item analysis after being administered to 332 middle school students. The item analysis showed results above the standard criteria (validity: 92%, item difficulty: 42%~73%, reliability: 0.84, item discriminating power: 0.22~0.70). This means that the tool is useful for testing creativity level in science.
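The reliability quoted above (0.84) is the kind of internal-consistency estimate usually computed as Cronbach's alpha (or KR-20 for dichotomous items); the paper does not say which was used. Here is a minimal sketch of Cronbach's alpha, assuming a respondents-by-items score matrix; the data are synthetic and unrelated to the paper's instrument.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
# Synthetic ratings driven by a common factor, so the items hang together.
factor = rng.normal(size=(332, 1))
scores = factor + rng.normal(scale=0.8, size=(332, 10))
print("alpha:", round(cronbach_alpha(scores), 2))
```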

Characteristics of Problem on the Area of Probability and Statistics for the Korean College Scholastic Aptitude Test

  • Lee, Kang-Sup; Kim, Jong-Gyu; Hwang, Dong-Jou
    • Research in Mathematical Education / v.11 no.4 / pp.275-283 / 2007
  • In this study, we gave 132 high school students fifteen probability and nine statistics problems from the Korean College Scholastic Aptitude Test and then analyzed their answers using classical test theory and item response theory. Using classical test theory (Testian 1.0) we obtained the item reliability (0.730~0.765), and using item response theory (Bayesian 1.0) we obtained the item difficulty (-2.32~0.83) and discrimination (0.55~2.71). From these results, we identified what students did not understand well and why.


Computer Adaptive Testing Method for Measuring Disability in Patients With Back Pain

  • Choi, Bongsam
    • Physical Therapy Korea / v.19 no.3 / pp.124-131 / 2012
  • Most conventional instruments for measuring disability rely on a total score obtained by simply adding individual item responses; this score depends on the items chosen to represent the underlying construct (test-dependent), and test statistics such as coefficient alpha, used to estimate reliability, vary from sample to sample (sample-dependent). By contrast, the item response theory (IRT) method focuses on the psychometric properties of the test items instead of the instrument as a whole. By estimating the probability that a respondent will select a particular rating for an item, item difficulty and person ability (or disability) can be placed on the same linear continuum. These estimates are invariant regardless of the items used (test-free measurement) and the ability of the sample assessed (sample-free measurement). These advantages of IRT allow the creation of invariantly calibrated large item banks that precisely discriminate the disability levels of individuals. The computer adaptive testing (CAT) method, which typically requires a testing algorithm, promises a means of administering items in a way that is both efficient and precise. This method permits selectively administering items that are closely matched to the ability level of the individual (measurement precision) and measuring that ability without losing the precision provided by the full item bank (measurement efficiency). These measurement properties can reasonably be achieved using IRT and CAT methods. This article aims to provide a comprehensive overview of existing disability instruments for back pain and to inform physical therapists of an innovative alternative that overcomes the shortcomings of conventional disability instruments. An understanding of IRT and CAT methods will equip physical therapists with skills for interpreting the measurement properties of disability instruments developed using these methods.
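As a concrete illustration of the CAT logic described above (administer the unadministered item whose difficulty best matches the current ability estimate, then update the estimate), here is a minimal sketch using a Rasch model and maximum-information item selection. The item bank and responses are synthetic, and real CAT systems add stopping rules and more careful ability estimation.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_rasch(theta, b):
    """Rasch model probability of a correct/endorsed response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

item_bank = np.linspace(-2.5, 2.5, 30)   # calibrated item difficulties (synthetic)
true_theta = 0.8                         # simulated examinee ability
theta_hat = 0.0                          # starting ability estimate
administered, responses = [], []

for step in range(8):
    # Fisher information under the Rasch model is p*(1-p); pick the most
    # informative remaining item, i.e. the one whose difficulty is closest
    # to the current ability estimate.
    remaining = [j for j in range(len(item_bank)) if j not in administered]
    info = [p_rasch(theta_hat, item_bank[j]) * (1 - p_rasch(theta_hat, item_bank[j]))
            for j in remaining]
    j = remaining[int(np.argmax(info))]
    administered.append(j)

    # Simulate the response, then update theta_hat with a few Newton steps
    # on the Rasch log-likelihood of the responses so far.
    responses.append(rng.random() < p_rasch(true_theta, item_bank[j]))
    for _ in range(10):
        p = p_rasch(theta_hat, item_bank[administered])
        grad = np.sum(np.array(responses, dtype=float) - p)
        hess = -np.sum(p * (1 - p))
        theta_hat -= grad / hess
    # Keep the provisional estimate inside the bank's range so it cannot
    # diverge when all responses so far agree.
    theta_hat = float(np.clip(theta_hat, -3.5, 3.5))

print("administered item difficulties:", item_bank[administered].round(2))
print("final ability estimate:", round(theta_hat, 2))
```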

Longitudinal Study about Science Process Skills Item Forms Transition before and after Scholastic Ability Test for College (과학(科學) 탐구능력(探究能力) 평가(評價) 문항(問項) 유형(類型) 변화(變化)에 관(關)한 종단적(縱斷的) 연구(硏究))

  • Woo, Jong-Ok; Lee, Hang-Ro; Goo, Chang-Hyun
    • Journal of The Korean Association For Science Education / v.16 no.3 / pp.314-328 / 1996
  • This study investigated the literature on the evaluation of science process skills to analyze the transition in evaluation objectives before and after the Scholastic Ability Test for College Entrance. From the literature survey, the researcher established a three-dimensional science assessment framework with the X axis as science concept, the Y axis as science process skills, and the Z axis as problem context. To analyze and compare items, the researcher selected 210 items from the 1st to the 7th trial tests and 138 items from the 1st to the 4th Scholastic Ability Test for College Entrance, and sampled 2,873 science achievement test items from 10 high schools. In accordance with this taxonomy, the researcher analyzed and compared the forms of science process skills items. The following results were drawn: the items were evenly distributed across the four areas of the science concept domain (Earth Science, Biology, Physics, and Chemistry), but they were heavily concentrated on data analysis and drawing conclusions in the science process domain. In the problem context domain, the school context was the majority. Despite this distribution, the ratio of science process skills items among science achievement test items increased after the Scholastic Ability Test for College Entrance was introduced, as did the ratio of items using varied forms of expression. The item form was almost entirely the five-option multiple-choice type in the national-level tests. Although school-level tests included four-option, five-option, short-answer, and essay types, the proportion of five-option items rose from 33.1% to 65.5%. This study showed that school-level item forms were more varied than national-level ones, which is evidence of movement toward science process skills testing and of the influence of the Scholastic Ability Test for College Entrance. The ratio of items addressing all three axes averaged 99.3% in the national-level tests and 44.9% in the school achievement tests, but the school-level ratio increased after the Scholastic Ability Test for College Entrance was introduced. In view of this study, further research is needed on item types that can validly evaluate each of the five stages of science process skills, and on evaluation methods based on high school students' problem-solving patterns and features of scientific inquiry across all science process skills elements.


Selection of Important Variables in the Classification Model for Successful Flight Training (조종사 비행훈련 성패예측모형 구축을 위한 중요변수 선정)

  • Lee, Sang-Heon; Lee, Sun-Doo
    • IE interfaces / v.20 no.1 / pp.41-48 / 2007
  • The main purpose of this paper is to reduce the cost wasted on unsuitable pilot candidates and to prevent human accidents arising from the pilot selection process. We used classification models such as logistic regression, decision tree, and neural network, based on the aptitude test results of 505 ROK Air Force applicants in 2001~2004. First, we examined the reliability and propriety of the improved aptitude test system. Based on this, the flight simulator test items were compared with the new aptitude test items to support an additional pass/fail decision, and the models were compared in terms of classification accuracy, ROC, and response threshold. The decision tree was selected as the most efficient model for each sequential flight training result, and it predicted the final flight training results best. Therefore, we propose that the decision tree be adopted as the standard for pilot selection and that the flight simulator test be included as a new aptitude test item.
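The abstract above compares logistic regression, a decision tree, and a neural network on classification accuracy and ROC. Below is a minimal sketch of that kind of comparison on synthetic data; the features are hypothetical stand-ins for aptitude test scores, not the actual ROK Air Force data, and the numbers it prints say nothing about the paper's findings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic "aptitude test" data: 505 applicants, 10 score features,
# binary pass/fail flight training outcome (illustrative only).
X, y = make_classification(n_samples=505, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0),
}

# Compare models on held-out accuracy and ROC AUC, mirroring the
# classification-accuracy / ROC comparison described above.
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    prob = model.predict_proba(X_test)[:, 1]
    print(f"{name:20s} accuracy={accuracy_score(y_test, pred):.3f} "
          f"AUC={roc_auc_score(y_test, prob):.3f}")
```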

Psychometric Properties and Item Evaluation of Korean Version of Night Eating Questionnaire (KNEQ) (한국어판 야식증후군 측정도구의 신뢰도, 타당도 및 문항반응이론에 의한 문항분석)

  • Kim, Beomjong; Kim, Inja; Choi, Heejung
    • Journal of Korean Academy of Nursing / v.46 no.1 / pp.109-117 / 2016
  • Purpose: The aim of this study was to develop a Korean version of the Night Eating Questionnaire (KNEQ), test its psychometric properties, and evaluate its items according to item response theory. Methods: The 14-item NEQ, a measure of the severity of night eating syndrome, was translated into Korean, and the resulting KNEQ was evaluated. A total of 1,171 participants aged 20 to 50 completed the KNEQ on the Internet. To test reliability and validity, Cronbach's alpha, correlation, simple regression, and factor analysis were used. Each item was analyzed according to the Rasch-Andrich rating scale model, and item difficulty, discrimination, infit/outfit, and point-measure correlation were evaluated. Results: Construct validity was evident. Cronbach's alpha was .78. The items on evening hyperphagia and nocturnal ingestion showed high ability to discriminate people with night eating syndrome, while the items on morning anorexia and mood/sleep provided relatively little information. The item analysis showed that item 2 and item 7 need to be revised to improve the reliability of the KNEQ. Conclusion: The KNEQ is an appropriate instrument for measuring the severity of night eating syndrome, with good validity and reliability. However, further studies are needed to establish cut-off scores for screening persons with night eating syndrome.
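The item analysis described above uses the Rasch-Andrich rating scale model, in which the probability of each response category depends on person ability, item difficulty, and a shared set of category thresholds. Here is a minimal sketch of those category probabilities; the difficulty and threshold values are invented for illustration and are not estimates from the KNEQ data.

```python
import numpy as np

def rating_scale_probs(theta, delta, taus):
    """Rasch-Andrich rating scale model: probability of each response
    category for person ability theta, item difficulty delta, and
    Andrich thresholds taus (one per category step)."""
    taus = np.asarray(taus, dtype=float)
    # Cumulative sum of (theta - delta - tau_k) over the steps taken;
    # category 0 corresponds to an empty sum (0.0).
    steps = np.concatenate(([0.0], np.cumsum(theta - delta - taus)))
    expnum = np.exp(steps - steps.max())   # subtract the max for numerical stability
    return expnum / expnum.sum()

# Illustrative values: a 5-category item (e.g. a 0-4 rating),
# difficulty 0.3, thresholds spread around the difficulty.
taus = [-1.5, -0.5, 0.5, 1.5]
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f} ->", rating_scale_probs(theta, 0.3, taus).round(2))
```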

Development of a Descriptive Paper Test Item and a Counting Formula for Evaluating Elementary School Students' Scientific Hypothesis Generating Ability (초등학생의 과학적 가설생성능력 평가를 위한 서술형 지필과제 및 가설생성능력지수 산출식의 개발)

  • Jo, Eun Byul; Shin, Dong Hoon
    • Journal of Korean Elementary Science Education / v.35 no.2 / pp.137-149 / 2016
  • The purpose of this study is to develop a descriptive paper test item that can evaluate elementary school students' HGA (scientific Hypothesis Generating Ability) and to propose a counting formula that can easily assess students' HGA objectively and quantitatively. To make the test item capable of evaluating all students from 3rd to 6th grade, the 'rabbit's ear' item was developed. The developed test item was distributed to four elementary schools in Seoul, and a total of 280 sixth-grade students solved the item. All of the students' responses were analyzed. Based on the analyzed data, evaluation factors and evaluation criteria were extracted to design a Hypothesis Generating ability Quotient (HGQ). As a result, 'Explican's Degree of Likeness' and 'Hypothesis' Degree of Explanation' were chosen as the evaluation factors, and the preceding evaluation criteria were revised. First, the Explican's Degree of Likeness criterion was reduced from four levels to three, and the content of each level was modified. Second, the new evaluation factor 'Hypothesis' Degree of Explanation' was developed by combining three evaluation criteria: 'level of explican', 'number of explican', and 'structure of explican'. This factor was designed to assess how elaborately the suggested hypothesis can explain the cause of a phenomenon. The newly designed evaluation factors and criteria can assess HGA in more detail and reduce scoring discordance among markers. Lastly, the developed counting formula is much simpler than Kwon's earlier equation for evaluating the Hypothesis Explanation Quotient, so it can help easily distinguish a student's scientific hypothesis generating ability.