• Title/Summary/Keyword: 채점방법 (scoring method)


Study on scoring system of fire from warships (함정 사격 채점 시스템에 관한 연구)

  • Kim, Dong-Il
    • Proceedings of the Korean Society of Computer Information Conference / 2016.07a / pp.339-340 / 2016
  • Naval gunfire scoring is done by observing the rounds fired by the firing ship from the vessel towing the target. Because the score is tied to the evaluation of the skill of the crew operating the ship, fairness is essential. A hit is judged from the splash of water that rises around the target, but the splash disappears within a few seconds of firing, so personnel on the towing ship cannot score fairly by naked-eye observation alone. By recording the firing with a video device such as a camcorder and applying computer vision, the fairness of the scoring can be improved. This paper examines the functions that a computerized naval gunfire scoring system must provide, how computer vision can be applied, and how the combat system already installed on the ship can be utilized, and then implements such a system.

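The abstract above leaves the computer-vision method unspecified; a common starting point for detecting a short-lived splash on video is frame differencing. The sketch below is an illustrative assumption, not the paper's actual algorithm, and uses nested lists in place of real video frames; all names and thresholds are made up.

```python
# Minimal frame-differencing sketch: a shell splash appears as a burst of
# bright pixels in one frame and fades within seconds, so comparing
# consecutive frames can flag candidate splash events for later review.

def frame_diff_count(prev, curr, threshold=50):
    """Count pixels whose brightness changed by more than `threshold`."""
    return sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > threshold
    )

def detect_splash(frames, threshold=50, min_pixels=3):
    """Return indices of frames where a splash-like change appears or fades."""
    events = []
    for i in range(1, len(frames)):
        if frame_diff_count(frames[i - 1], frames[i], threshold) >= min_pixels:
            events.append(i)
    return events

# Toy 4x4 grayscale frames: a bright splash appears in frame 1, gone by frame 2.
calm = [[10] * 4 for _ in range(4)]
splash = [row[:] for row in calm]
splash[1][1] = splash[1][2] = splash[2][1] = 255
print(detect_splash([calm, splash, calm]))  # → [1, 2]
```

A real implementation would work on camcorder footage (e.g. via OpenCV) and would need to localize the change relative to the towed target, not just detect it.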

Automatic Scoring System for Korean Short Answers by Student Answer Analysis and Answer Template Construction (학생 답안 분석과 정답 템플릿 생성에 의한 한국어 서답형 문항의 자동채점 시스템)

  • Kang, SeungShik; Jang, EunSeo
    • KIISE Transactions on Computing Practices / v.22 no.5 / pp.218-224 / 2016
  • This paper proposes a practical computer-based automatic scoring system for Korean short answers that combines student-answer analysis with natural language processing techniques. The proposed system reduces overall scoring time and cost, makes it easier to write answer templates from student answers, and improves the accuracy and reliability of automatic scoring. To evaluate the system and compare it with human scoring, we performed an experiment using student answers to a social-science item from the 2014 National Assessment of Educational Achievement.

Design and Implementation of a Subjective-type Evaluation System Using Syntactic and Case-Role Information (구문-격의미 정보를 이용한 주관식 문제 채점 시스템 설계 및 구현)

  • Kang, Won-Seog
    • The Journal of Korean Association of Computer Education / v.10 no.5 / pp.61-69 / 2007
  • Subjective-type evaluation can assess higher-order cognitive ability, but it suffers from problems of objectivity and reliability and from the difficulty of Korean language processing. To address these problems, this paper designs and implements a subjective-type evaluation system that uses syntactic and case-role information. The system reduces the time and effort required for evaluation and improves its objectivity. It achieves a 75% agreement rate with instructor evaluation and obtains better precision and recall than a word-extraction evaluation system. We expect this system to become a basis for further research on subjective-type evaluation.

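The comparison against the word-extraction baseline above is reported in precision and recall; as a quick reminder, both can be computed from per-answer judgments like this (labels and values below are toy data, not the paper's):

```python
# Precision = correct positives / all system positives;
# recall = correct positives / all actual positives.

def precision_recall(predicted, actual):
    """Both lists hold 1 (judged correct) or 0 (judged incorrect)."""
    tp = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
    fp = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
    return tp / (tp + fp), tp / (tp + fn)

system  = [1, 1, 0, 1, 0]   # the scoring system's judgments
teacher = [1, 0, 0, 1, 1]   # the instructor's judgments
p, r = precision_recall(system, teacher)
print(round(p, 3), round(r, 3))  # → 0.667 0.667
```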

Scoring Korean Written Responses Using English-Based Automated Computer Scoring Models and Machine Translation: A Case of Natural Selection Concept Test (영어기반 컴퓨터자동채점모델과 기계번역을 활용한 서술형 한국어 응답 채점 -자연선택개념평가 사례-)

  • Ha, Minsu
    • Journal of The Korean Association For Science Education / v.36 no.3 / pp.389-397 / 2016
  • This study aims to test the efficacy of English-based automated computer scoring models and machine translation for scoring Korean college students' written responses on natural selection concept items. To this end, I collected 128 pre-service biology teachers' written responses to a four-item instrument (512 written responses in total). Machine translation software (Google Translate) translated both the original responses and spell-corrected responses. The presence or absence of five scientific ideas and three naïve ideas in the translated responses was judged by the automated computer scoring models (EvoGrader). The computer-scored results (4,096 predictions) were compared with expert-scored results. No significant differences were found between the computer-scored and expert-scored results, either in average scores or in statistical results based on those averages. The Pearson correlation coefficients of per-student composite scores between computer scoring and expert scoring were 0.848 for scientific ideas and 0.776 for naïve ideas. The inter-rater reliability indices (Cohen's kappa) between computer scoring and expert scoring for linguistically simple concepts (e.g., variation, competition, and limited resources) were over 0.8. These findings reveal that English-based automated computer scoring models combined with machine translation can be a promising method for scoring Korean college students' written responses on natural selection concept items.
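The two agreement statistics reported above, Pearson's r for composite scores and Cohen's kappa for binary presence/absence judgments, can be computed in a few lines; this is a generic illustration with made-up ratings, not the study's data.

```python
# Pearson's r measures linear agreement between score vectors; Cohen's kappa
# measures agreement between two raters beyond what chance would produce.

from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cohen_kappa(a, b):
    """Kappa for two binary raters over the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n            # each rater's rate of label 1
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)       # agreement expected by chance
    return (po - pe) / (1 - pe)

computer = [1, 1, 0, 1, 0, 0, 1, 0]   # idea present / absent, per response
expert   = [1, 1, 0, 1, 0, 1, 1, 0]
print(round(cohen_kappa(computer, expert), 3))  # → 0.75
```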

Semi-Automatic Scoring for Short Korean Free-Text Responses Using Semi-Supervised Learning (준지도학습 방법을 이용한 한국어 서답형 문항 반자동 채점)

  • Cheon, Min-Ah; Seo, Hyeong-Won; Kim, Jae-Hoon; Noh, Eun-Hee; Sung, Kyung-Hee; Lim, EunYoung
    • Korean Journal of Cognitive Science / v.26 no.2 / pp.147-165 / 2015
  • Short-answer questions can reflect the depth of students' understanding and their higher-order thinking skills. However, scoring them takes a long time and raises issues of grading consistency. To alleviate these problems, automated scoring systems are widely used in Europe and America, but research in Korea is still at an early stage. In this paper, we propose a semi-automatic scoring system for short Korean free-text responses that uses semi-supervised learning. Based on the similarity between student answers and the model answers, the proposed system grades the student answers, and answers scored with high reliability are added to the model answers after thorough verification. This process repeats until all answers are scored. The proposed system was evaluated experimentally on Korean-language and social-studies items from the Nationwide Scholastic Achievement Test, and we confirmed that both processing time and grading consistency improve promisingly. Before the system is applied in schools, various assessment methods should be developed and comparative studies performed.
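The loop described above, score by similarity to model answers, absorb high-reliability answers into the model-answer pool, repeat, can be sketched as self-training. Token-overlap (Jaccard) similarity stands in for the paper's actual similarity measure, and all names and thresholds here are illustrative assumptions.

```python
# Semi-automatic scoring by self-training: answers that closely match a model
# answer are auto-scored and become model answers themselves; whatever never
# clears the threshold is left for human raters.

def jaccard(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def semi_auto_score(answers, model_answers, accept=0.8):
    scored, pending = {}, list(answers)
    changed = True
    while changed and pending:
        changed, still_pending = False, []
        for ans in pending:
            best = max(jaccard(ans, m) for m in model_answers)
            if best >= accept:                  # high-reliability match
                scored[ans] = 1
                model_answers.append(ans)       # grow the model-answer pool
                changed = True
            else:
                still_pending.append(ans)
        pending = still_pending                 # retry with the larger pool
    return scored, pending                      # pending -> human raters

models = ["the earth orbits the sun"]
answers = ["the earth orbits the sun yearly", "it is flat"]
auto, to_human = semi_auto_score(answers, models)
print(to_human)  # → ['it is flat']
```

The key design point is that each newly absorbed answer can pull in further paraphrases on the next pass, which is why the loop runs until no answer changes status.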

Subjective Tests Sub-System Applied with Generalized Vector Space Model (일반화된 벡터 공간 모델을 적용한 주관식 문제 채점 보조 시스템)

  • Oh, Jung-Seok; Chu, Seung-Woo; Kim, Yu-Seop; Lee, Jae-Young
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.965-968 / 2004
  • Because of the difficulty of natural language processing, existing scoring aids for subjective questions could not automate grading and merely forwarded answers to a human grader, for example by e-mail. To overcome this, this paper defines the problem space as a vector space and applies a method that accounts for the correlations between the features that make up each vector. First, on the assumption that learners use synonyms when writing their answers, the question author prepares several model answers; these are added to a corpus, and index terms are extracted with a morphological analyzer. Index terms are likewise extracted from each learner's answer, and vectors are built with these index terms as features. Using these vectors, the similarity between answers is measured, and we propose a system that automatically classifies answers as correct or incorrect according to the similarity range. Experiments on 170 subjective questions showed improved performance and reliability over the existing model.

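The generalized vector space model mentioned in the title drops the plain VSM's assumption that terms are orthogonal: a term-term correlation matrix G enters the similarity, sim(a, b) = aᵀGb / √(aᵀGa · bᵀGb). The vocabulary, weights, and correlation values below are toy illustrations, not the paper's data.

```python
# GVSM similarity: with G = identity this reduces to ordinary cosine
# similarity; off-diagonal entries let correlated terms (e.g. synonyms)
# contribute to the match even when the exact words differ.

from math import sqrt

def gvsm_similarity(a, b, G):
    def bilinear(x, y):
        return sum(x[i] * G[i][j] * y[j]
                   for i in range(len(x)) for j in range(len(y)))
    return bilinear(a, b) / sqrt(bilinear(a, a) * bilinear(b, b))

# Vocabulary: ["car", "automobile", "river"]; "car" ~ "automobile".
G = [[1.0, 0.9, 0.0],
     [0.9, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

model_answer   = [1, 0, 0]   # uses "car"
student_answer = [0, 1, 0]   # uses the synonym "automobile"
print(gvsm_similarity(model_answer, student_answer, G))  # → 0.9 (plain VSM: 0.0)
```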

Design and Implementation of a Subjective-type Evaluation System Using Natural Language Processing Technique (유의어 사전을 이용한 주관식 문제 채점 시스템 설계 및 구현)

  • Park, HeeJung; Kang, WonSeog
    • The Journal of Korean Association of Computer Education / v.6 no.3 / pp.207-216 / 2003
  • Instructors generally rely on objective-type evaluation for grading. Subjective-type evaluation has the merit of assessing higher-order cognitive ability, but it suffers from problems of objectivity and reliability. This paper proposes a model for grading subjective-type answers and designs and implements an evaluation system based on a synonym thesaurus. The system can handle a wide variety of subjective-type questions and is easy for beginners to use. It also reduces the time and effort required for evaluation and improves its objectivity, achieving a 73% agreement rate. We expect this system to become a basis for further research on subjective-type evaluation.

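The thesaurus-based idea above can be sketched as normalizing tokens through a synonym dictionary before matching against the model answer's keywords, so a synonym still counts as a hit. The dictionary entries and function names here are made up for illustration; the paper's thesaurus and matching rules are surely richer.

```python
# Thesaurus-assisted keyword matching: map every token to a canonical head
# word, then score by the fraction of model-answer keywords the student covers.

SYNONYMS = {"auto": "car", "automobile": "car", "ocean": "sea"}

def normalize(tokens):
    """Replace each token with its canonical thesaurus head word."""
    return {SYNONYMS.get(t, t) for t in tokens}

def keyword_match_score(student, model):
    s, m = normalize(student), normalize(model)
    return len(s & m) / len(m)   # fraction of model keywords covered

print(keyword_match_score(["the", "automobile", "moved"],
                          ["the", "car", "moved"]))  # → 1.0
```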

Automated Scoring of Argumentation Levels and Analysis of Argumentation Patterns Using Machine Learning (기계 학습을 활용한 논증 수준 자동 채점 및 논증 패턴 분석)

  • Lee, Manhyoung; Ryu, Suna
    • Journal of The Korean Association For Science Education / v.41 no.3 / pp.203-220 / 2021
  • We explored methods to improve the performance of automated scoring for scientific argumentation and analyzed argumentation patterns using the automated scoring models. For this purpose, we assessed the level of argumentation in students' scientific discourse in classrooms. The dataset consists of argumentation features and per-episode argumentation levels from four units. We used argumentation clusters and n-grams to enhance scoring accuracy, and applied three supervised learning algorithms, yielding 33 automated scoring models. The models achieved a good scoring accuracy of 77.59% on average and up to 85.37%, and we found that argumentation-cluster patterns could enhance scoring accuracy. We then analyzed argumentation patterns using the decision-tree and random-forest models. Our results were consistent with previous research in which justification coordinating claim and evidence determines the quality of scientific argumentation. Our research method suggests a novel approach for analyzing the quality of scientific argumentation in classrooms.
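The n-gram features mentioned above turn each argumentation episode into counts of contiguous token sequences, which a classifier (decision tree, random forest, etc.) can then map to an argumentation level. A minimal sketch, with illustrative tokens rather than real classroom discourse:

```python
# Unigram + bigram counts for one episode; a supervised model would consume
# these counts (plus the cluster features the paper describes) as inputs.

from collections import Counter

def ngram_features(tokens, n_max=2):
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

episode = ["claim", "because", "evidence", "because", "evidence"]
print(ngram_features(episode)["because evidence"])  # → 2
```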

Automated Scoring of Scientific Argumentation Using Expert Morpheme Classification Approaches (전문가의 형태소 분류를 활용한 과학 논증 자동 채점)

  • Lee, Manhyoung; Ryu, Suna
    • Journal of The Korean Association For Science Education / v.40 no.3 / pp.321-336 / 2020
  • We explore automated scoring models of scientific argumentation and consider how a new analytical approach using machine learning may enhance the understanding of spoken argumentation in the classroom. We sampled 2,605 utterances from a high school science class on molecular structure and classified them into five argumentative elements, then preprocessed the text of the classified utterances. As machine learning techniques, we applied support vector machines, decision trees, random forests, and an artificial neural network. To improve the identification of rebuttal elements, we used a heuristic feature-engineering method that applies experts' classification of the morphemes of scientific argumentation.
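The expert-morpheme feature engineering above can be pictured as an expert-built dictionary mapping morphemes to argumentation-related categories (e.g. rebuttal markers), with each utterance reduced to counts per category that are fed to the classifiers alongside ordinary text features. The dictionary entries and tokens below are invented for illustration; the study works on Korean morphemes.

```python
# Count expert-assigned categories per utterance; sparse surface forms like
# rebuttal markers become a dense, expert-informed feature.

from collections import Counter

EXPERT_CATEGORIES = {
    "but": "rebuttal", "however": "rebuttal",
    "because": "justification", "therefore": "claim",
}

def expert_features(tokens):
    return Counter(EXPERT_CATEGORIES[t] for t in tokens if t in EXPERT_CATEGORIES)

utterance = ["but", "that", "fails", "because", "mass", "differs"]
print(expert_features(utterance))  # one rebuttal marker, one justification marker
```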

Automated Scoring System for Korean Short-Answer Questions Using Predictability and Unanimity (기계학습 분류기의 예측확률과 만장일치를 이용한 한국어 서답형 문항 자동채점 시스템)

  • Cheon, Min-Ah; Kim, Chang-Hyun; Kim, Jae-Hoon; Noh, Eun-Hee; Sung, Kyung-Hee; Song, Mi-Young
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.527-534 / 2016
  • The emerging information society requires talent for creative thinking based on problem-solving skills and comprehensive thinking rather than simple memorization. The Korean curriculum has therefore moved toward creative thinking by increasing the number of short-answer questions, which can probe students' overall thinking. However, because scoring short-answer questions depends on the subjective judgment of human raters, the results are somewhat inconsistent. To alleviate this, automated scoring systems based on machine learning have been used overseas. Linguistically, however, Korean and English differ completely in sentence structure, so an automated scoring system built for English cannot be applied directly to Korean. In this paper, we introduce an automated scoring system for Korean short-answer questions that uses predictability and unanimity, and we verify its practicality through the correlation between its results and those of human raters. The system was evaluated on constructed-response items in Korean language, social studies, and science from the National Assessment of Educational Achievement, analyzed with Pearson correlation coefficients and the Kappa coefficient. All correlation coefficients showed a strong positive correlation of 0.7 or higher; the proposed system's scores are thus similar to those of human raters, and the automated scoring system should prove useful as a scoring tool.
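The "predictability and unanimity" gate described above can be sketched as follows: several trained classifiers each return a label with a predicted probability, and the system accepts the machine score only when all labels agree (unanimity) and every probability clears a threshold (predictability); otherwise the answer is routed to a human rater. The threshold value and the stand-in predictions below are illustrative assumptions, not the paper's settings.

```python
# Gate a machine score on agreement and confidence; anything that fails
# either condition is deferred to human raters.

def auto_score(predictions, min_prob=0.9):
    """predictions: list of (label, probability), one per classifier."""
    labels = {label for label, _ in predictions}
    unanimous = len(labels) == 1
    predictable = all(prob >= min_prob for _, prob in predictions)
    if unanimous and predictable:
        return predictions[0][0]   # accept the machine score
    return None                    # defer to a human rater

print(auto_score([("correct", 0.97), ("correct", 0.95), ("correct", 0.93)]))  # → correct
print(auto_score([("correct", 0.97), ("wrong", 0.95), ("correct", 0.93)]))    # → None
```

Requiring both conditions trades coverage for reliability: fewer answers are auto-scored, but those that are tend to agree with human raters.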