• Title/Summary/Keyword: EVO 2016


Analyzing the Status of Female Characters in Fighting Action Games: Focus on EVO 2016 (대전액션게임에서 나타나는 여성 캐릭터들의 지위 분석: EVO 2016을 중심으로)

  • Han, Sukhee
    • Journal of Korea Game Society / v.16 no.5 / pp.79-88 / 2016
  • This study analyzes the status of female characters in several fighting action games. From July 15 to 17, 2016, the world-famous fighting action game tournament EVO 2016 was held in Las Vegas, Nevada. Nine fighting action games were selected for competition in this tournament, and this study examines the status of female characters in these games on the basis of 1) appearance and 2) ability. After comparing female and male characters in the fighting action games, it reflects on the status and rights of female characters in the virtual world.

Scoring Korean Written Responses Using English-Based Automated Computer Scoring Models and Machine Translation: A Case of Natural Selection Concept Test (영어기반 컴퓨터자동채점모델과 기계번역을 활용한 서술형 한국어 응답 채점 -자연선택개념평가 사례-)

  • Ha, Minsu
    • Journal of The Korean Association For Science Education / v.36 no.3 / pp.389-397 / 2016
  • This study aims to test the efficacy of English-based automated computer scoring models and machine translation for scoring Korean college students' written responses on natural selection concept items. To this end, I collected 128 pre-service biology teachers' written responses on a four-item instrument (512 written responses in total). Machine translation software (Google Translate) translated both the original responses and spell-corrected responses. The presence or absence of five scientific ideas and three naïve ideas in both sets of translated responses was judged by the automated computer scoring models (EvoGrader). The computer-scored results (4,096 predictions) were compared with expert-scored results. No significant differences in average scores, or in statistical analyses based on average scores, were found between the computer-scored and expert-scored results. The Pearson correlation coefficients of each student's composite scores between computer scoring and expert scoring were 0.848 for scientific ideas and 0.776 for naïve ideas. The inter-rater reliability indices (Cohen's kappa) between computer scoring and expert scoring for linguistically simple concepts (e.g., variation, competition, and limited resources) were over 0.8. These findings reveal that English-based automated computer scoring models and machine translation can be a promising method for scoring Korean college students' written responses on natural selection concept items.
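
  • The abstract above reports two agreement statistics between computer and expert scoring: Pearson correlation for composite scores and Cohen's kappa for per-idea presence/absence judgments. The following is a minimal sketch of how such a comparison can be computed; it is not the authors' actual pipeline, and the score arrays are hypothetical placeholder data, not the study's results.

```python
# Illustrative sketch: agreement statistics between computer-assigned and
# expert-assigned scores. The numeric values below are made-up examples.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical composite scores (one value per student) from the two raters.
computer_scores = np.array([3, 2, 4, 1, 5, 3, 2, 4])
expert_scores = np.array([3, 2, 5, 1, 4, 3, 2, 4])

# Pearson correlation between computer and expert composite scores.
r, p_value = pearsonr(computer_scores, expert_scores)
print(f"Pearson r: {r:.3f} (p = {p_value:.3f})")

# Hypothetical presence(1)/absence(0) judgments for a single idea.
computer_labels = [1, 0, 1, 1, 0, 1, 0, 0]
expert_labels = [1, 0, 1, 0, 0, 1, 0, 1]

# Cohen's kappa as an inter-rater reliability index for the binary judgments.
kappa = cohen_kappa_score(computer_labels, expert_labels)
print(f"Cohen's kappa: {kappa:.3f}")
```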