• Title/Summary/Keyword: language proficiency testing


A comparative study of English test items of college entrance examinations in Korea, China, and Japan (한국.중국.일본의 대학입학 영어시험 문항 비교 연구)

  • Jeon, Byoung-Man
    • English Language & Literature Teaching / v.10 no.2 / pp.113-132 / 2004
  • This study aims to suggest desirable directions by analyzing the English test items of college entrance examinations (CEE) in Korea, China, and Japan. To this end, the English items of the Scholastic Ability Test (SAT) in Korea were compared with those of the CEEs in China and Japan, and with test items from TOEFL and IELTS. The Korean test was found to contain relatively few items measuring productive skills compared with the other countries' tests and with TOEFL and IELTS. In particular, China's examination included integrated writing items. For speaking, all the other tests adopted direct methods such as interviews and oral examinations, rather than the indirect testing used in the Korean SAT. It is suggested that items with long passages be included in order to measure extensive reading ability, and that a cloze test be adopted to assess integrated English proficiency (a minimal sketch of cloze construction follows this entry).

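The cloze test recommended above follows a standard construction: every nth word of a passage is deleted and the examinee must restore the gaps, so that success draws on grammar, vocabulary, and discourse-level context at once. A minimal sketch of such a generator; the function name and deletion ratio are illustrative, not from the paper:

    # Hypothetical sketch of a fixed-ratio cloze generator: every nth word is
    # replaced with a numbered blank, and the deleted words become the answer key.
    def make_cloze(passage: str, nth: int = 7) -> tuple[str, list[str]]:
        out, answers = [], []
        for i, word in enumerate(passage.split(), start=1):
            if i % nth == 0:
                answers.append(word)
                out.append(f"({len(answers)})______")
            else:
                out.append(word)
        return " ".join(out), answers

    text = ("Cloze tests are thought to tap integrated proficiency because "
            "filling each gap draws on grammar, vocabulary, and context.")
    cloze, key = make_cloze(text, nth=5)
    print(cloze)   # passage with numbered blanks
    print(key)     # deleted words, in order

Varying nth trades item density against passage coverage; fixed-ratio deletion as above is the classic form, while rational-cloze variants delete selected word classes instead.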

Study on Improving Maritime English Proficiency Through the Use of a Maritime English Platform (해사영어 플랫폼을 활용한 표준해사영어 실력 향상에 관한 연구)

  • Jin Ki Seor; Young-soo Park; Dongsu Shin; Dae Won Kim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.930-938 / 2023
  • Maritime English is a specialized language system designed for ship operations, maritime safety, and internal and external communication on board. According to the International Maritime Organization's (IMO) International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW), navigational officers engaged in international voyages must have a thorough understanding of Maritime English, including the use of Standard Marine Communication Phrases (SMCP). This study measured students' proficiency in Maritime English using a learning and testing platform that includes voice recognition, translation, and word-entry tasks, and evaluated the resulting improvement in Maritime English exam scores. The study also investigated the level of platform use needed for cadets to qualify as junior navigators. The experiment began by examining the correlation between students' overall English skills and their proficiency in SMCP through an initial test, followed by an evaluation of score improvements and changes in exam duration on the mid-term and final exams. The initial test revealed a significant difference in Maritime English test scores among groups based on individual factors, such as TOEIC scores and self-assessment of English ability, and both the mid-term and final tests confirmed substantial score improvements for the group using the platform. The study confirms the efficacy of a learning platform that could be applied broadly in maritime education and potentially expanded beyond Maritime English education in the future.
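The abstract describes two quantitative steps: a correlation between overall English skill (e.g., TOEIC) and initial SMCP scores, and a pre/post comparison for the platform-using group. The paper's own analysis code is not given; a minimal sketch of how such checks might look, with made-up illustrative numbers:

    # Hypothetical sketch of the two analyses described above: (1) correlation
    # between overall English skill (TOEIC) and initial SMCP scores, and
    # (2) a paired pre/post comparison for the platform-using group.
    # All numbers are made up for illustration; none come from the paper.
    from scipy.stats import pearsonr, ttest_rel

    toeic        = [520, 640, 710, 455, 830, 600]  # overall English skill
    smcp_initial = [58, 66, 74, 51, 85, 63]        # initial Maritime English test
    smcp_final   = [70, 79, 83, 60, 91, 75]        # after platform use

    r, p = pearsonr(toeic, smcp_initial)
    print(f"TOEIC vs. initial SMCP: r={r:.2f}, p={p:.3f}")

    t, p = ttest_rel(smcp_final, smcp_initial)
    print(f"Pre/post improvement: t={t:.2f}, p={p:.3f}")

Since the abstract also reports group differences rather than only a correlation, a rank-based or ANOVA-style comparison of the grouped scores would be an equally plausible reading of the method.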

Evaluating ChatGPT's Competency in BIM Related Knowledge via the Korean BIM Expertise Exam (BIM 운용 전문가 시험을 통한 ChatGPT의 BIM 분야 전문 지식 수준 평가)

  • Choi, Jiwon; Koo, Bonsang; Yu, Youngsu; Jeong, Yujeong; Ham, Namhyuk
    • Journal of KIBIM / v.13 no.3 / pp.21-29 / 2023
  • ChatGPT, a chatbot based on GPT large language models, has gained immense popularity among the general public as well as domain professionals. To assess its proficiency in specialized fields, ChatGPT was tested on mainstream exams like the bar exam and medical licensing tests. This study evaluated ChatGPT's ability to answer questions related to Building Information Modeling (BIM) by testing it on Korea's BIM expertise exam, focusing primarily on multiple-choice problems. Both GPT-3.5 and GPT-4 were tested by prompting them to provide the correct answers to three years' worth of exams, totaling 150 questions. The results showed that both versions passed the test with average scores of 68 and 85, respectively. GPT-4 performed particularly well in categories related to 'BIM software' and 'Smart Construction technology'. However, it did not fare well in 'BIM applications'. Both versions were more proficient with short-answer choices than with sentence-length answers. Additionally, GPT-4 struggled with questions related to BIM policies and regulations specific to the Korean industry. Such limitations might be addressed by using tools like LangChain, which allow for feeding domain-specific documents to customize ChatGPT's responses. These advancements are anticipated to enhance ChatGPT's utility as a virtual assistant for BIM education and modeling automation.
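The evaluation procedure described, prompting GPT-3.5/GPT-4 with multiple-choice items and scoring the answer letters, is straightforward to reproduce. A minimal sketch using the OpenAI Python client (v1-style chat-completions API); the sample question, prompt wording, and scoring rule are assumptions, as the paper does not publish its prompts:

    # Hypothetical sketch of scoring an LLM on multiple-choice exam items.
    # The sample item below is illustrative; it is not taken from the Korean
    # BIM expertise exam or from the paper.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    questions = [
        {"text": "Which file format is the ISO-standard open BIM exchange format?",
         "choices": {"A": "IFC", "B": "DWG", "C": "STL", "D": "OBJ"},
         "answer": "A"},
    ]

    correct = 0
    for q in questions:
        options = "\n".join(f"{k}. {v}" for k, v in q["choices"].items())
        prompt = (f"{q['text']}\n{options}\n"
                  "Reply with only the letter of the correct choice.")
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = reply.choices[0].message.content.strip().upper()
        if answer.startswith(q["answer"]):
            correct += 1

    print(f"Score: {100 * correct // len(questions)}")

Sentence-length answer choices, where the paper found both models weaker, would stress the letter-matching step above; logging the full reply before matching makes such failures inspectable.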