• Title/Summary/Keyword: Language Learning


Vocabulary Education for Korean Beginner Level Using PWIM (PWIM 활용 한국어 초급 어휘교육)

  • Cheng, Yeun sook;Lee, Byung woon
    • Journal of Korean language education
    • /
    • v.29 no.3
    • /
    • pp.325-344
    • /
    • 2018
  • The purpose of this study is to summarize the Picture Word Inductive Model (PWIM), a learner-centered vocabulary teaching-learning model, and to suggest ways to implement it in Korean language education. Pictures used in the Korean language classroom visualize the concrete shape, color, and texture of the target vocabulary, helping beginner learners connect meaning to sound. Visual material stimulates the learner's intrinsic schema: it serves as a 'bridge' between the mother tongue and Korean, and it reduces the difficulty of foreign language learning that arises from the arbitrary link between meaning and sound in Korean, as in all languages. PWIM shares the use of visual materials with existing teaching methods. In earlier teacher-centered methods, however, learners merely imitated the teacher, who presented fragmented, decontextualized photographs and taught the words directly. PWIM is a learner-centered method that prompts learners to discover vocabulary on their own by presenting visual information that reflects real context. It is therefore well suited to beginner learners studying concrete vocabulary in domains such as personal identification (mainly objects), residence and environment, daily life, shopping, health, climate, and traffic. This study develops a way of using PWIM suited to Korean language learners, along with teaching procedures. The researchers reorganized previous research into three steps: brainstorming and word organization, generalization of the semantic and morphological rules of the extracted words, and application of the words. The three steps can be carried out in a single session, or they can be divided and taught at different times. It is expected that the PWIM teaching-learning method, with its realistic visual materials, will enable teachers and learners to build effective classes together.

A study on Customized Foreign Language Learning Contents Construction (사용자 맞춤형 외국어학습 콘텐츠 구성을 위한 연구)

  • Kim, Gui-Jung;Yi, Jae-Il
    • Journal of Digital Convergence
    • /
    • v.17 no.1
    • /
    • pp.189-194
    • /
    • 2019
  • This paper studies a methodology for building customized contents according to users' tendencies through the development of IT-based learning contents. Learners around the world use mobile devices and mobile learning contents for learning activities in many fields, and foreign language learning is one of the most typical mobile learning areas. The foreign language learning contents suggested in this study are constructed from the learner's spoken and text input, in accordance with the user's verbal tendencies. A suitable method is needed to translate the user's native-language text into the target language and turn it into user-friendly content.

Machine Learning Language Model Implementation Using Literary Texts (문학 텍스트를 활용한 머신러닝 언어모델 구현)

  • Jeon, Hyeongu;Jung, Kichul;Kwon, Kyoungah;Lee, Insung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.2
    • /
    • pp.427-436
    • /
    • 2021
  • The purpose of this study is to implement a machine learning language model that learns from literary texts. An important characteristic of literary texts is that question-answer pairs are often not clearly distinguished. Literary texts also contain pronouns, figurative expressions, soliloquies, and similar features that are difficult for algorithms to learn, which has discouraged machine learning research on such texts. Yet algorithms trained on literary texts can produce more human-friendly interactions than algorithms trained on ordinary sentences. To this end, this paper proposes three text correction tasks that must precede the use of literary texts in a machine learning language model: pronoun processing, dialogue pair expansion, and data amplification. Training data for artificial intelligence should have clear meanings to facilitate learning and ensure high effectiveness. Introducing special genres such as literature into natural language processing research is expected not only to expand the learning domain of machine learning but also to demonstrate a new method of language learning.
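The first of the three correction tasks, pronoun processing, can be illustrated with a minimal heuristic sketch. Everything below (the function name, the pronoun list, the "most recent capitalized name" rule) is an illustrative assumption, not the authors' implementation, which would need genuine coreference resolution for Korean literary text:

```python
# Hedged sketch: naive pronoun resolution as a preprocessing step.
# Replaces a pronoun with the most recently seen capitalized name.
# A toy heuristic only; not the paper's actual pronoun-processing method.

PRONOUNS = {"he", "she", "him", "her", "they", "them"}

def resolve_pronouns(tokens):
    """Replace pronouns with the last proper-noun candidate seen so far."""
    last_entity = None
    resolved = []
    for tok in tokens:
        if tok.lower() in PRONOUNS and last_entity is not None:
            resolved.append(last_entity)      # substitute the antecedent
        else:
            if tok[:1].isupper() and tok.lower() not in PRONOUNS:
                last_entity = tok             # remember candidate antecedent
            resolved.append(tok)
    return resolved

print(resolve_pronouns("Elizabeth smiled because she was pleased".split()))
```

Replacing pronouns this way gives each training sentence a self-contained meaning, which is the stated goal of the preprocessing step.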

Effects of Facebook on Language Learning

  • SUNG, Minkyung;KWON, Sungho
    • Educational Technology International
    • /
    • v.12 no.2
    • /
    • pp.95-116
    • /
    • 2011
  • This study examines the effects of Facebook on language learning, in terms of facilitating interaction and collaboration, by applying Facebook in a Korean language class. Forty-one exchange students from seventeen countries used Facebook to exchange information and complete group projects. Results show that Facebook was effective for sharing class materials, engaging in the class community, and collaborating on assignments. Students also commented that socializing with peers was helpful, yet more activities and discussion are needed to draw active participation. The study also points out the important role of instructors who implement social media and manage the class.

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata
    • /
    • v.7 no.2
    • /
    • pp.11-29
    • /
    • 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing tasks. Because they have been pre-trained on a large corpus, high performance can be expected even after fine-tuning on a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models that provide these advantages, and they are also being actively applied in other fields such as computer vision and audio. To help researchers understand pre-trained language models and apply them to natural language processing tasks, this paper defines the language model and the pre-trained language model, and discusses the development of pre-trained language models, with a focus on representative Transformer variants.
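The survey's starting point, the definition of a language model as a probability distribution over token sequences, can be sketched with a tiny count-based bigram model. This is a toy illustration of the definition only (the corpus and names are invented); modern pre-trained models replace these counts with Transformer networks trained on huge corpora:

```python
# Toy bigram language model: P(sentence) ≈ Π P(w_i | w_{i-1}),
# with conditional probabilities estimated by counting bigrams
# in a tiny corpus. Illustrates the "language model" definition only.
from collections import Counter

corpus = [
    "<s> the model learns language </s>",
    "<s> the model learns fast </s>",
]

bigrams = Counter()
unigrams = Counter()
for line in corpus:
    toks = line.split()
    unigrams.update(toks[:-1])            # contexts (every token but the last)
    bigrams.update(zip(toks, toks[1:]))   # adjacent pairs

def prob(sentence):
    """Probability of a tokenized sentence under the bigram model."""
    toks = sentence.split()
    p = 1.0
    for prev, cur in zip(toks, toks[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev]
    return p

print(prob("<s> the model learns language </s>"))  # 0.5: 'learns' is followed by 'language' half the time
```

Fine-tuning a pre-trained Transformer keeps this same probabilistic view of text but starts from weights already fitted to a large corpus, which is why little task-specific data is needed.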

Development of ICT Teaching-Learning Model for Supporting Subject of Korean (국어 교과 지원을 위한 ICT활용 교수.학습 모형 개발에 관한 연구)

  • Kim, Yeong-Gi;Han, Seon-Gwan;Kim, Su-Yeol
    • Journal of The Korean Association of Information Education
    • /
    • v.7 no.3
    • /
    • pp.331-339
    • /
    • 2003
  • This study concerns the development of ICT (Information and Communication Technology) teaching-learning models to support Korean language learning. First, we proposed three types of development models for ICT teaching-learning in Korean language instruction; we then developed four ICT teaching-learning models based on an analysis of the Korean language curriculum and a review of preceding work, and offered strategies for applying these models to Korean language learning. We expect that the four models developed in this study will inform the design of ICT teaching-learning models in other subjects and that teachers will use them effectively in Korean lessons.


DeNERT: Named Entity Recognition Model using DQN and BERT

  • Yang, Sung-Min;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.4
    • /
    • pp.29-35
    • /
    • 2020
  • In this paper, we propose DeNERT, a new named entity recognition model. Recently, natural language processing has been actively researched using language representation models pre-trained on large corpora. In particular, named entity recognition, one subfield of natural language processing, typically uses supervised learning, which requires a large training dataset and substantial computation. Reinforcement learning learns through trial-and-error experience without initial data; it is closer to the human learning process than other machine learning methodologies, but it has not yet been widely applied to natural language processing, being used mostly in simulation environments such as Atari games and AlphaGo. BERT is a general-purpose language model developed by Google, pre-trained on a large corpus at great computational cost; it shows high performance in natural language processing research and high accuracy on many downstream tasks. We propose DeNERT, a named entity recognition model that combines two deep learning models, DQN and BERT. The proposed model is trained by building a reinforcement learning environment on top of the language representations that are the strength of a general-purpose language model. The resulting DeNERT model achieves faster inference and higher performance with a small training dataset. We validate its named entity recognition performance through experiments.
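The trial-and-error principle behind the model's reinforcement-learning half can be sketched with tabular Q-learning on a trivial environment. This is a hedged illustration of Q-learning in general, not the DeNERT training setup, which uses a DQN (a neural network in place of this table) over BERT token representations:

```python
# Minimal tabular Q-learning on a 1-D corridor: states 0..4, actions
# left/right, reward +1 only on reaching the goal state. The agent
# learns purely from trial-and-error experience, with no initial data.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # left, right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for _ in range(500):                     # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update toward reward plus discounted best next value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy prefers 'right' in every non-goal state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))
```

In DeNERT the "state" is built from BERT's contextual representations and the "actions" are tagging decisions, but the same update rule drives learning.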

The critical period in Korean EFL contexts and UG (한국인 EFL 학습자의 결정적 시기와 보편문법)

  • Hahn, Hye-Ryeong
    • English Language & Literature Teaching
    • /
    • no.6
    • /
    • pp.219-239
    • /
    • 2000
  • There has been growing enthusiasm in Korea for early education in English as a foreign language (EFL). The present study examined the validity of the Critical Period Hypothesis in terms of Universal Grammar (UG) in three types of learning contexts: first language (L1), second language (SL), and foreign language (FL). While previous findings in L1 and SL contexts suggest that UG principles and parameters are accessible to language learners only in the early years of life, this article argues that those results, and even the methods, cannot be applied to EFL settings, and that independent studies of the EFL context are required. It also proposes the recent UG notion of functional categories as the most appropriate subject for discussing Korean EFL learners' access to UG. Findings on foreign language contexts, including the author's own, strongly indicate that UG is not sensitive to learners' starting ages in FL settings. If young children in FL contexts cannot develop their interlanguage grammar based on UG, existing teaching methods for young children should be revised.


Measures on Improving Korean Language Skills by Using Shadowing Techniques (섀도잉(shadowing)기법을 활용한 한국어 수업 방안)

  • Hyun, Nam Ji
    • Journal of Korean language education
    • /
    • v.29 no.2
    • /
    • pp.49-72
    • /
    • 2018
  • The purpose of this study is to introduce an efficient measure in Korean language education by applying shadowing techniques, which aim to improve not only listening and speaking skills but also reading and writing skills. First, the study discusses the definition and effects of shadowing. Second, it examines proposed methods related to the shadowing technique, comprising the original techniques and variants derived from them. Third, it reviews background information on the shadowing technique in previous research and in experiments applying it to Korean language education. Finally, it introduces learning measures that integrate the four skills. Most previous research on the shadowing technique was limited to a few students at the mid-to-high level, whereas this method can cover a wide range of learners. The suggested method should be applied first, with the focus placed on learners' repetition and practice.

A Korean Language Stemmer based on Unsupervised Learning (자율 학습에 의한 실질 형태소와 형식 형태소의 분리)

  • Jo, Se-Hyeong
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.675-684
    • /
    • 2001
  • This paper describes a method for stemming Korean using unsupervised learning from a raw corpus. The technique requires no lexicon or language-specific knowledge. Because learning is unsupervised, the time and effort it requires are negligible. Unlike heuristic approaches that lack theoretical grounding, this method is based on widely accepted statistical methods and can therefore be easily extended. The method is currently applied only to Korean, but since it is not language-dependent, it can easily be adapted to other agglutinative languages.
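The core idea, choosing a stem/ending split point statistically from raw text alone, can be sketched in a few lines. The sketch below is a hedged toy: the romanized stand-in words and the simple prefix-times-suffix frequency score are illustrative assumptions, not the paper's actual statistical model:

```python
# Hedged sketch of unsupervised stem/ending segmentation: score every
# split point of a word by how often the candidate stem (prefix) and
# candidate ending (suffix) each occur across the raw corpus, and keep
# the best-scoring split. No lexicon or grammar is consulted.
from collections import Counter

# Romanized stand-ins for inflected Korean verb forms (stem + ending).
corpus = ["meokda", "meokgo", "japda", "japgo", "boda", "bogo"]

prefix_counts = Counter()
suffix_counts = Counter()
for w in corpus:
    for i in range(1, len(w)):
        prefix_counts[w[:i]] += 1
        suffix_counts[w[i:]] += 1

def split(word):
    """Best stem/ending split by corpus-wide prefix × suffix frequency."""
    best = max(range(1, len(word)),
               key=lambda i: prefix_counts[word[:i]] * suffix_counts[word[i:]])
    return word[:best], word[best:]

print(split("meokda"))  # ('meok', 'da'): 'da' recurs as an ending across the corpus
```

Because recurring endings such as "da" and "go" dominate the suffix counts, the highest-scoring split separates the content morpheme from the functional one, which is the intuition behind learning segmentation from distributional statistics alone.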
