• Title/Summary/Keyword: Language Training

Search Results: 685

Suggestions for Developing On-Line In-Service English Teacher Training Program

  • Lee, Byeong-Cheon
    • International Journal of Contents / v.13 no.3 / pp.32-37 / 2017
  • The development of Information and Communications Technology (ICT) has changed how English teachers are trained as instructors, as well as how English is learned. Online technology, a product of ICT development, is used for purposes such as training new teachers and enhancing the professionalism of current teachers through more efficient training. The purpose of this study is to extract common domains and sub-domains related to the development of cyber-type in-service training (INSET) through domestic and international case analyses, and to develop content areas for efficient online INSET programs conducted in cyberspace. To this end, domestic and foreign cases were analyzed with respect to the direction of development of online English teacher training programs.

Examining Teachers' Beliefs about Teaching English in a Teacher Training Program

  • Yang, Eun-Mi
    • English Language & Literature Teaching / no.3 / pp.71-93 / 1997
  • Teachers' beliefs about teaching English are reflected in their classroom practices and influence their students' attitudes toward English learning. Any teacher training program expects its trainees to change or modify their existing beliefs and attitudes in a desired direction through the new ideas and information the program introduces. The present study describes a training program for elementary school English teachers and compares their beliefs about teaching English before and after the training. The subjects are elementary school English teachers in the Chungnam area who received 120 hours of special training during January 1997. Their beliefs about English teaching were investigated by examining two journals kept by each subject, one before and one after the training. The journals reveal the teachers' inner flow of thought, so teacher trainers can gain insight into their general instructional considerations and draw implications for future teacher training programs. In addition, the journal writing itself gives the teachers an opportunity to reflect on their practice, rethink their beliefs, and develop as professional English teachers.


Examining the Effects of Trained Peer Feedback on EFL Students' Writing

  • Kim, Bo-Ram
    • English Language & Literature Teaching / v.15 no.2 / pp.151-168 / 2009
  • The present study investigates the impact of trained peer feedback on the quantity and quality of revisions made by EFL students at a low-intermediate level. Peer review training was carried out in the experimental group through four in-class training sessions and four peer dyad-instructor conferences after class. Students' 1st drafts with written peer feedback, along with revised drafts from before and after the training, were collected and analyzed. Results reveal that after training the students produced more revisions in response to their peer comments (96% of total revisions), and those revisions were enhanced in quality (93% of peer-triggered revisions). In contrast, the results of a paired t-test within the control group indicate no significant difference between the two datasets collected in week 3 and week 16 (t = -.57, df = 19, p = .577 at p < .05). The findings suggest that training, as an ongoing process of teacher intervention, contributes to the effectiveness of the peer feedback activity. The study provides pedagogical implications for how to structure and implement peer review training, given its direct benefits in an EFL writing class.

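A paired t-test like the one reported in this abstract (t = -.57, df = 19) can be sketched as follows. The revision-quality scores below are purely hypothetical illustration, not the study's data; only the sample size (20 students, hence df = 19) follows the abstract.

```python
import math
import statistics

# Hypothetical scores for the same 20 students at week 3 and week 16
# (illustrative numbers only, not taken from the study).
week3  = [3.1, 2.8, 3.5, 3.0, 2.9, 3.2, 3.4, 2.7, 3.1, 3.0,
          2.6, 3.3, 3.0, 2.9, 3.1, 3.2, 2.8, 3.0, 3.4, 2.9]
week16 = [3.2, 2.9, 3.4, 3.1, 2.8, 3.3, 3.5, 2.8, 3.0, 3.1,
          2.7, 3.2, 3.1, 3.0, 3.0, 3.3, 2.9, 3.1, 3.3, 3.0]

# The paired t statistic is the mean within-subject difference divided by
# the standard error of the differences; df = n - 1.
diffs = [b - a for a, b in zip(week3, week16)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
df = n - 1
print(f"t = {t_stat:.2f}, df = {df}")
```

Because each student is measured twice, the test is run on the within-subject differences rather than on the two groups of raw scores.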

A Study on the Preferences of Dental Technology Students for Overseas Employment (치기공과 학생들의 해외취업에 대한 선호도 조사)

  • Kim, Im-Sun;Kim, Jeong-Sook
    • Journal of Technologic Dentistry / v.34 no.3 / pp.303-314 / 2012
  • Purpose: This study aimed to identify overseas workplaces and improve global competence through a survey of dental technology students' preferences regarding overseas employment. Methods: The survey sample consisted of 250 randomly selected dental technology students, surveyed from March 1 to May 1, 2012. A total of 245 (98.0%) replies were received, and 236 questionnaires were analyzed after excluding 9 incomplete ones. The questionnaire consisted of 7 items on general information, 10 items on overseas employment characteristics, 7 items on plans to promote overseas employment, and 7 items on job competency development. Collected data were analyzed using the SPSS (Statistical Package for the Social Sciences) Win 19.0 statistics program. Results: Regarding general characteristics, the subjects comprised 131 third-year students (55.5%), 63 first-year students (26.7%), and 42 second-year students (17.8%); 130 were male (55.1%) and 106 female (44.9%). Of the subjects, 221 (93.6%) had no experience of language training. Students with 1-5 months of clinical training numbered 123 (52.1%), and 24 (10.2%) had more than six months; 89 (37.7%) had no clinical training. A majority, 155 (65.7%), hoped to work for a Korean employer, while 81 (34.3%) chose a foreign employer. The favored working countries were Australia (41.5%), the United States (29.2%), Canada (18.2%), and others (11.0%). Dental ceramics was the most preferred field, with 104 responses (44.1%). Preferred training periods were 3 hours (40.3%) and 6 hours (35.2%). The training considered most important was language-centered education (54.2%), followed by job-oriented education (24.2%), local culture education (16.1%), other (3.0%), and leadership training (2.5%). As instructors, the subjects preferred overseas workers (44.9%), working-level practitioners (28.8%), successfully employed dental technology graduates (19.5%), and professors (3.4%).
The subjects obtained education and training information from professors (40.3%), other sources (28.0%), seniors (14.4%), job sites (8.9%), and acquaintances (8.5%). Credit exchange (2.46 points), a joint degree program (2.46 points), and foreign professors (2.33 points) were cited as needed to promote overseas employment. Kinds of dental prostheses (3.58 points), carving tooth morphology (3.38 points), and dental technology majors (3.30 points) were indicated as job competencies to develop for overseas employment. Age, year of study, clinical training experience, and preferred employer were statistically significant general characteristics affecting job competency development. Conclusion: Colleges need to offer a variety of programs, such as foreign-language-centered education and local job competency development programs, so that graduates can be connected with international workplaces and employment.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning algorithms, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. Since a training set of sentences generally includes a huge number of words or morphemes, however, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras, based on Theano.
After pre-processing the texts, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. All the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
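The sliding-window dataset construction described in this abstract (20 consecutive characters as input, the 21st as target) can be sketched as follows. The sample text is a placeholder; the study used phoneme-decomposed Korean Old Testament text with 74 unique characters.

```python
# Build (input, target) pairs: each input is a window of 20 character
# indices, and the target is the index of the character that follows.
text = "in the beginning god created the heaven and the earth"

window = 20
chars = sorted(set(text))                      # character vocabulary
char_to_idx = {c: i for i, c in enumerate(chars)}

inputs, targets = [], []
for i in range(len(text) - window):
    inputs.append([char_to_idx[c] for c in text[i:i + window]])
    targets.append(char_to_idx[text[i + window]])

# The number of pairs is len(text) - window, analogous to the
# 1,023,411 pairs obtained from the paper's full corpus.
print(len(inputs), len(inputs[0]))
```

Each index sequence would then be one-hot encoded (or embedded) before being fed to the stacked LSTM layers.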

A Survey on Open Source based Large Language Models (오픈 소스 기반의 거대 언어 모델 연구 동향: 서베이)

  • Ha-Young Joo;Hyeontaek Oh;Jinhong Yang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.4 / pp.193-202 / 2023
  • In recent years, the outstanding performance of large language models (LLMs) trained on extensive datasets has become a hot topic. Since many studies on LLMs take open-source approaches, the ecosystem is expanding rapidly. Task-specific, lightweight, and high-performing models are being actively disseminated, built with additional training techniques on top of pre-trained LLMs used as foundation models. On the other hand, the performance of LLMs on Korean is subpar, because English comprises a significant proportion of the training data of existing LLMs. Therefore, research is being carried out on Korean-specific LLMs that allow further training with Korean language data. This paper identifies trends in open-source LLMs, introduces research on Korean-specific large language models, and describes the applications and limitations of large language models.

Research Trends in Large Language Models and Mathematical Reasoning (초거대 언어모델과 수학추론 연구 동향)

  • O.W. Kwon;J.H. Shin;Y.A. Seo;S.J. Lim;J. Heo;K.Y. Lee
    • Electronics and Telecommunications Trends / v.38 no.6 / pp.1-11 / 2023
  • Large language models seem promising for handling reasoning problems, but their underlying solving mechanisms remain unclear. Large language models will establish a new paradigm in artificial intelligence and in society as a whole. However, a major challenge of large language models is the massive resources required for training and operation. To address this issue, researchers are actively exploring compact language models that retain the capabilities of large language models while notably reducing the model size. These research efforts mainly focus on improving pretraining, instruction tuning, and alignment. Chain-of-thought prompting, on the other hand, is a technique aimed at enhancing the reasoning ability of large language models: given a problem, it produces an answer through a series of intermediate reasoning steps. By guiding the model through a multistep problem-solving process, chain-of-thought prompting may improve the model's reasoning skills. Mathematical reasoning, a fundamental aspect of human intelligence, has played a crucial role in advancing large language models toward human-level performance, and it is therefore being widely explored in the context of large language models. This research extends to various domains such as geometry problem solving, tabular mathematical reasoning, and visual question answering.
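The chain-of-thought prompting described in this abstract can be sketched as prompt construction: a worked exemplar with intermediate steps is prepended to the target question, and the model is expected to imitate the step-by-step format. The exemplar and question below are illustrative, not drawn from any cited paper.

```python
# One-shot chain-of-thought prompt: the exemplar demonstrates explicit
# intermediate reasoning steps before the final answer.
exemplar = (
    "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)
question = (
    "Q: A class has 12 students and 4 more join. "
    "How many students are there?\nA:"
)

# The assembled prompt ends at "A:" so the model continues with its own
# reasoning chain in the same style as the exemplar.
prompt = exemplar + "\n" + question
print(prompt)
```

Without the exemplar's intermediate steps (plain "standard" prompting), the model would be nudged toward answering directly, which is exactly the contrast the chain-of-thought literature studies.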

DeNERT: Named Entity Recognition Model using DQN and BERT

  • Yang, Sung-Min;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.29-35 / 2020
  • In this paper, we propose DeNERT, a new named entity recognition model. Recently, the field of natural language processing has been actively researched using language representation models pre-trained on large corpora. In particular, named entity recognition, one subfield of natural language processing, uses supervised learning, which requires a large training dataset and heavy computation. Reinforcement learning is a method that learns through trial-and-error experience without initial data; it is closer to the process of human learning than other machine learning methodologies but has not yet been widely applied to natural language processing. It is often used in simulation environments such as Atari games and AlphaGo. BERT is a general-purpose language model developed by Google, pre-trained on a large corpus with substantial computation. It currently shows high performance in natural language processing research and high accuracy on many downstream tasks. In this paper, we propose DeNERT, a named entity recognition model that combines two deep learning models, DQN and BERT. The proposed model is trained by creating a reinforcement learning environment based on the language representations that are the strength of a general-purpose language model. The resulting DeNERT model achieves faster inference and higher performance with a small training dataset. We also validate its named entity recognition performance through experiments.

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee;Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea / v.23 no.2E / pp.51-55 / 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram-based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech shows distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which are related to function words and depend on the preceding or following words. To reflect these style-specific characteristics and to overcome insufficient data for training the language model, we estimate an in-domain-dependent n-gram model by relevance weighting of out-of-domain text data according to their n-gram-based tf*idf similarity, where the in-domain language model includes a disfluency model. Recognition results show that n-gram-based tf*idf similarity weighting effectively reflects style differences.
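The relevance weighting described in this abstract can be sketched as follows, assuming bigram features and cosine similarity over tf*idf vectors; the paper's exact weighting scheme and n-gram order may differ, and the toy documents are invented for illustration.

```python
import math
from collections import Counter

def bigrams(text):
    """Return the list of word bigrams in a whitespace-tokenized text."""
    words = text.split()
    return [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]

# Toy corpora: conversational in-domain text with disfluencies ("um",
# "I mean") versus mixed out-of-domain text.
in_domain = ["um I mean the the meeting was good",
             "well you know it was fine"]
out_of_domain = ["the quarterly report shows growth",
                 "you know the meeting went well I mean"]

docs = in_domain + out_of_domain
df = Counter()
for d in docs:
    df.update(set(bigrams(d)))            # document frequency per bigram
n_docs = len(docs)

def tfidf(text):
    """Sparse tf*idf vector over bigram features."""
    tf = Counter(bigrams(text))
    return {g: c * math.log(n_docs / df[g]) for g, c in tf.items()}

def cosine(u, v):
    dot = sum(u[g] * v.get(g, 0.0) for g in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Weight each out-of-domain document by its best similarity to the
# in-domain text; higher-weighted documents would contribute more to
# the adapted n-gram counts.
weights = {d: max(cosine(tfidf(d), tfidf(t)) for t in in_domain)
           for d in out_of_domain}
```

Here the conversational out-of-domain document, which shares style markers like "you know" and "I mean" with the in-domain text, receives a higher weight than the formal report, which is the behavior the adaptation scheme relies on.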

Europass and the CEFR: Implications for Language Teaching in Korea

  • Finch, Andrew Edward
    • English Language & Literature Teaching / v.15 no.2 / pp.71-92 / 2009
  • Europass was established in 2005 by the European Parliament and the Council of Europe as a single framework for language qualifications and competences, helping citizens gain accreditation throughout the European Community. In addition, the 1996 Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) provides a common basis for language syllabi, curriculum guidelines, examinations, and textbooks in Europe. This framework describes the required knowledge and skills, the cultural context, and the levels of proficiency that learners should achieve. In combination, Europass and the CEFR provide employers and educational institutes with internationally recognized standards. This paper proposes that current trends such as globalization and international mobility require a similar approach to accreditation in Asia. As jobs and workers become independent of national boundaries and restrictions, it becomes necessary to educate students as multilingual world citizens, using standards that are accepted around the world. It is therefore suggested that assessment models such as Europass and the CEFR, along with successful language teaching models in Europe and Canada, offer opportunities for adaptation to the Korean education system. Finally, rigorous teacher training to internationally recognized levels is recommended if Korea is to produce a workforce of highly skilled, plurilingual world citizens.
