• Title/Summary/Keyword: Language Training

Applications of English Education with Remote Wireless Mobile Devices (무선 원격 시스템의 모바일 장치를 이용한 영어 학습 방법 연구)

  • Lee, Il Suk
    • Journal of Digital Contents Society / v.14 no.2 / pp.255-262 / 2013
  • Useful applications for English education can instantly turn mobile devices into wireless remote controls for classroom computers. Once the free server software is installed on the classroom's main computers, students who install Air Mouse on their mobile devices can operate those computers, for example to control PowerPoint. With this setup, students can draw or write on the "board" and manipulate educational resources from their seats. Learning English encompasses not only academic study but also language training. Until recently, English learning has suffered from a recurring problem: instead of serving as a tool that facilitates communication, it has mainly been pursued for school grades, TOEIC, and TOEFL. This study proposes an English learning methodology that uses various applications, such as mobile apps, VOD English content, and movie scripts, to implement easy, fun learning activities that can be practiced regularly. This is operationalized by setting specific limits on each learning session and by using various media such as podcasts and apps to increase interest, motivation, and self-directed learning in an otherwise passive learning environment.

Convergent Web-based Education Program to Prevent Dementia (웹기반의 치매 예방용 융합교육 프로그램 개발)

  • Park, Kyung-Soon;Park, Jae-Seong;Ban, Keum-Ok;Kim, Kyoung-Oak
    • The Journal of the Korea Contents Association / v.13 no.11 / pp.322-331 / 2013
  • The purpose of the present study was to develop convergent educational content for dementia prevention that runs on the web, applying modern information technology (IT). In the preparation stage, Korean and international literature on dementia was analyzed and industry demands were surveyed, based on which the program was designed and developed. In the following enhancement stage, the program was refined according to advice from experts in various fields. The development results are summarized as follows. First, the 645 intellect development model for dementia prevention was established through peer review and expert verification of convergent education theories; the model was named "Garisani," a Korean word meaning "cognition capable of judging objects." Second, 'Find a way' and 'Connect a line' modules were developed in the numeric field, and 'Identify a letter (I, II)' modules in the language field, for the web-based left-brain training program. Third, 'Find my car' and 'Vision training' modules in the attention field and 'Object inference' and 'Compare pictures' modules in the cognition field were developed for the web-based right-brain training program. Fourth, 'Pentomino' and 'BQmaze' (Brain Quotient maze) modules in the space-perception field and 'Visual training' in the memory field were developed for training both hemispheres. Fifth, all results were integrated into a 52-week Garisani convergent education program for dementia prevention.

Generating Sponsored Blog Texts through Fine-Tuning of Korean LLMs (한국어 언어모델 파인튜닝을 통한 협찬 블로그 텍스트 생성)

  • Bo Kyeong Kim;Jae Yeon Byun;Kyung-Ae Cha
    • Journal of Korea Society of Industrial Information Systems / v.29 no.3 / pp.1-12 / 2024
  • In this paper, we fine-tuned KoAlpaca, a large-scale Korean language model, and implemented a blog text generation system based on it. Blogs on social media platforms are widely used as a marketing tool for businesses. We constructed training data of positive reviews through sentiment analysis and refinement of collected sponsored blog texts, and applied QLoRA for lightweight training of KoAlpaca. QLoRA is a fine-tuning approach that significantly reduces the memory required for training; in our experiments with a 12.8B-parameter model, it cut memory usage by up to 58.8% compared to LoRA. To evaluate the generative performance of the fine-tuned model, we generated texts from 100 inputs not included in the training data: they contained on average more than twice as many words as texts from the pre-trained model, and texts with positive sentiment appeared more than twice as often. In a survey conducted for qualitative evaluation, respondents judged the fine-tuned model's outputs to be more relevant to the given topics on average 77.5% of the time. This demonstrates that the positive-review generation model for sponsored content presented in this paper can improve the time efficiency of content creation and deliver consistent marketing effects. However, to reduce the generation of content that strays outside the category of positive reviews due to artifacts of the pre-trained model, we plan to continue fine-tuning with augmented training data.
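
To make the QLoRA setup concrete, the following is a minimal sketch of 4-bit quantized LoRA fine-tuning with the Hugging Face transformers and peft libraries. It is an assumed configuration, not the authors' code: the checkpoint name, target modules, and hyperparameters are placeholders for illustration.

```python
# Hedged sketch of QLoRA fine-tuning for a KoAlpaca-style causal LM.
# Checkpoint name and hyperparameters are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "beomi/KoAlpaca-Polyglot-12.8B"  # placeholder 12.8B Korean checkpoint

# QLoRA idea: keep the frozen base model in 4-bit to cut training memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train only small low-rank adapters on the attention projections
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # typical for GPT-NeoX-style blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a tiny fraction of 12.8B
```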

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.191-206 / 2022
  • Recently, using a pre-trained language model (PLM) has become the de facto approach to achieving state-of-the-art performance on various natural language tasks (so-called downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during training and performs worse on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-domain PLM for Korean and its applications. Our model, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluations on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets built from common corpora like Wikipedia and news articles. Moreover, KB-BERT outperforms the compared models on finance-domain datasets that require finance-specific knowledge.
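
As a concrete picture of the downstream-task setup, here is a minimal sentiment-classification fine-tuning sketch with Hugging Face transformers. Since the KB-BERT checkpoint itself is not identified here, the public KLUE-RoBERTa baseline mentioned in the abstract stands in, and the two-example dataset is invented.

```python
# Minimal downstream fine-tuning sketch (sentiment classification).
# The checkpoint is a stand-in for KB-BERT; the dataset is invented.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "klue/roberta-base"  # swap in the finance-domain PLM here

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Tiny illustrative finance-sentiment data; real work uses a curated corpus
data = Dataset.from_dict({
    "text": ["실적이 예상을 상회했다", "신용 등급이 하향 조정되었다"],
    "label": [1, 0],  # 1 = positive, 0 = negative
})
data = data.map(
    lambda b: tokenizer(b["text"], truncation=True,
                        padding="max_length", max_length=128)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```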

Analysis of Korean Language Parsing System and Speed Improvement of Machine Learning using Feature Module (한국어 의존 관계 분석과 자질 집합 분할을 이용한 기계학습의 성능 개선)

  • Kim, Seong-Jin;Ock, Cheol-Young
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.66-74 / 2014
  • Recently, a variety of studies on Korean parsing systems have been carried out by software engineers and linguists. Such systems mainly follow either the machine-learning or the symbol-processing paradigm. Parsing systems based on machine learning suffer from long training times, because Korean sentence data are very large, and they show limited recognition rates because the data contain errors. In this paper we design a system using feature modules, which reduces training time, and we analyze the recognition rate for different numbers of training sentences and repetition counts. The designed system uses separate feature modules and sorted tables for binary search. We use 36,090 refined sentences extracted from the Sejong Corpus. Training time decreases by about three hours, and the recognition rate is highest, at 84.54%, when 10,000 sentences are trained 50 times. When all 32,481 training sentences are trained 10 times, the recognition rate is 82.99%. The results show that it is more efficient to use refined data and to repeat training until the system reaches a steady state.
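
The "feature modules with sorted tables for binary search" idea can be pictured with a short sketch; the module names and feature strings below are invented for illustration and are not the paper's actual feature set.

```python
# Toy sketch of per-type feature modules kept as sorted tables, so membership
# tests are binary searches. Module names and features are invented examples.
import bisect

feature_modules = {
    "lexical": sorted(["NNG:학교", "NNG:학생", "VV:가다"]),
    "pos_bigram": sorted(["NNG+JKS", "VV+EF", "MAG+VA"]),
}

def has_feature(module: str, feat: str) -> bool:
    """Binary-search the sorted table of one feature module."""
    table = feature_modules[module]
    i = bisect.bisect_left(table, feat)
    return i < len(table) and table[i] == feat

print(has_feature("lexical", "NNG:학교"))    # True
print(has_feature("pos_bigram", "EC+VX"))    # False
```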

A Dialogic Picturebook Reading Program : Effects on Teacher-Toddler Interactions and on Toddler Language (영아를 위한 대화식 그림책읽기 교사교육 프로그램의 효과)

  • Lee, Mee Hwa;Kim, Myoung Soon
    • Korean Journal of Child Studies / v.25 no.2 / pp.41-57 / 2004
  • Subjects were 88 two-year-old toddlers (25-36 months of age) and 32 teachers in 13 childcare centers; they were randomly assigned to experimental or control groups. The researcher observed teacher-toddler interaction in picturebook reading situations. Analysis of patterns of teachers' verbal behavior and coding of toddlers' verbal and nonverbal behaviors were based on Senechal et al. (1995) and Whitehurst et al. (1988), respectively. In comparison with the control group, toddlers in the experimental group showed significant differences in verbal behavior: they acquired nouns occurring in the picturebooks and used more expressive and comprehensive language. After the training intervention, teachers in the experimental group showed changes in the quality and quantity of their verbal behavior.

Japanese Adults' Perceptual Categorization of Korean Three-way Distinction (한국어 3중 대립 음소에 대한 일본인의 지각적 범주화)

  • Kim, Jee-Hyun;Kim, Jung-Oh
    • Proceedings of the Korean Society for Cognitive Science Conference / 2005.05a / pp.163-167 / 2005
  • Current theories of cross-language speech perception claim that patterns of perceptual assimilation of non-native segments to native categories predict relative difficulty in learning to perceive (and produce) non-native phones. Perceptual assimilation patterns by Japanese listeners of the three-way voicing distinction in Korean syllable-initial obstruent consonants were assessed directly. According to the Speech Learning Model (SLM) and the Perceptual Assimilation Model (PAM), the resulting perceptual assimilation pattern predicts relative difficulty in discriminating between lenis and aspirated consonants, and relative ease in discriminating fortis consonants. This study compared the effects of two different training conditions on Japanese adults' perceptual categorization of the Korean three-way distinction. In one condition, participants were trained to discriminate lenis and aspirated consonants, which were predicted to be problematic, whereas in the other condition participants were trained on all three classes. 'Learnability' did not seem to depend lawfully on the perceived cross-language similarity of Korean and Japanese consonants.

A Noun Extractor based on Dictionaries and Heuristic Rules Obtained from Training Data (학습데이터를 이용하여 생성한 규칙과 사전을 이용한 명사 추출기)

  • Jang, Dong-Hyun;Myaeng, Sung-Hyon
    • Annual Conference on Human and Language Technology / 1999.10d / pp.151-156 / 1999
  • Various techniques can be used to extract nouns from text; this paper describes a technique that extracts nouns effectively with a simple model using dictionaries and rules generated from training data. The model basically uses noun, ending, and predicate dictionaries, and noun identification is carried out through rules generated from the training data. The proposed method can identify nouns without complex linguistic analysis and can identify compound nouns without a compound-noun dictionary. In addition, the rules and dictionary entries that drive noun identification are easy to add and update, and a new dictionary can be added when text from a specific domain needs to be analyzed. Using the proposed method, we participated in the noun extractor track of the 1st Morphological Analyzer and Part-of-Speech Tagger Evaluation Contest (MATEC '99); this paper presents the evaluation results and their analysis. We also identify parts of the current evaluation criteria that are inappropriate and present our own re-evaluation results based on revised criteria.
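
A toy version of the dictionary-plus-rules approach, including compound-noun splitting without a compound-noun dictionary, might look like the sketch below; all dictionary entries and rules are invented for illustration.

```python
# Toy dictionary + rule noun extractor. Entries and rules are illustrative
# inventions, not the paper's actual dictionaries or learned rules.
noun_dict = {"학습", "데이터", "규칙", "사전", "명사"}
ending_dict = {"를", "을", "과", "와", "으로"}

def extract_nouns(eojeol: str) -> list[str]:
    # Rule 1: strip a known ending, then look the stem up in the noun dictionary
    for ending in ending_dict:
        if eojeol.endswith(ending):
            stem = eojeol[: -len(ending)]
            if stem in noun_dict:
                return [stem]
            # Rule 2: split the stem as a compound of two known nouns,
            # so no separate compound-noun dictionary is needed
            for i in range(1, len(stem)):
                if stem[:i] in noun_dict and stem[i:] in noun_dict:
                    return [stem[:i], stem[i:]]
    return [eojeol] if eojeol in noun_dict else []

print(extract_nouns("학습데이터를"))  # ['학습', '데이터']
print(extract_nouns("규칙과"))        # ['규칙']
```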

PESAA - Computer Assisted English Speaking Training system (PESAA - 컴퓨터 보조 영어 말하기 훈련 시스템)

  • Bang, Jeesoo;Lee, Jonghoon;Kang, Sechun;Lee, Geunbae Gary
    • Annual Conference on Human and Language Technology / 2012.10a / pp.73-76 / 2012
  • As the need for and demand of English education grow, computer-assisted foreign-language education systems have been introduced as a means of individual English study. Pronunciation is one of the hardest aspects to acquire when encountering a new foreign language, and because pronunciation is a key factor in foreign-language speaking proficiency, it requires dedicated training. Recognizing this problem, we developed a computer-assisted pronunciation training system to help learners improve their foreign-language pronunciation. The system includes pronunciation training and intonation training, that is, sentence-stress training and pause (chunking) training, and provides appropriate evaluation and feedback on the user's utterances. This paper focuses on the components and operation of the pronunciation training system.
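
As a toy illustration of the kind of feedback such a system can produce for pause (chunking) training, the snippet below compares a learner's detected pause positions against a reference chunking of the script; the data and comparison rule are invented, not PESAA's actual scoring.

```python
# Invented example of chunking feedback: compare the learner's detected
# pause positions (word indices) with a reference chunking of the script.
reference_pauses = {3, 7}   # where the script expects a pause
learner_pauses = {3, 5}     # pauses detected in the learner's utterance

missed = sorted(reference_pauses - learner_pauses)
extra = sorted(learner_pauses - reference_pauses)
print(f"missed pauses after words {missed}; unexpected pauses after {extra}")
```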

Deep Learning Based Causal Relation Extraction with Expansion of Training Data (학습 데이터 확장을 통한 딥러닝 기반 인과관계 추출 모델)

  • Lee, Seungwook;Yu, Hongyeon;Ko, Youngjoong
    • Annual Conference on Human and Language Technology / 2018.10a / pp.61-66 / 2018
  • Causal relation extraction determines whether a causal relation exists in a sentence and, if so, identifies the positions of the cause and the effect. Because research on causal relations is scarce, few corpora exist, and even when a corpus is available, the nature of causal relations means that data must be rebuilt whenever the task moves to a new domain. This paper therefore presents a method for expanding causal-relation training data with a statistics-based model, which minimizes the cost of building domain-specific data while still producing a good causal-relation model for a new domain, and shows improved performance by feeding both general, domain-independent linguistic features and causality-specific features into a deep-learning model.
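
One way to picture statistics- or pattern-based expansion of causal training data is weak labeling with causal cue phrases, as in the assumed sketch below; the patterns are invented examples, and the paper's actual statistical model and features differ.

```python
# Hedged sketch of expanding causal-relation training data by weak labeling:
# cue-phrase patterns auto-label raw sentences with (cause, effect) spans.
import re

CAUSAL_CUES = [
    r"(?P<cause>.+?)\s*(때문에|로 인해)\s*(?P<effect>.+)",  # "because of ..."
    r"(?P<cause>.+?)[.,]?\s*그 결과\s*(?P<effect>.+)",      # "... as a result ..."
]

def weak_label(sentence: str):
    """Return (cause, effect) if a causal cue matches, else None."""
    for pattern in CAUSAL_CUES:
        m = re.match(pattern, sentence)
        if m:
            return m.group("cause").strip(), m.group("effect").strip()
    return None

print(weak_label("폭우 때문에 도로가 침수되었다"))
# ('폭우', '도로가 침수되었다')
```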
