• Title/Abstract/Keyword: language training

685 search results

The Effects of Tongue Pressure Strength and Accuracy Training on Tongue Strength and Speech Function of Chronic Stroke Patients (혀 저항정확도훈련이 만성 뇌졸중 환자의 혀 근력과 구어기능에 미치는 영향)

  • Kim, Bo-Jung;Ma, Sung-Ryoung
    • The Journal of the Korea Contents Association / v.17 no.11 / pp.156-166 / 2017
  • The purpose of this study was to evaluate the effects of a tongue pressure strength and accuracy training program using the Iowa Oral Performance Instrument (IOPI), and to compare its effects on tongue muscle strength and speech function with those of an oromotor exercise program. Patients diagnosed with stroke hemiplegia were divided into a tongue pressure strength and accuracy training therapy group and an oromotor exercise therapy group. Anterior Tongue Pressure (ATP), Posterior Tongue Pressure (PTP), and Maximum Phonation Time (MPT) were measured before and after the intervention to evaluate changes in tongue strength and speech ability. The results showed no significant difference in tongue strength or speech function between the training group and the oromotor exercise group. Therefore, the tongue pressure strength and accuracy training was not more effective than oromotor exercise therapy at improving tongue muscle strength and spoken language ability.

An Analysis of the Achievement Test in the King Sejong Institute: Current Status of Applicants and their Performance (세종학당 성취도 평가 응시 현황 및 결과 분석 연구)

  • Kim, Jihye;Lee, Sunyoung;Park, Jinwook;Noh, Jungeun
    • Journal of Korean language education / v.29 no.3 / pp.55-82 / 2018
  • The purpose of this study is to analyze the language achievement tests of the King Sejong Institute carried out from 2014 to 2017. The tests have been developed since 2014 and are now administered in 99 institutes across 46 countries (as of the first half of 2017). Analysis of four years of results shows that the number of participating countries, institutes, and examinees has grown continuously. In the early stage, examinees from Asian regions made up the majority, but the proportion from Europe has gradually increased. In addition, only beginner-level tests were offered at first, but the range has since been expanded to the intermediate level. These achievement tests can serve as valuable data for diagnosing the present and future of overseas Korean language education. To raise public confidence in the tests as a Korean language achievement assessment, this study suggests: first, increasing the feedback effect of the evaluation; second, collecting examinees' learning-history information along with their test scores; third, training evaluators to increase the validity and reliability of the evaluation; and fourth, seeking ways to utilize the achievement test results.

Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.325-330 / 2022
  • This paper describes the collection and evaluation of a dataset for deep learning research on Korean sign language, covering tasks such as sign language recognition, translation, and segmentation. Deep learning research on sign language faces two difficulties. First, sign languages are hard to recognize because they combine multiple modalities, including hand movements, hand orientation, and facial expressions. Second, training data are scarce: the KETI dataset is currently the only known deep learning dataset for Korean sign language. Sign language datasets are classified into two categories, isolated and continuous, and although several foreign sign language datasets have been collected over time, they too are insufficient for deep learning research. We therefore collected a large-scale Korean sign language dataset and evaluated it using TSPNet, a baseline model with state-of-the-art performance in sign language translation. The collected dataset consists of a total of 11,402 image-text pairs. Our experiment with the baseline model yields a BLEU-4 score of 3.63, which can serve as the baseline performance for the Korean sign language dataset. We hope that our experience in collecting this dataset helps facilitate further research on Korean sign language.
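The abstract above reports translation quality as a BLEU-4 score. A minimal sketch of how BLEU-4 is computed (modified 4-gram precision with a brevity penalty), assuming simple whitespace tokenization and a single reference — real evaluations typically use a standard implementation with its own tokenization:

```python
from collections import Counter
from math import exp, log

def bleu4(reference, hypothesis):
    """Sentence-level BLEU-4: geometric mean of clipped 1- to 4-gram
    precisions, times a brevity penalty. Tokens are whitespace-split."""
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, 5):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        total = sum(hyp_ngrams.values())
        if total == 0:
            return 0.0  # hypothesis too short to contain any n-grams
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(log(clipped / total))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else exp(1 - len(ref) / len(hyp))
    return bp * exp(sum(log_precisions) / 4)
```

A perfect match scores 1.0 (i.e. 100); a score of 3.63 on a 0-100 scale, as reported above, indicates the task remains very hard.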

Implementation of Korean TTS System based on Natural Language Processing (자연어 처리 기반 한국어 TTS 시스템 구현)

  • Kim Byeongchang;Lee Gary Geunbae
    • MALSORI / no.46 / pp.51-64 / 2003
  • In order to produce high-quality synthesized speech, it is very important to obtain accurate grapheme-to-phoneme conversion and a prosody model from text using natural language processing. Robust preprocessing of non-Korean characters is also required. In this paper, we analyze Korean texts using a morphological analyzer, a part-of-speech tagger, and a syntactic chunker. We present a new grapheme-to-phoneme conversion method for unlimited-vocabulary Korean TTS: a hybrid of a phonetic pattern dictionary and CCV (consonant-vowel) LTS (letter-to-sound) rules. We construct a prosody model using a probabilistic method and a decision-tree-based method. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness, so we adopt tree-based error correction to overcome these training-data limitations.
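The hybrid lookup order described above — consult a phonetic pattern dictionary first, and fall back to letter-to-sound rules only for unlisted words — can be sketched as follows. The dictionary entries and the single rule are illustrative placeholders, not the paper's actual data:

```python
# Exceptional pronunciations handled by the dictionary (toy example).
PHONETIC_DICT = {"값이": "갑씨"}

# Toy letter-to-sound rule: rewrite a substring to its phonetic form.
LTS_RULES = [("국", "궁")]

def apply_lts_rules(word):
    """Apply each rewrite rule in order (placeholder for real LTS rules)."""
    for pattern, phonetic in LTS_RULES:
        word = word.replace(pattern, phonetic)
    return word

def grapheme_to_phoneme(word):
    """Dictionary lookup first; rules only when the word is not listed."""
    if word in PHONETIC_DICT:
        return PHONETIC_DICT[word]
    return apply_lts_rules(word)
```

The dictionary absorbs exceptions that rules cannot express, while the rules keep the system open-vocabulary.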


Machine scoring method for speech recognizer detection mispronunciation of foreign language (외국어 발화오류 검출 음성인식기를 위한 스코어링 기법)

  • Kang, Hyo-Won;Bae, Min-Young;Lee, Jae-Kang;Kwon, Chul-Hong
    • Proceedings of the KSPS conference / 2004.05a / pp.239-242 / 2004
  • An automatic pronunciation correction system provides users with correction guidelines for each pronunciation error. For this purpose, we propose a speech recognition system that automatically classifies pronunciation errors when Koreans speak a foreign language. In this paper, we also propose machine scoring methods for automatic assessment of pronunciation quality by the speech recognizer. Scores obtained from an expert human listener are used as the reference to evaluate the different machine scores and to provide targets when training some of the algorithms. We use a log-likelihood score and a normalized log-likelihood score as machine scoring methods. Experimental results show that the normalized log-likelihood score had a higher correlation with human scores than the log-likelihood score.
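A minimal sketch of the two machine scores compared above, assuming per-frame acoustic log-likelihoods are available from forced alignment of the utterance against the target phones. Normalizing by frame count is one common normalization; the paper's exact formula may differ:

```python
def log_likelihood_score(frame_loglikes):
    """Raw log-likelihood: sum of per-frame acoustic log-likelihoods
    from aligning the utterance against the target pronunciation."""
    return sum(frame_loglikes)

def normalized_log_likelihood_score(frame_loglikes):
    """Duration-normalized variant: dividing by the frame count removes
    the dependence on utterance length, so scores of short and long
    utterances become comparable."""
    return sum(frame_loglikes) / len(frame_loglikes)
```

Removing the length dependence is a plausible reason the normalized score correlates better with human judgments: a long, well-pronounced utterance no longer accumulates a large (negative) raw score simply by being long.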


Environment Adaptation by Discriminative Noise Adaptive Training Methods (잡음적응 변별학습 방식을 이용한 환경적응)

  • Kang, Byung-Ok;Jung, Ho-Young;Lee, Yun-Keun
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.397-398 / 2007
  • This paper proposes an environment adaptation method that combines noise adaptive training with discriminative learning, for a speech recognition system that operates robustly under environmental change. Noise adaptive training, which combines multi-condition training with noise reduction, has two limitations: it diverges from the objective of MCE (Minimum Classification Error) for speech recognition, and it is practically impossible to cover every environment in which the recognition system will be used. We therefore propose a training scheme using discriminative noise adaptive learning that compensates for these shortcomings by converting a base acoustic model, trained with noise adaptive training, into an environment-adapted model through discriminative training on a small amount of data collected in the target environment.


Comparing the efficiency of college and university employment using DEA analysis program

  • Jeong, Seong-Bae;Lee, Ji-woo
    • Journal of the Korea Society of Computer and Information / v.23 no.12 / pp.203-209 / 2018
  • This study analyzed the employment efficiency of Korean colleges and universities using DEA on the 2015 GOMS data surveyed by the Ministry of Employment and Labor and the Korea Employment Information Service. The input variables were employment-program participants, language educators, applicants from economically disadvantaged families, and employment targets. The analysis found a clear efficiency gap between universities and colleges, with one group relatively inefficient. Among the inputs, employment-program participants contributed most to efficiency, at 99.9%, and language educators showed the greatest potential for efficiency improvement, at 70.04. Based on these results, the study suggests the need to activate employment programs and language training at each institution. Future studies should examine the efficiency of universities nationwide.

Post-Training with Hierarchical Masked Language Modeling (계층적 마스크 모델링을 이용한 언어 모델의 사후 학습)

  • Hyun-Kyu Jeon;Hyein Jung;Seoyeon Park;Bong-Su Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.588-591 / 2022
  • Pre-trained language models are now widely used for natural language understanding and generation. Models such as BERT and RoBERTa are pre-trained with masked language modeling (MLM) as the main task. However, MLM has the drawback of not exploiting grammatical information, since it simply masks tokens at random and predicts them. This study therefore introduces a method that exploits the grammatical information of the input sentence, performs post-training based on it, and examines its effect. We fine-tune both a publicly available pre-trained model and the post-trained model on KLUE, a benchmark dataset for Korean, and examine the results.
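The contrast the abstract draws — random token masking versus masking guided by grammatical structure — can be sketched as below. The phrase-span input format is an assumption for illustration (spans would come from a parser); it is not the paper's exact masking scheme:

```python
import random

def random_token_mask(tokens, ratio=0.15, seed=0):
    """Plain MLM: mask a fraction of tokens chosen independently at random."""
    rng = random.Random(seed)
    n = max(1, int(len(tokens) * ratio))
    idx = set(rng.sample(range(len(tokens)), n))
    return [t if i not in idx else "[MASK]" for i, t in enumerate(tokens)]

def phrase_mask(tokens, phrases, seed=0):
    """Grammar-aware variant: mask a whole syntactic phrase (a span
    supplied by a parser) instead of scattered tokens, so the model must
    reconstruct a grammatically coherent unit."""
    rng = random.Random(seed)
    start, end = rng.choice(phrases)  # one (start, end) span, end exclusive
    return [t if not (start <= i < end) else "[MASK]" for i, t in enumerate(tokens)]

tokens = ["the", "quick", "brown", "fox", "jumps"]
print(phrase_mask(tokens, [(1, 4)]))  # → ['the', '[MASK]', '[MASK]', '[MASK]', 'jumps']
```

Random masking may leave most of a phrase visible, making the prediction trivially local; masking the full span forces the model to use wider context.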


Korean Text Summarization using MASS with Copying Mechanism (MASS와 복사 메커니즘을 이용한 한국어 문서 요약)

  • Jung, Young-Jun;Lee, Chang-Ki;Go, Woo-Young;Yoon, Han-Jun
    • Annual Conference on Human and Language Technology / 2020.10a / pp.157-161 / 2020
  • Text summarization is the task of producing a summary that contains the important, core information of a given document. End-to-end abstractive summarization models based on the Sequence-to-Sequence architecture, widely used in machine translation, are being actively studied. Recently, transfer learning, in which large-scale monolingual pre-trained models such as BERT and MASS are fine-tuned, has become a major research direction in natural language processing. In this paper, we apply a copying mechanism to the MASS model, perform pre-training for Korean language generation, and then apply the model to Korean text summarization. Experimental results show that the Korean summarization model combining MASS with the copying mechanism outperforms existing models.
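A minimal sketch of what a copying mechanism computes, in the common pointer-generator formulation: the final word distribution mixes the decoder's vocabulary distribution (weight p_gen) with the attention mass over source positions (weight 1 - p_gen). Plain dicts stand in for tensors, and this is one standard formulation, not necessarily the paper's exact MASS integration:

```python
def mix_distributions(vocab_probs, attention, source_tokens, p_gen):
    """Final P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention
    over source positions where word w occurs."""
    final = {w: p_gen * p for w, p in vocab_probs.items()}
    for pos, word in enumerate(source_tokens):
        final[word] = final.get(word, 0.0) + (1 - p_gen) * attention[pos]
    return final

# An out-of-vocabulary source word ("KLUE" here) can still be produced
# by copying, even though the generator assigns it no probability.
probs = mix_distributions(
    vocab_probs={"the": 0.6, "model": 0.4},  # decoder's vocab distribution
    attention=[0.7, 0.3],                    # attention over source positions
    source_tokens=["KLUE", "model"],
    p_gen=0.8,
)
```

This is why copying helps summarization: rare names and numbers from the source document can be reproduced verbatim rather than hallucinated from the vocabulary.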


Building and quality assessing conversation-based training data for artificial intelligence tutoring systems (인공지능 튜터링 시스템을 위한 대화 기반 교육 데이터 구축 및 품질 평가)

  • Ye-Lim Jeon;Jinxia Huang;Sung-Kwon Choi;Minsoo Cho
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.430-431 / 2023
  • In education, the importance of personalized instruction that responds to each student's characteristics and needs is growing. Accordingly, AI-based tutoring systems, and conversation-based tutoring in particular, are attracting attention. This study confirmed the importance of prompt design and the necessity of human review when generating data with GPT-3.5-turbo. It also proposed an automatic evaluation method and used it to assess the quality and usefulness of the data.