• Title/Summary/Keyword: language training


Design of a Markup Language for Augmented Reality Systems (증강현실 시스템을 위한 시나리오 마크업 언어 설계)

  • Choi, Jongmyung;Lee, Youngho;Kim, Sun Kyung;Moon, Ji Hyun
    • Journal of Internet of Things and Convergence / v.7 no.1 / pp.21-25 / 2021
  • Augmented reality systems are widely used in entertainment, shopping, education, and training, and augmented reality technology is steadily growing in importance. When augmented reality is used for education or training, the system must be able to present different virtual objects for the same marker depending on the stage of the work. Moreover, because training content varies with the situation, it needs to be described by a training scenario. To address this, we propose a scenario markup language for augmented reality systems that allows training content to be authored as a scenario and connected to an augmented reality system. The language provides functions for linking a scene, a marker, and a virtual object; for reading equipment states or sensor values; and for moving between scenes according to conditions. This scenario markup language can flexibly improve the usefulness and extensibility of both augmented reality systems and their content.
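The abstract does not give the concrete syntax, so as a rough illustration only (all element and attribute names below are hypothetical, not the paper's schema), a scenario of this kind could bind one marker to stage-dependent virtual objects and carry a conditional scene transition driven by a sensor value:

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario document: the same marker M01 shows a different
# virtual object in each scene, and a sensor condition moves the scene on.
SCENARIO = """
<scenario name="valve-maintenance">
  <scene id="s1">
    <binding marker="M01" object="valve_closed.obj"/>
    <condition sensor="pressure" op="lt" value="2.0" goto="s2"/>
  </scene>
  <scene id="s2">
    <binding marker="M01" object="valve_open.obj"/>
  </scene>
</scenario>
"""

def next_scene(root, scene_id, sensor_values):
    """Return the id of the next scene whose condition is met, else None."""
    scene = root.find(f".//scene[@id='{scene_id}']")
    for cond in scene.findall("condition"):
        value = sensor_values.get(cond.get("sensor"))
        threshold = float(cond.get("value"))
        if cond.get("op") == "lt" and value is not None and value < threshold:
            return cond.get("goto")
    return None

root = ET.fromstring(SCENARIO)
print(next_scene(root, "s1", {"pressure": 1.5}))  # -> s2
```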

MCE Training Algorithm for a Speech Recognizer Detecting Mispronunciation of a Foreign Language (외국어 발음오류 검출 음성인식기를 위한 MCE 학습 알고리즘)

  • Bae, Min-Young;Chung, Yong-Joo;Kwon, Chul-Hong
    • Speech Sciences / v.11 no.4 / pp.43-52 / 2004
  • Model parameters in HMM-based speech recognition systems are normally estimated using Maximum Likelihood Estimation (MLE). The MLE method is based mainly on statistical data fitting, in the sense of increasing the HMM likelihood. The optimality of this training criterion is conditioned on the availability of an infinite amount of training data and the correct choice of model; in practice, neither condition is satisfied. In this paper, we propose a Minimum Classification Error (MCE) training algorithm to improve the performance of a speech recognizer that detects mispronunciations in a foreign language. In conventional MLE training, model parameters are adjusted to increase the likelihood of the word strings corresponding to the training utterances, without taking account of the probability of other possible word strings. In contrast, MCE training considers the competing word hypotheses and tries to reduce the probability of the incorrect ones. The discriminative training method using MCE shows better recognition results than MLE.
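A minimal numeric sketch of the MCE idea (notation simplified; the scores stand in for per-class discriminants such as HMM log-likelihoods, and the smoothing constants are arbitrary): a misclassification measure compares the correct class score against a soft maximum over competitors, and a sigmoid turns it into a smooth 0-1 loss that gradient descent can reduce.

```python
import math

def mce_loss(scores, correct, eta=2.0, gamma=1.0):
    """Smoothed Minimum Classification Error loss.

    scores:  per-class discriminant scores g_k (e.g., HMM log-likelihoods)
    correct: index of the correct class
    """
    g_correct = scores[correct]
    competitors = [g for k, g in enumerate(scores) if k != correct]
    # Soft maximum over competing hypotheses (log-sum-exp with exponent eta)
    g_comp = (1.0 / eta) * math.log(
        sum(math.exp(eta * g) for g in competitors) / len(competitors)
    )
    d = g_comp - g_correct                       # > 0 when a competitor wins
    return 1.0 / (1.0 + math.exp(-gamma * d))    # sigmoid loss in (0, 1)

# Correct class clearly best -> loss near 0; clearly worst -> loss near 1.
print(mce_loss([5.0, 1.0, 0.5], correct=0))
print(mce_loss([0.5, 5.0, 4.0], correct=0))
```

Unlike the MLE objective, this loss depends explicitly on the competitors' scores, which is the property the abstract highlights.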


Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho;Lee, Donghyun;Lim, Minkyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated with Google's Word2Vec from a large training corpus, so as to satisfy the distributional hypothesis. Discrete 1-of-|V| coded word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time on the Wall Street Journal corpus (37M words) was reduced from 30 days to 14 days.
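The replacement can be sketched as follows (a toy illustration: the 200-dim-per-word split of 600 = 3 × 200 is an assumption about how the tri-gram input is composed, and the random matrix stands in for Word2Vec-trained vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB = 20_000, 200            # 20,000-word vocabulary (paper); 200 dims per word assumed
E = rng.normal(size=(VOCAB, EMB))   # stand-in for Word2Vec-trained word vectors

def one_hot(word_id):
    """Discrete 1-of-|V| coding: a 20,000-dim vector per word."""
    x = np.zeros(VOCAB)
    x[word_id] = 1.0
    return x

def trigram_input(word_ids):
    """Concatenate continuous vectors of the context words: 3 x 200 = 600 dims."""
    return np.concatenate([E[w] for w in word_ids])

print(one_hot(4098).shape)                   # (20000,)
print(trigram_input([17, 4098, 311]).shape)  # (600,)
```

The network's first layer then sees a dense 600-dim input instead of a sparse 20,000-dim one, which is where the training-time saving comes from.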

A Computational Model of Language Learning Driven by Training Inputs

  • Lee, Eun-Seok;Lee, Ji-Hoon;Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Cognitive Science Conference / 2010.05a / pp.60-65 / 2010
  • Language learning involves the linguistic environment around the learner, so variation in the training input to which the learner is exposed has been linked to language learning outcomes. We explore how linguistic experience can cause differences in the learning of linguistic structural features, investigated with a probabilistic graphical model. We gradually increase the amount of training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word expressions). The recognition and generation of sentences is a probabilistic constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence generation tasks succeed when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, much like children's cognitive development during learning. This model supports the suggestion that varying the early linguistic environment in developmental steps may facilitate language acquisition.


A study on Korean language teachers' beliefs and practices on written feedback (서면 피드백에 대한 현장 한국어 교사의 신념과 실제에 관한 연구)

  • Shim, Yunjin;Ahn, Jaerin
    • Journal of Korean language education / v.28 no.1 / pp.141-171 / 2017
  • This study investigates Korean language teachers' beliefs and practices regarding written feedback. Two types of data were collected: (1) teachers' feedback on three compositions by elementary-level learners, and (2) a survey questionnaire. The results showed that teachers perceived written feedback to be important even though they had not had enough opportunities to receive appropriate training. This lack of training led to feedback that was limited in both quantity and quality, and to inconsistency between the teachers' beliefs and their practice. The study closes with the need for teacher training and for further studies of teachers' feedback practices.

Developing and Exploring the Possibility of Serious Language Training Game for Students with Intellectual Disabilities (지적장애 학생을 위한 기능성 언어게임의 개발 및 적용 가능성 탐색)

  • Lee, Tae-Su;Kim, Yeon-Pyo
    • The Journal of the Korea Contents Association / v.17 no.1 / pp.287-298 / 2017
  • The purpose of this study was to develop a serious language training game for students with intellectual disabilities and to explore its applicability. Words for the language training program were extracted from the National Common Basic Curriculum of Special School, and the content design of the program was developed based on evaluations of significance and difficulty by special-school teachers, special education professors, and special education experts. To explore the applicability of the game, a usability evaluation was conducted with 45 special-school teachers and 31 students with intellectual disabilities. The serious language game showed a high level of usability: the teachers rated it 4.25 and the students 4.33 out of 5, with no difference in ratings by school level or school placement. We conclude that the serious language game is highly applicable to language education for students with intellectual disabilities.

ETRI small-sized dialog style TTS system (ETRI 소용량 대화체 음성합성시스템)

  • Kim, Jong-Jin;Kim, Jeong-Se;Kim, Sang-Hun;Park, Jun;Lee, Yun-Keun;Hahn, Min-Soo
    • Proceedings of the KSPS conference / 2007.05a / pp.217-220 / 2007
  • This study outlines ETRI's small-sized dialog-style Korean TTS system, which applies HMM-based speech synthesis techniques. To build the VoiceFont, 500 dialog-style sentences were used to train the HMMs, and context information about phonemes, syllables, words, phrases, and sentences was extracted fully automatically to build context-dependent HMMs. For the acoustic model, features such as Mel-cepstra and log F0 with its delta and delta-delta were used. The resulting VoiceFont is 0.93 MB. The HMM-based TTS system was installed on an ARM720T processor running at 60 MHz. To reduce computation time, the MLSA inverse-filtering module was implemented in assembly language. The fully implemented system runs 1.73 times faster than real time.


Real-time Sign Language Recognition Using an Armband with EMG and IMU Sensors (근전도와 관성센서가 내장된 암밴드를 이용한 실시간 수화 인식)

  • Kim, Seongjung;Lee, Hansoo;Kim, Jongman;Ahn, Soonjae;Kim, Youngho
    • Journal of rehabilitation welfare engineering & assistive technology / v.10 no.4 / pp.329-336 / 2016
  • Deaf people who use sign language experience social inequality and financial loss due to communication restrictions. In this paper, a real-time pattern recognition algorithm was applied to distinguish American Sign Language using an armband sensor (8-channel EMG sensors and one IMU), to enable communication between deaf and hearing people. A validation test was carried out with 11 people. The pattern classifier was trained while gradually increasing the size of the training database. Results showed that recognition accuracy was over 97% with 20 training samples and over 99% with 30. The present study shows that sign language recognition with an armband sensor is convenient and performs well.
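The evaluation protocol (grow the training database, measure accuracy on held-out samples) can be sketched as below. Everything here is a stand-in: the abstract does not specify the classifier or features, so synthetic clusters replace windowed EMG/IMU features and a simple nearest-centroid rule replaces the paper's recognizer.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CLASSES, N_FEAT = 4, 9   # a few signs; 8 EMG channels + 1 IMU feature (assumed)

# Synthetic per-sign clusters standing in for windowed EMG/IMU features.
centers = rng.normal(scale=3.0, size=(N_CLASSES, N_FEAT))

def sample(n_per_class):
    X = np.vstack([c + rng.normal(size=(n_per_class, N_FEAT)) for c in centers])
    y = np.repeat(np.arange(N_CLASSES), n_per_class)
    return X, y

def nearest_centroid_fit(X, y):
    return np.vstack([X[y == k].mean(axis=0) for k in np.unique(y)])

def accuracy(model, X, y):
    pred = np.argmin(((X[:, None, :] - model[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

X_test, y_test = sample(50)
accs = {}
for n in (5, 20, 30):   # grow the training database, as in the paper's protocol
    X_tr, y_tr = sample(n)
    accs[n] = accuracy(nearest_centroid_fit(X_tr, y_tr), X_test, y_test)
    print(n, round(accs[n], 3))
```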

A Protein-Protein Interaction Extraction Approach Based on Large Pre-trained Language Model and Adversarial Training

  • Tang, Zhan;Guo, Xuchao;Bai, Zhao;Diao, Lei;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.771-791 / 2022
  • Protein-protein interaction (PPI) extraction from text is important for revealing the molecular mechanisms of biological processes. With the rapid growth of the biomedical literature, manually extracting PPIs has become increasingly time-consuming and laborious, so automatic PPI extraction from raw literature through natural language processing has attracted the attention of many researchers. We propose a PPI extraction model based on a large pre-trained language model and adversarial training. It enhances the learning of semantic and syntactic features using BioBERT pre-trained weights, which are built on large-scale domain corpora, and applies adversarial perturbations to the embedding layer to improve the robustness of the model. Experimental results showed that the proposed model achieved the highest F1 scores (83.93% and 90.31%) on the two corpora with large sample sizes, AIMed and BioInfer, respectively, compared with previous methods. It also achieved comparable performance on three corpora with small sample sizes: HPRD50, IEPA, and LLL.
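Adversarial perturbation of an embedding layer in this line of work is often an FGM-style gradient step bounded in norm; the following is a sketch under that assumption (the abstract does not name the exact method), with a trivial mean-pooled linear classifier standing in for BioBERT:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))   # toy embedding table (BioBERT's is far larger)
w = rng.normal(size=4)           # toy downstream classifier weights

def loss_and_grad(emb, token_ids, label):
    """Logistic loss of a mean-pooled linear classifier, plus its
    gradient with respect to the embedding rows that were used."""
    x = emb[token_ids].mean(axis=0)
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    loss = -np.log(p if label == 1 else 1.0 - p)
    g_x = (p - label) * w                    # dLoss/dx
    g_emb = np.zeros_like(emb)
    g_emb[token_ids] += g_x / len(token_ids)
    return loss, g_emb

tokens, label = [1, 3, 7], 1
loss, g = loss_and_grad(emb, tokens, label)

# FGM step: perturb embeddings along the loss gradient, norm-bounded by eps.
eps = 0.5
delta = eps * g / (np.linalg.norm(g) + 1e-12)
adv_loss, _ = loss_and_grad(emb + delta, tokens, label)
print(adv_loss > loss)   # the perturbed example is harder; prints True
```

Training then minimizes the loss on both the clean and the perturbed inputs, which is what makes the model robust to small embedding-space perturbations.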

The effect of computer based cognitive rehabilitation program on the improvement of generative naming in the elderly with mild dementia: preliminary study (한국형 전산화 인지재활프로그램이 초기 치매노인의 생성 이름대기 수행에 미치는 효과에 관한 예비연구)

  • Byeon, Haewon
    • Journal of the Korea Convergence Society / v.10 no.9 / pp.167-172 /
    • 2019
  • The purpose of this study was to investigate the effect of computer based cognitive rehabilitation program on the generative naming. Twenty - one patients were assigned to the CoTras program and eight were treated with traditional face - to - face language rehabilitation such as paper and table activities. The experimental group and the control group performed sequential language recall memory training, association memory recall training, language categorization memory training, and language integrated memory training for 12 weeks. The Welch's robust ANCOVA showed significant differences in mean fluency and MMSE-K changes (p<0.05). On the other hand, phonemic fluency increased significantly after 12 weeks of treatment compared to baseline in both experimental and control groups, but there was no statistically significant difference between treatment groups. The results of this study suggest that the computer based cognitive rehabilitation program may be more effective in improving the semantic fluency than the conventional cognitive-linguistic rehabilitation.