• Title/Summary/Keyword: language performance

The Effects of Three Stimulus Modes on Receptive Language Performance and Expressive Language Performance in Aphasics (세 가지 자극 양식이 실어증자의 언어이해력과 언어표현력에 미치는 영향)

  • Lee, Moo-Kyoung;Yoo, Jae-Youn;Lee, Ok-bun;Jeong, Ok-Ran
    • Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.263-272
    • /
    • 2000
  • The purpose of this study was to compare receptive and expressive language performance in 13 patients with aphasia using three stimulus presentation modes: stimulus mode I (picture), stimulus mode II (written word), and stimulus mode III (question using verbal explanation). The stimuli consisted of 10 words: 5 functional words and 5 non-functional words. The 13 subjects were divided into 2 aphasic types: 5 Broca's aphasics and 8 anomic aphasics. The results were as follows. First, the three stimulus modes did not affect the receptive language performance of the aphasic subjects. Second, the three stimulus modes did affect their expressive language performance; in particular, stimulus mode II (written word) was effective for naming. Third, functional words with high frequency were produced better than non-functional words with low frequency in expressive language performance, but not in receptive language performance. Finally, the interaction between the three stimulus modes and word type (functional vs. non-functional) affected expressive language performance only, not receptive language performance. In particular, presenting functional words as written stimuli produced the best expressive language performance.

The effects of home literacy environment during the preschool period on first grader's language performance and school adjustment (취학 전후 가정문해환경이 초등학교 1학년 아동의 언어수행능력 및 학교적응에 미치는 영향)

  • Kim, Myoung Soon;Kim, Ji Yeon;Park, Young Lim;Lee, Young Shin;Shin, Bowon
    • Korean Journal of Human Ecology
    • /
    • v.23 no.6
    • /
    • pp.969-980
    • /
    • 2014
  • This paper reports on a study that examined the longitudinal and concurrent effects of the home literacy environment (HLE) on first-grade language performance, and the effect of language performance on school adjustment. The subjects were 469 first graders from 6 elementary schools. Parents' and teachers' reports were used to investigate the subjects' language performance, school adjustment, and HLE before and after elementary school entry. The findings show an association between the HLE during the preschool period and the HLE in first grade, and that the HLE in first grade positively affects children's language performance. The children's language performance, in turn, had a positive influence on their school adjustment. It can therefore be concluded that the HLE during the preschool period is a significant factor whose influence persists, affecting children's language performance and school adjustment.

Recent R&D Trends for Pretrained Language Model (딥러닝 사전학습 언어모델 기술 동향)

  • Lim, J.H.;Kim, H.K.;Kim, Y.K.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.3
    • /
    • pp.9-19
    • /
    • 2020
  • Recently, the technique of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has come into wide use in language processing. Pretrained language models show higher performance and better generalization than existing methods. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivations, model, learning methods, and results of the BERT language model, which had a significant influence on subsequent studies. We then introduce the results of language model studies after BERT, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce the KorBERT pretrained language model, which shows satisfactory performance on Korean. In addition, we introduce techniques for applying pretrained language models to Korean, an agglutinative language built from combinations of content and functional morphemes, unlike English, an inflectional language whose word endings change with use.

An assessment model for proficiency oriented English instruction in college English (능숙도 중심의 대학 교양영어 교육을 위한 평가방안 연구)

  • Lee, Jong-Bok
    • English Language & Literature Teaching
    • /
    • v.8 no.2
    • /
    • pp.177-196
    • /
    • 2003
  • The purpose of this study is to help teachers and program developers build comprehensive and authentic assessment models, with appropriate ways of using various assessment tools, for college English instruction and assessment. Traditional discrete-point tests based on grammar and vocabulary cannot measure the authentic ability to use language in meaningful, real-world contexts. The current trend in language assessment is a shift toward performance assessment. Increased use of performance assessments that involve language students in selecting and reflecting on their learning means that language teachers will have a wider range of evidence on which to judge whether students are becoming purposeful and able to communicate as English users. Language programs focused on performance assessment are also likely to instil in students authentic communication skills for a global world and enable them to evaluate what they learn from their English classes. In this study, the author investigated the theoretical background, the need for change, and several types of performance assessment.

Differential Effect for Neural Activation Processes according to the Proficiency Level of Code Switching: An ERP Study (이중언어환경에서의 언어간 부호전환 수준에 따른 차별적 신경활성화 과정: ERP연구)

  • Kim, Choong-Myung
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.3-10
    • /
    • 2010
  • The present study aims to investigate neural activation according to the level of code switching in English-proficient bilinguals, and to find the relationship between language-switching performance and proficiency level using ERPs (event-related potentials). First, when comparing high-proficiency (HP) with low-proficiency (LP) bilingual performance in a native-language environment, the N2 activation level was higher in the HP group than in the LP group, but only under two conditions: 1) the language-switching (between-language) condition, known to index attention to code switching, and 2) the inhibition of the current language for L1. An N400 effect appeared in both groups only in the non-switching (within-language) condition, suggesting that both groups completed the semantic acceptability task well in their native-language environment without the burden of language switching, irrespective of high or low performance. N400 latencies were only about 100 ms earlier in the HP group than in the LP group, a difference that can be interpreted as facilitation of the given task. These results suggest that, in the L1-to-L2 switching condition, the HP group showed differential activation of the inhibitory system for L1, in contrast to the inactivation of the inhibitory system in the LP group. Despite the absence of an N400 effect in either group on the given task, the differential peak latencies were attributed to differences in the efficiency of semantic processing.

Validity of Language-Based Algorithms Trained on Supervisor Feedback Language for Predicting Interpersonal Fairness in Performance Feedback

  • Jisoo Ock;Joyce S. Pang
    • Asia pacific journal of information systems
    • /
    • v.33 no.4
    • /
    • pp.1118-1134
    • /
    • 2023
  • Previous research has shown that employees tend to react more positively to corrective feedback from supervisors to the extent that they perceive they were treated with empathy, respect, and concern, that is, with interpersonal fairness, in receiving the feedback. To facilitate effective supervisory feedback and coaching, it would therefore be useful for organizations to monitor the contents of feedback exchanges between supervisors and employees to ensure that supervisors deliver performance feedback in language that is likely to be perceived as interpersonally fair. Computer-aided text analysis holds potential as a tool that organizations can use to efficiently monitor the quality of the feedback messages supervisors provide to their employees. In the current study, we applied computer-aided text analysis (closed-vocabulary text analysis) and machine learning to examine the validity of language-based algorithms, trained on supervisor language in performance feedback situations, for predicting human ratings of feedback interpersonal fairness. Results showed that the language-based algorithms predicted feedback interpersonal fairness with a reasonable level of accuracy. Our findings provide supportive evidence for the promise of using employee language data for managing (and improving) performance management in organizations.
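
  • The closed-vocabulary text analysis used in this study can be sketched as counting, for each psychological category, the share of a text's tokens that fall in a fixed word list. The category names and word lists below are hypothetical toy examples, not the dictionaries or the trained model from the paper:

```python
# Toy closed-vocabulary (LIWC-style) feature extraction: for each
# category, the fraction of tokens in the text that belong to the
# category's fixed word list. Word lists here are illustrative only.
import re

CATEGORIES = {
    "empathy": {"understand", "appreciate", "feel"},
    "respect": {"please", "thank", "value"},
    "negemo": {"fail", "poor", "wrong"},
}

def category_rates(text):
    """Return each category's share of total tokens in `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1  # avoid division by zero on empty text
    return {cat: sum(t in words for t in tokens) / total
            for cat, words in CATEGORIES.items()}

feedback = "I appreciate your effort, but the report was poor and wrong in places."
rates = category_rates(feedback)
```

Such per-category rates would then serve as input features to a supervised model trained against human fairness ratings.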

Design Strategies for Web-Based Self-Directed Cooperative Language Learning Communities (상호자율언어학습을 위한 웹기반 학습공동체의 설계전략 연구)

  • Park, Jung-Hwan;Lee, Kun-In;Zhao, Hai-Lan
    • English Language & Literature Teaching
    • /
    • v.10 no.1
    • /
    • pp.127-152
    • /
    • 2004
  • The purpose of this study is to elaborate design strategies for a Web-based, self-directed cooperative distance language learning community. Research was done on the theoretical foundations of self-directed cooperative language learning and of Web-based learning communities, and the components of a Web-based community for a self-directed cooperative language learning system were investigated. As a result, design strategies for Web-based communities are suggested: there are performance and supporting environments (synchronous/asynchronous) for self-directed cooperative language learning, with cultural experience and communication factors in the performance field. Furthermore, matching communicators, finding and offering information, language learning content, and other supporting agents are important in the supporting environment.

KorPatELECTRA: A Pre-trained Language Model for Korean Patent Literature to Improve Performance in Natural Language Processing (Korean Patent ELECTRA)

  • Jang, Ji-Mo;Min, Jae-Ok;Noh, Han-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.2
    • /
    • pp.15-23
    • /
    • 2022
  • In the patent field, NLP (Natural Language Processing) is a challenging task due to the linguistic specificity of patent literature, so there is an urgent need for a language model optimized for Korean patent literature. Recently, there have been continuous attempts in NLP to build pre-trained language models for specific domains to improve performance on the related tasks. Among these models, ELECTRA, released by Google after BERT, uses a new method called RTD (Replaced Token Detection) to increase training efficiency. This paper proposes KorPatELECTRA, pre-trained on a large amount of Korean patent literature. Optimal pre-training was conducted by preprocessing the training corpus according to the characteristics of patent literature and applying a patent-specific vocabulary and tokenizer. To confirm its performance, KorPatELECTRA was tested on NER (Named Entity Recognition), MRC (Machine Reading Comprehension), and patent classification tasks using actual patent data, and it achieved the best performance on all three tasks compared with general-purpose language models.
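
  • As a rough illustration of the RTD objective mentioned above (a sketch of the idea, not the paper's or Google's implementation): a small generator proposes replacements at some positions, and the discriminator is trained to predict, for every token, whether it was replaced. The snippet below only builds the per-token 0/1 targets; the generator and discriminator networks are omitted:

```python
# RTD (Replaced Token Detection) target construction: label each
# position 1 if the corrupted token differs from the original, else 0.
# The discriminator is trained to predict these labels for all tokens.
def rtd_targets(original_tokens, corrupted_tokens):
    """Per-token replacement labels for the RTD discriminator."""
    assert len(original_tokens) == len(corrupted_tokens)
    return [int(o != c) for o, c in zip(original_tokens, corrupted_tokens)]

orig = ["the", "patent", "claims", "a", "novel", "battery"]
corr = ["the", "patent", "covers", "a", "novel", "device"]
labels = rtd_targets(orig, corr)  # [0, 0, 1, 0, 0, 1]
```

Because every token (not just masked ones) contributes a training signal, RTD is more sample-efficient than masked language modeling.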

Improved Statistical Language Model for Context-sensitive Spelling Error Candidates (문맥의존 철자오류 후보 생성을 위한 통계적 언어모형 개선)

  • Lee, Jung-Hun;Kim, Minho;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.371-381
    • /
    • 2017
  • The performance of statistical context-sensitive spelling error correction depends on the quality and quantity of the data behind the statistical language model. In general, the quality of a statistical language model is proportional to the size of its data; however, as the amount of data increases, processing becomes slower and storage requirements grow. We propose an improved statistical language model to solve this problem, together with an effective method for generating spelling-error candidates based on the new model. The proposed model and the correction method based on it improve both the accuracy and the processing speed of spelling error correction.
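
  • As a generic sketch of the kind of statistical language model involved (the paper's improved model is not detailed in the abstract), a bigram model with add-one smoothing can rank context-sensitive spelling candidates; the corpus and confusion set below are toy examples:

```python
# Toy bigram language model with add-one (Laplace) smoothing, used to
# rank context-sensitive spelling-error candidates by sentence score.
from collections import Counter

corpus = "i went to the bank . i sat on the bank of the river .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size for smoothing

def bigram_prob(w1, w2):
    """P(w2 | w1) with add-one smoothing."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

def sentence_score(sentence):
    """Product of smoothed bigram probabilities over the sentence."""
    words = sentence.split()
    score = 1.0
    for w1, w2 in zip(words, words[1:]):
        score *= bigram_prob(w1, w2)
    return score

# Rank candidate corrections for an ambiguous slot by LM score.
candidates = ["i sat on the bank", "i sat on the bang"]
best = max(candidates, key=sentence_score)
```

Because "bank" follows "the" in the corpus while "bang" never does, the model prefers the first candidate; a real system would use far larger corpora, which is exactly the speed/storage trade-off the paper addresses.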

Evaluating the Impact of Training Conditions on the Performance of GPT-2-Small Based Korean-English Bilingual Models

  • Euhee Kim;Keonwoo Koo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.9
    • /
    • pp.69-77
    • /
    • 2024
  • This study evaluates the performance of second language acquisition models learning Korean and English using the GPT-2-Small model, analyzing the impact of various training conditions on performance. Four training conditions were used: monolingual learning, sequential learning, sequential-interleaved learning, and sequential-EWC learning. The model was trained on Korean datasets from the National Institute of Korean Language and English datasets from the BabyLM Challenge, with performance measured by PPL and BLiMP metrics. Results showed that monolingual learning performed best, with a PPL of 16.2 and BLiMP accuracy of 73.7%. In contrast, sequential-EWC learning had the highest PPL of 41.9 and the lowest BLiMP accuracy of 66.3% (p < 0.05). Monolingual learning proved most effective for optimizing model performance; the EWC regularization in sequential-EWC learning degraded performance by limiting weight updates, hindering new-language learning. This research improves our understanding of language modeling and contributes to work on cognitive similarity in AI language learning.
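
  • The PPL (perplexity) metric reported here is the exponentiated average negative log-probability per token, so lower is better; a minimal sketch with made-up token probabilities (not model outputs):

```python
# Perplexity from per-token model probabilities:
# PPL = exp(-(1/n) * sum(log p_i)). Lower PPL means the model assigns
# higher probability to the observed tokens.
import math

def perplexity(token_probs):
    """Perplexity of a sequence given each token's model probability."""
    n = len(token_probs)
    avg_neg_log = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log)

confident = perplexity([0.5, 0.5, 0.5])   # ≈ 2.0
uncertain = perplexity([0.1, 0.1, 0.1])   # ≈ 10.0
```

On this scale, the gap between the monolingual condition (PPL 16.2) and sequential-EWC (PPL 41.9) reflects a substantially less confident model under the EWC constraint.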