• Title/Summary/Keyword: cross-language transfer


Burmese Sentiment Analysis Based on Transfer Learning

  • Mao, Cunli;Man, Zhibo;Yu, Zhengtao;Wu, Xia;Liang, Haoyuan
    • Journal of Information Processing Systems
    • /
    • v.18 no.4
    • /
    • pp.535-548
    • /
    • 2022
  • Using a resource-rich language to classify sentiments in a low-resource language is a popular subject of research in natural language processing. Burmese is such a low-resource language. Given the scarcity of labeled training data for Burmese sentiment classification, this study proposes a transfer learning method that transfers sentiment features from English. The method generates a cross-language word-embedding representation of Burmese vocabulary, mapping Burmese text into the semantic space of English text. An English sentiment classifier is then pre-trained using a convolutional neural network and an attention mechanism; the parameters of its network layers capture cross-language sentiment features and are transferred to the Burmese sentiment classifier, which is finally fine-tuned on the labeled Burmese data. Experimental results show that the proposed method significantly improves Burmese sentiment classification compared with a model trained only on a Burmese corpus.
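The cross-language embedding step in this abstract can be sketched as a linear projection learned from aligned vectors (the orthogonal Procrustes solution is one standard choice; the vectors below are synthetic toys, not real embeddings):

```python
# Sketch of mapping source-language word vectors into the English
# semantic space with a linear projection. The "Burmese" vectors here
# are just rotated copies of the "English" ones, so the projection can
# recover the mapping exactly.
import numpy as np

def learn_projection(src_vecs, tgt_vecs):
    """Orthogonal Procrustes: find orthogonal W minimizing
    ||src_vecs @ W - tgt_vecs||_F."""
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt

rng = np.random.default_rng(0)
true_rotation = np.linalg.qr(rng.normal(size=(4, 4)))[0]  # hidden mapping
english = rng.normal(size=(10, 4))                        # "English" vectors
burmese = english @ true_rotation.T                       # rotated copies

W = learn_projection(burmese, english)
mapped = burmese @ W
print(np.allclose(mapped, english, atol=1e-8))  # True: spaces are aligned
```

Once source-language text lives in the target semantic space, a classifier pre-trained on the target language can be applied and then fine-tuned, as the abstract describes.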

Study on Zero-shot based Quality Estimation (Zero-Shot 기반 기계번역 품질 예측 연구)

  • Eo, Sugyeong;Park, Chanjun;Seo, Jaehyung;Moon, Hyeonseok;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.11
    • /
    • pp.35-43
    • /
    • 2021
  • Recently, there has been growing interest in zero-shot cross-lingual transfer, which leverages cross-lingual language models (CLLMs) to perform downstream tasks in languages they were not trained on. In this paper, we point out the limitations of the data-centric aspect of quality estimation (QE) and perform zero-shot cross-lingual transfer even in environments where constructing QE data is difficult. Few studies have addressed zero-shot QE; here, after fine-tuning on an English-German QE dataset, we perform zero-shot transfer leveraging CLLMs. We conduct a comparative analysis of various CLLMs, perform zero-shot transfer on language pairs with resources of different sizes, and analyze the results in terms of each language's linguistic characteristics. Multilingual BART and multilingual BERT achieved the highest performance, and QE could be performed even for language pairs on which no QE training had been done at all.
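The zero-shot setup this abstract describes can be illustrated with a toy: an estimator is fit on features from one language pair and applied unchanged to another, which works because a multilingual encoder places both pairs in a shared feature space (the features and quality scores below are synthetic, not real QE data):

```python
# Fit a quality estimator on a supervised pair (en-de) and evaluate it
# zero-shot on a pair it never saw (en-ko). Both pairs share the same
# underlying quality signal in the feature space.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.8, -0.5, 0.3])  # shared signal in the space

def make_pair_data(n):
    X = rng.normal(size=(n, 3))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

X_ende, y_ende = make_pair_data(200)  # supervised pair
X_enko, y_enko = make_pair_data(50)   # zero-shot pair: never trained on

w_hat, *_ = np.linalg.lstsq(X_ende, y_ende, rcond=None)
zero_shot_err = np.mean((X_enko @ w_hat - y_enko) ** 2)
print(zero_shot_err < 0.01)  # True: the estimator transfers
```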

An Enhancement of Japanese Acoustic Model using Korean Speech Database (한국어 음성데이터를 이용한 일본어 음향모델 성능 개선)

  • Lee, Minkyu;Kim, Sanghun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.5
    • /
    • pp.438-445
    • /
    • 2013
  • In this paper, we propose an enhancement of a Japanese acoustic model trained with a Korean speech database, using several combination strategies. We describe strategies for training with two or more languages combined: Cross-Language Transfer, Cross-Language Adaptation, and the Data Pooling Approach. We simulated these strategies and identified a suitable method for our current Japanese database. Existing combination strategies have generally been verified in under-resourced language environments, but we confirmed that they are inappropriate when the speech database is not truly under-resourced. In the Data Pooling training process, we built the tied list from the target (object) language only. As a result, the acoustic model achieved an ERR of 12.8 %.
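The tied-list restriction in the Data Pooling Approach can be sketched in a few lines (the phone inventories and utterances below are invented for illustration):

```python
# Pool training utterances from both languages, but restrict the tied
# list (the phone set kept in the model) to the target language only.
japanese_phones = {"a", "i", "u", "e", "o", "k", "s", "t", "n"}
korean_phones = {"a", "i", "u", "e", "o", "k", "s", "t", "n", "pp", "tt"}

pooled_corpus = [("ja", ["k", "a", "n"]), ("ko", ["tt", "a", "k"])]

# Pool all training material, regardless of language ...
training_phones = {p for _, phones in pooled_corpus for p in phones}
# ... but build the tied list from the target (object) language only.
tied_list = sorted(training_phones & japanese_phones)
print(tied_list)  # ['a', 'k', 'n'] -- Korean-only 'tt' is excluded
```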

Llama2 Cross-lingual Korean with instruction and translation datasets (지시문 및 번역 데이터셋을 활용한 Llama2 Cross-lingual 한국어 확장)

  • Gyu-sik Jang;Seung-Hoon Na;Joon-Ho Lim;Tae-Hyeong Kim;Hwi-Jung Ryu;Du-Seong Chang
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.627-632
    • /
    • 2023
  • Large language models, built on massive compute and vast amounts of data, deliver outstanding performance and have drawn much attention in natural language processing. Although these models can process text in many languages and domains, Korean still accounts for only a tiny fraction of their total training data. As a result, large language models understand and handle Korean less well than major languages such as English. Focusing on this problem, this paper proposes a method for improving a large language model's Korean capability. In particular, we explore cross-lingual transfer learning to carry the model's knowledge of other languages over to Korean. Through this, the model substantially improved its Korean capability while minimizing degradation on the other languages. In experiments, the model with this technique more than doubled the baseline's performance on the NSMC dataset, with particularly large gains in understanding complex Korean structure and context. This work is expected to contribute to better Korean adaptation of large language models.


Research on Features for Effective Cross-Lingual Transfer in Korean (효과적인 한국어 교차언어 전송을 위한 특성 연구)

  • Taejun Yun;Taeuk Kim
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.119-124
    • /
    • 2023
  • Cross-lingual transfer, in which a model trained on a resource-rich language is transferred to a resource-poor one, is a common and efficient way to build a model for a specific language from a multilingual model. Because transfer performance varies greatly with the choice of training-data language relative to the language being served, deciding which language to train on is crucial for efficient language services. In this study, we use regression analysis to explore which features identify a good source language for cross-lingual transfer. We also compare existing methods for finding transfer-friendly source languages, derive an improved approach, and, for Korean, identify a method that generally finds a better source training language.
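The regression analysis described here can be sketched as fitting transfer scores against candidate-language features and reading off the dominant coefficient (the feature names and all numbers below are invented for illustration, not taken from the paper):

```python
# Regress transfer performance on language features to see which
# feature best predicts a good source language.
import numpy as np

features = ["syntactic_similarity", "lexical_overlap", "pretrain_size"]
# rows: candidate source languages; columns: the features above
X = np.array([
    [0.9, 0.4, 0.7],
    [0.2, 0.1, 1.0],
    [0.5, 0.3, 0.6],
    [0.8, 0.5, 0.4],
    [0.3, 0.2, 0.9],
])
y = np.array([0.82, 0.61, 0.70, 0.79, 0.64])  # transfer scores to the target

# Fit with an intercept and pick the feature with the largest weight.
A = np.hstack([X, np.ones((5, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
best = features[int(np.argmax(np.abs(coef[:3])))]
print(best)  # syntactic_similarity
```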


Culture in language: comparing cultures through words in South Africa

  • Montevecchi, Michela
    • Cross-Cultural Studies
    • /
    • v.24
    • /
    • pp.120-131
    • /
    • 2011
  • South Africa is a multiracial country where different cultures and languages coexist. Culture can be conveyed through language. Language conditioning is also social conditioning, and through words we make sense of our own and others' experience. In this paper I investigate the meaning of two culturally significant words: (English) peace and (African) ubuntu. Data findings will show how L2 speakers of English, when asked to define peace, promptly operate a process of transfer of the meaning from their mother-tongue Xhosa equivalent - uxolo - to its English equivalent. Ubuntu, an African word which encompasses traditional African values, has no counterpart in English. I will also argue how, in the ongoing process of globalisation, English is playing a predominant role in promoting cultural homogenization.

Cross-language Transfer of Phonological Awareness and Its Relations with Reading and Writing in Korean and English (음운인식의 언어 간 전이와 한글 및 영어의 읽기 쓰기와의 관계)

  • Kim, Sangmi;Cho, Jeung-Ryeul;Kim, Ji-Youn
    • Korean Journal of Cognitive Science
    • /
    • v.26 no.2
    • /
    • pp.125-146
    • /
    • 2015
  • This study investigated the contribution of Korean phonological awareness to English phonological awareness, and the relations of phonological awareness with reading and writing in Korean Hangul and English, among Korean 5th graders. With age and vocabulary knowledge statistically controlled, Korean phonological awareness transferred to English phonological awareness: syllable and phoneme awareness in Korean transferred to syllable awareness in English, and Korean phoneme awareness transferred to English phoneme awareness. In addition, English phoneme awareness independently explained significant variance in reading and writing in both Korean and English after controlling for age and vocabulary, while syllable awareness in Korean and English explained Hangul reading and writing, respectively. The results suggest cross-language transfer of phonological awareness, a metalinguistic skill. Phoneme awareness is important for reading and writing in English, whereas both syllable and phoneme awareness are important for literacy in Korean.
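The hierarchical-regression logic of "explaining variance after controlling for age and vocabulary" can be illustrated as an incremental R-squared computation (all data below are synthetic, not the study's):

```python
# Compare R^2 with control variables only against R^2 after adding
# phoneme awareness; the difference is the incremental variance explained.
import numpy as np

rng = np.random.default_rng(3)
n = 120
age = rng.normal(size=n)
vocab = rng.normal(size=n)
phoneme = rng.normal(size=n)
reading = 0.3 * age + 0.3 * vocab + 0.5 * phoneme + 0.2 * rng.normal(size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_controls = r_squared([age, vocab], reading)
r2_full = r_squared([age, vocab, phoneme], reading)
print(r2_full - r2_controls > 0.2)  # True: phoneme awareness adds variance
```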

Korean Sentence Comprehension of Korean/English Bilingual Children (한국어/영어 이중언어사용 아동의 한국어 문장이해: 조사, 의미, 어순 단서의 활용을 중심으로)

  • Hwang, Min-A
    • Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.241-254
    • /
    • 2003
  • The purpose of the present study was to investigate the sentence comprehension strategies used by Korean/English bilingual children when listening to sentences in their first language, Korean. The competition-model framework was employed to analyze the influence of the second language, English, on comprehension of Korean sentences. The participants included 10 bilingual children (ages 7;4-13;0) and 20 Korean-speaking monolingual children (ages 5;7-6;10) with levels of Korean development similar to the bilingual children. In an act-out procedure, the children were asked to determine the agent in sentences composed of two nouns and a verb under varying conditions of three cues (case marker, animacy, and word order). The results revealed that both groups used the case-marker cue as the strongest of the three. The bilingual children relied on case-marker cues even more than the monolingual children did, but used animacy cues significantly less; there were no significant differences between the groups in the use of word-order cues. The bilingual children appeared less effective in utilizing animacy cues in Korean sentence comprehension due to backward transfer from English, where the cue strength of animacy is very weak. The influence of the second language on first-language development in bilingual children is discussed.
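The cue competition at the heart of this framework can be rendered as weighted voting: each cue points to a candidate agent with a language-specific strength, and the candidate with the most support wins (the strengths below are invented for illustration):

```python
# Toy competition-model decision: cues vote for candidate agents with
# language-specific strengths; the strongest coalition wins.
cue_strength = {"case_marker": 0.7, "animacy": 0.2, "word_order": 0.1}

def choose_agent(cues):
    """cues maps cue name -> the noun that cue points to ('N1' or 'N2')."""
    votes = {"N1": 0.0, "N2": 0.0}
    for cue, noun in cues.items():
        votes[noun] += cue_strength[cue]
    return max(votes, key=votes.get)

# The case marker points to N2 while animacy and word order favor N1:
print(choose_agent({"case_marker": "N2", "animacy": "N1", "word_order": "N1"}))
# -> N2: the case-marker cue outweighs the other two combined
```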


Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.3
    • /
    • pp.77-89
    • /
    • 2022
  • Much previous research has shown that language models pretrained on a large corpus improve performance on various natural language processing tasks. However, there is a limit to how large a training corpus can be built in a language where resources are scarce. Using the Cross-lingual Post-Training (XPT) method, we analyze the method's efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of an English (high-resource) pretrained language model and uses an adaptation layer to learn the relationship between the two languages. We confirmed that, with only a small amount of target-language data, XPT outperforms a language model pretrained on the target language for relation extraction. In addition, we analyze the characteristics of the Korean monolingual and multilingual language models released by domestic and foreign researchers and companies.
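The parameter-reuse idea in XPT can be sketched as a forward pass in which the pretrained layer stays frozen and only a small adaptation layer plus new target-language embeddings are trainable (sizes and values below are illustrative, not the paper's architecture):

```python
# Frozen pretrained layer + trainable adapter and target embedding.
import numpy as np

rng = np.random.default_rng(2)
W_pre = rng.normal(size=(4, 4)) / 2  # reused English layer: frozen
A = np.eye(4)                        # adaptation layer: trainable, starts near identity
emb_ko = rng.normal(size=(4,))       # new target-language embedding: trainable

def forward(x):
    # embedding -> adaptation layer -> frozen pretrained layer
    return np.tanh(x @ A) @ W_pre

hidden = forward(emb_ko)
trainable = A.size + emb_ko.size     # only 20 parameters are updated
print(hidden.shape, trainable)       # (4,) 20
```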

Static Analysis of Curved Box Girder Bridge with Variable Cross Section by Transfer Matrix Method (전달행렬법에 의한 변단면 곡선 상자형 거더교의 정적해석)

  • Kim, Yong-Hee;Lee, Yoon-Young
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.7 no.4
    • /
    • pp.109-120
    • /
    • 2003
  • The state of the art in the design of curved box girder bridges with variable cross sections has advanced in various areas, and several analytical techniques for the behavior of such bridges are now available to engineers. The transfer matrix method is widely used for structural analysis because of its sound theoretical background and broad applicability, and it lends itself to numerical solution by a computer program coded in Fortran with only a few elements; its results compare well with the finite element method. This paper therefore proposes a static analysis method for curved box girder bridges with variable cross sections using the transfer matrix method based on pure-torsional theory, and derives the optimal span ratio and variable cross-section ratio for a three-span continuous curved box girder bridge.
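The mechanics of the transfer matrix method can be sketched for a straight beam with a variable cross section: the span is split into segments, each with its own EI, and the state vector [deflection, slope, moment, shear] is carried across the span by chaining per-segment field matrices. (The paper's curved-girder formulation additionally carries torsional terms; this sketch, with invented numbers, shows only the matrix-chaining mechanics.)

```python
# Chain per-segment field matrices of an Euler-Bernoulli beam to form
# the global transfer matrix of a tapered (variable-EI) span.
import numpy as np

def field_matrix(L, EI):
    """Field transfer matrix of a uniform beam segment of length L."""
    return np.array([
        [1.0, L, L**2 / (2 * EI), L**3 / (6 * EI)],
        [0.0, 1.0, L / EI, L**2 / (2 * EI)],
        [0.0, 0.0, 1.0, L],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Variable cross section: EI tapers along ten equal 0.5 m segments.
segments = [(0.5, EI) for EI in np.linspace(2.0e4, 1.0e4, 10)]
T = np.eye(4)
for L, EI in segments:
    T = field_matrix(L, EI) @ T  # accumulate left-to-right

state_left = np.array([0.0, 0.0, 100.0, -40.0])  # [w, theta, M, V] at x = 0
state_right = T @ state_left                     # state at the far end
print(state_right.shape)  # (4,)
```

Because each field matrix is unit upper triangular, the accumulated matrix stays unit upper triangular, which is one easy sanity check on the chaining.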