• Title/Abstract/Keywords: Text corpus

Search results: 244 items

Burmese Sentiment Analysis Based on Transfer Learning

  • Mao, Cunli; Man, Zhibo; Yu, Zhengtao; Wu, Xia; Liang, Haoyuan
    • Journal of Information Processing Systems, Vol. 18, No. 4, pp. 535-548, 2022
  • Using a resource-rich language to classify sentiments in a resource-poor language is a popular research topic in natural language processing. Burmese is a low-resource language. Given the scarcity of labeled training data for sentiment classification in Burmese, this study proposes a transfer-learning method for sentiment analysis that transfers sentiment features from English. The method builds a cross-lingual word-embedding representation of the Burmese vocabulary to map Burmese text into the semantic space of English text. A sentiment classification model for English is then pre-trained using a convolutional neural network with an attention mechanism. The parameters of its network layers, which capture cross-lingual sentiment features, are transferred to the Burmese sentiment classification model, which is finally fine-tuned on the labeled Burmese data. Experimental results show that the proposed method significantly improves Burmese sentiment classification compared with a model trained only on a Burmese corpus.
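The key step above is mapping Burmese word vectors into the English semantic space. The sketch below illustrates one common way to do this (a least-squares linear mapping learned from a small bilingual seed lexicon); the dimensionality, the random stand-in vectors, and the lexicon size are assumptions for illustration, not the paper's setup.

```python
# Illustrative sketch (not the authors' code): map Burmese word vectors into the
# English embedding space with a least-squares linear transform learned from a
# small bilingual seed lexicon. Shapes and the lexicon below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dim = 300                      # embedding dimensionality (assumed)
n_pairs = 2000                 # size of the Burmese-English seed lexicon (assumed)

X_burmese = rng.normal(size=(n_pairs, dim))   # stand-in for Burmese word vectors
Y_english = rng.normal(size=(n_pairs, dim))   # stand-in for their English translations

# Solve min_W ||X W - Y||_F^2; W then projects any Burmese vector into English space.
W, *_ = np.linalg.lstsq(X_burmese, Y_english, rcond=None)

def to_english_space(burmese_vec: np.ndarray) -> np.ndarray:
    """Project a Burmese word vector into the shared (English) semantic space."""
    return burmese_vec @ W

# A Burmese sentence can then be encoded with projected vectors and fed to the
# English-pretrained CNN+attention sentiment classifier before fine-tuning.
mapped = to_english_space(X_burmese[0])
print(mapped.shape)            # (300,)
```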

K 패션에 대한 글로벌 미디어 보도 경향 분석 -다이내믹 토픽 모델링(Dynamic Topic Modeling)의 적용- (Analysis of Global Media Reporting Trends for K-fashion -Applying Dynamic Topic Modeling-)

  • 안효선; 김지영
    • 한국의류학회지, Vol. 46, No. 6, pp. 1004-1022, 2022
  • This study investigates K-fashion's external image by examining trends in global media reporting. It applies Dynamic Topic Modeling (DTM), which captures the evolution of topics in a sequentially organized corpus of documents; the analysis consists of text preprocessing, determination of the number of topics, and a time-series analysis of the probability distribution of words within topics. The data set comprised 551 online media articles on 'Korean fashion' or 'K-fashion' published on Google News between 2010 and 2021. The analysis identifies seven topics: 'brand look and style,' 'lifestyle,' 'traditional style,' 'Seoul Fashion Week (SFW) event,' 'model size,' 'K-pop,' and 'fashion market,' as well as annual topic proportion trends. It also explores annual word changes within each topic and identifies increasing and decreasing word patterns. In most topics, the probability distribution of the word 'brand' is confirmed to be increasing, while 'digital,' 'platform,' and 'virtual' newly appear in the 'SFW event' topic. Moreover, the study traces the transition of each K-fashion topic over the past 12 years, along with related factors such as Hallyu content, traditional culture, government support, and digital technology innovation.
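For readers who want to try dynamic topic modeling on a similar corpus, a minimal sketch using gensim's LdaSeqModel follows; the toy documents, the number of topics, and the time slices are placeholders, not the study's 551-article data set.

```python
# A minimal Dynamic Topic Modeling sketch with gensim's LdaSeqModel, assuming
# articles have been tokenized and grouped into yearly time slices.
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

docs = [["korean", "fashion", "brand", "style"],
        ["seoul", "fashion", "week", "digital", "platform"],
        ["kpop", "idol", "fashion", "brand"],
        ["traditional", "hanbok", "style", "culture"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# time_slice gives the number of documents per period (e.g., per year), in order.
time_slice = [2, 2]

dtm = LdaSeqModel(corpus=corpus, id2word=dictionary,
                  time_slice=time_slice, num_topics=2)

# Inspect how a topic's word distribution evolves across time slices.
for t in range(len(time_slice)):
    print(f"slice {t}:", dtm.print_topics(time=t))
```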

분야연상어의 수집과 추출 알고리즘 (Collection and Extraction Algorithm of Field-Associated Terms)

  • 이상곤; 이완권
    • 정보처리학회논문지B, Vol. 10B, No. 3, pp. 347-358, 2003
  • Humans can accurately recognize the field of a document, such as politics or sports, just by seeing a few representative words, without reading the entire document. Building field-associated terms is therefore an important research task for accurately determining the field of a document from a small number of word occurrences in a partial text rather than the whole document. A field hierarchy is defined in advance by humans, and documents corresponding to each field are collected from the Internet or from books. This paper proposes a method for collecting field-associated terms that accurately indicate the field of the collected documents. Considering the point at which a document's field can be determined, we discuss the levels and stability ranks of field-associated terms. We propose a method that automatically determines the level of each field-associated term candidate from training data and collects single field-associated terms using the level, stability rank, concentration ratio, and frequency information provided by the computer.
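A rough illustration of scoring candidate terms by concentration and frequency is sketched below; the toy corpus, thresholds, and scoring are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch (assumed formulation, not the paper's exact algorithm):
# rank candidate field-associated terms by how strongly their occurrences
# concentrate in a single field of a labeled document collection.
from collections import Counter, defaultdict

# field label -> tokenized documents (toy stand-in for the collected corpus)
corpus = {
    "politics": [["election", "party", "vote"], ["party", "policy", "vote"]],
    "sports":   [["game", "score", "team"], ["team", "coach", "score"]],
}

term_field_freq = defaultdict(Counter)   # term -> {field: frequency}
for field, docs in corpus.items():
    for doc in docs:
        for term in doc:
            term_field_freq[term][field] += 1

candidates = []
for term, freqs in term_field_freq.items():
    total = sum(freqs.values())
    best_field, best_freq = freqs.most_common(1)[0]
    concentration = best_freq / total      # 1.0 means the term occurs in one field only
    candidates.append((term, best_field, concentration, total))

# Keep terms that are both concentrated in one field and frequent enough (assumed cutoffs).
for term, field, conc, freq in sorted(candidates, key=lambda x: (-x[2], -x[3])):
    if conc >= 0.9 and freq >= 2:
        print(f"{term!r} -> {field} (concentration={conc:.2f}, freq={freq})")
```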

Assessment of performance of machine learning based similarities calculated for different English translations of Holy Quran

  • Al Ghamdi, Norah Mohammad; Khan, Muhammad Badruddin
    • International Journal of Computer Science & Network Security, Vol. 22, No. 4, pp. 111-118, 2022
  • This article applies different machine learning-based similarity techniques to religious text to identify similarities and differences among its various translations. The dataset includes 10 different English translations of the verses (Arabic: Ayah) of two Surahs (chapters), Al-Humazah and An-Nasr. Quantitative similarity values between translations of the same verse were calculated using cosine similarity and semantic similarity. The corpus went through two series of experiments: before pre-processing and after pre-processing. To assess the performance of the machine learning-based similarities, human-annotated similarities between the translations of the two Surahs were recorded to construct the ground truth. For Surah Al-Humazah, the average difference between the human-annotated similarity and the cosine similarity was 1.38 per verse per pair of translations; after pre-processing, it increased to 2.24. The average difference between the human-annotated similarity and the semantic similarity was 0.09 per verse per pair of translations; after pre-processing, it increased to 0.78. For Surah An-Nasr, the average difference between the human-annotated similarity and the cosine similarity was 1.93 per verse per pair of translations before pre-processing and further increased to 2.47 after pre-processing. The average difference between the human-annotated similarity and the semantic similarity for Surah An-Nasr was 0.93 before pre-processing and decreased to 0.87 after pre-processing. As expected, the results showed that semantic similarity is the better indicator for capturing word meaning.
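The cosine-similarity side of the comparison can be reproduced in a few lines; the sketch below uses TF-IDF vectors and scikit-learn, with placeholder verse translations rather than the ten translations used in the study.

```python
# A minimal sketch of cosine similarity between two English translations of the
# same verse, using TF-IDF vectors. The snippets below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

translation_a = "Woe to every slanderer and backbiter"
translation_b = "Woe unto every slandering traducer"

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([translation_a, translation_b])

score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"cosine similarity: {score:.3f}")

# Semantic similarity would instead compare sentence embeddings from a pretrained
# encoder, which is one reason it tracks human judgments more closely.
```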

자동 음성 분할을 위한 음향 모델링 및 에너지 기반 후처리 (Acoustic Modeling and Energy-Based Postprocessing for Automatic Speech Segmentation)

  • 박혜영; 김형순
    • 대한음성학회지:말소리, No. 43, pp. 137-150, 2002
  • Speech segmentation at the phoneme level is important for corpus-based text-to-speech synthesis. In this paper, we examine acoustic modeling methods to improve the performance of an automatic speech segmentation system based on hidden Markov models (HMMs). We compare monophone and triphone models and evaluate several model training approaches. In addition, we employ an energy-based postprocessing scheme to correct frequent boundary location errors between silence and speech sounds. Experimental results show that our system provides 71.3% and 84.2% correct boundary locations within tolerances of 10 ms and 20 ms, respectively.

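The energy-based postprocessing idea can be illustrated as below: around an HMM-predicted silence/speech boundary, the boundary is moved to the nearest short-time-energy threshold crossing. The frame sizes, threshold, and synthetic signal are assumptions, not the authors' exact rule.

```python
# Hedged sketch of an energy-based boundary correction step (assumed form).
import numpy as np

def frame_energy(signal: np.ndarray, frame_len: int = 160, hop: int = 80) -> np.ndarray:
    """Short-time log energy per frame (frame_len/hop in samples)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] for i in range(n_frames)])
    return np.log(np.sum(frames ** 2, axis=1) + 1e-10)

def refine_boundary(energy: np.ndarray, hmm_boundary: int,
                    search: int = 10, threshold: float = -5.0) -> int:
    """Search +/- `search` frames around the HMM boundary for the threshold crossing."""
    lo = max(1, hmm_boundary - search)
    hi = min(len(energy) - 1, hmm_boundary + search)
    for i in range(lo, hi):
        if (energy[i - 1] < threshold) != (energy[i] < threshold):
            return i                      # first silence/speech energy transition
    return hmm_boundary                   # no crossing found: keep the HMM boundary

rng = np.random.default_rng(0)
sig = np.concatenate([0.001 * rng.normal(size=8000),     # silence-like segment
                      0.5 * rng.normal(size=8000)])      # speech-like segment
e = frame_energy(sig)
print(refine_boundary(e, hmm_boundary=95))
```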

음절단위 bigram정보를 이용한 한국어 단어인식모델 (A Statistical Model for Korean Text Segmentation Using Syllable-Level Bigrams)

  • 신중호; 박혁로
    • 한국정보과학회 언어공학연구회: 학술대회논문집(한글 및 한국어 정보처리), 1997년도 제9회 한글 및 한국어 정보처리 학술대회, pp. 255-260, 1997
  • In Korean, the eojeol (the spacing unit) is generally used as the input unit for morphological analysis. In real-domain texts, however, ungrammatical forms such as spacing errors occur frequently. Therefore, a step that recognizes suitable units for morphological analysis must precede the analysis itself. This study proposes a word (eojeol) recognition method for morphological analysis that exploits the syllable characteristics of Korean. Because the proposed method extracts the necessary syllable and lexical information from a raw corpus rather than relying on a dictionary, it enables robust analysis of sentences containing errors and does not require the time-consuming construction and maintenance of a dictionary. For Korean word recognition, this paper proposes three probabilistic models and a recognition algorithm based on dynamic programming. Experiments applying the proposed models to the spacing error problem and Korean compound noun analysis showed recognition accuracies of about 82-85%.

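The syllable-bigram-plus-dynamic-programming idea can be sketched as follows; the toy bigram probabilities, boundary probability, and smoothing value are invented for illustration and are not the paper's trained models.

```python
# Illustrative dynamic-programming word recognition sketch using syllable bigram
# probabilities: choose the segmentation of an unspaced syllable string that
# maximizes within-word syllable-bigram scores plus a word-boundary score.
import math
from functools import lru_cache

# P(next syllable | current syllable) estimated from a raw corpus (toy values).
bigram = {("학", "교"): 0.6, ("교", "에"): 0.05, ("에", "가"): 0.4, ("가", "다"): 0.5}
boundary_prob = 0.5            # assumed probability of a word boundary between syllables
default_prob = 0.01            # smoothing for unseen bigrams

def word_score(word: str) -> float:
    """Log score of one candidate word from its internal syllable bigrams."""
    return sum(math.log(bigram.get((a, b), default_prob))
               for a, b in zip(word, word[1:]))

@lru_cache(maxsize=None)
def best_segmentation(text: str):
    """Return (log score, segmentation) maximized over all split points."""
    if not text:
        return 0.0, ()
    best = (-math.inf, ())
    for cut in range(1, len(text) + 1):
        word, rest = text[:cut], text[cut:]
        rest_score, rest_words = best_segmentation(rest)
        penalty = math.log(boundary_prob) if rest else 0.0
        total = word_score(word) + penalty + rest_score
        if total > best[0]:
            best = (total, (word,) + rest_words)
    return best

score, words = best_segmentation("학교에가다")
print(words, round(score, 2))
```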

HMM 기반의 한국어 음성합성에서 음색변환에 관한 연구 (A Study on the Voice Conversion with HMM-based Korean Speech Synthesis)

  • 김일환; 배건성
    • 대한음성학회지:말소리, Vol. 68, pp. 65-74, 2008
  • Statistical parametric speech synthesis based on hidden Markov models (HMMs) has grown in popularity over the last few years because it requires less memory and lower computational complexity than a corpus-based unit-concatenation text-to-speech (TTS) system and is therefore well suited to embedded systems. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice-characteristic conversion using an HMM-based Korean speech synthesis system. The results show that voice characteristics can be converted using only a few sentences uttered by a target speaker: synthetic speech generated from models adapted with only ten sentences was very close to that from speaker-dependent models trained on 646 sentences.

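The sketch below illustrates the general idea behind adapting an average-voice model to a target speaker from little data, using a single global linear transform of the Gaussian means estimated by least squares; this is a simplified stand-in for the HMM adaptation actually used, and all statistics are synthetic.

```python
# Hedged sketch: estimate one global transform (MLLR-style) that maps average-voice
# Gaussian means toward per-state statistics gathered from a few target-speaker
# sentences. All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim, n_states = 40, 50                      # assumed feature dim / number of HMM states

avg_means = rng.normal(size=(n_states, dim))              # average-voice model means
true_W = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))  # unknown speaker mapping
target_stats = avg_means @ true_W.T + 0.05 * rng.normal(size=(n_states, dim))
# target_stats stands in for per-state mean statistics accumulated from the
# target speaker's few adaptation sentences via forced alignment.

# Estimate a transform T by least squares so that avg_means @ T approximates the
# target statistics; the adapted model uses the transformed means.
T_hat, *_ = np.linalg.lstsq(avg_means, target_stats, rcond=None)
adapted_means = avg_means @ T_hat

err_before = np.mean((avg_means - target_stats) ** 2)
err_after = np.mean((adapted_means - target_stats) ** 2)
print(f"mean-squared error before {err_before:.3f} -> after {err_after:.3f}")
```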

코퍼스 기반 음성합성기를 위한 합성단위 경계 스펙트럼 평탄화 알고리즘 (A Spectral Smoothing Algorithm for Unit Concatenating Speech Synthesis)

  • 김상진; 장경애; 한민수
    • 대한음성학회지:말소리, No. 56, pp. 225-235, 2005
  • Speech unit concatenation with a large database is currently the most popular method for speech synthesis. In this approach, mismatches at the unit boundaries are unavoidable and are one cause of quality degradation. This paper proposes an algorithm to reduce undesired discontinuities between consecutive units. Optimal matching points are calculated in two steps: first, the Kullback-Leibler distance is used for spectral matching, and then unit sliding and overlap windowing are used for waveform matching. The proposed algorithm is implemented in a corpus-based unit-concatenating Korean text-to-speech system with an automatically labeled database. Experimental results show that our algorithm performs noticeably better than raw concatenation or the overlap-smoothing method.

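The two matching steps can be illustrated as below: the join point is chosen by minimizing a symmetric Kullback-Leibler distance between normalized magnitude spectra, and the waveforms are then cross-faded with an overlap window. Frame lengths, hop sizes, and signals are assumptions, not the paper's settings.

```python
# Illustrative sketch of spectral matching followed by overlap smoothing
# (an assumed simplification of the paper's algorithm).
import numpy as np

def spectrum(frame: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum, usable as a probability-like distribution."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-10
    return mag / mag.sum()

def symmetric_kl(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def best_join(tail: np.ndarray, head: np.ndarray, frame_len: int = 256, hop: int = 64):
    """Return (offset_in_tail, offset_in_head) minimizing spectral mismatch."""
    best, best_pair = np.inf, (0, 0)
    for i in range(0, len(tail) - frame_len + 1, hop):
        for j in range(0, len(head) - frame_len + 1, hop):
            d = symmetric_kl(spectrum(tail[i:i + frame_len]),
                             spectrum(head[j:j + frame_len]))
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair

def crossfade(tail: np.ndarray, head: np.ndarray, overlap: int = 256) -> np.ndarray:
    """Overlap-window the two units across `overlap` samples."""
    fade = np.linspace(1.0, 0.0, overlap)
    mixed = tail[-overlap:] * fade + head[:overlap] * (1.0 - fade)
    return np.concatenate([tail[:-overlap], mixed, head[overlap:]])

rng = np.random.default_rng(0)
unit_a, unit_b = rng.normal(size=2048), rng.normal(size=2048)
i, j = best_join(unit_a[-1024:], unit_b[:1024])
print("join offsets:", i, j)
print("smoothed length:", len(crossfade(unit_a, unit_b)))
```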

A New Pruning Method for Synthesis Database Reduction Using Weighted Vector Quantization

  • Kim, Sanghun; Lee, Youngjik; Hirose, Keikichi
    • The Journal of the Acoustical Society of Korea, Vol. 20, No. 4E, pp. 31-38, 2001
  • A large-scale synthesis database for unit-selection-based synthesis usually retains redundant synthesis unit instances that contribute nothing to synthetic speech quality. In this paper, to eliminate such instances from the synthesis database, we propose a new pruning method called weighted vector quantization (WVQ). WVQ reflects the relative importance of each synthesis unit instance when clustering similar instances using the vector quantization (VQ) technique. The proposed method was compared with two conventional pruning methods through objective and subjective evaluations of synthetic speech quality: one that simply limits the maximum number of instances, and another based on normal VQ-based clustering. The proposed method showed the best performance at reduction rates below 50%; above a 50% reduction rate, the synthetic speech quality is perceptibly, though not seriously, degraded. Using the proposed method, the synthesis database can be efficiently reduced without serious degradation of synthetic speech quality.

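A simplified sketch of weighted-VQ-style pruning is given below: instances are clustered with importance weights and one representative is kept per codeword. The weight definition, feature vectors, and codebook size are assumptions, not the paper's exact WVQ formulation.

```python
# Hedged sketch of weighted vector quantization for database pruning.
import numpy as np

def weighted_vq(features: np.ndarray, weights: np.ndarray,
                n_codewords: int, n_iter: int = 20, seed: int = 0) -> np.ndarray:
    """Return indices of the instances kept after weighted VQ-based pruning."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_codewords, replace=False)]
    for _ in range(n_iter):
        # Assign every instance to its nearest codeword.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Update each codeword as the importance-weighted mean of its members.
        for k in range(n_codewords):
            members = assign == k
            if members.any():
                w = weights[members][:, None]
                centroids[k] = (features[members] * w).sum(axis=0) / w.sum()
    # Prune: keep only the instance closest to each codeword.
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.unique(dists.argmin(axis=0))

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 12))            # stand-in for unit-instance features
importance = rng.uniform(0.1, 1.0, size=200)  # e.g., relative importance per instance
kept = weighted_vq(feats, importance, n_codewords=100)  # roughly 50% reduction
print(f"kept {len(kept)} of {len(feats)} instances")
```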

한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리 (Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech)

  • 홍문기; 신지영; 강선미
    • 음성과학, Vol. 10, No. 2, pp. 7-25, 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as the fundamental frequency (F0), is the element most closely related to the quality of the synthesized speech. However, the previous method for predicting F0 shows some problems: if voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, selection of the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or vibration. Such errors typically occur at transitions from voiced to unvoiced sounds or from unvoiced to voiced sounds; they are not resolved by rule-based methods and strongly affect the synthesized sound. Therefore, to obtain reliable pitch values, this paper proposes a new model for classifying and predicting voiced and unvoiced sounds using the CART tool.

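A minimal CART-style voiced/unvoiced classifier can be sketched with scikit-learn's DecisionTreeClassifier, as below; the frame-level features (log energy, zero-crossing rate) and the synthetic labels are assumptions, not the study's feature set or data.

```python
# Minimal sketch of CART-style voiced/unvoiced classification on synthetic
# frame-level features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_frames(n: int, voiced: bool) -> np.ndarray:
    """Synthetic [log-energy, zero-crossing-rate] features for voiced/unvoiced frames."""
    energy = rng.normal(2.0 if voiced else -1.0, 0.7, size=n)
    zcr = rng.normal(0.05 if voiced else 0.35, 0.05, size=n)
    return np.column_stack([energy, zcr])

X = np.vstack([make_frames(500, voiced=True), make_frames(500, voiced=False)])
y = np.array([1] * 500 + [0] * 500)            # 1 = voiced, 0 = unvoiced

cart = DecisionTreeClassifier(max_depth=4, random_state=0)  # CART-style binary tree
cart.fit(X, y)

# The predicted voiced/unvoiced decision can then gate F0 prediction per frame.
print("training accuracy:", cart.score(X, y))
```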