• Title/Summary/Keyword: corpora

249 search results

Trends in Deep-neural-network-based Dialogue Systems (심층 신경망 기반 대화처리 기술 동향)

  • Kwon, O.W.;Hong, T.G.;Huang, J.X.;Roh, Y.H.;Choi, S.K.;Kim, H.Y.;Kim, Y.K.;Lee, Y.K.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.4
    • /
    • pp.55-64
    • /
    • 2019
  • In this study, we introduce trends in neural-network-based deep learning research applied to dialogue systems. Recently, end-to-end trainable goal-oriented dialogue systems using long short-term memory and sequence-to-sequence models, among others, have been studied to overcome the difficulties of domain adaptation and of error recognition and recovery in traditional pipeline goal-oriented dialogue systems. In addition, some research has applied reinforcement learning to end-to-end trainable goal-oriented dialogue systems to learn dialogue strategies that do not appear in training corpora. Recent neural network models for end-to-end trainable chit-chat systems have been improved by using dialogue context as well as personal and topic information to produce more natural human conversation. Unlike previous studies, which applied different approaches to goal-oriented dialogue systems and chit-chat systems respectively, recent studies have attempted to apply end-to-end trainable approaches based on deep neural networks to both. Acquiring dialogue corpora for training has thus become essential; future research will therefore focus on acquiring dialogue corpora easily and cheaply, and on training with small annotated dialogue corpora and/or large collections of raw dialogues.

Development of Differential Diagnosis and Treatment Method of Reproductive Disorders Using Ultrasonography in Cows III. Differential Diagnosis between Developing and Regressing Corpus Luteum (초음파검사에 의한 소의 번식장애 감별진단 및 치료법 개발 III. 발육황체와 퇴행황체의 감별)

  • 손창호;강병규;최한선;임원호;강현구;오기석;신종봉;서국현
    • Journal of Veterinary Clinics
    • /
    • v.16 no.1
    • /
    • pp.118-127
    • /
    • 1999
  • The aim of this study was to establish a method for the differential diagnosis between developing and regressing corpora lutea in cows. Plasma progesterone (P$_4$) concentrations were determined by radioimmunoassay in slaughtered, cycling, and pregnant cows. Ultrasonography was used to measure corpus luteum size and histogram values to determine the correlations between corpus luteum area or histogram values and plasma P$_4$ concentrations. The corpora lutea were monitored in vitro (water-bath scanning) by ultrasonography with a 7.5 MHz linear-array transducer in 196 slaughtered cows. The correlation coefficient between corpus luteum area and plasma P$_4$ concentrations was 0.46 (p<0.01), and that between histogram values and plasma P$_4$ concentrations was -0.44 (p<0.01). The corpora lutea were monitored by ultrasonography with a 5.0 MHz linear-array transrectal transducer in 188 cycling and 30 pregnant cows. Corpus luteum areas and plasma P$_4$ concentrations were significantly different between regressing and other corpora lutea (p<0.01), and histogram values were significantly different between regressing and developing corpora lutea (p<0.01). The correlation coefficients between corpus luteum areas and plasma P$_4$ concentrations were 0.76 (p<0.01), 0.71 (p<0.01), 0.65 (p<0.05), and 0.68 (p<0.05), and those between histogram values and plasma P$_4$ concentrations were 0.74 (p<0.05), 0.71 (p<0.01), -0.52 (p<0.05), and 0.65 (p<0.05) in developing, functional, regressing, and pregnant corpora lutea, respectively. These results indicate that corpus luteum areas and plasma P$_4$ concentrations were highly correlated at all stages of the corpus luteum. Histogram values and plasma P$_4$ concentrations were positively correlated in developing, functional, and pregnant corpora lutea, but negatively correlated in regressing corpora lutea. Therefore, measuring corpus luteum area and histogram values by ultrasonography is a reliable method for assessing luteal function, especially for distinguishing developing from regressing corpora lutea.

  • PDF
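The figures above are Pearson correlation coefficients between corpus luteum measurements and plasma P$_4$. As a reference for how such a coefficient is computed, a minimal sketch with made-up illustrative values (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (hypothetical) pairs: corpus luteum area vs. plasma P4
areas = [2.1, 3.0, 3.8, 4.5, 5.2]
p4 = [1.0, 2.2, 2.9, 4.1, 4.8]
r = pearson_r(areas, p4)  # near +1 for a developing-CL-like pattern
```

A negative r, as reported for regressing corpora lutea versus histogram values, would arise when one quantity rises as the other falls.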

An Analysis of Korean and American Presidential Addresses: Focusing on Punctuation and Transition

  • Jun, Ki-Suk;Jung, Kyu-Tae
    • English Language & Literature Teaching
    • /
    • v.17 no.2
    • /
    • pp.1-18
    • /
    • 2011
  • The object of this study is to show features of English mechanics, specifically punctuation and transition, in which Korean presidential addresses transcribed in English differ from those of the United States. To that end, the presidential addresses of the United States and Korea from January 2010 to June 2010 were collected, made into corpora, and analyzed. Through analyzing the corpora, this paper addresses the following research questions: (1) What features can be regarded as different in terms of punctuation and transition? (2) If there are differences between the corpora, are they significant enough to pose problems for Korean and American English users communicating with each other? (3) If so, what can be done to solve the problems, with regard to pedagogical implications? Overall, as for punctuation, both Presidents' addresses share a great deal in common, albeit with some idiosyncratic variations. However, there are noticeable differences in transitional devices, though it is not clear whether these should be taken as a sign of personal preference, since transitional markers are part of the wording of a written text.

  • PDF

A Study in Design and Construction of Structured Documents for Dialogue Corpus (대화형 코퍼스의 설계 및 구조적 문서화에 관한 연구)

  • Kang Chang-Qui;Nam Myung-Woo;Yang Ok-Yul
    • The Journal of the Korea Contents Association
    • /
    • v.4 no.4
    • /
    • pp.1-10
    • /
    • 2004
  • Dialogue speech corpora that contain sufficient dialogue speech features are needed for the performance assessment of a spoken language dialogue system, and the labeling information of dialogue speech corpora plays an important role in improving the recognition rate of acoustic and language models. In this paper, we examine methods by which the labeling information of dialogue speech corpora can be structured. More specifically, we examine how to represent the features of dialogue speech in XML-based structured documents and how to design a repository system for this information.

  • PDF
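An XML labeling document of the kind the paper describes might attach speaker and timing attributes to each utterance. A hypothetical sketch built with Python's standard library (the element and attribute names are illustrative, not the paper's actual schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical labeling schema for one dialogue turn
dialogue = ET.Element("dialogue", id="d001", domain="reservation")
turn = ET.SubElement(dialogue, "turn", speaker="user", act="request")
utt = ET.SubElement(turn, "utterance", start="0.00", end="2.31")
utt.text = "I'd like to book a table for two."

# Serialize the labeled dialogue for storage in a repository
xml_string = ET.tostring(dialogue, encoding="unicode")
```

Storing labels as attributes on nested elements keeps the acoustic (timing) and language (dialogue-act) annotations in one queryable document.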

Benford's Law in Linguistic Texts: Its Principle and Applications (언어 텍스트에 나타나는 벤포드 법칙: 원리와 응용)

  • Hong, Jung-Ha
    • Language and Information
    • /
    • v.14 no.1
    • /
    • pp.145-163
    • /
    • 2010
  • This paper proposes that Benford's Law, the non-uniform distribution of leading digits in lists of numbers from many real-life sources, also appears in linguistic texts. The first digits in the frequency lists of morphemes from the Sejong Morphologically Analyzed Corpora show a non-uniform distribution following Benford's Law, while exhibiting the complexity of numerical sources from complex systems such as earthquakes. Benford's Law in texts is a principle reflecting the regular distribution of low-frequency linguistic types, called LNRE (large number of rare events), and it governs texts, corpora, and sample texts relatively independently of text size and the number of types. Although texts share a similar distribution pattern under Benford's Law, the non-uniform distribution varies slightly from text to text, which provides useful applications for evaluating the randomness of text distributions, focused on low-frequency types.

  • PDF
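The leading-digit distribution the abstract describes can be checked directly: Benford's Law predicts P(d) = log10(1 + 1/d) for leading digit d, so "1" should lead about 30.1% of the time. A minimal sketch (the token list is a toy stand-in, not the Sejong corpora):

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def leading_digit_distribution(frequencies):
    """Observed share of each leading digit in a list of positive counts."""
    digits = [int(str(f)[0]) for f in frequencies if f > 0]
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Toy token list standing in for a morpheme frequency table
tokens = "the quick brown fox jumps over the lazy dog the fox".split()
freqs = list(Counter(tokens).values())
observed = leading_digit_distribution(freqs)
expected = {d: benford_expected(d) for d in range(1, 10)}
```

Comparing `observed` against `expected` (e.g. with a chi-square statistic) is one way to score the randomness of a text's type distribution, as the paper suggests.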

AutoCor: A Query Based Automatic Acquisition of Corpora of Closely-related Languages

  • Dimalen, Davis Muhajereen D.;Roxas, Rachel Edita O.
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.146-154
    • /
    • 2007
  • AutoCor is a method for the automatic acquisition and classification of corpora of documents in closely related languages. It is an extension and enhancement of CorpusBuilder, a system that automatically builds specific minority-language corpora from a closed corpus; the extension was motivated by the fact that some Tagalog documents retrieved by CorpusBuilder are actually documents in other closely related Philippine languages. AutoCor uses the query generation method odds ratio and introduces the concept of common word pruning to differentiate between documents of closely related Philippine languages and Tagalog. The performance of the system with and without pruning was compared, and common word pruning was found to improve the precision of the system.

  • PDF
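The odds-ratio query generation mentioned above scores a word by how strongly it favors relevant (target-language) documents over irrelevant ones, and common word pruning drops candidates shared with a closely related language. A minimal sketch under those assumptions (the smoothing, word lists, and function names are illustrative, not AutoCor's actual implementation):

```python
import math

def odds_ratio(p_rel, p_irr, eps=1e-6):
    """Log odds ratio: high when a word is frequent in relevant
    documents and rare in irrelevant ones. eps clamps probabilities
    away from 0 and 1 to keep the log finite."""
    p_rel = min(max(p_rel, eps), 1 - eps)
    p_irr = min(max(p_irr, eps), 1 - eps)
    return math.log((p_rel * (1 - p_irr)) / ((1 - p_rel) * p_irr))

def prune_common_words(candidates, related_language_words):
    """Common word pruning: drop candidate query words that also
    occur in a closely related language's word list."""
    return [w for w in candidates if w not in related_language_words]

# Illustrative: "ng" appears in both Tagalog and a related language,
# so it is pruned from the query candidates.
candidates = ["ng", "kumusta", "salamat"]
related = {"ng", "ang"}
queries = prune_common_words(candidates, related)
```

The surviving high-odds-ratio words then form the queries used to retrieve further documents in the target language.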

Addressing Low-Resource Problems in Statistical Machine Translation of Manual Signals in Sign Language (말뭉치 자원 희소성에 따른 통계적 수지 신호 번역 문제의 해결)

  • Park, Hancheol;Kim, Jung-Ho;Park, Jong C.
    • Journal of KIISE
    • /
    • v.44 no.2
    • /
    • pp.163-170
    • /
    • 2017
  • Despite the rise of studies in spoken-to-sign language translation, the low-resource problems of sign language corpora have rarely been addressed. As a first step toward translating spoken language into sign language, we addressed the problems arising from resource scarcity when translating spoken language into manual signals using statistical machine translation techniques. More specifically, we proposed three preprocessing methods: 1) paraphrase generation, which increases the size of the corpora; 2) lemmatization, which increases the frequency of each word in the corpora and the translatability of new input words in the spoken language; and 3) elimination of function words that are not glossed into manual signals, which matches the corresponding constituents of the bilingual sentence pairs. In our experiments, we used different types of English-American Sign Language parallel corpora. The experimental results showed that each method, and the combination of the methods, improved the quality of manual signal translation, regardless of the type of corpus.

Enhancement of a language model using two separate corpora of distinct characteristics

  • Cho, Sehyeong;Chung, Tae-Sun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.357-362
    • /
    • 2004
  • Language models are essential in predicting the next word in a spoken sentence, thereby enhancing speech recognition accuracy, among other things. However, spoken language domains are too numerous, and developers therefore suffer from the lack of corpora of sufficient size. This paper proposes a method of combining two n-gram language models, one constructed from a very small corpus of the right domain of interest, the other constructed from a large but less adequate corpus, resulting in a significantly enhanced language model. The method is based on the observation that a small corpus from the right domain has high-quality n-grams but a serious sparseness problem, while a large corpus from a different domain has richer n-gram statistics but is incorrectly biased. In our approach, the two n-gram statistics are combined by extending the idea of Katz's backoff; the result is therefore called dual-source backoff. We ran experiments with 3-gram language models constructed from newspaper corpora of several million to tens of millions of words, together with models from smaller broadcast news corpora. The target domain was broadcast news. We obtained a significant improvement (30%) by incorporating a small corpus around one-thirtieth the size of the newspaper corpus.
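The dual-source idea can be sketched as follows: trust the small in-domain model whenever it has actually seen an n-gram, and otherwise back off to the large out-of-domain model with a weight. This is a simplified illustration only (the paper extends Katz's backoff with proper discounting; `alpha` and `threshold` here are illustrative parameters, not the paper's):

```python
class DualSourceBackoff:
    """Simplified sketch of dual-source backoff for bigrams:
    use the in-domain estimate when the small corpus has seen
    the bigram, else back off to the out-of-domain estimate."""

    def __init__(self, in_domain, out_domain, alpha=0.4, threshold=1):
        self.in_domain = in_domain    # {(w1, w2): count}
        self.out_domain = out_domain  # {(w1, w2): count}
        self.alpha = alpha            # backoff weight (illustrative)
        self.threshold = threshold    # minimum in-domain count to trust

    def _prob(self, counts, w1, w2):
        # Maximum-likelihood bigram probability P(w2 | w1)
        context_total = sum(c for (a, _), c in counts.items() if a == w1)
        if context_total == 0:
            return 0.0
        return counts.get((w1, w2), 0) / context_total

    def prob(self, w1, w2):
        if self.in_domain.get((w1, w2), 0) >= self.threshold:
            return self._prob(self.in_domain, w1, w2)
        return self.alpha * self._prob(self.out_domain, w1, w2)
```

In the real method the backoff weight must be chosen so the distribution still sums to one, which is exactly what the Katz-style discounting in the paper provides.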

Analyzing Errors in Bilingual Multi-word Lexicons Automatically Constructed through a Pivot Language

  • Seo, Hyeong-Won;Kim, Jae-Hoon
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.39 no.2
    • /
    • pp.172-178
    • /
    • 2015
  • Constructing a bilingual multi-word lexicon is confronted with many difficulties, such as the absence of a commonly accepted gold-standard dataset; indeed, there is no universally accepted definition of what a multi-word unit is. Considering these problems, this paper evaluates and analyzes the context vector approach, a novel alignment method for constructing bilingual lexicons from parallel corpora, by comparing it with one of the general methods. The approach builds context vectors for both source and target single-word units from two parallel corpora. To adapt the approach to multi-word units, we first identify all multi-word candidates (namely, noun phrases in this work) and then concatenate each of them into a single-word unit; as a result, the context vector approach can be applied to multi-word units. In our experiments, the context vector approach showed stronger performance than the other approach. The contribution of this paper is the analysis of the various types of errors in the experimental results. In future work, we will study a similarity measure that covers not only a multi-word unit itself but also its constituents.
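The context vector approach described above can be sketched generically: build co-occurrence vectors for each unit, project a source vector into the target language through a seed bilingual dictionary, and rank target units by cosine similarity. A minimal, self-contained sketch of those three steps (illustrative; the paper additionally concatenates noun-phrase candidates into single-word units before this stage):

```python
import math
from collections import Counter, defaultdict

def context_vectors(sentences, window=2):
    """Co-occurrence context vector for each word within a window."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[w][sent[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def translate_vector(vec, seed_dict):
    """Project a source-language context vector into the target
    language through a seed bilingual dictionary."""
    out = Counter()
    for w, c in vec.items():
        if w in seed_dict:
            out[seed_dict[w]] += c
    return out
```

Candidate translations are then the target units whose context vectors score highest against the projected source vector.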

A Protein-Protein Interaction Extraction Approach Based on Large Pre-trained Language Model and Adversarial Training

  • Tang, Zhan;Guo, Xuchao;Bai, Zhao;Diao, Lei;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.771-791
    • /
    • 2022
  • Protein-protein interaction (PPI) extraction from original text is important for revealing the molecular mechanisms of biological processes. With the rapid growth of biomedical literature, manually extracting PPI has become increasingly time-consuming and laborious. Therefore, automatic PPI extraction from raw literature through natural language processing technology has attracted the attention of many researchers. We propose a PPI extraction model based on a large pre-trained language model and adversarial training. It enhances the learning of semantic and syntactic features using BioBERT pre-trained weights, which are built on large-scale domain corpora, and adversarial perturbations are applied to the embedding layer to improve the robustness of the model. Experimental results showed that the proposed model achieved the highest F1 scores (83.93% and 90.31%) on two corpora with large sample sizes, namely AIMed and BioInfer, respectively, compared with previous methods. It also achieved comparable performance on three corpora with small sample sizes, namely HPRD50, IEPA, and LLL.
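Adversarial perturbation of an embedding layer is commonly done FGM-style: take the gradient of the loss with respect to the embeddings, normalize it, and step a small epsilon along it before a second forward/backward pass. A framework-free sketch of just the perturbation step (illustrative; whether the paper uses exactly this variant inside BioBERT training is an assumption here):

```python
import math

def fgm_perturbation(grad, epsilon=1.0):
    """FGM-style adversarial perturbation: r = epsilon * g / ||g||.
    Added to the embedding to create the adversarial example."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0.0:
        return [0.0] * len(grad)
    return [epsilon * g / norm for g in grad]

# Schematic use: perturb an embedding along its loss gradient,
# compute the adversarial loss on the perturbed embedding, then
# restore the original embedding before the optimizer step.
embedding = [0.5, -0.2, 0.1]
grad = [3.0, 4.0, 0.0]
perturbed = [e + r for e, r in zip(embedding, fgm_perturbation(grad, epsilon=0.1))]
```

Because the perturbation points in the direction that most increases the loss, training on it forces the model to be robust in a small neighborhood of each embedding, which is the robustness gain the abstract reports.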