• Title/Summary/Keyword: sentence model


Sentence model based subword embeddings for a dialog system

  • Chung, Euisok;Kim, Hyun Woo;Song, Hwa Jeon
    • ETRI Journal / v.44 no.4 / pp.599-612 / 2022
  • This study focuses on improving a word embedding model to enhance the performance of downstream tasks, such as those of dialog systems. To improve traditional word embedding models, such as skip-gram, it is critical to refine the word features and expand the context model. In this paper, we approach the word model from the perspective of subword embedding and attempt to extend the context model by integrating various sentence models. Our proposed sentence model is a subword-based skip-thought model that integrates self-attention and relative position encoding techniques. We also propose a clustering-based dialog model for downstream task verification and evaluate its relationship with the sentence-model-based subword embedding technique. The proposed subword embedding method produces better results than previous methods in evaluating word and sentence similarity. In addition, the downstream task verification, a clustering-based dialog system, demonstrates an improvement of up to 4.86% over the results of FastText in previous research.
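
As a rough illustration of the subword idea this line of work builds on, the sketch below forms a word vector by averaging hashed character n-gram vectors, in the FastText style the paper compares against; the dimensions, bucket count, and hashing scheme are illustrative assumptions, not the authors' setup.

```python
# Minimal FastText-style subword embedding lookup (illustrative only).
import numpy as np

EMB_DIM = 64
NUM_BUCKETS = 100_000
rng = np.random.default_rng(0)
subword_table = rng.normal(scale=0.1, size=(NUM_BUCKETS, EMB_DIM))

def char_ngrams(word, n_min=3, n_max=5):
    """Extract character n-grams from a word wrapped in boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word):
    """Embed a word as the mean of its hashed subword vectors."""
    idx = [hash(g) % NUM_BUCKETS for g in char_ngrams(word)]
    return subword_table[idx].mean(axis=0)

print(word_vector("embedding").shape)  # (64,)
```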

Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok;Park, Jeon Gue
    • ETRI Journal / v.39 no.4 / pp.455-466 / 2017
  • This study focuses on a method for sequential data augmentation in order to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation issues; it has the ability to generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain based seq2seq model. For training the seq2seq model, sentence chains are used as triples: the first two sentences in a triple are fed to the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Evaluation results using only internal resources show a relative perplexity improvement of approximately 7.6% over a baseline language model of Korean text. Additionally, in a comparison with a previous study, the sentence chain approach reduces the size of the training data by 38.4% while generating 1.4 times the number of n-grams, with superior performance for English text.
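
The triple construction is easy to picture. Below is a minimal sketch, assuming the two encoder sentences are simply concatenated; the paper may feed them separately.

```python
def sentence_chain_triples(sentences):
    """Slide a window of three consecutive sentences over a document:
    the first two form the encoder input, the third the decoder target."""
    triples = []
    for i in range(len(sentences) - 2):
        encoder_input = sentences[i] + " " + sentences[i + 1]
        decoder_target = sentences[i + 2]
        triples.append((encoder_input, decoder_target))
    return triples

doc = ["S1 .", "S2 .", "S3 .", "S4 ."]
for enc, dec in sentence_chain_triples(doc):
    print(enc, "->", dec)
```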

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin;Kim, Sang-Hun
    • ETRI Journal / v.28 no.6 / pp.807-810 / 2006
  • This letter presents a prediction model for sentence-final intonations for Korean conversational-style text-to-speech systems, in which we introduce the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build our prediction model using a tree-structured classification algorithm. To show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment is performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality contributes more to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves by up to 25% when the feature of modality is introduced.
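
A minimal sketch of the kind of tree-structured classifier described, with modality alongside sentence type and speech act as categorical features; the feature values and boundary-tone labels are invented for illustration.

```python
# Decision tree over categorical prosodic features (illustrative data).
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Columns: modality, sentence type, speech act (hypothetical values).
X = [["declarative-assertion", "statement", "inform"],
     ["interrogative-question", "question", "ask"],
     ["imperative-request", "command", "request"],
     ["declarative-assertion", "statement", "confirm"]]
y = ["L%", "H%", "HL%", "L%"]  # illustrative sentence-final tone types

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      DecisionTreeClassifier(max_depth=5))
model.fit(X, y)
print(model.predict([["interrogative-question", "question", "ask"]]))
```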

Method of Extracting the Topic Sentence Considering Sentence Importance based on ELMo Embedding (ELMo 임베딩 기반 문장 중요도를 고려한 중심 문장 추출 방법)

  • Kim, Eun Hee;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal / v.10 no.1 / pp.39-46 / 2021
  • This study concerns a method of extracting a summary from a news article in consideration of the importance of each sentence constituting the article. We propose a method of calculating sentence importance by extracting, as characteristics that affect importance, the topic-sentence probability, the similarity with the article title and with the other sentences, and the sentence position. We hypothesize that a topic sentence has characteristics distinct from ordinary sentences, and train a deep learning-based classification model to obtain a topic-sentence probability value for an input sentence. In addition, using the pre-trained ELMo language model, the similarity between sentences is calculated from sentence vectors that reflect contextual information and is extracted as a sentence characteristic. The LSTM- and BERT-based topic sentence classifiers achieved 93% accuracy, 96.22% recall, and 89.5% precision. Combining the extracted sentence characteristics to calculate the importance of each sentence improved topic-sentence extraction performance by about 10% over the existing TextRank algorithm.
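
A minimal sketch of how the four cues could be combined into a single importance score; the linear combination and its weights are assumptions, not the paper's formula.

```python
import numpy as np

def sentence_importance(topic_prob, title_sim, avg_sent_sim, position, n_sents,
                        weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine the four cues from the paper into one score.
    The linear combination and the weights are illustrative assumptions."""
    position_score = 1.0 - position / max(n_sents - 1, 1)  # earlier = higher
    feats = np.array([topic_prob, title_sim, avg_sent_sim, position_score])
    return float(np.dot(weights, feats))

score = sentence_importance(topic_prob=0.85, title_sim=0.6,
                            avg_sent_sim=0.5, position=0, n_sents=12)
print(round(score, 3))
```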

A study on the Stochastic Model for Sentence Speech Understanding (문장음성 이해를 위한 확률모델에 관한 연구)

  • Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB / v.10B no.7 / pp.829-836 / 2003
  • In this paper, we propose a stochastic model for sentence speech understanding that uses a dictionary and a thesaurus. The proposed model extracts words from an input speech or text sentence, compares them with the words in the selected category of a dictionary database, and computes a probability value from the comparison. It then retrieves upper-level (thesaurus) dictionary information, compares the extracted upper-level words with the input sentence, and computes a second probability value. The sum of the first and second probability values is compared with a threshold probability to measure the sentence understanding rate. We evaluated the performance of the sentence speech understanding system with a twenty-questions game. In the experiments, we obtained a sentence speech understanding accuracy of 79.8%, with the probability of a high-level word (α) set to 0.9 and the threshold probability (β) set to 0.38.
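
A minimal sketch of the threshold decision, assuming the two probability values are summed with the high-level word weight α before comparison with β; the exact combination rule is inferred from the abstract, not taken from the paper.

```python
def understands(p_dict, p_upper, alpha=0.9, beta=0.38):
    """Combine the base-dictionary match probability with the upper-level
    (thesaurus) match probability, weighting upper-level words by alpha,
    and accept the hypothesis if the sum clears the threshold beta.
    The combination rule is an assumption based on the abstract."""
    combined = p_dict + alpha * p_upper
    return combined >= beta

print(understands(p_dict=0.25, p_upper=0.20))  # True: 0.25 + 0.18 >= 0.38
```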

Building a Sentential Model for Automatic Prosody Evaluation

  • Yoon, Kyu-Chul
    • Phonetics and Speech Sciences / v.1 no.4 / pp.47-59 / 2009
  • The purpose of this paper is to propose an automatic evaluation technique for the prosodic aspect of an English sentence uttered by Korean speakers learning English. The underlying hypothesis is that the consistency of manual prosody scoring is reflected in an imaginary prosody-evaluation space constructed from the three physical properties of prosody considered in this paper: the fundamental frequency (F0) contour, the intensity contour, and the segmental durations. The evaluation proceeds by first building a prosody evaluation model for the sentence. To create the model, utterances of the target sentence from native speakers of English and from Korean learners are manually scored for prosody by native teachers of English or by Korean phoneticians. Multiple native utterances from the manual scoring are selected as the "model" native utterances, against which all the other Korean learners' utterances, as well as the model utterances themselves, can be semi-automatically evaluated by comparison in terms of the three prosodic aspects [7]. Each learner utterance, when compared to the multiple model native utterances, produces multiple coordinates in a three-dimensional prosody-evaluation space whose axes correspond to the three prosodic aspects. The 3D coordinates from all the comparisons form a prosody evaluation model for the particular sentence, and the associated manual scores reveal regions of particular scores. The model can then be used as a predictive model against which other Korean utterances of the target sentence can be evaluated. The model built from a Korean phonetician's scores appears to support the hypothesis.
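
A minimal sketch of the 3-D coordinate computation, assuming a Euclidean distance per prosodic dimension over time-aligned contours; the paper's actual comparison method [7] may differ.

```python
import numpy as np

def prosody_coordinates(learner, native):
    """Map a learner utterance to a 3-D point: one distance per prosodic
    dimension (F0 contour, intensity contour, segmental durations)."""
    return np.array([
        np.linalg.norm(learner["f0"] - native["f0"]),
        np.linalg.norm(learner["intensity"] - native["intensity"]),
        np.linalg.norm(learner["durations"] - native["durations"]),
    ])

rng = np.random.default_rng(1)
utt = lambda: {"f0": rng.normal(size=50), "intensity": rng.normal(size=50),
               "durations": rng.normal(size=10)}  # toy time-aligned contours
print(prosody_coordinates(utt(), utt()))
```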

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.125-132 / 2022
  • Sentence compression is a natural language processing task that generates a concise sentence preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. Furthermore, because sequence-to-sequence models perform well on various natural language processing tasks, such as machine translation, some studies have applied them to sentence compression. However, the rule-based studies require all rules to be defined by humans, and the sequence-to-sequence studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in the sentence. Furthermore, since the corpora used to pre-train BERT are far removed from compressed sentences, this can lead to incorrect sentence compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in the perplexity-based sentence scoring. Furthermore, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the proposed method improves the sentence compression performance of sentence-scoring-based models.
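
A minimal sketch of the greedy deletion loop with importance-weighted scoring; the perplexity function is a stub standing in for the BERT scorer, and the weighting scheme is one illustrative reading of the proposal, not its exact formula.

```python
def perplexity(tokens):
    """Stub: a real scorer would query BERT for the token sequence."""
    return 1.0 / (1.0 + len(tokens))

def compress(tokens, importance, keep=0.6):
    """Repeatedly delete the token whose removal yields the lowest
    importance-weighted perplexity, until the length budget is met."""
    target = max(1, int(len(tokens) * keep))
    tokens, importance = list(tokens), list(importance)
    while len(tokens) > target:
        best = min(range(len(tokens)),
                   key=lambda i: perplexity(tokens[:i] + tokens[i + 1:])
                                 * importance[i])
        del tokens[best], importance[best]
    return tokens

sent = ["the", "very", "old", "man", "walked", "slowly", "home"]
imp  = [0.9, 0.2, 0.4, 1.0, 1.0, 0.3, 0.8]  # hypothetical importance weights
print(compress(sent, imp))  # low-importance modifiers are dropped first
```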

Development of Real Time Information Service Model Using Smart Phone Lock Screen (스마트 폰 잠금 화면을 통한 실시간 정보제공 서비스 모델의 개발)

  • Oh, Sung-Jin;Jang, Jin-Wook
    • Journal of Information Technology Services / v.13 no.3 / pp.323-331 / 2014
  • This research is based on a real-time information service model that uses the lock screen of smart devices, the screen most frequently exposed to users. The potential of the lock-screen space is immense because of how long it is exposed to the user, and its effect can be maximized by offering useful information content there. The proposed service model presents real-time trending keywords together with an abridged sentence: it matches each real-time keyword to a news article using a text matching algorithm and extracts the kernel sentence from the article to give the user a short summary. A user evaluation was conducted after adding the ability to move from the matched key sentence on the lock screen to the original article. On average, only 5.71% of users felt the provided key sentence was insufficient and moved to the original article, confirming that the algorithm extracts key sentences that accurately reflect the real-time keyword.
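
A minimal sketch of the keyword-to-kernel-sentence step, assuming simple token-overlap matching; the service's actual text matching algorithm is not specified in the abstract.

```python
def kernel_sentence(keyword, article_sentences):
    """Pick the article sentence that best matches the trending keyword,
    here by token overlap (an assumption, not the service's algorithm)."""
    kw_tokens = set(keyword.lower().split())
    def overlap(sent):
        return len(kw_tokens & set(sent.lower().split()))
    return max(article_sentences, key=overlap)

news = ["The city opened a new subway line today.",
        "Officials said the subway line cost 2 billion.",
        "Weather stays mild this week."]
print(kernel_sentence("new subway line", news))
```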

Statistical Generation of Korean Chatting Sentences Using Multiple Feature Information (복합 자질 정보를 이용한 통계적 한국어 채팅 문장 생성)

  • Kim, Jong-Hwan;Chang, Du-Seong;Kim, Hark-Soo
    • Korean Journal of Cognitive Science / v.20 no.4 / pp.421-437 / 2009
  • A chatting system is a computer program that simulates conversation between a human and a computer using natural language. In this paper, we propose a statistical model that generates natural chatting sentences when keywords and speech acts are input. The proposed model first finds Eojeols (Korean spacing units) that include the input keywords in a corpus, and generates sentence candidates by using the appearance information and syntactic information of the Eojeols surrounding the found Eojeols. Then, the proposed model selects one of the sentence candidates by using a language model based on speech-act information, co-occurrence information between Eojeols, and the syntactic information of each Eojeol. In the experiment, the proposed model showed a correct-sentence generation rate of 86.2%, better than that of a previous conventional model based on a simple language model.
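
A minimal sketch of the generate-then-select pipeline; the candidate retrieval and the language-model scorer are stubs, since the real model uses speech-act, co-occurrence, and syntactic features not reproduced here.

```python
corpus = ["오늘 날씨 가 좋다", "오늘 기분 이 좋다", "내일 날씨 는 흐리다"]

def candidates(keyword):
    """Collect corpus sentences containing an Eojeol that includes the keyword."""
    return [s for s in corpus if any(keyword in e for e in s.split())]

def lm_score(sentence, speech_act):
    # Stub: the real model scores with speech-act, co-occurrence,
    # and syntactic features; here shorter candidates simply win.
    return -len(sentence.split())

def generate(keyword, speech_act):
    cands = candidates(keyword)
    return max(cands, key=lambda s: lm_score(s, speech_act)) if cands else None

print(generate("날씨", "statement"))  # "오늘 날씨 가 좋다"
```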

A Study on Image Generation from Sentence Embedding Applying Self-Attention (Self-Attention을 적용한 문장 임베딩으로부터 이미지 생성 연구)

  • Yu, Kyungho;No, Juhyeon;Hong, Taekeun;Kim, Hyeong-Ju;Kim, Pankoo
    • Smart Media Journal / v.10 no.1 / pp.63-69 / 2021
  • When a person reads and understands a sentence, they do so by recalling the main words of the sentence as images; text-to-image generation is what allows computers to perform this associative process. Previous deep learning-based text-to-image models extract text features using a Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) network or a bi-directional LSTM and generate an image by feeding those features to a GAN. These models use basic embeddings for text feature extraction, and training takes a long time because images are generated through several modules. Therefore, in this research, we propose a method of extracting sentence-embedding features with the attention mechanism, which has improved performance in the natural language processing field, and generating an image by feeding the extracted features into a GAN. In the experiments, the inception score was higher than that of the model used in the previous study, and visual inspection showed that the generated images express the features of the input sentence well, even when a long sentence is input.
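
A minimal sketch of self-attention pooling over token vectors to form a sentence embedding of the kind fed to the GAN; the single-head scaled dot-product formulation and the dimensions are assumptions.

```python
import numpy as np

def self_attention_embedding(token_vecs):
    """Single-head scaled dot-product self-attention, mean-pooled
    into one sentence vector."""
    d = token_vecs.shape[1]
    scores = token_vecs @ token_vecs.T / np.sqrt(d)        # (T, T)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)          # row-wise softmax
    attended = weights @ token_vecs                        # (T, d)
    return attended.mean(axis=0)                           # (d,) sentence vec

tokens = np.random.default_rng(2).normal(size=(7, 32))  # 7 tokens, dim 32
print(self_attention_embedding(tokens).shape)  # (32,)
```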