• Title/Summary/Keyword: Sentence Generation

Search results: 106

Multi-Topic Meeting Summarization using Lexical Co-occurrence Frequency and Distribution (어휘의 동시 발생 빈도와 분포를 이용한 다중 주제 회의록 요약)

  • Lee, Byung-Soo; Lee, Jee-Hyong
    • Proceedings of the Korean Society of Computer Information Conference / 2015.07a / pp.13-16 / 2015
  • In this paper, we propose a meeting-minutes summarization method that uses lexical co-occurrence frequency and distribution. Unlike ordinary documents, meeting minutes contain several detailed topics as well as ill-formed sentences and irrelevant chatter, and these characteristics must be considered during summarization. Existing general-purpose summarization methods extract the most important sentences from the whole document under a single topic and are therefore unsuitable for multi-topic meeting minutes. The proposed method first segments the minutes using lexical co-occurrence frequency. Next, for each block delimited by topic, it builds a set of key words and extracts the important sentences. Finally, it produces the summary by considering the positions and dependency relations of the extracted sentences. Experiments on the AMI meeting corpus show that the proposed method outperforms baseline summarizers both in evaluation by summary ratio and in per-topic evaluation of the summaries.

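The segmentation step can be illustrated with a minimal sketch: measure lexical overlap between adjacent sentences and place a topic boundary where it drops. This is an illustrative approximation, not the authors' implementation; the threshold `depth` and the sentence-level granularity are assumptions:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def segment(sentences, depth=0.1):
    """Place a topic boundary wherever lexical co-occurrence between
    adjacent sentences drops below the threshold."""
    bags = [Counter(s.lower().split()) for s in sentences]
    sims = [cosine(bags[i], bags[i + 1]) for i in range(len(bags) - 1)]
    # boundary after sentence i when similarity falls below the threshold
    return [i + 1 for i, s in enumerate(sims) if s < depth]

sents = [
    "the budget report is due friday",
    "we must cut the budget by ten percent",
    "lunch will be pizza today",
    "pizza orders go to the kitchen",
]
print(segment(sents, depth=0.1))  # → [2], a boundary between the budget and pizza topics
```

Each block delimited this way would then feed the key-word extraction and sentence-selection stages the abstract describes.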

Sentence generation on sequential multi-modal data using random hypergraph model (랜덤 하이퍼그래프 모델을 이용한 순차적 멀티모달 데이터에서의 문장 생성)

  • Yoon, Woong-Chang; Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference / 2010.06c / pp.376-379 / 2010
  • A number of studies have shown that using multimodal data improves human learning and memory compared with a single modality. In this paper, we investigate, by simulating sequential human information processing and generation on a machine, whether the same effect appears in machine learning. To this end, we propose representing the interactions of sequential multimodal data as combinations of hyperedges in a weighted random hypergraph model. To examine the validity of this proposal, we attempted sentence generation from video data: given the image and sentence of the previous scene, the model generates the next sentence, and we obtained meaningful results without rote memorization or hand-written rules. Experiments with text-only versus text-image cues showed the effect of multimodality over a single modality, and experiments with cues from one previous step versus cues from one and two previous steps showed the effect of the temporal depth of sequential data. These results provide a clue to how multimodality affects machine learning spatio-temporally and how the temporal accumulation of sequential data manifests itself.

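The hyperedge idea can be made concrete with a toy, text-only sketch (the paper also attaches image-patch cues, omitted here); `build_hypernetwork`, `predict_word`, and their parameters are hypothetical names for illustration:

```python
import random
from collections import defaultdict

def build_hypernetwork(sentences, k=2, n_edges=100, seed=0):
    """Sample weighted hyperedges linking k cue words of one sentence
    to a word of the following sentence. A toy, text-only stand-in for
    the paper's weighted random hypergraph over multimodal data."""
    rng = random.Random(seed)
    edges = defaultdict(int)
    for prev, nxt in zip(sentences, sentences[1:]):
        prev_words, next_words = prev.split(), nxt.split()
        for _ in range(n_edges):
            cue = tuple(sorted(rng.sample(prev_words, min(k, len(prev_words)))))
            edges[(cue, rng.choice(next_words))] += 1  # repeated draws accumulate weight
    return edges

def predict_word(edges, query):
    """Score candidate next words by the total weight of hyperedges
    whose cue words all occur in the query sentence."""
    q = set(query.split())
    scores = defaultdict(int)
    for (cue, word), weight in edges.items():
        if set(cue) <= q:
            scores[word] += weight
    return max(scores, key=scores.get) if scores else None
```

Given the previous scene's sentence as a query, the highest-weighted matching hyperedges vote for the next word; the paper pairs such textual cues with visual ones.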

An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna; Gafur M, Abdul; Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1778-1799 / 2022
  • Automatic text summarization condenses a large document into a shorter text that retains its significant information. Malayalam, spoken mainly in Kerala and in Lakshadweep, is one of the most morphologically complex languages of India. Natural language processing for Malayalam remains relatively underdeveloped because of the complexity of the language and the scarcity of available resources. In this paper, a method is proposed for summarizing Malayalam documents by training a model based on the Support Vector Machine classification algorithm. Different features of the text are taken into account when training the machine so that the system can extract the most important content from the input text. The classifier assigns sentences to separate classes: most important, important, average, and least significant; based on this, the machine creates a summary of the input document. The user can select a compression ratio, and the system outputs that fraction of the text as the summary. Model performance is measured on Malayalam documents of different genres as well as on documents from the same domain, and is evaluated with the content-based measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is trustworthy and more relevant than the other summarizers compared.
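The classification step might look like the following scikit-learn sketch. The three features and toy labels are illustrative assumptions, and the paper's four importance classes are collapsed to two here:

```python
from sklearn.svm import SVC
import numpy as np

# Toy sentence features: [position in document (0-1), relative length,
# mean term frequency of its words]. Labels: 1 = important, 0 = not.
# (Assumed feature set; the paper uses a richer one and four classes.)
X_train = np.array([
    [0.0, 0.9, 0.8], [0.1, 0.8, 0.7], [0.9, 0.2, 0.1],
    [0.8, 0.3, 0.2], [0.05, 0.7, 0.9], [0.95, 0.1, 0.15],
])
y_train = np.array([1, 1, 0, 0, 1, 0])

clf = SVC(kernel="linear").fit(X_train, y_train)

def summarize(sent_feats, sents, ratio=0.5):
    """Keep the user-chosen fraction of sentences, preferring those
    the classifier scores as important."""
    scores = clf.decision_function(sent_feats)
    k = max(1, int(len(sents) * ratio))
    keep = sorted(sorted(range(len(sents)), key=lambda i: -scores[i])[:k])
    return [sents[i] for i in keep]
```

The `ratio` argument mirrors the user-selected compression ratio described in the abstract.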

A Sentence Generation System for Multiple Choice Test with Automatic Control of Difficulty Degree (난이도 자동제어가 구현된 객관식 문항 생성 시스템)

  • Kim, Young-Bum; Kim, Yu-Seop
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.1404-1407 / 2007
  • In this paper, we propose a system that automatically generates multiple-choice test items according to difficulty, so that items can be presented in diverse and dynamic forms appropriate to a learner's level. Given a sentence, the system extracts keywords through morphological analysis and, for each keyword, suggests semantically similar candidate words based on the hierarchical structure of WordNet. When presenting candidates, a word-similarity measure over WordNet lets the user adjust the difficulty of the generated item to the desired level. Semantic similarity ranges from level 0 (synonyms) to level 9 (almost no similarity), and adjusting it controls the overall difficulty of the item. To measure the similarity of candidate words, we implemented two methods: the first considers only the distance between the two words in WordNet, while the second additionally considers the relative weight the two words occupy in WordNet. This allows item writers to produce items with more varied content and difficulty from existing questions more easily, reducing the cost of test authoring.
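The distance-only similarity measure (the first of the two methods) can be sketched on a toy taxonomy standing in for WordNet; the entries and the `distractors` helper are hypothetical:

```python
# Toy hypernym taxonomy standing in for WordNet (hypothetical entries);
# distractor difficulty falls as path distance to the keyword grows.
HYPERNYM = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal",
}

def path_to_root(word):
    path = [word]
    while path[-1] in HYPERNYM:
        path.append(HYPERNYM[path[-1]])
    return path

def path_distance(a, b):
    """Edges between a and b through their lowest common hypernym."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, node in enumerate(pa):
        if node in pb:
            return i + pb.index(node)
    return None  # no common ancestor

def distractors(keyword, candidates, level):
    """Harder items use candidates close to the keyword (small level),
    easier items use distant ones, mirroring the paper's levels 0-9."""
    return [c for c in candidates if path_distance(keyword, c) == level]

print(path_distance("dog", "cat"))  # → 4 (dog→canine→mammal←feline←cat)
print(distractors("dog", ["wolf", "cat", "sparrow"], level=2))  # → ['wolf']
```

The second method described in the abstract would additionally weight each word by its depth or frequency in the hierarchy.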

Study on the realization of pause groups and breath groups (휴지 단위와 호흡 단위의 실현 양상 연구)

  • Yoo, Doyoung; Shin, Jiyoung
    • Phonetics and Speech Sciences / v.12 no.1 / pp.19-31 / 2020
  • The purpose of this study is to observe how pause groups and breath groups are realized by adult speakers, and to examine how gender, generation, and task affect this realization. For this purpose, we analyzed forty-eight male and female speakers, divided by generation into two groups: young and old. Task and gender affected the realization of both pause groups and breath groups. Pause groups were longer in read speech than in spontaneous speech, and longer in female speech; breath groups, on the other hand, were longer in spontaneous speech and in male speech. In spontaneous speech, which requires planning, speakers produced shorter pause groups, while the shorter breath groups in read speech reflect the short sentences of the reading material. The gender difference results from different pause patterns: within breath groups, male speakers produced longer pauses than female speakers did, which may be due to differences in lung capacity. Generation affected neither the pause groups nor the breath groups; it influenced only the number of syllables and eojeols, which can be interpreted as a result of the difference in speech rate between generations.

Development of Block-based Code Generation and Recommendation Model Using Natural Language Processing Model (자연어 처리 모델을 활용한 블록 코드 생성 및 추천 모델 개발)

  • Jeon, In-seong; Song, Ki-Sang
    • Journal of The Korean Association of Information Education / v.26 no.3 / pp.197-207 / 2022
  • In this paper, we develop a machine learning-based block code generation and recommendation model intended to reduce learners' cognitive load during coding education. Using a natural language processing model with fine-tuning, the model learns the block code a learner has assembled in a block programming environment and then generates and recommends candidate blocks for the next step. To develop the model, a training dataset was produced by pre-processing 50 block programs from 'Entry', a popular block programming web site. After dividing the pre-processed blocks into training, validation, and test sets, we built block code generation models based on LSTM, Seq2Seq, and GPT-2. In the performance evaluation, GPT-2 outperformed the LSTM and Seq2Seq models on the BLEU and ROUGE scores, which measure sentence similarity. For the data generated by the GPT-2 model, BLEU and ROUGE scores were relatively similar across block counts, except when the number of blocks was 1 or 17.
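The BLEU score used to compare generated block sequences can be sketched in a few lines. This is a simplified version of the metric (modified n-gram precision with a brevity penalty, no smoothing), not the paper's evaluation code, and the block names are invented:

```python
from collections import Counter
import math

def bleu(candidate, reference, max_n=2):
    """Modified n-gram precision BLEU with a brevity penalty
    (unsmoothed sketch), over sequences of block identifiers."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(1, sum(cand.values())))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

gen = ["when_clicked", "move_steps", "turn_right", "repeat"]
ref = ["when_clicked", "move_steps", "repeat", "turn_right"]
print(round(bleu(gen, ref), 3))  # → 0.577
```

A generated block sequence is thus rewarded for matching both the blocks and their local order in the learner's actual next steps.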

Metamodeling Construction for Generating Test Case via Decision Table Based on Korean Requirement Specifications (한글 요구사항 기반 결정 테이블로부터 테스트 케이스 생성을 위한 메타모델링 구축화)

  • Woo Sung Jang; So Young Moon; R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.381-386 / 2023
  • Most existing research on test case generation extracts test cases from models, but in practice, generating test cases from natural language requirements is also needed. This requires combining natural language analysis with requirements engineering. Analyzing requirements written in Korean is difficult, however, because sentence expressions can carry diverse meanings. As one method of generating test cases from Korean natural language requirements, we study a pipeline consisting of requirement definition analysis, a C3Tree model, a cause-effect graph, and a decision table. As an intermediate step, this paper generates test cases from C3Tree model-based decision tables using meta-modeling. This approach has the advantage that the model-to-model and model-to-text transformation processes can be maintained easily by modifying only the transformation rules: if an existing model is modified or a new model is added, only the rules need to change, not the program algorithm. In the evaluation, all combinations in the decision table were automatically generated as test cases.
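The final step, expanding a decision table into test cases, can be sketched with `itertools.product`; the login table and expected-action rule below are hypothetical examples, not taken from the paper:

```python
from itertools import product

def generate_test_cases(conditions, expected):
    """Enumerate every condition combination of a decision table and
    attach the expected action, so each combination becomes a test case."""
    names = list(conditions)
    cases = []
    for values in product(*(conditions[n] for n in names)):
        combo = dict(zip(names, values))
        cases.append({"inputs": combo, "expected": expected(combo)})
    return cases

# Hypothetical table for a login requirement written in natural language:
table = {"id_valid": [True, False], "pw_valid": [True, False]}
cases = generate_test_cases(table, lambda c: "grant" if all(c.values()) else "deny")
print(len(cases))  # → 4 combinations, 4 test cases
```

In the paper's pipeline, the conditions and actions would come from the cause-effect graph derived from the Korean requirements rather than being written by hand.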

Natural Photography Generation with Text Guidance from Spherical Panorama Image (360 영상으로부터 텍스트 정보를 이용한 자연스러운 사진 생성)

  • Kim, Beomseok; Jung, Jinwoong; Hong, Eunbin; Cho, Sunghyun; Lee, Seungyong
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.65-75 / 2017
  • Since a 360-degree image carries information in all directions, it often contains too much information. Moreover, to investigate a 360-degree image on a 2D display, a user has to either click and drag the image with a mouse or project it to a 2D panorama image, which inevitably introduces severe distortions. In consequence, inspecting a 360-degree image and finding an object of interest in it can be a tedious task. To resolve this issue, this paper proposes a method that finds a region of interest in a given 360-degree image and produces a natural-looking 2D image that best matches a description given by the user as a natural language sentence. Our method also considers photo composition so that the resulting image is aesthetically pleasing. The method first converts the 360-degree image to a 2D cubemap. As objects in a 360-degree image may appear distorted or split into multiple pieces in a typical cubemap, causing detection of such objects to fail, we introduce a modified cubemap. Our method then applies a Long Short-Term Memory (LSTM) network-based object detection method to find the region of interest matching the given sentence. Finally, it produces an image that contains the detected region and has aesthetically pleasing composition.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but because they are probabilistic models based on the frequency of each unit in the training set, they cannot model correlations between input units efficiently. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We construct the language model using three or four LSTM layers. Each model was trained with stochastic gradient descent as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend.
After pre-processing, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, each paired with the following 21st character as output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in the proportion 70:15:15. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better and under some conditions were even worse. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for the processing of Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
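The dataset construction described above (20-character inputs predicting the 21st character) can be sketched as follows; the sample text is an English stand-in for the Korean phoneme sequences used in the paper:

```python
def build_dataset(text, window=20):
    """Slice a corpus into (input, target) pairs: `window` consecutive
    characters predict the following one, mirroring the paper's setup of
    20-character inputs and a 21st-character output."""
    vocab = sorted(set(text))
    index = {ch: i for i, ch in enumerate(vocab)}
    X, y = [], []
    for i in range(len(text) - window):
        X.append([index[ch] for ch in text[i:i + window]])
        y.append(index[text[i + window]])
    return X, y, vocab

text = "in the beginning god created the heaven and the earth"
X, y, vocab = build_dataset(text, window=20)
print(len(X), len(vocab))  # number of training pairs and vocabulary size
```

In the paper, such index sequences feed a 3- or 4-layer LSTM whose softmax output over the character vocabulary gives the next-character distribution.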

Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos (멀티모달 개념계층모델을 이용한 만화비디오 컨텐츠 학습을 통한 등장인물 기반 비디오 자막 생성)

  • Kim, Kyung-Min; Ha, Jung-Woo; Lee, Beom-Jin; Zhang, Byoung-Tak
    • Journal of KIISE / v.42 no.4 / pp.451-458
    • 2015
  • Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), which is a content modeling method that uses a cartoon video dataset and a character-based subtitle generation method from the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of the words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution of the words and the image patches. The model can learn the characteristics of the characters as concepts from the video subtitles and scene images by using a Bayesian learning method and can also generate character-based subtitles from the learned model if text queries are provided. As an experiment, the MuCH model learned concepts from 'Pororo' cartoon videos with a total of 268 minutes in length and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The Experimental results indicate that given the same text query, our model generates more accurate and more character-specific subtitles than other models.