• Title/Abstract/Keywords: Neural language generation

28 search results

Subword Neural Language Generation with Unlikelihood Training

  • Iqbal, Salahuddin Muhammad; Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication, Vol. 12, No. 2, pp. 45-50, 2020
  • A language model based on neural networks is commonly trained with a likelihood loss so that the model learns the sequences of human text. State-of-the-art results have been achieved in various language generation tasks, e.g., text summarization, dialogue response generation, and text generation, by utilizing the language model's next-token output probabilities. Monotonous and repetitive outputs are a well-known problem of such models, yet only a few solutions have been proposed to address it. Several decoding techniques have been proposed to suppress repetitive tokens. Unlikelihood training approaches the problem by penalizing the probabilities of candidate tokens that have already been seen in previous steps. While the method successfully produces less repetitive output, it has a large memory footprint because training requires a big vocabulary. We effectively reduce the memory footprint by encoding words as sequences of subword units. Finally, we report results competitive with token-level unlikelihood training on several automatic evaluations, compared to the previous work.
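For illustration, here is a minimal PyTorch sketch of the token-level unlikelihood objective the abstract refers to, which penalizes probability mass assigned to tokens already generated; the paper applies the same idea over subword units to shrink the vocabulary. The function, shapes, and toy sizes below are assumptions for this example, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, alpha=1.0):
    """logits: (T, V) per-step scores; targets: (T,) gold token ids."""
    log_probs = F.log_softmax(logits, dim=-1)
    mle = F.nll_loss(log_probs, targets)          # standard likelihood term

    probs = log_probs.exp()
    T, _ = logits.shape
    ul = logits.new_zeros(())
    for t in range(1, T):                         # no penalty at the first step
        prev = targets[:t].unique()               # tokens already generated
        prev = prev[prev != targets[t]]           # never penalize the gold token
        if prev.numel():
            # push down probability mass assigned to repeated candidates
            ul = ul - torch.log(torch.clamp(1.0 - probs[t, prev], min=1e-6)).mean()
    return mle + alpha * ul / T

logits = torch.randn(8, 500, requires_grad=True)  # toy: 8 steps, 500 subwords
targets = torch.randint(0, 500, (8,))
loss = unlikelihood_loss(logits, targets)
loss.backward()
```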

Image Caption Generation using Recurrent Neural Network

  • Lee, Changki
    • Journal of KIISE, Vol. 43, No. 8, pp. 878-882, 2016
  • Automatically generating captions that describe the content of an image is a very challenging task, requiring both image recognition and natural language processing, but it is an important technology with applications such as early-childhood education, image retrieval, and navigation for the blind. In this paper, we propose a Recurrent Neural Network (RNN) model optimized for image caption generation that takes image information encoded by a Convolutional Neural Network (CNN) as input, and we show experimentally that the proposed model achieves higher performance than previous work on the Flickr 8K, Flickr 30K, and MS COCO datasets.
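A minimal sketch of the CNN-to-RNN coupling described above, assuming the CNN feature is pre-extracted and used to initialize the recurrent decoder; the GRU choice, layer sizes, and names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Toy RNN caption decoder conditioned on a CNN image encoding."""
    def __init__(self, vocab=1000, embed=128, hidden=256, img_feat=512):
        super().__init__()
        self.img2h = nn.Linear(img_feat, hidden)  # image feature -> initial hidden state
        self.embed = nn.Embedding(vocab, embed)
        self.rnn = nn.GRU(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, img_feat, captions):
        h0 = torch.tanh(self.img2h(img_feat)).unsqueeze(0)  # (1, B, hidden)
        x = self.embed(captions)                            # (B, T, embed)
        y, _ = self.rnn(x, h0)
        return self.out(y)                                  # next-token scores

# toy usage: a batch of 2 pre-extracted CNN features and caption prefixes
model = CaptionDecoder()
scores = model(torch.randn(2, 512), torch.randint(0, 1000, (2, 7)))
print(scores.shape)  # torch.Size([2, 7, 1000])
```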

Test Generation for Combinational Logic Circuits Using Neural Networks

  • Kim, Young-Woo; Lim, In-Chil
    • Journal of the Institute of Electronics Engineers of Korea A, Vol. 30A, No. 9, pp. 71-79, 1993
  • This paper proposes a new test pattern generation methodology for combinational logic circuits using neural networks based on a modular structure. The CUT (Circuit Under Test) is described in our gate-level hardware description language. By referring to a neural-model database, the CUT is compiled into an ATPG (Automatic Test Pattern Generation) neural network. Each logic gate in the CUT is represented as a discrete Hopfield network; such a network is called a gate module in this paper. All the gate modules for a CUT form an ATPG neural network, connected through message-passing paths by which the states of modules are transferred to their adjacent modules. A fault is injected by setting the activation values of some neurons to given values and by invalidating connections between some gate modules. A test pattern for an injected fault is obtained when all gate modules in the ATPG neural network are stabilized through evolution and mutual interactions. The proposed methodology handles test generation, known to be NP-complete, efficiently through its massive parallelism. Results on combinational logic circuits confirm the feasibility of the proposed methodology.
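As a toy illustration of the gate-module idea, the sketch below encodes a two-input AND gate as a 0/1 Hopfield-style energy function whose minima are exactly the consistent gate assignments; clamping the output to 1 (as one would to expose a stuck-at-0 fault there) and relaxing the free neurons settles on the test pattern a = b = 1. The energy coefficients and helper names are assumptions for this example, not the paper's formulation.

```python
import random

def and_energy(a, b, o):
    # zero for the four consistent AND assignments, positive otherwise
    return a*b - 2*a*o - 2*b*o + 3*o

def relax(state, clamped, energy, steps=100):
    """Asynchronous Hopfield-style relaxation over 0/1 neurons:
    flip a free neuron whenever the flip does not raise the energy."""
    free = [n for n in state if n not in clamped]
    for _ in range(steps):
        n = random.choice(free)
        flipped = dict(state, **{n: 1 - state[n]})
        if energy(**flipped) <= energy(**state):
            state = flipped
    return state

random.seed(0)
# To detect a stuck-at-0 fault on o, the fault-free circuit must drive o = 1:
# clamp o and let the inputs settle into a consistent assignment.
test = relax({"a": 0, "b": 0, "o": 1}, clamped={"o"}, energy=and_energy)
print(test)  # {'a': 1, 'b': 1, 'o': 1} -> test pattern a = b = 1
```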


Study on Decoding Strategies in Neural Machine Translation

  • Seo, Jaehyung; Park, Chanjun; Eo, Sugyeong; Moon, Hyeonseok; Lim, Heuiseok
    • Journal of the Korea Convergence Society, Vol. 12, No. 11, pp. 69-80, 2021
  • As neural machine translation based on deep learning models has become mainstream, research and investment in models and language-pair data are actively pursued in search of the best performance. However, most recent neural machine translation studies leave the decoding strategy, which drives natural language generation to maximize translation quality, as future work, and lack diverse experiments and concrete analysis of it. In machine translation, the decoding strategy optimizes the search path while generating the translated sentence and can improve performance without changing the model or expanding the data. This paper compares and analyzes decoding strategies for sequence-to-sequence neural machine translation, from classical greedy decoding to the recent Dynamic Beam Allocation (DBA), and demonstrates their effects and significance.
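The contrast between the strategies discussed above can be shown with a toy sketch: greedy decoding commits to the locally best token, while beam search keeps several prefixes and can recover a globally higher-scoring sequence. The `step` function below is a stand-in scoring model invented for this example, not part of any real NMT system.

```python
import math

def greedy_decode(step, bos, eos, max_len=20):
    """step(prefix) -> {token: log_prob}; always extend with the locally best token."""
    seq = [bos]
    while seq[-1] != eos and len(seq) < max_len:
        probs = step(seq)
        seq.append(max(probs, key=probs.get))
    return seq

def beam_decode(step, bos, eos, beam=4, max_len=20):
    """Keep the `beam` highest-scoring prefixes instead of a single one."""
    hyps = [([bos], 0.0)]
    for _ in range(max_len):
        cand = []
        for seq, score in hyps:
            if seq[-1] == eos:
                cand.append((seq, score))      # finished hypotheses carry over
                continue
            for tok, lp in step(seq).items():
                cand.append((seq + [tok], score + lp))
        hyps = sorted(cand, key=lambda h: h[1], reverse=True)[:beam]
        if all(s[-1] == eos for s, _ in hyps):
            break
    return hyps[0][0]

# toy model: slightly prefers to continue over stopping at every step
def step(prefix):
    return {prefix[-1] + 1: math.log(0.6), "EOS": math.log(0.4)}

print(greedy_decode(step, 0, "EOS"))  # greedy never stops until max_len
print(beam_decode(step, 0, "EOS"))    # beam finds the higher-probability short output
```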

Best Practice on Automatic Toon Image Creation from JSON File of Message Sequence Diagram via Natural Language based Requirement Specifications

  • Hyuntae Kim; Ji Hoon Kong; Hyun Seung Son; R. Young Chul Kim
    • International Journal of Advanced Smart Convergence, Vol. 13, No. 1, pp. 99-107, 2024
  • With AI image generation tools, most users must craft an effective prompt, i.e., queries or statements that elicit the desired response (image, result) from the AI model. But we are software engineers who focus on software processes, and at the early stage of the process we use informal and formal requirement specifications. We therefore adapt the natural language approach to requirement engineering and toon engineering. Most generative AI tools do not produce the same image for the same query, because the same data asset is not used for the same query. To solve this problem, we use informal requirement engineering and linguistics to create a toon. We propose a sequence-diagram and image-generation mechanism that analyzes key objects and attributes in an informal natural-language requirement analysis, identifying morphemes and semantic roles through linguistic methods. Based on the analysis results, a sequence diagram is generated, and an image is generated from the diagram. We expect consistent image generation that uses the same image-element asset through the proposed mechanism.
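A hypothetical sketch of the mechanism's final stage, assuming a simple JSON layout for the message sequence diagram (the field names are invented for this example, not the paper's schema): each message is turned into a deterministic panel prompt so the same JSON asset always yields the same query, and hence a consistent toon.

```python
import json

# Hypothetical JSON distilled from requirement sentences; illustrative schema.
msd = json.loads("""
{
  "actors": ["Customer", "Kiosk", "Kitchen"],
  "messages": [
    {"from": "Customer", "to": "Kiosk",    "action": "place order"},
    {"from": "Kiosk",    "to": "Kitchen",  "action": "forward order"},
    {"from": "Kitchen",  "to": "Customer", "action": "serve meal"}
  ]
}
""")

def msd_to_prompts(msd):
    """Turn each message into a deterministic panel prompt: same asset in,
    same query out, so the generated toon panels stay consistent."""
    return [
        f"Panel {i}: {m['from']} -> {m['to']}: {m['action']}"
        for i, m in enumerate(msd["messages"], 1)
    ]

for prompt in msd_to_prompts(msd):
    print(prompt)
```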

Prosodic Contour Generation for Korean Text-To-Speech System Using Artificial Neural Networks

  • Lim, Un-Cheon
    • The Journal of the Acoustical Society of Korea, Vol. 28, No. 2E, pp. 43-50, 2009
  • To obtain more natural synthetic speech from a Korean TTS (Text-To-Speech) system, we have to know all the possible prosodic rules of spoken Korean. These rules should be derived from linguistic and phonetic information or from real speech. In general, all of these rules are integrated into the prosody-generation algorithm of a TTS system, but such an algorithm cannot cover all the possible prosodic rules of a language, so the naturalness of the synthesized speech is not as good as we expect. ANNs (Artificial Neural Networks) can instead be trained to learn the prosodic rules of spoken Korean. To train and test the ANNs, we prepared the prosodic patterns of all the phonemic segments in a prosodic corpus. The corpus contains meaningful sentences designed to cover all the possible prosodic rules; the sentences were made by picking series of words from a list of PB (Phonetically Balanced) isolated words, then read by speakers, recorded, and collected as a speech database. By analyzing the recorded speech, we extract the prosodic pattern of each phoneme and assign the patterns as target and test patterns for the ANNs. The ANNs learn prosody from natural speech and, given the phoneme string of a sentence as the input stimulus, generate the prosodic pattern of the central phonemic segment as their output response.
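A minimal sketch of the ANN setup the abstract describes, assuming a one-hot phoneme window as input and a small prosodic target vector (F0, duration, energy) for the central phoneme; the sizes and feature choices are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

N_PHONES, WINDOW, N_TARGETS = 50, 5, 3  # assumed toy sizes

net = nn.Sequential(
    nn.Linear(N_PHONES * WINDOW, 64),
    nn.Sigmoid(),
    nn.Linear(64, N_TARGETS),           # predicted prosodic pattern
)

def encode(window_ids):
    """One-hot encode a phoneme window centred on the segment of interest."""
    x = torch.zeros(WINDOW, N_PHONES)
    x[torch.arange(WINDOW), torch.tensor(window_ids)] = 1.0
    return x.flatten()

x = encode([3, 17, 9, 22, 4]).unsqueeze(0)
target = torch.tensor([[120.0, 0.08, 0.7]])  # toy F0 (Hz), duration (s), energy
loss = nn.functional.mse_loss(net(x), target)
loss.backward()                              # one supervised training step's gradient
```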

LSTM Language Model Based Korean Sentence Generation

  • Kim, Yanghoon; Hwang, Yongkeun; Kang, Taekwan; Jung, Kyomin
    • The Journal of Korean Institute of Communications and Information Sciences, Vol. 41, No. 5, pp. 592-601, 2016
  • Recurrent neural networks are deep learning models suited to sequential or variable-length data. LSTM resolves the vanishing-gradient problem that arises in recurrent neural networks and can therefore maintain long-term dependencies between sequence elements. In this paper, we build an LSTM-based language model and propose a method that, given an incomplete Korean sentence as input, predicts the following words and generates a complete sentence. To evaluate the proposed method, we trained the model on several Korean corpora and then ran experiments generating the missing parts of Korean sentences. The results confirm that the proposed language model can generate natural Korean sentences, and that a model whose minimal sentence unit is the eojeol (word phrase) outperforms the other models at sentence generation.
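A minimal sketch of the completion procedure, assuming an (untrained) LSTM language model that is fed the incomplete sentence and then sampled token by token until an end-of-sentence symbol; the token ids stand in for whatever unit (e.g., eojeol) the model is built on.

```python
import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    def __init__(self, vocab, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def complete(self, prefix_ids, eos_id, max_new=10):
        """Feed the incomplete sentence, then sample the continuation."""
        seq = list(prefix_ids)
        state = None
        x = self.embed(torch.tensor([seq]))          # encode the whole prefix first
        for _ in range(max_new):
            y, state = self.lstm(x, state)
            probs = torch.softmax(self.out(y[:, -1]), dim=-1)
            nxt = torch.multinomial(probs, 1).item()  # sample the next unit
            seq.append(nxt)
            if nxt == eos_id:
                break
            x = self.embed(torch.tensor([[nxt]]))     # feed only the new token back
        return seq

model = LSTMLM(vocab=1000)
print(model.complete([5, 42, 7], eos_id=2))  # untrained, so the tail is random
```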

Contextual Modeling in Context-Aware Conversation Systems

  • Quoc-Dai Luong Tran; Dinh-Hong Vu; Anh-Cuong Le; Ashwin Ittoo
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 5, pp. 1396-1412, 2023
  • Conversation modeling is an important and challenging task in natural language processing because it is a key component in the development of automated human-machine conversation. Most recent research on conversation modeling focuses only on the current utterance (considered the current question) to generate a response, and thus fails to capture the conversation's logic from its beginning. Some studies concatenate the current question with previous conversation sentences and use the result as input for response generation. Another approach is to use an encoder to store all previous utterances; each time a new question is encountered, the encoder is updated and used to generate the response. Our approach differs from previous studies in that we explicitly separate the encoding of the question from the encoding of its context. This yields different encoding models for the question and the context, capturing the specificity of each, and gives us access to the entire context when generating the response. To this end, we propose a deep neural network-based model, called the Context Model, that encodes previous utterances' information and combines it with the current question. This satisfies the need for context information while keeping the roles of the current question and its context separate during response generation. We investigate two approaches for representing the context: long short-term memory and convolutional neural networks. Experiments show that our Context Model outperforms a baseline model on both the ConvAI2 dataset and a collected dataset of conversational English.
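A minimal sketch of the separation the paper argues for, with one encoder for the current question and another for the accumulated context, fused before response decoding; the LSTM choice, layer sizes, and fusion scheme here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ContextModel(nn.Module):
    def __init__(self, vocab=5000, embed=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.q_enc = nn.LSTM(embed, hidden, batch_first=True)  # question encoder
        self.c_enc = nn.LSTM(embed, hidden, batch_first=True)  # context encoder
        self.fuse = nn.Linear(2 * hidden, hidden)              # combine both codes

    def forward(self, question_ids, context_ids):
        _, (q, _) = self.q_enc(self.embed(question_ids))
        _, (c, _) = self.c_enc(self.embed(context_ids))
        # separate encodings, fused only at the end
        return torch.tanh(self.fuse(torch.cat([q[-1], c[-1]], dim=-1)))

model = ContextModel()
q = torch.randint(0, 5000, (1, 8))   # current utterance
c = torch.randint(0, 5000, (1, 40))  # all previous utterances, concatenated
print(model(q, c).shape)             # torch.Size([1, 200]); would seed the decoder
```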

Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok; Park, Jeon Gue
    • ETRI Journal, Vol. 39, No. 4, pp. 455-466, 2017
  • This study focuses on a method for sequential data augmentation to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation issues; it has the ability to generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain based seq2seq model. For training the seq2seq model, sentence chains are used as triples: the first two sentences in a triple are fed to the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Using only internal resources, evaluation results show a relative perplexity improvement of approximately 7.6% over a baseline language model for Korean text. Additionally, compared with a previous study, the sentence-chain approach reduces the size of the training data by 38.4% while generating 1.4 times the number of n-grams, with superior performance for English text.
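A sketch of the triple construction described above, assuming chains are formed from consecutive sentences (the paper's chain-building procedure may differ): the first two sentences of each triple become the encoder input and the third becomes the decoder target.

```python
def make_triples(sentences):
    """Slide a window of three sentences: (s1 + s2) -> s3 training pairs."""
    pairs = []
    for s1, s2, s3 in zip(sentences, sentences[1:], sentences[2:]):
        pairs.append((f"{s1} {s2}", s3))
    return pairs

corpus = [
    "the model reads two sentences .",
    "it then predicts a third one .",
    "generated sentences expand the corpus .",
    "the language model is retrained on the union .",
]
for src, tgt in make_triples(corpus):
    print("ENC:", src)
    print("DEC:", tgt)
```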

Design of a Deep Neural Network Model for Image Caption Generation

  • Kim, Dongha; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering, Vol. 6, No. 4, pp. 203-210, 2017
  • This paper presents a deep neural network model effective for image caption generation and model transfer. The model is a multimodal recurrent neural network composed of five layers, including a convolutional neural network layer that extracts visual features from the image, an embedding layer that converts each word into a low-dimensional feature, a recurrent neural network layer that learns the structure of caption sentences, and a multimodal layer that combines visual and linguistic information. In particular, the recurrent layer is built from LSTM units, which excel at sequence pattern learning and model transfer, and the output of the convolutional layer is connected not only to the initial state of the recurrent layer but also to the input of the multimodal layer, so that the visual information of the image is available at every recurrent step of caption generation. Comparative experiments on public datasets such as Flickr8k, Flickr30k, and MSCOCO confirm the high performance of the proposed multimodal recurrent neural network model in terms of caption accuracy and model-transfer effectiveness.
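A toy sketch of the connection pattern the abstract highlights, in which the CNN feature both initializes the LSTM state and is re-injected into the multimodal layer at every step; the dimensions and fusion details are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultimodalRNN(nn.Module):
    """Toy version of the design: the CNN feature initializes the LSTM
    and is also fed to the multimodal layer at every decoding step."""
    def __init__(self, vocab=1000, embed=128, hidden=256, img=512):
        super().__init__()
        self.img2h = nn.Linear(img, hidden)       # image -> initial LSTM state
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.multimodal = nn.Linear(hidden + img, hidden)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, img_feat, captions):
        B, T = captions.shape
        h0 = torch.tanh(self.img2h(img_feat)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        y, _ = self.lstm(self.embed(captions), (h0, c0))
        img_rep = img_feat.unsqueeze(1).expand(B, T, -1)  # visual input each step
        m = torch.tanh(self.multimodal(torch.cat([y, img_rep], dim=-1)))
        return self.out(m)

model = MultimodalRNN()
scores = model(torch.randn(2, 512), torch.randint(0, 1000, (2, 9)))
print(scores.shape)  # torch.Size([2, 9, 1000])
```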