• Title/Summary/Keyword: natural language generation

Search results: 134 (processing time: 0.029 s)

A Survey of Automatic Code Generation from Natural Language

  • Shin, Jiho;Nam, Jaechang
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 3
    • /
    • pp.537-555
    • /
    • 2021
  • Many researchers have carried out studies related to programming languages since the beginning of computer science. Besides programming with traditional programming languages (i.e., procedural, object-oriented, functional, etc.), a new paradigm of programming is emerging: programming with natural language. By programming with natural language, we expect to free our expressiveness, in contrast to programming languages with strong syntactic constraints. This paper surveys approaches that generate source code automatically from a natural language description. We also categorize the approaches by their forms of input and output. Finally, we analyze the current trend of approaches and suggest future directions for this research domain to improve automatic code generation with natural language. From the analysis, we state that researchers should work on customizing language models to the domain of source code and explore better representations of source code, such as embedding techniques and pre-trained models, which have proven to work well on natural language processing tasks.

A Frame-based Approach to Text Generation

  • Le, Huong Thanh
    • Korean Society for Language and Information: Conference Proceedings
    • /
    • 2007 Annual Conference of the Korean Society for Language and Information
    • /
    • pp.192-201
    • /
    • 2007
  • This paper is a study on constructing a natural language interface to databases, concentrating on generating textual answers. TGEN, a system that generates textual answers from query result tables, is presented. The TGEN architecture guarantees its portability across domains. The combination of a frame-based approach and natural language generation techniques in TGEN provides text fluency and flexibility. The implementation results show that this approach is feasible, while a deep NLG approach is still far from being reached.
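The frame-based generation idea above can be illustrated with a minimal sketch: a frame pairs a sentence template with slots filled from the columns of a query result table. The template, slot names, and data here are our own illustration, not TGEN's actual frames.

```python
# Minimal frame-based answer generation from a query result table.
# The frame's template and the column names are hypothetical.
FRAME = {
    "template": "{count} {entity}(s) matched: {items}.",
}

def generate_answer(entity, rows, column):
    """Fill the frame's slots from the result rows of a database query."""
    items = ", ".join(str(row[column]) for row in rows)
    return FRAME["template"].format(count=len(rows), entity=entity, items=items)

# A query result table with two rows.
rows = [{"name": "Hanoi"}, {"name": "Da Nang"}]
print(generate_answer("city", rows, "name"))
# 2 city(s) matched: Hanoi, Da Nang.
```

Because the frame is data rather than code, swapping in a different template ports the generator to a new domain, which is the portability property the abstract highlights.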


A Study on Implementation of English Sentence Generator Using Lexical Functions

  • 정희연;김희연;이웅재
    • Journal of Internet Computing and Services
    • /
    • Vol. 1, No. 2
    • /
    • pp.49-59
    • /
    • 2000
  • With the development of computers and the growth of Internet users, interest in natural language processing research has been increasing. However, most research has concentrated on natural language analysis and understanding, so natural language generation has received little attention; there is even a tendency to regard generation as simply the reverse process of analysis. As the need for natural language processing on the Web grows, including multilingual translation, natural language interfaces, and natural language retrieval systems, the need for natural language generation naturally grows with it, and developing more systematic generation systems requires research on more concrete generation algorithms. This paper proposes an algorithm for generating more natural English sentences and, in particular, discusses the implementation of an English sentence generator that produces clause-length explanatory text through lexical combination using the lexical functions (LFs) of Igor Mel'čuk (Mel'čuk & Zholkovsky, 1988).
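Lexical-function-based generation of the kind described above can be sketched as a lookup from (function, keyword) pairs to collocates, which the generator consults when realizing a clause. The tiny dictionary below is illustrative only; it is not the paper's lexicon.

```python
# Mel'čuk-style lexical functions as a collocation lookup table:
# Magn picks an intensifier, Oper1 picks the support (light) verb.
LEXICAL_FUNCTIONS = {
    ("Magn", "rain"): "heavy",
    ("Oper1", "decision"): "make",
    ("Oper1", "attention"): "pay",
}

def apply_lf(func, keyword):
    """Return the collocate selected by a lexical function, or None."""
    return LEXICAL_FUNCTIONS.get((func, keyword))

# Choose the support verb for "decision" when realizing a clause.
verb = apply_lf("Oper1", "decision")
print(f"They {verb} a decision.")  # They make a decision.
```

The point of LFs is that "make a decision" vs. "pay attention" is lexical knowledge, not grammar, so encoding it as data keeps the generation algorithm itself language-neutral.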


Subword Neural Language Generation with Unlikelihood Training

  • Iqbal, Salahuddin Muhammad;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 12, No. 2
    • /
    • pp.45-50
    • /
    • 2020
  • A language model with neural networks is commonly trained with a likelihood loss so that the model can learn sequences of human text. State-of-the-art results have been achieved in various language generation tasks, e.g., text summarization, dialogue response generation, and text generation, by utilizing the language model's next-token output probabilities. Monotonous and repetitive outputs are a well-known problem of such models, yet only a few solutions have been proposed to address it. Several decoding techniques have been proposed to suppress repetitive tokens. Unlikelihood training approaches this problem by penalizing the probabilities of candidate tokens that have already been seen in previous steps. While the method successfully produces less repetitive output, it has a large memory consumption because training requires a big vocabulary. We effectively reduce the memory footprint by encoding words as sequences of subword units. Finally, we report results competitive with token-level unlikelihood training in several automatic evaluations compared to the previous work.
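The unlikelihood penalty described above can be sketched in a few lines: on top of the usual negative log-likelihood of the target token, tokens already generated are treated as negative candidates and their probability mass is pushed down. This is a toy illustration over raw logits; function and variable names are ours, not from the paper's code.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def unlikelihood_loss(logits, target, prev_tokens):
    """NLL of the target plus -log(1 - p) for each previously seen token."""
    probs = softmax(logits)
    nll = -math.log(probs[target])           # standard likelihood term
    penalty = 0.0
    for tok in set(prev_tokens) - {target}:  # penalize would-be repeats
        penalty += -math.log(max(1e-9, 1.0 - probs[tok]))
    return nll + penalty

# Toy vocabulary of 4 subword units; unit 2 was generated earlier,
# so its probability mass is penalized at this step.
loss = unlikelihood_loss([2.0, 0.5, 1.5, 0.1], target=0, prev_tokens=[2])
print(loss)
```

Applying the same loss at the subword level, as the paper does, shrinks the vocabulary (and hence the softmax and penalty computation) relative to a word-level vocabulary.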

A Natural Language Retrieval System for Entertainment Data

  • 김정인
    • Journal of Korea Multimedia Society
    • /
    • Vol. 18, No. 1
    • /
    • pp.52-64
    • /
    • 2015
  • Recently, as the quality of life has been improving, search items in the area of entertainment represent an increasing share of the total usage of Internet portal sites. Information retrieval in the entertainment area mainly depends on keywords that users input, and the results of information retrieval are the contents that contain those keywords. In this paper, we propose a search method that takes natural language inputs and retrieves the database pertaining to entertainment. The main components of our study are a simple Korean morphological analyzer using case particle information, predicate-oriented token generation, standardized pattern generation coherent to tokens, and automatic generation of the corresponding SQL queries. We also propose an efficient retrieval system that searches the most relevant results from the database for natural language queries, especially in the restricted domain of music, and show the effectiveness of our system.
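The token-to-pattern-to-SQL pipeline above can be sketched as pattern templates matched against a simplified query string. The patterns, table, and column names below are hypothetical and stand in for the paper's Korean morphological analysis and token generation stages.

```python
import re

# (pattern over a normalized query, SQL template) pairs for a music domain.
PATTERNS = [
    (re.compile(r"songs by (?P<artist>\w+)"),
     "SELECT title FROM songs WHERE artist = '{artist}'"),
    (re.compile(r"albums released in (?P<year>\d{4})"),
     "SELECT name FROM albums WHERE year = {year}"),
]

def to_sql(query):
    """Map a natural-language query to SQL via the first matching pattern."""
    for pattern, template in PATTERNS:
        match = pattern.search(query.lower())
        if match:
            return template.format(**match.groupdict())
    return None

print(to_sql("Songs by IU"))
# SELECT title FROM songs WHERE artist = 'iu'
```

Restricting the domain (here, music) is what makes a small pattern inventory sufficient, which matches the paper's restricted-domain setting.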

Morpheme Conversion for Korean Text-to-Sign Language Translation System

  • 박수현;강석훈;권혁철
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 5, No. 3
    • /
    • pp.688-702
    • /
    • 1998
  • This paper proposes sign language morpheme generation rules corresponding to the morphological analysis rules for each Korean part of speech. Natural Korean Sign Language has an extremely limited vocabulary compared to spoken Korean, and its grammatical elements are used in a very restricted way. Therefore, to convert natural Korean sentences into the corresponding sign language, this paper defines a natural sign language grammar corresponding to Korean grammar. For each phrase, sign language morpheme generation rules must be defined separately from the Korean analysis grammar; these rules are applied together with the morpheme analysis/combination rules and the phrase structure analysis rules, and defining them enables the generation of the most natural sign language.
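Part-of-speech-driven morpheme conversion of this kind can be sketched as a rule table keyed on (morpheme, POS) pairs, where grammatical elements with no sign counterpart are dropped. The rule table and glosses below are invented for illustration and do not come from the paper.

```python
# (morpheme, POS) -> sign gloss; None drops the grammatical element.
CONVERSION_RULES = {
    ("학교", "Noun"): "SCHOOL",
    ("가", "Verb"): "GO",
    ("에", "Particle"): None,   # locative particle: no sign counterpart
    ("ㄴ다", "Ending"): None,   # declarative ending: dropped
}

def convert(morphemes):
    """Convert analyzed Korean morphemes to a sequence of sign glosses."""
    glosses = []
    for morpheme, pos in morphemes:
        gloss = CONVERSION_RULES.get((morpheme, pos))
        if gloss is not None:
            glosses.append(gloss)
    return glosses

# "학교에 간다" (goes to school) after morphological analysis:
print(convert([("학교", "Noun"), ("에", "Particle"), ("가", "Verb"), ("ㄴ다", "Ending")]))
# ['SCHOOL', 'GO']
```

Dropping particles and endings reflects the abstract's point that sign language uses far fewer grammatical elements than spoken Korean.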


Best Practice on Automatic Toon Image Creation from JSON File of Message Sequence Diagram via Natural Language based Requirement Specifications

  • Hyuntae Kim;Ji Hoon Kong;Hyun Seung Son;R. Young Chul Kim
    • International journal of advanced smart convergence
    • /
    • Vol. 13, No. 1
    • /
    • pp.99-107
    • /
    • 2024
  • In AI image generation tools, most general users must craft an effective prompt, i.e., queries or statements, to elicit the desired response (image, result) from the AI model. But we are software engineers who focus on software processes. At the early stage of the process, we use informal and formal requirement specifications. Here, we adapt the natural language approach to requirement engineering and toon engineering. Most generative AI tools do not produce the same image for the same query, because the same data asset is not used for the same query. To solve this problem, we intend to use informal requirement engineering and linguistics to create a toon. Therefore, we propose a sequence diagram and image generation mechanism that analyzes and applies key objects and attributes as an informal natural language requirement analysis. Morphemes and semantic roles are identified by analyzing natural language with linguistic methods. Based on the analysis results, a sequence diagram is generated, and an image is generated from the diagram. We expect consistent image generation using the same image element assets through the proposed mechanism.

Framework for evaluating code generation ability of large language models

  • Sangyeop Yeo;Yu-Seung Ma;Sang Cheol Kim;Hyungkook Jun;Taeho Kim
    • ETRI Journal
    • /
    • Vol. 46, No. 1
    • /
    • pp.106-117
    • /
    • 2024
  • Large language models (LLMs) have revolutionized various applications in natural language processing and exhibited proficiency in generating programming code. We propose a framework for evaluating the code generation ability of LLMs and introduce a new metric, pass-ratio@n, which captures the granularity of accuracy according to the pass rate of test cases. The framework is intended to be fully automatic to handle the repetitive work involved in generating prompts, conducting inferences, and executing the generated code. A preliminary evaluation focusing on prompt detail, problem publication date, and difficulty level demonstrates the successful integration of our framework with the LeetCode coding platform and highlights the applicability of the pass-ratio@n metric.
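A pass-ratio@n style metric can be sketched as follows: instead of the binary pass@n (did any of the n generated solutions pass all tests?), average each solution's fraction of passed test cases. This is our reading of the abstract; consult the paper for the exact definition.

```python
def pass_ratio_at_n(results):
    """results: n lists of booleans, one list per generated program,
    each boolean recording whether an individual test case passed.
    Returns the mean per-program test-case pass rate."""
    if not results:
        return 0.0
    ratios = [sum(r) / len(r) for r in results if r]
    return sum(ratios) / len(ratios)

# Three sampled programs: all tests pass, half pass, none pass.
score = pass_ratio_at_n([[True, True], [True, False], [False, False]])
print(score)  # 0.5
```

A binary pass@3 would score this example 1.0 (one solution passes everything); the ratio form distinguishes a near-miss solution from a completely wrong one, which is the granularity the abstract refers to.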

Development of Knowledge Code Converter for Design Knowledge Management

  • Nomaguchi, Yutaka;Shimomura, Yoshiki
    • International Journal of CAD/CAM
    • /
    • Vol. 5, No. 1
    • /
    • pp.83-90
    • /
    • 2005
  • This is a report on a new methodology to manage design knowledge by utilizing a knowledge-based CAD system and a prototype system named $C^3$ (Cubic; CAD knowledge Code Capacitor), which is being developed using our methodology. $C^3$ facilitates (i) the automatic generation of knowledge code for a knowledge-based CAD system by processing design documents written in a format close to natural language, such as English or Japanese, and (ii) the automatic generation of a design document written in a format close to natural language from the knowledge code. These features facilitate document-based design knowledge management, which reduces the designer's load to encode and maintain design knowledge, because it is easier for a designer to work with a natural language description than a coded description.

Construction of Korean Linguistic Information for the Korean Generation on KANT

  • 윤덕호
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 6, No. 12
    • /
    • pp.3539-3547
    • /
    • 1999
  • Korean linguistic information has been constructed for the generation engine of the KANT (Knowledge-based Accurate Natural language Translation) system. Since the KANT system has a language-neutral generation engine, constructing the Korean linguistic information effectively amounts to developing a Korean generation module. The constructed linguistic information consists of concept-specific Korean mapping rules, category-specific Korean mapping rules, Korean lexicon and template declarations, Korean grammar rules, Korean lexical types, Korean lexical rules, and Korean rewriting rules. Using this linguistic information, 106 of the 118 sentence-level interlingua representations prepared by the KANT system developers were generated as correct and complete Korean sentences.
