• Title/Summary/Keyword: language-generation

Search results: 754

Generational Differences in Ethnicity Maintenance of Korean-Chinese Ethnic Minority

  • Cho, Bok-Hee;Lee, Joo-Yeon
    • International Journal of Human Ecology / v.8 no.1 / pp.95-107 / 2007
  • The present study examined generational differences in ethnicity maintenance among Korean-Chinese to understand the impact of recent social change on a Korean-Chinese ethnic community in China. A total of 1,355 Korean-Chinese (557 parents and 798 children) living in Shenyang, China, participated in this study. The subjects were asked about their language use during daily conversations and cultural activities, as well as about their ethnic identity and perceptions of social distance from Chinese people. The results reveal, first, that the Korean-Chinese parent generation is more likely to maintain its ethnic language, while the child generation is more likely to maintain its ethnic culture. Second, more parents than children considered themselves 'Korean-Chinese' rather than 'Chinese'. Third, members of the child generation show less social distance from Chinese people than does the parent generation. These results show a strong tendency towards ethnicity maintenance among Korean-Chinese as well as recent changes in the community. This study argues for the importance of school education and the school environment in maintaining the ethnic language and culture of Korean-Chinese children.

Best Practice on Automatic Toon Image Creation from JSON File of Message Sequence Diagram via Natural Language based Requirement Specifications

  • Hyuntae Kim;Ji Hoon Kong;Hyun Seung Son;R. Young Chul Kim
    • International journal of advanced smart convergence / v.13 no.1 / pp.99-107 / 2024
  • In AI image generation tools, most general users must craft effective prompts, queries, or statements to elicit the desired response (image, result) from the AI model. We, however, are software engineers who focus on the software process. At the early stage of the process, we use informal and formal requirement specifications, and here we adapt the natural language approach to requirement engineering and toon engineering. Most generative AI tools do not produce the same image for the same query, because the same data asset is not used for the same query. To solve this problem, we intend to use informal requirement engineering and linguistics to create a toon. We therefore propose a sequence diagram and image generation mechanism that analyzes and applies key objects and attributes as an informal natural-language requirement analysis. Morphemes and semantic roles are identified by analyzing the natural language with linguistic methods. Based on the analysis results, a sequence diagram is generated, and an image is generated from the diagram. We expect consistent image generation that reuses the same image element assets through the proposed mechanism.
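
The mechanism described in this abstract maps informal natural-language requirements to a sequence diagram whose elements reuse fixed image assets. As a minimal illustrative sketch (not the authors' implementation), the Python below stands in for the morpheme/semantic-role analysis with a naive pattern that extracts (sender, message, receiver) roles, then emits a JSON message-sequence structure whose participants reference a hypothetical fixed asset library, so the same input always resolves to the same assets.

```python
import json
import re

ASSET_LIBRARY = {          # hypothetical fixed asset mapping
    "customer": "asset_customer_01",
    "system": "asset_system_01",
    "clerk": "asset_clerk_01",
}

def extract_roles(sentence: str):
    """Very rough stand-in for morpheme/semantic-role analysis:
    expects sentences shaped like '<sender> sends <message> to <receiver>'."""
    m = re.match(r"(\w+) sends (.+) to (\w+)", sentence.strip(), re.IGNORECASE)
    if not m:
        return None
    sender, message, receiver = m.groups()
    return sender.lower(), message, receiver.lower()

def build_sequence_json(requirements):
    """Collect participants and messages into a sequence-diagram JSON."""
    messages, participants = [], set()
    for sent in requirements:
        roles = extract_roles(sent)
        if roles is None:
            continue
        sender, message, receiver = roles
        participants.update([sender, receiver])
        messages.append({"from": sender, "to": receiver, "label": message})
    return {
        "participants": [
            {"name": p, "asset": ASSET_LIBRARY.get(p, "asset_default")}
            for p in sorted(participants)
        ],
        "messages": messages,
    }

if __name__ == "__main__":
    reqs = ["Customer sends order request to System",
            "System sends confirmation to Customer"]
    print(json.dumps(build_sequence_json(reqs), indent=2))
```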

Guided Sequence Generation using Trie-based Dictionary for ASR Error Correction (음성 인식 오류 수정을 위한 Trie 기반 사전을 이용한 Guided Sequence Generation)

  • Choi, Junhwi;Ryu, Seonghan;Yu, Hwanjo;Lee, Gary Geunbae
    • Annual Conference on Human and Language Technology / 2016.10a / pp.211-216 / 2016
  • Even though many current speech recognizers achieve generally high accuracy, speech recognition errors still occur frequently. Because these errors cause many malfunctions in related applications, they need to be corrected. In this paper, we propose guided sequence generation using a trie-based dictionary. The proposed model encodes a target word and its context and generates the word by decoding it character by character. To generate correct words, the generation step is guided by a trie-based dictionary. For the experiments, the model was trained on a corpus built by simply simulating speech recognition errors on an English TV-guide domain corpus, and its performance was evaluated on a parallel corpus of speech recognition sentences and their corrected results in the same domain. Guided generation reduced errors by about 14.9% compared to unguided generation.
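
A minimal sketch of the trie-guided decoding idea, assuming a toy scoring function in place of the encoder-decoder described in the abstract: the dictionary is compiled into a character trie, and at each decoding step only characters that continue a valid dictionary word may be emitted.

```python
END = "<eow>"  # marker for a complete word in the trie

def build_trie(words):
    """Compile the dictionary into a nested-dict character trie."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node[END] = True
    return root

def guided_decode(score_fn, trie, max_len=20):
    """Greedy character decoding constrained by the trie."""
    node, output = trie, []
    for _ in range(max_len):
        allowed = [ch for ch in node if ch != END]
        if not allowed:
            break
        # choose the allowed character the model scores highest
        best = max(allowed, key=lambda ch: score_fn("".join(output), ch))
        output.append(best)
        node = node[best]
        if END in node:          # a complete dictionary word was reached
            break
    return "".join(output)

if __name__ == "__main__":
    trie = build_trie(["channel", "change", "charge"])
    # toy scorer: prefer characters that also appear in the noisy ASR output
    noisy = "chanel"
    score = lambda prefix, ch: 1.0 if ch in noisy else 0.0
    print(guided_decode(score, trie))   # emits a valid dictionary word
```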

A Study on the Form-Language in Product Design -Focus on the Example of the Study from Industrial Designer- (제품디자인의 조형언어에 대한 연구 -산업디자이너의 연구사례를 중심으로-)

  • 정충모;이재용
    • Archives of design research / v.16 no.2 / pp.243-254 / 2003
  • This research begins by analyzing the types of research examples produced by designers through the linguistic study of product form. In particular, we intended to generate a linguistic concept of product form from the linguistic relations between tools and thinking, and analyzed how form language plays its roles in linguistic and non-linguistic areas. Through the roles that form language plays in the design process, we examined its relations to idea generation, the shared means of interpreting form, and the generation of form concepts. We presented the research examples of Enzo Mari and the origins of form language from the viewpoint of design history. We also classified the form-language concerns of designers and of scholars interested in design by dividing the various factors from the viewpoint of language. Finally, through this classification, we argue that research on form language calls for further study of detailed design practice and the individualized form language of products, rather than systematic surveys and abstract theories about products. Emphasizing this viewpoint, we suggest an ongoing research theme: the individualized differences in products' form language and the concerns of an inter-cultural viewpoint, in which the concept generation of products' form language can coexist.

Cross-Lingual Style-Based Title Generation Using Multiple Adapters (다중 어댑터를 이용한 교차 언어 및 스타일 기반의 제목 생성)

  • Yo-Han Park;Yong-Seok Choi;Kong Joo Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.341-354 / 2023
  • The title of a document is a brief summary of the document. Readers can understand a document easily if we provide its title in their preferred style and language. In this research, we propose a cross-lingual and style-based title generation model using multiple adapters. Training the model requires a parallel corpus in several languages with different styles, which is quite difficult to construct; however, a monolingual title generation corpus of the same style can be built easily. Therefore, we apply a zero-shot strategy to generate a title in a different language and with a different style for an input document. The baseline model is a Transformer consisting of an encoder and a decoder, pre-trained on several languages. The model is then equipped with multiple adapters for translation, languages, and styles. After the model learns a translation task from a parallel corpus, it learns a title generation task from a monolingual title generation corpus. When training the model on a task, we activate only the adapter that corresponds to that task. When generating a cross-lingual and style-based title, we activate only the adapters that correspond to the target language and the target style. Experimental results show that our proposed model is only as good as a pipeline model that first translates into the target language and then generates a title. Natural language generation has changed significantly with the emergence of large-scale language models, but research on improving its performance with limited resources and limited data needs to continue; this study seeks to explore the significance of such research.
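
A rough sketch of the adapter-switching idea described in this abstract (not the paper's code), assuming PyTorch and made-up adapter names: each layer holds a small bottleneck adapter per language and per style, and only the adapters matching the target language and style are activated during training or generation.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, hidden_size=256, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))   # residual connection

class AdapterBank(nn.Module):
    """Holds named adapters (per language / per style) and applies only the
    ones that are currently active."""
    def __init__(self, names, hidden_size=256):
        super().__init__()
        self.adapters = nn.ModuleDict({n: Adapter(hidden_size) for n in names})
        self.active = []

    def activate(self, *names):
        self.active = list(names)

    def forward(self, x):
        for name in self.active:
            x = self.adapters[name](x)
        return x

if __name__ == "__main__":
    bank = AdapterBank(["lang_ko", "lang_en", "style_news", "style_casual"])
    hidden = torch.randn(2, 10, 256)          # (batch, seq_len, hidden)
    bank.activate("lang_en", "style_news")    # target language and style only
    print(bank(hidden).shape)                 # torch.Size([2, 10, 256])
```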

Generation of Class MetaData Based on XMI (XMI기반 클래스의 메타데이터생성)

  • Lee, Sang-Sik;Choi, Han-Yong
    • The Journal of the Korea Contents Association / v.9 no.12 / pp.572-581 / 2009
  • This study of classes using the XMI meta model and XML metadata differs significantly from the data creation methods that are widely used. Most XML systems focus on editor functions, database connections, and the generation of markup languages. Unlike them, this study focuses on generating a markup language for class metadata extracted from the XMI data model. In addition, the attributes of unit elements within a class and the relationships between the classes within the model are given and expressed, respectively. For markup generation, an XML schema is used to declare the detailed data types.
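
As a hedged illustration of the generation step, assuming a simplified XMI-like fragment and a made-up metadata element layout (the paper's actual XMI profile and schema are not given here), the sketch below reads class and attribute elements and emits class-metadata markup.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified XMI-like model fragment for illustration only.
XMI_SAMPLE = """
<Model>
  <Class name="Order">
    <Attribute name="orderId" type="int"/>
    <Attribute name="total" type="decimal"/>
  </Class>
  <Class name="Customer">
    <Attribute name="name" type="string"/>
  </Class>
</Model>
"""

def classes_to_metadata(xmi_text: str) -> str:
    """Extract class and attribute elements and emit metadata markup."""
    model = ET.fromstring(xmi_text)
    meta = ET.Element("ClassMetaData")
    for cls in model.findall("Class"):
        c = ET.SubElement(meta, "Class", name=cls.get("name"))
        for attr in cls.findall("Attribute"):
            ET.SubElement(c, "Attribute",
                          name=attr.get("name"), type=attr.get("type"))
    return ET.tostring(meta, encoding="unicode")

if __name__ == "__main__":
    print(classes_to_metadata(XMI_SAMPLE))
```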

Instruction Tuning for Controlled Text Generation in Korean Language Model (Instruction Tuning을 통한 한국어 언어 모델 문장 생성 제어)

  • Jinhee Jang;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.289-294 / 2023
  • Large language models have achieved high performance in contextual understanding based on vast data and parameters, but controlling sentence generation for human alignment remains an active research challenge. In this paper, we conduct controlled sentence generation experiments through instruction tuning. Using natural language processing tools, we automatically build an instruction dataset containing single or multiple constraints, fine-tune the Korean language model Polyglot-Ko on it, and verify whether the model's generations satisfy the constraints. The experimental results show an average accuracy of 0.88 over four constraints, confirming that effective controlled sentence generation is possible.
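
A minimal sketch of the constraint-construction and verification idea, assuming hypothetical constraint types ("must_include", "max_words") rather than the four constraints actually used in the paper: instruction prompts carry explicit constraints, and generated sentences are checked automatically to compute a constraint-satisfaction accuracy.

```python
def make_instruction(topic, constraints):
    """Build a single instruction prompt with one or more constraints."""
    lines = [f"Write one sentence about {topic}."]
    lines += [f"- Constraint: {c['description']}" for c in constraints]
    return "\n".join(lines)

def satisfies(text, constraint):
    """Check one generated sentence against one constraint."""
    kind = constraint["type"]
    if kind == "must_include":
        return constraint["value"] in text
    if kind == "max_words":
        return len(text.split()) <= constraint["value"]
    return False

def constraint_accuracy(outputs, constraint_sets):
    """Fraction of (output, constraint) pairs that are satisfied."""
    checks = [satisfies(out, c)
              for out, cs in zip(outputs, constraint_sets) for c in cs]
    return sum(checks) / len(checks)

if __name__ == "__main__":
    constraints = [
        [{"type": "must_include", "value": "coffee",
          "description": "include the word 'coffee'"},
         {"type": "max_words", "value": 12,
          "description": "use at most 12 words"}],
    ]
    print(make_instruction("morning routines", constraints[0]))
    outputs = ["I always start the day with a cup of coffee."]
    print(constraint_accuracy(outputs, constraints))   # 1.0
```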

Sign Language Generation with Animation by Adverbial Phrase Analysis (부사어를 활용한 수화 애니메이션 생성)

  • Kim, Sang-Ha;Park, Jong-C.
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.27-32 / 2008
  • Sign languages, commonly used in aurally challenged communities, are a kind of visual language that expresses sign words with motion. The spatiality and motility of a sign language are conveyed mainly via sign words as predicates. A predicate is modified by an adverbial phrase, with an accompanying change in its semantics, so the adverbial phrase can also affect the overall spatiality and motility of sign language expressions. In this paper, we analyze the semantic features of adverbial phrases that may affect the motion-related semantics of a predicate when converting Korean expressions into a sign language, and we propose a system that generates the corresponding animation by utilizing these features.
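
A hedged sketch of the adverbial-modification idea, assuming made-up base motions and modifier values rather than the paper's actual semantic features: an adverbial phrase maps to motion-feature modifiers that adjust the animation parameters of the predicate's base sign motion.

```python
BASE_MOTIONS = {            # hypothetical base parameters for sign predicates
    "walk": {"speed": 1.0, "amplitude": 1.0, "repetitions": 2},
    "rain": {"speed": 1.0, "amplitude": 1.0, "repetitions": 3},
}

ADVERB_MODIFIERS = {        # hypothetical semantic features of adverbials
    "quickly": {"speed": 1.8},
    "slowly": {"speed": 0.5},
    "heavily": {"amplitude": 1.5, "repetitions": 2},
    "a little": {"amplitude": 0.6},
}

def generate_motion(predicate, adverbial=None):
    """Apply the adverbial's motion features to the predicate's base motion."""
    params = dict(BASE_MOTIONS[predicate])
    if adverbial in ADVERB_MODIFIERS:
        for feature, factor in ADVERB_MODIFIERS[adverbial].items():
            if feature == "repetitions":
                params[feature] += factor        # additive repetition count
            else:
                params[feature] *= factor        # scale speed / amplitude
    return params

if __name__ == "__main__":
    print(generate_motion("rain", "heavily"))
    # {'speed': 1.0, 'amplitude': 1.5, 'repetitions': 5}
```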
