
Multi-level Morphology and Morphological Analysis Model for Korean (다층 형태론과 한국어 형태소 분석 모델)

  • Kang, Seung-Shik
    • Annual Conference on Human and Language Technology / 1994.11a / pp.140-145 / 1994
  • Morphological analysis is the process of separating unit morphemes, restoring the base forms of morphemes that have undergone transformation, and finding, from the separated unit morphemes, sequences of morphemes that conform to word-formation rules. Because these analysis steps are strongly independent and yet closely interrelated, the two-level model handles not only morphological transformation but also morpheme separation with integrated rules. However, when the two-level model is applied to Korean, morpheme separation and morphological transformation are intertwined, and inefficiencies appear when analyzing word types related to the characteristics of an agglutinative language. This paper therefore proposes multi-level morphology and a multi-level model as a methodology suited to solving the problems that arise in morphological analysis of Korean, an agglutinative language.

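
The staged pipeline described in the abstract (segment morphemes, restore base forms, check word-formation rules) can be sketched as follows. This is a minimal sketch with a toy romanized lexicon and restoration table, which are illustrative assumptions; a real analyzer uses large dictionaries and phonological rules.

```python
# Stage 1: segment a word into candidate morphemes.
# Stage 2: restore base forms of morphemes altered by sound change.
# Stage 3: keep only sequences allowed by word-formation rules.

# Hypothetical toy lexicon and rules (romanized for illustration).
LEXICON = {"mek": "V", "ess": "EP", "ta": "EF"}   # stem / tense / final ending
RESTORE = {"mekk": "mek"}                          # surface form -> base form

def segment(word):
    """Greedy longest-match segmentation against the lexicon."""
    morphs, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            base = RESTORE.get(word[i:j], word[i:j])
            if base in LEXICON:
                morphs.append(base)
                i = j
                break
        else:
            return None  # unanalyzable
    return morphs

def analyze(word):
    morphs = segment(word)
    if morphs is None:
        return None
    tags = [LEXICON[m] for m in morphs]
    # Word-formation rule: verb stem first, final ending last.
    if tags[0] == "V" and tags[-1] == "EF":
        return list(zip(morphs, tags))
    return None

print(analyze("mekessta"))   # stem + tense + ending
print(analyze("mekkessta"))  # same, via base-form restoration
```

Keeping segmentation, restoration, and word-formation checking in separate stages, rather than one integrated rule set, is the core of the multi-level argument.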

Challenging a Single-Factor Analysis of Case Drop in Korean

  • Chung, Eun Seon
    • Language and Information / v.19 no.1 / pp.1-18 / 2015
  • Korean marks case for subjects and objects, but it is well known that case markers can be dropped in certain contexts. Kwon and Zribi-Hertz (2008) base the phenomenon of Korean case drop on a single factor of f(ocus)-structure visibility and claim that both subject and object case drop fall under a single linguistic generalization of information structure. However, the supporting data are not empirically substantiated, and the tenability of the f-structure analysis is still in question. In this paper, an experiment was conducted to show that the specific claims of Kwon and Zribi-Hertz's analysis, which places exclusive importance on information structure, cannot be adequately supported by empirical evidence. In addition, the present study examines H. Lee's (2006a, 2006c) multi-factor analysis of object case drop and investigates whether this approach can subsume both subject and object case drop under a unified analysis. The present findings indicate that the multi-factor analysis involving the interaction of independent factors (Focus, Animacy, and Definiteness) is also compatible with subject case drop, and that judgments on case drop are not categorical but form gradient statistical preferences.

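
The "gradient, not categorical" claim amounts to a probabilistic model in which independent factors jointly shift the preference for dropping a case marker. A minimal sketch, with hand-picked weights that are illustrative assumptions (the study estimates such effects from judgment data rather than fixing them by hand):

```python
import math

# Each factor disfavoring case drop carries a negative weight;
# the logistic function turns the summed score into a preference.
WEIGHTS = {"focus": -1.2, "animate": -0.8, "definite": -0.5}
BIAS = 1.0  # baseline tendency to drop the marker

def drop_probability(features):
    """Logistic combination of independent factors (Focus, Animacy, Definiteness)."""
    score = BIAS + sum(WEIGHTS[f] for f in features)
    return 1.0 / (1.0 + math.exp(-score))

# Gradient preferences: each added factor lowers, but never
# categorically forbids, the probability of dropping the marker.
print(round(drop_probability([]), 2))
print(round(drop_probability(["focus", "animate", "definite"]), 2))
```

The point of the multi-factor view is exactly this shape: no single feature flips the judgment on its own; the factors interact additively to produce a statistical preference.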

Personalized Multi-Turn Chatbot Based on Dual WGAN (Dual WGAN 기반 페르소나 Multi-Turn 챗봇)

  • Oh, Shinhyeok;Kim, JinTae;Kim, Harksoo;Lee, Jeong-Eom;Kim, Seona;Park, Youngmin;Noh, Myungho
    • Annual Conference on Human and Language Technology / 2019.10a / pp.49-53 / 2019
  • A chatbot is a system in which a person and a computer exchange dialogue in natural language. As chatbot research has become more active, there is growing interest in chatbots that reflect the personal characteristics a user wants, rather than merely producing mechanical responses. Previous work reflected a single form of persona information in the model using a single vector. However, a persona cannot be defined in a single form, so research is needed on reflecting persona information in chatbot models in various forms. This paper therefore proposes a method, built on a state-of-the-art generation-based multi-turn chatbot system, that lets the chatbot reflect personas in various forms.

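
One simple way to reflect several persona facts at once, rather than a single fixed vector, is to attend over per-fact embeddings with the dialogue context. The sketch below is a toy illustration of that idea only; the vectors and the fusion are assumptions, not the paper's Dual WGAN model.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def fuse_personas(context_vec, persona_vecs):
    """Attention-weighted mix of persona vectors, conditioned on context."""
    weights = softmax([dot(context_vec, p) for p in persona_vecs])
    dim = len(context_vec)
    return [sum(w * p[i] for w, p in zip(weights, persona_vecs))
            for i in range(dim)]

context = [1.0, 0.0]                 # toy context embedding
personas = [[1.0, 0.0], [0.0, 1.0]]  # two persona facts as toy vectors
fused = fuse_personas(context, personas)
print([round(x, 2) for x in fused])
```

Because the weights depend on the context, different turns of the dialogue can foreground different persona facts, which is the flexibility a single static persona vector lacks.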

Pretraining Dense retrieval for Multi-hop question answering of Korean (한국어 다중추론 질의응답을 위한 Dense Retrieval 사전학습)

  • Kang, Dong-Chan;Na, Seung-Hoon;Kim, Tae-Hyeong;Choi, Yun-Su;Chang, Du-Seong
    • Annual Conference on Human and Language Technology / 2021.10a / pp.588-591 / 2021
  • The multi-hop question answering task aims to answer questions that require complex reasoning, going beyond single-hop QA, which needs only a single document. While the retrieval model plays an important role in IRQA, multi-hop QA retrieval models based on the widely noted Dense Retrieval approach are hard to find. This paper proposes a pretraining method for multi-hop reasoning with Dense Retrieval models, which have shown strong performance in retrieval, and validates the method by comparing its performance against previous methods on a related Korean dataset. The results show that a multi-hop Dense Retrieval model can be trained without using a knowledge base, entity linking, named-entity recognition, or any other submodules.

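
The core of dense retrieval is embedding questions and passages into a shared vector space and ranking passages by inner product, hop by hop. A minimal sketch with toy embeddings (the vectors are assumptions; real models learn them, e.g. via the pretraining scheme the paper proposes):

```python
def score(q, p):
    """Inner-product relevance between a question and a passage embedding."""
    return sum(a * b for a, b in zip(q, p))

def retrieve(question_vec, passage_vecs, hops=2):
    """Pick one passage per hop, removing each selection before the next hop.
    A real multi-hop retriever would also fold the evidence from each hop
    back into the query vector; that step is omitted here for brevity."""
    remaining = dict(passage_vecs)
    chain = []
    for _ in range(hops):
        if not remaining:
            break
        best = max(remaining, key=lambda pid: score(question_vec, remaining[pid]))
        chain.append(best)
        del remaining[best]
    return chain

passages = {"p1": [0.9, 0.1], "p2": [0.2, 0.8], "p3": [0.5, 0.5]}
print(retrieve([1.0, 0.2], passages))  # the two highest-scoring passages, in order
```

Because everything reduces to embedding and inner products, no knowledge base, entity linking, or NER module is needed at retrieval time, which is the practical point the abstract makes.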

Chinese Multi-domain Task-oriented Dialogue System based on Paddle (Paddle 기반의 중국어 Multi-domain Task-oriented 대화 시스템)

  • Deng, Yuchen;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.308-310 / 2022
  • With the rise of the AI wave, task-oriented dialogue systems have become a popular research direction in academia and industry. Currently, task-oriented dialogue systems mainly adopt a pipelined form, which includes natural language understanding, dialogue state decision making, dialogue state tracking, and natural language generation. However, pipelining is prone to error propagation, so many task-oriented dialogue systems on the market handle only single-turn dialogues. Single-domain dialogues usually achieve relatively accurate semantic understanding, while systems tend to perform poorly on multi-domain, multi-turn dialogue datasets. To solve these issues, we developed a Paddle-based multi-domain task-oriented Chinese dialogue system. It is based on the NEZHA-base pre-training model and the CrossWOZ dataset, and it uses an intention recognition module, a binary slot recognition module, and an NER module to perform DST and generate replies based on rules. Experiments show that the dialogue system not only makes good use of context but also effectively handles long-term dependencies. In our approach, dialogue state tracking (DST) is improved: our DST can identify the multiple slot key-value pairs involved in an utterance, which eliminates the need for manual tagging and thus greatly saves manpower.
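
The DST step described above amounts to merging the slot-value pairs recognized in each turn into a running dialogue state. A minimal sketch, in which the slot names and the keyword table are illustrative assumptions standing in for the intent/slot/NER modules:

```python
# Toy recognizer: maps surface keywords to (slot, value) pairs.
KEYWORDS = {
    "cheap": ("price", "low"),
    "downtown": ("area", "center"),
    "hotel": ("domain", "hotel"),
}

def extract_slots(utterance):
    """Stand-in for the intent/slot/NER modules: simple keyword matching."""
    return [KEYWORDS[w] for w in utterance.lower().split() if w in KEYWORDS]

def track(dialogue):
    """Merge each turn's slot-value pairs into a running state."""
    state = {}
    for utterance in dialogue:
        for slot, value in extract_slots(utterance):
            state[slot] = value  # later turns overwrite earlier values
    return state

turns = ["I need a cheap hotel", "somewhere downtown please"]
print(track(turns))
```

Because one utterance can yield several slot-value pairs at once, the state accumulates across turns without any per-turn manual annotation, which is what the abstract means by saving manpower.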

Dialog-based multi-item recommendation using automatic evaluation

  • Euisok Chung;Hyun Woo Kim;Byunghyun Yoo;Ran Han;Jeongmin Yang;Hwa Jeon Song
    • ETRI Journal / v.46 no.2 / pp.277-289 / 2024
  • In this paper, we describe a neural network-based application that recommends multiple items from dialogue-context input while simultaneously outputting a response sentence. We instantiate multi-item recommendation as a set of clothing recommendations, which requires a multimodal fusion approach that can process both clothing-related text and images. We also examine how a pretrained language model can meet the requirements of the downstream models, and we propose gate-based multimodal fusion and multiprompt learning on top of a pretrained language model. In particular, we propose an automatic evaluation technique to address the one-to-many mapping problem of multi-item recommendation. A Korean fashion-domain multimodal dataset is constructed and tested, and various experimental settings are verified using the automatic evaluation method. The results show that our proposed method can produce confidence scores for multi-item recommendation results, which differs from traditional accuracy evaluation.
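
Gate-based fusion mixes the two modalities element-wise through a sigmoid gate computed from both inputs. A minimal sketch with toy weights (the weights and dimensions are assumptions; in the paper they are learned end to end):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(text_vec, image_vec, gate_weights, gate_bias):
    """fused_i = g_i * text_i + (1 - g_i) * image_i,
    where g_i is a sigmoid gate over both modalities."""
    fused = []
    for i, (t, v) in enumerate(zip(text_vec, image_vec)):
        g = sigmoid(gate_weights[i][0] * t + gate_weights[i][1] * v + gate_bias[i])
        fused.append(g * t + (1 - g) * v)
    return fused

text = [1.0, -1.0]          # toy text feature
image = [0.0, 1.0]          # toy image feature
W = [[2.0, 0.0], [0.0, 2.0]]  # toy per-dimension gate weights
b = [0.0, 0.0]
print([round(x, 3) for x in gated_fusion(text, image, W, b)])
```

The gate lets the model decide, per dimension, whether the text or the image signal should dominate, instead of fixing the mix by simple concatenation or averaging.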

A Harmony in Language and Music (언어와 음악의 상관관계 고찰을 위한 연구)

  • 이재강
    • Lingua Humanitatis / v.2 no.1 / pp.287-301 / 2002
  • In both music and language, sound plays its role by occupying fixed multi-spaces in one's consciousness. Music space differs from auditory space, whose aim is to perceive the positions and identities of outer things. While auditory space is grounded in interest in outer things, music space is grounded in indifference to them. We discuss the notion of space because it is where symbols reside. Categorical perception, as in phonemic restoration, describes a listener's ability to use his own intelligence to recognize and fill in missing points; musical perception, by contrast, can be explained as a positive regression that avoids colloquial logic and the danger of segmentation in the course of an infant's auditory experience and phonation acquisition. On the question of whether listening to language sounds differs from listening to other sounds, one view holds that the auditory mechanism processes language sounds the same way as other types of sound, but other theories claim that the brain processes the former differently from the latter. The function of music has not been discovered as clearly as that of language; music carries far more meanings in comparison with language.


A study on Implementation of English Sentence Generator using Lexical Functions (언어함수를 이용한 영문 생성기의 구현에 관한 연구)

  • 정희연;김희연;이웅재
    • Journal of Internet Computing and Services / v.1 no.2 / pp.49-59 / 2000
  • The majority of work done to date on natural language processing has focused on the analysis and understanding of language; thus natural language generation has received relatively less attention than understanding, and people even tend to regard natural language generation as a simple reverse process of language understanding. However, the need for natural language generation is growing rapidly as application systems, especially multi-language machine translation systems on the web, natural language interface systems, and natural language query systems, need to generate more complex messages. In this paper, we propose an algorithm to generate more flexible and natural sentences using the lexical functions of Igor Mel'čuk (Mel'čuk & Zholkovsky, 1988) and systemic grammar.


A Secure Multiagent Engine Based on Public Key Infrastructure (공개키 기반 구조 기반의 보안 다중 에이전트 엔진)

  • 장혜진
    • Journal of the Korea Academia-Industrial cooperation Society / v.3 no.4 / pp.313-318 / 2002
  • The integration of agent technology and security technology is needed in many application areas, such as electronic commerce. This paper suggests a model of an extended multi-agent engine that supports privacy, integrity, authentication, and non-repudiation in agent communication. Each agent developed with the agent engine is composed of an agent engine layer and an agent application layer. We describe and use the concepts of self-to-self messages, a secure communication channel, and the distinction between KQML messages in the agent application layer and messages in the agent engine layer. The suggested agent engine provides an agent communication language extended to enable secure communication between agents without any modifications or restrictions to the content layer and message layer of KQML. In our multi-agent engine model, secure communication is also expressed and processed transparently in the agent communication language.

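
The engine-layer idea (securing a KQML message without touching its content or message layers) can be illustrated by wrapping the message in an envelope checked at the receiving engine. In this sketch the "signature" is a plain SHA-256 digest, an illustrative simplification: a real PKI deployment would sign the digest with the sender's private key and verify it against the certificate chain, which also provides authentication and non-repudiation.

```python
import hashlib
import json

def wrap(kqml_message, sender):
    """Engine layer: wrap a KQML message in an integrity-checked envelope.
    The KQML string itself is carried through untouched."""
    body = json.dumps({"from": sender, "kqml": kqml_message}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return {"body": body, "digest": digest}

def unwrap(envelope):
    """Receiving engine: verify the envelope, then hand the KQML upward."""
    body = envelope["body"]
    if hashlib.sha256(body.encode()).hexdigest() != envelope["digest"]:
        raise ValueError("integrity check failed")
    return json.loads(body)["kqml"]

msg = "(ask-one :content (price item42))"
env = wrap(msg, "agent-A")
print(unwrap(env) == msg)
```

The application layer above only ever sees KQML strings; all wrapping and checking happens transparently in the engine layer, which mirrors the layering the paper describes.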

Performance Evaluation of Pre-trained Language Models in Multi-Goal Conversational Recommender Systems (다중목표 대화형 추천시스템을 위한 사전 학습된 언어모델들에 대한 성능 평가)

  • Taeho Kim;Hyung-Jun Jang;Sang-Wook Kim
    • Smart Media Journal / v.12 no.6 / pp.35-40 / 2023
  • In this paper, we examine pre-trained language models used in Multi-Goal Conversational Recommender Systems (MG-CRS), comparing and analyzing the performance of various pre-trained language models. Specifically, we investigate the impact of model size on the performance of MG-CRS. The study targets three types of language models, BERT, GPT-2, and BART, and measures and compares their accuracy on two tasks, type prediction and topic prediction, using the MG-CRS dataset DuRecDial 2.0. Experimental results show that all models performed well on the type prediction task, but there were significant performance differences among the models and across model sizes on the topic prediction task. Based on these findings, the study provides directions for improving the performance of MG-CRS.
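
Both tasks above are classification tasks, so the comparison reduces to computing per-task accuracy for each model. A minimal sketch; the predictions and labels below are toy assumptions, not DuRecDial 2.0 results.

```python
def accuracy(preds, golds):
    """Fraction of predictions matching the gold labels."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

# Toy gold labels and model outputs for the two MG-CRS tasks.
gold = {"type": ["QA", "rec", "chat"], "topic": ["movie", "music", "food"]}
preds = {
    "BERT": {"type": ["QA", "rec", "chat"], "topic": ["movie", "food", "food"]},
    "BART": {"type": ["QA", "rec", "rec"],  "topic": ["movie", "music", "food"]},
}

for model, by_task in preds.items():
    for task in ("type", "topic"):
        print(model, task, round(accuracy(by_task[task], gold[task]), 2))
```

Laying the numbers out per model and per task is what makes the paper's observation visible: near-uniform scores on one task can coexist with large gaps on the other.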