• Title/Summary/Keyword: Linguistic processing

Algorithmic approach for handling linguistic values (언어 값을 다루기 위한 알고리즘적인 접근법)

  • Choi Dae Young
    • The KIPS Transactions: Part B
    • /
    • v.12B no.2 s.98
    • /
    • pp.203-208
    • /
    • 2005
  • We propose an algorithmic approach for handling linguistic values defined in the same linguistic variable. Using the proposed approach, we can explicitly capture differences in individuals' subjectivity with respect to linguistic values defined in the same linguistic variable. The proposed approach can be employed as a useful tool for discovering hidden relationships among linguistic values defined in the same linguistic variable. Consequently, it provides a basis for improving the precision of knowledge acquisition in the development of fuzzy systems, including fuzzy expert systems, fuzzy decision trees, and fuzzy cognitive maps. In this paper, we apply the proposed approach to a collective linguistic assessment among multiple experts.
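
The abstract leaves the representation of linguistic values implicit. A minimal sketch of the usual setting, assuming (as an illustration, not the paper's method) that each expert defines a linguistic value such as "young" as a fuzzy membership function over the same linguistic variable, and that differences in subjectivity can be exposed by comparing the resulting fuzzy sets:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Two experts' subjective definitions of the same linguistic value "young"
# on the linguistic variable "age" (parameters are illustrative).
age = np.linspace(0, 100, 1001)
young_expert_1 = triangular(age, 0, 20, 40)
young_expert_2 = triangular(age, 0, 25, 50)

def similarity(mu_a, mu_b):
    """Jaccard-style similarity of two fuzzy sets: |A intersect B| / |A union B|."""
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

# A similarity well below 1 makes the difference in subjectivity explicit.
print(f"similarity(young_1, young_2) = {similarity(young_expert_1, young_expert_2):.3f}")
```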

Scalable Deep Linguistic Processing: Mind the Lexical Gap

  • Baldwin, Timothy
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.3-12
    • /
    • 2007
  • Coverage has been a constant thorn in the side of deployed deep linguistic processing applications, largely because of the difficulty in constructing, maintaining and domain-tuning the complex lexicons that they rely on. This paper reviews various strands of research on deep lexical acquisition (DLA), i.e. the (semi-)automatic creation of linguistically-rich language resources, particularly from the viewpoint of DLA for precision grammars.

Linguistic Processing in Automatic Interpretation System between English-Korean Language Pair

  • Choi, K.S.;Lee, S.M.;Lee, Y.J.
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.1076-1081
    • /
    • 1994
  • This paper presents the linguistic processing used in an automatic interpretation system for the English-Korean language pair. We introduce two machine translation systems, one for English-to-Korean and one for Korean-to-English, describe their system configurations and several characteristics, and discuss the translation evaluation results.

Construction of Korean Linguistic Information for the Korean Generation on KANT (Kant 시스템에서의 한국어 생성을 위한 언어 정보의 구축)

  • Yoon, Deok-Ho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.12
    • /
    • pp.3539-3547
    • /
    • 1999
  • Korean linguistic information for the generation module of the KANT (Knowledge-based Accurate Natural language Translation) system was constructed. As KANT has a language-independent generation engine, constructing the Korean linguistic information amounts to developing the Korean generation module. The constructed information includes concept-based mapping rules, category-based mapping rules, a syntactic lexicon, template rules, grammar rules based on unification grammar, lexical rules, and rewriting rules for Korean. With this information, sentences were successfully and completely generated from the interlingua functional structures of the 118-item test set prepared by the developers of the KANT system.
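
To make the flavor of such resources concrete, here is a toy illustration of a concept-based mapping rule plus a template-style realization step for Korean generation; the rule format, concept names, and Korean strings are invented for illustration and are not taken from the KANT resources described in the paper:

```python
# Hypothetical concept-based mapping rule: interlingua concept -> Korean lexeme.
concept_lexicon = {
    "*MOVE": "이동하다",   # verb left in dictionary form in this sketch
}

def generate(f_structure):
    """Realize a tiny interlingua functional structure as a Korean SOV string."""
    agent = f_structure["agent"] + "가"   # nominative particle (allomorphy ignored)
    theme = f_structure["theme"] + "를"   # accusative particle (allomorphy ignored)
    verb = concept_lexicon[f_structure["concept"]]
    return f"{agent} {theme} {verb}"

print(generate({"concept": "*MOVE", "agent": "기계", "theme": "상자"}))
# -> 기계가 상자를 이동하다
```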

Comparison of Classification Performance Between Adult and Elderly Using Acoustic and Linguistic Features from Spontaneous Speech (자유대화의 음향적 특징 및 언어적 특징 기반의 성인과 노인 분류 성능 비교)

  • SeungHoon Han;Byung Ok Kang;Sunghee Dong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.365-370
    • /
    • 2023
  • This paper aims to compare the performance of classifying speech data into two groups, adult and elderly, based on the acoustic and linguistic characteristics that change with aging, such as changes in respiratory patterns, phonation, pitch, frequency, and language expression ability. For acoustic features, we used attributes related to the frequency, amplitude, and spectrum of the speech signal. For linguistic features, we extracted hidden state vector representations containing contextual information from the transcriptions of speech utterances using KoBERT, a Korean pre-trained language model that has shown excellent performance on natural language processing tasks. The classification performance of each model trained on acoustic and linguistic features was evaluated, and the F1 scores of each model for the two classes, adult and elderly, were examined after addressing the class imbalance problem by down-sampling. The experimental results showed that using linguistic features provided better performance for classifying adult and elderly speakers than using acoustic features, and even when the class proportions were equal, the classification performance for the adult class was higher than that for the elderly class.
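
A rough sketch of the linguistic-feature pipeline the abstract describes: encode each transcript with a pre-trained Korean BERT, down-sample the majority class, train a classifier, and report per-class F1. The checkpoint name, the use of the [CLS] vector, the logistic-regression classifier, and the data-loading function are assumptions made for illustration; the paper's actual setup may differ.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "klue/bert-base"  # stand-in for KoBERT; the checkpoint choice is an assumption
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME).eval()

def embed(transcripts):
    """Return one contextual hidden-state vector (the [CLS] position) per transcript."""
    with torch.no_grad():
        batch = tokenizer(transcripts, padding=True, truncation=True, return_tensors="pt")
        hidden = encoder(**batch).last_hidden_state      # shape (N, T, H)
        return hidden[:, 0, :].numpy()                   # shape (N, H)

def downsample(X, y, seed=0):
    """Down-sample the majority class so both classes have equal counts."""
    rng = np.random.default_rng(seed)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    n = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, n, replace=False),
                           rng.choice(idx1, n, replace=False)])
    return X[keep], y[keep]

# transcripts: list of utterance transcriptions; labels: 0 = adult, 1 = elderly.
transcripts, labels = load_spontaneous_speech_transcripts()   # hypothetical loader
X, y = downsample(embed(transcripts), np.asarray(labels))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("per-class F1 (adult, elderly):", f1_score(y_te, clf.predict(X_te), average=None))
```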

Design Specification of an XML Toolkit (XML Toolkit 설계)

  • Roh, Dae-Sik;Kim, Hyun-Ki;Yun, Bo-Hyun;Kang, Hyun-Kyu
    • Annual Conference of KIPS
    • /
    • 2000.04a
    • /
    • pp.480-482
    • /
    • 2000
  • XML is a new technology that overcomes the limitations of HTML; it is being applied in a wide range of fields, and many application products are being developed. These diverse XML applications, such as XML editors, XSL editors, XML browsers, XML storage managers, XML document storage managers, XML document retrieval systems, and XML conversion tools, require a parser that supports the standard library APIs DOM (Document Object Model) and SAX (Simple API for XML) and that can process every component of an XML document. In this paper, we analyze the requirements of such applications and present an XML Toolkit model that reflects them. The XML Toolkit supports the W3C XML 1.0 specification, the W3C Namespaces in XML specification, and the W3C DOM Level 1 specification, as well as SAX as defined by the XML user community. It also defines an internal data structure for XML documents, together with an API for processing it, in order to provide additional functionality that cannot be reached through the standard APIs or that the standards do not define.
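
Since the abstract centers on a parser that exposes both DOM and SAX, here is a minimal sketch of the contrast between the two access styles, using Python's standard library rather than the toolkit described in the paper:

```python
import xml.dom.minidom
import xml.sax

DOC = "<doc><title>XML Toolkit</title><body>DOM and SAX example</body></doc>"

# DOM: the whole document is parsed into an in-memory tree and navigated freely.
dom = xml.dom.minidom.parseString(DOC)
print("DOM title:", dom.getElementsByTagName("title")[0].firstChild.data)

# SAX: the document is streamed and callbacks fire as elements are encountered.
class TitleHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.in_title, self.title = False, ""

    def startElement(self, name, attrs):
        self.in_title = (name == "title")

    def endElement(self, name):
        if name == "title":
            self.in_title = False

    def characters(self, content):
        if self.in_title:
            self.title += content

handler = TitleHandler()
xml.sax.parseString(DOC.encode("utf-8"), handler)
print("SAX title:", handler.title)
```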

A Linguistic Case-based Fuzzy Reasoning based on SPMF (표준화된 매개변수 소속함수에 기반을 둔 언어적 케이스 기반 퍼지 추론)

  • Choi, Dae-Young
    • The KIPS Transactions: Part B
    • /
    • v.17B no.2
    • /
    • pp.163-168
    • /
    • 2010
  • A linguistic case-based fuzzy reasoning (LCBFR) scheme based on standardized parametric membership functions (SPMF) is proposed. It provides an efficient mechanism for fuzzy reasoning within linear time complexity and can therefore be used to improve the speed of fuzzy reasoning. Within LCBFR, linguistic case indexing and retrieval based on SPMF are suggested; they can be processed relatively quickly compared to previous linguistic approximation methods, which is a valuable advantage from the engineering viewpoint.
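
A small sketch of the general idea behind standardized parametric membership functions: each linguistic value is stored as a fixed-size parameter tuple, so indexing and retrieving the closest linguistic case reduces to comparing a handful of numbers per case, giving linear time over the case base. The trapezoidal parameterization and the distance measure below are illustrative assumptions, not the paper's exact SPMF definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trapezoid:
    """A linguistic value stored as four parameters (a <= b <= c <= d)."""
    a: float
    b: float
    c: float
    d: float

def distance(p: Trapezoid, q: Trapezoid) -> float:
    """Parameter-space distance: O(1) per pair, hence linear over the case base."""
    return (abs(p.a - q.a) + abs(p.b - q.b) + abs(p.c - q.c) + abs(p.d - q.d)) / 4

# A tiny linguistic case base indexed by parameter tuples.
case_base = {
    "low":    Trapezoid(0.0, 0.0, 0.2, 0.4),
    "medium": Trapezoid(0.3, 0.45, 0.55, 0.7),
    "high":   Trapezoid(0.6, 0.8, 1.0, 1.0),
}

query = Trapezoid(0.25, 0.4, 0.5, 0.65)
best = min(case_base, key=lambda name: distance(case_base[name], query))
print("retrieved linguistic case:", best)
```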

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates a concise sentence preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. In addition, since sequence-to-sequence models perform well on various natural language processing tasks such as machine translation, some studies have applied them to sentence compression. However, rule-based studies require all rules to be defined by humans, and sequence-to-sequence-based studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in the sentence, and because the data used to pre-train BERT are far from compressed sentences, this can lead to incorrect compression. To address these problems, this paper proposes a method that quantifies the importance of linguistic information and reflects it in the perplexity-based sentence scoring. Furthermore, by fine-tuning BERT on a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the proposed method improves the sentence compression performance of sentence-scoring-based models.
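
A compact sketch of the perplexity-driven deletion idea the abstract builds on: score each candidate (the sentence with one word removed) by a BERT pseudo-perplexity and greedily delete the word whose removal scores best. The checkpoint, the greedy loop, and the omission of the paper's linguistic-importance weighting and fine-tuning are illustrative simplifications, not the proposed method itself.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"   # stand-in checkpoint; the paper fine-tunes its own BERT
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

def pseudo_perplexity(words):
    """Mask each token in turn and average the masked-token negative log-likelihoods."""
    ids = tok(" ".join(words), return_tensors="pt")["input_ids"][0]
    nll = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):                    # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tok.mask_token_id
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
            nll += -torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return nll / max(len(ids) - 2, 1)

def compress(sentence, keep_ratio=0.7):
    """Greedily delete the word whose removal yields the lowest pseudo-perplexity."""
    words = sentence.split()
    target_len = max(1, int(len(words) * keep_ratio))
    while len(words) > target_len:
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        words = min(candidates, key=pseudo_perplexity)
    return " ".join(words)

print(compress("the very large cat quietly sat on the old wooden mat"))
```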

Logic-based Fuzzy Neural Networks based on Fuzzy Granulation

  • Kwak, Keun-Chang;Kim, Dong-Hwa
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.1510-1515
    • /
    • 2005
  • This paper is concerned with Logic-based Fuzzy Neural Networks (LFNN) designed with the aid of fuzzy granulation. As the underlying design tools guiding the development of the proposed LFNN, we concentrate on context-based fuzzy clustering, which builds information granules in the form of linguistic contexts, and the OR fuzzy neuron, a logic-driven processing unit realizing the composition operations of T-norm and S-norm. The design process comprises several main phases: (a) defining context fuzzy sets in the output space, (b) completing context-based fuzzy clustering in each context, (c) aggregating OR fuzzy neurons into linguistic models, and (d) optimizing the connections linking information granules and fuzzy neurons in the input and output spaces. The approach is tested experimentally on a two-dimensional nonlinear function. The obtained results reveal that the proposed model yields better performance than the conventional linguistic model and other approaches.
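
A minimal numeric sketch of the OR fuzzy neuron referred to above: each input is first combined with its connection weight by a T-norm, and the results are then aggregated across inputs by an S-norm. The product T-norm and probabilistic-sum S-norm used here are common choices, not necessarily the ones used in the paper.

```python
import numpy as np

def t_norm(a, b):
    """Product T-norm (fuzzy AND)."""
    return a * b

def s_norm(a, b):
    """Probabilistic-sum S-norm (fuzzy OR)."""
    return a + b - a * b

def or_fuzzy_neuron(x, w):
    """y = S_i [ T(x_i, w_i) ]: AND each input with its weight, then OR the results."""
    z = t_norm(np.asarray(x, dtype=float), np.asarray(w, dtype=float))
    out = 0.0
    for zi in z:
        out = s_norm(out, zi)
    return out

# Membership degrees of an input in three linguistic contexts, with their weights.
print(or_fuzzy_neuron([0.8, 0.3, 0.5], [0.9, 0.2, 0.6]))
```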
