• Title/Summary/Keyword: language model

Search Result 2,777, Processing Time 0.032 seconds

A Experimental Study on the Usefulness of Structure Hints in the Leaf Node Language Model-Based XML Document Retrieval (단말노드 언어모델 기반의 XML문서검색에서 구조 제한의 유용성에 관한 실험적 연구)

  • Jung, Young-Mi
    • Journal of the Korean Society for information Management
    • /
    • v.24 no.1 s.63
    • /
    • pp.209-226
    • /
    • 2007
  • XML documents format on the Web provides a mechanism to impose their content and logical structure information. Therefore, an XML processor provides access to their content and structure. The purpose of this study is to investigate the usefulness of structural hints in the leaf node language model-based XML document retrieval. In order to this purpose, this experiment tested the performances of the leaf node language model-based XML retrieval system to compare the queries for a topic containing only content-only constraints and both content constrains and structure constraints. A newly designed and implemented leaf node language model-based XML retrieval system was used. And we participated in the ad-hoc track of INEX 2005 and conducted an experiment using a large-scale XML test collection provided by INEX 2005.

Range Detection of Wa/Kwa Parallel Noun Phrase by Alignment method (정렬기법을 활용한 와/과 병렬명사구 범위 결정)

  • Choe, Yong-Seok;Sin, Ji-Ae;Choe, Gi-Seon;Kim, Gi-Tae;Lee, Sang-Tae
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2008.10a
    • /
    • pp.90-93
    • /
    • 2008
  • In natural language, it is common that repetitive constituents in an expression are to be left out and it is necessary to figure out the constituents omitted at analyzing the meaning of the sentence. This paper is on recognition of boundaries of parallel noun phrases by figuring out constituents omitted. Recognition of parallel noun phrases can greatly reduce complexity at the phase of sentence parsing. Moreover, in natural language information retrieval, recognition of noun with modifiers can play an important role in making indexes. We propose an unsupervised probabilistic model that identifies parallel cores as well as boundaries of parallel noun phrases conjoined by a conjunctive particle. It is based on the idea of swapping constituents, utilizing symmetry (two or more identical constituents are repeated) and reversibility (the order of constituents is changeable) in parallel structure. Semantic features of the modifiers around parallel noun phrase, are also used the probabilistic swapping model. The model is language-independent and in this paper presented on parallel noun phrases in Korean language. Experiment shows that our probabilistic model outperforms symmetry-based model and supervised machine learning based approaches.

  • PDF

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal
    • /
    • v.43 no.2
    • /
    • pp.299-312
    • /
    • 2021
  • It is necessary to achieve high performance in the task of zero anaphora resolution (ZAR) for completely understanding the texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in the recent years. However, the objective of building a high-quality ZAR system is far from being achieved even using these models. To enhance the current ZAR techniques, we fine-tuned a pretrained bidirectional encoder representations from transformers (BERT). Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in a natural language text. It extensively exploits the attention mechanism based upon the sequence-transduction model Transformer. In our model, classification is simultaneously performed for all the words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that compared with other models, our approach can significantly improve the performance of ZAR.

Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.8
    • /
    • pp.325-330
    • /
    • 2022
  • This paper conducts the collection and experiment of datasets for deep learning research on sign language such as sign language recognition, sign language translation, and sign language segmentation for Korean sign language. There exist difficulties for deep learning research of sign language. First, it is difficult to recognize sign languages since they contain multiple modalities including hand movements, hand directions, and facial expressions. Second, it is the absence of training data to conduct deep learning research. Currently, KETI dataset is the only known dataset for Korean sign language for deep learning. Sign language datasets for deep learning research are classified into two categories: Isolated sign language and Continuous sign language. Although several foreign sign language datasets have been collected over time. they are also insufficient for deep learning research of sign language. Therefore, we attempted to collect a large-scale Korean sign language dataset and evaluate it using a baseline model named TSPNet which has the performance of SOTA in the field of sign language translation. The collected dataset consists of a total of 11,402 image and text. Our experimental result with the baseline model using the dataset shows BLEU-4 score 3.63, which would be used as a basic performance of a baseline model for Korean sign language dataset. We hope that our experience of collecting Korean sign language dataset helps facilitate further research directions on Korean sign language.

A modeling of manufacturing system and a model analysis by a SIMAN language (생산공정의 모델링과 SIMAN 언어에 의한 모델분석)

  • 이만형;김경천;한성현
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1987.10b
    • /
    • pp.300-306
    • /
    • 1987
  • This paper deals with a modeling of manufacturing system and a model analysis by a SIMAN language. A flow of production process is analyzed, and a mathematical model on the basis of the analyzed data is simulated by a SIMAN language. An object of this study is to achieve an optimization of production a reduction of cost, and an improvement of quality by a applicable line-balancing technique and an optimization technique in a real factor induced an analysis and synthesis of the result of simulation.

  • PDF

A Training Feasibility Evaluation of Nuclear Safeguards Terms for the Large Language Model (LLM) (거대언어모델에 대한 원자력 안전조치 용어 적용 가능성 평가)

  • Sung-Ho Yoon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.479-480
    • /
    • 2024
  • 본 논문에서는 원자력 안전조치 용어를 미세조정(fine tuning) 알고리즘을 활용해 추가 학습한 공개 거대 언어모델(Large Language Model, LLM)이 안전조치 관련 질문에 대해 답변한 결과를 정성적으로 평가하였다. 평가 결과, 학습 데이터 범위 내 질문에 대해 학습 모델은 기반 모델 답변에 추가 학습 데이터를 활용한 낮은 수준의 추론을 수행한 답변을 출력하였다. 평가 결과를 통해 추가 학습 개선 방향을 도출하였으며 저비용 전문 분야 언어 모델 구축에 활용할 수 있을 것으로 보인다.

  • PDF

Safety of Large Language Model-Tool Integration (거대 언어 모델 (Large Language Model, LLM)과 도구 결합의 보안성 연구)

  • Juhee Kim;Byoungyoung Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2024.05a
    • /
    • pp.210-213
    • /
    • 2024
  • 이 연구는 거대한 언어 모델 (Large Language Model, LLM)과 도구를 결합한 시스템의 보안 문제를 다룬다. 프롬프트 주입과 같은 보안 취약점을 분석하고 이를 극복하기 위한 프롬프트 권한 분리 기법을 제안한다. 이를 통해 LLM-도구 결합 시스템에서의 사용자 데이터의 기밀성과 무결성을 보장한다.

Software Model Integration Using Metadata Model Based on Linked Data (Linked Data 기반의 메타데이타 모델을 활용한 소프트웨어 모델 통합)

  • Kim, Dae-Hwan;Jeong, Chan-Ki
    • Journal of Information Technology Services
    • /
    • v.12 no.3
    • /
    • pp.311-321
    • /
    • 2013
  • In the community of software engineering, diverse modeling languages are used for representing all relevant information in the form of models. Also many different models such as business model, business process model, product models, interface models etc. are generated through software life cycles. In this situation, models need to be integrated for enterprise integration and enhancement of software productivity. Researchers propose rebuilding models by a specific modeling language, using a intemediate modeling language and using common reference for model integration. However, in the current approach it requires a lot of cost and time to integrate models. Also it is difficult to identify common objects from several models and to update objects in the repository of common model objects. This paper proposes software model integration using metadata model based on Linked data. We verify the effectiveness of the proposed approach through a case study.

Natural-Language-Based Robot Action Control Using a Hierarchical Behavior Model

  • Ahn, Hyunsik;Ko, Hyun-Bum
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.3
    • /
    • pp.192-200
    • /
    • 2012
  • In order for humans and robots to interact in daily life, robots need to understand human speech and link it to their actions. This paper proposes a hierarchical behavior model for robot action control using natural language commands. The model, which consists of episodes, primitive actions and atomic functions, uses a sentential cognitive system that includes multiple modules for perception, action, reasoning and memory. Human speech commands are translated to sentences with a natural language processor that are syntactically parsed. A semantic parsing procedure was applied to human speech by analyzing the verbs and phrases of the sentences and linking them to the cognitive information. The cognitive system performed according to the hierarchical behavior model, which consists of episodes, primitive actions and atomic functions, which are implemented in the system. In the experiments, a possible episode, "Water the pot," was tested and its feasibility was evaluated.

  • PDF

An Experimental Study on the Performance of Element-based XML Document Retrieval (엘리먼트 기반 XML 문서검색의 성능에 관한 실험적 연구)

  • Yoon, So-Young;Moon, Sung-Been
    • Journal of the Korean Society for information Management
    • /
    • v.23 no.1 s.59
    • /
    • pp.201-219
    • /
    • 2006
  • This experimental study suggests an element-based XML document retrieval method that reveals highly relevant elements. The models investigated here for comparison are divergence and smoothing method, and hierarchical language model. In conclusion, the hierarchical language model proved to be most effective in element-based XML document retrieval with regard to the improved exhaustivity and harmed specificity.