• Title/Summary/Keyword: language of science

3,677 search results (processing time: 0.039 seconds)

Information Retrieval Systems: Between Morphological Analyzers and Stemming Algorithms

  • Mohamed, Afaf Abdel Rhman;Ouni, Chafika;Eljack, Sarah Mustafa;Alfayez, Fayez
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.3
    • /
    • pp.375-381
    • /
    • 2022
  • The main objective of an Information Retrieval System (IRS) is to obtain suitable information within a reasonable time to satisfy a user's need. To achieve this, an IRS should have a good indexing system based on natural language processing. In this context, we focus on the Arabic language processing techniques available to an IRS, with the goal of improving its performance. Our contribution consists of integrating morphological analysis into an IRS in order to compare its impact with that of stemming algorithms.
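The contrast the abstract above draws can be illustrated with a minimal sketch (not the paper's Arabic system; the suffix list and lexicon are invented stand-ins): a stemmer strips suffixes by rule, while a morphological analyzer is stubbed here as a lemma lookup that falls back to stemming.

```python
# Illustrative sketch of two ways to normalize index terms in an IR
# pipeline: rule-based stemming vs dictionary-backed analysis.

SUFFIXES = ("ing", "ers", "er", "s")  # toy suffix list

def stem(token):
    """Rule-based suffix stripping (stemming)."""
    for suf in SUFFIXES:
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[: -len(suf)]
    return token

# Stand-in for a real morphological analyzer: a hand-built lexicon
# mapping surface forms to lemmas.
LEXICON = {"ran": "run", "running": "run", "indices": "index", "indexing": "index"}

def analyze(token):
    """Dictionary-backed morphological normalization, stemming as fallback."""
    return LEXICON.get(token, stem(token))

def index_terms(text, normalize):
    """Build the set of index terms for a document under a normalizer."""
    return sorted({normalize(t) for t in text.lower().split()})

doc = "indexing indices ran running retrievers"
print(index_terms(doc, stem))     # stemming conflates less
print(index_terms(doc, analyze))  # analysis maps irregular forms too
```

The comparison the paper runs amounts to indexing the same collection under each normalizer and measuring retrieval quality on the resulting indexes.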

Design and Implementation of Field Classification and Information Retrieval Engine: JULSE (검색과 분류가 동시에 가능한 JULSE 시스템의 설계 및 구현)

  • Jang, Jeong-Hyo;Son, Ju-Sung;Kim, Do-Yun;Lee, Sang-Kon;Lee, Won-Hee;Ahn, Dong-Un
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.11a
    • /
    • pp.673-676
    • /
    • 2005
  • Existing information retrieval engines display the full text of a document regardless of its field, so the user must read the entire text to judge whether its content is relevant. The method proposed in this paper uses field-associated words to automatically identify the field a query refers to, and designs an engine capable of simultaneous search and classification so that retrieval is performed within the field the user wants, improving the quality of search results. In addition, the system is designed and implemented to present the user with paragraphs in which many appropriate field-associated words appear, so that the user can quickly judge whether a document suits the query without reading the full text.

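The mechanism described above can be sketched in a few lines (the field lists, fields, and sample texts are hypothetical, not JULSE's data): infer the query's field from field-associated words, then surface the paragraph where those words cluster so the user need not read the whole document.

```python
# Toy sketch of field inference plus paragraph extraction.

FIELD_WORDS = {  # hypothetical field-associated word lists
    "medicine": {"patient", "dose", "clinical"},
    "computing": {"index", "query", "retrieval"},
}

def infer_field(query_terms):
    """Pick the field whose associated words overlap the query most."""
    scores = {f: len(words & set(query_terms)) for f, words in FIELD_WORDS.items()}
    return max(scores, key=scores.get)

def best_paragraph(paragraphs, field):
    """Return the paragraph containing the most field-associated words."""
    words = FIELD_WORDS[field]
    return max(paragraphs, key=lambda p: sum(w in words for w in p.lower().split()))

field = infer_field(["query", "index"])
paras = ["The patient received a dose.", "A query hits the retrieval index."]
print(field, "->", best_paragraph(paras, field))
```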

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee;Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.2E
    • /
    • pp.51-55
    • /
    • 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram-based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech shows distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which are related to function words and depend on the preceding or following words. To reflect these style-specific characteristics and to overcome insufficient data for training the language model, we estimate a style-dependent in-domain n-gram model by relevance-weighting out-of-domain text data according to its n-gram-based tf*idf similarity, where the in-domain language model includes a disfluency model. Recognition results show that n-gram-based tf*idf similarity weighting effectively reflects the style difference.
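The relevance-weighting step can be sketched as follows (a simplified unigram version with made-up sentences, not the paper's exact n-gram scheme): out-of-domain sentences are scored by cosine similarity of their tf*idf vectors to an in-domain seed, so stylistically closer text counts more when estimating the adapted model.

```python
# Weight out-of-domain text by tf*idf cosine similarity to in-domain data.
import math
from collections import Counter

def tfidf(doc, docs):
    """Smoothed tf*idf vector for a token list, idf computed over docs."""
    tf = Counter(doc)
    n = len(docs)
    return {w: tf[w] * math.log((1 + n) / (1 + sum(w in d for d in docs)))
            for w in tf}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

in_domain = ["uh i mean yeah".split(), "well you know".split()]
out_domain = ["uh yeah you know".split(), "quarterly revenue grew".split()]
docs = in_domain + out_domain

seed = tfidf([w for d in in_domain for w in d], docs)
weights = [cosine(tfidf(d, docs), seed) for d in out_domain]
print(weights)  # the conversational sentence scores higher
```

Counts from the out-of-domain data would then enter the n-gram estimates scaled by these weights.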

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal
    • /
    • v.43 no.6
    • /
    • pp.1038-1048
    • /
    • 2021
  • We propose an end-to-end neural coreference resolution for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates based on the head-final characteristics of the Korean language and learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representation from transformer (BERT) in natural language-processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean language coreference resolution.
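The pointing idea above can be illustrated with a toy attention step (not the paper's model: real mention and candidate vectors come from BERT, and the names and numbers here are invented): each candidate noun is scored against the current mention by dot product, and a softmax turns the scores into a distribution over antecedent positions.

```python
# Toy pointer over noun candidates: dot-product scores + softmax.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def point(mention_vec, candidate_vecs):
    """Return (index of most likely antecedent, full distribution)."""
    scores = [sum(m * c for m, c in zip(mention_vec, v)) for v in candidate_vecs]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__), probs

# Hand-made stand-in vectors for three candidate nouns in a document.
candidates = {"Kim": [1.0, 0.2], "ETRI": [0.1, 1.0], "model": [0.0, 0.9]}
mention = [0.9, 0.1]  # a pronoun-like mention resembling "Kim"

idx, probs = point(mention, list(candidates.values()))
print(list(candidates)[idx])
```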

Reducing Toxic Response Generation in Conversational Models using Plug and Play Language Model (Plug and Play Language Model을 활용한 대화 모델의 독성 응답 생성 감소)

  • Kim, Byeong-Joo;Lee, Geun-Bae
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.433-438
    • /
    • 2021
  • Dialogue systems are broadly divided into those in which the user and system converse toward a specific goal and those that converse on open topics. As research on open-domain dialogue systems has recently become active, controlling toxic utterance generation in open-topic counseling and everyday-conversation systems has grown increasingly important. In this paper, to control toxic response generation in a dialogue model, we apply the Plug-and-Play Language Model (PPLM) method to a BART model trained on an everyday-conversation dataset. Using a toxic-response classifier trained on a public toxic-dialogue classification dataset as the attribute model of PPLM, we reduce the dialogue model's toxic response generation and quantitatively compare the difference through experiments. The experimental results confirm that toxic response generation decreased in every experiment that used the attribute model.


Development of a Sign Language Learning Assistance System Using MediaPipe for Sign Language Education of the Hearing Impaired (청각장애인의 수어 교육을 위한 MediaPipe 활용 수어 학습 보조 시스템 개발)

  • Kim, Jin-Young;Sim, Hyun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1355-1362
    • /
    • 2021
  • Recently, the number of people with hearing impairment has been increasing, due not only to congenital causes but also to acquired factors, yet environments in which sign language can be learned remain poor. This study therefore presents a sign-language (fingerspelled numbers and letters) evaluation system as a learning aid for sign language learners. In this paper, sign language is captured as images using OpenCV and classified with a convolutional neural network (CNN). In addition, we study a system that recognizes signing motion using MediaPipe, converts the meaning of the sign into text data, and provides it to the user. This enables self-directed learning, allowing learners to judge for themselves whether their handshape (dez) is correct. We therefore develop a sign language learning assistance system that helps users learn sign language, with the goal of supporting learning of the primary language of communication for the hearing impaired.
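One step of such a pipeline can be sketched with hypothetical data (the centroids and landmark coordinates below are invented; only the 21-point layout and fingertip indices follow MediaPipe's hand-landmark convention): reduce hand landmarks to a feature vector and classify the handshape by nearest centroid, turning a gesture into text the learner can check.

```python
# Hand landmarks -> feature vector -> nearest-centroid handshape label.
import math

def features(landmarks):
    """Distances from the wrist (landmark 0) to each fingertip."""
    wx, wy = landmarks[0]
    tips = [4, 8, 12, 16, 20]  # MediaPipe fingertip indices
    return [math.hypot(landmarks[i][0] - wx, landmarks[i][1] - wy) for i in tips]

def classify(feat, centroids):
    """Label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feat, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy centroids for two handshapes (made up for illustration).
centroids = {"fist": [0.1] * 5, "open": [0.5] * 5}

landmarks = [(0.0, 0.0)] * 21
for i in [4, 8, 12, 16, 20]:
    landmarks[i] = (0.48, 0.0)  # fingertips far from the wrist

print(classify(features(landmarks), centroids))
```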

Hierarchical Hidden Markov Model for Finger Language Recognition (지화 인식을 위한 계층적 은닉 마코프 모델)

  • Kwon, Jae-Hong;Kim, Tae-Yong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.9
    • /
    • pp.77-85
    • /
    • 2015
  • Finger language (fingerspelling) is the part of sign language that expresses vowels and consonants with hand gestures. Korean fingerspelling has 31 gestures, and each needs many learned models for accurate recognition. With a large number of learned models, searching them takes a long time, so a real-time recognition system must focus on reducing the search space. To solve this problem, this paper proposes a hierarchical HMM structure that reduces the search space effectively without lowering the recognition rate. Korean fingerspelling gestures are divided into three categories according to the direction of the wrist, and a model is searched only within its category. This pre-classification distinguishes similar Korean fingerspelling gestures and keeps the search space manageable, so the proposed method can be applied in a real-time recognition system. Experimental results demonstrate that the proposed method reduces recognition time to about one third of that of a conventional HMM recognition method.
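The search-space reduction can be sketched as follows (the three-way split matches the abstract, but the category names and gesture models are invented): a cheap wrist-direction pre-classifier selects one category, and only the HMMs in that category are scored.

```python
# Pre-classification by wrist direction shrinks the set of HMMs to score.

MODELS_BY_DIRECTION = {  # hypothetical 3-way split of gesture models
    "up":   ["giyeok", "nieun", "digeut"],
    "side": ["a", "ya", "eo"],
    "down": ["rieul", "mieum"],
}

def candidate_models(wrist_direction):
    """Return only the models worth scoring for this gesture."""
    return MODELS_BY_DIRECTION[wrist_direction]

total = sum(len(v) for v in MODELS_BY_DIRECTION.values())
searched = len(candidate_models("side"))
print(f"score {searched} of {total} models")
```

In the full system each gesture would then be decoded only against the HMMs returned by the pre-classifier, which is where the roughly threefold speedup comes from.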

A Semi-supervised Learning of HMM to Build a POS Tagger for a Low Resourced Language

  • Pattnaik, Sagarika;Nayak, Ajit Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.4
    • /
    • pp.207-215
    • /
    • 2020
  • Part-of-speech (POS) tagging is an indispensable part of major NLP models. Its progress can be seen across a number of languages around the globe, especially European languages, but Indian languages have not had a comparable breakthrough due to the lack of supporting tools and resources; for the Odia language in particular, little headway has been made. With the aim of making Odia usable in different NLP operations, this paper attempts to develop a POS tagger for the language on a hidden Markov model (HMM) platform. The tagger uses a bigram HMM with the dynamic-programming Viterbi algorithm to output annotated text with maximum accuracy. The model is evaluated on a corpus from the tourism domain of approximately 0.2 million tokens. With a 3:1 split between training and testing, the proposed model exhibits satisfactory results despite the limited training size.
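The bigram-HMM decoding step named above is the standard Viterbi algorithm; a generic sketch follows (toy English tags and probabilities, not the Odia tagger's trained parameters): it picks the tag sequence maximizing the product of transition and emission probabilities.

```python
# Bigram-HMM Viterbi decoding over a toy tagset.
def viterbi(words, tags, start, trans, emit):
    """Most likely tag sequence under start/transition/emission probs."""
    V = [{t: start[t] * emit[t].get(words[0], 1e-6) for t in tags}]
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for t in tags:
            best = max(tags, key=lambda p: V[-1][p] * trans[p][t])
            col[t] = V[-1][best] * trans[best][t] * emit[t].get(w, 1e-6)
            ptr[t] = best
        V.append(col)
        back.append(ptr)
    last = max(tags, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):  # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = ["N", "V"]
start = {"N": 0.7, "V": 0.3}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"dogs": 0.6, "cats": 0.4}, "V": {"chase": 0.9}}
print(viterbi(["dogs", "chase", "cats"], tags, start, trans, emit))
```

A trained tagger differs only in that `start`, `trans`, and `emit` are estimated from an annotated corpus (with smoothing for unseen words) rather than written by hand.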

Natural Language Processing and Cognition (자연언어처리와 인지)

  • 이정민
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.2
    • /
    • pp.161-174
    • /
    • 1992
  • The present discussion is concerned with showing the development of natural language processing and how it is related to information and cognition. On the basis of the computational model, in which humans are viewed as processors of linguistic structures drawing on stored knowledge (grammar, a lexicon, and structures representing encyclopedic information about the world), natural language understanding programs such as Winograd's SHRDLU emerged. However, pragmatic factors such as context and the speaker's beliefs, interests, goals, and intentions are not yet easy to process. Language, information, and cognition are argued to be closely interrelated, and the paper argues that studying them can lead to the development of science in general.

A Computational Model of Language Learning Driven by Training Inputs

  • Lee, Eun-Seok;Lee, Ji-Hoon;Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2010.05a
    • /
    • pp.60-65
    • /
    • 2010
  • Language learning involves the linguistic environment around the learner, so variation in the training input to which a learner is exposed has been linked to language learning. We explore how linguistic experience can cause differences in learning linguistic structural features, investigated with a probabilistic graphical model. We gradually vary the amount of training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word expressions). The recognition and generation of sentences is a probabilistic constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence-generation tasks succeed when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, like children's cognitive development during learning. This model supports the suggestion that varying early linguistic environments in developmental steps may help facilitate language acquisition.
