• Title/Summary/Keyword: unknown word processing

Search Result 17

Distributed Representation of Words with Semantic Hierarchical Information (의미적 계층정보를 반영한 단어의 분산 표현)

  • Kim, Minho;Choi, Sungki;Kwon, Hyuk-Chul
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.941-944 / 2017
  • In a statistical language model based on deep learning, the most important task is the distributed representation of words. A distributed representation expresses the meaning of a word itself as a vector in a multidimensional space and is also called a word embedding. Deep-learning-based statistical language models that use word embeddings are known to outperform traditional statistical language models. However, word embeddings cannot escape the data-sparseness problem either; in particular, handling words that do not appear in the training data (unknown words) is important. This paper proposes a word-embedding method that uses the semantic hierarchical information of words to build high-quality Korean word embeddings. It reuses a word-embedding method proposed in previous work, but modifies the objective function during training so that it also reflects the hyponyms and synonyms of the input word, thereby incorporating the word's semantic hierarchical information. In an analogical-reasoning test of the word vectors generated by the proposed method, accuracy reached 47.90%, a 5% improvement over the existing method.
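The modification described above can be illustrated with a minimal numpy sketch of a skip-gram-style update in which the positive-example gradient is applied not only to the input word but also to its hyponyms and synonyms. The vocabulary, hierarchy, dimensions, and learning rate are all invented for the example; the paper's actual model and corpus are not reproduced.

```python
import numpy as np

# Toy semantic hierarchy (hypothetical data): each word maps to related
# words (hyponyms/synonyms) that the training objective should also reflect.
HIERARCHY = {
    "animal": ["dog", "cat"],   # hyponyms of "animal"
    "dog": ["puppy"],
    "cat": [],
    "puppy": [],
}
VOCAB = {w: i for i, w in enumerate(HIERARCHY)}
DIM = 8
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(len(VOCAB), DIM))   # input vectors
W_out = rng.normal(scale=0.1, size=(len(VOCAB), DIM))  # output vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(center, context, lr=0.05):
    """One positive skip-gram update; the gradient is applied to the
    center word AND to its related words from the hierarchy."""
    targets = [center] + HIERARCHY[center]  # center plus hyponyms/synonyms
    c = VOCAB[context]
    for w in targets:
        i = VOCAB[w]
        score = sigmoid(W_in[i] @ W_out[c])
        grad = score - 1.0                  # positive-example gradient
        W_in[i] -= lr * grad * W_out[c]
        W_out[c] -= lr * grad * W_in[i]

before = sigmoid(W_in[VOCAB["dog"]] @ W_out[VOCAB["puppy"]])
for _ in range(200):
    train_pair("animal", "puppy")  # updating "animal" also pulls "dog", "cat"
after = sigmoid(W_in[VOCAB["dog"]] @ W_out[VOCAB["puppy"]])
assert after > before  # the hyponym "dog" moved toward the shared context
```

The point of the sketch is only the shape of the objective change: each observed (word, context) pair contributes gradient updates for the whole set of semantically related words, not just the word itself.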

A Method for Unknown-Word Extraction from Korean Text (한국어 구문 분석기를 이용한 지명 추정 시스템 설계 및 구현)

  • Lee, Hyun-Suk;Ha, You-Sun;Kim, Tae-Hyun;Lee, Mann-Ho;Myaeng, Sung-Hyon
    • Proceedings of the Korea Information Processing Society Conference / 2000.10a / pp.383-386 / 2000
  • This paper proposes a method for estimating unregistered proper nouns in text using training data. For proper-noun estimation, a morphological analyzer first selects words tagged as nouns as candidate words. To estimate whether a selected candidate is a proper noun, information sets built from the training data are used: a name set, a suffix set, a clue set, and an exclusion-word set. A system that estimates Korean place names using this information was implemented, and experiments showed 77.2% accuracy and 84.9% recall.
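The set-based decision procedure can be sketched as follows. The four sets here are tiny hypothetical stand-ins for the paper's training-derived name, suffix, clue, and exclusion sets, and the decision order is the sketch's own simplification.

```python
# Hypothetical miniature versions of the four information sets.
NAME_SET = {"서울", "부산"}          # known place names
SUFFIX_SET = {"시", "군", "도"}      # suffixes that mark a place name
CLUE_SET = {"에서", "으로", "까지"}  # following words hinting at location
EXCLUDE_SET = {"회사"}               # nouns that are never place names

def is_place_name(noun, next_word=""):
    """Decide whether a candidate noun (already tagged as a noun by a
    morphological analyzer) is likely a place name."""
    if noun in EXCLUDE_SET:           # exclusion set vetoes first
        return False
    if noun in NAME_SET:              # known name: accept
        return True
    if len(noun) > 1 and any(noun.endswith(s) for s in SUFFIX_SET):
        return True                   # place-name suffix: accept
    return next_word in CLUE_SET      # otherwise fall back on context clues

assert is_place_name("서울")
assert is_place_name("광주시")        # unseen name accepted via suffix "시"
assert not is_place_name("회사")
```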


Web Attack Classification Model Based on Payload Embedding Pre-Training (페이로드 임베딩 사전학습 기반의 웹 공격 분류 모델)

  • Kim, Yeonsu;Ko, Younghun;Euom, Ieckchae;Kim, Kyungbaek
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.669-677 / 2020
  • As the number of Internet users has exploded, attacks on the web have increased, and attack patterns have diversified to bypass existing defense techniques. It is difficult for traditional web firewalls to detect attacks with unknown patterns, so detecting abnormal behavior with artificial intelligence has been studied as an alternative. In particular, attempts have been made to apply natural language processing techniques, because the exploited scripts and queries consist of text. However, because scripts and queries contain many unknown words, natural language processing requires a different approach. In this paper, we propose a new classification model that uses byte pair encoding (BPE) to learn embedding vectors for tokens that frequently occur in web attack payloads, and an attention-mechanism-based Bi-GRU neural network to learn the order and importance of those tokens. For major web attacks such as SQL injection, cross-site scripting, and command injection, the accuracy of the proposed classification method is about 0.9990, outperforming the model suggested in the previous study.
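The BPE step that makes unknown payload words tractable can be sketched in a few lines: repeatedly merge the most frequent adjacent symbol pair so that frequent substrings become single subword tokens. The toy payload list is invented; the paper's tokenizer and training data are not reproduced.

```python
from collections import Counter

def learn_bpe(corpus, num_merges):
    """Learn byte-pair-encoding merges from a list of strings, building
    subword tokens so unseen payloads still decompose into known units."""
    vocab = Counter(tuple(word) for word in corpus)  # words as symbol tuples
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)             # most frequent pair
        merges.append(best)
        new_vocab = Counter()
        for symbols, freq in vocab.items():          # apply the merge
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

# Frequent substrings shared by SQL-injection payloads get merged into
# single subword units even when the full query was never seen in training.
payloads = ["select", "selected", "selection"] * 3
merges = learn_bpe(payloads, 5)
assert len(merges) == 5
```

The merged subword vocabulary is what the embedding layer is trained over; the Bi-GRU and attention layers then operate on those token sequences.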

Practical Development and Application of a Korean Morphological Analyzer for Automatic Indexing (자동 색인을 위한 한국어 형태소 분석기의 실제적인 구현 및 적용)

  • Choi, Sung-Pil;Seo, Jerry;Chae, Young-Suk
    • The KIPS Transactions:PartB / v.9B no.5 / pp.689-700 / 2002
  • In this paper, we developed a Korean morphological analyzer for automatic indexing, which is essential for information retrieval. Since it is important to index large-scale document sets efficiently, we concentrated on maximizing the speed of word analysis and on modularizing and structuring the system, rather than on new concepts or ideas. In this respect, our system is characterized by software-engineering concerns for real-world use rather than theoretical issues. First, a word dictionary was structured. Then modules that analyze substantives and inflected words were introduced, and a numeral analyzer was developed. We also introduced an unknown-word analyzer that uses morpheme patterns. The whole system was integrated into K-2000, an information retrieval system.
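The core indexing step, dictionary-driven analysis with a pattern-based fallback for unknown words, might look like the following sketch. The dictionaries and the fallback rule are invented for illustration and are far simpler than the paper's modules.

```python
# Toy dictionaries (hypothetical): real analyzers use large lexicons.
NOUNS = {"정보", "검색", "시스템"}
PARTICLES = {"", "은", "는", "이", "가", "을", "를", "의"}

def index_term(eojeol):
    """Split a Korean word (eojeol) into 'substantive + particle' by
    longest dictionary match; fall back to a morpheme-pattern rule for
    unknown words. Returns the noun to index, or None."""
    for cut in range(len(eojeol), 0, -1):        # longest match first
        stem, rest = eojeol[:cut], eojeol[cut:]
        if stem in NOUNS and rest in PARTICLES:
            return stem
    # Unknown-word fallback: strip a trailing one-syllable particle and
    # keep the remainder if it is at least two syllables long.
    if len(eojeol) > 2 and eojeol[-1:] in PARTICLES:
        return eojeol[:-1]
    return None

assert index_term("정보를") == "정보"        # dictionary path
assert index_term("데이터는") == "데이터"    # unknown noun, pattern path
```

Separating the dictionary path from the pattern-based fallback mirrors the modular design the abstract describes: each analyzer can be replaced or tuned independently.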

A Reverse Segmentation Algorithm of Compound Nouns (복합명사의 역방향 분해 알고리즘)

  • Lee, Hyeon-Min;Park, Hyeok-Ro
    • The KIPS Transactions:PartB / v.8B no.4 / pp.357-364 / 2001
  • This paper proposes a new algorithm that decomposes Korean compound nouns using a unit-noun dictionary and an affix dictionary. Because the head of a Korean compound noun appears at the end, the proposed algorithm attempts decomposition from the last syllable toward the first, i.e., in the reverse direction. Experiments on 3,230 compound nouns extracted from ETRI's tagged corpus yielded a decomposition accuracy of about 96.6%; for compound nouns containing unregistered words, the accuracy was 77.5%. Most of the unregistered words in the test data were derived words containing affixes, and the proposed algorithm showed comparatively high accuracy in analyzing unregistered words with attached affixes.
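The reverse (tail-first) decomposition idea can be sketched as a greedy longest-match scan from the last syllable backwards. The unit-noun dictionary is a toy stand-in, and the greedy strategy without backtracking is a simplification of the paper's algorithm.

```python
# Toy unit-noun dictionary (hypothetical).
UNIT_NOUNS = {"정보", "처리", "학회", "논문", "복합", "명사"}

def reverse_segment(compound):
    """Segment a compound noun from its last syllable backwards,
    trying longer unit nouns at the tail first (the head of a Korean
    compound comes last, so the tail is matched first)."""
    parts = []
    end = len(compound)
    while end > 0:
        for start in range(max(0, end - 4), end):  # longer pieces first
            if compound[start:end] in UNIT_NOUNS:
                parts.append(compound[start:end])
                end = start
                break
        else:
            return None        # no decomposition found (greedy, no backtrack)
    return list(reversed(parts))

assert reverse_segment("정보처리학회") == ["정보", "처리", "학회"]
```

A production decomposer would add backtracking over alternative tail matches and the affix dictionary the abstract mentions; this sketch only shows the right-to-left direction of the scan.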


A Study on Context-Dependent Acoustic Models to Improve the Performance of Korean Speech Recognition (한국어 음성인식 성능향상을 위한 문맥의존 음향모델에 관한 연구)

  • 황철준;오세진;김범국;정호열;정현열
    • Journal of the Institute of Convergence Signal Processing / v.2 no.4 / pp.9-15 / 2001
  • In this paper we investigate context-dependent acoustic models to improve the performance of Korean speech recognition, using Korean phonological rules and a decision tree. The Successive State Splitting (SSS) algorithm can automatically generate a Hidden Markov Network (HM-Net), an efficient representation of phoneme-context-dependent HMMs. SSS is a powerful technique for designing the topology of tied-state HMMs, but it does not adequately handle phoneme contexts unseen in training and has problems in constraining the contextual domain. We therefore adopt a new state-clustering variant of SSS, Phonetic Decision Tree-based SSS (PDT-SSS), whose context splits are based on Korean phonological rules. This method combines the advantages of decision-tree clustering and SSS, and can generate a highly accurate HM-Net that can express any context. To verify the effectiveness of the adopted method, experiments were carried out on the KLE 452-word database and the YNU 200-sentence database. Korean phoneme, word, and sentence recognition experiments showed that the new state-clustering algorithm produces better phoneme, word, and continuous-speech recognition accuracy than conventional HMMs.
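The decision-tree clustering step at the heart of PDT-SSS can be illustrated with a one-dimensional sketch: among a set of phonetic questions, choose the split of triphone contexts that maximizes the Gaussian log-likelihood gain. All statistics and questions below are invented toy data, not the paper's models.

```python
import math

# Toy per-context statistics (hypothetical): for each (left, right)
# phone context, (count, sum, sum of squares) of a 1-D acoustic feature.
STATS = {
    ("a", "n"): (10, 20.0, 50.0),
    ("a", "m"): (10, 22.0, 58.0),
    ("i", "n"): (10, 80.0, 660.0),
    ("i", "m"): (10, 78.0, 630.0),
}
# Phonetic questions, e.g. derived from phonological rules.
QUESTIONS = {
    "left-is-i": lambda ctx: ctx[0] == "i",
    "right-is-n": lambda ctx: ctx[1] == "n",
}

def log_likelihood(stats_list):
    """Gaussian log-likelihood of pooled statistics (up to constants)."""
    n = sum(s[0] for s in stats_list)
    sx = sum(s[1] for s in stats_list)
    sxx = sum(s[2] for s in stats_list)
    var = max(sxx / n - (sx / n) ** 2, 1e-6)
    return -0.5 * n * (math.log(var) + 1)

def best_split(contexts):
    """Pick the question whose yes/no split gains the most likelihood."""
    base = log_likelihood([STATS[c] for c in contexts])
    best = None
    for name, q in QUESTIONS.items():
        yes = [c for c in contexts if q(c)]
        no = [c for c in contexts if not q(c)]
        if not yes or not no:
            continue
        gain = (log_likelihood([STATS[c] for c in yes])
                + log_likelihood([STATS[c] for c in no]) - base)
        if best is None or gain > best[1]:
            best = (name, gain)
    return best

name, gain = best_split(list(STATS))
assert name == "left-is-i"   # the vowel-context split separates the data best
```

Because splits are chosen by questions rather than by observed contexts alone, any unseen context can still be routed down the tree at recognition time, which is the property the abstract emphasizes.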


Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / v.5 no.3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and alter text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, etc. Hindi is the most widely used and official language of India, and is among the top five most spoken languages of the world. For English and other languages, a diverse range of POS taggers is available, but these taggers cannot be applied to Hindi, as Hindi is one of the most morphologically rich languages and there is a significant difference between its morphological structure and theirs. Thus, in this work a POS tagging system is presented for Hindi, using a hybrid approach that combines probability-based and rule-based methods. For tagging known words, a unigram probability model is used, whereas unknown words are tagged using various lexical and contextual features. Finite-state automata are constructed to demonstrate the rules, which are then implemented as regular expressions. A tagset of 29 standard part-of-speech tags was also prepared for this task, including two unique tags, a date tag and a time tag, which support all possible formats; regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while bounding the need for a large human-made corpus: the probability-based model increases automatic tagging, and the rule-based model bounds the need for an already-trained corpus. Based on a very small labeled training set (around 9,000 words), the approach yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
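The hybrid known/unknown split described above can be sketched as follows: a unigram model picks the most frequent training tag for known words, and regex rules handle pattern-based tags for unknown words. The training pairs, tag names, and rules are invented stand-ins, not the paper's tagset or corpus.

```python
import re
from collections import Counter, defaultdict

# Tiny hand-labeled training set (hypothetical; the paper used ~9,000 words).
TRAIN = [("main", "NN"), ("khata", "VB"), ("main", "NN"), ("main", "PRP")]

# Probability path: most frequent tag per known word (unigram model).
counts = defaultdict(Counter)
for word, tag_ in TRAIN:
    counts[word][tag_] += 1
UNIGRAM = {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Rule path: regular expressions implementing pattern-based tags
# such as date, time, and number for unknown words.
RULES = [
    (re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUM"),
]

def tag(word):
    if word in UNIGRAM:              # known word: probability-based path
        return UNIGRAM[word]
    for pattern, t in RULES:         # unknown word: rule-based path
        if pattern.match(word):
            return t
    return "UNK"                     # unresolved unknown word

assert tag("main") == "NN"           # most frequent training tag wins
assert tag("15/08/1947") == "DATE"
assert tag("09:30") == "TIME"
assert tag("42") == "NUM"
```

The design choice the abstract highlights is visible here: only the unigram table depends on labeled data, so the rule path keeps the required corpus small.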