• Title/Summary/Keyword: Probabilistic grammar


A Model of Probabilistic Parsing Automata (확률파싱오토마타 모델)

  • Lee, Gyung-Ok
    • Journal of KIISE, v.44 no.3, pp.239-245, 2017
  • Probabilistic grammars are used in natural language processing, and the result of parsing has to preserve the probabilities of the original grammar. Of the two representative parsing methods, LL parsing and LR parsing, the former preserves the probability information of the original grammar, but the latter does not. The characteristics of probabilistic parsing automata have been studied, but a model for generating such automata has not been known. This paper provides a model of probabilistic parsing automata based on single-state parsing automata. The generated automaton preserves the probabilities of the original grammar, so it is unnecessary to test whether the automaton is a probabilistic parsing automaton or to define a separate probability function for it. Additionally, an efficient automaton can be constructed by choosing an appropriate parameter.
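
As the abstract notes, LL-style (top-down) parsing preserves the original grammar's probabilities: the probability of a derivation is simply the product of the probabilities of the rules it uses. A minimal sketch of this property, with a made-up toy PCFG:

```python
from functools import reduce

# Toy PCFG (illustrative): S -> 'a' S with probability 0.4, S -> 'b' with 0.6.
PCFG = {
    ("S", ("a", "S")): 0.4,
    ("S", ("b",)): 0.6,
}

def derivation_probability(rules_used):
    """The probability of a derivation is the product of its rule probabilities."""
    return reduce(lambda p, rule: p * PCFG[rule], rules_used, 1.0)

# Leftmost (LL-style) derivation of "aab": S => aS => aaS => aab
print(round(derivation_probability([
    ("S", ("a", "S")),
    ("S", ("a", "S")),
    ("S", ("b",)),
]), 6))  # 0.4 * 0.4 * 0.6 = 0.096
```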

A Parser of Definitions in Korean Dictionary based on Probabilistic Grammar Rules (확률적 문법규칙에 기반한 국어사전의 뜻풀이말 구문분석기)

  • Lee, Su Gwang;Ok, Cheol Yeong
    • Journal of KIISE:Software and Applications, v.28 no.5, pp.448-448, 2001
  • The definitions in a Korean dictionary not only describe the meanings of headwords, but also contain various semantic information such as hypernymy/hyponymy, meronymy/holonymy, polysemy, homonymy, synonymy, antonymy, and semantic features. This paper aims to implement a parser as a basic tool for automatically acquiring such semantic information from the definitions in a Korean dictionary. For this purpose, we first constructed a part-of-speech tagged corpus and a tree-tagged corpus from the definitions. We then automatically extracted from these corpora, by statistical methods, the frequencies of words that are ambiguous in part-of-speech tag, as well as grammar rules and their probabilities. The parser is a probabilistic chart parser that uses the extracted data; the word frequencies and rule probabilities resolve the structural ambiguity of noun phrases during parsing. The parser uses grammar factoring, best-first search, and Viterbi search in order to reduce the number of nodes during parsing and to increase performance. We experimented with grammar-rule probabilities, left-to-right parsing, and left-first search. The experiments show that when the parser uses grammar-rule probabilities and left-first search simultaneously, parsing is most accurate, with a recall of 51.74% and a precision of 87.47% on raw corpus.
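
A minimal sketch of the Viterbi chart-parsing idea the abstract describes, resolving a noun phrase's structural ambiguity by keeping only the highest-probability analysis per span; the toy grammar, lexicon, and probabilities below are invented, not the rules extracted from the dictionary corpus:

```python
from collections import defaultdict

# Binary rules (parent, left child, right child, probability) and a lexicon;
# all invented for illustration.
BINARY = [
    ("NP", "N", "N", 0.4),   # left-branching compound
    ("NP", "NP", "N", 0.4),
    ("NP", "N", "NP", 0.2),  # right-branching alternative
]
LEXICON = {"Korean": [("N", 0.4)], "dictionary": [("N", 0.3)], "definition": [("N", 0.3)]}

def viterbi_parse(words):
    n = len(words)
    chart = defaultdict(dict)  # chart[(i, j)][label] = best probability for the span
    for i, word in enumerate(words):
        for label, p in LEXICON.get(word, []):
            chart[(i, i + 1)][label] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                      # split point
                for parent, left, right, p in BINARY:
                    pl = chart[(i, k)].get(left)
                    pr = chart[(k, j)].get(right)
                    if pl and pr:
                        cand = p * pl * pr                 # Viterbi: keep the max
                        if cand > chart[(i, j)].get(parent, 0.0):
                            chart[(i, j)][parent] = cand
    return chart[(0, n)]

# The left-branching reading of the compound wins under these probabilities.
print(viterbi_parse(["Korean", "dictionary", "definition"]))  # approx. {'NP': 0.00576}
```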

Korean Probabilistic Dependency Grammar Induction by morpheme (형태소 단위의 한국어 확률 의존문법 학습)

  • Choi, Seon-Hwa;Park, Hyuk-Ro
    • The KIPS Transactions:PartB, v.9B no.6, pp.791-798, 2002
  • In this paper, we present a new method for inducing a probabilistic dependency grammar (PDG) from a text corpus. As words in Korean are composed of a set of more basic morphemes, various dependency relations exist within a word, so if the induction process does not take these in-word dependency relations into account, the accuracy of the resulting grammar may be poor. In comparison with previous PDG induction methods, the main difference of the proposed method lies in the fact that it takes into account in-word dependency relations as well as inter-word dependency relations. To assess the performance of the proposed method, we conducted an experiment using a manually tagged corpus of 25,000 sentences compiled by the Korea Advanced Institute of Science and Technology (KAIST). The grammar induction produced 2,349 dependency rules. A parser with these dependency rules showed 69.77% accuracy, measured as the number of correct dependency relations relative to the total number of dependency relations in the best-1 parse trees of the sample sentences. The result shows that taking in-word dependency relations into account during grammar induction yields a more accurate dependency grammar.
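
A minimal sketch of estimating dependency-rule probabilities by relative frequency over both in-word (morpheme-level) and inter-word relations; the tag pairs and counts are invented for illustration, not the KAIST corpus data:

```python
from collections import Counter

# (dependent tag, head tag) pairs observed in a corpus; "in-word" pairs relate
# morphemes inside one word (e.g. noun + postposition), "inter-word" pairs
# relate morphemes across words.
observed = [
    ("NOUN", "POSTP"), ("NOUN", "POSTP"),   # in-word relations
    ("POSTP", "VERB"), ("POSTP", "VERB"),   # inter-word relations
    ("NOUN", "VERB"),
]

pair_counts = Counter(observed)
dependent_counts = Counter(dep for dep, _ in observed)

# Relative-frequency estimate: P(head tag | dependent tag)
rules = {pair: count / dependent_counts[pair[0]] for pair, count in pair_counts.items()}
for pair, prob in sorted(rules.items()):
    print(pair, round(prob, 3))
```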

TG-SPSR: A Systematic Targeted Password Attacking Model

  • Zhang, Mengli;Zhang, Qihui;Liu, Wenfen;Hu, Xuexian;Wei, Jianghong
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.5, pp.2674-2697, 2019
  • Identity authentication is a crucial line of defense for network security, and passwords are still the mainstream of identity authentication. Trawling password attacks have been extensively studied so far, but research on attacks exploiting personal information remains sporadic. Probabilistic context-free grammar (PCFG) and Markov chain-based models perform very well in trawling guessing. In this paper we propose a systematic targeted attacking model based on structure partition and string reorganization, denoted TG-SPSR, by migrating the above two models to targeted attacking. In the structure partition phase, besides dividing passwords into basic structures as in PCFG, we additionally define a trajectory-based keyboard pattern in the basic grammar and introduce index bits to accurately characterize the positions of special characters. Moreover, we construct a BiLSTM recurrent neural network classifier to characterize password reuse and modification behavior after defining nine kinds of modification rules. Extensive experimental results indicate that in online attacking, TG-SPSR outperforms traditional trawling attacking algorithms by about 275% on average, and outperforms its foremost counterparts, Personal-PCFG and TarGuess-I, by about 70% and 19%, respectively; in offline attacking, TG-SPSR outperforms traditional trawling attacking algorithms by about 90% on average, and outperforms Personal-PCFG and TarGuess-I by 85% and 30%, respectively.
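
A minimal sketch of the PCFG-style structure-partition step on which such models build: a password is split into maximal letter (L), digit (D), and symbol (S) segments. The full TG-SPSR grammar additionally defines trajectory-based keyboard patterns and index bits, which are not reproduced here:

```python
import itertools

def char_class(c):
    """Classify a character as letter (L), digit (D), or symbol (S)."""
    if c.isalpha():
        return "L"
    if c.isdigit():
        return "D"
    return "S"

def base_structure(password):
    """Split a password into maximal same-class segments, e.g. L5 D4 S1."""
    return [(cls, len(list(group)))
            for cls, group in itertools.groupby(password, key=char_class)]

# A hypothetical password, not drawn from any real dataset.
print(base_structure("alice2019!"))  # [('L', 5), ('D', 4), ('S', 1)]
```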

Korean Parsing Model using Various Features of a Syntactic Object (문장성분의 다양한 자질을 이용한 한국어 구문분석 모델)

  • Park, So-Young;Kim, Soo-Hong;Rim, Hae-Chang
    • The KIPS Transactions:PartB, v.11B no.6, pp.743-748, 2004
  • In this paper, we propose a probabilistic Korean parsing model that uses a syntactic feature, a functional feature, a content feature, and a size feature of a syntactic object for effective syntactic disambiguation. It restricts grammar rules to a binary-oriented form to deal with Korean properties such as variable word order and constituent ellipsis. In experiments, we analyze the parsing performance of each feature combination. Experimental results show that combining different features is preferable to combining similar features. Moreover, it is remarkable that the functional feature is more useful than the combination of the content feature and the size feature.
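
A minimal sketch of restricting grammar rules to binary-oriented form, decomposing an n-ary rule into a chain of binary rules via intermediate symbols; the rule names are invented, and the paper's actual binarization scheme may differ:

```python
def binarize(lhs, rhs):
    """Decompose the rule lhs -> rhs (len(rhs) >= 2) into binary rules,
    introducing intermediate symbols named lhs*1, lhs*2, ..."""
    rules, left = [], rhs[0]
    for i, symbol in enumerate(rhs[1:], start=1):
        new_lhs = lhs if i == len(rhs) - 1 else f"{lhs}*{i}"
        rules.append((new_lhs, (left, symbol)))
        left = new_lhs
    return rules

# An invented ternary rule: S -> NP NP VP becomes S*1 -> NP NP and S -> S*1 VP.
for rule in binarize("S", ("NP", "NP", "VP")):
    print(rule)
```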

Driver's Behavioral Pattern in Driver Assistance System (운전자 사용자경험기반의 인지향상 시스템 연구)

  • Jo, Doori;Shin, Donghee
    • Journal of Digital Contents Society, v.15 no.5, pp.579-586, 2014
  • This paper analyzes the recognition of driver behavior during lane changes using a context-free grammar. In contrast to conventional pattern-recognition techniques, context-free grammars can effectively describe features that are not easily represented by finite symbols. Instead of processing coordinate data, which must handle the features of multiple concurrent events separately, syntactic analysis was applied to recognize patterns in the symbolic sequence. The findings propose an effective and intuitive method for drivers and for researchers in the driving-safety field. Extending this approach with probabilistic parsing to achieve robust recognition is left as future work.
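
A minimal sketch of recognizing a maneuver as a symbol sequence generated by a small context-free grammar, in the spirit of the syntactic analysis described above; the event symbols and rules are invented, not the paper's grammar:

```python
# Grammar (invented): Change -> Approach 'c' ; Approach -> 'd' Approach | 'd'
# where 'd' = drift toward the lane line and 'c' = cross the line.

def parse_change(seq, i=0):
    """Return the index just past a Change starting at position i, or None."""
    j = parse_approach(seq, i)
    if j is not None and j < len(seq) and seq[j] == "c":
        return j + 1
    return None

def parse_approach(seq, i):
    """Match one or more 'd' symbols (the recursive rule Approach -> 'd' Approach)."""
    if i < len(seq) and seq[i] == "d":
        rest = parse_approach(seq, i + 1)
        return rest if rest is not None else i + 1
    return None

for seq in ("dddc", "dc", "cc"):
    print(seq, "->", parse_change(seq) == len(seq))
```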

Korean Probabilistic Syntactic Model using Head Co-occurrence (중심어 간의 공기정보를 이용한 한국어 확률 구문분석 모델)

  • Lee, Kong-Joo;Kim, Jae-Hoon
    • The KIPS Transactions:PartB, v.9B no.6, pp.809-816, 2002
  • Since natural language is inherently structurally ambiguous, one of the difficulties of parsing is resolving these structural ambiguities. Recently, probabilistic approaches to this disambiguation problem have received considerable attention because of their attractions, such as automatic learning, wide coverage, and robustness. In this paper, we focus on a Korean probabilistic parsing model that uses head co-occurrence. Because head co-occurrence is lexical information, it is prone to the data sparseness problem, so handling this problem is of primary importance. To alleviate it, we use a restricted and simplified phrase-structure grammar together with a back-off model for smoothing. The proposed model shows an accuracy of about 84%.
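
A minimal sketch of back-off smoothing for head co-occurrence: when a lexical (word-level) head pair is unseen, the estimate backs off to part-of-speech-level counts; all counts and totals are invented:

```python
# Invented counts: word-level head pairs are sparse, POS-level pairs are not.
WORD_COUNTS = {("사과를", "먹다"): 3}    # (dependent head word, governing head word)
POS_COUNTS = {("NP", "VP"): 120}
WORD_TOTAL, POS_TOTAL = 10, 600

def cooccurrence_prob(word_pair, pos_pair):
    """Use the lexical estimate when the pair was seen; otherwise back off."""
    if word_pair in WORD_COUNTS:
        return WORD_COUNTS[word_pair] / WORD_TOTAL
    return POS_COUNTS.get(pos_pair, 0) / POS_TOTAL

print(cooccurrence_prob(("사과를", "먹다"), ("NP", "VP")))  # seen: 0.3
print(cooccurrence_prob(("책을", "읽다"), ("NP", "VP")))    # unseen, backs off: 0.2
```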

Probabilistic Dependency Grammar Induction (한국어 확률 의존문법 학습)

  • Choi, Seon-Hwa;Park, Hyuk-Ro
    • Proceedings of the Korean Information Science Society Conference, 2003.04c, pp.513-515, 2003
  • This paper deals with the automatic induction of a probabilistic dependency grammar from a corpus. Unlike previous work, which learned dependency relations only between the functional words of constituents, we propose a learning method that reflects the facts that a Korean constituent is a combination of a content word and a functional word, and that the dependency relation between the functional word of one constituent and the content word of another is meaningful. Training on a tagged corpus of 30,600 sentences extracted from KAIST's 31,086-sentence tree-annotated corpus yielded 1,101 dependency rules, reducing the initial grammar by 64%; parsing 486 test sentences with these rules showed a parsing accuracy of 73.81%.
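
One common way an induced grammar can shrink is by discarding rules whose estimated probability falls below a threshold; a minimal sketch with invented rules (this is illustrative and not necessarily how the 64% reduction above was obtained):

```python
# Invented rule probabilities; the threshold is illustrative.
rules = {("NOUN", "VERB"): 0.52, ("ADV", "VERB"): 0.31, ("NOUN", "ADV"): 0.002}
THRESHOLD = 0.01

pruned = {rule: p for rule, p in rules.items() if p >= THRESHOLD}
print(f"kept {len(pruned)} of {len(rules)} rules:", pruned)
```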


Bracketing Input for Accurate Parsing

  • No, Yong-Kyoon
    • Proceedings of the Korean Society for Language and Information Conference, 2007.11a, pp.358-364, 2007
  • Syntax parsers can benefit from speakers' intuitions about constituent structure, indicated in the input string in the form of parentheses. Focusing on languages like Korean, whose orthographic convention requires more than one word to be written without spaces, we describe an algorithm for passing this bracketing information through the tagger to the probabilistic CFG parser, together with one for raising (or penalizing, as the case may be) the probabilities of putative constituents suggested by the parser. It is shown that two or three constituents marked in the input suffice to guide the parser to the correct parse as the most likely one, even for sentences that are considered long.
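
A minimal sketch of using input brackets to reward or penalize putative constituents during chart parsing: a span that exactly matches a marked bracket gets a bonus factor, while a span crossing a bracket is penalized; the spans and factors are invented, not the paper's algorithm:

```python
# Invented bracket spans and adjustment factors.
BRACKETS = [(0, 3), (3, 5)]     # speaker-marked constituent spans (i, j)
BONUS, PENALTY = 2.0, 0.1

def crosses(span, bracket):
    """True if the two spans overlap without one containing the other."""
    (i, j), (a, b) = span, bracket
    return (i < a < j < b) or (a < i < b < j)

def adjust(prob, span):
    """Reward a span matching a bracket; penalize one crossing a bracket."""
    for bracket in BRACKETS:
        if span == bracket:
            return prob * BONUS
        if crosses(span, bracket):
            return prob * PENALTY
    return prob

print(adjust(0.05, (0, 3)))  # matches a bracket: 0.1
print(adjust(0.05, (2, 4)))  # crosses (0, 3): 0.005
print(adjust(0.05, (1, 2)))  # nested inside (0, 3): unchanged, 0.05
```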
