Title/Summary/Keyword: Lexical model


A Multi-Resolution Approach to Restaurant Named Entity Recognition in Korean Web

  • Kang, Bo-Yeong; Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems, v.12 no.4, pp.277-284, 2012
  • Named entity recognition (NER) techniques can play a crucial role in extracting information from the web. While NER systems with relatively high performance have been developed through careful manipulation of terms with a statistical model, term mismatches often degrade the performance of such systems because the strings of all candidate entities are not known a priori. Despite the importance of lexical-level term mismatches for NER systems, however, most NER approaches developed to date use only the term string itself and simple term-level features, and do not exploit the semantic features of terms, which can handle term variations effectively. As a solution to this problem, we propose matching the semantic concepts of term units in restaurant named entities (NEs), where these units are automatically generated from multiple resolutions of a semantic tree. As a test experiment, we applied our restaurant NER scheme to 49,153 nouns in Korean restaurant web pages. Our scheme achieved an average accuracy of 87.89% on test data, considerably better than the 78.70% accuracy of the baseline system.
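A minimal sketch of the multi-resolution concept matching idea from the entry above; the tiny semantic tree, the term units, and the depth-based match score are invented placeholders, not the paper's resources:

    # Each term unit maps to a root-to-leaf path in a semantic tree; truncating
    # the path at increasing depths gives coarser-to-finer "resolutions".
    SEMANTIC_TREE = {  # term unit -> concept path (hypothetical)
        "galbi":   ("food", "meat", "beef"),
        "bulgogi": ("food", "meat", "beef"),
        "house":   ("place", "building", "residence"),
    }

    def concepts_at(term, depth):
        """Return the concept path of a term truncated to the given depth."""
        return SEMANTIC_TREE.get(term, ())[:depth]

    def semantic_match(term_a, term_b, max_depth=3):
        """Deepest resolution at which two terms share a non-empty concept prefix."""
        shared = 0
        for depth in range(1, max_depth + 1):
            prefix = concepts_at(term_a, depth)
            if prefix and prefix == concepts_at(term_b, depth):
                shared = depth
        return shared

    # "galbi" and "bulgogi" differ as strings but match at depth 3 ("beef"),
    # so candidate NEs containing either can be matched at the concept level.
    print(semantic_match("galbi", "bulgogi"))  # 3
    print(semantic_match("galbi", "house"))    # 0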

A Sentence Reduction Method using Part-of-Speech Information and Templates (품사 정보와 템플릿을 이용한 문장 축소 방법)

  • Lee, Seung-Soo; Yeom, Ki-Won; Park, Ji-Hyung; Cho, Sung-Bae
    • Journal of KIISE: Software and Applications, v.35 no.5, pp.313-324, 2008
  • Sentence reduction is an information-compression process that removes extraneous words and phrases while retaining the basic meaning of the original sentence. Most research on sentence reduction has required large lexical and syntactic resources and has focused on extracting or removing extraneous constituents such as words, phrases, and clauses via a complicated parsing process. These approaches have two problems. First, the lexical resources obtainable from training data are very limited. Second, it is difficult to reduce sentences in languages that have no method for reliable syntactic parsing, because of ambiguity and exceptional expressions. To solve these problems, we propose a sentence reduction method that uses templates and POS (part-of-speech) information without a parsing process. The proposed method creates a new sentence using both Sentence Reduction Templates, which decide the form of the reduced sentence, and Grammatical POS-based Reduction Rules, which compose a grammatical sentence structure. In addition, we use the Viterbi algorithm over HMMs (Hidden Markov Models) to avoid the exponential computation that arises when applying the Sentence Reduction Templates. Our experiments show that the proposed method achieves acceptable results in comparison with previous sentence reduction methods.
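The entry above avoids enumerating all template applications by Viterbi decoding over an HMM. Below is a generic, self-contained Viterbi sketch in Python; the KEEP/DROP states, POS observations, and all probabilities are invented for illustration and are not the paper's model:

    import math

    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Most probable state sequence for obs (log-space to avoid underflow)."""
        V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            V.append({}); back.append({})
            for s in states:
                prob, prev = max(
                    (V[t-1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][obs[t]]), p)
                    for p in states)
                V[t][s] = prob
                back[t][s] = prev
        last = max(V[-1], key=V[-1].get)
        path = [last]
        for t in range(len(obs) - 1, 0, -1):
            path.append(back[t][path[-1]])
        return list(reversed(path))

    # Toy usage: decide KEEP/DROP for each POS tag in a sentence (made-up numbers).
    states = ("KEEP", "DROP")
    obs = ("NOUN", "ADV", "VERB")
    start = {"KEEP": 0.7, "DROP": 0.3}
    trans = {"KEEP": {"KEEP": 0.6, "DROP": 0.4}, "DROP": {"KEEP": 0.5, "DROP": 0.5}}
    emit = {"KEEP": {"NOUN": 0.5, "VERB": 0.4, "ADV": 0.1},
            "DROP": {"NOUN": 0.2, "VERB": 0.2, "ADV": 0.6}}
    print(viterbi(obs, states, start, trans, emit))  # ['KEEP', 'DROP', 'KEEP']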

Verification of the Usefulness of the Mock TOEIC Test using Corpus Indices : Focusing on the Analysis of Difficulty and Discrimination (코퍼스 지표를 활용한 모의 토익시험의 유용성 검증 : 난이도와 변별도 분석을 중심으로)

  • Lee, Yena
    • The Journal of the Korea Contents Association, v.21 no.10, pp.576-593, 2021
  • In this study, to investigate the factors that affect the correct-answer rate and the discrimination of the TOEIC test, a regression analysis was performed using the corpus indices, derived from item analysis, that influence the correct-answer rate and the discrimination of each part. The basic-count index word_length, the cohesion index LSA_overlap_adjacent_sentences, the lexical-diversity index MTLD_VOCD, the connectives index All_logical_causal_connectives_incidence, the situational-model index causal_particles_causal_verbs_Ratio, the syntactic-complexity index Left_embeddedness, and the syntactic-pattern-density index Infinitive_density were found to have negative effects on the correct-answer rate; these factors that lower the correct-answer rate can be utilized when setting learning goals. The lexical-diversity index MTLD_VOCD, the connectives index Additive_connectives_incidence, the syntactic-pattern-density index Infinitive_density, and the lexical-information index person1_2_pronoun_incidence were found to have positive effects on discrimination. Factors that increase discrimination may provide important information for developing a learning program.
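A hedged sketch of the regression step described above, using scikit-learn; the feature matrix and coefficients are fabricated stand-ins for the corpus indices (negative coefficients would flag difficulty-raising indices, as in the study):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_items = 100
    X = rng.normal(size=(n_items, 3))   # e.g. word_length, MTLD_VOCD, Left_embeddedness
    # Assume longer words and deeper embedding lower the correct-answer rate.
    y = 0.8 - 0.05 * X[:, 0] - 0.03 * X[:, 2] + rng.normal(0, 0.02, n_items)

    model = LinearRegression().fit(X, y)
    for name, coef in zip(["word_length", "MTLD_VOCD", "Left_embeddedness"], model.coef_):
        print(f"{name}: {coef:+.3f}")   # negative coefficients flag difficulty drivers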

Incremental Enrichment of Ontologies through Feature-based Pattern Variations (자질별 관계 패턴의 다변화를 통한 온톨로지 확장)

  • Lee, Sheen-Mok; Chang, Du-Seong; Shin, Ji-Ae
    • The KIPS Transactions: Part B, v.15B no.4, pp.365-374, 2008
  • In this paper, we propose a model to enrich an ontology by incrementally extending its relations through pattern variations. To generalize the initial patterns, combinations of features are considered as candidate patterns. The candidate patterns are used to extract relations from Wikipedia and are ranked by reliability based on corpus frequency. Selected patterns are then used to extract relations, while the extracted relations are in turn used to extend the patterns of the relation. By varying patterns during the incremental enrichment process, the range of pattern selection is broadened and refined, which can increase the coverage and accuracy of the extracted relations. In experiments with single-feature pattern models, we observe that the lexical, headword, and hypernym features provide reliable information, while POS and syntactic features provide general information useful for relation enrichment. Based on observations of which feature types are appropriate for each syntactic unit type, we propose a pattern model based on feature composition as ongoing work.
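The bootstrapping loop described above (patterns extract relations, relations validate pattern variants) can be sketched as follows; the seed pattern, corpus sentences, and the "capital_of" relation are toy stand-ins, not the paper's Wikipedia data:

    import re

    corpus = [
        "Seoul is the capital of Korea.",
        "Paris is the capital of France.",
        "Paris, the capital of France, hosted the games.",
    ]
    patterns = [re.compile(r"(\w+) is the capital of (\w+)")]   # seed pattern

    def extract(patterns, corpus):
        """Apply every pattern to every sentence and collect relation triples."""
        found = set()
        for pat in patterns:
            for sent in corpus:
                for m in pat.finditer(sent):
                    found.add((m.group(1), "capital_of", m.group(2)))
        return found

    relations = extract(patterns, corpus)

    # A candidate pattern variant (appositive form) is kept only if it
    # re-finds instances already extracted by the reliable seed pattern.
    candidate = re.compile(r"(\w+), the capital of (\w+)")
    if extract([candidate], corpus) & relations:
        patterns.append(candidate)

    print(extract(patterns, corpus))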

A Study on the Extension of Disaster Safety Information Service based on Linked Open Data (LOD기반의 재난안전 정보서비스 확장에 관한 연구)

  • Kim, Tae-Young; Gang, Ju-Yeon; Kim, Hye-Young; Kim, Yong
    • Journal of the Korean Society for Library and Information Science, v.51 no.3, pp.163-188, 2017
  • This study aims to propose a disaster safety information service model based on LOD (Linked Open Data) for effective management and dissemination of such information. To achieve this aim, the current state of disaster safety information was analyzed through online searches and face-to-face interviews, and the information was divided into six types. Finally, this study proposes a concrete process for building a disaster safety information LOD service, with considerations reflecting the characteristics of the information. The process is based on the Guidelines for Building Linked Data published by the National Information Society Agency. In particular, an ontology concept model was defined using standard lexical resources and modeling tools based on the six types of disaster safety information, and classes and properties were proposed. The results of this study should make disaster safety information more useful to the general public.
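A minimal sketch of publishing one disaster-safety record as LOD triples with the rdflib library; the namespace, class, and property names below are assumptions for illustration, not the ontology actually proposed in the study:

    from rdflib import Graph, Literal, Namespace, RDF

    DS = Namespace("http://example.org/disaster-safety#")   # hypothetical namespace
    g = Graph()
    g.bind("ds", DS)

    report = DS["report/2017-0001"]                 # hypothetical resource URI
    g.add((report, RDF.type, DS.DisasterReport))    # class name is an assumption
    g.add((report, DS.disasterType, Literal("flood")))
    g.add((report, DS.region, Literal("Seoul")))
    g.add((report, DS.issuedYear, Literal(2017)))

    print(g.serialize(format="turtle"))             # emit the record as Turtle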

Korean Semantic Role Labeling Using Case Frame Dictionary and Subcategorization (격틀 사전과 하위 범주 정보를 이용한 한국어 의미역 결정)

  • Kim, Wan-Su; Ock, Cheol-Young
    • Journal of KIISE, v.43 no.12, pp.1376-1384, 2016
  • To process sentences as human beings do, computers require the capability to analyze and process all possibilities of human expression; linguistic information processing thus forms the initial basis. When analyzing a sentence syntactically, it is necessary to divide the sentence into components, find obligatory arguments centered on predicates, identify the sentence core, and understand the semantic relations between arguments and predicates. In this study, we applied a case frame dictionary based on The Korean Standard Dictionary of The National Institute of the Korean Language, and used a CRF model that incorporates the subcategorization of predicates, as encoded in the Korean Lexical Semantic Network (UWordMap), as features for semantic role labeling. Semantic roles were tagged automatically with a CRF model whose features comprise word information, predicates, the case frame dictionary, and hypernyms of words. This method achieved an accuracy of 83.13%, higher than the 81.2% of the existing method.
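A hedged sketch of CRF-based semantic role labeling with the third-party sklearn-crfsuite package; the feature set mirrors the paper's categories (word, predicate, hypernym), but the toy sentence, labels, and feature names are invented:

    import sklearn_crfsuite

    def token_features(sent, i):
        word, predicate, hypernym = sent[i]
        return {
            "word": word,             # surface form
            "predicate": predicate,   # governing predicate (case-frame lookup key)
            "hypernym": hypernym,     # hypernym from a lexical network like UWordMap
            "position": i,
        }

    # One toy sentence: (word, predicate, hypernym) triples with role labels.
    sent = [("John", "eat", "person"), ("apple", "eat", "food"), ("eats", "eat", "act")]
    labels = ["ARG0", "ARG1", "PRED"]

    X = [[token_features(sent, i) for i in range(len(sent))]]
    y = [labels]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))   # [['ARG0', 'ARG1', 'PRED']] on this toy training data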

Chatting Pattern Based Game BOT Detection: Do They Talk Like Us?

  • Kang, Ah Reum; Kim, Huy Kang; Woo, Jiyoung
    • KSII Transactions on Internet and Information Systems (TIIS), v.6 no.11, pp.2866-2879, 2012
  • Among the various security threats in online games, the use of game bots is the most serious problem. Previous studies on game bot detection have proposed many methods to find behaviors that discriminate bots from humans, based on the fact that a bot's playing pattern differs from a human's. In this paper, we examine chatting data that reflects gamers' communication patterns and propose a communication-pattern analysis framework for online game bot detection. In massively multiplayer online role-playing games (MMORPGs), game bots use chatting messages in a different way from normal users. We derive four feature sets: network, descriptive, diversity, and text features. To measure the diversity of communication patterns, we propose lightly summarized indices that are computationally inexpensive and intuitive. For text features, we derive lexical, syntactic, and semantic features from chat contents using text mining techniques. To build the learning model for game bot detection, we test and compare three classification models: random forest, logistic regression, and lazy learning. We apply the proposed framework to AION, operated by NCsoft, a leading online game company in Korea. In our experiments, the random forest outperformed logistic regression and lazy learning; the model employing the entire feature set gave the highest performance, with a precision of 0.893 and a recall of 0.965.
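The classifier comparison described above can be sketched with scikit-learn as follows; the feature matrix is random stand-in data, and k-nearest neighbors is used here as a common stand-in for "lazy learning":

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 10))        # network/descriptive/diversity/text features
    y = rng.integers(0, 2, size=200)      # 1 = bot, 0 = human (synthetic labels)

    for name, clf in [("random forest", RandomForestClassifier()),
                      ("logistic regression", LogisticRegression(max_iter=1000)),
                      ("lazy learning (kNN)", KNeighborsClassifier())]:
        scores = cross_val_score(clf, X, y, cv=5, scoring="precision")
        print(f"{name}: precision {scores.mean():.3f}")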

Definition and Extraction of Causal Relations for Question-Answering on Fault-Diagnosis of Electronic Devices (전자장비 고장진단 질의응답을 위한 인과관계 정의 및 추출)

  • Lee, Sheen-Mok; Shin, Ji-Ae
    • Journal of KIISE: Software and Applications, v.35 no.5, pp.335-346, 2008
  • Causal relations in an ontology should be defined based on the inference types necessary to solve problems specific to the application as well as the domain. In this paper, we present a model for defining and extracting causal relations in an application ontology for question answering (QA) on fault diagnosis of electronic devices. Causal categories are defined by analyzing generic patterns of the QA application; relations between concepts in the corpus belonging to these causal categories are defined as causal relations. Instances of causal relations are extracted using lexical patterns in the concept definitions of the domain and extended incrementally with information from a thesaurus. In an evaluation by domain specialists, our model shows a precision of 92.3% in classifying relations and a precision of 80.7% in identifying causal relations at the extraction phase.
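A minimal sketch of lexical-pattern extraction of causal relations from concept definitions; the patterns and the sample sentence below are illustrative assumptions, not the paper's actual pattern set:

    import re

    CAUSAL_PATTERNS = [
        re.compile(r"(?P<cause>[\w\s]+?) causes (?P<effect>[\w\s]+)"),
        re.compile(r"(?P<effect>[\w\s]+?) is caused by (?P<cause>[\w\s]+)"),
        re.compile(r"if (?P<cause>[\w\s]+?), then (?P<effect>[\w\s]+)"),
    ]

    def extract_causal(text):
        """Collect (cause, effect) pairs matched by any lexical pattern."""
        relations = []
        for pat in CAUSAL_PATTERNS:
            for m in pat.finditer(text.lower()):
                relations.append((m.group("cause").strip(), m.group("effect").strip()))
        return relations

    definition = ("Overheating is caused by fan failure. "
                  "If the fuse blows, then the power supply shuts down.")
    print(extract_causal(definition))
    # [('fan failure', 'overheating'), ('the fuse blows', 'the power supply shuts down')]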

Korean Head-Tail Tokenization and Part-of-Speech Tagging by using Deep Learning (딥러닝을 이용한 한국어 Head-Tail 토큰화 기법과 품사 태깅)

  • Kim, Jungmin; Kang, Seungshik; Kim, Hyeokman
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.4, pp.199-208, 2022
  • Korean is an agglutinative language in which one or more morphemes combine to form a single word. Part-of-speech tagging separates each morpheme of a word and attaches a part-of-speech tag. In this study, we propose a new Korean part-of-speech tagging method based on Head-Tail tokenization, which divides a word into a lexical-morpheme part (head) and a grammatical-morpheme part (tail) without decomposing compound words. In this method, the head and tail are divided at a syllable boundary, without restoring irregular inflections or abbreviated syllables. We implemented a Korean part-of-speech tagger using Head-Tail tokenization and deep learning. To address the low tagging accuracy caused by the large number of complex tags generated by segmentation, we reduced the tag set to complex tags composed of coarse classification tags, which improved tagging accuracy. The performance of the Head-Tail tagger was evaluated using BERT, syllable-bigram, and subword-bigram embeddings; both syllable-bigram and subword-bigram embeddings improved performance over plain BERT. Integrating the Head-Tail tokenization model with the simplified part-of-speech tagging model achieved 98.99% word-unit accuracy and 99.08% token-unit accuracy. The experiments also showed that tagging performance improves when the maximum token length is limited to twice the number of words.
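A toy sketch of the Head-Tail split at a syllable boundary; the actual system learns the split point with deep learning, whereas here a small hand-made list of grammatical tails stands in for the model:

    # Sample Korean particles acting as grammatical tails (illustrative only).
    TAILS = ["에서", "에게", "은", "는", "이", "가", "을", "를", "로"]

    def head_tail(word):
        """Split a word into (head, tail) at the longest matching tail suffix."""
        for tail in sorted(TAILS, key=len, reverse=True):
            if word.endswith(tail) and len(word) > len(tail):
                return word[:-len(tail)], tail
        return word, ""   # no grammatical tail found

    for w in ["학교에서", "사람이", "책을"]:
        head, tail = head_tail(w)
        print(f"{w} -> head={head!r}, tail={tail!r}")
    # 학교에서 -> head='학교', tail='에서'   (school + locative particle)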

Development of a Malicious URL Machine Learning Detection Model Reflecting the Main Feature of URLs (URL 주요특징을 고려한 악성URL 머신러닝 탐지모델 개발)

  • Kim, Youngjun; Lee, Jaewoo
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.12, pp.1786-1793, 2022
  • Cyber-attacks such as smishing and hacking mail exploiting COVID-19 and political and social issues have continued in recent years. Machine learning and deep learning research is being conducted to prevent damage from cyber-attacks that use malicious links to breach personal data. Previous studies lacked a sufficient basis for judging attacks to be malicious because the features of their data sets were overly simple. In this paper, nine main features of three types, "URL Days", "URL Word", and "URL Abnormal", are proposed in addition to the lexical features of URLs reflected in previous research. F1-score and accuracy were measured with four different machine learning algorithms. In a comparative analysis with existing research, the proposed features yielded a 0.9% improvement and the highest value of 98.5% in F1-score and accuracy, showing that the main features contribute to improving both accuracy and performance.
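A hedged sketch of URL feature extraction in the spirit of the "URL Word" / "URL Abnormal" idea; the paper defines nine specific features, and the simplified ones below are stand-ins, not the actual feature set:

    import re
    from urllib.parse import urlparse

    def url_features(url):
        """Derive simple lexical and abnormality features from a URL string."""
        parsed = urlparse(url)
        host = parsed.netloc
        return {
            "url_length": len(url),
            "num_digits": sum(c.isdigit() for c in url),
            "num_subdomains": max(host.count(".") - 1, 0),
            "has_ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
            "has_at_symbol": "@" in url,
            "suspicious_words": sum(w in url.lower() for w in ("login", "verify", "update")),
        }

    print(url_features("http://192.168.0.1/login-update"))
    # {'url_length': 31, 'num_digits': 8, ..., 'has_ip_host': True, 'suspicious_words': 2}

A feature dictionary like this would then be vectorized and fed to the machine learning classifiers compared in the study.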