• Title/Summary/Keyword: Lexical similarity


Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li;Cheng Li;Honglie Zhang
    • Journal of Information Processing Systems
    • /
    • v.19 no.5
    • /
    • pp.631-640
    • /
    • 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order, and optimized variants that do incorporate word order leave the weights hard to determine. In this paper, building on the keyword-overlap similarity algorithm, a short English text similarity algorithm based on lexical chunk theory (LC-SETSA) is proposed, which introduces lexical chunk theory from cognitive psychology into short English text similarity calculation for the first time. Lexical chunks are used to segment short English texts, and the segmentation results preserve both the semantic connotation and the fixed word order of the chunks; the overlap similarity of the lexical chunks is then calculated accordingly. Finally, comparative experiments are carried out, and the results show that the proposed algorithm is feasible, stable, and effective to a large extent.
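The chunk-overlap calculation described above can be sketched as follows. The paper's exact formula and chunking procedure are not reproduced here, so this sketch assumes a generic Dice-style overlap over chunk tuples (tuples preserve each chunk's fixed internal word order).

```python
def overlap_similarity(chunks_a, chunks_b):
    """Dice-style overlap between two sets of lexical chunks.

    Chunks are tuples of words, so the fixed internal word order of a
    chunk is preserved when comparing. (Assumed measure, not LC-SETSA's
    exact formula.)
    """
    a, b = set(chunks_a), set(chunks_b)
    if not a and not b:
        return 1.0
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))


# hypothetical chunkings of two short texts
text1 = [("in", "spite", "of"), ("the", "rain"), ("we", "went")]
text2 = [("in", "spite", "of"), ("the", "snow"), ("we", "went")]
score = overlap_similarity(text1, text2)   # 2 shared chunks out of 3+3
```

Comparing whole chunks rather than single keywords is what lets a fixed expression like `("in", "spite", "of")` count as one match instead of three order-free words.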

An Effective Estimation method for Lexical Probabilities in Korean Lexical Disambiguation (한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법)

  • Lee, Ha-Gyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1588-1597
    • /
    • 1996
  • This paper describes an estimation method for lexical probabilities in Korean lexical disambiguation. In the stochastic approach to lexical disambiguation, lexical probabilities and contextual probabilities are generally estimated from statistical data extracted from corpora. For Korean, it is desirable to apply lexical probabilities in terms of word phrases, because sentences are spaced in units of word phrases. However, Korean word phrases are so multiform that lexical probabilities often cannot be estimated directly in terms of word phrases, even when fairly large corpora are used. To overcome this problem, this research defines a similarity between word phrases from the viewpoint of lexical analysis and proposes an estimation method for Korean lexical probabilities based on that similarity. In this method, when a lexical probability for a word phrase cannot be estimated directly, it is estimated indirectly through a word phrase similar to the given one. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.
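The back-off idea can be sketched as follows. The probability table and the similarity lookup (`most_similar`) are hypothetical stand-ins for the paper's corpus statistics and word-phrase similarity measure.

```python
def lexical_prob(phrase, analysis, counts, most_similar):
    """P(analysis | phrase), backing off to the most similar word phrase
    when the phrase itself never appears in the corpus counts."""
    def prob(p):
        total = sum(n for (q, _), n in counts.items() if q == p)
        return counts.get((p, analysis), 0) / total if total else None

    direct = prob(phrase)
    if direct is not None:
        return direct
    fallback = prob(most_similar(phrase))
    return fallback if fallback is not None else 0.0


# toy counts: (word phrase, morphological analysis) -> corpus frequency
counts = {("gana", "go+DECL"): 8, ("gana", "go+Q"): 2}
sim = lambda p: "gana"          # hypothetical similarity lookup
p = lexical_prob("gabnida", "go+DECL", counts, sim)   # unseen phrase
```

Here the unseen phrase borrows the probability distribution of its most similar attested phrase instead of receiving a zero probability.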


Web Image Caption Extraction using Positional Relation and Lexical Similarity (위치적 연관성과 어휘적 유사성을 이용한 웹 이미지 캡션 추출)

  • Lee, Hyoung-Gyu;Kim, Min-Jeong;Hong, Gum-Won;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.4
    • /
    • pp.335-345
    • /
    • 2009
  • In this paper, we propose a new web image caption extraction method that considers both the positional relation between a caption and an image and the lexical similarity between a caption and the main text containing it. The positional relation describes where the caption is located with respect to the distance and direction of the corresponding image. The lexical similarity indicates how likely the main text is to generate the caption of the image. Compared with previous image caption extraction approaches, which utilize only independent features of images and captions, the proposed approach improves caption extraction recall and precision, raising F-measure by 28%, by including the additional positional-relation and lexical-similarity features.
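A minimal sketch of the two feature types, assuming simple (x, y) box centers and token overlap; the paper's actual feature definitions are richer than this.

```python
def positional_features(image_center, caption_center):
    """Distance and direction of a caption relative to an image,
    using (x, y) centers; a simplified stand-in for the paper's features."""
    (ix, iy), (cx, cy) = image_center, caption_center
    distance = ((cx - ix) ** 2 + (cy - iy) ** 2) ** 0.5
    direction = "below" if cy > iy else "above"
    return distance, direction


def lexical_similarity(caption_tokens, main_text_tokens):
    """Share of caption tokens that also occur in the surrounding text."""
    cap = set(caption_tokens)
    return len(cap & set(main_text_tokens)) / len(cap) if cap else 0.0


dist, direc = positional_features((100, 200), (100, 260))
sim = lexical_similarity(["sunset", "over", "seoul"],
                         ["photo", "of", "a", "sunset", "in", "seoul"])
```

A caption candidate scoring high on both features (close to the image, wording echoed by the main text) is more likely to be a true caption than one supported by either feature alone.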

Korean Semantic Similarity Measures for the Vector Space Models

  • Lee, Young-In;Lee, Hyun-jung;Koo, Myoung-Wan;Cho, Sook Whan
    • Phonetics and Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.49-55
    • /
    • 2015
  • It is argued in this paper that, in determining semantic similarity, Korean words should be recategorized with a focus on their semantic relation to ontology, in light of cross-linguistic morphological variation. It is proposed, in particular, that Korean semantic similarity should be measured on three tracks: a human-judgement track, a relatedness track, and a cross-part-of-speech relations track. As demonstrated in Yang et al. (2015), GloVe, an unsupervised learning model of semantic similarity, is applicable to Korean, with its performance compared against human judgement results. Based on this compatibility, it was further hypothesized that the model's performance would likely vary with different kinds of specific relations in different languages. An attempt was made to analyze them in terms of two major Korean-specific categories involved in lexical and cross-POS relations. It is concluded that languages must be analyzed with varying methods so that semantic components across languages may be assigned appropriate semantic distances in vector space models.
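Semantic distance in vector space models such as GloVe is conventionally measured with cosine similarity, which can be sketched as follows (the embeddings below are toy values, not actual GloVe vectors).

```python
import math


def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors, the standard
    semantic-similarity measure in vector space models."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


# toy 3-dimensional embeddings (hypothetical values)
king = [0.8, 0.3, 0.1]
queen = [0.7, 0.4, 0.1]
apple = [0.1, 0.2, 0.9]
```

Scores from such a model are what get compared against the human-judgement track described above.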

Alignment of Hypernym-Hyponym Noun Pairs between Korean and English, Based on the EuroWordNet Approach (유로워드넷 방식에 기반한 한국어와 영어의 명사 상하위어 정렬)

  • Kim, Dong-Sung
    • Language and Information
    • /
    • v.12 no.1
    • /
    • pp.27-65
    • /
    • 2008
  • This paper presents a set of methodologies for aligning hypernym-hyponym noun pairs between Korean and English, based on the EuroWordNet approach. Following the methods used in EuroWordNet, our approach makes extensive use of WordNet in four steps of the building process: 1) monolingual dictionaries are used to extract proper hypernym-hyponym noun pairs, 2) a bilingual dictionary converts the extracted pairs, 3) WordNet serves as the backbone of the alignment criteria, and 4) WordNet is used to select the most similar pair among the candidates. The importance of this study lies not only in enriching semantic links between the two languages, but also in integrating lexical resources based on a language-specific and language-dependent structure. Our approach aims at building an accurate and detailed lexical resource with proper measures, rather than at the fast development of a generic one using NLP techniques.


The neighborhood size and frequency effect in Korean words (한국어 단어재인에서 나타나는 이웃효과)

  • Kwon You-An;Cho Hye-Suk;Nam Ki-Chun
    • Proceedings of the KSPS conference
    • /
    • 2006.05a
    • /
    • pp.117-120
    • /
    • 2006
  • This paper examined two hypotheses. First, if the first syllable of a word plays an important role in visual word recognition, it may be the unit of word neighborhood. Second, if the first syllable is the unit of lexical access, neighborhood size and neighborhood frequency effects should appear in a lexical decision task and a form-primed lexical decision task. We conducted two experiments. Experiment 1 showed that words with large neighborhoods produced an inhibitory effect in the lexical decision task (LDT). Experiment 2 showed an interaction between the neighborhood frequency effect and word-form similarity in the form-primed LDT. We conclude that the first syllable of Korean words may be the unit of word neighborhood and play a central role in lexical access.


Ontology Mapping using Semantic Relationship Set of the WordNet (워드넷의 의미 관계 집합을 이용한 온톨로지 매핑)

  • Kwak, Jung-Ae;Yong, Hwan-Seung
    • Journal of KIISE:Databases
    • /
    • v.36 no.6
    • /
    • pp.466-475
    • /
    • 2009
  • Considerable research in the field of ontology mapping has been done as information sharing and reuse have become necessary through the development of a variety of ontologies. Ontology mapping methods consist of lexical, structural, instance, and logical-inference similarity computation. The lexical similarity computation used in most ontology mapping methods performs mapping by using the synonym sets defined in WordNet. In this paper, we define a Super Word Set that additionally includes the hypernym, hyponym, holonym, and meronym sets, and propose an ontology mapping method using the Super Word Set. Experimental results show that our method improves performance by up to 12% compared with a previous ontology mapping method.
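The Super Word Set idea can be sketched as follows. The relation table is toy data standing in for WordNet lookups, the holonym set is omitted for brevity, and the Jaccard overlap is one plausible comparison, not necessarily the paper's measure.

```python
# toy relation table standing in for WordNet lookups (hypothetical data)
RELATIONS = {
    "car":  {"synonym": {"auto", "automobile"},
             "hypernym": {"vehicle"},
             "hyponym": {"coupe"},
             "meronym": {"wheel", "engine"}},
    "auto": {"synonym": {"car", "automobile"},
             "hypernym": {"vehicle"},
             "hyponym": set(),
             "meronym": {"engine"}},
}


def super_word_set(term):
    """Union of the term with its synonym/hypernym/hyponym/meronym sets."""
    members = {term}
    for related in RELATIONS.get(term, {}).values():
        members |= related
    return members


def sws_similarity(t1, t2):
    """Jaccard overlap of the two Super Word Sets."""
    a, b = super_word_set(t1), super_word_set(t2)
    return len(a & b) / len(a | b) if a | b else 0.0
```

Widening the comparison from synonyms alone to the full relation sets lets near-but-not-synonymous concepts still receive a nonzero mapping score.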

Automatic Construction of Syntactic Relation in Lexical Network(U-WIN) (어휘망(U-WIN)의 구문관계 자동구축)

  • Im, Ji-Hui;Choe, Ho-Seop;Ock, Cheol-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.627-635
    • /
    • 2008
  • An extended form of lexical network is explored by presenting U-WIN, which applies lexical relations that include not only semantic relations but also conceptual, morphological, and syntactic relations, unlike existing lexical networks, which have centered on linking structures with semantic relations. This study introduces a new methodology for constructing syntactic relations automatically. First, we extract candidate nouns related to a verb based on the verb's sentence pattern. Because an extracted noun may have many meanings, its sense must be decided; we propose that the noun's meaning be determined by example-matching rules, syntactic patterns, semantic similarity, and frequency information. In addition, syntactic patterns are expanded using nouns with high corpus frequency.

A Study on the Identification and Classification of Relation Between Biotechnology Terms Using Semantic Parse Tree Kernel (시맨틱 구문 트리 커널을 이용한 생명공학 분야 전문용어간 관계 식별 및 분류 연구)

  • Choi, Sung-Pil;Jeong, Chang-Hoo;Chun, Hong-Woo;Cho, Hyun-Yang
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.45 no.2
    • /
    • pp.251-275
    • /
    • 2011
  • In this paper, we propose a novel kernel called a semantic parse tree kernel, which extends the parse tree kernel previously studied for extracting protein-protein interactions (PPIs), where it has shown prominent results. One drawback of the existing parse tree kernel is that it can degrade overall PPI extraction performance: the kernel function may produce lower kernel values for two sentences than their actual similarity warrants, because its simple comparison mechanism handles only the superficial aspects of the constituent words. The new kernel computes the lexical semantic similarity as well as the syntactic similarity between the two parse trees of the target sentences. To calculate lexical semantic similarity, it incorporates context-based word sense disambiguation that produces WordNet synsets as its outputs, which in turn can be transformed into more general concepts. In experiments, we introduced two new parameters, a tree kernel decay factor and a degree of abstraction for lexical concepts, which can accelerate the optimization of PPI extraction performance in addition to the conventional SVM regularization factor. Through these multi-strategy experiments, we confirmed the pivotal role of the newly applied parameters. The experimental results also showed that the semantic parse tree kernel is superior to conventional kernels, especially in PPI classification tasks.

Multi-level Mapping of Ontologies Based on Lexical and Structural Information (어휘와 구조 정보에 기반한 온톨로지의 다단계 매핑)

  • Hwang, Se-Chan;Kang, Sin-Jae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.1
    • /
    • pp.42-48
    • /
    • 2012
  • Since the Semantic Web emerged, ontologies have been widely used in the web environment. Even when ontologies belong to the same domain, they may express the same meaning with different words, or different meanings with the same words, depending on their development background and intended use. To share and reuse such ontologies, ontology mapping is required. This paper presents an ontology mapping method consisting of an initial multi-level mapping process based on lexical information, followed by a second mapping process that uses the lexical results together with structural similarity. Mapping performance was further improved by expanding the structural information of blank nodes, which have no lexical information. In experiments, our method achieved an F1-measure of 86.38%.
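The two-pass idea, lexical first and structural second, can be sketched as follows. The label tokenization and the neighbor-overlap score are illustrative assumptions, not the paper's exact measures.

```python
def lexical_score(label1, label2):
    """Token overlap (Jaccard) between two concept labels.
    An assumed lexical measure for the first mapping pass."""
    a = set(label1.lower().replace("_", " ").split())
    b = set(label2.lower().replace("_", " ").split())
    return len(a & b) / len(a | b) if a | b else 0.0


def structural_score(neighbors1, neighbors2, matched):
    """Share of neighbor pairs already aligned in the lexical pass:
    a crude structural-similarity signal for the second pass."""
    if not neighbors1 or not neighbors2:
        return 0.0
    hits = sum(1 for n1 in neighbors1 for n2 in neighbors2
               if (n1, n2) in matched)
    return hits / max(len(neighbors1), len(neighbors2))


# first pass aligns lexically obvious concepts (hypothetical example)
matched = {("credit_card", "credit_card")}
lex = lexical_score("card_payment", "payment")
struct = structural_score(["credit_card"], ["credit_card"], matched)
```

A pair with a weak lexical score can still be mapped in the second pass if its neighbors were already aligned, which is the point of combining the two levels.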