• Title/Summary/Keyword: semantic kernel

A Study on the Identification and Classification of Relation Between Biotechnology Terms Using Semantic Parse Tree Kernel (시맨틱 구문 트리 커널을 이용한 생명공학 분야 전문용어간 관계 식별 및 분류 연구)

  • Choi, Sung-Pil;Jeong, Chang-Hoo;Chun, Hong-Woo;Cho, Hyun-Yang
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.45 no.2
    • /
    • pp.251-275
    • /
    • 2011
  • In this paper, we propose a novel kernel, the semantic parse tree kernel, which extends the parse tree kernel previously studied for extracting protein-protein interactions (PPIs) with prominent results. A drawback of the existing parse tree kernel is that it can degrade overall PPI extraction performance: because its comparison mechanism handles only the superficial forms of the constituent words, the kernel value computed for two sentences may understate their actual similarity. The new kernel computes lexical semantic similarity as well as syntactic similarity between the parse trees of two target sentences. To calculate the lexical semantic similarity, it incorporates context-based word sense disambiguation, whose outputs are WordNet synsets that can in turn be abstracted into more general concepts. In the experiments, we introduce two new parameters in addition to the SVM's conventional regularization factor: the tree kernel decay factor and the degree of lexical concept abstraction, both of which accelerate the optimization of PPI extraction performance. Through these multi-strategic experiments, we confirm the pivotal role of the newly introduced parameters. The results also show that the semantic parse tree kernel is superior to conventional kernels, especially in PPI classification tasks.
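
The core idea, lexical semantic similarity folded into a recursive tree comparison, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the decay factor, the use of WordNet path similarity for leaf words, and the example trees are all assumptions.

```python
# Requires: pip install nltk, plus nltk.download('wordnet') for the WordNet corpus.
from nltk.corpus import wordnet as wn
from nltk.tree import Tree

def leaf_similarity(w1, w2):
    """Lexical semantic similarity: best WordNet path similarity over synset pairs."""
    if w1 == w2:
        return 1.0
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def node_kernel(n1, n2, lam=0.5):
    """Recursive comparison of two parse (sub)trees with decay factor lam."""
    if isinstance(n1, str) or isinstance(n2, str):       # leaves are plain words
        if isinstance(n1, str) and isinstance(n2, str):
            return lam * leaf_similarity(n1, n2)
        return 0.0
    if n1.label() != n2.label() or len(n1) != len(n2):    # different productions
        return 0.0
    value = lam
    for c1, c2 in zip(n1, n2):
        value *= 1.0 + node_kernel(c1, c2, lam)
    return value

def semantic_tree_kernel(t1, t2, lam=0.5):
    """Collins-Duffy style sum over all pairs of subtrees."""
    return sum(node_kernel(a, b, lam) for a in t1.subtrees() for b in t2.subtrees())

t1 = Tree.fromstring("(S (NP (NN protein)) (VP (VBZ binds) (NP (NN receptor))))")
t2 = Tree.fromstring("(S (NP (NN enzyme)) (VP (VBZ activates) (NP (NN receptor))))")
print(semantic_tree_kernel(t1, t2))
```

Because leaves are matched by synset similarity rather than string equality, two sentences with different but related words still receive a non-zero kernel value.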

A New Semantic Kernel Function for Online Anomaly Detection of Software

  • Parsa, Saeed;Naree, Somaye Arabi
    • ETRI Journal
    • /
    • v.34 no.2
    • /
    • pp.288-291
    • /
    • 2012
  • In this letter, a new online anomaly detection approach for software systems is proposed. The novelty of the proposed approach is to apply a new semantic kernel function for a support vector machine (SVM) classifier to detect fault-suspicious execution paths at runtime in a reasonable amount of time. The kernel uses a new sequence matching algorithm to measure similarities among program execution paths in a customized feature space whose dimensions represent the largest common subpaths among the execution paths. To increase the precision of the SVM classifier, each common subpath is weighted according to its ability to discern executions as correct or anomalous. Experimental results show that, compared with known kernels, the proposed SVM kernel improves the time overhead of online anomaly detection by up to 170% while improving the precision of anomaly alerts by up to 140%.
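
The flavor of such a kernel can be sketched as follows: execution paths are sequences of basic-block identifiers, and similarity accumulates over the contiguous subpaths two paths share. The length-based weighting and the example paths are assumptions made only for illustration; the letter instead weights each subpath by its ability to separate correct from anomalous runs.

```python
from collections import Counter

def subpaths(path, max_len=4):
    """All contiguous subpaths (n-grams of basic-block IDs) up to max_len."""
    return Counter(tuple(path[i:i + n])
                   for n in range(1, max_len + 1)
                   for i in range(len(path) - n + 1))

def path_kernel(p1, p2, max_len=4):
    """Weighted count of shared subpaths; longer matches contribute more."""
    s1, s2 = subpaths(p1, max_len), subpaths(p2, max_len)
    return sum(len(sp) * min(c, s2[sp]) for sp, c in s1.items() if sp in s2)

# Hypothetical execution paths recorded at runtime.
normal  = ["entry", "init", "loop", "loop", "write", "exit"]
suspect = ["entry", "init", "loop", "error", "abort"]
print(path_kernel(normal, normal), path_kernel(normal, suspect))
```

A kernel of this form can be handed to an SVM as a precomputed Gram matrix, so online classification time depends mainly on how quickly the shared subpaths can be found.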

A Semantic Representation Based on Term Co-occurrence Network and Graph Kernel

  • Noh, Tae-Gil;Park, Seong-Bae;Lee, Sang-Jo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.238-246
    • /
    • 2011
  • This paper proposes a new semantic representation and an associated similarity measure. The representation expresses the textual contexts in which a given term is observed as a network whose nodes are terms and whose edge weights are the number of co-occurrences between the connected terms. To compare terms represented as networks, a graph kernel is adopted as the similarity measure. The proposed representation has two notable merits compared with previous semantic representations. First, it handles polysemous words better than a vector representation: the network of a polysemous term is regarded as a combination of sub-networks that represent its senses, and the appropriate sub-network is identified from context before the kernel comparison. Second, the representation permits not only words but also senses or contexts to be represented directly from the corresponding sets of terms. The validity of the representation and its similarity measure is evaluated on two tasks, a synonym test and unsupervised word sense disambiguation. The method performed well and could compete with state-of-the-art unsupervised methods.
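
The representation itself is straightforward to construct. The sketch below builds a small co-occurrence network per target term and compares two networks by the overlap of their weighted edges; this overlap score is only a simplified stand-in for the graph kernel used in the paper, and the example contexts are invented.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(contexts):
    """Edge weight = number of contexts in which two words co-occur."""
    edges = Counter()
    for sentence in contexts:
        edges.update(combinations(sorted(set(sentence)), 2))
    return edges

def network_similarity(g1, g2):
    """Normalized overlap of weighted edges between two term networks."""
    shared = sum(min(w, g2[e]) for e, w in g1.items() if e in g2)
    total = sum(g1.values()) + sum(g2.values())
    return 2.0 * shared / total if total else 0.0

# Contexts (token lists) observed around two uses of the target term "bank".
bank_finance = cooccurrence_network([["money", "loan", "interest"],
                                     ["loan", "credit", "interest"]])
bank_river   = cooccurrence_network([["river", "water", "shore"]])
print(network_similarity(bank_finance, bank_river))
```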

An Intelligent Marking System based on Semantic Kernel and Korean WordNet (의미커널과 한글 워드넷에 기반한 지능형 채점 시스템)

  • Cho Woojin;Oh Jungseok;Lee Jaeyoung;Kim Yu-Seop
    • The KIPS Transactions:PartA
    • /
    • v.12A no.6 s.96
    • /
    • pp.539-546
    • /
    • 2005
  • Recently, as the number of Internet users has grown explosively, e-learning has spread widely, along with remote evaluation of intellectual ability. However, because of the difficulty of natural language processing, only multiple-choice and other objective tests have been applied in e-learning. For the rapid and fair intelligent marking of short-essay answers, this work utilizes heterogeneous linguistic knowledge. First, we construct a semantic kernel from an untagged corpus. Then the answer papers of students and instructors are transformed into vector form. Finally, we evaluate the similarity between the papers using the semantic kernel and decide, based on the similarity value, whether the student's answer is correct. The semantic kernel is constructed with latent semantic analysis based on the vector space model, and we further try to reduce the problem of information shortage by integrating the Korean WordNet. For the construction of the semantic kernel, we collected 38,727 newspaper articles and extracted 75,175 index terms. In the experiment, a correlation coefficient of about 0.894 was obtained between the marking results of this system and those of human instructors.
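
The pipeline described above maps naturally onto a latent semantic analysis workflow. The following sketch builds a small LSA space from a toy reference corpus, projects a model answer and a student answer into it, and thresholds the cosine similarity; the corpus, the answers, the number of latent dimensions, and the 0.6 threshold are all illustrative assumptions, and the Korean WordNet integration is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for the newspaper corpus used to build the semantic kernel.
corpus = ["the cpu executes instructions stored in memory",
          "main memory holds both data and instructions",
          "the operating system schedules processes on the cpu"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)            # term-document matrix
lsa = TruncatedSVD(n_components=2).fit(X)       # latent semantic space

def project(text):
    return lsa.transform(vectorizer.transform([text]))

model_answer   = "instructions are executed by the cpu"
student_answer = "the cpu runs the stored instructions"
score = cosine_similarity(project(model_answer), project(student_answer))[0, 0]
print(score, "correct" if score > 0.6 else "incorrect")
```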

Learning Probabilistic Kernel from Latent Dirichlet Allocation

  • Lv, Qi;Pang, Lin;Li, Xiong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2527-2545
    • /
    • 2016
  • Measuring the similarity of given samples is a key problem in recognition, clustering, retrieval, and related applications. A number of works, e.g., kernel methods and metric learning, have contributed to this problem. The challenge of similarity learning is to find a similarity that is robust to intra-class variance and simultaneously selective for inter-class characteristics. We observe that the similarity measure can be improved if the data distribution and hidden semantic information are exploited in a more sophisticated way. In this paper, we propose a similarity learning approach for retrieval and recognition. The approach, termed LDA-FEK, derives a free energy kernel (FEK) from latent Dirichlet allocation (LDA). First, it trains LDA and constructs the kernel using the parameters and variables of the trained model. Then, the unknown kernel parameters are learned by a discriminative learning approach. The main contributions of the proposed method are twofold: (1) the method is computationally efficient and scalable since the kernel parameters are determined in a staged way; (2) the method exploits the data distribution and semantic-level hidden information by means of LDA. To evaluate the performance of LDA-FEK, we apply it to image retrieval on two data sets and to text categorization on four popular data sets. The results show the competitive performance of our method.
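
The first stage, building a kernel from a trained LDA model, can be sketched as below. The exact free energy kernel of LDA-FEK is not reproduced; a plain inner product of per-document topic posteriors stands in for it, and the corpus and topic count are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock market prices fell sharply today",
        "the team won the championship game",
        "investors worry about market volatility",
        "the striker scored twice in the final game"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
theta = lda.transform(counts)        # per-document topic posteriors

K = theta @ theta.T                  # document-by-document kernel matrix
print(np.round(K, 3))
```

In the full method the kernel additionally carries free parameters that are tuned discriminatively afterwards, which is what makes the staged procedure efficient.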

Ontology Alignment based on Parse Tree Kernel using Structural and Semantic Information (구조 및 의미 정보를 활용한 파스 트리 커널 기반의 온톨로지 정렬 방법)

  • Son, Jeong-Woo;Park, Seong-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.4
    • /
    • pp.329-334
    • /
    • 2009
  • Ontology alignment has two major problems. First, the features used for ontology alignment are usually defined by experts, so it is quite possible for some critical features to be excluded from the feature set. Second, the semantic and structural similarities are usually computed independently and then combined in an ad-hoc way, with weights determined heuristically. This paper proposes the modified parse tree kernel (MPTK) for ontology alignment. In order to compute the similarity between entities in the ontologies, a tree is adopted as the representation of an ontology. After transforming an ontology into a set of trees, their similarity is computed using MPTK without explicit enumeration of features. In computing the similarity between trees, approximate string matching is adopted so that not only the structural information but also the semantic information is reflected naturally. In a series of experiments with a standard data set, the kernel method outperforms other structural similarities such as GMO. In addition, the proposed method shows state-of-the-art performance in ontology alignment.
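
The role of approximate string matching in the node comparison can be illustrated with a small sketch: concept labels from the two ontologies are compared by an edit-distance-style ratio, so near-identical names still contribute to the similarity. The labels, the path representation, and the averaging are illustrative assumptions; MPTK itself traverses the concept trees rather than single paths.

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    """Approximate string match between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def path_similarity(path1, path2):
    """Average label similarity along two root-to-concept paths."""
    pairs = zip(path1, path2)
    return sum(label_similarity(a, b) for a, b in pairs) / max(len(path1), len(path2))

# Root-to-concept paths from two hypothetical ontologies to be aligned.
print(path_similarity(["Thing", "Person", "Author"],
                      ["Entity", "Person", "Writer"]))
```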

Infrared Target Recognition using Heterogeneous Features with Multi-kernel Transfer Learning

  • Wang, Xin;Zhang, Xin;Ning, Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.9
    • /
    • pp.3762-3781
    • /
    • 2020
  • Infrared pedestrian target recognition is a vital problem of significant interest in computer vision. In this work, a novel infrared pedestrian target recognition method that uses heterogeneous features with multi-kernel transfer learning is proposed. First, to fully exploit the characteristics of infrared pedestrian targets, a novel multi-scale monogenic filtering-based completed local binary pattern descriptor, referred to as MSMF-CLBP, is designed to extract texture information, and an improved histogram of oriented gradients-Fisher vector descriptor, referred to as HOG-FV, is proposed to extract shape information. Second, to enrich the semantic content of the feature expression, these two heterogeneous features are integrated to obtain a more complete representation of infrared pedestrian targets. Third, to overcome the defects of state-of-the-art classifiers, such as poor generalization, the scarcity of tagged infrared samples, and distributional and semantic deviations between training and testing samples, an effective multi-kernel transfer learning classifier called MK-TrAdaBoost is designed. Experimental results show that the proposed method outperforms many state-of-the-art recognition approaches for infrared pedestrian targets.
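
The integration step, mixing kernels built from two heterogeneous feature sets, can be sketched as below. The random feature matrices, the RBF kernels, and the fixed mixing weight are stand-ins for MSMF-CLBP, HOG-FV, and the weighting learned inside the MK-TrAdaBoost transfer learning loop.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_texture = rng.normal(size=(40, 16))   # stand-in for MSMF-CLBP texture features
X_shape   = rng.normal(size=(40, 8))    # stand-in for HOG-FV shape features
y = np.repeat([0, 1], 20)               # pedestrian vs. background labels

beta = 0.6                              # mixing weight between the two kernels
K = beta * rbf_kernel(X_texture) + (1 - beta) * rbf_kernel(X_shape)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                  # training accuracy with the combined kernel
```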

Kernel-based sentence classification for protein-protein interaction (커널 기반의 '단백질-단백질 작용' 의미 포함 문장 분류)

  • Kim Seong-Hwan;Eom Jae-Hong;Zhang Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.11b
    • /
    • pp.286-288
    • /
    • 2005
  • This paper presents a method for extracting sentences that describe protein-protein interactions using a tree kernel. The tree kernel is a kind of convolution kernel; using it, sentences represented as parse trees can be classified into those that contain protein-protein interaction content and those that do not. We found that extracting the relevant region as a sub-tree was more effective than using the whole sentence as input. Because the tag labels of the parse tree play an important role in the kernel computation, we also examined the effect of converting these tags into semantic labels that reflect the meaning of protein-protein interactions, as well as the influence of tree length. Although the amount of data used for the problem was rather small, we confirmed that, depending on the data representation, the approach has the potential to perform well compared with existing methods based on parsing or pattern techniques.
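
Putting such a tree kernel to work for sentence classification amounts to precomputing a Gram matrix over the parse trees and handing it to an SVM. The sketch below uses a toy production-counting kernel and invented parse trees; the paper's kernel and its semantic tag conversion are not reproduced here.

```python
from collections import Counter
import numpy as np
from nltk.tree import Tree
from sklearn.svm import SVC

def production_kernel(t1, t2):
    """Toy tree kernel: number of grammar productions shared by the two trees."""
    p1 = Counter(map(str, t1.productions()))
    p2 = Counter(map(str, t2.productions()))
    return sum(min(c, p2[p]) for p, c in p1.items() if p in p2)

trees = [Tree.fromstring("(S (NP (NN p53)) (VP (VBZ binds) (NP (NN MDM2))))"),
         Tree.fromstring("(S (NP (NN p53)) (VP (VBZ is) (ADJP (JJ stable))))"),
         Tree.fromstring("(S (NP (NN RAS)) (VP (VBZ activates) (NP (NN RAF))))"),
         Tree.fromstring("(S (NP (NN RAS)) (VP (VBZ is) (ADJP (JJ small))))")]
labels = [1, 0, 1, 0]    # 1 = sentence describes a protein-protein interaction

K = np.array([[production_kernel(a, b) for b in trees] for a in trees])
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))
```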


Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • 장정호;장병탁
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.595-604
    • /
    • 2004
  • Automatic analysis of concepts or semantic relations from text documents enables not only efficient acquisition of relevant information but also comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, where latent topics are automatically extracted from document sets and similarity between documents is measured by semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that relate to the same theme or co-occur frequently within a document. In a network representing a multiple cause model, each topic is identified by a group of words having high connection weights from a latent node. In order to facilitate learning and inference in multiple cause models, some approximation methods are required, and we utilize an approximation by Helmholtz machines. In an experiment on the TDT-2 data set, we extract sets of meaningful words, each containing theme-specific terms. Using semantic kernels constructed from the latent topics extracted by multiple cause models, we also achieve significant improvements over the basic vector space model in terms of retrieval effectiveness.
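
Once per-term topic weights have been extracted, the semantic kernel itself is just a term-by-term matrix that makes words sharing topics similar. The sketch below fabricates a small topic-weight matrix by hand, purely to illustrate how K = R R^T is built and used; the real weights would come from the trained multiple cause model.

```python
import numpy as np

terms = ["market", "stock", "investor", "goal", "striker"]
# Rows: terms, columns: latent topics (finance, sport) - illustrative weights only.
R = np.array([[0.9, 0.0],
              [0.8, 0.1],
              [0.7, 0.0],
              [0.0, 0.9],
              [0.1, 0.8]])
K = R @ R.T                       # term-by-term semantic kernel

d1 = np.array([1, 1, 0, 0, 0])    # bag-of-words vector for "market stock"
d2 = np.array([0, 0, 1, 0, 0])    # bag-of-words vector for "investor"
print(d1 @ K @ d2)                # non-zero, although the documents share no terms
```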

Estimation of Document Similarity using Semantic Kernel Derived from Helmholtz Machines (헬름홀츠머신 학습 기반의 의미 커널을 이용한 문서 유사도 측정)

  • 장정호;김유섭;장병탁
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.440-442
    • /
    • 2003
  • Automatic analysis of concepts or semantic relations within a document collection enables more efficient information acquisition and comparison of documents at the concept level rather than at the word level. In this paper, we use a latent variable model to automatically extract semantic relations between words from a document collection and present a way to use them for effective measurement of inter-document similarity. As the latent variable model we employ a Helmholtz machine, which makes learning of a multiple cause model tractable, and based on its learning result we construct a semantic kernel for comparing documents. In retrieval experiments on two document collections, MEDLINE and CACM, applying the proposed technique achieved an improvement of more than 20% in average accuracy over the basic vector space model (VSM).
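
With such a kernel in hand, document comparison becomes a kernelized cosine similarity over bag-of-words vectors. The sketch below assumes only that K is a symmetric positive semi-definite term-by-term matrix (for instance, one built as in the previous sketch); the toy kernel and term counts are invented.

```python
import numpy as np

def semantic_similarity(d1, d2, K):
    """Cosine similarity of bag-of-words vectors in the space induced by K."""
    num = d1 @ K @ d2
    den = np.sqrt((d1 @ K @ d1) * (d2 @ K @ d2))
    return num / den if den else 0.0

K = np.array([[1.0, 0.8, 0.1],    # toy semantic kernel over terms [loan, credit, river]
              [0.8, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
doc_a = np.array([2, 0, 0])       # term counts for a document mentioning "loan" twice
doc_b = np.array([0, 1, 0])       # term counts for a document mentioning "credit"
print(semantic_similarity(doc_a, doc_b, K))
```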
