• Title/Summary/Keyword: similarity-based


Measuring Web Page Similarity using Tags (태그를 이용한 웹 페이지간의 유사도 측정 방법)

  • Kang, Sang-Wook; Lee, Ki-Yong; Kim, Hyeon-Gyu; Kim, Myoung-Ho
    • Journal of KIISE:Databases / v.37 no.2 / pp.104-112 / 2010
  • Social bookmarking is one of the most interesting trends in the current web environment. In a social bookmarking system, users annotate a web page with tags, which describe the contents of the page. Numerous studies have been done using this information, mostly on enhancing the quality of web search. In this paper, we use this information to measure the semantic similarity between two web pages. Since web pages consist of various types of multimedia data, it is quite difficult to compare the semantics of two web pages by comparing the actual data contained in the pages. With the help of social bookmarks, this comparison can be performed very effectively. In this paper, we propose a new similarity measure between web pages, called Web Page Similarity Based on Entire Tags (WSET), based on social bookmarks. The experimental results show that the proposed measure yields more satisfactory results than the previous ones.
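
The abstract does not give the WSET formula itself, so the sketch below only illustrates the general idea of comparing pages by the tags users attached to them: each page is represented by the frequency of its bookmark tags, and two pages are compared with cosine similarity. The aggregation and the cosine measure are assumptions for illustration, not the paper's definition.

```python
from collections import Counter
from math import sqrt

def tag_vector(bookmarks):
    """Aggregate the tags that all users assigned to one page into a frequency vector."""
    return Counter(tag for user_tags in bookmarks for tag in user_tags)

def tag_cosine(page_a, page_b):
    """Cosine similarity between two pages' tag-frequency vectors."""
    va, vb = tag_vector(page_a), tag_vector(page_b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Example: two pages, each bookmarked (and tagged) by a few users.
page1 = [["python", "tutorial"], ["python", "programming"]]
page2 = [["python", "programming", "blog"]]
print(tag_cosine(page1, page2))  # ≈ 0.71
```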

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo; Klein, Mark
    • Asia pacific journal of information systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers from the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings as correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants for each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structure of business processes. We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and utilize a tree edit distance measure because semantic processes have a graph-like structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be used to calculate similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the degree of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measured in terms of precision, recall, and F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance perform better than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's coefficient, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably good performance in both experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. In summary, we generate semantic process data and a retrieval test set from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and since the diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
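
As a rough illustration of the kind of measures described above (not the paper's Lev-TFIDF-JaccardAll definition), the sketch below combines a Levenshtein-based name similarity with Jaccard overlap of a process's subcomponents; the process dictionaries, weights, and field names are assumptions made for the example.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def name_similarity(a: str, b: str) -> float:
    """Edit distance normalized into a 0..1 similarity."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def jaccard(s: set, t: set) -> float:
    return len(s & t) / len(s | t) if s | t else 1.0

def dice(s: set, t: set) -> float:
    return 2 * len(s & t) / (len(s) + len(t)) if s or t else 1.0

def process_similarity(p, q, w_name=0.5, w_struct=0.5):
    """Weighted mix of name similarity and subcomponent overlap (weights are illustrative).
    jaccard() could be swapped for dice() to measure the overlap differently."""
    struct = jaccard(set(p["parts"]), set(q["parts"]))
    return w_name * name_similarity(p["name"], q["name"]) + w_struct * struct

sell = {"name": "Sell product", "parts": {"identify customer", "deliver product", "receive payment"}}
sell_online = {"name": "Sell product online", "parts": {"identify customer", "deliver product", "receive e-payment"}}
print(round(process_similarity(sell, sell_online), 3))
```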

Mepelyzer : Malicious App Identification Mechanism based on Method & Permission Similarity Analysis of Server-Side Polymorphic Mobile Apps (Mepelyzer : 서버 기반 다형상 모바일 앱에 대한 메소드 및 퍼미션 유사도 기반 악성앱 판별)

  • Lee, Han Seong; Lee, Hyung-Woo
    • Journal of the Korea Convergence Society / v.8 no.3 / pp.49-61 / 2017
  • Recently, convenience and usability have been increasing with the development and deployment of various mobile applications on the Android platform. However, as malicious mobile applications continue to increase, important information stored in the smartphone can be leaked without the user's knowledge. A variety of mobile vaccines (anti-malware tools) have been developed for the Android platform to detect malicious apps. Recently discovered server-side polymorphic (SSP) malicious mobile apps employ obfuscation techniques; because the SSP mechanism continuously generates new variants of the malicious app, they are not easy for existing mobile vaccines to detect. In this paper, we decompile the APKs of SSP malicious apps and analyze the correlation between the similarity of methods in the DEX files, which contain the core malicious code, and a permission similarity measure. From the analysis of DEX method similarity and permission similarity, we could extract the characteristics of SSP malicious apps and identify differences that distinguish them from normal apps.
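
The abstract does not state how the two similarities are computed, so the following sketch only illustrates one plausible reading: Jaccard overlap between the DEX method signatures and between the declared permissions of two app variants. The field names, weights, and the step of extracting methods and permissions from a decompiled APK are assumptions, not the Mepelyzer implementation.

```python
def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity, used here for both methods and permissions."""
    return len(a & b) / len(a | b) if a | b else 1.0

def app_similarity(app1, app2, w_method=0.6, w_perm=0.4):
    """Combine DEX-method similarity and permission similarity (weights are illustrative).

    app1/app2 are dicts with 'methods' (method signatures from the DEX files) and
    'permissions' (entries from AndroidManifest.xml); extracting them with a
    decompiler is assumed and not shown here.
    """
    m = jaccard(set(app1["methods"]), set(app2["methods"]))
    p = jaccard(set(app1["permissions"]), set(app2["permissions"]))
    return {"method_sim": m, "perm_sim": p, "combined": w_method * m + w_perm * p}

variant_a = {"methods": {"Lcom/a;->run()V", "Lcom/a;->send(Ljava/lang/String;)V"},
             "permissions": {"READ_SMS", "INTERNET"}}
variant_b = {"methods": {"Lcom/b;->run()V", "Lcom/a;->send(Ljava/lang/String;)V"},
             "permissions": {"READ_SMS", "INTERNET", "READ_CONTACTS"}}
print(app_similarity(variant_a, variant_b))
```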

A Study on the CBR Pattern using Similarity and the Euclidean Calculation Pattern (유사도와 유클리디안 계산패턴을 이용한 CBR 패턴연구)

  • Yun, Jong-Chan; Kim, Hak-Chul; Kim, Jong-Jin; Youn, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.4 / pp.875-885 / 2010
  • CBR (Case-Based Reasoning) is a technique that infers relationships between existing data and new case data, and calculating similarity or Euclidean distance is the most frequently used method. However, since those methods compare the new case against all existing data, they have the drawback of requiring much time for data search and filtering. Various studies have been conducted to solve this problem. This paper suggests an SE (Speed Euclidean-distance) calculation method that utilizes patterns discovered in the existing process of computing similarity and Euclidean distance. Because the SE calculation applies the patterns and weights found while new cases are entered, it enables fast data extraction and short operation times; it can thus enhance computing speed under temporal or spatial restrictions and eliminate unnecessary computation. Experiments show that the proposed method improves performance and processing rates in various computing environments more efficiently than the existing methods that extract data using similarity or the Euclidean method.
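
The paper's SE calculation is only described at a high level here, so the sketch below shows a generic way of skipping unnecessary Euclidean computation: early abandoning, which stops summing squared differences once a case can no longer beat the current best match. It is an illustration of the speed-up idea, not the proposed SE method.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_case(query, cases):
    """Brute-force search: compares the query with every stored case in full."""
    return min(cases, key=lambda c: euclidean(query, c))

def nearest_case_pruned(query, cases):
    """Early-abandoning search: stop summing a case's squared differences
    once the running sum already exceeds the best (squared) distance so far."""
    best, best_sq = None, float("inf")
    for c in cases:
        acc = 0.0
        for x, y in zip(query, c):
            acc += (x - y) ** 2
            if acc >= best_sq:          # cannot beat the current best; skip the rest
                break
        else:
            best, best_sq = c, acc
    return best

cases = [(1.0, 2.0, 3.0), (2.0, 2.0, 2.0), (9.0, 9.0, 9.0)]
print(nearest_case((1.5, 2.0, 2.5), cases))
print(nearest_case_pruned((1.5, 2.0, 2.5), cases))
```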

Question Similarity Measurement of Chinese Crop Diseases and Insect Pests Based on Mixed Information Extraction

  • Zhou, Han; Guo, Xuchao; Liu, Chengqi; Tang, Zhan; Lu, Shuhan; Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3991-4010 / 2021
  • The Question Similarity Measurement of Chinese Crop Diseases and Insect Pests (QSM-CCD&IP) aims to judge the user's intent behind input questions. The measurement is the basis of agricultural knowledge question-and-answering (Q&A) systems, information retrieval, and other tasks. However, the corpora and measurement methods available in this field have some deficiencies, and error propagation may occur when word boundary features and local context information are ignored while general methods embed sentences. These factors make the task challenging. To solve the above problems and tackle the question similarity measurement task, a corpus on Chinese crop diseases and insect pests (CCDIP), containing 13 categories, was established. Then, taking the CCDIP as the research object, this study proposes a Chinese agricultural text similarity matching model, AgrCQS, based on mixed information extraction. Specifically, the hybrid embedding layer enriches character information and improves the model's ability to recognize word boundaries; multi-scale local information is extracted by a multi-core convolutional neural network based on multi-weight (MM-CNN); and the self-attention mechanism enhances the model's ability to fuse global information. The performance of AgrCQS is verified on the CCDIP and on three benchmark datasets, AFQMC, LCQMC, and BQ. The accuracy rates are 93.92%, 74.42%, 86.35%, and 83.05%, respectively, which are higher than those of baseline systems, without using any external knowledge. Additionally, the proposed modules can be extracted separately and applied to other models, providing a reference for related research.
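
The following PyTorch sketch illustrates only the multi-scale convolutional part of such a model: character embeddings pass through several kernel sizes, the pooled features are concatenated, and two questions are compared with cosine similarity. The layer sizes, vocabulary, and pooling are illustrative assumptions and do not reproduce the AgrCQS architecture (hybrid embedding, MM-CNN weighting, self-attention).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEncoder(nn.Module):
    """Encode a character-id sequence with several kernel sizes and max-pool each,
    so the sentence vector mixes local features at different scales."""
    def __init__(self, vocab_size=6000, embed_dim=128, channels=64, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, channels, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, ids):                      # ids: (batch, seq_len)
        x = self.embed(ids).transpose(1, 2)      # (batch, embed_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)           # (batch, channels * len(kernel_sizes))

encoder = MultiScaleEncoder()
q1 = torch.randint(1, 6000, (1, 20))             # two toy character-id sequences
q2 = torch.randint(1, 6000, (1, 20))
print(F.cosine_similarity(encoder(q1), encoder(q2)).item())
```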

A Method for Measuring Inter-Utterance Similarity Considering Various Linguistic Features (다양한 언어적 자질을 고려한 발화간 유사도 측정 방법)

  • Lee, Yeon-Su; Shin, Joong-Hwi; Hong, Gum-Won; Song, Young-In; Lee, Do-Gil; Rim, Hae-Chang
    • The Journal of the Acoustical Society of Korea / v.28 no.1 / pp.61-69 / 2009
  • This paper presents an improved method for measuring inter-utterance similarity in an example-based dialogue system, which searches a dialogue database for the utterance most similar to a given user utterance in order to generate a response. Unlike general inter-sentence similarity measures, an inter-utterance similarity measure for an example-based dialogue system should consider not only word distribution but also various linguistic features, such as affirmation/negation, tense, modality, and sentence type, which affect natural conversation. However, previous approaches do not sufficiently reflect these features. This paper proposes a new utterance similarity measure that analyzes and reflects various linguistic features to improve accuracy. Also, by considering the substitutability of these features, the proposed method can work with a limited number of examples. Experimental results show that the proposed method achieves a 10 percentage-point improvement in accuracy over the previous method.
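
A minimal sketch of the idea of mixing word overlap with agreement on linguistic features (negation, tense, modality, sentence type) is shown below; the feature encoding, the Jaccard lexical score, and the weights are assumptions for illustration, not the measure proposed in the paper.

```python
def lexical_similarity(u1, u2):
    """Word-overlap (Jaccard) similarity between two tokenized utterances."""
    a, b = set(u1["words"]), set(u2["words"])
    return len(a & b) / len(a | b) if a | b else 1.0

def feature_similarity(u1, u2, features=("negation", "tense", "modality", "sentence_type")):
    """Fraction of linguistic features on which the two utterances agree."""
    return sum(u1[f] == u2[f] for f in features) / len(features)

def utterance_similarity(u1, u2, w_lex=0.6, w_feat=0.4):
    """Weighted mix of lexical overlap and linguistic-feature agreement."""
    return w_lex * lexical_similarity(u1, u2) + w_feat * feature_similarity(u1, u2)

u_query = {"words": ["can", "you", "book", "a", "table"],
           "negation": False, "tense": "present", "modality": "request", "sentence_type": "question"}
u_example = {"words": ["could", "you", "book", "a", "room"],
             "negation": False, "tense": "present", "modality": "request", "sentence_type": "question"}
print(round(utterance_similarity(u_query, u_example), 3))
```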

The effect of lower limb muscle synergy analysis-based FES system on improvement of the foot drop of stroke patient during walking: a case study (하지 근육 시너지 분석 기반의 FES 시스템이 보행 시 뇌졸중 환자의 족하수 개선에 미치는 영향: 사례 연구)

  • Lim, Taehyun
    • Journal of the Korean Society of Industry Convergence / v.23 no.3 / pp.523-529 / 2020
  • Foot drop is a common symptom in stroke patients with central nervous system (CNS) damage and causes walking disturbances. Functional electrical stimulation (FES) is an effective rehabilitation method for stroke patients with CNS damage. The aim of this study was to determine the effectiveness of six weeks of FES walking training based on the lower-limb muscle synergies of stroke patients. Lower-limb muscle synergies were extracted from electromyography (EMG) signals using a non-negative matrix factorization (NMF) algorithm. Cosine similarity and cross-correlation were calculated to compare them with those of healthy subjects. In both stroke patients, the leg muscle synergies during walking became more similar to those of healthy subjects as foot drop decreased during walking. The FES walking intervention influenced the similarity of muscle synergies during walking in stroke patients and is an effective method for reducing foot drop and improving their gait performance.
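
A minimal sketch of the analysis pipeline described above, extracting synergies with NMF and comparing muscle weightings by cosine similarity and activation profiles by cross-correlation, is shown below; the random toy EMG matrices, the number of muscles, and the number of synergies are assumptions, not the study's data or settings.

```python
import numpy as np
from numpy.linalg import norm
from sklearn.decomposition import NMF

def extract_synergies(emg, n_synergies=4):
    """Factor a non-negative EMG envelope matrix (muscles x time) into
    muscle weightings W (muscles x synergies) and activations H (synergies x time)."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000)
    W = model.fit_transform(emg)
    return W, model.components_

def cosine(a, b):
    return float(a @ b / (norm(a) * norm(b)))

# Toy, randomly generated EMG envelopes for 8 muscles over 500 time samples;
# real data would be rectified, low-pass-filtered EMG from gait trials.
rng = np.random.default_rng(0)
emg_patient = rng.random((8, 500))
emg_healthy = rng.random((8, 500))

W_p, H_p = extract_synergies(emg_patient)
W_h, H_h = extract_synergies(emg_healthy)

# Compare the first synergy's muscle weightings (cosine) and activation profiles (cross-correlation).
print("cosine similarity:", round(cosine(W_p[:, 0], W_h[:, 0]), 3))
xcorr_max = np.correlate(H_p[0] / norm(H_p[0]), H_h[0] / norm(H_h[0]), mode="full").max()
print("max cross-correlation:", round(float(xcorr_max), 3))
```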

A New Similarity Measure using Fuzzy Logic for User-based Collaborative Filtering (사용자 기반의 협력필터링을 위한 퍼지 논리를 이용한 새로운 유사도 척도)

  • Lee, Soojung
    • The Journal of Korean Association of Computer Education / v.21 no.5 / pp.61-68 / 2018
  • Collaborative filtering is a fundamental technique implemented in many commercial recommender systems and provides a successful service to online users. This technique recommends items by referring to other users who have similar rating records to the current user. Hence, similarity measures critically affect the system performance. This study addresses problems of previous similarity measures and suggests a new similarity measure. The proposed measure reflects the subjectivity or vagueness of user ratings and the users' rating behavior by using fuzzy logic. We conduct experimental studies for performance evaluation, whose results show that the proposed measure demonstrates outstanding performance improvements in terms of prediction accuracy and recommendation accuracy.
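
The abstract does not define the proposed measure, so the sketch below only illustrates how fuzzy logic can soften crisp ratings: each rating gets triangular membership degrees in low/medium/high sets, and two users are compared by the average overlap of these memberships on co-rated items. The membership functions and the overlap rule are assumptions for illustration, not the paper's similarity measure.

```python
def memberships(rating, r_min=1, r_max=5):
    """Triangular membership degrees of a rating in the fuzzy sets low/medium/high."""
    mid = (r_min + r_max) / 2
    half = (r_max - r_min) / 2
    low = max(0.0, (mid - rating) / half)
    high = max(0.0, (rating - mid) / half)
    medium = max(0.0, 1.0 - abs(rating - mid) / half)
    return low, medium, high

def fuzzy_user_similarity(ratings_u, ratings_v):
    """Average, over co-rated items, of the overlap (element-wise min)
    of the two users' fuzzy membership vectors."""
    common = ratings_u.keys() & ratings_v.keys()
    if not common:
        return 0.0
    total = 0.0
    for item in common:
        mu, mv = memberships(ratings_u[item]), memberships(ratings_v[item])
        total += sum(min(a, b) for a, b in zip(mu, mv))
    return total / len(common)

alice = {"item1": 5, "item2": 3, "item3": 1}
bob   = {"item1": 4, "item2": 3, "item4": 2}
print(round(fuzzy_user_similarity(alice, bob), 3))
```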

Implementation of A Plagiarism Detecting System with Sentence and Syntactic Word Similarities (문장 및 어절 유사도를 이용한 표절 탐지 시스템 구현)

  • Maeng, Joosoo; Park, Ji Su; Shon, Jin Gon
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.109-114 / 2019
  • The similarity detection method used in most plagiarism detection systems is based on the frequency of shared words obtained by morphological analysis. However, this method has limitations in detecting the accurate degree of similarity, especially when similar words about the same topic are used, when sentences are partially excerpted, or when postpositions and word endings are similar. To overcome this problem, we have designed and implemented a plagiarism detection system that provides more reliable similarity information by measuring sentence similarity and syntactic word similarity in addition to the conventional word similarity. We compared our system with a conventional system that uses only word similarity. The comparative experiment has shown that our system detects the plagiarized documents that the conventional system detects as well as ones that it misses.
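
A minimal sketch of combining document-level word similarity with a sentence-level score, loosely in the spirit of the system described above, is shown below; the cosine/Jaccard choices, the sentence splitting, and the weights are assumptions rather than the implemented system.

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def word_similarity(doc_a, doc_b):
    """Document-level cosine similarity over word frequencies."""
    a, b = Counter(tokenize(doc_a)), Counter(tokenize(doc_b))
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def sentence_similarity(doc_a, doc_b):
    """For each sentence of doc_a, find its best-matching sentence in doc_b
    (word-set Jaccard) and average these best scores."""
    sents_a = [s for s in re.split(r"[.!?]\s*", doc_a) if s]
    sents_b = [set(tokenize(s)) for s in re.split(r"[.!?]\s*", doc_b) if s]
    if not sents_a or not sents_b:
        return 0.0
    scores = []
    for s in sents_a:
        sa = set(tokenize(s))
        scores.append(max(len(sa & sb) / len(sa | sb) if sa | sb else 0.0 for sb in sents_b))
    return sum(scores) / len(scores)

def plagiarism_score(doc_a, doc_b, w_word=0.5, w_sent=0.5):
    return w_word * word_similarity(doc_a, doc_b) + w_sent * sentence_similarity(doc_a, doc_b)

original = "The cat sat on the mat. It was a sunny day."
suspect = "It was a sunny day. A cat sat on a mat quietly."
print(round(plagiarism_score(original, suspect), 3))
```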

Clustering-based Statistical Machine Translation Using Syntactic Structure and Word Similarity (문장구조 유사도와 단어 유사도를 이용한 클러스터링 기반의 통계기계번역)

  • Kim, Han-Kyong; Na, Hwi-Dong; Li, Jin-Ji; Lee, Jong-Hyeok
    • Journal of KIISE:Software and Applications / v.37 no.4 / pp.297-304 / 2010
  • Clustering based on sentence type or document genre is a technique used to improve the translation quality of SMT (statistical machine translation) through domain-specific translation, but no previous research uses sentence type and document genre information simultaneously. In this paper, we suggest an integrated clustering method that classifies sentence type by syntactic structure similarity and document genre by word similarity. We interpolate the domain-specific models built from these clusters with general models to improve the translation quality of the SMT system. A kernel function and cosine measures are applied to calculate structural similarity and word similarity, and with these similarities a K-means-like machine learning algorithm is used for clustering. On a Japanese-English patent translation corpus, we obtained a 2.5 percentage-point relative improvement in translation quality in the best case.
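
A minimal sketch of word-similarity-based clustering of sentences, one half of the integrated method described above, is shown below using TF-IDF vectors and K-means from scikit-learn; the kernel-based syntactic-structure clustering is not reproduced, and the toy sentences and cluster count are assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

sentences = [
    "A semiconductor device comprising a substrate and an electrode.",
    "The electrode is formed on the semiconductor substrate.",
    "Press the power button to turn on the device.",
    "Turn off the device by pressing the power button again.",
]

# TF-IDF word vectors; normalizing each vector to unit length makes
# Euclidean K-means behave like clustering by cosine similarity.
vectors = normalize(TfidfVectorizer().fit_transform(sentences))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. the first two sentences fall in one cluster, the last two in another
```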