• Title/Summary/Keyword: Sentence Similarity


Modern Methods of Text Analysis as an Effective Way to Combat Plagiarism

  • Myronenko, Serhii;Myronenko, Yelyzaveta
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.242-248 / 2022
  • The article presents an analysis of modern methods for automatically comparing original and unoriginal text to detect textual plagiarism. The study covers two types of plagiarism: literal, where plagiarists copy the text exactly without changing anything, and intelligent, which uses more sophisticated techniques that are harder to detect because of text manipulation such as word and sign replacement. The standard techniques for extrinsic detection are string-based, vector-space and semantic-based. The most common and most successful models for detecting literal plagiarism, N-gram and Vector Space, are analyzed, and their advantages and disadvantages are evaluated. The most effective models for detecting intelligent plagiarism, in particular identifying paraphrases by measuring the semantic similarity of short text components, are investigated. Models using neural network architectures based on natural language sentence matching, such as the Densely Interactive Inference Network (DIIN), Bilateral Multi-Perspective Matching (BiMPM) and Bidirectional Encoder Representations from Transformers (BERT) and its family of models, are considered. Progress in improving plagiarism detection systems, techniques and related models is summarized. Relevant and urgent problems that remain unresolved in detecting intelligent plagiarism, namely effective recognition of unoriginal ideas and thoroughly paraphrased text, are outlined.
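
A minimal sketch of the vector-space approach mentioned above, using character n-gram TF-IDF vectors and cosine similarity (scikit-learn). The n-gram range and toy sentences are illustrative assumptions, not values from the article; detecting intelligent (paraphrase) plagiarism would require swapping in a semantic sentence encoder such as a BERT-family model.

```python
# Literal plagiarism scoring with character n-grams and a TF-IDF
# vector-space model. Thresholds and the n-gram range are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def literal_similarity(original: str, suspect: str, ngram_range=(3, 5)) -> float:
    """Cosine similarity of character n-gram TF-IDF vectors."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=ngram_range)
    vectors = vectorizer.fit_transform([original, suspect])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

if __name__ == "__main__":
    src = "The quick brown fox jumps over the lazy dog."
    copy = "The quick brown fox jumps over the lazy dog!"
    paraphrase = "A fast auburn fox leaps above a sleepy hound."
    print(literal_similarity(src, copy))        # high: near-exact copy
    print(literal_similarity(src, paraphrase))  # low: paraphrase evades n-grams
```

The low score for the paraphrase illustrates why the article turns to semantic models such as DIIN, BiMPM and BERT for intelligent plagiarism.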

Sentence Interaction-based Document Similarity Models for News Clustering (뉴스 클러스터링을 위한 문장 간 상호 작용 기반 문서 쌍 유사도 측정 모델들)

  • Choi, Seonghwan;Son, Donghyun;Lee, Hochang
    • Annual Conference on Human and Language Technology / 2020.10a / pp.401-407 / 2020
  • In news clustering, the similarity between two documents is one of the key factors that determines the characteristics of a cluster. The traditional word-based approach, TF-IDF vector similarity, fails to reflect semantic similarity between documents, while existing deep-learning-based sequence similarity models fail to capture the long context that appears at the document level. In this paper, to build document-pair similarity models suitable for news clustering, we propose four similarity models that measure the similarity of a whole document pair by aggregating the similarity information among the many sentence representations produced from the pair. Unlike approaches such as HAN (hierarchical attention network), which compress an entire document into a single vector, these approaches estimate the similarity of the whole document pair from the direct similarities between the sentences appearing in the two documents. We conducted document-pair similarity experiments with the existing approaches, SVM and HAN, and the proposed four similarity models, confirmed that two of the proposed approaches outperform the existing ones, and found that they show performance similar to a graph-based approach while measuring document similarity more efficiently.
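
A minimal sketch of the sentence-interaction idea described above: embed each sentence (TF-IDF here as a stand-in for the paper's learned sentence representations), build the sentence-to-sentence similarity matrix between the two documents, and pool it into one document-pair score. The max-then-mean pooling and the toy documents are illustrative assumptions, not the paper's four proposed models.

```python
# Sentence-interaction style document-pair similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def doc_pair_similarity(sents_a, sents_b):
    vectorizer = TfidfVectorizer().fit(sents_a + sents_b)
    A = vectorizer.transform(sents_a)
    B = vectorizer.transform(sents_b)
    S = cosine_similarity(A, B)  # |A| x |B| sentence interaction matrix
    # For each sentence, keep its best match in the other document,
    # then average both directions.
    return 0.5 * (S.max(axis=1).mean() + S.max(axis=0).mean())

doc1 = ["Stocks fell sharply on Monday.", "Tech shares led the decline."]
doc2 = ["Technology stocks dropped.", "Markets slid at the start of the week."]
print(doc_pair_similarity(doc1, doc2))
```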


A Method for Extracting Equipment Specifications from Plant Documents and Cross-Validation Approach with Similar Equipment Specifications (플랜트 설비 문서로부터 설비사양 추출 및 유사설비 사양 교차 검증 접근법)

  • Jae Hyun Lee;Seungeon Choi;Hyo Won Suh
    • Journal of Korea Society of Industrial Information Systems / v.29 no.2 / pp.55-68 / 2024
  • Plant engineering companies create or refer to requirements documents for each related field, such as plant process/equipment/piping/instrumentation, in different engineering departments. A process-related requirements document includes not only a description of the process but also the requirements of the equipment or related facilities that will operate it. Since the authors and reviewers of the requirements documents differ, inconsistencies may occur between equipment or parts design specifications described in different requirements documents. Ensuring consistency in these matters can increase the reliability of the overall plant design information. However, the volume of documents and the scattering of requirements for the same equipment and parts across different documents make it challenging for engineers to trace and manage requirements. This paper proposes a method to analyze requirement sentences and calculate their similarity in order to identify semantically identical sentences. To calculate the similarity of requirement sentences, a named entity recognition method is proposed to identify the compound words for the parts and properties that are semantically central to each requirement, along with a method to calculate the similarity of the identified compound words. The proposed method is explained using sentences from practical documents, and experimental results are described.
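
A minimal sketch of the cross-validation idea: given part/property compound words already extracted by a named-entity recognizer, match requirement sentences whose compound words are similar and flag differing values. The token-level Jaccard overlap, the field names, and the threshold are illustrative assumptions rather than the paper's proposed similarity.

```python
# Cross-check extracted equipment specifications for inconsistencies.
def compound_similarity(a: str, b: str) -> float:
    """Token-level Jaccard overlap between two compound words."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def specs_match(spec_a: dict, spec_b: dict, threshold: float = 0.5) -> bool:
    """Two extracted specs refer to the same requirement if both the part
    and the property compound words are sufficiently similar."""
    return (compound_similarity(spec_a["part"], spec_b["part"]) >= threshold and
            compound_similarity(spec_a["property"], spec_b["property"]) >= threshold)

spec1 = {"part": "centrifugal pump casing", "property": "design pressure", "value": "25 bar"}
spec2 = {"part": "pump casing", "property": "design pressure", "value": "20 bar"}
if specs_match(spec1, spec2) and spec1["value"] != spec2["value"]:
    print("Potential inconsistency:", spec1["value"], "vs", spec2["value"])
```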

Investigating an Automatic Method in Summarizing a Video Speech Using User-Assigned Tags (이용자 태그를 활용한 비디오 스피치 요약의 자동 생성 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science / v.46 no.1 / pp.163-181 / 2012
  • We investigated how useful video tags are for summarizing video speech and how valuable positional information is for speech summarization. Furthermore, we examined the similarity among sentences selected for a speech summary in order to reduce its redundancy. Based on these analysis results, we designed and evaluated a method for automatically summarizing speech transcripts using a modified Maximum Marginal Relevance model. This model not only reduces redundancy but also enables the use of social tags, title words, and sentence positional information. Finally, we compared the proposed method to the Extractor system, in which key sentences of a video speech are chosen using the frequency and location information of speech content words. Results showed that the precision and recall of the proposed method were higher than those of the Extractor system, although the difference in recall was not significant.
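
A minimal sketch of Maximum Marginal Relevance selection in the spirit of the modified model above: relevance to a query built from title words and tags, a redundancy penalty against already selected sentences, and an optional positional bonus. The TF-IDF relevance, the linear position prior, and the parameter values are illustrative assumptions.

```python
# MMR-style extractive summarization of a speech transcript.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_summary(sentences, query, k=3, lam=0.7, position_bonus=0.0):
    vec = TfidfVectorizer().fit(sentences + [query])
    S = vec.transform(sentences)
    q = vec.transform([query])
    relevance = cosine_similarity(S, q).ravel()
    # Optional positional prior: earlier sentences get a small boost.
    relevance += position_bonus * np.linspace(1.0, 0.0, num=len(sentences))
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((cosine_similarity(S[i], S[j])[0, 0] for j in selected),
                             default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in sorted(selected)]
```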

Research on Designing Korean Emotional Dictionary using Intelligent Natural Language Crawling System in SNS (SNS대상의 지능형 자연어 수집, 처리 시스템 구현을 통한 한국형 감성사전 구축에 관한 연구)

  • Lee, Jong-Hwa
    • The Journal of Information Systems / v.29 no.3 / pp.237-251 / 2020
  • Purpose This research constructed a hierarchical Hangul (Korean) emotion index by organizing the emotions that SNS users express. In a preliminary study, the researcher reinterpreted Plutchik's (1980) English-based emotional model in Korean and studied hashtags with implicit meaning on SNS. To build a multidimensional emotion dictionary and classify three-dimensional emotions, emotion seeds were selected to compose seven emotion sets, and an emotion word dictionary was constructed by collecting the SNS hashtags derived from each emotion seed. The priority of each Hangul emotion index was also explored. Design/methodology/approach The words constituting each sentence were converted into vectors, weights were extracted with TF-IDF (Term Frequency-Inverse Document Frequency), and the dimensionality of the matrix for each emotion set was reduced with the NMF (Non-negative Matrix Factorization) algorithm. The emotional dimensions were derived from the characteristic values of the emotion words, and the cosine distance algorithm was used to measure the similarity between emotion words within an emotion set. Findings Customer needs analysis is the ability to read changes in emotion, and research on Korean emotion words addresses those needs. In addition, the ranking of emotion words within an emotion set provides a distinctive criterion for reading the depth of an emotion. By providing companies with effective information for emotional marketing, the sentiment index developed in this research is expected to open up and add value to new business opportunities. Furthermore, if the emotion dictionary is eventually connected to the emotional DNA of a product, it will become possible to define "emotional DNA", the set of emotions that the product should have.
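
A minimal sketch of the pipeline described in the methodology: TF-IDF weighting of emotion-tag documents, NMF to obtain latent emotion dimensions, and cosine similarity to rank words within an emotion set. The toy corpus, the number of components, and the seeding scheme are illustrative assumptions, not the paper's data or settings.

```python
# TF-IDF -> NMF -> cosine ranking of emotion words.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "joy happy delight smile festival",
    "anger rage furious annoyed protest",
    "sad tears grief lonely loss",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)                        # documents x terms
nmf = NMF(n_components=3, init="nndsvda", random_state=0)
W = nmf.fit_transform(X)                             # document-emotion weights
H = nmf.components_                                  # emotion-term weights

# Rank terms within one latent emotion dimension by cosine similarity
# between each term's column of H and a one-hot seed for that dimension.
terms = tfidf.get_feature_names_out()
seed = [[1, 0, 0]]                                   # first latent emotion
scores = cosine_similarity(H.T, seed).ravel()
print(sorted(zip(terms, scores), key=lambda t: -t[1])[:5])
```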

WV-BTM: A Technique on Improving Accuracy of Topic Model for Short Texts in SNS (WV-BTM: SNS 단문의 주제 분석을 위한 토픽 모델 정확도 개선 기법)

  • Song, Ae-Rin;Park, Young-Ho
    • Journal of Digital Contents Society / v.19 no.1 / pp.51-58 / 2018
  • As the number of SNS users and the amount of SNS data have grown explosively, research based on SNS big data has become active. In social mining, Latent Dirichlet Allocation (LDA), a typical topic-model technique, is used to identify the similarity of texts in large volumes of unclassified SNS text data and to extract trends from them. However, LDA has the limitation that it is difficult to infer high-level topics because of the semantic sparsity caused by infrequent word occurrences in short-sentence data. BTM (Biterm Topic Model) improved on this limitation of LDA by modeling combinations of two words (biterms). However, BTM is also limited in that it is influenced more by the high-frequency word in each pair, so it cannot compute weights that reflect the relation of each biterm to its topic. In this paper, we propose a technique to improve the accuracy of the existing BTM by reflecting the semantic relation between words.
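
A minimal sketch of the core idea of weighting biterms by the semantic relation between their words: each word pair drawn from a short text is scored with the cosine similarity of its word vectors, so semantically related pairs count more. The toy corpus and Word2Vec settings are illustrative assumptions, not the paper's configuration.

```python
# Semantic-similarity weights for biterms drawn from short texts.
from itertools import combinations
from gensim.models import Word2Vec

corpus = [
    ["phone", "battery", "drains", "fast"],
    ["battery", "life", "is", "short"],
    ["screen", "brightness", "looks", "great"],
]
w2v = Word2Vec(corpus, vector_size=50, window=3, min_count=1)

def weighted_biterms(tokens):
    """Yield (word1, word2, weight) with a word-vector similarity weight."""
    for w1, w2 in combinations(tokens, 2):
        sim = float(w2v.wv.similarity(w1, w2))
        yield w1, w2, max(sim, 0.0)   # clip negatives so weights stay non-negative

for biterm in weighted_biterms(corpus[0]):
    print(biterm)
```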

A Method for Detection and Correction of Pseudo-Semantic Errors Due to Typographical Errors (철자오류에 기인한 가의미 오류의 검출 및 교정 방법)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.173-182 / 2013
  • Typographical mistakes made while drafting electronic documents are more common than any other type of error. Most errors caused by mistyping remain simple spelling errors, but a considerable number develop into grammatical or semantic errors. Among them, pseudo-semantic errors caused by typographical errors show a more prominent discrepancy with the senses of the surrounding context words in a sentence than pure semantic errors do. Because of this prominent contextual discrepancy, such errors can be detected and corrected by a simple algorithm based on co-occurrence frequency. I propose a detection and correction method based on co-occurrence frequency for semantic errors caused by typographical errors. In the proposed method, co-occurrence frequencies are counted only for words in an immediate dependency relation, and a cosine similarity measure is used to detect pseudo-semantic errors. The presented experimental results suggest that the proposed method can improve the detection rate of an overall proofreading system by about 2~3%.
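
A minimal sketch of the detection idea: count co-occurrence frequencies from a reference corpus and use cosine similarity to check whether a word fits its governing context better than a near-spelling alternative. Adjacent word pairs stand in for the paper's immediate dependency relations, and the corpus and the example typo are illustrative assumptions.

```python
# Co-occurrence vectors + cosine similarity for pseudo-semantic errors.
from collections import Counter, defaultdict
import math

corpus = [
    "the pilot landed the plane safely",
    "the pilot flew the plane at night",
    "the chef cooked the meal quickly",
]
cooc = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):   # adjacent pairs ~ dependency links
        cooc[a][b] += 1
        cooc[b][a] += 1

def cosine(u: Counter, v: Counter) -> float:
    num = sum(u[w] * v[w] for w in set(u) & set(v))
    den = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return num / den if den else 0.0

# "plans" is a mistyping of "plane" that happens to be a real word
# (a pseudo-semantic error); the corrected candidate fits the head better.
head = "landed"
print(cosine(cooc[head], cooc.get("plans", Counter())))  # ~0.0, suspicious
print(cosine(cooc[head], cooc["plane"]))                 # higher, plausible fix
```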

A Hybrid Method of Verb disambiguation in Machine Translation (기계번역에서 동사 모호성 해결에 관한 하이브리드 기법)

  • Moon, Yoo-Jin;Martha Palmer
    • The Transactions of the Korea Information Processing Society / v.5 no.3 / pp.681-687 / 1998
  • The paper presents a hybrid method for disambiguating verb meaning in machine translation. The presented verb translation algorithm performs the concept-based method and the statistics-based method simultaneously, using a collocation dictionary, WordNet, and statistical information extracted from a corpus. In the transfer phase of machine translation, the algorithm tries to find the target word for the source verb. If it fails, it refers to WordNet and calculates word similarities between the logical constraints of the source sentence and those in the collocation dictionary. At the same time, it refers to the statistical information extracted from the corpus and calculates co-occurrence similarity. The experimental results show that the algorithm translates verbs more accurately than the other algorithms, improving verb translation accuracy by 24.8% compared to the collocation-based method.
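
A minimal sketch of the WordNet fallback step described above, using NLTK's WordNet interface to score candidate target senses by the similarity between the source verb's argument and a prototypical argument of each candidate. The candidate table and example objects are illustrative assumptions, not the paper's collocation dictionary.

```python
# WordNet-based similarity for verb sense selection.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def noun_similarity(word_a: str, word_b: str) -> float:
    """Best path similarity over the noun senses of two words."""
    best = 0.0
    for sa in wn.synsets(word_a, pos=wn.NOUN):
        for sb in wn.synsets(word_b, pos=wn.NOUN):
            best = max(best, sa.path_similarity(sb) or 0.0)
    return best

# Disambiguating "run" by its object: "run a company" vs "run a marathon".
candidates = {"manage": "business", "sprint": "race"}
for obj in ["company", "marathon"]:
    scores = {sense: noun_similarity(obj, proto) for sense, proto in candidates.items()}
    print(obj, "->", max(scores, key=scores.get), scores)
```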


A New Similarity Measure for Improving Ranking in QA Systems (질의응답시스템 응답순위 개선을 위한 새로운 유사도 계산방법)

  • Kim Myung-Gwan;Park Young-Tack
    • Journal of KIISE:Computing Practices and Letters / v.10 no.6 / pp.529-536 / 2004
  • The main idea of this paper is to combine positional information within sentences with query-type classification to make document ranking against a query more accurate. First, the use of conceptual graphs for representing document contents in information retrieval is discussed. The method is based on well-known text-comparison strategies, such as the Dice coefficient, with position-based term weighting. Second, we introduce a method for learning query-type classification that improves the ability of a question answering system to retrieve answers to questions. The proposed method employs naive Bayes classification from the machine learning field. We used a collection of approximately 30,000 question-answer pairs for training, obtained from Frequently Asked Question (FAQ) files on various subjects. The evaluation on a set of queries from the international TREC-9 question answering track shows that the method with machine learning outperforms the other systems in TREC-9 (0.29 mean reciprocal rank and 55.1% precision).
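
A minimal sketch of a Dice coefficient with position-based term weighting, in the spirit of the first component above: shared query terms contribute more when they occur early in the sentence. The linear decay and its floor are illustrative assumptions, not the paper's exact weighting.

```python
# Position-weighted Dice coefficient between a query and a sentence.
def weighted_dice(query: str, sentence: str) -> float:
    q_terms = set(query.lower().split())
    s_tokens = sentence.lower().split()
    weights = {}
    for i, tok in enumerate(s_tokens):
        # Keep the weight of the earliest occurrence: 1.0 at the start,
        # decaying linearly toward a floor of 0.5.
        weights.setdefault(tok, max(0.5, 1.0 - i / (2 * len(s_tokens))))
    shared = sum(weights[t] for t in q_terms if t in weights)
    denom = len(q_terms) + len(weights)
    return 2 * shared / denom if denom else 0.0

print(weighted_dice("capital of france", "paris is the capital of france"))
print(weighted_dice("capital of france", "france exports wine and cheese to the capital"))
```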

K-Means Clustering with Content Based Doctor Recommendation for Cancer

  • Kumar, Rethina;Ganapathy, Gopinath;Kang, Jeong-Jin
    • International Journal of Advanced Culture Technology / v.8 no.4 / pp.167-176 / 2020
  • Recommendation systems are in high demand among users and researchers who need proper, personalized suggestions, such as sorting and suggesting doctors to patients. Most rating predictions in recommendation systems are based on patients' feedback and information about their treatment. A patient's preferences are estimated from the historical behaviour of similar patients, and the similarity between patients is generally measured from their feedback together with information about the doctor, the treatment methods and their success rates. This paper presents a new method for predicting top-ranked doctors in a recommendation system. The proposed system starts by identifying similar doctors based on the patients' health requirements and clusters them using K-Means clustering. The proposed K-Means Clustering with Content Based Doctor Recommendation for Cancer (KMC-CBD) helps users find an optimal solution. The core component of the KMC-CBD system suggests to a patient the top recommended doctors, drawing on other patients who were already treated by those doctors, and supports the choice of doctor and hospital for the patient's requirements and health condition. The system first applies K-Means clustering, an unsupervised learning method, to group doctors according to their medical profiles. A content-based doctor recommendation module then generates a top-rated list of doctors for a given patient profile by exploiting health data shared by the crowd internet community. Patients can find the most similar patients, analyze how they were treated for similar diseases, and send and receive suggestions to solve their health issues. To improve the efficiency of the recommendation system, a patient can express their health information as a natural-language sentence; the system analyzes it, identifies the most relevant medical area for that specific case, and uses this information in the recommendation task of suggesting the right doctors for a specific health problem. The proposed system is implemented in Python with the necessary functions and dataset.
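
A minimal sketch of the two-stage KMC-CBD idea: K-Means clustering of doctor profiles over TF-IDF features, followed by content-based ranking of the doctors in the patient's cluster against a natural-language description of the patient's condition. The profiles, the number of clusters, and the ranking rule are illustrative assumptions, not the paper's dataset or implementation.

```python
# Cluster doctors, then rank the patient's cluster by content similarity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

doctors = {
    "Dr. A": "oncology breast cancer chemotherapy",
    "Dr. B": "oncology lung cancer radiation therapy",
    "Dr. C": "cardiology heart failure hypertension",
    "Dr. D": "cardiology arrhythmia pacemaker",
}
names = list(doctors)
vec = TfidfVectorizer()
X = vec.fit_transform(doctors.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

patient = "recently diagnosed with lung cancer and looking for treatment options"
q = vec.transform([patient])
sims = cosine_similarity(X, q).ravel()
patient_cluster = labels[int(np.argmax(sims))]   # cluster of the closest doctor

ranked = sorted(
    (i for i in range(len(names)) if labels[i] == patient_cluster),
    key=lambda i: -sims[i],
)
print([(names[i], round(float(sims[i]), 3)) for i in ranked])
```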