• Title/Summary/Keyword: Text Similarity

Do Words in Central Bank Press Releases Affect Thailand's Financial Markets?

  • CHATCHAWAN, Sapphasak
    • The Journal of Asian Finance, Economics and Business / v.8 no.4 / pp.113-124 / 2021
  • The study investigates how financial markets respond to shocks to the tone and semantic similarity of Bank of Thailand press releases. Natural language processing techniques are employed to quantify the tone and the semantic similarity of 69 press releases from 2010 to 2018. The corpus of press releases is accessible to the general public. Stock market returns and bond yields are measured by the logged return on the SET50 index and by short-term and long-term government bond yields, respectively. Data are daily, from January 4, 2010, to August 8, 2019. The study uses a structural vector autoregressive (SVAR) model to analyze the effects of unanticipated, temporary shocks to the tone and the semantic similarity on bond yields and stock market returns, and impulse response functions are constructed for the analysis. The results show that 1-month, 3-month, 6-month, and 1-year bond yields significantly increase in response to a positive shock to the tone of the press releases, and that 1-month, 3-month, 6-month, 1-year, and 25-year bond yields significantly increase in response to a positive shock to the semantic similarity. Interestingly, stock market returns on the SET50 index do not respond significantly to shocks to either the tone or the semantic similarity of the press releases.
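
As a rough illustration of the NLP side of this pipeline, the sketch below scores each press release against its predecessor with TF-IDF cosine similarity; the texts, the vectorizer choice, and the similarity measure are stand-in assumptions, since the abstract does not specify the exact method used.

```python
# Hypothetical sketch: quantifying the semantic similarity of consecutive
# central bank press releases with TF-IDF cosine similarity (the paper's
# exact NLP pipeline is not specified in the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

press_releases = [
    "The Committee voted to maintain the policy rate at 1.50 percent.",
    "The Committee voted to hold the policy rate at 1.50 percent.",
    "The Committee decided to cut the policy rate by 25 basis points.",
]  # placeholder texts; the study uses 69 Bank of Thailand releases

tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform(press_releases)

# Similarity of each release to its predecessor: a shock to this series
# is what the SVAR model traces through bond yields and stock returns.
for i in range(1, len(press_releases)):
    sim = cosine_similarity(vectors[i - 1], vectors[i])[0, 0]
    print(f"release {i} vs {i - 1}: similarity = {sim:.3f}")
```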

Image Based Text Matching Using Local Crowdedness and Hausdorff Distance (지역 밀집도 및 Hausdorff 거리를 이용한 영상기반 텍스트 매칭)

  • Son, Hwa-Jeong;Kim, Ji-Soo;Park, Mi-Seon;Yoo, Jae-Myeong;Kim, Soo-Hyung
    • The Journal of the Korea Contents Association / v.6 no.10 / pp.134-142 / 2006
  • In this paper, we investigate whether the Hausdorff distance, which is used to measure image similarity, is also effective for document retrieval. The proposed method uses local crowdedness and the Hausdorff distance to locate text images by determining whether a pair of images scanned at different times comes from the same text. To reduce the processing time, one of the disadvantages of the Hausdorff distance algorithm, we adopt local crowdedness for feature point extraction. We apply the proposed method to 190 pairs of the same class and 190 pairs of different classes collected from postal envelope images. The results show that the modified Hausdorff distance proposed in this paper performs well in locating the text region and calculating the degree of similarity between two images. Improvements in accuracy of 2.7% and 9.0% were obtained compared to a binary correlation method and the original Hausdorff distance method, respectively.
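
A minimal sketch of a modified Hausdorff distance (mean-of-minima, Dubuisson-Jain style) over two feature point sets is given below; the example points and the exact variant are assumptions, and the paper's local-crowdedness feature extraction step is not reproduced here.

```python
# A minimal sketch of a modified Hausdorff distance between two sets of
# feature points; the arrays stand in for points extracted from two scans
# of (possibly) the same text.
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Mean nearest-neighbour distance, symmetrised by taking the max."""
    d = cdist(a, b)                      # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()          # A -> B average nearest distance
    d_ba = d.min(axis=0).mean()          # B -> A average nearest distance
    return max(d_ab, d_ba)

points_a = np.array([[10, 12], [20, 15], [33, 40]], dtype=float)
points_b = np.array([[11, 12], [21, 16], [60, 70]], dtype=float)
print(f"modified Hausdorff distance: {modified_hausdorff(points_a, points_b):.2f}")
```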

The Research Trends and Keywords Modeling of Shoulder Rehabilitation using the Text-mining Technique (텍스트 마이닝 기법을 활용한 어깨 재활 연구분야 동향과 키워드 모델링)

  • Kim, Jun-hee;Jung, Sung-hoon;Hwang, Ui-jae
    • Journal of the Korean Society of Physical Medicine / v.16 no.2 / pp.91-100 / 2021
  • PURPOSE: This study analyzed the trends and characteristics of shoulder rehabilitation research through keyword analysis and modeled their relationships using text mining techniques. METHODS: Abstract data from 10,121 articles registered on PubMed's MEDLINE with 'shoulder' and 'rehabilitation' as keywords were collected using Python. By analyzing word frequencies, 10 keywords were selected in order of highest frequency. Word embedding was performed using the word2vec technique to analyze the similarity of words, and the keywords were grouped and analyzed based on distance (cosine similarity) through the t-SNE technique. RESULTS: The number of studies related to shoulder rehabilitation is increasing year after year, and the keywords most frequently used in shoulder rehabilitation studies are 'patient', 'pain', and 'treatment'. The word2vec results showed that the words were highly correlated with 12 keywords from studies related to shoulder rehabilitation. Furthermore, through t-SNE, the keywords of the studies were divided into 5 groups. CONCLUSION: This study was the first to model the keywords, and the relationships among them, that make up the abstracts of MEDLINE research related to 'shoulder' and 'rehabilitation' using text mining techniques. The results of this study will help diversify the topics of future shoulder rehabilitation research.
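
The sketch below mirrors the described pipeline on a toy corpus: train word2vec embeddings, query similar words, and project keyword vectors to 2-D with t-SNE. The corpus, hyperparameters, and keyword list are placeholders, not the study's actual settings.

```python
# A hedged sketch of the abstract's pipeline: Word2Vec on tokenized
# abstracts, then t-SNE to group keyword vectors by embedding proximity.
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

corpus = [
    ["shoulder", "pain", "patient", "treatment", "rehabilitation"],
    ["rotator", "cuff", "exercise", "patient", "pain"],
    ["shoulder", "surgery", "recovery", "exercise", "treatment"],
]  # placeholder token lists; the study used 10,121 PubMed abstracts

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, seed=1)

keywords = ["patient", "pain", "treatment", "shoulder", "exercise"]
vectors = model.wv[keywords]

# Words similar to a keyword (cosine similarity in embedding space).
print(model.wv.most_similar("pain", topn=3))

# 2-D coordinates for plotting/grouping; perplexity must be < #points.
coords = TSNE(n_components=2, perplexity=2, random_state=1).fit_transform(vectors)
for word, (x, y) in zip(keywords, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```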

On the Development of Risk Factor Map for Accident Analysis using Text Mining and Self-Organizing Map (SOM) Algorithms (재해분석을 위한 텍스트마이닝과 SOM 기반 위험요인지도 개발)

  • Kang, Sungsik;Suh, Yongyoon
    • Journal of the Korean Society of Safety / v.33 no.6 / pp.77-84 / 2018
  • Report documents on industrial and occupational accidents have continuously accumulated in private and public institutes. Among other content, the narrative text in these disaster reports, such as descriptions of accident processes and risk factors, is gaining value for accident analysis. Despite this potential, scientific and algorithmic text analytics for safety management have not yet been carried out. Thus, this study aims to develop data processing and visualization techniques that provide a systematic and structural view of the text information contained in disaster report documents, so that safety managers can effectively analyze accident risk factors. To this end, a risk factor map is developed using text mining and a self-organizing map. Text mining is first used to extract risk keywords from disaster report documents; the Self-Organizing Map (SOM) algorithm is then applied to visualize the risk factor map based on the similarity of the documents. As a result, the text information buried in a myriad of disaster report documents can be analyzed to provide risk factors to safety managers.
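
A speculative sketch of the two steps follows, using scikit-learn for TF-IDF keyword extraction and the third-party MiniSom library for the SOM; the report texts, grid size, and training settings are illustrative assumptions.

```python
# Sketch of the paper's two steps: extract risk keywords with TF-IDF text
# mining, then place accident reports on a self-organizing map so similar
# reports cluster into regions of a "risk factor map".
from minisom import MiniSom
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "worker fell from ladder while repairing roof",
    "fall from scaffold during roof repair work",
    "hand caught in press machine during operation",
    "finger injured by conveyor belt machinery",
]  # toy stand-ins for disaster report documents

tfidf = TfidfVectorizer()
x = tfidf.fit_transform(reports).toarray()

# A small 2x2 SOM grid; each cell becomes one region of the risk factor map.
som = MiniSom(2, 2, x.shape[1], sigma=0.8, learning_rate=0.5, random_seed=1)
som.train_random(x, 500)

for report, vec in zip(reports, x):
    print(som.winner(vec), "<-", report)  # winning grid cell per report
```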

Modern Methods of Text Analysis as an Effective Way to Combat Plagiarism

  • Myronenko, Serhii;Myronenko, Yelyzaveta
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.242-248 / 2022
  • The article presents an analysis of modern methods for automatically comparing original and unoriginal texts to detect plagiarism. The study covers two types of plagiarism: literal, in which plagiarists copy the text exactly without changing anything, and intelligent, which uses more sophisticated techniques that are harder to detect because the text is manipulated, for instance by replacing words and signs. Standard techniques for extrinsic detection are string-based, vector space, and semantic-based. The first, most common, and most successful target models for detecting literal plagiarism, N-gram and Vector Space, are analyzed, and their advantages and disadvantages are evaluated. The most effective target models for detecting intelligent plagiarism, particularly for identifying paraphrases by measuring the semantic similarity of short components of the text, are investigated. Models using neural network architectures and based on natural language sentence matching approaches, such as the Densely Interactive Inference Network (DIIN), Bilateral Multi-Perspective Matching (BiMPM), and Bidirectional Encoder Representations from Transformers (BERT) and its family of models, are considered. The progress in improving plagiarism detection systems, techniques, and related models is summarized. Relevant and urgent problems that remain unresolved in detecting intelligent plagiarism, namely the effective recognition of unoriginal ideas and qualitatively paraphrased text, are outlined.
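
As a concrete instance of the string-based family reviewed here, the sketch below compares word trigram sets with the Jaccard coefficient; the texts and n-gram size are arbitrary choices, and detecting intelligent plagiarism would instead call for the semantic models (e.g., BERT-family embeddings) discussed above.

```python
# A minimal sketch of the string-based (n-gram) approach to literal
# plagiarism: compare the sets of word trigrams of two texts with the
# Jaccard coefficient.
def ngrams(text: str, n: int = 3) -> set:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

original = "plagiarism detection compares documents for overlapping content"
suspect = "plagiarism detection compares documents for copied overlapping content"

score = jaccard(ngrams(original), ngrams(suspect))
print(f"trigram Jaccard similarity: {score:.2f}")  # high score -> likely copy
```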

A Study on the Integration of Similar Sentences in Automatic Summarizing of Document (자동초록 작성시에 발생하는 유사의미 문장요소들의 통합에 관한 연구)

  • Lee, Tae-Young
    • Journal of the Korean Society for Library and Information Science / v.34 no.2 / pp.87-115 / 2000
  • The effects of case, part of speech, word and clause location, word frequency, and other factors were studied for discriminating similar sentences in Korean text. Word frequency was strongly related to similarity discrimination, title words and functional clauses were weakly related, and the remaining factors were not. The cosine coefficient and Salton's similarity measure are used to measure the similarity between sentences. The change of clauses between sentences is also used to unify similar sentences into a representative sentence.
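
A minimal sketch of the cosine coefficient applied to sentence term-frequency vectors is shown below; the sentences and the 0.5 merge threshold are illustrative assumptions, not values from the paper.

```python
# Cosine coefficient between two sentences over simple term-frequency
# vectors; a threshold decides which sentence pairs to merge.
import math
from collections import Counter

def cosine(s1: str, s2: str) -> float:
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

a = "the system extracts important sentences from the document"
b = "important sentences are extracted from the document by the system"
sim = cosine(a, b)
print(f"cosine = {sim:.2f}; merge into one sentence: {sim > 0.5}")
```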

Graph based KNN for Optimizing Index of News Articles

  • Jo, Taeho
    • Journal of Multimedia Information System / v.3 no.3 / pp.53-61 / 2016
  • This research formulates index optimization as a classification task and applies a graph-based KNN to it. Index optimization is an important task for maximizing information retrieval performance. To address the problems of encoding words into numerical vectors, such as huge dimensionality and sparse distributions, we encode them into graphs as an alternative representation. In this research, index optimization is viewed as a classification task, a similarity measure between graphs is defined, the KNN is modified into a graph-based version built on that similarity measure, and the modified KNN is applied to the index optimization task. The expected benefits of this modification are improved classification performance, more expressive graphical representations of words, and the ability to trace classification results more easily. We validate the proposed version empirically on two text collections: NewsPage.com and 20NewsGroups.
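
The abstract does not specify the graph construction or the similarity measure, so the sketch below assumes a simple variant: each text becomes a set of adjacent-word edges, graph similarity is Jaccard overlap of edge sets, and a plain KNN vote classifies a query.

```python
# Speculative graph-based KNN: texts as consecutive-word edge sets, graph
# similarity as Jaccard overlap, classification by majority vote.
from collections import Counter

def to_graph(text: str) -> frozenset:
    tokens = text.lower().split()
    return frozenset(zip(tokens, tokens[1:]))  # adjacent-word edges

def graph_sim(g1: frozenset, g2: frozenset) -> float:
    return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 0.0

def knn_classify(query: str, train: list, k: int = 3) -> str:
    q = to_graph(query)
    ranked = sorted(train, key=lambda tg: graph_sim(q, to_graph(tg[0])),
                    reverse=True)
    return Counter(label for _, label in ranked[:k]).most_common(1)[0][0]

train = [
    ("stock market rises on strong earnings", "finance"),
    ("central bank raises interest rates", "finance"),
    ("team wins championship final match", "sports"),
    ("player scores winning goal in match", "sports"),
]
print(knn_classify("interest rates and the stock market", train, k=3))
```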

A Study on Extracting Car License Plate Numbers Using Image Segmentation Patterns

  • Jang, Eun-Gyeom
    • Journal of the Korea Society of Computer and Information / v.23 no.10 / pp.87-94 / 2018
  • This paper proposes a method for detecting vehicle license plates. The proposed technology, which is applicable to different license plate formats, detects the numbers by standardizing the images at edge points. Specifically, in accordance with the format of each license plate, the technology captures the image of each character segment and compares it against a sample model to derive their similarity and identify the numbers. Characters with high similarity form a group of candidates, from which the final characters are extracted. Analysis of the experimental results found that the similarity of the extracted characters exceeded 90%, whereas that of less identifiable numbers was markedly lower. Still, the accuracy of the extracted characters with the highest similarity was over 80%. The proposed technology is applicable to extracting the character patterns of certain formats in diverse and useful ways.
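
As a hedged illustration of segment-versus-template matching, the sketch below scores a character segment against sample templates with a normalized correlation; the 5x5 binary arrays and the scoring function are toy assumptions standing in for the paper's sample models.

```python
# Template-style character matching: compare a segmented character image
# against sample templates and keep the best-scoring candidates.
import numpy as np

def similarity(segment: np.ndarray, template: np.ndarray) -> float:
    a = segment.astype(float) - segment.mean()
    b = template.astype(float) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

templates = {
    "1": np.array([[0, 0, 1, 0, 0]] * 5),
    "7": np.array([[1, 1, 1, 1, 1]] + [[0, 0, 0, 1, 0]] * 4),
}
segment = np.array([[0, 0, 1, 0, 0]] * 5)  # pretend this was cut from a plate

scores = {ch: similarity(segment, t) for ch, t in templates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)  # high-similarity candidates form the final digits
```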

Experiment of long-neck Flange Cold Forging Process Using Plasticine (플라스티신을 이용한 롱넥 플랜지 냉간 단조 공정의 모사 실험)

  • 이호용;임중연;이상돈
    • Transactions of Materials Processing / v.10 no.1 / pp.67-74 / 2001
  • The cold forging process for producing a long-neck flange is investigated using model material tests. The two-stage process with the optimum design condition is examined using plasticine, which is suitable for modeling steel at room temperature. Similarity theory is employed to estimate the forging load of each sequence through strict application of the similarity condition between steel (AISI 1015) and the plasticine material. The model test results are compared with the simulation results and show good agreement. The forging process requiring the least forming energy is obtained at an extrusion semi-die angle of $25^{\circ}$.
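
For reference, a standard similarity relation used in model-material forging tests is sketched below; the abstract does not state the exact condition applied, so this specific form is an assumption.

```latex
% Assumed standard scaling for model-material tests: with geometric scale
% factor \lambda = L_p / L_m and flow stresses \sigma_p (steel, AISI 1015)
% and \sigma_m (plasticine), the prototype forging load follows from the
% measured model load as
\[
  F_p = F_m \cdot \frac{\sigma_p}{\sigma_m} \cdot \lambda^{2},
\]
% i.e., load scales with the stress ratio and the square of the length
% scale, since load = stress x area and area scales as \lambda^2.
```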

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification: higher-dimensional data require many more computations and can eventually cause high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, ranging from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of predefined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we expect that words similar to them also have little impact on sentence classification. This study proposes two approaches to more accurate classification that conduct selective word elimination under specific rules and construct word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build the word embedding. Second, we additionally eliminate words that are similar to the low-information-gain words and build the word embedding. Finally, the filtered text and word embeddings are applied to two deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews; since Yelp only shows the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared their performance with that of Word2Vec and GloVe word embeddings using all the words, and showed that one of the proposed methods outperforms the all-word embeddings: removing unimportant words yields better performance, although removing too many words lowers it.
For future research, diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity among words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding methods and elimination methods.
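
A condensed sketch of the second proposed variant follows: score words by information gain (here approximated by mutual information with the class label), mark the lowest scorers for elimination, and also eliminate their nearest Word2Vec neighbours. The toy reviews, thresholds, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Selective word elimination: drop low-information-gain words plus their
# nearest Word2Vec neighbours before building the embedding input.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

docs = [
    "great book really helpful review",
    "helpful clear great explanations",
    "boring waste of time totally",
    "totally unhelpful boring text",
]
labels = [1, 1, 0, 0]  # 1 = helpful review, 0 = not helpful

cv = CountVectorizer()
x = cv.fit_transform(docs)
ig = mutual_info_classif(x, labels, discrete_features=True, random_state=0)
words = cv.get_feature_names_out()

# Words with the lowest information gain are candidates for elimination.
low_ig = {w for w, g in zip(words, ig) if g < 0.1}

# Additionally eliminate words similar to the low-IG ones (cosine
# similarity in a Word2Vec space trained on the same corpus).
w2v = Word2Vec([d.split() for d in docs], vector_size=20, min_count=1, seed=0)
also_removed = {
    sim_word
    for w in low_ig if w in w2v.wv
    for sim_word, score in w2v.wv.most_similar(w, topn=2) if score > 0.2
}
print("removed:", sorted(low_ig | also_removed))
```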