• Title/Summary/Keyword: Knowledge extraction

Search Results: 383

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing volume of content is becoming ever more important. In this flood of information, search systems increasingly try to reflect the user's intention in the results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and to have potential, because new information is generated constantly and the fresher the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and extracting good-quality triples is difficult. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because the concept of knowledge itself is ambiguous. To overcome these limits and to improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. The study therefore has three significances: first, it presents a practical and simple automatic knowledge extraction method that can be applied directly; second, it shows that performance evaluation is possible through a simple problem definition; finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on the 30 individual stocks with the highest publication frequency from May 30, 2017 to May 21, 2018 are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its prediction power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model achieves a 69.3% hit ratio on the testing set of 2,526 reports, which is meaningfully high given the constraints under which the research was conducted. Looking at the per-stock prediction performance, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference from similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for information matching the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
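
To make the scoring step concrete, the sketch below implements per-stock neural tensor network score functions in PyTorch, following the standard NTN form (a bilinear tensor term plus a linear term passed through tanh). The slice count, dimensions, and the single-entity scoring interface are illustrative assumptions; the abstract only specifies that one score function per stock is trained over one-hot vectors of each stock's top 100 entities.

```python
# Minimal sketch of per-stock NTN score functions; hyperparameters and the
# omitted training loop are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """Scores how strongly a one-hot entity vector relates to one stock."""
    def __init__(self, dim: int, slices: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(slices, dim, dim) * 0.01)  # bilinear tensor
        self.V = nn.Linear(dim, slices)                              # linear term
        self.u = nn.Linear(slices, 1, bias=False)                    # output weights

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # Bilinear term e^T W[k] e for each slice k, combined with V e.
        bilinear = torch.einsum('bi,kij,bj->bk', e, self.W, e)
        return self.u(torch.tanh(bilinear + self.V(e))).squeeze(-1)

# One score function per stock, over 100-dimensional one-hot entity vectors.
dim, n_stocks = 100, 30
scorers = [NTNScore(dim) for _ in range(n_stocks)]

entity = torch.zeros(1, dim)
entity[0, 17] = 1.0                               # an arbitrary one-hot entity
scores = torch.cat([s(entity) for s in scorers])  # score against every stock
predicted_stock = int(scores.argmax())            # highest score wins
```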

MRSPAKE : A Web-Scale Spatial Knowledge Extractor Using Hadoop MapReduce (MRSPAKE : Hadoop MapReduce를 이용한 웹 규모의 공간 지식 추출기)

  • Lee, Seok-Jun; Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering, v.5 no.11, pp.569-584, 2016
  • In this paper, we present a spatial knowledge extractor implemented in the Hadoop MapReduce parallel, distributed computing environment. From a large spatial dataset, this knowledge extractor automatically derives a qualitative spatial knowledge base consisting of both topological and directional relations over pairs of spatial objects. By using an R-tree index and range queries over a distributed spatial data file on HDFS, the MapReduce-enabled spatial knowledge extractor, MRSPAKE, can produce a web-scale spatial knowledge base in a highly efficient way. In experiments with the well-known open spatial dataset OpenStreetMap (OSM), the proposed web-scale spatial knowledge extractor, MRSPAKE, showed high performance and scalability.
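
The pipeline can be pictured as a small map/reduce decomposition: mappers run range queries to propose nearby object pairs, and reducers derive one topological and one directional relation per pair. The single-machine sketch below (using Shapely for geometry) only illustrates that decomposition; the toy objects, the fixed range radius, and the relation vocabularies are assumptions, not MRSPAKE's actual HDFS/R-tree implementation.

```python
# Single-machine sketch of the map/reduce split; data and names are invented.
from shapely.geometry import Point

objects = {  # toy stand-in for a distributed spatial file on HDFS
    "park":  Point(0, 0).buffer(2.0),
    "lake":  Point(0.5, 0).buffer(0.8),
    "plaza": Point(2, 2).buffer(1.0),
}

def mapper(oid, geom):
    # Range query: emit candidate pairs of objects within a fixed radius.
    for other_id, other in objects.items():
        if other_id > oid and geom.distance(other) < 3.0:
            yield (oid, other_id), (geom, other)

def reducer(pair, geoms):
    a, b = geoms
    # Qualitative topological relation (a small subset).
    if a.covers(b):      topo = "contains"
    elif b.covers(a):    topo = "within"
    elif a.overlaps(b):  topo = "overlaps"
    elif a.touches(b):   topo = "touches"
    else:                topo = "disjoint"
    # Directional relation of b relative to a, from centroid bearing.
    dx, dy = b.centroid.x - a.centroid.x, b.centroid.y - a.centroid.y
    direction = (("east" if dx > 0 else "west") if abs(dx) >= abs(dy)
                 else ("north" if dy > 0 else "south"))
    return pair, topo, direction

for oid, geom in objects.items():
    for pair, geoms in mapper(oid, geom):
        print(reducer(pair, geoms))
```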

Development of a Decision Support System Shell for Problem Structuring (문제구조화를 위한 의사결정지원시스템 쉘의 개발)

  • 이재식; 박동진
    • Journal of the Korean Operations Research and Management Science Society, v.19 no.3, pp.15-40, 1994
  • We designed a knowledge-based decision support system for structuring semi-structured or unstructured problems. Problem structuring involves extracting the relevant factors from the identified problem and constructing a model that represents the relationships among those factors. In this research, we employed a directed graph called an Influence Diagram as the tool for problem structuring. In particular, our proposed system is designed as a shell, so a decision maker can change the content of the knowledge base to suit his or her domain of interest.
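
As a rough picture of what such a shell might store, the sketch below keeps an influence diagram as a typed directed graph whose nodes and arcs the decision maker can replace for a new domain. The node kinds and the advertising example are invented for illustration; the paper's actual knowledge-base format is not specified here.

```python
# A hypothetical shell data structure for problem structuring.
from dataclasses import dataclass, field

@dataclass
class InfluenceDiagram:
    nodes: dict = field(default_factory=dict)  # name -> "decision"|"chance"|"value"
    arcs: set = field(default_factory=set)     # (from_node, to_node)

    def add_node(self, name, kind): self.nodes[name] = kind
    def add_arc(self, src, dst):    self.arcs.add((src, dst))
    def influences(self, name):     # factors directly affected by `name`
        return [dst for src, dst in self.arcs if src == name]

# The "shell" idea: the same structure, refilled per domain of interest.
d = InfluenceDiagram()
d.add_node("ad_budget", "decision")
d.add_node("demand", "chance")
d.add_node("profit", "value")
d.add_arc("ad_budget", "demand")
d.add_arc("demand", "profit")
print(d.influences("ad_budget"))  # ['demand']
```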

A Generalized Method for Extracting Characters and Video Captions (일반화된 문자 및 비디오 자막 영역 추출 방법)

  • Chun, Byung-Tae; Bae, Young-Lae; Kim, Tai-Yun
    • Journal of KIISE: Software and Applications, v.27 no.6, pp.632-641, 2000
  • Conventional character extraction methods extract character regions from the whole image using techniques such as color reduction, region split-and-merge, and texture analysis. Because these methods use many heuristic variables and thresholding values derived from a priori knowledge, it is difficult to generalize them algorithmically. In this paper, we propose a method that extracts character regions using a topographical feature extraction method and a point-line-region extension method. The proposed method also alleviates the problems of conventional methods by reducing the number of heuristic variables and generalizing the thresholding values. Character regions can thus be extracted with generalized variables and thresholding values, without a priori knowledge of the character regions. Experimental results show a candidate region extraction rate of 100% and a character region extraction rate of over 98%.
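
The paper's topographical feature extraction and point-line-region extension are specialized, so the sketch below substitutes a plainly named generic stand-in: 4-connected region growing from a seed pixel over a binary feature mask, which conveys the extend-from-points idea without reproducing the paper's method. The mask is invented.

```python
# Generic region growing as a stand-in for point-line-region extension.
import numpy as np

def grow_region(mask, seed):
    # Collect all feature pixels 4-connected to the seed.
    stack, region = [seed], set()
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
            continue
        if mask[y, x]:
            region.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region

mask = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0]], dtype=bool)
print(len(grow_region(mask, (0, 1))))  # size of one candidate region: 6
```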

The Technology of the Audio Feature Extraction for Classifying Contents (콘텐츠 분류를 위한 오디오 신호 특징 추출 기술)

  • Lim, J.D.; Han, S.W.; Choi, B.C.; Chung, B.H.
    • Electronics and Telecommunications Trends, v.24 no.6, pp.121-132, 2009
  • Audio signals, including speech, music, and other sounds, are a key media type in multimedia content. The rapid growth in data volume driven by advances in recording media and networks has made manual management difficult, so technology that automatically classifies audio signals is recognized as very important. Techniques for extracting audio features to classify diverse audio signals have advanced through extensive research. This paper analyzes audio feature extraction methods that perform well in automatic audio content classification, and reviews an audio classification method that uses the SVM, a classifier with stable performance.
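
A typical instance of the pipeline the survey covers, extracting features from audio and classifying them with an SVM, might look like the sketch below. MFCCs stand in for the surveyed features, and the two synthetic classes (tones vs. noise) are assumptions made only to keep the example runnable.

```python
# Hedged sketch: MFCC features + SVM classifier on synthetic audio clips.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y, sr=22050):
    # Mean MFCC vector as a fixed-length clip descriptor.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
tones = [np.sin(2 * np.pi * rng.uniform(200, 400) * t) for _ in range(20)]  # "tonal" class
noise = [rng.normal(0, 0.5, sr) for _ in range(20)]                          # "noisy" class

X = np.array([mfcc_features(y, sr) for y in tones + noise])
labels = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X[::2], labels[::2])      # train on half the clips
print("accuracy:", clf.score(X[1::2], labels[1::2]))  # evaluate on the rest
```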

Hiding Sensitive Frequent Itemsets by a Border-Based Approach

  • Sun, Xingzhi; Yu, Philip S.
    • Journal of Computing Science and Engineering, v.1 no.1, pp.74-94, 2007
  • Nowadays, sharing data among organizations is often required during business collaboration. Data mining technology has enabled efficient extraction of knowledge from large databases. This, however, increases the risk of disclosing sensitive knowledge when the database is released to other parties. To address this privacy issue, one may sanitize the original database so that the sensitive knowledge is hidden. The challenge is to minimize the side effect on the quality of the sanitized database so that non-sensitive knowledge can still be mined. In this paper, we study this problem in the context of hiding sensitive frequent itemsets by judiciously modifying the transactions in the database. Unlike previous work, we consider the quality of the sanitized database, especially the preservation of non-sensitive frequent itemsets. To preserve them, we propose a border-based approach that efficiently evaluates the impact of any modification to the database during the hiding process. The quality of the database can be well maintained by greedily selecting the modifications with minimal side effect. Experimental results are also reported to show the effectiveness of the proposed approach.
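
The hiding task can be illustrated with a toy greedy sanitizer: delete single items from transactions that support the sensitive itemset, each time choosing the deletion that knocks the fewest non-sensitive frequent itemsets below the threshold. This mirrors the paper's idea of greedily picking the modification with minimal side effect, but only loosely (it enumerates itemsets naively rather than maintaining a border); the database and threshold are invented.

```python
# Toy greedy sanitization sketch; not the paper's border-based algorithm.
from itertools import combinations

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c", "d"}]
min_sup = 3
sensitive = frozenset({"a", "b"})

def support(itemset):
    return sum(itemset <= t for t in db)

items = set().union(*db)
frequent = {frozenset(c) for n in (1, 2, 3)
            for c in combinations(items, n)
            if support(frozenset(c)) >= min_sup}
non_sensitive = frequent - {sensitive}

def side_effect(edit):
    # Tentatively delete item i from transaction t, count the non-sensitive
    # frequent itemsets that would fall below the threshold, then roll back.
    t, i = edit
    t.discard(i)
    lost = sum(support(s) < min_sup for s in non_sensitive)
    t.add(i)
    return lost

while support(sensitive) >= min_sup:
    candidates = [(t, i) for t in db if sensitive <= t for i in sensitive]
    t, i = min(candidates, key=side_effect)  # least damaging deletion
    t.discard(i)

print("sanitized:", db)
print("non-sensitive itemsets preserved:",
      sum(support(s) >= min_sup for s in non_sensitive), "of", len(non_sensitive))
```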

Company Name Discrimination in Tweets using Topic Signatures Extracted from News Corpus

  • Hong, Beomseok; Kim, Yanggon; Lee, Sang Ho
    • Journal of Computing Science and Engineering, v.10 no.4, pp.128-136, 2016
  • It is impossible for any human being to analyze the more than 500 million tweets that are generated per day. Lexical ambiguities on Twitter make it difficult to retrieve the desired data and relevant topics. Most solutions to the word sense disambiguation problem rely on knowledge base systems. Unfortunately, it is expensive and time-consuming to manually create a knowledge base system, resulting in a knowledge acquisition bottleneck. To address this bottleneck, topic signatures are used to disambiguate words. In this paper, we evaluate the effectiveness of various features of newspapers on topic signature extraction for word sense discrimination in tweets. Based on our results, topic signatures obtained from the snippet feature exhibit higher accuracy in discriminating company names than those from the article body. We conclude that topic signatures extracted from news articles improve the accuracy of word sense discrimination in the automated analysis of tweets.
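
A compact sketch of the discrimination scheme: terms co-occurring with each company in a news corpus form its topic signature, and a tweet is assigned to the company whose signature it overlaps most. The toy corpus, whitespace tokenization, and raw-frequency scoring are deliberate simplifications of the paper's feature comparison.

```python
# Hedged sketch of topic-signature-based company name discrimination.
from collections import Counter

news = {
    "Apple":  ["apple unveils new iphone and ipad at cupertino event",
               "apple shares rise after strong iphone sales"],
    "Amazon": ["amazon expands aws cloud regions and prime delivery",
               "amazon warehouse automation boosts aws earnings"],
}

def topic_signature(docs, top_k=8):
    # Signature = the most frequent co-occurring terms for that company.
    counts = Counter(w for d in docs for w in d.split())
    return {w for w, _ in counts.most_common(top_k)}

signatures = {name: topic_signature(docs) for name, docs in news.items()}

def discriminate(tweet):
    words = set(tweet.lower().split())
    return max(signatures, key=lambda n: len(signatures[n] & words))

print(discriminate("big iphone launch today"))        # Apple
print(discriminate("my aws bill from amazon prime"))  # Amazon
```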

A Study on Information Extraction Using Event Template (이벤트 템플릿을 이용한 정보 추출에 관한 연구)

  • Lim, Soo-Jong; Chung, Eui-Sok; Hwang, Yi-Gyu; Yoon, Bo-Hyun
    • Proceedings of the Korea Information Processing Society Conference, 2002.04a, pp.585-588, 2002
  • In this paper, we propose a method that uses an event template structure to extract information from general documents in which named entities have already been recognized. Unlike existing information extraction methods, which rely mainly on restricted domain knowledge, an event template with a predicate-argument structure extracts information using mainly general knowledge. To extract event templates, the subcategorization information of predicates obtained from morphological analysis is used, and event templates are merged when necessary based on their argument structures. The general event templates generated from a document are then combined with domain knowledge matching the information consumer's needs to produce the final result. Experimental results show that information extraction using event templates has lower precision than systems using restricted domain information, but improves the portability that has been a weakness of existing information extraction systems.
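
The sketch below illustrates the predicate-argument idea on a minimal scale: templates recognized for the same predicate are merged when their argument roles do not conflict. The structures, role names, and example entities are assumptions, not the paper's actual representation.

```python
# Hypothetical predicate-argument event template with structure-based merging.
from dataclasses import dataclass, field

@dataclass
class EventTemplate:
    predicate: str
    args: dict = field(default_factory=dict)  # role -> entity

    def merge(self, other):
        # Merge another template over the same predicate if no role conflicts,
        # a crude stand-in for argument-structure-based template merging.
        if other.predicate == self.predicate and \
           all(self.args.get(r, v) == v for r, v in other.args.items()):
            self.args.update(other.args)
            return True
        return False

# Entities from an NER pass fill roles that the predicate subcategorizes for.
t1 = EventTemplate("acquire", {"agent": "Samsung"})
t2 = EventTemplate("acquire", {"theme": "Harman"})
t1.merge(t2)
print(t1)  # args now hold both the agent and the theme
```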

A Better Prediction for Higher Education Performance using the Decision Tree

  • Hilal, Anwar; Zamani, Abu Sarwar; Ahmad, Sultan; Rizwanullah, Mohammad
    • International Journal of Computer Science & Network Security, v.21 no.4, pp.209-213, 2021
  • Data mining is the application of specific algorithms for extracting patterns from data, and KDD is the automated or convenient extraction of patterns representing knowledge implicitly stored or captured in large databases, data warehouses, the Web, other massive information repositories, or data streams. Data mining can be used for decision making in educational systems, yet educational institutions rarely apply any knowledge discovery process to their data, even though the extracted knowledge could be used to raise the quality of education. To make the educational management system more flexible and to discover knowledge from its huge volume of data, we use data mining techniques.
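
As a minimal illustration of applying a decision tree to educational data, the sketch below fits scikit-learn's DecisionTreeClassifier to invented student records; the features, values, and tree depth are assumptions only, not the paper's dataset.

```python
# Hedged sketch: a decision tree predicting pass/fail from invented features.
from sklearn.tree import DecisionTreeClassifier, export_text

# [attendance %, assignment score, prior GPA] -> pass (1) / fail (0)
X = [[95, 88, 3.6], [60, 45, 2.1], [80, 70, 3.0], [55, 50, 2.4],
     [90, 92, 3.8], [65, 58, 2.6], [85, 75, 3.2], [50, 40, 2.0]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["attendance", "assignments", "gpa"]))
print(tree.predict([[75, 65, 2.9]]))  # prediction for a new student
```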

DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal, v.44 no.3, pp.438-449, 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires a small amount of learning data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that using NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best results in terms of recognition accuracy for extracting SPO tuples from NL sentences even if it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the learning data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
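
To situate the model family, the sketch below wires up a multilayered bidirectional LSTM with a self-attention layer that emits a tag (subject, predicate, object, or other) per token. The sizes, tag set, and random input are assumptions, and the paper's dependency-grammar features are omitted for brevity; this is a sketch of the architecture class, not the DG-M-Bi-LSTM model itself.

```python
# Hedged sketch of a self-attention multilayered Bi-LSTM sequence tagger.
import torch
import torch.nn as nn

class AttnBiLSTMTagger(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=64, tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        self.out = nn.Linear(2 * hidden, tags)  # S / P / O / other

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))
        a, _ = self.attn(h, h, h)  # self-attention over Bi-LSTM states
        return self.out(a)         # per-token tag logits

model = AttnBiLSTMTagger()
tokens = torch.randint(0, 1000, (1, 7))  # one 7-token sentence
print(model(tokens).argmax(-1))          # predicted tag per token
```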