• Title/Abstract/Keyword: Text processing

Search results: 1,197 items (processing time: 0.074 seconds)

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • Volume 25, Issue 2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing mass of content is becoming ever more important. Amid this flood of information, search systems increasingly try to reflect the user's intention in the results rather than treating the information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is generated constantly and the fresher the information is, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract high-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of the knowledge grow and its patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because the concept of knowledge itself is ambiguous. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to address the problems of earlier research and to enhance the model's effectiveness. The study therefore has three contributions. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis, without complex morphological analysis. An empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on the 30 individual stocks most frequently covered between May 30, 2017 and May 21, 2018 are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set are grouped by stock and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. A neural tensor network is then used to train one score function per stock. Thus, when a new entity from the test set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the model, we check its predictive power, and whether the score functions are well constructed, by computing the hit ratio over all reports in the test set. In the empirical study, the presented model achieves a 69.3% hit ratio on a test set of 2,526 reports, which is meaningfully high given the constraints of the research. Looking at the prediction performance per stock, only three stocks, LG Electronics, KiaMtr, and Mando, perform far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for information relevant to the user's investment intention. Graph data is generated using only the named entity recognition tool and fed into the neural tensor network, without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain to be addressed; most notably, the markedly poor performance on a few stocks calls for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
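
To make the scoring step concrete, here is a minimal sketch of the bilinear score function a neural tensor network computes, assuming one-hot entity vectors as in the paper; the array sizes, random parameters, and three toy "stocks" are illustrative, not the authors' trained model.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural tensor network score g(e1, R, e2) for one relation R.

    e1, e2 : (d,) entity vectors (here: one-hot encodings)
    W      : (k, d, d) bilinear tensor slices
    V      : (k, 2d) standard layer weights
    b      : (k,) bias
    u      : (k,) output weights
    """
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # e1^T W[slice] e2, per slice
    standard = V @ np.concatenate([e1, e2])         # V [e1; e2]
    return u @ np.tanh(bilinear + standard + b)     # u^T f(...)

# Toy usage: one (untrained) score function per stock; predict the stock
# whose function assigns the highest score to a new entity pair.
rng = np.random.default_rng(0)
d, k = 100, 4                                       # top-100 one-hot entities, 4 slices
params = [(rng.normal(size=(k, d, d)) * 0.01,
           rng.normal(size=(k, 2 * d)) * 0.01,
           np.zeros(k), rng.normal(size=k)) for _ in range(3)]  # 3 "stocks"
e_new, e_anchor = np.eye(d)[7], np.eye(d)[42]
scores = [ntn_score(e_new, e_anchor, *p) for p in params]
print("predicted stock:", int(np.argmax(scores)))
```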

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • Volume 25, Issue 1
    • /
    • pp.43-61
    • /
    • 2019
  • With the Fourth Industrial Revolution, artificial intelligence technologies have been developing rapidly, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and work on a variety of algorithms, the field has advanced further than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is now often combined with statistical artificial intelligence such as machine learning. More recently, knowledge bases have been used to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing in many AI applications, such as the question-answering systems of smart speakers. Building a useful knowledge base, however, is time-consuming and still requires a great deal of expert effort. Much recent research on knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. This knowledge is produced through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method for extracting knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of the method, we describe a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. The model consists of three steps: classifying a document into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document into infobox categories, which correspond to ontology classes. After determining the document's class, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations. We also ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction step. Through the proposed process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema, and the methodology can significantly reduce the expert effort needed to construct instances that follow the schema.
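
As a rough illustration of the sentence-labeling step, the sketch below BIO-tags one toy sentence and fits a CRF using the sklearn-crfsuite package as a stand-in for the paper's CRF baseline; the features, the `birthDate` attribute, and the sentence are invented.

```python
# Minimal BIO-tagged sequence labeling with a CRF (stand-in for the
# paper's CRF / Bi-LSTM-CRF comparison).
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {
        "word": w, "is_digit": w.isdigit(),
        "prev": sent[i - 1] if i > 0 else "<BOS>",
        "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
    }

# One toy sentence: value tokens for a "birthDate" attribute get B-/I- tags.
sents = [["Kim", "was", "born", "on", "3", "March", "1990", "."]]
tags  = [["O", "O", "O", "O", "B-birthDate", "I-birthDate", "I-birthDate", "O"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X)[0])  # tags recovered for the training sentence
```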

Alleviating Semantic Term Mismatches in Korean Information Retrieval (한국어 정보 검색에서 의미적 용어 불일치 완화 방안)

  • Yun, Bo-Hyun;Park, Sung-Jin;Kang, Hyun-Kyu
    • The Transactions of the Korea Information Processing Society
    • /
    • Volume 7, Issue 12
    • /
    • pp.3874-3884
    • /
    • 2000
  • An information retrieval system has to retrieve all and only the documents relevant to a user query, even when index terms and query terms do not match exactly. Term mismatches between index terms and query terms have been a serious obstacle to improving retrieval performance. In this paper, we discuss automatic term normalization between words in text corpora and its application to a Korean information retrieval system. We perform two types of term normalization to alleviate semantic term mismatches: equivalence classes and co-occurrence clusters. First, transliterations, spelling errors, and synonyms are normalized into equivalence classes using contextual similarity. Second, context-based terms are normalized using a combination of mutual information and word context to establish word similarities; unsupervised clustering with the K-means algorithm then identifies co-occurrence clusters. These normalized term products are used in query expansion to alleviate semantic term mismatches. In other words, we use the two kinds of term normalization, equivalence classes and co-occurrence clusters, to expand the user's queries with new terms, making the queries more comprehensive (adding transliterations) or more specific (adding specializations). For query expansion, we employ two complementary methods: term suggestion and term relevance feedback. The experimental results show that the proposed system can alleviate semantic term mismatches and provide appropriate similarity measurements, thereby improving the retrieval effectiveness of the information retrieval system.

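A back-of-the-envelope sketch of the co-occurrence-cluster side of this approach: score word pairs with (positive) pointwise mutual information, represent each word by its PMI row, cluster with K-means, and treat a query term's cluster mates as expansion candidates. The four-document corpus and cluster count are toy values, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

docs = [["stock", "price", "rise"], ["stock", "market", "fall"],
        ["price", "market", "rise"], ["fall", "rise", "price"]]
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a document window.
C = np.zeros((len(vocab), len(vocab)))
for d in docs:
    for a in d:
        for b in d:
            if a != b:
                C[idx[a], idx[b]] += 1

# PMI(a, b) = log P(a, b) / (P(a) P(b)), floored at 0 (positive PMI).
total = C.sum()
p_ab = C / total
p_a = C.sum(axis=1) / total
pmi = np.log(np.maximum(p_ab, 1e-12) / np.outer(p_a, p_a))
ppmi = np.maximum(pmi, 0.0)

# Cluster words by their PPMI rows; expand the query with cluster mates.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ppmi)
query = "stock"
expansion = [w for w in vocab if labels[idx[w]] == labels[idx[query]] and w != query]
print("expanded query:", [query] + expansion)
```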

XML Document Editing System for the Structural Processing of Digital Documents Including Mathematical Formulas (수식을 포함한 전자문헌의 구조적 처리를 위한 XML 문서편집시스템)

  • 윤화묵;유범종;김창수;정회경
    • Journal of the Korean Society for Information Management
    • /
    • Volume 19, Issue 4
    • /
    • pp.96-111
    • /
    • 2002
  • Institutions and organizations hold large amounts of accumulated data, but because most of it remains in formats specific to each institution or organization, exchanging and sharing information is difficult. The concept of knowledge information resource management was introduced to overcome this disadvantage, and knowledge information resources are being digitized so that accumulated data can be shared and managed. In science, technology, and education in particular, XML is being adopted to process the data needed for the structured exchange and sharing of knowledge information resources. However, because the many mathematical expressions inside electronic documents in these fields are handled as unstructured data such as images or plain text, search, indexing, and reusability are limited. Interest has therefore converged on processing mathematics with MathML, and a solution is needed that can handle MathML easily and efficiently within structured documents. In this paper, we design and implement an XML document editing system that simplifies the structural processing of electronic documents for knowledge information resources and lets users create and render MathML in structured documents without expert knowledge of MathML.
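
For readers unfamiliar with MathML, the sketch below shows the kind of presentation markup such an editor would embed in an XML document (here, the quadratic formula), checked for well-formedness with Python's standard library; the fragment is an invented example, not output of the described system.

```python
# An invented MathML fragment; xml.etree only checks well-formedness,
# it does not validate against the MathML schema.
import xml.etree.ElementTree as ET

mathml = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
"""

root = ET.fromstring(mathml)
print("well-formed MathML, root:", root.tag)
```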

Multi-stage News Classification System for Predicting Stock Price Changes (주식 가격 변동 예측을 위한 다단계 뉴스 분류시스템)

  • Paik, Woo-Jin;Kyung, Myoung-Hyoun;Min, Kyung-Soo;Oh, Hye-Ran;Lim, Cha-Mi;Shin, Moon-Sun
    • Journal of the Korean Society for Information Management
    • /
    • Volume 24, Issue 2
    • /
    • pp.123-141
    • /
    • 2007
  • It is well known that predicting stock prices is very difficult due to the large number of known and unknown factors, and their interactions, that can influence them. However, we started with a simple assumption: good news about a particular company is likely to push its stock price up, and vice versa. We verified this assumption by manually analyzing how stock prices changed after the relevant news stories were released. This means that we can predict a stock price change to a certain degree if there is a reliable method for classifying news stories as favorable or unfavorable toward the company mentioned in them. To classify a large number of news stories consistently and rapidly, we developed and evaluated a natural language processing based multi-stage news classification system that categorizes news stories as either good or bad. The evaluation result was promising, as the automatic classification led to better-than-chance prediction of the stock price change.
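
As a minimal stand-in for the final good/bad decision, the sketch below trains a bag-of-words classifier on a few invented headlines; the paper's actual system is a multi-stage NLP pipeline, so this only illustrates the binary classification it ends with.

```python
# Favorable/unfavorable news classification with a simple
# bag-of-words model; training snippets are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["record quarterly profit beats forecast",
               "wins major supply contract",
               "recalls products after safety defect",
               "posts steep loss amid falling sales"]
train_labels = ["good", "good", "bad", "bad"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["profit rises on strong sales"]))  # -> ['good']
```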

Design for Database Retrieval System using Virtual Database in Intranet (인트라넷에서 가상데이터베이스를이용한 데이터베이스 검색 시스템의 설계)

  • Lee, Dong-Wook;Park, Young-Bae
    • The Transactions of the Korea Information Processing Society
    • /
    • Volume 5, Issue 6
    • /
    • pp.1404-1417
    • /
    • 1998
  • Currently, two different methods exist for database retrieval on the internet: using a search engine, or using plug-in or ActiveX technology. A search engine performs retrieval with indices built from keywords of simple text data, so when it is used to access a database, first, it is not possible to access more than one database at a time; second, it cannot support the various conditional retrievals of a query language; and third, the result set may include many unwanted records, in other words, the precision rate may be relatively low. Plug-in or ActiveX technology uses the Web browser to execute clients' queries for database retrieval. The problems with this approach are that it cannot activate more than one DBMS simultaneously, even when they share the same data model, and that it cannot execute user queries other than those previously defined in the client program. In this paper, to resolve the aforementioned problems, we design and implement a database retrieval system using a virtual database, which makes it possible to provide a direct query interface through a conventional Web browser. We assume that the virtual database is designed and aggregated from more than one relational database using the same data model.

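The sketch below is a toy version of the virtual-database idea: one query interface spanning several relational databases of the same data model. SQLite's ATTACH stands in for the paper's mediator layer; the file names, schema, and rows are invented.

```python
import sqlite3

# Create two independent member databases with the same schema.
for name, rows in [("site_a.db", [("Kim", 30)]), ("site_b.db", [("Lee", 19)])]:
    con = sqlite3.connect(name)
    con.execute("DROP TABLE IF EXISTS person")
    con.execute("CREATE TABLE person(name TEXT, age INT)")
    con.executemany("INSERT INTO person VALUES (?, ?)", rows)
    con.commit()
    con.close()

# The "virtual" view: one conditional query spans every member database,
# which a plain keyword-index search engine cannot express.
con = sqlite3.connect(":memory:")
con.execute("ATTACH 'site_a.db' AS a")
con.execute("ATTACH 'site_b.db' AS b")
query = ("SELECT name, age FROM "
         "(SELECT * FROM a.person UNION ALL SELECT * FROM b.person) "
         "WHERE age > 20")
for row in con.execute(query):
    print(row)  # -> ('Kim', 30)
```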

A Design and Implementation of a Content-Based Image Retrieval System using Color Space and Keywords (칼라공간과 키워드를 이용한 내용기반 화상검색 시스템 설계 및 구현)

  • Kim, Cheol-Ueon;Choi, Ki-Ho
    • The Transactions of the Korea Information Processing Society
    • /
    • Volume 4, Issue 6
    • /
    • pp.1418-1432
    • /
    • 1997
  • Most content-based image retrieval techniques use color and texture as retrieval indices. Among color techniques, retrieval based on color histograms and color pairs suffers from a lack of spatial and text information. This paper describes the design and implementation of a content-based image retrieval system using color space and keywords. The preprocessor for image retrieval uses the coordinate system of the existing HSI (Hue, Saturation, Intensity) model and splits each image into a chromatic region and an achromatic region. The image size is normalized to 200*N or N*200 and true color is converted to 256 colors. Two color histograms, for background and object, are used to decide on color selection in the color space, and spatial information is obtained using maximum entropy discretization. The class, color, shape, location, and size of an image can be specified by keyword; input colors are limited to 15 chromatic and achromatic color keywords of the Korea Industrial Standards. Retrieval uses these properties as keys in the similarity computation. The weight values for color space, $\alpha(\%)$, and keyword, $\beta(\%)$, can be chosen by the user when entering the query, with the values adjusted according to the properties of the image contents. In tests against query images, retrieval using a single extracted feature (color space or keyword) scored lower than retrieval with weighted values; with weighted values, the measured parameters averaged approximately Precision = 0.858, Recall = 0.936, RT = 1, and MT = 0. These results demonstrate higher retrieval effectiveness than content-based image retrieval using color space or keywords alone.

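The weighted combination can be sketched as follows: the user-chosen weights $\alpha$ (color space) and $\beta$ (keyword) mix a histogram similarity with a keyword-overlap score. The random images, the Jaccard keyword measure, and the weights are illustrative; the paper's HSI chromatic/achromatic split is omitted.

```python
import numpy as np

def hist_similarity(img_a, img_b, bins=16):
    """Histogram-intersection similarity of two hue channels in [0, 1]."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 1))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 1))
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return np.minimum(ha, hb).sum()

def keyword_similarity(kw_a, kw_b):
    """Jaccard overlap of the keyword sets (class, color, shape, ...)."""
    return len(kw_a & kw_b) / len(kw_a | kw_b)

def combined_score(img_q, img_d, kw_q, kw_d, alpha=0.6, beta=0.4):
    # alpha + beta = 1.0, mirroring percentage weights chosen at query time.
    return alpha * hist_similarity(img_q, img_d) + beta * keyword_similarity(kw_q, kw_d)

rng = np.random.default_rng(1)
query_img, db_img = rng.random((200, 200)), rng.random((200, 200))
print(combined_score(query_img, db_img, {"red", "circle"}, {"red", "square"}))
```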

An Identification Framework for Source Code Authors using Authorship Analysis and CNN (작성자 분석과 CNN을 적용한 소스 코드 작성자 식별 프레임워크)

  • Shin, Gun-Yoon;Kim, Dong-Wook;Hong, Sung-sam;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • Volume 19, Issue 5
    • /
    • pp.33-41
    • /
    • 2018
  • As Internet technology has developed, a wide variety of programs are being created, and therefore a large amount of code is being written by many authors. Some authors pass off programs or code written by others as their own, use other writers' code indiscriminately, or fail to indicate exactly which code they used, which makes protecting code more and more difficult. In this paper, we propose an author identification framework that combines authorship analysis theory with natural language processing (NLP) based on a convolutional neural network (CNN). We apply authorship analysis theory to extract author-identifying features from source code, combine them with features used in text mining, and perform author identification with machine learning. In addition, we apply the CNN-based natural language processing method to source code for author classification. To identify an author we need features specific to that author, and the CNN-based NLP method can handle a language with a special structure, such as source code, and identify its author. Identification accuracy based on authorship analysis theory is 95.1%, and identification accuracy with the CNN is 98%.
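
A skeletal token-level CNN of the kind applied here might look as follows in tf.keras; the vocabulary size, sequence length, and layer sizes are guesses rather than the authors' configuration, and the stylometric authorship-analysis features would form a separate input branch in the full framework.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN, AUTHORS = 5000, 400, 10   # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),     # sequences of source-code token ids
    layers.Embedding(VOCAB, 64),          # token ids -> dense vectors
    layers.Conv1D(128, 5, activation="relu"),   # local token patterns
    layers.GlobalMaxPooling1D(),          # strongest response per filter
    layers.Dense(64, activation="relu"),
    layers.Dense(AUTHORS, activation="softmax"),  # one class per author
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(token_ids, author_ids, ...) would train on tokenized code.
```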

Unstructured Data Analysis using Equipment Check Ledger: A Case Study in Telecom Domain (장비점검 일지의 비정형 데이터분석을 통한 고장 대응 효율화 사례 연구)

  • Ju, Yeonjin;Kim, Yoosin;Jeong, Seung Ryul
    • Journal of Internet Computing and Services
    • /
    • Volume 21, Issue 1
    • /
    • pp.127-135
    • /
    • 2020
  • As the importance of collecting and analyzing big data grows, interest is rising in natural language processing techniques for unstructured data such as news articles and comments, and data mining techniques capable of pre-processing and analyzing such data are emerging. In this case study with a telecom company, we propose a methodology for formalizing unstructured data using text mining. The domain is equipment failure, and the data comprise about 2.2 million equipment check ledger entries, with data on roughly 800,000 equipment failures accumulating in the ledger each year. The ledger contains both formal and unstructured data. While formal data can be used for analysis directly, unstructured data cannot; yet the unstructured portions are likely to contain important information that was never written into a formal field. In this study, we therefore develop a digital transformation method for the unstructured data in the equipment check ledger.
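
One simple way to formalize such free-text entries, sketched below, is a keyword dictionary plus patterns that pull structured fields out of each ledger line; the equipment-id pattern, fault dictionary, and sample entry are invented, not the operator's real format.

```python
import re

# Hypothetical fault dictionary mapping fault types to trigger phrases.
FAULT_TERMS = {"power": ["power down", "no power"],
               "link":  ["link loss", "signal lost"]}

def formalize(entry: str) -> dict:
    """Turn one free-text ledger line into a structured record."""
    record = {"equipment": None, "fault_type": "unknown", "raw": entry}
    m = re.search(r"\b(RTU|OLT|ONU)-\d+\b", entry)   # equipment-id pattern
    if m:
        record["equipment"] = m.group(0)
    low = entry.lower()
    for fault, phrases in FAULT_TERMS.items():
        if any(p in low for p in phrases):
            record["fault_type"] = fault
    return record

print(formalize("OLT-102 reported link loss after storm, rebooted on site"))
# -> {'equipment': 'OLT-102', 'fault_type': 'link', 'raw': ...}
```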

Implementation of Interactive Media Content Production Framework based on Gesture Recognition (제스처 인식 기반의 인터랙티브 미디어 콘텐츠 제작 프레임워크 구현)

  • Koh, You-jin;Kim, Tae-Won;Kim, Yong-Goo;Choi, Yoo-Joo
    • Journal of Broadcast Engineering
    • /
    • Volume 25, Issue 4
    • /
    • pp.545-559
    • /
    • 2020
  • In this paper, we propose a content creation framework that enables users without programming experience to easily create interactive media content that responds to user gestures. In the proposed framework, users assign numbers to the gestures they use and to the media effects that respond to them, and link the two in a text-based configuration file. The interactive media content is coupled with a dynamic projection mapping module that tracks the user's location and projects the media effects onto the user. To reduce the processing time and memory burden of gesture recognition, the user's movement is expressed as a gray-scale motion history image. We designed a convolutional neural network model that takes motion history images as input for gesture recognition; the number of network layers and the hyperparameters were determined through experiments recognizing five gestures, and the resulting model was applied to the proposed framework. In the gesture recognition experiment, we obtained a recognition accuracy of 97.96% and a processing speed of 12.04 FPS. In an experiment connecting three media effects, we confirmed that the intended media effect was displayed appropriately, in real time, according to the user's gesture.
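
A compact sketch of the motion history image that serves as the CNN input: newly moving pixels are stamped with the current time and older motion fades out, so newer motion appears brighter. The synthetic frames, threshold, and duration below are illustrative.

```python
import numpy as np
import cv2

def update_mhi(mhi, prev_gray, gray, timestamp, duration=1.0, thresh=32):
    """Fold one new frame into the motion history image."""
    motion = cv2.absdiff(gray, prev_gray) > thresh   # moving pixels
    mhi[motion] = timestamp                          # stamp fresh motion
    mhi[mhi < timestamp - duration] = 0              # expire old motion
    return mhi

# Toy loop over synthetic frames; with a camera you would use
# cv2.VideoCapture(0).read() and cv2.cvtColor(..., cv2.COLOR_BGR2GRAY).
h, w, fps = 120, 160, 12.0
mhi = np.zeros((h, w), np.float32)
prev = np.zeros((h, w), np.uint8)
for t in range(24):
    frame = np.zeros((h, w), np.uint8)
    frame[:, 5 * (t % 20): 5 * (t % 20) + 20] = 255  # a moving bar
    mhi = update_mhi(mhi, prev, frame, t / fps)
    prev = frame

# Normalize to a 0..255 gray image, the form fed to the CNN.
mhi_img = cv2.normalize(mhi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
print(mhi_img.shape, mhi_img.max())
```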