• Title/Summary/Keyword: Analysis of Query

457 search results

A Cyclic Sliced Partitioning Method for Packing High-dimensional Data (고차원 데이타 패킹을 위한 주기적 편중 분할 방법)

  • 김태완;이기준
    • Journal of KIISE:Databases
    • /
    • v.31 no.2
    • /
    • pp.122-131
    • /
    • 2004
  • Traditional indexing methods were designed for low-dimensional data under dynamic environments, but recent database applications require efficient processing of huge volumes of high-dimensional data under static environments. Many existing indexing strategies, especially partitioning-based ones, are therefore poorly adapted to these new environments. In this study, we point out these facts and propose a new partitioning strategy that complies with the requirements of new applications and is derived from analysis. As a preliminary step, we apply a packing technique on the one hand and exploit observations on the Minkowski-sum cost model on the other, under a uniform data distribution. These observations predict that an unbalanced partitioning strategy may be more query-efficient than a balanced one for high-dimensional data. We therefore propose our method, called CSP (Cyclic Sliced Partitioning). Analysis of this method explicitly suggests metrics for how to partition high-dimensional data. Through the cost model, simulations, and experiments, we show that our method clearly outperforms the balanced strategy, and experimental studies on other indices and packing methods confirm its superiority.
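The Minkowski-sum observation in this abstract can be made concrete with a small sketch (our own illustration, not the paper's code): under uniform data in the unit cube, a range query of side q overlaps a slab of extent e with expected probability min(e + q, 1), so the expected page count is a per-dimension product. The particular balanced/unbalanced splits below are illustrative assumptions.

```python
from functools import reduce

def minkowski_cost(page_extents, query_side):
    """Expected number of disk pages touched by a cube-shaped range query
    under the Minkowski-sum cost model (uniform data in the unit cube).
    `page_extents` gives, per dimension, the side length of one partition;
    a dimension split into n slabs of extent e (n = 1/e) contributes
    n * min(e + q, 1) expected page overlaps."""
    def pages_along(extent):
        n = round(1.0 / extent)
        return n * min(extent + query_side, 1.0)
    return reduce(lambda acc, e: acc * pages_along(e), page_extents, 1.0)

# Illustrative contrast: 16 pages in 8 dimensions, query side 0.4.
balanced = [0.5] * 4 + [1.0] * 4     # split 4 dimensions in half
unbalanced = [1.0 / 16] + [1.0] * 7  # concentrate all splits in 1 dimension
print(minkowski_cost(balanced, 0.4))    # ~10.50 expected page accesses
print(minkowski_cost(unbalanced, 0.4))  # ~7.40: fewer, as the paper predicts
```

In high dimensions the balanced split pays the (e + q) penalty in many dimensions at once, which is why the skewed split can win.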

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.161-177
    • /
    • 2019
  • In this paper, we study how sentence dependency parsing results can improve answer extraction in a question-answering system. A question-answering (QA) system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents; various studies have been conducted on both. To improve answer extraction, the grammatical information of sentences must be accurately reflected. Because Korean has a free word order and frequent omission of sentence components, dependency parsing is well suited to analyzing Korean syntax. Therefore, in this study, we improved answer extraction performance by adding features generated from dependency parsing to the inputs of the answer extraction model (bidirectional LSTM-CRF). Since dependency parsing is performed on the Eojeol, the basic sentence component separated by spaces, the tag information of each Eojeol is obtained as a result of parsing; the Eojeol tag feature encodes this tag information. We compared the performance of the answer extraction model given only basic word features generated without dependency parsing against the model additionally given the Eojeol tag feature and the dependency graph embedding feature. The process of generating the dependency graph embedding consists of generating the dependency graph from the dependency parsing result and learning an embedding of that graph.
From the dependency parsing result, a graph is generated by mapping each Eojeol to a node, each dependency between Eojeol to an edge, and each Eojeol tag to a node label. Depending on whether the direction of the dependency relation is considered, either an undirected or a directed graph is generated. To obtain the embedding of the graph, we used Graph2Vec, which finds a graph's embedding from the subgraphs that constitute it. The maximum path length between nodes can be specified when finding subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeol, and larger values include progressively more indirect dependencies. In the experiments, the maximum path length between nodes was varied from 1 to 3, with and without dependency direction, and answer extraction performance was measured. Experimental results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding was generated with a maximum path length of 1 in Graph2Vec's subgraph extraction. From these experiments, we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words. The significance of this study is as follows. First, we improved answer extraction performance by adding features based on dependency parsing results, taking into account the characteristics of Korean: free word order and frequent omission of sentence components.
Second, we generated features from the dependency parsing result with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeol. Future research directions are as follows. In this study, the features generated from dependency parsing were applied only to the answer extraction model. If, in the future, performance gains are confirmed when applying these features to other natural language processing models such as sentiment analysis or named entity recognition, their validity can be verified more thoroughly.
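The graph construction and the maximum-path-length-1 setting described above can be sketched as follows (a minimal illustration on a toy parse; Graph2Vec itself is approximated here by its depth-1 rooted-subgraph labelling step, and the tag names are hypothetical):

```python
def build_dependency_graph(parse, directed=True):
    """Build an adjacency map from a dependency parse given as
    (dependent_index, head_index) Eojeol pairs: each Eojeol becomes a
    node, each dependency an edge (both directions when undirected)."""
    adj = {}
    for dep, head in parse:
        adj.setdefault(dep, set()).add(head)
        adj.setdefault(head, set())
        if not directed:
            adj[head].add(dep)
    return adj

def depth1_subgraph_features(adj, tags):
    """One rooted-subgraph feature per node, analogous to Graph2Vec with
    maximum path length 1: the node's Eojeol tag plus the sorted tags of
    its direct neighbours. Only direct dependencies are visible."""
    return {n: (tags[n], tuple(sorted(tags[m] for m in nbrs)))
            for n, nbrs in adj.items()}

# Toy sentence of three Eojeol: subject and object both depend on the verb.
parse = [(0, 2), (1, 2)]
tags = {0: 'NP_SBJ', 1: 'NP_OBJ', 2: 'VP'}
feats = depth1_subgraph_features(build_dependency_graph(parse), tags)
print(feats[0])  # ('NP_SBJ', ('VP',)) - a subject attached to a verb
```

With `directed=False` the verb node would also see its two dependents, which is the undirected variant the paper found inferior.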

An Opinionated Document Retrieval System based on Hybrid Method (혼합 방식에 기반한 의견 문서 검색 시스템)

  • Lee, Seung-Wook;Song, Young-In;Rim, Hae-Chang
    • Journal of the Korean Society for Information Management
    • /
    • v.25 no.4
    • /
    • pp.115-129
    • /
    • 2008
  • Recently, with its growth and popularization, the Web has changed from a space for information seeking into a place where people express, share, and debate their opinions. Accordingly, the need to search for opinions expressed on the Web is also increasing. However, it is difficult to meet this need with a classical information retrieval system that considers only the relevance between the user's query and documents; a more advanced system that captures the subjective information in documents is required. The proposed system effectively retrieves opinionated documents by building on an existing information retrieval system. This paper proposes a hybrid method that utilizes both a dictionary-based opinion analysis technique and a machine learning based one. Experimental results show that the proposed method effectively improves retrieval performance.
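The dictionary/machine-learning combination described above can be sketched as a simple interpolation (our illustrative assumption; the paper's actual combination scheme may differ, and `alpha` is a hypothetical mixing weight):

```python
def lexicon_score(tokens, opinion_lexicon):
    """Dictionary-based evidence: fraction of tokens found in a
    subjectivity lexicon."""
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in opinion_lexicon)
    return hits / len(tokens)

def hybrid_opinion_score(tokens, classifier_prob, opinion_lexicon, alpha=0.5):
    """Linearly interpolate dictionary-based evidence with the probability
    from a machine-learned subjectivity classifier."""
    return alpha * lexicon_score(tokens, opinion_lexicon) + (1 - alpha) * classifier_prob

lexicon = {'great', 'terrible'}                       # toy lexicon
print(hybrid_opinion_score(['great', 'movie'], 0.8, lexicon))  # 0.65
```

Documents would then be reranked by this score on top of the base retrieval ranking.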

A Study on the Intellectual Structure of Metadata Research by Using Co-word Analysis (동시출현단어 분석에 기반한 메타데이터 분야의 지적구조에 관한 연구)

  • Choi, Ye-Jin;Chung, Yeon-Kyoung
    • Journal of the Korean Society for Information Management
    • /
    • v.33 no.3
    • /
    • pp.63-83
    • /
    • 2016
  • As the usage of information resources produced in various media and forms has increased, metadata has become increasingly crucial as an information organization tool for describing those resources. The purposes of this study are to analyze and demonstrate the intellectual structure of the metadata field through co-word analysis. The data set was collected from journals registered in the Core Collection of the Web of Science citation database for the period from January 1, 1998 to July 8, 2016. Bibliographic data for 727 journal articles was collected using a topic category search with the query word 'metadata'. From these 727 articles, 410 with author keywords were selected, and after data preprocessing, 1,137 author keywords were extracted. Finally, 37 keywords with a frequency of 6 or more were selected for analysis. To demonstrate the intellectual structure of the metadata field, network analysis was conducted. As a result, 2 domains and 9 clusters were derived, intellectual relations among keywords in the metadata field were visualized, and keywords with high global and local centrality were identified. Six clusters from the cluster analysis were shown on a multidimensional scaling map, and the knowledge structure was proposed based on the correlations among keywords. The results of this study are expected to help readers understand the intellectual structure of the metadata field through visualization and to guide new approaches in metadata-related studies.
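The core co-word step, counting keyword pair co-occurrences per article and deriving a local (degree) centrality, can be sketched with the standard library (the keyword values below are hypothetical examples, not data from the study):

```python
from collections import Counter
from itertools import combinations

def coword_counts(keyword_lists):
    """Count how often each keyword pair co-occurs in the same article.
    Pairs are stored in sorted order so (a, b) and (b, a) collapse."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

def degree_centrality(pairs):
    """Local centrality: the number of distinct co-occurrence partners
    each keyword has in the co-word network."""
    deg = Counter()
    for a, b in pairs:
        deg[a] += 1
        deg[b] += 1
    return deg

articles = [['metadata', 'dublin core', 'interoperability'],
            ['metadata', 'dublin core']]
pairs = coword_counts(articles)
print(pairs[('dublin core', 'metadata')])  # 2 co-occurrences
```

The resulting pair weights feed the network, clustering, and MDS analyses described in the abstract.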

An Analysis of High School Korean Language Instruction Regarding Universal Design for Learning: Social Big Data Analysis and Survey Analysis (보편적 학습설계 측면에서의 고등학교 국어과 교수 실태: 소셜 빅데이터 및 설문조사 분석)

  • Shin, Mikyung;Lee, Okin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.1
    • /
    • pp.326-337
    • /
    • 2020
  • This study examined public interest in high school Korean language instruction and universal design for learning (UDL) using social big data analysis. Observations from 10,339 search results led to the conclusion that public interest in UDL was significantly lower than interest in high school Korean language instruction. The results of the big data association analysis showed that 17.22% of the related terms concerned "curriculum." In addition, a survey of 330 high school students examined how their teachers apply UDL in the classroom. High school students perceived computers as the technology tool most frequently used in daily classes (38.79%), and teacher-led lectures (52.12%) were the most frequently observed method of instruction. Compared with second- and third-year students, first-year students reported more frequent use of technology tools and varied instructional media (ps<.05). Students responded relatively more positively to the query on the provision of multiple means of representation; lesson contents became easier to understand when various study methods and materials were available. First-year students were generally more positive about teachers' incorporation of UDL.

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology consists of the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the suitable documents. 2) Determine whether each sentence is suitable for information extraction and derive a confidence score. 3) Based on the predicate feature, extract the information from suitable sentences and derive an overall confidence for the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is the development of a sequence tagging model based on bidirectional LSTM-CRF that uses the predicate feature of the query, yielding a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types; the proposed methodology proved to extract information effectively from various document types compared with the baseline model, whereas previous research performs poorly when extracting information from document types that differ from the training data.
In addition, this study prevents unnecessary extraction attempts on documents that do not contain the answer by predicting, before the extraction step, whether a document or sentence is suitable for information extraction. This provides a way to maintain precision even in an actual web environment. The information extraction problem for knowledge base expansion targets unstructured documents on the real web, so there is no guarantee that a given document contains the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show a low level of precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting document and sentence suitability is meaningful in that it helps maintain extraction performance in this setting. The limitations of this study and future research directions are as follows. First, there is a data preprocessing issue: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, so extraction results can be degraded when morphological analysis is performed improperly. A more advanced morphological analyzer is needed to enhance extraction performance. Second, there is the problem of entity ambiguity: the system cannot distinguish entities that share the same name. If several people with the same name appear in the news, the system may not extract information about the intended query.
Future research should therefore include measures to disambiguate people with the same name. Third, there is the issue of evaluation query data. We selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the system, and built an evaluation data set of 2,800 documents (400 questions × 7 documents per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, but this is a costly, manual activity; future research needs to evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
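The suitability-gated confidence scoring described above can be sketched roughly as follows. The paper does not publish its exact formula, so the gating threshold and the geometric-mean aggregation of per-token tag confidences are assumptions made for illustration:

```python
def overall_confidence(sentence_suitability, tag_confidences, threshold=0.5):
    """Gate extraction on the sentence-suitability score, then combine
    the per-token tag confidences of the extracted span (e.g. from a
    BiLSTM-CRF tagger) into one score via their geometric mean."""
    if sentence_suitability < threshold or not tag_confidences:
        return 0.0  # skip sentences unlikely to contain the answer
    prod = 1.0
    for c in tag_confidences:
        prod *= c
    geo_mean = prod ** (1.0 / len(tag_confidences))
    return sentence_suitability * geo_mean

print(overall_confidence(0.3, [0.9]))        # 0.0 - gated out, no attempt
print(overall_confidence(0.9, [0.8, 0.8]))   # 0.72 - extraction accepted
```

The gate is what keeps precision up on real web documents: low-suitability sentences never produce an answer candidate at all.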

Study on Methods for Sasang Constitution Diagnosis (사상체질진단 방법론 연구)

  • Kim Jon-Won;Lee Eui-Ju;Kim Kyn-Kon;Kim Jong-Yeol;Lee Yong-Tae
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.19 no.6
    • /
    • pp.1471-1474
    • /
    • 2005
  • Sasang constitutional medicine provides different treatments according to the patient's Sasang constitution, so constitution diagnosis is very important. The process of Sasang constitution diagnosis is difficult because it consumes much time and effort, and it tends to be subjective; an objective method is therefore needed. The QSCC II (Questionnaire of Sasang Constitution Classification II) has several problems: it cannot diagnose Taeyangin, its diagnostic accuracy is not high (roughly 60%), and so on. New methods for Sasang constitution diagnosis are therefore needed, and we set out to remedy the problems of the QSCC II. The first group of problems concerns the study execution process: it was not a multicenter study, the data were few, and Taeyangin cases were absent; a multicenter study is therefore required. The second group concerns the queries and the statistical analysis method. We addressed the problems of the self-report questionnaire: the lack of objective estimation (body type, personal appearance, etc.) and the absence of an assessment of typical versus atypical constitution types. Accordingly, we developed questionnaires of two kinds for Sasang constitution diagnosis: a new self-report questionnaire for patients and a new constitution diagnosis questionnaire for doctors. The two questionnaires now need to be integrated into a single diagnostic instrument.

Hazelcast Vs. Ignite: Opportunities for Java Programmers

  • Maxim, Bartkov;Tetiana, Katkova;S., Kruglyk Vladyslav;G., Murtaziev Ernest;V., Kotova Olha
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.2
    • /
    • pp.406-412
    • /
    • 2022
  • Storing large amounts of data has been a major problem since the beginning of computing history. Big Data has enabled huge advances in improving business processes, for example by finding customers' needs with prediction models based on web and social media search. The main purpose of Big Data stream processing frameworks is to allow programmers to query a continuous stream directly without dealing with lower-level mechanisms: programmers write stream processing code against these runtime libraries (also called stream processing engines), which take large volumes of data and analyze them. Streaming platforms are an emerging technology for handling continuous streams of data, and several Big Data streaming platforms are freely available on the Internet, but selecting the most appropriate one is not easy for programmers. In this paper, we present a detailed description of two of the most popular state-of-the-art streaming frameworks, Apache Ignite and Hazelcast, and compare their performance using selected attributes. Conventional databases are commonly used to store data, but with today's large-scale distributed applications handling enormous volumes they are no longer viable; data streaming technologies were developed to process data continuously in real time, and Big Data systems were introduced to store, process, and analyze data at high speed while coping with ever-growing numbers of users and data.

PC-SAN: Pretraining-Based Contextual Self-Attention Model for Topic Essay Generation

  • Lin, Fuqiang;Ma, Xingkong;Chen, Yaofeng;Zhou, Jiajun;Liu, Bo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3168-3186
    • /
    • 2020
  • Automatic topic essay generation (TEG) is a controllable text generation task that aims to generate informative, diverse, and topic-consistent essays based on multiple topics. To make the generated essays of high quality, a reasonable method should consider both diversity and topic consistency. Another essential issue is the intrinsic linkage of the topics, which helps the essays closely follow the semantics of the provided topics. However, it remains challenging for TEG to fill the semantic gap between the source topic words and the target output, and a more powerful model is needed to capture the semantics of the given topics. To this end, we propose a pretraining-based contextual self-attention (PC-SAN) model built upon the seq2seq framework. For the encoder of our model, we employ a dynamic weighted sum of layers from BERT to fully utilize the semantics of the topics, which is of great help in filling the gap and improving the quality of the generated essays. In the decoding phase, we also inject target-side contextual history information into the query layers to alleviate the lack of context in typical self-attention networks (SANs). Experimental results on large-scale paragraph-level Chinese corpora verify that our model is capable of generating diverse, topic-consistent text and makes substantial improvements over strong baselines. Furthermore, extensive analysis validates the effectiveness of the contextual embeddings from BERT and of the contextual history information in SANs.
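The self-attention step that PC-SAN modifies can be sketched for a single query vector (a generic scaled dot-product attention in plain Python; in the paper's decoder the query would additionally carry target-side contextual history, which is omitted here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector: score each key
    against the query, normalize with softmax, and return the weighted
    sum of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
print(out)  # leans toward the first value: its key matches the query
```

PC-SAN's change is to what goes into `query`: mixing in decoded history gives the attention a view of context that a vanilla SAN query lacks.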

Identification and Pharmacological Analysis of High Efficacy Small Molecule Inhibitors of EGF-EGFR Interactions in Clinical Treatment of Non-Small Cell Lung Carcinoma: a Computational Approach

  • Gudala, Suresh;Khan, Uzma;Kanungo, Niteesh;Bandaru, Srinivas;Hussain, Tajamul;Parihar, MS;Nayarisseri, Anuraj;Mundluru, Hema Prasad
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.16 no.18
    • /
    • pp.8191-8196
    • /
    • 2016
  • Inhibition of EGFR-EGF interactions forms an important therapeutic rationale in the treatment of non-small cell lung carcinoma (NSCLC). Established inhibitors have been successful in reducing the proliferative processes observed in NSCLC; however, patients suffer serious side effects. Considering the narrow therapeutic window of present EGFR inhibitors, the present study centred on identifying high-efficacy EGFR inhibitors through structure-based virtual screening strategies. The established inhibitors Afatinib, Dacomitinib, Erlotinib, Lapatinib, and Rociletinib formed the parent compounds for retrieving similar compounds by a linear-fingerprint-based Tanimoto search with a threshold of 90%. The compounds (parents and their respective similars) were docked at the EGF binding cleft of EGFR, and PatchDock-supervised protein-protein interactions were established between EGF and the ligand-bound and free states of EGFR. Compounds ADS103317, AKOS024836912, AGN-PC-0MXVWT, GNF-Pf-3539, and SCHEMBL15205939 were retrieved as similar to Afatinib, Dacomitinib, Erlotinib, Lapatinib, and Rociletinib, respectively. AGN-PC-0MXVWT, akin to Erlotinib, showed the highest affinity against EGFR among all the compounds (parent and similar) assessed in the study. Furthermore, AGN-PC-0MXVWT brought about significant blocking of EGFR-EGF interactions and in addition showed appreciable ADMET properties and pharmacophoric features. In this study, we report AGN-PC-0MXVWT as an efficient, high-efficacy inhibitor of EGFR-EGF interactions identified through computational approaches.
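The fingerprint-based Tanimoto search with a 90% threshold can be sketched as follows (fingerprints are represented here as sets of on-bit positions; the compound names and fingerprints are hypothetical stand-ins, not data from the study):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two bit fingerprints given as sets
    of on-bit positions: |A intersect B| / |A union B|."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def similars(parent_fp, library, threshold=0.9):
    """Return the names of library compounds whose fingerprints reach
    the Tanimoto threshold against the parent compound."""
    return [name for name, fp in library.items()
            if tanimoto(parent_fp, fp) >= threshold]

# Hypothetical parent and two library compounds:
parent = {1, 2, 3, 4, 5, 6, 7, 8, 9}
library = {'candidate_close': {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
           'candidate_far': {1, 2, 3}}
print(similars(parent, library))  # only the close analogue survives 90%
```

The surviving analogues would then proceed to the docking stage described in the abstract.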