• Title/Summary/Keyword: Web Document Analysis

Analysis of Access Authorization Conflict for Partial Information Hiding of RDF Web Document (RDF 웹 문서의 부분적인 정보 은닉과 관련한 접근 권한 충돌 문제의 분석)

  • Kim, Jae-Hoon;Park, Seog
    • Journal of the Korea Institute of Information Security & Cryptology / v.18 no.2 / pp.49-63 / 2008
  • RDF is the base ontology model used in the Semantic Web defined by the W3C. OWL extends the RDF base model by providing various vocabularies for defining many more ontology relationships. Recently, Jain and Farkas suggested an RDF access control model based on the RDF triple. The main point of their work is the authorization conflict problem caused by RDF inference, which must be considered for RDF ontology data. Because of this problem, an XML access control model cannot simply be adopted for RDF, even though RDF is represented in XML. However, Jain and Farkas did not define how an authorization propagates over the upper and lower ontology concepts when an RDF authorization is specified. The authorization specification needs to be defined clearly because, ultimately, an authorization conflict is a clash between the propagation that occurs when an authorization is specified and the propagation that occurs when authorizations are inferred. In this article, we first define an RDF access authorization specification based on the RDF triple in detail. Next, based on this definition, we analyze the authorization conflict problem caused by RDF inference in detail. We then briefly introduce a method that can quickly find an authorization conflict by using graph labeling techniques; this method is particularly relevant to subsumption-based inference (a small illustrative sketch follows this entry). Finally, we present a comparative analysis with Jain and Farkas' study, and experimental results showing the efficiency of the suggested conflict detection method.
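
A minimal sketch of the kind of subsumption-driven conflict the abstract describes, assuming a toy class hierarchy and a grant/deny sign per (class, predicate) pair; the names, triple encoding, and propagation rule are illustrative assumptions, not the paper's model:

```python
# Illustrative sketch (not the authors' implementation): an authorization
# conflict caused by RDF subsumption inference.

# A tiny subsumption hierarchy: child -> parent (rdfs:subClassOf).
SUBCLASS_OF = {"GraduateStudent": "Student", "Student": "Person"}

def ancestors(cls):
    """Yield all superclasses reachable via rdfs:subClassOf."""
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        yield cls

# Explicit authorizations on (subject-class, predicate): '+' grant, '-' deny.
AUTHORIZATIONS = {
    ("Person", "hasEmail"): "+",           # granted high in the hierarchy ...
    ("GraduateStudent", "hasEmail"): "-",  # ... denied on a subclass
}

def find_conflicts(auths):
    """A triple readable on a subclass is also inferable on its superclasses;
    flag pairs where the explicit sign and the inherited sign disagree."""
    conflicts = []
    for (cls, pred), sign in auths.items():
        for sup in ancestors(cls):
            inherited = auths.get((sup, pred))
            if inherited is not None and inherited != sign:
                conflicts.append(((cls, pred, sign), (sup, pred, inherited)))
    return conflicts

print(find_conflicts(AUTHORIZATIONS))
# [(('GraduateStudent', 'hasEmail', '-'), ('Person', 'hasEmail', '+'))]
```

The paper's graph-labeling method is aimed at finding such clashes quickly; this brute-force walk merely shows what counts as a clash.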

Korea National College of Agriculture and Fisheries in Naver News by Web Crawling: Based on Keyword Analysis and Semantic Network Analysis (웹 크롤링에 의한 네이버 뉴스에서의 한국농수산대학 - 키워드 분석과 의미연결망분석 -)

  • Joo, J.S.;Lee, S.Y.;Kim, S.H.;Park, N.B.
    • Journal of Practical Agriculture & Fisheries Research / v.23 no.2 / pp.71-86 / 2021
  • This study was conducted to glean information on the university's image from words related to 'Korea National College of Agriculture and Fisheries (KNCAF)' in Naver News. For this purpose, word frequency analysis, TF-IDF evaluation, and semantic network analysis were performed using web crawling technology (a small pipeline sketch follows this entry). In the word frequency analysis, 'agriculture', 'education', 'support', 'farmer', 'youth', 'university', 'business', 'rural', and 'CEO' were important words. In the TF-IDF evaluation, the key words were 'farmer', 'drone', 'Ministry of Agriculture, Food and Rural Affairs', 'Jeonbuk', 'young farmer', 'agriculture', 'Chonju', 'university', 'device', and 'spreading'. In the semantic network analysis, the bigrams showed high correlations in the order of 'youth' - 'farmer', 'digital' - 'agriculture', 'farming' - 'settlement', 'agriculture' - 'rural', 'digital' - 'turnover'. When the importance of keywords was evaluated with five centrality indices, 'agriculture' ranked first, and the keywords in second place were 'farmer' (Cc, Cb), 'education' (Cd, Cp), and 'future' (Ce). Spearman's rank correlation coefficients between the centrality indices showed the most similar rankings between degree centrality and PageRank centrality. In terms of word frequency, the KNCAF articles in Naver News featured important words such as 'agriculture', 'education', 'support', 'farmer', and 'youth'. However, in the evaluation that also considers document frequency, words such as 'farmer', 'drone', 'Ministry of Agriculture, Food and Rural Affairs', 'Jeonbuk', and 'young farmer' were found to be key words. The centrality analysis considering the network connectivity between words was suitable for evaluation by Cd and Cp, and the words with strong centrality were 'agriculture', 'education', 'future', 'farmer', 'digital', 'support', and 'utilization'.
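
A compact sketch of the two quantitative steps named above (TF-IDF scoring, then centrality ranking over a word co-occurrence network), assuming scikit-learn and networkx; the three sample "articles" are invented stand-ins for the crawled Naver News texts:

```python
# Hedged sketch of the pipeline: TF-IDF plus network centrality.
from itertools import combinations

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "young farmer education support agriculture",
    "digital agriculture drone education Jeonbuk",
    "rural settlement young farmer digital turnover",
]

# TF-IDF: words frequent in one article but rare across the corpus score high.
tfidf = TfidfVectorizer().fit(docs)
scores = tfidf.transform(docs).sum(axis=0).A1
ranked = sorted(zip(tfidf.get_feature_names_out(), scores), key=lambda x: -x[1])
print("top TF-IDF terms:", ranked[:5])

# Semantic network: link words co-occurring in the same article, then rank
# nodes by degree centrality (Cd) and PageRank (Cp), as in the study.
G = nx.Graph()
for doc in docs:
    G.add_edges_from(combinations(set(doc.split()), 2))
print("degree centrality:", nx.degree_centrality(G))
print("PageRank:", nx.pagerank(G))
```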

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.1-18 / 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as image, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through an analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative medium through which users express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in the academic world but also in industry because it can be used to extract various issues from texts such as news articles and SNS (Social Network Service) posts and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can be clues for providing actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period. We generated an issue flow diagram based on the similarity of each issue between two consecutive periods (a sketch of this linking step follows this entry), and analyzed the issue transition pattern among categories by using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. We propose the following useful application scenarios for the issue flow diagram presented in the experiment section. First, we can identify an issue that actively appears during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram. This implies that our methodology can be used to discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis: a pair of mutually similar categories induces two-way transitions, whereas a one-way transition can be recognized as an indicator that issues in one category tend to be influenced by issues in another. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information such as the read counts, written time, and comments of documents should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future work.
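
A hedged sketch of the linking step behind such an issue flow diagram: issues from two consecutive periods are connected when their keyword distributions are similar enough. The topics, keyword counts, and the 0.5 threshold are illustrative assumptions, not values from the paper:

```python
# Connect issues across periods by cosine similarity of keyword vectors.
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

period_t  = {"issue1": Counter(nuclear=5, test=4, north_korea=3)}
period_t1 = {"issueA": Counter(nuclear=4, north_korea=4, sanctions=2),
             "issueB": Counter(families=5, separated=4)}

# Edges of the issue flow diagram: survivals, merges, and splits fall out of
# which inter-period similarities exceed the threshold.
edges = [(i, j, round(cosine(ka, kb), 2))
         for i, ka in period_t.items()
         for j, kb in period_t1.items()
         if cosine(ka, kb) > 0.5]
print(edges)  # [('issue1', 'issueA', 0.75)]; issueB is a newly created issue
```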

Forgery Protection System and Watermark-Inserted 2D Bar-code (워터마크가 삽입된 이차원 바코드와 위·변조 방지 시스템)

  • Lee, Sang-Kyung;Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.825-830 / 2010
  • Generally, copy protection marks and 2D bar-code techniques are widely used for forgery protection in printed public documents. However, it is hard to distinguish genuine documents from copies with existing methods, because the existing 2D bar-code is separate from the copy protection mark and can only be recognized by a dedicated optical bar-code scanner. Therefore, in this paper, we propose a forgery protection technique that distinguishes genuine documents from copies by using a watermark-embedded 2D bar-code, which can be accurately verified not only by the naked eye but also by a scanner. The copy protection mark consists of patterns that are deformed by the low-pass filter characteristic of digital I/O devices. We verified the performance of the proposed technique by applying histogram analysis to the original, copied, and scanned-copy images of printed documents (a simplified histogram sketch follows this entry). We also propose a 2D bar-code confirmation system that can be accessed through an online server using certification key data detected by a web camera or cell phone camera.
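
A simplified illustration of the histogram idea, assuming synthetic arrays in place of real scans: copying through a printer/scanner chain acts like a low-pass filter, so mid-gray pixels accumulate in a copy of a crisp binary mark:

```python
# Hedged sketch: detect a copy by the mid-gray mass of its histogram.
import numpy as np

rng = np.random.default_rng(0)
original = rng.choice([0, 255], size=(100, 100))       # crisp binary pattern
copied = original + rng.normal(0, 40, original.shape)  # blur/noise from copying
copied = copied.clip(0, 255)

def midtone_ratio(img, lo=64, hi=192):
    """Fraction of pixels that drifted into mid-gray: high for copies."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    return hist[lo:hi].sum() / hist.sum()

print("original:", midtone_ratio(original))  # ~0.0
print("copy:    ", midtone_ratio(copied))    # clearly > 0
# A threshold on this ratio gives a simple genuine-vs-copy decision.
```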

Adjusting Edit Scripts on Tree-structured Documents (트리구조의 문서에 대한 편집스크립트 조정)

  • Lee, SukKyoon;Um, HyunMin
    • Journal of Korea Society of Industrial Information Systems / v.24 no.2 / pp.1-14 / 2019
  • Since most documents used in web, XML, and office applications are tree-structured, diff, merge, and version control for tree-structured documents in multi-user environments are crucial tasks. However, research on edit scripts, which form the basis for these tasks, is still at an early stage. In this paper, we present a document model for understanding how tree-structured documents change as edit scripts are executed, and propose a method for switching adjacent edit operations on tree-structured documents based on an analysis of the effects of those operations. Mostly, edit scripts produced as the result of a diff on tree-structured documents consist only of basic operations such as update, insert, and delete. However, when move and copy are included in edit scripts, their complex semantics often force the scripts to be generated for two-pass execution. In this paper, using the proposed method of switching edit operations, we present an algorithm that transforms the edit scripts of X-treeESgen, which are designed to execute in two passes, into scripts that can be executed in one pass (a conceptual sketch of the switching step follows this entry).
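
A conceptual sketch of switching adjacent edit operations, under the simplifying assumption that two operations commute when neither path is a prefix of the other; the paper's full rules, which also cover move/copy path adjustment, are more involved:

```python
# Hypothetical sketch, not X-treeESgen's actual rule set.
from dataclasses import dataclass

@dataclass
class Op:
    kind: str    # 'insert' | 'delete' | 'update' | 'move' | 'copy'
    path: tuple  # position in the tree, e.g. (0, 2) = 3rd child of 1st child

def independent(a: Op, b: Op) -> bool:
    """Simplified commutation test: neither path is a prefix of the other."""
    n = min(len(a.path), len(b.path))
    return a.path[:n] != b.path[:n]

def try_switch(script, i):
    """Swap script[i] and script[i+1] in place when they commute."""
    if independent(script[i], script[i + 1]):
        script[i], script[i + 1] = script[i + 1], script[i]
        return True
    return False  # dependent ops would require path adjustment to switch

print(try_switch([Op("insert", (0, 1)), Op("delete", (2,))], 0))  # True
print(try_switch([Op("delete", (2,)), Op("update", (2, 0))], 0))  # False
```

Repeatedly applying such switches is what lets a script scheduled for two passes be reordered into a single executable pass.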

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People are nowadays creating a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfy the defining conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as a new and important source for the creation of new values, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) convey the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of any searched keyword. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly; TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to data; the interaction between data is easy, and it is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and detects issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we can confirm the utility of storytelling and time series analysis (a topic-modeling sketch follows this entry). Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
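
A small sketch of the topic-extraction core of a system like TITS, using scikit-learn's LDA on invented tweet-like texts; the storage layers (MongoDB, Hadoop) and the d3.js visualization are omitted, and the tweets, topic count, and keyword count are assumptions for the demo:

```python
# Hedged sketch: LDA topic modeling over tweets, then top keywords per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "election candidate debate poll vote",
    "vote poll election turnout result",
    "baseball league season pitcher home run",
    "pitcher strikeout league game baseball",
]

vec = CountVectorizer()
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
words = vec.get_feature_names_out()
for t, dist in enumerate(lda.components_):
    top = [words[i] for i in dist.argsort()[-4:][::-1]]
    print(f"topic {t}: {top}")  # daily keyword sets for the ranking view
```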

Development of Extreme Event Analysis Tool Based on Spatial Information Using Climate Change Scenarios (기후변화 시나리오를 활용한 공간정보 기반 극단적 기후사상 분석 도구(EEAT) 개발)

  • Han, Kuk-Jin;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.36 no.3 / pp.475-486 / 2020
  • Climate change scenarios are the basis of research to cope with climate change and consist of large-scale spatio-temporal data. From the data point of view, one scenario occupies about 83 gigabytes or more, and the data format is semi-structured, making it difficult to utilize through means such as search, extraction, archiving, and analysis. In this study, a tool for analyzing extreme climate events based on spatial information is developed to improve the usability of large-scale, multi-period climate change scenarios. In addition, a pilot analysis is conducted of the times and places at which heavy-rain thresholds observed in the past could recur in the future, by applying the developed tool to the RCP8.5 climate change scenario. As a result, days with a cumulative rainfall of more than 587.6 mm over three days would account for about 76 days in the 2080s, and localized heavy rains would occur (a minimal threshold-scan sketch follows this entry). The developed analysis tool was designed to facilitate the entire process, from initial setting through to deriving analysis results, on a single platform, and enables the results to be exported in various formats without specific commercial software: web document format (HTML), image (PNG), climate change scenario (ESR), and statistics (XLS). Therefore, this analysis tool is considered useful for determining future prospects for climate change, vulnerability assessment, and similar tasks, and it is expected to inform analysis tools for the climate change scenarios based on forthcoming climate change reports.
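
A minimal sketch of the pilot threshold scan, assuming a synthetic daily rainfall series in place of the gridded RCP8.5 data; a pandas rolling window finds days whose trailing three-day total exceeds the 587.6 mm threshold cited above:

```python
# Hedged sketch: flag 3-day cumulative rainfall above a past heavy-rain threshold.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2080-07-01", periods=60, freq="D")
rain = pd.Series(rng.gamma(0.6, 35.0, len(days)), index=days)  # mm/day
rain.iloc[20:23] = [250.0, 220.0, 180.0]  # injected extreme event

THRESHOLD_MM = 587.6
rolling3 = rain.rolling(window=3).sum()
extreme_days = rolling3[rolling3 > THRESHOLD_MM]
print(extreme_days)  # dates whose trailing 3-day total exceeds the threshold
```

The actual tool applies the same test per grid cell across the scenario's full spatial extent, which is where the 83-gigabyte scale comes in.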

WordNet-Based Category Utility Approach for Author Name Disambiguation (저자명 모호성 해결을 위한 개념망 기반 카테고리 유틸리티)

  • Kim, Je-Min;Park, Young-Tack
    • The KIPS Transactions: Part B / v.16B no.3 / pp.225-232 / 2009
  • Author name disambiguation is essential for improving the performance of document indexing, retrieval, and web search. Author name disambiguation resolves the conflict that arises when multiple authors share the same name label. This paper introduces a novel approach that exploits ontologies and a WordNet-based category utility for author name disambiguation. Our method utilizes author knowledge in the form of a populated ontology that uses various types of properties: the titles, abstracts, and co-authors of papers, and the authors' affiliations. The author ontology has been constructed semi-automatically for the artificial intelligence and Semantic Web areas using the OWL API and heuristics. Author name disambiguation determines the correct author from the various candidate authors in the populated author ontology. Candidate authors are evaluated using the proposed WordNet-based category utility to resolve ambiguity. Category utility is a tradeoff between the intra-class similarity and inter-class dissimilarity of author instances, where author instances are described in terms of attribute-value pairs (a toy sketch follows this entry). The WordNet-based category utility exploits concept information in WordNet for the semantic analysis used in disambiguation. In our experiments, the WordNet-based category utility increased the number of successful disambiguations by about 10% compared with the plain category utility, with an overall accuracy of around 98%.
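
A toy sketch of plain (non-WordNet) category utility over attribute-value pairs, with invented author instances; the paper's variant additionally maps values to WordNet concepts before comparing them, which this sketch does not do:

```python
# Hedged sketch: category utility for clusters of attribute-value instances.
from collections import Counter

def category_utility(clusters):
    """clusters: list of clusters, each a list of dicts (attribute -> value).
    Rewards intra-cluster similarity and inter-cluster dissimilarity."""
    all_items = [item for c in clusters for item in c]
    n = len(all_items)

    def sq_prob_sum(items):
        counts = Counter((a, v) for item in items for a, v in item.items())
        m = len(items)
        return sum((c / m) ** 2 for c in counts.values())

    base = sq_prob_sum(all_items)  # expected match probability without clusters
    cu = sum(len(c) / n * (sq_prob_sum(c) - base) for c in clusters)
    return cu / len(clusters)

# Two candidate "authors" sharing one name label, split by affiliation/topic:
kim_a = [{"affiliation": "AI Lab", "topic": "ontology"},
         {"affiliation": "AI Lab", "topic": "ontology"}]
kim_b = [{"affiliation": "DB Lab", "topic": "indexing"}]
print(category_utility([kim_a, kim_b]))  # higher CU = better separation
```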

Risk of Breast Cancer and Total Malignancies in Rheumatoid Arthritis Patients Undergoing TNF-α Antagonist Therapy: a Meta-analysis of Randomized Control Trials

  • Liu, Yang;Fan, Wei;Chen, Hao;Yu, Ming-Xia
    • Asian Pacific Journal of Cancer Prevention / v.15 no.8 / pp.3403-3410 / 2014
  • Context: Interest exists in whether TNF-α antagonists increase the risk of breast cancer and total malignancies in patients with rheumatoid arthritis (RA). Objectives: To analyze the risk of malignancies, especially breast cancer, in patients with RA enrolled in randomized control trials (RCTs). Methods: A systematic literature search for RCTs published from 1 January 1998 to 1 July 2013 was conducted in online databases such as PubMed, WILEY, EMBASE, ISI Web of Knowledge, and the Cochrane Library. Included studies were RCTs that compared the safety of at least one dose of the five TNF-α antagonists with placebo or methotrexate (MTX) (or TNF-α antagonists plus MTX vs placebo plus MTX) in RA patients for more than 24 weeks; all references were imported into the document management software EndNote X6. Two independent reviewers selected studies and extracted data about study design, patients' characteristics, and the type and number of all malignancies. Results: 28 RCTs from 34 records with 11,741 patients were analyzed. Of the total, 97 patients developed at least one malignancy during the double-blind trials, and breast cancer was observed in 17 patients (17.5% of total malignancies). However, no statistically significant increase in risk was observed in either the per protocol (PP) model (OR 0.65, 95% CI [0.22, 1.93]) or the modified intention-to-treat (mITT) model (OR 0.75, 95% CI [0.25, 2.21]). There was also no significant trend toward an increased risk of total malignancies with anti-TNF-α therapy administered at approved doses in either model (OR 1.06, 95% CI [0.64, 1.75], and OR 1.30, 95% CI [0.80, 2.14], respectively). Between the two models, the mITT analysis led to higher estimates than the PP analysis (an illustrative odds-ratio calculation follows this entry). Conclusions: This study did not find a significantly increased risk of breast cancer or total malignancies in adult RA patients treated with TNF-α antagonists at approved doses. However, it cannot be ignored that more patients developed malignancies with TNF-α antagonist therapy than with placebo or MTX, despite the lack of statistical significance, so stricter clinical trials and long-term follow-up are needed, and both mITT and PP analyses should be used in such safety analyses.
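
For orientation, a sketch of the effect measure reported above: an odds ratio with a Wald 95% CI from a single hypothetical 2x2 table. The paper's pooled estimates come from a meta-analysis across 28 RCTs, which this single-table example does not reproduce; the counts below are invented:

```python
# Hedged sketch: odds ratio and 95% CI from one 2x2 table of event counts.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = events/non-events on anti-TNF; c,d = events/non-events control."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical: 8 malignancies among 4000 treated vs 6 among 3000 controls.
print(odds_ratio_ci(8, 3992, 6, 2994))
# A CI spanning 1 (as here) means no statistically significant risk increase.
```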

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data, and the execution of the considerable number of functions needed to categorize and analyze the stored data, are difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing computing infrastructure's analysis tools and management systems. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL databases are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system: it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data increases rapidly, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module (a minimal MongoDB sketch follows this entry). A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
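
A minimal sketch of the MongoDB side of such a pipeline, assuming pymongo, a locally running MongoDB instance, and invented database/collection names and log fields; it shows the schema-free inserts of heterogeneous log records and a per-type aggregation of the kind a graph module would plot:

```python
# Hedged sketch: schema-free log inserts and a per-type count in MongoDB.
# Requires a running MongoDB at localhost:27017 and the pymongo package.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]  # database and collection names are assumptions

# Unstructured: each document may carry different fields, no schema needed.
logs.insert_many([
    {"type": "login",    "user": "u1", "ts": datetime.now(timezone.utc)},
    {"type": "transfer", "user": "u2", "amount": 120000,
     "ts": datetime.now(timezone.utc)},
])

# Aggregate log counts per type, the kind of summary a graph module plots:
pipeline = [{"$group": {"_id": "$type", "count": {"$sum": 1}}}]
for row in logs.aggregate(pipeline):
    print(row)
```

In the proposed architecture this flexible-schema store is the landing zone for aggregated logs, while heavier batch analysis is handed off to the Hadoop module.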