• Title/Abstract/Keyword: Web Document Analysis

A Study on Constructing Order-Production System through Integrated E-Mail and Database (E-mail과 DB연계를 통한 주문-생산시스템 구축연구)

  • 정한욱;이창호
    • Journal of the Korea Safety Management & Science / v.2 no.2 / pp.155-165 / 2000
  • Many enterprises run database applications over a VAN (Value Added Network) or WAN (Wide Area Network), but such infrastructure is difficult and expensive to build. We therefore suggest a low-cost database system that connects sites over long distances using ordinary personal computers. The system is highly flexible, and its value is increased by the fact that it can be built without advanced programming skill. The approach can be applied between companies or between branches, for example to exchange customer claim information, inventory information, and product orders. The key point is to import not the document itself but the data it contains, so that end users can perform analysis and decision-making against their own database. This would enhance productivity in many enterprises.
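
As a rough illustration of the idea described above (importing the data contained in an e-mailed order rather than the document itself), the sketch below parses order lines out of a message body and loads them into a local SQLite table. It is not the authors' implementation; the message format, table schema, and field names are assumptions made for the example.

```python
# Illustrative sketch only: parse "item,quantity" lines from an e-mailed
# order and store them in SQLite. The message format and schema are assumed.
import sqlite3
from email import message_from_string

RAW_MAIL = """From: branch@example.com
Subject: order 2000-05-01

widget-a,100
widget-b,250
"""

def import_order(raw_mail: str, db_path: str = "orders.db") -> int:
    body = message_from_string(raw_mail).get_payload()
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (item TEXT, qty INTEGER)")
    rows = 0
    for line in body.splitlines():
        if "," in line:
            item, qty = line.split(",", 1)
            con.execute("INSERT INTO orders VALUES (?, ?)", (item.strip(), int(qty)))
            rows += 1
    con.commit()
    con.close()
    return rows

print(import_order(RAW_MAIL))  # 2 order lines imported
```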

Intelligent Web Crawler for Supporting Big Data Analysis Services (빅데이터 분석 서비스 지원을 위한 지능형 웹 크롤러)

  • Seo, Dongmin;Jung, Hanmin
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.575-584 / 2013
  • The data used for big-data analysis vary widely, including news, blogs, SNS posts, papers, patents, and sensor data. In particular, the use of web documents that offer reliable data in real time is steadily increasing, and web crawlers that collect such documents automatically have grown in importance as big data is applied in more fields and web data grow exponentially every year. However, existing web crawlers cannot collect all of the documents in a web site because they follow only the URLs found in documents they have already collected, and they often re-collect documents already gathered by other crawlers because information about what each crawler has collected is not managed efficiently between crawlers. Therefore, this paper proposes a distributed web crawler. To resolve these problems, the proposed crawler collects web documents through each site's RSS feed and the Google search API, and it achieves fast crawling performance with a client-server model based on RMI and NIO that minimizes network traffic. Furthermore, it extracts the core content of a web document by comparing keyword similarity over the tags contained in the document. Finally, to verify the superiority of our crawler, we compare it with existing web crawlers in various experiments.
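
The sketch below shows, under simplifying assumptions, the RSS-based collection step: it reads a site's RSS feed and skips links that any crawler instance has already collected. The feed URL is a placeholder, and a plain Python set stands in for the shared registry that the paper's RMI/NIO client-server model would provide.

```python
# Sketch: collect only links that no crawler has gathered yet, using a
# site's RSS 2.0 feed. A shared set replaces the paper's server-side registry.
import urllib.request
import xml.etree.ElementTree as ET

def collect_from_rss(feed_url: str, seen: set[str]) -> list[str]:
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    new_links = []
    for item in root.iter("item"):          # standard RSS 2.0 <item> elements
        link = item.findtext("link")
        if link and link not in seen:
            seen.add(link)
            new_links.append(link)
    return new_links

seen_urls: set[str] = set()
# new = collect_from_rss("https://example.com/rss.xml", seen_urls)  # placeholder feed URL
```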

Managing and Modeling Strategy of Geo-features in Web-based 3D GIS

  • Kim, Kyong-Ho;Choe, Seung-Keol;Lee, Jong-Hun;Yang, Young-Kyu
    • Proceedings of the KSRS Conference / 1999.11a / pp.75-79 / 1999
  • Geo-features play a key role in object-oriented or feature-based geo-processing systems, so the strategy for how to model and how to manage geo-features forms the main architecture of the entire system and determines its efficiency and functionality. Unlike a conventional 2D geo-processing system, geo-features in a 3D GIS require many additional modeling considerations for efficient manipulation, analysis, and visualization. When the system runs on the Web, it must also address how to leverage the level of detail and the level of automation of modeling, in addition to supporting client-side data interoperability. We built a set of 3D geo-features in which each geo-feature contains a set of aspatial data and 3D geo-primitives. The 3D geo-primitives hold the fundamental modeling data, such as the height of a building or the burial depth of a gas pipeline. We separated the additional modeling data on the geometry and appearance of the model from the fundamental modeling data, both to keep the table in the database concise and to give users more freedom in representing geo-objects. To let users build and exchange their own data, we devised a file format called VGFF 2.0 (Virtual GIS File Format), which describes three-dimensional geo-information in XML (eXtensible Markup Language). The DTD (Document Type Definition) of VGFF 2.0 is parsed using the DOM (Document Object Model). We also developed authoring tools so that users can create their own 3D geo-features and models and save the data in the VGFF 2.0 format. We expect VGFF 2.0 to evolve into a 3D counterpart of SVG (Scalable Vector Graphics), especially for 3D GIS on the Web.
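
As a small illustration of the XML plus DOM approach the abstract mentions, the sketch below parses a VGFF-like geo-feature description with Python's DOM implementation. The element and attribute names are invented for the example and are not the actual VGFF 2.0 schema.

```python
# Sketch only: the element/attribute names are illustrative, not VGFF 2.0.
from xml.dom.minidom import parseString

SAMPLE = """<geoFeature id="bldg-01">
  <aspatial><name>City Hall</name></aspatial>
  <primitive type="building"><height unit="m">24.5</height></primitive>
</geoFeature>"""

dom = parseString(SAMPLE)
feature = dom.documentElement
height = feature.getElementsByTagName("height")[0]
print(feature.getAttribute("id"), height.firstChild.data, height.getAttribute("unit"))
# bldg-01 24.5 m
```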

Locating Text in Web Images Using Image Based Approaches (웹 이미지로부터 이미지기반 문자추출)

  • Chin, Seongah;Choo, Moonwon
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.27-39 / 2002
  • A text-locating technique capable of finding and extracting text blocks in various Web images is presented here. This area of work has largely been ignored by researchers, even though such text can be meaningful for Internet users. The algorithms work without prior knowledge of the text orientation, size, or font. Our text extraction algorithm applies edge detection followed by histogram analysis of the characteristic properties of letters within text clustering regions, so that extraction of the text region does not depend on font style or size. A number of experiments showed acceptable results.
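
A very rough sketch of the general edge-plus-histogram idea (not the authors' exact algorithm): detect edges with OpenCV, then use the per-row edge density as a projection histogram to propose horizontal bands likely to contain text, independent of font.

```python
# Sketch: edge detection followed by a row-projection histogram.
# Thresholds are illustrative, not tuned values from the paper.
import cv2
import numpy as np

def candidate_text_rows(image_path: str, density_thresh: float = 0.05) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)
    row_density = edges.mean(axis=1) / 255.0   # fraction of edge pixels per row
    return np.where(row_density > density_thresh)[0]

# rows = candidate_text_rows("web_banner.png")
```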

Analysis of Consumer Awareness of Cycling Wear Using Web Mining (웹마이닝을 활용한 사이클웨어 소비자 인식 분석)

  • Kim, Chungjeong;Yi, Eunjou
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.5 / pp.640-649 / 2018
  • This study analyzed consumer awareness of cycling wear using web mining, one of the big-data analysis methods. Postings and comments related to cycling wear from 2006 to 2017 in the Naver cafe 'people who commute by bicycle' were collected and analyzed using R packages; a total of 15,321 documents were used in the analysis. Keywords for cycling wear were extracted with a Korean morphological analyzer (KoNLP) and converted into a TDM (term-document matrix) and a co-occurrence matrix to calculate keyword frequencies. The most frequent keyword was 'tights', which included the opinion that wearers feel embarrassed because they are too tight. When purchasing cycling wear, consumers appeared to consider 'price', 'size', and 'brand'. Since 2016, 'low price' and 'cost effectiveness' have appeared more frequently than before, indicating that consumers tend to prefer practical products. The findings also showed that it is necessary to improve not only design and wearability but also material functionality, such as sweat absorbance and quick drying, and the functionality of the pad. These results are similar to those of previous questionnaire-based studies. Therefore, web mining of consumer opinions and requirements in near real time is expected to serve as an objective indicator that can be reflected in product development.
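
The study built its term-document matrix and co-occurrence matrix in R with the KoNLP morphological analyzer; the toy sketch below only illustrates those two structures in Python, using whitespace tokens so the example stays self-contained.

```python
# Toy sketch of a term-document matrix (terms x docs) and its co-occurrence matrix.
import numpy as np

docs = ["cycling tights price", "tights size brand", "price size pad"]
vocab = sorted({w for d in docs for w in d.split()})
tdm = np.array([[d.split().count(w) for d in docs] for w in vocab])
cooc = tdm @ tdm.T                        # term-by-term co-occurrence counts
freq = dict(zip(vocab, tdm.sum(axis=1)))  # overall keyword frequencies
print(sorted(freq.items(), key=lambda kv: -kv[1]))
```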

Auto Detection System of Personal Information based on Images and Document Analysis (이미지와 문서 분석을 통한 개인 정보 자동 검색 시스템)

  • Cho, Jeong-Hyun;Ahn, Cheol-Woong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.5 / pp.183-192 / 2015
  • This paper proposes a Personal Information Auto Detection (PIAD) system to prevent the leakage of personal information in document and image files, which can be used by mobile service providers. The proposed system automatically detects images and documents that contain personal information and shows the results to the user. PIAD is divided into a selection step for fast and accurate image retrieval and an analysis step composed of SURF, erosion and dilation, and the FindContours algorithm. The proposed system achieved more than 98% accuracy through these selection and analysis steps, detecting 267 of 272 images.
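
The sketch below illustrates only the morphological part of the analysis step named in the abstract (erosion and dilation followed by contour finding with OpenCV); the SURF-based selection step is left out because it requires the opencv-contrib build, and all thresholds here are assumptions.

```python
# Sketch: erosion/dilation then FindContours to propose candidate regions.
import cv2
import numpy as np

def candidate_regions(image_path: str) -> list[tuple[int, int, int, int]]:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.dilate(cv2.erode(binary, kernel), kernel)   # erosion then dilation
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]            # (x, y, w, h) boxes

# boxes = candidate_regions("scanned_id_card.png")
```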

A Multi-Agent MicroBlog Behavior based User Preference Profile Construction Approach

  • Kim, Jee-Hyun;Cho, Young-Im
    • Journal of the Korea Society of Computer and Information / v.20 no.1 / pp.29-37 / 2015
  • The user-centric, application-based Web 2.0 has replaced Web 1.0, and users now both gain and provide information through interactive network applications. As a result, traditional approaches that build a user preference profile only by extracting and analyzing the user's local document operations and network browsing behavior cannot fully reflect the user's interests. This paper therefore proposes a preference analysis approach based on users' communication information on a microblog, such as reading, forwarding, and @ behavior. It uses an improved PersonalRank method to analyze the importance of a user to other users in the network, and it updates the weights of items in the user preference profile based on the users' communication behavior. Simulation results show that the proposed method outperforms the ontology model, the TREC model, and the category model in terms of the 11SPR value.
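
The paper's improved PersonalRank and its microblog behavior weights are not reproduced here; the sketch below is only the standard PersonalRank iteration (random walk with restart) on a small user/item graph, to show the kind of importance score the method propagates.

```python
# Sketch of the basic PersonalRank iteration (random walk with restart).
def personal_rank(graph: dict, root: str, alpha: float = 0.85, iters: int = 50) -> dict:
    rank = {node: 0.0 for node in graph}
    rank[root] = 1.0
    for _ in range(iters):
        nxt = {node: 0.0 for node in graph}
        for node, neighbors in graph.items():
            for nb in neighbors:
                nxt[nb] += alpha * rank[node] / len(neighbors)
        nxt[root] += 1.0 - alpha          # restart at the target user
        rank = nxt
    return rank

G = {"userA": ["itemX", "itemY"], "userB": ["itemY"],
     "itemX": ["userA"], "itemY": ["userA", "userB"]}
print(personal_rank(G, "userA"))
```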

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not the bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. Retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query, and the most relevant documents do not necessarily appear at the top of the query output. Moreover, current search tools cannot retrieve, from the gigantic mass of documents, those related to a document already retrieved. The most important problem for many current search systems is therefore to increase the quality of search, that is, to provide related documents and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A citation index indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, the references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval, and the citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document, and the index supports navigation backward in time (the list of cited articles) and forward in time (the subsequent articles that cite the current article). However, CiteSeer cannot index links between articles that researchers do not make explicitly, because it indexes only the links created when researchers cite other articles; for the same reason, CiteSeer does not scale easily. These problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. Each document is converted into a tabular form in which the extracted predicates are checked against the possible subjects and objects. Using this table, we build a hierarchical graph of the document and then integrate the graphs of multiple documents. From the integrated graph, the area of each document is computed relative to the integrated documents, and relations among the documents are marked by comparing these areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, which is about 15% better than the baseline.
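
As a tiny illustration of the tabular form the abstract describes, the sketch below records which subjects occur with which predicates across sentences; in FCA terms this is the binary incidence table (formal context) from which the hierarchical graph would be built. The example pairs are invented.

```python
# Toy sketch: a subject x predicate incidence table from (subject, predicate) pairs.
pairs = [("web", "grow"), ("web", "include"), ("search tool", "retrieve")]
subjects = sorted({s for s, _ in pairs})
predicates = sorted({p for _, p in pairs})
table = {s: {p: int((s, p) in pairs) for p in predicates} for s in subjects}
for s in subjects:
    print(s, table[s])
```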

Generation, Storing and Management System for Electronic Discharge Summaries Using HL7 Clinical Document Architecture (HL7 표준임상문서구조를 사용한 전자퇴원요약의 생성, 저장, 관리 시스템)

  • Kim, Hwa-Sun;Kim, Il-Kon;Cho, Hune
    • Journal of KIISE: Databases / v.33 no.2 / pp.239-249 / 2006
  • Interoperability has generally been de-emphasized in hospital information systems because each system is operated independently of the others. This study proposes a future-oriented hospital information system through the design and realization of the HL7 Clinical Document Architecture (CDA). After defining the item regulations and the templates for the discharge summary form and the radiology interpretation form, clinical documents are generated by the hospital information system through analysis and design of the clinical document architecture. The schema is analyzed based on the HL7 reference information model, and HL7 interface engine ver. 2.4 was used as the transmission protocol. The study is significant in two respects. First, an expansion and redefinition process was conducted, founded on the HL7 clinical document architecture and reference information model, to apply international standards to Korean contexts. Second, we propose a next-generation web-based hospital information system built on the clinical document architecture. In conclusion, work on the clinical document architecture will encompass an electronic health record (EHR) and a clinical data repository (CDR) and will make possible medical information sharing among various healthcare institutions.
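
A bare-bones sketch of emitting a CDA-style document fragment with ElementTree follows. A real HL7 CDA discharge summary carries far more required structure and metadata than shown; the patient identifier here is a made-up example.

```python
# Sketch only: a minimal CDA-style fragment, not a conformant HL7 CDA document.
import xml.etree.ElementTree as ET

doc = ET.Element("ClinicalDocument", xmlns="urn:hl7-org:v3")
ET.SubElement(doc, "title").text = "Discharge Summary"
patient_role = ET.SubElement(ET.SubElement(doc, "recordTarget"), "patientRole")
ET.SubElement(patient_role, "id", extension="PT-0001")   # illustrative patient id
print(ET.tostring(doc, encoding="unicode"))
```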

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for information extraction and derive a confidence score. 3) Based on the predicate feature, extract the information in the proper sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline, the proposed system is confirmed to show a higher performance index. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query, yielding a robust model that maintains high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology extracted information effectively from various document types compared to the baseline model, whereas previous research performs poorly when the document type differs from the training data. In addition, this study prevents unnecessary extraction attempts on documents that do not contain the answer, through the step that predicts the suitability of documents and sentences before extraction is performed; it is meaningful that precision can thereby be maintained even in the actual web environment. The information extraction problem for knowledge base expansion targets unstructured documents on the real web, so there is no guarantee that a document contains the correct answer, and previous machine reading comprehension studies show low precision in this setting because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting the suitability of documents and sentences for extraction is meaningful in that it helps maintain extraction performance in the real web environment. The limitations of this study and future research directions are as follows. First, there is a data preprocessing issue: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, so extraction results can be degraded when morphological analysis is not performed properly, and an improved morphological analyzer is needed. Second, there is the problem of entity ambiguity: the information extraction system of this study cannot distinguish identical names with different referents, so if several people with the same name appear in the news, the system may not extract information about the intended query; future research needs measures to identify the intended person among namesakes. Third, there is the issue of the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker and built an evaluation data set of 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each contains the correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, but this is a costly, manual activity; future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents so that results can be evaluated more objectively.
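
The bi-directional LSTM-CRF itself is too large to sketch here; the snippet below only illustrates the BIO-style target labels that such a sequence tagger is trained to predict for an answer span. The tokens and span are invented for the example.

```python
# Sketch: BIO labels marking an answer span for sequence tagging (illustrative data).
def bio_labels(tokens: list[str], start: int, length: int) -> list[str]:
    labels = ["O"] * len(tokens)
    labels[start] = "B-ANS"
    for i in range(start + 1, start + length):
        labels[i] = "I-ANS"
    return labels

tokens = ["BTS", "debuted", "in", "2013", "."]
print(list(zip(tokens, bio_labels(tokens, 3, 1))))  # "2013" tagged as the answer span
```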