• Title/Summary/Keyword: Document Search

Search Results: 384, Processing Time: 0.023 seconds

An Efficient Method of Document Store and Version Management for XML Repository System (XML 저장 관리 시스템에서 효율적인 버전 관리 및 문서 저장 방안)

  • Jung, Hyun-Joo;Kim, Kweon-Yang;Choi, Jae-Hyuk
    • The Journal of Korean Association of Computer Education
    • /
    • v.6 no.4
    • /
    • pp.11-21
    • /
    • 2003
  • In a rapidly changing, information-oriented society, it is essential to manage massive amounts of document information as electronic files. For these electronic documents, it is equally important to preserve all information without loss: it should be possible to trace previous contents as well as the most recent updates by managing changes under version control. XML is well suited to this purpose. In this paper, we save document storage space by storing, for each version, only the updated contents rather than the whole document. When managing the update history by version, we designed the system to omit the JOIN operation if the document is under a certain size. We thereby implemented a new XML document repository system that enables fast search and efficient XML document storage by reducing the performance deterioration caused by JOIN operations.

  • PDF
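The delta-only storage idea in the abstract above can be sketched as follows. This is a minimal illustration using Python's `difflib`, under the assumption that each revision is kept as the set of changed line regions against the previous version; the paper's actual repository, its XML schema, and its size-dependent JOIN omission are not reproduced here.

```python
import difflib

class VersionStore:
    """Keep a base version plus forward deltas (changed regions only)."""

    def __init__(self, base_text):
        self.base = base_text.splitlines(keepends=True)
        self.deltas = []  # one delta (list of non-equal opcodes) per revision

    def commit(self, new_text):
        """Store only the regions that changed since the latest version."""
        new = new_text.splitlines(keepends=True)
        prev = self._lines(len(self.deltas))
        sm = difflib.SequenceMatcher(a=prev, b=new, autojunk=False)
        delta = [(tag, i1, i2, new[j1:j2])
                 for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]
        self.deltas.append(delta)

    def _lines(self, version):
        lines = list(self.base)
        for delta in self.deltas[:version]:
            out, pos = [], 0
            for tag, i1, i2, repl in delta:
                out.extend(lines[pos:i1])  # unchanged prefix
                out.extend(repl)           # inserted / replacement lines
                pos = i2                   # skip deleted / replaced lines
            out.extend(lines[pos:])
            lines = out
        return lines

    def get(self, version=None):
        """Reconstruct any version; None means the latest."""
        if version is None:
            version = len(self.deltas)
        return "".join(self._lines(version))
```

Any historical version remains reachable by replaying deltas forward from the base, so the full document never needs to be stored twice.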

A Two Phases Plagiarism Detection System for the Newspaper Articles by using a Web Search and a Document Similarity Estimation (웹 검색과 문서 유사도를 활용한 2 단계 신문 기사 표절 탐지 시스템)

  • Cho, Jung-Hyun;Jung, Hyun-Ki;Kim, Yu-Seop
    • The KIPS Transactions:PartB
    • /
    • v.16B no.2
    • /
    • pp.181-194
    • /
    • 2009
  • With increased interest in document copyright, much research on document plagiarism has been carried out. The plagiarism of newspaper articles has attracted particular attention because articles with high commercial value are copied very frequently. Most existing plagiarism research is hard to apply to newspaper articles because articles have strong real-time characteristics, so to detect plagiarism, human inspectors would have to manually read every one of the thousands of articles published by hundreds of newspaper companies. In this paper, we first sort out the articles with a high probability of having been copied by utilizing the OpenAPI modules provided by web search companies such as Naver and Daum. We then measure the document similarity between the selected articles and the original article and let the system decide whether each article was plagiarized. In our experiments, we used Yonhap News articles as the originals and had the system select suspicious articles from all articles returned by the Naver and Daum news search services.
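The second phase described above, deciding plagiarism by document similarity, can be sketched with a plain TF-IDF cosine measure. This is a generic illustration, not the paper's exact similarity function; the first phase (candidate selection via the Naver/Daum OpenAPIs) is assumed to have already produced the candidate list.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> list of sparse TF-IDF dicts."""
    df = Counter()
    for d in docs:
        df.update(set(d))
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d)
        # smoothed IDF so terms in every document get weight ~0
        vecs.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def detect(original, candidates, threshold=0.8):
    """Return indices of candidates similar enough to be flagged."""
    vecs = tfidf_vectors([original] + candidates)
    return [i for i, v in enumerate(vecs[1:])
            if cosine(vecs[0], v) >= threshold]
```

The threshold is a tunable parameter; a production system would calibrate it against labeled plagiarism cases.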

Rank-Size Distribution with Web Document Frequency of City Name : Case study with U.S incorporated places of 100,000 or more population (인터넷 문서빈도를 통해 본 도시순위규모에 관한 연구 -미국 10만 이상의 인구를 갖는 도시들을 사례로-)

  • Hong, Il-Young
    • Journal of the Korean association of regional geographers
    • /
    • v.13 no.3
    • /
    • pp.290-300
    • /
    • 2007
  • In this study, the web document frequency of city place names is analyzed and used as the dataset for rank-size analysis. The search keywords are compared in terms of their spatial meaning, and different domain corpora are applied; the acquired search results are then used for further analysis. First, rank-size analysis is applied to compare the results for population and document frequency. Second, correlation analysis reveals significant changes when the spatial criteria for the search keywords are broadened; among the corpora, COM, NET, and ORG show the higher coefficient values. Lastly, cluster analysis is applied to classify the cities by their similarities and differences. These analyses play a significant role in representing the rank-size distribution of city names as reflected in web documents in the information society.

  • PDF
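Rank-size analysis of the kind described above is conventionally done by fitting a power law: sizes (population or document frequency) are sorted, and the slope of log(size) against log(rank) is estimated. A minimal sketch, assuming ordinary least squares on the log-log values; the study's own estimation details are not given in the abstract.

```python
import math

def rank_size_exponent(counts):
    """Fit log(size) = a - q*log(rank) by least squares; return q.
    q is approximately 1 for a classic Zipf rank-size distribution."""
    sizes = sorted(counts, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return -sxy / sxx  # negate the slope so q is positive
```

Comparing the exponent fitted on populations with the one fitted on web document frequencies is one way to quantify how faithfully the web reflects the urban hierarchy.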

Page Group Search Model : A New Internet Search Model for Illegal and Harmful Content (페이지 그룹 검색 그룹 모델 : 음란성 유해 정보 색출 시스템을 위한 인터넷 정보 검색 모델)

  • Yuk, Hyeon-Gyu;Yu, Byeong-Jeon;Park, Myeong-Sun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.12
    • /
    • pp.1516-1528
    • /
    • 1999
  • Illegal and harmful content on the World Wide Web, especially pornographic content, causes social problems in many countries. At present, the only effective way to protect minors from such content is filtering software, which blocks a user's access based on a list of harmful-content addresses. Building a large blocking list requires a harmful-content search system: a special-purpose internet search system that automatically locates harmful content on the web. Because its output is consumed by filtering software rather than by people, such a system must, unlike a general internet search system, maintain very high search accuracy and generate a search list that the filtering software can manage easily. We found that current internet search models fail to satisfy these requirements because of a mistaken assumption about what constitutes a "document". In this paper, we propose a new definition of a document on the web, namely a set of pages made for one subject, and on that basis a new internet search model that satisfies the requirements: the Page Group Search Model. We present a Group Construction algorithm and a Group Evaluation algorithm, and through experiments and analysis we show that the proposed model achieves higher accuracy and faster search than existing internet search models and generates blocking lists that are easier for filtering software to manage.
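The core idea above, treating a set of pages made for one subject as a single document, can be illustrated with a toy grouping heuristic. This is only a sketch under the assumption that pages sharing a host and top-level path segment belong together; the paper's actual Group Construction algorithm is not specified in the abstract.

```python
from collections import defaultdict
from urllib.parse import urlparse

def group_pages(urls):
    """Group crawled page URLs into candidate page groups, keyed by
    (host, first path segment). One blocking-list entry per group."""
    groups = defaultdict(list)
    for url in urls:
        p = urlparse(url)
        path = p.path.strip("/")
        top = path.split("/")[0] if path else ""
        groups[(p.netloc, top)].append(url)
    return dict(groups)
```

Emitting one entry per group rather than per page is what keeps the blocking list small and easy for the filtering software to manage.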

Analysis of Preference Criteria for Personalized Web Search (개인화된 웹 검색을 위한 선호 기준 분석)

  • Lee, Soo-Jung
    • The Journal of Korean Association of Computer Education
    • /
    • v.13 no.1
    • /
    • pp.45-52
    • /
    • 2010
  • With the rapid increase in the number of web documents, the problem of information overload in Internet search is growing serious. To improve web search results, previous studies have employed user queries/preferred words and the number of links in web documents. In this study, the performance of search results exploiting these two criteria is examined and other preference criteria for web documents are analyzed. Experimental results show that personalized web search using queries and preferred words performs up to 1.7 times better than the current search engine, and that search using the number of links performs up to 1.3 times better. Although users' first preference criterion for web documents is the content of the document, readability and the images in the document are also given large weight. Therefore, the performance of web search personalization algorithms would improve greatly if they incorporated objective data reflecting each user's characteristics in addition to queries and preferred words.

  • PDF
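The two criteria examined above can be combined into a single re-ranking score. The following is a minimal sketch, assuming a linear combination of normalized preferred-word frequency and link count with hypothetical weights; the study's actual scoring function is not given in the abstract.

```python
def rerank(results, preferred_words, w_pref=0.7, w_links=0.3):
    """results: list of (doc_id, text, n_links) tuples.
    Score each document by its preferred-word density plus its
    normalized link count, then sort best-first."""
    max_links = max((r[2] for r in results), default=1) or 1

    def score(r):
        words = r[1].lower().split()
        pref = sum(words.count(w) for w in preferred_words) / (len(words) or 1)
        return w_pref * pref + w_links * r[2] / max_links

    return sorted(results, key=score, reverse=True)
```

Per the study's findings, further terms for readability or image presence could be added to the score once such features are extracted from each document.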

Legal search method using S-BERT

  • Park, Gil-sik;Kim, Jun-tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.11
    • /
    • pp.57-66
    • /
    • 2022
  • In this paper, we propose a legal document search method based on the Sentence-BERT model. Members of the general public who want to use a legal search service have difficulty finding relevant precedents because they lack an understanding of legal terms and structures. In addition, existing keyword- and text-mining-based legal search methods are limited in the quality of their results for two reasons: they lack information on the context of the judgment, and they fail to distinguish homonyms and polysemes. As a result, the accuracy of legal document search results is often unsatisfactory. This paper therefore aims to improve the efficacy of the general public's legal searches over the Supreme Court precedent and Legal Aid Counseling case databases. The Sentence-BERT model embeds contextual information from precedents and counseling data, which better preserves the meaning of phrases and sentences. Our initial research shows that the Sentence-BERT search method yields higher accuracy than Doc2Vec or TF-IDF search methods.
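An embedding-based search of this kind reduces to ranking cases by vector similarity with the query. The sketch below uses a toy bag-of-words embedding as a stand-in so it runs anywhere; in the paper's setting the `embed` function would instead call a Sentence-BERT encoder (e.g. via the `sentence-transformers` library), which is what supplies the contextual information the abstract describes.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (placeholder). A real system would
    return a dense Sentence-BERT vector for the text instead."""
    return Counter(text.lower().split())

def cos(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def search(query, cases, top_k=3):
    """Rank precedent/counseling texts by similarity to the query."""
    qv = embed(query)
    ranked = sorted(cases, key=lambda c: cos(qv, embed(c)), reverse=True)
    return ranked[:top_k]
```

With a real encoder, case embeddings would be computed once and stored, so each query costs one encoding plus a nearest-neighbor lookup.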

Known-Item Retrieval Performance of a PICO-based Medical Question Answering Engine

  • Vong, Wan-Tze;Then, Patrick Hang Hui
    • Asia pacific journal of information systems
    • /
    • v.25 no.4
    • /
    • pp.686-711
    • /
    • 2015
  • The performance of a novel medical question-answering engine called CliniCluster and of existing search engines such as CQA-1.0, Google, and Google Scholar was evaluated using known-item searching. A known item here is a document that has been critically appraised as highly relevant to a therapy question. Results show that, using CliniCluster, known items were retrieved on average at rank 2 (MRR@10 ≈ 0.50), and most of the known items could be identified from the top-10 document lists. In response to ill-defined questions, the known items were ranked lower by CliniCluster and CQA-1.0, whereas for Google and Google Scholar no significant difference in ranking was found between well- and ill-defined questions. Less than 40% of the known items could be identified from the top-10 documents retrieved by CQA-1.0, Google, and Google Scholar. An analysis of the top-ranked documents by strength of evidence revealed that CliniCluster outperformed the other search engines by returning a higher number of recent publications with the strongest study designs. In conclusion, the overall results support the use of CliniCluster for answering therapy questions, as it ranks highly relevant documents in the top positions of the search results.
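The MRR@10 figure reported above is the standard mean reciprocal rank cut off at the top 10 results: a known item found at rank 2 for every query would give exactly 0.50. A small sketch of the metric:

```python
def mrr_at_k(rankings, known_items, k=10):
    """rankings: one ranked list of doc ids per query;
    known_items: the single relevant doc id for each query.
    A query contributes 1/rank if its known item appears in the
    top k, otherwise 0; the result is averaged over all queries."""
    total = 0.0
    for ranked, target in zip(rankings, known_items):
        top = ranked[:k]
        if target in top:
            total += 1.0 / (top.index(target) + 1)
    return total / len(rankings)
```

Because misses score zero, MRR@10 jointly rewards both retrieving the known item at all and placing it near the top.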

An Implementation of XML document searching system based on Structure and Semantics Similarity (구조와 내용 유사도에 기반한 XML 웹 문서 검색시스템 구축)

  • Park Uchang;Seo Yeojin
    • Journal of Internet Computing and Services
    • /
    • v.6 no.2
    • /
    • pp.99-115
    • /
    • 2005
  • Extensible Markup Language (XML) is an Internet standard used to express and convert data. To find the necessary information in XML documents, a search system for XML documents is needed. In this research, we developed a search system that can find documents matching the structure and content of a given XML document, making the best use of XML's structure. The search metrics take account of similarity in tag names, tag values, and the structure of tags. After a search, the system displays the ranked results in order of aggregate similarity. Three query methods are provided: conventional keyword search; search with tag names and their values; and search with XML documents. These three methods let users choose whichever best suits their preference, increasing the usefulness of the system.

  • PDF
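A similarity measure combining tag structure and tag values, as described above, can be sketched by comparing the sets of tag paths and of (path, value) pairs between two documents. This is a minimal illustration with Jaccard overlap and hypothetical equal weights; the paper's actual metrics and aggregation are not specified in the abstract.

```python
import xml.etree.ElementTree as ET

def paths_and_values(xml_text):
    """Collect every tag path and every (path, text value) pair."""
    root = ET.fromstring(xml_text)
    paths, values = set(), set()

    def walk(node, prefix):
        path = prefix + "/" + node.tag
        paths.add(path)
        if node.text and node.text.strip():
            values.add((path, node.text.strip()))
        for child in node:
            walk(child, path)

    walk(root, "")
    return paths, values

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def xml_similarity(x, y, w_struct=0.5, w_value=0.5):
    """Aggregate similarity = weighted structure + value overlap."""
    px, vx = paths_and_values(x)
    py, vy = paths_and_values(y)
    return w_struct * jaccard(px, py) + w_value * jaccard(vx, vy)
```

Ranking candidate documents by this aggregate score reproduces the "search with an XML document" query mode in miniature.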

A Research on User′s Query Processing in Search Engine for Ocean using the Association Rules (연관 규칙 탐사 기법을 이용한 해양 전문 검색 엔진에서의 질의어 처리에 관한 연구)

  • 하창승;윤병수;류길수
    • Proceedings of the Korea Inteligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.266-272
    • /
    • 2002
  • Recently, a wide variety of information suppliers have been providing information via the WWW, so the need for search engines has grown. However, the efficiency of most search engines is comparatively low because they use simple pattern matching between the user's query and web documents, and the results are even worse for queries in specialized expert fields. A specialized search engine returns specialized information matching each user's search goal, and developing such engines is a trend in many countries; in America, for example, there are sites that search only recently updated headline news, or only federal and government law, and so on. However, most such engines do not satisfy users' needs. This paper proposes a specialized search engine for ocean information that processes users' ocean-related queries using association rules from web data mining. By raising recall on users' queries, the specialized search engine provides more information related to the ocean.

  • PDF
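Association-rule mining over past queries, as used above to raise recall, can be sketched as follows: co-occurring query terms with sufficient support and confidence become expansion rules for later queries. The support/confidence thresholds and the session data here are illustrative assumptions; the paper's domain-specific rule base is not described in the abstract.

```python
from collections import Counter
from itertools import combinations

def mine_rules(sessions, min_support=2, min_conf=0.6):
    """Mine term -> term association rules from past query sessions."""
    item_count, pair_count = Counter(), Counter()
    for s in sessions:
        terms = set(s)
        item_count.update(terms)
        pair_count.update(combinations(sorted(terms), 2))
    rules = {}
    for (a, b), n in pair_count.items():
        if n < min_support:
            continue  # pair too rare to trust
        if n / item_count[a] >= min_conf:
            rules.setdefault(a, set()).add(b)  # a => b
        if n / item_count[b] >= min_conf:
            rules.setdefault(b, set()).add(a)  # b => a
    return rules

def expand_query(query_terms, rules):
    """Add every term implied by an association rule to the query."""
    expanded = set(query_terms)
    for t in query_terms:
        expanded |= rules.get(t, set())
    return expanded
```

Searching with the expanded term set retrieves ocean documents that match the implied terms even when the user's original query omitted them, which is how recall improves.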

A performance improvement methodology of web document clustering using FDC-TCT (FDC-TCT를 이용한 웹 문서 클러스터링 성능 개선 기법)

  • Ko, Suc-Bum;Youn, Sung-Dae
    • The KIPS Transactions:PartD
    • /
    • v.12D no.4 s.100
    • /
    • pp.637-646
    • /
    • 2005
  • Various problems arise when applying classification or clustering algorithms to the documents returned as web search results for a given keyword, where post-processing or classification is required. Among these, two problems are severe: the first is the need for an expert's help to categorize the documents, and the second is the long processing time that document classification takes. We therefore propose a new web document clustering method that uses a Transitive Closure Tree (TCT) to dramatically decrease the number of document-similarity calculations, speeding up processing without losing precision. We compare the effectiveness of the proposed method with existing algorithms and present the experimental results.
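Clustering by the transitive closure of a similarity relation, the idea underlying the TCT approach above, can be sketched with a union-find structure: any two documents whose similarity passes a threshold are merged, and clusters are the resulting connected components. This naive sketch still compares all pairs; the paper's contribution is precisely to prune those comparisons via the Transitive Closure Tree, which is not reproduced here.

```python
def transitive_closure_clusters(docs, sim, threshold=0.5):
    """Cluster items under the transitive closure of the relation
    sim(a, b) >= threshold, using union-find with path compression."""
    parent = list(range(len(docs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if sim(docs[i], docs[j]) >= threshold:
                union(i, j)

    clusters = {}
    for i in range(len(docs)):
        clusters.setdefault(find(i), []).append(docs[i])
    return list(clusters.values())
```

Because membership propagates transitively, two documents can land in the same cluster without ever being directly compared as similar, which is also what makes skipping redundant similarity computations safe.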