• Title/Summary/Keyword: web crawler

Search results: 102

Preliminary Performance Evaluation of a Web Crawler with Dynamic Scheduling Support (동적 스케줄링 기반 웹 크롤러의 성능분석)

  • Lee, Yong-Doo;Chae, Soo-Hwan
    • Journal of Korea Society of Industrial Information Systems / v.8 no.3 / pp.12-18 / 2003
  • A web crawler is used widely in a variety of Internet applications such as search engines. As the Internet continues to grow, high-performance web crawlers become more essential. Crawl scheduling, which manages the allocation of web pages to each process for downloading, is one of the important issues. In this paper, we identify issues that are important and challenging in crawl scheduling. To address them, we propose a dynamic crawl scheduling framework and, subsequently, a system architecture for a web crawler subject to the framework. This paper presents the architecture of a web crawler with dynamic scheduling support, along with the results of a preliminary performance evaluation of the proposed crawler architecture.
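
The scheduling idea in this entry, allocating pages to crawl processes as they become available rather than splitting the URL list up front, can be pictured with a minimal Java sketch. The class and method names below are hypothetical, and a shared frontier queue stands in for the paper's scheduling framework; this is an illustration, not the authors' implementation.

```java
import java.util.List;
import java.util.concurrent.*;

// Minimal sketch: crawl workers pull URLs from a shared frontier queue, so a
// faster worker simply takes more work (dynamic scheduling) instead of
// receiving a fixed slice of the URL list in advance.
public class DynamicCrawlScheduler {
    private final BlockingQueue<String> frontier = new LinkedBlockingQueue<>();

    public void crawl(List<String> seedUrls, int workerCount) throws InterruptedException {
        frontier.addAll(seedUrls);
        ExecutorService workers = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            workers.submit(() -> {
                String url;
                try {
                    // poll() with a timeout lets idle workers exit once the frontier drains
                    while ((url = frontier.poll(2, TimeUnit.SECONDS)) != null) {
                        download(url); // placeholder for the actual fetch + parse step
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.HOURS);
    }

    private void download(String url) {
        System.out.println(Thread.currentThread().getName() + " fetched " + url);
    }

    public static void main(String[] args) throws InterruptedException {
        new DynamicCrawlScheduler().crawl(
                List.of("https://example.com/a", "https://example.com/b", "https://example.com/c"), 4);
    }
}
```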


Intelligent Web Crawler for Supporting Big Data Analysis Services (빅데이터 분석 서비스 지원을 위한 지능형 웹 크롤러)

  • Seo, Dongmin;Jung, Hanmin
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.575-584 / 2013
  • The data types used for big-data analysis are very diverse, including news, blogs, SNS posts, papers, patents, and sensor data. In particular, the use of web documents, which offer reliable data in real time, is increasing steadily, and web crawlers that collect web documents automatically have grown in importance because big data is used in many different fields and web data grows exponentially every year. However, existing web crawlers cannot collect all the web documents in a web site because they follow only the URLs contained in documents already collected from some web sites. In addition, existing web crawlers may re-collect documents that other crawlers have already gathered because information about the documents collected by each crawler is not shared efficiently between crawlers. Therefore, this paper proposes a distributed web crawler. To resolve these problems, the proposed web crawler collects web documents through each web site's RSS feed and the Google search API, and it provides fast crawling performance through a client-server model based on RMI and NIO that minimizes network traffic. Furthermore, it extracts the core content from a web document through a keyword-similarity comparison over the tags contained in the document. Finally, to verify the superiority of our web crawler, we compare it with existing web crawlers in various experiments.
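
The RSS-based URL collection step described above can be sketched with standard JDK XML parsing. The feed URL and class name below are placeholders, and a real collector would add error handling, politeness delays, and the Google search API path mentioned in the abstract.

```java
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal sketch: discover article URLs from a site's RSS feed instead of
// following only the links embedded in pages that were already crawled.
public class RssUrlCollector {
    public static List<String> collect(String feedUrl) throws Exception {
        Document feed = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(feedUrl); // RSS feeds are plain XML
        NodeList items = feed.getElementsByTagName("item");
        List<String> urls = new ArrayList<>();
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            NodeList links = item.getElementsByTagName("link");
            if (links.getLength() > 0) {
                urls.add(links.item(0).getTextContent().trim());
            }
        }
        return urls;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical feed URL; any standard RSS 2.0 feed has the same structure.
        collect("https://example.com/rss.xml").forEach(System.out::println);
    }
}
```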

Web Crawler Service Implementation for Information Retrieval based on Big Data Analysis (빅데이터 분석 기반의 정보 검색을 위한 웹 크롤러 서비스 구현)

  • Kim, Hye-Suk;Han, Na;Lim, Suk-Ja
    • Journal of Digital Contents Society / v.18 no.5 / pp.933-942 / 2017
  • In this paper, we propose a web crawler service for efficiently collecting information about external activities, competitions, and scholarships relevant to college students and job seekers. The proposed web crawler service uses Jsoup tree analysis and JSON-format data transmission to avoid duplicated crawling while crawling at high speed. After collecting the relevant information for 24 hours, we confirmed that the web crawler service runs with an accuracy of 100%. We expect that the service can be applied to various web sites in the future to further improve it.
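
A minimal sketch of the Jsoup-based crawling with duplicate avoidance mentioned in this abstract could look as follows, assuming the Jsoup library is on the classpath. The seed URL and page limit are placeholders, and this is an illustration rather than the authors' implementation.

```java
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Minimal sketch: a Jsoup-based crawl loop that keeps a visited-URL set so the
// same page is never fetched twice (duplicate-crawl avoidance).
public class JsoupCrawler {
    public static void crawl(String seedUrl, int maxPages) {
        Set<String> visited = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        frontier.add(seedUrl);
        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue; // already crawled, skip
            try {
                Document doc = Jsoup.connect(url).get();
                System.out.println(visited.size() + ": " + doc.title());
                for (Element link : doc.select("a[href]")) {
                    String next = link.attr("abs:href"); // absolute URL of the link
                    if (!next.isEmpty() && !visited.contains(next)) {
                        frontier.add(next);
                    }
                }
            } catch (IOException e) {
                System.err.println("fetch failed: " + url);
            }
        }
    }

    public static void main(String[] args) {
        crawl("https://example.com", 20);
    }
}
```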

A Web Crawler using Hyperlink Structure and Hypertext Categorization Method (Hyperlink구조와 Hypertext 분류방법을 이용한 Web Crawler)

  • Lee, Dong-Won;Hyun, Soon-J.
    • Proceedings of the Korea Information Processing Society Conference / 2002.04b / pp.1337-1340 / 2002
  • In web information retrieval, the web crawler plays a very important role in collecting web documents and building the index. However, because web documents are growing so rapidly, it is impossible for a web crawler to collect every document, and focused web crawlers, which collect documents only in a specific domain to increase retrieval accuracy, have been actively studied as a way to address this. In parallel, much research has used the link structure of web documents to find the important documents within a collection. However, existing work focuses only on the link structure of documents and requires knowledge of the connectivity of the entire hypertext. In this study, we present a method that uses the hyperlink structure together with a hypertext categorization method to decide which of the documents linked from a page are important, and we show that a web crawler using this method collects accurate documents in a specific domain.
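
The link-selection idea here, preferring links that a classifier judges relevant to the target domain, can be pictured as a priority frontier. The keyword-overlap scoring below is only a stand-in for the paper's hypertext categorization method, and all names are hypothetical.

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Set;

// Minimal sketch of a focused-crawl frontier: candidate links are scored by a
// simple relevance function (keyword overlap with anchor text, standing in for
// hypertext categorization) and the highest-scoring link is expanded first.
public class FocusedFrontier {
    record Candidate(String url, double score) {}

    private final PriorityQueue<Candidate> queue =
            new PriorityQueue<>(Comparator.comparingDouble(Candidate::score).reversed());
    private final Set<String> topicKeywords;

    public FocusedFrontier(Set<String> topicKeywords) {
        this.topicKeywords = topicKeywords;
    }

    // Score a link by how many topic keywords appear in its anchor text.
    public void offer(String url, String anchorText) {
        String lower = anchorText.toLowerCase();
        long hits = topicKeywords.stream().filter(lower::contains).count();
        queue.add(new Candidate(url, (double) hits / Math.max(1, topicKeywords.size())));
    }

    public String next() {
        Candidate c = queue.poll();
        return c == null ? null : c.url();
    }

    public static void main(String[] args) {
        FocusedFrontier f = new FocusedFrontier(Set.of("crawler", "search", "index"));
        f.offer("https://example.com/about", "About us");
        f.offer("https://example.com/crawler-design", "Designing a web crawler for search indexing");
        System.out.println(f.next()); // the crawler-related link comes out first
    }
}
```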


Design and Implementation of a Web Crawler System for Collection of Structured and Unstructured Data (정형 및 비정형 데이터 수집을 위한 웹 크롤러 시스템 설계 및 구현)

  • Bae, Seong Won;Lee, Hyun Dong;Cho, DaeSoo
    • Journal of Korea Multimedia Society / v.21 no.2 / pp.199-209 / 2018
  • Recently, services provided to consumers, such as low-priced shopping, customized advertisement, and product recommendation, are increasingly being combined with big data. With the growing importance of big data, the web crawler that collects data from the web has also become important. However, existing web crawlers have two problems. First, if a URL is hidden behind a link, the page cannot be reached through that URL. Second, they inefficiently fetch more data than the user wants. Therefore, in this paper, we use Casper.js, which can control the DOM in a headless browser, to generate DOM events and access the URLs behind hidden links. We also propose an intelligent web crawler system that lets users fine-tune, step by step, both structured and unstructured data so that only the desired data is retrieved. Finally, we show the superiority of the proposed crawler system through a performance comparison between an existing web crawler and the proposed one.
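
The paper drives hidden links with Casper.js; as a rough Java analogue only, a headless-browser step with Selenium WebDriver could fire the DOM events and then read the exposed links. The page URL and CSS selectors below are hypothetical, and this is not the authors' implementation.

```java
import java.util.ArrayList;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

// Minimal sketch: use a headless browser to fire click events on controls whose
// target URL is not present as a plain <a href>, then read the links that the
// page's JavaScript reveals. (The paper uses Casper.js; Selenium is used here
// only as an illustrative Java equivalent.)
public class HiddenLinkCollector {
    public static List<String> collect(String pageUrl, String clickSelector) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless=new");
        WebDriver driver = new ChromeDriver(options);
        List<String> urls = new ArrayList<>();
        try {
            driver.get(pageUrl);
            // Fire the DOM event that exposes the hidden navigation links.
            for (WebElement control : driver.findElements(By.cssSelector(clickSelector))) {
                control.click();
            }
            for (WebElement link : driver.findElements(By.cssSelector("a[href]"))) {
                urls.add(link.getAttribute("href"));
            }
        } finally {
            driver.quit();
        }
        return urls;
    }

    public static void main(String[] args) {
        // Hypothetical page and selector for a "load more" control.
        collect("https://example.com/list", "button.load-more").forEach(System.out::println);
    }
}
```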

Implementation of a Parallel Web Crawler for the Odysseus Large-Scale Search Engine (오디세우스 대용량 검색 엔진을 위한 병렬 웹 크롤러의 구현)

  • Shin, Eun-Jeong;Kim, Yi-Reun;Heo, Jun-Seok;Whang, Kyu-Young
    • Journal of KIISE: Computing Practices and Letters / v.14 no.6 / pp.567-581 / 2008
  • As the size of the web grows explosively, search engines are becoming increasingly important as the primary means of retrieving information from the Internet. A search engine periodically downloads web pages and stores them in its database to provide users with up-to-date search results, and the web crawler is the program that downloads and stores web pages for this purpose. A large-scale search engine uses a parallel web crawler to retrieve the collection of web pages while maximizing the download rate. However, the architecture and experimental analysis of parallel web crawlers have not been fully discussed in the literature. In this paper, we propose an architecture for a parallel web crawler and discuss implementation issues in detail. The proposed crawler is based on the coordinator/agent model, using multiple machines to download web pages in parallel: multiple agent machines collect web pages and a single coordinator machine manages them. The parallel web crawler consists of three components: a crawling module for collecting web pages, a converting module for transforming the web pages into a database-friendly format, and a ranking module for rating web pages based on their relative importance. We explain each component and its implementation methods in detail. Finally, we conduct extensive experiments to analyze the effectiveness of the parallel web crawler; the results show the merit of our architecture in that the proposed crawler scales both in the number of web pages to crawl and in the number of machines used.
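
One simple way to picture the coordinator side of a coordinator/agent crawler is host-based partitioning, so that no two agents download pages from the same site. The sketch below is an illustrative assumption, not the Odysseus crawler's actual assignment logic.

```java
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of a coordinator that partitions URLs by host, so each agent
// machine crawls a disjoint set of sites and no page is downloaded twice.
public class CrawlCoordinator {
    private final int agentCount;

    public CrawlCoordinator(int agentCount) {
        this.agentCount = agentCount;
    }

    // Deterministically map a URL's host to one agent.
    public int assignAgent(String url) {
        String host = URI.create(url).getHost();
        return Math.floorMod(host.hashCode(), agentCount);
    }

    public Map<Integer, List<String>> partition(List<String> urls) {
        return urls.stream().collect(Collectors.groupingBy(this::assignAgent));
    }

    public static void main(String[] args) {
        CrawlCoordinator coordinator = new CrawlCoordinator(3);
        Map<Integer, List<String>> plan = coordinator.partition(List.of(
                "https://a.example.com/1", "https://a.example.com/2", "https://b.example.org/1"));
        plan.forEach((agent, list) -> System.out.println("agent " + agent + " -> " + list));
    }
}
```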

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.17-28 / 2017
  • Recently, research on data has been conducted actively because data has become more valuable. The web crawler, a program for data collection, has drawn attention because it can be applied in various fields. A web crawler can be defined as a tool that analyzes web pages and collects URLs by traversing web servers in an automated manner. For processing big data, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and have performance constraints. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays the information a user searches for by keyword from the data gathered by the crawler. If a search engine implements a Spark-based web crawler instead of a traditional MapReduce-based one, data collection becomes considerably faster.
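
The Spark-based collection idea can be sketched with the Spark Java API: seed URLs are parallelized into an RDD and fetched in memory across the cluster. The application name, seed list, and local master setting below are placeholders, and a production crawler would add politeness, retries, and persistence.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Minimal sketch: distribute page fetching over Spark instead of a
// MapReduce-style batch job. Each partition fetches its URLs and the
// per-URL summaries are collected back to the driver.
public class SparkCrawlJob {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("crawl-sketch").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        List<String> seeds = Arrays.asList(
                "https://example.com", "https://example.org", "https://example.net");
        List<String> report = sc.parallelize(seeds)
                .map(url -> {
                    HttpClient client = HttpClient.newHttpClient();
                    HttpResponse<String> resp = client.send(
                            HttpRequest.newBuilder(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.ofString());
                    return url + " -> " + resp.body().length() + " chars";
                })
                .collect();
        report.forEach(System.out::println);
        sc.stop();
    }
}
```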

Design and Implementation of a High Performance Web Crawler (고성능 웹크롤러의 설계 및 구현)

  • 권성호;이영탁;김영준;이용두
    • Journal of Korea Society of Industrial Information Systems / v.8 no.4 / pp.64-72 / 2003
  • A web crawler is an important Internet software technology used in a variety of Internet applications, including search engines. As the Internet continues to grow, high-performance web crawler implementations are urgently demanded. In this paper, we study how to support dynamic scheduling for a multiprocess-based web crawler. For high performance, web crawlers are usually implemented with multiple processes, and in such systems crawl scheduling, which manages the allocation of web pages to each process for downloading, is one of the important issues. We identify the issues that are important and challenging in crawl scheduling, and to address them we propose a dynamic crawl scheduling framework and, subsequently, a system architecture for a web crawler with dynamic crawl scheduling support. We also analyze the behavior of the web crawler and, based on the analysis results, suggest directions for the design of a high-performance web crawler.
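
One scheduling policy that fits the multiprocess setting described here, binding each host to a single crawl process and assigning new hosts to the least-loaded process, is sketched below. It is an illustrative policy with hypothetical names, not the scheduling algorithm proposed in the paper.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of one crawl-scheduling policy: each host is bound to one
// crawl process (so per-site politeness is easy to enforce), and a new host is
// bound to whichever process currently has the fewest pending pages.
public class LoadAwareScheduler {
    private final int[] pending;                       // pages queued per process
    private final Map<String, Integer> hostToProcess = new HashMap<>();

    public LoadAwareScheduler(int processCount) {
        this.pending = new int[processCount];
    }

    public int schedule(String url) {
        String host = URI.create(url).getHost();
        Integer process = hostToProcess.get(host);
        if (process == null) {
            process = leastLoaded();
            hostToProcess.put(host, process);
        }
        pending[process]++;
        return process;
    }

    public void markDownloaded(int process) {
        pending[process]--;
    }

    private int leastLoaded() {
        int best = 0;
        for (int i = 1; i < pending.length; i++) {
            if (pending[i] < pending[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        LoadAwareScheduler s = new LoadAwareScheduler(2);
        System.out.println(s.schedule("https://a.example.com/1")); // process 0
        System.out.println(s.schedule("https://b.example.org/1")); // process 1
        System.out.println(s.schedule("https://a.example.com/2")); // stays on process 0
    }
}
```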


Comparison of Web Crawler Performance for Web Record Management (원격수집 방식의 웹기록물 관리를 위한 웹수집기 성능 비교 연구)

  • Chang, Jinho;Kwon, Hyuksang;Lee, Kyumo;Choi, Dong Joon
    • The Korean Journal of Archival Studies / no.74 / pp.155-186 / 2022
  • As of 2022, the number of public-institution Internet sites registered on the 'Government 24' website (www.gov.kr) of the Ministry of the Interior and Safety is 17,000. Direct transfer of websites as records takes considerable human and material resources and time from both the records-producing institution and the records-management institution. In addition, it is practically difficult for records-management institutions to migrate and operate the various software and application technologies required to run each website. To overcome these practical limitations, a method of automatically collecting websites from a remote location using web crawler software is used both domestically and abroad. This study compared the performance of web crawlers needed to collect and manage public Internet websites remotely as records. The most suitable web crawler was selected through a step-by-step review of several web crawlers drawn from previous studies and other literature, and several public-agency websites were used to compare the actual performance of the crawlers during the evaluation. The study provides empirical and specific performance-comparison information for organizations that need to choose a web crawler.

Distributed Parallel Crawler Design and Implementation (분산형 병렬 크롤러 설계 및 구현)

  • Jang, Hyun Ho;Jeon, Kyung-Sik;Lee, HooKi
    • Convergence Security Journal / v.19 no.3 / pp.21-28 / 2019
  • As the number of websites managed by an organization grows, so does the number of web application servers and containers. When checking the web-service status of these servers and containers, it is very cumbersome for a person to access each physical server at a remote site through a terminal or other remote-access software and verify the service status by hand. Moreover, previous crawler research rarely addresses how the crawled data should be processed, and data loss can occur when the crawler writes its results to the database. In this paper, we propose a method for crawler-based management of web application servers that stores the inspection data without losing any of it.
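
The loss-avoidance idea, keeping inspection records until the database write succeeds, can be pictured with a simple retry buffer. The store() call and record fields below are hypothetical stand-ins for the paper's persistence layer, not its actual mechanism.

```java
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch: inspection results pass through an in-memory retry buffer so
// a transient database failure does not drop the record.
public class InspectionWriter {
    record Inspection(String serverUrl, int httpStatus, Instant checkedAt) {}

    private final Queue<Inspection> retryBuffer = new ArrayDeque<>();

    public void save(Inspection result) {
        if (!store(result)) {
            retryBuffer.add(result);   // keep the record instead of losing it
        }
        flushRetries();
    }

    private void flushRetries() {
        int n = retryBuffer.size();
        for (int i = 0; i < n; i++) {
            Inspection pending = retryBuffer.poll();
            if (!store(pending)) {
                retryBuffer.add(pending); // still failing, try again later
            }
        }
    }

    // Hypothetical persistence call; returns false when the database write fails.
    private boolean store(Inspection result) {
        System.out.println("stored " + result);
        return true;
    }
}
```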