• Title/Summary/Keyword: Web Crawling

A proposal on a proactive crawling approach with analysis of state-of-the-art web crawling algorithms (최신 웹 크롤링 알고리즘 분석 및 선제적인 크롤링 기법 제안)

  • Na, Chul-Won;On, Byung-Won
    • Journal of Internet Computing and Services / v.20 no.3 / pp.43-59 / 2019
  • Today, with the spread of smartphones and the growth of social networking services, structured and unstructured big data are accumulating exponentially. Analyzed well, these data yield useful information for predicting the future, but large amounts must first be collected, and the web is the repository where most of them are stored. Because the web is so large, pages carrying unneeded information are at least as common as pages carrying useful information, so it has become important to collect efficiently: filter out the unnecessary data and gather only the useful data. Web crawlers cannot download every page because of constraints such as network bandwidth, operational time, and data storage, which is why a crawler should avoid visiting pages irrelevant to its goal and download the important pages as early as possible. This paper seeks to help resolve these issues. First, we introduce basic web crawling algorithms, describing and comparing the time complexity and the pros and cons of each. Next, we introduce state-of-the-art web crawling algorithms that improve on the shortcomings of the basic ones. Finally, since recent research trends show active work on special-purpose crawling algorithms, such as those that collect sentiment words, we introduce one of them, a sentiment-aware web crawling technique that is a proactive crawling technique. The results showed that the larger the data, the higher the performance and the greater the space savings. (A generic best-first sketch follows below.)
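
The abstract describes proactive, purpose-driven crawling only in outline, so the following is a minimal best-first crawler sketch, not the paper's algorithm: a priority queue orders the frontier so pages expected to contain lexicon words are fetched first. SENTIMENT_WORDS and the relevance function are hypothetical stand-ins for the paper's sentiment scoring.

```python
import heapq
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Hypothetical lexicon standing in for the paper's sentiment-word list.
SENTIMENT_WORDS = {"good", "bad", "excellent", "terrible", "love", "hate"}

def relevance(text):
    """Score a page by how many lexicon words appear in it."""
    return sum(1 for w in text.lower().split() if w in SENTIMENT_WORDS)

def best_first_crawl(seeds, max_pages=50):
    """Visit the most promising URLs first instead of plain FIFO order."""
    frontier = [(0, url) for url in seeds]   # (negated priority, url)
    heapq.heapify(frontier)
    seen, collected = set(seeds), []
    while frontier and len(collected) < max_pages:
        _, url = heapq.heappop(frontier)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        score = relevance(soup.get_text(" "))
        collected.append((url, score))
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if link.startswith("http") and link not in seen:
                seen.add(link)
                # A child inherits its parent's score as a cheap priority guess.
                heapq.heappush(frontier, (-score, link))
    return collected
```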

A Method of Efficient Web Crawling Using URL Pattern Scripts (URL 패턴 스크립트를 이용한 효율적인 웹문서 수집 방안)

  • Chang, Moon-Soo;Jung, June-Young
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.6 / pp.849-854 / 2007
  • It is difficult to collect only the target documents from the innumerable documents on the Web. One solution to the problem is to select target documents from a Web site that serves many documents in the target domain. In this paper, we propose an intelligent crawling method that collects the needed documents based on URL pattern scripts defined in XML. The proposed crawling method applies efficiently to sites that serve uniformly structured, database-backed information. Using this method, we collected 50 thousand Web documents. (A sketch of such pattern filtering follows.)
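
The paper defines its pattern scripts in XML but the schema is not reproduced in the abstract, so the sketch below shows one plausible shape: an XML script of allow/deny regular expressions deciding which URLs to collect. PATTERN_SCRIPT, load_rules, and should_collect are all hypothetical names.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical pattern script; the paper's actual schema may differ.
PATTERN_SCRIPT = r"""
<patterns>
  <allow>https://example\.com/board/view\?id=\d+</allow>
  <deny>.*\.(jpg|png|css|js)$</deny>
</patterns>
"""

def load_rules(xml_text):
    root = ET.fromstring(xml_text)
    allow = [re.compile(e.text) for e in root.findall("allow")]
    deny = [re.compile(e.text) for e in root.findall("deny")]
    return allow, deny

def should_collect(url, allow, deny):
    """Collect a URL only if it matches an allow pattern and no deny pattern."""
    if any(p.search(url) for p in deny):
        return False
    return any(p.fullmatch(url) for p in allow)

allow, deny = load_rules(PATTERN_SCRIPT)
print(should_collect("https://example.com/board/view?id=42", allow, deny))  # True
```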

Enhancing Similar Business Group Recommendation through Derivative Criteria and Web Crawling

  • Min Jeong LEE;In Seop NA
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2809-2821 / 2023
  • Effective recommendation of similar business groups is a critical factor in obtaining market information for companies. In this study, we propose a novel method for enhancing similar business group recommendation by incorporating derivative criteria and web crawling. We use employment announcements, employment incentives, and corporate vocational training information to derive additional criteria for similar business group selection. Web crawling is employed to collect data related to the derived criteria from the 'credit jobs' and 'worknet' sites. We compare the efficiency of different datasets and machine learning methods, including XGBoost, LGBM, AdaBoost, Linear Regression, K-NN, and SVM. The proposed model extracts derived features that reflect the financial and scale characteristics of a company, which are then incorporated into a new set of recommendation criteria, and similar business groups are selected using a Euclidean distance-based model (sketched below). Our experimental results show that the proposed method improves the accuracy of similar business group recommendation. Overall, this study demonstrates the potential of incorporating derivative criteria and web crawling to recommend similar business groups and obtain market information more efficiently.
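
As a rough illustration of the Euclidean distance-based selection step only, the sketch below standardizes a hypothetical feature matrix (columns loosely modeled on the abstract's criteria: job postings, incentives, training counts) and returns the nearest companies. It is not the paper's full pipeline.

```python
import numpy as np

# Hypothetical feature matrix: one row per company, columns such as
# job-posting counts, incentive amounts, and vocational-training counts.
features = np.array([
    [120, 3.5, 40],
    [ 15, 1.0,  2],
    [110, 3.0, 35],
    [ 20, 1.2,  5],
], dtype=float)

def top_k_similar(X, query, k=2):
    """Rank companies by Euclidean distance to the query row."""
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # standardize scales
    dist = np.linalg.norm(Z - Z[query], axis=1)
    order = np.argsort(dist)
    return [int(i) for i in order if i != query][:k]

print(top_k_similar(features, query=0))  # company 2 ranks as most similar to 0
```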

Comparison and Application of Dynamic and Static Crawling for Extracting Product Data from Web Pages (웹페이지에서의 상품 데이터 추출을 위한 동적, 정적 크롤링 비교 및 활용)

  • Sang-Hyuk Kim;Jeong-Hoon Kim;Seung-Dae Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1277-1284 / 2023
  • In this paper, a web page was created that makes it easy for consumers to find the event products currently offered at convenience stores. During development, the two crawling methods for extracting event-product data, static crawling and dynamic crawling, were compared and both were used. Static crawling collects data present in a page's static HTML, while dynamic crawling collects data from pages generated dynamically by the site. By comparing the two, we studied which method is more effective for extracting event-product data; the final web page was built with the more effective static crawling, with 1+1 and 2+1 products categorized and a search function added. (Both approaches are sketched below.)
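
The contrast between the two methods can be made concrete as follows: static crawling parses the raw HTML with requests and BeautifulSoup, while dynamic crawling lets Selenium render JavaScript first. The URL and the li.product selector are hypothetical, and the dynamic variant assumes a local Chrome/ChromeDriver install.

```python
import requests
from bs4 import BeautifulSoup
from selenium import webdriver

URL = "https://example.com/events"   # hypothetical event-product page

def static_crawl(url):
    """Static crawling: the product list is already in the initial HTML."""
    html = requests.get(url, timeout=5).text
    soup = BeautifulSoup(html, "html.parser")
    return [li.get_text(strip=True) for li in soup.select("li.product")]

def dynamic_crawl(url):
    """Dynamic crawling: let the browser run JavaScript, then parse the DOM."""
    driver = webdriver.Chrome()      # assumes a local browser driver
    try:
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, "html.parser")
        return [li.get_text(strip=True) for li in soup.select("li.product")]
    finally:
        driver.quit()
```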

Web System Development base on Java Web Crawling of the Spring Framework (Spring Framework를 활용한 Java Web Crawling 웹 시스템 개발)

  • Cho, Kyu Cheol;Ha, Jin Uk;Lyu, Sung Min
    • Proceedings of the Korean Society of Computer Information Conference / 2017.07a / pp.241-244 / 2017
  • Internet users turn to social networking services to obtain the information they want and to communicate with others. SNS platforms grow their user base by offering features differentiated per user, yet those users face increasing inconvenience in the user interface as they are exposed to indiscriminate content. To increase user convenience and strengthen content accessibility for SNS users, this study developed a system using Java web crawling that automatically collects and presents only the topics the user is interested in. (A filtering sketch follows.)
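
The paper's system is built in Java on the Spring Framework; purely for consistency with the other sketches in this listing, the interest-filtering idea is shown below in Python. The INTERESTS topics and the div.post selector are hypothetical.

```python
from bs4 import BeautifulSoup

INTERESTS = {"travel", "music"}  # hypothetical user-registered topics

def collect_interesting_posts(html, interests=INTERESTS):
    """Parse a feed page and keep only posts mentioning a registered interest."""
    soup = BeautifulSoup(html, "html.parser")
    posts = [d.get_text(" ", strip=True) for d in soup.select("div.post")]
    return [p for p in posts if any(t in p.lower() for t in interests)]

feed = '<div class="post">Cheap travel tips</div><div class="post">Stock news</div>'
print(collect_interesting_posts(feed))  # ['Cheap travel tips']
```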

An Implementation and Performance Evaluation of Fast Web Crawler with Python

  • Kim, Cheong Ghil
    • Journal of the Semiconductor & Display Technology / v.18 no.3 / pp.140-143 / 2019
  • The Internet has expanded constantly and greatly, so that we now have a vast number of web pages undergoing dynamic change. In particular, the fast development of wireless communication technology and the wide spread of various smart devices let information be created and changed anywhere, at any time. In this situation, web crawling, also known as web scraping, an organized, automated computer process for systematically navigating web pages and automatically searching and indexing information, is inevitably used broadly in many fields today. This paper implements a prototype web crawler with Python and improves its execution speed using threads on a multicore CPU. The implementation was confirmed by crawling reference web sites, and the performance improvement was demonstrated by evaluating execution speed under different thread configurations on the multicore CPU. (A thread-pool sketch follows.)
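
In the same spirit as the paper (though not its code), a thread pool speeds up crawling because page fetches are I/O-bound, so threads overlap network waits even under Python's GIL. The sketch below times out per request and records failures instead of raising; the worker count mirrors the paper's varied thread configurations.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(url):
    """Download one page; errors just yield a failed record."""
    try:
        r = requests.get(url, timeout=5)
        return url, r.status_code, len(r.content)
    except requests.RequestException:
        return url, None, 0

def crawl(urls, workers=8):
    # Fetching is I/O-bound, so threads overlap network waits despite the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))

if __name__ == "__main__":
    print(crawl(["https://example.com", "https://example.org"], workers=4))
```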

Effective Web Crawling Orderings from Graph Search Techniques (그래프 탐색 기법을 이용한 효율적인 웹 크롤링 방법들)

  • Kim, Jin-Il;Kwon, Yoo-Jin;Kim, Jin-Wook;Kim, Sung-Ryul;Park, Kun-Soo
    • Journal of KIISE: Computer Systems and Theory / v.37 no.1 / pp.27-34 / 2010
  • Web crawlers are fundamental programs that iteratively download web pages by following the links of web pages, starting from a small set of initial URLs. Several web crawling orderings have previously been proposed to crawl popular web pages in preference to other pages, but some graph search techniques whose characteristics and efficient implementations have been studied in the graph theory community have not yet been applied to web crawling orderings. In this paper we consider various graph search techniques, including lexicographic breadth-first search, lexicographic depth-first search, and maximum cardinality search as well as the well-known breadth-first search and depth-first search, and choose effective web crawling orderings that have linear time complexity and crawl popular pages early. In particular, for maximum cardinality search and lexicographic breadth-first search, whose implementations are non-trivial, we propose linear-time web crawling orderings by applying the partition refinement method. Experimental results show that maximum cardinality search has desirable properties in both time complexity and the quality of crawled pages. (A naive sketch follows.)
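
For readers unfamiliar with maximum cardinality search (MCS), the sketch below is a deliberately naive quadratic version: the next page to crawl is the unvisited one with the most already-crawled neighbors, with links treated as undirected. The paper's contribution is a linear-time ordering via partition refinement, which this sketch does not attempt.

```python
def mcs_order(graph, start):
    """Maximum cardinality search: repeatedly visit the unvisited page with
    the most already-visited neighbors (naive O(n^2) version for clarity)."""
    weight = {v: 0 for v in graph}
    weight[start] = 1            # bias so the crawl begins at the seed
    visited, order = set(), []
    while len(order) < len(graph):
        v = max((u for u in graph if u not in visited), key=weight.get)
        visited.add(v)
        order.append(v)
        for w in graph[v]:       # links treated as undirected neighbors
            if w not in visited:
                weight[w] += 1
    return order

web = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}
print(mcs_order(web, "A"))       # e.g. ['A', 'B', 'C', 'D']
```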

Design and Implementation of Event-driven Real-time Web Crawler to Maintain Reliability (신뢰성 유지를 위한 이벤트 기반 실시간 웹크롤러의 설계 및 구현)

  • Ahn, Yong-Hak
    • Journal of the Korea Convergence Society / v.13 no.4 / pp.1-6 / 2022
  • Real-time systems that use web crawling data must provide users with data identical to the data at the remote source. To do this, a web crawler repeatedly sends HTTP (HyperText Transfer Protocol) requests to the remote server to check whether the remote data have changed. This polling loads the network of both the crawling server and the remote server, causing problems such as excessive traffic. To solve this problem, this paper proposes an event-based real-time web crawling technique that reduces network overload while preserving reliability, i.e., keeping the crawling server's data consistent with data from multiple remote sources. The proposed method performs the crawling process in response to events that request unit data and list data. The results show that the proposed method reduces the network traffic overhead of existing web crawlers and secures data reliability. Future research should examine the convergence of event-based and time-based crawling. (A conditional-fetch sketch follows.)
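
As one concrete way to cut polling traffic (an illustration, not the paper's mechanism), an event-triggered fetch can pair with HTTP conditional requests so that an unchanged remote page costs a 304 response instead of a full download:

```python
import requests

_cache = {}  # url -> (etag, body)

def fetch_on_event(url):
    """Fetch only when a user event requests this data; a conditional GET
    makes an unchanged remote page cost a 304, not a full transfer."""
    etag, body = _cache.get(url, (None, None))
    headers = {"If-None-Match": etag} if etag else {}
    r = requests.get(url, headers=headers, timeout=5)
    if r.status_code == 304:
        return body              # remote data unchanged; cached copy is current
    _cache[url] = (r.headers.get("ETag"), r.text)
    return r.text
```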

WCTT: Web Crawling System based on HTML Document Formalization (WCTT: HTML 문서 정형화 기반 웹 크롤링 시스템)

  • Kim, Jin-Hwan;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.495-502 / 2022
  • Web crawlers, which today are mainly used to collect text on the web, are difficult to maintain and extend because researchers must implement different collection logic for each collection channel after analyzing the tags and styles of its HTML documents. To solve this problem, a web crawler should be able to collect text by formalizing HTML documents into the same structure. In this paper, we designed and implemented WCTT (Web Crawling system based on Tag path and Text appearance frequency), a web crawling system that collects text with a single collection logic by formalizing HTML documents based on tag paths and text appearance frequency. Because WCTT collects text with the same logic for all collection channels, it is easy to maintain and to extend with new channels. It also provides a preprocessing function that removes stopwords and extracts only nouns, for uses such as keyword network analysis. (A tag-path sketch follows.)
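
The tag-path idea can be sketched as follows: group every text node by the path of tag names above it and keep the path that accumulates the most text. The function names are hypothetical, and this ignores the paper's text-appearance-frequency refinements.

```python
from collections import defaultdict

from bs4 import BeautifulSoup

def tag_path(text_node):
    """Path of tag names from the root down to the text node's parent."""
    names = [p.name for p in text_node.parents
             if p.name and p.name != "[document]"]
    return "/".join(reversed(names))

def main_text_by_tag_path(html):
    soup = BeautifulSoup(html, "html.parser")
    buckets = defaultdict(list)
    for t in soup.find_all(string=True):
        s = t.strip()
        if s and t.parent.name not in ("script", "style"):
            buckets[tag_path(t)].append(s)
    # Keep the path that accumulates the most text as the body content.
    best = max(buckets, key=lambda p: sum(len(s) for s in buckets[p]))
    return " ".join(buckets[best])

html = ('<html><body><div id="nav"><a>Home</a></div>'
        '<div id="art"><p>First paragraph of the article.</p>'
        '<p>Second, much longer paragraph of article text.</p></div></body></html>')
print(main_text_by_tag_path(html))   # prints the two article paragraphs
```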

Refresh Cycle Optimization for Web Crawlers (웹크롤러의 수집주기 최적화)

  • Cho, Wan-Sup;Lee, Jeong-Eun;Choi, Chi-Hwan
    • The Journal of the Korea Contents Association / v.13 no.6 / pp.30-39 / 2013
  • A web crawler should keep data fresh with minimal server overhead across the large amounts of data on web sites, and that overhead rises rapidly as data volumes explode in the big data era. The amount of web information is growing quickly with advanced wireless networks and the emergence of diverse smart devices, and information is continuously produced and updated anywhere, at any time, through easy-to-use web platforms and smart devices. How frequently updated web data should be refreshed during collection and integration has therefore become a pressing issue. In this paper, we propose dynamic web-data crawling methods that include sensitive detection of web site changes and dynamic retrieval of web pages from target sites based on historical update patterns. We implemented a Java-based web crawling application and compared the efficiency of conventional static approaches against our dynamic one. Our experiments showed 46.2% overhead savings while keeping data fresher than the static crawling methods. (An adaptive-interval sketch follows.)
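
A minimal version of update-pattern-driven refresh, assuming hypothetical halving/back-off factors rather than the paper's learned schedule: shrink a page's recrawl interval when its content hash changes and grow it otherwise.

```python
import hashlib

def page_changed(prev_hash, body):
    """Cheap change detector: compare content hashes between visits."""
    h = hashlib.sha256(body.encode()).hexdigest()
    return h != prev_hash, h

def refresh_interval(seconds, changed, lo=600, hi=86400):
    """Halve the recrawl interval after a change, back off 1.5x otherwise
    (factors and bounds are hypothetical, not the paper's values)."""
    seconds = seconds / 2 if changed else seconds * 1.5
    return max(lo, min(hi, seconds))

interval, prev = 3600.0, None
changed, prev = page_changed(prev, "<html>fresh content</html>")
interval = refresh_interval(interval, changed)   # -> 1800.0 after a change
```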