• Title/Summary/Keyword: Web Crawler

Preliminary Performance Evaluation of a Web Crawler with Dynamic Scheduling Support (동적 스케줄링 기반 웹 크롤러의 성능분석)

  • Lee, Yong-Doo;Chae, Soo-Hwan
    • Journal of Korea Society of Industrial Information Systems / v.8 no.3 / pp.12-18 / 2003
  • A web crawler is widely used in a variety of Internet applications such as search engines. As the Internet continues to grow, high-performance web crawlers become more essential. Crawl scheduling, which manages the allocation of web pages to each process for downloading, is one of the important issues. In this paper, we identify the important and challenging issues in crawl scheduling. To address them, we propose a dynamic crawl scheduling framework and, subsequently, a system architecture for a web crawler based on the framework. This paper presents the architecture of a web crawler with dynamic scheduling support, together with the results of a preliminary performance evaluation of the proposed architecture.
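
A minimal Python sketch of the dynamic scheduling idea (hypothetical code, not the authors' system; the batch size and largest-backlog heuristic are illustrative assumptions): a scheduler keeps per-host URL queues, and each download process pulls its next batch only when it is idle, so the allocation adapts to each process's actual download speed.

```python
# Hypothetical sketch of dynamic crawl scheduling (not the paper's code).
# The coordinator holds per-host URL queues; idle workers pull the next batch
# on demand, so faster workers automatically receive more pages.
from collections import defaultdict, deque
from urllib.parse import urlparse

class DynamicScheduler:
    def __init__(self, batch_size=10):
        self.queues = defaultdict(deque)   # host -> pending URLs
        self.batch_size = batch_size

    def add_url(self, url):
        self.queues[urlparse(url).netloc].append(url)

    def next_batch(self):
        """Called by an idle worker: return up to batch_size URLs from the
        host with the largest backlog (a real crawler would also enforce
        per-host politeness delays)."""
        if not self.queues:
            return []
        host = max(self.queues, key=lambda h: len(self.queues[h]))
        queue = self.queues[host]
        batch = [queue.popleft() for _ in range(min(self.batch_size, len(queue)))]
        if not queue:
            del self.queues[host]
        return batch

scheduler = DynamicScheduler()
scheduler.add_url("http://example.com/a")
scheduler.add_url("http://example.com/b")
print(scheduler.next_batch())  # ['http://example.com/a', 'http://example.com/b']
```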

Analysis of Roller Load by Boom Length and Rotation Angle of a Crawler Crane (크롤러 크레인의 붐 길이 선회각도에 의한 롤러 하중 해석)

  • Lee, Deukki;Kang, Jungho;Kim, Taehyun;Oh, Chulkyu;Kim, Jongmin;Kim, Jongmyeong
    • Journal of the Korean Society of Manufacturing Process Engineers / v.20 no.3 / pp.83-91 / 2021
  • A crawler crane, which consists of a lattice boom, a driving system, and a movable vehicle, is widely used on construction sites. The crawler crane often traverses rough terrain at these sites; as a result, an overload limiter needs to be installed on the crane to prevent it from overturning and breaking. In this paper, we studied how the distributed load on the rollers, which come in direct contact with the grounded track shoes, changes with boom length and rotation angle. First, we developed a 3D model of a crawler crane and meshed it for finite element analysis. Then, we performed the analysis to derive the loads on the rollers. Finally, we graphed and examined the roller load distribution for each case of boom length and rotation angle. By monitoring the loads on the rollers of a crawler crane, the potential for the crane to overturn can be predicted before it happens.
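
As a rough illustration of why the roller loads depend on boom length and rotation angle (a textbook rigid-body approximation, not the paper's finite element model; the symbols below are assumptions for this sketch): the boom produces an overturning moment about the track centerline, which shifts the bearing load linearly along the track contact length.

```latex
% Illustrative rigid-body approximation (not the paper's FEA model).
% W: total weight on the tracks, L: track contact length,
% F_boom: suspended load plus boom weight, L_boom: boom length,
% theta: boom elevation angle, phi: rotation (slewing) angle.
\[
  M \;=\; F_{\mathrm{boom}}\, L_{\mathrm{boom}} \cos\theta \cos\varphi ,
  \qquad
  q(x) \;=\; \frac{W}{L} \;+\; \frac{12\,M}{L^{3}}\,x ,
  \qquad -\tfrac{L}{2} \le x \le \tfrac{L}{2}
\]
```

A longer boom or a different rotation angle changes the moment M, and with it the load carried by each roller along the track; the finite element analysis in the paper refines this picture with the actual structure.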

Design and Implementation of a High Performance Web Crawler (고성능 웹크롤러의 설계 및 구현)

  • 권성호;이영탁;김영준;이용두
    • Journal of Korea Society of Industrial Information Systems / v.8 no.4 / pp.64-72 / 2003
  • A web crawler is an important Internet software technology used in a variety of Internet applications, including search engines. As the Internet continues to grow, implementations of high-performance web crawlers are urgently demanded. In this paper, we study how to support dynamic scheduling in a multiprocess-based web crawler. For high performance, web crawlers are usually implemented as multiprocess systems, in which crawl scheduling, the allocation of web pages to each process for downloading, is one of the important issues. We identify the important and challenging issues in crawl scheduling, and to address them, we propose a dynamic crawl scheduling framework and, subsequently, a system architecture for a web crawler with dynamic crawl scheduling support. We also analyze the behavior of the web crawler and, based on the analysis results, suggest directions for the design of high-performance web crawlers.
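
To make the multiprocess setting concrete, here is a minimal, hypothetical Python skeleton (not the authors' implementation): download processes pull URLs from a shared queue, the simplest form of dynamic allocation, since each process takes new work only when it finishes the old.

```python
# Hypothetical multiprocess crawler skeleton (not the paper's system).
# Idle processes pull the next URL from a shared queue, a simple dynamic
# allocation, in contrast to statically splitting the URL list in advance.
import multiprocessing as mp
from urllib.request import urlopen

def worker(url_queue, results):
    while True:
        url = url_queue.get()
        if url is None:                    # sentinel: no more work
            break
        try:
            with urlopen(url, timeout=10) as resp:
                results.put((url, len(resp.read())))
        except OSError as exc:
            results.put((url, f"error: {exc}"))

if __name__ == "__main__":
    urls = ["http://example.com/", "http://example.org/"]
    url_queue, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(url_queue, results))
             for _ in range(2)]
    for p in procs:
        p.start()
    for url in urls:
        url_queue.put(url)
    for _ in procs:                        # one sentinel per process
        url_queue.put(None)
    for p in procs:
        p.join()
    while not results.empty():
        print(results.get())
```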

Design and Implementation of a Web Crawler System for Collection of Structured and Unstructured Data (정형 및 비정형 데이터 수집을 위한 웹 크롤러 시스템 설계 및 구현)

  • Bae, Seong Won;Lee, Hyun Dong;Cho, DaeSoo
    • Journal of Korea Multimedia Society / v.21 no.2 / pp.199-209 / 2018
  • Recently, services provided to consumers are increasingly combined with big data, as in low-priced shopping, customized advertisement, and product recommendation. With the growing importance of big data, the web crawler that collects data from the web has also become important. However, existing web crawlers have two problems. First, if a URL is hidden behind a link, the page cannot be reached through that URL. Second, they inefficiently fetch more data than the user wants. Therefore, in this paper, we use Casper.js, which can control the DOM in a headless browser, to generate DOM events and reach the URLs behind hidden links. We also propose an intelligent web crawler system that lets users define steps to fine-tune the collection of both structured and unstructured data, so that only the desired data is retrieved. Finally, we show the superiority of the proposed system through a performance evaluation comparing an existing web crawler with the proposed one.
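
The paper drives Casper.js for the hidden-link problem; as an analogous sketch in Python (using Selenium in the same headless-browser role, with a placeholder URL and selector), firing DOM events to expose hidden URLs looks roughly like this:

```python
# Analogous Python/Selenium sketch (the paper itself uses Casper.js).
# Clicking elements fires the DOM events that reveal URLs hidden behind links;
# the URL and CSS selector below are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")     # no visible browser window
driver = webdriver.Chrome(options=options)

driver.get("http://example.com/list")      # placeholder URL
found = set()
for elem in driver.find_elements(By.CSS_SELECTOR, "a, [onclick]"):
    href = elem.get_attribute("href")
    if href:                               # ordinary link: URL is visible
        found.add(href)
    else:                                  # hidden link: fire the click event
        try:
            elem.click()
            found.add(driver.current_url)
            driver.back()
        except Exception:
            pass                           # stale or unclickable element: skip

driver.quit()
print(sorted(found))
```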

Web Crawler Service Implementation for Information Retrieval based on Big Data Analysis (빅데이터 분석 기반의 정보 검색을 위한 웹 크롤러 서비스 구현)

  • Kim, Hye-Suk;Han, Na;Lim, Suk-Ja
    • Journal of Digital Contents Society / v.18 no.5 / pp.933-942 / 2017
  • In this paper, we propose a web crawler service for efficiently collecting information about external activities, competitions, and scholarships for college students and job seekers. The proposed web crawler service uses Jsoup tree analysis and JSON-format data transmission to avoid duplicate crawling while crawling at high speed. After collecting relevant information for 24 hours, we confirmed that the web crawler service ran with an accuracy of 100%. We expect that the service can be applied to various web sites in the future.
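
Jsoup is a Java HTML parser; as a language-neutral illustration of the two ideas named above (duplicate avoidance plus JSON-format transmission), here is a small Python sketch using only the standard library, with hypothetical field names:

```python
# Sketch of duplicate-free collection with JSON-format output (illustrative
# only; the paper does the parsing with Jsoup on the Java side).
import hashlib
import json

seen = set()                        # fingerprints of already-collected items

def collect(title, url, body):
    """Return a JSON record for a new item, or None for a duplicate."""
    fingerprint = hashlib.sha256((url + body).encode("utf-8")).hexdigest()
    if fingerprint in seen:         # already crawled: skip re-transmission
        return None
    seen.add(fingerprint)
    return json.dumps({"title": title, "url": url, "body": body},
                      ensure_ascii=False)

print(collect("Scholarship notice", "http://example.com/1", "..."))
print(collect("Scholarship notice", "http://example.com/1", "..."))  # None
```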

Wrapper-based Economy Data Collection System Design And Implementation (래퍼 기반 경제 데이터 수집 시스템 설계 및 구현)

  • Piao, Zhegao;Gu, Yeong Hyeon;Yoo, Seong Joon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.227-230 / 2015
  • To analyze and predict economic trends, it is necessary to collect particular economic news and stock data. A typical web crawler analyzes page content, collects documents, and extracts URLs automatically, while other forms of crawlers collect only documents on a particular topic. To collect economic news from a particular web site, we need a crawler that directly analyzes the site's structure and gathers data from it, that is, a wrapper-based web crawler. In this paper, we design and implement a wrapper-based crawler that collects data for a big-data-based economic news analysis system. With the crawler, we collect stock data and sales data on the U.S. auto market since 2000, as well as U.S. and South Korean economic news. The crawler determines the update frequency of each site and updates the data periodically. We remove duplicate data and noise data, such as advertising and public relations material, and build a structured data set for subsequent analysis.
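
A wrapper here is a site-specific set of extraction rules; a minimal Python sketch of the idea (the site names and patterns are made up, not the authors' rules) looks like this:

```python
# Minimal wrapper-based extraction sketch (hypothetical site names and
# patterns; a real wrapper encodes the actual structure of each target site).
import re

WRAPPERS = {
    "example-news.com": {           # one rule set per target site
        "headline": re.compile(r"<h1[^>]*>(.*?)</h1>", re.S),
        "date":     re.compile(r"<time[^>]*>(.*?)</time>", re.S),
    },
    "example-stocks.com": {
        "ticker":   re.compile(r'data-ticker="([^"]+)"'),
        "price":    re.compile(r'data-price="([\d.]+)"'),
    },
}

def apply_wrapper(site, html):
    """Extract the fields defined by the site's wrapper into a flat record."""
    record = {}
    for field, pattern in WRAPPERS[site].items():
        match = pattern.search(html)
        record[field] = match.group(1).strip() if match else None
    return record

html = "<h1>Auto sales rise</h1><time>2015-05-01</time>"
print(apply_wrapper("example-news.com", html))
# {'headline': 'Auto sales rise', 'date': '2015-05-01'}
```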

Implementation of a Parallel Web Crawler for the Odysseus Large-Scale Search Engine (오디세우스 대용량 검색 엔진을 위한 병렬 웹 크롤러의 구현)

  • Shin, Eun-Jeong;Kim, Yi-Reun;Heo, Jun-Seok;Whang, Kyu-Young
    • Journal of KIISE: Computing Practices and Letters / v.14 no.6 / pp.567-581 / 2008
  • As the size of the web grows explosively, search engines are becoming increasingly important as the primary means of retrieving information from the Internet. A search engine periodically downloads web pages and stores them in its database to provide users with up-to-date search results. The web crawler is a program that downloads and stores web pages for this purpose. A large-scale search engine uses a parallel web crawler to retrieve the collection of web pages while maximizing the download rate. However, the architecture and experimental analysis of parallel web crawlers have not been fully discussed in the literature. In this paper, we propose an architecture for a parallel web crawler and discuss implementation issues in detail. The proposed parallel web crawler is based on a coordinator/agent model that uses multiple machines to download web pages in parallel: multiple agent machines collect web pages, and a single coordinator machine manages them. The parallel web crawler consists of three components: a crawling module for collecting web pages, a converting module for transforming the web pages into a database-friendly format, and a ranking module for rating web pages based on their relative importance. We explain each component and its implementation in detail. Finally, we conduct extensive experiments to analyze the effectiveness of the parallel web crawler. The experimental results clarify the merit of our architecture: the proposed parallel web crawler is scalable in both the number of web pages to crawl and the number of machines used.
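
The coordinator/agent split is commonly realized by partitioning the URL space so that every agent owns a disjoint set of hosts; a minimal sketch of such partitioning in Python (an illustration of the general model, not the Odysseus crawler's code) follows:

```python
# Sketch of coordinator-side URL partitioning for a coordinator/agent crawler
# (illustrative; not the Odysseus implementation). Hashing the host keeps all
# pages of one site on one agent, which also centralizes politeness control.
import hashlib
from urllib.parse import urlparse

N_AGENTS = 3

def agent_for(url):
    """Map a URL to the agent responsible for its host (deterministic
    across machines, unlike Python's built-in hash())."""
    host = urlparse(url).netloc
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % N_AGENTS

for url in ["http://a.example.com/1", "http://a.example.com/2",
            "http://b.example.org/"]:
    print(url, "-> agent", agent_for(url))
# Both a.example.com pages land on the same agent.
```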

Studies on Design Theories of the Rubber Crawler for a Farm Machinery

  • Matsuo, T.;Inaba, S.;Sakai, J.;Inoue, E.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1993.10a / pp.1202-1211 / 1993
  • In this research, the authors propose equations to calculate the velocities, accelerations, and penetration angles of the locus of lug motion for the rubber crawler mechanism. With given values of the factors, these equations describe the motion characteristics of all points or faces of a lug in the front half of a rubber crawler. We then consider the reaction force from the soil on the lug by computing the removed soil area, in order to understand the relation between the crawler lug and the soil when estimating trafficability.
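
As an illustration of the kind of lug-locus equations involved (a standard trochoid model under simplifying assumptions; the paper's exact formulation may differ): for a lug at radius r on a sprocket rotating at angular velocity omega while the vehicle moves forward at speed v,

```latex
% Illustrative trochoid model of a lug's locus (simplified assumption,
% not necessarily the authors' equations).
% v: vehicle speed, omega: sprocket angular velocity, r: lug radius.
\[
  x(t) = vt - r\sin\omega t , \qquad y(t) = r\cos\omega t
\]
\[
  \dot{x} = v - r\omega\cos\omega t , \quad \dot{y} = -r\omega\sin\omega t ,
  \qquad
  \ddot{x} = r\omega^{2}\sin\omega t , \quad \ddot{y} = -r\omega^{2}\cos\omega t
\]
\[
  \alpha \;=\; \arctan\frac{\dot{y}}{\dot{x}}
  \quad \text{(direction of the velocity, i.e. the penetration angle at soil entry)}
\]
```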

An Automatic and Scalable Application Crawler for Large-Scale Mobile Internet Content Retrieval

  • Huang, Mingyi;Lyu, Yongqiang;Yin, Hao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.4856-4872 / 2018
  • The mobile internet has grown ubiquitous across the globe with the widespread use of smart devices. However, the designs of modern mobile operating systems and their applications limit content retrieval from mobile applications. The mobile internet is not as accessible as the traditional web: it has more man-made restrictions and lacks a unified approach for crawling and content retrieval. In this study, we propose an automatic and scalable mobile application content crawler, which recognizes the interaction paths of mobile applications, represents them as interaction graphs, and automatically collects content according to the graphs in a parallel manner. The crawler was verified by retrieving content from 50 non-game applications from the Google Play Store on the Android platform. The experiment showed the efficiency and scalability potential of our crawler for large-scale mobile internet content retrieval.
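
To make the interaction-graph idea concrete: app screens become nodes, UI actions become edges, and content collection reduces to a graph traversal. Below is a minimal, hypothetical Python sketch of that traversal; the graph and action names are placeholders for what a real UI-automation backend would discover.

```python
# Sketch of crawling an app's interaction graph by breadth-first search
# (hypothetical; a real crawler drives the UI through an automation framework
# and discovers the graph as it goes).
from collections import deque

# Placeholder interaction graph: screen -> {action: next_screen}
APP_GRAPH = {
    "home":     {"tap_menu": "menu", "tap_item": "detail"},
    "menu":     {"tap_settings": "settings"},
    "detail":   {},
    "settings": {},
}

def crawl(start):
    """Visit every reachable screen once, collecting its content."""
    visited, order = set(), []
    queue = deque([start])
    while queue:
        screen = queue.popleft()
        if screen in visited:
            continue
        visited.add(screen)
        order.append(screen)               # stand-in for content capture
        for nxt in APP_GRAPH[screen].values():
            if nxt not in visited:
                queue.append(nxt)
    return order

print(crawl("home"))  # ['home', 'menu', 'detail', 'settings']
```

Disjoint regions of such a graph can be handed to separate devices or emulators, which is what makes the parallel collection mentioned in the abstract possible.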

An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo Hae-Sung;Choi Young-Soo;Choi Kyung-Hee;Jung Gi-Hyun;Noh Sang-Uk
    • Journal of Internet Computing and Services / v.7 no.3 / pp.155-167 / 2006
  • It is desirable for users surfing the Internet to find Web pages as closely related to their interests as possible. Toward this end, this paper presents a topic-specific Web crawler that computes the degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related web pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested our topic-specific crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy using the set of rules compiled by CN2 was the best among those of the CN2, C4.5, and backpropagation learning algorithms. Second, we measured the classification efficiency to determine the best threshold value affecting the degree of relevance. In the third experiment, the consistency of our topic-specific crawler was measured in terms of the number of resulting URLs that overlapped across different starting URLs. The experimental results imply that our topic-specific crawler is fairly consistent, regardless of the randomly chosen starting URLs.
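
A minimal sketch of the kind of relevance scoring named above (a TF-IDF-style weighting; the paper's exact combination of term frequency/document frequency, entropy, and CN2 rules may differ):

```python
# Sketch of degree-of-relevance scoring from term statistics (illustrative;
# the paper additionally uses entropy and rules compiled by CN2).
import math

def relevance(page_tokens, topic_terms, doc_freq, n_docs):
    """Average TF-IDF weight of the topic terms within one page."""
    score = 0.0
    for term in topic_terms:
        tf = page_tokens.count(term) / max(len(page_tokens), 1)
        idf = math.log(n_docs / (1 + doc_freq.get(term, 0)))
        score += tf * idf
    return score / len(topic_terms)

page = "web crawler downloads web pages for the search engine".split()
topic = ["crawler", "search"]
df = {"crawler": 20, "search": 50}     # hypothetical corpus statistics

score = relevance(page, topic, df, n_docs=1000)
THRESHOLD = 0.05                       # follow a link only if relevant enough
print(round(score, 3), score > THRESHOLD)
```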
