• Title/Summary/Keyword: Crawler


Analysis of Roller Load by Boom Length and Rotation Angle of a Crawler Crane (크롤러 크레인의 붐 길이 선회각도에 의한 롤러 하중 해석)

  • Lee, Deukki; Kang, Jungho; Kim, Taehyun; Oh, Chulkyu; Kim, Jongmin; Kim, Jongmyeong
    • Journal of the Korean Society of Manufacturing Process Engineers / v.20 no.3 / pp.83-91 / 2021
  • A crawler crane, which consists of a lattice boom, a driving system, and a movable vehicle, is widely used on construction sites. Because the crane often traverses rough terrain at these sites, an overload limiter must be installed to keep it from overturning or breaking. In this paper, we study how the distributed load on the rollers, which are in direct contact with the grounded track shoes, changes with boom length and rotation angle. First, we developed a 3D model of a crawler crane and meshed it for finite element analysis. Then, we performed the analysis to derive the load on each roller. Finally, we graphed and examined the distributed roller load for each combination of boom length and rotation angle. By monitoring the load on the rollers of a crawler crane, the potential for the crane to overturn can be detected before it happens.

Intelligent Web Crawler for Supporting Big Data Analysis Services (빅데이터 분석 서비스 지원을 위한 지능형 웹 크롤러)

  • Seo, Dongmin; Jung, Hanmin
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.575-584 / 2013
  • Data types used for big-data analysis vary widely: news, blogs, SNS posts, papers, patents, sensor data, and so on. In particular, the use of web documents, which offer reliable data in real time, is increasing steadily, and web crawlers, which collect web documents automatically, have grown in importance as big data spreads across fields and web data grow exponentially every year. However, existing web crawlers cannot collect all of the documents on a web site, because they follow only the URLs found in documents already collected from certain sites. They can also re-collect documents already gathered by other crawlers, because information about which documents each crawler has collected is not managed efficiently between crawlers. This paper therefore proposes a distributed web crawler. To resolve these problems, the proposed crawler collects web documents through each site's RSS feeds and the Google search API, and it achieves fast crawling with a client-server model based on RMI and NIO that minimizes network traffic. Furthermore, it extracts the core content of a web document by comparing keyword similarity across the tags the document contains. Finally, to verify the superiority of our web crawler, we compare it with existing web crawlers in various experiments.
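The RSS-seeded collection described above is easy to illustrate; the sketch below (Python, with a hypothetical feed URL) enumerates a site's documents from its feed instead of depending on which URLs happen to appear in already-crawled pages. The paper's RMI/NIO client-server design and its Google search API path are not reproduced here.

```python
# Minimal sketch of RSS-seeded crawling; the feed URL is illustrative.
import urllib.request
import xml.etree.ElementTree as ET

def collect_from_rss(feed_url):
    """Fetch an RSS feed and return the URL of every listed item."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them all.
    return [item.findtext("link") for item in root.iter("item")]

if __name__ == "__main__":
    for url in collect_from_rss("https://example.com/rss.xml"):
        print(url)
```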

Implementation of a Parallel Web Crawler for the Odysseus Large-Scale Search Engine (오디세우스 대용량 검색 엔진을 위한 병렬 웹 크롤러의 구현)

  • Shin, Eun-Jeong; Kim, Yi-Reun; Heo, Jun-Seok; Whang, Kyu-Young
    • Journal of KIISE: Computing Practices and Letters / v.14 no.6 / pp.567-581 / 2008
  • As the size of the web grows explosively, search engines are becoming increasingly important as the primary means of retrieving information from the Internet. A search engine periodically downloads web pages and stores them in its database to provide users with up-to-date search results; the web crawler is the program that downloads and stores web pages for this purpose. A large-scale search engine uses a parallel web crawler to collect web pages while maximizing the download rate. However, the architecture and experimental analysis of parallel web crawlers have not been fully discussed in the literature. In this paper, we propose an architecture for a parallel web crawler and discuss implementation issues in detail. The proposed crawler is based on a coordinator/agent model that uses multiple machines to download web pages in parallel: multiple agent machines collect web pages, and a single coordinator machine manages them. The crawler consists of three components: a crawling module for collecting web pages, a converting module for transforming the pages into a database-friendly format, and a ranking module for rating pages by their relative importance. We explain each component and its implementation methods in detail. Finally, we conduct extensive experiments to analyze the effectiveness of the crawler. The results show the merit of our architecture: the proposed parallel web crawler scales with both the number of web pages to crawl and the number of machines used.
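The coordinator/agent split can be sketched in a single process: a coordinator partitions URLs across agents by host, and each agent downloads its share. This is a sketch under stated assumptions (hash-based partitioning, threads standing in for agent machines); the paper's converting and ranking modules are not shown.

```python
# Single-process sketch of the coordinator/agent model; threads stand in
# for the agent machines of the actual system.
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse

NUM_AGENTS = 4

def assign_agent(url):
    # Partition by host so a single agent owns all pages of one site,
    # which keeps politeness control local to that agent.
    return hash(urlparse(url).netloc) % NUM_AGENTS

def agent_download(urls):
    """One agent: fetch every URL assigned to it."""
    pages = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                pages.append((url, resp.read()))
        except OSError:
            pass  # skip unreachable pages
    return pages

def coordinator(seed_urls):
    """Coordinator: bucket URLs per agent, run agents, merge results."""
    buckets = [[] for _ in range(NUM_AGENTS)]
    for url in seed_urls:
        buckets[assign_agent(url)].append(url)
    with ThreadPoolExecutor(max_workers=NUM_AGENTS) as pool:
        results = pool.map(agent_download, buckets)
    return [page for agent_pages in results for page in agent_pages]
```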

Development of a Crawler Type Vehicle to Travel in Water Paddy Rice Field for Water-Dropwort Harvest

  • Jun, Hyeon-Jong; Kang, Tae-Gyoung; Choi, Yong; Choi, Il-Su; Choi, Duck-Kyu; Lee, Choung-Keun
    • Journal of Biosystems Engineering / v.38 no.4 / pp.240-247 / 2013
  • Purpose: This study was conducted to develop a rubber-crawler type vehicle as a traveling device for harvesting water-dropwort cultivated in flooded paddy fields in the winter season. Methods: As a preliminary test, a commercial rubber-crawler vehicle was used to investigate the suitability of rubber crawlers for the paddy field. Based on the results, a prototype traveling device with rubber crawlers for water-dropwort harvesting was designed with a 45° inclination at the front and rear ends of the crawler, assuming a basic water depth of 0.6 m in the paddy field. The device was fabricated and fitted with experimental harvesting test devices on the front of the prototype vehicle. The prototype crawler vehicle with the harvesting part measures 2,800 × 1,460 × 1,040 mm (L × W × H) and weighs 9.21 kN at maximum. Each crawler has a ground contact length of 900 mm, a width of 180 mm, and a height of 1,070 mm, and the center-to-center distance between the crawlers is 720 mm. The side-overturn angle of the prototype was 26.4°. Results: The driving performance of the prototype in the flooded paddy field was good in both the forward and reverse directions under the applied weights. At a water depth of 0.5 m, the drawbar pull and the maximum sinking depth of the prototype were 3.5 kN and 0.13 m, respectively, while the weight and bearing capacity of the prototype rubber crawler in the paddy field were 8.51 kN and 26.3 kN/m², respectively. Conclusions: The driving test of the prototype crawler in the paddy field at a water depth of 0.5 m was satisfactory. The prototype had sufficient drawbar pull and driving ability in the deep flooded paddy field.
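The reported bearing capacity is consistent with dividing the crawler weight by the total ground-contact area of the two crawlers; as a quick check (assuming the full contact patch of both crawlers carries the load):

```latex
p = \frac{W}{2\,L\,b}
  = \frac{8.51\ \text{kN}}{2 \times 0.9\ \text{m} \times 0.18\ \text{m}}
  \approx 26.3\ \text{kN/m}^2
```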

An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo, Hae-Sung; Choi, Young-Soo; Choi, Kyung-Hee; Jung, Gi-Hyun; Noh, Sang-Uk
    • Journal of Internet Computing and Services / v.7 no.3 / pp.155-167 / 2006
  • It is desirable for users surfing the Internet to find web pages as closely related to their interests as possible. Toward this end, this paper presents a topic-specific web crawler that computes the degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related web pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested our topic-specific crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy using the rule set compiled by CN2 was the best among those of the C4.5 and backpropagation learning algorithms. Second, we measured the classification efficiency to determine the best threshold value for the degree of relevance. In the third experiment, the consistency of the crawler was measured by the number of resulting URLs that overlapped when crawls started from different URLs. The experimental results imply that our topic-specific crawler is fairly consistent, regardless of the randomly chosen starting URLs.
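The term frequency/document frequency part of the relevance computation can be sketched as a simple averaged TF-IDF score over topic terms; the entropy term and the CN2-compiled rules from the paper are omitted, and the tokenizer and threshold below are assumptions.

```python
# Sketch of TF/DF-style relevance scoring for a fetched page.
import math
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def relevance(page_text, topic_terms, doc_freq, num_docs):
    """Average TF-IDF weight of the topic terms in this page."""
    tokens = tokenize(page_text)
    if not tokens:
        return 0.0
    score = 0.0
    for term in topic_terms:
        tf = tokens.count(term) / len(tokens)
        idf = math.log((1 + num_docs) / (1 + doc_freq.get(term, 0)))
        score += tf * idf
    return score / len(topic_terms)

# A crawler would follow a page's links only when the page scores above
# some tuned threshold, e.g.:
#   if relevance(text, ["crawler", "search"], df, n_docs) > 0.05: ...
```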


Wrapper-based Economy Data Collection System Design And Implementation (래퍼 기반 경제 데이터 수집 시스템 설계 및 구현)

  • Piao, Zhegao; Gu, Yeong Hyeon; Yoo, Seong Joon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.227-230 / 2015
  • To analyze and predict economic trends, it is necessary to collect specific economic news and stock data. A typical web crawler analyzes page content, collects documents, and extracts URLs automatically, while other kinds of crawlers collect only documents on a particular topic. To collect economic news from a particular web site, we need a crawler that directly analyzes the site's structure and gathers data from it; that is, a wrapper-based design is required. In this paper, we design and implement a wrapper-based crawler for an economic-news analysis system built on big data. With this crawler we collect stock data and sales data for the US auto market since 2000, together with US and South Korean economic news. The crawler determines the update frequency of each site and re-collects the data periodically. We remove duplicate data and noise data, such as advertising and public-relations material, and build a structured data set for subsequent analysis.
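The wrapper idea, in a minimal sketch: instead of generic link extraction, the crawler encodes the known structure of one page type and pulls fields out of it directly. The URL, CSS class names, and field names below are purely illustrative.

```python
# Sketch of a site-specific wrapper; the HTML structure is assumed.
import re
import urllib.request

ARTICLE_PATTERNS = {
    "title": re.compile(r'<h1 class="headline">(.*?)</h1>', re.S),
    "date":  re.compile(r'<span class="date">(.*?)</span>', re.S),
    "body":  re.compile(r'<div class="article-body">(.*?)</div>', re.S),
}

def extract_article(url):
    """Download one article page and map its known parts to fields."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    record = {}
    for field, pattern in ARTICLE_PATTERNS.items():
        match = pattern.search(html)
        record[field] = match.group(1).strip() if match else None
    return record
```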


ANALYTICAL SIMULATION OF TRAVEL RESISTANCE OF THE RUBBER CRAWLER SYSTEM FOR FARM MACHINERY

  • Inaba, S.; Inoue, E.; Hashiguchi, K.; Matsuo, T.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11b / pp.139-145 / 2000
  • The mechanism of the inner resistance in a rubber crawler system has been investigated with the aim of reducing the power requirement (Kitano et al. 1994). The rolling resistance of the track roller, one of the major inner resistances, was measured for seven different vertical loads. The rolling resistance changed periodically and could be classified into three types: for vertical loads below 500 N it was almost constant; for loads above 500 N its maximum value increased; and for loads above 1200 N, negative resistance appeared. An analytical simulation of the travel resistance was constructed from these experimental results and the static equilibrium equations derived from a three-dimensional mechanical model of the rubber crawler system. The simulation was able to evaluate the travel resistance caused by the rolling resistance of the track rollers, and the rolling resistance for each track-roller arrangement and the effect of the lug phase between the right and left rubber crawlers could be estimated quantitatively.


Distributed Parallel Crawler Design and Implementation (분산형 병렬 크롤러 설계 및 구현)

  • Jang, Hyun Ho; Jeon, Kyung-Sik; Lee, HooKi
    • Convergence Security Journal / v.19 no.3 / pp.21-28 / 2019
  • As the number of websites an organization manages increases, so does the number of web application servers and containers. Checking the status of the web services on those servers and containers is very laborious when a person must log in to each physical server at a remote site through a terminal or other access software. Previous crawler research rarely discusses how the crawled data are processed, and data loss can occur when the crawler writes data directly to the database. In this paper, we propose a method for crawler-based management of web application servers that stores the inspection data without loss.
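One common way to avoid the loss described above is to decouple crawling from database writes: crawler threads append inspection records to a queue, and a single writer drains it, so a slow or briefly unavailable database does not drop records. This is a sketch of that pattern, not the paper's implementation; the record fields and the SQLite backend are assumptions.

```python
# Sketch: queue-buffered writes so crawl results are not lost.
import queue
import sqlite3
import threading

results = queue.Queue()

def writer(db_path, stop):
    # The connection is created and used only inside this thread.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS status (url TEXT, code INT)")
    while not (stop.is_set() and results.empty()):
        try:
            url, code = results.get(timeout=1)
        except queue.Empty:
            continue
        con.execute("INSERT INTO status VALUES (?, ?)", (url, code))
        con.commit()  # each record is durable before the next is taken
    con.close()

stop = threading.Event()
threading.Thread(target=writer, args=("status.db", stop), daemon=True).start()
# Crawler threads would then call: results.put((url, http_status))
```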

Effect of labor saving by crawler-type truck in steep slope orchards

  • Tsurusaki, T.; Yamashita, J.; Imoto, T.; Satou, K.; Hikita, M.
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1991.10b / pp.1580-1584 / 1991
  • The purpose of the present study is to investigate, from the viewpoint of labor science, the labor-saving effect of the crawler-type truck, which has been used to rationalize transportation labor in citrus orchards on steep slopes, and to find out how to utilize the truck most effectively. To this end, a portable heart-rate recorder was used to measure the physical response of the laborers, and the experiment was carried out in a citrus orchard on steep slopes in Japan.


Focused Crawler using Ontology and Sentence Analysis (문장 분석 및 온톨로지를 이용한 Focused Crawler)

  • 최광복; 김현주; 강진범; 홍광희; 양재영; 최중민
    • Proceedings of the Korean Information Science Society Conference / 2004.10a / pp.100-102 / 2004
  • With the spread of the World Wide Web, web documents are growing and changing so rapidly that search engines can no longer keep their indexed documents consistent with the current web. One approach being studied to address this problem is the focused crawler, which fixes a specific topic and collects only the documents related to it. Focused crawlers with a variety of approaches have been developed so far, but all of them evaluate documents through the web links that connect them. For documents covering diverse content, this is inefficient: a document may be discarded even though it contains relevant material, or, when it is used, every link in the document is followed. In this paper, we instead evaluate the information contained within a web document using an ontology, so that the information a user wants can be found even in documents with diverse content, and documents can be collected more efficiently and accurately by following only the links related to that information.
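A minimal sketch of the link-filtering idea: follow a link only when the text around it matches topic concepts. A flat concept set stands in for the ontology and anchor text stands in for sentence analysis, both simplifications of the paper's approach.

```python
# Sketch: keep only links whose anchor text mentions a topic concept.
import re

TOPIC_CONCEPTS = {"crawler", "search", "index", "ontology"}

ANCHOR = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', re.S | re.I)

def relevant_links(html):
    links = []
    for href, text in ANCHOR.findall(html):
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & TOPIC_CONCEPTS:
            links.append(href)
    return links
```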
