• Title/Summary/Keyword: Crawler

Search Results: 199

Implementation and Performance Analysis of Efficient Big Data Processing System Through Dynamic Configuration of Edge Server Computing and Storage Modules (BigCrawler: 엣지 서버 컴퓨팅·스토리지 모듈의 동적 구성을 통한 효율적인 빅데이터 처리 시스템 구현 및 성능 분석)

  • Kim, Yongyeon;Jeon, Jaeho;Kang, Sungjoo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.6
    • /
    • pp.259-266
    • /
    • 2021
  • Edge computing enables real-time big data processing by performing computation close to the physical location of the user or data source. However, in an edge computing environment, various situations that affect big data processing performance may arise from temporary service requirements or from changes in the physical resources available in the field. In this paper, we propose BigCrawler, a system that dynamically configures the computing module and the storage module according to the big data collection status and the computing resource usage in the edge computing environment, and we analyze the characteristics of big data processing workloads under different arrangements of the computing and storage modules; a minimal placement sketch is given below.
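
To make the idea of dynamic module configuration concrete, here is a minimal sketch of a placement policy that co-locates or splits a computing module and a storage module across edge nodes based on resource usage. The EdgeNode fields, thresholds, and place_modules helper are hypothetical illustrations, not the BigCrawler paper's actual mechanism.

```python
# Illustrative sketch only: a toy policy for placing compute/storage modules
# on edge nodes based on resource usage. Thresholds and node fields are
# hypothetical, not taken from the BigCrawler paper.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    cpu_util: float      # 0.0 - 1.0, current CPU utilization
    disk_free_gb: float  # remaining local storage

def place_modules(nodes, ingest_rate_mbps, cpu_limit=0.7, disk_limit_gb=50.0):
    """Pick a node for the computing module and one for the storage module.

    If one node has both spare CPU and spare disk, co-locate the modules to
    avoid network transfer; otherwise split them across nodes.
    """
    compute = min(nodes, key=lambda n: n.cpu_util)
    storage = max(nodes, key=lambda n: n.disk_free_gb)
    if compute.cpu_util < cpu_limit and compute.disk_free_gb > disk_limit_gb:
        storage = compute  # co-locate when the least-loaded node also has disk headroom
    return {"compute": compute.name, "storage": storage.name,
            "ingest_rate_mbps": ingest_rate_mbps}

if __name__ == "__main__":
    nodes = [EdgeNode("edge-1", 0.35, 120.0), EdgeNode("edge-2", 0.80, 300.0)]
    print(place_modules(nodes, ingest_rate_mbps=40))
```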

Development of Hydraulic Device Performance Test Equipment Automation Process (유압 디바이스 성능 검사 장비 자동화 공정 개발)

  • Kim, Hong-Rok;Chung, Won-Jee;Seol, Sang-Seok;Park, Sang-Hyeok;Lee, Kyeong-Tae
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.19 no.10
    • /
    • pp.74-80
    • /
    • 2020
  • Crawler-type hydraulic devices facilitate the forward and backward driving of construction equipment by converting power into mechanical energy. The existing hydraulic device performance test process is time- and labor-intensive. This study aims to improve efficiency and productivity by automating the hydraulic device production performance test processes, which until now have been conducted separately. We also used SolidWorksⓇ, a 3D modeling program, and ANSYSⓇ, a structural analysis tool, to perform structural analysis and to verify the suitability of the fixing pins required to connect a hydraulic device to the performance test equipment. Our results show that employing an automated hydraulic device performance test process improves efficiency.

A Basic Study of Crane Trajectory Distance Calculation for Sustainable PC Members Erection of Large Logistic Building (대형물류센터 PC부재 양중을 위한 크레인 궤적거리 산정 기초 연구)

  • Lim, Jeeyoung;Oh, Jinhyuk;Kim, Sunkuk
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2023.05a
    • /
    • pp.77-78
    • /
    • 2023
  • Because large logistics buildings have high floor heights and long spans, they are designed as precast concrete (PC) structures, and large cranes are used to lift the PC members. PC erection planning is prone to errors because it depends on the field engineer's experience. To solve this problem, a basic analysis method is needed to establish a systematic PC member erection plan. Crane work can be minimized if the trajectory is calculated easily and quickly according to the location of the crane and applied on site. Therefore, the objective of this study is a basic investigation of crane trajectory distance calculation for sustainable PC member erection in large logistics buildings. The study is limited to crawler cranes, which are commonly used for lifting PC members, and the trajectory distance for the PC erection plan is calculated automatically by the proposed algorithm; a simplified trajectory-distance sketch is given below.
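
As a rough illustration of what a crane trajectory-distance calculation involves, the sketch below approximates hook travel as a slewing arc plus radial boom movement between a pick point and a placement point. The geometry, the trajectory_distance helper, and the sample coordinates are assumptions for illustration and do not reproduce the paper's algorithm.

```python
# Illustrative sketch only: a simplified crane trajectory-distance estimate.
# The paper's algorithm is not reproduced here; the geometry and assumptions
# (flat site, pure slewing plus radial boom movement) are hypothetical.
import math

def trajectory_distance(crane_xy, pick_xy, place_xy):
    """Approximate the hook travel from a pick point to a placement point.

    Slewing is approximated as an arc at the mean working radius, and the
    change in radius is added as straight radial travel.
    """
    r_pick = math.dist(crane_xy, pick_xy)
    r_place = math.dist(crane_xy, place_xy)
    a_pick = math.atan2(pick_xy[1] - crane_xy[1], pick_xy[0] - crane_xy[0])
    a_place = math.atan2(place_xy[1] - crane_xy[1], place_xy[0] - crane_xy[0])
    swing = abs(a_place - a_pick)
    swing = min(swing, 2 * math.pi - swing)    # take the shorter slewing direction
    arc = swing * (r_pick + r_place) / 2.0     # arc length at the mean radius
    radial = abs(r_place - r_pick)             # boom in/out travel
    return arc + radial

if __name__ == "__main__":
    print(round(trajectory_distance((0, 0), (20, 5), (5, 25)), 2), "m")
```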


RSS Channel Recommendation System using Focused Crawler (주제 중심 수집기를 이용한 RSS 채널 추천 시스템)

  • Lee, Young-Seok;Cho, Jung-Woo;Kim, Jun-Il;Choi, Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.52-59
    • /
    • 2006
  • The internet has recently seen tremendous growth in information, driven by increasingly specialized personal interests and the popularization of personal cyber spaces known as blogs. Many blogs provide RSS, also known as syndication technology, which enables users to receive updates automatically by registering an RSS channel address with an RSS aggregator, so they no longer waste time repeatedly checking a website for updates. This paper proposes ways to manage an RSS channel searching crawler and the collected RSS channels so that users can find the specific RSS channel they want without obstacles. It also proposes RSS channel ranking based on user popularity, focusing on indexing the collected information and web updates so that users receive information appropriate to their interests; a minimal feed-polling sketch is given below.
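
As a small illustration of the collection side of such a system, the sketch below polls an RSS 2.0 channel with the Python standard library and extracts item titles and links. The feed URL is a placeholder and the fetch_rss_items helper is an assumption; the paper's focused crawler and ranking logic are not reproduced.

```python
# Illustrative sketch only: polling an RSS channel with the Python standard
# library and extracting item titles/links. The feed URL is a placeholder and
# this is not the paper's focused-crawler implementation.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_rss_items(feed_url, timeout=10):
    """Return (title, link, pubDate) tuples for every <item> in an RSS 2.0 feed."""
    with urllib.request.urlopen(feed_url, timeout=timeout) as resp:
        root = ET.fromstring(resp.read())
    items = []
    for item in root.iter("item"):
        items.append((
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("pubDate", default=""),
        ))
    return items

if __name__ == "__main__":
    try:
        for title, link, pub in fetch_rss_items("https://example.com/rss.xml"):  # placeholder URL
            print(pub, title, link)
    except OSError as exc:
        print("feed fetch failed for the placeholder URL:", exc)
```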

STUDIES ON VIBRATION CHARACTERISTICS OF THE RUBBER CRAWLER --- Dynamic characteristics of the fixed track rollers and movable track rollers ---

  • Kashima, Jun;Inoue, Eiji;Inaba, Shigeki;Sakai, Jun;Kim, Young-Keun
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1993.10a
    • /
    • pp.1186-1195
    • /
    • 1993
  • The Japanese-type combine harvester has used rubber crawlers as its driving mechanism since its first production. As these machines have become larger, combine harvesters with movable track rollers in the rubber crawler system have recently been adopted to improve stability when climbing over the footpaths between rice fields. However, the dynamic characteristics of movable track rollers have not been clarified, so their design still depends on trial and error. It is known that the vibration characteristics of a vehicle with movable track rollers differ from those of a vehicle with fixed track rollers even when the track roller arrangement is the same. Theoretical analysis of movable track rollers is therefore urgently needed in order to formulate a rational track roller arrangement design. The authors have previously studied the vibration characteristics of a rubber crawler vehicle with fixed track rollers. In this study, the dynamic model of the vehicle with movable track rollers is compared with the dynamic model of the vehicle with fixed track rollers, and motions are simulated to analyze the movable track rollers by extending the equations of motion constructed for the fixed-track-roller model; a generic simulation sketch is given below.
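
Purely as an illustration of simulating a roller's equation of motion numerically, the sketch below integrates a generic single-degree-of-freedom mass-spring-damper model under sinusoidal track excitation. All parameters and the simulate_roller helper are arbitrary assumptions; this is not the paper's crawler dynamic model.

```python
# Illustrative sketch only: numerically integrating a generic single-degree-of-
# freedom roller model (mass-spring-damper excited by ground displacement).
# Parameters are arbitrary; this is NOT the paper's crawler dynamic model.
import math

def simulate_roller(m=20.0, k=4.0e4, c=400.0, dt=1e-4, t_end=2.0):
    """Semi-implicit Euler integration of m*x'' + c*(x'-y') + k*(x-y) = 0,
    where y(t) is a sinusoidal ground (track) excitation."""
    x, v = 0.0, 0.0
    history = []
    for i in range(int(t_end / dt)):
        t = i * dt
        y = 0.005 * math.sin(2 * math.pi * 8.0 * t)                       # 5 mm, 8 Hz excitation
        ydot = 0.005 * 2 * math.pi * 8.0 * math.cos(2 * math.pi * 8.0 * t)
        a = (-c * (v - ydot) - k * (x - y)) / m
        v += a * dt
        x += v * dt
        history.append((t, x))
    return history

if __name__ == "__main__":
    peak = max(abs(x) for _, x in simulate_roller())
    print(f"peak roller displacement: {peak * 1000:.2f} mm")
```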


A System for Identifying Personal Information Leaks on the Web Using Crawlers and Morphological Analyzers (크롤러와 형태소 분석기를 활용한 웹상 개인정보 유출 판별 시스템)

  • Lee, Hyeongseon;Park, Jaehee;Na, Cheolhun;Jung, Hoekyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.559-560
    • /
    • 2017
  • Recently, as the problem of personal information leakage has emerged, studies on data collection and web document classification have been conducted. Existing systems judge only whether personal information exists, and because documents published under the same name or by the same user are not classified, unnecessary data is not filtered out. In this paper, we propose a system that identifies the types of data and distinguishes homonyms using a crawler and a morphological analyzer to solve this problem. The user collects personal information on the web through the crawler; the collected data is then classified by the morphological analyzer, after which the leaked data can be confirmed. If the system is reused, more accurate results can be obtained, and users are expected to be provided with customized data; a minimal collection-and-filtering sketch is given below.
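
As a minimal illustration of the collection and filtering steps, the sketch below fetches a page and flags text matching simple personal-information patterns (a Korean mobile number and a resident registration number format). The URL, patterns, and find_leaks helper are assumptions; the paper's morphological-analyzer classification is not reproduced here.

```python
# Illustrative sketch only: fetching a page and flagging text that matches
# simple personal-information patterns. The real system would add a
# morphological analyzer to classify documents; the URL and patterns here
# are placeholders.
import re
import urllib.request
from html.parser import HTMLParser

PATTERNS = {
    "phone": re.compile(r"01[016789]-\d{3,4}-\d{4}"),
    "resident_id": re.compile(r"\d{6}-[1-4]\d{6}"),
}

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def find_leaks(url):
    """Return all pattern matches found in the visible text of the page."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

if __name__ == "__main__":
    try:
        print(find_leaks("https://example.com/board/post/1"))  # placeholder URL
    except OSError as exc:
        print("fetch failed for the placeholder URL:", exc)
```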


Design and Implementation of Web Crawler Wrappers to Collect User Reviews on Shopping Mall with Various Hierarchical Tree Structure (다양한 계층 트리 구조를 갖는 쇼핑몰 상에서의 상품평 수집을 위한 웹 크롤러 래퍼의 설계 및 구현)

  • Kang, Han-Hoon;Yoo, Seong-Joon;Han, Dong-Il
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.3
    • /
    • pp.318-325
    • /
    • 2010
  • This study proposes a wrapper description language and model for collecting product reviews from Korean shopping malls that have multi-layer structures and are built with a variety of web languages. Wrapper-based web crawlers hold information about a website's structure so that they can fetch exactly the desired data. Previously proposed wrapper-based web crawlers could only collect HTML documents whose hierarchical structure was two or three layers deep. The Korean shopping malls considered in this study, however, consist not only of HTML documents but also of various web technologies (JavaScript, Flash, and AJAX), and have a five-layer hierarchical structure. A web crawler needs information about the review pages in order to visit them without visiting any non-review pages, so the proposed wrapper contains the location information of the review pages, and a grammar is proposed for describing that location information; a toy wrapper sketch is given below.
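
To illustrate what such location information might look like, the sketch below encodes a five-layer page hierarchy in a toy wrapper structure and walks it to yield only review-page URLs. The field names, URL templates, and review_urls helper are hypothetical and do not follow the paper's wrapper description language.

```python
# Illustrative sketch only: a toy "wrapper" that records where review pages
# live in a shopping mall's page hierarchy, plus a walker that yields the
# review-page URLs to crawl. Field names, selectors, and URL templates are
# hypothetical and do not reproduce the paper's wrapper description language.
WRAPPER = {
    "site": "https://mall.example.com",
    "levels": [                                   # five-layer hierarchy ending at reviews
        {"name": "category",     "url": "/category/{category_id}"},
        {"name": "subcategory",  "url": "/category/{category_id}/{sub_id}"},
        {"name": "product_list", "url": "/list?cat={sub_id}&page={page}"},
        {"name": "product",      "url": "/product/{product_id}"},
        {"name": "reviews",      "url": "/product/{product_id}/reviews?page={review_page}"},
    ],
    "review_selector": "div.review-item",         # where review text sits on the final level
}

def review_urls(wrapper, product_ids, pages=3):
    """Yield only review-page URLs, skipping every non-review level."""
    template = wrapper["site"] + wrapper["levels"][-1]["url"]
    for pid in product_ids:
        for page in range(1, pages + 1):
            yield template.format(product_id=pid, review_page=page)

if __name__ == "__main__":
    for url in review_urls(WRAPPER, ["A1001", "B2002"], pages=2):
        print(url)
```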

Automatic Patch Information Collection System Using Web Crawler (웹 크롤러를 이용한 자동 패치 정보 수집 시스템)

  • Kim, Yonggun;Na, Sarang;Kim, Hwankuk;Won, Yoojae
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.6
    • /
    • pp.1393-1399
    • /
    • 2018
  • Companies that use a variety of software rely on patch management systems provided by security vendors to manage software vulnerabilities and improve security. System administrators monitor the vendor sites that publish new patch information in order to keep software up to date, but finding and collecting patch information is costly and time-consuming because patch cycles are irregular and each site's web page structure is different. To reduce this burden, studies have automated patch information collection based on keywords or web services, but because vendor sites do not provide patch information in a standardized structure, those approaches applied only to specific vendor sites. In this paper, we propose a system that automates the collection of patch information by analyzing the structure and characteristics of the vendor sites that provide it and by using a web crawler, thereby reducing the cost and monitoring time spent collecting patch information; a minimal collection sketch is given below.
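
As a minimal illustration of crawling a vendor page for patch announcements, the sketch below fetches a (hypothetical) advisory page and keeps links whose anchor text matches a few keywords. The URL, keywords, and collect_patches helper are assumptions rather than the paper's structure analysis.

```python
# Illustrative sketch only: fetching a hypothetical vendor advisory page and
# pulling out links whose text looks like patch announcements. The URL and the
# keyword heuristic are placeholders, not the paper's structure analysis.
import urllib.request
from html.parser import HTMLParser

class AdvisoryLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []          # (href, text) pairs
        self._href = None
        self._text = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def collect_patches(url, keywords=("patch", "security update", "advisory")):
    """Return (href, text) pairs for links whose text mentions a patch keyword."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = AdvisoryLinkParser()
    parser.feed(html)
    return [(href, text) for href, text in parser.links
            if any(k in text.lower() for k in keywords)]

if __name__ == "__main__":
    try:
        for href, text in collect_patches("https://vendor.example.com/security/advisories"):
            print(text, "->", href)
    except OSError as exc:
        print("fetch failed for the placeholder URL:", exc)
```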

Design and Implementation of Event-driven Real-time Web Crawler to Maintain Reliability (신뢰성 유지를 위한 이벤트 기반 실시간 웹크롤러의 설계 및 구현)

  • Ahn, Yong-Hak
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.4
    • /
    • pp.1-6
    • /
    • 2022
  • Real-time systems that use web crawling data must provide users with data that is identical to the remote data. To do this, a web crawler repeatedly sends HTTP (HyperText Transfer Protocol) requests to the remote server to check whether the remote data has changed, which places network load on both the crawling server and the remote server and generates excessive traffic. To solve this problem, this paper proposes an event-driven real-time web crawling technique that reduces network overload while keeping the crawling server's data consistent with data from multiple remote locations. The proposed method performs the crawling process based on user events that request unit data and list data. The results show that the proposed method reduces the network traffic overhead of existing web crawlers while securing data reliability. Future research on combining event-based crawling with time-based crawling is required. A minimal comparison of time-based polling and event-triggered crawling is sketched below.
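
To contrast the two approaches described above, the sketch below shows a time-based polling loop next to an event-driven loop that fetches only when a user event requests a URL. The fetch helper, queue wiring, and placeholder URL are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: contrasting periodic polling with event-triggered
# crawling. The fetch helper, event queue, and placeholder URL are hypothetical;
# this does not reproduce the paper's system.
import queue
import time
import urllib.request

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def poll_forever(url, interval_s=60):
    """Time-based crawling: request the page on a fixed interval whether or not
    anything changed (the source of the redundant traffic described above)."""
    while True:
        data = fetch(url)
        print("polled", len(data), "bytes")
        time.sleep(interval_s)

def crawl_on_events(events):
    """Event-driven crawling: fetch only when a user event requests a URL
    (unit data or list data); a None event stops the loop."""
    while True:
        url = events.get()        # blocks until a user event arrives
        if url is None:
            break
        data = fetch(url)
        print("fetched on demand:", len(data), "bytes from", url)

if __name__ == "__main__":
    q = queue.Queue()
    q.put("https://example.com/item/1")   # simulate one user event (placeholder URL)
    q.put(None)                           # stop the loop after handling it
    try:
        crawl_on_events(q)
    except OSError as exc:                # the placeholder URL may not serve data
        print("fetch failed for the placeholder URL:", exc)
```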