Title/Summary/Keyword: Web-Crawling

Implementation of web server monitoring system using crawling technology

  • Yu, Young-Geun; Nam, Ki-Bok; Park, Koo-Rack
    • Journal of the Korea Society of Computer and Information, v.24 no.4, pp.123-128, 2019
  • Advances in IT have produced web sites that provide information in many fields. As the number of users accessing the Web grows, a single server becomes insufficient, so multiple servers are deployed to provide the service, and systems that monitor those servers are required to manage them. However, most server monitoring systems on the market simply notify administrators through SMS or apps when a server goes down or a monitored port is closed. For servers that handle heavy traffic, the web server and the WAS (web application server) are typically operated and managed independently; the two provide the service by connecting to each other, but this connection can be broken by various environmental factors. In that case, an existing monitoring system cannot determine whether the service is actually working. Likewise, even for an independently operated web server and WAS, an existing monitoring system cannot detect the problem when normal service is not being provided because of environmental factors such as a lost database connection. In this paper, we implement a system that uses web crawling to verify that the web service itself is in a normal state.
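
The paper gives no code in this abstract, but its core idea, checking health by fetching a page and inspecting its content rather than merely probing a port, can be sketched roughly as below; the URL and the expected marker string are hypothetical placeholders, not the authors' configuration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ServiceHealthCheck {
    // Hypothetical target page and a marker string the healthy page should contain.
    static final String TARGET_URL = "https://example.com/health";
    static final String EXPECTED_MARKER = "OK";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(TARGET_URL))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A 200 status only proves the web server answered; checking the body
        // catches the WEB-WAS or WAS-DB failures the abstract describes.
        boolean healthy = response.statusCode() == 200
                && response.body().contains(EXPECTED_MARKER);
        System.out.println(healthy ? "service normal" : "service degraded");
    }
}
```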

A study on the enhanced filtering method of the deduplication for bulk harvest of web records (대규모 웹 기록물의 원격수집을 위한 콘텐츠 중복 필터링 개선 연구)

  • Lee, Yeon-Soo; Nam, Sung-un; Yoon, Dai-hyun
    • The Korean Journal of Archival Studies, no.35, pp.133-160, 2013
  • As networks and electronic devices have developed rapidly, the influence the web exerts on our daily lives has grown. Information created on the web increasingly serves as an important record of its era, so there is strong demand to archive it in a standardized way. One such method is the snapshot strategy, which crawls web contents periodically using automated software. This strategy has two problems: it can harvest identical, duplicate contents, and it can also crawl meaningless or useless contents because of the complex web technologies in use. In this paper, we categorize the problems that emerge when crawling web contents with the snapshot strategy and, by crawling the web contents of public institutions, present technical solutions to them.
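
The abstract does not spell out the filtering method itself; a common baseline for this kind of deduplication, shown here purely as an illustrative sketch, is to fingerprint each harvested document (for example with SHA-256 over normalized text) and skip documents whose fingerprint has already been seen:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashSet;
import java.util.HexFormat;
import java.util.Set;

public class DedupFilter {
    private final Set<String> seenDigests = new HashSet<>();

    /** Returns true if the content is new; false if an identical copy was already harvested. */
    public boolean accept(String content) throws Exception {
        // Normalize whitespace so trivially reformatted copies hash identically.
        String normalized = content.strip().replaceAll("\\s+", " ");
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(normalized.getBytes(StandardCharsets.UTF_8));
        return seenDigests.add(HexFormat.of().formatHex(digest));
    }

    public static void main(String[] args) throws Exception {
        DedupFilter filter = new DedupFilter();
        System.out.println(filter.accept("<p>Hello  web archive</p>")); // true: first time seen
        System.out.println(filter.accept("<p>Hello web archive</p>"));  // false: duplicate after normalization
    }
}
```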

Web Crawler Service Implementation for Information Retrieval based on Big Data Analysis (빅데이터 분석 기반의 정보 검색을 위한 웹 크롤러 서비스 구현)

  • Kim, Hye-Suk; Han, Na; Lim, Suk-Ja
    • Journal of Digital Contents Society, v.18 no.5, pp.933-942, 2017
  • In this paper, we propose a web crawler service for efficiently collecting information of interest to college students and job seekers, such as extracurricular activities, competitions, and scholarships. The proposed service uses Jsoup tree analysis and JSON-format data transmission to avoid duplicated crawling while crawling at high speed. After collecting relevant information for 24 hours, we confirmed that the web crawler service ran with 100% accuracy. We expect the web crawler service can be applied to various web sites in the future and improved further.
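
As a rough illustration of the Jsoup-based approach the abstract names, the sketch below combines a visited-URL set (to avoid duplicated crawling) with JSON-formatted output; the seed URL, the link selector, and the hand-rolled JSON record are assumptions, not the authors' code:

```java
// Requires the Jsoup library (org.jsoup:jsoup) on the classpath.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class JsoupCrawler {
    public static void main(String[] args) throws Exception {
        Deque<String> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>(); // visited set prevents duplicated crawling
        frontier.add("https://example.com/");  // hypothetical seed URL

        while (!frontier.isEmpty() && visited.size() < 50) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;   // skip URLs already fetched

            Document doc = Jsoup.connect(url).userAgent("study-crawler").get();
            // Emit one JSON record per page (a real system would use a JSON library).
            System.out.printf("{\"url\":\"%s\",\"title\":\"%s\"}%n",
                    url, doc.title().replace("\"", "\\\""));

            for (Element link : doc.select("a[href]")) {
                String next = link.absUrl("href");
                if (next.startsWith("https://example.com/") && !visited.contains(next)) {
                    frontier.add(next);
                }
            }
        }
    }
}
```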

Designing and implementing web crawling-based SNS web site (웹 크롤링 기반 SNS웹사이트 설계 및 구현)

  • Yoon, Kyung Seob; Kim, Yeon Hong
    • Proceedings of the Korean Society of Computer Information Conference, 2018.01a, pp.21-24, 2018
  • Existing Facebook pages receive so many submitted posts that users have difficulty finding the posts they want. To address this, this paper designs and implements a website that crawls the contents of various Facebook pages, stores them in a database server so that users can search for the Facebook page content they want, and then serves the crawled content to users.
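
The abstract only outlines the pipeline (crawl pages, store them in a database, search them from the website); a minimal sketch of the storage-and-search step, with an assumed MySQL schema, connection string, and credentials, might look like this:

```java
// Requires the MySQL JDBC driver (e.g., mysql-connector-j) on the classpath.
// Assumed one-table schema:
//   CREATE TABLE posts (id INT AUTO_INCREMENT PRIMARY KEY, page VARCHAR(100), body TEXT);
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PostStore {
    static final String DB_URL = "jdbc:mysql://localhost:3306/crawler"; // hypothetical

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(DB_URL, "user", "password")) {
            // Store one crawled post.
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO posts (page, body) VALUES (?, ?)")) {
                insert.setString(1, "SomePage");
                insert.setString(2, "crawled post text ...");
                insert.executeUpdate();
            }
            // Keyword search backing the website's search box.
            try (PreparedStatement search = conn.prepareStatement(
                    "SELECT page, body FROM posts WHERE body LIKE ?")) {
                search.setString(1, "%keyword%");
                try (ResultSet rs = search.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("page") + ": " + rs.getString("body"));
                    }
                }
            }
        }
    }
}
```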

Information-providing Application Based on Web Crawling (웹 크롤링을 통한 개인 맞춤형 정보제공 애플리케이션)

  • Ju-Hyeon Kim; Jeong-Eun Choi; U-Gyeong Shin; Min-Jun Piao; Tae-Kook Kim
    • Journal of Internet of Things and Convergence, v.10 no.1, pp.21-27, 2024
  • This paper presents a personalized real-time information-providing application built on filtering and web crawling. The application crawls web pages with the Jsoup library, keeping only content that matches keywords set by the user, and stores the crawled data in a MySQL database. The stored data is presented to the user through a Flutter application, and mobile push notifications are delivered through Firebase Cloud Messaging (FCM). In this way, users can obtain the information they want quickly and efficiently. We further expect the approach to apply to Internet of Things (IoT) environments that generate big data, letting users receive only the information they need.
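
A rough sketch of the keyword-filtering step described here; the page URL, the element selector, and the keywords are illustrative assumptions, and the MySQL insert and FCM push stages are left as comments:

```java
// Requires the Jsoup library (org.jsoup:jsoup).
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.List;

public class KeywordFilterCrawler {
    public static void main(String[] args) throws Exception {
        List<String> userKeywords = List.of("scholarship", "internship"); // example user-set keywords
        Document doc = Jsoup.connect("https://example.com/board").get();  // hypothetical target page

        // Walk candidate items and keep only those matching a user keyword;
        // matched rows would then be INSERTed into MySQL and pushed via FCM.
        for (Element item : doc.select("li")) {
            String text = item.text();
            boolean matches = userKeywords.stream()
                    .anyMatch(k -> text.toLowerCase().contains(k.toLowerCase()));
            if (matches) {
                System.out.println("match: " + text);
            }
        }
    }
}
```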

Design and Implementation of Web Crawler utilizing Unstructured data

  • Tanvir, Ahmed Md.; Chung, Mokdong
    • Journal of Korea Multimedia Society, v.22 no.3, pp.374-385, 2019
  • A web crawler is a program commonly used by search engines to discover new content on the internet, and the use of crawlers has made the web easier for users. In this paper, we structure unstructured data in order to collect data from web pages. Our system can select words near a given keyword across multiple documents; such neighboring data are gathered for the keyword through word2vec. Filtering is applied at the data-acquisition level and supports a large taxonomy. The main problem in text classification is how to improve accuracy, and to that end we propose a new TF-IDF weighting method, modifying the TF calculation for unstructured data. Finally, we propose a web page search-crawling algorithm derived from TF-IDF and an RL web search algorithm to improve the efficiency of retrieving relevant information. Throughout, we examine how crawlers and crawling algorithms work in search engines for efficient information retrieval.
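
The modified weighting itself is not defined in the abstract, so as background only, here is a minimal sketch of the standard TF-IDF the paper takes as its baseline, computed over a toy corpus of invented documents:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

public class TfIdf {
    public static void main(String[] args) {
        // Toy corpus standing in for crawled documents.
        List<List<String>> docs = List.of(
                List.of("web", "crawler", "search"),
                List.of("web", "page", "index"),
                List.of("crawler", "crawler", "frontier"));

        // Document frequency: number of documents containing each term.
        Map<String, Integer> df = new HashMap<>();
        for (List<String> doc : docs) {
            for (String term : new HashSet<>(doc)) {
                df.merge(term, 1, Integer::sum);
            }
        }

        // TF-IDF for each distinct term of document 0: tf = count/len, idf = ln(N/df).
        List<String> doc = docs.get(0);
        for (String term : new HashSet<>(doc)) {
            long count = doc.stream().filter(term::equals).count();
            double tf = (double) count / doc.size();
            double idf = Math.log((double) docs.size() / df.get(term));
            System.out.printf("%s: %.4f%n", term, tf * idf);
        }
    }
}
```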

Image Classification Model using web crawling and transfer learning (웹 크롤링과 전이학습을 활용한 이미지 분류 모델)

  • Lee, JuHyeok; Kim, Mi Hui
    • Journal of IKEEE, v.26 no.4, pp.639-646, 2022
  • In this paper, to address the problem of building large datasets, we collect images through web crawling and construct datasets for image classification models through a data preprocessing step. We also propose a lightweight image classification model that incorporates transfer learning, automatically classifies images by assigning category values, reduces training time, and achieves high accuracy.
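
The model details are not in the abstract; the dataset-collection step it describes can be sketched as below, where the gallery URL, the category label, and the output directory are placeholders, and the transfer-learning stage itself is omitted:

```java
// Requires the Jsoup library (org.jsoup:jsoup).
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.nio.file.Files;
import java.nio.file.Path;

public class ImageHarvester {
    public static void main(String[] args) throws Exception {
        String category = "cat"; // example category label, used as the folder name
        Path outDir = Path.of("dataset", category);
        Files.createDirectories(outDir);

        Document doc = Jsoup.connect("https://example.com/gallery").get(); // hypothetical page
        int n = 0;
        for (Element img : doc.select("img[src]")) {
            String src = img.absUrl("src");
            if (src.isEmpty()) continue;
            // Fetch raw image bytes and save them under the category folder.
            byte[] bytes = Jsoup.connect(src).ignoreContentType(true)
                    .execute().bodyAsBytes();
            Files.write(outDir.resolve("img_" + (n++) + ".jpg"), bytes);
        }
        System.out.println("saved " + n + " images for category " + category);
    }
}
```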

A Study on Focused Crawling of Web Document for Building of Ontology Instances (온톨로지 인스턴스 구축을 위한 주제 중심 웹문서 수집에 관한 연구)

  • Chang, Moon-Soo
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.1, pp.86-93, 2008
  • Constructing an ontology, which defines complicated semantic relations, requires precision and expert skill, and for a well-defined ontology to be useful in real applications, abundant instance information for the ontology classes is critical. In this study, we develop a focused crawling algorithm that extracts the documents best fitting a topic from a Web overflowing with documents. The proposed algorithm gathers documents at high speed by extracting topic-specific links using URL patterns, and it represents the topic fitness of each link's block text with fuzzy sets, improving the precision of the focused crawler.
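
A bare-bones focused-crawler skeleton in the spirit of this description: a priority queue orders the frontier by link-text fitness, with a simple keyword-fraction score standing in for the paper's fuzzy-set measure, and the URL pattern and topic terms invented for illustration:

```java
// Requires the Jsoup library (org.jsoup:jsoup).
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.HashSet;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Set;

public class FocusedCrawler {
    record Candidate(String url, double fitness) {}

    static final List<String> TOPIC_TERMS = List.of("ontology", "semantic"); // example topic

    // Stand-in for the paper's fuzzy-set fitness: fraction of topic terms in the link's text.
    static double score(String linkText) {
        String lower = linkText.toLowerCase();
        return TOPIC_TERMS.stream().filter(lower::contains).count() / (double) TOPIC_TERMS.size();
    }

    public static void main(String[] args) throws Exception {
        PriorityQueue<Candidate> frontier =
                new PriorityQueue<>((a, b) -> Double.compare(b.fitness(), a.fitness()));
        Set<String> visited = new HashSet<>();
        frontier.add(new Candidate("https://example.com/", 1.0)); // hypothetical seed

        while (!frontier.isEmpty() && visited.size() < 20) {
            Candidate c = frontier.poll();
            if (!visited.add(c.url())) continue;
            Document doc = Jsoup.connect(c.url()).get();
            System.out.printf("fetched %s (fitness %.2f)%n", c.url(), c.fitness());
            for (Element link : doc.select("a[href]")) {
                String next = link.absUrl("href");
                // URL-pattern filter (example pattern) plus text-based fitness scoring.
                if (next.startsWith("https://example.com/") && !visited.contains(next)) {
                    frontier.add(new Candidate(next, score(link.text())));
                }
            }
        }
    }
}
```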

Implementation of Efficient Distributed Crawler through Stepwise Crawling Node Allocation

  • Kim, Hyuntae; Byun, Junhyung; Na, Yoseph; Jung, Yuchul
    • Journal of Advanced Information Technology and Convergence, v.10 no.2, pp.15-31, 2020
  • Various websites have been created as Internet use has grown, and the number of documents distributed through them has increased proportionally, yet collecting newly updated documents quickly is not easy. Web crawling has been used to continuously collect and manage new documents, but existing single-node crawling systems show limited performance, and distributed crawlers face the problem of managing crawling nodes effectively. This study proposes an efficient distributed crawler based on stepwise crawling node allocation, which identifies each website's properties and establishes crawling policies from them in order to collect large numbers of documents from multiple websites. The proposed crawler can estimate the number of documents a website holds, compare collection time and collection volume across different node counts by repeatedly visiting the website, and automatically allocate the optimal number of nodes to each website. In an experiment applying the proposed and single-node methods to 12 different websites, the proposed crawler's collection time decreased significantly compared with the single-node crawler because it applied per-website collection policies, and its work rate also increased.
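
The allocation rule itself is not given in the abstract; as one plausible reading of "repeatedly visit, measure, then allocate," here is a sketch that picks, per website, the node count with the best measured collection rate from trial visits (the measurements are fabricated placeholders):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NodeAllocator {
    /** Pick the node count whose measured throughput (docs/sec) is highest. */
    static int bestNodeCount(Map<Integer, Double> docsPerSecByNodes) {
        return docsPerSecByNodes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .orElseThrow()
                .getKey();
    }

    public static void main(String[] args) {
        // Hypothetical trial measurements for one website: nodes -> documents per second.
        Map<Integer, Double> trials = new LinkedHashMap<>();
        trials.put(1, 12.0);
        trials.put(2, 22.5);
        trials.put(4, 24.1); // diminishing returns past 2 nodes in this made-up example
        trials.put(8, 23.0);

        System.out.println("allocate " + bestNodeCount(trials) + " nodes to this site");
    }
}
```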

Design and Implementation of Real-time Web Crawling Distributed Monitoring System (실시간 웹 크롤링 분산 모니터링 시스템 설계 및 구현)

  • Kim, Yeong-A; Kim, Gea-Hee; Kim, Hyun-Ju; Kim, Chang-Geun
    • Journal of Convergence for Information Technology, v.9 no.1, pp.45-53, 2019
  • In this rapidly changing information era, we face an excess of information served by websites: little of it is useful, much is useless, and we spend a great deal of time selecting what we need. Many websites, including search engines, use web crawling to keep their data up to date; crawling typically produces copies of all the pages of visited sites, which search engines then index for faster searching. For data such as wholesale and order information that changes in real time, however, keyword-oriented web data collection is inadequate, and no alternative for selective real-time collection of web information has been proposed. In this paper, we propose a method that collects information from a restricted set of websites using a real-time web crawling distributed monitoring system (R-WCMS), estimates collection time through detailed analysis of the data, and stores the results in a parallel system. Experimental results show that applying the proposed model to website information retrieval reduces collection time by 15-17%.
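
R-WCMS internals are not described beyond this abstract; as a loose sketch of the "restricted site list, crawled in parallel, with collection time measured" idea, where the site URLs and thread count are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelSiteCollector {
    public static void main(String[] args) throws Exception {
        // Restricted set of monitored sites (hypothetical URLs).
        List<String> sites = List.of(
                "https://example.com/a", "https://example.com/b", "https://example.com/c");

        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(3); // one worker per site here

        long start = System.nanoTime();
        for (String site : sites) {
            pool.submit(() -> {
                HttpRequest req = HttpRequest.newBuilder(URI.create(site)).GET().build();
                HttpResponse<String> res =
                        client.send(req, HttpResponse.BodyHandlers.ofString());
                System.out.println(site + " -> " + res.body().length() + " bytes");
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // Measured wall-clock collection time for the whole restricted set.
        System.out.printf("collected in %.2f s%n", (System.nanoTime() - start) / 1e9);
    }
}
```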