• Title/Summary/Keyword: web crawler

User-Centered Information Retrieving Method in Blogs (사용자 중심의 블로그 정보 검색 기법)

  • Kim, Seung-Jong
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.9 / pp.3458-3464 / 2010
  • Due to the recent tremendous growth of information on the internet, RSS syndication technology provides internet users with a convenient way to search for information. RSS delivers newly updated content automatically, so users do not need to visit web sites repeatedly to obtain new information. This paper proposes a method of managing a web crawler that collects sites publishing RSS documents and helps users make efficient use of those documents, and it also suggests a way of ranking the RSS documents by their popularity among users. With the proposed retrieval methods, users can efficiently find the documents they need.
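
Below is a minimal sketch of the collection step this abstract describes: fetching an RSS 2.0 feed and ordering its items by a popularity score. It uses only the Python standard library; the popularity() callback is an illustrative assumption, not the paper's ranking formula.

```python
# Minimal sketch (not the paper's method): fetch an RSS 2.0 feed and
# rank its items with a caller-supplied popularity score.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_rss_items(feed_url):
    """Download an RSS 2.0 feed and return (title, link) pairs."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def rank_items(items, popularity):
    """Order items by popularity(link), highest first; `popularity` is a
    hypothetical stand-in for the paper's user-popularity measure."""
    return sorted(items, key=lambda pair: popularity(pair[1]), reverse=True)
```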

Improving the Quality of Search Engines by Using Intelligent Agent Technology

  • Nguyen, Ha-Nam;Choi, Gyoo-Seok;Park, Jong-Jin;Chi, Sung-Do
    • Journal of the Korea Computer Industry Society / v.4 no.12 / pp.1093-1102 / 2003
  • The dynamic nature of the World Wide Web challenges search engines to find relevant and recent pages. Obtaining important pages rapidly can be very useful when a crawler cannot visit the entire Web in a reasonable amount of time. In this paper we study the order in which spiders should visit URLs so as to obtain the more “important” pages first. We define and apply several metrics and a ranking formula for improving crawling results. A comparison between our results and the breadth-first search (BFS) method shows the efficiency of our experimental system.
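
The ordering problem this abstract describes can be pictured as a best-first frontier in place of BFS's FIFO queue. The sketch below scores URLs by in-link counts accumulated during the crawl; this scoring is an illustrative assumption, since the paper's own metrics are not given in the abstract.

```python
# Minimal sketch: best-first crawling. URLs with more in-links seen so
# far are expanded before others; plain BFS would use a FIFO queue instead.
import heapq

def best_first_crawl(seeds, fetch_links, limit=100):
    """Visit up to `limit` pages; fetch_links(url) -> iterable of out-links."""
    inlinks = {url: 0 for url in seeds}
    frontier = [(0, url) for url in seeds]    # (negated score, url)
    visited, order = set(), []
    while frontier and len(order) < limit:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue                          # stale heap entry, skip it
        visited.add(url)
        order.append(url)
        for link in fetch_links(url):
            inlinks[link] = inlinks.get(link, 0) + 1
            if link not in visited:
                heapq.heappush(frontier, (-inlinks[link], link))
    return order
```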

A Method of Link Extraction on Non-standard Links in Web Crawling (웹크롤러의 비표준 링크에 관한 링크 추출 방안)

  • Jeong, Jun-Yeong;Jang, Mun-Su;Gang, Seon-Mi
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.79-82 / 2008
  • A web crawler collects documents by following the URL links inside web pages. A considerable number of Korean web sites, however, connect their documents through links that do not conform to web standards. A typical web crawler does not anticipate non-standard uses of links, so it cannot collect such documents. Non-standard links became possible because JavaScript was added to HTML, a markup language tolerant of user mistakes, and irregular uses of JavaScript were thereby permitted. In this paper, we survey about 230 web sites, identify the link-extraction problems that existing web crawlers fail to solve, and propose an algorithm for collecting those documents. We also propose an efficient document-collector model that replaces a heavyweight JavaScript engine with a module composed of only the functions actually needed.
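
A common way to recover such non-standard links, sketched below, is to scan inline JavaScript for URL-bearing expressions rather than running a full JavaScript engine. The two patterns shown are illustrative assumptions; the paper's own module and the cases it covers are not detailed in the abstract.

```python
# Minimal sketch: extract URLs from non-standard JavaScript links
# (e.g. onclick="location.href='...'") that an href-only crawler misses.
import re

JS_URL_PATTERNS = [
    re.compile(r"location(?:\.href)?\s*=\s*['\"]([^'\"]+)['\"]"),
    re.compile(r"window\.open\(\s*['\"]([^'\"]+)['\"]"),
]

def extract_nonstandard_links(html):
    """Return URL strings referenced from inline JavaScript in `html`."""
    links = []
    for pattern in JS_URL_PATTERNS:
        links.extend(pattern.findall(html))
    return links
```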

Efficient URL Prioritizing Method for Web Crawlers (웹 크롤러를 위한 효율적인 URL 우선순위 할당 기법)

  • Md. Hijbul Alam;Jong-Woo Ha;Yoon-Ho Cho;SangKeun Lee
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.383-385 / 2008
  • With the amazing growth of the web, crawling important pages quickly poses a great challenge. In this research we propose fractional PageRank, a variation of PageRank computed during the crawl that can prioritize the download order. Experimental results show that it outperforms the prior crawler in terms of running time while still providing a good download ordering.
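
The general idea can be sketched as propagating a share of each crawled page's score to its out-links, so frontier URLs carry running rank estimates. This is an assumption-level illustration of rank-guided ordering, not the paper's exact fractional PageRank computation.

```python
# Minimal sketch: PageRank-style score propagation during a crawl.
# Frontier URLs accumulate shares of their parents' scores, and the
# crawler downloads the highest-scored frontier URL next.
def propagate_score(scores, page, outlinks, damping=0.85):
    """Split a damped share of `page`'s score among its out-links."""
    share = damping * scores.get(page, 1.0) / max(len(outlinks), 1)
    for link in outlinks:
        scores[link] = scores.get(link, 0.0) + share

def next_to_download(scores, frontier):
    """Pick the frontier URL with the highest running score estimate."""
    return max(frontier, key=lambda url: scores.get(url, 0.0))
```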

Effective Web Crawling Orderings from Graph Search Techniques (그래프 탐색 기법을 이용한 효율적인 웹 크롤링 방법들)

  • Kim, Jin-Il;Kwon, Yoo-Jin;Kim, Jin-Wook;Kim, Sung-Ryul;Park, Kun-Soo
    • Journal of KIISE: Computer Systems and Theory / v.37 no.1 / pp.27-34 / 2010
  • Web crawlers are fundamental programs which iteratively download web pages by following links, starting from a small set of initial URLs. Previously, several web crawling orderings have been proposed to crawl popular web pages in preference to other pages, but some graph search techniques whose characteristics and efficient implementations have been studied in the graph theory community have not yet been applied to web crawling orderings. In this paper we consider various graph search techniques, including lexicographic breadth-first search, lexicographic depth-first search, and maximum cardinality search as well as the well-known breadth-first search and depth-first search, and then choose effective web crawling orderings which have linear time complexity and crawl popular pages early. In particular, for maximum cardinality search and lexicographic breadth-first search, whose implementations are non-trivial, we propose linear-time web crawling orderings by applying the partition refinement method. Experimental results show that maximum cardinality search has desirable properties in both time complexity and the quality of crawled pages.
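
For reference, maximum cardinality search visits next the unvisited page with the most already-visited neighbors. The naive O(V²) sketch below shows the ordering rule only; the linear-time partition-refinement implementation the paper proposes is not reproduced here.

```python
# Minimal sketch: naive maximum cardinality search (MCS) ordering.
# graph: dict mapping page -> iterable of linked pages (undirected view);
# assumes every linked page also appears as a key of `graph`.
def mcs_order(graph, start):
    weight = {v: 0 for v in graph}     # count of already-visited neighbors
    weight[start] = 1                  # force `start` to be chosen first
    visited, order = set(), []
    while len(visited) < len(graph):
        v = max((u for u in graph if u not in visited), key=weight.__getitem__)
        visited.add(v)
        order.append(v)
        for u in graph[v]:
            if u not in visited:
                weight[u] += 1
    return order
```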

A Study on Focused Crawling of Web Document for Building of Ontology Instances (온톨로지 인스턴스 구축을 위한 주제 중심 웹문서 수집에 관한 연구)

  • Chang, Moon-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.86-93 / 2008
  • Constructing an ontology, which defines complicated semantic relations, requires precision and expert skill. For a well-defined ontology to work in real applications, plentiful instance information for the ontology classes is critical. This study develops a focused crawling algorithm that extracts the documents best fitting a given topic from a Web overflowing with documents. The proposed algorithm gathers documents at high speed by extracting topic-specific links using URL patterns, and it represents the topic fitness of link block text with fuzzy sets, which improves the precision of the focused crawler.
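
The two ingredients the abstract names, URL-pattern link selection and a fuzzy topic-fitness score over link block text, might look like the sketch below. The pattern, term set, and membership formula are illustrative assumptions for a generic topic, not the paper's definitions.

```python
# Minimal sketch: score frontier links by a fuzzy topic fitness in [0, 1]
# combining a URL pattern match with term overlap in the link block text.
import re

TOPIC_URL = re.compile(r"/(ontology|semantic)/", re.I)   # hypothetical topic
TOPIC_TERMS = {"ontology", "class", "instance", "semantic"}

def topic_fitness(url, block_text):
    """0.5 weight for a URL pattern hit, 0.5 for term overlap (capped)."""
    hits = sum(term in block_text.lower() for term in TOPIC_TERMS)
    return 0.5 * bool(TOPIC_URL.search(url)) + 0.5 * min(hits / 3.0, 1.0)

def select_links(links, threshold=0.5):
    """links: iterable of (url, block_text); keep sufficiently topical ones."""
    return [url for url, text in links if topic_fitness(url, text) >= threshold]
```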

Design of a Movie Recommendation System Using Social Network Keyword Analysis (소셜 네트워크 키워드 분석을 통한 영화 추천 시스템 설계)

  • Yang, Xi-tong;Lee, Jong-Won;Chu, Xun;Pyoun, Do-Kil;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.609-611 / 2014
  • Web services have developed in line with the spread of IT skills and smart devices. In particular, unlike existing web services, social network services allow users to communicate freely, without distinguishing between producers and consumers of information, and they strengthen information-sharing relationships across both existing and new acquaintances. In this paper, keywords are collected and analyzed from the communication and information sharing of social network users, and a system that recommends movies matching the appropriate keywords is designed.
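
A minimal version of the keyword-to-recommendation step might look like this sketch: count topic keywords in collected posts, then rank movies by how strongly their keyword profiles overlap. The data layout and scoring are illustrative assumptions, not the paper's design.

```python
# Minimal sketch: recommend movies whose keyword sets best match the
# keywords observed in a user's collected social network posts.
from collections import Counter

def keyword_counts(posts, vocabulary):
    """Count occurrences of known keywords across the collected posts."""
    words = Counter(word for post in posts for word in post.lower().split())
    return {kw: words[kw] for kw in vocabulary}

def recommend(counts, movie_keywords, top_n=3):
    """movie_keywords: dict movie -> set of keywords; rank by overlap weight."""
    scores = {movie: sum(counts.get(kw, 0) for kw in kws)
              for movie, kws in movie_keywords.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```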

HTML Text Extraction Using Tag Path and Text Appearance Frequency (태그 경로 및 텍스트 출현 빈도를 이용한 HTML 본문 추출)

  • Kim, Jin-Hwan;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1709-1715 / 2021
  • To extract the necessary text from a web page accurately, the approach of telling the web crawler the tag and style attributes under which the main content appears has a drawback: the extraction logic must be modified whenever the page layout changes. To address this, a previous study proposed extracting the body text by analyzing the appearance frequency of the text, but its performance varied widely depending on the collection channel of the web pages. Therefore, in this paper, we propose a method that extracts text with high accuracy from various collection channels by analyzing not only the appearance frequency of the text but also the parent tag paths of the text nodes extracted from the DOM tree of the web page.
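
The combination of tag paths and text frequency can be approximated with the standard library alone: group text nodes by their tag path, then keep the path holding the most text. The selection rule below is a simplified assumption, not the paper's exact scoring.

```python
# Minimal sketch: group text nodes by tag path and keep the path with
# the most accumulated text as the likely article body.
from collections import defaultdict
from html.parser import HTMLParser

VOID_TAGS = {"br", "img", "meta", "link", "hr", "input"}  # no closing tag

class TagPathCollector(HTMLParser):
    """Record each text fragment under its tag path, e.g. html/body/div/p."""
    def __init__(self):
        super().__init__()
        self.stack, self.by_path = [], defaultdict(list)
    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
    def handle_data(self, data):
        if data.strip():
            self.by_path["/".join(self.stack)].append(data.strip())

def extract_main_text(html):
    parser = TagPathCollector()
    parser.feed(html)
    best = max(parser.by_path, key=lambda p: sum(map(len, parser.by_path[p])))
    return " ".join(parser.by_path[best])
```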

Deep Learning Frameworks for Cervical Mobilization Based on Website Images

  • Choi, Wansuk;Heo, Seoyoon
    • Journal of International Academy of Physical Therapy Research / v.12 no.1 / pp.2261-2266 / 2021
  • Background: Deep learning research on website medical images has been actively conducted in the field of health care; however, articles related to the musculoskeletal system have been introduced insufficiently, and deep learning-based studies classifying orthopedic manual therapy images are only beginning to appear. Objectives: To create a deep learning model that categorizes cervical mobilization images and to establish a web application to assess its clinical utility. Design: Research and development. Methods: Three types of cervical mobilization images (central posteroanterior (CPA) mobilization, unilateral posteroanterior (UPA) mobilization, and anteroposterior (AP) mobilization) were obtained using the 'Download All Images' function and a web crawler. Unnecessary images were filtered out with 'Auslogics Duplicate File Finder' to obtain the final 144 data (CPA=62, UPA=46, AP=36). Training with three classes was conducted in Teachable Machine. Next, the trained model source was uploaded to a web application cloud integrated development environment (https://ide.goorm.io/) and the application frame was built. The trained model was tested in three environments: Teachable Machine File Upload (TMFU), Teachable Machine Webcam (TMW), and Web Service Webcam (WSW). Results: In the three environments (TMFU, TMW, WSW), the accuracy for CPA mobilization images was 81-96%. The accuracy for UPA mobilization images was 43-94%, a larger deviation than for CPA. The accuracy for AP mobilization images was 65-75%, a smaller deviation than for the other groups. Across the three environments, the average accuracy for CPA was 92%, while the average accuracies for UPA and AP were similar at about 70%. Conclusion: This study suggests that training images of orthopedic manual therapy using machine-learning open software is possible, and that web applications built from the trained model can be used clinically.

A proposal on a proactive crawling approach with analysis of state-of-the-art web crawling algorithms (최신 웹 크롤링 알고리즘 분석 및 선제적인 크롤링 기법 제안)

  • Na, Chul-Won;On, Byung-Won
    • Journal of Internet Computing and Services / v.20 no.3 / pp.43-59 / 2019
  • Today, with the spread of smartphones and the development of social networking services, structured and unstructured big data have accumulated exponentially. Analyzed well, these data yield useful information for predicting the future. Large amounts of data must be collected first in order to analyze big data, and the web is the repository where most of these data are stored. However, because the volume is so large, much of the data carries no needed information alongside the data that carries useful information. This makes it important to collect data efficiently, filtering out data with unnecessary information and collecting only data with useful information. Web crawlers cannot download all pages due to constraints such as network bandwidth, operational time, and data storage, which is why a crawler should avoid visiting the many pages that are irrelevant to what we want and download only important pages as soon as possible. This paper seeks to help resolve these issues. First, we introduce basic web crawling algorithms; for each algorithm, the time complexity and the pros and cons are described, compared, and analyzed. Next, we introduce state-of-the-art web crawling algorithms that improve on the shortcomings of the basic algorithms. In addition, recent research trends show that web crawling algorithms with special purposes, such as collecting sentiment words, are actively studied. As one such special-purpose study, we introduce a sentiment-aware web crawling technique, a proactive approach to web crawling. The results showed that the larger the data are, the higher the performance is and the more space is saved.
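
As a hint of what a sentiment-aware frontier might look like, the sketch below prioritizes links whose anchor text contains sentiment words. The lexicon and scoring are illustrative assumptions; the paper's proactive technique is more involved.

```python
# Minimal sketch: bias the crawl frontier toward links whose anchor text
# carries sentiment words; the tiny lexicon here is purely illustrative.
SENTIMENT_LEXICON = {"good", "bad", "great", "terrible", "love", "hate"}

def sentiment_priority(anchor_text):
    """Fraction of anchor-text words found in the sentiment lexicon."""
    words = anchor_text.lower().split()
    if not words:
        return 0.0
    return sum(word in SENTIMENT_LEXICON for word in words) / len(words)

def order_frontier(links):
    """links: iterable of (url, anchor_text); most sentiment-rich first."""
    return sorted(links, key=lambda pair: sentiment_priority(pair[1]),
                  reverse=True)
```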