• Title/Summary/Keyword: Focused Crawler


Focused Crawler using Ontology and Sentence Analysis (문장 분석 및 온톨로지를 이용한 Focused Crawler)

  • 최광복;김현주;강진범;홍광희;양재영;최중민
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10a
    • /
    • pp.100-102
    • /
    • 2004
  • Because of the spread of the World Wide Web, web documents are growing and changing so rapidly that search engines can no longer keep their indexed documents consistent with the current web. The focused crawler, which selects a specific topic and collects only the documents related to it, has been studied as a way to address this problem. Focused crawlers with various approaches have been developed so far, but all of them evaluate documents through the links that connect them. For a document that covers diverse content, this process is inefficient: the document may be discarded even though relevant content exists in it, or, if it is used, every link on the document is processed. In this paper, the information contained inside a web document is evaluated with an ontology, so that the information a user wants can be found even in documents with diverse content, and only the links related to that information are followed, collecting documents more efficiently and accurately.

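To make the idea concrete, here is a minimal sketch (an assumed design, not the authors' implementation) of scoring anchor text against a flattened ontology so that only topic-related links are followed; the ontology terms, weights, and threshold are hypothetical:

```python
import re
from urllib.parse import urljoin

# Hypothetical flattened ontology: concept term -> weight. A real system
# would derive these from ontology classes and sentence analysis.
ONTOLOGY = {"crawler": 1.0, "ontology": 0.9, "retrieval": 0.7, "index": 0.5}

LINK_RE = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

def topic_score(text):
    """Average ontology weight over the tokens of a text span."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(ONTOLOGY.get(t, 0.0) for t in tokens) / max(len(tokens), 1)

def relevant_links(base_url, html, threshold=0.05):
    """Follow only links whose anchor text matches the topic, so a page
    with diverse content contributes only its relevant links."""
    for href, anchor in LINK_RE.findall(html):
        if topic_score(re.sub(r"<[^>]+>", " ", anchor)) >= threshold:
            yield urljoin(base_url, href)
```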

A Focused Crawler by Segmentation of Context Information (주변정보 분할을 이용한 주제 중심 웹 문서 수집기)

  • Cho, Chang-Hee;Lee, Nam-Yong;Kang, Jin-Bum;Yang, Jae-Young;Choi, Joong-Min
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.697-702
    • /
    • 2005
  • The focused crawler is a topic-driven document-collecting crawler that was suggested as a promising alternative for maintaining up-to-date web document indices in search engines. A major problem inherent in previous focused crawlers is the risk of missing highly relevant documents that are linked from off-topic documents. This problem mainly originates from the lack of consideration of structural information in a document; traditional weighting methods such as TF-IDF employed in document classification can lead to it. In order to improve the performance of focused crawlers, this paper proposes a scheme of locality-based document segmentation to determine the relevance of a document to a specific topic. We segment a document into a set of sub-documents using contextual features around the hyperlinks, and use this information to determine whether the crawler should fetch the documents that are linked from hyperlinks in an off-topic document.
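
The segmentation step could look roughly like the following sketch, which splits a page into text windows around each hyperlink and makes the fetch decision per window; the window size, threshold, and relevance function are assumptions:

```python
import re

ANCHOR_RE = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>.*?</a>', re.I | re.S)

def segments_around_links(html, window=200):
    """Yield (href, context) pairs: the text surrounding each hyperlink."""
    for m in ANCHOR_RE.finditer(html):
        start = max(m.start() - window, 0)
        end = min(m.end() + window, len(html))
        context = re.sub(r"<[^>]+>", " ", html[start:end])  # strip tags
        yield m.group(1), context

def links_to_fetch(html, relevance, threshold=0.3):
    """relevance(text) -> [0, 1], e.g. a TF-IDF cosine with the topic vector.
    A relevant segment rescues its link even on an off-topic page."""
    return [href for href, ctx in segments_around_links(html)
            if relevance(ctx) >= threshold]
```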

A Web Crawler using Hyperlink Structure and Hypertext Categorization Method (Hyperlink구조와 Hypertext 분류방법을 이용한 Web Crawler)

  • Lee, Dong-Won;Hyun, Soon-J.
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.04b
    • /
    • pp.1337-1340
    • /
    • 2002
  • In web information retrieval, the web crawler plays a very important role in collecting web documents and building the index. However, because of the rapid growth of the web, it is impossible for a crawler to collect every web document, and focused web crawlers that collect documents from a specific domain have been actively studied as a way to increase retrieval accuracy. In parallel, many studies have used the link structure of web documents to find the important documents in a collection. However, existing work focuses only on the link structure and has the problem that the connection structure of the entire hypertext must be known. This study presents a method that uses the hyperlink structure together with a hypertext categorization method to decide which of the documents linked from a page are important, and shows that a web crawler using this method collects accurate documents in a specific domain.

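As an illustration only (the paper's categorization model is not reproduced here), a crawler built this way needs a frontier that orders candidate URLs by a categorization score computed from locally available evidence such as anchor text, rather than by scores requiring the full hypertext graph:

```python
import heapq

class Frontier:
    """Best-first URL queue driven by a hypertext categorization score."""

    def __init__(self, classify):
        self.classify = classify  # anchor text -> topic probability in [0, 1]
        self.heap = []
        self.seen = set()

    def push(self, url, anchor_text):
        if url not in self.seen:
            self.seen.add(url)
            # heapq is a min-heap, so negate the score to pop best-first.
            heapq.heappush(self.heap, (-self.classify(anchor_text), url))

    def pop(self):
        neg_score, url = heapq.heappop(self.heap)
        return url, -neg_score
```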

An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo Hae-Sung;Choi Young-Soo;Choi Kyung-Hee;Jung Gi-Hyun;Noh Sang-Uk
    • Journal of Internet Computing and Services
    • /
    • v.7 no.3
    • /
    • pp.155-167
    • /
    • 2006
  • It is desirable for users surfing the Internet to find web pages related to their interests as closely as possible. Toward this end, this paper presents a topic-specific web crawler that computes the degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related web pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested our topic-specific crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy using the set of rules compiled by CN2 was the best among those of the C4.5 and back-propagation learning algorithms. Second, we measured the classification efficiency to determine the best threshold value affecting the degree of relevance. In the third experiment, the consistency of our topic-specific crawler was measured in terms of the number of resulting URLs that overlapped when the crawl started from different URLs. The experimental results imply that our topic-specific crawler was fairly consistent, regardless of the randomly chosen starting URLs.

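A rough sketch of such a relevance computation, under stated assumptions: term weights combine document frequency with a binary-entropy term over the collection, and pages scoring below the threshold are pruned from the preliminary set (the paper's exact weighting and its CN2-compiled rules are not reproduced):

```python
import math
from collections import Counter

def entropy_weights(docs):
    """docs: list of token lists. Weight each term by 1 minus the normalized
    binary entropy of its document frequency, so terms that are either rare
    or pervasive count more than terms appearing in about half the docs."""
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    weights = {}
    for term, f in df.items():
        p = f / n
        h = -(p * math.log(p) + (1 - p) * math.log(1 - p)) if 0 < p < 1 else 0.0
        weights[term] = 1.0 - h / math.log(2)
    return weights

def degree_of_relevance(doc_tokens, topic_terms, weights):
    """Length-normalized, weighted term-frequency score against the topic."""
    tf = Counter(doc_tokens)
    score = sum(tf[t] * weights.get(t, 0.0) for t in topic_terms)
    return score / max(len(doc_tokens), 1)
```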

Dynamic Model Development and Simulation of Crawler Type Excavator (크롤러형 굴삭기의 동역학적 모델 개발 및 시뮬레이션)

  • Kwon, Soon-Ki
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.18 no.6
    • /
    • pp.642-651
    • /
    • 2009
  • The history of excavator design is short, so most design considerations are still focused on static analysis or on simple functional improvements based on static analysis. However, the real forces experienced by each component of an excavator are highly transient and impulsive. Predicting and evaluating the movement of the excavator under dynamic loads in the early design stage, through transient dynamic analysis, therefore plays an important role in reducing development cost, shortening product delivery, decreasing vehicle weight, and optimizing the system design. In this paper, the commercial software packages DADS and ANSYS are used to develop a track model of the crawler-type excavator and to evaluate the performance and dynamic characteristics of the excavator in various simulations. The track is modeled with the DADS Track Vehicle Superelement, and the reaction forces on the track rollers are predicted through driving simulation. In addition, the vibration characteristics of the upper frame and cabin at the low-RPM idle state are evaluated with a rigid-body engine model, and body flexibility effects are considered to determine more accurate joint reaction forces and accelerations under the swing motion of the upper frame.


RSS Channel Recommendation System using Focused Crawler (주제 중심 수집기를 이용한 RSS 채널 추천 시스템)

  • Lee, Young-Seok;Cho, Jung-Woo;Kim, Jun-Il;Choi, Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.52-59
    • /
    • 2006
  • Recently, the internet has grown tremendously, with plenty of rich information, due to an increasing number of specialized personal interests and the popularization of the private cyber space called the blog. Many of today's blogs provide RSS, also known as syndication technology. It enables blog users to receive updates automatically by registering their RSS channel address with an RSS aggregator; in other words, it keeps internet users from wasting their time repeatedly checking a web site for updates. This paper proposes ways to manage an RSS channel searching crawler and the collected RSS channels so that internet users can find the specific RSS channel they want without obstacles. It also proposes an RSS channel ranking based on user popularity. We therefore focus on indexing the collected information and web updates so that users receive information appropriate to their interests.
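
A minimal sketch of the aggregator side, assuming RSS 2.0 feeds and an externally supplied popularity count (the paper derives popularity from user behavior in the aggregator):

```python
import urllib.request
import xml.etree.ElementTree as ET

def channel_title(rss_url):
    """Fetch an RSS 2.0 feed and return its <channel><title> text."""
    with urllib.request.urlopen(rss_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return root.findtext("channel/title", default=rss_url)

def rank_channels(channels):
    """channels: iterable of (rss_url, popularity); most popular first."""
    return sorted(channels, key=lambda c: c[1], reverse=True)
```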

A Study on Focused Crawling of Web Document for Building of Ontology Instances (온톨로지 인스턴스 구축을 위한 주제 중심 웹문서 수집에 관한 연구)

  • Chang, Moon-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.86-93
    • /
    • 2008
  • Constructing an ontology, which defines complicated semantic relations, requires precise, expert skills. For an ontology to be well defined in real applications, a large amount of instance information for its classes is critical. In this study, a crawling algorithm has been developed that extracts the documents best fitting a topic from a Web overflowing with documents. The proposed algorithm gathers documents at high speed by extracting topic-specific links using URL patterns, and the topic fitness of the link-block text is represented by fuzzy sets, which improves the precision of the focused crawler.
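
The two ingredients the abstract names, URL patterns and a fuzzy topic fitness for the link-block text, might be sketched as follows; the patterns and the triangular membership parameters are hypothetical:

```python
import re

# Hypothetical URL patterns marking topic-specific links.
TOPIC_URL_PATTERNS = [re.compile(p)
                      for p in (r"/ontology/", r"semantic", r"\.owl$")]

def matches_topic_url(url):
    return any(p.search(url) for p in TOPIC_URL_PATTERNS)

def fuzzy_fitness(density, low=0.02, peak=0.10, high=0.30):
    """Triangular fuzzy membership of a link block's topic-term density:
    0 at or below `low`, 1 at `peak`, back to 0 at or above `high`."""
    if density <= low or density >= high:
        return 0.0
    if density <= peak:
        return (density - low) / (peak - low)
    return (high - density) / (high - peak)
```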

Crawling algorithm design and experiment for automatic deep web document collection (심층 웹 문서 자동 수집을 위한 크롤링 알고리즘 설계 및 실험)

  • Yun-Jeong, Kang;Min-Hye, Lee;Dong-Hyun, Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.1-7
    • /
    • 2023
  • Deep web collection means entering a query into a search form and collecting the response results. The deep web is estimated to hold about 450 to 550 times more information than the statically constructed surface web. A static page does not show changed information until it is refreshed, whereas a dynamic page updates the necessary information in real time without reloading; a crawler, however, has difficulty accessing such updated information. A way to collect information from the deep web automatically with a crawler is therefore needed. This paper proposes a method of utilizing scripts as ordinary links; to this end, an algorithm that can handle client scripts like regular URLs is proposed and tested. The proposed algorithm focuses on collecting web information by menu navigation and script execution instead of the usual method of entering data into search forms.
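
A minimal sketch of executing client scripts as if they were links; Selenium and the selectors here are assumptions, since the paper specifies only that script-driven menus be crawlable:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def collect_via_menus(start_url, max_clicks=50):
    """Click script-driven menu entries and snapshot each updated DOM."""
    driver = webdriver.Chrome()
    driver.get(start_url)
    pages = []
    # Treat javascript: hrefs and onclick handlers as crawlable "URLs".
    targets = driver.find_elements(
        By.CSS_SELECTOR, "a[href^='javascript:'], [onclick]")
    for element in targets[:max_clicks]:
        try:
            element.click()                   # run the script like a link
            pages.append(driver.page_source)  # capture the updated content
            driver.back()
        except Exception:                     # stale element, blocked click
            continue
    driver.quit()
    return pages
```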

Differences across countries in the impact of developers' collaboration characteristics on performance : Focused on weak tie theory (국가별 오픈소스 소프트웨어 개발자의 네트워크 특성이 개방형 협업 성과에 미치는 영향 : 약한 연결 이론을 중심으로)

  • Lee, Saerom;Baek, Hyunmi;Lee, Uijun
    • The Journal of Information Systems
    • /
    • v.29 no.2
    • /
    • pp.149-171
    • /
    • 2020
  • Purpose: With the advent of the 4th Industrial Revolution, related technologies such as IoT, big data, and artificial intelligence are being developed not only by specific companies but also by large numbers of unspecified developers, a mode called open collaboration. For this reason, it is important to understand the kinds of collaboration that lead to successful open collaboration. Design/methodology/approach: We focused on the relationship between the collaboration characteristics and the collaboration performance of developers participating in open source software development, a representative form of open collaboration. Specifically, we created a network for each country and derived individual developers' characteristics from it, such as collaboration scope and collaboration intensity. We compared and analyzed the characteristics of developers across countries and explored whether the indicators differ. We developed a web crawler for GitHub, a representative OSS development site, and collected data on developers located in China, Japan, Korea, the United States, and Canada. Findings: China showed collaboration characteristics that fit weak tie theory, while the other countries did not yield consistent results. This study confirms the need for exploratory, country-by-country research on collaboration characteristics, given that open collaboration characteristics and software development environments differ across countries.
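
For reference, collecting such data through the public GitHub REST API could start from something like this sketch (the endpoints are the public API; the paper's own crawler and repository sample are not published here):

```python
import requests

API = "https://api.github.com"

def contributors(owner, repo, token=None):
    """Contributors of one repository: edges of the collaboration network."""
    headers = {"Authorization": f"token {token}"} if token else {}
    return requests.get(f"{API}/repos/{owner}/{repo}/contributors",
                        headers=headers, timeout=10).json()

def user_location(login, token=None):
    """Profile location, used to assign a developer to a country."""
    headers = {"Authorization": f"token {token}"} if token else {}
    return requests.get(f"{API}/users/{login}",
                        headers=headers, timeout=10).json().get("location")
```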

Web Crawling and PageRank Calculation for Community-Limited Search (커뮤니티 제한 검색을 위한 웹 크롤링 및 PageRank 계산)

  • Kim Gye-Jeong;Kim Min-Soo;Kim Yi-Reun;Whang Kyu-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.1-3
    • /
    • 2005
  • Recently, many techniques for improving search quality have been studied in the field of web search; representative lines of work include restricted search, focused crawling, and web clustering. However, restricted search cannot limit the search scope to semantically related sites; focused crawling clusters at query time, so query processing takes a long time; and web clustering incurs a large overhead for clustering a huge number of web pages. This paper solves these problems by proposing community-limited search, which restricts the search scope to a specific community, and a cluster crawler as the method for finding the communities. It also proposes a method of computing PageRank in two phases using the communities: in the first phase, PageRank is computed locally for each community; in the second phase, a global PageRank is computed on top of the local results. Compared with the method proposed by Wang, the proposed method reduces the error of the PageRank approximation to about 59%.

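A minimal sketch of the two-phase idea under assumptions (the damping factor, iteration count, and combination rule are illustrative; the paper's exact formulation is not reproduced): phase 1 runs PageRank inside each community, phase 2 runs it on the collapsed community graph, and the two scores are multiplied:

```python
def pagerank(nodes, edges, d=0.85, iters=50):
    """Plain power iteration; edges maps node -> list of out-neighbors.
    Links leaving `nodes` are ignored (dangling mass is dropped)."""
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for n in nodes:
            outs = [m for m in edges.get(n, []) if m in nodes]
            for m in outs:
                nxt[m] += d * pr[n] / len(outs)
        pr = nxt
    return pr

def community_graph(communities, edges):
    """Collapse pages to communities: A -> B if any page of A links into B."""
    page_to_c = {p: c for c, pages in communities.items() for p in pages}
    cg = {c: set() for c in communities}
    for src, outs in edges.items():
        for dst in outs:
            a, b = page_to_c.get(src), page_to_c.get(dst)
            if a is not None and b is not None and a != b:
                cg[a].add(b)
    return {c: list(outs) for c, outs in cg.items()}

def two_phase_pagerank(communities, edges):
    """communities: dict community_id -> set of pages."""
    local = {}
    for c, pages in communities.items():          # phase 1: local per community
        local.update(pagerank(pages, edges))
    cpr = pagerank(set(communities),              # phase 2: community level
                   community_graph(communities, edges))
    page_to_c = {p: c for c, pages in communities.items() for p in pages}
    # Combine: scale each local score by its community's global weight.
    return {p: s * cpr[page_to_c[p]] * len(communities)
            for p, s in local.items()}
```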