• Title/Summary/Keyword: hyperlinks

Search results: 62

The Seamless Browsing: Enhancing the users' speed of hyperlink navigation with zooming and thumbnail methods (줌과 하이퍼링크 미리 보기에 기반한 웹 탐색 성능 향상 -IPTV 환경에서 새로운 웹 탐색 방법에 관한 연구)

  • Yoo, Byung-In; Lea, Jong-Ho; Kim, Yeun-Bae
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.326-331 / 2008
  • We present seamless browsing, a new zooming technique based on zoom ratios and the distances of hyperlinks from the pointer. In most cases, users have to activate a link of interest after guessing its target content from the insufficient information in the link label or title. We propose that a web browser display a limited number of hyperlinks in an area around the pointer in a distinguished way, e.g., as thumbnails of different sizes, transparency, or style. If a user zooms in on the pointer area, the web browser displays varying images of the hyperlink targets according to the zoom ratio, and finally transfers to the target page nearest to the pointer position. This method allows users to easily select a hyperlink based on the rich information given by zoomable thumbnails, and to transition seamlessly through web pages just by zooming. In our experiments, the results show that seamless browsing significantly outperforms the legacy method.
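The distance- and zoom-dependent preview behavior described above can be sketched in Python. This is an illustrative model only, not the authors' implementation: the names (`thumbnail_scale`, `nearest_link`), the linear proximity falloff, and the `max_dist` cutoff are all assumptions.

```python
import math

def thumbnail_scale(link_pos, pointer_pos, zoom_ratio, max_dist=200.0):
    """Scale factor for a hyperlink thumbnail: larger when the link is
    close to the pointer and the zoom ratio is high (hypothetical model)."""
    dist = math.hypot(link_pos[0] - pointer_pos[0],
                      link_pos[1] - pointer_pos[1])
    if dist > max_dist:
        return 0.0  # too far from the pointer: no preview shown
    proximity = 1.0 - dist / max_dist
    return round(proximity * zoom_ratio, 3)

def nearest_link(links, pointer_pos):
    """The link the zoom gesture would finally transfer to."""
    return min(links, key=lambda p: math.hypot(p[0] - pointer_pos[0],
                                               p[1] - pointer_pos[1]))
```

A nearby link thus gets a larger preview than a distant one at the same zoom level, and zooming past the threshold commits to the link closest to the pointer.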

A Focused Crawler by Segmentation of Context Information (주변정보 분할을 이용한 주제 중심 웹 문서 수집기)

  • Cho, Chang-Hee; Lee, Nam-Yong; Kang, Jin-Bum; Yang, Jae-Young; Choi, Joong-Min
    • The KIPS Transactions: Part B / v.12B no.6 s.102 / pp.697-702 / 2005
  • The focused crawler is a topic-driven document-collecting crawler that has been suggested as a promising alternative for maintaining up-to-date web document indices in search engines. A major problem inherent in previous focused crawlers is the risk of missing highly relevant documents that are linked from off-topic documents. This problem mainly originates from the lack of consideration of structural information in a document; traditional weighting methods such as TFIDF employed in document classification can lead to it. In order to improve the performance of focused crawlers, this paper proposes a scheme of locality-based document segmentation to determine the relevance of a document to a specific topic. We segment a document into a set of sub-documents using contextual features around the hyperlinks, and use this information to determine whether the crawler should fetch the documents linked from hyperlinks in an off-topic document.
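A minimal sketch of the locality-based idea, under the assumption that a crude regex-based segmentation stands in for the paper's richer contextual features: each hyperlink becomes a sub-document of its anchor text plus surrounding text, and the crawl decision is made per link rather than per page.

```python
import re

def segment_link_contexts(html, window=40):
    """Split a page into per-hyperlink sub-documents: each anchor's URL
    plus the text around it (a simplified stand-in for the paper's
    locality-based segmentation)."""
    contexts = []
    for m in re.finditer(r'<a\s+href="([^"]+)"[^>]*>(.*?)</a>', html, re.S):
        url, anchor_text = m.group(1), m.group(2)
        start = max(0, m.start() - window)
        surrounding = re.sub(r'<[^>]+>', ' ', html[start:m.end() + window])
        contexts.append((url, (anchor_text + ' ' + surrounding).lower()))
    return contexts

def should_fetch(context_text, topic_terms):
    """Fetch a linked page only if its local context mentions the topic,
    even when the page as a whole is off-topic."""
    return any(t in context_text for t in topic_terms)
```

This is how a relevant link inside an off-topic page can still be followed: the decision depends on the sub-document around the link, not on the whole page's topic score.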

Combining Multiple Sources of Evidence to Enhance Web Search Performance

  • Yang, Kiduk
    • Journal of Korean Library and Information Science Society / v.45 no.3 / pp.5-36 / 2014
  • The Web is rich with various sources of information that go beyond the contents of documents, such as hyperlinks and manually classified directories of Web documents such as Yahoo. This research extends past fusion IR studies, which have repeatedly shown that combining multiple sources of evidence (i.e. fusion) can improve retrieval performance, by investigating the effects of combining three distinct retrieval approaches for Web IR: the text-based approach that leverages document texts, the link-based approach that leverages hyperlinks, and the classification-based approach that leverages Yahoo categories. Retrieval results of text-, link-, and classification-based methods were combined using variations of the linear combination formula to produce fusion results, which were compared to individual retrieval results using traditional retrieval evaluation metrics. Fusion results were also examined to ascertain the significance of overlap (i.e. the number of systems that retrieve a document) in fusion. The analysis of results suggests that the solution spaces of text-, link-, and classification-based retrieval methods are diverse enough for fusion to be beneficial while revealing important characteristics of the fusion environment, such as effects of system parameters and relationship between overlap, document ranking and relevance.
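The linear combination step described above can be sketched as follows; the min-max score normalization and the weight values are illustrative assumptions, not the paper's exact parameters:

```python
def normalize(scores):
    """Min-max normalize one system's retrieval scores to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def linear_fusion(systems, weights):
    """Weighted linear combination of several systems' scores.
    Documents retrieved by more systems (higher overlap) accumulate
    contributions and tend to rise in the fused ranking."""
    fused = {}
    for name, scores in systems.items():
        for doc, s in normalize(scores).items():
            fused[doc] = fused.get(doc, 0.0) + weights[name] * s
    return sorted(fused, key=fused.get, reverse=True)
```

With text-, link-, and classification-based runs as the inputs, the overlap effect the abstract mentions falls out naturally: a document scored by all three systems sums three weighted contributions.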

City Networks of Korea: An Internet Hyperlinks Interpretation (인터넷 하이퍼링크로 본 도시 네트워크)

  • 허우긍
    • Journal of the Korean Geographical Society / v.38 no.4 / pp.518-534 / 2003
  • A number of previous studies have maintained that information technologies, due to their ability to overcome distance, can nurture an innovative class of polycentric urban configurations, i.e., network cities. The present study intends to clarify whether any network relationship has recently emerged among Korean cities through the advancement of information technology. The analyses focus on the geography of Korean national domains (.kr domains) and the hyperlink associations among three major types of domains, namely commercial, academic, and organizational domains. The study findings indicate that the advancement of the global economy and the information era appears to be enhancing, rather than reducing, the status of the primate city. Seoul dominates the entire nation, forming an enclave in the production and consumption of information. Only the domains of educational institutes show network-like relations among local centers to a certain extent. The paper concludes with a discussion of the implications of the findings for future research and 'spatial' policy measures.

The Similarities and Differences between the Hyperlinking Practice of Scholarly E-journal Authors and the Traditional Citing Practice (전자 학술지 안에서 하이퍼링크 행위와 전통적인 인용 행위 사이의 유사점과 차이점들에 대한 비교 분석)

  • Kim, Hak-Joon
    • Journal of the Korean Society for Library and Information Science / v.33 no.4 / pp.47-63 / 1999
  • The purpose of this study was to identify the similarities and differences between the hyperlinking practice of scholarly e-journal authors and conventional citing practice. 230 scholarly e-journal articles containing at least two hyperlinks, and their authors, were selected as the sample for the study. A mail questionnaire survey of the authors of the sampled e-journal articles was conducted (with a response rate of 70%) to collect quantitative data on the authors' hyperlinking motivations. In addition, a content analysis of the e-journal articles and the source documents hyperlinked in the articles was conducted to examine the patterns of hyperlinks. A comparison between the quantitative results of this hyperlinking study and previous citation studies was made. The results revealed not only some similarities but also several significant differences between them.

Analysis of a Security Survey for Smartphones

  • Nam, Sang-Zo
    • International Journal of Contents / v.11 no.3 / pp.14-23 / 2015
  • This paper presents the findings of a study in which students at a four-year university were surveyed in an effort to analyze and verify the differences in perceived security awareness, security-related activities, and security damage experiences when using smartphones, based on demographic variables such as gender, academic year, and college major. Moreover, the perceived security awareness items and security-related activities were tested to verify whether they affect the students' security damage experience. Based on survey data obtained from 592 participants, the findings indicate that demographic differences exist for some of the survey question items. The majority of the male students replied "affirmative" to some of the questions related to perceived security awareness and "enthusiastic" to questions about security-related activities. Some academic-year differences exist in the responses on perceived security awareness and security-related activities. On the whole, freshmen had the lowest level of security awareness. Security alertness appears very high among sophomores but decreases as the students become older. While the difference in perceived security awareness based on college major was not significant, the difference in some security-related activities based on that variable was significant. No significant difference was found in items such as storing private information in smartphones and the frequency of implementation of security applications based on the college-major variable. However, differences among the college majors were verified in clicking hyperlinks in unknown SMS messages and in the number of security applications in smartphones. No differences were found in security damage experiences based on gender, academic year, or college major. Security awareness items had no impact on the experience of security damage in smartphones. However, some security activities, such as storing resident registration numbers in a smartphone, clicking hyperlinks in unknown SMS messages, the number of security apps in a smartphone, and the frequency of implementation of security apps, did have an impact on security damage.

Design and Implementation of a WebEditor Specialized for Web-Site Maintenance (유지보수에 특화된 웹 문서 작성기의 설계 및 구현)

  • Cho, Young-Suk; Kwon, Yong-Ho; Do, Jae-Su
    • Convergence Security Journal / v.7 no.4 / pp.73-81 / 2007
  • Users of the World Wide Web experience difficulties in retrieving pertinent information due to the increased volume of information provided by Web sites and the complex structure of Web documents, which are continuously created, deleted, restructured, and updated. Web providers tend to put less effort into maintaining their sites than into creating them, due to the expense of maintenance. If information on the relationships among Web documents and on their validity is provided to Web managers as well as Web developers, they can better serve users. In order to grasp the whole structure of a Web site and to verify the validity of its hyperlinks, traversal and analysis of the hyperlinks in a Web document are required to provide information for effective and efficient creation and maintenance of the Web. In this paper, we introduce a Web editor specialized for Web maintenance. We emphasize two aspects: first, the analysis of HTML tags to extract hyperlink information; second, the establishment of relationships among hyperlinked documents and the verification of their validity.
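The hyperlink extraction and validity check can be sketched with Python's standard HTML parser. Checking link targets against an in-memory page set stands in for real HTTP validation, and the names here (`LinkExtractor`, `site_link_map`) are hypothetical, not the tool's actual API.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from a document's <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

def site_link_map(pages):
    """pages: {url: html}. Returns each page's outgoing links and,
    for each link, whether its target exists within the site."""
    link_map = {}
    for url, html in pages.items():
        parser = LinkExtractor()
        parser.feed(html)
        link_map[url] = [(href, href in pages) for href in parser.links]
    return link_map
```

The resulting map gives both pieces of information the abstract calls for: the relationship structure among documents and which hyperlinks are broken.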

Comparing Feature Selection Methods in Spam Mail Filtering

  • Kim, Jong-Wan; Kang, Sin-Jae
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.17-20 / 2005
  • In this work, we compared several feature selection methods in the field of spam mail filtering. The proposed fuzzy inference method outperforms the information gain and chi-squared test methods as a feature selection method in terms of error rate. In the case of junk mail, since the mail body has little text information, it provides insufficient hints to distinguish spam mails from legitimate ones. To address this problem, we follow hyperlinks contained in the email body, fetch the contents of the remote web pages, and extract hints from both the original email body and the fetched web pages. A two-phase approach is applied to filter spam mails, in which definite hints are used first, and then less definite textual information is used. In our experiment, the proposed two-phase method achieved an improvement in recall of 32.4% on average over using either the first phase or the second phase alone.
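The two-phase filtering described above can be sketched as follows. The blacklist-term scoring and threshold are illustrative stand-ins for the paper's fuzzy-inference features, and `fetch_page` is injected so the sketch needs no network access.

```python
import re

def classify_mail(body, fetch_page, blacklist_terms, spam_threshold=2):
    """Two-phase sketch: phase 1 uses definite hints in the body itself;
    phase 2 follows hyperlinks and scores the fetched page text."""
    # Phase 1: a definite hint in the mail body decides immediately.
    text = body.lower()
    if any(term in text for term in blacklist_terms):
        return 'spam'
    # Phase 2: follow hyperlinks and inspect the remote content.
    urls = re.findall(r'https?://\S+', body)
    hits = sum(term in fetch_page(u).lower()
               for u in urls for term in blacklist_terms)
    return 'spam' if hits >= spam_threshold else 'legitimate'
```

This mirrors the abstract's point: when the body alone is too short to decide, the linked pages supply the missing evidence.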

A Study on an Algorithm for Computing Resource Importance in RDF Knowledge Bases (RDF 지식 베이스의 자원 중요도 계산 알고리즘에 대한 연구)

  • No, Sang-Gyu; Park, Hyeon-Jeong; Park, Jin-Su
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 2007.05a / pp.123-137 / 2007
  • The information space of the semantic web, comprising various resources, properties, and relationships, is more complex than that of the WWW, which comprises just documents and hyperlinks. Therefore, ranking methods for the semantic web should be modified to reflect the complexity of this information space. In this paper we propose a method of ranking query results from RDF (Resource Description Framework) knowledge bases. The ranking criterion is the importance of a resource, computed based on the link structure of the RDF graph. Our method is expected to solve a few problems in the prior research, including the Tightly-Knit Community Effect. We illustrate our method using examples and discuss directions for future research.
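A link-structure importance computation of the kind described can be sketched as a PageRank-style iteration over RDF triples. The paper's actual algorithm (and its handling of the Tightly-Knit Community Effect) differs, so this is only a baseline illustration; a fuller method would also weight edges by predicate type.

```python
def rank_resources(triples, damping=0.85, iters=50):
    """PageRank-style importance over an RDF graph, treating each
    (subject, predicate, object) triple as a link subject -> object."""
    nodes = {s for s, _, o in triples} | {o for _, _, o in triples}
    out = {n: [] for n in nodes}
    for s, _, o in triples:
        out[s].append(o)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for o in out[n]:
                    new[o] += share
            else:  # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank
```

A resource referenced by many triples accumulates importance, which is the intuition behind ranking RDF query results by graph structure.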

Web Image Clustering with Text Features and Measuring its Efficiency

  • Cho, Soo-Sun
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.699-706 / 2007
  • This article presents an approach to improving the clustering of Web images by using high-level semantic features from text information relevant to Web images as well as low-level visual features of the images themselves. These high-level text features can be obtained from image URLs and file names, page titles, hyperlinks, and surrounding text. As the clustering algorithm, a self-organizing map (SOM) proposed by Kohonen is used. To evaluate the clustering efficiency of SOMs, we propose a simple but effective measure indicating the accumulativeness of same-class images and the perplexity of class distributions. Our approach advances existing measures by defining and using two new measures, accumulativeness on the best clustering node and concentricity, to evaluate the clustering efficiency of SOMs. The experimental results show that the high-level text features are more useful in SOM-based Web image clustering.
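One possible reading of the accumulativeness measure, sketched under the assumption that it is the share of a class's images falling into that class's single best node; the paper's exact definition may differ.

```python
from collections import Counter

def accumulativeness(node_assignments, target_class):
    """node_assignments: list of (node_id, class_label) pairs.
    Returns the fraction of target_class images that land in the
    single node holding the most of them: 1.0 means the class
    accumulates entirely in one cluster."""
    per_node = Counter(node for node, label in node_assignments
                       if label == target_class)
    total = sum(per_node.values())
    return max(per_node.values()) / total if total else 0.0
```

Under this reading, a clustering where all 'cat' images share one SOM node scores 1.0 for that class, while a class scattered over many nodes scores near 1/k.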
