• Title/Summary/Keyword: Web Search Rank

Semantic Web based Information Retrieval System for the automatic integration framework (자동화된 통합 프레임워크를 위한 시맨틱 웹 기반의 정보 검색 시스템)

  • Choi Ok-Kyung;Han Sang-Yong
    • The KIPS Transactions:PartC / v.13C no.1 s.104 / pp.129-136 / 2006
  • An information retrieval system aims to provide fast and accurate information to users. However, current search systems rely on plain syntactic analysis, which makes it difficult for users to find exactly the information they need. This paper proposes SW-IRS (Semantic Web-based Information Retrieval System), which uses an ontology server. The proposed system aims to maximize the efficiency and accuracy of retrieving unstructured and semi-structured documents by combining agent-based automatic classification technology with Semantic Web-based retrieval methods. For interoperability and easy integration, an RDF-based repository system is supported, and a newly developed ranking algorithm is applied to rank search results and provide more accurate and reliable information. Finally, a performance evaluation using the new ranking algorithm is carried out to verify the efficiency and accuracy of the proposed retrieval system.
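
The abstract above describes an RDF-based repository from which documents are retrieved and ranked, but gives no concrete interface. The following is a minimal sketch of that storage-and-retrieval idea using Python's rdflib; the ex: namespace, the keyword property, and the sample documents are hypothetical, and the paper's ontology server and ranking algorithm are not reproduced here.

```python
# Minimal sketch: an RDF-backed document store queried by keyword.
# The ex: namespace, properties, and documents are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# Register a few documents with keyword annotations (hypothetical data).
g.add((EX.doc1, RDF.type, EX.Document))
g.add((EX.doc1, EX.keyword, Literal("semantic web")))
g.add((EX.doc2, RDF.type, EX.Document))
g.add((EX.doc2, EX.keyword, Literal("information retrieval")))

# Retrieve documents whose keyword annotation matches the query string.
query = "semantic web"
hits = [doc for doc, _, kw in g.triples((None, EX.keyword, None))
        if query in str(kw)]
print(hits)  # [rdflib.term.URIRef('http://example.org/doc1')]
```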

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available, ranking search results effectively and efficiently will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have played an essential role in the World Wide Web (WWW), and their effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directed links, similar to the Web graph, so link-structure-based ranking methods seem highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a recursive 'refers to' property corresponding to the hyperlinks, whereas the Semantic Web encompasses various kinds of classes and properties; consequently, ranking methods used in the WWW should be modified to reflect the complexity of the information space of the Semantic Web. Previous research addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or the resources should be described in some detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon in which pages that are less important yet densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm that can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and will prove superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations reported in the previous research. In addition, we propose two ways to incorporate data-type properties, which previous methods have not employed even when they have some bearing on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
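
Since the abstract walks through PageRank and Kleinberg's HITS as background for its class-oriented RDF ranking, a minimal PageRank power-iteration sketch on a toy directed graph may help make the link-analysis step concrete. It illustrates only the generic idea the abstract describes; the paper's class-oriented, property-weighted algorithm is not reproduced, and the toy graph and damping factor are illustrative assumptions.

```python
# Minimal PageRank power iteration on a toy directed graph (adjacency lists).
# Illustrates the generic link-analysis idea only, not the paper's
# class-oriented, property-weighted ranking of RDF resources.
def pagerank(out_links, damping=0.85, iters=50):
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, targets in out_links.items():
            if not targets:                       # dangling node: spread evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                share = damping * rank[v] / len(targets)
                for u in targets:
                    new[u] += share
        rank = new
    return rank

toy_graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]))
```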

Sorting Instagram Hashtags all the Way throw Mass Tagging using HITS Algorithm

  • D.Vishnu Vardhan;Dr.CH.Aparna
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.93-98 / 2023
  • Instagram is one of the fastest-growing online photo-sharing social web services, where users share their images and videos with other users. Image tagging is an essential step for developing Automatic Image Annotation (AIA) methods based on the learning-by-example paradigm. Hashtags can be used on just about any social media platform, but they are most popular on Twitter and Instagram. Using hashtags is essentially a way to group conversations or content around a certain topic, making it easy for people to find content that interests them. In practice, only about 20% of Instagram hashtags on average are related to the actual visual content of the image they accompany, i.e., they are descriptive hashtags, while many irrelevant hashtags, i.e., stop-hashtags, are used across totally different images just to gather clicks and enhance searchability. Hence, this work presents a method for sorting Instagram hashtags through mass tagging using the HITS (Hyperlink-Induced Topic Search) algorithm. The hashtags are sorted into several groups according to the Jensen-Shannon divergence between any two hashtags. This approach provides an effective and consistent way of finding pairs of Instagram images and hashtags, which leads to representative and noise-free training sets for content-based image retrieval. The HITS algorithm is first used to rank the annotators in terms of their effectiveness in the crowd-tagging task and then to identify the right hashtags per image.
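
The abstract applies HITS to rank annotators and surface descriptive hashtags; below is a minimal hub/authority power-iteration sketch on a toy bipartite graph in which annotators link to the hashtags they assigned. The toy data and the simple L2 normalization are assumptions for illustration, and the paper's Jensen-Shannon grouping step is not shown.

```python
# Minimal HITS sketch: annotators act as hubs, hashtags as authorities.
# Toy data; the paper's crowd-tagging setup and Jensen-Shannon grouping
# of hashtags are not reproduced here.
import math

links = {  # annotator -> hashtags they assigned (hypothetical)
    "ann1": ["#golf", "#swing", "#sale"],
    "ann2": ["#golf", "#swing"],
    "ann3": ["#sale"],
}
hashtags = sorted({h for hs in links.values() for h in hs})

hub = {a: 1.0 for a in links}
auth = {h: 1.0 for h in hashtags}
for _ in range(30):
    # authority: sum of hub scores of annotators pointing to the hashtag
    auth = {h: sum(hub[a] for a, hs in links.items() if h in hs) for h in hashtags}
    # hub: sum of authority scores of hashtags the annotator assigned
    hub = {a: sum(auth[h] for h in hs) for a, hs in links.items()}
    # L2-normalize both score vectors to keep the iteration stable
    na = math.sqrt(sum(v * v for v in auth.values()))
    nh = math.sqrt(sum(v * v for v in hub.values()))
    auth = {h: v / na for h, v in auth.items()}
    hub = {a: v / nh for a, v in hub.items()}

print(sorted(auth.items(), key=lambda kv: -kv[1]))  # strongly endorsed hashtags first
```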

Blog Search Method using User Relevance Feedback and Guru Estimation (사용자 적합성 피드백과 구루 평가 점수를 고려한 블로그 검색 방법)

  • Jeong, Kyung-Seok;Park, Hyuk-Ro
    • The KIPS Transactions:PartB / v.15B no.5 / pp.487-492 / 2008
  • Most Web search engines use ranking methods that take both the relevance and the importance of documents into consideration. The importance of a document denotes how useful it is to general users. One of the most successful methods for estimating document importance has been the PageRank algorithm, which uses the hyperlink structure of the Web for the estimation. In this paper, we propose a new importance estimation algorithm for the blog environment. The proposed method first calculates the importance of each document from users' bookmarks and click counts. Then, the Guru point of a blogger is computed as the sum of the importance points of all documents he or she wrote. Finally, the Guru points are reflected back into the document ranking. Our experiments show that the proposed method achieves a higher correlation coefficient with the correct answers than the traditional methods.
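
The abstract describes the method in three steps: compute each document's importance from bookmarks and click counts, sum the importance of a blogger's documents into a Guru point, and feed that back into the ranking. A minimal sketch of that flow follows; the weighting of bookmarks versus clicks and the final combination formula are not given in the abstract and are assumed here purely for illustration.

```python
# Minimal sketch of the described flow: document importance from bookmarks
# and clicks -> Guru point per blogger -> Guru-adjusted document score.
# The 0.7/0.3 weights and the 0.1 blend factor are illustrative assumptions;
# the abstract does not state them.
docs = {  # doc_id -> (author, bookmarks, clicks), hypothetical data
    "d1": ("alice", 12, 300),
    "d2": ("alice", 3, 80),
    "d3": ("bob", 1, 40),
}

def importance(bookmarks, clicks):
    return 0.7 * bookmarks + 0.3 * clicks           # assumed linear weighting

imp = {d: importance(b, c) for d, (_, b, c) in docs.items()}

# Guru point of a blogger = sum of importance of the documents he/she wrote.
guru = {}
for d, (author, _, _) in docs.items():
    guru[author] = guru.get(author, 0.0) + imp[d]

# Reflect the author's Guru point back into each document's ranking score.
score = {d: imp[d] + 0.1 * guru[docs[d][0]] for d in docs}   # assumed blend
print(sorted(score.items(), key=lambda kv: -kv[1]))
```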

Personalized Document Snippet Extraction Method using Fuzzy Association and Pseudo Relevance Feedback (의사연관 피드백과 퍼지 연관을 이용한 개인화 문서 스니핏 추출 방법)

  • Park, Seon;Jo, Gwang-Mun;Yang, Hu-Yeol;Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.137-142 / 2012
  • A snippet is summary information representing a web page, which a search engine provides to the user. Snippets and page rank strongly influence which web pages users visit. Users sometimes visit pages that do not match their intention when they rely on snippets, because existing snippet extraction methods have difficulty accurately capturing user intention. To solve this problem, this paper proposes a new snippet extraction method using fuzzy association and pseudo relevance feedback. The proposed method uses pseudo relevance feedback to expand the user's query, and then uses the fuzzy association between the expanded query and the web pages to extract snippets that better reflect the semantics of the user's intention. The experimental results demonstrate that the proposed method achieves better snippet extraction performance than the other methods.
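
The method above first expands the user's query with pseudo relevance feedback (treating the top-ranked results as relevant) and then scores pages against the expanded query. Below is a minimal sketch of the expansion step only; the tokenization, the top-k cutoff, and the number of added terms are assumptions, and the paper's fuzzy-association scoring is not reproduced.

```python
# Minimal pseudo relevance feedback sketch: assume the top-k initially
# retrieved documents are relevant and add their most frequent terms to
# the query. Tokenization, k, and the number of expansion terms are
# illustrative assumptions only.
from collections import Counter

def expand_query(query, ranked_docs, k=3, n_terms=2):
    terms = Counter()
    for doc in ranked_docs[:k]:                    # pseudo-relevant documents
        terms.update(w.lower() for w in doc.split())
    for w in query.lower().split():                # don't re-add query words
        terms.pop(w, None)
    extra = [w for w, _ in terms.most_common(n_terms)]
    return query.split() + extra

ranked = ["jaguar speed engine", "jaguar engine specs", "jaguar cat habitat"]
print(expand_query("jaguar", ranked))  # e.g. ['jaguar', 'engine', 'speed']
```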

Analyzing the Main Paths and Intellectual Structure of the Data Literacy Research Domain (데이터 리터러시 연구 분야의 주경로와 지적구조 분석)

  • Jae Yun Lee
    • Journal of the Korean Society for Information Management / v.40 no.4 / pp.403-428 / 2023
  • This study investigates the development path and intellectual structure of data literacy research, aiming to identify emerging topics in the field. A comprehensive search for data literacy-related articles on the Web of Science reveals that the field is primarily concentrated in Education & Educational Research and Information Science & Library Science, accounting for nearly 60% of the total. Citation network analysis, employing the PageRank algorithm, identifies key papers with high citation impact across various topics. To accurately trace the development path of data literacy research, an enhanced PageRank main path algorithm is developed, which overcomes the limitations of existing methods confined to the Education & Educational Research field. Keyword bibliographic coupling analysis is employed to unravel the intellectual structure of data literacy research. Utilizing the PNNC algorithm, the detailed structure and clusters of the derived keyword bibliographic coupling network are revealed, including two large clusters, one with two smaller clusters and the other with five smaller clusters. The growth index and mean publishing year of each keyword and cluster are measured to pinpoint emerging topics. The analysis highlights the emergence of critical data literacy for social justice in higher education amidst the ongoing pandemic and the rise of AI chatbots. The enhanced PageRank main path algorithm, developed in this study, demonstrates its effectiveness in identifying parallel research streams developing across different fields.
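
The study combines PageRank over a citation network with main path analysis; a minimal sketch of the PageRank half on a toy citation graph with networkx is given below. The enhanced PageRank main path algorithm developed in the paper and the PNNC clustering are not reproduced, and the toy edges are assumptions.

```python
# Minimal sketch: PageRank over a toy citation network with networkx.
# Edges point from the citing paper to the cited paper, so high-PageRank
# nodes are highly (and transitively) cited. The paper's enhanced
# PageRank main path algorithm and PNNC clustering are not reproduced.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([          # hypothetical citations: citing -> cited
    ("P3", "P1"), ("P4", "P1"), ("P4", "P2"),
    ("P5", "P3"), ("P5", "P4"), ("P6", "P4"),
])

scores = nx.pagerank(G, alpha=0.85)
for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```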

Snippet Extraction Method using Fuzzy Implication Operator and Relevance Feedback (연관 피드백과 퍼지 함의 연산자를 이용한 스니핏 추출 방법)

  • Park, Sun;Shim, Chun-Sik;Lee, Seong-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.3 / pp.424-431 / 2012
  • In information retrieval, a search engine provides the user with a ranking of web pages and summary information about each page. A snippet is summary information representing a web page, and whether a user visits a page is strongly affected by its snippet. Users sometimes visit pages that do not match their intention when they rely on snippets, because existing snippet extraction methods have difficulty accurately capturing user intention. To solve this problem, this paper proposes a new snippet extraction method using a fuzzy implication operator and relevance feedback. The proposed method uses relevance feedback to expand the user's query, and then applies the fuzzy implication operator between the expanded query and the web pages to extract snippets that better reflect the semantics of the user's intention. The experimental results demonstrate that the proposed method achieves better snippet extraction performance than the other methods.
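
The abstract scores candidate snippet text with a fuzzy implication operator between the expanded query and the page, but does not name the operator. The sketch below uses the Lukasiewicz implication min(1, 1 - a + b) over normalized term weights purely as an assumed stand-in, and the relevance-feedback expansion step is omitted.

```python
# Minimal sketch: score candidate snippet sentences with a fuzzy
# implication between query-term membership and sentence-term membership.
# The Lukasiewicz implication min(1, 1 - a + b) is an assumed stand-in;
# the abstract does not specify the operator used in the paper.
def memberships(text):
    words = text.lower().split()
    top = max(words.count(w) for w in set(words))
    return {w: words.count(w) / top for w in set(words)}   # normalized tf

def implication(a, b):
    return min(1.0, 1.0 - a + b)                            # Lukasiewicz

def sentence_score(query, sentence):
    q, s = memberships(query), memberships(sentence)
    # degree to which "term matters in the query" implies "term is in the sentence"
    return sum(implication(q[w], s.get(w, 0.0)) for w in q) / len(q)

query = "fuzzy snippet extraction"
sentences = [
    "We extract a snippet using a fuzzy implication operator.",
    "The weather was pleasant throughout the conference.",
]
best = max(sentences, key=lambda s: sentence_score(query, s))
print(best)
```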

A Study of Perception of Golfwear Using Big Data Analysis (빅데이터를 활용한 골프웨어에 관한 인식 연구)

  • Lee, Areum;Lee, Jin Hwa
    • Fashion & Textile Research Journal / v.20 no.5 / pp.533-547 / 2018
  • The objective of this study is to examine the perception of golfwear and related trends based on major keywords and associated words related to golfwear, utilizing big data. For this study, data containing the keywords 'Golfwear' and 'Golf clothes' were collected from blogs, Jisikin and Tips, news articles, and web cafés of the two most commonly used search engines (Naver & Daum). For data collection, frequency and matrix data were extracted through Textom for the period from January 1, 2016 to December 31, 2017. From the matrix created by Textom, degree centrality, closeness centrality, betweenness centrality, and eigenvector centrality were calculated and analyzed using NetMiner 4.0. The analysis found that the keyword 'brand' showed the highest rank in web visibility, followed by 'woman', 'size', 'man', 'fashion', 'sports', 'price', 'store', 'discount', and 'equipment' in the top 10 frequency rankings. For the centrality calculations, only the top 30 keywords were included because the density was extremely high due to the high frequency of co-occurring keywords. The results of the centrality calculations showed that the top-ranked keywords were similar to those in the raw frequency data. When the frequency was adjusted by subtracting 100 and 500 words, the results differed: keywords ranked low in the frequency analysis, such as J. Lindberg, ranked high, along with changes in the rankings of all centrality calculations. These findings will provide a basis for marketing strategies and for ways to increase the awareness and web visibility of golfwear brands.
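
The study builds a keyword co-occurrence network from the Textom matrix and computes four centrality measures in NetMiner; a minimal equivalent sketch with networkx is shown below. The toy co-occurrence edges are assumptions, and NetMiner itself is not scripted here.

```python
# Minimal sketch: centrality measures on a toy keyword co-occurrence
# network, analogous to the NetMiner analysis described above.
# The edges (keyword pairs with co-occurrence counts) are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("brand", "woman", 120), ("brand", "price", 95),
    ("brand", "discount", 60), ("woman", "size", 80),
    ("price", "discount", 70), ("sports", "fashion", 40),
    ("brand", "fashion", 55),
])

print("degree     ", nx.degree_centrality(G))
print("closeness  ", nx.closeness_centrality(G))
print("betweenness", nx.betweenness_centrality(G))
print("eigenvector", nx.eigenvector_centrality(G, max_iter=500))
```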

Review of Research Trends on Landslide Hazards (산사태 재해 관련 학술동향 분석)

  • Kim, J.H.;Kim, W.Y.
    • The Journal of Engineering Geology / v.23 no.3 / pp.305-314 / 2013
  • Recent international and national research trends in landslide hazards were analyzed through a literature search of relevant scientific journals. To obtain data from Korea, we used 'Information for Environmental Geology' (IEG), which covers 17 journals in the field of environmental geology. A total of 54 articles related to landslide hazards were found in 5 journals published in the period 2000-2012. The most common topic was landslide prediction or susceptibility (29 articles), followed by landslide mechanisms. For international information, we analyzed 1,851 articles from the Web of Science published from 2003 to the present. Researchers in Italy have published the greatest number of papers in this field, while papers from Korea rank first in terms of the citation index.

Design and Implementation of Quality Broker Architecture to Web Service Selection based on Autonomic Feedback (자율적 피드백 기반 웹 서비스 선정을 위한 품질 브로커 아키텍처의 설계 및 구현)

  • Seo, Young-Jun;Song, Young-Jae
    • The KIPS Transactions:PartD / v.15D no.2 / pp.223-234 / 2008
  • Recently, web services have been providing an efficient environment for integrating systems inside and outside the enterprise, and the number of corporations and enterprises that want to adopt them is increasing. As web services evolve and new business models appear, the domestic enterprise and e-business environments are changing accordingly. As the number of web services providing similar functions grows, methods for finding the service that best matches a user's requirements are taken more and more seriously. When choosing among similar web services, a service consumer generally needs quality (QoS) information about them. The problem, however, is that the advertised QoS information of a web service is not always trustworthy. A service provider may publish inaccurate QoS information to attract more customers, or the published QoS information may be out of date. Allowing current customers to rate the QoS they receive from a web service, and making these ratings public, can provide new customers with valuable information on how to rank services. This paper proposes an agent-based quality broker architecture that helps a service consumer find the service providing the optimal quality he or she needs. Because the architecture selects a web service for the consumer dynamically, it can also cope with changes in the consumer's quality requirements. In other words, the consumer can search for the service that best satisfies the quality criteria through a UDDI browser connected to the quality broker server. User intervention in determining the quality criteria values of each service is kept to a minimum. In existing selection architectures, objective evaluation was difficult because service selection relied on the consumer's subjective judgment, whereas the proposed architecture secures objectivity by having an agent at the consumer's location monitor binding information to determine the quality criteria values. That is, QoS information that the provider does not publish is obtained through QoS information sharing based on feedback from consumer-side agents.
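
The broker described above ranks candidate services using QoS values measured and reported by consumer-side agents rather than values advertised by providers. A minimal sketch of that aggregation-and-ranking step follows; the QoS criteria, their weights, and the normalization are illustrative assumptions, and the UDDI and agent plumbing is omitted.

```python
# Minimal sketch: rank candidate web services by QoS feedback reported by
# consumer-side agents instead of provider-advertised values. Criteria,
# weights, and normalization are illustrative assumptions; the UDDI browser
# and monitoring agents are not modeled.
feedback = {  # service -> list of agent reports (response_ms, availability)
    "svcA": [(220, 0.99), (250, 0.97), (230, 0.99)],
    "svcB": [(120, 0.90), (150, 0.92)],
    "svcC": [(400, 0.999)],
}
weights = {"response": 0.6, "availability": 0.4}   # assumed consumer preference

def score(reports):
    avg_rt = sum(r for r, _ in reports) / len(reports)
    avg_av = sum(a for _, a in reports) / len(reports)
    rt_score = 1.0 / (1.0 + avg_rt / 100.0)         # lower latency -> higher score
    return weights["response"] * rt_score + weights["availability"] * avg_av

ranked = sorted(feedback, key=lambda s: -score(feedback[s]))
print(ranked)   # the broker would return the top service to the consumer
```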