• Title/Summary/Keyword: search engine results pages

Search Results: 22

An analysis of user behaviors on the search engine results pages based on the demographic characteristics

  • Bitirim, Yiltan;Ertugrul, Duygu Celik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.7
    • /
    • pp.2840-2861
    • /
    • 2020
  • The purpose of this survey-based study is to analyze search engine users' behaviors on Search Engine Results Pages (SERPs) based on three demographic characteristics: gender, age, and program of study. A questionnaire with 12 closed-ended questions was designed; apart from the questions on demographic characteristics, the questions concerned "tab", "advertisement", "spelling suggestion", "related query suggestion", "instant search suggestion", "video result", "image result", "pagination", and the number of results clicked. The questionnaire was administered and the collected data were analyzed with descriptive as well as inferential statistics; 84.2% of the study population was reached. Some of the major results are as follows: most respondents in each demographic category (i.e., female, male, under-20, 20-24, above-24, English computer engineering, Turkish computer engineering, software engineering) click rarely or more frequently on the tab, spelling suggestion, related query suggestion, instant search suggestion, video result, image result, and pagination features. More than 50.0% of the female category click advertisements rarely, whereas for the other categories 50.0% or more never click advertisements. For every demographic category, between 78.0% and 85.4% click 10 or fewer results. This study appears to be the first attempt of its kind in its complete content and design. Search engine providers and researchers can gain knowledge of user behaviors regarding the usage of SERPs based on demographic characteristics.

Webometrics Ranking of Digital Libraries of Iranian Universities of Medical Sciences

  • Dastani, Meisam;Atarodi, Alireza;Panahi, Somayeh
    • International Journal of Knowledge Content Development & Technology
    • /
    • v.8 no.3
    • /
    • pp.41-52
    • /
    • 2018
  • Digital library websites play an important role in disseminating information about an institution and its library resources; such a website acts as a trustworthy mirror of the institute. Webometric tools and indicators are required to evaluate library website performance. The aim of the present research is to study the webometrics of the digital libraries of Iranian Universities of Medical Sciences on the Web and determine the visibility of their websites and web pages. The URLs of 42 digital library websites were obtained directly by visiting the universities' websites. The Google search engine was used to extract the number of indexed web pages (size) and rich files, the Google Scholar search engine was used to extract the number of scientific resources retrieved, and the MOZ search engine was used to calculate the number of links received. Overall, the results indicated that the websites of Iranian digital libraries did not perform well in terms of webometric indexes: none of them were rated highly on all indexes, and only some of the websites performed well on one or two indicators.

Effective Web Crawling Orderings from Graph Search Techniques (그래프 탐색 기법을 이용한 효율적인 웹 크롤링 방법들)

  • Kim, Jin-Il;Kwon, Yoo-Jin;Kim, Jin-Wook;Kim, Sung-Ryul;Park, Kun-Soo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.1
    • /
    • pp.27-34
    • /
    • 2010
  • Web crawlers are fundamental programs that iteratively download web pages by following links, starting from a small set of initial URLs. Several web crawling orderings have previously been proposed to crawl popular web pages in preference to other pages, but some graph search techniques, whose characteristics and efficient implementations have been studied in the graph theory community, have not yet been applied to web crawling orderings. In this paper, we consider various graph search techniques, including lexicographic breadth-first search, lexicographic depth-first search, and maximum cardinality search, as well as the well-known breadth-first search and depth-first search, and then choose effective web crawling orderings that have linear time complexity and crawl popular pages early. In particular, for maximum cardinality search and lexicographic breadth-first search, whose implementations are non-trivial, we propose linear-time web crawling orderings by applying the partition refinement method. Experimental results show that maximum cardinality search has desirable properties in both time complexity and the quality of crawled pages.
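The abstract does not include the authors' implementation, but the idea of maximum cardinality search as a crawl ordering can be illustrated on a link graph that is assumed to be already known (e.g. from a previous crawl snapshot): repeatedly visit the unvisited page with the most already-visited neighbours. The sketch below is a simple Python illustration; the quadratic selection loop stands in for the paper's linear-time partition-refinement version, and the example graph and function names are hypothetical.

```python
from collections import defaultdict

def mcs_crawl_order(graph, seeds):
    """Maximum cardinality search ordering over an already-known link graph.

    graph: dict mapping a page to the pages it links to (treated as undirected
    neighbours for the ordering); seeds: the initial URLs. Repeatedly visits
    the unvisited page with the most already-visited neighbours. This simple
    version is quadratic; the paper's contribution is a linear-time ordering
    obtained with partition refinement.
    """
    neighbours = defaultdict(set)
    for src, targets in graph.items():
        for dst in targets:
            neighbours[src].add(dst)
            neighbours[dst].add(src)

    pages = set(neighbours) | set(seeds)
    visited_neighbours = defaultdict(int)
    visited, order = set(), []

    while len(order) < len(pages):
        # Pick the unvisited page with the most visited neighbours;
        # ties at the start are broken in favour of the seed URLs.
        page = max((p for p in pages if p not in visited),
                   key=lambda p: (visited_neighbours[p], p in seeds))
        visited.add(page)
        order.append(page)
        for nb in neighbours[page]:
            if nb not in visited:
                visited_neighbours[nb] += 1
    return order

# Toy link graph (hypothetical pages)
links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html", "d.html"],
    "c.html": ["d.html"],
    "d.html": [],
}
print(mcs_crawl_order(links, seeds=["a.html"]))
```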

Users' Understanding of Search Engine Advertisements

  • Lewandowski, Dirk
    • Journal of Information Science Theory and Practice
    • /
    • v.5 no.4
    • /
    • pp.6-25
    • /
    • 2017
  • In this paper, a large-scale study on users' understanding of search-based advertising is presented. It is based on (1) a survey, (2) a task-based user study, and (3) an online experiment. Data were collected from 1,000 users representative of the German online population. Findings show that users generally lack an understanding of Google's business model and the workings of search-based advertising. 42% of users self-report that they either do not know that it is possible to pay Google for preferred listings for one's company on the SERPs or do not know how to distinguish between organic results and ads. In the task-based user study, we found that only 1.3% of participants were able to mark all areas correctly. 9.6% had all their identifications correct but did not mark all results they were required to mark. For none of the screenshots given were more than 35% of users able to mark all areas correctly. In the experiment, we found that users who are not able to distinguish between the two result types choose ads around twice as often as users who can recognize the ads. The implications are that models of search engine advertising and of information seeking need to be amended, and that there is a severe need for regulating search-based advertising.

Implementation of a Parallel Web Crawler for the Odysseus Large-Scale Search Engine (오디세우스 대용량 검색 엔진을 위한 병렬 웹 크롤러의 구현)

  • Shin, Eun-Jeong;Kim, Yi-Reun;Heo, Jun-Seok;Whang, Kyu-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.6
    • /
    • pp.567-581
    • /
    • 2008
  • As the size of the web grows explosively, search engines are becoming increasingly important as the primary means of retrieving information from the Internet. A search engine periodically downloads web pages and stores them in a database to provide users with up-to-date search results. The web crawler is a program that downloads and stores web pages for this purpose. A large-scale search engine uses a parallel web crawler to retrieve the collection of web pages while maximizing the download rate. However, the architecture and experimental analysis of parallel web crawlers have not been fully discussed in the literature. In this paper, we propose an architecture for the parallel web crawler and discuss implementation issues in detail. The proposed parallel web crawler is based on the coordinator/agent model, using multiple machines to download web pages in parallel. The coordinator/agent model consists of multiple agent machines that collect web pages and a single coordinator machine that manages them. The parallel web crawler consists of three components: a crawling module for collecting web pages, a converting module for transforming the web pages into a database-friendly format, and a ranking module for rating web pages based on their relative importance. We explain each component of the parallel web crawler and the implementation methods in detail. Finally, we conduct extensive experiments to analyze the effectiveness of the parallel web crawler. The experimental results clarify the merit of our architecture in that the proposed parallel web crawler is scalable in the number of web pages to crawl and the number of machines used.
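The abstract describes the coordinator/agent model but not its interfaces. A minimal sketch of how a coordinator might partition the URL space across agent machines, for example by hashing the host name, is shown below; all class and method names are hypothetical and only illustrate the division of work, not the Odysseus crawler's actual API.

```python
import hashlib

class Agent:
    """One agent machine: collects pages for the URL partition assigned to it."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.queue = []

    def enqueue(self, url):
        self.queue.append(url)

    def crawl_step(self):
        # In a real crawler this would fetch the page, convert it to a
        # database-friendly format, and hand it to the ranking module.
        return self.queue.pop(0) if self.queue else None

class Coordinator:
    """Single coordinator machine: assigns each URL to exactly one agent."""
    def __init__(self, num_agents):
        self.agents = [Agent(i) for i in range(num_agents)]

    def assign(self, url):
        host = url.split("/")[2] if "//" in url else url
        idx = int(hashlib.md5(host.encode()).hexdigest(), 16) % len(self.agents)
        self.agents[idx].enqueue(url)

coord = Coordinator(num_agents=3)
for u in ["http://example.com/a", "http://example.org/b", "http://example.com/c"]:
    coord.assign(u)
print([(a.agent_id, a.queue) for a in coord.agents])
```

Hashing by host keeps all pages of one site on the same agent, which simplifies politeness handling; whether the Odysseus crawler partitions this way is not stated in the abstract.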

Customized Web Search Rank Provision (개인화된 웹 검색 순위 생성)

  • Kang, Youngki;Bae, Joonsoo
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.39 no.2
    • /
    • pp.119-128
    • /
    • 2013
  • Most internet users nowadays rely on internet portal search engines such as Naver, Daum, and Google. But since the results of these search engines are based on universal criteria (e.g. search frequency by region or country), they do not consider personal interests. In other words, current search engines do not provide exact search results for homonyms or polysemous queries because they try to serve universal users. In order to solve this problem, this research determines keyword importance and weight values for each individual's search characteristics by collecting and analyzing customized keywords in an external database. The customized keyword weight values are integrated with the search engine's results (e.g. PageRank), and the search ranks are rearranged. In an experiment using 50 web pages of Google search results and 6 web pages for customized keyword collection, the new customized search results showed a 90% match. Our personalization approach does not require users to enter their preferences directly; instead, the system automatically collects and analyzes personal information and reflects it in the customized search results.
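The abstract does not give the exact combination rule, but the general idea of re-ranking engine results with per-user keyword weights can be sketched as a weighted sum of the original engine score and a personal keyword-match score. The mixing constant, scoring functions, and example data below are hypothetical placeholders, not the paper's formula.

```python
def rerank(results, user_keyword_weights, alpha=0.5):
    """Re-rank search results with personalized keyword weights.

    results: list of dicts like {"url": ..., "engine_score": ..., "text": ...}
    user_keyword_weights: dict keyword -> weight learned for this user
    alpha: hypothetical mixing constant between engine score and personal score
    """
    def personal_score(text):
        words = text.lower().split()
        return sum(w for kw, w in user_keyword_weights.items() if kw in words)

    for r in results:
        r["combined"] = alpha * r["engine_score"] + (1 - alpha) * personal_score(r["text"])
    return sorted(results, key=lambda r: r["combined"], reverse=True)

# Hypothetical example: "jaguar" as a homonym (car brand vs. animal)
results = [
    {"url": "cars.example/jaguar", "engine_score": 0.9,
     "text": "Jaguar car models and dealers"},
    {"url": "zoo.example/jaguar", "engine_score": 0.7,
     "text": "The jaguar is a big cat native to the Americas"},
]
animal_lover = {"cat": 0.8, "wildlife": 0.6}
for r in rerank(results, animal_lover):
    print(r["url"], round(r["combined"], 2))
```

With these made-up weights the animal page overtakes the car page, which is the kind of homonym disambiguation the paper targets.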

Improving the quality of Search engine by using the Intelligent agent technology

  • Nauyen, Ha-Nam;Choi, Gyoo-Seok;Park, Jong-Jin;Chi, Sung-Do
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.12
    • /
    • pp.1093-1102
    • /
    • 2003
  • The dynamic nature of the World Wide Web challenges search engines to find relevant and recent pages. Obtaining important pages quickly can be very useful when a crawler cannot visit the entire Web in a reasonable amount of time. In this paper we study the order in which spiders should visit URLs so as to obtain the more "important" pages first. We define and apply several metrics and a ranking formula for improving crawling results. The comparison between our results and the Breadth-first Search (BFS) method shows the efficiency of our experimental system.
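The abstract names importance metrics and a ranking formula without defining them. A common way to realize importance-first crawling, sketched below, is a best-first crawler that keeps the frontier in a priority queue ordered by a score such as the number of known in-links; the scoring choice and the in-memory stand-in for HTTP fetching are illustrative assumptions, not the paper's formula.

```python
import heapq
from collections import defaultdict

def best_first_crawl(seeds, fetch_links, max_pages=100):
    """Best-first crawl: always expand the frontier URL with the highest score.

    fetch_links(url) returns the out-links of a page; the score of a frontier
    URL is the number of crawled pages already linking to it, one simple
    importance estimate (not necessarily the paper's metric).
    """
    inlinks = defaultdict(int)
    # heapq is a min-heap, so push negative scores to pop the best URL first
    frontier = [(0, url) for url in seeds]
    heapq.heapify(frontier)
    crawled, order = set(), []

    while frontier and len(order) < max_pages:
        _, url = heapq.heappop(frontier)
        if url in crawled:
            continue
        crawled.add(url)
        order.append(url)
        for out in fetch_links(url):
            if out not in crawled:
                inlinks[out] += 1
                heapq.heappush(frontier, (-inlinks[out], out))
    return order

# Hypothetical in-memory "web" standing in for real HTTP fetching
toy_web = {
    "s": ["a", "b"], "a": ["hub"], "b": ["hub", "c"],
    "hub": ["c", "d"], "c": [], "d": [],
}
print(best_first_crawl(["s"], lambda u: toy_web.get(u, [])))
```

On the toy graph the twice-linked "hub" page is fetched before singly-linked pages, unlike plain BFS order.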

A design and implementation of the management system for number of keyword searching results using Google searching engine (구글 검색엔진을 활용한 키워드 검색결과 수 관리 시스템 설계 및 구현)

  • Lee, Ju-Yeon;Lee, Jung-Hwa;Park, Yoo-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.5
    • /
    • pp.880-886
    • /
    • 2016
  • With the vast amount of information appearing on the Internet, search engines play the role of gathering information scattered across the Internet. Some search engines show not only the result pages containing a search keyword but also the number of search results for that keyword. The number of keyword search results provided by the Google search engine can be utilized to identify overall trends for the search word on the Internet. This paper aims at designing and implementing a system that can efficiently manage the number of search results provided by the Google search engine. The proposed system operates on the Web and consists of a search agent, a storage node, and a search node; it manages keywords and their search result counts and executes searches. The proposed system produces results such as search keywords, the number of search results, and the NGD (Normalized Google Distance), which is the distance between two keywords in the Google domain.
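The abstract mentions NGD without giving the formula. The standard Normalized Google Distance is computed from the result counts f(x), f(y), and f(x, y) of two keywords and the total number of indexed pages N; the sketch below computes it from such counts. The example numbers are made up, and how the paper's system obtains the counts is not described in the abstract.

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance between two keywords.

    fx, fy:  number of search results for each keyword alone
    fxy:     number of results for both keywords together
    n:       (estimated) total number of pages indexed by the engine
    """
    lfx, lfy, lfxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lfx, lfy) - lfxy) / (math.log(n) - min(lfx, lfy))

# Made-up result counts purely to show the calculation
print(round(ngd(fx=120_000, fy=80_000, fxy=25_000, n=50_000_000_000), 3))
```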

Detecting Intentionally Biased Web Pages In terms of Hypertext Information (하이퍼텍스트 정보 관점에서 의도적으로 왜곡된 웹 페이지의 검출에 관한 연구)

  • Lee Woo Key
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.1 s.33
    • /
    • pp.59-66
    • /
    • 2005
  • The structure of the Web is increasingly being used to improve the search and analysis of information on the Web, viewed as a large collection of heterogeneous documents. Most people begin at a Web search engine to find information, but the user's pertinent search results are often greatly diluted by irrelevant data, or sometimes appear on target but still mislead the user in an unwanted direction. One of the intentional, sometimes vicious manipulations of Web databases is the intentionally biased web page, as in Google bombing, which exploits the PageRank algorithm, one of many Web structuring techniques. In this paper, we regard the World Wide Web as a directed labeled graph in which Web pages are nodes and links are edges. We define the label of an edge as carrying a link context and a similarity measure between the link context and the target page. With this similarity, we can modify the transition matrix of the PageRank algorithm. Through a motivating example, we explain how the proposed algorithm can filter intentionally biased web pages about 60% more effectively than the conventional PageRank.
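The abstract states that a link-context/target-page similarity modifies the PageRank transition matrix but does not give the exact construction. One plausible reading, sketched below, is to weight each out-link by that similarity before normalizing the rows and running the usual power iteration; the similarity values, damping factor, and toy graph are illustrative assumptions, not the paper's construction.

```python
def similarity_weighted_pagerank(out_links, sim, d=0.85, iters=50):
    """PageRank with out-links weighted by link-context/target-page similarity.

    out_links: dict node -> list of target nodes
    sim:       dict (src, dst) -> similarity of the link context to the target
               page; a low similarity (as in a Google-bombed link) reduces the
               rank mass flowing along that edge.
    """
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}

    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for src in nodes:
            weights = [sim.get((src, dst), 1.0) for dst in out_links[src]]
            total = sum(weights)
            if total == 0:
                # dangling node: spread its rank uniformly over all pages
                for v in nodes:
                    new[v] += d * rank[src] / n
                continue
            for dst, w in zip(out_links[src], weights):
                new[dst] += d * rank[src] * (w / total)
        rank = new
    return rank

# Toy graph where the link p1 -> bomb has a context unrelated to its target
graph = {"p1": ["p2", "bomb"], "p2": ["p1"], "bomb": []}
sims = {("p1", "p2"): 0.9, ("p1", "bomb"): 0.1}
print({k: round(v, 3) for k, v in similarity_weighted_pagerank(graph, sims).items()})
```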

Design and Implementation of Web Directory Engine Using Dynamic Category Hierarchy (동적분류에 의한 주제별 웹 검색엔진의 설계 및 구현)

  • Choi Bum-Ghi;Park Sun;Park Tae-Su;Song Jae-Won;Lee Ju-Hong
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.71-80
    • /
    • 2006
  • Web search engines use two main methods: directory searching and keyword searching. Keyword searching shows a high recall rate but tends to return too many results for users to find the pages they want. Directory searching also has difficulty finding the pages users want when an improper category is selected without knowledge of the exact category; that is, it shows high precision but low recall. We designed and implemented a new web search engine to resolve the problems of the directory search method. It regards a category as a fuzzy set of keywords and calculates the degree of inclusion between categories. The merit of this method is that it enhances the recall rate of directory searching by expanding subcategories on the basis of similarity.
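The abstract treats each category as a fuzzy set of keywords and speaks of a degree of inclusion between categories without giving the formula. A common fuzzy subsethood measure, used below purely for illustration, divides the summed minimum memberships by the summed memberships of the contained set; the keyword memberships in the example are invented, and the paper's exact measure may differ.

```python
def degree_of_inclusion(cat_a, cat_b):
    """Degree to which fuzzy category A is included in fuzzy category B.

    cat_a, cat_b: dicts mapping keyword -> membership degree in [0, 1].
    Uses the standard fuzzy subsethood S(A, B) = sum(min(A, B)) / sum(A),
    an illustrative choice; the paper's exact measure is not given in the
    abstract.
    """
    total_a = sum(cat_a.values())
    if total_a == 0:
        return 0.0
    overlap = sum(min(m, cat_b.get(kw, 0.0)) for kw, m in cat_a.items())
    return overlap / total_a

# Invented keyword memberships for two categories
databases = {"sql": 0.9, "index": 0.7, "transaction": 0.8}
computing = {"sql": 0.8, "index": 0.7, "transaction": 0.6,
             "compiler": 0.9, "network": 0.8}
print(round(degree_of_inclusion(databases, computing), 2))  # high: databases largely contained in computing
print(round(degree_of_inclusion(computing, databases), 2))  # lower: computing is the broader category
```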
