Title/Summary/Keyword: web pages

Empirical Analysis on the Effect of Design Pattern of Web Page, Perceived Risk and Media Richness to Customer Satisfaction (콘텐츠 제작방식, 지각된 위험, 미디어 풍부성이 고객만족에 미치는 영향 분석)

  • Park, Bong-Won;Lee, Jung-Mann;Lee, Jong-Won
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.6
    • /
    • pp.385-396
    • /
    • 2011
  • Internet web pages can be classified into three major types: text only, text with images, and text with videos. The purpose of this paper is to analyze how customers perceive and respond to these design patterns of internet web pages in terms of perceived risk and media richness. Additionally, we examine the extent to which the aforementioned factors affect customer satisfaction. Analyses of perceived risk revealed that customers feel less personal risk (including performance, psychological, and time/convenience risk) when using text-image and text-video web pages than when using text-only web pages. Customers also rate image-text and video-text web pages higher on symbolism and social presence, two dimensions of media richness, than text-only pages. Finally, we show that personal risk and text-only page design negatively affect customer satisfaction, while symbolism and social presence positively affect it. This study therefore offers a clue as to why video-based Web content has not grown as much as many people expected.

Coupling Metrics for Web Pages Clustering in Restructuring of Web Applications (웹 어플리케이션 재구조화를 위한 클러스터링에 사용되는 결합도 메트릭)

  • Lee, En-Joo;Park, Gen-Duk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.75-84
    • /
    • 2007
  • Due to the increasing complexity and shorter life cycles of web applications, web applications need to be restructured to improve flexibility and extensibility. These days, approaches are used in which systems are understood and restructured through clustering techniques. In this paper, coupling metrics are proposed for clustering web pages more effectively. To achieve this, web application models are defined that include the relationships between web pages and the numbers of parameters passed between them. Based on these models, coupling metrics are defined that consider both direct and indirect coupling strength. The more direct relations two pages have, and the more parameters those relations carry, the stronger their direct coupling; the higher the indirect connectivity strength between two pages, the more similar their patterns of relationships to other web pages. We verify the suggested metrics against a well-known verification framework and provide a case study showing that our metrics complement some existing metrics.
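
As a rough illustration of the direct-coupling idea described above, the sketch below scores page pairs by counting their direct relations and the parameters those relations carry. The page names, link model, and weighting are illustrative assumptions, not the paper's actual definitions:

```python
from itertools import combinations

# Hypothetical model: each directed link between two pages carries the
# number of parameters it passes. links[(src, dst)] holds one parameter
# count per distinct link from src to dst.
links = {
    ("list.jsp", "detail.jsp"): [2, 3],   # two links, passing 2 and 3 params
    ("detail.jsp", "edit.jsp"): [1],
    ("list.jsp", "edit.jsp"): [1],
}

def direct_coupling(a: str, b: str) -> float:
    """Illustrative direct coupling: grows with the number of direct
    relations between two pages and with the parameters they carry."""
    score = 0.0
    for (src, dst), params in links.items():
        if {src, dst} == {a, b}:
            for p in params:
                score += 1 + p  # each relation counts once, plus its params
    return score

for a, b in combinations(["list.jsp", "detail.jsp", "edit.jsp"], 2):
    print(f"coupling({a}, {b}) = {direct_coupling(a, b)}")
```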

A Web Surfing Assistant for Improved Web Accessibility (웹 접근성 향상을 위한 웹 서핑 도우미)

  • Lee SooCheol;Lee Sieun;Hwang Eenjun
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1180-1195
    • /
    • 2004
  • Due to the exponential increase of information, searching for and accessing Web information or services takes much time. Web information is represented across several web pages connected by hyperlinks, and each web page contains several topics. However, most existing web tools do not reflect such web authoring tendencies and treat each page as an independent information unit. This inconsistency yields inherent problems in web browsing and searching. In this paper, we propose a web surfing assistant called LinkBroker that provides consolidated pages, composed of relevant information extracted from several web pages with table and frame structure, in order to improve accessibility to web information. Specifically, the system extracts sets of web pages that are logically connected and groups those pages using table and frame tags. Then, the essential information blocks in each page of a group are extracted to construct an integrated summary page. This provides the user with a comprehensive view and a shortcut to distributed information. Experimental results show the effectiveness and usefulness of the LinkBroker system.
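
The grouping step described above can be pictured with a small sketch: pages referenced together by frame tags are treated as one logical group. The sample HTML and the parsing approach are illustrative assumptions, not LinkBroker's actual implementation:

```python
from html.parser import HTMLParser

class FrameCollector(HTMLParser):
    """Collect the src targets of <frame>/<iframe> tags; pages framed
    together are treated as one logical group."""
    def __init__(self):
        super().__init__()
        self.frames = []

    def handle_starttag(self, tag, attrs):
        if tag in ("frame", "iframe"):
            src = dict(attrs).get("src")
            if src:
                self.frames.append(src)

# Toy frameset page; the file names are made up for illustration.
frameset_html = """
<frameset cols="20%,80%">
  <frame src="menu.html"><frame src="content.html">
</frameset>
"""

collector = FrameCollector()
collector.feed(frameset_html)
print("logical page group:", collector.frames)  # ['menu.html', 'content.html']
```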

Web Page Similarity based on Size and Frequency of Tokens (토큰 크기 및 출현 빈도에 기반한 웹 페이지 유사도)

  • Lee, Eun-Joo;Jung, Woo-Sung
    • Journal of Information Technology Services
    • /
    • v.11 no.4
    • /
    • pp.263-275
    • /
    • 2012
  • It is becoming hard to maintain web applications because of the high complexity and duplication of web pages. However, most research on code clones focuses on code hunks, and its targets are limited to a specific language. Thus, we propose GSIM, a language-independent statistical approach to detecting similar pages based on the scarcity and frequency of customized tokens. The tokens, obtained by splitting pages with a set of given separators, are defined as the atomic elements for calculating the similarity between two pages. In this paper, we give the domain definition for web applications and the algorithms for collecting tokens, building matrices, and calculating similarity. For evaluation, we also conducted experiments on open source code with our GSIM tool. The results show the applicability of the proposed method and the effects of parameters such as threshold, toughness, and token length on quality and performance.
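
A minimal sketch of this kind of token-based page similarity follows. It splits pages on a separator set, weights tokens by frequency and length, and compares pages by cosine similarity; the separator set and the weighting are illustrative stand-ins, not GSIM's actual formulas:

```python
import math
import re
from collections import Counter

SEPARATORS = r"[\s<>=\"'/(){};,]+"  # illustrative separator set

def tokens(page: str) -> Counter:
    """Split a page into tokens by the separator set and count them."""
    return Counter(t for t in re.split(SEPARATORS, page) if t)

def weight(token: str, count: int) -> float:
    # Illustrative weighting: longer tokens count more, echoing the
    # size/frequency idea; this is not GSIM's actual formula.
    return count * math.log(1 + len(token))

def similarity(a: str, b: str) -> float:
    """Cosine similarity over weighted token vectors."""
    va = {t: weight(t, c) for t, c in tokens(a).items()}
    vb = {t: weight(t, c) for t, c in tokens(b).items()}
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(x * x for x in va.values())) * \
           math.sqrt(sum(x * x for x in vb.values()))
    return dot / norm if norm else 0.0

print(similarity("<p>hello world</p>", "<p>hello there</p>"))
```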

A Study on Effective Internet Data Extraction through Layout Detection

  • Sun Bok-Keun;Han Kwang-Rok
    • International Journal of Contents
    • /
    • v.1 no.2
    • /
    • pp.5-9
    • /
    • 2005
  • Currently, most Internet documents are generated from predefined templates, but the templates usually cover only the main data and are of no help to information retrieval when dealing with indexes, advertisements, header data, etc. Templates in such forms are not appropriate when Internet documents are used as data for information retrieval. In order to process Internet documents in various areas of information retrieval, it is necessary to detect additional information such as advertisements and page indexes. Thus, this study proposes a method of detecting the layout of Web pages by identifying the characteristics and structure of the block tags that affect the layout and by calculating distances between Web pages. The method is intended to reduce the cost of automatic Web document processing and to improve processing efficiency by providing information about the template-based structure of Web pages when applied to information retrieval tasks such as data extraction.
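
To make the idea concrete, the sketch below derives a layout signature from the sequence of block-level tags in a page and measures the distance between two pages as the dissimilarity of their signatures. The block-tag set and the distance function are illustrative assumptions, not the paper's method:

```python
from difflib import SequenceMatcher
from html.parser import HTMLParser

BLOCK_TAGS = {"table", "tr", "td", "div", "ul", "li", "p"}  # illustrative set

class BlockTagSequence(HTMLParser):
    """Record the sequence of block-level start tags as a layout signature."""
    def __init__(self):
        super().__init__()
        self.seq = []

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_TAGS:
            self.seq.append(tag)

def layout_distance(html_a: str, html_b: str) -> float:
    """Illustrative distance: 1 minus the similarity of the tag sequences."""
    a, b = BlockTagSequence(), BlockTagSequence()
    a.feed(html_a)
    b.feed(html_b)
    return 1 - SequenceMatcher(None, a.seq, b.seq).ratio()

print(layout_distance("<div><table><tr><td>x</td></tr></table></div>",
                      "<div><table><tr><td>y</td></tr></table></div>"))  # 0.0
```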

Design And Implementation Of A Lecture Supporting Web Site Construction System Using Remote Execution Techniques (원격실행 기술을 이용한 강의지원 웹사이트 자동생성시스템 설계 및 구현)

  • Im, In-Taek;Kim, Jae-Il;Song, Gyu-Baek;Kim, Jong-Geun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.6
    • /
    • pp.1911-1922
    • /
    • 2000
  • Recently, various web page development tools for both beginners and experienced users have been introduced. These tools allow users to generate web pages easily and quickly. However, the web pages generated by such tools have many functional limitations. In general, authors must know a great deal about web authoring tools, HTML, and CGI programming to open web sites for special purposes. In particular, a lecture-supporting web site requires much effort to construct, as well as special functions using CGI, JavaScript, Java applets, etc. to generate dynamic web pages. In order to overcome these limitations, we design and implement an automatic web site construction system using RASIS, based on remote execution technologies.

Measuring Web Page Similarity using Tags (태그를 이용한 웹 페이지간의 유사도 측정 방법)

  • Kang, Sang-Wook;Lee, Ki-Yong;Kim, Hyeon-Gyu;Kim, Myoung-Ho
    • Journal of KIISE:Databases
    • /
    • v.37 no.2
    • /
    • pp.104-112
    • /
    • 2010
  • Social bookmarking is one of the most interesting trends in the current web environment. In a social bookmarking system, users annotate a web page with tags, which describe the contents of the page. Numerous studies have been done using this information, mostly on enhancing the quality of web search. In this paper, we use this information to measure the semantic similarity between two web pages. Since web pages consist of various types of multimedia data, it is quite difficult to compare the semantics of two web pages by comparing the actual data contained in the pages. With the help of social bookmarks, this comparison can be performed very effectively. In this paper, we propose a new similarity measure between web pages, called Web Page Similarity Based on Entire Tags (WSET), based on social bookmarks. The experimental results show that the proposed measure yields more satisfactory results than the previous ones.
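
A minimal sketch of tag-based page similarity, using Jaccard similarity over the tag sets that users assigned to each page; this is an illustrative stand-in, not the WSET measure defined in the paper:

```python
# Hypothetical social-bookmarking data: the set of tags users assigned
# to each page. Page names and tags are made up for illustration.
bookmarks = {
    "pageA.html": {"python", "tutorial", "web"},
    "pageB.html": {"python", "web", "django"},
    "pageC.html": {"cooking", "recipes"},
}

def tag_similarity(page_a: str, page_b: str) -> float:
    """Jaccard similarity of the two pages' tag sets."""
    a, b = bookmarks[page_a], bookmarks[page_b]
    return len(a & b) / len(a | b) if a | b else 0.0

print(tag_similarity("pageA.html", "pageB.html"))  # 0.5
print(tag_similarity("pageA.html", "pageC.html"))  # 0.0
```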

INFORMATION SEARCH BASED ON CONCEPT GRAPH IN WEB

  • Lee, Mal-Rey;Kim, Sang-Geun
    • Journal of applied mathematics & informatics
    • /
    • v.10 no.1_2
    • /
    • pp.333-351
    • /
    • 2002
  • This paper introduces a search method based on conceptual graphs. Hyperlink information is essential for constructing a conceptual graph of the web. This information is very useful, as it provides human-authored summaries and further linkage for constructing the conceptual graph. It also exhibits properties such as review, relation, hierarchy, generality, and visibility. Using these properties, we extracted the keywords of web documents and constructed a conceptual graph over the keywords sampled from web pages. This paper extracts the keywords of web pages using anchor text, one component of hyperlink information, and abstracts the hyperlinks between web pages as link relations between the keywords of each page. We suggest a useful search method that provides query-word extension and domain knowledge through the conceptual graph of keywords. Domain knowledge was conceptualized as the conceptual graph. The method thus avoids merely listing web documents, which is a defect of previous search systems, and instead provides an index of concepts associated with the query word.
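
As a toy illustration of building a conceptual graph from anchor text, the sketch below collects the anchor texts a page links out with and records them as edges from the page's own keyword. The sample keywords and HTML are assumptions for illustration only:

```python
from html.parser import HTMLParser

class AnchorKeywords(HTMLParser):
    """Collect (href, anchor text) pairs; the anchor text serves as the
    keyword describing the linked page."""
    def __init__(self):
        super().__init__()
        self._href = None
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href and data.strip():
            self.pairs.append((self._href, data.strip()))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

# Toy conceptual graph: an edge from the source page's keyword to each
# anchor-text keyword it links to.
page_keyword = "information retrieval"
html = '<a href="ir.html">search engines</a> and <a href="nlp.html">NLP</a>'

parser = AnchorKeywords()
parser.feed(html)
graph = {page_keyword: [kw for _, kw in parser.pairs]}
print(graph)  # {'information retrieval': ['search engines', 'NLP']}
```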

Mining Parallel Text from the Web based on Sentence Alignment

  • Li, Bo;Liu, Juan;Zhu, Huili
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.285-292
    • /
    • 2007
  • Parallel corpora are an important resource in data-driven natural language processing research, but only a few parallel corpora are publicly available nowadays, mostly due to the high labor cost of constructing this kind of resource. In this paper, a novel strategy is proposed to automatically fetch parallel text from the web, which may help solve the shortage of high-quality parallel corpora. The system we develop first downloads the web pages from certain hosts. Then candidate parallel page pairs are prepared from the page set based on the outer features of the web pages. In the last step, the candidate page pairs are evaluated: the sentences in the candidate pages are extracted and aligned first, and then the similarity of the two web pages is evaluated based on the similarities of the aligned sentences. Experiments on a multilingual web site show the satisfactory performance of the system.
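
The candidate-pairing step based on "outer features" might look like the sketch below, which pairs URLs that differ only in a language marker. The marker patterns and sample URLs are assumptions; the paper's actual heuristics may differ:

```python
import re

# Toy page set: URLs whose paths differ only in a language marker are
# candidate parallel pairs. Sample URLs are made up for illustration.
pages = [
    "http://example.com/en/news/2007/01.html",
    "http://example.com/zh/news/2007/01.html",
    "http://example.com/en/about.html",
]

def language_key(url: str):
    """Return (language, URL normalized by masking the language marker)."""
    marker = re.search(r"/(en|zh|ko)/", url)
    if not marker:
        return None
    lang = marker.group(1)
    return lang, url.replace(f"/{lang}/", "/LANG/")

candidates = {}
for url in pages:
    key = language_key(url)
    if key:
        lang, normalized = key
        candidates.setdefault(normalized, {})[lang] = url

# Keep only normalized paths seen in more than one language.
pairs = [v for v in candidates.values() if len(v) > 1]
print(pairs)  # [{'en': '.../en/news/2007/01.html', 'zh': '.../zh/news/2007/01.html'}]
```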

Predicting Interesting Web Pages by SVM and Logit-regression (SVM과 로짓회귀분석을 이용한 흥미있는 웹페이지 예측)

  • Jeon, Dohong;Kim, Hyoungrae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.3
    • /
    • pp.47-56
    • /
    • 2015
  • Automated detection of interesting web pages could be used in many application domains. A user's interesting web pages can be determined implicitly by observing the user's behavior. Distinguishing interesting web pages is a classification problem, and we chose white-box learning methods (fixed-effect logit regression and support vector machines) to test empirically. The results indicated that (1) fixed-effect logit regression and fixed-effect SVMs with polynomial and radial basis kernels outperformed the linear-kernel model, (2) personalization is a critical issue for improving a model's performance, (3) when asking a user for an explicit rating of a web page, the scale can be as simple as a yes/no answer, and (4) for every additional second of dwell time on a web page, the odds of the page being interesting increased by a factor of 1.004, while the number of scrollbar clicks (p=0.56) and the number of mouse clicks (p=0.36) had no statistically significant relation to interest.
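
A minimal sketch of the classification setup described above, using scikit-learn's logistic regression and SVM on toy behavioral features (dwell time, scrollbar clicks, mouse clicks). The data is fabricated for illustration, and the per-user fixed effects used in the paper are omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Toy features per page visit: [seconds of dwell time, scrollbar clicks,
# mouse clicks]; label 1 = the user rated the page "interesting".
X = np.array([[120, 5, 8], [15, 0, 1], [300, 9, 2],
              [10, 1, 0], [200, 3, 6], [25, 2, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

logit = LogisticRegression().fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)

# Odds ratio per extra second of dwell time, analogous to the 1.004
# figure reported above (exponentiate the logit coefficient).
print("dwell-time odds ratio:", np.exp(logit.coef_[0][0]))
print("SVM prediction for a 180s visit:", svm.predict([[180, 4, 3]]))
```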