• Title/Summary/Keyword: Web page


A Study on Reorganization of Web Site Based on Approach Using Page Popularity. (페이지 접근의 대중성에 따른 웹사이트 재구성에 관한 연구)

  • 조석팔
    • The Journal of Information Technology
    • /
    • v.3 no.2
    • /
    • pp.63-72
    • /
    • 2000
  • The performance and quality of a Web site are often estimated by how frequently users access it. This paper suggests how a link-editing method can automatically fix a poor organization by calculating each page's relative popularity, and how poorly performing pages can be improved. A page's relative popularity is considered only in cases where the objective is to make it easier for a user to find the requested data; the faster the access, the better the organization of the Web site with respect to tree depth. (A minimal popularity sketch follows this entry.)

  • PDF
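
The entry above reorganizes a site by combining each page's relative popularity with its depth in the link tree. The following is a minimal sketch of that idea, not the paper's actual algorithm: the page names, access counts, and promotion rule are hypothetical. It computes each page's share of accesses and flags popular pages that sit too deep in the tree as candidates for link editing.

```python
# Minimal sketch (hypothetical data and rule): flag popular pages that sit
# deep in the site tree as candidates for link editing.

def relative_popularity(access_counts):
    """Each page's share of all recorded accesses."""
    total = sum(access_counts.values())
    return {page: count / total for page, count in access_counts.items()}

def promotion_candidates(access_counts, depths, min_popularity=0.15, max_depth=2):
    """Pages whose popularity is high but whose tree depth is large."""
    popularity = relative_popularity(access_counts)
    return [page for page, p in popularity.items()
            if p >= min_popularity and depths[page] > max_depth]

if __name__ == "__main__":
    counts = {"/index": 500, "/products": 120, "/products/item42": 300, "/about": 80}
    depths = {"/index": 0, "/products": 1, "/products/item42": 3, "/about": 1}
    print(promotion_candidates(counts, depths))   # ['/products/item42']
```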

Directory Web Service based on EJB (EJB 기반 Directory Web Service)

  • Kim, Jae-Chul;Heo, Tae-Wook;Kim, Sung-Soo;Kim, Kwang-Soo;Park, Jong-Hyun;Lee, Jong-Hun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.11b
    • /
    • pp.781-784
    • /
    • 2003
  • This paper describes a methodology, and the resulting system, for implementing a Directory Service (white pages, yellow pages, green pages) as an EJB (Enterprise Java Bean)-based Web Service, so that it suits distributed environments and gains system flexibility. Most existing Directory Services run on closed networks, so they are implemented as static systems with little flexibility and are designed on platform-dependent architectures. To overcome this closed-service characteristic, the development architecture in this paper is based on a Web Service environment and is implemented with EJB to account for the characteristics of a Distributed Computing Environment. (A small registry sketch follows this entry.)

  • PDF
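
The entry above exposes a directory service (white, yellow, and green pages) as a web service. The paper's system is built on EJB; purely as a language-neutral illustration, the sketch below is a small in-memory registry in Python with hypothetical fields, showing the three kinds of lookups such a service typically offers (by name, by business category, and by technical binding).

```python
# Illustrative in-memory registry (hypothetical fields), not the EJB system
# from the paper: white pages = name/contact, yellow pages = category,
# green pages = technical binding information.

from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    name: str                                         # white-page information
    contact: str
    categories: list = field(default_factory=list)    # yellow-page information
    endpoints: list = field(default_factory=list)     # green-page information

class DirectoryRegistry:
    def __init__(self):
        self._entries = []

    def register(self, entry: ServiceEntry):
        self._entries.append(entry)

    def white_pages(self, name: str):
        return [e for e in self._entries if name.lower() in e.name.lower()]

    def yellow_pages(self, category: str):
        return [e for e in self._entries if category in e.categories]

    def green_pages(self, name: str):
        return [ep for e in self.white_pages(name) for ep in e.endpoints]

registry = DirectoryRegistry()
registry.register(ServiceEntry("Acme Parts", "info@acme.example",
                               ["auto-repair"], ["http://acme.example/soap"]))
print(registry.yellow_pages("auto-repair")[0].name)   # Acme Parts
print(registry.green_pages("acme"))                   # ['http://acme.example/soap']
```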

EXAMINING THE WAY OF PRESENTING RELIABLE INFORMATION ON WEB PAGE

  • Okamoto, Takuma;Yamaoka, Toshiki;Matsunobe, Takuo
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2001.05a
    • /
    • pp.131-135
    • /
    • 2001
  • The Internet has recently come into wide use, yet many Web sites are not designed from the user's point of view. This research therefore aimed to grasp user needs and the structure of Web sites that users find easy to use. First, to grasp user needs, questionnaires were administered about the motivation, the purpose, and the evaluation items of Web pages; from the responses we identified what users find easy to use. Next, we had subjects operate test pages in which the number of classes and the amount of information were varied, and collected quantitative data on the optimum number of classes, the amount of information, and the retrieval time. Significant differences were found in each measure. The results of this research can be applied when constructing a Web site, and the usability of a Web site can thereby be improved.

  • PDF

An Implementation of System for Detecting and Filtering Malicious URLs (악성 URL 탐지 및 필터링 시스템 구현)

  • Chang, Hye-Young;Kim, Min-Jae;Kim, Dong-Jin;Lee, Jin-Young;Kim, Hong-Kun;Cho, Seong-Je
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.4
    • /
    • pp.405-414
    • /
    • 2010
  • According to SecurityFocus statistics, client-side attacks through Microsoft Internet Explorer increased by more than 50% in 2008. In this paper, we have implemented a behavior-based malicious web page detection system and a blacklist-based malicious web page filtering system. To do this, we first collected the target URLs efficiently by constructing a crawling system. The malicious URL detection system, run on a dedicated server, actively visits and renders the collected web pages in a virtual machine environment. To decide whether each web page is malicious, the state changes of the virtual machine are checked after rendering the page. If abnormal state changes are detected, the rendered web page is judged malicious and inserted into the blacklist of malicious web pages. The malicious URL filtering system, run on the web client machine, filters malicious web pages against the blacklist when a user visits web sites. We improved system performance by automatically handling message boxes during URL analysis on the detection system. Experimental results show that game sites contain up to three times more malicious pages than other sites, and that many attacks involve file creation and registry key modification. (A blacklist-filtering sketch follows this entry.)
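
The detection side of the system above marks a page malicious when rendering it in a virtual machine causes abnormal state changes, and the client side then filters requests against the resulting blacklist. The sketch below illustrates only the client-side blacklist check, with hypothetical URLs; the VM-based behavior analysis itself is not reproduced here.

```python
# Client-side filtering sketch (hypothetical data): block a request when its
# host or full URL appears on the blacklist produced by the detection server.

from urllib.parse import urlparse

class BlacklistFilter:
    def __init__(self, blacklisted_urls, blacklisted_hosts=()):
        self.urls = set(blacklisted_urls)
        self.hosts = set(blacklisted_hosts)

    def is_blocked(self, url: str) -> bool:
        host = urlparse(url).hostname or ""
        return url in self.urls or host in self.hosts

blacklist = BlacklistFilter(
    blacklisted_urls={"http://bad.example/exploit.html"},
    blacklisted_hosts={"malware.example"},
)
print(blacklist.is_blocked("http://bad.example/exploit.html"))   # True
print(blacklist.is_blocked("http://malware.example/anything"))   # True
print(blacklist.is_blocked("http://safe.example/index.html"))    # False
```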

Evaluation of Web Pages using User's Activities in a Page and Page Visiting Duration Time (사용자 활동과 페이지 이용 시간을 이용한 웹 페이지 평가 기법)

  • Lee, Dong-Hun;Yun, Tae-Bok;Kim, Geon-Su;Lee, Ji-Hyeong
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.04a
    • /
    • pp.99-102
    • /
    • 2007
  • Web usage mining is the field that analyzes users' web usage patterns to extract information. Analysis of users has become the basis of web-based business, and has therefore become an important and widely noticed technology in web mining. Recently, however, malicious manipulation of information that exploits weaknesses in publicly known techniques has occurred and become a social issue. This problem arises mainly in information extraction methods based on simple page-view counts. To reduce the simplicity of such extraction and to reflect more information about the user, this paper proposes a method that evaluates the quality of content by analyzing the time spent on a page and the user's activities within the page. For the implementation, the recently popular Ajax technique is used to collect user activities without infringing on the user's personal information, and a collection scheme that adds a log filter module to the server is proposed so that web pages can be evaluated in real time. (A dwell-time scoring sketch follows this entry.)

  • PDF
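
The entry above scores a page by combining the time a user spends on it with the activity recorded inside it, rather than by raw page-view counts. The sketch below is a hypothetical version of such a score: the weights, event names, and visit-log format are assumptions, not the authors' formula, and the Ajax collection and server-side log filter are not shown.

```python
# Hypothetical page score from dwell time and in-page activity; the weights
# and event names are illustrative, not the authors' formula.

def page_score(visits, time_weight=1.0, activity_weight=2.0, max_dwell=300):
    """visits: list of dicts like {"dwell_sec": 45, "events": ["click", "scroll"]}."""
    score = 0.0
    for visit in visits:
        dwell = min(visit["dwell_sec"], max_dwell)    # cap to limit idle tabs
        score += time_weight * (dwell / max_dwell)
        score += activity_weight * (len(visit["events"]) / 10)
    return score / max(len(visits), 1)                # average over visits

visits = [
    {"dwell_sec": 120, "events": ["scroll", "scroll", "click"]},
    {"dwell_sec": 15,  "events": []},
]
print(round(page_score(visits), 3))
```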

A New Mobile Content Adaptation Based on Content Provider-Specified Web Clipping (컨텐츠 제공자 지정 웹 클리핑 방식의 이동 인터넷 컨텐츠 변환)

  • Yang, Seo-Min;Lee, Hyuk-Joon
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.35-44
    • /
    • 2004
  • Web contents created for desktop screens cause problems when they are displayed on the small screens of mobile terminals: in some cases individual objects of a page cannot be displayed because the browser lacks the capability, and in other cases the entire page cannot be displayed because it is incompatible with the browser. In this paper, we introduce a new mobile content adaptation approach based on web clipping, which transforms an original page into one that is optimally displayed on a mobile terminal. In this method, a source page is automatically clipped and transformed according to the clip specification made by the content provider using a clip editing tool. The clip editing tool allows the user to specify group clips, multi-level clips, and dynamic clips as well as simple clips, together with the presentation layout, through a graphical user interface. Based on the clip specifications, each clip is transformed into an intermediate meta-language document, which in turn is transformed into a presentation page in the target markup language. Transcoding of image objects in the major image file formats is also supported. (A clip-extraction sketch follows this entry.)
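
In the approach above, the content provider marks which parts of a desktop page should survive on the mobile terminal, and the adaptation engine extracts and re-renders just those clips. The following sketch models the already-parsed page as a simple nested dictionary and uses a hypothetical clip specification of element ids; it illustrates only the clipping and re-rendering step, not the paper's intermediate meta-language or image transcoding.

```python
# Clipping sketch (hypothetical page model and clip spec): keep only the
# elements named in the specification and emit a reduced mobile page.

def find_clips(node, wanted_ids, found):
    """Depth-first search for elements whose id is listed in the clip spec."""
    if node.get("id") in wanted_ids:
        found.append(node)
    for child in node.get("children", []):
        find_clips(child, wanted_ids, found)
    return found

def render_mobile(clips):
    """Re-render the selected clips as a minimal mobile markup page."""
    body = "\n".join(f"  <div>{c['text']}</div>" for c in clips)
    return f"<html><body>\n{body}\n</body></html>"

source_page = {
    "id": "root", "text": "", "children": [
        {"id": "banner",  "text": "Large desktop banner", "children": []},
        {"id": "news",    "text": "Today's headlines",    "children": []},
        {"id": "weather", "text": "Seoul: 21C, clear",    "children": []},
    ],
}
clip_spec = {"news", "weather"}   # chosen by the content provider
print(render_mobile(find_clips(source_page, clip_spec, [])))
```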

A study of Postscript Converter using XSL-FO (XSL-FO를 이용한 PostScript Converter에 관한 연구)

  • 유동석;최호찬;이진영;김차종
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.109-112
    • /
    • 2003
  • Web documents specified with HTML and CSS are displayed with high quality in a web browser, but the printed pages generally do not have the same quality as the on-screen copy, because HTML and CSS are not suited to printing. XSL-FO (XSL Formatting Objects) is a formatting language for rendering web documents, and PostScript is one of the most widely used page description languages (PDL). To obtain high-quality printed pages, we propose the design of a converter that translates XSL-FO into the PostScript format. Using the designed converter, we can obtain high-quality hard copies. (A minimal XSL-FO-to-PostScript sketch follows this entry.)

  • PDF
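
The paper above designs a converter from XSL-FO to PostScript. As a rough illustration only, the sketch below translates a tiny subset of XSL-FO (plain fo:block elements) into PostScript text-drawing commands; the element subset, font, and page layout are assumptions, and a real converter such as the one proposed, or Apache FOP, handles far more of the formatting model.

```python
# Tiny XSL-FO -> PostScript sketch: each fo:block becomes one line of text.
# Only plain blocks are handled; real converters cover the full FO model.

import xml.etree.ElementTree as ET

FO_NS = "{http://www.w3.org/1999/XSL/Format}"

FO_SOURCE = """<?xml version="1.0"?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:block>Hello, XSL-FO</fo:block>
  <fo:block>Converted to PostScript</fo:block>
</fo:root>"""

def fo_to_postscript(fo_text, font="Times-Roman", size=12, left=72, top=720):
    root = ET.fromstring(fo_text)
    lines = ["%!PS-Adobe-3.0", f"/{font} findfont {size} scalefont setfont"]
    y = top
    for block in root.iter(f"{FO_NS}block"):
        text = (block.text or "").replace("(", r"\(").replace(")", r"\)")
        lines.append(f"{left} {y} moveto ({text}) show")
        y -= size + 4                     # advance to the next text line
    lines.append("showpage")
    return "\n".join(lines)

print(fo_to_postscript(FO_SOURCE))
```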

The Analysis Method based on the Business Model for Developing Web Application Systems (웹 응용 시스템 개발을 위한 업무모델 기반의 분석방법)

  • 조용선;정기원
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.12
    • /
    • pp.1193-1207
    • /
    • 2003
  • As the Internet becomes popular in many fields, various web applications are being developed. In most web application development, however, systematic analysis is omitted and developers jump straight into implementation, so they have difficulty applying the development method to a large-scale project. For rapid and efficient development, an approach that creates the analysis models of a web application from a business model is proposed, together with the analysis process, tasks, and techniques for this approach. The use case diagram and the web page list are created from the business model, which is depicted using the notation of the UML activity diagram. The page diagram and the logical/physical database models are then created using the use case diagram and the web page list. These analysis models are refined during the detailed design phase. The efficiency of the proposed method is shown through a practical case study, the development of a web application supporting an association of auto repair shops. (A small page-list derivation sketch follows this entry.)
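
The method above derives a use case diagram and a web page list from a business model drawn as a UML activity diagram. The sketch below is only a toy illustration of that derivation step, with hypothetical activity names and an assumed one-page-per-activity rule; the page diagrams and database models of the real method are not shown.

```python
# Toy derivation sketch (hypothetical rule): every user-facing activity in the
# business model yields one candidate use case and one candidate web page.

business_model = [
    {"activity": "Search repair shop",  "actor": "customer", "user_facing": True},
    {"activity": "Reserve repair slot", "actor": "customer", "user_facing": True},
    {"activity": "Update parts stock",  "actor": "system",   "user_facing": False},
]

use_cases = [a["activity"] for a in business_model if a["user_facing"]]
page_list = [{"page": a["activity"].lower().replace(" ", "_") + ".html",
              "actor": a["actor"]}
             for a in business_model if a["user_facing"]]

print(use_cases)
for page in page_list:
    print(page)
```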

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information will be available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for the Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of key words found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information which is inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of the importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, the link-structure based ranking method has been playing an essential role in World Wide Web(WWW), and nowadays, many people recognize the effectiveness and efficiency of it. On the other hand, as Resource Description Framework(RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed with RDF graph, making the ranking algorithm for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure based ranking method seems to be highly applicable to ranking the Semantic Web resources. However, the information space of the Semantic Web is more complex than that of WWW. For instance, WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to apply their algorithm to rank the Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper. 
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems left by the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken in the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect, and further sheds light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate datatype properties, which had not previously been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research, and we also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues. (A weighted-ranking sketch follows this entry.)
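
The class-oriented idea above weights incoming links by how significant each property is when ranking resources of a given class, and then iterates a PageRank-style computation over the RDF graph. The sketch below is not the authors' algorithm: the triples, property weights, damping factor, and iteration count are all hypothetical, and it shows only the general shape of a property-weighted link-analysis ranking.

```python
# Property-weighted PageRank-style sketch (hypothetical graph and weights):
# an edge (subject, property, object) passes score to the object in
# proportion to the weight a user assigned to that property.

def weighted_rank(triples, property_weights, damping=0.85, iterations=50):
    nodes = {s for s, _, _ in triples} | {o for _, _, o in triples}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for subject in nodes:
            out = [(p, o) for s, p, o in triples if s == subject]
            total_w = sum(property_weights.get(p, 0.0) for p, _ in out)
            if total_w == 0:
                continue                      # no weighted outgoing links
            for p, obj in out:
                share = property_weights.get(p, 0.0) / total_w
                new_rank[obj] += damping * rank[subject] * share
        rank = new_rank
    return rank

triples = [
    ("paperA", "cites", "paperB"),
    ("paperC", "cites", "paperB"),
    ("paperC", "sameVenueAs", "paperA"),
]
weights = {"cites": 1.0, "sameVenueAs": 0.2}   # user-chosen per-property weights
for node, score in sorted(weighted_rank(triples, weights).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```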

Embedded control system design by use of the Mobile web page (모바일 웹페이지를 이용한 임베디드 컨트롤러 시스템 설계)

  • 정운용;이재성;김선형
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.695-698
    • /
    • 2003
  • Recently, with the advancement of the Internet and the growing use of mobile devices, demand has increased for techniques that remotely control existing equipment over the Internet. Accordingly, this paper proposes a system that controls devices connected to the web through a mobile web browser. Because a web-based control system using mobile devices has a wide range of applications, such as home networks and telematics, continued research is needed. (A minimal control-page sketch follows this entry.)

  • PDF
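
The entry above controls devices remotely through a web page viewed in a mobile browser. The following sketch uses a simulated device instead of real embedded hardware and hypothetical URLs; it shows the basic shape of such a control page served by a small HTTP server, not the system from the paper.

```python
# Minimal control-page sketch (simulated device, hypothetical URLs): the
# mobile browser requests /?lamp=on or /?lamp=off to switch the device.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

device_state = {"lamp": "off"}    # stand-in for real embedded I/O

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        if query.get("lamp", [""])[0] in ("on", "off"):
            device_state["lamp"] = query["lamp"][0]
        page = (f"<html><body><p>Lamp is {device_state['lamp']}</p>"
                '<a href="/?lamp=on">On</a> <a href="/?lamp=off">Off</a>'
                "</body></html>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()
```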