Title/Summary/Keyword: Web Page

Search results: 675

The Ontology-based Web Navigation Guidance System (온톨로지 기반 웹 항해 안내 시스템)

  • Jung, Hyosook; Kim, Heejin; Min, Kyungsil; Park, Seongbin
    • The Journal of Korean Association of Computer Education, v.12 no.5, pp.95-103, 2009
  • In this paper, we propose a Web navigation guidance system that automatically provides a user with semantically related links based on an ontology. The system associates each web page with a concept in the ontology and creates new links between web pages by considering the relationships among concepts defined in the ontology, thereby enhancing web navigation through semantic links. We evaluated the proposed system with fifth-grade students performing tasks by searching Web pages, and found that the degree of disorientation, the ratio of revisits to Web pages, and the time spent completing tasks were smaller for students in the experimental group than for those in the control group. In addition, the task completion ratio was higher for students in the experimental group than for those in the control group. The proposed system is expected to help in designing navigable web sites, which is important in Web-based education.

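The link-creation step the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the ontology, page names, and mapping below are made-up examples.

```python
# Hypothetical ontology: concept -> list of directly related concepts
ontology = {
    "animal": ["mammal", "bird"],
    "mammal": ["dog", "cat"],
    "bird": ["eagle"],
}

# Hypothetical mapping of web pages to ontology concepts
page_concept = {
    "animals.html": "animal",
    "mammals.html": "mammal",
    "dogs.html": "dog",
    "birds.html": "bird",
}

def semantic_links(page, page_concept, ontology):
    """Return pages whose concepts are directly related to this page's concept."""
    concept = page_concept[page]
    related = set(ontology.get(concept, []))
    # include the inverse direction: concepts that point to this one
    related |= {c for c, rels in ontology.items() if concept in rels}
    return sorted(p for p, c in page_concept.items()
                  if c in related and p != page)

print(semantic_links("mammals.html", page_concept, ontology))
```

For `mammals.html` (concept `mammal`), the related concepts are `dog`, `cat` (children) and `animal` (parent), so the generated semantic links point to `dogs.html` and `animals.html`.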

Context-based Web Application Design (컨텍스트 기반의 웹 애플리케이션 설계 방법론)

  • Park, Jin-Soo
    • The Journal of Society for e-Business Studies, v.12 no.2, pp.111-132, 2007
  • Developing and managing Web applications is more complex than ever because of their growing functionality, advancing Web technologies, increasing demands for integration with legacy applications, and ever-changing content and structure. All these factors call for a more inclusive and comprehensive Web application design method. In response, we propose a context-based Web application design methodology built on several classification schemes: a Web page classification, useful for identifying the information delivery mechanism and its relevant Web technology; a link classification, which reflects the semantics of the various associations between pages; and a software component classification, helpful for pinpointing the roles of various components during design. The proposed methodology also incorporates a unique Web application model comprising a set of information clusters called compendia, each of which consists of a theme, its contextual pages, links, and components. This view supports modular design as well as management of the ever-changing content and structure of a Web application. The methodology brings the three classification schemes and the Web application model together to arrive at a set of design artifacts that are semantically cohesive yet syntactically loosely coupled.

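A compendium, as the abstract describes it, is essentially a record grouping a theme with its pages, links, and components. The rendering below is an assumption drawn only from the abstract's wording, not the paper's notation; all field names and sample values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Compendium:
    """One information cluster: a theme plus its contextual artifacts."""
    theme: str
    pages: list = field(default_factory=list)       # contextual pages
    links: list = field(default_factory=list)       # (source, target) pairs
    components: list = field(default_factory=list)  # software components

checkout = Compendium(
    theme="Order checkout",
    pages=["cart.html", "payment.html"],
    links=[("cart.html", "payment.html")],
    components=["PaymentGateway", "TaxCalculator"],
)
print(checkout.theme, len(checkout.pages))
```

Modeling each cluster as a self-contained unit like this is what makes the modular design and piecewise maintenance the abstract mentions possible.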

Relevance of the Cyclomatic Complexity Threshold for the Web Programming (웹 프로그래밍을 위한 복잡도 한계값의 적정성)

  • Kim, Jee-Hyun
    • Journal of the Korea Society of Computer and Information, v.17 no.6, pp.153-161, 2012
  • In this empirical study, the relevance of the cyclomatic complexity threshold in the Web environment is analyzed based on the frequency distribution of the cyclomatic complexity numbers of applications, starting from two established upper bounds: 10, set by McCabe for procedural programming, and 5, set by Lopez for Java programming. Which value is appropriate for Web applications? To answer this, 10 web site projects were collected and a sample of more than 4,000 ASP files was measured. Analysis of the frequency distribution shows that more than 90% of the Web applications have a complexity of less than 50, and 50 is therefore proposed as the threshold for Web applications. A Web application has a complex architecture comprising server, client, and HTML parts, and the HTML side shows a high complexity of 35~40. The reason for this high complexity is that HTML programs typically take the form of menus for home pages or site maps, and the relevance of this is explained. In future work, we need to find out whether there are hidden properties of the Web application architecture related to complexity.
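For readers unfamiliar with the metric being thresholded: cyclomatic complexity can be approximated as the number of decision points plus one. The sketch below is a rough keyword-counting approximation (an assumption, not the paper's measurement tool, which worked on ASP sources).

```python
import re

# Decision-point keywords; counting them and adding 1 approximates
# McCabe's v(G) = D + 1 for a single connected component.
DECISION_KEYWORDS = r"\b(if|elif|for|while|case|and|or)\b"

def cyclomatic_complexity(source: str) -> int:
    decisions = len(re.findall(DECISION_KEYWORDS, source))
    return decisions + 1

snippet = """
if user:
    for item in items:
        if item.ok and item.ready:
            process(item)
"""
print(cyclomatic_complexity(snippet))  # 2 ifs + 1 for + 1 and -> 5
```

A real tool would parse the source rather than match keywords (string literals and comments would otherwise inflate the count), but the arithmetic against the 10/5/50 thresholds is the same.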

UML Sequence Diagram Based Test Case Extraction and Testing for Ensuring Reliability of Web Applications (웹 응용 신뢰성 확보를 위한 UML 순차도 기반의 시험사례 추출 및 시험)

  • 정기원; 조용선
    • The Journal of Society for e-Business Studies, v.9 no.1, pp.1-19, 2004
  • Systematic testing is frequently neglected in recent web applications because of time and cost pressure. Moreover, developers have difficulty applying traditional testing techniques to web applications. An approach for creating test cases for a web application from a sequence model is proposed for rapid and efficient testing. Test cases are extracted from the call messages (including self-call messages) of a UML (Unified Modeling Language) sequence diagram; a test case consists of messages, script functions or server pages, and additional values. A simple testing tool for web applications is also proposed: it creates and executes a URL consisting of a server page address and additional values. The tool was implemented in Microsoft Visual Basic. The efficiency of the proposed method and tool is shown through a practical case study reflecting the development of a web application for member management.

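The URL-construction step is simple enough to sketch. This is an illustration of the idea only; the page name and parameters below are hypothetical, not taken from the paper's case study.

```python
from urllib.parse import urlencode

def build_test_url(base, page, params):
    """Combine a server page address with additional test values into a URL,
    as the testing tool does for each extracted test case."""
    return f"{base}/{page}?{urlencode(params)}"

# Hypothetical test case: a call message to a member-registration page
url = build_test_url("http://localhost",
                     "member_register.asp",
                     {"name": "hong", "age": "20"})
print(url)  # http://localhost/member_register.asp?name=hong&age=20
```

Executing such URLs and checking the responses is what turns the sequence-diagram call messages into automated tests.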

Classification of Malicious Web Pages by Using SVM (SVM을 활용한 악성 웹 페이지 분류)

  • Hwang, Young-Sup; Moon, Jae-Chan; Cho, Seong-Je
    • Journal of the Korea Society of Computer and Information, v.17 no.3, pp.77-83, 2012
  • As web pages provide various services, the distribution of malware via web pages is also increasing. Malware can leak personal information, cause system malfunctions, and turn a system into a zombie. To prevent such damage, malicious web pages should be blocked. Because the malicious code embedded in web pages is obfuscated or transformed, it is difficult to detect with the signature-based approaches used by current anti-virus software. To overcome this problem, we analyzed web pages and extracted features for classifying malicious and benign pages, and we propose a classification method using SVM, which is widely used in machine learning. Experimental results show that the proposed method outperforms other methods: it classifies malicious web pages correctly and can help block the distribution of malicious code.
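The pipeline the abstract describes, extracting lexical features from page source and training an SVM on them, can be sketched end to end. Everything below is a toy illustration: the features, training data, and the hand-rolled linear SVM (hinge loss via sub-gradient descent) are assumptions, not the paper's feature set or training setup.

```python
def features(page_source):
    """Counts of constructs often abused in obfuscated malicious pages."""
    return [page_source.count("eval("),
            page_source.count("unescape("),
            page_source.count("<iframe"),
            1.0]  # bias term

def train_svm(data, labels, lr=0.01, lam=0.01, epochs=500):
    """Linear SVM trained by sub-gradient descent on the hinge loss."""
    w = [0.0] * len(data[0])
    for _ in range(epochs):
        for x, y in zip(data, labels):  # y in {-1, +1}
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            for i in range(len(w)):
                grad = lam * w[i] - (y * x[i] if margin < 1 else 0.0)
                w[i] -= lr * grad
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

benign = ["<html><body>hello</body></html>"]
malicious = ["<iframe src=x></iframe><script>eval(unescape('%61'))</script>"]
X = [features(p) for p in benign + malicious]
y = [-1, 1]  # -1 = benign, +1 = malicious
w = train_svm(X, y)
print([predict(w, x) for x in X])
```

In practice one would use a mature SVM library and a much richer feature set, but the structure, feature extraction followed by margin-based classification, is the same.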

An extension of state transition graph for distributed environment (분산된 환경에서의 상태 전이 그래프의 확장)

  • Suh, Jin-Hyung; Lee, Wang-Heon
    • Journal of the Korea Society of Computer and Information, v.15 no.1, pp.71-81, 2010
  • In a typical web environment, it is difficult to determine the update and recomputation status of WebView content, or the transitions of the WebView processing included in a web page. If an update to a data item is performed before a read operation on it, we can get a wrong result due to the incorrect ordering of operations, which increases the complexity of the problem. Many researchers have studied this problem; most of these issues do not arise in a single-user environment, but in a distributed environment they can occur. For this reason, in this paper we propose an extended state transition graph, together with algorithms for each WebView status, to describe WebView states in a distributed environment, and we analyze the performance of using materialized WebViews versus not using them. We also analyze network timing issues and how the effectiveness varies with the size of the WebView content.
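The core notion of tracking WebView status through transitions can be illustrated with a tiny state machine. The states and events below are speculative, inferred from the abstract only; the paper's extended graph is richer than this.

```python
# (state, event) -> next state: an update to base data marks the
# materialized WebView stale until it is recomputed.
TRANSITIONS = {
    ("fresh", "update"): "stale",
    ("stale", "recompute"): "fresh",
    ("stale", "update"): "stale",  # further updates keep it stale
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

state = "fresh"
for event in ["update", "update", "recompute"]:
    state = step(state, event)
print(state)  # fresh
```

A read served while the state is `stale` is exactly the "wrong result" hazard the abstract describes; the extended graph's job is to make such states explicit in a distributed setting.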

Web Server Hacking and Security Risk using DNS Spoofing and Pharming combined Attack (DNS 스푸핑을 이용한 포털 해킹과 파밍의 위험성)

  • Choi, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.11, pp.1451-1461, 2019
  • DNS spoofing is an attack in which an attacker intervenes in the communication between a client and a DNS server, deceiving the client by responding with a fake IP address rather than the actual one. By duplicating a web server's index page and using simple web programming, a pharming site that steals user IDs and passwords can be implemented. In this paper we study a web spoofing attack that combines DNS spoofing with the implementation of a pharming site, examining the attack method, the procedure, and the pharming site implementation against this university's portal server. In the case of the Kyungsung portal, the attack could bypass the protections and succeed even though the web server used SSL encryption and secure authentication. Many web servers have no security measures at all, and even web servers secured by SSL can be defeated, so these serious risks need to be publicized and countermeasures researched.

Development of an Internet-based Robot Education System

  • Hong, Soon-Hyuk; Jeon, Jae-Wook
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2003.10a, pp.616-621, 2003
  • Until now, many networked robots have been connected to the Internet for various applications. With these networked robots, very long-distance teleoperation is possible through the Internet. However, a more promising area for Internet-based teleoperation may be distance learning, for several reasons, such as the unpredictable characteristics of the Internet. In robotics classes, students learn many theories about robots, but it is hard for all students to perform actual experiments because of the lack of real robots and safety concerns. Some classes introduce a virtual robot simulator so that students can program a virtual robot and upload their programs to operate the real robot through off-line programming, but students must still visit the laboratory when they want to test their programs on the real robot. In this paper, we developed an Internet-based robot education system composed of two parts: robotics class materials and a web-based Java3D robot simulator. The system thus provides two distance-learning services to students through the Internet. The robotics class materials are provided as multimedia content on web pages, and the web-based robot simulator, as a real experiment tool, helps students understand a given subject. Students can therefore learn the required robotics theories and perform real experiments from their web browsers whenever they want to study.


Performance Evaluation of Scheduling Algorithms Using a Grid Toolkit(GridTool2) (그리드 툴킷인 GridTool2를 사용한 스케줄링 알고리즘의 성능 평가)

  • Kang, Oh-Han
    • The Journal of Korean Association of Computer Education, v.18 no.3, pp.115-124, 2015
  • In this paper, we introduce a web-based scheduling toolkit (GridTool2) that can simulate scheduling algorithms in a grid system, and we suggest new algorithms that add communication costs to the existing MinMin and Suffrage scheduling algorithms. Since GridTool2 runs in a web environment using a server and a database, it requires no separate compiler or runtime environment. GridTool2 lets users set variables such as communication costs on the web for performance evaluation and shows the simulation results on a web page. The new algorithms with communication costs were tested using GridTool2 to check for performance improvements; the results revealed that the new algorithms performed better as more workloads were added to the system.
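The MinMin heuristic with an added communication cost, the baseline the abstract extends, works as follows: for each unscheduled task, find the machine giving its minimum completion time; then commit the task whose minimum is smallest, and repeat. The sketch below uses made-up cost matrices and is an illustration of the heuristic, not GridTool2 itself.

```python
def minmin(exec_cost, comm_cost):
    """exec_cost[t][m]: execution time of task t on machine m;
    comm_cost[t][m]: time to transfer task t's data to machine m."""
    n_tasks = len(exec_cost)
    n_machines = len(exec_cost[0])
    ready = [0.0] * n_machines  # when each machine becomes free
    unscheduled = set(range(n_tasks))
    schedule = []
    while unscheduled:
        best = None  # (completion_time, task, machine)
        for t in unscheduled:
            for m in range(n_machines):
                ct = ready[m] + comm_cost[t][m] + exec_cost[t][m]
                if best is None or ct < best[0]:
                    best = (ct, t, m)
        ct, t, m = best
        ready[m] = ct
        unscheduled.remove(t)
        schedule.append((t, m))
    return schedule, max(ready)  # assignments and makespan

exec_cost = [[3, 5], [2, 4], [6, 1]]
comm_cost = [[1, 1], [1, 2], [2, 1]]
schedule, makespan = minmin(exec_cost, comm_cost)
print(schedule, makespan)
```

Folding `comm_cost` into the completion-time estimate is the kind of extension the paper evaluates: the same greedy loop, but with transfer time influencing which machine looks cheapest.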

Post Ranking in a Blogosphere with a Scrap Function: Algorithms and Performance Evaluation (스크랩 기능을 지원하는 블로그 공간에서 포스트 랭킹 방안: 알고리즘 및 성능 평가)

  • Hwang, Won-Seok; Do, Young-Joo; Kim, Sang-Wook
    • The KIPS Transactions:PartD, v.18D no.2, pp.101-110, 2011
  • With the increasing use of blogs, a huge number of posts have appeared in the blogosphere, making it difficult for web surfers to find quality posts in their search results. Post ranking algorithms are therefore required to help web surfers search effectively for quality posts. Although various algorithms have been proposed for web-page ranking, they are not directly applicable to post ranking, since posts have unique features different from those of web pages. In this paper, we propose post ranking algorithms that exploit the actions performed by bloggers, and we evaluate their effectiveness through extensive experiments using real-world blog data.
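Ranking by blogger actions can be sketched as weighted action counting. The action types and weights below are assumptions for illustration, not the authors' algorithm; the intuition is that a scrap (re-posting to one's own blog) signals stronger endorsement than a comment.

```python
# Assumed weights: a scrap counts more than a comment.
ACTION_WEIGHTS = {"scrap": 3.0, "comment": 1.0}

def rank_posts(actions):
    """actions: list of (post_id, action) pairs observed in the blogosphere;
    returns post ids ordered from highest to lowest score."""
    scores = {}
    for post, action in actions:
        scores[post] = scores.get(post, 0.0) + ACTION_WEIGHTS[action]
    return sorted(scores, key=scores.get, reverse=True)

actions = [("p1", "comment"), ("p2", "scrap"), ("p1", "comment"),
           ("p3", "scrap"), ("p3", "comment")]
print(rank_posts(actions))  # ['p3', 'p2', 'p1']
```

Here `p3` (one scrap plus one comment, score 4.0) outranks `p2` (one scrap, 3.0) and `p1` (two comments, 2.0), showing how a single strong action can outweigh repeated weak ones.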