• Title/Summary/Keyword: Client Side Pages


Execution-based System and Its Performance Analysis for Detecting Malicious Web Pages using High Interaction Client Honeypot (고 상호작용 클라이언트 허니팟을 이용한 실행 기반의 악성 웹 페이지 탐지 시스템 및 성능 분석)

  • Kim, Min-Jae;Chang, Hye-Young;Cho, Seong-Je
    • Journal of KIISE:Computing Practices and Letters / v.15 no.12 / pp.1003-1007 / 2009
  • Client-side attacks, including drive-by downloads, target vulnerabilities in client applications that interact with a malicious server or process malicious data. A typical client-side attack is a web-based one involving a malicious web page that exploits a specific browser vulnerability to execute malware on the client system (PC) or give the malicious server complete control of it. To defend against these attacks, this paper constructs a high-interaction client honeypot system using Capture-HPC, which adopts execution-based detection in virtual machines. We have detected and classified malicious web pages using the system, and analyzed its performance in terms of the number of virtual machine images and the number of browsers executed simultaneously in each virtual machine. Experimental results show that the system obtains better performance with one virtual machine image, owing to less reverting overhead, and also performs well when 50 browsers are executed simultaneously in a virtual machine.
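
The control flow of such an execution-based detector is easy to sketch. Below is a minimal Python sketch of the batch-visit-then-diff loop; the VM helper functions are hypothetical placeholders (Capture-HPC's real client/server API is not shown in the abstract), and only the structure of the loop follows the paper's setup.

```python
# Minimal sketch of an execution-based detection loop in the style of a
# high-interaction client honeypot such as Capture-HPC. All helpers below are
# hypothetical stand-ins for real VM control and state monitoring.

BROWSERS_PER_VM = 50  # the batch size the paper found to perform well

def snapshot_vm(vm):            # placeholder: record file/registry/process state
    return {"files": set(), "registry": set(), "processes": set()}

def open_in_browser(vm, url):   # placeholder: drive one browser instance at the URL
    pass

def diff_state(vm, baseline):   # placeholder: unexpected changes imply exploitation
    return []

def revert_vm(vm):              # placeholder: roll the VM back to its clean image
    pass

def classify_urls(vm, urls):
    malicious = []
    for i in range(0, len(urls), BROWSERS_PER_VM):
        batch = urls[i:i + BROWSERS_PER_VM]
        baseline = snapshot_vm(vm)
        for url in batch:              # one browser per URL (concurrent in the real system)
            open_in_browser(vm, url)
        if diff_state(vm, baseline):   # any abnormal change taints the whole batch
            malicious.extend(batch)    # (re-test individually to pinpoint the culprit)
            revert_vm(vm)              # reverting is the dominant overhead the paper measures
    return malicious
```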

An Implementation of System for Detecting and Filtering Malicious URLs (악성 URL 탐지 및 필터링 시스템 구현)

  • Chang, Hye-Young;Kim, Min-Jae;Kim, Dong-Jin;Lee, Jin-Young;Kim, Hong-Kun;Cho, Seong-Je
    • Journal of KIISE:Computing Practices and Letters / v.16 no.4 / pp.405-414 / 2010
  • According to SecurityFocus statistics, client-side attacks through Microsoft Internet Explorer increased by more than 50% in 2008. In this paper, we have implemented a behavior-based malicious web page detection system and a blacklist-based malicious web page filtering system. To do this, we first efficiently collected the target URLs by constructing a crawling system. The malicious URL detection system, run on a dedicated server, actively visits and renders the collected web pages in a virtual machine environment. To decide whether each web page is malicious, the system checks the state changes of the virtual machine after rendering the page. If abnormal state changes are detected, we conclude that the rendered web page is malicious and insert it into the blacklist of malicious web pages. The malicious URL filtering system, run on the web client machine, filters malicious web pages against the blacklist when a user visits web sites. We have enhanced system performance by automatically handling message boxes during URL analysis on the detection system. Experimental results show that game sites contain up to three times more malicious pages than other sites, and that many attacks involve file creation and registry key modification.
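
The filtering side of such a system reduces to a blacklist lookup on the client. A minimal Python sketch follows; the blacklist file format and the host-level matching rule are illustrative assumptions, since the abstract does not specify them.

```python
# Blacklist-based filtering sketch for the client side. The detection server
# is assumed to publish one malicious URL per line; matching policy is ours.

from urllib.parse import urlsplit

def load_blacklist(path):
    """Load malicious URLs produced by the detection server, one per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def is_blocked(url, blacklist):
    """Block exact blacklisted URLs and any page on a blacklisted host."""
    host = urlsplit(url).netloc.lower()
    return url in blacklist or any(urlsplit(entry).netloc.lower() == host
                                   for entry in blacklist)

# Demo with an inline set; a real client would call load_blacklist(...)
blacklist = {"http://badgame.example/free-items.html"}
print(is_blocked("http://badgame.example/lobby.html", blacklist))  # True: same host
```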

Reconstructing Web Broadcasting Information based on User Retrieval Pattern (무선 환경에서 사용자 검색 성향을 반영한 웹 방송 정보 재구성 기법)

  • Kim, Won-Cheol;Lee, Soo-Cheol;Hwang, Een-Jun;Byeon, Kwang-Jun
    • The KIPS Transactions:PartD / v.11D no.5 / pp.1149-1158 / 2004
  • Today, the fastest-growing community of web users consists of mobile visitors who browse web pages with wireless PDAs and cellular phones. However, most web pages are optimized exclusively for desktop clients on broadband networks and are inconvenient for users with small-screen mobile devices. Such devices display only a few lines of text and cannot run client-side programs or scripts due to a lack of system resources. Even worse, their connections are usually too slow to support most data-intensive applications. In this paper, we propose a pageslet scheme that makes it feasible to browse ordinary web pages on small-screen mobile devices. It extracts the broadcasting sections the user prefers from broadcasting web pages and automatically reorganizes the extracted sections for convenient browsing on mobile devices.
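
The core of the pageslet idea is extracting only the preferred sections from a full desktop page. Here is a rough Python sketch using only the standard library; the choice of <h2> headings as section markers and the keyword-based preference match are illustrative assumptions, not the paper's actual extraction rules.

```python
# Section-extraction sketch: keep only the parts of a page that match the
# user's retrieval pattern, so a small-screen device receives less content.

from html.parser import HTMLParser

class SectionExtractor(HTMLParser):
    """Collect page text grouped under <h2> section headings."""
    def __init__(self):
        super().__init__()
        self.sections = {}       # heading text -> list of text chunks
        self.current = None
        self.in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_heading = False

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self.in_heading:
            self.current = text
            self.sections[self.current] = []
        elif self.current:
            self.sections[self.current].append(text)

def build_pageslet(html, preferences):
    """Keep only sections whose heading matches the user's preferences."""
    parser = SectionExtractor()
    parser.feed(html)
    return {h: " ".join(body) for h, body in parser.sections.items()
            if any(p.lower() in h.lower() for p in preferences)}

page = "<h2>News</h2><p>Headlines...</p><h2>Sports</h2><p>Scores...</p>"
print(build_pageslet(page, preferences=["sports"]))  # -> {'Sports': 'Scores...'}
```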

Automated Functionality Test Methods for Web-based Applications (웹 기반 어플리케이션의 기능 테스트 자동화 방법)

  • Kuk, Seung-Hak;Kim, Hyeon-Soo
    • The KIPS Transactions:PartD / v.14D no.5 / pp.517-530 / 2007
  • Recently, web applications have grown rapidly and have become more and more complex. As web applications become more complex, there is growing concern about their quality, yet very little attention is paid to testing web applications, and practical research efforts and tools are scarce. Thus, in this paper, we suggest automated testing methods for web applications. The methods generate an analysis model by analyzing the HTML code and the source code; test targets are then identified and test cases are extracted from the analysis model. In addition, test drivers and test data are generated automatically and deployed on the web server to establish a testing environment. Through this process we can automate the testing process for web applications; moreover, the automation makes our approach more effective than existing research efforts.
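
As an illustration of the first step, test targets can be identified by scanning the HTML for forms. The following Python sketch is an assumption-laden stand-in for the paper's analysis model: it covers only HTML form discovery (the paper also analyzes server-side source code, which is omitted here), and the default test-data policy is ours.

```python
# Derive functional test targets from HTML by scanning for <form> elements
# and emitting one simple test case (action URL, method, default field values).

from html.parser import HTMLParser

class FormScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"action": a.get("action", ""),
                               "method": a.get("method", "get"),
                               "fields": {}})
        elif tag == "input" and self.forms:
            # Fill each field with a placeholder value as naive test data
            self.forms[-1]["fields"][a.get("name", "")] = a.get("value", "test")

def generate_test_cases(html):
    scanner = FormScanner()
    scanner.feed(html)
    return scanner.forms   # each form becomes one functional test target

html = '<form action="/login" method="post"><input name="id"><input name="pw"></form>'
print(generate_test_cases(html))
```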

Development of an Information Management System over WWW using ASP (Active Server Pages를 이용한 Web 응용 정보관리시스템 개발)

  • 오충헌;정석찬;진현수;조규갑
    • Proceedings of the Korean Operations and Management Science Society Conference / 2000.04a / pp.766-769 / 2000
  • In recent years, the Internet/intranet has come to represent the next generation of computing environments. It is therefore necessary to integrate the WWW (World Wide Web) over the Internet/intranet with a DBMS (Database Management System) to keep pace with the growing number of user requests and the increasing variety of data. Moreover, services over the WWW need rapid bug fixes and continual system improvement, given how quickly information changes and how important interaction with users is. The typical CGI method commonly used to connect to a database wastes time and system resources because it connects to the database on every request. Therefore, this paper presents the conceptual structure and implementation of an information management system over the WWW, applying a recent technology called ASP (Active Server Pages), which controls and arranges client logic dynamically on the server side, and introducing the concepts of working groups and folders into the database design.
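
The performance argument against CGI is that every request pays for a fresh database connection, while a server-resident page engine such as ASP can hold a connection open across requests. A small Python sketch of that contrast, with sqlite3 standing in for the real DBMS (the paper's actual ASP/database code is not shown in the abstract):

```python
# Contrast: per-request connections (CGI style) vs. one connection reused by
# a long-lived server process (ASP style). sqlite3 is only an illustration.

import sqlite3

def handle_request_cgi(query):
    # CGI style: connect, query, disconnect on every single request
    conn = sqlite3.connect("info.db")
    rows = conn.execute(query).fetchall()
    conn.close()
    return rows

SHARED_CONN = sqlite3.connect("info.db")   # opened once when the server starts
SHARED_CONN.execute("CREATE TABLE IF NOT EXISTS docs (title TEXT)")

def handle_request_asp(query):
    # ASP style: the server process keeps the connection alive across requests
    return SHARED_CONN.execute(query).fetchall()

print(handle_request_asp("SELECT title FROM docs"))
```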


Taint Inference for Cross-Site Scripting in Context of URL Rewriting and HTML Sanitization

  • Pan, Jinkun;Mao, Xiaoguang;Li, Weishi
    • ETRI Journal / v.38 no.2 / pp.376-386 / 2016
  • Currently, web applications are gaining in prevalence. In a web application, an input may not be appropriately validated, making the application susceptible to cross-site scripting (XSS), which poses serious security problems for Internet users and the websites to which such trusted web pages belong. Taint inference is a type of information flow analysis technique that is useful for detecting XSS on the client side. However, existing techniques do not handle two practical issues properly. One is URL rewriting, which transforms a standard URL into a clearer and more manageable form. The other is HTML sanitization, which filters an input against blacklists or whitelists of HTML tags or attributes. In this paper, we draw an analogy between the taint inference problem and the molecular sequence alignment problem in bioinformatics, and transfer two techniques from the latter to the former to solve these practical issues. In particular, URL rewriting is addressed using local sequence alignment, and HTML sanitization is modeled by introducing a removal gap penalty. Empirical results demonstrate the effectiveness and efficiency of our method.
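
The alignment idea is concrete enough to sketch: a high local alignment score between an input value and a fragment of the page output suggests tainted flow even after URL rewriting has reshaped the string. Below is a compact Smith-Waterman scorer in Python; the scoring constants and taint threshold are illustrative choices rather than the paper's, and the removal gap penalty for sanitization is only noted in a comment.

```python
# Smith-Waterman local alignment, the bioinformatics technique the paper
# adapts for taint inference. HTML sanitization would additionally be modeled
# by a cheaper "removal" gap penalty on the output side (not implemented here).

def local_align_score(src, dst, match=2, mismatch=-1, gap=-1):
    m, n = len(src), len(dst)
    prev = [0] * (n + 1)   # best local alignment scores for the previous row
    best = 0
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            s = match if src[i - 1] == dst[j - 1] else mismatch
            # Local alignment: scores never drop below zero
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

user_input = "article?id=<script>"
page_chunk = "/article/<script>"          # URL-rewritten echo of the input
score = local_align_score(user_input, page_chunk)
# Flag taint when the local alignment recovers most of the suspicious payload
print("likely tainted" if score >= 2 * len("<script>") * 0.8 else "clean")
```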