• Title/Summary/Keyword: URL정보 (URL information)


Fast URL Lookup Using URL Prefix Hash Tree (URL Prefix 해시 트리를 이용한 URL 목록 검색 속도 향상)

  • Park, Chang-Wook;Hwang, Sun-Young
    • Journal of KIISE:Information Networking / v.35 no.1 / pp.67-75 / 2008
  • In this paper, we propose an efficient URL lookup algorithm for URL list-based web content filtering systems. By converting a URL list into URL prefix form and building a hash tree representation of it, the proposed algorithm performs tree searches for URL lookups, eliminating the redundant searches of the hash table method. Experimental results show that the proposed algorithm is 62%~210% faster than the conventional hash table method, depending on the number of segments.
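
The prefix-tree idea can be illustrated with a short sketch. This is not the paper's implementation, only a minimal Python illustration of the technique it describes: URL prefixes are split into segments and stored in a tree whose children are hash maps, so a lookup walks the segments of a query URL once instead of hashing every candidate prefix against a flat table.

```python
# Illustrative sketch only (not the paper's implementation): a prefix tree
# keyed by URL segments, with each node's children held in a hash map.
from urllib.parse import urlparse

class URLPrefixTree:
    def __init__(self):
        self.root = {}          # segment -> child node (dict)
        self.END = object()     # marker: a listed prefix ends at this node

    @staticmethod
    def _segments(url):
        parsed = urlparse(url if "://" in url else "http://" + url)
        host = parsed.netloc.lower().split(".")[::-1]   # reversed host labels
        path = [s for s in parsed.path.split("/") if s]
        return host + path

    def insert(self, url_prefix):
        node = self.root
        for seg in self._segments(url_prefix):
            node = node.setdefault(seg, {})
        node[self.END] = True

    def lookup(self, url):
        """Return True if any listed prefix matches the given URL."""
        node = self.root
        for seg in self._segments(url):
            if self.END in node:
                return True
            node = node.get(seg)
            if node is None:
                return False
        return self.END in node

tree = URLPrefixTree()
tree.insert("example.com/banned/section")
print(tree.lookup("http://example.com/banned/section/page.html"))  # True
print(tree.lookup("http://example.com/allowed/page.html"))         # False
```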

A study on non-storage data recording system and non-storage data providing method by smart QR code (스마트한 QR코드에 의한 비저장식 데이터 기록 시스템 및 비저장식 데이터 제공방법에 관한 연구)

  • Oh, Eun-Yeol
    • Journal of Convergence for Information Technology / v.9 no.4 / pp.14-20 / 2019
  • The purpose of this paper is to present a smart QR code recording system and a non-storage data providing method in which the original data are encrypted into a form of URL information and that URL information is encoded into a QR code, so that the QR code can be written to a medium and decrypted without storing the original data on the medium. The study was carried out through prior-art review and literature research. The analysis shows that the system is built on an online administration server: a secret code matching the data input signal is stored in a database, and on a QR code generation command the input data are converted, via the password DB, into password information combined into a subordinate locator under the admin server's domain name and encoded as a URL QR code. The smart QR method of data management (recording and providing) therefore shows no limitations in ease and space of use and no obstacles to capacity.
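
As a rough illustration of the described flow (encrypt the data, embed the ciphertext as a sub-locator under the administration server's domain, encode the resulting URL as a QR code), the sketch below uses the third-party `cryptography` and `qrcode` packages; the domain name and file names are hypothetical and key management is deliberately simplified. It is not the paper's system.

```python
# Illustrative sketch only: the medium carries a QR code of a URL that
# references encrypted data, not the data itself.
# Requires the third-party packages `cryptography` and `qrcode[pil]`.
from cryptography.fernet import Fernet
import qrcode

ADMIN_DOMAIN = "https://admin.example.com"   # hypothetical admin server

key = Fernet.generate_key()                  # kept server-side, not on the medium
cipher = Fernet(key)

original_data = b"record that should not be stored on the medium"
token = cipher.encrypt(original_data).decode()

# The URL only references the data; the server decrypts it on request.
url = f"{ADMIN_DOMAIN}/d/{token}"
qrcode.make(url).save("record_qr.png")

# Server-side decoding when the QR code's URL is visited:
recovered = cipher.decrypt(token.encode())
assert recovered == original_data
```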

A Study on analysis tools in the SWF file URL (SWF 파일의 URL정보 분석도구)

  • Jang, Dong-Hwan;Song, Yu-Jin;Lee, Jae-Yong
    • Journal of Korea Society of Industrial Information Systems / v.15 no.5 / pp.105-111 / 2010
  • SWF (Shockwave Flash) is a file format for vector graphics produced by Adobe. It is widely used for a variety of content such as website advertising, widgets, games, education, and video, and it contains various types of data such as sound, scripts, API calls, and images. Many SWF files contain URL information in their ActionScript for network communication, and in forensic investigations such URLs can serve as important evidence alongside a PC user's web browser history. Decompilers exist that can analyze SWF files and verify their URL information, but verifying the URL information in the ActionScript of many SWF files with a decompiler takes a long time. This paper studies the analysis of URL information in ActionScript and the extraction of URL information from multiple SWF files by designing analysis tools for URL information in SWF files.
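
A simple way to bulk-extract URL strings without a full decompiler is to decompress each SWF body and scan it with a URL regular expression. The sketch below is not the paper's tool, only an assumed minimal approach; it handles uncompressed (FWS) and zlib-compressed (CWS) files and skips LZMA-compressed ones.

```python
# Illustrative sketch only: scan a directory of SWF files for URL strings.
import re
import zlib
from pathlib import Path

URL_RE = re.compile(rb"https?://[\w\-./?%&=:+~#]+")

def swf_body(path):
    data = Path(path).read_bytes()
    sig = data[:3]
    if sig == b"CWS":                       # zlib-compressed SWF body after 8-byte header
        return zlib.decompress(data[8:])
    if sig == b"FWS":                       # uncompressed SWF
        return data[8:]
    raise ValueError(f"{path}: unsupported SWF signature {sig!r}")

def extract_urls(swf_dir):
    results = {}
    for path in Path(swf_dir).glob("*.swf"):
        try:
            body = swf_body(path)
        except (ValueError, zlib.error) as exc:
            results[path.name] = [f"error: {exc}"]
            continue
        results[path.name] = sorted({m.decode("ascii", "ignore")
                                     for m in URL_RE.findall(body)})
    return results

if __name__ == "__main__":
    for name, urls in extract_urls("./swf_samples").items():
        print(name, urls)
```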

Evaluating Site-based URL Normalization (사이트 기반의 URL 정규화 평가)

  • Jeong, Hyo-Sook;Kim, Sung-Jin;Lee, Sang-Ho
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.28-30 / 2005
  • URL normalization is the process of transforming variously expressed but identical URLs into a single canonical form; duplicate URL representations of the same document are eliminated through URL normalization. Standard normalization was designed to produce no false positives (transforming non-identical URLs into the same string). However, because standard normalization yields many false negatives, extended normalizations that allow some false positives while sharply reducing false negatives have been proposed and studied. This paper shows that the results of extended normalization applied to URLs within the same site are similar, so that the result of an extended normalization on one URL in a site can be used effectively to normalize other URLs in the same site. To this end, we propose a measure of the within-site consistency of extended normalization results and a site-based evaluation measure for extended normalization. Six extended normalizations are evaluated on 250,000 URLs extracted from 20,000 real Korean web sites.

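Extended normalization rules go beyond the standard syntactic steps and may merge URLs that are not strictly identical, which is why the paper evaluates how consistently they behave within one site. The sketch below is a hypothetical illustration, not the paper's six rules: two assumed extended rules are applied to the URLs of one site to check whether they converge on a single canonical form.

```python
# Illustrative sketch only: two hypothetical extended normalization rules,
# checked for per-site consistency.
from urllib.parse import urlparse, urlunparse

def extended_normalize(url):
    p = urlparse(url)
    path = p.path
    # Extended rule 1: drop a default document name (may merge distinct URLs).
    for default in ("index.html", "index.htm", "default.asp"):
        if path.endswith("/" + default):
            path = path[: -len(default)]
    # Extended rule 2: drop a trailing slash.
    if path.endswith("/") and len(path) > 1:
        path = path[:-1]
    return urlunparse((p.scheme.lower(), p.netloc.lower(), path, "", p.query, ""))

site_urls = [
    "http://Example.co.kr/news/index.html",
    "http://example.co.kr/news/",
    "http://example.co.kr/news",
]
# Site-based evaluation idea: how consistently do the rules behave for the
# URLs of one site? Here all three collapse to a single form.
print({extended_normalize(u) for u in site_urls})
```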

URL Signatures for Improving URL Normalization (URL 정규화 향상을 위한 URL 서명)

  • Soon, Lay-Ki;Lee, Sang-Ho
    • Journal of KIISE:Databases / v.36 no.2 / pp.139-149 / 2009
  • In the standard URL normalization mechanism, URLs are normalized syntactically by a set of predefined steps. In this paper, we propose to complement standard URL normalization by incorporating semantically meaningful metadata of the web pages, namely the body text and the page size, both of which can be extracted during HTML parsing. The results of our first exploratory experiment indicate that body texts are effective in identifying equivalent URLs. Hence, in the second experiment, given a URL that has undergone standard normalization, we construct its URL signature by hashing the body text of the associated web page with the Message-Digest algorithm 5 (MD5). URLs that share identical signatures are considered equivalent in our scheme. The results of the second experiment show that the proposed URL signatures further reduce redundant URLs by 32.94% in comparison with standard URL normalization.
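
A minimal sketch of the signature idea, assuming the URLs have already gone through standard normalization and using hypothetical page bodies: the signature is the MD5 hash of the body text, and URLs with identical signatures fall into the same equivalence group.

```python
# Illustrative sketch only (not the authors' exact scheme).
import hashlib

def url_signature(body_text):
    # body_text would be extracted during HTML parsing of the fetched page.
    return hashlib.md5(body_text.strip().encode("utf-8")).hexdigest()

normalized_pages = {                       # hypothetical normalized URL -> body text
    "http://example.com/a/index.html": "Welcome to page A",
    "http://example.com/a/":           "Welcome to page A",
    "http://example.com/b":            "A different page",
}

groups = {}
for url, body in normalized_pages.items():
    groups.setdefault(url_signature(body), []).append(url)

# The first two URLs share a signature, so they are judged equivalent even
# though syntactic normalization alone could not merge them.
print(groups)
```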

A Study proposal for URL anomaly detection model based on classification algorithm (분류 알고리즘 기반 URL 이상 탐지 모델 연구 제안)

  • Hyeon Wuu Kim;Hong-Ki Kim;DongHwi Lee
    • Convergence Security Journal / v.23 no.5 / pp.101-106 / 2023
  • Recently, cyberattacks have been increasing, from social engineering attacks using intelligent and persistent phishing sites to hacking techniques using malicious code. As personal security becomes more important, a method and solution are needed for determining from a web application whether a URL is malicious. In this paper, we compare highly accurate techniques for detecting malicious URLs to identify the features and limitations of each. Against classification-algorithm models that use features such as web reputation databases and URL-based detection sites, we propose an efficient URL anomaly detection technique.
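
As a hedged illustration of classification-based URL detection in general (not the model proposed in the paper), the sketch below trains a scikit-learn random forest on a few hypothetical URLs using simple lexical features; real systems would use far richer features such as reputation data.

```python
# Illustrative sketch only: a toy classifier over lexical URL features.
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def lexical_features(url):
    p = urlparse(url)
    return [
        len(url),                                  # overall length
        len(p.netloc),                             # host length
        url.count("."),                            # dots (subdomain abuse)
        url.count("-") + url.count("@"),           # hyphens and credential tricks
        int(any(c.isdigit() for c in p.netloc)),   # digits / raw IP in host
        int(p.scheme != "https"),                  # plain HTTP
    ]

# Hypothetical toy data: 0 = benign, 1 = malicious.
urls = [
    "https://www.example.com/about",
    "https://news.example.org/article/42",
    "https://docs.example.net/guide",
    "http://203.0.113.7/paypa1-login/verify",
    "http://secure-update.example-bank.tk/login",
    "http://promo@198.51.100.2/free-gift",
]
labels = [0, 0, 0, 1, 1, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([lexical_features(u) for u in urls], labels)
print(clf.predict([lexical_features("http://198.51.100.4/account-verify/update")]))
```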

URL Normalization for Web Applications (웹 어플리케이션을 위한 URL 정규화)

  • Hong, Seok-Hoo;Kim, Sung-Jin;Lee, Sang-Ho
    • Journal of KIISE:Information Networking / v.32 no.6 / pp.716-722 / 2005
  • On the web, syntactically different URLs can represent the same resource. URL normalization is the process of transforming such URLs, syntactically different but representing the same resource, into a canonical form. There are ongoing efforts to define standard URL normalization, which is designed to minimize false negatives while strictly avoiding false positives. This paper considers four URL normalization issues beyond those specified in the standard URL normalization. The idea behind our work is to reduce false negatives further while allowing false positives at a limited level. Two metrics are defined to analyze the effect of each step of the normalization. We conducted an experiment on over 170 million URLs collected from real web pages, and interesting statistical results are reported in this paper.
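
For reference, here is a minimal sketch of a few of the standard, purely syntactic normalization steps that such work builds on; it is simplified (percent-encoding of reserved characters is not handled fully) and is not the four extended steps the paper studies.

```python
# Illustrative sketch only: selected standard (RFC 3986) normalization steps.
import posixpath
from urllib.parse import urlsplit, urlunsplit, quote, unquote

def standard_normalize(url):
    s = urlsplit(url)
    scheme = s.scheme.lower()                       # lowercase the scheme
    host = (s.hostname or "").lower()               # lowercase the host
    if s.port and (scheme, s.port) not in (("http", 80), ("https", 443)):
        host += f":{s.port}"                        # keep only non-default ports
    path = quote(unquote(s.path or "/"))            # normalize percent-encoding
    norm = posixpath.normpath(path)                 # remove "." and ".." segments
    if path.endswith("/") and not norm.endswith("/"):
        norm += "/"                                 # preserve a trailing slash
    return urlunsplit((scheme, host, norm, s.query, ""))

print(standard_normalize("HTTP://WWW.Example.com:80/a/./b/../%7Euser/"))
# -> http://www.example.com/a/~user/
```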

A Spam Filter System based on Maximum Entropy Model Using Spamness Features and URL Features (스팸성 자질과 URL 자질을 이용한 최대엔트로피모델 기반 스팸메일 필터 시스템)

  • Gong, Mi-Gyoung;Lee, Kyung-Soon
    • Annual Conference on Human and Language Technology / 2006.10e / pp.213-219 / 2006
  • This paper proposes a maximum entropy model-based spam filter system that uses spamness features and URL features appearing in spam mail. Spamness features are the emphasis patterns that spammers deliberately insert into spam mail, or words abnormally deformed to pass through filter systems. In addition to spamness features, repeatedly appearing URLs and abnormal links are also used as features, because invalid or abnormal URLs, whether hyperlinked to provide additional information to the recipient or typed directly into the mail to evade filter systems, can help in filtering out spam mail. We also combined two classifiers, one using spamness features and one using URL features; combining the classifiers has the advantage that the features used by each classifier can be applied independently. Experimental results confirm that using spamness features and URL features improves the performance of the spam filter system.

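A hedged sketch of the general idea (not the paper's system or data): a maximum entropy model, here scikit-learn's logistic regression, trained on a toy corpus with two hypothetical feature groups, spamness features (emphasis patterns, obfuscated words) and URL features (URL counts and abnormal links).

```python
# Illustrative sketch only: toy maximum-entropy spam filter with two feature groups.
import re
from sklearn.linear_model import LogisticRegression

URL_RE = re.compile(r"https?://\S+")

def mail_features(text):
    urls = URL_RE.findall(text)
    return [
        text.count("!"),                                            # emphasis patterns
        len(re.findall(r"fr[e3]{2}|v[i1]agra|\$\$+", text, re.I)),  # obfuscated / spammy words
        len(urls),                                                  # number of URLs
        sum("@" in u or u.split("/")[2].replace(".", "").isdigit()  # abnormal links
            for u in urls),
    ]

# Hypothetical toy corpus: 0 = legitimate, 1 = spam.
mails = [
    ("Meeting notes attached, see http://intranet.example.com/notes", 0),
    ("Lunch tomorrow at 12?", 0),
    ("FREE $$$ now!!! click http://203.0.113.9/win", 1),
    ("V1agra best price!!! http://promo@198.51.100.2/buy", 1),
]
X = [mail_features(t) for t, _ in mails]
y = [label for _, label in mails]

clf = LogisticRegression().fit(X, y)     # maximum entropy = (multinomial) logistic model
print(clf.predict([mail_features("Fr33 gift card!!! http://198.51.100.2/claim")]))
```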

OTACUS: Parameter-Tampering Prevention Techniques using Clean URL (OTACUS: 간편URL기법을 이용한 파라미터변조 공격 방지기법)

  • Kim, Guiseok;Kim, Seungjoo
    • Journal of Internet Computing and Services / v.15 no.6 / pp.55-64 / 2014
  • In a web application, URL parameters, an important element of communication between the client and the server, are forwarded to the web server after passing without restriction through dedicated network security devices such as IPS and firewalls. An attacker who tampers with the parameters of a requested URL can disclose confidential information or gain financially through e-commerce. Because such parameter-tampering vulnerabilities can only be judged within the application's own logic, they cannot be blocked by a web application firewall. In this paper, we present OTACUS (One-Time Access Control URL System), a technique that complements the shortcomings of existing approaches. OTACUS simplifies complex parameter-carrying URLs with the clean URL technique, preventing the URL from being exposed to the attacker and thereby effectively blocking tampering with POST or GET parameters passed to the server. Performance tests of an actual implementation of OTACUS show that it operates stably with a load increase of less than 3%.
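
A minimal sketch of the one-time clean-URL idea under stated assumptions (hypothetical host name, in-memory store, no expiry policy), not the OTACUS implementation: the parameters stay on the server behind an opaque single-use token, so the client never sees or tampers with them.

```python
# Illustrative sketch only: parameters hidden behind one-time opaque URLs.
import secrets

class OneTimeURLMap:
    def __init__(self, base="https://shop.example.com/t/"):   # hypothetical host
        self.base = base
        self._pending = {}            # token -> (path, params)

    def issue(self, path, params):
        """Called when rendering a link: hide the params behind a one-time token."""
        token = secrets.token_urlsafe(16)
        self._pending[token] = (path, dict(params))
        return self.base + token

    def resolve(self, token):
        """Called when the request arrives: valid exactly once."""
        try:
            return self._pending.pop(token)    # removed -> replay/tampering fails
        except KeyError:
            raise PermissionError("invalid or already-used one-time URL")

urls = OneTimeURLMap()
clean = urls.issue("/order/confirm", {"item_id": 1042, "price": 12900})
print(clean)                                    # e.g. https://shop.example.com/t/<token>
print(urls.resolve(clean.rsplit("/", 1)[-1]))   # first use succeeds
# A second resolve() of the same token raises PermissionError.
```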

Development of a Malicious URL Machine Learning Detection Model Reflecting the Main Feature of URLs (URL 주요특징을 고려한 악성URL 머신러닝 탐지모델 개발)

  • Kim, Youngjun;Lee, Jaewoo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1786-1793 / 2022
  • Cyber-attacks such as smishing and hacking mail exploiting COVID-19 and political and social issues have recently been continuous. Machine learning and deep learning research is conducted to prevent damage from cyber-attacks that use malicious links to breach personal data. Previous studies lacked a sufficient basis for judging URLs to be malicious because the features of their data sets were overly simple. In this paper, nine main features of three types, "URL Days", "URL Word", and "URL Abnormal", are proposed in addition to the lexical URL features reflected in previous research. F1-score and accuracy were measured with four different machine learning algorithms. Compared with an existing study, the results showed an improvement of 0.9% and a highest value of 98.5% in F1-score and accuracy, demonstrating that the proposed main features contribute to improving both accuracy and performance.
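
As a hypothetical illustration only, the sketch below shows what features in the three named groups ("URL Days", "URL Word", "URL Abnormal") plus basic lexical features might look like; the paper's actual nine features and their definitions are not reproduced here, and the word list and example dates are assumptions.

```python
# Illustrative sketch only: hypothetical feature extraction for the three groups.
from datetime import date
from urllib.parse import urlparse

SUSPICIOUS_WORDS = ("login", "verify", "update", "secure", "account")  # hypothetical list

def url_features(url, domain_created=None, today=date(2022, 1, 1)):
    p = urlparse(url)
    host, path = p.netloc.lower(), p.path.lower()
    return {
        # "URL Days"-style feature: age of the registered domain, if known.
        "domain_age_days": (today - domain_created).days if domain_created else -1,
        # "URL Word"-style feature: suspicious tokens in host or path.
        "suspicious_words": sum(w in host or w in path for w in SUSPICIOUS_WORDS),
        # "URL Abnormal"-style features: structural anomalies.
        "ip_as_host": int(host.replace(".", "").isdigit()),
        "num_subdomains": max(host.count(".") - 1, 0),
        # Lexical features as in earlier work.
        "url_length": len(url),
        "num_special_chars": sum(url.count(c) for c in "@-_%"),
    }

print(url_features("http://203.0.113.7/secure-login/verify",
                   domain_created=date(2021, 12, 20)))
```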