Search results for "URL link" in Title/Summary/Keyword (33 results)

Link-E-Param : A URL Parameter Encryption Technique for Improving Web Application Security (Link-E-Param : 웹 애플리케이션 보안 강화를 위한 URL 파라미터 암호화 기법)

  • Lim, Deok-Byung;Park, Jun-Cheol
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.9B / pp.1073-1081 / 2011
  • A URL parameter can hold information that is confidential or vulnerable to illegitimate tampering. We propose Link-E-Param (Link with Encrypted Parameters) to protect whole URL parameter names as well as their values. Unlike other techniques that conceal only some of the URL parameters, it successfully discourages attacks that analyze URLs to steal secret information from Web sites. We implement Link-E-Param as a servlet filter that can be deployed on any Java Web server by simply copying a jar file and setting a few configuration values; it can therefore be used with any existing Web application without modifying the application. It also supports numerous encryption algorithms to choose from. Experiments show that our implementation adds only a 2~3% increase in user response time for encryption and decryption, which is deemed acceptable.
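The abstract above describes rewriting links so that the entire query string, parameter names and values alike, travels as a single opaque token that a server-side filter decrypts. A minimal Python sketch of that idea, using a hash-derived keystream purely as an illustrative stand-in for the real ciphers (e.g. AES) the filter would support; all function names here are assumptions, not the paper's API:

```python
import base64
import hashlib
from urllib.parse import parse_qsl

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from SHA-256 blocks (illustration only;
    # a deployment would plug in a proper cipher such as AES).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_params(url: str, key: bytes, nonce: bytes = b"demo") -> str:
    # Replace the whole query string (names AND values) with one opaque token.
    base, _, query = url.partition("?")
    stream = _keystream(key, nonce, len(query))
    token = bytes(a ^ b for a, b in zip(query.encode(), stream))
    return base + "?p=" + base64.urlsafe_b64encode(token).decode()

def decrypt_params(url: str, key: bytes, nonce: bytes = b"demo") -> dict:
    # Server-side filter step: recover the original name/value pairs.
    _, _, query = url.partition("?")
    token = base64.urlsafe_b64decode(query[len("p="):])
    stream = _keystream(key, nonce, len(token))
    plain = bytes(a ^ b for a, b in zip(token, stream)).decode()
    return dict(parse_qsl(plain))
```

Because the whole query string is hidden behind a single parameter, an observer cannot even learn which parameter names the application uses.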

The Comparison & Analysis of Linking System Using OpenURL (OpenURL을 이용한 전자자원 링킹시스템 비교·분석)

  • Kim, Seong-Hee
    • Journal of the Korean Society for Information Management / v.22 no.4 s.58 / pp.221-234 / 2005
  • This study describes the concept of link resolvers using OpenURL, and analyzes the commercially available link resolvers in terms of remote and local hosting, title lists, service customization, and usage statistics. The results will help libraries select the link resolvers most appropriate to their characteristics.

A Study on the Link Server Development Using B-Tree Structure in the Big Data Environment (빅데이터 환경에서의 B-tree 구조 기반 링크정보 관리서버의 개발)

  • Park, Sungbum;Hwang, Jong Sung;Lee, Sangwon
    • Journal of Internet Computing and Services / v.16 no.1 / pp.75-82 / 2015
  • Major corporations and portals have implemented link servers that connect Content Management Systems (CMS) to the physical addresses of content in a database (DB) to support efficient content use in Web-based environments. In particular, a link server automatically connects the physical address of content in the DB to the content URL shown in a Web browser, and re-connects the two whenever either is modified. In recent years, the number of users of digital content on the Web has increased significantly with the advent of the Big Data environment, which has also increased the number of link-validity checks that a CMS and a link server must perform. If link validity is checked by the existing URL-based sequential method in petabyte- or even exabyte-scale environments, the identification rate of dead links decreases because the check's performance degrades; moreover, frequent link checks place a heavy workload on the DB. Hence, this study aims to provide a link server that recognizes URL deletions and additions by analyzing B-tree-based Information Identifier counts per interval over a large set of URLs. With this approach, dead-link checks can be performed faster and with lower load than with the existing method.
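The per-interval counting idea above can be sketched as follows: each URL hashes to an identifier, identifiers are counted per interval (standing in for the per-node counts a B-tree index maintains), and only intervals whose counts change between snapshots need a full validity pass. The bucket count and hash function are illustrative assumptions, not the paper's implementation:

```python
from hashlib import md5

def interval_counts(urls, buckets=16):
    # Map each URL to an integer identifier and count identifiers per
    # interval, as a B-tree index would maintain per-node counts.
    counts = [0] * buckets
    for u in urls:
        counts[int(md5(u.encode()).hexdigest(), 16) % buckets] += 1
    return counts

def changed_intervals(old_counts, new_counts):
    # Only intervals whose counts differ can contain added or deleted
    # links, so only those need a validity pass -- no sequential scan
    # over every URL in the collection.
    return [i for i, (a, b) in enumerate(zip(old_counts, new_counts)) if a != b]
```

The win is that an unchanged interval is skipped entirely, which is what keeps the check cheap as the URL set grows.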

Document Ranking of Web Document Retrieval Systems (웹 정보검색 시스템의 문서 순위 결정)

  • An, Dong-Un;Kang, In-Ho
    • Journal of Information Management / v.34 no.2 / pp.55-66 / 2003
  • The Web is rich with various sources of information: document contents, multimedia data, shopping materials, and so on. Because Web document collections are massive and heterogeneous, users want to find various types of target pages. We classify user queries into three categories according to the user's intent: content search, site search, and service search. In this paper, we show that different strategies are needed to meet each type of need, and we examine the properties of content information, link information, and URL information for each query class. In content search, content information alone gave good results, and combining link and URL information degraded performance. In site search, combining link and URL information improved performance.
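The per-class strategy the abstract reports (content evidence alone for content search; content plus link and URL evidence for site search) can be sketched as a class-dependent weighted score. The weight values below are illustrative assumptions, not figures from the paper:

```python
# Weights per query class: content search uses content evidence only,
# while site search also draws on link and URL evidence.
# These numbers are assumed for illustration.
WEIGHTS = {
    "content": {"content": 1.0, "link": 0.0, "url": 0.0},
    "site":    {"content": 0.4, "link": 0.3, "url": 0.3},
    "service": {"content": 0.5, "link": 0.25, "url": 0.25},
}

def rank_score(query_class: str, content: float, link: float, url: float) -> float:
    # Combine the three evidence scores according to the query class.
    w = WEIGHTS[query_class]
    return w["content"] * content + w["link"] * link + w["url"] * url
```

A page with weak content match but strong link/URL match thus ranks higher for a site query than for a content query, matching the paper's observation.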

Greedy Document Gathering Method Using Links and Clustering (Link와 Clustering을 이용한 적극적 문서 수집 기법)

  • 김원우;변영태
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.06a / pp.393-398 / 2001
  • We are developing an information agent that provides users with information related to a specific domain. The agent consists of an Agent Manager that handles user queries, a KB Manager that manages the knowledge base, and a Web Manager that retrieves domain-relevant documents from the Web. The Web Manager collects URLs to visit and performs relevance evaluation and indexing of the retrieved documents. It gathers URLs either through search engines or by following the links of visited documents, but this collection approach misses many relevant documents. To address this problem, we propose a greedy document-gathering method: starting from sites related to the target domain, documents are collected by following links and then clustered by document form, obtained as patterns of HTML tags, to find groups of relevant documents. Experimental results show that using links and clustering collects more relevant documents than the existing approach.
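The tag-pattern clustering step can be sketched as follows: each page is reduced to the sequence of its opening tag names, and pages sharing a pattern (i.e. generated from the same template) fall into one cluster. This is a hypothetical simplification of the paper's method:

```python
import re
from collections import defaultdict

def tag_pattern(html: str) -> tuple:
    # Reduce a page to the sequence of its opening tag names; pages
    # generated from the same template share this pattern.
    return tuple(re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9]*)", html))

def cluster_by_pattern(pages: dict) -> dict:
    # Group URLs whose pages share a tag pattern; a cluster containing
    # known relevant documents flags its other members as candidates.
    clusters = defaultdict(list)
    for url, html in pages.items():
        clusters[tag_pattern(html)].append(url)
    return clusters
```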

A Method to Block Spam Mail Automatically Through the Connection to Link URL (링크 유알엘 접속을 통한 스팸메일 자동 차단 방법에 관한 연구)

  • Jung, Nam-Cheol
    • Journal of Digital Contents Society / v.8 no.4 / pp.451-458 / 2007
  • In this paper, I develop a method that automatically blocks spam mail by connecting to its link URLs. The blocking system works as follows. First, the system extracts the URLs linked from an e-mail delivered by any server on the Internet. Next, it connects to the Web pages behind those URLs. Finally, it blocks the e-mail if any of those pages contains a keyword defined as an indicator of spam.
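The three steps above (extract link URLs, fetch the linked pages, block on spam keywords) can be sketched as follows. The keyword list is illustrative, and the page fetcher is injected so that a real deployment could pass, e.g., an `urllib.request`-based function:

```python
import re

# Illustrative keyword list; a real filter would maintain a curated set.
SPAM_KEYWORDS = {"viagra", "casino", "free money"}

def extract_links(mail_body: str):
    # Step 1: pull link URLs out of the mail body.
    return re.findall(r"https?://[^\s\"'<>]+", mail_body)

def is_spam(mail_body: str, fetch) -> bool:
    # Steps 2-3: fetch each linked page and block the mail if any page
    # contains a spam keyword. `fetch(url)` returns the page text.
    for url in extract_links(mail_body):
        page = fetch(url).lower()
        if any(kw in page for kw in SPAM_KEYWORDS):
            return True
    return False
```

Checking the linked page rather than the mail body itself is the point of the method: the mail can be innocuous while the landing page gives the spam away.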

A Web Link Architecture Based on XRI Providing Persistent Link (영속적 링크를 제공하는 XRI 기반의 웹 링크 구조)

  • Jung, Eui-Hyun;Kim, Weon;Park, Chan-Ki
    • Journal of the Korea Society of Computer and Information / v.13 no.5 / pp.247-253 / 2008
  • Web 2.0 and Semantic Web technology will merge into a next-generation Web that moves from a presentation-oriented Web to a data-centric Web. In the next-generation Web, semantic processing, the Web platform, and data fusion are the most important technology factors, and resolving link rot is one of the essential technologies for enabling them. Link rot causes not only simple annoyance to users but also more serious problems, including loss of data integrity, loss of knowledge, and breach of service. We suggest a new XRI-based persistent Web link architecture to cure link rot, a deep-seated problem of the Web. The proposed architecture is based on the XRI specification from OASIS and supports persistent links through URL rewriting. Since it is designed as a server-side technology, it is superior to existing work in interoperability, transparency, and adoptability. In addition, the architecture provides metadata identification for context-aware link resolution.
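The core of the persistent-link idea, a server-side rewrite from an XRI-style identifier to the current location, can be sketched minimally; the class and identifier names here are assumptions for illustration:

```python
class PersistentLinkResolver:
    # Maps XRI-style persistent identifiers to current URLs. When content
    # moves, only this mapping is updated; published links stay valid.
    def __init__(self):
        self.table = {}

    def register(self, xri: str, url: str):
        self.table[xri] = url

    def move(self, xri: str, new_url: str):
        # Content relocated: old persistent links keep resolving.
        self.table[xri] = new_url

    def resolve(self, xri: str) -> str:
        # Server-side rewrite: incoming persistent link -> current location.
        return self.table[xri]
```

Because resolution happens on the server, clients need no plug-in, which is what the abstract means by transparency and adoptability.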

An Implementation of the Speech-Library and Conversion Web-Services of the Web-Page for Speech-Recognition (음성인식을 위한 웹페이지 변환 웹서비스와 음성라이브러리 구현)

  • Oh, Jee-Young;Kim, Yoon-Joong
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.478-482 / 2006
  • This paper implements a speech library and Web services that convert Web pages for speech recognition. The system consists of a Web services consumer and Web services providers. The consumer has two libraries: the speech library and a proxy library. The speech library extracts speech data from the user's utterance and searches the link table for the URL mapped to that utterance. The proxy library calls the two Web services and receives the returned results. The providers consist of a parsing Web service and a speech-recognition Web service. The parsing Web service adds an ActiveX control and reconstructs the Web page for speech recognition; the speech recognizer is the Web service provider implemented in our previous study. Experiments confirm that the system reconstructs Web pages, creates the link table, and returns to the user the Web page found by searching the link table with the result of the speech-recognition Web service.

A Study on Focused Crawling of Web Document for Building of Ontology Instances (온톨로지 인스턴스 구축을 위한 주제 중심 웹문서 수집에 관한 연구)

  • Chang, Moon-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.86-93 / 2008
  • Constructing an ontology, which defines complicated semantic relations, requires precision and expert skill, and a well-defined ontology needs plenty of instance information for its classes in real applications. In this study, we develop a crawling algorithm that extracts the documents best fitting a topic from the Web's vast number of documents. The proposed algorithm gathers documents at high speed by extracting topic-specific links using URL patterns, and represents the topic fitness of link-block text as fuzzy sets, which improves the precision of the focused crawler.
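The fuzzy topic-fitness scoring of link-block text can be sketched as a membership degree in [0, 1]; the overlap measure and threshold below are illustrative stand-ins for the paper's fuzzy-set model:

```python
def topic_fitness(link_text: str, topic_terms: set) -> float:
    # Fuzzy membership in [0, 1]: fraction of topic terms that appear in
    # the anchor/block text around a link (a simple stand-in for the
    # paper's fuzzy-set representation).
    words = set(link_text.lower().split())
    return len(words & topic_terms) / len(topic_terms)

def select_links(links, topic_terms, threshold=0.5):
    # Follow only links whose surrounding text is sufficiently on-topic,
    # so the crawler stays focused instead of fanning out.
    return [url for url, text in links if topic_fitness(text, topic_terms) >= threshold]
```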

A Method of Link Extraction on Non-standard Links in Web Crawling (웹크롤러의 비표준 링크에 관한 링크 추출 방안)

  • Jeong, Jun-Yeong;Jang, Mun-Su;Gang, Seon-Mi
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.79-82 / 2008
  • A Web crawler collects documents by following the URL links within Web pages. Many Korean Web sites, however, connect their documents using link mechanisms that do not conform to Web standards. A typical Web crawler does not account for such non-standard link usage and therefore cannot collect these documents. Non-standard links became possible because JavaScript, added to HTML (a markup language tolerant of user error), permits irregular usage. In this paper, we survey about 230 Web sites to identify link-extraction problems that existing Web crawlers cannot handle, and propose an algorithm to collect such documents. We also propose an efficient document-collector model that, instead of a heavyweight JavaScript engine, uses a module providing only the necessary functionality.
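The abstract proposes extracting URLs from non-standard, JavaScript-driven links without running a full JavaScript engine. A minimal sketch handling two common idioms (`location.href = '...'` and `window.open('...')`); real sites would need more patterns than shown here:

```python
import re

# Patterns for common JavaScript-driven links; illustrative, not the
# paper's full pattern set derived from its ~230-site survey.
JS_LINK_PATTERNS = [
    re.compile(r"location\.href\s*=\s*['\"]([^'\"]+)['\"]"),
    re.compile(r"window\.open\(\s*['\"]([^'\"]+)['\"]"),
]

def extract_links(html: str):
    # Standard <a href> targets (skipping javascript: pseudo-URLs) plus
    # URLs buried in JavaScript handlers, via lightweight regexes instead
    # of a heavyweight JavaScript engine.
    links = re.findall(r"(?<![\w.])href\s*=\s*['\"](?!javascript:)([^'\"]+)['\"]", html)
    for pat in JS_LINK_PATTERNS:
        links.extend(pat.findall(html))
    return links
```

The lookbehind keeps `location.href` from being double-counted as an ordinary `href` attribute.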
