• Title/Summary/Keyword: Web-caching


Content-Aware Main Memory Web Caching (내용 기반의 메인 메모리 웹 캐쉬 할당 정책)

  • 염미령;노삼혁
    • Proceedings of the Korean Information Science Society Conference / 2001.10c / pp.244-246 / 2001
  • As Web usage grows, client requests are increasing rapidly while the processing capacity of Web servers is reaching its limit. To keep up with the rising request rate, a Web server must avoid unnecessary overhead. For a Web server that serves static documents, the largest source of overhead is disk access. To avoid unnecessary disk accesses, this paper presents a main memory caching policy that takes the service order of requests into account. Experiments on an event-driven Web server show that the policy improves server performance.
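
The abstract above describes arranging request service around a main-memory cache so that disk accesses are avoided. The following is a minimal Python sketch of that general idea, not the paper's actual policy; the class and function names are made up for illustration: memory-resident documents are served first, and requests that would need disk I/O are deferred.

```python
from collections import OrderedDict

class ContentAwareCache:
    """Illustrative LRU main-memory cache for static documents."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.docs = OrderedDict()            # path -> document bytes, in LRU order

    def get(self, path):
        doc = self.docs.get(path)
        if doc is not None:
            self.docs.move_to_end(path)      # refresh LRU position on a hit
        return doc

    def put(self, path, doc):
        self.docs[path] = doc
        self.used += len(doc)
        while self.used > self.capacity:     # evict least recently used documents
            _, evicted = self.docs.popitem(last=False)
            self.used -= len(evicted)

def order_requests(pending, cache):
    """Serve memory-resident documents first; requests needing disk come last."""
    hits = [p for p in pending if p in cache.docs]
    misses = [p for p in pending if p not in cache.docs]
    return hits + misses
```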


An Adaptive Web Caching Server Based On User Access Patterns (사용자 액세스 패턴을 이용한 웹 캐슁 서버)

  • 안수연;김명순;박병준;차호정
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.358-360 / 2001
  • This paper proposes and implements an adaptive Web caching server that analyzes users' Web document access patterns to decide which objects to cache and how to manage them. A data mining technique for finding frequently occurring sequences is applied to the caching server's log to identify Web objects that are accessed sequentially; when appropriate, these objects are prefetched into the cache, raising the hit rate and thus improving cache efficiency. Preliminary experiments show that the proposed caching server is considerably more efficient than a conventional caching server.
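
A minimal sketch of the prefetching idea described above, assuming a simple pair-based notion of "frequent sequence" rather than the paper's actual mining algorithm; all names are illustrative.

```python
from collections import Counter, defaultdict

def mine_frequent_pairs(access_log, min_support=3):
    """Scan the access log for objects that are frequently requested back to back."""
    pair_counts = Counter(zip(access_log, access_log[1:]))
    successors = defaultdict(list)
    for (a, b), count in pair_counts.items():
        if count >= min_support:
            successors[a].append(b)
    return successors            # object -> objects often requested right after it

def on_request(obj, cache, successors, fetch):
    """Serve a request and speculatively prefetch its frequent successors."""
    if obj not in cache:
        cache[obj] = fetch(obj)
    for nxt in successors.get(obj, []):
        if nxt not in cache:
            cache[nxt] = fetch(nxt)          # prefetch into the cache
    return cache[obj]
```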

Cooperative Caching of Web Server Cluster for Improving Cache Hit Rate (캐시 적중률 향상을 위한 웹 서버 클러스터의 협력적 캐싱)

  • 김희규;최창열;박기진;김성수
    • Proceedings of the Korean Information Science Society Conference / 2003.04d / pp.563-565 / 2003
  • Recent research on clusters has concentrated on load distribution and cache policies for content-based clusters. In this paper, we improve the existing DFR scheme to raise the cache hit rate of hint-based cooperative caching in a cluster environment that provides highly available and scalable Web services. We present a memory replacement method that selectively evicts primary and secondary copies according to their service access probability; a comparison and analysis against the DFR scheme shows that it achieves a lower disk access rate than DFR.
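
The DFR improvement is only summarized above, so the following is a hypothetical sketch of the selective-eviction idea (distinguishing primary from secondary copies by access probability), not the paper's algorithm; the entry fields are assumptions.

```python
def choose_victim(cached_objects):
    """Prefer to evict secondary (replica) copies of rarely accessed objects;
    evict a primary copy only when no replica victim remains.

    Each entry: {'id': ..., 'size': ..., 'access_prob': ..., 'is_primary': bool}
    """
    # Replicas sort before primaries (False < True), then ascending access probability.
    return min(cached_objects, key=lambda obj: (obj['is_primary'], obj['access_prob']))

def evict_until_fit(cached_objects, used, capacity, incoming_size):
    """Evict victims until the incoming object fits in memory."""
    while used + incoming_size > capacity and cached_objects:
        victim = choose_victim(cached_objects)
        cached_objects.remove(victim)
        used -= victim['size']
    return used
```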


Optimal Number and Placement of Web Proxies in the Internet : The Linear & Tree Topology (인터넷으로 웹 프락시의 최적 개수와 위치 : 선형 구조와 트리구조)

  • Choi, Jung-Im;Chung, Haeng-Eun;Lee, Sang-Kyu;Moon, Bong-Hee
    • Journal of KIISE:Information Networking / v.28 no.2 / pp.229-235 / 2001
  • With the explosive popularity of the World Wide Web, poor network performance often forces web clients to wait a long time for the web server's response. To resolve this problem, web caching (proxy) has been considered the most efficient technique for a web server to handle this load. The placement of web proxies is critical to overall performance, and Li et al. showed the optimal placement of proxies for a web server in the Internet with the linear and tree topology when the number of proxies, M, is given [4, 5]. They focused on minimizing the overall access time. However, it is also worthwhile for the target web server to minimize the total number of proxies while each proxy server guarantees not to exceed a certain response time for each request from its clients. In this paper, we consider the problem of finding the optimal number and placement of web proxies with the linear and tree topology under a given threshold on delay cost.
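
For the linear topology, a simple greedy pass gives an idea of how few proxies can satisfy a delay threshold. This sketch assumes a simplified cost model (each node must reach the server or the nearest upstream proxy within the threshold) and is not the paper's formulation.

```python
def place_proxies_linear(link_delays, threshold):
    """Nodes 1..n hang off the server (node 0) along a path; link_delays[i] is
    the delay of the link between node i and node i+1. Place as few proxies as
    possible so every node reaches the server or an upstream proxy within
    `threshold`. Simplified model for illustration only."""
    proxies = []
    position = 0.0        # cumulative delay of the current node from the server
    last = 0.0            # position of the last placed proxy (or the server)
    for node, delay in enumerate(link_delays, start=1):
        position += delay
        if position - last > threshold:
            proxies.append(node)       # place a proxy here; it serves the nodes below
            last = position
    return proxies

# Example: four links of delay 3 with a threshold of 5. Node 2 is 6 away from the
# server, so a proxy goes there; node 4 is then 6 away from that proxy, so another
# proxy goes at node 4.
print(place_proxies_linear([3, 3, 3, 3], 5))   # -> [2, 4]
```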


A Caching Mechanism for Knowledge Maps (지식 맵을 위한 캐슁 기법)

  • 정준원;민경섭;김형주
    • Journal of KIISE:Computing Practices and Letters / v.10 no.3 / pp.282-291 / 2004
  • There has been much research on TopicMap and RDF, which are approaches to handling data efficiently through metadata. However, little of this work has gone beyond presentation and description toward actual services and implementations. In this paper, we propose a caching mechanism that supports efficient access to knowledge maps and a practical knowledge map service, together with an implementation of a TopicMap system. First, we propose a method for navigating a knowledge map efficiently that combines the advantages of earlier methods. Then, to transmit TopicMaps efficiently, we propose a caching mechanism for knowledge maps. The idea is to let users navigate the knowledge map efficiently from a human viewpoint rather than an application viewpoint; accordingly, the mechanism does not cache topics by logical or physical locality but clusters them by the information and characteristic values of the TopicMap. Lastly, we propose a replacement mechanism that exploits the graph structure of the TopicMap to improve transmission efficiency.
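
A rough sketch of caching by TopicMap structure rather than storage locality, assuming associations form a plain adjacency map; this illustrates the clustering idea, not the paper's mechanism.

```python
from collections import deque

def cluster_topics(associations, seed, max_size=8):
    """Collect the topics most closely associated with `seed` (a breadth-first
    neighbourhood in the TopicMap graph) and treat them as one cache unit.
    `associations` maps a topic id to the ids of its associated topics."""
    cluster, frontier = {seed}, deque([seed])
    while frontier and len(cluster) < max_size:
        topic = frontier.popleft()
        for neighbour in associations.get(topic, []):
            if neighbour not in cluster:
                cluster.add(neighbour)
                frontier.append(neighbour)
                if len(cluster) >= max_size:
                    break
    return cluster

def fetch_with_cluster_cache(topic, cache, associations, load_topic):
    """On a miss, load and cache the whole cluster around the requested topic,
    so that subsequent human-style navigation stays inside the cache."""
    if topic not in cache:
        for t in cluster_topics(associations, topic):
            cache[t] = load_topic(t)
    return cache[topic]
```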

SBR-k (Sized-based replacement-k): File Replacement in Data Grid Environments (SBR-k(Sized-based replacement-k) : 데이터 그리드 환경에서 파일 교체)

  • Park, Hong-Jin
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.57-64 / 2008
  • Data grid computing provides geographically distributed storage resources for solving computational problems with large-scale data. Unlike cache replacement in virtual memory or web caching, file replacement for data grids is an important problem in its own right because the files are very large. Traditional file replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) either have to predict future requests or need additional resources to carry out replacement. To solve these problems, this paper proposes SBR-k (Sized-based replacement-k), which replaces files based on file size. The proposed policy considers file size so as to reduce the number of files that must be evicted to accommodate a requested file, rather than forecasting an uncertain future. Simulation results show that the hit ratio was similar to that of traditional policies when the cache size was small, but the proposed policy was superior when the cache size was large.
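
A minimal sketch of a size-based replacement step in the spirit of SBR-k (the exact role of k is not described in the abstract, so it is omitted here); the largest-first heuristic and all names are illustrative, not the paper's exact algorithm.

```python
def size_based_evict(cache, incoming_size, capacity):
    """Free space for the incoming file while evicting as few cached files as
    possible, by removing the largest candidates first.

    `cache` maps file name -> file size (same unit as `capacity`)."""
    used = sum(cache.values())
    victims = []
    # Largest files first: each eviction frees the most space, so fewer files go.
    for name, size in sorted(cache.items(), key=lambda kv: kv[1], reverse=True):
        if used + incoming_size <= capacity:
            break
        victims.append(name)
        used -= size
    for name in victims:
        del cache[name]
    return victims

# Example: a 10 GB cache holding 4, 3, and 2 GB files needs room for a 5 GB file;
# evicting only the 4 GB file is enough.
cache = {"a": 4, "b": 3, "c": 2}
print(size_based_evict(cache, incoming_size=5, capacity=10))   # -> ['a']
```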

Web-Cached Multicast Technique for on-Demand Video Distribution (주문형 비디오 분배를 위한 웹-캐슁 멀티캐스트 전송 기법)

  • Kim, Back-Hyun;Hwang, Tae-June;Kim, Ik-Soo
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.775-782 / 2005
  • In this paper, we propose a multicast technique that reduces the required network bandwidth by a factor of n, where n is the number of HENs (Head-End Nodes) requesting the same video, by merging adjacent multicasts. Allowing new clients to join an existing multicast immediately through patching improves the efficiency of the multicast and offers service without any initial latency. A client may have to download data through two channels simultaneously, one for the multicast and the other for patching. The more frequently a video is requested, the higher the probability that it is cached among the HENs, so requests for cached video data can be served by the HENs. A multicast from the server is generated when the playback time exceeds the amount of cached video data. Since the multicast interval can be expanded dynamically according to the popularity of a video, the server's workload and the network bandwidth can be reduced. We perform simulations to compare its performance with that of conventional multicast, and the results confirm that the proposed multicast technique offers substantially better performance.
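
A small sketch of the patching decision described above, under the simplifying assumption of a single fixed patch window; the data structures and names are illustrative.

```python
def admit_client(video, now, active_multicasts, patch_window):
    """A new client joins an ongoing multicast of the video if one started
    recently enough, and separately fetches the prefix it missed (the patch);
    otherwise a new multicast is started.
    `active_multicasts` maps video id -> start time (seconds)."""
    start = active_multicasts.get(video)
    if start is not None and now - start <= patch_window:
        missed_prefix = now - start          # seconds of video already multicast
        return {"action": "join", "patch_seconds": missed_prefix}
    active_multicasts[video] = now           # start a fresh multicast stream
    return {"action": "new_multicast", "patch_seconds": 0}

# Example: a client arriving 40 s after the last multicast of "movie" started,
# with a 60 s patch window, joins it and patches the first 40 s.
streams = {"movie": 100.0}
print(admit_client("movie", 140.0, streams, patch_window=60.0))
# -> {'action': 'join', 'patch_seconds': 40.0}
```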

SPARQL Query Processing in Distributed In-Memory System (분산 메모리 시스템에서의 SPARQL 질의 처리)

  • Jagvaral, Batselem;Lee, Wangon;Kim, Kang-Pil;Park, Young-Tack
    • Journal of KIISE / v.42 no.9 / pp.1109-1116 / 2015
  • In this paper, we propose a query processing approach that uses Spark functional programming and a distributed memory system to address the computational overhead of SPARQL. In the semantic web, RDF ontology data is produced at large scale, and the main challenge for the semantic web is to query and manipulate such a large ontology with high throughput. Most existing studies on SPARQL have focused on deploying the Hadoop MapReduce framework, and although approaches based on Hadoop MapReduce have shown promising results, they achieve low throughput due to the underlying distributed file processes. Therefore, in order to speed up query processing, we suggest query-processing methods based on memory caching in a distributed memory system. Our approach also integrates a clause unification method for propagation between clauses, which exploits Spark's join, map and filter methods along with caching. In our experiments, we achieved a high level of performance relative to other approaches. In particular, our performance was nearly the same as that of Sempala, which has been considered the fastest query processing system.
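
A much-simplified PySpark sketch of the caching-plus-clause-join idea (not the paper's system): two triple-pattern clauses are filtered, keyed on their shared variable, joined, and the reusable intermediate RDD is cached in memory. The sample data and variable names are made up for illustration.

```python
# Answers the SPARQL-style pattern:  ?x :worksFor ?y . ?y :locatedIn ?z
from pyspark import SparkContext

sc = SparkContext("local[2]", "sparql-sketch")

triples = sc.parallelize([
    ("alice", "worksFor", "acme"),
    ("bob",   "worksFor", "acme"),
    ("acme",  "locatedIn", "seoul"),
])

# Clause 1: ?x worksFor ?y  ->  keyed by the shared variable ?y
works_for = (triples.filter(lambda t: t[1] == "worksFor")
                    .map(lambda t: (t[2], t[0]))          # (?y, ?x)
                    .cache())                             # keep in memory for reuse

# Clause 2: ?y locatedIn ?z  ->  keyed by ?y
located_in = (triples.filter(lambda t: t[1] == "locatedIn")
                     .map(lambda t: (t[0], t[2])))        # (?y, ?z)

# Unify the two clauses on ?y and emit (?x, ?y, ?z) bindings.
answers = works_for.join(located_in).map(lambda kv: (kv[1][0], kv[0], kv[1][1]))
print(answers.collect())   # e.g. [('alice', 'acme', 'seoul'), ('bob', 'acme', 'seoul')]

sc.stop()
```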

An Empirical Study on the Construction Strategy of Web-caching Network (효과적인 웹-캐싱 네트웍 구축전략에 관한 실증 연구)

  • 이주헌;조병룡
    • The Journal of Information Technology and Database / v.8 no.2 / pp.41-60 / 2001
  • Despite the growth in Internet users, the demand for multimedia and large data files, and the resulting explosive growth in data traffic, there has been a lack of investment in the Middle Mile, the interconnection of various networks, producing a worsening bottleneck effect. One strategy for overcoming such network bottlenecks is the Content Delivery Network (CDN). A CDN does not achieve efficient delivery of large files through physical improvement or expansion of network capacity, but by delivering the large contents that cause the bottlenecks from distributed servers. Since it is impracticable to physically expand network capacity to accommodate the growth in Internet traffic, a CDN, by storing CPs' contents at cache servers deployed in major ISPs' networks, can deliver requested contents to the requesting Web clients without data loss or long latency.
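
A toy sketch of the CDN delivery idea described above: direct each client to the lowest-latency cache server in its region and fall back to the origin server otherwise. The routing rule, data layout, and all names are hypothetical, not taken from the study.

```python
def resolve_edge_server(client_region, edge_caches, origin):
    """Pick the nearest (lowest-latency) edge cache in the client's region,
    or return the origin when no edge cache is available.
    `edge_caches` maps server address -> {'region': ..., 'latency_ms': ...}."""
    candidates = [(info["latency_ms"], addr)
                  for addr, info in edge_caches.items()
                  if info["region"] == client_region]
    if candidates:
        return min(candidates)[1]          # nearest edge cache in the region
    return origin                          # no edge cache: serve from origin

edges = {"cache1.isp-a.net": {"region": "KR", "latency_ms": 12},
         "cache2.isp-b.net": {"region": "KR", "latency_ms": 30}}
print(resolve_edge_server("KR", edges, "origin.example.com"))  # -> cache1.isp-a.net
```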


A Study on a Design of Efficient Electronic Commerce System

  • Ko Il-Seok;Shin Seung-Soo;Choi Seung-Kwon;Cho Yong-Hwan
    • Proceedings of the Korea Contents Association Conference / 2004.05a / pp.234-240 / 2004
  • As e-commerce users increase explosively, the load on the e-commerce system and the traffic on the network rise sharply, which delays service for clients' requests. The natural consequences include decreasing customer satisfaction and a weakening of the business's competitive position in the market. Therefore, the e-commerce system needs to be studied with due consideration of operational efficiency and response speed. This study presents a design of an e-commerce system with a hierarchical structure based on a local server that has the caching capability needed to distribute the system load, proposes a split web-cache algorithm for the local web server, and analyzes the resulting performance through appropriate experiments.
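
Since the abstract does not detail the split web-cache algorithm, the sketch below only illustrates the hierarchical idea it builds on: a local server caches responses and forwards misses to the central e-commerce server. All names and the TTL policy are assumptions, not the paper's design.

```python
import time

class TwoLevelCache:
    """A local server answers requests from its own cache and forwards misses
    to the central server, caching the response on the way back."""

    def __init__(self, fetch_from_origin, ttl_seconds=300):
        self.fetch_from_origin = fetch_from_origin   # callable: url -> content
        self.ttl = ttl_seconds
        self.store = {}                              # url -> (content, cached_at)

    def get(self, url):
        entry = self.store.get(url)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                          # served locally, no origin load
        content = self.fetch_from_origin(url)        # miss: forward to central server
        self.store[url] = (content, time.time())
        return content

# Example: the second request for the same page never reaches the origin server.
local = TwoLevelCache(lambda url: f"<page for {url}>")
local.get("/catalog/item/42")
print(local.get("/catalog/item/42"))   # -> <page for /catalog/item/42>
```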
