• Title/Summary/Keyword: Proxy caching

Search Results: 87

An Improved Proxy Caching Management for Multimedia (멀티미디어를 위한 개선된 프락시 캐싱 관리 기법)

  • Hong, Byung-Chun;Cho, Kyung-San
    • Proceedings of the Korea Information Processing Society Conference / 2003.05a / pp.427-430 / 2003
  • Multimedia data, and video data in particular, is stored as very large files, and its user access patterns differ from those of conventional web objects such as images or text, so existing web caching policies are not well suited to a multimedia proxy server. This study proposes a fixed-size segment scheme, which applies file-level cache management to segment-level management in order to raise the cache hit ratio of a multimedia proxy server while reducing the overhead of segment management, together with an improved cache replacement scheme. Simulation results analyzing the performance improvement of the proposed schemes are also presented. (An illustrative sketch of segment-level caching follows this entry.)

  • PDF
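
The fixed-size segment idea described above can be made concrete with a short sketch. This is a minimal illustration under assumed parameters (segment size, LRU replacement), not the paper's exact management or eviction policy: videos are split into equal-sized segments, and each segment is cached and evicted independently, so a large file never has to be admitted or dropped as a whole.

```python
from collections import OrderedDict

SEGMENT_SIZE = 1 * 1024 * 1024  # 1 MiB per segment (assumed fixed size)

class SegmentCache:
    """Segment-granularity proxy cache with LRU replacement (illustrative)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.segments = OrderedDict()  # (video_id, seg_no) -> True

    def request(self, video_id, byte_offset):
        seg_no = byte_offset // SEGMENT_SIZE
        key = (video_id, seg_no)
        if key in self.segments:                 # segment hit
            self.segments.move_to_end(key)
            return True
        # miss: fetch the segment from the origin server and cache it
        while self.used + SEGMENT_SIZE > self.capacity and self.segments:
            self.segments.popitem(last=False)    # evict least recently used segment
            self.used -= SEGMENT_SIZE
        self.segments[key] = True
        self.used += SEGMENT_SIZE
        return False

cache = SegmentCache(capacity_bytes=8 * SEGMENT_SIZE)
hits = sum(cache.request("movie-1", off) for off in range(0, 10 * SEGMENT_SIZE, SEGMENT_SIZE))
print(f"hits on first pass: {hits}")
```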

Proxy Caching using Access Patterns (액세스 패턴을 이용한 프락시 캐싱)

  • Lim, Jae-Hyun;Lee, Jun-Yeon
    • Proceedings of the Korea Information Processing Society Conference / 2000.04a / pp.586-590 / 2000
  • Web proxy traffic characteristics help identify the parameters that influence caching, capacity planning, and simulation studies. In this paper, we trace the traffic observed at a proxy server, analyze user access patterns, and build a caching model based on those patterns. Evaluation of the hit ratio and the byte-weighted hit ratio shows that the resulting model improves the usefulness of caching. (A small example of both metrics follows this entry.)

  • PDF
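
The two metrics evaluated above, the hit ratio and the byte-weighted hit ratio, can be computed directly from a request trace. A minimal example, with an assumed trace format of (url, size in bytes, hit flag):

```python
# Hit ratio vs. byte-weighted hit ratio over a request trace (illustrative trace).
trace = [
    ("/index.html",    12_000, True),
    ("/logo.png",      45_000, True),
    ("/clip.mp4",   8_000_000, False),
    ("/index.html",    12_000, True),
]

hit_ratio = sum(hit for _, _, hit in trace) / len(trace)
byte_hit_ratio = (
    sum(size for _, size, hit in trace if hit) /
    sum(size for _, size, _ in trace)
)
print(f"hit ratio: {hit_ratio:.2f}, byte-weighted hit ratio: {byte_hit_ratio:.2f}")
```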

Proxy Caching Scheme Based on the User Access Pattern Analysis for Series Video Data (시리즈 비디오 데이터의 접근 패턴에 기반한 프록시 캐슁 기법)

  • Hong, Hyeon-Ok;Park, Seong-Ho;Chung, Ki-Dong
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1066-1077 / 2004
  • The dramatic increase in the number of Internet users has created demand for high-quality delivery of continuous media content on the web. While there are many reasons to create rich media content, delivering such high-bandwidth content over the Internet causes problems such as server overload, network congestion, and client-perceived latency. To address these problems, this paper presents two network caching schemes (PPC and PPCwP) that consider the characteristics of continuous media objects and user access patterns. The PPC scheme periodically calculates the popularity of objects from the amount of playback and determines the optimal size of the initial fraction of a continuous media object to cache in proportion to that popularity. The PPCwP scheme calculates the expected popularity using series information and prefetches the expected initial fraction of newly created continuous media objects. Under PPCwP, the initial client-perceived latency and the data transferred from the remote server are reduced, and the limited cache storage space is used efficiently. Trace-driven simulations using log files from iMBC were performed to evaluate the presented caching schemes; in these simulations, PPC and PPCwP outperform LRU and LFU in terms of BHR and DSR. (A sketch of popularity-proportional prefix caching follows this entry.)

  • PDF
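
The popularity-proportional prefix caching behind PPC can be sketched briefly. The function name, inputs, and the proportional rule below are illustrative assumptions; PPC additionally recomputes popularity each period and caps the prefix at the object's own length.

```python
def prefix_lengths(playback_bytes, cache_capacity):
    """Assign each video a cached prefix proportional to its measured popularity.

    playback_bytes: {video_id: bytes played back during the last period}
    Returns {video_id: prefix bytes to cache}. Illustrative only; a real scheme
    would also bound the prefix by the object's total size.
    """
    total = sum(playback_bytes.values()) or 1
    return {
        vid: int(cache_capacity * played / total)
        for vid, played in playback_bytes.items()
    }

popularity = {"ep01.mp4": 9_000_000, "ep02.mp4": 3_000_000, "news.mp4": 1_000_000}
print(prefix_lengths(popularity, cache_capacity=50_000_000))
```

PPCwP would feed expected rather than measured popularity, derived from series information, into the same allocation, so prefixes of newly created episodes can be prefetched before any request arrives.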

A Cache Management Technique for an Efficient Video Proxy Server (효율적인 비디오 프록시 서버를 위한 캐시 관리 방법)

  • Lee, Jun-Pyo;Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.82-88 / 2009
  • A video proxy server located near clients can store frequently requested video data in its storage space to significantly reduce initial latency and network traffic. However, because the storage space of a video proxy server is limited, an appropriate video selection method is needed to store only the videos that users request frequently. We therefore present a virtual caching technique that stores videos efficiently in the video proxy server. For this purpose, we employ a virtual memory area in the video proxy server: when a video is requested by a user, it is first loaded into virtual memory and then delivered to the user. A video loaded in virtual memory is later either deleted or moved into the storage space of the video proxy server, depending on how it is requested. In addition, the virtual memory is divided into per-segment areas in order to store segments efficiently and avoid fragmentation. Simulation results show that the proposed method performs better than other methods in terms of block hit rate and the number of block deletions.
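
A minimal sketch of the staging idea above, with an assumed promotion threshold and without the per-segment bookkeeping: a requested video is first held in a memory staging area and only admitted to the proxy's disk storage once it proves popular.

```python
PROMOTE_THRESHOLD = 2   # assumed: promote after this many requests while staged

class StagedVideoCache:
    """Two-level admission: videos are staged in memory first and only
    promoted to the disk cache if they are requested again (illustrative)."""

    def __init__(self):
        self.staging = {}   # video_id -> request count while staged
        self.disk = set()   # videos admitted to persistent proxy storage

    def request(self, video_id):
        if video_id in self.disk:
            return "disk hit"
        count = self.staging.get(video_id, 0) + 1
        if count >= PROMOTE_THRESHOLD:
            self.disk.add(video_id)            # promote to disk cache
            self.staging.pop(video_id, None)
            return "promoted"
        self.staging[video_id] = count         # keep staged in memory
        return "staged"

cache = StagedVideoCache()
print([cache.request(v) for v in ["a", "b", "a", "a"]])
```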

A Performance Improvement Scheme for a Wireless Internet Proxy Server Cluster (무선 인터넷 프록시 서버 클러스터 성능 개선)

  • Kwak, Hu-Keun;Chung, Kyu-Sik
    • Journal of KIISE:Information Networking / v.32 no.3 / pp.415-426 / 2005
  • Wireless Internet, which has become a hot social issue, has limitations that distinguish it from the wired Internet: low bandwidth, frequent disconnection, low computing power and small screens in user terminals, and open technical issues in user mobility, network protocols, security, and so on. A wireless Internet server should also be scalable enough to handle the large-scale traffic generated by rapidly growing numbers of users. In this paper, wireless Internet proxy server clusters are used because their caching, distillation, and clustering functions help overcome these limitations. TranSend was proposed as a clustering-based wireless Internet proxy server, but it has two disadvantages: (1) it cannot be scaled in a systematic way and (2) its structure is complex because of an inefficient communication structure among modules. In our previous work, we proposed the All-in-one structure, which can be scaled systematically, but it also has disadvantages: (1) data cannot be shared among cache servers and (2) its communication structure among modules remains complex. In this paper, we propose an improved scheme with an efficient communication structure among modules that allows data to be shared among cache servers. Experiments using 16 PCs show 54.86% and 4.70% performance improvements over TranSend and All-in-one, respectively. Because data is shared among cache servers, the proposed scheme keeps the total cache memory at a fixed size regardless of the number of cache servers, whereas in All-in-one the total cache memory grows in proportion to the number of cache servers, since each cache server must keep all cache data.
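
The last point, that sharing data among cache servers keeps the total cache memory fixed, can be illustrated with plain hash partitioning; the hash function and server names below are assumptions, not the paper's actual communication structure. Each URL is owned by exactly one cache server, so adding servers splits the working set instead of copying it.

```python
import hashlib

def owner(url, servers):
    """Map a URL to exactly one cache server by hashing (illustrative sharing)."""
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

servers = [f"cache-{i}" for i in range(4)]
urls = [f"/page/{i}" for i in range(8)]

# Shared (partitioned) cache: each object is stored once in the whole cluster.
shared_copies = len(urls)
# Fully replicated cache: every server keeps every object, so copies grow with servers.
replicated_copies = len(urls) * len(servers)
print({u: owner(u, servers) for u in urls})
print(f"objects stored, shared: {shared_copies}, replicated: {replicated_copies}")
```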

A Scalable Cache Group Configuration Policy using Role-Partitioned Cache (캐쉬의 역할 구분을 이용한 확장성이 있는 캐쉬 그룹 구성 정책)

  • 현진일;민준식
    • The Journal of the Korea Contents Association / v.3 no.3 / pp.63-73 / 2003
  • Today, with the exponential growth of the Internet, file caching has become important because it can reduce the server load, the volume of network traffic, and response latency. In practice, caching has reduced traffic within a network, which means that file caching can improve the Internet environment at a fraction of the cost of link upgrades. In this paper, we address a dynamic cache group configuration policy to solve the scalability problem. Simulation results show that a cache group using the proposed policy reduces response latency, which indicates that our cache group configuration is more scalable than a static cache configuration.

  • PDF

Scheduling based on Cache Utilization in a Cache Server Cluster for Wireless Internet (무선 인터넷을 위한 캐시 서버 클러스터 환경에서 캐시 이용률 기반의 스케줄링)

  • Kwak, Hu-Keun;Chung, Kyu-Sik
    • Journal of KIISE:Computer Systems and Theory / v.34 no.9 / pp.435-444 / 2007
  • Caching web pages is an important part of web infrastructures, and the effects of caching are even more pronounced for wireless infrastructures because of their limited bandwidth. Medium- to large-scale infrastructures deploy a cluster of servers to address the scalability and hot-spot problems inherent in caching. In this paper, we present a scheduling scheme based on cache utilization for a wireless Internet proxy server cluster. The proposed method uses cache utilization to distribute client requests evenly across a cluster of cache servers and to resolve the hot-spot problem. We implemented the approach and performed experiments using publicly available traces. Experimental results on a cluster of 16 cache servers show that the proposed hashing method gives a 45% to 114% performance improvement over other widely used methods while addressing the hot-spot problem.
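
A minimal sketch of utilization-aware dispatching, with an assumed utilization threshold and hash function (the paper's actual scheduling rule may differ): requests normally go to their hash-designated cache server and are diverted to the least-utilized server when that server looks like a hot spot.

```python
import hashlib

UTILIZATION_LIMIT = 0.85   # assumed threshold for declaring a hot spot

def dispatch(url, utilization):
    """Pick a cache server for a request.

    utilization: {server: current cache utilization in [0, 1]}
    Requests normally go to their hash-designated server; if that server is
    over the limit, fall back to the least-utilized server (illustrative).
    """
    servers = sorted(utilization)
    home = servers[int(hashlib.md5(url.encode()).hexdigest(), 16) % len(servers)]
    if utilization[home] <= UTILIZATION_LIMIT:
        return home
    return min(servers, key=lambda s: utilization[s])

load = {"cache-0": 0.40, "cache-1": 0.95, "cache-2": 0.60}
print(dispatch("/hot/item.html", load))
```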

Hot Spot Prediction Method for Improving the Performance of Consistent Hashing Shared Web Caching System (컨시스턴스 해슁을 이용한 분산 웹 캐싱 시스템의 성능 향상을 위한 Hot Spot 예측 방법)

  • 정성칠;정길도
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5B / pp.498-507 / 2004
  • Fast and precise service for user requests is of primary importance on the World Wide Web, but it has become difficult to provide because of the recent rapid increase in the number of Internet users. Shared Web Caching (SWC) is one method of addressing this problem. The performance of SWC depends heavily on the hit rate, which is in turn affected by memory size, server processing speed, load balancing, and so on. Conventional load balancing is usually based on the state history of the system, but predicting the state of the system can also be used for load balancing and can further improve the hit rate. In this study, a Hot Spot Prediction Method (HSPM) is suggested to improve proxy throughput: the hot spot, i.e., the item requested most frequently, is predicted in advance. The results show that the suggested method is better than consistent hashing in terms of both load balancing and hit rate.
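
A minimal consistent-hashing sketch showing how a predicted hot spot could be spread over several nodes; how the hot spot is predicted, and the replication count, are assumptions for illustration only, not the HSPM scheme itself.

```python
import bisect
import hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.keys = [k for k, _ in self.ring]

    def lookup(self, item, replicas=1):
        """Return the node(s) responsible for item; a predicted hot spot asks
        for several successive nodes so its load is spread (illustrative)."""
        idx = bisect.bisect(self.keys, h(item)) % len(self.ring)
        return [self.ring[(idx + i) % len(self.ring)][1] for i in range(replicas)]

ring = ConsistentHashRing([f"proxy-{i}" for i in range(5)])
print(ring.lookup("/ordinary.html"))               # one owner
print(ring.lookup("/predicted-hot-spot.html", 3))  # spread over three owners
```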

Reducing Outgoing Traffic of Proxy Cache by Using Client-Cluster

  • Kim Kyung-Baek;Park Dae-Yeon
    • Journal of Communications and Networks / v.8 no.3 / pp.330-338 / 2006
  • Many web cache systems and policies concerning them have been proposed. These studies, however, consider large objects less useful than small objects in terms of performance and evict them as soon as possible. Even if this approach increases the hit rate, the byte hit rate decreases, and the connections over congested links to outside networks waste more bandwidth obtaining large objects. This paper puts forth a client-cluster approach for improving the web cache system. The client-cluster is composed of the residual resources of clients and uses them as exclusive storage for large objects. The proposed system achieves not only a high hit rate but also a high byte hit rate, while reducing outgoing traffic. A distributed hash table (DHT)-based peer-to-peer lookup protocol is used to manage the client-cluster; thanks to the natural characteristics of this protocol, the proposed system is self-organizing, fault-tolerant, well balanced, and scalable. Additionally, the large objects are managed with an index-based allocation method, which balances the loads of all clients well. The performance of the cache system is examined via a trace-driven simulation, and an effective enhancement of proxy cache performance is demonstrated.
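
A minimal sketch of the large-object delegation idea: small objects stay in the proxy's own cache, while a large object is placed on a client chosen by hashing, in the spirit of a DHT lookup. The size cutoff and all names are assumptions; the actual system uses a full DHT protocol and index-based block allocation.

```python
import hashlib

LARGE_OBJECT_THRESHOLD = 1_000_000   # assumed cutoff (bytes) for "large" objects

class ProxyWithClientCluster:
    """Small objects stay in the proxy cache; large objects are placed on a
    client chosen by hashing, mimicking a DHT lookup (illustrative)."""

    def __init__(self, clients):
        self.clients = sorted(clients)
        self.local = {}          # proxy's own cache: url -> size

    def store(self, url, size):
        if size < LARGE_OBJECT_THRESHOLD:
            self.local[url] = size
            return "proxy"
        digest = int(hashlib.sha1(url.encode()).hexdigest(), 16)
        return self.clients[digest % len(self.clients)]

proxy = ProxyWithClientCluster([f"client-{i}" for i in range(6)])
print(proxy.store("/small.css", 20_000))
print(proxy.store("/video/big.mp4", 50_000_000))
```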

Optimal Number and Placement of Web Proxies in the Internet : The Linear & Tree Topology (인터넷으로 웹 프락시의 최적 개수와 위치 : 선형 구조와 트리구조)

  • Choi, Jung-Im;Chung, Haeng-Eun;Lee, Sang-Kyu;Moon, Bong-Hee
    • Journal of KIISE:Information Networking / v.28 no.2 / pp.229-235 / 2001
  • With the explosive popularity of the World Wide Web, the low performance of the network often forces web clients to wait a long time for the web server's response. Web caching (proxying) has been considered the most efficient technique for a web server to handle this problem. The placement of web proxies is critical to overall performance, and Li et al. showed the optimal placement of proxies for a web server in the Internet with linear and tree topologies when the number of proxies, M, is given [4, 5]. They focused on minimizing the overall access time. However, it is also worthwhile for the target web server to minimize the total number of proxies while each proxy guarantees that the response time for requests from its clients does not exceed a certain bound. In this paper, we consider the problem of finding the optimal number and placement of web proxies in linear and tree topologies under a given delay-time threshold. (A sketch of the linear-topology case as a covering problem follows this entry.)

  • PDF
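
For the linear topology, minimizing the number of proxies while every client stays within a delay bound can be viewed as an interval-covering problem; the classic greedy below is shown only to make that problem statement concrete. A simple per-distance delay model is assumed, and the tree case requires a different algorithm.

```python
def place_proxies(client_positions, delay_bound, delay_per_unit=1.0):
    """Greedy minimum-proxy placement on a linear topology (illustrative).

    A proxy placed at position p can serve any client within
    delay_bound / delay_per_unit distance of p. Returns proxy positions.
    """
    reach = delay_bound / delay_per_unit
    proxies = []
    for pos in sorted(client_positions):
        if not proxies or pos - proxies[-1] > reach:
            proxies.append(pos + reach)   # push the proxy as far ahead as allowed
        # else: this client is already covered by the last placed proxy
    return proxies

print(place_proxies([1, 2, 5, 9, 10, 14], delay_bound=2))
```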