• Title/Summary/Keyword: Cache Hit Ratio

107 search results

Hashing Method with Dynamic Server Information for Load Balancing on a Scalable Cluster of Cache Servers (확장성 있는 캐시 서버 클러스터에서의 부하 분산을 위한 동적 서버 정보 기반의 해싱 기법)

  • Kwak, Hu-Keun;Chung, Kyu-Sik
    • The KIPS Transactions:PartA / v.14A no.5 / pp.269-278 / 2007
  • Caching in a cache server cluster environment has the advantage of minimizing the request and response time of internet traffic and web users. One method of increasing the cache hit ratio is to use a hash function with cooperative caching, which keeps the total cache memory at a fixed size regardless of the number of cache servers. On the contrary, without cooperative caching, the total size of cache memory increases in proportion to the number of cache servers, since each cache server must keep all the cached data. The disadvantage of the hashing method is that clients' requests concentrate on a few of the cache servers due to the characteristics of hashing, so the overall performance of the cache server cluster depends on those few servers. In this paper, we propose a method that distributes client requests uniformly among cache servers using dynamic server information. We performed experiments using 16 PCs. Experimental results show the uniform distribution of client requests among the cache servers.
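
To make the idea above concrete, here is a minimal Python sketch of hash-based request distribution that consults dynamic load information: the URL hash preserves cache affinity, and an overloaded server is bypassed in favor of the least loaded one. The class, threshold, and fallback rule are illustrative assumptions, not the authors' implementation.

```python
import hashlib

class DynamicHashBalancer:
    """Illustrative hash-based dispatcher that consults dynamically
    reported server load before routing a request."""

    def __init__(self, servers, overload_threshold=0.8):
        self.servers = list(servers)            # cache server identifiers
        self.load = {s: 0.0 for s in servers}   # reported load per server (0.0 - 1.0)
        self.overload_threshold = overload_threshold

    def update_load(self, server, load):
        # Called with the load each cache server reports periodically.
        self.load[server] = load

    def route(self, url):
        # Primary target comes from hashing the URL, which preserves cache
        # affinity (the same URL normally maps to the same server).
        digest = hashlib.md5(url.encode()).hexdigest()
        primary = self.servers[int(digest, 16) % len(self.servers)]
        if self.load[primary] < self.overload_threshold:
            return primary
        # Hot spot: redirect the request to the least loaded server instead.
        return min(self.servers, key=lambda s: self.load[s])

balancer = DynamicHashBalancer(["cache1", "cache2", "cache3"])
balancer.update_load("cache1", 0.95)
print(balancer.route("http://example.com/index.html"))  # never returns the overloaded cache1
```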

Dynamic Probabilistic Caching Algorithm with Content Priorities for Content-Centric Networks

  • Sirichotedumrong, Warit;Kumwilaisak, Wuttipong;Tarnoi, Saran;Thatphitthukkul, Nattanun
    • ETRI Journal / v.39 no.5 / pp.695-706 / 2017
  • This paper presents a caching algorithm that offers better reconstructed data quality to the requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet must be cached based on the dynamic caching probability, which is adjusted according to the priorities of content carried by the data packet, the uncertainty of content popularities, and the records of cache events in the router. The adaptation of caching probability depends on the priorities of content, the multiplication factor adaptation, and the addition factor adaptation. The multiplication factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition factor adaptation relies on a multiplication factor, popularities of requested contents, a cache-hit ratio, and a cache-miss ratio. We evaluate the performance of the caching algorithm by comparing it with previous caching schemes in network simulation. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance.
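
The following is a hedged Python sketch of priority-aware probabilistic caching at a router: the caching probability is raised multiplicatively when the instantaneous hit ratio is low and nudged additively by the content's priority. The concrete update constants and the eviction rule are simplified stand-ins for the adaptation described in the abstract, not the authors' formulas.

```python
import random

class ProbabilisticCache:
    """Simplified CCN-style router cache that caches incoming data packets
    with a probability adapted per content priority class."""

    def __init__(self, capacity, priorities):
        self.capacity = capacity
        self.store = {}                            # content name -> data
        self.prob = {p: 0.5 for p in priorities}   # caching probability per priority class
        self.hits = self.misses = 0

    def lookup(self, name):
        if name in self.store:
            self.hits += 1
            return self.store[name]
        self.misses += 1
        return None

    def on_data(self, name, data, priority):
        # Multiplicative step: raise the probability when the instantaneous
        # hit ratio is low. Additive step: bias it by the content priority.
        total = self.hits + self.misses
        hit_ratio = self.hits / total if total else 0.0
        p = self.prob[priority]
        p = min(1.0, p * (1.0 + (1.0 - hit_ratio) * 0.1) + 0.01 * priority)
        self.prob[priority] = p
        if random.random() < p:
            if len(self.store) >= self.capacity:   # evict an arbitrary entry when full
                self.store.pop(next(iter(self.store)))
            self.store[name] = data
```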

A Cache Replacement Strategy based on the Analysis of Request Patterns in Mobile Computing Environments (이동 컴퓨팅 환경에서 요구 패턴 분석을 기반으로 하는 캐쉬 대체 전략)

  • 이윤장;신동천
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.780-791 / 2003
  • Caching is a useful technique for improving response time by reducing contention among requests in mobile computing environments with narrow bandwidth. In traditional cache-based systems, improving the hit ratio has usually been one of the main concerns. However, in mobile computing environments, it is necessary to consider the cost of a cache miss as well as the hit ratio. In this paper, we propose a new cache replacement strategy for pull-based data dissemination systems and evaluate its performance through simulation. The proposed strategy considers both popularity and waiting time: the page with the smallest value of popularity multiplied by waiting time is selected as the victim.
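
A minimal sketch of the victim-selection rule stated in the abstract (evict the page with the smallest popularity multiplied by waiting time); the surrounding bookkeeping is illustrative.

```python
class PopularityWaitCache:
    """Cache that evicts the page minimizing popularity * waiting time."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}  # page id -> {"data", "popularity", "waiting_time"}

    def put(self, page_id, data, popularity, waiting_time):
        if page_id not in self.pages and len(self.pages) >= self.capacity:
            # Victim: the page whose popularity * waiting_time is smallest.
            victim = min(self.pages,
                         key=lambda p: self.pages[p]["popularity"] * self.pages[p]["waiting_time"])
            del self.pages[victim]
        self.pages[page_id] = {"data": data,
                               "popularity": popularity,
                               "waiting_time": waiting_time}

    def get(self, page_id):
        entry = self.pages.get(page_id)
        return entry["data"] if entry else None
```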

STP-FTL: An Efficient Caching Structure for Demand-based Flash Translation Layer

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.22 no.7 / pp.1-7 / 2017
  • As the capacity of NAND flash modules increases, the amount of RAM required for caching and maintaining the FTL mapping information also increases. In order to reduce the amount of mapping information managed in RAM, a demand-based address mapping method stores the entire mapping information in flash and keeps only some valid mapping entries as a cache in RAM, so that the RAM can be used efficiently. However, when a cache miss occurs, the mapping information recorded in flash must be read, which incurs address translation overhead. If the RAM space is insufficient, the cache hit ratio decreases, resulting in greater overhead. In this paper, we propose a method using two tables, TPMT (Translation Page Mapping Table) and SMT (Segmented Translation Page Mapping Table), to exploit both temporal locality and spatial locality more efficiently. A performance evaluation shows that this method can improve the cache hit ratio by up to 30% and reduce the extra translation operations by up to 72%, compared to the TPM scheme.
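
For orientation, here is a minimal sketch of the underlying demand-based mapping cache that TPMT/SMT refines: only a bounded number of logical-to-physical entries stay in RAM, and every miss costs an extra flash read. The segmented two-table structure itself is not reproduced; sizes and names are assumptions.

```python
class DemandMappingCache:
    """Demand-based FTL mapping cache: the full logical-to-physical map
    stays in flash; only a bounded number of entries are cached in RAM,
    and every miss costs an extra flash read (simulated by a counter)."""

    def __init__(self, ram_entries, flash_mapping):
        self.ram_limit = ram_entries      # number of mapping entries that fit in RAM
        self.flash = flash_mapping        # dict: logical page -> physical page, kept "in flash"
        self.cache = {}                   # cached entries; dict order serves as LRU order
        self.flash_reads = 0

    def translate(self, lpn):
        if lpn in self.cache:                         # cache hit: no extra flash read
            self.cache[lpn] = self.cache.pop(lpn)     # refresh to most-recently-used position
            return self.cache[lpn]
        self.flash_reads += 1                         # miss: read the mapping entry from flash
        if len(self.cache) >= self.ram_limit:
            self.cache.pop(next(iter(self.cache)))    # evict the least recently used entry
        self.cache[lpn] = self.flash[lpn]
        return self.cache[lpn]

# Example: 4 RAM entries, translations for logical pages 0..99 stored in flash.
ftl = DemandMappingCache(ram_entries=4, flash_mapping={l: 1000 + l for l in range(100)})
for lpn in [1, 2, 1, 3, 4, 5, 1]:
    ftl.translate(lpn)
print(ftl.flash_reads)  # misses that required reading mapping info from flash
```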

A Divided Scope Web Cache Replacement Technique Based on Object Reference Characteristics (객체 참조 특성 기반의 분할된 영역 웹 캐시 대체 기법)

  • Ko, Il-Seok;Leem, Chun-Seong;Na, Yun-Ji;Cho, Dong-Wook
    • The KIPS Transactions:PartC / v.10C no.7 / pp.879-884 / 2003
  • Generally, web caches are used to increase the performance of web-based systems, and the replacement technique has a great influence on web cache performance. A web cache replacement technique differs from a memory replacement technique in that the unit being replaced is a web object, and the variation in user reference characteristics of web objects is very large. Therefore, a web cache replacement technique must sufficiently reflect these object reference characteristics, but existing web caching techniques have not been able to do so. The main focus of this study is the analysis of reference characteristics, the improvement of the object hit rate, and the improvement of response time. First, we analyzed the reference characteristics of web objects through log analysis, and then divided the web cache storage area based on the results of this analysis. The experimental results confirm that the proposed technique improves both the object hit ratio and the response speed compared with the conventional technique.
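
A hedged sketch of a cache whose storage is divided into regions, each replaced independently; the partitioning criterion used here (object size) and the region capacities are illustrative choices, not the reference-characteristic split derived in the paper.

```python
from collections import OrderedDict

class DividedScopeCache:
    """Web cache whose storage is divided into independent regions, each
    managed by its own LRU replacement."""

    def __init__(self, small_capacity, large_capacity, size_threshold=64 * 1024):
        self.size_threshold = size_threshold
        self.regions = {
            "small": (OrderedDict(), small_capacity),   # frequently re-referenced small objects
            "large": (OrderedDict(), large_capacity),   # bulky, less frequently re-referenced objects
        }

    def _region(self, size):
        return "small" if size < self.size_threshold else "large"

    def get(self, url):
        for store, _ in self.regions.values():
            if url in store:
                store.move_to_end(url)                  # LRU update within its own region
                return store[url]
        return None

    def put(self, url, obj, size):
        store, capacity = self.regions[self._region(size)]
        store[url] = obj
        store.move_to_end(url)
        while len(store) > capacity:                    # replacement stays inside one region
            store.popitem(last=False)
```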

Performance Analysis of Adaptive Partition Cache Replacement using Various Monitoring Ratios for Non-volatile Memory Systems

  • Hwang, Sang-Ho;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.23 no.4 / pp.1-8 / 2018
  • In this paper, we propose an adaptive partition cache replacement policy and evaluate its performance using various monitoring ratios, in order to help extend the lifetime of non-volatile main memory systems without performance degradation. The proposal combines the conventional LRU (Least Recently Used) replacement policy with an Early Eviction Zone (E2Z), which considers a dirty bit as well as the LRU bits when selecting a candidate block. In particular, this paper examines the performance of non-volatile memory under various monitoring ratios and determines the optimal monitoring ratio and E2Z partition size for reducing the number of writebacks, using cache hit counter logic and a hit predictor. In the experimental evaluation, the 1:128 monitoring ratio provided the best balance of writebacks and runtime in terms of the performance and complexity trade-off, and our proposal yielded up to a 42% reduction in writebacks compared with other schemes.
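
A minimal sketch of combining LRU order with an early eviction zone: among the blocks at the LRU end of a set, a clean block is preferred as the victim so that dirty writebacks to non-volatile memory are deferred. The zone size and set organization are illustrative assumptions.

```python
from collections import OrderedDict

class E2ZCacheSet:
    """One set of a cache that combines LRU order with an early eviction
    zone (E2Z): among the least recently used blocks, prefer evicting a
    clean block to avoid a writeback to non-volatile main memory."""

    def __init__(self, num_ways, e2z_size=2):
        self.num_ways = num_ways
        self.e2z_size = e2z_size              # how many LRU-end blocks form the zone
        self.blocks = OrderedDict()           # tag -> dirty flag, order = LRU ... MRU
        self.writebacks = 0

    def access(self, tag, is_write=False):
        if tag in self.blocks:
            dirty = self.blocks.pop(tag) or is_write
        else:
            if len(self.blocks) >= self.num_ways:
                self._evict()
            dirty = is_write
        self.blocks[tag] = dirty              # (re)insert at the MRU end

    def _evict(self):
        zone = list(self.blocks.items())[:self.e2z_size]   # LRU-end candidates
        clean = [tag for tag, dirty in zone if not dirty]
        victim = clean[0] if clean else zone[0][0]         # fall back to plain LRU
        if self.blocks.pop(victim):
            self.writebacks += 1                           # a dirty victim costs a writeback
```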

Replacement Algorithm Selection Mechanism Considering File Size for Web Cache Server

  • Sontisiri, Tanasun;Sopechoke, Pawin;Thipchaksurat, Sakchai;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1084-1089 / 2004
  • This paper describes an improvement of the web cache server that focuses on the replacement algorithm applied to data collected from clients. We have found that each replacement algorithm is suitable for a particular type of data in web pages. Therefore, we introduce a mechanism that selects the replacement algorithm depending on the size of the data, called the Replacement Algorithm Selection Mechanism (RASM). RASM allows the web cache server to apply a suitable replacement algorithm to each type of data. As a result, the byte hit ratio of the web cache server can be increased and congestion in the network can be alleviated.
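
A hedged sketch of size-based algorithm selection in the spirit of RASM: small objects go to an LRU-managed pool, large objects to a pool that evicts the largest object first. The size threshold and the two particular policies are illustrative choices, not necessarily those used by RASM.

```python
from collections import OrderedDict

class SizeSelectedCache:
    """Routes each object to a pool managed by a replacement algorithm
    chosen from its size (illustrative stand-in for RASM)."""

    def __init__(self, small_capacity, large_capacity, threshold=32 * 1024):
        self.threshold = threshold
        self.small = OrderedDict()            # small objects, LRU replacement
        self.large = {}                       # large objects, evict-largest replacement
        self.small_capacity = small_capacity  # capacities counted in objects for simplicity
        self.large_capacity = large_capacity

    def put(self, url, obj, size):
        if size < self.threshold:
            self.small[url] = obj
            self.small.move_to_end(url)
            while len(self.small) > self.small_capacity:
                self.small.popitem(last=False)            # LRU victim
        else:
            self.large[url] = (obj, size)
            while len(self.large) > self.large_capacity:
                victim = max(self.large, key=lambda u: self.large[u][1])
                del self.large[victim]                    # evict the largest object

    def get(self, url):
        if url in self.small:
            self.small.move_to_end(url)
            return self.small[url]
        entry = self.large.get(url)
        return entry[0] if entry else None
```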


Authenticated Handoff with Low Latency and Traffic Management in WLAN (무선랜에서 낮은 지연 특성을 가지는 인증유지 핸드오프 기법과 트래픽 관리 기법)

  • Choi Jae-woo;Nyang Dae-hun;Kang Jeon-il
    • Journal of the Korea Institute of Information Security & Cryptology / v.15 no.2 / pp.81-94 / 2005
  • Recently, wireless LAN environments have been widely deployed in public spots. Many people use portable equipment such as PDAs and laptop computers for multimedia applications, and the demand for mobility support is increasing. However, handoff latency inevitably occurs when clients move from one AP to another. To reduce handoff latency, in this paper we suggest WFH (Weighted Frequent Handoff), which uses an effective data structure. WFH improves the cache hit ratio using a new cache replacement algorithm that considers the movement patterns of users. It also reduces unnecessary duplicate traffic. Our algorithm uses an FHR (Frequent Handoff Region) that can change the pre-authentication region according to a QoS-based user level, the movement pattern, and a Neighbor Graph that dynamically captures the network movement topology.
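
A loosely related Python sketch of the pre-authentication caching idea: a neighbor graph weighted by observed handoffs decides at which neighboring APs a client's authentication context is cached before a move, so a handoff that hits the cache can skip full re-authentication. Names, weights, and the eviction rule are illustrative, not WFH's design.

```python
from collections import defaultdict

class HandoffContextCache:
    """Caches client authentication context at likely-next APs, chosen
    from a neighbor graph weighted by observed handoff frequency."""

    def __init__(self, cache_size_per_ap=8):
        self.handoff_counts = defaultdict(lambda: defaultdict(int))  # ap -> neighbor ap -> count
        self.ap_caches = defaultdict(dict)                           # ap -> {client: context}
        self.cache_size_per_ap = cache_size_per_ap

    def record_handoff(self, from_ap, to_ap):
        # Learn the movement pattern: edges of the neighbor graph gain weight.
        self.handoff_counts[from_ap][to_ap] += 1

    def preauthenticate(self, client, current_ap, context, top_k=2):
        # Push the client's context to the top_k most frequent next APs.
        neighbors = self.handoff_counts[current_ap]
        likely = sorted(neighbors, key=neighbors.get, reverse=True)[:top_k]
        for ap in likely:
            cache = self.ap_caches[ap]
            if len(cache) >= self.cache_size_per_ap and client not in cache:
                cache.pop(next(iter(cache)))          # simple FIFO-style eviction
            cache[client] = context

    def handoff(self, client, to_ap):
        # A cache hit here means the handoff can skip full re-authentication.
        return client in self.ap_caches[to_ap]
```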

Analysis of Web Server Referencing Characteristics and performance Improvement of Web Server (웹 서버의 참조 특성 분석과 성능 개선)

  • Ahn, Hyo-Beom;Cho, Kyung-San
    • The KIPS Transactions:PartA / v.8A no.3 / pp.201-208 / 2001
  • The explosive growth of the Web and the non-uniform characteristics of client requests result in performance degradation of Web servers, and a server cache has been recognized as the solution. We analyzed Web server access characteristics: repetition, size, and locality of access. Based on the results, we analyzed cache removal policies and proposed a prefetch strategy to improve the hit ratio of server caches. In addition, through trace-driven simulation based on traces from real Web sites, we showed the performance improvement achieved by our proposal.
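
A minimal sketch of an access-pattern-driven prefetch for a server cache: the document most frequently requested immediately after the current one is pre-loaded. The successor-frequency heuristic is an illustrative assumption, not necessarily the paper's prefetch rule.

```python
from collections import OrderedDict, defaultdict

class PrefetchingServerCache:
    """LRU server cache that prefetches the documents most often requested
    immediately after the current one."""

    def __init__(self, capacity, fetch_fn):
        self.capacity = capacity
        self.fetch = fetch_fn                               # loads a document from disk/backend
        self.cache = OrderedDict()
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last_request = None

    def get(self, doc_id):
        if self.last_request is not None:                   # learn the request sequence
            self.successors[self.last_request][doc_id] += 1
        self.last_request = doc_id
        if doc_id not in self.cache:
            self._insert(doc_id, self.fetch(doc_id))
        self.cache.move_to_end(doc_id)
        data = self.cache[doc_id]
        self._prefetch(doc_id)
        return data

    def _prefetch(self, doc_id, top_k=1):
        followers = self.successors[doc_id]
        for nxt in sorted(followers, key=followers.get, reverse=True)[:top_k]:
            if nxt not in self.cache:
                self._insert(nxt, self.fetch(nxt))           # warm the cache ahead of time

    def _insert(self, doc_id, data):
        self.cache[doc_id] = data
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
```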


Efficient Buffer Allocation Policy for the Adaptive Block Replacement Scheme (적응력있는 블록 교체 기법을 위한 효율적인 버퍼 할당 정책)

  • Choi, Jong-Moo;Cho, Seong-Je;Noh, Sam-Hyuk;Min, Sang-Lyul;Cho, Yoo-Kun
    • Journal of KIISE:Computer Systems and Theory / v.27 no.3 / pp.324-336 / 2000
  • This paper proposes an efficient buffer management scheme to enhance the performance of the disk I/O system. Without any user-level information, the proposed scheme automatically detects the block reference patterns of applications by associating block attributes with the forward distance of a block. Based on the detected patterns, the scheme applies an appropriate replacement policy to each application. We also present a new block allocation scheme to improve buffer cache performance when the kernel needs to allocate a cache block due to a cache miss. The allocation scheme analyzes the cache hit ratio of each application based on its block reference pattern and allocates cache blocks so as to maximize the overall cache hit ratio of the system. All of these procedures are performed online and automatically at the system level. We evaluate the scheme through trace-driven simulation. Experimental results show that our scheme leads to significant improvements in the cache hit ratio compared to traditional schemes, while requiring low overhead.
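
A hedged sketch of the per-application idea: a crude detector classifies each application's reference pattern (here, simply sequential vs. re-referencing, a simplification of the forward-distance analysis) and applies MRU eviction to sequential streams and LRU to everything else. The classification rule and thresholds are illustrative.

```python
from collections import OrderedDict

class AdaptiveBufferCache:
    """Per-application buffer cache: a crude reference-pattern detector
    picks MRU replacement for sequential streams and LRU otherwise."""

    def __init__(self, capacity_per_app):
        self.capacity = capacity_per_app
        self.apps = {}   # app id -> {"blocks", "last", "seq", "total"}

    def access(self, app, block_no):
        state = self.apps.setdefault(app, {"blocks": OrderedDict(), "last": None,
                                           "seq": 0, "total": 0})
        # Pattern detection: count how often a request follows its predecessor block.
        if state["last"] is not None:
            state["total"] += 1
            if block_no == state["last"] + 1:
                state["seq"] += 1
        state["last"] = block_no

        blocks = state["blocks"]
        hit = block_no in blocks
        if hit:
            blocks.move_to_end(block_no)
        else:
            if len(blocks) >= self.capacity:
                sequential = state["total"] > 10 and state["seq"] / state["total"] > 0.8
                # MRU eviction for sequential streams (blocks will not be re-referenced),
                # LRU eviction for all other patterns.
                blocks.popitem(last=sequential)
            blocks[block_no] = True
        return hit
```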
