• Title/Summary/Keyword: Data cache

SBR-k (Sized-based Replacement-k): File Replacement in Data Grid Environments (SBR-k(Sized-based replacement-k) : 데이터 그리드 환경에서 파일 교체)

  • Park, Hong-Jin
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.57-64 / 2008
  • Data grid computing provides geographically distributed storage resources for solving computational problems with large-scale data. Unlike cache replacement in virtual memory or web caching, file replacement in data grids is a distinctly important problem because the files involved are very large. Traditional replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) must either predict future requests or consume additional resources to carry out replacement. To solve these problems, this paper proposes SBR-k (Sized-based Replacement-k), which replaces files based on file size. Rather than forecasting an uncertain future, the proposed policy considers file size so as to reduce the number of files that must be evicted to accommodate a requested file. Simulation results show that hit ratios were similar when the cache was small, but the proposed policy was superior to the traditional policies when the cache was large.
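The abstract does not spell out the eviction rule or the role of the parameter k, but the stated goal of displacing as few files as possible per admission can be sketched directly. Below is a minimal, hypothetical Python sketch that evicts the largest cached files first so that the fewest files are disturbed; all names and the largest-victim-first heuristic are assumptions, not the paper's algorithm:

```python
# Hypothetical sketch of size-aware replacement in the spirit of SBR-k.
# Assumption: evicting the largest cached files first minimizes the number
# of evictions needed to admit a new file. The paper's parameter k and its
# exact rule are not described in the abstract and are omitted here.

class SizeBasedCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = {}                    # file name -> size in bytes

    def request(self, name, size):
        if name in self.files:
            return True                    # cache hit
        # Evict largest files first until the requested file fits.
        for victim in sorted(self.files, key=self.files.get, reverse=True):
            if self.used + size <= self.capacity:
                break
            self.used -= self.files.pop(victim)
        if size <= self.capacity:
            self.files[name] = size
            self.used += size
        return False                       # cache miss
```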

Low-Power Data Cache Architecture and Microarchitecture-level Management Policy for Multimedia Application (멀티미디어 응용을 위한 저전력 데이터 캐쉬 구조 및 마이크로 아키텍쳐 수준 관리기법)

  • Yang Hoon-Mo;Kim Cheong-Gil;Park Gi-Ho;Kim Shin-Dug
    • The KIPS Transactions:PartA / v.13A no.3 s.100 / pp.191-198 / 2006
  • Today's battery-powered portable consumer devices tend to integrate ever more multimedia processing capability. In such devices, multimedia systems-on-chip handle algorithms that demand intensive processing and consume significant power, so the power efficiency of multimedia processing devices is becoming increasingly important. In this paper, we propose a reconfigurable data cache architecture in which data allocation is constrained with software support, and we evaluate its performance and power efficiency. Compared with conventional cache architectures, power consumption is reduced significantly while the miss rate of the proposed architecture remains very close to that of conventional caches: the reconfigurable data cache reduces power consumption by 33.2%, 53.3%, and 70.4% compared with direct-mapped, 2-way, and 4-way caches, respectively.
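The abstract reports power savings but not the microarchitectural details. As a rough illustration of why constraining data allocation saves power, the toy model below assumes that software steers streaming multimedia data into a small dedicated region, so each access activates fewer ways than a conventional set-associative lookup; all numbers and the configuration are invented for illustration:

```python
# Toy energy model, not the paper's design. Assumption: a conventional
# 4-way cache activates all 4 ways per access, while a software-partitioned
# cache activates 1 way for stream data and 2 ways for other data.

E_WAY = 1.0                     # assumed relative energy per activated way

def access_energy(ways_activated):
    return ways_activated * E_WAY

conventional = access_energy(4)              # 4-way lookup

stream_fraction = 0.6                        # assumed share of stream accesses
partitioned = (stream_fraction * access_energy(1)
               + (1 - stream_fraction) * access_energy(2))

print(f"relative energy saving: {1 - partitioned / conventional:.1%}")
```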

Cache-Filter: A Cache Permission Policy for Information-Centric Networking

  • Feng, Bohao;Zhou, Huachun;Zhang, Mingchuan;Zhang, Hongke
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.12 / pp.4912-4933 / 2015
  • Information-Centric Networking (ICN) has recently attracted great attention. It names content, decoupling it from location, and introduces in-network caching, allowing content to be cached anywhere within the network. The benefits of such a design are obvious; however, many challenges remain. Among them, the local caching policy is widely discussed, and it can be further divided into two parts: the cache permission policy and the cache replacement policy. The former decides whether an incoming content object should be cached, while the latter evicts a cached object when required. The Internet is a user-oriented network, and popular contents always receive far more requests than unpopular ones. Caching such popular contents closer to users improves network performance, so the local caching policy must identify popular contents. However, given the line-speed requirement of ICN routers, a local caching policy with complexity greater than O(1) cannot be applied. As for the replacement policy, Least Recently Used (LRU) is the default for ICN because of its low complexity, although its ability to identify popular content is poor. Hence, the identification of popular contents falls to the cache permission policy. In this paper, a cache permission policy called Cache-Filter, whose complexity is O(1), is proposed, aiming to store popular contents closer to users. Cache-Filter takes content popularity into account and achieves this goal through the collaboration of on-path nodes. Extensive simulations are conducted to evaluate the performance of Cache-Filter, with Leave Copy Down (LCD), Move Copy Down (MCD), Betw, ProbCache, ProbCache+, Prob(p), and Probabilistic Caching with Secondary List (PCSL) implemented for comparison. The results show that Cache-Filter performs well; for example, in terms of the distance to reach content, Cache-Filter saves over 17% of the hops compared with Leave Copy Everywhere (LCE), the permission policy used by Named Data Networking (NDN).
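The paper's collaborative on-path mechanism is beyond an abstract-level sketch, but the O(1) permission idea itself is easy to illustrate: count requests per content name and admit an object into the cache only once it has proven popular. The threshold and all names below are assumptions, and the unbounded counter would need to be capped or aged in a real router:

```python
from collections import Counter

# Minimal O(1) cache-permission sketch (not Cache-Filter itself): admit a
# content object only after it has been requested `threshold` times here.
# The real scheme also coordinates with other on-path nodes, omitted here.

class PermissionFilter:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.counts = Counter()            # content name -> request count

    def should_cache(self, name):
        self.counts[name] += 1             # O(1) update per request
        return self.counts[name] >= self.threshold
```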

Performance Analysis of Multicore Processor Architectures Based On Cache Size Effects (캐쉬 용량 효과에 대한 멀티코어 프로세서의 성능 연구)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.6 / pp.175-180 / 2012
  • To overcome the complexity and performance limits of superscalar processors, multicore architectures have recently become prevalent. The configuration and size of the instruction and data caches strongly affect the performance of multicore processors. Using SPEC 2000 benchmarks as input, extensive trace-driven simulations were performed for 2-core to 16-core architectures with caches of different sizes. As a result, 64 KB 2-way set-associative instruction and data caches delivered the most cost-effective performance.
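For readers unfamiliar with trace-driven cache studies, a minimal simulator of the kind such experiments rely on is sketched below: it replays an address trace against one set-associative LRU cache and reports the hit ratio. The size and associativity defaults echo the paper's best configuration, but the code itself, including the line size, is an illustrative assumption, not the authors' simulator:

```python
from collections import OrderedDict

# Minimal trace-driven simulator for one set-associative cache with LRU
# replacement. The 64 KB / 2-way defaults echo the paper's most
# cost-effective configuration; the 32-byte line size is an assumption.

def hit_ratio(trace, size=64 * 1024, ways=2, line=32):
    n_sets = size // (line * ways)
    sets = [OrderedDict() for _ in range(n_sets)]
    hits = 0
    for addr in trace:
        tag, idx = divmod(addr // line, n_sets)
        ways_in_set = sets[idx]
        if tag in ways_in_set:
            hits += 1
            ways_in_set.move_to_end(tag)         # refresh LRU position
        else:
            if len(ways_in_set) >= ways:
                ways_in_set.popitem(last=False)  # evict least recently used
            ways_in_set[tag] = True
    return hits / len(trace) if trace else 0.0
```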

Web Caching Strategy Based on Document Popularity (선호도 기반 웹 캐싱 전략)

  • Yoo, Hae-Young;Park, Chel
    • Journal of KIISE:Computer Systems and Theory / v.29 no.9 / pp.530-538 / 2002
  • In this paper, we propose a new caching strategy for web servers. The proposed algorithm collects only statistics about the requested file, such as its popularity, when a request arrives; then, periodically, only the files with the highest popularity are cached, all at once. Because the cache remains unchanged until it is rebuilt, the web server can use a very efficient data structure to determine whether a file is in the cache, which greatly increases the efficiency of cache manipulation. Furthermore, experiments performed with real log files produced by web servers show that the cache hit ratio and the byte hit ratio are better than those produced by LRU. The proposed algorithm has a drawback: the hit ratio may decrease when the popularity of files not in the cache explodes instantaneously. In our opinion, however, such explosions happen infrequently, and web servers can easily be adapted to such unusual cases.
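The strategy separates statistics collection from cache construction, which is what makes lookups cheap. A minimal sketch under assumed names follows: requests only update counters, and a periodic rebuild freezes the most popular files into an immutable set, so membership tests are plain O(1) hash lookups:

```python
from collections import Counter

# Sketch of the rebuild-style popularity cache described above. Capacity is
# counted in files rather than bytes to keep the illustration short.

class PopularityCache:
    def __init__(self, capacity_files=1000):
        self.capacity = capacity_files
        self.popularity = Counter()        # file -> request count
        self.cached = frozenset()

    def request(self, name):
        self.popularity[name] += 1         # statistics only; cache unchanged
        return name in self.cached         # O(1) membership test

    def rebuild(self):                     # run periodically
        top = self.popularity.most_common(self.capacity)
        self.cached = frozenset(name for name, _ in top)
```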

Performance Analysis of the Small Write Problem on the Cached RAID5 (캐쉬를 이용한 RAID5 상에서 작은 쓰기 문제의 성능 분석)

  • Choi, Hwang-Kyu;Seo, Ju-Ha;Lee, Seung-Taek
    • Journal of Industrial Technology / v.15 / pp.103-111 / 1995
  • In this paper, we evaluate the performance of cached RAID5, which uses a parity cache and a data cache to overcome the small-write problem. The results of our simulation study show that cached RAID5 provides a performance improvement with a reasonable cache size.
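The small-write problem itself is easy to quantify: updating one data block on RAID5 classically costs four disk I/Os (read old data, read old parity, write new data, write new parity), and caches remove the reads when they hit. The model below is a back-of-the-envelope illustration with assumed hit rates, not the paper's simulation:

```python
# Expected disk I/Os per RAID5 small write. Without caching this is the
# classic 4-I/O penalty; hits in the data or parity cache remove the
# corresponding read. Hit rates are illustrative assumptions.

def ios_per_small_write(data_hit=0.0, parity_hit=0.0):
    reads = (1 - data_hit) + (1 - parity_hit)   # old data + old parity
    writes = 2                                  # new data + new parity
    return reads + writes

print(ios_per_small_write())                               # 4.0 without caches
print(ios_per_small_write(data_hit=0.5, parity_hit=0.8))   # 2.7 with caches
```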

Development of a WWW Cache Replacement Algorithm for Internet Data Access (인터넷 데이터 접급 속도 향상을 위한 WWW 캐시 교환 알고리즘 개발)

  • Heo, Yeun-Jeong;Lee, Yang-Weon;Kim, Chul-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.1085-1088 / 2005
  • With the growing number of Internet users, the volume of data exchanged over networks keeps increasing. The resulting network traffic and server load degrade the response time of WWW servers, which has been raised as a serious problem. In this paper, we present a way to reduce response time by using a WWW cache server, and to that end we propose a replacement algorithm that the WWW cache server can apply remotely.

Cache Replacement and Coherence Policies Depending on Data Significance in Mobile Computing Environments (모바일 컴퓨팅 환경에서 데이터의 중요도에 기반한 캐시 교체와 일관성 유지)

  • Kim, Sam-Geun;Kim, Hyung-Ho;Ahn, Jae-Geun
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2A / pp.149-159 / 2011
  • Mobile computing environments have recently become common at a rapid pace. This trend emphasizes the need to access database systems on fixed networks from mobile platforms via wireless networks. However, applying the database access methods of traditional computing environments to mobile computing environments is inappropriate because of their inherent restrictions. This paper proposes a new agent-based mobile database access model, together with two functions that compute data significance scores for choosing suitable data items during cache replacement and coherence maintenance. These functions jointly reflect access recency, access frequency and its trend, update frequency and its trend, and the distribution of data item sizes. Simulation experiments show that our policies outperform the LRU, LIX, and SAIU policies in reducing access latency, improving the cache byte hit ratio, and lowering the cache byte pollution ratio.
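The abstract lists the inputs to the significance functions without giving their form. Purely as an illustration, the hypothetical score below combines access recency, access frequency, update frequency, and item size in one weighted formula; the paper defines its own two functions, which this does not reproduce:

```python
import math
import time

# Hypothetical data-significance score: higher for recently and frequently
# accessed items, lower for frequently updated or large items. All weights
# and the functional form are assumptions for illustration only.

def significance(last_access, access_freq, update_freq, size_bytes,
                 now=None, w_recency=1.0, w_freq=1.0, w_update=1.0):
    now = time.time() if now is None else now
    recency = 1.0 / (1.0 + (now - last_access))      # newer -> closer to 1
    benefit = w_recency * recency + w_freq * access_freq
    penalty = (1.0 + w_update * update_freq) * math.log2(2 + size_bytes)
    return benefit / penalty       # evict the lowest-scoring items first
```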

A Cache Manager for Enhancing the Performance of Query Evaluation in Data Warehousing Environment (데이타웨어하우스 환경에서의 질의 처리 성능 향상을 위한 캐시 관리자)

  • 심준호
    • Journal of KIISE:Databases / v.30 no.4 / pp.408-419 / 2003
  • Data warehouses are usually dedicated to processing queries issued by decision support systems (DSS). The response time of DSS queries is typically several orders of magnitude longer than that of OLTP queries, and since DSS queries are often submitted interactively, techniques for reducing their response time are important. Caching query results is one such technique, particularly well suited to the DSS environment. In this paper, we present a cache manager for such an environment. Specifically, we define a canonical form for queries: the cache manager looks a query up by exact match when it arrives in non-canonical form, or through a proposed query-split process when it arrives in canonical form. It maintains the cache content dynamically by employing a profit function that integrates the query execution cost, the size of the query result, the reference rate, the maintenance cost each result incurs due to updates of its base tables, and the frequency of such updates. Our experimental evaluation confirms the performance benefit of the cache manager.
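The profit function is the heart of the cache manager, and its listed ingredients suggest a simple cost-benefit ratio. The version below is a hypothetical rendering, not the paper's formula: the saving from reusing a cached result (execution cost weighted by reference rate) is traded against its footprint and refresh cost:

```python
# Hypothetical profit function for a query-result cache. A result is worth
# keeping when the execution cost it saves, weighted by how often it is
# referenced, outweighs its space cost and update-driven maintenance cost.

def profit(exec_cost, result_size, ref_rate, maint_cost, update_freq):
    benefit = exec_cost * ref_rate              # work saved per unit time
    cost = result_size + maint_cost * update_freq
    return benefit / cost                       # evict low-profit entries
```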

Design and Implementation of an In-Memory File System Cache with Selective Compression (대용량 파일시스템을 위한 선택적 압축을 지원하는 인-메모리 캐시의 설계와 구현)

  • Choe, Hyeongwon;Seo, Euiseong
    • Journal of KIISE / v.44 no.7 / pp.658-667 / 2017
  • The demand for large-scale storage systems has continued to grow with the emergence of multimedia, social-network, and big-data services. To improve the response time and reduce the load of such systems, DRAM-based in-memory cache systems are becoming popular; however, the high cost of DRAM severely restricts their capacity. Compressing cache entries has been proposed to deal with this capacity limitation, but compression and decompression, which are technically difficult to parallelize, introduce significant processing overhead and in turn lengthen the response time. This paper proposes a selective compression scheme for in-memory file system caches that rapidly estimates the compression ratio of incoming cache entries from their Shannon entropies and compresses only the entries expected to compress well. We also describe the design and implementation of an in-kernel in-memory file system cache with the proposed scheme. The evaluation showed that the proposed scheme reduced benchmark execution time by approximately 18% compared with a conventional non-compressing in-memory cache. It also achieved a cache hit ratio similar to that of an all-compressing counterpart while cutting execution time by 7.5% through reduced compression overhead, and it lowered the CPU time spent on compression by 28% compared with the all-compressing scheme.
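The decision rule lends itself to a short sketch: estimate the Shannon entropy of a sampled prefix of each incoming entry and compress only when the entropy indicates the data is likely compressible. The 4 KB sample and the 6-bits-per-byte threshold below are assumptions, as is the use of zlib in place of whatever compressor the kernel implementation uses:

```python
import math
import zlib
from collections import Counter

# Selective compression sketch: low byte entropy suggests a good
# compression ratio, so only low-entropy entries are compressed.

def byte_entropy(data):
    n = len(data)
    counts = Counter(data)                 # byte value -> occurrence count
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def store(entry, threshold_bits=6.0, sample_len=4096):
    sample = entry[:sample_len]
    if sample and byte_entropy(sample) < threshold_bits:
        return zlib.compress(entry), True  # likely compressible: compress
    return entry, False                    # high entropy: store as-is
```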