• Title/Summary/Keyword: data cache prefetch

An Active Prefetch Filtering Scheme using an Exclusive Prefetch Cache (선인출 전용 캐시를 이용한 적극적 선인출 필터링 기법)

  • Chon Young-Suk; Kim Suk-il; Jeon Joong-nam
    • The KIPS Transactions: Part A / v.12A no.1 s.91 / pp.41-52 / 2005
  • Memory reference instructions that miss in the cache are a critical factor limiting processor performance. Cache prefetching is an effective way to reduce the latency of memory access; however, excessively aggressive prefetching leads to cache pollution and ultimately cancels out the advantage of prefetching. In this study, an active prefetch filtering scheme is introduced that dynamically decides whether to commence prefetching after consulting a filtering table, thereby reducing the cache pollution caused by unnecessary prefetches. For precise filtering, an evicted-address referencing scheme is proposed in which the filter directly compares the current prefetch address with previous unnecessary prefetch addresses stored in the filtering table. Moreover, a small exclusive prefetch cache is introduced to increase the eviction of unnecessarily prefetched addresses and thus enhance the accuracy of dynamic filtering. The exclusive prefetch cache also prevents useful demand data from being pushed out by prefetched data, while the evicted-address direct referencing scheme enables the prefetch cache to keep most useful prefetch data within its small size. Experimental results on commonly used general and multimedia benchmarks show that the average cache miss ratio decreased by 13.3% owing to the enhanced filtering accuracy, compared with conventional schemes.
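
A minimal sketch in C of the evicted-address direct-referencing idea described above; the table size, hash, and function names are illustrative assumptions, not the paper's design:

```c
#include <stdbool.h>
#include <stdint.h>

#define FILTER_ENTRIES 256               /* assumed table size */

/* Block addresses of prefetches that were evicted without ever being
 * referenced (valid bits omitted for brevity). */
static uint32_t filter_table[FILTER_ENTRIES];

static unsigned filter_index(uint32_t block_addr) {
    return block_addr % FILTER_ENTRIES;
}

/* Called when a block leaves the exclusive prefetch cache unused. */
void record_useless_prefetch(uint32_t block_addr) {
    filter_table[filter_index(block_addr)] = block_addr;
}

/* Called before issuing a prefetch: the candidate address is compared
 * directly against the stored address (an exact match, not just a
 * hashed state bit), which is the "evicted address direct referencing"
 * idea; only exact repeat offenders are suppressed. */
bool allow_prefetch(uint32_t block_addr) {
    return filter_table[filter_index(block_addr)] != block_addr;
}
```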

The Power and Pitfalls of Data Prefetching (데이터 미리읽기의 동작과 문제점)

  • Ki, An-do
    • Electronics and Telecommunications Trends / v.13 no.4 s.52 / pp.59-69 / 1998
  • The terminology of data prefetching is introduced, including stride, repeat distance, stall, pending stall, prefetch degree, prefetch distance, and prefetch offset. The effectiveness of hardware data prefetching in reducing cache misses is shown using a square matrix multiplication example. The pitfalls of prefetching and possible solutions are then discussed.
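
The matrix multiplication example lends itself to a short sketch of the terminology; the C code below marks where the strides arise (the prefetch distance value is an illustrative assumption):

```c
#define N 64
#define PREFETCH_DISTANCE 4   /* assumed lookahead, in iterations */

void matmul(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++) {
                /* A[i][k] is walked with stride 1 element; B[k][j] is
                 * walked with stride N elements (one full row between
                 * consecutive accesses). A stride prefetcher that has
                 * learned the pattern would fetch approximately
                 * &B[k + PREFETCH_DISTANCE][j] ahead of its use. */
                sum += A[i][k] * B[k][j];
            }
            C[i][j] = sum;
        }
}
```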

Design and evaluation of a fuzzy cooperative caching scheme for MANETs

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society / v.21 no.3 / pp.605-619 / 2010
  • Caching frequently accessed data in a multi-hop ad hoc environment is a technique that can improve data access performance and availability. Cooperative caching, which allows sharing and coordination of cached data among several clients, can further enhance the potential of caching techniques. In this paper, we propose a fuzzy cooperative caching scheme for mobile ad hoc networks. The cache management of the proposed scheme not only adaptively uses CacheData or CachePath based on data similarity and data utility, but also uses a replacement manager based on data profit. The proposed scheme also uses a prefetch manager: when the TTL of a cached data item expires, the prefetch manager evaluates the popularity index of the data; if the popularity index is larger than a threshold, the data is prefetched, and otherwise its space is released. The performance of the proposed scheme is evaluated analytically and compared with that of other cooperative caching schemes.
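
A minimal sketch of the prefetch manager's TTL decision as the abstract describes it; the structure fields and the popularity formula are assumptions for illustration:

```c
#include <stdbool.h>

struct cache_item {
    int    id;
    double access_count;   /* accesses during the last TTL window */
    double ttl_window;     /* length of the window, in seconds */
};

/* Assumed popularity measure: access rate over the TTL window. */
static double popularity_index(const struct cache_item *it) {
    return it->access_count / it->ttl_window;
}

/* On TTL expiry: returns true if the item should be prefetched again,
 * false if its cache space should be released instead. */
bool on_ttl_expired(const struct cache_item *it, double threshold) {
    return popularity_index(it) > threshold;
}
```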

A Study on the Prediction Accuracy Bounds of Instruction Prefetching (명령어 선인출 예측 정확도의 한계에 관한 연구)

  • Kim, Seong-Baeg; Min, Sang-Lyul; Kim, Chong-Sang
    • Journal of KIISE: Computer Systems and Theory / v.27 no.8 / pp.719-729 / 2000
  • Prefetching aims at reducing memory latency by fetching, in advance, data that are likely to be requested by the processor in the near future. The effectiveness of prefetching is determined by how accurately the needed instructions and data can be predicted. Most previous studies on prefetching were limited to proposing a particular prefetch scheme and evaluating its performance, paying little attention to the theoretical aspects of prefetching. This paper focuses on the theoretical aspects of instruction prefetching. For this purpose, we propose a clairvoyant prefetch model that makes use of perfect history information. Based on this theoretical model, we analyze upper limits on the prefetch prediction accuracy for the SPEC benchmarks. The results show that the prefetch prediction accuracy is very high when there is no cache; however, as the size of the instruction cache increases, the prediction accuracy drops drastically. For example, for the spice benchmark, the prediction accuracy drops from 53% to 39% when the cache size increases from 2 Kbytes to 16 Kbytes (assuming a 16-byte block size). These results indicate that as the cache size increases, most localities are captured by the cache, and that instruction prefetching based on information extracted from references that miss in the cache suffers from prediction inaccuracies.
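
The notion of an accuracy upper bound can be made concrete with a standard formalization; the following is a sketch under the assumption that the clairvoyant predictor always guesses, for each missing block, its most frequent successor in the complete miss trace (the paper's exact model may differ):

```latex
% Hedged formalization (an assumption, not necessarily the paper's model):
% N(a,b) = number of times a miss on block a is immediately followed
%          by a miss on block b in the complete reference trace.
% No predictor limited to one-block lookahead can beat, per source
% block, the most frequent successor seen in the full trace:
\[
  \mathrm{Accuracy}_{\max}
    \;=\;
  \frac{\sum_{a} \max_{b} N(a,b)}{\sum_{a} \sum_{b} N(a,b)}
\]
```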

High Performance Data Cache Memory Architecture (고성능 데이터 캐시 메모리 구조)

  • Kim, Hong-Sik; Kim, Cheong-Ghil
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.945-951 / 2008
  • In this paper, a new high-performance data cache scheme that improves the exploitation of both spatial and temporal locality is proposed. The proposed data cache consists of a hardware prefetch unit and two sub-caches: a direct-mapped (DM) cache with a large block size and a fully associative buffer with a small block size. Spatial locality is exploited by fetching and storing large blocks into the direct-mapped cache, and is enhanced by prefetching a neighboring block when a DM cache hit occurs. Temporal locality is exploited by storing small blocks from the DM cache in the fully associative buffer, according to their activity in the DM cache, when they are replaced. Experimental results on SPEC2000 programs show that the proposed scheme can reduce the average miss ratio by 12.53% to 23.62% and the AMAT (average memory access time) by 14.67% to 18.60% compared with previous schemes such as the direct-mapped cache, the 4-way set-associative cache, and the SMI (selective mode intelligent) cache [8].
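
A structural sketch in C of the two sub-caches and the two locality mechanisms the abstract describes; sizes, field names, and the promotion policy are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define LARGE_BLOCK 64   /* bytes, assumed DM-cache block size */
#define SMALL_BLOCK 16   /* bytes, assumed buffer block size */
#define SUBBLOCKS   (LARGE_BLOCK / SMALL_BLOCK)
#define FA_ENTRIES  16   /* assumed fully associative buffer size */

struct dm_line {
    bool     valid;
    uint32_t tag;
    uint8_t  active;      /* one reference bit per small subblock */
};

static uint32_t fa_buffer[FA_ENTRIES];  /* small blocks kept for temporal locality */
static unsigned fa_next;                /* simple round-robin victim pointer */

/* On a DM-cache hit: mark the touched subblock; the neighbor-block
 * prefetch that enhances spatial locality would be triggered here. */
void on_dm_hit(struct dm_line *line, unsigned subblock, uint32_t block_addr) {
    line->active |= (uint8_t)(1u << subblock);
    /* issue_prefetch(block_addr + LARGE_BLOCK);  -- hook omitted */
    (void)block_addr;
}

/* On DM replacement: subblocks that showed activity are promoted into
 * the fully associative buffer so their temporal locality survives. */
void on_dm_replace(struct dm_line *victim, uint32_t victim_addr) {
    for (unsigned s = 0; s < SUBBLOCKS; s++) {
        if (victim->active & (1u << s)) {
            fa_buffer[fa_next] = victim_addr + s * SMALL_BLOCK;
            fa_next = (fa_next + 1) % FA_ENTRIES;
        }
    }
    victim->active = 0;
    victim->valid = false;
}
```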

A Hardware Cache Prefetching Scheme for Multimedia Data with Intermittently Irregular Strides (단속적(斷續的) 불규칙 주소간격을 갖는 멀티미디어 데이타를 위한 하드웨어 캐시 선인출 방법)

  • Chon Young-Suk; Moon Hyun-Ju; Jeon Joongnam; Kim Sukil
    • Journal of KIISE: Computer Systems and Theory / v.31 no.11 / pp.658-672 / 2004
  • Multimedia applications must process huge amounts of data at high speed in real time. Memory reference instructions such as loads and stores are the main factor limiting the high-speed execution of the processor. To enhance memory reference speed, cache prefetch schemes reduce the cache miss ratio and the total execution time by fetching data into the cache before it is expected to be referenced. In this study, we present an advanced data cache prefetching scheme that improves on the conventional RPT (reference prediction table) based scheme. It considers the cache line size when calculating the address stride referenced by the same instruction, and enhances the prefetching algorithm so that the benefit of prefetching is maintained even if an irregular address stride is inserted into a series of uniform strides. According to experimental results on multimedia benchmark programs, the cache miss ratio improved by 29% on average compared with the conventional RPT scheme, while bus usage increased by a relatively small amount (0.03%).
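
A hedged sketch of an RPT-style update reflecting the two refinements the abstract names: line-granularity strides and tolerance of an intermittent irregular stride. Field names, thresholds, and the decay policy are assumptions, not the paper's exact algorithm:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_SHIFT 5            /* 32-byte cache line, assumed */

struct rpt_entry {
    uint32_t pc;                /* load/store instruction address */
    uint32_t last_line;         /* last referenced address, in lines */
    int32_t  stride;            /* stride in units of cache lines */
    int      confidence;        /* saturating counter */
};

/* Returns true and sets *prefetch_line if a prefetch should be issued. */
bool rpt_update(struct rpt_entry *e, uint32_t addr, uint32_t *prefetch_line) {
    uint32_t line = addr >> LINE_SHIFT;   /* stride at line granularity */
    int32_t observed = (int32_t)(line - e->last_line);
    e->last_line = line;

    if (observed == e->stride) {
        if (e->confidence < 3) e->confidence++;
    } else {
        /* Tolerate an intermittent irregular stride: decay confidence
         * rather than resetting it, so an established uniform pattern
         * can resume after a one-off disturbance. */
        if (e->confidence > 0) e->confidence--;
        else e->stride = observed;
    }
    if (e->confidence >= 2 && e->stride != 0) {
        *prefetch_line = line + (uint32_t)e->stride;
        return true;
    }
    return false;
}
```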

A Prefetch Algorithm for a Mobile Host using Association Rules (연관 규칙을 이용한 이동 호스트의 선반입 알고리즘)

  • Kim Ho-Sook; Yong Hwan-Seung
    • Journal of KIISE: Databases / v.31 no.2 / pp.163-173 / 2004
  • Recently, location-based services have become very popular in mobile environments. In this paper, we propose a new association-based prefetch algorithm (called STAP) that efficiently supports information services based on large spatial databases in mobile environments. We apply spatio-temporal relations that are meaningful for location-based queries in mobile environments. Moreover, STAP considers the user's mobility and the weight of spatial data. The relation between services is a new aspect not considered in previous cache policies; STAP is thus the first prefetch algorithm to consider spatio-temporal relations, giving cache policies a new dimension. We evaluate the performance of STAP and demonstrate its efficiency.

Dynamic Prefetch Filtering Schemes to Enhance Utilization of Data Cache (데이타 캐시의 활용도를 높이는 동적 선인출 필터링 기법)

  • Chon, Young-Suk; Kim, Suk-Il; Jeon, Joong-Nam
    • Journal of KIISE: Computer Systems and Theory / v.35 no.1 / pp.30-43 / 2008
  • Memory reference instructions such as loads and stores are critical factors that limit processor performance. Prefetching is an effective way to reduce the latency caused by memory access; however, excessively aggressive prefetching leads to cache pollution that cancels out the advantage of prefetching. In this study, four filtering schemes that dynamically decide whether to begin a prefetch after consulting a filtering table are compared and evaluated for their ability to decrease cache pollution. First, a bi-state scheme is presented to analyze the lock problem of the conventional scheme; like the conventional scheme it uses N:1 mapping, but each 1-bit entry encodes two states. A complete-state scheme is introduced as a reference for the comparative study. A block address lookup (BAL) scheme is then proposed as the main idea of this paper and exhibits the most exact filtering performance: its table has the same length as that of the bi-state scheme, each entry carries the same fields as in the complete-state scheme, and a recently prefetched but never-referenced data block address is mapped 1:1 to an entry of the filter table. Experimental results on commonly used general benchmarks and multimedia programs show that the average cache miss ratio decreased by 10.5% for the BAL scheme compared with the conventional dynamic filter scheme (2-bitSC).
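
The contrast between the N:1 bi-state filter and the 1:1 block address lookup (BAL) filter, as read from the abstract, can be sketched in a few lines of C; the table size and hash are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define ENTRIES 256

static uint8_t  bistate[ENTRIES];    /* 1 bit of state, N:1 mapped */
static uint32_t bal_table[ENTRIES];  /* full block addresses, exact compare */

static unsigned idx(uint32_t block) { return block % ENTRIES; }

/* Recording a useless prefetch in each filter. */
void bistate_mark(uint32_t block) { bistate[idx(block)] = 1; }
void bal_record(uint32_t block)   { bal_table[idx(block)] = block; }

/* Bi-state: any address hashing to a marked entry is suppressed, so
 * unrelated addresses can alias onto it and be filtered by mistake. */
bool bistate_allow(uint32_t block) { return bistate[idx(block)] == 0; }

/* BAL: only the exact recorded block address is suppressed, which is
 * consistent with the most precise filtering reported for this scheme. */
bool bal_allow(uint32_t block)     { return bal_table[idx(block)] != block; }
```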

A Dynamic Prefetch Filtering Scheme to Enhance Usefulness of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk; Lee Byung-Kwon; Lee Chun-Hee; Kim Suk-Il; Jeon Joong-Nam
    • The KIPS Transactions: Part A / v.13A no.2 s.99 / pp.123-136 / 2006
  • Prefetching is an effective way to reduce the latency caused by memory access. However, excessively aggressive prefetching not only leads to cache pollution, canceling out the benefits of prefetching, but also increases bus traffic, degrading overall performance. In this paper, a prefetch filtering scheme is proposed that dynamically decides whether to commence prefetching by consulting a filtering table, reducing the cache pollution due to unnecessary prefetches. First, a prefetch hashing table 1-bit state filtering scheme (PHT1bSC) is presented to analyze the lock problem of the conventional scheme; like the conventional scheme it uses N:1 mapping, but each 1-bit entry encodes two states. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT) is then proposed as the main idea of this paper and exhibits the most exact filtering performance: its table has the same length as that of PHT1bSC, each entry carries the same fields as in CBAT, and a recently prefetched but never-referenced data block address is mapped 1:1 to an entry of the filter table. Simulations were performed over commonly used prefetch schemes, general benchmarks, and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme showed an improvement of up to 22%, and its cache miss ratio decreased by 7.9% owing to enhanced filtering accuracy compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.

Dual Cache Architecture for Low Cost and High Performance

  • Lee, Jung-Hoon; Park, Gi-Ho; Kim, Shin-Dug
    • ETRI Journal / v.25 no.5 / pp.275-287 / 2003
  • We present a high performance cache structure with a hardware prefetching mechanism that enhances exploitation of spatial and temporal locality. Temporal locality is exploited by selectively moving small blocks into the direct-mapped cache after monitoring their activity in the spatial buffer. Spatial locality is enhanced by intelligently prefetching a neighboring block when a spatial buffer hit occurs. We show that the prefetch operation is highly accurate: over 90% of all prefetches generated are for blocks that are subsequently accessed. Our results show that the system enables the cache size to be reduced by a factor of four to eight relative to a conventional direct-mapped cache while maintaining similar performance.
