• Title/Abstract/Keyword: cache

Search results: 1,209 items (processing time: 0.023 seconds)

Preventing Fast Wear-out of Flash Cache with An Admission Control Policy

  • Lee, Eunji; Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science / Vol. 15, No. 5 / pp.546-553 / 2015
  • Recently, flash cache has been widely adopted as a performance accelerator for legacy storage systems. Unlike other cache media, flash cache must be managed carefully because of its peculiar characteristics, such as long write latency and limited P/E cycles. In particular, we make two prominent observations that can be exploited in managing flash cache. First, a serious wear-out problem arises when the working set of a system exceeds the capacity of the flash cache, due to excessively frequent cache replacement. Second, more than 50% of data is never hit in the flash cache, as it is a second-level cache. Based on these observations, we propose a cache admission control policy that does not cache data on its first access, inserting it into the cache only after a second access occurs within a certain time window. This filters out data that is disruptive to the flash cache in terms of endurance and performance. With this policy, we prolong the lifetime of the flash cache by 2.3 times without any performance degradation.
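
A minimal Python sketch of this style of admission filter follows, assuming a fixed time window and LRU eviction inside the flash cache (the class name, window length, and eviction policy are illustrative assumptions, not the paper's exact design):

```python
from collections import OrderedDict
import time

class SecondAccessAdmissionCache:
    """Admit a block into the (flash) cache only when its second
    access arrives within `window` seconds of the first."""

    def __init__(self, capacity, window=60.0):
        self.capacity = capacity
        self.window = window
        self.cache = OrderedDict()   # cached blocks in LRU order
        self.first_seen = {}         # block -> time of first access

    def access(self, block, now=None):
        now = time.time() if now is None else now
        if block in self.cache:              # hit: refresh LRU position
            self.cache.move_to_end(block)
            return True
        t0 = self.first_seen.pop(block, None)
        if t0 is not None and now - t0 <= self.window:
            self.cache[block] = True         # second access in window: admit
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict LRU block
        else:
            self.first_seen[block] = now     # first access: only remember it
        return False
```

One subtlety: if the second access falls outside the window, this sketch restarts the window rather than admitting, which is one plausible reading of the policy.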

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei; Huang, Wei; Han, Li; Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 11 / pp.3892-3912 / 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base-station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate cache content according to actual needs, and determining the most appropriate location to optimize cache performance, have emerged as pressing issues in multi-access edge computing. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. First, the cache value generated by cache placement is calculated from the cache capacity, data popularity, and node replacement rate. Second, the cache placement problem is modeled in terms of the cache value and the data object acquisition and replacement costs. The model is then transformed into a combinatorial optimization problem, and the cache objects are placed on appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU provides a significant improvement in cache service rate, data response time, and replacement number compared with other cache placement algorithms.
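
For intuition, here is a toy Python sketch of the placement step: a capacity-feasible subset of objects is chosen for one node by a single-flip tabu search, with a made-up utility combining popularity and node replacement rate (the utility weights, move rule, and all names are assumptions, not CPBCU's actual formulation):

```python
import random

def cache_value(obj, node):
    """Hypothetical utility: popular objects on a stable node (low
    replacement rate) are worth more. Weights are illustrative."""
    return obj["popularity"] * (1.0 - node["replacement_rate"])

def tabu_place(objects, node, iters=200, tabu_len=7):
    """Choose a capacity-feasible subset of `objects` to cache on `node`
    via a toy single-flip tabu search maximizing total cache value."""
    def total_value(sol):
        size = sum(o["size"] for o, s in zip(objects, sol) if s)
        if size > node["capacity"]:
            return float("-inf")             # infeasible placement
        return sum(cache_value(o, node) for o, s in zip(objects, sol) if s)

    cur = [False] * len(objects)
    best, best_val = cur[:], 0.0
    tabu = []
    for _ in range(iters):
        i = random.randrange(len(objects))   # candidate move: flip object i
        if i in tabu:
            continue
        cur[i] = not cur[i]
        val = total_value(cur)
        if val == float("-inf"):
            cur[i] = not cur[i]              # undo infeasible move
            continue
        tabu = (tabu + [i])[-tabu_len:]      # forbid re-flipping i for a while
        if val > best_val:
            best, best_val = cur[:], val
    return best, best_val

objs = [{"size": 2, "popularity": 0.9}, {"size": 3, "popularity": 0.5},
        {"size": 1, "popularity": 0.7}]
print(tabu_place(objs, {"capacity": 4, "replacement_rate": 0.2}))
```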

A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS: Journal of Semiconductor Technology and Science / Vol. 16, No. 1 / pp.80-90 / 2016
  • Thanks to their superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative memory element for on-chip caches. However, the main disadvantage of STTRAM cells is their high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache that resides between the load/store queue and the STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, the line allocation and access policies are optimized for reducing the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering write operations to the STTRAM-based L1 data cache, our proposed WF cache reduces its energy consumption by up to 43.0% compared to the case without the WF cache. In addition, thanks to the fast hit latency of the WF cache, it slightly improves performance, by 0.2%.
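
A minimal sketch of a write filter of this kind, assuming a fully-associative LRU buffer that coalesces stores and writes a line into the L1 only on eviction (sizes, policies, and names are illustrative, not the paper's optimized allocation policy):

```python
from collections import OrderedDict

class WriteFilterCache:
    """Tiny write filter in front of an STTRAM-like L1 data cache:
    stores land in the filter first, and a line is written into the
    L1 only when evicted, absorbing repeated writes to the same line."""

    def __init__(self, l1, num_lines=8):
        self.l1 = l1                    # backing L1, dict-like: addr -> data
        self.num_lines = num_lines
        self.lines = OrderedDict()      # line address -> data, LRU order

    def store(self, addr, data):
        self.lines[addr] = data         # coalesce writes to the same line
        self.lines.move_to_end(addr)
        if len(self.lines) > self.num_lines:
            victim, vdata = self.lines.popitem(last=False)
            self.l1[victim] = vdata     # single STTRAM write on eviction

    def load(self, addr):
        if addr in self.lines:          # fast hit in the filter
            self.lines.move_to_end(addr)
            return self.lines[addr]
        return self.l1.get(addr)        # fall through to the L1
```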

Variable latency L1 data cache architecture design in multi-core processor under process variation

  • Kong, Joonho
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 9 / pp.1-10 / 2015
  • In this paper, we propose a new variable-latency L1 data cache architecture for multi-core processors. Our proposed architecture extends the traditional variable-latency cache toward multi-core processors. We add a specialized data structure that records the latency of the L1 data cache; the value stored in this structure is determined by the latency added to the L1 data cache. It also tracks the remaining access cycles of the L1 data cache, which are used to notify the reservation station in the core of data arrival. As in the variable-latency cache of single-core architectures, our proposed architecture flexibly extends the cache access cycles to account for process variation. The proposed cache architecture can reduce yield losses incurred by L1 cache access-time failures to nearly 0%. Moreover, we quantitatively evaluate performance, power, energy consumption, power-delay product, and energy-delay product as the number of cache access cycles increases.
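
A rough Python model of the added bookkeeping, assuming per-set extra latencies determined after fabrication and a cycle countdown that signals data arrival to the reservation station (the structure granularity and names are assumptions):

```python
class VariableLatencyL1:
    """Per-set extra-latency table for a variable-latency L1 under
    process variation, plus a countdown of remaining cycles that
    models the data-arrival signal to the reservation station."""

    BASE_CYCLES = 2                     # nominal L1 hit latency

    def __init__(self, extra_latency):
        # extra_latency[s] = cycles added to set s (e.g., from post-silicon test)
        self.extra = list(extra_latency)
        self.remaining = {}             # in-flight access id -> cycles left

    def issue(self, access_id, set_index):
        self.remaining[access_id] = self.BASE_CYCLES + self.extra[set_index]

    def tick(self):
        """Advance one cycle; return ids whose data arrives this cycle,
        i.e., the accesses the reservation station is woken up for."""
        done = [a for a, c in self.remaining.items() if c == 1]
        self.remaining = {a: c - 1 for a, c in self.remaining.items() if c > 1}
        return done
```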

A Cache Controller to Maximize Effectiveness of Hierarchical Memory Architecture

  • 어봉용; 주영관; 전중남; 김석일
    • 한국정보과학회논문지:시스템및이론 / Vol. 32, No. 11-12 / pp.608-616 / 2005
  • In this paper, we propose a cache architecture for hierarchical caches that prefetches on a level-1 cache miss as well, whereas conventional architectures prefetch only on a level-2 cache miss. That is, when a level-1 cache miss occurs, the demanded block and a prefetch block are selected from the level-2 cache and loaded into the level-1 cache and the prefetch cache, respectively. Experimental results on 11 benchmark programs show that a hierarchical cache architecture equipped with both a level-1 and a level-2 cache prefetcher achieves up to 19% performance improvement over the conventional architecture employing only a level-2 cache prefetcher.
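
A compact sketch of this behavior, assuming next-line prefetching as the block selector (the paper's selection of the prefetch block may differ) and dict-backed caches for brevity:

```python
class L1WithPrefetchCache:
    """On an L1 miss, fetch the demand block from L2 into the L1 and a
    prefetch block into a separate prefetch cache. Next-line selection
    and unbounded dicts keep the sketch short."""

    def __init__(self, l2):
        self.l2 = l2                    # block address -> data (assumed present)
        self.l1 = {}
        self.prefetch = {}

    def read(self, block):
        if block in self.l1:
            return self.l1[block]
        if block in self.prefetch:      # prefetch hit: promote into L1
            self.l1[block] = self.prefetch.pop(block)
            return self.l1[block]
        self.l1[block] = self.l2[block]              # demand fetch on L1 miss
        if block + 1 in self.l2:                     # prefetch the next block
            self.prefetch[block + 1] = self.l2[block + 1]
        return self.l1[block]
```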

A Cache Management Technique Based on Eviction Cost Estimation for Heterogeneous Storage Devices

  • 박세진; 박찬익
    • 대한임베디드공학회논문지 / Vol. 7, No. 3 / pp.129-134 / 2012
  • The objective of a cache is to reduce I/O accesses to the physical storage device so that users can access their data faster. Traditionally, the most important metric for cache performance has been the hit ratio, so a replacement policy that keeps the hit ratio high is regarded as a good one. However, cache miss latencies differ when the underlying storage devices are heterogeneous: even with a high hit ratio, if misses frequently fall on a low-performance disk, the user experiences low performance. To address this problem, we propose cache management based on eviction cost estimation. In our results, eviction-cost-based cache management improves throughput by 10-30% compared with LRU cache management.
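
The core idea fits in a few lines: instead of evicting the least-recently-used line, evict the line whose loss is estimated to be cheapest, weighting re-reference likelihood by the miss latency of the backing device. The cost model and field names below are illustrative, not the paper's estimator:

```python
def pick_victim(cache_lines):
    """Evict the line with the lowest estimated eviction cost, where
    cost ~ re-reference probability x miss latency of the backing device."""
    return min(cache_lines,
               key=lambda ln: ln["reref_prob"] * ln["miss_latency_ms"])

lines = [
    {"id": "a", "reref_prob": 0.9, "miss_latency_ms": 0.1},  # backed by SSD
    {"id": "b", "reref_prob": 0.2, "miss_latency_ms": 8.0},  # backed by HDD
]
print(pick_victim(lines)["id"])  # "a": hot, but cheap to re-fetch from SSD
```

Note how the choice diverges from LRU-style thinking: the frequently referenced SSD-backed line is sacrificed because re-fetching it is far cheaper than re-fetching the cold HDD-backed line.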

Design and analytical evaluation of a fuzzy proxy caching for wireless internet

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society / Vol. 20, No. 6 / pp.1177-1190 / 2009
  • In this paper, we propose a fuzzy proxy cache scheme for caching web documents in mobile base stations. In this scheme, a mobile cache model is used to facilitate data caching and data replication. Using the proposed cache scheme, each proxy in a base station makes cache decisions based solely on its local knowledge of the global cache state, so that the entire wireless proxy cache system can be managed effectively without centralized control. To improve the performance of proxy caching, the proposed scheme predicts the direction of movement of mobile hosts and applies different cache methods to neighboring proxy servers according to fuzzy-logic control rules driven by the membership degree of the mobile host. The performance of our cache scheme is evaluated analytically in terms of average response delay and average energy cost, and is compared with that of other mobile cache schemes.
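
As a toy illustration of such membership-driven control rules, assuming a triangular membership function over the host's heading relative to a neighboring cell and made-up thresholds (none of this is the paper's actual rule base):

```python
def membership_toward_neighbor(angle_deg):
    """Triangular membership: how strongly the host is heading toward a
    neighboring cell (0 degrees = straight at it). Purely illustrative."""
    return max(0.0, 1.0 - abs(angle_deg) / 90.0)

def cache_action(angle_deg, replicate_thresh=0.7, hint_thresh=0.3):
    """Toy fuzzy control rule: replicate the document to the neighbor
    proxy if membership is high, send a hint if medium, else nothing."""
    mu = membership_toward_neighbor(angle_deg)
    if mu >= replicate_thresh:
        return "replicate"
    return "hint" if mu >= hint_thresh else "none"

print(cache_action(10), cache_action(50), cache_action(120))
# replicate hint none
```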


A Study on Design and Cache Replacement Policy for Cascaded Cache Based on Non-Volatile Memories

  • 최주희
    • 반도체디스플레이기술학회지 / Vol. 22, No. 3 / pp.106-111 / 2023
  • The importance of load-to-use latency has been highlighted as state-of-the-art computing cores adopt deep pipelines and high clock frequencies. The cascaded cache was recently proposed to reduce the access cycles of the L1 cache by exploiting differences in latency among the banks of the cache structure. However, that study assumes the cache is composed of SRAM, making it unsuitable for direct application to non-volatile memory-based systems. This paper proposes a novel mechanism and structure for lowering dynamic energy consumption. It inserts monitoring logic to keep track of swap operations and write counts. If the ratio of swap operations to total write counts surpasses a set threshold, the cache controller skips the swap of cache blocks, which reduces write operations. To validate this approach, experiments are conducted on a non-volatile memory-based cascaded cache. The results show a reduction in write operations by an average of 16.7% with a negligible increase in latency.
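
The proposed monitoring logic can be sketched as follows, with an arbitrary threshold value (the counter granularity and threshold here are assumptions, not the paper's tuned parameters):

```python
class SwapMonitor:
    """Count swaps and total writes; once the swap-to-write ratio
    exceeds a threshold, tell the cache controller to skip block
    swaps between banks to avoid extra NVM writes."""

    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.swaps = 0
        self.writes = 0

    def record_write(self):
        self.writes += 1

    def allow_swap(self):
        if self.writes and self.swaps / self.writes > self.threshold:
            return False        # swap traffic already too costly: skip
        self.swaps += 1
        return True
```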


Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing

  • Huangfu, Yijie; Zhang, Wei
    • Journal of Computing Science and Engineering / Vol. 11, No. 2 / pp.69-77 / 2017
  • Recent GPUs have adopted cache memory to benefit general-purpose GPU (GPGPU) programs. However, unlike CPU programs, GPGPU programs typically have considerably less temporal/spatial locality. Moreover, the L1 data cache is shared by many threads that access a data set typically far larger than the L1 cache, making it critical to bypass the L1 data cache intelligently to enhance GPU cache performance. In this paper, we examine GPU cache access behavior and propose a simple hardware-based GPU cache bypassing method that can be applied to GPU applications without recompiling programs. Moreover, we introduce a hybrid method that integrates static profiling information with hardware-based bypassing to further enhance performance. Our experimental results reveal that hardware-based cache bypassing can boost performance for most benchmarks, and that the hybrid method can achieve performance comparable to state-of-the-art compiler-based bypassing at considerably lower profiling cost.
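
One common way to realize hardware bypassing, shown here only as a stand-in for the paper's scheme, is a per-load-PC saturating-counter predictor trained on hit/miss outcomes (the counter width and indexing are assumptions):

```python
class BypassPredictor:
    """Per-load-PC saturating counters trained on hit/miss outcomes;
    loads whose counter has saturated low bypass the L1 data cache."""

    def __init__(self, bits=2):
        self.max_val = (1 << bits) - 1
        self.counters = {}                  # load PC -> counter value

    def should_bypass(self, pc):
        # Unseen PCs start mid-range, i.e., they use the cache at first.
        return self.counters.get(pc, self.max_val // 2) == 0

    def update(self, pc, hit):
        c = self.counters.get(pc, self.max_val // 2)
        self.counters[pc] = min(self.max_val, c + 1) if hit else max(0, c - 1)
```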

A Novel Method of Improving Cache Hit-rate in Hadoop MapReduce using SSD Cache

  • Kim, Jong-Chan; An, Jae-Hoon; Kim, Young-Hwan; Jeon, Ki-Man
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 8 / pp.1-6 / 2015
  • A MapReduce program on the Hadoop Distributed File System runs on arbitrary, unspecified nodes because of distributed-parallel processing and block replication for data stability. Since it is difficult to guarantee cache locality when a Solid State Drive is used as a cache in Hadoop, the cache hit rate decreases. In this paper, we propose a method to improve the cache hit rate by pre-loading MapReduce input data onto the SSD cache. To perform this method, we estimate the blocks that will be used on each node using the capacity scheduler and block metadata, and we increase SSD cache performance by loading those blocks onto the SSD cache before the map tasks run.
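
A minimal sketch of the pre-loading decision, assuming the scheduler's per-node task assignments and HDFS block locations are already known (function and field names are illustrative, not Hadoop APIs):

```python
def plan_preload(scheduled_splits, block_locations):
    """Decide which HDFS blocks to pre-load into each node's SSD cache
    before map tasks start: a block is pre-loaded on a node only if the
    scheduler expects a task for it there and a replica is local."""
    return {node: [b for b in blocks if node in block_locations.get(b, ())]
            for node, blocks in scheduled_splits.items()}

splits = {"node1": ["blk_1", "blk_2"], "node2": ["blk_2", "blk_3"]}
locs = {"blk_1": ("node1",), "blk_2": ("node1", "node2"), "blk_3": ("node2",)}
print(plan_preload(splits, locs))
# {'node1': ['blk_1', 'blk_2'], 'node2': ['blk_2', 'blk_3']}
```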