• Title/Abstract/Keyword: data cache

Search results: 487 items (processing time: 0.028 seconds)

Variable latency L1 data cache architecture design in multi-core processor under process variation

  • Kong, Joonho
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 9 / pp. 1-10 / 2015
  • In this paper, we propose a new variable latency L1 data cache architecture for multi-core processors. Our architecture extends the traditional variable latency cache, which targets single-core processors, to the multi-core setting. We add a specialized data structure that records the latency of the L1 data cache; the value stored in this structure is determined by the extra latency added to the L1 data cache. The structure also tracks the remaining access cycles, so that data arrival can be signaled to the reservation station in the core. As in the single-core variable latency cache, our architecture flexibly extends the cache access cycles to tolerate process variation. The proposed cache architecture can reduce yield losses incurred by L1 cache access time failures to nearly 0%. Moreover, we quantitatively evaluate performance, power, energy consumption, power-delay product, and energy-delay product as the number of cache access cycles increases.
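
A minimal sketch of the kind of bookkeeping this abstract describes: a per-set extra-latency table configured for process variation, plus a cycle countdown that tells the reservation station when data arrives. All names (VariableLatencyL1, rs_tag) and the 2-cycle base latency are illustrative assumptions, not details from the paper.

```python
class VariableLatencyL1:
    """Hypothetical model of a variable latency L1 data cache."""

    def __init__(self, num_sets, base_latency=2):
        self.base_latency = base_latency
        # Extra cycles per set to tolerate process variation; a real design
        # would populate this table from post-silicon delay testing.
        self.extra_latency = [0] * num_sets
        # Remaining cycles for each in-flight access, keyed by a tag the
        # reservation station can match against.
        self.in_flight = {}

    def access(self, set_index, rs_tag):
        """Start an access; return its total latency in cycles."""
        total = self.base_latency + self.extra_latency[set_index]
        self.in_flight[rs_tag] = total
        return total

    def tick(self):
        """Advance one cycle; return the tags whose data is now ready so
        the reservation station can be notified."""
        ready = [tag for tag, c in self.in_flight.items() if c == 1]
        self.in_flight = {t: c - 1 for t, c in self.in_flight.items() if c > 1}
        return ready
```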

A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS:Journal of Semiconductor Technology and Science / Vol. 16, No. 1 / pp. 80-90 / 2016
  • Thanks to their superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative memory element for on-chip caches. However, the main disadvantage of STTRAM cells is their high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache that resides between the load/store queue and the STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, its line allocation and access policies are optimized for reducing the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering write operations to the STTRAM-based L1 data cache, our proposed WF cache reduces its energy consumption by up to 43.0% compared to the case without the WF cache. In addition, thanks to the fast hit latency of the WF cache, it slightly improves performance by 0.2%.
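
A rough sketch of a write filter in front of an STTRAM L1D, assuming a tiny fully associative, LRU-managed buffer that coalesces repeated stores so that only evictions cost an expensive STTRAM write. The size and policies here are assumptions; the paper's allocation and access policies are more refined.

```python
from collections import OrderedDict

class WriteFilterCache:
    """Toy write filter (WF) cache between the load/store queue and an
    STTRAM-based L1 data cache."""

    def __init__(self, num_lines=8):
        self.lines = OrderedDict()   # line address -> data, in LRU order
        self.num_lines = num_lines
        self.l1_writes = 0           # costly STTRAM writes actually issued

    def store(self, addr, data):
        if addr in self.lines:
            # Repeated writes to the same line are coalesced in the WF,
            # never reaching the STTRAM array.
            self.lines.move_to_end(addr)
            self.lines[addr] = data
            return
        if len(self.lines) >= self.num_lines:
            victim, vdata = self.lines.popitem(last=False)  # evict LRU line
            self._write_to_sttram_l1(victim, vdata)
        self.lines[addr] = data

    def _write_to_sttram_l1(self, addr, data):
        self.l1_writes += 1          # each eviction is one STTRAM write
```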

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei;Huang, Wei;Han, Li;Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 11 / pp. 3892-3912 / 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate cache content according to actual needs, and determining the most appropriate location for it so as to optimize cache performance, have emerged as serious issues in multi-access edge computing that must be solved urgently. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. Firstly, the cache value generated by cache placement is calculated from the cache capacity, data popularity, and node replacement rate. Secondly, the cache placement problem is modeled according to the cache value, data object acquisition, and replacement cost. The model is then transformed into a combinatorial optimization problem, and the cache objects are placed on the appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU provides a significant improvement in cache service rate, data response time, and replacement number compared with other cache placement algorithms.
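
Since the abstract does not give the exact utility formula, the sketch below only illustrates the overall shape of the approach: a cache value built from popularity, replacement rate, and capacity, optimized under a capacity constraint by a simple tabu search. Every formula and parameter here is a placeholder, not CPBCU's actual definition.

```python
import random

def cache_value(popularity, replacement_rate, size, capacity):
    # Placeholder utility: popular, stable (rarely replaced) objects that
    # occupy little of the capacity score higher.
    return popularity * (1.0 - replacement_rate) * (capacity / size)

def tabu_place(objects, capacity, iters=200, tabu_len=10):
    """Choose a subset of objects to cache on one edge node, maximizing
    total cache value under a capacity constraint (a knapsack-style model).
    `objects` maps a name to {"size": ..., "pop": ..., "rr": ...}."""
    placed, best, best_val, tabu = set(), set(), 0.0, []
    for _ in range(iters):
        cand = random.choice(list(objects))
        if cand in tabu:
            continue                     # recently moved: skip (tabu rule)
        trial = placed ^ {cand}          # flip: add if absent, drop if present
        if sum(objects[o]["size"] for o in trial) > capacity:
            continue                     # infeasible neighbor
        val = sum(cache_value(objects[o]["pop"], objects[o]["rr"],
                              objects[o]["size"], capacity) for o in trial)
        placed = trial
        tabu = (tabu + [cand])[-tabu_len:]
        if val > best_val:
            best, best_val = set(trial), val
    return best
```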

Preventing Fast Wear-out of Flash Cache with An Admission Control Policy

  • Lee, Eunji;Bahn, Hyokyung
    • JSTS:Journal of Semiconductor Technology and Science / Vol. 15, No. 5 / pp. 546-553 / 2015
  • Recently, flash cache has been widely adopted as a performance accelerator for legacy storage systems. Unlike other cache media, flash cache must be managed carefully, as it has peculiar characteristics such as long write latency and limited P/E cycles. In particular, we make two prominent observations that can be exploited in managing flash cache. First, a serious wear-out problem occurs when the working set of a system exceeds the capacity of the flash cache, due to excessively frequent cache replacement. Second, more than 50% of data sees no hit in the flash cache, as it is a second-level cache. Based on these observations, we propose a cache admission control policy that does not cache data on first access, and inserts it into the cache only after a second access occurs within a certain time window. This filters out data that is disruptive to flash cache in terms of endurance and performance. With this policy, we prolong the lifetime of the flash cache by 2.3 times without any performance degradation.
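
The admission rule itself is simple enough to sketch directly: a block is admitted into the flash cache only on its second access within a time window. The window length below is an arbitrary assumption.

```python
import time

class SecondAccessAdmission:
    """Admit a block into the flash cache only on its second access within
    `window` seconds; the window length is an assumed value."""

    def __init__(self, window=300.0):
        self.window = window
        self.first_seen = {}   # block id -> timestamp of first (bypassed) access

    def should_admit(self, block, now=None):
        now = time.time() if now is None else now
        first = self.first_seen.get(block)
        if first is not None and now - first <= self.window:
            del self.first_seen[block]
            return True        # second access within the window: cache it
        self.first_seen[block] = now
        return False           # first (or stale) access: bypass the flash cache
```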

Design and analytical evaluation of a fuzzy proxy caching for wireless internet

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society / Vol. 20, No. 6 / pp. 1177-1190 / 2009
  • In this paper, we propose a fuzzy proxy cache scheme for caching web documents in mobile base stations. In this scheme, a mobile cache model is used to facilitate data caching and data replication. Using the proposed cache scheme, each proxy in a base station makes cache decisions based solely on its local knowledge of the global cache state, so the entire wireless proxy cache system can be managed effectively without centralized control. To improve the performance of proxy caching, the proposed scheme predicts the direction of movement of mobile hosts and applies different cache methods to neighboring proxy servers according to fuzzy-logic control rules driven by the membership degree of the mobile host. The performance of our cache scheme is evaluated analytically in terms of average response delay and average energy cost, and is compared with that of other mobile cache schemes.
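
A toy illustration of the flavor of fuzzy control described above, assuming a cosine-of-heading membership function and a three-rule base; the paper's actual membership functions and rules are not given in the abstract.

```python
import math

def membership_toward(velocity, neighbor_dir):
    """Fuzzy membership degree (0..1) that the mobile host is moving toward
    a neighboring cell; a crude cosine-based stand-in for the paper's
    mobility model."""
    dot = velocity[0] * neighbor_dir[0] + velocity[1] * neighbor_dir[1]
    norm = math.hypot(*velocity) * math.hypot(*neighbor_dir)
    return max(0.0, dot / norm) if norm else 0.0

def cache_action(mu):
    # Illustrative rule base: the stronger the membership degree, the more
    # aggressively the neighboring proxy pre-caches the host's documents.
    if mu > 0.7:
        return "replicate cached documents to the neighboring proxy"
    if mu > 0.3:
        return "send a hint so the neighbor caches on first request"
    return "do nothing"
```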


내장형 시스템을 위한 에너지-성능 측면에서 효율적인 2-레벨 데이터 캐쉬 구조의 설계 (Energy-Performance Efficient 2-Level Data Cache Architecture for Embedded System)

  • 이종민;김순태
    • 한국정보과학회논문지:시스템및이론 / Vol. 37, No. 5 / pp. 292-303 / 2010
  • On-chip caches play an important role in the performance and energy consumption of embedded systems, since they reduce accesses to external memory and are themselves accessed frequently. In this paper, we propose a two-level data cache architecture tailored to embedded systems. The level-1 (L1) cache is configured as small, direct-mapped, and write-through, whereas the level-2 (L2) cache adopts a conventional cache size, set-associativity, and a write-back policy. As a result, the L1 cache offers a fast access time (within one cycle), and the L2 cache is effective at lowering the global miss rate. To mitigate the increased miss rate caused by the small L1 data cache, we propose an Early Cache hit Predictor (ECP) scheme. The ECP predicts whether the requested data resides in the L1 cache and, additionally, computes the effective address quickly without requiring the ALU. Furthermore, to reduce the energy consumed by the frequent L2 accesses that arise from the write-through policy between the two cache levels, we propose a one-way write scheme: when data is written through from the L1 cache to the L2 cache, a single designated way is accessed directly, skipping the tag-comparison step. Experiments using a cycle-accurate simulator and embedded benchmarks show that the proposed two-level data cache architecture improves performance by 3.6% on average and reduces data cache energy consumption by 50%.
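
The one-way write idea lends itself to a short sketch: on an L1 fill, the L2 way holding the line is remembered, so later write-through updates can drive that single way without an L2 tag comparison. The structure below assumes inclusion between L1 and L2, and all names are illustrative.

```python
class OneWayWriteL2:
    """Sketch of the 'one-way write' scheme: L1 remembers the designated
    L2 way of each line so write-throughs skip the L2 tag comparison."""

    def __init__(self, num_sets, assoc):
        self.tags = [[None] * assoc for _ in range(num_sets)]
        self.data = [[None] * assoc for _ in range(num_sets)]

    def fill_l1(self, set_idx, tag):
        """On an L1 fill, return the L2 way holding the line so L1 can
        store it alongside the line (the 'designated way')."""
        for way, t in enumerate(self.tags[set_idx]):
            if t == tag:
                return way
        raise LookupError("inclusion assumed: the line must be in L2")

    def write_through(self, set_idx, designated_way, value):
        # No tag comparison: only the remembered way's data array is
        # driven, which is where the energy saving comes from.
        self.data[set_idx][designated_way] = value
```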

Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering / Vol. 11, No. 2 / pp. 69-77 / 2017
  • Recent GPUs have adopted cache memory to benefit general-purpose GPU (GPGPU) programs. However, unlike CPU programs, GPGPU programs typically have considerably less temporal/spatial locality. Moreover, the L1 data cache is shared by many threads that together access a data set typically far larger than the L1 cache, making it critical to bypass the L1 data cache intelligently to enhance GPU cache performance. In this paper, we examine GPU cache access behavior and propose a simple hardware-based GPU cache bypassing method that can be applied to GPU applications without recompiling programs. Moreover, we introduce a hybrid method that integrates static profiling information and hardware-based bypassing to further enhance performance. Our experimental results reveal that hardware-based cache bypassing can boost performance for most benchmarks, and the hybrid method can achieve performance comparable to state-of-the-art compiler-based bypassing with considerably less profiling cost.
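
As a hedged illustration of hardware-based bypassing, the sketch below uses a per-PC saturating miss counter to decide which loads should skip the L1; the paper's actual mechanism may track different state, and the threshold and counter width are assumptions.

```python
from collections import defaultdict

class BypassPredictor:
    """Toy per-instruction (per-PC) L1 bypass predictor in the spirit of a
    hardware scheme; threshold and counter width are assumed values."""

    def __init__(self, threshold=4, max_count=8):
        self.miss_count = defaultdict(int)   # load PC -> saturating counter
        self.threshold = threshold
        self.max_count = max_count

    def should_bypass(self, pc):
        # Loads that keep missing are assumed to have little locality, so
        # their data is routed around the L1 to avoid polluting it.
        return self.miss_count[pc] >= self.threshold

    def update(self, pc, hit):
        c = self.miss_count[pc]
        self.miss_count[pc] = max(0, c - 1) if hit else min(self.max_count, c + 1)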

Bounding Worst-Case Data Cache Performance by Using Stack Distance

  • Liu, Yu;Zhang, Wei
    • Journal of Computing Science and Engineering / Vol. 3, No. 4 / pp. 195-215 / 2009
  • Worst-case execution time (WCET) analysis is critical for hard real-time systems to ensure that different tasks can meet their respective deadlines. While significant progress has been made for WCET analysis of instruction caches, the data cache timing analysis, especially for set-associative data caches, is rather limited. This paper proposes an approach to safely and tightly bounding data cache performance by computing the worst-case stack distance of data cache accesses. Our approach can not only be applied to direct-mapped caches, but also be used for set-associative or even fully-associative caches without increasing the complexity of analysis. Moreover, the proposed approach can statically categorize worst-case data cache misses into cold, conflict, and capacity misses, which can provide useful insights for designers to enhance the worst-case data cache performance. Our evaluation shows that the proposed data cache timing analysis technique can safely and accurately estimate the worst-case data cache performance, and the overestimation as compared to the observed worst-case data cache misses is within 1% on average.
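
The core measurement is standard: under LRU, an access whose stack distance within its set is at least the associativity is a miss, and cold accesses have infinite distance. A minimal computation of stack distances for a reference trace:

```python
def stack_distances(trace):
    """LRU stack distances for a reference trace. Under LRU, an access to
    a set of associativity A misses iff its stack distance is >= A."""
    stack = []                       # most recently used at the end
    dists = []
    for addr in trace:
        if addr in stack:
            d = len(stack) - 1 - stack.index(addr)
            stack.remove(addr)
        else:
            d = float("inf")         # cold (compulsory) miss
        stack.append(addr)
        dists.append(d)
    return dists

# Example: bounding the misses of one 2-way set by counting accesses whose
# stack distance is at least 2.
ds = stack_distances(["a", "b", "a", "c", "b", "a"])
misses_2way = sum(1 for d in ds if d >= 2)   # 5, including the 3 cold misses
```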

다양한 cache block크기에 의한 시스템의 성능 변화 (Impacts of multiple cache block sizes on system performance)

  • 이성환;김준성
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003 Summer Conference Proceedings Ⅲ / pp. 1347-1350 / 2003
  • In this paper, we examine how varying the block size of the instruction cache and the data cache individually affects overall system performance in a system whose L1 cache is split into instruction and data caches. To this end, we ran simulations with SimpleScalar, using SPEC CPU benchmark programs as input. The study shows that using a cache block size matched to the respective characteristics of instructions and data improves overall system performance more than using a uniform cache block size.
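
A minimal stand-in for this kind of experiment: a direct-mapped cache model swept over block sizes, showing why a sequential (instruction-like) stream and a scattered (data-like) stream favor different block sizes. The traces and sizes below are made up for illustration; the paper's results come from SimpleScalar with SPEC CPU workloads.

```python
def miss_rate(trace, cache_bytes, block_bytes):
    """Miss rate of a direct-mapped cache over a byte-address trace."""
    num_blocks = cache_bytes // block_bytes
    tags = [None] * num_blocks
    misses = 0
    for addr in trace:
        blk = addr // block_bytes
        idx = blk % num_blocks
        if tags[idx] != blk:         # miss: fill the frame with this block
            tags[idx] = blk
            misses += 1
    return misses / len(trace)

# Sequential streams benefit from large blocks (spatial locality), while
# scattered streams can suffer from them (fewer frames, more conflicts).
seq = list(range(0, 4096, 4))
scattered = [(i * 257) % 4096 for i in range(1024)]
for bs in (16, 32, 64, 128):
    print(bs, miss_rate(seq, 1024, bs), miss_rate(scattered, 1024, bs))
```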


WWW Cache Replacement Algorithm Based on the Network-distance

  • Kamizato, Masaru;Nagata, Tomokazu;Taniguchi, Yuji;Tamaki, Shiro
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002 ITC-CSCC -1 / pp. 238-241 / 2002
  • With the growing popularity of the Internet, the amount of data in the network has increased rapidly, and the degradation of response time from WWW servers, caused by network traffic and the load on the servers themselves, has become more of an issue. The problem is aggravated by redundancy: many people request the same pages, even though they are browsing the same content. To reduce this redundancy, a WWW cache server is commonly used to store WWW page data and reuse it. However, unlike CPU and disk caches, WWW caches are known to be difficult to improve in terms of hit rate: it is hard to choose, from all the data flowing through the WWW cache server, which data is worth storing. On the other hand, there is room for improvement in the cache replacement algorithms commonly used by WWW cache servers. In our study, we aim to realize a WWW cache server that emphasizes improving response time. To this end, we propose a new cache replacement algorithm that exploits readily available information about the network distance from the WWW cache server to the WWW server holding the page data the user requests.
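
One plausible reading of the proposal, sketched under assumptions: keep objects that are expensive to re-fetch (far away in network distance) and evict near, stale ones first. The scoring rule below is illustrative, not the paper's actual algorithm.

```python
import time

class DistanceAwareCache:
    """Toy WWW cache whose replacement victim is the object cheapest to
    re-fetch: smallest network distance to its origin server (e.g., hop
    count or RTT), breaking ties by least recent use."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # url -> [data, distance, last_access]

    def get(self, url, fetch, distance):
        if url in self.entries:
            self.entries[url][2] = time.monotonic()   # refresh recency
            return self.entries[url][0]
        if len(self.entries) >= self.capacity:
            # Evict the nearest (cheapest-to-refetch) object, oldest first.
            victim = min(self.entries,
                         key=lambda u: (self.entries[u][1], self.entries[u][2]))
            del self.entries[victim]
        data = fetch(url)
        self.entries[url] = [data, distance, time.monotonic()]
        return data
```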
