Title/Summary/Keyword: Hybrid Cache Architecture


Low Power Scheme Using Bypassing Technique for Hybrid Cache Architecture

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.20 no.4 / pp.10-15 / 2021
  • Cache bypassing schemes have been studied to remove unnecessary updates to the data in cache blocks. Among them, a statistics-based cache bypassing method for asymmetric-access caches is one of the most efficient approaches for non-volatile memories and shows the lowest cache access latency. However, it was proposed for a conventional cache system, so further study is required for the hybrid cache architecture. This paper proposes a novel cache bypassing scheme, called the hybrid bypassing block selector. In the proposal, a new model is established that considers the SRAM region and the non-volatile memory region separately. Based on this model, a hybrid bypassing decision block is implemented. Experiments show that the hybrid bypassing decision block reduces overall energy consumption by 21.5%.
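The abstract does not spell out the bypassing model, so the following is only a minimal illustrative sketch of the general idea: keep per-block reuse statistics and require stronger evidence of reuse before filling the write-costly NVM region than the SRAM region. The class name, thresholds, and region-selection rule are assumptions, not the paper's design.

```python
# Illustrative sketch only: statistics-based bypass decision with separate
# thresholds for the SRAM and NVM regions of a hybrid cache (assumed rule).
from collections import defaultdict

class HybridBypassSelector:
    def __init__(self, sram_threshold=1, nvm_threshold=2):
        # NVM writes are expensive, so demand more evidence of reuse before filling NVM.
        self.sram_threshold = sram_threshold
        self.nvm_threshold = nvm_threshold
        self.reuse_count = defaultdict(int)   # per-block reuse statistics

    def record_access(self, block_addr):
        self.reuse_count[block_addr] += 1

    def should_bypass(self, block_addr, target_region):
        """Return True if a miss fill should bypass the cache."""
        threshold = self.sram_threshold if target_region == "SRAM" else self.nvm_threshold
        return self.reuse_count[block_addr] < threshold

selector = HybridBypassSelector()
selector.record_access(0x1A0)
print(selector.should_bypass(0x1A0, "NVM"))   # True: not enough reuse yet for an NVM fill
```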

A Study on Direct Cache-to-Cache Transfer for Hybrid Cache Architecture to Reduce Write Operations (쓰기 횟수 감소를 위한 하이브리드 캐시 구조에서의 캐시간 직접 전송 기법에 대한 연구)

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.23 no.1 / pp.65-70 / 2024
  • Direct cache-to-cache transfer has been studied to reduce the latency and bandwidth consumption associated with shared data in multiprocessor systems. Even though these studies lead to meaningful results, they assume that caches consist of SRAM. If the system instead employs non-volatile memory, one of the most important considerations is decreasing the number of write operations. This paper proposes a hybrid write-avoidance cache coherence protocol that takes the hybrid cache architecture into account. A new state is added to finely control what is stored in the non-volatile memory area, and experimental results showed that the number of writes was reduced by about 36% compared to existing schemes.

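The abstract only states that a new coherence state is added so that shared data can be supplied directly cache-to-cache without being written into the non-volatile region. The snippet below is an illustrative sketch of that idea on top of a MESI-like state set; the state names and the transition rule are assumptions, not the paper's protocol.

```python
# Illustrative sketch only: an assumed extra coherence state that lets data be
# served cache-to-cache while keeping it out of the NVM region.
from enum import Enum, auto

class State(Enum):
    MODIFIED = auto()
    SHARED = auto()
    INVALID = auto()
    SHARED_NO_NVM = auto()   # assumed extra state: shareable, but must not be
                             # written back into the non-volatile memory area

def on_remote_read(line):
    """Serve a remote read directly cache-to-cache, avoiding an NVM write."""
    if line["state"] == State.MODIFIED:
        # Hand the data to the requester without writing it into NVM.
        line["state"] = State.SHARED_NO_NVM
        return line["data"]
    return None

line = {"state": State.MODIFIED, "data": b"\x00" * 64}
print(on_remote_read(line), line["state"])
```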

A New Hybrid Architecture for Cooperative Web Caching

  • Baek, Jin-Suk; Kaur, Gurpreet; Yang, Jung-Hoon
    • Journal of Ubiquitous Convergence Technology / v.2 no.1 / pp.1-11 / 2008
  • An effective solution to the problems caused by the explosive growth of the World Wide Web is web caching, which employs an additional server, called a proxy cache, between the clients and the main server to cache popular web objects near the clients. However, a single proxy cache can easily become a bottleneck. Deploying groups of cooperative caches provides scalability and robustness by eliminating the limitations of a single proxy cache. The two common architectures for implementing cooperative caching are hierarchical and distributed caching systems. Unfortunately, both architectures suffer from performance limitations. We propose an efficient hybrid caching architecture that eliminates these limitations by using both hierarchical and same-level caches. Our performance evaluation with our simulator shows that the proposed architecture offers the best of both existing architectures in terms of cache hit rate, number of query messages from clients, and response time.

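To make the hybrid lookup order concrete, here is a small illustrative sketch in which a proxy first checks its own store, then same-level sibling caches (the distributed part), then its parent (the hierarchical part), and finally the origin server. The class and parameter names are assumptions; the paper's actual message protocol is not given in the abstract.

```python
# Illustrative sketch only: hybrid cooperative lookup combining same-level
# (distributed) and parent (hierarchical) caches.
class ProxyCache:
    def __init__(self, name, parent=None, siblings=None):
        self.name = name
        self.parent = parent
        self.siblings = siblings or []
        self.store = {}

    def lookup(self, url, fetch_from_origin):
        if url in self.store:                      # 1. local hit
            return self.store[url]
        for sib in self.siblings:                  # 2. same-level (distributed) lookup
            if url in sib.store:
                obj = sib.store[url]
                self.store[url] = obj
                return obj
        if self.parent is not None:                # 3. hierarchical lookup
            obj = self.parent.lookup(url, fetch_from_origin)
        else:
            obj = fetch_from_origin(url)           # 4. origin server
        self.store[url] = obj
        return obj

root = ProxyCache("root")
a, b = ProxyCache("a", parent=root), ProxyCache("b", parent=root)
a.siblings, b.siblings = [b], [a]
print(a.lookup("/index.html", lambda u: f"<body of {u}>"))
```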

An Efficient Variable Rearrangement Technique for STT-RAM Based Hybrid Caches

  • Youn, Jonghee M.; Cho, Doosan
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.2 / pp.67-78 / 2016
  • The emerging Spin-Transfer Torque RAM (STT-RAM) is a promising component that can improve efficiency as a result of its high storage density and low leakage power. However, state-of-the-art STT-RAM is not ready to replace SRAM technology due to the negative effect of its write operations, which require longer latency and more power than the same operations in SRAM. Therefore, a hybrid cache combining SRAM and STT-RAM is proposed to obtain the benefits of STT-RAM while minimizing its negative effects by using SRAM. To use the hybrid cache efficiently, it is important to place write-intensive data in SRAM so as to minimize the negative effect. Thus, we propose a technique that optimizes the placement of data in main memory and derives the proper balance of advantages and disadvantages of SRAM and STT-RAM in the hybrid cache. With the proposed technique, write-intensive data are loaded into SRAM and read-intensive data are loaded into STT-RAM. In addition, our technique also optimizes temporal locality to minimize conflict misses. Therefore, it improves the performance and energy consumption of the hybrid cache architecture to a certain extent.
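As an illustration of the placement idea, the sketch below classifies variables by their profiled write ratio and maps write-heavy ones to the SRAM-backed region and read-heavy ones to the STT-RAM-backed region. The threshold value and region names are assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: assign variables to memory regions so that
# write-intensive data tends to land in the SRAM part of the hybrid cache.
def place_variables(profile, write_ratio_threshold=0.5):
    """profile: {var_name: (reads, writes)} -> {var_name: region}"""
    placement = {}
    for var, (reads, writes) in profile.items():
        total = reads + writes
        write_ratio = writes / total if total else 0.0
        placement[var] = "SRAM_region" if write_ratio >= write_ratio_threshold else "STTRAM_region"
    return placement

profile = {"frame_buffer": (100, 900), "lookup_table": (5000, 2)}
print(place_variables(profile))
# {'frame_buffer': 'SRAM_region', 'lookup_table': 'STTRAM_region'}
```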

An Efficient Implementation of the Supernodal Multifrontal Method (초마디 멀티프런탈 방법의 효율적인 구현)

  • 박찬규; 박순달
    • Korean Management Science Review / v.19 no.2 / pp.155-168 / 2002
  • In this paper, some efficient implementation techniques for the multifrontal method, which can be used to compute the Cholesky factor of a symmetric positive definite matrix, are presented. In order to exploit the cache effect of cache-based computer architectures, a hybrid method for factorizing a frontal matrix is considered. This hybrid method alternates between the column Cholesky method and the submatrix Cholesky method. Experiments show that the hybrid method speeds up the supernodal multifrontal method by 5%~10%, and that it is superior to the Cholesky method on some problems with dense columns or large frontal matrices.
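The sketch below shows one plausible reading of such a hybrid factorization: each diagonal block of a (frontal) matrix is factored column by column, and the trailing part is then updated with a submatrix-style rank-k update, which keeps the working set cache-friendly. It is a generic blocked Cholesky, not the paper's implementation; the block size is an assumption.

```python
# Illustrative sketch only: blocked Cholesky that alternates a column-by-column
# step on each diagonal block with a submatrix update of the trailing matrix.
import numpy as np

def hybrid_cholesky(A, block=2):
    """Return the lower-triangular Cholesky factor L of a SPD matrix A."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, block):
        e = min(k + block, n)
        # column Cholesky on the diagonal block
        for j in range(k, e):
            A[j, j] = np.sqrt(A[j, j] - A[j, k:j] @ A[j, k:j])
            A[j+1:e, j] = (A[j+1:e, j] - A[j+1:e, k:j] @ A[j, k:j]) / A[j, j]
        if e < n:
            # panel solve against the transposed diagonal block
            Lkk = np.tril(A[k:e, k:e])
            A[e:, k:e] = np.linalg.solve(Lkk, A[e:, k:e].T).T
            # submatrix Cholesky update of the trailing matrix
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
    return np.tril(A)

A = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])
L = hybrid_cholesky(A, block=2)
print(np.allclose(L @ L.T, A))   # True
```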

Hybrid Main Memory based Buffer Cache Scheme by Using Characteristics of Mobile Applications (모바일 애플리케이션의 특성을 이용한 하이브리드 메모리 기반 버퍼 캐시 정책)

  • Oh, Chansoo; Kang, Dong Hyun; Lee, Minho; Eom, Young Ik
    • Journal of KIISE / v.42 no.11 / pp.1314-1321 / 2015
  • Mobile devices employ buffer cache mechanisms, just as computer systems such as desktops or servers do, to mitigate the performance gap between main memory and secondary storage. However, DRAM has the drawback that it accelerates battery consumption by performing periodic refresh operations to maintain the stored data. In this paper, we propose a novel buffer cache scheme to extend battery life in mobile devices, based on a hybrid main memory architecture consisting of DRAM and non-volatile PCM. We also suggest a new buffer cache policy that allocates buffers based on process states to optimize the performance and endurance of PCM. In particular, our algorithm allocates each page to the appropriate position according to the state of the application that owns the page, and tries to ensure rapid response of foreground applications even with a small amount of DRAM. The experimental results indicate that the proposed scheme reduces the elapsed time of foreground applications by 58% on average and power consumption by 23% on average, without negatively impacting the performance of background applications.
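As a rough illustration of the allocation policy described above, the sketch below places pages of foreground applications in DRAM and lets background pages (or overflow under DRAM pressure) go to PCM. Pool sizes, state names, and the fallback rule are assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: state-aware buffer allocation over a hybrid
# DRAM/PCM main memory (all policy details assumed).
DRAM_POOL, PCM_POOL = [], []

def allocate_buffer(app_state, page_id, dram_capacity=4):
    """Pick a memory pool for a newly cached page based on the owning app's state."""
    if app_state == "foreground" and len(DRAM_POOL) < dram_capacity:
        DRAM_POOL.append(page_id)
        return "DRAM"
    PCM_POOL.append(page_id)      # background apps, or DRAM pressure, fall back to PCM
    return "PCM"

print(allocate_buffer("foreground", 101))   # DRAM
print(allocate_buffer("background", 102))   # PCM
```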

CACHE: Context-aware Clustering Hierarchy and Energy efficient for MANET (CACHE: 상황인식 기반의 계층적 클러스터링 알고리즘에 관한 연구)

  • Mun, Chang-min; Lee, Kang-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.571-573 / 2009
  • A Mobile Ad-hoc Network (MANET) needs efficient node management because the wireless network has energy constraints. The mobility of a MANET also causes the topology to change frequently compared with a static network. To improve routing in MANETs, an energy-efficient routing protocol is required, and mobility must be taken into account as well. The previously proposed hybrid routing scheme CACH prolongs the network lifetime and decreases latency; however, the algorithm has a problem when node density increases. In this paper, we propose a new method, the CACHE (Context-aware Clustering Hierarchy and Energy efficient) algorithm. The proposed analysis not only helps define the optimum depth of the hierarchical architecture that CACH utilizes, but also mitigates the node density problem.

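The abstract gives no algorithmic detail, so the sketch below only illustrates the general flavour of context-aware, energy-efficient clustering: pick cluster heads by residual energy and let the hierarchy depth grow with the number of nodes. Every rule and constant here is an assumption, not the CACHE algorithm itself.

```python
# Illustrative sketch only: energy-based cluster-head selection and an assumed
# rule relating hierarchy depth to node count.
import math

def choose_cluster_heads(nodes, cluster_size=5):
    """nodes: [(node_id, residual_energy)] -> node_ids chosen as cluster heads."""
    ranked = sorted(nodes, key=lambda n: n[1], reverse=True)
    n_heads = max(1, math.ceil(len(nodes) / cluster_size))
    return [node_id for node_id, _ in ranked[:n_heads]]

def hierarchy_depth(num_nodes, cluster_size=5):
    """Assumed rule: depth grows logarithmically with the number of nodes."""
    return max(1, math.ceil(math.log(max(num_nodes, 2), cluster_size)))

nodes = [(1, 0.9), (2, 0.4), (3, 0.7), (4, 0.2), (5, 0.8), (6, 0.6)]
print(choose_cluster_heads(nodes), hierarchy_depth(len(nodes)))
```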

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon; Park, Jang-Woo; Kim, Sung-Tan; Lee, Sang-Won; Moon, Bong-Ki
    • Journal of KIISE: Databases / v.37 no.6 / pp.308-314 / 2010
  • As the price of flash memory continues to drop and flash SSD controller technology improves, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect that flash SSDs will completely replace hard disks as database storage. Instead, the approach of using a flash SSD as a cache for hard disks is more practical, and, in fact, several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extended buffer for the main buffer in database systems: it stores the pages replaced out of the main buffer and returns the pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer in the upper host, can provide fast random reads for warm pages being replaced out of the limited main buffer. In fact, for all the pages missing from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer could be more than 60%, which supports our conjecture that this simple extended buffer approach can be very effective as a cache. In terms of performance per price, our extended buffer architecture outperforms two alternative approaches with the same cost: 1) a larger main buffer and 2) more hard disks.
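The sketch below illustrates the extended-buffer idea from the abstract: pages evicted from the main DRAM buffer are kept on the flash SSD and served from there on re-reference before falling back to disk. The LRU policies and capacities are assumptions, not the paper's exact scheme.

```python
# Illustrative sketch only: a two-level buffer where the flash SSD acts as an
# extended buffer behind the main DRAM buffer (eviction and admission rules assumed).
from collections import OrderedDict

class ExtendedBufferPool:
    def __init__(self, main_capacity, ssd_capacity):
        self.main = OrderedDict()   # DRAM main buffer: page_id -> data
        self.ssd = OrderedDict()    # flash SSD extended buffer
        self.main_capacity = main_capacity
        self.ssd_capacity = ssd_capacity

    def get(self, page_id, read_from_disk):
        if page_id in self.main:                 # main-buffer hit
            self.main.move_to_end(page_id)
            return self.main[page_id]
        if page_id in self.ssd:                  # extended-buffer hit: fast SSD random read
            data = self.ssd.pop(page_id)
        else:
            data = read_from_disk(page_id)       # miss in both layers
        self._admit(page_id, data)
        return data

    def _admit(self, page_id, data):
        self.main[page_id] = data
        if len(self.main) > self.main_capacity:  # evict LRU page from DRAM to SSD
            victim, vdata = self.main.popitem(last=False)
            self.ssd[victim] = vdata
            if len(self.ssd) > self.ssd_capacity:
                self.ssd.popitem(last=False)     # drop from SSD; page still on disk

pool = ExtendedBufferPool(main_capacity=2, ssd_capacity=4)
for p in [1, 2, 3, 1]:
    pool.get(p, read_from_disk=lambda pid: f"page-{pid}")
```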