Title/Summary/Keyword: Cache Policy


A Design of Double Cache Policy for File System Based on NAND Flash Memory (NAND 플래시 메모리 기반 파일시스템을 위한 더블 캐시 정책 설계)

  • Park, Myung-Kyu; Kim, Sung-Jo
  • Proceedings of the Korean Information Science Society Conference / 2008.06b / pp.366-370 / 2008
  • NAND flash memory has the inherent drawback that the number of write operations is limited, so frequent writes shorten its lifetime. To mitigate this, delayed-write techniques that account for NAND flash characteristics have been studied; delayed writing reduces the number of writes, but it also lowers the cache hit ratio. To address both problems, this paper proposes a double cache policy for NAND flash memory-based file systems. The double cache consists of a Real Cache, which is the cache proper, and a Ghost Cache, which observes the access patterns of requested pages. The policy performs no delayed writes in the Real Cache; instead, it exploits dirty and clean pages in the Ghost Cache space to schedule delayed writes efficiently, reducing the write count while raising the hit ratio (sketched below).

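The paper gives no pseudocode here; the following minimal Python sketch is one plausible reading of the Real/Ghost split, in which pages evicted from the Real Cache fall into a ghost area that keeps clean and dirty pages apart so that dirty pages can be flushed to flash in batches. All names (`DoubleCache`, `flash_writes`, the batch threshold) are illustrative assumptions, not the authors' design.

```python
from collections import OrderedDict

class DoubleCache:
    """Toy double cache: a Real Cache (LRU) plus a Ghost Cache that holds
    evicted pages split into clean and dirty lists, so dirty pages can be
    written back to flash in batches (fewer flash writes)."""

    def __init__(self, real_size=4, ghost_size=8, batch=4):
        self.real = OrderedDict()          # page -> dirty flag, in LRU order
        self.ghost_clean = OrderedDict()   # evicted clean pages (droppable for free)
        self.ghost_dirty = OrderedDict()   # evicted dirty pages, write-back deferred
        self.real_size, self.ghost_size, self.batch = real_size, ghost_size, batch
        self.flash_writes = self.hits = self.accesses = 0

    def access(self, page, write=False):
        self.accesses += 1
        if page in self.real:                                 # Real Cache hit
            self.real[page] |= write
            self.real.move_to_end(page)
            self.hits += 1
        elif page in self.ghost_dirty or page in self.ghost_clean:
            dirty = self.ghost_dirty.pop(page, None) is not None
            self.ghost_clean.pop(page, None)                  # ghost hit: promote
            self.hits += 1                                    # (counted as a hit here)
            self._insert(page, dirty or write)
        else:
            self._insert(page, write)                         # miss: fetch from flash

    def _insert(self, page, dirty):
        if len(self.real) >= self.real_size:                  # evict LRU page to ghost area
            victim, vdirty = self.real.popitem(last=False)
            (self.ghost_dirty if vdirty else self.ghost_clean)[victim] = True
        self.real[page] = dirty
        if len(self.ghost_dirty) >= self.batch:               # batched delayed write-back
            self.flash_writes += len(self.ghost_dirty)
            self.ghost_dirty.clear()
        while len(self.ghost_clean) + len(self.ghost_dirty) > self.ghost_size:
            self.ghost_clean.popitem(last=False)              # clean pages drop for free

dc = DoubleCache()
for p, w in [(1, True), (2, True), (1, False), (3, True), (4, False), (5, True), (2, False)]:
    dc.access(p, write=w)
print(f"hit ratio {dc.hits / dc.accesses:.2f}, flash writes {dc.flash_writes}")
```

Separating clean from dirty ghost pages is what makes the deferral cheap: clean pages can be discarded without any flash write, while dirty ones wait until a batch is worth writing.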

Cooperative Content Caching and Distribution in Dense Networks

  • Kabir, Asif
  • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5323-5343 / 2018
  • Mobile applications and social networks tend to increase the need for high-quality content access. To address the rapidly growing demand for data services in mobile networks, efficient content caching and distribution techniques are needed, aiming at a significant reduction of redundant content transmission and thus improved content delivery efficiency. In this article, we develop an optimal cooperative content caching and distribution policy, with a geographical cluster model designed for content retrieval across collaborative small cell base stations (SBSs) and a cache replacement framework. Furthermore, we divide the SBS storage space into two equal parts: one for local and the other for global content caching. We propose an algorithm that minimizes content caching delay, transmission cost, and the backhaul bottleneck at the network edge. Simulation results indicate that the proposed neighbor-SBS cooperative caching scheme substantially improves content availability and cache storage capacity at the network edge compared with conventional cache placement approaches (a lookup-order sketch follows this entry).
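
As a rough illustration of the local/global split described above (the abstract does not specify the algorithm), the sketch below models the lookup order one would expect in such a cluster: serving SBS first, then cooperative neighbors, then the backhaul. The cost constants, dictionary structure, and caching-on-fetch choices are assumptions, and capacity management is omitted.

```python
BACKHAUL_COST, NEIGHBOR_COST, LOCAL_COST = 10, 3, 1   # illustrative relative costs

def fetch(content, sbs, neighbors):
    """Delivery cost of one request under a two-part (local/global) SBS cache."""
    if content in sbs["local"] or content in sbs["global"]:
        return LOCAL_COST                       # hit at the serving SBS
    for nb in neighbors:                        # cooperative lookup across the cluster
        if content in nb["global"]:
            sbs["local"].add(content)           # cache locally for future requests
            return NEIGHBOR_COST
    sbs["global"].add(content)                  # fetched over the backhaul once
    return BACKHAUL_COST

a = {"local": {"v1"}, "global": {"v2"}}
b = {"local": set(), "global": {"v3"}}
costs = [fetch(c, a, [b]) for c in ("v1", "v3", "v9", "v9")]
print(costs)   # [1, 3, 10, 1]: the second request for v9 is served at the edge
```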

Efficient Cache Architecture for Transactional Memory (트랜잭셔널 메모리를 위한 효율적인 캐시 구조)

  • Choi, Dong-Min; Kim, Seung-Hun; Ro, Won-Woo
  • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.4 / pp.1-8 / 2011
  • Traditional transactional memory systems can no longer guarantee the performance of diverse applications with overflowed transactions, since tracking the data needed for logging becomes difficult. In particular, sustaining the state required to detect conflicts on overflowed transactions from the first-level cache increases communication delay. To address this, we focus on the cache architecture of such systems to reduce the overhead caused by overflows and cache misses. In this paper, we present the Supportive Cache, which reduces the additional overhead incurred during transactions. The Supportive Cache performs look-ups in parallel with the L1 private cache and uses the same replacement policy as the L1 private cache. We evaluate the proposed design by comparing LogTM-SE with and without the Supportive Cache. Simulation results show that our system improves performance by 37% on average over the original LogTM-SE using the same hardware resources (see the parallel-lookup sketch below).
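
The abstract states only that the Supportive Cache is looked up in parallel with the L1 private cache and shares its replacement policy; the toy sketch below models just that behavior, with L1 victims spilling into the Supportive Cache as one plausible fill rule. The `SimpleCache` structure and the spill rule are assumptions.

```python
class SimpleCache:
    """Fully associative LRU cache; stands in for both the L1 private cache and
    the Supportive Cache, which the paper says share a replacement policy."""
    def __init__(self, size):
        self.size, self.lines = size, []

    def probe(self, addr):
        if addr in self.lines:
            self.lines.remove(addr)
            self.lines.append(addr)        # refresh LRU position
            return True
        return False

    def fill(self, addr):
        victim = self.lines.pop(0) if len(self.lines) >= self.size else None
        self.lines.append(addr)
        return victim                      # evicted line, if any

l1, supportive = SimpleCache(4), SimpleCache(4)

def access(addr):
    hit = l1.probe(addr) | supportive.probe(addr)   # both probed in parallel ("|", not "or")
    if not hit:
        victim = l1.fill(addr)                      # miss fills L1 ...
        if victim is not None:
            supportive.fill(victim)                 # ... and the L1 victim spills over
    return hit

hits = sum(access(a) for a in [1, 2, 3, 1, 4, 5, 6, 2])
print(f"{hits} hits out of 8 accesses")             # block 2 survives in the Supportive Cache
```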

The Effects of Cache Memory on the System Bus Traffic (캐쉬 메모리가 버스 트래픽에 끼치는 영향)

  • 조용훈; 김정선
  • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.1 / pp.224-240 / 1996
  • It is common for today's computer systems to use at least one level of cache memory. In this paper, the impact of the internal cache organization on computer performance is investigated using a simulator program written by the authors and run on a SUN SPARC workstation with several real execution trace files. 280 cache organizations were simulated using n-way set-associative mapping and the LRU (Least Recently Used) replacement algorithm with a write-allocate policy. The 16-way set-associative cache proved the best configuration: with a 256 KB cache and a 64-byte line size, the bus traffic ratio fell so far below that of a cacheless system that a single bus could support almost 7 processors without delay or degradation, at a high hit ratio of 99.21%. The smaller the line size, the slightly lower the hit ratio, but the more processors a single bus can support (up to 18). A proper cache organization therefore lets a single-bus structure support multiple processors without performance degradation (a simulator skeleton in the same spirit appears below).

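The authors' trace-driven simulator is not available, but its core loop is easy to reconstruct: the skeleton below simulates an n-way set-associative cache with LRU replacement and write allocation, the configuration the abstract names, and reports the hit ratio. The trace format and the toy trace are assumptions.

```python
from collections import deque

def simulate(trace, size_bytes=256 * 1024, line_bytes=64, ways=16):
    """n-way set-associative cache, LRU replacement, write-allocate.
    `trace` is a list of byte addresses; returns the hit ratio."""
    n_sets = size_bytes // (line_bytes * ways)
    sets = [deque(maxlen=ways) for _ in range(n_sets)]   # deque front = LRU way
    hits = 0
    for addr in trace:
        block = addr // line_bytes
        s = sets[block % n_sets]
        if block in s:
            s.remove(block)
            s.append(block)      # refresh LRU position
            hits += 1
        else:
            s.append(block)      # miss: allocate on both reads and writes
    return hits / len(trace)

trace = [i * 4 for i in range(4096)] * 2      # toy sequential word trace, repeated
print(f"hit ratio: {simulate(trace):.4f}")
```

Bus traffic in the paper's sense would be derived from the miss count times the line size; the skeleton stops at the hit ratio to stay minimal.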

An Efficient Global Buffer Management Algorithm for SAN Environments (SAN 환경을 위한 효율적인 전역버퍼 관리 알고리즘)

  • 이석재; 박새미; 송석일; 유재수; 이장선
  • The Journal of the Korea Contents Association / v.4 no.3 / pp.71-80 / 2004
  • In distributed file systems, cooperative caching algorithms, which share the data cached at each node, are used to reduce the cost of disk accesses. Cooperative caching raises the cache hit ratio and lowers disk accesses by sharing cache information across the distributed system, effectively enlarging the cache. Recent cooperative caching algorithms reduce message costs by using approximate cache information and raise hit ratios by managing local and global cache fields dynamically; they further raise the overall field hit ratio by forwarding a replaced buffer to an idle node, so the evicted data stays in the cache field. However, inaccurate approximate information degrades performance, maintaining consistency requires costly message exchanges, and managing per-node age information to choose the idle node is expensive. In this paper, we propose a cooperative caching algorithm that maintains accurate cache information while minimizing the cost of consistency maintenance and buffer age management, and we show its superiority through a performance evaluation (the forwarding idea is sketched below).

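A minimal sketch of the buffer-forwarding behavior this line of work relies on: when a node must replace a buffer, it forwards the victim to the most idle peer instead of discarding it, so the block survives in the global cache field. The `Node` structure and the fill-ratio idleness measure are assumptions.

```python
class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.bufs = name, capacity, []

    def evict_to_peer(self, peers):
        """On replacement, forward the LRU buffer to the most idle peer."""
        victim = self.bufs.pop(0)
        idle = min(peers, key=lambda p: len(p.bufs) / p.capacity)  # idleness = fill ratio
        if len(idle.bufs) < idle.capacity:
            idle.bufs.append(victim)     # block survives in the global cache field
            return idle.name
        return None                      # whole field full: the block really drops

    def put(self, block, peers):
        if len(self.bufs) >= self.capacity:
            self.evict_to_peer(peers)
        self.bufs.append(block)

a, b, c = Node("A", 2), Node("B", 4), Node("C", 4)
for blk in ["x", "y", "z"]:
    a.put(blk, peers=[b, c])
print(a.bufs, b.bufs, c.bufs)   # ['y', 'z'] ['x'] []: 'x' moved to the idle node
```

The cost the paper attacks is visible even in this toy: choosing `idle` needs up-to-date load (age) information from every peer, which is exactly the bookkeeping the proposed algorithm tries to minimize.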

Filter Cache Predictor Using Mode Selection Bit (모드 선택 비트를 사용한 필터 캐시 예측기)

  • Kwak, Jong-Wook
  • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.5 / pp.1-13 / 2009
  • The filter cache has been introduced as one solution for reducing cache power consumption: it cuts power by more than 50%, but sacrifices more than 20% of performance. To minimize this degradation, predictive filter caches have been proposed. In this paper, we review previous filter cache predictors and analyze their shortcomings. We identify the main causes of prediction misses in previous schemes and, to resolve them, propose a new prediction policy: reference bit entries called MSBs (mode selection bits) are inserted into the filter cache and the BTB to adaptively steer filter cache accesses. We verify the proposed filter cache using a modified SimpleScalar simulator with MiBench benchmark programs; simulations show an average 5% performance improvement over previous schemes (a toy steering sketch follows).
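
The abstract names the mechanism (mode selection bits kept alongside BTB and filter cache entries to steer each fetch) but not its update rule; the sketch below assumes a simple rule in which a wasted filter probe clears the bit and an observed would-be hit sets it again. The bit placement, update rule, and FIFO refill are all assumptions.

```python
FILTER_SIZE = 8          # tiny L0 "filter" cache (instruction block addresses)

btb_msb = {}             # pc -> mode selection bit (True: fetch via the filter cache)
filter_cache = []        # FIFO list standing in for the small filter cache

def fetch(pc):
    """Steer one fetch with the MSB, then update the bit from the outcome."""
    use_filter = btb_msb.get(pc, True)          # default: try the low-power filter cache
    if use_filter and pc in filter_cache:
        return "filter-hit"                     # low-power hit, no L1 access
    if use_filter:
        btb_msb[pc] = False                     # wasted probe cost a cycle: go to L1 next time
    elif pc in filter_cache:
        btb_msb[pc] = True                      # it would have hit: predict filter again
    if pc not in filter_cache:
        if len(filter_cache) >= FILTER_SIZE:
            filter_cache.pop(0)
        filter_cache.append(pc)                 # refill from L1
    return "L1"

print([fetch(pc) for pc in [0x40, 0x44, 0x40, 0x40, 0x44]])
# ['L1', 'L1', 'L1', 'filter-hit', 'L1']: the MSB recovers after one direct L1 fetch
```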

Designing a low-power L1 cache system using aggressive data of frequent reference patterns

  • Jung, Bo-Sung; Lee, Jung-Hoon
  • Journal of the Korea Society of Computer and Information / v.27 no.7 / pp.9-16 / 2022
  • Today, with the advent of the 4th industrial revolution, IoT (Internet of Things) systems are advancing rapidly, and various high-performance, large-capacity applications are emerging. Computing systems running such applications therefore need low-power, high-performance memory. In this paper, we propose an effective structure for the L1 cache, which consumes the most energy in a computing system. The proposed cache system consists of two parts: the L1 main cache and a buffer cache. The main cache has two banks, each organized as 2-way set-associative. On an L1 hit, the data is copied into the buffer cache according to the proposed algorithm. In simulation, the proposed L1 cache system improved the energy-delay product by about 65% compared with a conventional 4-way set-associative cache (a copy-on-hit sketch follows).
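
The abstract says data is copied into the buffer cache on an L1 hit "according to the proposed algorithm" without spelling that algorithm out; the sketch below assumes a simple frequency heuristic (copy after a second hit) on top of the stated two-bank, 2-way organization. The sizes, indexing, and heuristic are assumptions.

```python
from collections import OrderedDict

BANKS, WAYS, SETS = 2, 2, 4           # two banks of 2-way sets (sizes are assumptions)
BUF_SIZE = 4                          # small, cheap-to-access buffer cache

main = [[OrderedDict() for _ in range(SETS)] for _ in range(BANKS)]
buffer_cache = OrderedDict()          # LRU-ordered buffer cache
hit_count = {}                        # per-block hit counter (frequency heuristic)

def access(block):
    if block in buffer_cache:
        buffer_cache.move_to_end(block)
        return "buffer-hit"                    # lowest-energy case
    bank, idx = block % BANKS, (block // BANKS) % SETS
    ways = main[bank][idx]
    if block in ways:
        ways.move_to_end(block)
        hit_count[block] = hit_count.get(block, 0) + 1
        if hit_count[block] >= 2:              # "frequent" block: copy into the buffer
            buffer_cache[block] = True
            if len(buffer_cache) > BUF_SIZE:
                buffer_cache.popitem(last=False)
        return "main-hit"
    if len(ways) >= WAYS:
        ways.popitem(last=False)               # evict the LRU way
    ways[block] = True
    return "miss"

print([access(b) for b in [8, 8, 8, 8, 9, 8]])
# ['miss', 'main-hit', 'main-hit', 'buffer-hit', 'miss', 'buffer-hit']
```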

Design of an Asynchronous Data Cache with FIFO Buffer for Write Back Mode (Write Back 모드용 FIFO 버퍼 기능을 갖는 비동기식 데이터 캐시)

  • Park, Jong-Min; Kim, Seok-Man; Oh, Myeong-Hoon; Cho, Kyoung-Rok
  • The Journal of the Korea Contents Association / v.10 no.6 / pp.72-79 / 2010
  • In this paper, we propose a data cache architecture with a write buffer for a 32-bit asynchronous embedded processor. The data cache consists of CAM and data memory; it speeds up the data load cycle between the processor and main memory, improving processor performance. The proposed data cache holds 8 KB, uses 4-way set-associative mapping with a line size of 4 words (16 bytes), and replaces lines with a pseudo-LRU algorithm. A dirty register and a write buffer implement the cache's write-back policy. The design was synthesized to gate level in a 0.13-µm process; its average hit rate is 94%, and system performance improved by 46.53%. The proposed data cache with write buffer is well suited to a 32-bit asynchronous processor (a pseudo-LRU sketch follows).
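
Tree-based pseudo-LRU for a 4-way set is standard enough to sketch: three bits pick the victim way, and dirty victims drain through a FIFO write buffer, matching the write-back organization described above. Modelling it in Python with integer tags and a single set is an assumption for illustration.

```python
from collections import deque

class PseudoLRUSet:
    """One 4-way set with tree pseudo-LRU (3 bits) and write-back through a
    FIFO write buffer. bit0 picks the colder pair; bit1/bit2 the colder way."""
    def __init__(self):
        self.ways, self.dirty = [None] * 4, [False] * 4
        self.bits = [0, 0, 0]

    def _touch(self, w):
        self.bits[0] = 1 if w < 2 else 0            # point the tree away from way w
        self.bits[1 + w // 2] = 1 if w % 2 == 0 else 0

    def _victim(self):
        pair = self.bits[0]                         # follow the bits to the cold side
        return pair * 2 + self.bits[1 + pair]

    def access(self, tag, write, wbuf):
        hit = tag in self.ways
        w = self.ways.index(tag) if hit else self._victim()
        if not hit:
            if self.dirty[w]:
                wbuf.append(self.ways[w])           # dirty line drains through the FIFO
            self.ways[w], self.dirty[w] = tag, False
        self.dirty[w] = self.dirty[w] or write
        self._touch(w)
        return hit

wbuf = deque()                                      # FIFO write buffer to main memory
s = PseudoLRUSet()
for tag, wr in [(10, True), (11, False), (12, False), (13, False), (14, False)]:
    s.access(tag, wr, wbuf)
print(list(wbuf))   # [10]: the dirty line was the pseudo-LRU victim and got written back
```

Pseudo-LRU needs only 3 bits per 4-way set instead of the 8 bits true LRU ordering would take, which is why it suits a small embedded cache.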

Considering Data Reference Pattern in Buffer Cache for Continuous Media File System (연속미디어 파일 시스템의 버퍼 캐시에서 데이터 참조 유형의 고려)

  • Cho, Kyung-Woon; Ryu, Yeon-Seung; Koh, Kern
  • The KIPS Transactions: Part A / v.9A no.2 / pp.163-170 / 2002
  • Previous buffer cache schemes for continuous media file systems exploited only the sequentiality of continuous media accesses and did not consider looping references. However, in some video applications, such as foreign-language learning, users mark a scene as a loop area and the application automatically plays the scene back several times. In this paper, we propose a novel buffer cache scheme for continuous media file systems in which sequential and looping references coexist. The proposed scheme raises the cache hit ratio by detecting each file's reference pattern and applying an appropriate replacement policy to each file (sketched below).
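
The abstract does not give the detection or replacement rules, so the sketch below assumes a simple classifier (any backward jump in a file's block history marks it as looping) and a replacement policy that evicts sequential files' blocks first, since a consumed sequential block will not recur. All structures and rules are illustrative.

```python
def choose_policy(history):
    """Classify a file's pattern from its block history (assumed rule:
    any backward jump means looping references, otherwise sequential)."""
    backward = any(b2 <= b1 for b1, b2 in zip(history, history[1:]))
    return "loop" if backward else "sequential"

class MediaBufferCache:
    def __init__(self, size):
        self.size, self.blocks, self.history = size, [], {}

    def access(self, file_id, block):
        self.history.setdefault(file_id, []).append(block)
        key = (file_id, block)
        hit = key in self.blocks
        if hit:
            self.blocks.remove(key)             # will be re-appended as most recent
        elif len(self.blocks) >= self.size:
            self._evict()
        self.blocks.append(key)
        return hit

    def _evict(self):
        # Prefer evicting blocks of sequential files: once played, they won't recur.
        for key in self.blocks:
            if choose_policy(self.history[key[0]]) == "sequential":
                self.blocks.remove(key)
                return
        self.blocks.pop(0)                      # every file loops: fall back to LRU

cache = MediaBufferCache(size=4)
accesses = [("seq", 0), ("seq", 1), ("loop", 10), ("loop", 11),
            ("loop", 10), ("seq", 2), ("loop", 11)]
print([cache.access(f, b) for f, b in accesses])
# [False, False, False, False, True, False, True]: the loop file's blocks stay resident
```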

Performance Improvement and Power Consumption Reduction of an Embedded RISC Core

  • Jung, Hong-Kyun; Jin, Xianzhe; Ryoo, Kwang-Ki
  • Journal of Information and Communication Convergence Engineering / v.10 no.1 / pp.78-84 / 2012
  • This paper presents a branch prediction algorithm and a 4-way set-associative cache to improve the performance of an embedded RISC core, together with a clock-gating algorithm based on observability don't care (ODC) operations to reduce the core's power consumption. The branch prediction algorithm is built around a branch target buffer (BTB), and the 4-way set-associative cache has a lower miss rate than a direct-mapped cache; a pseudo-LRU policy keeps the number of replacement bits small. The clock-gating algorithm reduces dynamic power consumption. In our estimates, the OpenRISC core with the proposed architecture performs about 29% better, and its dynamic power with the Chartered 0.18-µm technology library is reduced by 16% (a BTB lookup sketch follows).
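
The abstract does not detail the predictor beyond its use of a BTB, so the sketch below pairs each BTB target with a 2-bit saturating counter, a common choice that is an assumption here; the naive replacement in the sketch merely stands in for the paper's pseudo-LRU.

```python
BTB_SIZE = 16

btb = {}   # pc -> (target, 2-bit saturating counter); the counter is an assumption

def predict(pc):
    """Predict the next fetch address; counter >= 2 means predict taken."""
    if pc in btb and btb[pc][1] >= 2:
        return btb[pc][0]            # predicted taken: redirect fetch to the target
    return pc + 4                    # predicted not taken: fall through

def update(pc, taken, target):
    _, ctr = btb.get(pc, (target, 1))
    ctr = min(ctr + 1, 3) if taken else max(ctr - 1, 0)
    if len(btb) >= BTB_SIZE and pc not in btb:
        btb.pop(next(iter(btb)))     # naive replacement; the paper uses pseudo-LRU
    btb[pc] = (target, ctr)

# A loop branch at pc 0x100 jumping back to 0x80: taken three times, then not taken.
outcomes = [True, True, True, False]
correct = 0
for taken in outcomes:
    predicted = predict(0x100)
    actual = 0x80 if taken else 0x104
    correct += predicted == actual
    update(0x100, taken, 0x80)
print(f"{correct}/{len(outcomes)} predictions correct")   # 2/4 on this short run
```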