• Title/Abstract/Keywords: Cache Management

Search results: 212 items

이기종 저장장치를 위한 제거 비용 평가 기반 캐시 관리 기법 (A Cache Management Technique Based on Eviction Cost Estimation for Heterogeneous Storage Devices)

  • 박세진;박찬익
    • 대한임베디드공학회논문지 / Vol. 7, No. 3 / pp. 129-134 / 2012
  • The objective of a cache is to reduce I/O accesses to the physical storage device so that users can access their data faster. Traditionally, the most important metric for measuring cache performance is the hit ratio, so a replacement policy that keeps the hit ratio high is regarded as a good one. However, when the underlying storage devices are heterogeneous, cache miss latency differs between devices: even if the hit ratio is high, a cache that frequently misses to a low-performance disk delivers low performance to the user. To address this problem, we propose a cache management technique based on eviction cost estimation. In our results, eviction-cost-based cache management improves throughput by 10~30% compared with LRU cache management.
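
  • A minimal sketch of the idea above, not the authors' code: eviction candidates are weighted by the miss penalty of their backing device, so a block that is cheap to re-fetch (e.g. SSD-backed) is evicted before an equally cold block on a slow disk. The `device_latency` table and the candidate window `K` are illustrative assumptions.

```python
# Cost-aware eviction sketch for heterogeneous storage (illustrative only).
from collections import OrderedDict

class CostAwareCache:
    def __init__(self, capacity, device_latency):
        self.capacity = capacity
        self.device_latency = device_latency   # e.g. {"ssd": 0.1, "hdd": 5.0} (ms)
        self.blocks = OrderedDict()            # block_id -> device, in LRU order

    def access(self, block_id, device):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # hit: refresh recency
            return True
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[block_id] = device
        return False

    def _evict(self):
        # Among the K least-recently-used blocks, evict the one that is
        # cheapest to re-fetch instead of blindly evicting the LRU block.
        K = min(4, len(self.blocks))
        candidates = list(self.blocks.items())[:K]
        victim_id, _ = min(candidates, key=lambda kv: self.device_latency[kv[1]])
        del self.blocks[victim_id]
```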

CPC: A File I/O Cache Management Policy for Compute-Bound Workloads

  • Bahn, Hyokyung
    • International Journal of Advanced Smart Convergence / Vol. 11, No. 2 / pp. 1-6 / 2022
  • With the emergence of the 4th industrial revolution, compute-bound workloads with large memory footprints, such as big data processing, are increasing dramatically. Even in such compute-bound workloads, however, we observe bulky I/O while big data is loaded from storage to memory. Although the file I/O cache plays the role of accelerating storage I/O performance, we found that the cache hit rate in such environments does not improve even when the file I/O cache capacity is increased, because of special I/O reference patterns generated by compute-bound workloads. To cope with this situation, we propose a new file I/O cache management policy that significantly improves the cache hit rate for compute-bound workloads. Trace-driven simulations replaying file I/O reference logs of compute-bound workloads show that the proposed policy improves the cache hit rate over the well-known CLOCK algorithm by a large margin.
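
  • The abstract does not spell out the CPC policy itself, so the sketch below shows only one plausible ingredient consistent with its observation: a CLOCK cache guarded by a sequential-run detector, so that one-time bulky loading references bypass admission instead of flushing useful blocks. The `scan_limit` threshold and the run detection are assumptions, not CPC.

```python
# Hedged sketch: scan-resistant admission in front of CLOCK (not the CPC policy).
class ScanAwareClock:
    def __init__(self, capacity, scan_limit=8):
        self.capacity = capacity
        self.scan_limit = scan_limit
        self.slots = []      # [page, ref_bit] pairs arranged in a circle
        self.index = {}      # page -> slot position
        self.hand = 0
        self.prev = None     # previous page seen, for run detection
        self.run = 0         # length of the current sequential run

    def access(self, page):
        self.run = self.run + 1 if self.prev is not None and page == self.prev + 1 else 0
        self.prev = page
        if page in self.index:
            self.slots[self.index[page]][1] = 1    # set reference bit on hit
            return True
        if self.run < self.scan_limit:             # long scans bypass admission
            self._admit(page)
        return False

    def _admit(self, page):
        if len(self.slots) < self.capacity:
            self.index[page] = len(self.slots)
            self.slots.append([page, 1])
            return
        while True:                                # classic CLOCK sweep
            victim, ref = self.slots[self.hand]
            if ref:
                self.slots[self.hand][1] = 0       # give a second chance
            else:
                del self.index[victim]
                self.slots[self.hand] = [page, 1]
                self.index[page] = self.hand
            self.hand = (self.hand + 1) % self.capacity
            if page in self.index:
                return
```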

캐시 파티션을 이용한 공유 2차 캐시 누설 에너지 관리 기법 (Leakage Energy Management Techniques via Shared L2 Cache Partitioning)

  • 강희준;김현희;김지홍
    • 한국정보과학회논문지:시스템및이론 / Vol. 37, No. 1 / pp. 43-54 / 2010
  • Existing timeout-based cache leakage energy management techniques reduce leakage energy consumption by cutting the power supply to inactive cache lines that have not been used for some time. However, because these techniques were designed for single-processor environments, they hinder energy reduction in multiprocessor environments with a shared L2 cache, where interference between tasks occurs frequently. In this paper, we propose a technique that increases the leakage energy savings in a shared L2 cache of a multiprocessor environment by reducing cache interference through a cache partitioning strategy that takes cache line inactivation time into account. We also propose an adaptive timeout management technique that reduces cache leakage energy consumption by setting the timeout according to the characteristics of each task. Simulation results show that, compared with existing techniques, leakage energy consumption is reduced by about 73% on average on a 2-way CMP and by about 56% on average on a 4-way CMP.
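
  • A compact sketch of the timeout mechanism described above (details assumed, not the paper's implementation): each task gets its own adaptive timeout, and lines idle longer than their owner's timeout are power-gated. Partitioning enters by confining each task to its own ways, so another task's accesses cannot keep a line's idle timer from expiring.

```python
# Illustrative cache-decay sweep with per-task adaptive timeouts.
def power_off_idle_lines(lines, now, timeout_of):
    """lines: [{"task": str, "last_access": int, "on": bool}, ...] for one
    partition; timeout_of: per-task timeout in cycles (adapted per task).
    Returns how many lines were switched off to cut leakage energy."""
    switched_off = 0
    for line in lines:
        idle = now - line["last_access"]
        if line["on"] and idle > timeout_of[line["task"]]:
            line["on"] = False        # gated-Vdd style power-off
            switched_off += 1
    return switched_off
```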

Energy-Efficient Last-Level Cache Management for PCM Memory Systems

  • Bahn, Hyokyung
    • International Journal of Internet, Broadcasting and Communication / Vol. 14, No. 1 / pp. 188-193 / 2022
  • The energy efficiency of memory systems is an important task in designing future computer systems, as memory capacity keeps growing to accommodate big data. In this article, we present an energy-efficient last-level cache management policy for future mobile systems. The proposed policy uses low-power PCM (phase-change memory) as the main memory medium and reduces the amount of data written to PCM, thereby saving memory energy consumption. To do so, the policy keeps track of the modified cache lines within each cache block and, upon a replacement request, replaces the last-level cache block that would incur the smallest amount of PCM writes. The policy also considers the access bit of cache blocks along with the cache line modifications so as not to degrade the cache hit ratio. Simulation experiments using SPEC benchmarks show that the proposed policy reduces the power consumption of PCM memory by 22.7% on average without degrading performance.
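
  • The replacement decision described above fits in a few lines; the sketch below is illustrative (field names assumed): prefer blocks whose access bit is clear, and among those evict the block with the fewest modified lines, since writing those lines back is what costs PCM energy.

```python
# Victim selection minimizing PCM writes (illustrative field names).
def pick_victim(ways):
    # ways: candidate blocks of one set, e.g.
    # [{"id": 0, "access_bit": True, "dirty_lines": 4}, ...]
    cold = [w for w in ways if not w["access_bit"]] or ways  # keep hot blocks
    return min(cold, key=lambda w: w["dirty_lines"])         # cheapest write-back
```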

Wireless Ad-hoc Network에서 보안 협력 캐싱 기법에 관한 연구 (A Study on Secure Cooperative Caching Technique in Wireless Ad-hoc Network)

  • 양환석
    • 디지털산업정보학회논문지 / Vol. 9, No. 3 / pp. 91-98 / 2013
  • In a wireless ad-hoc network consisting only of mobile nodes, there is no node that plays the role of a cache server, and even if one exists, node mobility makes it difficult to provide cache services. A cooperative caching technique is therefore necessary to improve the efficiency of information access by reducing data access time and bandwidth usage. In this paper, the whole network is divided into non-overlapping zones and a master node is elected for each zone. Each general node in a zone keeps a ZICT to manage cached data cooperatively, and gateway nodes use an NZCT to manage the cache information of neighboring zones. To guard against cache consistency attacks, a known vulnerability of distributed caching techniques, we propose a security structure in which only nodes issued an ID key by the elected master node can send and receive. Comparative experiments against the GCC and GC techniques confirm the good performance of the proposed method.
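
  • The ID-key idea can be modeled in a few lines. Below is a hedged sketch, not the paper's protocol: the elected master issues a per-node key (here an HMAC tag over the node ID), and cooperative-cache messages are accepted only from nodes whose key verifies, which blocks unkeyed nodes from injecting inconsistent cache entries.

```python
# Sketch: zone master issues ID keys; only keyed nodes may send/receive.
import hashlib, hmac, os

class ZoneMaster:
    def __init__(self):
        self._secret = os.urandom(16)          # known only to the master

    def issue_id_key(self, node_id: str) -> bytes:
        return hmac.new(self._secret, node_id.encode(), hashlib.sha256).digest()

    def verify(self, node_id: str, key: bytes) -> bool:
        return hmac.compare_digest(self.issue_id_key(node_id), key)

master = ZoneMaster()
key = master.issue_id_key("node-17")       # issued when the node joins the zone
assert master.verify("node-17", key)       # keyed node: cache message accepted
assert not master.verify("node-99", key)   # wrong node/key: message rejected
```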

그리드 데이터베이스에서 질의 전달 최적화를 위한 캐쉬 관리 기법 (Cache Management Method for Query Forwarding Optimization in the Grid Database)

  • 신숭선;장용일;이순조;배해영
    • 한국멀티미디어학회논문지 / Vol. 10, No. 1 / pp. 13-25 / 2007
  • In a grid database, caches are used to optimize query forwarding. Meta information of frequently used data is fetched from the meta database and cached, and the cached information reduces the cost of query forwarding. Existing cache management schemes cache arbitrary meta information without considering the usage frequency of replicas during query forwarding, so replica usage becomes unbalanced. Moreover, when the original data changes, a cache holding stale meta information forwards queries to the wrong nodes, and as this is repeated across several nodes, network cost increases. Existing schemes therefore need to solve both the imbalance in replica usage and the network cost incurred by misdirected queries. In this paper, we propose a cache management scheme for query forwarding optimization. The proposed scheme manages caches through a management process called the cache manager. The cache manager optimizes query forwarding by comparing the usage frequencies of nodes storing frequently used replicas and caching the replica meta information of the less-used nodes. It also reduces query processing time and network cost by cutting down on queries misdirected to other nodes. A performance evaluation shows that the proposed scheme reduces network cost and processing time, improving on existing approaches.

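  • A minimal sketch of the cache manager described above (names assumed): it caches the replica meta information of the least-used node holding each piece of data, and refreshes stale entries instead of letting queries bounce between nodes.

```python
# Illustrative cache-manager sketch for replica-aware query forwarding.
class CacheManager:
    def __init__(self):
        self.usage = {}   # node -> number of queries forwarded to it
        self.cache = {}   # data_id -> node whose replica meta info is cached

    def register(self, data_id, replica_nodes):
        # Cache meta info of the least-used replica node to balance usage.
        node = min(replica_nodes, key=lambda n: self.usage.get(n, 0))
        self.cache[data_id] = node

    def forward(self, data_id, replica_nodes):
        node = self.cache.get(data_id)
        if node is None or node not in replica_nodes:  # stale meta info
            self.register(data_id, replica_nodes)      # refresh, don't misroute
            node = self.cache[data_id]
        self.usage[node] = self.usage.get(node, 0) + 1
        return node
```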

고성능 저전력 하이브리드 L2 캐시 메모리를 위한 연관사상 집합 관리 (Way-set Associative Management for Low Power Hybrid L2 Cache Memory)

  • 정보성;이정훈
    • 대한임베디드공학회논문지 / Vol. 13, No. 3 / pp. 125-131 / 2018
  • STT-RAM is attracting attention as a next-generation non-volatile memory for replacing cache memory, offering low leakage energy, high density, and access performance similar to SRAM. However, like other non-volatile memories, it suffers from costly write operations. A hybrid cache memory combining SRAM and STT-RAM is therefore attracting attention as a low-power cache structure. Even so, while STT-RAM reduces leakage energy consumption, its dynamic (write) energy remains a problem. In this paper, we propose an energy management method consisting of a way-selection approach for a hybrid L2 cache of SRAM and STT-RAM and a memory-selection method for write/read operations. According to the simulation results, the proposed hybrid cache memory reduces average energy consumption by 40% on SPEC CPU 2006 compared with an SRAM-only cache.
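
  • One way to picture the way-selection idea above (a sketch under assumed structures, not the paper's mechanism): write fills are steered to SRAM ways and read fills to STT-RAM ways, so expensive STT-RAM writes are avoided without giving up STT-RAM's low-leakage capacity.

```python
# Illustrative write/read-aware way selection in a hybrid SRAM/STT-RAM set.
def choose_fill_way(set_ways, is_write):
    # set_ways: [{"tech": "sram" | "sttram", "valid": bool, "age": int}, ...]
    preferred = "sram" if is_write else "sttram"
    candidates = [w for w in set_ways if w["tech"] == preferred] or set_ways
    empty = [w for w in candidates if not w["valid"]]
    if empty:
        return empty[0]                             # fill an invalid way first
    return max(candidates, key=lambda w: w["age"])  # else evict the oldest way
```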

Reuse Information based Thrashing Resistant Cache Management Scheme

  • Sim, Gyu Yeon;Kim, Cheol Hong
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 3 / pp. 9-16 / 2017
  • In recent computing systems, the LRU replacement policy has been widely used because it is simple to implement and applicable to most programs. However, if the working set of a program is bigger than the actual cache size, LRU can suffer from thrashing, in which cache blocks are constantly replaced without ever being re-referenced. This paper proposes a new cache management scheme to solve the thrashing problem in the second-level cache. The proposed scheme measures per-set reuse frequency using an EAF structure to find thrashing sets. When a cache miss occurs, it tests whether the address of the missed block is stored in the EAF; if so, a recently evicted block is being re-requested, so high reuse frequency is predicted and the corresponding set's counter is incremented. When the counter exceeds a threshold, the corresponding set is assumed to show high reuse frequency. Sets with high reuse frequency are assigned an additional small cache so that their blocks stay cached longer. Our experimental results show that the proposed scheme improves IPC by 3.81% on average.
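
  • The per-set reuse tracking above maps naturally to a small structure; the sketch below is a simplified rendering (EAF size and threshold are illustrative): evicted addresses are remembered per set, a re-requested evicted address bumps the set's counter, and a counter above the threshold flags the set for the additional small cache.

```python
# Per-set reuse-frequency tracking with an evicted-address filter (EAF).
from collections import deque

class ReuseTracker:
    def __init__(self, num_sets, eaf_size=64, threshold=8):
        self.eaf = [deque(maxlen=eaf_size) for _ in range(num_sets)]
        self.counter = [0] * num_sets
        self.threshold = threshold

    def on_eviction(self, set_id, addr):
        self.eaf[set_id].append(addr)          # remember what we threw away

    def on_miss(self, set_id, addr):
        if addr in self.eaf[set_id]:           # recently evicted, re-requested
            self.counter[set_id] += 1          # evidence of high reuse
        return self.counter[set_id] > self.threshold   # thrashing set?
```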

An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 11 / pp. 31-38 / 2015
  • Nowadays, large flash memory drives of several hundred gigabytes are common. This paper presents an efficient cache management scheme for the flash translation layer, called TPC-FTL, targeted at large flash drives. Since large flash drives usually contain a large RAM, we can enhance the page-mapping cache by devoting more RAM to it. But beyond a threshold size, existing schemes become impractical for real devices because cache manipulation takes too long. TPC-FTL manages the cache in translation-page units rather than the logical-page-number units used in existing schemes. Since one translation page covers many logical page numbers (for example, 512 for a 2 KB page), the number of cache elements is reduced to a practical level. A performance evaluation shows that the average response time, an important performance measure, is better than in existing schemes, thanks to exploiting spatial locality in addition to temporal locality.
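
  • The unit change above is the whole trick, as the sketch below illustrates (the structure and the flash-read callback are assumptions): caching whole translation pages means one LRU entry covers 512 logical pages, so the LRU list stays short while sequential neighbors hit the same cached page.

```python
# Mapping cache managed in translation-page units (illustrative sketch).
from collections import OrderedDict

ENTRIES_PER_TP = 512   # 2 KB translation page / 4-byte mapping entry

class TranslationPageCache:
    def __init__(self, capacity_tps, read_tp_from_flash):
        self.capacity = capacity_tps            # capacity in translation pages
        self.read_tp = read_tp_from_flash       # loads one page of 512 mappings
        self.cache = OrderedDict()              # tp_id -> list of 512 PPNs

    def lookup(self, lpn):
        tp_id, offset = divmod(lpn, ENTRIES_PER_TP)
        if tp_id in self.cache:
            self.cache.move_to_end(tp_id)       # temporal locality
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict LRU translation page
            self.cache[tp_id] = self.read_tp(tp_id)  # 512 mappings in one read
        return self.cache[tp_id][offset]
```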

효율적인 버퍼 캐시 관리를 위한 동적 캐시 분할 블록교체 기법 (Dynamic Cache Partitioning Strategy for Efficient Buffer Cache Management)

  • 진재선;허의남;추현승
    • 한국시뮬레이션학회논문지 / Vol. 12, No. 2 / pp. 35-44 / 2003
  • The effectiveness of buffer cache replacement algorithms is critical to the performance of I/O systems. In this paper, we propose a block replacement scheme based on the degree of inter-reference gap (DIG) that retains the merits of least recently used (LRU), such as simple implementation and a good cache hit ratio (CHR) for general reference patterns, and improves the CHR further. In the proposed scheme, cache blocks with low DIGs are distinguished from blocks with high DIGs, and the replacement victim is selected among the high-DIG blocks, as in the low inter-reference recency set (LIRS) scheme. By effectively partitioning the cache memory dynamically based on DIGs, the CHR is improved. Trace-driven simulation verifies the superiority of the DIG-based scheme, showing performance improvements of up to about 175% over LRU and 3% over LIRS for the same traces.

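  • A simplified sketch of DIG-based replacement (the bookkeeping here is an assumption; the paper's scheme is more refined): each block's DIG is the distance between its last two references, blocks never re-referenced count as an infinite gap, and the victim is taken from the high-DIG side, as in LIRS.

```python
# Illustrative DIG-based replacement: evict from weak-locality (high-DIG) blocks.
class DIGCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.last = {}   # block -> time of its most recent reference
        self.dig = {}    # block -> gap between its last two references

    def access(self, block):
        self.clock += 1
        hit = block in self.last
        if hit:
            self.dig[block] = self.clock - self.last[block]  # update the gap
        else:
            if len(self.last) >= self.capacity:
                # Evict the resident block with the largest gap (inf = never reused).
                victim = max(self.last, key=lambda b: self.dig[b])
                del self.last[victim]
                del self.dig[victim]
            self.dig[block] = float("inf")   # no second reference seen yet
        self.last[block] = self.clock
        return hit
```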