• Title/Summary/Keyword: Cache Policy

Efficient Cache Management Scheme in Database based on Block Classification (블록 분류에 기반한 데이타베이스의 효율적 캐쉬 관리 기법)

  • Sin, Il-Hoon;Koh, Kern
    • Journal of KIISE:Computer Systems and Theory
    • /
    • Vol. 29, No. 7
    • /
    • pp.369-376
    • /
    • 2002
  • Although LRU is not adequate for databases that have non-uniform reference patterns, it has been adopted in most database systems in the absence of a proper alternative. We analyze database block reference patterns with a realistic database trace. Based on this analysis, we propose a new cache replacement policy. Trace analysis shows that extremely non-popular blocks make up about 70% of all blocks. The influence of recency on a block's re-reference likelihood is initially strong due to temporal locality; however, it rapidly decreases and eventually becomes negligible as stack distance increases. Based on this observation, the RCB (Reference Characteristic Based) cache replacement policy proposed in this paper classifies all blocks into four groups by recency and re-reference likelihood, and applies a different priority evaluation method to each group. RCB evicts non-popular blocks more quickly than the others, and evaluates the priority of a block that has not been referenced for a long time by its frequency. In a trace-driven simulation, RCB delivers better performance than existing policies (LRU, 2Q, LRU-K, LRFU); compared to LRU in particular, it reduces miss count by 5~12.7%. The time complexity of RCB is O(1), the same as LRU and 2Q and superior to LRU-K (O(log$_2$N)) and LRFU (O(1) ~ O(log$_2$N)).
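
A minimal Python sketch of the block-grouping idea above. This is an illustrative approximation of an RCB-style policy, not the paper's implementation: the recency window size and the two-way popularity split (referenced once vs. more than once) are assumptions made for the example.

```python
from collections import OrderedDict

class RCBCache:
    """Sketch of an RCB-style policy: blocks are classified by recency
    (stack distance) and popularity (reference count); old, non-popular
    blocks are evicted first, and frequency decides among old blocks."""

    def __init__(self, capacity, recency_window=64):
        self.capacity = capacity
        self.recency_window = recency_window   # assumed "recent" boundary
        self.blocks = OrderedDict()            # block_id -> ref count, LRU first

    def access(self, block_id):
        """Reference a block; return True on hit, False on miss."""
        if block_id in self.blocks:
            self.blocks[block_id] += 1
            self.blocks.move_to_end(block_id)  # end of dict = most recent
            return True
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[block_id] = 1
        return False

    def _evict(self):
        n = len(self.blocks)
        # Scan from the LRU side: a block outside the recency window with
        # a reference count of 1 is "old and non-popular" -- evict it first.
        for rank, (bid, freq) in enumerate(list(self.blocks.items())):
            outside_window = (n - rank) > self.recency_window
            if outside_window and freq == 1:
                del self.blocks[bid]
                return
        # Otherwise fall back to frequency, mirroring RCB's
        # frequency-based priority for blocks not referenced recently.
        victim = min(self.blocks, key=self.blocks.get)
        del self.blocks[victim]
```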

A Cache buffer and Read Request-aware Request Scheduling Method for NAND flash-based Solid-state Disks (캐시 버퍼와 읽기 요청을 고려한 낸드 플래시 기반 솔리드 스테이트 디스크의 요청 스케줄링 기법)

  • Bang, Kwanhu;Park, Sang-Hoon;Lee, Hyuk-Jun;Chung, Eui-Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • Vol. 50, No. 8
    • /
    • pp.143-150
    • /
    • 2013
  • Solid-state disks (SSDs) have been widely adopted in high-performance personal computers and servers due to their favorable characteristics and performance. NAND flash-based SSDs, which take up a large portion of the NAND flash market, are the dominant type of SSD. They usually integrate a DRAM cache buffer that uses the write-back policy for better performance. Unfortunately, this policy makes existing scheduling methods less effective at the interface (I/F) level of SSDs. Therefore, in this paper, we propose a scheduling method for the I/F that takes the cache buffer into consideration. The proposed method considers the hit/miss status of the cache buffer and gives higher priority to read requests. As a result, requests whose data hits in the cache buffer can be handled in advance, and read requests, which have a larger effect on whole-system performance than write requests, experience shorter latency. Experimental results show that the proposed scheduling method improves read latency by 26%.
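
A sketch of this scheduling idea, assuming a four-level priority order (cache-buffer hits before misses, reads before writes) with FIFO tie-breaking; the real controller logic in the paper may differ.

```python
import heapq
import itertools

# Assumed priority order for the sketch: hits first, then reads over writes.
PRIO = {("hit", "read"): 0, ("hit", "write"): 1,
        ("miss", "read"): 2, ("miss", "write"): 3}

class RequestScheduler:
    def __init__(self, cache_buffer):
        self.cache = cache_buffer          # set of logical pages in the buffer
        self.queue = []                    # heap of (priority, seq, request)
        self.seq = itertools.count()       # tie-breaker keeps FIFO order

    def submit(self, op, page):
        status = "hit" if page in self.cache else "miss"
        heapq.heappush(self.queue, (PRIO[(status, op)], next(self.seq), (op, page)))

    def next_request(self):
        return heapq.heappop(self.queue)[2] if self.queue else None

# Usage: a cache-hit read is dispatched ahead of an earlier queued write.
sched = RequestScheduler(cache_buffer={10, 11})
sched.submit("write", 42)
sched.submit("read", 10)      # hits the cache buffer, highest priority
print(sched.next_request())   # -> ('read', 10)
```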

Instructions and Data Prefetch Mechanism using Displacement History Buffer (변위 히스토리 버퍼를 이용한 명령어 및 데이터 프리페치 기법)

  • Jeong, Yong Su;Kim, JinHyuk;Cho, Tae Hwan;Choi, SangBang
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • Vol. 52, No. 10
    • /
    • pp.82-94
    • /
    • 2015
  • In this paper, we propose a hardware prefetch mechanism with an efficient cache replacement policy that gives priority to the trigger block of a spatial region and generates the spatial region using displacement fields. Because each history entry is keyed by its trigger block, the program's access sequence can be taken into account, and because the history is stored as displacement values, instruction or data addresses can be prefetched quickly by adding the stored displacement to the trigger address. We also propose a replacement policy that, after giving priority to the trigger block, evicts a block at random from among the low-priority blocks when the cache is full. We evaluated the hardware prefetcher using the gem5 memory simulator and the PARSEC benchmarks. Compared to an existing hardware prefetcher that generates spatial regions using bit vectors, the L1 data cache miss rate was reduced by about 44.5% on average and the L1 instruction cache miss rate by about 26.1%, while IPC (Instructions Per Cycle) improved by about 23.7% on average.
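
A minimal sketch of the displacement-history idea: each trigger block records the displacements of the accesses that follow it within its spatial region, and a later access to the same trigger replays them as prefetches. The region size and table bounds are assumptions of the sketch.

```python
class DisplacementHistoryBuffer:
    """Sketch: trigger block -> displacements of follow-on accesses."""

    def __init__(self, region_blocks=16, max_disp=8):
        self.region_blocks = region_blocks   # assumed spatial region size
        self.max_disp = max_disp             # assumed entry capacity
        self.history = {}                    # trigger -> [displacements]
        self.trigger = None

    def access(self, addr):
        """Return the list of block addresses to prefetch for this access."""
        if self.trigger is not None and 0 < addr - self.trigger < self.region_blocks:
            # Still inside the current region: learn the displacement.
            disps = self.history.setdefault(self.trigger, [])
            d = addr - self.trigger
            if d not in disps and len(disps) < self.max_disp:
                disps.append(d)
            return []
        # New region: this access becomes the trigger block. Replay the
        # recorded region by adding each stored displacement to it.
        self.trigger = addr
        return [addr + d for d in self.history.get(addr, [])]

# Usage: 100, 102, 105 trains the table; revisiting 100 prefetches 102, 105.
dhb = DisplacementHistoryBuffer()
for a in (100, 102, 105, 200):
    dhb.access(a)
print(dhb.access(100))   # -> [102, 105]
```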

Design and Performance Evaluation of Replication Policy For Streaming Media Cache Server (스트리밍 미디어의 캐쉬 서버를 위한 재배치 정책의 설계와 성능분석)

  • 임은지;정성인
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • Proceedings of the Korea Multimedia Society 2003 Fall Conference (II)
    • /
    • pp.921-924
    • /
    • 2003
  • This paper addresses a replication policy that can be applied when a cache server caches streaming media served over the Internet. We present replication criteria suited to the characteristics of streaming media and propose a method for caching and replicating media data accordingly. We also compare and analyze the performance of the proposed replication policy against existing, well-known methods.

Prefetching Policy based on File Access Pattern and Cache Area (파일 접근 패턴과 캐쉬 영역을 고려한 선반입 기법)

  • Lim, Jae-Deok;Hwang-Bo, Jun-Hyeong;Koh, Kwang-Sik;Seo, Dae-Hwa
    • The KIPS Transactions:PartA
    • /
    • Vol. 8A, No. 4
    • /
    • pp.447-454
    • /
    • 2001
  • Various caching and prefetching algorithms have been investigated to identify effective methods for improving the performance of I/O devices. A prefetching algorithm decreases system processing time by reducing the number of disk accesses when I/O is needed. This paper proposes the AMBA prefetching method, an extended version of the OBA prefetching method, which prefetches blocks continuously as long as disk bandwidth is available; efficient prefetching can therefore be expected even under an excessive data request rate. To prevent cache pollution, the AMBA prefetching method limits the number of prefetched data blocks to the cache area. It can be implemented in a user-level file system on the Linux operating system. In particular, the proposed prefetching policy improves system performance by about 30~40% for large files that are accessed sequentially.
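
A sketch of the two caps that bound AMBA-style prefetching in one scheduling round: spare disk bandwidth and the cache area reserved for prefetched blocks. The bandwidth accounting and the prefetch-area ratio are simplified assumptions for the example.

```python
def prefetch_plan(next_block, spare_bandwidth_blocks, cache_capacity,
                  cached_prefetch_blocks, prefetch_area_ratio=0.5):
    """Return the sequential blocks to prefetch in one scheduling round."""
    area_limit = int(cache_capacity * prefetch_area_ratio)
    room = max(0, area_limit - cached_prefetch_blocks)   # cache-pollution cap
    count = min(spare_bandwidth_blocks, room)            # bandwidth cap
    return [next_block + i for i in range(count)]

# Example: bandwidth would allow 16 blocks, but the prefetch area has
# only 2 free slots, so prefetching stops there.
print(prefetch_plan(next_block=100, spare_bandwidth_blocks=16,
                    cache_capacity=64, cached_prefetch_blocks=30))
# -> [100, 101]
```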

Analysis and Improvement of I/O Performance Degradation by Journaling in a Virtualized Environment (가상화 환경에서 저널링 기법에 의한 입출력 성능저하 분석 및 개선)

  • Kim, Sunghwan;Lee, Eunji
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 16, No. 6
    • /
    • pp.177-181
    • /
    • 2016
  • This paper analyzes host cache effectiveness in full virtualization, particularly as it is affected by guest journaling. We observe that guests' journal accesses degrade cache performance significantly due to their write-once access pattern and frequent sync operations. To remedy this problem, we design and implement a novel caching policy, called PDC (Pollution Defensive Caching), that detects journal accesses and prevents them from entering the host cache. The proposed PDC is implemented in QEMU-KVM 2.1 on Linux 4.14 and provides a 3-32% performance improvement for various file and I/O benchmarks.
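
The admission-filtering idea can be sketched as follows. Detecting journal blocks by a static block range is an illustrative assumption; the paper infers journal accesses from the guests' write-once, sync-heavy access pattern.

```python
from collections import OrderedDict

class PollutionDefensiveCache:
    """Sketch of a PDC-style host cache: blocks identified as journal
    writes bypass the cache, so write-once journal data cannot evict
    reusable blocks."""

    def __init__(self, capacity, journal_ranges):
        self.capacity = capacity
        self.journal_ranges = journal_ranges     # [(start_block, end_block)]
        self.cache = OrderedDict()               # LRU order, MRU at the end

    def _is_journal(self, block):
        return any(lo <= block < hi for lo, hi in self.journal_ranges)

    def access(self, block):
        """Handle a guest block access; journal blocks are not admitted."""
        if self._is_journal(block):
            return "bypass"                      # straight to backing storage
        if block in self.cache:
            self.cache.move_to_end(block)
            return "hit"
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)       # evict the LRU block
        self.cache[block] = True
        return "miss"
```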

Implementation of Memory Efficient Flash Translation Layer for Open-channel SSDs

  • Oh, Gijun;Ahn, Sungyong
    • International journal of advanced smart convergence
    • /
    • Vol. 10, No. 1
    • /
    • pp.142-150
    • /
    • 2021
  • Open-channel SSD is a new type of solid-state disk (SSD) that reduces the garbage collection overhead and write amplification caused by the physical constraints of NAND flash memory by exposing the internal structure of the SSD to the host. However, the host-level Flash Translation Layer (FTL) provided for open-channel SSDs in the current Linux kernel consumes host memory excessively because it uses a page-level mapping table to translate logical addresses to physical addresses. Therefore, in this paper, we implement a selective mapping table loading scheme that loads only the currently required part of the mapping table from the SSD into the mapping table cache, instead of the entire table. In addition, to increase the hit ratio of the mapping table cache, filesystem information and the mapping table access history are utilized in the cache replacement policy. The proposed scheme is implemented in the host-level FTL of the Linux kernel and evaluated using an open-channel SSD emulator. According to the evaluation results, it achieves 80% of the I/O performance of the previous host-level FTL while using only 32% of the memory.
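
A sketch of the selective loading idea, assuming the page-level mapping table is split into fixed-size table pages. load_table_page() is a hypothetical placeholder for the media read, and plain LRU stands in for the paper's filesystem-aware, history-based replacement policy.

```python
from collections import OrderedDict

ENTRIES_PER_PAGE = 1024        # assumed mapping entries per table page

def load_table_page(page_no):
    # Placeholder: read one mapping-table page from the open-channel SSD.
    return {}

class MappingTableCache:
    """Only the mapping-table pages needed by in-flight I/O stay in
    host memory; the rest of the table remains on the SSD."""

    def __init__(self, max_pages):
        self.max_pages = max_pages
        self.pages = OrderedDict()               # table page no -> {lpn: ppn}

    def lookup(self, lpn):
        page_no = lpn // ENTRIES_PER_PAGE
        if page_no not in self.pages:
            if len(self.pages) >= self.max_pages:
                self.pages.popitem(last=False)   # evict the coldest table page
            self.pages[page_no] = load_table_page(page_no)   # demand load
        self.pages.move_to_end(page_no)
        return self.pages[page_no].get(lpn)      # physical page, or None
```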

A Level One Cache Organization for Chip-Size Limited Single Processor (칩의 크기가 제한된 단일칩 프로세서를 위한 레벨 1 캐시구조)

  • Ju YoungKwan;Kim Sukil
    • The KIPS Transactions:PartA
    • /
    • Vol. 12A, No. 2
    • /
    • pp.127-136
    • /
    • 2005
  • This paper measured, by simulation, the proper ratio of the size of the demand-fetch cache $L_1$ to that of the prefetch cache $L_P$ when their combined size is fixed and they organize the space-limited level 1 cache of a single-chip microprocessor. The analysis of our experiment showed that when the sum of the sizes of $L_1$ and $L_P$ is 16 KB, the best performance comes from a level 1 cache organization that assigns 4 KB to $L_P$ and employs OBL as the prefetch technique and FIFO as the cache replacement policy. The analysis also showed that when the sum of the sizes of $L_1$ and $L_P$ is 32 KB or more, employing dynamic filtering as the prefetch technique for $L_P$ is more advantageous: splitting the level 1 cache into a 28 KB $L_1$ and a 4 KB $L_P$ when 32 KB of space is available, and into a 48 KB $L_1$ and a 16 KB $L_P$ when 64 KB is available, elicited the best performance.
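
The best 16 KB configuration reported above (a 12 KB demand cache plus a 4 KB prefetch cache using OBL and FIFO) can be sketched as follows. Block counts stand in for byte sizes, and the demand cache is simplified to FIFO replacement as well, which is an assumption of the sketch.

```python
from collections import deque

class SplitL1:
    """Sketch of a split level 1 cache: demand cache L1 plus a small
    FIFO prefetch cache LP with one-block-lookahead (OBL)."""

    def __init__(self, l1_blocks=12, lp_blocks=4):
        self.l1 = deque(maxlen=l1_blocks)   # demand-fetched blocks
        self.lp = deque(maxlen=lp_blocks)   # prefetched blocks (FIFO)

    def access(self, block):
        if block in self.l1:
            result = "L1 hit"
        elif block in self.lp:
            self.lp.remove(block)           # promote on first use
            self.l1.append(block)
            result = "LP hit"
        else:
            self.l1.append(block)           # demand fetch on a miss
            result = "miss"
        # OBL: always stage the next sequential block in the prefetch cache.
        if block + 1 not in self.l1 and block + 1 not in self.lp:
            self.lp.append(block + 1)
        return result
```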

A Scalable Cache Group Configuration Policy using Role-Partitioned Cache (캐시의 역할 구분을 이용한 확장성이 있는 캐시 그룹 구성 정책)

  • 현진일;장태무
    • Proceedings of the Korean Information Science Society Conference
    • /
    • Proceedings of the Korean Information Science Society 2001 Fall Conference, Vol. 28, No. 2 (3)
    • /
    • pp.28-30
    • /
    • 2001
  • With the introduction of nested group cache structures as a dynamic cache group configuration policy, research on how to operate nested caches has become necessary. As base work, an intra-group multicast page distribution policy has been proposed that preserves the efficient cache distribution policy of hierarchical caches while making use of nested group caches. However, distributing a page to every cache in a group not only wastes storage space unnecessarily but also increases intra-group traffic. This paper therefore divides the role of the caches in a group into two functions and presents a more efficient page distribution and group maintenance scheme. It also presents a forwarding-direction table based on group IDs as a solution to the request-forwarding problem that arises in nested group caches.

A Dual Mode Buffer Cache Management Policy for a Continuous Media Server (연속 미디어 서버를 위한 이중 모드 버퍼 캐쉬 관리 기법)

  • Seo, Won-Il;Park, Yong-Woon;Chung, Ki-Dong
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 6, No. 12
    • /
    • pp.3642-3651
    • /
    • 1999
  • In this paper, we propose a new caching scheme for continuous media data in which the buffer allocation unit is divided into two modes: interval and object. The access pattern of every object is monitored, and based on the monitoring results, each request for an object is assigned either interval-mode or object-mode caching for its data. Our simulation results show that the proposed caching scheme outperforms existing caching algorithms such as interval caching when the access patterns of the objects change over time.
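
The mode decision can be sketched as a simple classifier over per-object access statistics. The thresholds and the interarrival metric below are illustrative assumptions, not the paper's parameters.

```python
def choose_cache_mode(concurrent_streams, mean_interarrival_s,
                      requests_per_hour,
                      interval_gap_limit_s=60, popularity_threshold=20):
    """Return 'interval' or 'object' for the next request on an object."""
    if concurrent_streams >= 2 and mean_interarrival_s <= interval_gap_limit_s:
        # Overlapping, closely spaced streams: cache only the interval
        # between consecutive streams of the same object.
        return "interval"
    if requests_per_hour >= popularity_threshold:
        # Hot object with sparse overlap: keep the whole object resident.
        return "object"
    return "interval"   # default to the cheaper interval mode

# Usage: a popular object with no overlapping streams is cached whole.
print(choose_cache_mode(concurrent_streams=1, mean_interarrival_s=300,
                        requests_per_hour=35))   # -> 'object'
```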
