• Title/Summary/Keyword: Cache Memory


Delayed Write Scheme for The Flash Memory based Embedded Database Systems

  • Yun, Seung-Hee;Song, Ha-Joo
    • Proceedings of the Korean Information Science Society Conference / 2006.10c / pp.287-290 / 2006
  • Due to its operating characteristics, flash memory does not allow overwriting a memory region in place: an erase operation must be performed before a write. Since erases take much longer than reads, minimizing them is key to improving flash memory performance. This paper proposes a delayed write scheme that defers write operations on database pages in order to reduce the number of erases on flash memory. When a page is updated, the scheme applies the update to the corresponding page in the page cache, but logs the record operation that caused it (record insert, update, or delete) in a separate delayed-write queue. While record operations remain in the delayed-write queue, updates to the corresponding page are held back. If the page must be read again, the update information stored in the delayed-write queue is merged in and the updated page is loaded into the page cache. This reduces both the number of pages updated and the number of updates to any single page, thereby reducing erase and write operations on flash memory and improving the performance of the database system.
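
To make the queue-and-merge flow above concrete, here is a minimal C sketch of a delayed-write queue, assuming a single cached page, fixed-size record payloads, and illustrative names (defer_write, read_page); it is one reading of the scheme, not the authors' implementation.

```c
/* Minimal sketch of the delayed-write idea: record operations are
 * queued instead of flushed, and merged into the page on the next
 * read. All names and structures are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

#define QUEUE_CAP 16
#define PAGE_SIZE 64

typedef enum { OP_INSERT, OP_UPDATE, OP_DELETE } op_type;

typedef struct {            /* one deferred record operation */
    int     page_no;
    op_type type;
    int     offset;         /* where in the page the record lives */
    char    data[8];
} record_op;

static record_op queue[QUEUE_CAP];
static int       queue_len = 0;

static char page_cache[PAGE_SIZE];   /* single-page cache for brevity */

/* Defer a record operation instead of rewriting the flash page. */
static int defer_write(int page_no, op_type t, int off, const char *d)
{
    if (queue_len == QUEUE_CAP) return -1;   /* caller must flush */
    record_op *op = &queue[queue_len++];
    op->page_no = page_no; op->type = t; op->offset = off;
    strncpy(op->data, d, sizeof op->data - 1);
    op->data[sizeof op->data - 1] = '\0';
    return 0;
}

/* On a page read, merge all queued operations for that page so the
 * caller sees an up-to-date image without an erase/write cycle. */
static void read_page(int page_no, char *out)
{
    memcpy(out, page_cache, PAGE_SIZE);      /* stale cached image */
    for (int i = 0; i < queue_len; i++) {
        if (queue[i].page_no != page_no) continue;
        if (queue[i].type == OP_DELETE)
            out[queue[i].offset] = '\0';
        else                                  /* insert or update */
            memcpy(out + queue[i].offset, queue[i].data,
                   strlen(queue[i].data));
    }
}

int main(void)
{
    memcpy(page_cache, "AAAAAAA", 7);
    defer_write(0, OP_UPDATE, 2, "xy");      /* no erase happens here */
    char view[PAGE_SIZE];
    read_page(0, view);
    printf("%.7s\n", view);                  /* AAxyAAA */
    return 0;
}
```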

A Cache buffer and Read Request-aware Request Scheduling Method for NAND flash-based Solid-state Disks

  • Bang, Kwanhu;Park, Sang-Hoon;Lee, Hyuk-Jun;Chung, Eui-Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.143-150 / 2013
  • Solid-state disks (SSDs) have been widely used in high-performance personal computers and servers due to their good characteristics and performance. NAND flash-based SSDs, which take up a large portion of the whole NAND flash market, are the major type of SSD. They usually integrate a cache buffer built from DRAM that uses the write-back policy for better performance. Unfortunately, this policy makes existing scheduling methods less effective at the interface (I/F) level of SSDs. Therefore, in this paper, we propose a scheduling method for the I/F that takes the cache buffer into consideration. The proposed method considers the hit/miss status of the cache buffer and gives higher priority to read requests. As a result, requests whose data hits in the cache buffer can be handled in advance, and read requests, which have a larger effect on whole-system performance than write requests, experience shorter latency. The experimental results show that the proposed scheduling method improves read latency by 26%.
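
A minimal sketch of the priority ordering described above, assuming a simple three-level rule (cache-buffer hits first, then read misses, then writes); the request fields and priority function are illustrative, not the paper's actual I/F logic.

```c
/* Illustrative priority pick: cache-buffer hits first, then reads
 * before writes -- the ordering the paper's scheduler favors. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    int  lba;
    bool is_read;
    bool hits_cache;   /* would this request hit the DRAM buffer? */
} request;

/* Lower value = served earlier. */
static int priority(const request *r)
{
    if (r->hits_cache) return 0;   /* served from DRAM, nearly free */
    if (r->is_read)    return 1;   /* reads block the host; go next */
    return 2;                      /* writes absorb remaining slots */
}

static int pick_next(const request *q, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (priority(&q[i]) < priority(&q[best]))
            best = i;
    return best;
}

int main(void)
{
    request q[] = {
        { 100, false, false },   /* write miss */
        { 200, true,  false },   /* read miss  */
        { 300, false, true  },   /* write hit  */
    };
    printf("serve lba %d first\n", q[pick_next(q, 3)].lba); /* 300 */
    return 0;
}
```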

Design of an Asynchronous Instruction Cache based on a Mixed Delay Model

  • Jeon, Kwang-Bae;Kim, Seok-Man;Lee, Je-Hoon;Oh, Myeong-Hoon;Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association / v.10 no.3 / pp.64-71 / 2010
  • Recently, to achieve high processor performance, the cache is physically split into two parts, one for instructions and one for data. This paper proposes an asynchronous instruction cache architecture based on a mixed delay model: a DI (delay-insensitive) model for cache hits and a bundled delay model for cache misses. We synthesized the instruction cache at gate level and constructed a test platform with the 32-bit embedded processor EISC to evaluate performance. The cache communicates with the main memory and CPU using a 4-phase handshake protocol. It has an 8-KB, 4-way set associative memory that employs a pseudo-LRU replacement algorithm. In tests on the platform with MiBench benchmark programs, the designed cache shows a 99% cache hit ratio and reduces latency to 68%.
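
The pseudo-LRU replacement named above is commonly realized as the tree-based scheme for a 4-way set; the C sketch below shows the three-bit bookkeeping for one set, as a textbook illustration rather than the authors' gate-level design.

```c
/* Tree-based pseudo-LRU for one 4-way set: each bit points toward
 * the pseudo-least-recently-used side. Three bits per set suffice. */
#include <stdio.h>

/* bit0: root (0 = victim in ways 0/1, 1 = victim in ways 2/3)
 * bit1: left pair  (0 = victim way0, 1 = victim way1)
 * bit2: right pair (0 = victim way2, 1 = victim way3) */
typedef struct { unsigned char bits; } plru_set;

static void plru_touch(plru_set *s, int way)
{
    if (way < 2) {
        s->bits |= 1;                   /* victim now on right pair */
        if (way == 0) s->bits |= 2;     /* pair victim: way1 */
        else          s->bits &= ~2;    /* pair victim: way0 */
    } else {
        s->bits &= ~1;                  /* victim now on left pair */
        if (way == 2) s->bits |= 4;     /* pair victim: way3 */
        else          s->bits &= ~4;    /* pair victim: way2 */
    }
}

static int plru_victim(const plru_set *s)
{
    if (s->bits & 1) return (s->bits & 4) ? 3 : 2;
    return (s->bits & 2) ? 1 : 0;
}

int main(void)
{
    plru_set s = { 0 };
    plru_touch(&s, 0);
    plru_touch(&s, 2);
    printf("victim way: %d\n", plru_victim(&s));   /* 1 */
    return 0;
}
```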

Design for Effective Web Caching Hierarchy

  • Kang, Man-Mo;Yoo, Dae-Seung;Koo, Ja-Rok
    • Proceedings of the Korean Information Science Society Conference / 1999.10c / pp.513-515 / 1999
  • Research on web caching, including cache consistency maintenance and caching algorithms, is ongoing as a way to cope with exponentially increasing Internet traffic. In this paper, we design multiple cache servers to reduce the volume of Internet traffic. Multiple cache servers relieve the web server load that arises when a single cache server is used, thereby reducing the amount of network traffic. Multiple caches can also cut the cost of hardware such as routers, disks, and memory, which makes them efficient from an economic standpoint as well.
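
As a rough illustration of such a hierarchy, the C sketch below resolves a request through a local cache, then a parent cache, then the origin server, filling caches on the way back; the two-level structure and all names are assumptions, not the paper's design.

```c
/* Minimal two-level web-cache lookup: each miss escalates one level
 * and each level caches on the way back, which is what spreads load
 * away from the origin web server. Structures are illustrative. */
#include <stdio.h>
#include <string.h>

#define CAP 4

typedef struct {
    char url[CAP][32];
    int  n;
} cache;

static int cache_has(const cache *c, const char *url)
{
    for (int i = 0; i < c->n; i++)
        if (strcmp(c->url[i], url) == 0) return 1;
    return 0;
}

static void cache_put(cache *c, const char *url)
{
    if (c->n < CAP) {
        strncpy(c->url[c->n], url, 31);
        c->url[c->n][31] = '\0';
        c->n++;
    }
}

static const char *fetch(cache *local, cache *parent, const char *url)
{
    if (cache_has(local, url))  return "local hit";
    if (cache_has(parent, url)) { cache_put(local, url); return "parent hit"; }
    cache_put(parent, url);     /* fetched from origin, fill both */
    cache_put(local, url);
    return "origin fetch";
}

int main(void)
{
    cache local = { .n = 0 }, parent = { .n = 0 };
    printf("%s\n", fetch(&local, &parent, "/index.html")); /* origin */
    printf("%s\n", fetch(&local, &parent, "/index.html")); /* local  */
    return 0;
}
```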

Low-Power Cache Design by using Locality Buffer and Address Compression

  • Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.18 no.9 / pp.11-19 / 2013
  • Most modern computer systems employ caches in order to alleviate the access time gap between the processor and the memory system. The power dissipated by the cache becomes a significant part of the total power dissipated by the whole microprocessor chip, so power reduction in the cache system has become an important issue. The partial tag cache targets minimal power consumption; its main power reduction comes from matching small partial tags rather than full tags. In this paper, we first analyze previous partial tag cache systems and then propose a new address matching mechanism that uses a locality buffer and address compression. In simulation results, the proposed model shows an 18% power reduction on average compared to a regular cache, while providing the same performance level.
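
A minimal sketch of partial tag matching, assuming a 4-bit partial tag that acts as a cheap filter before the full comparison; the field widths and counting are illustrative, and the paper's locality buffer and address compression are not modeled here.

```c
/* Partial tag matching: compare a few low tag bits first and only
 * run the full (expensive) comparison on a partial match, so most
 * misses are filtered cheaply. Widths are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PARTIAL_BITS 4
#define PARTIAL_MASK ((1u << PARTIAL_BITS) - 1)

typedef struct {
    uint32_t tag;
    bool     valid;
} cache_line;

/* Returns true on hit. full_compares counts the expensive
 * comparisons the partial filter could not avoid. */
static bool lookup(const cache_line *way, int n_ways,
                   uint32_t tag, int *full_compares)
{
    for (int i = 0; i < n_ways; i++) {
        if (!way[i].valid) continue;
        if ((way[i].tag & PARTIAL_MASK) != (tag & PARTIAL_MASK))
            continue;                 /* filtered: no full compare */
        (*full_compares)++;
        if (way[i].tag == tag) return true;
    }
    return false;
}

int main(void)
{
    cache_line set[4] = {
        { 0x12u, true }, { 0x25u, true }, { 0x38u, true }, { 0, false }
    };
    int full = 0;
    bool hit = lookup(set, 4, 0x12u, &full);
    printf("hit=%d, full tag compares=%d\n", hit, full); /* 1, 1 */
    return 0;
}
```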

High Performance Data Cache Memory Architecture

  • Kim, Hong-Sik;Kim, Cheong-Ghil
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.945-951 / 2008
  • In this paper, a new high-performance data cache scheme that improves exploitation of both spatial and temporal locality is proposed. The proposed data cache consists of a hardware prefetch unit and two sub-caches: a direct-mapped (DM) cache with a large block size and a fully associative buffer with a small block size. Spatial locality is exploited by fetching and storing large blocks into the direct-mapped cache, and is enhanced by prefetching a neighboring block when a DM cache hit occurs. Temporal locality is exploited by storing small blocks from the DM cache in the fully associative buffer, according to their activity in the DM cache, when they are replaced. Experimental results on SPEC2000 programs show that the proposed scheme can reduce the average miss ratio by 12.53% to 23.62% and the AMAT (average memory access time) by 14.67% to 18.60% compared to previous schemes such as the direct-mapped cache, the 4-way set associative cache, and the SMI (selective mode intelligent) cache [8].
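
The C sketch below illustrates the two-part lookup described above: a direct-mapped array backed by a small fully associative buffer that receives replaced blocks, with a stub where the hardware prefetch unit would fetch the neighboring block; sizes and names are assumptions, not the paper's parameters.

```c
/* Two-part data cache sketch: DM cache with a victim-style fully
 * associative buffer; a DM hit also triggers a neighbor prefetch. */
#include <stdio.h>
#include <stdint.h>

#define DM_SETS 8
#define FA_WAYS 4

static uint32_t dm_tag[DM_SETS];      /* direct-mapped, large blocks */
static int      dm_valid[DM_SETS];
static uint32_t fa_tag[FA_WAYS];      /* fully associative buffer */
static int      fa_valid[FA_WAYS], fa_next = 0;

static void prefetch_neighbor(uint32_t block) /* stand-in for HW unit */
{
    (void)block;  /* a real unit would fetch block+1 into the cache */
}

static const char *access_block(uint32_t block)
{
    uint32_t set = block % DM_SETS;
    if (dm_valid[set] && dm_tag[set] == block) {
        prefetch_neighbor(block);     /* spatial locality boost */
        return "DM hit";
    }
    for (int i = 0; i < FA_WAYS; i++)
        if (fa_valid[i] && fa_tag[i] == block)
            return "FA buffer hit";   /* temporal locality saved it */
    if (dm_valid[set]) {              /* evicted DM block moves to FA */
        fa_tag[fa_next] = dm_tag[set];
        fa_valid[fa_next] = 1;
        fa_next = (fa_next + 1) % FA_WAYS;
    }
    dm_tag[set] = block; dm_valid[set] = 1;
    return "miss";
}

int main(void)
{
    printf("%s\n", access_block(3));  /* miss            */
    printf("%s\n", access_block(11)); /* miss, evicts 3  */
    printf("%s\n", access_block(3));  /* FA buffer hit   */
    return 0;
}
```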

Effect of Microkernel Structure on Cache Memory Performance

  • Chang, Moon-Seok;Koh, Kern
    • Journal of KIISE: Computer Systems and Theory / v.27 no.1 / pp.68-80 / 2000
  • The modern software trend toward modularization has changed cache access behavior dramatically. Many modern operating systems are also departing from the monolithic structure of the past toward the highly modularized structure referred to as a microkernel. Microkernel-based operating systems are more portable and extensible, but are likely to perform worse. This paper quantitatively analyzes the effect of microkernel structure on cache memory to identify the primary factor in its performance degradation. Through experiments performed on an Intel Pentium Pro processor platform, we found that the microkernel structure suffers remarkably higher miss rates for the L1 cache, L2 cache, and TLB than the monolithic one does. We also found that the performance of a microkernel depends more on the efficiency of cache memory than on IPC. Finally, we found that these results stem from the frequent context switches mainly caused by the structural features of a microkernel.

An Address Translation Technique for Large NAND Flash Memory using Page Level Mapping

  • Seo, Hyun-Min;Kwon, Oh-Hoon;Park, Jun-Seok;Koh, Kern
    • Journal of KIISE: Computing Practices and Letters / v.16 no.3 / pp.371-375 / 2010
  • The SSD is a storage medium based on NAND flash memory. Because of its short latency, low power consumption, and resistance to shock, it is used not only in PCs but also in servers. Most SSDs use an FTL (flash translation layer) to overcome the erase-before-overwrite characteristic of NAND flash. There are several types of FTL, but page-mapped FTL shows better performance than the others. Its usefulness is limited, however, by the large memory footprint of its mapping table. For example, 64MB of memory is required for the mapping table alone in a 64GB MLC SSD. In this paper, we propose a novel caching scheme for the mapping table. Using mapping-table metadata, we construct a fully associative cache and translate addresses in O(1) time. The simulation results show a hit ratio of more than 80% with a 32KB cache and 90% with a 512KB cache. The overall memory footprint was only 1.9% of 64MB. The time overhead of cache misses was measured at less than 2% for most workloads.
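
As an illustration of O(1) translation with a cached mapping table, the C sketch below keeps a small set of hot map entries and falls back to the full map on a miss; the hash-indexed cache stands in for the paper's fully associative structure and is an assumption.

```c
/* Cached page-level FTL lookup: a small cache holds hot mapping
 * entries; a hit translates in O(1), a miss loads the entry from
 * the full map (which would live on flash in the real design). */
#include <stdio.h>
#include <stdint.h>

#define MAP_SIZE   1024        /* full logical->physical map   */
#define CACHE_SIZE 64          /* cached subset of map entries */

static uint32_t full_map[MAP_SIZE];
static struct { uint32_t lpn, ppn; int valid; } map_cache[CACHE_SIZE];
static int misses = 0;

static uint32_t translate(uint32_t lpn)
{
    int slot = lpn % CACHE_SIZE;          /* O(1) index            */
    if (map_cache[slot].valid && map_cache[slot].lpn == lpn)
        return map_cache[slot].ppn;       /* cache hit             */
    misses++;                             /* miss: fetch map entry */
    map_cache[slot].lpn = lpn;
    map_cache[slot].ppn = full_map[lpn];
    map_cache[slot].valid = 1;
    return map_cache[slot].ppn;
}

int main(void)
{
    for (uint32_t i = 0; i < MAP_SIZE; i++) full_map[i] = i + 100;
    translate(7); translate(7); translate(7);    /* 1 miss, 2 hits */
    printf("ppn=%u, misses=%d\n", translate(7), misses);
    return 0;
}
```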

Hybrid Main Memory based Buffer Cache Scheme by Using Characteristics of Mobile Applications

  • Oh, Chansoo;Kang, Dong Hyun;Lee, Minho;Eom, Young Ik
    • Journal of KIISE / v.42 no.11 / pp.1314-1321 / 2015
  • Mobile devices employ buffer cache mechanisms, just as computer systems such as desktops or servers do, to mitigate the performance gap between main memory and secondary storage. However, DRAM has the problem that it accelerates battery consumption by periodically performing refresh operations to maintain the stored data. In this paper, we propose a novel buffer cache scheme to extend battery life in mobile devices, based on a hybrid main memory architecture consisting of DRAM and non-volatile PCM. We also suggest a new buffer cache policy that allocates buffers based on process states to optimize the performance and endurance of PCM. In particular, our algorithm allocates each page to the position appropriate to the state of the application that owns the page, and tries to ensure rapid response for foreground applications even with a small amount of DRAM. The experimental results indicate that the proposed scheme reduces the elapsed time of foreground applications by 58% on average and power consumption by 23% on average, without negatively impacting the performance of background applications.
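
A minimal sketch of state-aware placement, assuming a two-pool model where foreground pages prefer DRAM and background pages go to PCM; the pool sizes and the alloc_buffer_page interface are illustrative, not the authors' algorithm.

```c
/* State-aware buffer placement: pages of foreground applications go
 * to DRAM for speed, background pages go to PCM to save refresh
 * power and spare the small DRAM. A simplified reading of the policy. */
#include <stdio.h>

typedef enum { APP_FOREGROUND, APP_BACKGROUND } app_state;
typedef enum { MEM_DRAM, MEM_PCM } mem_kind;

static int dram_free = 2;   /* tiny pools keep the example visible */
static int pcm_free  = 8;

static mem_kind alloc_buffer_page(app_state s)
{
    if (s == APP_FOREGROUND && dram_free > 0) {
        dram_free--;
        return MEM_DRAM;    /* fast path for interactive apps */
    }
    pcm_free--;             /* background (or DRAM exhausted) */
    return MEM_PCM;
}

int main(void)
{
    const char *name[] = { "DRAM", "PCM" };
    printf("fg -> %s\n", name[alloc_buffer_page(APP_FOREGROUND)]);
    printf("bg -> %s\n", name[alloc_buffer_page(APP_BACKGROUND)]);
    printf("fg -> %s\n", name[alloc_buffer_page(APP_FOREGROUND)]);
    printf("fg -> %s\n", name[alloc_buffer_page(APP_FOREGROUND)]); /* PCM: DRAM full */
    return 0;
}
```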

An Energy Efficient and High Performance Data Cache Structure Utilizing Tag History of Cache Addresses

  • Moon, Hyun-Ju;Jee, Sung-Hyun
    • The KIPS Transactions: Part A / v.14A no.1 s.105 / pp.55-62 / 2007
  • The uptime of embedded processors for mobile devices depends on battery consumption, and a large portion of the power consumption of embedded processors is known to be due to cache management. This paper proposes an energy-efficient data cache structure for high-performance embedded processors. A high-performance prefetching data cache issues prefetch instructions ahead of demand-fetch instructions, based on reference predictions. These prefetch instructions reduce memory delay by improving the cache hit ratio, but they also increase energy consumption in proportion to the number of prefetch instructions issued. In this paper, we adopt a tag history table in the prefetching data cache to reduce energy consumption by minimizing parallel tag comparisons. Experimental results show that the proposed data cache improves both energy consumption and memory delay.
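
The C sketch below illustrates the tag-history idea: remember the last tag seen per set so that a lookup with a matching tag needs a single comparison instead of a parallel compare across all ways; the table layout and fill policy are illustrative assumptions, not the paper's design.

```c
/* Tag history table sketch: per-set memory of the last tag (and way)
 * seen, so repeated accesses to the same tag -- common for prefetch
 * streams -- skip the energy-hungry parallel tag comparison. */
#include <stdio.h>
#include <stdint.h>

#define SETS 16

static struct { uint32_t tag; int way; int valid; } history[SETS];
static int parallel_lookups = 0, filtered_lookups = 0;

/* Returns the matching way; on a history miss a full parallel
 * lookup would run (modeled here as filling way 0, an assumption). */
static int probe(uint32_t set, uint32_t tag)
{
    if (history[set].valid && history[set].tag == tag) {
        filtered_lookups++;        /* one comparator, low energy   */
        return history[set].way;
    }
    parallel_lookups++;            /* fall back to all-way compare */
    history[set].tag = tag;
    history[set].way = 0;
    history[set].valid = 1;
    return 0;
}

int main(void)
{
    probe(3, 0xAB);                /* cold: parallel lookup        */
    probe(3, 0xAB);                /* same tag: history filters it */
    probe(3, 0xAB);
    printf("parallel=%d filtered=%d\n", parallel_lookups, filtered_lookups);
    return 0;
}
```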