• Title/Summary/Keyword: buffer cache management

A Study on Large Data File Management Using Buffer Cache and Virtual Memory File (가상메모리 화일과 버퍼캐쉬를 이용한 대형 데이타 화일의 처리에 관한 연구)

  • Kim, Byeong-Chul;Shin, Byeong-Seok;Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference / 1991.11a / pp.185-188 / 1991
  • In this paper we design and implement a method that uses extended memory and hard disk space as a data buffer, allowing application programs to handle large data files in the DOS environment. A portion of conventional DOS memory serves as a buffer cache, which lets the application program use extended memory and hard disks transparently. Using the buffer cache also yields some speed improvement for the application program. We have also implemented a number of functions that simplify the pointer operations used by application programs.

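A minimal Python sketch of the transparent-buffering idea described above (illustrative only; the class and constants are hypothetical, and the original work was a DOS-era C implementation): the application reads by flat offset, while a small fixed-size block cache decides whether data comes from memory or from disk.

    import collections

    BLOCK_SIZE = 512            # assumed block granularity
    CACHE_BLOCKS = 16           # small cache standing in for conventional memory

    class TransparentBuffer:
        """Read a large file through a tiny LRU block cache."""
        def __init__(self, path):
            self.f = open(path, "rb")
            self.cache = collections.OrderedDict()     # block_no -> bytes

        def _block(self, block_no):
            if block_no in self.cache:
                self.cache.move_to_end(block_no)        # refresh LRU position
                return self.cache[block_no]
            self.f.seek(block_no * BLOCK_SIZE)
            data = self.f.read(BLOCK_SIZE)
            self.cache[block_no] = data
            if len(self.cache) > CACHE_BLOCKS:
                self.cache.popitem(last=False)          # evict least recently used
            return data

        def read(self, offset, length):
            """The caller sees one flat address space, much like the paper's
            pointer-handling helper functions."""
            out = bytearray()
            while length > 0:
                blk, pos = divmod(offset, BLOCK_SIZE)
                chunk = self._block(blk)[pos:pos + length]
                if not chunk:                           # past end of file
                    break
                out += chunk
                offset += len(chunk)
                length -= len(chunk)
            return bytes(out)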

Improving Periodic Flush Overhead of File Systems Using Non-volatile Buffer Cache (비휘발성 버퍼 캐시를 이용한 파일 시스템의 주기적인 flush 오버헤드 개선)

  • Lee, Eunji;Kang, Hyojung;Koh, Kern;Bahn, Hyokyung
    • Journal of KIISE / v.41 no.11 / pp.878-884 / 2014
  • The file I/O buffer cache plays an important role in narrowing the wide speed gap between main memory and secondary storage. However, data loss or inconsistency may occur if the system crashes before data updated in the buffer cache has been flushed to storage. Thus, most operating systems adopt a daemon that periodically flushes dirty data to secondary storage. In this study, we show that periodic flushes account for 30-70% of the total write traffic to storage, and we remove this inefficiency by implementing a small non-volatile buffer cache. Specifically, we present space-efficient management techniques such as delta-write and fragment-grouping, and show that storage write traffic and throughput improve by 44.2% and 23.6%, respectively, with only a small amount of NVRAM.
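
A minimal sketch of the delta-write idea, with NVRAM modeled as a dictionary and blocks as byte strings (illustrative Python, not the authors' implementation): only the modified byte range of a dirty block is persisted to NVRAM, and the full block is reconstructed on recovery.

    def delta_write(nvram, block_no, old, new):
        """Persist only the modified byte range of a dirty block (delta-write)."""
        assert len(old) == len(new)
        diffs = [i for i in range(len(old)) if old[i] != new[i]]
        if not diffs:
            return 0                                 # nothing changed
        lo, hi = diffs[0], diffs[-1] + 1
        nvram[block_no] = (lo, new[lo:hi])           # offset + delta, not the block
        return hi - lo                               # NVRAM bytes actually used

    def recover(block_no, storage_block, nvram):
        """After a crash, rebuild the up-to-date block from storage + delta."""
        data = bytearray(storage_block)
        if block_no in nvram:
            lo, delta = nvram[block_no]
            data[lo:lo + len(delta)] = delta
        return bytes(data)

Fragment-grouping would additionally pack many such small deltas into shared NVRAM pages; that bookkeeping is omitted here.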

MLC-LFU : The Multi-Level Buffer Cache Management Policy for Flash Memory (MLC-LFU : 플래시 메모리를 위한 멀티레벨 버퍼 캐시 관리 정책)

  • Ok, Dong-Seok;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE:Computing Practices and Letters / v.15 no.1 / pp.14-20 / 2009
  • Recently, NAND flash memory is used not only in portable devices but also in personal computers and servers. Buffer cache replacement policies designed for hard disks, such as LRU and LFU, are not well suited to NAND flash memory because they do not consider its characteristics. CFLRU and its variants (CFLRU/C, CFLRU/E, and DL-CFLRU/E) are buffer cache replacement policies that do take the characteristics of NAND flash memory into account, but their performance is no better than that of LRD. In this paper, we propose a new buffer cache replacement policy for NAND flash memory that is based on LFU and takes the characteristics of NAND flash memory into account, and we evaluate it in terms of hit ratio and the number of flush operations. The proposed policy shows a better hit ratio and fewer flush operations than the other policies.
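
The flash-aware flavor of LFU can be illustrated roughly as follows (a sketch of the general clean-first idea, not the exact MLC-LFU policy): because evicting a dirty block costs a flash write, the victim search prefers the least frequently used clean block.

    def evict_flash_aware_lfu(cache):
        """cache: non-empty dict block_no -> {"freq": int, "dirty": bool}.
        Evicting a dirty block costs a flash write (and eventually an erase),
        so prefer the least frequently used *clean* block as the victim."""
        clean = [b for b, e in cache.items() if not e["dirty"]]
        pool = clean if clean else list(cache)       # no clean block: take dirty
        victim = min(pool, key=lambda b: cache[b]["freq"])
        needs_writeback = cache[victim]["dirty"]
        del cache[victim]
        return victim, needs_writeback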

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE:Computer Systems and Theory / v.35 no.6 / pp.293-303 / 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency becomes more important. This is especially true in embedded handheld systems, where the memory system consumes the second largest portion of overall power. To save the energy consumed in the memory system, we can exploit the low-power modes of SDRAM; in the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller cannot use this facility efficiently unless the operating system cooperates. In this paper, we focus on minimizing the number of active SDRAM units: the operating system allocates physical pages so that only a few units need to be active, and the remaining units can be put into nap mode. This work can be considered a generalized, system-wide version of PAVM (Power-Aware Virtual Memory). We take all physical memory into account, especially the buffer cache, which accounts for about half of total memory usage on average; because of this share and its importance, a PAVM approach cannot be robust without considering the buffer cache. We analyze RAM usage and propose a power-aware page allocation policy, considering in particular the pages mapped into a process' address space and the buffer cache pages. The relationship and interactions between these two kinds of pages are analyzed and exploited for energy saving.
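
The page-allocation idea can be sketched as follows (hypothetical PowerAwareAllocator; a simplification of both PAVM and this paper's policy): allocations are packed into already-active memory units so idle units can remain in nap mode.

    class PowerAwareAllocator:
        """Pack page allocations into as few memory units (ranks) as possible,
        so the remaining units can stay in nap mode."""
        def __init__(self, n_units, pages_per_unit):
            self.free = [pages_per_unit] * n_units   # free pages per unit
            self.active = set()                      # units that must stay powered

        def alloc_page(self):
            # First, prefer a unit that is already active and has room.
            for u in sorted(self.active):
                if self.free[u] > 0:
                    self.free[u] -= 1
                    return u
            # Otherwise, wake exactly one napping unit that has space.
            for u in range(len(self.free)):
                if self.free[u] > 0:
                    self.active.add(u)               # this unit leaves nap mode
                    self.free[u] -= 1
                    return u
            raise MemoryError("no free physical pages")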

Efficient Buffer Allocation Policy for the Adaptive Block Replacement Scheme (적응력있는 블록 교체 기법을 위한 효율적인 버퍼 할당 정책)

  • Choi, Jong-Moo;Cho, Seong-Je;Noh, Sam-Hyuk;Min, Sang-Lyul;Cho, Yoo-Kun
    • Journal of KIISE:Computer Systems and Theory / v.27 no.3 / pp.324-336 / 2000
  • The paper proposes an efficient buffer management scheme to enhance the performance of the disk I/O system. Without any user-level information, the proposed scheme automatically detects the block reference pattern of each application by associating block attributes with the forward distance of a block. Based on the detected pattern, the scheme applies an appropriate replacement policy to each application. We also present a new block allocation scheme that improves buffer cache performance when the kernel must allocate a cache block on a cache miss. The allocation scheme analyzes the cache hit ratio of each application based on its block reference pattern and allocates cache blocks so as to maximize the system-wide hit ratio. All of these procedures are performed on-line and automatically at the system level. We evaluate the scheme by trace-driven simulation. Experimental results show that our scheme significantly improves cache block hit ratios over traditional schemes while requiring low overhead.

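A toy version of the pattern detection step (illustrative heuristics only; the paper's detector works from per-block forward distances rather than this simple scan):

    def detect_pattern(refs):
        """Classify one application's recent block reference string."""
        seen, repeats = set(), 0
        for b in refs:
            if b in seen:
                repeats += 1
            seen.add(b)
        ascending = all(a <= b for a, b in zip(refs, refs[1:]))
        if repeats == 0 and ascending:
            return "sequential"                  # each block touched once, in order
        half = len(refs) // 2
        if half and refs[:half] == refs[half:half * 2]:
            return "looping"                     # the same run repeats
        return "temporal"                        # reuse with temporal locality

    def pick_policy(pattern):
        # Sequential and looping references pollute an LRU list, so evict from
        # the MRU side for them; plain temporal locality is served best by LRU.
        return "MRU" if pattern in ("sequential", "looping") else "LRU"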

Buffer Cache Management based on Nonvolatile Memory to Improve the Performance of Smartphone Storage (스마트폰 저장장치의 성능개선을 위한 비휘발성메모리 기반의 버퍼캐쉬 관리)

  • Choi, Hyunkyoung;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.3 / pp.7-12 / 2016
  • DRAM is commonly used as the smartphone memory medium, but extending its capacity is challenging due to DRAM's large battery consumption and density limits. Meanwhile, smartphone applications such as social network services need increasingly large memory, resulting in long latency due to additional storage accesses. To alleviate this situation, we adopt emerging nonvolatile memory (NVRAM) as the smartphone's buffer cache and propose an efficient management scheme. The proposed scheme stores all dirty data in NVRAM, thereby reducing the number of storage accesses. Moreover, it maintains separate read and write histories of data accesses, leading to more efficient management of the volatile and nonvolatile buffer caches, respectively. Trace-driven simulations show that the proposed scheme improves I/O performance significantly.
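
A minimal sketch of the split design (hypothetical class and callbacks; the paper's actual policies are more refined): clean pages live in DRAM under a read-history LRU, dirty pages live in NVRAM under a write-history LRU, and only NVRAM overflow ever reaches storage.

    import collections

    class HybridBufferCache:
        def __init__(self, dram_pages, nvram_pages):
            self.dram = collections.OrderedDict()    # clean pages, read history
            self.nvram = collections.OrderedDict()   # dirty pages, write history
            self.dram_cap, self.nvram_cap = dram_pages, nvram_pages

        def read(self, page, load):
            for side in (self.nvram, self.dram):
                if page in side:
                    side.move_to_end(page)           # hit: refresh this side's LRU
                    return side[page]
            data = load(page)                        # miss: one storage access
            self.dram[page] = data                   # clean copy cached in DRAM
            if len(self.dram) > self.dram_cap:
                self.dram.popitem(last=False)        # clean victim: just drop it
            return data

        def write(self, page, data, flush):
            self.dram.pop(page, None)                # page turns dirty: leave DRAM
            self.nvram[page] = data                  # dirty data is crash-safe here
            self.nvram.move_to_end(page)
            if len(self.nvram) > self.nvram_cap:
                victim, vdata = self.nvram.popitem(last=False)
                flush(victim, vdata)                 # only overflow reaches storage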

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases / v.37 no.6 / pp.308-314 / 2010
  • As the price of flash memory continues to drop and flash SSD controller technology improves, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect flash SSDs to replace hard disks completely as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extension of the main buffer in database systems: it stores pages evicted from the main buffer and returns re-referenced pages to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer on the host side, provides fast random reads for the warm pages being evicted from the limited main buffer. In fact, for all pages missing from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer could be more than 60%, which supports our conjecture that this simple extended buffer approach can be very effective as a cache. In terms of performance per price, our extended buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
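
The page flow can be sketched as follows (hypothetical ExtendedBuffer with an injected read_disk callback; a rough model, not the authors' DBMS integration): main-buffer victims are staged on the SSD, and SSD hits are promoted back into the main buffer.

    import collections

    class ExtendedBuffer:
        def __init__(self, main_cap, ssd_cap, read_disk):
            self.main = collections.OrderedDict()
            self.ssd = collections.OrderedDict()
            self.main_cap, self.ssd_cap = main_cap, ssd_cap
            self.read_disk = read_disk

        def get(self, page):
            if page in self.main:                    # hit in main buffer
                self.main.move_to_end(page)
                return self.main[page]
            if page in self.ssd:                     # hit in SSD extended buffer
                data = self.ssd.pop(page)            # promote back to main
            else:
                data = self.read_disk(page)          # miss: slow hard disk read
            self._install(page, data)
            return data

        def _install(self, page, data):
            self.main[page] = data
            if len(self.main) > self.main_cap:
                victim, vdata = self.main.popitem(last=False)
                self.ssd[victim] = vdata             # stage the warm page on SSD
                if len(self.ssd) > self.ssd_cap:
                    self.ssd.popitem(last=False)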

Low Power Trace Cache for Embedded Processor

  • Moon Je-Gil;Jeong Ha-Young;Lee Yong-Surk
    • Proceedings of the IEEK Conference / summer / pp.204-208 / 2004
  • The embedded market will keep expanding as customers seek more wearable and ubiquitous systems. Cellular telephones, PDAs, notebooks, and portable multimedia devices could bring higher microprocessor revenues and more rewarding improvements in performance and functionality. Battery capacity, however, is still creeping along its roadmap; until a small practical fuel cell becomes available, microprocessor developers must come up with power-reduction methods. According to MPR 2003, the instruction and data caches of the ARM920T processor consume 44% of total processor power, with the rest split among the integer core, memory management units, bus interface unit, and other essential CPU circuitry. The relationships among the CPU, peripherals, and caches may also change in the future: processors running at higher operating frequencies will demand larger cache RAM and consume more energy. In this paper, we propose an advanced low-power trace cache that caches traces of the dynamic instruction stream and reduces cache access times. We evaluate the performance of the trace cache and estimate its power consumption, compared with a conventional cache.

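A trace cache in miniature (illustrative Python; real designs index by fetch address and branch outcome bits in hardware): a line is keyed by the starting PC plus the branch outcomes along the path, so one lookup can supply several basic blocks' worth of instructions.

    class TraceCache:
        """Lines are keyed by (start PC, branch outcomes), so one lookup can
        deliver several basic blocks of the dynamic instruction stream."""
        def __init__(self, capacity):
            self.lines = {}                          # key -> list of instructions
            self.capacity = capacity

        def fetch(self, start_pc, outcomes, build_trace):
            key = (start_pc, tuple(outcomes))
            if key in self.lines:
                return self.lines[key]               # trace hit: one cache access
            trace = build_trace(start_pc, outcomes)  # miss: fill via the I-cache
            if len(self.lines) >= self.capacity:
                self.lines.pop(next(iter(self.lines)))  # naive FIFO replacement
            self.lines[key] = trace
            return trace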

A Comparative Analysis on Page Caching Strategies Affecting Energy Consumption in the NAND Flash Translation Layer (NAND 플래시 변환 계층에서 전력 소모에 영향을 미치는 페이지 캐싱 전략의 비교·분석)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.3 / pp.109-116 / 2018
  • SSDs do not allow in-place updates within an allocated page, so the moment a data modification occurs, a new page must be allocated to replace the previous one. This intrinsic characteristic of SSDs requires many changes to existing HDD-based I/O theory. In this paper, we compare the performance of FTL caching strategies through simulation, from the perspectives of cache hashing (global vs. grouped) and caching algorithm (LRU vs. NUR). Experimental results show that, in terms of the energy consumed by flash operations, grouped cache management is not suitable and the NUR algorithm is superior to the LRU algorithm. In particular, we found that the cache hit ratio of the LRU algorithm is about 10 percentage points higher than that of the NUR algorithm, while its energy consumption is about 32% higher.
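
For reference, a compact sketch of NUR-style eviction (a clock-like second-chance scan; one common reading of "not recently used", not the paper's exact FTL code):

    def nur_evict(cache):
        """cache: non-empty dict page -> {"ref": bool, ...}. Each access sets
        the entry's reference bit; eviction scans for a clear bit, clearing
        set bits along the way (second chance). This needs far cheaper
        bookkeeping than maintaining a strict LRU ordering."""
        while True:
            for page, entry in cache.items():
                if not entry["ref"]:
                    del cache[page]          # victim found
                    return page
                entry["ref"] = False         # clear the bit; give a second chance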

An Efficient Caching Algorithm to Minimize Duplicated Disk Blocks in 2-level Disk Cache System (2-레벨 디스크 캐쉬 시스템에서 디스크 블록 중복 저장을 최소화하는 효율적인 캐싱 알고리즘)

  • 류갑상;정수목
    • Journal of the Korea Computer Industry Society / v.5 no.1 / pp.57-64 / 2004
  • The speed gap between processors and disks is a serious problem, so the I/O subsystem limits the performance of a computer system. To overcome the speed gap, caches have been used in computer systems: a cache reduces the access time to disk blocks and thus improves overall performance. In this paper, we propose an efficient cache management algorithm for computer systems that have both a buffer cache and a disk cache. The proposed algorithm minimizes the blocks duplicated between the buffer cache and the disk cache. We evaluate the proposed algorithm by trace-driven simulation, and the simulation results show that it reduces the mean access time to disk blocks.

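The anti-duplication rule reduces to two handlers (a minimal sketch with plain dictionaries standing in for the two cache levels): demote on buffer-cache eviction, and promote-with-removal on a disk-cache hit, so a block never occupies space in both levels at once.

    def on_buffer_cache_evict(block, data, disk_cache):
        """Demotion: a buffer cache (L1) victim enters the disk cache (L2)."""
        disk_cache[block] = data

    def on_disk_cache_hit(block, buffer_cache, disk_cache):
        """Promotion with removal: the block moves up and leaves no copy behind,
        so the two levels stay exclusive and their combined capacity is usable."""
        data = disk_cache.pop(block)
        buffer_cache[block] = data
        return data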