• Title/Summary/Keyword: Memory Page

Recency and Frequency based Page Management on Hybrid Main Memory

  • Kim, Sungho;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.23 no.3 / pp.1-8 / 2018
  • In this paper, we propose a new page replacement policy that uses recency and frequency on hybrid main memory. The proposal has two features. First, when a page fault occurs, the faulted page is allocated to DRAM regardless of the operation type (read or write), since a page brought in by a page fault has a high probability of re-reference in the near future; this allocation reduces the frequency of write operations in PCM. Second, pages in PCM that are written frequently are migrated from PCM to DRAM, while all other pages remain in PCM, reducing the number of unnecessary page migrations out of PCM. In our experiments, the proposal reduced the number of page migrations from PCM by about 32.12% on average and the number of write operations in PCM by about 44.64% on average, compared to CLOCK-DWF. Moreover, it reduced energy consumption by about 15.61% and 3.04% compared to other page replacement policies.
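
The allocation and migration rules summarized in this abstract can be captured in a short sketch. The following Python fragment is illustrative only; the class layout, the `WRITE_MIGRATE_THRESHOLD` value, and the simple LRU demotion used to make room in DRAM are assumptions for illustration, not details taken from the paper.

```python
from collections import OrderedDict

# Illustrative parameters (not from the paper).
DRAM_CAPACITY = 4            # pages
WRITE_MIGRATE_THRESHOLD = 3  # PCM writes before a page migrates to DRAM

class HybridMemorySketch:
    def __init__(self):
        self.dram = OrderedDict()   # page -> None, ordered by recency (LRU)
        self.pcm = {}               # page -> write count observed while in PCM
        self.migrations = 0

    def _evict_from_dram(self):
        # Make room in DRAM by demoting the least recently used page to PCM
        # (the demotion rule is an assumption, not specified in the abstract).
        victim, _ = self.dram.popitem(last=False)
        self.pcm[victim] = 0

    def access(self, page, is_write):
        if page in self.dram:                       # DRAM hit: refresh recency
            self.dram.move_to_end(page)
            return
        if page in self.pcm:                        # PCM hit
            if is_write:
                self.pcm[page] += 1
                if self.pcm[page] >= WRITE_MIGRATE_THRESHOLD:
                    # Write-hot page: migrate PCM -> DRAM.
                    del self.pcm[page]
                    if len(self.dram) >= DRAM_CAPACITY:
                        self._evict_from_dram()
                    self.dram[page] = None
                    self.migrations += 1
            return
        # Page fault: allocate to DRAM regardless of read or write.
        if len(self.dram) >= DRAM_CAPACITY:
            self._evict_from_dram()
        self.dram[page] = None

if __name__ == "__main__":
    mem = HybridMemorySketch()
    for page, is_write in [(1, False), (2, True), (3, False), (4, True),
                           (5, False), (1, True), (1, True), (1, True)]:
        mem.access(page, is_write)
    print("DRAM:", list(mem.dram), "PCM:", mem.pcm, "migrations:", mem.migrations)
```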

High Performance PCM&DRAM Hybrid Memory System (고성능 PCM&DRAM 하이브리드 메모리 시스템)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.2 / pp.117-123 / 2016
  • In general, PCM (Phase Change Memory) is unsuitable as a main memory on its own because of its high read/write latency and low endurance. However, a hybrid memory in which DRAM and PCM are placed at the same level is an effective structure for next-generation main memory because it can exploit the advantages of both DRAM and PCM. Such a structure needs a page management method that exploits the characteristics of each memory dynamically and adaptively, so we aim to reduce the access time and write count of PCM through an effective page replacement scheme. According to our simulation, the proposed algorithm for the DRAM&PCM hybrid can reduce the PCM access count by around 60% and the PCM write count by 42% for the same PCM size, compared with the Clock-DWF algorithm.

Development of Flash Memory Page Management Techniques

  • Kim, Jeong-Joon
    • Journal of Information Processing Systems / v.14 no.3 / pp.631-644 / 2018
  • Many flash memory-based buffer replacement algorithms that take the characteristics of flash memory into account have recently been developed. Conventional algorithms suffer from slow operation because, when selecting a replacement target page, they check only whether a page has been referenced: either the reference count is ignored, or only the elapsed time since the last reference is considered. This paper addresses these problems by dividing pages into groups and considering both the reference frequency and the reference time when selecting the replacement target page. In addition, because flash memory has a limited lifespan, replacement candidates are also selected based on the number of erase (delete) operations.
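
As a rough illustration of selecting a victim by combining reference frequency, reference time, and erase count, the sketch below scores buffered pages and picks one from the "cold" group first. The grouping threshold, the tie-breaking order, and the field names are hypothetical; the paper's actual grouping and selection rules may differ.

```python
from dataclasses import dataclass

@dataclass
class Page:
    page_no: int
    ref_count: int      # reference frequency
    last_ref_time: int  # logical time of the last reference
    erase_count: int    # erases of the flash block backing this page
    dirty: bool

def choose_victim(pages, hot_threshold=2):
    """Pick a replacement victim among buffered pages.

    Hypothetical rule for illustration: pages are split into a "cold" group
    (few references) and a "hot" group, the cold group is examined first,
    and within a group the page with the oldest reference time is chosen,
    preferring pages on blocks with fewer erases to help wear-leveling.
    """
    cold = [p for p in pages if p.ref_count < hot_threshold]
    candidates = cold if cold else pages
    return min(candidates, key=lambda p: (p.last_ref_time, p.erase_count))

if __name__ == "__main__":
    buf = [
        Page(1, ref_count=5, last_ref_time=90, erase_count=10, dirty=True),
        Page(2, ref_count=1, last_ref_time=40, erase_count=3,  dirty=False),
        Page(3, ref_count=1, last_ref_time=40, erase_count=7,  dirty=True),
    ]
    print("victim page:", choose_victim(buf).page_no)  # page 2: cold, old, low erase count
```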

Page Replacement Policy for Memory Load Adaption to Reduce Storage Writes and Page Faults (스토리지 쓰기량과 페이지 폴트를 줄이는 메모리 부하 적응형 페이지 교체 정책)

  • Bahn, Hyokyung;Park, Yunjoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.57-62 / 2022
  • Recently, fast storage media such as phase-change memory (PCM) have emerged, and memory management policies designed for slow disk storage need to be revisited. In this paper, we propose a new page replacement policy that uses PCM as the swap device of a virtual memory system. Because writes to PCM are slow and PCM has limited write endurance, the proposed policy aims at reducing write traffic to the swap device in addition to the traditional goal of reducing the number of page faults. Specifically, it focuses on reducing page faults when the memory load of the system is high, but on reducing write traffic to storage when free memory space is sufficient. Simulation experiments with various memory reference traces show that the proposed policy reduces write traffic to PCM without performance degradation.
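
The load-adaptive idea, minimizing page faults under memory pressure but avoiding swap writes when memory is plentiful, can be sketched as below. The load threshold, the use of plain LRU as the fault-minimizing mode, and the clean-first rule in the low-load mode are illustrative assumptions, not the paper's exact policy.

```python
HIGH_LOAD_THRESHOLD = 0.9   # fraction of frames in use (illustrative)

def select_victim(frames, memory_load):
    """frames: list of dicts with 'page', 'last_ref', 'dirty'.

    High memory load -> behave like LRU to keep the fault rate low.
    Low memory load  -> evict the least recently used *clean* page if one
                        exists, so the eviction causes no write to the
                        PCM swap device.
    """
    by_recency = sorted(frames, key=lambda f: f['last_ref'])  # oldest first
    if memory_load >= HIGH_LOAD_THRESHOLD:
        return by_recency[0]                        # plain LRU victim
    clean = [f for f in by_recency if not f['dirty']]
    return clean[0] if clean else by_recency[0]

if __name__ == "__main__":
    frames = [
        {'page': 'A', 'last_ref': 10, 'dirty': True},
        {'page': 'B', 'last_ref': 20, 'dirty': False},
        {'page': 'C', 'last_ref': 30, 'dirty': True},
    ]
    print(select_victim(frames, memory_load=0.95)['page'])  # A (LRU)
    print(select_victim(frames, memory_load=0.50)['page'])  # B (oldest clean page)
```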

Page Replacement Algorithm for Improving Performance of Hybrid Main Memory (하이브리드 메인 메모리의 성능 향상을 위한 페이지 교체 기법)

  • Lee, Minhoe;Kang, Dong Hyun;Kim, Junghoon;Eom, Young Ik
    • KIISE Transactions on Computing Practices / v.21 no.1 / pp.88-93 / 2015
  • In modern computer systems, DRAM is commonly used as main memory due to its low read/write latency and high endurance. However, DRAM is volatile and requires a periodic power supply (i.e., memory refresh) to sustain the data stored in it. PCM, on the other hand, is a promising candidate for replacing DRAM because it is non-volatile and can retain stored data without refresh, while also supporting byte-addressable access and in-place update. However, PCM is unsuitable as the sole main memory of a computer system because of its high read/write latency and low endurance. To take advantage of both DRAM and PCM, a hybrid main memory consisting of DRAM and PCM has been suggested and actively studied. In this paper, we propose a novel page replacement algorithm for hybrid main memory. To cope with the weaknesses of PCM, our scheme focuses on reducing the number of PCM writes in the hybrid main memory. Experimental results show that the proposed page replacement algorithm reduces the number of PCM writes by up to 80.5% compared with other page replacement algorithms.

An Optimum Paged Interleaving Memory by a Hierarchical Bit Line (계층 비트라인에 의한 최적 페이지 인터리빙 메모리)

  • 조경연;이주근
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.6 / pp.901-909 / 1990
  • With the wide spread of 32-bit personal computers, memory systems with a simple structure and high performance are in great demand. In this paper, a memory block is constructed using a modified hierarchical bit line in which the DRAM bit line and a latch acting as an SRAM cell are integrated through an interface gate, and a new memory architecture, DSRAM (Dynamic Static RAM), is proposed by interleaving 16 such memory blocks. Because the DSRAM operates with 16 pages, the page miss ratio becomes small and the RAS precharge time incurred by page misses is shortened. The DSRAM can therefore implement optimum page interleaving while maintaining good compatibility with existing DRAMs, and it can be widely used in small computers as well as in high-performance memory systems.

A Study on Demand Paging For NAND Flash Memory Storages (NAND 플래시 메모리 저장장치를 위한 요구 페이징 기법 연구)

  • Yoo, Yoon-Suk;Ryu, Yeon-Seung
    • Journal of Korea Multimedia Society / v.10 no.5 / pp.583-593 / 2007
  • We study page replacement algorithms for demand paging, called CFLRU/C, CFLRU/E, and DL-CFLRU/E, that reduce the number of erase operations and improve the wear-leveling of flash memory. Under the CFLRU/C and CFLRU/E algorithms, the victim page is the least recently used clean page within a pre-specified window. When there is no clean page within the window, CFLRU/C evicts the dirty page with the lowest reference frequency, while CFLRU/E evicts the dirty page with the highest number of erase operations. The DL-CFLRU/E algorithm maintains two page lists, a clean page list and a dirty page list, and first looks for a victim in the clean page list. When no clean page is found there, it evicts the dirty page with the highest number of erase operations within the window of the dirty page list. We show through simulation that the proposed schemes reduce the number of erase operations and improve wear-leveling compared with existing schemes such as LRU.
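
The CFLRU/E selection rule as described in this abstract (the least recently used clean page within a window; otherwise the dirty page with the most erases) can be sketched directly. The window size, the data layout, and the example values below are illustrative assumptions.

```python
def cflru_e_victim(lru_list, window):
    """lru_list: pages ordered from most to least recently used;
    each page is a dict with 'id', 'dirty', 'erase_count'.
    The window is the tail of the list (its least recently used part)."""
    candidates = lru_list[-window:]
    # 1) Prefer the least recently used clean page inside the window.
    for page in reversed(candidates):
        if not page['dirty']:
            return page
    # 2) No clean page in the window: evict the dirty page whose flash
    #    block has the highest erase count (the CFLRU/E rule).
    return max(candidates, key=lambda p: p['erase_count'])

if __name__ == "__main__":
    lru = [  # most recently used first
        {'id': 3, 'dirty': False, 'erase_count': 2},
        {'id': 1, 'dirty': True,  'erase_count': 4},
        {'id': 2, 'dirty': True,  'erase_count': 9},
        {'id': 4, 'dirty': True,  'erase_count': 7},
    ]
    print(cflru_e_victim(lru, window=2)['id'])  # -> 2 (no clean page in window; most erases)
    print(cflru_e_victim(lru, window=4)['id'])  # -> 3 (clean page found within the window)
```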

Active Page Replacement Policy for DRAM & PCM Hybrid Memory System (DRAM&PCM 하이브리드 메모리 시스템을 위한 능동적 페이지 교체 정책)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.5 / pp.261-268 / 2018
  • Phase Change Memory (PCM), with its low power consumption and high density, is attracting attention as a next-generation nonvolatile memory to replace DRAM. However, PCM suffers from long latency and high energy consumption for write operations. A PCM & DRAM hybrid memory structure can overcome these disadvantages of PCM, but the page replacement algorithm becomes critical because the structure combines two memories with different characteristics. The goal of this paper is to manage referenced pages effectively while taking the characteristics of DRAM and PCM into account. To this end, the paper proposes a page replacement algorithm based on access frequency and recency. According to our simulation, the proposed algorithm for the DRAM&PCM hybrid can reduce the energy-delay product by around 10% compared with Clock-DWF and CLOCK-HM.

PMU (Performance Monitoring Unit)-Based Dynamic XIP (eXecute In Place) Technique for Embedded Systems (내장형 시스템을 위한 PMU (Performance Monitoring Unit) 기반 동적 XIP (eXecute In Place) 기법)

  • Kim, Dohun;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.3 / pp.158-166 / 2008
  • Mobile embedded systems increasingly adopt flash memory with XIP capability because it can reduce memory usage, power consumption, and software load time. XIP gives the processor direct access to ROM and flash memory. However, using XIP can needlessly degrade application performance because direct access to ROM and flash memory has higher latency than access to main memory. In this paper, we propose a memory management framework, dynamic XIP, which resolves this performance degradation. Using a constrained RAM cache, dynamic XIP dynamically changes the XIP region according to the page access pattern, reducing the execution-time and energy overheads inherent in XIP. The framework consists of a page profiler that gathers an application's memory access pattern using the PMU, and an XIP manager that decides whether a page should be accessed from main memory or from flash memory. The framework is implemented and evaluated in the Linux kernel. Our evaluation shows that it can reduce execution time by up to 25% and energy consumption by up to 22% compared with the XIP-only configuration adopted in typical mobile embedded systems. Moreover, our modified LRU algorithm with code-page filters reduces execution time and energy consumption by up to 90% and 80%, respectively, compared with applying the existing LRU algorithm to dynamic XIP.
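
A highly simplified view of the decision the XIP manager makes: given per-page access counts collected by a PMU-based profiler, keep the hottest code pages in the limited RAM cache and execute the rest in place from flash. The counter source, cache size, and selection rule below are illustrative assumptions, not the implementation described in the paper.

```python
import heapq

RAM_CACHE_PAGES = 2   # size of the constrained RAM cache (illustrative)

def partition_pages(access_counts, cache_pages=RAM_CACHE_PAGES):
    """access_counts: {code page id: access count from the PMU profiler}.
    Returns (pages cached in RAM, pages executed in place from flash)."""
    cached = set(heapq.nlargest(cache_pages, access_counts,
                                key=access_counts.get))
    xip = set(access_counts) - cached
    return cached, xip

if __name__ == "__main__":
    # Hypothetical profile: page id -> number of sampled instruction fetches.
    profile = {'text_p0': 1200, 'text_p1': 15, 'text_p2': 980, 'text_p3': 40}
    ram, flash = partition_pages(profile)
    print("copy to RAM cache:", sorted(ram))     # hot pages
    print("execute in place :", sorted(flash))   # cold pages stay in flash
```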

Effect of ASLR on Memory Duplicate Ratio in Cache-based Virtual Machine Live Migration

  • Piao, Guangyong;Oh, Youngsup;Sung, Baegjae;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications / v.9 no.4 / pp.205-210 / 2014
  • Cache-based live migration uses a cache accessible from both sides (remote and local) to reduce virtual machine migration time by transferring only non-duplicate data. However, address space layout randomization (ASLR) has been shown to reduce the memory duplicate ratio between the memory being migrated and the migration cache. In this paper, we analyze the behavior of ASLR to find out how it changes the physical memory contents of virtual machines. We found that, among the six virtual memory regions, only modifications to the stack influence the page-level memory duplicate ratio. Experiments showed that: (1) ASLR does not shift the heap region at sub-page granularity; (2) with ASLR enabled, the stack reduces the amount of duplicate pages among VMs performing input replay by around 40MB; and (3) the size of memory pages that can be reconstructed from a freshly booted state is also reduced by about 60MB under ASLR. Based on these observations, the stack region can be omitted when applying cache-based migration, while for the other five regions even a coarse page-level redundancy detection method can identify most of the duplicate memory contents.
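
The closing observation, skip the stack region and rely on coarse page-level redundancy detection for the remaining regions, can be illustrated with a simple page-hash comparison. The 4 KiB page size and the hashing scheme are common choices used here for illustration only; the region names and data are made up.

```python
import hashlib

PAGE_SIZE = 4096  # bytes; a typical page size, used here for illustration

def page_hashes(memory, skip_regions=("stack",)):
    """memory: {region name: bytes}. Returns hashes of all pages, omitting
    regions (e.g. the stack) whose contents ASLR makes unstable."""
    hashes = set()
    for region, data in memory.items():
        if region in skip_regions:
            continue
        for off in range(0, len(data), PAGE_SIZE):
            page = data[off:off + PAGE_SIZE]
            hashes.add(hashlib.sha1(page).hexdigest())
    return hashes

def duplicate_pages(vm_memory, migration_cache):
    """Pages of the migrating VM already present in the migration cache
    (these need not be transferred over the network)."""
    return page_hashes(vm_memory) & page_hashes(migration_cache)

if __name__ == "__main__":
    # Hypothetical two-region snapshots of a VM and of the migration cache.
    vm    = {"heap": b"A" * PAGE_SIZE + b"B" * PAGE_SIZE, "stack": b"S" * PAGE_SIZE}
    cache = {"heap": b"A" * PAGE_SIZE + b"C" * PAGE_SIZE, "stack": b"T" * PAGE_SIZE}
    print(len(duplicate_pages(vm, cache)), "duplicate page(s) found")  # 1
```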