Title/Summary/Keyword: Write

Search Results: 1,585

Dead Block-Aware Adaptive Write Scheme for MLC STT-MRAM Caches

  • Hong, Seokin
    • Journal of the Korea Society of Computer and Information / v.25 no.3 / pp.1-9 / 2020
  • In this paper, we propose an efficient adaptive write scheme that improves the performance of write operations in MLC STT-MRAM caches. The key idea of the proposed scheme is to perform the write operation fast when the target MLC STT-MRAM cells contain a dead block. Even if the fast write operation on the MLC STT-MRAM evicts a cache block from the MLC STT-MRAM cells, its performance impact is low if the evicted block is a dead block that will not be used in the future. Through experimental evaluation with a memory simulator, we show that the proposed adaptive write scheme improves the performance of the MLC STT-MRAM caches by 17% on average.
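
The abstract describes the decision only at a high level; below is a minimal Python sketch of how such a dead-block-aware write decision might look inside a cache simulator. The predictor, the latency values, and all names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a dead-block-aware adaptive write decision for an
# MLC STT-MRAM cache simulator. All names and latencies are assumptions.

FAST_WRITE_LATENCY = 10   # cycles, assumed: fast write that may corrupt the paired block
SLOW_WRITE_LATENCY = 20   # cycles, assumed: safe two-step MLC write

class DeadBlockPredictor:
    """Toy predictor: a block is predicted dead if it has not been
    re-referenced for more than `threshold` accesses."""
    def __init__(self, threshold=256):
        self.threshold = threshold
        self.last_access = {}   # block address -> global access count at last use
        self.clock = 0

    def touch(self, addr):
        self.clock += 1
        self.last_access[addr] = self.clock

    def is_dead(self, addr):
        return self.clock - self.last_access.get(addr, 0) > self.threshold

def mlc_write(addr, paired_addr, predictor):
    """Choose fast or slow write depending on whether the block sharing the
    same MLC cells (paired_addr) is predicted dead."""
    predictor.touch(addr)
    if predictor.is_dead(paired_addr):
        # Fast write: may evict the paired block, but it is predicted dead,
        # so the expected performance impact is low.
        return FAST_WRITE_LATENCY
    # Slow write: preserves the paired block.
    return SLOW_WRITE_LATENCY

if __name__ == "__main__":
    p = DeadBlockPredictor(threshold=2)
    for a in (0x10, 0x20, 0x30, 0x40):
        p.touch(a)
    print(mlc_write(0x50, paired_addr=0x10, predictor=p))  # paired block stale -> fast (10)
```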

CAWR: Buffer Replacement with Channel-Aware Write Reordering Mechanism for SSDs

  • Wang, Ronghui; Chen, Zhiguang; Xiao, Nong; Zhang, Minxuan; Dong, Weihua
    • ETRI Journal / v.37 no.1 / pp.147-156 / 2015
  • A typical solid-state drive contains several independent channels that can be operated in parallel. To exploit this channel-level parallelism, a variety of works have proposed splitting consecutive write sequences into small segments and scheduling them to different channels. This scheme exploits the parallelism but breaks the spatial locality of write traffic, and can therefore significantly degrade the efficiency of garbage collection. This paper proposes a channel-aware write reordering (CAWR) mechanism to schedule write requests to different channels more intelligently. The mechanism encapsulates correlated pages into a cluster beforehand. All pages belonging to a cluster are scheduled to the same channel to exploit spatial locality, while different clusters are scheduled to different channels to exploit the parallelism. As CAWR covers both garbage collection and I/O performance, it outperforms existing schemes significantly. Trace-driven simulation results demonstrate that the CAWR mechanism reduces the response time by 26% on average and decreases the valid page copies by 10% on average, while achieving a hit ratio similar to that of existing mechanisms.
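
As a rough illustration of the clustering idea, the following Python sketch dispatches writes so that pages of one cluster share a channel while clusters are spread round-robin across channels. The channel count, the `cluster_of` mapping, and the dispatch interface are assumptions for illustration; the paper's CAWR mechanism builds clusters from observed correlation and integrates with buffer replacement, which is not modelled here.

```python
# Minimal sketch (not the authors' code) of channel-aware write reordering:
# pages in the same cluster go to one channel to keep spatial locality,
# while clusters are spread round-robin over the channels for parallelism.

from collections import defaultdict
from itertools import cycle

NUM_CHANNELS = 4  # assumed channel count

def dispatch(write_requests, cluster_of):
    """write_requests: iterable of logical page numbers.
    cluster_of: mapping from page number to a cluster id, built beforehand
    from observed correlation."""
    channel_of_cluster = {}
    next_channel = cycle(range(NUM_CHANNELS))
    per_channel_queue = defaultdict(list)

    for page in write_requests:
        cluster = cluster_of.get(page, page)          # unclustered pages form singleton clusters
        if cluster not in channel_of_cluster:
            channel_of_cluster[cluster] = next(next_channel)
        per_channel_queue[channel_of_cluster[cluster]].append(page)
    return per_channel_queue

if __name__ == "__main__":
    # pages 0-3 are correlated (cluster A), pages 100-101 are correlated (cluster B)
    clusters = {0: "A", 1: "A", 2: "A", 3: "A", 100: "B", 101: "B"}
    print(dispatch([0, 100, 1, 101, 2, 3, 7], clusters))
```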

Writing papers: literary and scientific

  • Hwang, Kun
    • Journal of Trauma and Injury / v.35 no.3 / pp.145-150 / 2022
  • This paper aims to summarize why I write, how to find a motif, and how to polish and finish a manuscript. For William Carlos Williams, practicing medicine and writing poetry were two parts of a single whole rather than separate pursuits; the two complemented each other. Medicine stimulated Williams to become a poet, while poetry was also the driving force behind his role as a doctor. Alexander Pope, the 18th-century English poet, wrote a poem entitled "The Epistle to Dr. Arbuthnot" that was dedicated to a friend who was both a poet and a physician. In this poem, we receive an answer to the questions "Why do you write? Why do you publish?" Pope writes, "Happy my studies, when by these approv'd! / Happier their author, when by these belov'd! / From these the world will judge of men and books." When I write, I first reflect on whether I only want to write something for its own sake, like "a dog chasing its own tail," instead of making a more worthwhile contribution. When my colleagues ask me, "Why do you write essays as well as scientific papers?" I usually answer, "Writing is a process of healing for me; I cannot bear myself unless I write." When the time comes to sit down and put pen to paper, I remind myself of the saying festina lente (in German, Ohne Hast, aber ohne Rast, corresponding to the English proverb "more haste, less speed"). If I am utterly exhausted when I finish writing, then I know that I have had my vision.

Energy-Performance Efficient 2-Level Data Cache Architecture for Embedded System (내장형 시스템을 위한 에너지-성능 측면에서 효율적인 2-레벨 데이터 캐쉬 구조의 설계)

  • Lee, Jong-Min; Kim, Soon-Tae
    • Journal of KIISE: Computer Systems and Theory / v.37 no.5 / pp.292-303 / 2010
  • On-chip cache memories play an important role in both performance and energy consumption in resource-constrained embedded systems by filtering out many off-chip memory accesses. We propose a 2-level data cache architecture with a low energy-delay product tailored for embedded systems. The L1 data cache is small and direct-mapped, and employs a write-through policy. In contrast, the L2 data cache is set-associative and adopts a write-back policy. Consequently, the L1 data cache is accessed in one cycle and is able to provide high cache bandwidth, while the L2 data cache is effective in reducing the global miss rate. To reduce the penalty of the high miss rate caused by the small L1 cache and the power consumption of address generation, we propose an ECP (Early Cache hit Predictor) scheme. The ECP predicts whether the L1 cache has the requested data, using both fast address generation and L1 cache hit prediction. To reduce the high energy cost of accessing the L2 data cache due to heavy write-through traffic from the write buffer placed between the two cache levels, we propose a one-way write scheme. From our simulation-based experiments using a cycle-accurate simulator and embedded benchmarks, the proposed 2-level data cache architecture shows average improvements of 3.6% in overall system performance and 50% in data cache energy consumption.
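
A hedged sketch of the two-level lookup flow with an early-hit prediction step follows. The toy predictor here simply remembers recently hit L1 lines; the paper's ECP instead relies on fast address generation, so treat the heuristic, the latencies, and the class names as assumptions.

```python
# Illustrative sketch of an "early cache hit predictor" style lookup flow for
# a 2-level data cache. Latencies and the prediction heuristic are assumed.

L1_LINE_BITS = 5  # assumed 32-byte lines

class TwoLevelCache:
    def __init__(self, predictor_size=16):
        self.l1 = set()              # line addresses resident in the small L1
        self.l2 = set()              # line addresses resident in the larger L2
        self.recent_hits = []        # tiny FIFO of recently hit L1 lines (the toy predictor)
        self.predictor_size = predictor_size

    def predict_l1_hit(self, line):
        return line in self.recent_hits

    def access(self, addr):
        line = addr >> L1_LINE_BITS
        if self.predict_l1_hit(line) and line in self.l1:
            latency = 1                       # predicted and actual L1 hit
        elif line in self.l1:
            latency = 1                       # unpredicted L1 hit
        elif line in self.l2:
            latency = 5                       # assumed L2 hit latency
            self.l1.add(line)                 # fill the L1 from the L2
        else:
            latency = 50                      # assumed external memory latency
            self.l2.add(line)
            self.l1.add(line)
        self.recent_hits = ([line] + self.recent_hits)[: self.predictor_size]
        return latency

if __name__ == "__main__":
    c = TwoLevelCache()
    print([c.access(a) for a in (0x0, 0x4, 0x100, 0x0)])   # [50, 1, 50, 1]
```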

Trickle Write-Back Scheme for Cache Management in Mobile Computing Environments (이동 컴퓨팅 환경에서 캐쉬 관리를 위한 TWB 기법)

  • Kim, Moon-Jeong; Eom, Young-Ik
    • Journal of KIISE: Computer Systems and Theory / v.27 no.1 / pp.89-100 / 2000
  • Recently, studies on mobile computing environments that enable mobile hosts to move while retaining their network connections have been in progress. In these mobile computing environments, one of the necessary components is a distributed file system supporting mobile hosts, and there are several issues in the design and implementation of such a shared file system. Among these issues are problems caused by network traffic on the limited bandwidth of wireless media, as well as consistency maintenance issues caused by update conflicts on shared files in the distributed file system. In this paper, we propose the TWB (Trickle Write-Back) scheme, which utilizes weak connectivity for cache management of mobile clients. The scheme focuses on saving bandwidth, reducing waste of disk space, and reducing risks caused by disconnection. To achieve these goals, the scheme lets mobile clients write back intermediate states periodically or on demand while delaying unnecessary write-backs. Meanwhile, the scheme is based on the existing distributed file system architecture and provides transparency.
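
The following Python sketch illustrates the general trickle write-back idea under stated assumptions: dirty entries are coalesced and flushed in small batches, either periodically or on demand, to spare the weak wireless link. It is not the paper's implementation; the period, batch size, and interface are invented for illustration.

```python
# Rough sketch of a trickle write-back policy for a mobile client cache:
# overwrites of the same file are coalesced so only the latest state is sent,
# and flushes happen in small batches, periodically or on demand.

import time

class TrickleWriteBackCache:
    def __init__(self, period_s=30.0, batch_size=4):
        self.dirty = {}              # path -> latest data (older unsent versions coalesced)
        self.period_s = period_s
        self.batch_size = batch_size
        self.last_flush = time.monotonic()

    def write(self, path, data):
        self.dirty[path] = data      # overwrite: the earlier unsent version is dropped
        if time.monotonic() - self.last_flush >= self.period_s:
            self.flush()             # periodic trickle

    def flush(self, send=lambda path, data: print("send", path, len(data))):
        # Trickle out a limited batch per flush to avoid saturating the weak link.
        for path in list(self.dirty)[: self.batch_size]:
            send(path, self.dirty.pop(path))
        self.last_flush = time.monotonic()

if __name__ == "__main__":
    c = TrickleWriteBackCache(period_s=60.0)
    c.write("/doc/a.txt", b"v1")
    c.write("/doc/a.txt", b"v2")     # coalesced: only the latest version will be sent
    c.flush()                        # on-demand flush sends a.txt once
```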


Design of an Asynchronous Data Cache with FIFO Buffer for Write Back Mode (Write Back 모드용 FIFO 버퍼 기능을 갖는 비동기식 데이터 캐시)

  • Park, Jong-Min; Kim, Seok-Man; Oh, Myeong-Hoon; Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association / v.10 no.6 / pp.72-79 / 2010
  • In this paper, we propose a data cache architecture with a write buffer for a 32-bit asynchronous embedded processor. The data cache consists of a CAM and data memory. It accelerates the data access cycle between the processor and the main memory, which improves processor performance. The proposed data cache has 8 KB of cache memory. The cache uses 4-way set-associative mapping with a line size of 4 words (16 bytes) and a pseudo-LRU replacement algorithm for data replacement in the memory. A dirty register and a write buffer are used for the write policy of the cache. The designed data cache is synthesized to a gate-level design using a 0.13-µm process. Its average hit rate is 94%, and the system performance is improved by 46.53%. The proposed data cache with a write buffer is very suitable for a 32-bit asynchronous processor.
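
The paper does not publish code, but the pseudo-LRU policy it mentions is commonly implemented as a small decision tree per set. The sketch below shows a standard tree-based pseudo-LRU for one 4-way set; the bit assignment is an assumption, and the example output shows that the scheme only approximates true LRU.

```python
# Standard tree-based pseudo-LRU for one 4-way set (illustrative, not the
# authors' design): bit0 selects the LRU half, bit1 the LRU way within
# ways 0-1, and bit2 the LRU way within ways 2-3.

class PseudoLRU4:
    def __init__(self):
        self.bits = [0, 0, 0]

    def victim(self):
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

    def touch(self, way):
        # Flip the tree bits so they point away from the way just used.
        if way < 2:
            self.bits[0] = 1
            self.bits[1] = 1 - way
        else:
            self.bits[0] = 0
            self.bits[2] = 3 - way

if __name__ == "__main__":
    plru = PseudoLRU4()
    for w in (0, 1, 2):        # touch ways 0, 1, 2 in order
        plru.touch(w)
    # Prints 0: tree PLRU only approximates LRU, so it may not pick the
    # truly oldest way (way 3 here) but still avoids recently used ways.
    print(plru.victim())
```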

Analyzing Virtual Memory Write Characteristics and Designing Page Replacement Algorithms for NAND Flash Memory (NAND 플래시메모리를 위한 가상메모리의 쓰기 참조 분석 및 페이지 교체 알고리즘 설계)

  • Lee, Hye-Jeong; Bahn, Hyo-Kyung
    • Journal of KIISE: Computer Systems and Theory / v.36 no.6 / pp.543-556 / 2009
  • Recently, NAND flash memory has been used as the swap device of virtual memory as well as the file storage of mobile systems. Since temporal locality is dominant in page references of virtual memory, LRU and the CLOCK algorithms that approximate it are widely used. However, the cost of a write operation in flash memory is much larger than that of a read operation, and thus a page replacement algorithm should consider this factor. This paper analyzes virtual memory read and write reference patterns individually, and observes a ranking inversion problem of temporal locality in write references that is not observed in read references. Based on this observation, we present a new page replacement algorithm that considers write frequency as well as temporal locality in estimating write reference behavior. The new algorithm dynamically allocates memory space to read and write operations based on their reference patterns and I/O costs. Though the algorithm has no external parameter to tune, it supports optimized implementations for virtual memory systems, and it performs 20-66% better than the CLOCK, CAR, and CFLRU algorithms.
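
As an illustration of weighting write frequency against recency, the sketch below scores candidate victim pages by an assumed flash write/read cost ratio. The scoring function, the cost values, and the `Page` fields are assumptions; the paper's algorithm additionally partitions memory between read and write references dynamically, which is not modelled here.

```python
# Hedged sketch of cost-aware victim selection for a flash-backed swap device:
# recency (temporal locality) and write frequency are combined, with flash
# writes weighted much more heavily than reads.

WRITE_COST = 8   # assumed relative cost of a flash write
READ_COST = 1    # assumed relative cost of a flash read

class Page:
    def __init__(self, pno):
        self.pno = pno
        self.last_access = 0     # for temporal locality (recency)
        self.write_count = 0     # write frequency
        self.dirty = False

def victim(pages, clock):
    def eviction_cost(p):
        recency = clock - p.last_access               # older pages are cheaper to evict
        refetch = WRITE_COST * p.write_count if p.dirty else READ_COST
        return refetch / (1 + recency)                # lowest score = best victim
    return min(pages, key=eviction_cost)

if __name__ == "__main__":
    clock = 100
    a, b = Page(1), Page(2)
    a.last_access, a.write_count, a.dirty = 10, 5, True   # old but write-hot
    b.last_access = 90                                     # recent, read-only
    # Prints 2: the recently read page is evicted because re-fetching it is
    # far cheaper than writing back and re-dirtying the write-hot page.
    print(victim([a, b], clock).pno)
```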

The Early Write Back Scheme For Write-Back Cache (라이트 백 캐쉬를 위한 빠른 라이트 백 기법)

  • Chung, Young-Jin; Lee, Kil-Whan; Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.11 / pp.101-109 / 2009
  • Generally, the depth cache and pixel cache of 3D graphics hardware are designed with a write-back scheme for efficient use of memory bandwidth. In addition, write-after-read operations on the same address, or write-only operations, occur frequently in 3D graphics caches. When a cache miss is detected, an access to the external memory for the write-back operation and another access for handling the cache miss are performed simultaneously. Under frequent cache misses, since the memory access bandwidth is limited, the access time of the external memory increases due to the memory bottleneck; as a result, the overall performance of the processor or IP decreases and peak power consumption increases. In this paper, we propose a novel early write-back cache architecture to solve the problems described above. The proposed architecture controls when to access the external memory to copy back valid data blocks, and it can improve cache performance with the same hit ratio and the same cache capacity. As a result, the proposed architecture mitigates the memory bottleneck problem by preventing bursts of intensive memory accesses. We evaluated the proposed architecture on the 3D graphics Z cache and pixel cache in an SoC environment where an ARM11, a 3D graphics accelerator, and various IPs are embedded. The simulation results indicate a performance increase of up to 75% with various simulation vectors.
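
A minimal sketch of the early write-back idea follows, under assumptions: dirty lines are cleaned during idle memory cycles so that a later miss to the same line needs only a fill instead of a write-back plus a fill on the critical path. The structure and the access counting are illustrative, not the authors' hardware design.

```python
# Sketch of an early write-back policy: dirty lines are copied to external
# memory opportunistically during idle bus cycles, moving the write-back
# access off the critical path of a subsequent cache miss.

class Line:
    def __init__(self):
        self.valid = False
        self.dirty = False
        self.tag = None

class EarlyWriteBackCache:
    def __init__(self, num_lines=4):
        self.lines = [Line() for _ in range(num_lines)]
        self.mem_accesses = 0

    def idle_cycle(self):
        # Opportunistically clean one dirty line while the memory bus is idle.
        for line in self.lines:
            if line.valid and line.dirty:
                self.mem_accesses += 1   # early write-back
                line.dirty = False
                return

    def miss(self, index, tag, is_write):
        line = self.lines[index]
        if line.valid and line.dirty:
            self.mem_accesses += 1       # forced write-back on the critical path
        self.mem_accesses += 1           # fill from external memory
        line.valid, line.dirty, line.tag = True, is_write, tag

if __name__ == "__main__":
    c = EarlyWriteBackCache()
    c.miss(0, tag=1, is_write=True)      # line 0 becomes dirty
    c.idle_cycle()                       # cleaned early, off the critical path
    c.miss(0, tag=2, is_write=False)     # now only a fill is needed
    # Prints 3: the total access count is unchanged, but the write-back
    # happened during idle time instead of delaying the second miss.
    print(c.mem_accesses)
```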

An Advanced Embedded SRAM Cell with Expanded Read/Write Stability and Leakage Reduction

  • Chung, Yeon-Bae
    • Journal of IKEEE / v.16 no.3 / pp.265-273 / 2012
  • Data stability and leakage power dissipation have become critical issues in scaled SRAM design. In this paper, an advanced 8T SRAM cell that improves the read and write stability of the data storage elements as well as reducing the leakage current in the idle mode is presented. During the read operation, the bit-cell keeps the noise-vulnerable 'low' data node voltage close to the ground level, thus producing the near-ideal voltage transfer characteristics essential for robust read functionality. In the write operation, a negative bias on the cell facilitates changing the contents of the bit. Unlike the conventional 6T cell, there is no conflicting read and write requirement on sizing the transistors. In the standby mode, the built-in stacked device in the 8T cell reduces the leakage current significantly. The 8T SRAM cell implemented in a 130 nm CMOS technology demonstrates almost 100% higher read stability while providing 20% better write ability at the 1.2 V typical condition, and a 45% reduction in leakage power consumption compared to the standard 6T cell. The stability enhancement and leakage power reduction provided by the proposed bit-cell are confirmed under process, voltage, and temperature variations.

Hot Data Identification For Flash Based Storage Systems Considering Continuous Write Operation

  • Lee, Seung-Woo; Ryu, Kwan-Woo
    • Journal of the Korea Society of Computer and Information / v.22 no.2 / pp.1-7 / 2017
  • Recently, NAND flash memory, which is used as a storage medium, has been rapidly replacing the HDD (hard disk drive) due to various advantages such as fast access speed, low power, and easy portability. In order to apply NAND flash memory to a computer system, a Flash Translation Layer (FTL) is indispensable. The FTL provides a number of features such as address mapping, garbage collection, wear leveling, and hot data identification. In particular, hot data identification is an algorithm that identifies the specific pages where data updates occur frequently. Hot data identification helps to improve overall performance by identifying and managing hot data separately. The MHF (multi-hash framework) technique, a well-known hot data identification technique, records the number of write operations in memory, and the recorded value is evaluated to judge whether the data is hot. However, counting the number of write requests alone is not enough to judge a page as a hot data page. In this paper, we propose a hot data identification scheme that considers not only the number of write requests but also the persistence of the write requests.
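
To make the combined criterion concrete, here is a hedged Python sketch of multi-hash write counting extended with a persistence check over recent time windows, in the spirit of the proposal. The hash function, table sizes, window bitmap, and thresholds are all assumptions rather than the paper's parameters.

```python
# Sketch of multi-hash-based hot data identification with a persistence check:
# a page is classified hot only if its write count is high AND it keeps being
# written across recent time windows (not just in a single burst).

import hashlib

NUM_TABLES = 2
TABLE_SIZE = 1024
COUNT_THRESHOLD = 4       # writes needed to look "frequent"
PERSIST_THRESHOLD = 3     # of the last 4 windows, how many must contain a write

class HotDataIdentifier:
    def __init__(self):
        self.counts = [[0] * TABLE_SIZE for _ in range(NUM_TABLES)]
        self.history = {}                 # page -> bitmap of recent windows with writes

    def _slots(self, page):
        for t in range(NUM_TABLES):
            h = hashlib.blake2b(f"{t}:{page}".encode(), digest_size=4).digest()
            yield t, int.from_bytes(h, "big") % TABLE_SIZE

    def record_write(self, page):
        for t, s in self._slots(page):
            self.counts[t][s] += 1
        self.history[page] = self.history.get(page, 0) | 1   # written in current window

    def new_window(self):
        # Shift every page's history; old windows age out of the 4-bit bitmap.
        self.history = {p: (bits << 1) & 0b1111 for p, bits in self.history.items()}

    def is_hot(self, page):
        frequent = min(self.counts[t][s] for t, s in self._slots(page)) >= COUNT_THRESHOLD
        persistent = bin(self.history.get(page, 0)).count("1") >= PERSIST_THRESHOLD
        return frequent and persistent

if __name__ == "__main__":
    hid = HotDataIdentifier()
    for window in range(4):
        hid.record_write(7)            # page 7: one write in every window
        if window == 0:
            for _ in range(4):         # page 9: a single burst, then idle
                hid.record_write(9)
        hid.new_window()
    print(hid.is_hot(7), hid.is_hot(9))   # True False: only page 7 is persistent
```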