• Title/Summary/Keyword: DRAM buffer

Minimizing method of initial time for ECC DRAM (ECC를 적용한 DRAM의 초기화 시간 최소화 방법)

  • Roh, Jong-Sung;Kim, Jong-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.446-448
    • /
    • 2006
  • DRAM with ECC is widely used, and DRAM capacity keeps growing. As a result, the DRAM initialization time, in particular the time needed to set the whole memory area to the typical initial value 0, increases. This paper introduces a method that minimizes this memory initialization time without any additional hardware by exploiting characteristics of the DRAM and the DRAM controller. Conservative reordering, which eliminates DRAM read time and makes use of the write buffer, reduces the time to initialize the whole DRAM area to 0 by 95.36% for DDR DRAM and by 93.41% for Rambus DRAM.
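
  A minimal sketch of the idea, not the authors' code: with ECC, a partial write normally forces a read-modify-write so the controller can recompute check bits over the full codeword, whereas zero-initialization writes full, aligned codewords and can skip the read entirely. The per-codeword cycle costs below are hypothetical placeholders, not figures from the paper.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical per-codeword cycle costs; real values depend on the
     * device and controller, not on this sketch. */
    #define READ_CYCLES   10   /* read needed only for read-modify-write */
    #define WRITE_CYCLES  10   /* write of one ECC codeword (data+check) */

    /* Initialization modeled as one operation per ECC codeword. */
    static uint64_t init_cycles(uint64_t codewords, int skip_read)
    {
        uint64_t per_cw = WRITE_CYCLES + (skip_read ? 0 : READ_CYCLES);
        return codewords * per_cw;
    }

    int main(void)
    {
        uint64_t codewords = (1ULL << 30) / 64;    /* 1 GiB, 64 B codewords */
        uint64_t rmw = init_cycles(codewords, 0);  /* naive read-modify-write */
        uint64_t wr  = init_cycles(codewords, 1);  /* write-only, buffered */

        printf("read-modify-write init: %llu cycles\n", (unsigned long long)rmw);
        printf("write-only init:        %llu cycles\n", (unsigned long long)wr);
        printf("reduction: %.1f%%\n", 100.0 * (rmw - wr) / rmw);
        return 0;
    }

  The much larger reductions reported in the abstract also rely on the controller's write buffering and command reordering, which this toy model leaves out.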

Performance Evaluation of HMB-Supported DRAM-Less NVMe SSDs (HMB를 지원하는 DRAM-Less NVMe SSD의 성능 평가)

  • Kim, Kyu Sik;Kim, Tae Seok
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.8 no.7
    • /
    • pp.159-166
    • /
    • 2019
  • Unlike conventional solid-state drives, DRAM-less SSDs omit DRAM in order to lower cost and power consumption. They obviously suffer performance degradation due to the lack of DRAM in the controller, and this problem can be alleviated by utilizing the host memory buffer (HMB) feature of NVMe, which allows an SSD to use part of the host's DRAM. In this paper, we show that commercial DRAM-less SSDs indeed exhibit lower I/O performance than SSDs with DRAM, but that their performance can be improved by utilizing the HMB feature. Through various experiments and analysis, we also show that DRAM-less SSDs mainly exploit the host DRAM as a mapping table cache, rather than as a read cache or write buffer, to improve I/O performance.
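
  A simplified sketch of the mapping-table-cache idea, under my own assumptions; the structure and names are illustrative and are not NVMe driver or firmware code. It only shows how a DRAM-less controller could keep logical-to-physical entries in a host-lent buffer and fall back to NAND on a miss.

    #include <stdint.h>
    #include <stdlib.h>
    #include <stdio.h>

    /* One logical-to-physical (L2P) mapping entry, as a DRAM-less controller
     * might cache it in the NVMe Host Memory Buffer (HMB). Illustrative only. */
    typedef struct {
        uint32_t lpn;    /* logical page number (tag)    */
        uint32_t ppn;    /* physical page number in NAND */
        uint8_t  valid;
    } l2p_entry;

    typedef struct {
        l2p_entry *slots;   /* backed by host DRAM lent via the HMB */
        size_t     nslots;
    } hmb_l2p_cache;

    /* Direct-mapped lookup: 1 on hit (fills *ppn), 0 on miss; a miss would
     * require reading the mapping page from NAND. */
    static int l2p_lookup(const hmb_l2p_cache *c, uint32_t lpn, uint32_t *ppn)
    {
        const l2p_entry *e = &c->slots[lpn % c->nslots];
        if (e->valid && e->lpn == lpn) { *ppn = e->ppn; return 1; }
        return 0;
    }

    static void l2p_insert(hmb_l2p_cache *c, uint32_t lpn, uint32_t ppn)
    {
        l2p_entry *e = &c->slots[lpn % c->nslots];
        e->lpn = lpn; e->ppn = ppn; e->valid = 1;
    }

    int main(void)
    {
        hmb_l2p_cache c = { calloc(4096, sizeof(l2p_entry)), 4096 };
        uint32_t ppn;

        l2p_insert(&c, 42, 1234);
        printf("lpn 42 -> %s\n", l2p_lookup(&c, 42, &ppn) ? "hit" : "miss");
        printf("lpn 43 -> %s\n", l2p_lookup(&c, 43, &ppn) ? "hit" : "miss");
        free(c.slots);
        return 0;
    }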

Efficient DRAM Buffer Access Scheduling Techniques for SSD Storage System (SSD 스토리지 시스템을 위한 효율적인 DRAM 버퍼 액세스 스케줄링 기법)

  • Park, Jun-Su;Hwang, Yong-Joong;Han, Tae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.7
    • /
    • pp.48-56
    • /
    • 2011
  • Recently, the SSD (Solid State Disk), a storage device based on NAND flash memory, has been gradually replacing the HDD (Hard Disk Drive) in mobile devices, and a variety of research efforts are under way to find cost-effective ways to improve its performance. As the number of NAND flash channels is increased to raise bandwidth through parallel processing, the DRAM buffer, which acts as a buffer cache between the host (PC) and the NAND flash, becomes the bottleneck. To resolve this problem, this paper proposes an efficient, low-cost scheme that increases SSD performance by improving DRAM buffer bandwidth through scheduling techniques that utilize DRAM multi-banks. When the host and the NAND flash multi-channels request access to the DRAM buffer concurrently, the proposed technique checks their destinations and schedules the requests appropriately, considering the properties of DRAM. It significantly reduces the overhead of bank activation time and row latency and thus optimizes DRAM buffer bandwidth utilization. The results reveal that the proposed technique improves SSD performance by 47.4% for read and 47.7% for write operations compared to conventional methods, with negligible changes and increases in the hardware.
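
  A toy illustration of bank-aware issue, written by me under stated assumptions (bank mapping, request mix, and timing are all made up): consecutive requests are steered to different DRAM banks so that one bank's row activation overlaps another bank's data burst. It is not the paper's scheduler.

    #include <stdio.h>
    #include <stdint.h>

    #define NBANKS 4

    typedef struct {
        const char *src;     /* "host" or the NAND channel issuing the request */
        uint32_t    addr;    /* DRAM buffer address                            */
    } req;

    static int bank_of(uint32_t addr) { return (addr >> 12) & (NBANKS - 1); }

    /* Pick the next pending request whose bank differs from the last one
     * issued, so activations of different banks can overlap. */
    static int pick_next(const req *q, const int *done, int n, int last_bank)
    {
        int fallback = -1;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            if (fallback < 0) fallback = i;
            if (bank_of(q[i].addr) != last_bank) return i;
        }
        return fallback;
    }

    int main(void)
    {
        req q[] = {
            {"host",  0x0000}, {"host",  0x0100},   /* both map to bank 0 */
            {"nand0", 0x1000}, {"nand1", 0x2000},   /* banks 1 and 2      */
        };
        int n = sizeof q / sizeof q[0], done[4] = {0}, last_bank = -1;

        for (int issued = 0; issued < n; issued++) {
            int i = pick_next(q, done, n, last_bank);
            last_bank = bank_of(q[i].addr);
            done[i] = 1;
            printf("issue %-5s addr 0x%04x (bank %d)\n",
                   q[i].src, (unsigned)q[i].addr, last_bank);
        }
        return 0;
    }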

Research on the Main Memory Access Count According to the On-Chip Memory Size of an Artificial Neural Network (인공 신경망 가속기 온칩 메모리 크기에 따른 주메모리 접근 횟수 추정에 대한 연구)

  • Cho, Seok-Jae;Park, Sungkyung;Park, Chester Sungchung
    • Journal of IKEEE
    • /
    • v.25 no.1
    • /
    • pp.180-192
    • /
    • 2021
  • One widely used algorithm for image recognition and pattern detection is the convolutional neural network (CNN). To efficiently handle convolution operations, which account for the majority of computations in the CNN, hardware accelerators are used to improve the performance of CNN applications. With these hardware accelerators, the CNN fetches data from off-chip DRAM, since the massive volume of data makes it difficult to derive performance improvements using only the memory inside the accelerator. In other words, data communication between off-chip DRAM and the memory inside the accelerator has a significant impact on the performance of CNN applications. In this paper, a simulator for the CNN is developed to analyze main memory (DRAM) accesses with respect to the size of the on-chip memory, or global buffer, inside the CNN accelerator. For AlexNet, one of the CNN architectures, simulations with increasing global buffer sizes show that a global buffer larger than 100 kB incurs about 0.8x the DRAM access count of a global buffer smaller than 100 kB.
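
  A rough back-of-the-envelope model of why DRAM traffic drops with a larger global buffer, assuming a simple tiling in which weights are either resident on chip or re-fetched per output tile. The layer shape loosely follows an AlexNet-like convolution layer, but all sizes, the tiling, and the crossover point are my own simplifications, not the paper's simulator or its 100 kB result.

    #include <stdio.h>
    #include <stdint.h>

    /* Crude DRAM-traffic model for one convolution layer: if the on-chip
     * global buffer can hold the whole weight set (plus an input tile),
     * weights are fetched once; otherwise they are streamed in again for
     * every output tile. */
    typedef struct {
        uint64_t ifmap_bytes;    /* input feature map  */
        uint64_t weight_bytes;   /* filter weights     */
        uint64_t ofmap_bytes;    /* output feature map */
        uint64_t ntiles;         /* output tiles processed sequentially */
    } conv_layer;

    static uint64_t dram_bytes(const conv_layer *l, uint64_t buf_bytes)
    {
        uint64_t weight_fetches =
            (l->weight_bytes + l->ifmap_bytes / l->ntiles <= buf_bytes)
                ? 1            /* weights stay resident in the global buffer */
                : l->ntiles;   /* weights re-fetched for every tile          */
        return l->ifmap_bytes + l->ofmap_bytes + weight_fetches * l->weight_bytes;
    }

    int main(void)
    {
        conv_layer layer = {
            .ifmap_bytes  = 96ULL * 27 * 27,       /* ~70 KB activations  */
            .weight_bytes = 256ULL * 96 * 5 * 5,   /* ~614 KB weights     */
            .ofmap_bytes  = 256ULL * 27 * 27,      /* ~187 KB activations */
            .ntiles       = 16,
        };

        for (uint64_t kb = 32; kb <= 1024; kb *= 2)
            printf("global buffer %4llu kB -> %llu DRAM bytes\n",
                   (unsigned long long)kb,
                   (unsigned long long)dram_bytes(&layer, kb * 1024));
        return 0;
    }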

New buffer mapping method for Hybrid SPM with Buffer sharing (하이브리드 SPM을 위한 버퍼 공유를 활용한 새로운 버퍼 매핑 기법)

  • Lee, Daeyoung;Oh, Hyunok
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.4
    • /
    • pp.209-218
    • /
    • 2016
  • This paper proposes a new lifetime-aware method for mapping the buffers of a synchronous dataflow (SDF) graph onto a hybrid memory system with DRAM and PRAM. Since the number of write operations PRAM can endure is limited, the number of samples written to PRAM is minimized to maximize the PRAM lifetime. We improve the utilization of DRAM by mapping more buffers onto DRAM through buffer sharing. The problem is formulated formally and solved optimally with answer set programming. In experiments, the buffer mapping method with buffer sharing improves the PRAM lifetime by 63%.
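
  The paper formulates the mapping as an answer set programming problem and solves it optimally; the fragment below is only a greedy approximation I wrote to illustrate the objective (place write-heavy buffers in DRAM first so that writes landing on PRAM are minimized). Buffer sharing and the lifetime model itself are not modeled here, and all buffer sizes and write counts are invented.

    #include <stdio.h>
    #include <stdlib.h>

    /* One SDF buffer: its size and how many samples are written to it per
     * iteration of the graph. Values are made up for illustration. */
    typedef struct {
        const char *name;
        int size_bytes;
        int writes_per_iter;
    } sdf_buffer;

    static int by_writes_desc(const void *a, const void *b)
    {
        return ((const sdf_buffer *)b)->writes_per_iter -
               ((const sdf_buffer *)a)->writes_per_iter;
    }

    int main(void)
    {
        sdf_buffer buf[] = {
            {"A->B", 4096, 800}, {"B->C", 8192, 120},
            {"C->D", 2048, 600}, {"D->E", 4096,  40},
        };
        int n = sizeof buf / sizeof buf[0];
        int dram_free = 6 * 1024;          /* small DRAM partition (bytes) */
        long pram_writes = 0;

        /* Greedy heuristic, NOT the paper's optimal ASP solution. */
        qsort(buf, n, sizeof buf[0], by_writes_desc);
        for (int i = 0; i < n; i++) {
            if (buf[i].size_bytes <= dram_free) {
                dram_free -= buf[i].size_bytes;
                printf("%s -> DRAM\n", buf[i].name);
            } else {
                pram_writes += buf[i].writes_per_iter;
                printf("%s -> PRAM\n", buf[i].name);
            }
        }
        printf("samples written to PRAM per iteration: %ld\n", pram_writes);
        return 0;
    }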

Hybrid Main Memory based Buffer Cache Scheme by Using Characteristics of Mobile Applications (모바일 애플리케이션의 특성을 이용한 하이브리드 메모리 기반 버퍼 캐시 정책)

  • Oh, Chansoo;Kang, Dong Hyun;Lee, Minho;Eom, Young Ik
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1314-1321
    • /
    • 2015
  • Mobile devices employ buffer cache mechanisms, just as in computer systems such as desktops or servers, to mitigate the performance gap between main memory and secondary storage. However, DRAM has a problem in that it accelerates battery consumption by performing refresh operations periodically to maintain the stored data. In this paper, we propose a novel buffer cache scheme to increase the battery lifecycle in mobile devices based on a hybrid main memory architecture consisting of DRAM and non-volatile PCM. We also suggest a new buffer cache policy that allocates buffers based on process states to optimize the performance and endurance of PCM. In particular, our algorithm allocates each page to the appropriate position corresponding to the state of the application that owns the page, and tries to ensure a rapid response of foreground applications even with a small amount of DRAM memory. The experimental results indicate that the proposed scheme reduces the elapsed time of foreground applications by 58% on average and power consumption by 23% on average without negatively impacting the performance of background applications.
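
  A minimal sketch of the allocation idea, in my own words and code rather than the authors' implementation: pages owned by the foreground application are placed in the scarce DRAM portion, while pages of background applications go to PCM, so the foreground stays responsive even with little DRAM. The DRAM budget and request mix are placeholders.

    #include <stdio.h>

    typedef enum { APP_FOREGROUND, APP_BACKGROUND } app_state;
    typedef enum { MEM_DRAM, MEM_PCM } mem_type;

    /* State-aware placement: keep the interactive (foreground) application's
     * pages in fast, refresh-hungry DRAM and push background pages to
     * non-volatile PCM to save refresh power. */
    static mem_type place_page(app_state owner, int *dram_pages_left)
    {
        if (owner == APP_FOREGROUND && *dram_pages_left > 0) {
            (*dram_pages_left)--;
            return MEM_DRAM;
        }
        return MEM_PCM;
    }

    int main(void)
    {
        int dram_budget = 2;   /* pretend only 2 DRAM page frames are free */
        app_state reqs[] = { APP_FOREGROUND, APP_BACKGROUND,
                             APP_FOREGROUND, APP_FOREGROUND };

        for (int i = 0; i < 4; i++) {
            mem_type m = place_page(reqs[i], &dram_budget);
            printf("request %d (%s) -> %s\n", i,
                   reqs[i] == APP_FOREGROUND ? "foreground" : "background",
                   m == MEM_DRAM ? "DRAM" : "PCM");
        }
        return 0;
    }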

A Buffer Cache Replacement Algorithm for Considering both Hybrid Main Memory and Storage (하이브리드 메인 메모리와 스토리지의 특성을 고려한 버퍼 캐시 교체 정책)

  • Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE
    • /
    • v.42 no.8
    • /
    • pp.947-953
    • /
    • 2015
  • PRAM is being considered as a potential successor to DRAM because of characteristics such as byte-addressability, non-volatility, and high density. To exploit these benefits, buffer cache replacement algorithms based on PRAM have been actively studied. However, most previous studies exploit the byte-level performance of PRAM only to a limited extent, focusing instead on its limited lifetime and slower access latency compared to DRAM. In this paper, we propose a novel buffer cache replacement algorithm that fully considers the byte-level performance of PRAM as well as the performance of secondary storage. To take advantage of small-sized writes on PRAM, the proposed scheme keeps pages that are frequently accessed with small writes on PRAM and allows selective page migration from DRAM to PRAM. As a result, our scheme significantly reduces the number of PRAM writes. Our experimental results on real workloads indicate that our scheme reduces the number of PRAM writes by up to 92% and improves performance by up to 62% compared to CLOCK.
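
  Illustrative decision logic only, not the authors' algorithm: pages that are frequently updated with small writes are migrated to PRAM to exploit its byte-addressable writes, while pages seeing large updates stay in DRAM. The thresholds are arbitrary values I chose for the sketch.

    #include <stdio.h>
    #include <stdint.h>

    typedef enum { ON_DRAM, ON_PRAM } location;

    typedef struct {
        uint32_t page_no;
        uint32_t write_count;      /* how often the page is written          */
        uint32_t avg_write_bytes;  /* average size of each write to the page */
        location loc;
    } cached_page;

    /* Placement rule sketched from the abstract: frequent, small writes
     * favor PRAM; the 64-byte / 8-write thresholds are illustrative. */
    static void maybe_migrate(cached_page *p)
    {
        if (p->loc == ON_DRAM && p->avg_write_bytes <= 64 && p->write_count >= 8)
            p->loc = ON_PRAM;
    }

    int main(void)
    {
        cached_page pages[] = {
            { 1, 20,   16, ON_DRAM },  /* hot metadata-like page, tiny writes */
            { 2,  3, 4096, ON_DRAM },  /* bulk data page, full-page writes    */
        };

        for (int i = 0; i < 2; i++) {
            maybe_migrate(&pages[i]);
            printf("page %u -> %s\n", (unsigned)pages[i].page_no,
                   pages[i].loc == ON_PRAM ? "PRAM" : "DRAM");
        }
        return 0;
    }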

PCM Main Memory for Low Power Embedded System (저전력 내장형 시스템을 위한 PCM 메인 메모리)

  • Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.10 no.6
    • /
    • pp.391-397
    • /
    • 2015
  • Nonvolatile memories have been investigated for use in the memory hierarchy to reduce energy consumption, because nonvolatile memory cells consume no leakage power. One difficulty, however, is that the endurance of most nonvolatile memory technologies is much shorter than that of conventional SRAM and DRAM, which has limited their use to the low levels of the memory hierarchy far from the CPU, e.g., disks. In this paper, we study the use of a new type of nonvolatile memory, Phase Change Memory (PCM), combined with a DRAM buffer, as the main memory. Our design reduces the total energy of a DRAM main memory of the same capacity by 80%. These results indicate that it is feasible to use PCM technology in place of DRAM in main memory for better energy efficiency.
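
  A minimal, assumption-laden sketch of a small DRAM buffer fronting a PCM main memory: accesses that hit the DRAM buffer never reach PCM, which is the basic mechanism for hiding PCM's slower, more write-limited cells. The buffer size, direct-mapped organization, and access pattern are my own simplifications, not the paper's design.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define LINES 8   /* tiny direct-mapped DRAM buffer, for illustration */

    typedef struct { uint32_t tag; int valid; } line;

    typedef struct {
        line     l[LINES];
        unsigned hits, misses;   /* misses are the accesses that reach PCM */
    } dram_buffer;

    /* Access one block address through the DRAM buffer in front of PCM. */
    static void access_block(dram_buffer *b, uint32_t block)
    {
        line *ln = &b->l[block % LINES];
        if (ln->valid && ln->tag == block) {
            b->hits++;                   /* served from DRAM            */
        } else {
            b->misses++;                 /* goes to PCM, then cached    */
            ln->tag = block;
            ln->valid = 1;
        }
    }

    int main(void)
    {
        dram_buffer b;
        memset(&b, 0, sizeof b);

        /* A loop with reuse: most accesses are absorbed by the DRAM buffer. */
        for (int pass = 0; pass < 4; pass++)
            for (uint32_t blk = 0; blk < 6; blk++)
                access_block(&b, blk);

        printf("DRAM buffer hits: %u\n", b.hits);
        printf("PCM accesses:     %u\n", b.misses);
        return 0;
    }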

A study to improve the frame buffer access bandwidth (프레임 버퍼 액세스 대역폭 개선에 관한 연구)

  • Mun, Sang-Ho;Gang, Hyeon-Seok;Park, Gil-Heum
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.2
    • /
    • pp.407-415
    • /
    • 1996
  • This paper introduces two schemes to improve frame buffer access bandwidth. The first scheme suggests a rasterizer called SBUFRE that has a Span Z Buffer and a Span Z & Color Buffer inside the rasterizer. The second scheme suggests a ZDRAM that has a Z-value comparator inside the DRAM. Both schemes convert the read-modify-write Z-buffer compare into a single write-only operation, which improves frame buffer access bandwidth by approximately 50%.
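
  A small illustration of the access-count argument, mine rather than the paper's: a conventional Z test issues a read and then a conditional write per pixel, while a DRAM-side comparator (the ZDRAM idea) lets the rasterizer issue a single write and have the comparison done inside the memory, roughly halving frame buffer traffic. The framebuffer size and depth values are placeholders.

    #include <stdio.h>
    #include <stdint.h>

    #define W 4
    #define H 4

    static uint16_t zbuf[W * H];   /* depth values, smaller = closer */

    /* Conventional path: read the stored depth, compare, conditionally write.
     * Returns the number of frame buffer accesses this pixel cost (1 or 2). */
    static int ztest_read_modify_write(int idx, uint16_t z)
    {
        int accesses = 1;                 /* the read  */
        if (z < zbuf[idx]) {
            zbuf[idx] = z;
            accesses++;                   /* the write */
        }
        return accesses;
    }

    /* ZDRAM-style path as described in the abstract: the rasterizer only
     * issues a write; the compare happens inside the DRAM, so the external
     * bus sees a single access per pixel regardless of the outcome. */
    static int ztest_write_only(int idx, uint16_t z)
    {
        if (z < zbuf[idx])                /* modeled here, done in-DRAM there */
            zbuf[idx] = z;
        return 1;
    }

    int main(void)
    {
        long rmw = 0, wo = 0;

        for (int i = 0; i < W * H; i++) zbuf[i] = 0xFFFF;
        for (int i = 0; i < W * H; i++) rmw += ztest_read_modify_write(i, 100);

        for (int i = 0; i < W * H; i++) zbuf[i] = 0xFFFF;
        for (int i = 0; i < W * H; i++) wo += ztest_write_only(i, 100);

        printf("read-modify-write accesses:  %ld\n", rmw);
        printf("write-only (ZDRAM) accesses: %ld\n", wo);
        return 0;
    }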

DRAM Buffer Data Management Techniques to Enhance SSD Performance (SSD 성능 향상을 위한 DRAM 버퍼 데이터 처리 기법)

  • Im, Kwang-Seok;Han, Tae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.7
    • /
    • pp.57-64
    • /
    • 2011
  • To bridge the bandwidth gap between the host interface and NAND flash memory, DRAM is adopted as a buffer in SSDs (Solid-State Disks). In this paper, we propose cost-effective techniques to enhance SSD performance instead of using expensive, higher-bandwidth DRAM. SSD data can be classified into three groups: user data, metadata for handling user data, and FEC (Forward Error Correction) parity / CRC (Cyclic Redundancy Check) codes for error control. To improve performance by considering the characteristics of each data type, we devise a flexible burst control method based on system monitoring together with a page-based FEC parity/CRC scheme. Experimental results show that the proposed methods enhance SSD performance by up to 25.9% with a negligible 0.07% increase in chip size.
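
  A hedged sketch of the classification idea: tag each DRAM buffer transfer as user data, metadata, or FEC parity/CRC and pick a burst length per class. The three classes come from the abstract, but the concrete burst lengths and the selection rule are placeholders of mine, not the paper's method.

    #include <stdio.h>

    /* Data classes stored in the SSD's DRAM buffer, per the abstract. */
    typedef enum { DATA_USER, DATA_META, DATA_FEC_CRC } data_class;

    /* Pick a DRAM burst length per class: long sequential user-data transfers
     * benefit from long bursts, while small metadata and per-page parity/CRC
     * updates would waste bandwidth with them. The values (16/4/8 beats) are
     * illustrative only. */
    static int burst_length(data_class c)
    {
        switch (c) {
        case DATA_USER:    return 16;
        case DATA_META:    return 4;
        case DATA_FEC_CRC: return 8;
        }
        return 8;
    }

    int main(void)
    {
        const char *names[] = { "user data", "metadata", "FEC parity/CRC" };
        data_class  reqs[]  = { DATA_USER, DATA_META, DATA_FEC_CRC, DATA_USER };

        for (int i = 0; i < 4; i++)
            printf("transfer %d (%s): burst length %d beats\n",
                   i, names[reqs[i]], burst_length(reqs[i]));
        return 0;
    }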