• Title/Summary/Keyword: DRAM buffer

Special Memory Design for Graphics (그래픽스 전용 메모리 설계)

  • 김성진;문상호
    • Journal of Korea Multimedia Society
    • /
    • v.2 no.1
    • /
    • pp.80-88
    • /
    • 1999
  • In this paper, we propose a Special Memory for Graphics (SMGRA) which accelerates memory access for graphics operations. The SMGRA adopts the rectangular-array memory architecture originally proposed by Whelan, which processes the pixels in a rectangular area simultaneously, but improves address decoding time and reduces the number of address pins by using an address multiplexing scheme. The SMGRA also has a Z-value comparator inside the DRAM, which converts the read-modify-write Z-buffer operation into a single-write-only operation and improves frame buffer access bandwidth by approximately 50%.
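
The bandwidth claim comes from replacing the conventional read-compare-write Z-buffer update with a single write that is filtered by a comparator inside the memory. A minimal behavioral sketch of that contrast (plain Python with hypothetical `zbuf`/`framebuf` arrays; the paper implements the second style in DRAM hardware, not software):

```python
def update_read_modify_write(zbuf, framebuf, addr, z_new, color):
    """Conventional Z-buffer: the host reads Z, compares, then writes back."""
    z_old = zbuf[addr]              # read  (first memory access)
    if z_new < z_old:               # compare on the host side (smaller = closer)
        zbuf[addr] = z_new          # write (second memory access)
        framebuf[addr] = color

class MemoryWithZComparator:
    """Toy stand-in for a DRAM with an internal Z-value comparator."""
    def __init__(self, size):
        self.zbuf = [float("inf")] * size
        self.framebuf = [0] * size

    def conditional_write(self, addr, z_new, color):
        # The compare happens 'inside' the memory, so the external bus only
        # carries write traffic: roughly half the frame buffer accesses.
        if z_new < self.zbuf[addr]:
            self.zbuf[addr] = z_new
            self.framebuf[addr] = color

def update_single_write(memory_with_comparator, addr, z_new, color):
    """SMGRA-style: the host only issues a write; the memory decides."""
    memory_with_comparator.conditional_write(addr, z_new, color)
```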

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.308-314
    • /
    • 2010
  • As the price of flash memory continues to drop and flash SSD controller technology improves, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect flash SSDs to completely replace hard disks as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and several hybrid storage architectures combining flash memory and hard disks have in fact been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extension of the main buffer in a database system: it stores pages replaced out of the main buffer and returns pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer on the host side, provides fast random reads for the warm pages being replaced out of the limited main buffer. In fact, for all the pages missed in the main buffer in a real TPC-C trace, the hit ratio in the extended buffer was more than 60%, which supports our conjecture that this simple extended-buffer approach can be very effective as a cache. In terms of performance per price, our extended buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
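
As a rough illustration of the buffer hierarchy described above (DRAM main buffer, flash SSD holding pages evicted from it, hard disk underneath), the sketch below uses simple LRU dictionaries and a hypothetical `disk.read_page` stand-in; it is not the authors' implementation and ignores dirty-page write-back:

```python
from collections import OrderedDict

class ExtendedBufferPool:
    """Toy two-level buffer: DRAM main buffer backed by an SSD extended buffer."""
    def __init__(self, disk, main_capacity, ssd_capacity):
        self.disk = disk                 # hypothetical: read_page(page_id) -> page
        self.main = OrderedDict()        # page_id -> page data, LRU order
        self.ssd = OrderedDict()         # page_id -> page data, LRU order
        self.main_capacity = main_capacity
        self.ssd_capacity = ssd_capacity

    def get_page(self, page_id):
        if page_id in self.main:                 # hit in the DRAM buffer
            self.main.move_to_end(page_id)
            return self.main[page_id]
        if page_id in self.ssd:                  # hit in the SSD extended buffer
            page = self.ssd.pop(page_id)         # promote back to DRAM
        else:
            page = self.disk.read_page(page_id)  # miss everywhere: go to disk
        self._install(page_id, page)
        return page

    def _install(self, page_id, page):
        self.main[page_id] = page
        self.main.move_to_end(page_id)
        if len(self.main) > self.main_capacity:
            victim_id, victim = self.main.popitem(last=False)
            # Pages replaced out of the main buffer are kept on the SSD, so a
            # later re-reference is served by a fast random read, not the disk.
            self.ssd[victim_id] = victim
            self.ssd.move_to_end(victim_id)
            if len(self.ssd) > self.ssd_capacity:
                self.ssd.popitem(last=False)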

A Study on the Improvement of Frame Memory Interface of MPEG-2 Video Encoder (MPEG-2 비디오 부호화기의 프레임 메모리 인터페이스 개선에 관한 연구)

  • 이인섭;임순자;김환용
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.2
    • /
    • pp.211-218
    • /
    • 2001
  • In this paper, we propose a memory-map structure that uses SDRAM rather than conventional DRAM for the hardware implementation of the frame memory interface module of the video encoder. By reducing the size of the memory map and of the interface buffer on the same bus, hardware complexity is improved, and the hardware size is minimized by simplifying the interface logic. The conventional system wastes access time because it must access randomly stored data in order to store and output memory in macro-block units; the method proposed in this paper effectively reduces memory access time because the data is stored and processed in line units.
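
The point about line-unit storage is that each 16x16 macro-block can then be fetched as a handful of contiguous SDRAM bursts instead of many scattered accesses. A small address-calculation sketch (hypothetical frame geometry, not the paper's actual memory map):

```python
# Hypothetical raster (line-unit) layout: pixel (x, y) lives at y * WIDTH + x.
WIDTH, MB = 720, 16   # frame width in pixels, macro-block size

def macroblock_bursts(mb_x, mb_y):
    """Addresses needed to fetch one 16x16 macro-block from line-unit storage.

    Each row of the macro-block is a contiguous run of 16 addresses, so the
    whole block is 16 bursts of length 16 rather than 256 scattered accesses.
    """
    bursts = []
    for row in range(MB):
        start = (mb_y * MB + row) * WIDTH + mb_x * MB
        bursts.append((start, MB))   # (start address, burst length)
    return bursts

# Example: the macro-block at block coordinates (2, 1) needs only 16 bursts.
print(macroblock_bursts(2, 1)[:3])   # first three (start, length) pairs
```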

Register-Based Parallel Pipelined Scheme for Synchronous DRAM (동기식 기억소자를 위한 레지스터를 이용한 병렬 파이프라인 방식)

  • Song, Ho Jun
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.32A no.12
    • /
    • pp.108-114
    • /
    • 1995
  • Recently, along with the advance of high-performance systems, synchronous DRAMs (SDRAMs), which provide consecutive data output synchronized with an external clock signal, have been reported. However, in conventional SDRAMs that use a multi-stage serial pipelined scheme, the column path is divided into multiple stages depending on the CAS latency N. Thus, as the operating speed and CAS latency increase, new stages must be added, causing a large area penalty due to additional latches and I/O lines. In the proposed register-based parallel pipelined scheme, (N-1) registers are located between the read data bus line pair and the data output buffer, and the incoming data are stored sequentially. Since the column data path is not divided and the read data is transmitted directly to the registers, the burst read operation can easily be achieved at higher frequencies without a large area penalty or degradation of the internal timing margin. Simulation results for a 0.32um-technology 4-bank 64M SDRAM show good operation at 200MHz, and the area increase is less than 0.1% when the CAS latency N is increased from 3 to 4. This pipelined scheme becomes more advantageous as the operating frequency increases.
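
To make the (N-1)-register idea concrete, here is a deliberately simplified delay-line view of the output path (plain Python with made-up signal names, not the paper's parallel register control logic): data arriving from the array is parked in N-1 registers in front of the output buffer and shifted out one item per clock, so the column path itself never needs extra pipeline stages when N grows.

```python
from collections import deque

class RegisterPipelinedOutput:
    """Behavioral sketch of the register-based output path for CAS latency N."""
    def __init__(self, cas_latency):
        self.regs = deque([None] * (cas_latency - 1))  # the N-1 registers

    def clock(self, data_from_array):
        """One rising clock edge: latch new array data, emit the oldest."""
        self.regs.append(data_from_array)   # store incoming read data
        return self.regs.popleft()          # drive the data output buffer

# Burst read example with CAS latency 3: the two registers absorb the extra
# latency, so each datum reaches the output two clocks after leaving the array.
pipe = RegisterPipelinedOutput(cas_latency=3)
for cycle, d in enumerate(["D0", "D1", "D2", "D3", None, None]):
    print(cycle, pipe.clock(d))
```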

Performance Evaluation and Analysis of NVM Storage for Ultra-Light Internet of Things (초경량 사물인터넷을 위한 비휘발성램 스토리지 성능평가 및 분석)

  • Lee, Eunji;Yoo, Seunghoon;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.6
    • /
    • pp.181-186
    • /
    • 2015
  • With the rapid growth of semiconductor technologies, small-sized devices with powerful computing abilities are becoming a reality. As this environment has a limit on power supply, NVM storage that has a high density and low power consumption is preferred to HDD or SSD. However, legacy software layers optimized for HDDs should be revisited. Specifically, as storage performance approaches DRAM performance, existing I/O mechanisms and software configurations should be reassessed. This paper explores the challenges and implications of using NVM storage with a broad range of experiments. We measure the performance of a system with NVM storage emulated by DRAM with proper timing parameters and compare it with that of HDD storage environments under various configurations. Our experimental results show that even with storage as fast as DRAM, the performance gain is not large for read operations as current I/O mechanisms do a good job hiding the slow performance of HDD. To assess the potential benefit of fast storage media, we change various I/O configurations and perform experiments to quantify the effects of existing I/O mechanisms such as buffer caching, read-ahead, synchronous I/O, direct I/O, block I/O, and byte-addressable I/O on systems with NVM storage.
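
One of the effects measured above, the buffer cache and read-ahead hiding slow storage, can be observed from user space on Linux. The sketch below times a cold read (after asking the kernel to drop the file's cached pages via posix_fadvise) against a warm read; the file path is a placeholder, and this is only an illustration of the mechanism, not the paper's emulation setup.

```python
import os, time

def timed_read(path, drop_cache=False, chunk=1 << 20):
    """Read a whole file and return (bytes_read, seconds)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if drop_cache:
            # Ask the kernel to evict this file's pages from the buffer cache
            # so the next read really goes to the storage device (cold read).
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        start = time.perf_counter()
        total = 0
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
        return total, time.perf_counter() - start
    finally:
        os.close(fd)

# Placeholder path: the gap between the cold and warm runs is what buffer
# caching and read-ahead hide; on DRAM-like storage the gap largely disappears.
print(timed_read("/tmp/testfile", drop_cache=True))   # cold read
print(timed_read("/tmp/testfile", drop_cache=False))  # warm read
```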

Design Optimization Techniques for the SSD Controller (SSD 컨트롤러 최적 설계 기법)

  • Yi, Doo-Jin;Han, Tae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.8
    • /
    • pp.45-52
    • /
    • 2011
  • Flash memory is becoming widely prevalent in various areas due to its high performance, non-volatility, low power, and robust durability. As the price per bit decreases, NAND flash based SSDs (Solid State Disks) have been attracting attention as the next-generation storage device that can replace HDDs (Hard Disk Drives), which have mechanical parts. Especially for a single-package SSD, if the number of channels or the FIFO buffer size per channel is increased to improve performance, the controller size and I/O pin count grow linearly with the number of channels, and the form factor is affected. We propose a novel technique that minimizes the form factor by optimizing the number of NAND flash channels and the size of the interface FIFO buffers in the SSD. For an SSD with 10 channels and double buffering, experimental results show that the buffer block size can be reduced by about 73% without performance degradation, and the total controller size can be reduced by about 40% because the per-channel control blocks and the I/O pin count decrease as the number of channels is reduced.
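
The trade-off being optimized, number of NAND channels versus per-channel interface FIFO size, can be pictured with a simple striping model. The toy sketch below (hypothetical parameters and behavior, not the paper's controller) spreads a host write across channel FIFOs in round-robin order and counts how often a full FIFO would stall the host interface:

```python
from collections import deque

class MultiChannelSSD:
    """Toy front end of a multi-channel SSD controller.

    Each channel has its own interface FIFO of `fifo_pages` entries; a host
    write is striped page by page across channels. All sizes are illustrative.
    """
    def __init__(self, channels=10, fifo_pages=2):
        self.fifos = [deque() for _ in range(channels)]
        self.fifo_pages = fifo_pages
        self.next_channel = 0

    def host_write(self, pages):
        stalls = 0
        for page in pages:
            ch = self.next_channel
            if len(self.fifos[ch]) >= self.fifo_pages:
                # FIFO full: count a stall and model the NAND side draining
                # one entry before the host write can proceed. Bigger FIFOs or
                # more channels hide stalls, at the cost of area and I/O pins.
                stalls += 1
                self.fifos[ch].popleft()
            self.fifos[ch].append(page)
            self.next_channel = (ch + 1) % len(self.fifos)
        return stalls
```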

Preparation and Properties of Field Effect Transistor with (Bi,La)Ti₃O₁₂ Ferroelectric Materials ((Bi,La)Ti₃O₁₂ 강유전체 물질을 갖는 전계효과형 트랜지스터의 제작과 특성연구)

  • 서강모;조중연;장호정
    • Proceedings of the Materials Research Society of Korea Conference
    • /
    • 2003.11a
    • /
    • pp.180-180
    • /
    • 2003
  • FRAM (Ferroelectric Random Access Memory) is a next-generation memory device in which the paraelectric material of the DRAM (Dynamic Random Access Memory) capacitor is replaced with a ferroelectric material, so that stored information is retained even when the power supply is cut off, while offering high-speed data processing, low power consumption, and high integration. In this study, a Y₂O₃ thin film was used as a buffer layer on an n-Well/p-Si(100) substrate, and a (Bi,La)Ti₃O₁₂ (BLT) ferroelectric thin film was formed by the sol-gel method to fabricate capacitors and field effect transistors with an MFM(I)S (Metal-Ferroelectric-Metal-(Insulator)-Silicon) structure. The morphological and electrical characteristics of the fabricated devices were examined and analyzed.

An Efficient Wear-Leveling Algorithm for NAND Flash SSD with Multi-Channel and Multi-Way Architecture (멀티채널과 멀티웨이 구조의 NAND 플래시 SSD를 위한 효율적인 웨어레벨링 알고리듬)

  • Kim, Dong-Ho;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39B no.7
    • /
    • pp.425-432
    • /
    • 2014
  • This paper proposes a wear-leveling algorithm that exploits the properties of SSDs with a multi-channel, multi-way architecture. When a write request arrives, the proposed algorithm classifies the data stored in the DRAM buffer as hot or cold according to logical-address access frequency and performs data allocation so as to reduce the deviation of block erase counts. It lowers the chance of further increasing erase counts by allocating cold data to blocks that already have high erase counts. The effectiveness of the proposed algorithm is verified by executing various applications on a multi-channel, multi-way SSD simulator. Experimental results show that the difference in erase counts among blocks is reduced by an average of 9.3% and the total erase count decreases by 4.6% compared to a previous wear-leveling algorithm.
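
As a rough sketch of the allocation policy summarized above (hot/cold classification by logical-address access frequency, with cold data steered to worn blocks), using hypothetical structures and thresholds rather than the authors' simulator:

```python
from collections import Counter

class HotColdWearLeveler:
    """Toy wear-leveling allocator in the spirit of the scheme described above."""
    def __init__(self, erase_counts, hot_threshold=4):
        self.erase_counts = dict(erase_counts)  # physical block id -> erase count
        self.freq = Counter()                   # logical address access counts
        self.hot_threshold = hot_threshold      # illustrative, not the paper's value

    def classify(self, logical_addr):
        """Count the access and classify the data as 'hot' or 'cold'."""
        self.freq[logical_addr] += 1
        return "hot" if self.freq[logical_addr] >= self.hot_threshold else "cold"

    def pick_block(self, logical_addr):
        """Choose a physical block so erase counts stay balanced.

        Cold data goes to the most-worn block (it will rarely be rewritten, so
        that block's erase count stops climbing); hot data goes to the
        least-worn block, which can tolerate more program/erase cycles.
        """
        if self.classify(logical_addr) == "cold":
            return max(self.erase_counts, key=self.erase_counts.get)
        return min(self.erase_counts, key=self.erase_counts.get)
```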

VLSI Design of a 2048 Point FFT/IFFT by Sequential Data Processing for Digital Audio Broadcasting System (순차적 데이터 처리방식을 이용한 디지틀 오디오 방송용 2048 Point FFT/IFFT의 VLSI 설계)

  • Choe, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.39 no.5
    • /
    • pp.65-73
    • /
    • 2002
  • In this paper, we propose and verify an implementation method for a single-chip 2048 complex point FFT/IFFT in terms of sequential data processing. For the sequential processing of 2048 complex data, buffers to store the input data are necessary. Therefore, DRAM-like pipelined commutator architecture is used as a buffer. The proposed structure brings about the 60% chip size reduction compared with conventional approach by using this design method. The 16-point FFT is a basic building block of the entire FFT chip, and the 2048-point FFT consists of the cascaded blocks with five stages of radix-4 and one stage of radix-2. Since each stage requires rounding of the resulting bits while maintaining the proper S/N ratio, the convergent block floating point (CBFP) algorithm is used for the effective internal bit rounding and their method contributed to a single chip design of digital audio broadcasting system.