• Title/Summary/Keyword: Memory performance

An Efficient Index Buffer Management Scheme for a B+ tree on Flash Memory (플래시 메모리상에 B+트리를 위한 효율적인 색인 버퍼 관리 정책)

  • Lee, Hyun-Seob;Joo, Young-Do;Lee, Dong-Ho
    • The KIPS Transactions:PartD / v.14D no.7 / pp.719-726 / 2007
  • Recently, NAND flash memory has been used as a storage device in various mobile computing devices such as MP3 players, mobile phones, and laptops because of its shock resistance, low power consumption, and non-volatility. However, due to the very distinct characteristics of flash memory, disk-based systems and applications may suffer severe performance degradation when adopted directly on flash memory storage. In particular, when a B-tree is constructed, record insertion, deletion, and reorganization cause intensive overwrite operations, which can severely degrade performance on NAND flash memory. In this paper, we propose an efficient buffer management scheme, called IBSF, which eliminates redundant index units in the index buffer and thereby delays the point at which the index buffer fills up. Consequently, IBSF significantly reduces the number of write operations to flash memory when constructing a B-tree. We also show through various experiments that IBSF outperforms the related technique BFTL on flash memory.
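
As background only, here is a minimal Python sketch (not from the paper) of the general idea of coalescing redundant index units in a RAM buffer so that fewer, batched writes reach flash; the class and method names are illustrative assumptions.

```python
# Hypothetical sketch of an index buffer that coalesces redundant index
# units before flushing; the structure is an assumption, not IBSF itself.

class IndexBuffer:
    def __init__(self, capacity, flush_fn):
        self.capacity = capacity          # max index units held in RAM
        self.units = {}                   # key -> latest index unit
        self.flush_fn = flush_fn          # callback that writes a batch to flash

    def add(self, key, unit):
        # A newer index unit for the same key supersedes the old one,
        # so the redundant unit never reaches flash.
        self.units[key] = unit
        if len(self.units) >= self.capacity:
            self.flush()

    def flush(self):
        if self.units:
            self.flush_fn(list(self.units.values()))  # one batched flash write
            self.units.clear()
```

For instance, repeated updates to the same B+-tree node then overwrite a single buffered entry instead of each queuing its own flash write.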

Cache Sensitive T-tree Index Structure (캐시를 고려한 T-트리 인덱스 구조)

  • Lee Ig-hoon;Kim Hyun Chul;Hur Jae Yung;Lee Snag-goo;Shim JunHo;Chang Juho
    • Journal of KIISE:Databases / v.32 no.1 / pp.12-23 / 2005
  • In the past decade, advances in the speed of commodity CPUs have far outpaced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. To reduce memory access latency, cache memory is incorporated in the memory subsystem, but caches can reduce memory latency only when the requested data is found in the cache, which depends mainly on the memory access pattern of the application. Previous research has shown that B+-trees perform much faster than T-trees because B+-trees are more cache conscious, and has also proposed 'Cache Sensitive B+-trees' (CSB+-trees) that are more cache conscious than B+-trees. The goal of this paper is to make T-trees as cache conscious as CSB+-trees. We propose a new index structure called 'Cache Sensitive T-trees' (CST-trees). We implemented CST-trees and compared their performance with that of other index structures.
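
The contrast the abstract draws is between pointer-chasing index nodes and cache-line-sized nodes whose keys sit contiguously in memory. Below is a rough Python illustration of the cache-conscious packing idea only (not the CST-tree algorithm itself; the node layout and search routine are assumptions).

```python
import bisect

CACHE_LINE_BYTES = 64
KEYS_PER_NODE = CACHE_LINE_BYTES // 8     # e.g., eight 8-byte keys per node

class PackedNode:
    """A node whose keys fill roughly one cache line; children[i] holds keys
    smaller than keys[i], children[-1] holds keys larger than all of them."""
    def __init__(self, keys, children=None):
        assert len(keys) <= KEYS_PER_NODE
        self.keys = keys                   # sorted, contiguous in memory
        self.children = children           # None for a leaf, else len(keys)+1 nodes

def contains(node, key):
    # Each node visited costs roughly one cache line, instead of one line per key.
    while node is not None:
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return True
        node = None if node.children is None else node.children[i]
    return False

# Tiny usage example
leaf1 = PackedNode([1, 3, 5])
leaf2 = PackedNode([9, 12])
root = PackedNode([7], [leaf1, leaf2])
assert contains(root, 12) and not contains(root, 4)
```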

A New Flash Memory Package Structure with Intelligent Buffer System and Performance Evaluation (버퍼 시스템을 내장한 새로운 플래쉬 메모리 패키지 구조 및 성능 평가)

  • Lee Jung-Hoon;Kim Shin-Dug
    • Journal of KIISE:Computer Systems and Theory / v.32 no.2 / pp.75-84 / 2005
  • This research designs a high-performance NAND-type flash memory package with a smart buffer cache that enhances the exploitation of spatial and temporal locality. The proposed buffer structure in a NAND flash memory package, called a smart buffer cache, consists of three parts: a fully-associative victim buffer with a small block size, a fully-associative spatial buffer with a large block size, and a dynamic fetching unit. This new NAND-type flash memory package can achieve dramatically higher performance and lower power consumption compared with a conventional NAND-type flash memory. Our results show that the NAND flash memory package with a smart buffer cache can reduce the miss ratio by around 70% and the average memory access time by around 67% over the conventional NAND flash memory configuration. Also, for a given buffer space (e.g., 3KB), the package module with the smart buffer achieves a better average miss ratio and average memory access time than package modules with a conventional direct-mapped buffer with eight times as much space (e.g., 32KB) and with a fully-associative configuration with twice as much space (e.g., 8KB).
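
The three-part organization (small-block victim buffer, large-block spatial buffer, dynamic fetching unit) can be pictured with a toy lookup routine. The sketch below is a heavily simplified assumption of how the two fully-associative buffers might be consulted, not the paper's actual design; block sizes and entry counts are made up.

```python
from collections import OrderedDict

class TwoLevelBufferSketch:
    """Toy model: a large-block spatial buffer plus a small-block victim
    buffer, both fully associative with LRU replacement."""
    SMALL, LARGE = 512, 4096                    # block sizes in bytes (assumed)

    def __init__(self, victim_entries=8, spatial_entries=4):
        self.victim = OrderedDict()             # small-block tag -> data
        self.spatial = OrderedDict()            # large-block tag -> data
        self.victim_entries = victim_entries
        self.spatial_entries = spatial_entries

    def access(self, addr, fetch):
        if addr // self.LARGE in self.spatial:  # spatial-locality hit
            self.spatial.move_to_end(addr // self.LARGE)
            return "spatial hit"
        if addr // self.SMALL in self.victim:   # temporal-locality hit
            self.victim.move_to_end(addr // self.SMALL)
            return "victim hit"
        # Miss: fetch a large block; the dynamic fetching unit of the paper
        # would choose the fetch size adaptively, which this sketch omits.
        self.spatial[addr // self.LARGE] = fetch((addr // self.LARGE) * self.LARGE,
                                                 self.LARGE)
        if len(self.spatial) > self.spatial_entries:
            tag, data = self.spatial.popitem(last=False)
            # Retain the first small piece of the evicted block for reuse.
            self.victim[tag * (self.LARGE // self.SMALL)] = data[: self.SMALL]
            if len(self.victim) > self.victim_entries:
                self.victim.popitem(last=False)
        return "miss"
```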

Designing Hybrid HDD using SLC/MLC combined Flash Memory (SLC/MLC 혼합 플래시 메모리를 이용한 하이브리드 하드디스크 설계)

  • Hong, Seong-Cheol;Shin, Dong-Kun
    • Journal of KIISE:Computing Practices and Letters / v.16 no.7 / pp.789-793 / 2010
  • Recently, flash memory-based non-volatile cache (NVC) has emerged as an effective way to improve both the I/O performance and the energy efficiency of storage systems. To obtain significant performance and energy gains from NVC, it is preferable to use multi-level-cell (MLC) flash memory, since it provides a large NVC capacity at low cost. However, the number of available program/erase cycles of MLC flash memory is smaller than that of single-level-cell (SLC) flash memory, which limits the lifespan of the NVC. To overcome this limitation, SLC/MLC combined flash memory is a promising solution for NVC. In this paper, we propose an effective management scheme for the heterogeneous SLC and MLC regions of the combined flash memory.
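
The abstract does not spell out the management scheme, but a common way to exploit heterogeneous regions is to direct frequently rewritten (hot) data to the endurance-friendly SLC region and colder data to the high-density MLC region. The Python sketch below is purely illustrative of that general policy, not the paper's method; thresholds, counters, and region names are assumptions.

```python
# Illustrative hot/cold placement over heterogeneous SLC/MLC regions.
HOT_THRESHOLD = 4        # writes within the observation window (assumed value)

class HybridNVCPlacement:
    def __init__(self):
        self.write_count = {}             # logical block -> recent write count

    def choose_region(self, lba):
        count = self.write_count.get(lba, 0) + 1
        self.write_count[lba] = count
        # Hot blocks go to SLC (high endurance, fast writes);
        # cold blocks go to MLC (high density, low cost per bit).
        return "SLC" if count >= HOT_THRESHOLD else "MLC"
```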

Efficient Management of PCM-based Swap Systems with a Small Page Size

  • Park, Yunjoo;Bahn, Hyokyung
    • JSTS:Journal of Semiconductor Technology and Science / v.15 no.5 / pp.476-484 / 2015
  • Due to recent advances in non-volatile memory technologies such as PCM, a new memory hierarchy for computer systems is expected to appear. In this paper, we explore the performance of PCM-based swap systems and discuss how such a system can be managed efficiently. Specifically, we introduce three management techniques. First, we show that the page fault handling time can be reduced by attaching PCM on DIMM slots, thereby eliminating the software stack overhead of block I/O and the context switch time. Second, we show that it is effective to reduce the page size and turn off the read-ahead option under a PCM swap system where the page fault handling time is sufficiently small. Third, we show that performance does not degrade even with a small DRAM memory under a PCM swap device; this significantly reduces DRAM's energy consumption compared to HDD-based swap systems. We expect that the results of this paper will lead to a transition of the legacy swap system structure of "large memory - slow swap" to a new paradigm of "small memory - fast swap."
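
The first two observations amount to shrinking the per-fault fixed cost (no block-I/O software stack, no context switch) and the per-fault transfer cost (smaller pages, no read-ahead). A hedged back-of-the-envelope model is sketched below; all symbols are assumed placeholders, not measured values from the paper.

```python
def swap_fault_time(page_size_bytes, bandwidth_bytes_per_s,
                    software_overhead_s, readahead_pages=1):
    """Rough model of one page-fault service time on a swap device:
    a fixed software overhead plus the transfer of the faulting page and
    any read-ahead pages. All inputs are placeholders, not measurements."""
    transferred = page_size_bytes * readahead_pages
    return software_overhead_s + transferred / bandwidth_bytes_per_s

# A PCM swap area attached on DIMM slots shrinks software_overhead_s
# (no block-I/O stack, no context switch), which in turn makes a small
# page_size_bytes and readahead_pages=1 the favorable configuration.
```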

Sampling-based Block Erase Table in Wear Leveling Technique for Flash Memory

  • Kim, Seon Hwan;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.22 no.5 / pp.1-9 / 2017
  • Recently, flash memory has been in great demand as a storage device for embedded systems. However, the number of program/erase (P/E) cycles per block of flash memory is limited. To cope with this limit, many wear leveling techniques have been studied; they prolong the lifetime of flash memory by maintaining information tables. One such technique, the block erase table (BET) method, uses a bit-array table and targets embedded devices, but its wear-leveling performance drops sharply when its memory consumption is reduced. To solve this problem, we propose a novel wear leveling technique using a Sampling-based Block Erase Table (SBET). SBET maps each block to one bit of the bit-array table through an exclusive-OR operation combined with a round-robin function. Accordingly, SBET improves the accuracy of cold-block information and prevents the wear-leveling performance from degrading. In our experiments, SBET prolongs the lifetime of flash memory by up to 88% compared with previous techniques that use a bit-array table.
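
A minimal sketch of the kind of mapping the abstract describes: a compact bit array marks recently erased blocks, and the block-to-bit mapping is perturbed each sampling round by XOR-ing the block number with a round-robin offset. Table size, round length, and the reset policy below are assumptions for illustration, not the paper's exact algorithm.

```python
class SampledBlockEraseTable:
    def __init__(self, num_blocks, table_bits, round_length=1024):
        self.num_blocks = num_blocks
        self.table = [0] * table_bits      # 1 = block(s) mapped here were erased
        self.table_bits = table_bits
        self.round_length = round_length
        self.offset = 0                    # round-robin value XOR-ed into the mapping
        self.erase_events = 0

    def _bit_index(self, block):
        return (block ^ self.offset) % self.table_bits

    def record_erase(self, block):
        self.table[self._bit_index(block)] = 1
        self.erase_events += 1
        if self.erase_events % self.round_length == 0:
            # New sampling round: rotate the mapping and clear the table, so
            # long-unset bits keep pointing at genuinely cold blocks.
            self.offset = (self.offset + 1) % self.num_blocks
            self.table = [0] * self.table_bits

    def looks_cold(self, block):
        return self.table[self._bit_index(block)] == 0
```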

Fast and Memory Efficient Method for Optimal Concurrent Fault Simulator (동시 고장 시뮬레이터의 메모리효율 및 성능 향상에 대한 연구)

  • 김도윤;김규철
    • Proceedings of the IEEK Conference / 1998.10a / pp.719-722 / 1998
  • Fault simulation for large and complex sequential circuits is a highly CPU-intensive task in the integrated circuit design process. In this paper, we propose CM-SIM, a concurrent fault simulator that employs an optimal memory management strategy and simple list operations. CM-SIM removes inefficiencies and uses a new dynamic memory management strategy based on contiguous array memory. Consequently, we obtain improved performance and reduced memory usage in concurrent fault simulation.
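
As a generic illustration of the "contiguous array instead of scattered heap nodes" idea (the paper's actual data structures are not given in the abstract), here is a small fixed-size pool allocator with a free list threaded through one array; it is an assumption-level sketch, not CM-SIM's implementation.

```python
class ArrayPool:
    """Fixed-capacity object pool kept in one contiguous list, with a free
    list threaded through the unused slots; avoids per-node heap allocation."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.next_free = list(range(1, capacity)) + [-1]  # next_free[i] -> next free slot
        self.head = 0                                     # first free slot, -1 if full

    def allocate(self, value):
        if self.head == -1:
            raise MemoryError("pool exhausted")
        idx = self.head
        self.head = self.next_free[idx]
        self.slots[idx] = value
        return idx                                        # caller keeps the index

    def release(self, idx):
        self.slots[idx] = None
        self.next_free[idx] = self.head
        self.head = idx
```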

A NEW LIMITED MEMORY QUASI-NEWTON METHOD FOR UNCONSTRAINED OPTIMIZATION

  • Moghrabi, Issam A.R.
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.7 no.1 / pp.7-14 / 2003
  • The main concern of this paper is to develop a new class of quasi-Newton methods. These methods are intended for use whenever memory space is a major concern and, hence, they are usually referred to as limited memory methods. The methods developed in this work are sensitive to the choice of the memory parameter ${\eta}$ that defines the amount of past information stored within the Hessian (or its inverse) approximation at each iteration. The results of the numerical experiments, carried out with different choices of this parameter, indicate that the proposed methods improve the performance of limited memory quasi-Newton methods.
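
For context only, the block below recalls the standard limited-memory BFGS recursion that such methods typically build on, where only the last ${\eta}$ curvature pairs are stored; it is background, not the new class of methods proposed in the paper.

```latex
% Background: standard limited-memory (L-BFGS) inverse-Hessian update,
% storing only the last \eta pairs (s_i, y_i); not the paper's new method.
%   s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k)
\[
  H_{k+1} = V_k^{\top} H_k V_k + \rho_k\, s_k s_k^{\top},
  \qquad V_k = I - \rho_k\, y_k s_k^{\top},
  \qquad \rho_k = \frac{1}{y_k^{\top} s_k}.
\]
% Unrolling this recursion over the \eta most recent pairs, starting from a
% simple seed matrix H_k^{(0)}, gives O(\eta n) storage instead of O(n^2).
```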

Vectorization of an Explicit Finite Element Method on Memory-to-Memory Type Vector Computer (Memory-to-Memory방식 벡터컴퓨터에서의 외연적 유한요소법의 벡터화)

  • 이지호;이재석
    • Computational Structural Engineering / v.4 no.1 / pp.95-108 / 1991
  • An explicit finite element method can be executed more rapidly and effectively on a vector computer than on a scalar computer because its structure is well suited to vector processing. In this paper, an efficient method for vectorizing an explicit finite element program on a memory-to-memory type vector computer is proposed. First, a general vectorization method that can be applied regardless of the vector architecture is investigated; then a method suited to the memory-to-memory type vector computer is proposed. To illustrate the usefulness of the proposed vectorization method, DYNA3D, an existing explicit finite element program, was ported to the HDS AS/XL V50, a memory-to-memory type vector computer. Performance results from actual tests show a vector/scalar speedup above 2.4.
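
The core of such vectorization is replacing an element-by-element scalar loop with operations over whole arrays of elements. Below is a small NumPy analogue of that restructuring; the actual DYNA3D kernels are far more involved, and the array names and sizes here are assumptions.

```python
import numpy as np

def internal_force_scalar(stiffness, displacement):
    """Element-by-element loop: the scalar-processor formulation."""
    n_elem = stiffness.shape[0]
    force = np.zeros_like(displacement)          # shape (n_elem, dof_per_elem)
    for e in range(n_elem):
        force[e] = stiffness[e] @ displacement[e]
    return force

def internal_force_vectorized(stiffness, displacement):
    """Same arithmetic expressed as one batched operation, so a vector
    (or SIMD) machine streams through all elements without a scalar loop."""
    # stiffness: (n_elem, dof, dof), displacement: (n_elem, dof)
    return np.einsum("eij,ej->ei", stiffness, displacement)

# Tiny consistency check with made-up data
k = np.random.rand(1000, 24, 24)
u = np.random.rand(1000, 24)
assert np.allclose(internal_force_scalar(k, u), internal_force_vectorized(k, u))
```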

The Architecture of the Flash Memory Storage System using Page Delete Information (페이지 삭제정보를 활용하는 플래시 저장장치의 구조)

  • Jung, Ho-Young;Park, Sung-Min;Kang, Soo-Yong;Cha, Jae-Hyuk
    • Journal of KIISE:Computing Practices and Letters / v.15 no.12 / pp.958-962 / 2009
  • Flash memory, which has recently been replacing hard disks, has physical characteristics different from those of hard disks. To improve the performance of flash memory based storage systems, much research has been conducted at the OS and file system layers. In this paper, we propose an architecture for flash memory based storage that uses page invalidation information passed down from the upper layer when a file is deleted, and we evaluate the performance of the proposed system. The proposed system effectively increases I/O performance by applying page invalidation information to its block merge and wear leveling algorithms.
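
The claimed benefit comes from the flash translation layer knowing which pages are already invalid (because their file was deleted) and skipping them during a block merge. The sketch below illustrates only that skip with a simplified flat page model; the interfaces and bookkeeping are assumptions, not the paper's FTL.

```python
class SimpleFTLSketch:
    """Toy flash translation layer: pages marked invalid by file deletion
    are not copied during a block merge, saving page writes."""
    def __init__(self, pages_per_block):
        self.pages_per_block = pages_per_block
        self.invalid = set()                  # (block, page) pairs known to be stale

    def notify_delete(self, block, page):
        # Called from the file system layer when a file's pages are deleted.
        self.invalid.add((block, page))

    def merge(self, victim_block, free_block, read_page, write_page):
        """Copy only the still-valid pages from victim_block into free_block."""
        copied = 0
        for page in range(self.pages_per_block):
            if (victim_block, page) in self.invalid:
                continue                      # deleted data: no copy needed
            write_page(free_block, copied, read_page(victim_block, page))
            copied += 1
        return copied                         # fewer copies -> fewer flash writes
```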