• Title/Summary/Keyword: Memory Mapping

Block Unit Mapping Technique of NAND Flash Memory Using Variable Offset

  • Lee, Seung-Woo;Ryu, Kwan-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.8
    • /
    • pp.9-17
    • /
    • 2019
  • In this paper, we propose a block mapping technique applicable to NAND flash memory. To use NAND flash memory with operating systems and file systems developed for the hard disks that dominate the general PC field, system software known as the Flash Translation Layer (FTL) is required. The FTL uses an address mapping table to overcome the inability to overwrite data in place and to handle the additional operations imposed by the physical structure of NAND flash memory. In this paper, we propose a new mapping method based on block mapping for efficient use of NAND flash memory. The proposed technique processes data modification requests using blank pages in the existing block instead of allocating an additional block, thereby minimizing block-unit erase operations during merge operations. In addition, by adjusting the ratio of pages used for recording new data to pages reserved for modified data in a block according to the observed frequency of sequential and random write requests, the technique maximizes page utilization within a block and optimizes both sequential and random writes.
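
The variable-offset idea above can be made concrete with a small sketch (not the authors' code; the block size, page counts, and helper names are assumptions): data pages fill a block from the front, modification requests are absorbed by blank pages reserved at the back, and the split point is chosen from the observed mix of sequential and random writes.

```python
# Minimal sketch (not the paper's code): a flash block whose pages are split
# by a variable offset into a data region (front) and an update region (back).
PAGES_PER_BLOCK = 64

class VariableOffsetBlock:
    def __init__(self, offset):
        # 'offset' = number of pages reserved for original data; the rest
        # absorb modification requests without allocating an extra block.
        assert 0 < offset <= PAGES_PER_BLOCK
        self.offset = offset
        self.pages = [None] * PAGES_PER_BLOCK   # simulated page contents
        self.next_data = 0                       # next free data page (front)
        self.next_update = PAGES_PER_BLOCK - 1   # next free update page (back)
        self.latest = {}                         # logical page -> physical page index

    def write(self, lpn, data):
        """Sequential write of a new logical page into the data region."""
        if self.next_data >= self.offset:
            return False                         # data region full -> needs merge
        self.pages[self.next_data] = data
        self.latest[lpn] = self.next_data
        self.next_data += 1
        return True

    def update(self, lpn, data):
        """Modification request served by a blank page in the same block."""
        if self.next_update < self.offset:
            return False                         # update region full -> needs merge
        self.pages[self.next_update] = data
        self.latest[lpn] = self.next_update      # old copy becomes invalid
        self.next_update -= 1
        return True

def choose_offset(seq_writes, rand_writes):
    """Pick the data/update split from the observed write mix (assumed heuristic)."""
    total = seq_writes + rand_writes
    if total == 0:
        return PAGES_PER_BLOCK // 2
    ratio = seq_writes / total
    return max(1, min(PAGES_PER_BLOCK - 1, round(ratio * PAGES_PER_BLOCK)))

# Example: a mostly sequential workload reserves more pages for fresh data.
blk = VariableOffsetBlock(choose_offset(seq_writes=90, rand_writes=10))
blk.write(lpn=0, data=b"A")
blk.update(lpn=0, data=b"A'")
print(blk.latest)   # {0: 63} -> latest copy lives in the update region
```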

Implementation of Memory Efficient Flash Translation Layer for Open-channel SSDs

  • Oh, Gijun;Ahn, Sungyong
    • International journal of advanced smart convergence
    • /
    • v.10 no.1
    • /
    • pp.142-150
    • /
    • 2021
  • Open-channel SSD is a new type of Solid-State Disk (SSD) that reduces the garbage collection overhead and write amplification caused by the physical constraints of NAND flash memory by exposing the internal structure of the SSD to the host. However, the host-level Flash Translation Layer (FTL) provided for open-channel SSDs in the current Linux kernel consumes host memory excessively because it uses a page-level mapping table to translate logical addresses to physical addresses. Therefore, in this paper, we implement a selective mapping table loading scheme that loads only the currently required part of the mapping table from the SSD into a mapping table cache, instead of the entire mapping table. In addition, to increase the hit ratio of the mapping table cache, filesystem information and mapping table access history are used in the cache replacement policy. The proposed scheme is implemented in the host-level FTL of the Linux kernel and evaluated using an open-channel SSD emulator. According to the evaluation results, it achieves 80% of the I/O performance while using only 32% of the memory of the previous host-level FTL.
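
A minimal sketch of the selective mapping-table loading idea, under my own assumptions (segment size, a plain LRU policy standing in for the filesystem-aware replacement policy, and a fake on-SSD reader): only the mapping-table segment covering the requested logical address is loaded into a bounded host-side cache.

```python
# Sketch (assumed structures, not the Linux host-level FTL): the L2P mapping
# table is divided into fixed-size segments stored on the SSD; only segments
# that are actually referenced are loaded into a small in-host cache.
from collections import OrderedDict

ENTRIES_PER_SEGMENT = 1024

class MappingTableCache:
    def __init__(self, capacity_segments, read_segment_from_ssd):
        self.capacity = capacity_segments
        self.read_segment = read_segment_from_ssd   # callback: seg_id -> list of PPAs
        self.cache = OrderedDict()                   # seg_id -> segment (LRU order)
        self.hits = self.misses = 0

    def lookup(self, lpn):
        """Translate a logical page number to a physical page address."""
        seg_id, off = divmod(lpn, ENTRIES_PER_SEGMENT)
        if seg_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(seg_id)           # mark as recently used
        else:
            self.misses += 1
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)       # evict LRU segment
            self.cache[seg_id] = self.read_segment(seg_id)
        return self.cache[seg_id][off]

# Example with a fake on-SSD table: the PPA is just LPN + 1000 here.
fake_ssd = lambda seg: [seg * ENTRIES_PER_SEGMENT + i + 1000
                        for i in range(ENTRIES_PER_SEGMENT)]
cache = MappingTableCache(capacity_segments=4, read_segment_from_ssd=fake_ssd)
print(cache.lookup(5), cache.lookup(6), cache.misses)   # 1005 1006 1 (same segment)
```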

An Effective Memory Mapping Function for CMAC Controller (CMAC 제어기를 위한 효과적인 메모리 매핑 함수)

  • Kwon, H.Y.;Bien, Z.;Suh, I.H.
    • Proceedings of the KIEE Conference
    • /
    • 1989.11a
    • /
    • pp.488-493
    • /
    • 1989
  • In this paper, the structure of CMAC address mapping is first revisited, and the address hashing function and random mapping used in conventional CMAC implementations are discussed. Then the effective size of the CMAC memory is derived from the modulus property of the CMAC address vector, and a new hashing function for effective memory mapping is proposed, yielding a CMAC implementation with a feasible memory size and no troublesome random mapping. Finally, the performance of the conventional CMAC learning algorithm and that of the proposed CMAC scheme are compared via simulations.

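To make the CMAC address-mapping discussion concrete, here is a generic sketch under assumed parameters (memory size, number of tilings, quantization step, and the modular hash are all illustrative, not the paper's proposed function): each input is quantized by several shifted tilings, the resulting address vector is hashed into a memory of feasible size, and the addressed weights are summed to form the CMAC output.

```python
# Sketch of CMAC-style address mapping (illustrative, not the paper's function):
# the input is covered by several shifted quantizations (tilings); each tiling
# yields one address, and the address vector is hashed into a finite memory.
MEMORY_SIZE = 4096      # effective CMAC memory size (assumed)
NUM_TILINGS = 8         # number of overlapping quantizations (assumed)
QUANT = 0.5             # quantization step per tiling (assumed)

def cmac_addresses(x, y):
    """Map a 2-D input to NUM_TILINGS memory addresses via a modular hash."""
    addrs = []
    for t in range(NUM_TILINGS):
        # Each tiling is shifted by a fraction of the quantization step.
        qx = int((x + t * QUANT / NUM_TILINGS) / QUANT)
        qy = int((y + t * QUANT / NUM_TILINGS) / QUANT)
        # Simple hash of the (tiling, qx, qy) address vector into memory.
        addrs.append((t * 92821 + qx * 68917 + qy * 60013) % MEMORY_SIZE)
    return addrs

def cmac_output(weights, x, y):
    """CMAC response: sum of the weights selected by the address mapping."""
    return sum(weights[a] for a in cmac_addresses(x, y))

def cmac_train(weights, x, y, target, lr=0.1):
    """LMS-style update spread equally over the addressed cells."""
    addrs = cmac_addresses(x, y)
    err = target - sum(weights[a] for a in addrs)
    for a in addrs:
        weights[a] += lr * err / len(addrs)

w = [0.0] * MEMORY_SIZE
for _ in range(50):
    cmac_train(w, 1.0, 2.0, target=3.0)
print(round(cmac_output(w, 1.0, 2.0), 3))   # converges toward 3.0
```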

Algorithmic GPGPU Memory Optimization

  • Jang, Byunghyun;Choi, Minsu;Kim, Kyung Ki
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.14 no.4
    • /
    • pp.391-406
    • /
    • 2014
  • The performance of General-Purpose computation on Graphics Processing Units (GPGPU) is heavily dependent on memory access behavior. This sensitivity is due to a combination of the underlying Massively Parallel Processing (MPP) execution model present on GPUs and the lack of architectural support to handle irregular memory access patterns. Application performance can be significantly improved by applying memory-access-pattern-aware optimizations that exploit knowledge of the characteristics of each access pattern. In this paper, we present an algorithmic methodology to semi-automatically find the best mapping of the memory accesses present in a serial loop nest onto the underlying data-parallel architecture, based on a comprehensive static memory access pattern analysis. To that end we present a simple, yet powerful, mathematical model that captures all memory access pattern information present in serial data-parallel loop nests. We then show how this model is used in practice to select the most appropriate memory space for data and to search for an appropriate thread mapping and work-group size from a large design space. To evaluate the effectiveness of our methodology, we report execution speedups using selected benchmark kernels that cover a wide range of memory access patterns commonly found in GPGPU workloads. Our experimental results are reported using the industry-standard heterogeneous programming language, OpenCL, targeting the NVIDIA GT200 architecture.
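
As a rough illustration of static access-pattern classification in the spirit of this methodology (the affine model, pattern names, and memory-space suggestions below are my simplifications, not the paper's formulation): the coefficient of the thread-mapped loop index in an affine array subscript already indicates whether accesses will coalesce on the GPU.

```python
# Sketch only: classify an affine array access A[a*i + b*j + c] inside a
# doubly nested loop, assuming loop index j is mapped to adjacent GPU threads.
# Coefficient patterns (names are illustrative, not the paper's taxonomy):
#   b == 1    -> unit stride across threads (coalesced-friendly)
#   b == 0    -> same element for all threads (broadcast-friendly)
#   otherwise -> strided/irregular across threads (remap or stage in shared memory)
from dataclasses import dataclass

@dataclass
class AffineAccess:
    a: int   # coefficient of the outer loop index i
    b: int   # coefficient of the inner (thread-mapped) loop index j
    c: int   # constant offset

def classify(access: AffineAccess) -> str:
    if access.b == 1:
        return "coalesced"      # neighbouring threads touch neighbouring elements
    if access.b == 0:
        return "uniform"        # all threads in a row read the same element
    return "strided"            # stride |b| across threads; consider tiling/remap

def suggest_memory_space(kind: str) -> str:
    # A simple mapping from pattern to a preferred GPU memory space (assumed).
    return {"coalesced": "global memory, row-major layout",
            "uniform":   "constant or shared memory",
            "strided":   "shared-memory tile after a coalesced staging load"}[kind]

# Example: C[i*N + j] = A[i*N + j] + B[j*N + i] for an N x N matrix.
N = 1024
for name, acc in {"A": AffineAccess(a=N, b=1, c=0),
                  "B": AffineAccess(a=1, b=N, c=0),
                  "C": AffineAccess(a=N, b=1, c=0)}.items():
    kind = classify(acc)
    print(name, kind, "->", suggest_memory_space(kind))
```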

A Study on Improvement of Low-power Memory Architecture in IoT/edge Computing (IoT/에지 컴퓨팅에서 저전력 메모리 아키텍처의 개선 연구)

  • Cho, Doosan
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.24 no.1
    • /
    • pp.69-77
    • /
    • 2021
  • Low-cost design methodologies are widely used for IoT devices. In such networked devices, memory is composed of flash memory, SRAM, DRAM, and similar components, and because large amounts of data are processed, memory design is an important factor in system performance. Each device therefore selects optimized design factors such as functionality, performance, and cost according to market demand. The memory architectures available for low-cost IoT devices are largely limited to configurations of SRAM, flash memory, and DRAM. To process as much data as possible in the same space, an architecture that supports parallel processing units is usually provided; such a parallel architecture is a design approach that delivers high performance at low cost. However, it requires precise software techniques for mapping instructions and data onto the parallel architecture. This paper proposes an instruction/data mapping method that supports optimized parallel processing performance. The proposed method optimizes system performance by actively exploiting both hardware and software parallelism.
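
The abstract stays at a high level, so the following is only a generic illustration of data mapping onto parallel memory banks (bank count, interleaving scheme, and lane model are assumptions, not the paper's method): word-interleaving an array across banks lets parallel lanes fetch their operands without serializing on a single bank.

```python
# Illustrative sketch (not the paper's method): map array elements across
# parallel SRAM banks so that consecutive parallel lanes hit different banks.
NUM_BANKS = 4
CHUNK = 16   # chunk size used by the (bad) blocked mapping below

def bank_of(index, interleave=True):
    """Return (bank, local address) for an array element."""
    if interleave:
        return index % NUM_BANKS, index // NUM_BANKS   # word-interleaved mapping
    # Blocked mapping: contiguous chunks per bank (bad for lockstep lanes).
    return (index // CHUNK) % NUM_BANKS, index % CHUNK

def conflicts(indices, interleave=True):
    """Count bank conflicts when the given indices are accessed in one cycle."""
    banks = [bank_of(i, interleave)[0] for i in indices]
    return len(banks) - len(set(banks))

# Four parallel lanes each read consecutive elements a[0], a[1], a[2], a[3].
lanes = [0, 1, 2, 3]
print("interleaved conflicts:", conflicts(lanes, interleave=True))    # 0
print("blocked     conflicts:", conflicts(lanes, interleave=False))   # 3 (all in bank 0)
```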

Index block mapping for flash memory system (플래쉬 메모리 시스템을 위한 인덱스 블록 매핑)

  • Lee, Jung-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.8
    • /
    • pp.23-30
    • /
    • 2010
  • Flash memory is non-volatile and can retain data even after the system is powered off. Besides, it has many other desirable features such as fast access speed, low power consumption, good shock resistance, small size, and light weight. As its price decreases and its capacity increases, flash memory is expected to be widely used in consumer electronics, embedded systems, and mobile devices. Flash storage systems generally adopt a software layer called the FTL. In this research, we propose a new FTL mechanism that overcomes the major drawback of the conventional block mapping algorithm: in addition to the block mapping table, a small index block mapping table is used to indicate sector locations. According to the simulation results, the proposed FTL is faster than a conventional hybrid mapping algorithm by around 45% on average, and the mapping memory requirement is reduced by around 12%.
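
A small sketch of the general idea of pairing a coarse block-mapping table with a small per-block index that records where each logical sector currently lives (the table layout and names are assumptions, not the paper's exact structures):

```python
# Sketch (assumed layout): a block-level mapping table translates a logical
# block number to a physical block, and a small index table records, for each
# logical page offset, which page inside that physical block holds the data.
PAGES_PER_BLOCK = 64

class IndexedBlockFTL:
    def __init__(self, num_blocks):
        self.block_map = list(range(num_blocks))          # logical blk -> physical blk
        # Small index table: (physical blk, logical offset) -> physical page offset.
        self.index = {}
        self.next_free = {pb: 0 for pb in range(num_blocks)}
        self.flash = {}                                   # (phys blk, page) -> data

    def write(self, lpn, data):
        lbn, off = divmod(lpn, PAGES_PER_BLOCK)
        pbn = self.block_map[lbn]
        page = self.next_free[pbn]                        # append to next blank page
        assert page < PAGES_PER_BLOCK, "block full: merge/erase needed"
        self.flash[(pbn, page)] = data
        self.index[(pbn, off)] = page                     # latest location of this sector
        self.next_free[pbn] += 1

    def read(self, lpn):
        lbn, off = divmod(lpn, PAGES_PER_BLOCK)
        pbn = self.block_map[lbn]
        page = self.index.get((pbn, off), off)            # fall back to in-place position
        return self.flash.get((pbn, page))

ftl = IndexedBlockFTL(num_blocks=8)
ftl.write(lpn=10, data=b"v1")
ftl.write(lpn=10, data=b"v2")     # update lands on the next blank page
print(ftl.read(10))               # b'v2' via the index table, no block erase
```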

Mapping Cache for High-Performance Memory Mapped File I/O in Memory File Systems (메모리 파일 시스템 기반 고성능 메모리 맵 파일 입출력을 위한 매핑 캐시)

  • Kim, Jiwon;Choi, Jungsik;Han, Hwansoo
    • Journal of KIISE
    • /
    • v.43 no.5
    • /
    • pp.524-530
    • /
    • 2016
  • The desire to access data faster and the growth of next-generation memories such as non-volatile memories contribute to the development of research on memory file systems. Memory mapped file I/O, which has less overhead than read-write I/O, is recommended for a high-performance memory file system. Memory mapped file I/O, however, incurs a page table overhead, which becomes one of the major overheads to be resolved in overall file I/O performance. We find that the same overhead is incurred unnecessarily, because the page table of a file is removed when the file is closed and must be rebuilt whenever the file is opened again. To remove this duplicated overhead, we propose the mapping cache, a technique that does not delete the page table of a file when its mapping is released, but saves it to be reused on the next open. We demonstrate that the mapping cache improves traditional file I/O performance by 2.8x and web server performance by 12%.
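
A rough user-level sketch of the mapping-cache idea (the real technique preserves kernel page tables; the classes and eviction policy here are stand-ins): instead of tearing down a file's mapping state on close, it is parked in a cache keyed by file so that a later open can reuse it and skip rebuilding the page-table entries.

```python
# Conceptual sketch only: the actual technique caches kernel page tables for
# memory-mapped files; here a dictionary stands in for the page-table state.
from collections import OrderedDict

class MappingCache:
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.cached = OrderedDict()     # file id -> preserved "page table"
        self.rebuilds = 0               # how many times we had to rebuild mappings

    def open(self, file_id, build_page_table):
        """Reuse a preserved mapping if present; otherwise build it (the slow path)."""
        if file_id in self.cached:
            return self.cached.pop(file_id)       # hand the mapping back to the file
        self.rebuilds += 1
        return build_page_table(file_id)          # page-fault-driven construction

    def close(self, file_id, page_table):
        """Instead of destroying the mapping, park it for later reuse."""
        if len(self.cached) >= self.capacity:
            self.cached.popitem(last=False)       # evict the least recently closed file
        self.cached[file_id] = page_table

def build_page_table(file_id):
    # Stand-in for the expensive soft-page-fault work done on first access.
    return {"file": file_id, "entries": list(range(4))}

mc = MappingCache()
pt = mc.open("a.db", build_page_table)   # first open: mapping built (rebuilds = 1)
mc.close("a.db", pt)                     # mapping preserved, not destroyed
pt = mc.open("a.db", build_page_table)   # reopen: mapping reused (rebuilds stays 1)
print(mc.rebuilds)                       # 1
```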

A Memory Mapping Technique to Reduce Data Retrieval Cost in the Storage Consisting of Multi Memories (다중 메모리로 구성된 저장장치에서 데이터 탐색 비용을 줄이기 위한 메모리 매핑 기법)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.19-24
    • /
    • 2023
  • With the rapid development of memory technology, various types of memory have been developed and are used to improve processing speed in data management systems. In particular, NAND flash memory is used as the main medium for storing data in memory-based storage devices because of its non-volatility: it can retain data even in the powered-off state. However, since recently studied memory-based storage devices consist of various types of memory such as MRAM and PRAM as well as NAND flash memory, research on memory management techniques is needed to improve data processing performance and media efficiency in a storage system composed of different types of memory. In this paper, we propose a memory mapping technique for efficiently managing data in a storage device composed of various memories. The proposed idea is to manage the different memories through a single mapping table. This method unifies the addressing scheme of the data and reduces the cost of searching for data stored in different memories for data tiering.
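
A minimal sketch of a single mapping table spanning several memory types (the media names, tiering rule, and entry format are illustrative assumptions): each logical address maps to a (medium, physical address) pair, so the lookup cost is the same regardless of which memory holds the data, and tiering only rewrites that one entry.

```python
# Sketch (assumed design): one mapping table for a storage device built from
# several memory types; each entry records which medium holds the data and where.
from dataclasses import dataclass

@dataclass
class MapEntry:
    medium: str        # "MRAM", "PRAM", or "NAND" (illustrative tiers)
    phys_addr: int     # physical address within that medium

class UnifiedMappingTable:
    def __init__(self):
        self.table = {}                      # logical address -> MapEntry
        self.next_free = {"MRAM": 0, "PRAM": 0, "NAND": 0}

    def _allocate(self, medium):
        addr = self.next_free[medium]
        self.next_free[medium] += 1
        return addr

    def write(self, laddr, hot=False):
        """Place hot data in fast memory, cold data in NAND (assumed policy)."""
        medium = "MRAM" if hot else "NAND"
        self.table[laddr] = MapEntry(medium, self._allocate(medium))

    def lookup(self, laddr):
        """Single lookup regardless of which medium holds the data."""
        return self.table[laddr]

    def demote(self, laddr):
        """Tiering: move data to NAND by rewriting one mapping entry."""
        self.table[laddr] = MapEntry("NAND", self._allocate("NAND"))

umt = UnifiedMappingTable()
umt.write(0x10, hot=True)
print(umt.lookup(0x10))    # MapEntry(medium='MRAM', phys_addr=0)
umt.demote(0x10)
print(umt.lookup(0x10))    # MapEntry(medium='NAND', phys_addr=0)
```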

The Efficient Memory Mapping of FPGA Implementation for Real-Time 2-D Discrete Wavelet Transform using Mallat tree algorithm (Mallat tree 방법을 이용한 실시간 2-D DWT의 FPGA 구현을 위한 효율적인 메모리 사상)

  • 김왕현;서영호;김종현;김동욱
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.105-108
    • /
    • 2001
  • This paper proposes an efficient memory scheduling method (E²M²) with which real-time image compression using the 2-dimensional discrete wavelet transform (2-D DWT) becomes possible in an FPGA chip. In this paper, we assume that the 2-D DWT is performed according to the Mallat tree. After the memory mapping method was verified in software, a memory controller was designed for a commercial SDRAM IC.

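The abstract gives little detail, so the following is only a generic illustration of how Mallat-tree 2-D DWT coefficients are commonly laid out in a single frame buffer (the in-place quadrant convention and image size are assumptions, not the paper's E²M² schedule): an address generator needs only the level and subband to locate any coefficient.

```python
# Illustration only (not the paper's E^2M^2 schedule): address generation for a
# Mallat-tree 2-D DWT stored in-place in one frame buffer of size N x N.
N = 512   # image width/height in pixels (assumed power of two)

def subband_origin(level, subband):
    """Top-left corner (row, col) of a subband at a given Mallat level.

    Level 1 halves the image once, level 2 halves the LL of level 1, and so on.
    One common quadrant convention is assumed: LL top-left, HL top-right,
    LH bottom-left, HH bottom-right.
    """
    half = N >> level                       # subband side length at this level
    row = 0 if subband in ("LL", "HL") else half
    col = 0 if subband in ("LL", "LH") else half
    return row, col

def coeff_address(level, subband, r, c, row_pitch=N):
    """Linear memory address of coefficient (r, c) inside the given subband."""
    half = N >> level
    assert 0 <= r < half and 0 <= c < half
    row0, col0 = subband_origin(level, subband)
    return (row0 + r) * row_pitch + (col0 + c)

# Example: the HH quadrant of level 1 starts halfway down and across the buffer,
# while level-2 subbands live inside the level-1 LL quadrant.
print(coeff_address(1, "HH", 0, 0))   # 256*512 + 256 = 131328
print(coeff_address(2, "LL", 0, 0))   # 0 (top-left, ready for the next level)
print(coeff_address(2, "LH", 0, 0))   # 128*512 = 65536
```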

Index management technique using Small block in storage device based on NAND flash memory

  • Lee, Seung-Woo;Oh, Se-Jin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.10
    • /
    • pp.1-14
    • /
    • 2020
  • In this paper, we address the problem of increasing system memory usage caused by the growing amount of mapping information that must be managed when a NAND flash memory-based storage device is used with an existing sector-based file system. The proposed technique stores mapping information only in page units based on index blocks and manages it in block units. To this end, it uses a sequential offset for storing multiple mapping entries in one page of a small block, and a reverse offset for spare pages that absorb changes to mapping information within the block. As a result, the proposed technique requires fewer block-unit erase operations than the existing technique, and the system memory usage required for managing mapping information is reduced by about 32%.
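
A minimal sketch of the sequential-offset/reverse-offset idea described above (page size, entry counts, and data structures are assumed): mapping entries are packed into index pages filled from the front of the block, while pages that absorb later changes to the mapping information are taken from the back, so an update does not force an immediate block erase.

```python
# Sketch (assumed parameters): an index block whose pages hold packed mapping
# entries; new index pages are filled from the front (sequential offset) and
# spare pages for updated mapping information are taken from the back
# (reverse offset).
PAGES_PER_BLOCK = 64
ENTRIES_PER_PAGE = 128          # how many mapping entries fit in one index page

class SmallIndexBlock:
    def __init__(self):
        self.pages = [[] for _ in range(PAGES_PER_BLOCK)]
        self.seq = 0                          # next front page for new entries
        self.rev = PAGES_PER_BLOCK - 1        # next back page for updated entries
        self.where = {}                       # lpn -> (page, slot) of latest entry

    def _full(self):
        return self.seq > self.rev

    def add(self, lpn, ppn):
        """Append a new mapping entry to the current front index page."""
        if len(self.pages[self.seq]) == ENTRIES_PER_PAGE:
            self.seq += 1                     # move the sequential offset forward
        assert not self._full(), "index block exhausted: erase/merge needed"
        self.pages[self.seq].append((lpn, ppn))
        self.where[lpn] = (self.seq, len(self.pages[self.seq]) - 1)

    def update(self, lpn, ppn):
        """Record a changed mapping in a spare page taken from the back."""
        if len(self.pages[self.rev]) == ENTRIES_PER_PAGE:
            self.rev -= 1                     # move the reverse offset backward
        assert not self._full(), "index block exhausted: erase/merge needed"
        self.pages[self.rev].append((lpn, ppn))
        self.where[lpn] = (self.rev, len(self.pages[self.rev]) - 1)

    def lookup(self, lpn):
        page, slot = self.where[lpn]
        return self.pages[page][slot][1]

blk = SmallIndexBlock()
blk.add(lpn=7, ppn=100)
blk.update(lpn=7, ppn=200)        # no erase: the change lands in a back page
print(blk.lookup(7))              # 200
```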