• Title/Summary/Keyword: SSD cache


A Prefetching and Memory Management Policy for Personal Solid State Drives (개인용 SSD를 위한 선반입 및 메모리 관리 정책)

  • Baek, Sung-Hoon
    • The KIPS Transactions:PartA / v.19A no.1 / pp.35-44 / 2012
  • Traditional techniques used to improve the performance of hard disk drives often backfire when applied to solid state drives (SSDs). In hard disk drives, which consist of mechanical components, access time and block address sequence are critical performance factors. An SSD, by contrast, provides superior random read performance that is unaffected by block address sequence, owing to the characteristics of flash memory. In practice, it is even recommended to disable prefetching when an SSD is installed in a personal computer. This paper nevertheless presents a combined prefetching and memory management scheme that takes into account the internal structure of SSDs and the characteristics of NAND flash memory. An SSD must operate multiple flash memory chips concurrently, and the I/O unit of NAND flash memory has grown to exceed the block size of operating systems; hence, the proposed prefetching scheme works in the SSD's internal operating unit. To compensate for a weak point of prefetching, the proposed memory management scheme adaptively evicts uselessly prefetched data so as to maximize the sum of the cache hit rate and the prefetch hit rate. We implemented the proposed schemes as a Linux kernel module and evaluated them on a commercial SSD; they improved I/O performance by up to 26% in our experiments.
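
A minimal sketch of the eviction idea described above, in Python: prefetched blocks that are never referenced are evicted before anything else, so useless prefetches do not crowd out demand-fetched data. The class name, the fixed prefetch degree, and the LRU fallback are illustrative assumptions, not the kernel module from the paper.

```python
from collections import OrderedDict

class PrefetchAwareCache:
    """Toy block cache that evicts never-referenced prefetched blocks first
    and prefetches in fixed-size groups.  Illustrative only: the paper's
    policy is a Linux kernel module that works in the SSD's internal
    operating unit and adapts eviction to balance cache and prefetch hits."""

    def __init__(self, capacity, prefetch_degree=4):
        self.capacity = capacity
        self.prefetch_degree = prefetch_degree   # blocks fetched ahead per miss (assumed)
        self.blocks = OrderedDict()              # block -> still an unused prefetch?

    def _evict(self):
        # Prefer a prefetched block that was never referenced ("useless" prefetch).
        for blk, unused in list(self.blocks.items()):
            if unused:
                del self.blocks[blk]
                return
        self.blocks.popitem(last=False)          # otherwise fall back to plain LRU

    def _insert(self, blk, prefetched):
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[blk] = prefetched

    def read(self, blk):
        if blk in self.blocks:
            self.blocks[blk] = False             # demand hit: no longer "useless"
            self.blocks.move_to_end(blk)
            return True
        self._insert(blk, prefetched=False)      # demand miss: cache the block itself
        for nxt in range(blk + 1, blk + 1 + self.prefetch_degree):
            if nxt not in self.blocks:
                self._insert(nxt, prefetched=True)   # prefetch one group ahead
        return False
```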

Block Replacement Scheme based on Reuse Interval for Hybrid SSD System (Hybrid SSD 시스템을 위한 재사용 간격 기반 블록 교체 기법)

  • Yoo, Sanghyun;Kim, Kyung Tae;Youn, Hee Yong
    • Journal of Internet Computing and Services / v.16 no.5 / pp.19-27 / 2015
  • Owing to fast read/write operations and low power consumption, SSDs (Solid State Drives) are now widely adopted as the storage devices of smartphones, laptop computers, servers, and so on. However, shortcomings such as a limited number of write operations and asymmetric read/write performance shorten the lifespan of an SSD. The block replacement policy of an SSD used as a cache for HDDs is therefore very important. Existing solutions for improving SSD lifespan, including the LARC scheme, typically employ the LRU algorithm to manage SSD blocks, which may increase the miss rate by replacing frequently used blocks instead of rarely used ones. In this paper we propose a novel block replacement scheme that considers the block reuse interval to effectively handle various data read/write patterns. The proposed scheme replaces a block in the SSD based on recency, determined by reuse interval and age, together with the hit ratio. Computer simulation using workload trace files reveals that the proposed scheme consistently improves the performance and lifespan of the SSD by increasing the hit ratio and decreasing the number of write operations compared to existing schemes, including LARC.
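
The following Python sketch illustrates one way a reuse-interval-based victim selection could look; the scoring function that combines reuse interval, age, and hit count is an assumed placeholder, not the formula from the paper.

```python
class ReuseIntervalCache:
    """Sketch of a reuse-interval-aware victim selector.  The exact scoring
    in the paper is not reproduced; this version simply prefers to evict
    blocks with a long observed reuse interval, a large age, and few hits."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.meta = {}     # block -> (last_access, reuse_interval, hits)
        self.clock = 0     # logical time, advanced on every access

    def access(self, blk):
        self.clock += 1
        if blk in self.meta:
            last, _, hits = self.meta[blk]
            self.meta[blk] = (self.clock, self.clock - last, hits + 1)
            return True                          # hit in the SSD cache
        if len(self.meta) >= self.capacity:
            self._evict()
        self.meta[blk] = (self.clock, None, 0)   # reuse interval unknown yet
        return False                             # miss: block brought into the SSD

    def _evict(self):
        def eviction_score(item):
            _, (last, interval, hits) = item
            age = self.clock - last
            # Unknown interval is treated as the worst case (assumed weighting).
            interval = interval if interval is not None else self.capacity
            return interval + age - hits         # higher score = better victim
        victim = max(self.meta.items(), key=eviction_score)[0]
        del self.meta[victim]
```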

A Multi-Level Flash Translation Layer for Large Capacity Solid State Drives

  • Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.11-18 / 2021
  • The flash translation layer (FTL) of an SSD maps the logical page number requested by the host to the physical flash memory page where the data is actually recorded, and it is very important to reduce the amount of RAM used to manage this mapping information. Existing demand-based FTLs apply a two-level method in which the mapping information is also recorded in flash memory pages and only the addresses of those pages are kept as a table in RAM. As SSD capacities grow to tens of terabytes, even that mapping table requires too much RAM. This paper proposes ML-FTL, which manages the mapping information in three levels to drastically reduce the required RAM. An evaluation shows that, by properly utilizing a cache, the increase in overhead over the conventional two-level method is minimal.
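
A hedged sketch of a three-level demand-mapping lookup of the kind the abstract describes: only the top-level directory and a small cache of mapping pages are held in RAM, while the lower levels are read from flash on a miss. The class layout and method names are illustrative assumptions, not ML-FTL's actual data structures.

```python
class ThreeLevelFTL:
    """Illustrative three-level demand mapping lookup (not ML-FTL's exact
    layout).  RAM holds only the top-level directory plus a small cache of
    recently loaded mapping pages; the lower two levels live in flash and
    are read only on a cache miss."""

    ENTRIES_PER_PAGE = 512     # e.g. 4-byte entries in a 2KB mapping page (assumed)

    def __init__(self, flash, top_directory):
        self.flash = flash               # assumed: read_page(ppn) -> list of entries
        self.top = top_directory         # level-1 directory, always in RAM
        self.cache = {}                  # (level, index) -> mapping page contents

    def _load(self, level, index, ppn):
        key = (level, index)
        if key not in self.cache:
            self.cache[key] = self.flash.read_page(ppn)   # extra flash read on miss
        return self.cache[key]

    def lookup(self, lpn):
        n = self.ENTRIES_PER_PAGE
        i1, i2, i3 = lpn // (n * n), (lpn // n) % n, lpn % n
        level2_ppn = self.top[i1]                     # RAM only
        level2 = self._load(2, i1, level2_ppn)        # flash unless cached
        level3 = self._load(3, lpn // n, level2[i2])  # flash unless cached
        return level3[i3]                             # physical page of the data
```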

Energy Conservation of RAID by Exploiting SSD Cache (SSD 캐시를 이용한 RAID의 에너지 절감 기법)

  • Lee, Hyo-J.;Kim, Eun-Sam;Noh, Sam-H.
    • Journal of KIISE:Computing Practices and Letters / v.16 no.2 / pp.237-241 / 2010
  • Energy conservation in server systems has become important. Although the storage subsystem is one of the biggest power consumers, developing energy conservation techniques for it is a challenging problem because of striping techniques such as RAID and the physical characteristics of hard disks. According to our observation, the footprint accessed over a day or over a few hours is much smaller than the whole data set. In this paper, we describe the design of a novel RAID architecture that uses an SSD as a large cache and conserves energy by holding such a footprint. We incorporate these approaches into a real implementation of a RAID 5 system consisting of four hard disks and an SSD in a Linux environment. Preliminary measurements using the cello99 and SPC traces show that energy consumption is reduced by up to 14%.
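
The mechanism can be pictured with the toy model below: as long as reads keep hitting the SSD cache that holds the recent footprint, the RAID disks see no traffic and can be spun down. The spin-down threshold and eviction policy are placeholders, not the design evaluated in the paper.

```python
class EnergyAwareRaidCache:
    """Toy model of the idea in the paper: while the recent footprint fits in
    the SSD cache, reads are served from the SSD and the RAID disks can stay
    spun down.  The threshold and eviction choice are assumptions."""

    SPIN_DOWN_AFTER = 100            # consecutive SSD hits before disks spin down (assumed)

    def __init__(self, ssd_capacity):
        self.capacity = ssd_capacity
        self.ssd = {}                # cached blocks, insertion order kept for eviction
        self.consecutive_hits = 0
        self.disks_spinning = True

    def read(self, blk):
        if blk in self.ssd:
            self.consecutive_hits += 1
            if self.consecutive_hits >= self.SPIN_DOWN_AFTER:
                self.disks_spinning = False       # idle disks can be powered down
            return 'ssd'
        # Miss: the RAID has to spin up and service the request.
        self.disks_spinning = True
        self.consecutive_hits = 0
        if len(self.ssd) >= self.capacity:
            self.ssd.pop(next(iter(self.ssd)))    # evict the oldest cached block
        self.ssd[blk] = True
        return 'raid'
```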

Performance Optimization in GlusterFS on SSDs (SSD 환경 아래에서 GlusterFS 성능 최적화)

  • Kim, Deoksang;Eom, Hyeonsang;Yeom, Heonyoung
    • KIISE Transactions on Computing Practices / v.22 no.2 / pp.95-100 / 2016
  • In the current era of big data and cloud computing, the amount of data in use is increasing, and various systems are being developed to process this big data rapidly. A distributed file system is often used to store the data, and GlusterFS is one of the popular distributed file systems. As computer technology has advanced, NAND flash SSDs (Solid State Drives), which are high-performance storage devices, have become cheaper, so datacenter operators try to use SSDs in their systems and to run GlusterFS on them. However, because GlusterFS is designed for HDDs (Hard Disk Drives), performance degrades when SSDs are used instead, owing to structural problems that include the I/O-cache, Read-ahead, and Write-behind translators. By removing these features, which do not suit SSDs and their strength in random I/O, we achieved performance improvements of up to 255% for 4KB random reads and up to 50% for 64KB random reads.
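
Assuming the stock translators are disabled per volume rather than by patching GlusterFS itself (the paper may have made the change differently, e.g. in the volfile), the effect could be reproduced with standard GlusterFS volume options, wrapped here in a small Python helper; the volume name is a placeholder.

```python
import subprocess

VOLUME = "testvol"   # placeholder volume name

# Take the translators the paper identifies out of the I/O path for this volume.
# These are standard GlusterFS volume options; "off" disables each translator.
for option in ("performance.io-cache",
               "performance.read-ahead",
               "performance.write-behind"):
    subprocess.run(["gluster", "volume", "set", VOLUME, option, "off"],
                   check=True)
```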

Enhancing Distributed File System Performance Using SSD Cache (SSD 캐시를 이용한 분산파일시스템의 성능 향상)

  • Kim, Chei-Yol;Park, Jeong-Sook;Kim, Young-Chang;Kim, Young-Kyun
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.83-86 / 2014
  • Using an SSD as a cache device on the client side of a distributed file system can improve the random I/O performance that distributed file systems lack and reduce the load on the back-end data servers. This paper shows that supporting an SSD device as a read cache on the client side of MAHA-FS, a distributed file system developed in Korea, improves read performance on cache hits, while the added read-cache functionality causes no degradation of write performance. We expect the SSD cache proposed in this paper to broaden the range of applications of distributed file systems.

Mechanism to Select the Data Source of HDFS with SSD Cache Based on Storage I/O Cost (SSD 캐시를 적용한 HDFS의 I/O 비용 기반 데이터 선택 기법)

  • Kim, Minkyung;Shin, Mincheol;Park, Sanghyun
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.676-679 / 2015
  • As the importance of SSDs, which are high-performance storage devices, grows in Hadoop environments for big data analysis, studies that combine them with HDDs, the commonly used storage devices, are attracting attention. In particular, several results show that using an SSD as a cache for an HDD can improve storage I/O performance, and this work builds on them by using the SSD as a cache for the HDD. When HDFS performs I/O on storage, the existing approach accesses the local HDD whenever a cache miss occurs on the local server. Depending on the data being accessed, this can fail to exploit the high bandwidth of the SSD, and the resulting I/O delay on a particular server can degrade the performance of the entire distributed job. To address this, this work compares, through a cost formula at the HDFS level, the cost of performing the I/O on the local server's HDD against performing it on the SSD of a remote server that stores a replica of the data. By always selecting the storage device with the higher expected performance to read the data, we show that performance can be improved over the existing approach.
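
A simplified version of such a cost comparison is sketched below; the latency-plus-transfer model and all device parameters are made-up placeholders, not the formula or measurements from the paper.

```python
def expected_read_cost(size_bytes, bandwidth_Bps, seek_latency_s, network_latency_s=0.0):
    """Very simplified cost model: fixed latency plus transfer time."""
    return seek_latency_s + network_latency_s + size_bytes / bandwidth_Bps

def choose_replica(size_bytes, local_hdd, remote_ssd):
    """Pick the replica with the lower expected cost.
    `local_hdd` and `remote_ssd` are dicts of assumed device parameters."""
    local = expected_read_cost(size_bytes,
                               local_hdd["bandwidth"],
                               local_hdd["seek_latency"])
    remote = expected_read_cost(size_bytes,
                                remote_ssd["bandwidth"],
                                remote_ssd["seek_latency"],
                                network_latency_s=remote_ssd["network_latency"])
    return "local_hdd" if local <= remote else "remote_ssd"

# Example: a 128MB block; with these made-up numbers the remote SSD replica wins.
print(choose_replica(128 * 2**20,
                     local_hdd={"bandwidth": 120e6, "seek_latency": 0.008},
                     remote_ssd={"bandwidth": 450e6, "seek_latency": 0.0001,
                                 "network_latency": 0.0005}))
```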

A Transaction Level Simulator for Performance Analysis of Solid-State Disk (SSD) in PC Environment (PC향 SSD의 성능 분석을 위한 트랜잭션 수준 시뮬레이터)

  • Kim, Dong;Bang, Kwan-Hu;Ha, Seung-Hwan;Chung, Sung-Woo;Chung, Eui-Young
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.12 / pp.57-64 / 2008
  • In this paper, we propose a system-level simulator for the performance analysis of a Solid-State Disk (SSD) in a PC environment using the TLM (Transaction Level Modeling) method. Our method provides quantitative analysis for a variety of architectural choices of the PC system as well as the SSD, and it drastically reduces analysis time compared to the conventional RTL (Register Transfer Level) modeling method. To show the effectiveness of the proposed simulator, we performed several explorations of the PC architecture as well as the SSD. More specifically, we measured the performance impact of the hit rate of the cache buffer that temporarily stores data from the PC, and we analyzed the performance variation of the SSD for various NAND flash memories with different response times. These experimental results show that our simulator can be effectively utilized for architecture exploration of both the SSD and the PC.

An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.20 no.11 / pp.31-38 / 2015
  • Nowadays, large flash memory drives of several hundred gigabytes or more are common. This paper presents an efficient cache management scheme for the flash translation layer, called TPC-FTL, aimed at such large flash memory drives. Since large flash drives usually contain a large RAM, the performance of the page mapping cache can be enhanced by devoting more RAM to it; but once the cache size exceeds a threshold, existing schemes become impractical for real devices because cache manipulation takes too long. TPC-FTL manages the cache in units of translation pages rather than the individual logical page numbers used in existing schemes. Since one translation page covers a large number of logical page numbers (for example, 512 for a 2KB page), the number of cache elements can be reduced to a practical level. A performance evaluation shows that the average response time, an important performance measure, is better than that of existing schemes, owing to the exploitation of spatial locality in addition to temporal locality.
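
The following Python sketch shows the translation-page-unit caching idea: one cached translation page answers lookups for 512 neighbouring logical pages, so the cache holds far fewer elements and spatial locality is exploited automatically. The LRU policy here is an assumption, not necessarily TPC-FTL's exact replacement scheme.

```python
from collections import OrderedDict

class TranslationPageCache:
    """Sketch of caching whole translation pages rather than individual
    logical-page entries.  512 entries per 2KB translation page is the figure
    quoted in the abstract; LRU eviction is an assumption."""

    ENTRIES_PER_TPAGE = 512

    def __init__(self, capacity_tpages, flash):
        self.capacity = capacity_tpages
        self.flash = flash                 # assumed: read_tpage(tvpn) -> 512 PPNs
        self.cache = OrderedDict()         # tvpn -> list of physical page numbers

    def lookup(self, lpn):
        tvpn, offset = divmod(lpn, self.ENTRIES_PER_TPAGE)
        if tvpn in self.cache:
            self.cache.move_to_end(tvpn)   # temporal locality
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)
            self.cache[tvpn] = self.flash.read_tpage(tvpn)   # one flash read on miss
        # Spatial locality: the 511 neighbouring LPNs now also hit in RAM.
        return self.cache[tvpn][offset]
```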

A Study of HDD Performance Improvement through Filter Driver & NAND FLASH Memory (Filter Driver 와 NAND FLASH Memory를 이용한 HDD 장치의 성능 개선에 관한 연구)

  • Kim, Jae-Kyung;Kim, Woo-Gil;Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.8 / pp.1635-1641 / 2011
  • In this paper, we study a method for improving HDD I/O performance using a filter driver and NAND flash memory. The work starts from the observation that HDDs cannot simply be replaced by NAND flash memory because of its high cost, so we instead consider using NAND flash memory as a cache for the HDD. High HDD performance can then be achieved at low cost through the filter driver.