• Title/Summary/Keyword: cache performance (캐시 성능)

Performance Relationship Analysis in Map Block Number of NAND Flash Storage Device Using Map Cache Techniques (맵 캐시 기법을 사용하는 낸드 플래시 저장장치의 맵 블록 개수에 따른 성능 관계 분석)

  • Lee, Daeyong;Song, Yong Ho
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.22-25 / 2016
  • A NAND flash storage device that uses a map cache technique needs space in which to store its map data. This space is called the map block area, and it occupies some of the NAND blocks otherwise used for system maintenance and performance improvement. If there are too many map blocks, blocks for system maintenance run short and overall performance degrades. Conversely, if there are too few, the operations needed to maintain the full map data are performed excessively, again causing a large performance drop. This paper analyzes how performance changes with the number of map blocks and proposes an optimal map block count.
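
To make the tradeoff concrete, here is a toy Python model (my own construction, not the paper's): one invented cost term grows as map blocks consume spare NAND space, another grows as map blocks become scarce, and a sweep finds the cost-minimizing count. All constants are illustrative.

```python
# Toy model (not from the paper): sweep the number of map blocks and
# combine two hypothetical cost terms -- garbage-collection pressure,
# which grows as map blocks eat into spare space, and map-maintenance
# traffic, which grows as map blocks become scarce.

TOTAL_BLOCKS = 1024        # hypothetical NAND blocks in the device
MAP_DATA_BLOCKS = 64       # hypothetical size of the full mapping table

def total_cost(map_blocks: int) -> float:
    spare = TOTAL_BLOCKS - map_blocks          # blocks left for data / GC
    gc_cost = TOTAL_BLOCKS / spare             # rises as spare space shrinks
    map_cost = MAP_DATA_BLOCKS / map_blocks    # rises as map space shrinks
    return gc_cost + map_cost

best = min(range(8, 512), key=total_cost)
print(f"cost-minimizing map block count (toy model): {best}")
```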

SSD Caching based De-Duplication for Virtualization Environment (가상화 환경을 위한 SSD 캐시 기반의 중복 제거 기법)

  • Kang, Dong-Woo;Kim, Se-Woog;Lee, Nam-Su;Choi, Jong-Moo;Kim, Jun-Mo
    • Proceedings of the Korean Information Science Society Conference / 2012.06a / pp.293-295 / 2012
  • Virtualization logically provisions physical computing resources to users to improve system efficiency and flexibility, and it is used for server consolidation and in cloud computing environments such as Amazon EC2. In such virtualized environments, concurrent I/O from multiple virtual machines creates a bottleneck at the storage device. Moreover, storing data duplicated across virtual machines incurs unnecessary write costs and degrades system performance. In this paper, we propose a de-duplication scheme that uses an SSD as a cache to reduce I/O costs in virtualized environments. The proposed scheme eliminates duplicated data to reduce unnecessary disk I/O, and we show that it improves I/O performance in virtualized environments through a cache structure model that effectively exploits the SSD's fast write performance while accounting for the characteristics of duplication patterns.
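
As an illustration of the general idea only (the paper's actual design is not reproduced here), the sketch below deduplicates fixed-size writes by SHA-1 fingerprint and stages unique blocks in a dictionary standing in for the SSD cache; all names are hypothetical.

```python
# Minimal dedup-cache sketch: fingerprint each 4 KiB write with SHA-1;
# duplicates only bump a reference count, while unique blocks are
# staged in an SSD cache stand-in before reaching the disk.
import hashlib

class DedupSSDCache:
    def __init__(self):
        self.fingerprints = {}   # SHA-1 digest -> (location, refcount)
        self.ssd_cache = {}      # location -> block data (the "SSD")
        self.next_loc = 0

    def write(self, data: bytes) -> int:
        digest = hashlib.sha1(data).digest()
        if digest in self.fingerprints:          # duplicate: skip the write
            loc, ref = self.fingerprints[digest]
            self.fingerprints[digest] = (loc, ref + 1)
            return loc
        loc = self.next_loc                      # unique: write through SSD
        self.next_loc += 1
        self.ssd_cache[loc] = data
        self.fingerprints[digest] = (loc, 1)
        return loc

cache = DedupSSDCache()
a = cache.write(b"x" * 4096)
b = cache.write(b"x" * 4096)   # duplicated VM image block
assert a == b                  # second write was deduplicated
```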

A Design and Performance Analysis of Web Cache Replacement Policy Based on the Size Heterogeneity of the Web Object (웹 객체 크기 이질성 기반의 웹 캐시 대체 기법의 설계와 성능 평가)

  • Na Yun Ji;Ko Il Seok;Cho Dong Uk
    • The KIPS Transactions: Part C / v.12C no.3 s.99 / pp.443-448 / 2005
  • Efficient use of the web cache has become an important factor in determining system management efficiency in web-based systems. Cache performance depends heavily on the replacement algorithm, which dynamically selects a suitable subset of objects for caching in a finite cache space. In this paper, a web caching algorithm is proposed for the efficient operation of web-based systems. The algorithm is designed around a divided scope that considers the size reference characteristics and heterogeneity of web objects. In our experimental environment, the algorithm is compared with conventional replacement algorithms, and we confirmed a performance improvement of more than 15%.
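
A minimal sketch of size-partitioned replacement in Python, with invented class boundaries and budgets; the paper's actual divided-scope design may differ. Partitioning by size class keeps one large object from evicting many small, often hotter, ones.

```python
# Illustrative only: partition the cache into per-size-class LRU lists.
from collections import OrderedDict

SIZE_CLASSES = [1_000, 100_000, float("inf")]   # assumed boundaries (bytes)

class SizePartitionedCache:
    def __init__(self, budget_per_class: int = 500_000):
        self.parts = [OrderedDict() for _ in SIZE_CLASSES]  # url -> size
        self.used = [0] * len(SIZE_CLASSES)
        self.budget = budget_per_class

    def _class_of(self, size: int) -> int:
        return next(i for i, b in enumerate(SIZE_CLASSES) if size <= b)

    def access(self, url: str, size: int) -> bool:
        c = self._class_of(size)
        part = self.parts[c]
        if url in part:                  # hit: refresh LRU position
            part.move_to_end(url)
            return True
        while self.used[c] + size > self.budget and part:
            _, old = part.popitem(last=False)   # evict class-local LRU victim
            self.used[c] -= old
        part[url] = size
        self.used[c] += size
        return False
```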

Performance Analysis of Parity Cache enabled RAID Level 5 for DDR Memory Storage Device (패리티 캐시를 이용한 DDR 메모리 저장 장치용 RAID 레벨 5의 성능 분석)

  • Gu, Bon-Gen;Kwak, Yun-Sik;Cheong, Seung-Kook;Hwang, Jung-Yeon
    • Journal of Advanced Navigation Technology / v.14 no.6 / pp.916-927 / 2010
  • In this paper, we analyze the performance of a parity-cache-enabled RAID level-5 system via simulation. The RAID system consists of DDR memory-based storage devices. To this end, we develop a simulation model, define the performance data to be collected, implement a simulator based on the model, and run it. The simulation results suggest that a parity-cache-enabled RAID level-5 built from DDR memory-based storage devices can improve storage system performance, provided that the applications' storage access patterns are tuned accordingly.
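
The benefit of a parity cache can be illustrated with the standard RAID-5 small-write update, new parity = old parity XOR old data XOR new data. The sketch below (illustrative only, not the simulated system from the paper) skips the parity read from disk whenever the stripe's parity is cached.

```python
# RAID-5 small write: without a parity cache, both old data and old
# parity must be read before the new parity can be computed
# (read-modify-write). Caching hot parity strips removes one read.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity_cache = {}   # stripe id -> cached parity strip

def small_write(stripe: int, old_data: bytes, new_data: bytes,
                read_parity_from_disk) -> bytes:
    old_parity = parity_cache.get(stripe)
    if old_parity is None:                    # miss: pay the parity read
        old_parity = read_parity_from_disk(stripe)
    new_parity = xor(xor(old_parity, old_data), new_data)
    parity_cache[stripe] = new_parity         # keep hot parity cached
    return new_parity
```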

Forgetting based File Cache Management Scheme for Non-Volatile Memory (데이터 망각을 활용한 비휘발성 메모리 기반 파일 캐시 관리 기법)

  • Kang, Dongwoo;Choi, Jongmoo
    • Journal of KIISE / v.42 no.8 / pp.972-978 / 2015
  • Non-volatile memory (NVM) supports both byte addressability and non-volatility. These characteristics make it feasible to employ NVM at any layer of the memory hierarchy, such as cache, memory, and disk. An interesting characteristic of NVM is that, even though it supports non-volatility, its retention capability is limited. Furthermore, NVM exhibits a tradeoff between retention capability and write latency. In this paper, we propose a novel NVM-based file cache management scheme that exploits this limited retention capability to improve cache performance. Experimental results with real workloads show that our scheme can reduce access latency by up to 31% (24.4% on average) compared with the conventional LRU-based cache management scheme.
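
A rough sketch of the forgetting idea, under the assumption that pages written at relaxed (cheaper) retention carry an expiry time; the retention window and structure are invented for illustration, not taken from the paper.

```python
# Sketch: clean pages past their retention window are simply forgotten;
# only dirty pages must be persisted before their data decays.
import time

RETENTION_SECS = 60.0          # assumed relaxed-retention window

class ForgettingCache:
    def __init__(self):
        self.pages = {}        # file offset -> (data, dirty, expires_at)

    def put(self, off: int, data: bytes, dirty: bool):
        self.pages[off] = (data, dirty, time.monotonic() + RETENTION_SECS)

    def get(self, off: int):
        entry = self.pages.get(off)
        if entry is None:
            return None
        data, dirty, expires = entry
        if time.monotonic() >= expires:
            if dirty:                      # persist before the data decays
                self.flush(off, data)
            del self.pages[off]            # clean data is just forgotten
            return None
        return data

    def flush(self, off: int, data: bytes):
        pass                               # stub: write back to storage here
```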

Dynamic Directory Table: On-Demand Allocation of Directory Entries for Active Shared Cache Blocks (동적 디렉터리 테이블 : 공유 캐시 블록의 디렉터리 엔트리 동적 할당)

  • Bae, Han Jun;Choi, Lynn
    • Journal of KIISE / v.44 no.12 / pp.1245-1251 / 2017
  • In this study, we present a novel directory architecture that can dynamically allocate a directory entry for a cache block on demand at runtime, only when the block is shared by more than one core. Thus, we do not maintain coherence for private blocks, substantially reducing the number of directory entries. Even for shared blocks, we allocate a directory entry dynamically only while the block is actively shared, further reducing the number of directory entries at runtime. For this, we propose a new directory architecture called the dynamic directory table (DDT), implemented as a cache of active directory entries. Through detailed simulation on PARSEC benchmarks, we show that DDT can outperform the expensive full-map directory by a slight margin with only 17.84% of the directory area across a variety of workloads. This is achieved by its faster access and high hit rates in the small directory. In addition, we demonstrate that even smaller DDTs can deliver performance comparable to or higher than recent directory optimization schemes such as SPACE and DGD with considerably less area.
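
The sketch below illustrates the on-demand allocation idea in Python as a software analogy (not the paper's hardware design): private blocks are tracked only by an owner map, and a bounded table entry is allocated only once a second core touches the block.

```python
# Illustrative sketch: directory entries live in a bounded table and
# are allocated only when a block gains a second sharer; private
# blocks never consume an entry.
from collections import OrderedDict

class DynamicDirectoryTable:
    def __init__(self, capacity: int = 4096):
        self.owners = {}               # block addr -> first (private) core
        self.table = OrderedDict()     # block addr -> set of sharer cores
        self.capacity = capacity

    def read(self, addr: int, core: int):
        if addr in self.table:                 # already actively shared
            self.table.move_to_end(addr)
            self.table[addr].add(core)
        elif self.owners.get(addr, core) != core:
            if len(self.table) >= self.capacity:
                # in hardware this eviction would invalidate the sharers
                self.table.popitem(last=False)
            self.table[addr] = {self.owners[addr], core}
            del self.owners[addr]              # no longer a private block
        else:
            self.owners[addr] = core           # private: no directory entry
```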

Performance Analysis of Caching Instructions on SVLIW Processor and VLIW Processor (SVLIW 프로세서와 VLIW 프로세서의 명령어 캐싱에 따른 성능 분석)

  • Ji, Sung-Hyun;Park, No-Kwang;Kim, Suk-Il
    • Journal of IKEEE / v.1 no.1 s.1 / pp.101-110 / 1997
  • SVLIW processor architectures can resolve resource collisions and data dependencies between instructions while scheduling VLIW instructions at run-time. As a result, long NOP word instructions can be removed from the object code produced for the processor. Thus, cache misses occur less frequently on an SVLIW processor than on a VLIW processor with the same cache size. Less frequent cache misses incur fewer memory accesses, so the total execution cycles to complete an application are shortened compared with the VLIW processor. This feature eventually compensates for the SVLIW processor's longer instruction pipeline. In this paper, we formulate and compare execution cycle models of the two architectures. Simulation results show that the longer the memory access takes on a cache miss, the shorter the total execution cycles of the SVLIW processor become relative to those of the VLIW processor.
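
A toy cycle model along the lines the abstract describes, with invented coefficients (not the paper's formulas): SVLIW is charged an extra pipeline cycle per word but fetches fewer words and misses less often, so larger miss penalties favor it.

```python
# Toy execution-cycle comparison; every number here is illustrative.
def cycles(words: int, miss_rate: float, miss_penalty: int,
           pipeline_overhead: int) -> float:
    return words * (1 + pipeline_overhead) + words * miss_rate * miss_penalty

VLIW_WORDS, SVLIW_WORDS = 1_000_000, 700_000   # NOP removal shrinks code
for penalty in (10, 50, 200):                  # memory access cycles on miss
    vliw = cycles(VLIW_WORDS, 0.05, penalty, 0)
    svliw = cycles(SVLIW_WORDS, 0.04, penalty, 1)   # deeper pipeline
    print(penalty, "SVLIW wins" if svliw < vliw else "VLIW wins")
```

With these made-up numbers, VLIW wins at a 10-cycle penalty but SVLIW wins at 50 and 200 cycles, mirroring the abstract's claim that longer memory access cycles favor SVLIW.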

Hybrid Main Memory based Buffer Cache Scheme by Using Characteristics of Mobile Applications (모바일 애플리케이션의 특성을 이용한 하이브리드 메모리 기반 버퍼 캐시 정책)

  • Oh, Chansoo;Kang, Dong Hyun;Lee, Minho;Eom, Young Ik
    • Journal of KIISE / v.42 no.11 / pp.1314-1321 / 2015
  • Mobile devices employ buffer cache mechanisms, just as computer systems such as desktops or servers do, to mitigate the performance gap between main memory and secondary storage. However, DRAM accelerates battery drain because it must periodically perform refresh operations to retain stored data. In this paper, we propose a novel buffer cache scheme to extend battery life in mobile devices, based on a hybrid main memory architecture consisting of DRAM and non-volatile PCM. We also suggest a new buffer cache policy that allocates buffers based on process states to optimize the performance and endurance of PCM. In particular, our algorithm allocates each page to the position appropriate to the state of the application that owns the page, and tries to ensure rapid response of foreground applications even with a small amount of DRAM. The experimental results indicate that the proposed scheme reduces the elapsed time of foreground applications by 58% on average and power consumption by 23% on average, without negatively impacting the performance of background applications.
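
A minimal sketch of state-aware placement under an assumed two-tier buffer pool (all names hypothetical): pages of foreground applications go to scarce, fast DRAM frames, while background pages go to PCM.

```python
# Sketch: place pages by the owning app's state; DRAM serves
# interactive apps, PCM holds background pages at low standby power.
FOREGROUND, BACKGROUND = "fg", "bg"

class HybridBufferCache:
    def __init__(self, dram_frames: int):
        self.dram, self.pcm = {}, {}    # page id -> data
        self.dram_frames = dram_frames

    def allocate(self, page: int, data: bytes, app_state: str):
        if app_state == FOREGROUND and len(self.dram) < self.dram_frames:
            self.dram[page] = data      # fast tier for foreground apps
        else:
            self.pcm[page] = data       # non-volatile, refresh-free tier

    def on_state_change(self, page: int, app_state: str):
        # migrate when the owner moves between foreground and background
        src, dst = (self.pcm, self.dram) if app_state == FOREGROUND \
                   else (self.dram, self.pcm)
        if page in src and (dst is not self.dram or
                            len(self.dram) < self.dram_frames):
            dst[page] = src.pop(page)
```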

A Cache Manager for Enhancing the Performance of Query Evaluation in Data Warehousing Environment (데이타웨어하우스 환경에서의 질의 처리 성능 향상을 위한 캐시 관리자)

  • 심준호
    • Journal of KIISE: Databases / v.30 no.4 / pp.408-419 / 2003
  • Data warehouses are usually dedicated to processing queries issued by decision support systems (DSS). The response time of DSS queries is typically several orders of magnitude higher than that of OLTP queries. Since DSS queries are often submitted interactively, techniques for reducing their response time are important. The caching of query results is one such technique, particularly well suited to the DSS environment. In this paper, we present a cache manager for such an environment. Specifically, we define a canonical form of queries. The cache manager looks up a query either by exact match or through a proposed query-split process, depending on whether the query is in non-canonical or canonical form. It dynamically maintains the cache content by employing a profit function that integrates the query execution cost, the size of the query result, the reference rate, the maintenance cost of each result due to updates of its base tables, and the frequency of such updates. Our experimental evaluation confirms the performance benefit of the cache manager.
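
The profit function itself is not given in this abstract, so the formula below is an invented stand-in that merely combines the listed factors (execution cost, result size, reference rate, maintenance cost, update frequency); eviction picks the lowest-profit result.

```python
# Hypothetical profit ranking: benefit of keeping a cached query result
# minus the cost of keeping it fresh, normalized by its size.
def profit(exec_cost: float, size: float, ref_rate: float,
           maint_cost: float, update_freq: float) -> float:
    return (ref_rate * exec_cost - update_freq * maint_cost) / size

cached = {
    "q1": profit(exec_cost=90.0, size=2.0, ref_rate=5.0,
                 maint_cost=3.0, update_freq=1.0),
    "q2": profit(exec_cost=40.0, size=8.0, ref_rate=1.0,
                 maint_cost=6.0, update_freq=4.0),
}
victim = min(cached, key=cached.get)   # evict the least profitable result
print("evict:", victim)               # here: q2
```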

A Comparative Study on Off-Path Content Access Schemes in NDN (NDN에서 Off-Path 콘텐츠 접근기법들에 대한 성능 비교 연구)

  • Lee, Junseok;Kim, Dohyung
    • KIPS Transactions on Computer and Communication Systems / v.10 no.12 / pp.319-328 / 2021
  • With popularization of services for massive content, the fundamental limitations of TCP/IP networking were discussed and a new paradigm called Information-centric networking (ICN) was presented. In ICN, content is addressed by the content identifier (content name) instead of the location identifier such as IP address, and network nodes can use the cache to store content in transit to directly service subsequent user requests. As the user request can be serviced from nearby network caches rather than from far-located content servers, advantages such as reduced service latency, efficient usage of network bandwidth, and service scalability have been introduced. However, these advantages are determined by how actively content stored in the cache can be utilized. In this paper, we 1) introduce content access schemes in Named-data networking, one of the representative ICN architectures; 2) in particular, review the schemes that allow access to cached content away from routing paths; 3) conduct comparative study on the performance of the schemes using the ndnSIM simulator.