• Title/Summary/Keyword: Cache Technique

A Cache Management Technique Based on Eviction Cost Estimation for Heterogeneous Storage Devices (이기종 저장장치를 위한 제거 비용 평가 기반 캐시 관리 기법)

  • Park, SeJin;Park, ChanIk
    • IEMEK Journal of Embedded Systems and Applications / v.7 no.3 / pp.129-134 / 2012
  • The objective of a cache is to reduce I/O accesses to the physical storage device so that users can access their data faster. Traditionally, the most important metric for measuring cache performance has been the hit ratio: a replacement policy that keeps the hit ratio high is regarded as a good one. However, when the underlying storage devices are heterogeneous, the cache miss latency differs across devices. Even if the hit ratio is high, a cache that often misses to a low-performance disk still gives the user low performance. To address this problem, we propose eviction-cost-estimation-based cache management. In our results, it achieves a 10~30% throughput improvement over LRU cache management.
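
A minimal sketch of the idea, in the spirit of the abstract rather than the authors' exact algorithm: a GreedyDual-style policy in which every cached block carries the estimated miss penalty of its backing device, and eviction picks the block whose loss would be cheapest to repair. All names here are hypothetical.

```python
class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.inflation = 0.0       # aging value shared by all blocks
        self.blocks = {}           # key -> value (device cost + inflation)

    def access(self, key, eviction_cost):
        """eviction_cost: estimated miss latency of the device holding `key`."""
        if key in self.blocks:                        # hit: refresh the value
            self.blocks[key] = self.inflation + eviction_cost
            return True
        if len(self.blocks) >= self.capacity:         # miss: evict cheapest block
            victim = min(self.blocks, key=self.blocks.get)
            self.inflation = self.blocks.pop(victim)  # ages the remaining blocks
        self.blocks[key] = self.inflation + eviction_cost
        return False

cache = CostAwareCache(capacity=2)
cache.access("ssd:a", eviction_cost=1.0)    # block backed by a fast device
cache.access("hdd:b", eviction_cost=10.0)   # block backed by a slow device
cache.access("ssd:c", eviction_cost=1.0)    # evicts "ssd:a": cheapest to re-fetch
```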

A Study on the Data Retrieval By Using a Cache Forward/Backward Technique (캐쉬 Forward/Backward기법을 이용한 데이터 검색에 관한 연구)

  • Kim Soo-Jang
    • 한국정보통신설비학회:학술대회논문집 / 2003.08a / pp.229-233 / 2003
  • Recently, as the number of Internet users has surged, fast service has become a major concern. In database systems in particular, operations such as store, delete, and update can impose long waiting times on users and thus cause complaints. This paper discusses the cache of the application server, which is widely used in 3-tier architectures. A conventional application server stores data in its own cache and serves repeated requests for the same data from that cache. What this paper proposes instead is to manage the connected clients, build a cache on each client, and, when the application server or the database server cannot provide service, locate the client holding the most recent data and serve it from that client's cache.
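
A minimal sketch of the fallback path described above, with hypothetical names of my own: the server records which connected client cached which data and when, and during an outage it redirects the request to the client holding the freshest copy.

```python
import itertools

class ClientCacheRegistry:
    def __init__(self):
        self.clock = itertools.count()   # logical timestamps for cache entries
        self.copies = {}                 # data_key -> {client_id: timestamp}

    def record(self, client_id, data_key):
        """Called whenever a client caches a piece of data it was served."""
        self.copies.setdefault(data_key, {})[client_id] = next(self.clock)

    def freshest_holder(self, data_key):
        """During a server outage, pick the client with the newest copy."""
        holders = self.copies.get(data_key)
        return max(holders, key=holders.get) if holders else None

registry = ClientCacheRegistry()
registry.record("client-1", "orders:42")
registry.record("client-7", "orders:42")      # cached later, so fresher
print(registry.freshest_holder("orders:42"))  # -> client-7
```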

A Low-Power Texture Mapping Technique for Mobile 3D Graphics (모바일 3D 그래픽스를 위한 저전력 텍스쳐 맵핑 기법)

  • Kim, Hyun-Hee;Kim, Ji-Hong
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.45-57 / 2009
  • Texture mapping is a technique used for adding realism to an image in 3D graphics. However, this technique becomes the bottleneck of the 3D graphics pipeline because it requires large processing power and high memory bandwidth. A texture cache is used to reduce memory latency in texture mapping. As portable devices become smaller and power-constrained, it is important to reduce the area and the power consumption of the texture cache. In this paper, we propose using a small texture cache to reduce that area and power consumption, together with prefetch techniques and a victim cache that keep its performance comparable to that of large texture caches. Simulation results show that the proposed 1~2K-byte texture cache can reduce area and power consumption by up to 70% and 60%, respectively, compared with a conventional 16K-byte cache, while maintaining performance.
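
A minimal sketch of the victim-cache part of this design (my own simplification, not the paper's hardware): a small direct-mapped main cache spills conflict victims into a tiny fully associative victim cache, so a conflicting pair of texture blocks does not thrash all the way to memory.

```python
from collections import OrderedDict

class TextureCacheWithVictim:
    """Direct-mapped main cache plus a tiny fully associative victim cache."""
    def __init__(self, main_lines=16, victim_lines=4):
        self.main = [None] * main_lines   # index -> cached block address
        self.victim = OrderedDict()       # block address -> True, in LRU order
        self.victim_lines = victim_lines

    def _spill(self, block):
        if block is None:
            return
        self.victim[block] = True
        if len(self.victim) > self.victim_lines:
            self.victim.popitem(last=False)   # drop the LRU victim line

    def access(self, block):
        index = block % len(self.main)
        if self.main[index] == block:
            return "hit"
        outcome = "victim-hit" if self.victim.pop(block, None) else "miss"
        self._spill(self.main[index])     # conflict victim kept close by
        self.main[index] = block
        return outcome

cache = TextureCacheWithVictim()
cache.access(3); cache.access(19)   # 3 and 19 conflict (19 % 16 == 3)
print(cache.access(3))              # "victim-hit": 3 was spilled, not lost
```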

Research on Web Cache Infection Methods and Countermeasures (웹 캐시 감염 방법 및 대응책 연구)

  • Hong, Sunghyuck;Han, Kun-Hee
    • Journal of Convergence for Information Technology / v.9 no.2 / pp.17-22 / 2019
  • Caching is a technique that improves the client's response time and thereby reduces bandwidth, so it has a clearly beneficial side. However, the caching technique, like several other techniques, has vulnerabilities. Web caching is convenient, but it can be exploited by hacking and cause problems. Web cache problems are mainly caused by cache misses and excessive cache line fetches: when misses are frequent and excessive, the cache becomes a vulnerability, causing errors such as corruption of secure data and creating problems for both the client and the user's system. If users are aware of cache infection and of the countermeasures against such errors, they will no longer suffer from cache errors or infections. Therefore, this study proposes countermeasures against four kinds of cache infections and errors, and suggests countermeasures against web cache infections.

A Web Cache Replacement Technique of the Divided Scope Base that Considered a Size Reference Characteristics of Web Object

  • Seok, Ko-Il
    • Proceedings of the Korea Contents Association Conference / 2003.05a / pp.335-339 / 2003
  • We proposed a divided-scope-based Web cache replacement technique that considers the size reference characteristics of Web objects for efficient operation of a Web-based system, and in this study we analyze the performance of the proposed technique through experiments. Through log analysis of a Web-based system, we examined the size reference characteristics that arise from user reference behavior. Based on that analysis, we divided the storage area of the cache server and tested the replacement technique on the divided scopes. The proposed technique is flexible with respect to changes in user reference characteristics. The experimental results, compared against the existing LRU and LRUMIN replacement techniques, confirmed an improvement in object hit ratio.
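
A minimal sketch of the divided-scope idea (my own simplification, not the paper's exact policy): the cache's storage is split into size classes, each managed by LRU, so a large object can only evict other large objects.

```python
from collections import OrderedDict

class SizePartitionedCache:
    def __init__(self, class_bounds, class_capacity):
        self.bounds = class_bounds        # upper size bound (bytes) per scope
        self.scopes = [OrderedDict() for _ in class_bounds]  # key -> size, LRU
        self.capacity = class_capacity    # byte budget per scope
        self.used = [0] * len(class_bounds)

    def _scope_of(self, size):
        return next(i for i, bound in enumerate(self.bounds) if size <= bound)

    def put(self, key, size):
        i = self._scope_of(size)
        scope = self.scopes[i]
        self.used[i] -= scope.pop(key, 0)  # re-insertion replaces the old copy
        scope[key] = size
        self.used[i] += size
        while self.used[i] > self.capacity:     # evict LRU within this scope only
            _, freed = scope.popitem(last=False)
            self.used[i] -= freed

cache = SizePartitionedCache(class_bounds=[4096, 65536, float("inf")],
                             class_capacity=1 << 20)
cache.put("logo.png", 2_048)      # small-object scope
cache.put("video.mp4", 500_000)   # large-object scope; cannot evict logo.png
```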

Low Power Scheme Using Bypassing Technique for Hybrid Cache Architecture

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.20 no.4 / pp.10-15 / 2021
  • Cache bypassing schemes have been studied to remove unnecessary updates of data in cache blocks. Among them, a statistics-based cache bypassing method for asymmetric-access caches is one of the most efficient approaches for non-volatile memories and shows the lowest cache access latency. However, it was proposed under the assumption of a conventional cache system, so further study is required for the hybrid cache architecture. This paper proposes a novel cache bypassing scheme, called the hybrid bypassing block selector. In the proposal, a new model is established that considers the SRAM region and the non-volatile memory region separately. Based on the model, a hybrid bypassing decision block is implemented. Experiments show that the hybrid bypassing decision block reduces overall energy consumption by 21.5%.
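
A minimal sketch (my own construction, not the paper's decision block) of statistics-based bypassing with separate thresholds for the two regions of a hybrid cache: each memory instruction keeps a reuse counter, and a fill is bypassed when past behavior does not justify the target region's write cost.

```python
from collections import defaultdict

class HybridBypassSelector:
    # NVM writes cost more than SRAM writes, so demand stronger reuse
    # evidence before filling into the NVM region (thresholds are made up).
    THRESHOLDS = {"sram": 1, "nvm": 3}

    def __init__(self):
        self.reuse = defaultdict(int)    # pc -> saturating reuse counter

    def on_hit(self, pc):                # a block filled by `pc` was reused
        self.reuse[pc] = min(self.reuse[pc] + 1, 7)

    def on_dead_eviction(self, pc):      # a block filled by `pc` died unused
        self.reuse[pc] = max(self.reuse[pc] - 1, 0)

    def should_bypass(self, pc, region):
        return self.reuse[pc] < self.THRESHOLDS[region]

sel = HybridBypassSelector()
sel.on_dead_eviction(pc=0x400)
print(sel.should_bypass(0x400, "nvm"))    # True: no reuse seen, costly region
sel.on_hit(pc=0x400); sel.on_hit(pc=0x400); sel.on_hit(pc=0x400)
print(sel.should_bypass(0x400, "nvm"))    # False: reuse now justifies the fill
```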

A Hybrid Prefix Caching Scheme for Efficient IP Address Lookup

  • Kim, Jinsoo;Kim, Junghwan
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.45-52 / 2015
  • We propose a hybrid prefix caching scheme to enable high-speed IP address lookup. For correct IP lookup, the prefixes loaded in a prefix cache must not overlap in address range, so every non-leaf prefix needs to be expanded until it no longer overlaps. A shorter expanded prefix is preferable because a single cache entry then covers a wider address range. The proposed scheme exploits the advantages of two dynamic prefix expansion techniques: bounded prefix expansion and bitmap-based prefix expansion. It uses dual bound values, whereas bounded prefix expansion uses just one, and our elaborated technique associates the dual bound values flexibly with several subtries using bitmap information, rather than with fixed subtries. We evaluate the performance of the proposed scheme in terms of the average length of the expanded prefixes and the cache miss ratio. The experimental results show that the proposed scheme has a lower cache miss ratio than previous schemes, including both bounded prefix expansion and bitmap-based expansion, irrespective of cache size.
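
A minimal sketch of the underlying expansion problem using a single bound (my own toy version; the paper's dual-bound, bitmap-based scheme associates bounds with subtries far more flexibly): caching a non-leaf prefix verbatim would hide its more specific prefixes, so the entry actually cached is the looked-up address truncated to a bound long enough to clear every longer prefix beneath it.

```python
prefixes = {        # routing table: bit-string prefix -> next hop
    "10":   "A",    # non-leaf: "1011" below it is more specific
    "1011": "B",
}

def longest_match(addr_bits):
    for plen in range(len(addr_bits), 0, -1):
        hop = prefixes.get(addr_bits[:plen])
        if hop is not None:
            return addr_bits[:plen], hop
    return None, None

def cacheable_entry(addr_bits, bound):
    """Expand a matched non-leaf prefix to `bound` bits before caching it.

    `bound` must be at least the length of the longest prefix in the
    covered subtrie (4 here), or the cached entry would still overlap.
    """
    prefix, hop = longest_match(addr_bits)
    if prefix is None:
        return None
    return addr_bits[:max(len(prefix), bound)], hop

# "1000..." matches the non-leaf prefix "10"; caching "10" itself would
# wrongly answer later lookups of "1011...". With bound 4 we cache "1000".
print(cacheable_entry("10000000", bound=4))   # ('1000', 'A')
print(cacheable_entry("10110000", bound=4))   # ('1011', 'B')
```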

Hybrid Scheme of Data Cache Design for Reducing Energy Consumption in High Performance Embedded Processor (고성능 내장형 프로세서의 에너지 소비 감소를 위한 데이타 캐쉬 통합 설계 방법)

  • Shim, Sung-Hoon;Kim, Cheol-Hong;Jhang, Seong-Tae;Jhon, Chu-Shik
    • Journal of KIISE:Computer Systems and Theory / v.33 no.3 / pp.166-177 / 2006
  • The cache size in embedded processors tends to grow as technology scales to smaller transistors and lower supply voltages. However, a larger cache demands more energy, so the ratio of cache energy consumption to total processor energy is growing. Many schemes have been proposed for reducing cache energy consumption, but each previous scheme addresses only one side of the problem: dynamic cache energy only, or static cache energy only. In this paper, we propose a hybrid scheme that reduces dynamic and static cache energy simultaneously. For this hybrid scheme, we adopt two existing techniques: the drowsy cache technique to reduce static cache energy consumption, and the way-prediction technique to reduce dynamic cache energy consumption. Additionally, we propose an early wake-up technique based on the program counter to reduce the penalty caused by the drowsy cache technique. We focus on the level 1 data cache. The hybrid scheme reduces static and dynamic cache energy consumption simultaneously, and our early wake-up scheme reduces the extra program execution cycles that the hybrid scheme would otherwise add.
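
A minimal sketch (my own construction, not the authors' hardware) of how the pieces interact: drowsy lines pay a wake-up penalty when touched, and a table indexed by the program counter wakes the predicted line a little before the access arrives, hiding that penalty.

```python
class DrowsyCache:
    WAKE_PENALTY = 1                       # extra cycle to wake a drowsy line

    def __init__(self, lines=64):
        self.drowsy = [True] * lines       # all lines start in low-power mode
        self.pc_to_line = {}               # early wake-up predictor: pc -> line

    def early_wakeup(self, pc):
        """Called a few pipeline stages before the access reaches the cache."""
        line = self.pc_to_line.get(pc)
        if line is not None:
            self.drowsy[line] = False      # woken in advance: no penalty later

    def access(self, pc, line):
        cycles = 1
        if self.drowsy[line]:
            cycles += self.WAKE_PENALTY    # predictor missed; pay the penalty
            self.drowsy[line] = False
        self.pc_to_line[pc] = line         # train the predictor
        return cycles

cache = DrowsyCache()
print(cache.access(pc=0x40, line=5))   # 2 cycles: line was drowsy
cache.drowsy[5] = True                 # a periodic policy re-drowses idle lines
cache.early_wakeup(pc=0x40)            # same load seen again upstream
print(cache.access(pc=0x40, line=5))   # 1 cycle: penalty hidden
```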

New Drowsy Caching Method by Using Way-Line Prediction Unit for Low Power Cache (저전력 캐쉬를 위한 웨이-라인 예측 유닛을 이용한 새로운 드로시 캐싱 기법)

  • Lee, Jung-Hoon
    • Journal of The Institute of Information and Telecommunication Facilities Engineering / v.10 no.2 / pp.74-79 / 2011
  • The goal of this research is to reduce the dynamic and static power consumption of a low-power cache system. The proposed cache achieves low power consumption by using a drowsy mechanism together with a way-prediction mechanism. To reduce static power, the drowsy technique is applied to a 4-way set-associative cache; to reduce dynamic energy, only one of the four ways is selectively accessed, based on information in the Way-Line Prediction Unit (WLPU). This prediction mechanism introduces no additional delay even when prediction misses occur. The WLPU effectively reduces the performance overhead of conventional drowsy caching by waking only one drowsy cache line and one way in advance. Our results show that the proposed cache can reduce power consumption by about 40% compared with a 4-way drowsy cache.
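
A minimal sketch (hypothetical structure of my own) of way-line prediction: a predicted access wakes and probes exactly one of the four ways, and a prediction miss falls back to a full probe of the set rather than adding a delay.

```python
class WayLinePredictedCache:
    WAYS = 4

    def __init__(self, sets=64):
        self.tags = [[None] * self.WAYS for _ in range(sets)]
        self.wlpu = {}                     # predictor: block address -> way

    def access(self, block):
        index, tag = block % len(self.tags), block // len(self.tags)
        way = self.wlpu.get(block)
        if way is not None and self.tags[index][way] == tag:
            return "hit", 1                # probed one way, woke one line
        for w in range(self.WAYS):         # prediction miss: probe all ways
            if self.tags[index][w] == tag:
                self.wlpu[block] = w
                return "hit", self.WAYS
        victim = block % self.WAYS         # trivial fill policy for the sketch
        self.tags[index][victim] = tag
        self.wlpu[block] = victim
        return "miss", self.WAYS

cache = WayLinePredictedCache()
print(cache.access(5))   # ('miss', 4): fill the line, train the WLPU
print(cache.access(5))   # ('hit', 1): only the predicted way is accessed
```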

Workload Characteristics-based L1 Data Cache Switching-off Mechanism for GPUs

  • Do, Thuan Cong;Kim, Gwang Bok;Kim, Cheol Hong
    • Journal of the Korea Society of Computer and Information / v.23 no.10 / pp.1-9 / 2018
  • Modern graphics processing units (GPUs) have become one of the most attractive platforms for exploiting high thread-level parallelism, supported by new programming tools such as CUDA and OpenCL. Recent GPUs apply a cache hierarchy to support irregular memory access patterns; however, the L1 data cache (L1D) exhibits poor efficiency in the GPU. This paper shows that the L1D does not always benefit applications in terms of performance and energy efficiency: for many applications, GPU performance is even harmed by using the L1D. Our proposed technique exploits the characteristics of the currently executing application to predict the performance impact of the L1D on the GPU and then decides whether to keep using the cache for that application. Our experimental results show that the proposed technique improves GPU performance by 9.4% and saves up to 52.1% of the power consumption in the L1D.
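
A minimal sketch (my own heuristic, not the paper's predictor) of workload-driven L1D switching: sample the application's L1D hit ratio over a warm-up window and turn the cache off for the rest of the run when hits are too rare to pay for it.

```python
class L1DSwitch:
    """Per-application decision to keep the L1D on, from a sampling window."""
    def __init__(self, window=10_000, min_hit_ratio=0.2):
        self.window = window
        self.min_hit_ratio = min_hit_ratio
        self.hits = self.accesses = 0
        self.enabled = True

    def record(self, hit):
        if not self.enabled or self.accesses >= self.window:
            return                        # decision already made
        self.accesses += 1
        self.hits += int(hit)
        if (self.accesses == self.window
                and self.hits / self.accesses < self.min_hit_ratio):
            self.enabled = False          # bypass the L1D from now on

switch = L1DSwitch(window=4, min_hit_ratio=0.5)
for hit in (False, False, True, False):   # streaming workload: 25% hit ratio
    switch.record(hit)
print(switch.enabled)                     # False: L1D switched off
```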