• Title/Summary/Keyword: Cache Technique

A Review of Web Cache Prefetching

  • Deng, YuFeng; Manoharan, Sathiamoorthy
    • Journal of Information and Communication Convergence Engineering, v.12 no.3, pp.161-167, 2014
  • Web caches help reduce the latency caused by slow networks by storing and reusing resources that have been used before. Repeated access to a cached resource does not incur network latency. However, a resource that has never been used will not be found in the cache. Cache prefetching is a technique that fills a cache with as-yet-unused resources in anticipation that they will be used in the near future; typically these resources are related to resources accessed in the recent past. While web caching exploits temporal locality, prefetching attempts to exploit spatial locality. Accesses to prefetched resources are cache hits and therefore reduce the latency perceived by the user. This paper reviews the cache infrastructure supported by the hypertext transfer protocol, discusses web cache prefetching in general, including Mozilla's prefetching infrastructure, and then classifies and reviews a number of prefetching techniques.
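
  • As a rough illustration of the idea (a sketch, not any paper's implementation), the code below pairs a plain LRU web cache with a link-based prefetcher: fetching one page enqueues the pages it links to, so later requests for those links become cache hits. The LINKS map and fetch_origin function are hypothetical stand-ins for hyperlink extraction and the network.

    from collections import OrderedDict

    # Hypothetical link structure standing in for hyperlinks between pages.
    LINKS = {"/index.html": ["/a.html", "/b.css"], "/a.html": ["/b.css"]}

    def fetch_origin(url):
        """Stand-in for a slow network fetch."""
        return f"<contents of {url}>"

    class PrefetchingCache:
        def __init__(self, capacity=64):
            self.capacity = capacity
            self.store = OrderedDict()          # URL -> body, in LRU order

        def _put(self, url, body):
            self.store[url] = body
            self.store.move_to_end(url)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

        def get(self, url):
            if url in self.store:               # temporal locality: a hit
                self.store.move_to_end(url)
                return self.store[url]
            body = fetch_origin(url)            # miss: pay network latency
            self._put(url, body)
            for linked in LINKS.get(url, []):   # spatial locality: prefetch
                if linked not in self.store:
                    self._put(linked, fetch_origin(linked))
            return body

    cache = PrefetchingCache()
    cache.get("/index.html")         # miss; /a.html and /b.css prefetched
    assert "/a.html" in cache.store  # a later request for /a.html would hit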

Delay Reduction by Providing Location Based Services using Hybrid Cache in Peer-to-Peer Networks

  • Krishnan, C. Gopala; Rengarajan, A.; Manikandan, R.
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.6, pp.2078-2094, 2015
  • Nowadays, efficient processing of broadcast queries (BQs) is of critical importance with the ever-increasing deployment and use of mobile technologies. BQs have unique characteristics that traditional spatial query processing in centralized databases does not address. Our novel query processing technique reduces the latency of answering BQs considerably while maintaining high scalability and accuracy. The approach is based on peer-to-peer sharing, which enables a mobile host to process queries without delay by using query results cached at its neighboring mobile peers. We design and evaluate cooperative caching techniques to efficiently support data access in ad hoc networks. We first propose two schemes: Cache Data, which caches the data, and Cache Path, which caches the path to the data. After analyzing the performance of these two schemes, we propose a hybrid approach (Hybrid Cache) that further improves performance by combining the advantages of Cache Data and Cache Path while avoiding their weaknesses. Cache replacement policies are also studied to further improve performance. Simulation results show that the proposed schemes significantly reduce query delay and message complexity compared with other caching schemes.
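
  • A minimal sketch of the CacheData/CachePath trade-off as the abstract describes it; the thresholds and node model below are illustrative assumptions, not the paper's parameters. A node caches the data itself when an item is small, but only the path to a nearby holder when the item is large and the holder is close.

    from dataclasses import dataclass

    @dataclass
    class CacheEntry:
        kind: str      # "data" or "path"
        value: object  # the item itself, or the id of the node that holds it

    class HybridCacheNode:
        SMALL_ITEM = 1024  # bytes; assumed cutoff for caching data itself
        NEAR_HOPS = 2      # assumed cutoff for caching only the path

        def __init__(self):
            self.cache = {}  # item id -> CacheEntry

        def on_pass_through(self, item_id, size, holder, hops_to_holder):
            """Called when a reply for item_id is routed through this node."""
            if size <= self.SMALL_ITEM:
                # CacheData: store the item, answer future queries locally.
                self.cache[item_id] = CacheEntry("data", f"<{size}B payload>")
            elif hops_to_holder <= self.NEAR_HOPS:
                # CachePath: remember who has it; redirecting is cheaper
                # than storing a large item or asking the data center.
                self.cache[item_id] = CacheEntry("path", holder)

        def lookup(self, item_id):
            entry = self.cache.get(item_id)
            if entry is None:
                return "forward to data center"
            if entry.kind == "data":
                return entry.value
            return f"redirect to node {entry.value}"

    node = HybridCacheNode()
    node.on_pass_through("map_tile_7", size=512, holder="N9", hops_to_holder=5)
    node.on_pass_through("video_3", size=10_000, holder="N2", hops_to_holder=1)
    print(node.lookup("map_tile_7"), "|", node.lookup("video_3"))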

Bus Splitting Techniques for MPSoC to Reduce Bus Energy (MPSoC 플랫폼의 버스 에너지 절감을 위한 버스 분할 기법)

  • Chung, Chun-Mok; Kim, Jin-Hyo; Kim, Ji-Hong
    • Journal of KIISE: Computer Systems and Theory, v.33 no.9, pp.699-708, 2006
  • Bus splitting reduces bus energy by placing modules that communicate frequently close to one another and driving only the necessary bus segments during communication. However, previous bus splitting techniques cannot be used on an MPSoC platform, because the platform relies on a cache coherency protocol in which all processors must be able to observe bus transactions. In this paper, we propose a bus splitting technique that reduces bus energy on MPSoC platforms. The proposed technique divides a bus into several segments, some for private memory and others for shared memory, minimizing the bus energy consumed by private memory accesses without breaking cache coherency. We also propose a task allocation technique that takes the cache coherency protocol into account: it allocates tasks to processors according to their numbers of bus transactions, reducing the bus energy consumed by shared memory references. Simulation results show that the bus splitting technique reduces the bus energy consumed by private memory accesses by up to 83%, and that the task allocation technique reduces the bus energy consumed by shared memory references by up to 30%. We expect both techniques can be applied to multiprocessor platforms to reduce bus energy without interfering with the cache coherency protocol.
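
  • The energy argument can be made concrete with a toy first-order model (entirely an assumption of this sketch, not the paper's model): bus energy scales with the capacitance actually switched, so private-memory traffic confined to one short segment costs less than driving a monolithic bus, while shared-memory traffic must still drive all segments so every cache can snoop it.

    # Toy first-order model: E = alpha * C_driven * transactions.
    # All numbers are illustrative; the paper's savings come from simulation.
    ALPHA = 1.0               # energy per unit capacitance per transaction

    def energy(transactions, driven_capacitance):
        return ALPHA * driven_capacitance * transactions

    # Monolithic bus: every access drives the full bus capacitance.
    C_FULL = 4.0
    private_tx, shared_tx = 8000, 2000
    e_mono = energy(private_tx + shared_tx, C_FULL)

    # Split bus: private accesses drive one short segment; shared accesses
    # still drive all segments so the cache coherency protocol sees them.
    C_SEGMENT = 1.0           # one of four equal segments
    e_split = energy(private_tx, C_SEGMENT) + energy(shared_tx, C_FULL)

    print(f"bus energy saving from splitting: {1 - e_split / e_mono:.0%}")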

Design and Implementation of Raw File System for Web Cache Server (웹 캐시 서버를 위한 저수준 파일시스템 설계 및 구현)

  • Kim, Seong-Rak; Koo, Young-Wan
    • Journal of Internet Computing and Services, v.4 no.2, pp.11-19, 2003
  • Storing cache data in a general-purpose file system such as EXT2 or UFS cannot satisfy the speed required by a web cache. This study shows that a better solution is possible by optimizing the file system around the characteristics of web files. The suggested RawCFS does not allow the size or access permissions of a cached object to change, since the up-to-date object is kept on the origin server. Performance tests show that this file system is 40% faster than the technique that stores each object in its own file. It can be used not only in cache servers but also in the design of high-end web servers, such as shopping malls or Internet broadcasting stations, that must serve objects such as images and HTML to clients quickly.
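
  • The sketch below shows the general shape of such a raw cache store (the layout and in-memory index are assumptions for illustration, not RawCFS itself): one preallocated flat file addressed by offset, with objects immutable once written because the authoritative copy stays on the origin server.

    import os

    class RawObjectStore:
        """Toy raw-file cache: one flat file, objects addressed by offset."""

        def __init__(self, path, size=1 << 20):
            self.f = open(path, "w+b")
            self.f.truncate(size)         # preallocate the cache file
            self.index = {}               # key -> (offset, length), in memory
            self.tail = 0

        def put(self, key, body: bytes):
            if key in self.index:         # cached objects are never rewritten
                return
            self.f.seek(self.tail)
            self.f.write(body)
            self.index[key] = (self.tail, len(body))
            self.tail += len(body)

        def get(self, key):
            off, length = self.index[key]  # one seek + one read, no open()
            self.f.seek(off)
            return self.f.read(length)

    store = RawObjectStore("cache.raw")
    store.put("/logo.png", b"\x89PNG...")
    assert store.get("/logo.png").startswith(b"\x89PNG")
    os.remove("cache.raw")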

Bounding Worst-Case Data Cache Performance by Using Stack Distance

  • Liu, Yu; Zhang, Wei
    • Journal of Computing Science and Engineering, v.3 no.4, pp.195-215, 2009
  • Worst-case execution time (WCET) analysis is critical for hard real-time systems to ensure that different tasks can meet their respective deadlines. While significant progress has been made for WCET analysis of instruction caches, the data cache timing analysis, especially for set-associative data caches, is rather limited. This paper proposes an approach to safely and tightly bounding data cache performance by computing the worst-case stack distance of data cache accesses. Our approach can not only be applied to direct-mapped caches, but also be used for set-associative or even fully-associative caches without increasing the complexity of analysis. Moreover, the proposed approach can statically categorize worst-case data cache misses into cold, conflict, and capacity misses, which can provide useful insights for designers to enhance the worst-case data cache performance. Our evaluation shows that the proposed data cache timing analysis technique can safely and accurately estimate the worst-case data cache performance, and the overestimation as compared to the observed worst-case data cache misses is within 1% on average.
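
  • To make the stack-distance idea concrete, here is a dynamic sketch (the paper bounds the worst-case distance statically): the stack distance of an access is the number of distinct blocks touched since the last access to the same block, and under LRU an access hits exactly when its stack distance within its set is below the associativity.

    def classify_accesses(trace, ways, num_sets=1):
        """Classify each access as hit/cold miss/miss using LRU stack
        distance. `trace` is a sequence of block addresses."""
        stacks = [[] for _ in range(num_sets)]  # per-set LRU stacks, MRU first
        results = []
        for block in trace:
            stack = stacks[block % num_sets]
            if block not in stack:
                results.append("cold miss")     # infinite stack distance
            else:
                distance = stack.index(block)   # distinct blocks since last use
                results.append("hit" if distance < ways else "miss")
                stack.remove(block)
            stack.insert(0, block)              # block becomes most recent
        return results

    # 2-way set-associative, single set: after two distinct intervening
    # blocks, block 1's stack distance is 2 and exceeds the associativity.
    print(classify_accesses([1, 2, 3, 1], ways=2))
    # ['cold miss', 'cold miss', 'cold miss', 'miss']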

KDBcs-Tree : An Efficient Cache Conscious KDB-Tree for Multidimensional Data (KDBcs-트리 : 캐시를 고려한 효율적인 KDB-트리)

  • Yeo, Myung-Ho; Min, Young-Soo; Yoo, Jae-Soo
    • Journal of KIISE: Databases, v.34 no.4, pp.328-342, 2007
  • We propose a new cache-conscious index structure for processing frequently updated data efficiently. The proposed structure is based on the KDB-tree, one of the representative space-partitioning index structures. We introduce a data compression technique and a pointer elimination technique to increase the utilization of each cache line. To demonstrate the superiority of our index structure, we compare it with variants of the CR-tree (e.g., the FF CR-tree and the SE CR-tree) in a variety of environments. Our experiments show that the proposed structure achieves about 85%, 97%, and 86% improvements over the existing index structures in terms of insertion, update, and cache utilization, respectively.
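
  • As a loose illustration of the two tricks named in the abstract (the real KDBcs-tree layout differs), the sketch below stores a node's keys as 16-bit deltas from a base value (data compression) and locates children by position in a contiguous node array instead of storing one pointer per entry (pointer elimination), so more entries fit in each cache line.

    import struct

    class PackedNode:
        """Node with delta-compressed keys and no per-entry child pointers.

        Children of this node occupy consecutive slots starting at
        child_base in a node array, so one stored index replaces N pointers.
        """

        def __init__(self, base_key, keys, child_base):
            deltas = [k - base_key for k in keys]
            assert all(0 <= d < 1 << 16 for d in deltas), "16-bit deltas assumed"
            self.base_key = base_key    # 8-byte reference value
            self.child_base = child_base
            self.packed = struct.pack(f"<{len(deltas)}H", *deltas)  # 2B/key

        def child_slot(self, i):
            return self.child_base + i  # arithmetic instead of a pointer

        def keys(self):
            n = len(self.packed) // 2
            return [self.base_key + d
                    for d in struct.unpack(f"<{n}H", self.packed)]

    node = PackedNode(base_key=100_000,
                      keys=[100_005, 100_180, 100_900], child_base=17)
    print(node.keys(), node.child_slot(1))
    # 2-byte deltas replace 8-byte keys plus 8-byte pointers, so several
    # times more separators fit in one 64-byte cache line.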

Texture Cache with Automatic Index Splitting Based on Texture Size (텍스처의 크기에 따라 인덱스를 자동 분할하는 텍스처 캐시)

  • Kim, Jin-Woo; Park, Young-Jin; Kim, Young-Sik; Han, Tack-Don
    • Journal of Korea Game Society, v.8 no.2, pp.57-68, 2008
  • Texture mapping is a technique used in 3D graphics chips to add realism to an image. Its bilinear filtering mode requires access to four texels to process a single pixel. In this paper, we analyze texture access patterns and propose a high-performance texture cache that can access four texels simultaneously. We evaluated the design using simulations of 3D games (Quake 3, Unreal Tournament 2004). The simulation results show that the proposed texture cache performs well at physical sizes of 8 KB or less.
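
  • A rough sketch of why four texels can be fetched at once (the paper's automatic index splitting by texture size is not reproduced here): the 2x2 footprint of one bilinear sample always covers both parities of u and v, so banking the cache on the low bit of each texel coordinate puts the four texels in four different banks.

    def bank_of(u, v):
        """Assumed mapping: bank index from the low bit of each coordinate."""
        return (u & 1) | ((v & 1) << 1)

    def bilinear_footprint(u, v):
        """The four texels blended for a sample with integer base (u, v)."""
        return [(u, v), (u + 1, v), (u, v + 1), (u + 1, v + 1)]

    for u, v in [(0, 0), (5, 2), (7, 7)]:
        banks = [bank_of(x, y) for x, y in bilinear_footprint(u, v)]
        assert sorted(banks) == [0, 1, 2, 3]  # no conflicts: one fetch per bank
        print((u, v), "->", banks)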

Recognition Time Reduction Technique for the Time-synchronous Viterbi Beam Search (시간 동기 비터비 빔 탐색을 위한 인식 시간 감축법)

  • 이강성
    • The Journal of the Acoustical Society of Korea, v.20 no.6, pp.46-50, 2001
  • This paper proposes a new recognition-time reduction algorithm, the Score-Cache technique, which is applicable to HMM-based speech recognition systems. Unlike other search reduction techniques, which trade recognition rate for speed, Score-Cache reduces a large amount of search time with no performance degradation. The technique can be applied to continuous speech recognition as well as isolated word recognition, and a high degree of recognition-time reduction is obtained by replacing only the score calculating function, without changing any architecture of the system. It can also be combined with other recognition-time reduction algorithms for further savings. We obtained a time reduction of up to 54%.
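
  • The abstract's key point, replacing only the score calculating function, suggests a memoized observation-likelihood lookup. The sketch below shows that idea in generic form; the model, scores, and unbounded cache are illustrative assumptions, not the paper's system.

    import random

    random.seed(0)
    N_STATES, N_FRAMES = 5, 4
    frames = [[random.random() for _ in range(3)] for _ in range(N_FRAMES)]

    def raw_score(state, t):
        """Stand-in for an expensive acoustic likelihood log b_state(o_t)."""
        return -sum((x - state / N_STATES) ** 2 for x in frames[t])

    _score_cache = {}

    def cached_score(state, t):
        """Drop-in replacement for raw_score: same values, computed once.

        In time-synchronous Viterbi beam search many partial paths re-enter
        the same (state, frame) pair, so caching changes no scores and no
        search behavior; it only skips repeated likelihood evaluations."""
        key = (state, t)
        if key not in _score_cache:
            _score_cache[key] = raw_score(state, t)
        return _score_cache[key]

    # Tiny time-synchronous pass: without the cache, state s at frame t
    # would be re-scored once per predecessor state.
    trellis = {s: cached_score(s, 0) for s in range(N_STATES)}
    for t in range(1, N_FRAMES):
        trellis = {s: max(trellis[p] + cached_score(s, t) for p in trellis)
                   for s in range(N_STATES)}
    print(f"{len(_score_cache)} raw evaluations served "
          f"{N_STATES + N_STATES * N_STATES * (N_FRAMES - 1)} score lookups")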

A new warp scheduling technique for improving the performance of GPUs by utilizing MSHR information (GPU 성능 향상을 위한 MSHR 정보 기반 워프 스케줄링 기법)

  • Kim, Gwang Bok; Kim, Jong Myon; Kim, Cheol Hong
    • The Journal of Korean Institute of Next Generation Computing, v.13 no.3, pp.72-83, 2017
  • GPUs provide high throughput by executing many warps in parallel to hide latency. The MSHRs (Miss Status Holding Registers) of the L1 data cache track cache miss requests until the required data is serviced from lower-level memory. In recent GPUs, excessive requests for cache resources cause GPU resources to be underutilized due to cache resource reservation failures. In this paper, we propose a new warp scheduling technique that reduces stall cycles under MSHR shortage. The cache miss rate of each warp is predicted based on the observation that a warp exhibits similar miss rates over long periods. Warps showing low predicted miss rates, as well as computation-intensive warps, are given high issue priority when the MSHRs are full. Our proposal improves GPU performance by using cache resources more efficiently, based on cache miss rate prediction and monitoring of the MSHR entries. According to our experimental results, the proposed scheduling technique reduces reservation fail cycles by 25.7% and increases IPC by 6.2% compared with a loose round-robin scheduler.
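
  • A schematic of the scheduling rule as the abstract states it; the miss-rate predictor and MSHR model below are simplified assumptions. Each warp's miss rate is tracked over a window, and when MSHR entries are nearly exhausted, issue priority goes to the ready warp predicted to miss least.

    from dataclasses import dataclass

    @dataclass
    class Warp:
        wid: int
        ready: bool = True
        accesses: int = 0
        misses: int = 0

        @property
        def predicted_miss_rate(self):
            # Assumption mirroring the paper's observation: a warp's recent
            # miss rate predicts its near-future miss rate.
            return self.misses / self.accesses if self.accesses else 0.0

    class MshrAwareScheduler:
        def __init__(self, warps, mshr_entries=32, pressure=0.9):
            self.warps = warps
            self.mshr_entries = mshr_entries
            self.pressure = pressure  # fraction of entries meaning "full"

        def pick(self, mshr_in_use, rr_start=0):
            ready = [w for w in self.warps if w.ready]
            if mshr_in_use >= self.pressure * self.mshr_entries:
                # MSHR shortage: favor the warp least likely to need an entry.
                return min(ready, key=lambda w: w.predicted_miss_rate)
            # Otherwise behave like a loose round-robin scheduler.
            ready.sort(key=lambda w: (w.wid - rr_start) % len(self.warps))
            return ready[0]

    warps = [Warp(0, misses=40, accesses=50), Warp(1, misses=2, accesses=50),
             Warp(2, misses=25, accesses=50)]
    sched = MshrAwareScheduler(warps)
    print(sched.pick(mshr_in_use=31).wid)  # 1: lowest predicted miss rate
    print(sched.pick(mshr_in_use=4).wid)   # 0: round-robin when relaxed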

Efficient Document Replacement Policy by Web Site Popularity

  • Han, Jun-Tak
    • International Journal of Contents, v.3 no.1, pp.14-18, 2007
  • General replacement policies include document-based LRU and LFU, and various other policies are used to replace documents in a cache effectively. However, these policies consider only the time and frequency of document requests, not the popularity of each web site. In this paper, we present document replacement policies that take the popularity of each web site into account. They are suitable for modern network environments, enhancing the hit ratio and managing cache contents efficiently by replacing documents that are requested only intermittently with new ones.
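
  • One way to read the proposal (the weighting below is an illustrative assumption, not the paper's formula) is an LRU-style cache whose eviction score mixes recency with the popularity of the document's site, so documents from popular sites survive intermittent gaps between requests.

    class PopularityAwareCache:
        """LRU-style cache whose eviction score also rewards site popularity."""

        def __init__(self, capacity=2, popularity_weight=30.0):
            self.capacity = capacity
            self.w = popularity_weight  # illustrative knob, not from the paper
            self.clock = 0              # logical request time
            self.docs = {}              # url -> last access tick
            self.site_hits = {}         # site -> request count (popularity)

        @staticmethod
        def site_of(url):
            return url.split("/")[0]

        def request(self, url):
            self.clock += 1
            site = self.site_of(url)
            self.site_hits[site] = self.site_hits.get(site, 0) + 1
            if url not in self.docs and len(self.docs) >= self.capacity:
                self._evict()
            self.docs[url] = self.clock

        def _evict(self):
            total = sum(self.site_hits.values())
            def score(url):
                age = self.clock - self.docs[url]         # recency (LRU part)
                pop = self.site_hits[self.site_of(url)] / total
                return -age + self.w * pop                # low score -> evict
            del self.docs[min(self.docs, key=score)]

    cache = PopularityAwareCache()
    for url in ["hot.com/a", "hot.com/a", "hot.com/a",
                "cold.org/x", "hot.com/b"]:
        cache.request(url)
    print(sorted(cache.docs))  # ['hot.com/a', 'hot.com/b']: the newer
                               # cold.org/x loses; plain LRU would keep it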