• Title/Summary/Keyword: Cache Update

Bitmap-based Prefix Caching for Fast IP Lookup

  • Kim, Jinsoo;Ko, Myeong-Cheol;Nam, Junghyun;Kim, Junghwan
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.3, pp.873-889, 2014
  • IP address lookup is crucial to router performance. Several studies have addressed prefix caching to improve IP address lookup performance. Since a prefix represents a range of IP addresses, a prefix cache shows better performance than an IP address cache. However, not every prefix is cacheable in itself: caching a non-leaf prefix can cause a false hit, because a longer matching prefix may exist in the routing table. Prefix expansion techniques such as complete prefix tree expansion (CPTE) make it possible to cache non-leaf prefixes in expanded form, but the expanded prefixes are hard to manage and can incur a great deal of update overhead in the routing table. We propose a bitmap-based prefix cache (BMCache) that provides low update overhead as well as a low cache miss ratio. The proposed scheme keeps no expanded prefixes in the routing table; instead it expands a non-leaf prefix using a bitmap at caching time. Trace-driven simulation shows that BMCache has a very low miss ratio despite its low update overhead compared to other schemes.
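
A minimal sketch of the idea summarized above, under assumed data structures: a cached non-leaf prefix carries a small bitmap, computed at caching time, that marks which of its short extensions are not covered by any longer prefix, so the cache returns a hit only when a false hit is impossible. The class, the toy 8-bit addresses, and EXPAND_BITS are illustrative assumptions, not the BMCache implementation.

```python
EXPAND_BITS = 2                          # expand a cached non-leaf prefix by 2 bits

def ip_to_bits(ip, width=8):
    """Toy 8-bit 'addresses' keep the example short; real IPv4 would use 32."""
    return format(ip, "0{}b".format(width))

class BitmapPrefixCache:
    def __init__(self, table):
        # table: {prefix bit-string: next hop}, longest-prefix-match semantics
        self.table = table
        self.cache = {}                  # cached prefix -> (next hop, bitmap)

    def _longest_match(self, addr):
        for length in range(len(addr), -1, -1):
            if addr[:length] in self.table:
                return addr[:length], self.table[addr[:length]]
        return None, None

    def lookup(self, ip):
        addr = ip_to_bits(ip)
        # 1. Cache probe: a hit is valid only if the bitmap bit covering the
        #    next EXPAND_BITS of the address is set (no longer prefix there).
        for length in range(len(addr), -1, -1):
            entry = self.cache.get(addr[:length])
            if entry is not None:
                next_hop, bitmap = entry
                idx = int(addr[length:length + EXPAND_BITS].ljust(EXPAND_BITS, "0"), 2)
                if bitmap[idx]:
                    return next_hop      # true hit, no false-hit risk
                break                    # a longer prefix may match: fall through
        # 2. Full lookup, then cache the matched prefix with a bitmap marking
        #    which EXPAND_BITS-wide extensions it safely covers.
        prefix, next_hop = self._longest_match(addr)
        if prefix is None:
            return None
        bitmap = [1] * (2 ** EXPAND_BITS)
        for i in range(2 ** EXPAND_BITS):
            ext = prefix + format(i, "0{}b".format(EXPAND_BITS))
            if any(len(q) > len(prefix) and (q.startswith(ext) or ext.startswith(q))
                   for q in self.table):
                bitmap[i] = 0            # a longer prefix overrides this extension
        self.cache[prefix] = (next_hop, bitmap)
        return next_hop

table = {"0": "A", "01": "B", "0111": "C"}       # "01" is a non-leaf prefix
cache = BitmapPrefixCache(table)
print(cache.lookup(0b01010000))   # "B": full lookup, then cached with its bitmap
print(cache.lookup(0b01000000))   # "B": served from the bitmap-expanded entry
print(cache.lookup(0b01110000))   # "C": bitmap bit is clear, so no false hit
```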

Asynchronous Cache Consistency Technique (비동기적 캐쉬 일관성 유지 기법)

  • 이찬섭
    • Journal of the Korea Society of Computer and Information, v.9 no.2, pp.33-40, 2004
  • As client/server computing becomes common with the development of computer performance and information communication technology, clients use local caches for scalability, fast response time, and reduced consumption of limited bandwidth. Consistency of the cached data must then be maintained between server and client, and many techniques have been proposed for this. This paper improves on the existing update-frequency cache consistency technique. Existing consistency techniques have the disadvantage that response time suffers from synchronous declarations, or that abort steps increase when the write intention declaration is delayed. To solve these problems, the proposed technique refers to the update time of an object when a page is requested or when an update operation completes. As a result, when an update occurs, the write intention declaration or the update itself can be performed selectively and asynchronously according to sel_mode, so responses are fast, abort steps decrease, and the selection is clearer.
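
A rough sketch of the kind of selective, asynchronous write intention declaration the abstract describes. The selection rule used here (a recently updated object is declared synchronously), the server interface, and all names are assumptions made for illustration; they are not the protocol defined in the paper.

```python
import time

class AsyncConsistencyClient:
    def __init__(self, server, recency_window=5.0):
        self.server = server                  # must expose declare() and commit()
        self.recency_window = recency_window  # seconds
        self.pending = []                     # deferred (asynchronous) declarations
        self.last_update = {}                 # object id -> last local update time

    def update(self, oid, value):
        now = time.time()
        recently_updated = now - self.last_update.get(oid, 0.0) < self.recency_window
        self.last_update[oid] = now
        if recently_updated:
            # contention is likely, so declare synchronously before the write;
            # a newer write also supersedes any deferred one for the same object
            self.pending = [(o, v) for o, v in self.pending if o != oid]
            self.server.declare(oid)
            self.server.commit(oid, value)
        else:
            # declare asynchronously: apply locally now, flush the declaration later
            self.pending.append((oid, value))

    def flush(self):
        # e.g. piggy-backed on the next message sent to the server
        for oid, value in self.pending:
            self.server.declare(oid)
            self.server.commit(oid, value)
        self.pending.clear()

class StubServer:                             # stand-in so the sketch runs as-is
    def declare(self, oid): print("declare", oid)
    def commit(self, oid, value): print("commit", oid, value)

client = AsyncConsistencyClient(StubServer())
client.update("x", 1)   # first write in a while: declaration is deferred
client.update("x", 2)   # written again right away: declared synchronously
client.flush()          # nothing left for "x"; other deferred writes would go out here
```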

SSD Cache for RAID: Integrating Data Caching and Parity Update Delay (RAID를 위한 SSD 캐시: 데이터 캐싱과 패리티 갱신 지연 기법의 결합)

  • Minh, Sophal;Lee, Donghee
    • KIISE Transactions on Computing Practices, v.23 no.6, pp.379-385, 2017
  • In enterprise environments, hybrid storage typically places SSDs over disk-based RAID, and the SSDs are used as a data cache. Recently, the LeavO caching scheme was introduced to reduce the parity update overhead of the underlying RAID. In this paper, we combine the data caching and LeavO caching schemes and derive cost models for the combined cache to determine the optimal data and LeavO cache sizes. We also propose the Adaptive Combined Cache, which dynamically adjusts the data cache and LeavO cache sizes for evolving workloads. Experimental results show that the performance of the Adaptive Combined Cache is significantly superior to that of the conventional data caching scheme and is comparable with that of the off-line optimal scheme.
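
A rough sketch of how a cost model can pick the split of SSD capacity between the data cache and the LeavO cache, in the spirit of the abstract above. The cost terms, default weights, and hit/coverage curves are hypothetical stand-ins, not the cost model derived in the paper.

```python
def total_cost(data_gb, leavo_gb, read_hit, write_cover,
               miss_cost=5.0, parity_cost=8.0, reads=0.7, writes=0.3):
    """Modeled cost per request for one split; read_hit/write_cover are
    workload-specific curves mapping cache size (GB) to hit/coverage ratio."""
    read_misses = reads * (1.0 - read_hit(data_gb)) * miss_cost
    parity_io = writes * (1.0 - write_cover(leavo_gb)) * parity_cost
    return read_misses + parity_io

def best_split(capacity_gb, read_hit, write_cover, step=1):
    """Enumerate candidate splits and keep the one with the lowest modeled cost."""
    best = None
    for data_gb in range(0, capacity_gb + 1, step):
        cost = total_cost(data_gb, capacity_gb - data_gb, read_hit, write_cover)
        if best is None or cost < best[0]:
            best = (cost, data_gb, capacity_gb - data_gb)
    return best

# Hypothetical saturating curves standing in for measured hit/coverage ratios.
read_hit = lambda gb: min(1.0, gb / 64.0)
write_cover = lambda gb: min(1.0, gb / 32.0)
print(best_split(64, read_hit, write_cover))   # -> (cost, data cache GB, LeavO GB)
```

An adaptive variant, along the lines of the Adaptive Combined Cache, would simply re-run this search periodically as the measured hit and coverage curves drift with the workload.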

Improvement of Partial Update for the Web Map Tile Service (실시간 타일 지도 서비스를 위한 타일이미지 갱신 향상 기법)

  • Cho, Sunghwan;Ga, Chillo;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.31 no.5, pp.365-373, 2013
  • Tile caching is a commonly used method for optimizing the delivery of map imagery across the internet in modern WebGIS systems. However, the poor performance of map tile cache updates is one of the major obstacles to wider use of this technique for frequently updated datasets. In this paper, we introduce a new algorithm, Partial Area Cache Update (PACU), that significantly reduces redundant updates of map tiles when the source map data changes frequently. The performance of the algorithm is verified with the cadastral map data of Pyeongtaek, Gyeonggi Province, where approximately 3,100 changes occur per day among 331,594 parcels. The experimental results show that the PACU algorithm is 6.6 times faster than ESRI ArcGIS Server®. This algorithm contributes significantly to solving the frequent-update problem and enables Web Map Tile Services for data that requires frequent updates.
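
A minimal sketch of the partial-update idea: instead of regenerating the whole tile pyramid, compute only the tiles whose footprints intersect the bounding boxes of changed features. The simple 2^z x 2^z grid over a fixed map extent and all names are assumptions for illustration; this is not the PACU implementation.

```python
from math import floor

EXTENT = (0.0, 0.0, 1000.0, 1000.0)      # (min_x, min_y, max_x, max_y) of the map

def tiles_for_bbox(bbox, zoom, extent=EXTENT):
    """All (zoom, col, row) tile indices whose footprint intersects bbox."""
    min_x, min_y, max_x, max_y = extent
    n = 2 ** zoom                                  # tiles per axis at this zoom
    tile_w = (max_x - min_x) / n
    tile_h = (max_y - min_y) / n
    c0 = max(0, floor((bbox[0] - min_x) / tile_w))
    c1 = min(n - 1, floor((bbox[2] - min_x) / tile_w))
    r0 = max(0, floor((bbox[1] - min_y) / tile_h))
    r1 = min(n - 1, floor((bbox[3] - min_y) / tile_h))
    return {(zoom, c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}

def dirty_tiles(changed_bboxes, zooms):
    """Union of affected tiles over all changed features and zoom levels."""
    dirty = set()
    for bbox in changed_bboxes:
        for z in zooms:
            dirty |= tiles_for_bbox(bbox, z)
    return dirty

# e.g. two edited parcels: only the tiles covering them are regenerated
changes = [(120.0, 80.0, 130.5, 95.0), (410.0, 300.0, 412.0, 305.0)]
print(sorted(dirty_tiles(changes, zooms=range(0, 4))))
```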

KDBcs-Tree : An Efficient Cache Conscious KDB-Tree for Multidimensional Data (KDBcs-트리 : 캐시를 고려한 효율적인 KDB-트리)

  • Yeo, Myung-Ho;Min, Young-Soo;Yoo, Jae-Soo
    • Journal of KIISE: Databases, v.34 no.4, pp.328-342, 2007
  • We propose a new cache-conscious index structure for efficiently processing frequently updated data. The proposed index structure is based on the KDB-tree, one of the representative index structures based on space-partitioning techniques. In this paper, we propose a data compression technique and a pointer elimination technique to increase the utilization of a cache line. To show the superiority of the proposed index structure, we compare it with variants of the CR-tree (e.g., the FF CR-tree and the SE CR-tree) in a variety of environments. Our experimental results show that the proposed index structure achieves about 85%, 97%, and 86% performance improvements over the existing index structures in terms of insertion, update, and cache utilization, respectively.
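
A small sketch of the two ideas the abstract names, pointer elimination and key compression, shown on a flat pool of nodes. The field layout, the 16-bit key deltas, and all names are assumptions, not the actual KDBcs-tree node format.

```python
from array import array

class CompressedNode:
    """Keys stored as small deltas from a per-node base; children addressed by a
    single index into a shared node pool instead of one pointer per child."""
    def __init__(self, keys, first_child_idx=None):
        self.base = min(keys) if keys else 0
        # 'H' = unsigned 16-bit: many deltas fit into one cache line where full
        # 64-bit keys (or 64-bit child pointers) would not
        self.deltas = array("H", (k - self.base for k in keys))
        self.first_child = first_child_idx    # children are contiguous in the pool

    def key(self, i):
        return self.base + self.deltas[i]

    def child(self, pool, i):
        # pointer elimination: the i-th child lives at first_child + i
        return pool[self.first_child + i]

# A tiny pool: node 0 is the root, its children occupy slots 1..3 contiguously.
pool = [None] * 4
pool[1] = CompressedNode([100, 105, 110])
pool[2] = CompressedNode([200, 230])
pool[3] = CompressedNode([300, 301, 302])
pool[0] = CompressedNode([100, 200, 300], first_child_idx=1)

root = pool[0]
print(root.key(1))                  # 200, reconstructed from base + delta
print(root.child(pool, 2).key(0))   # 300, reached without storing per-child pointers
```

Both tricks aim to fit more keys and child references into a single cache line, which is the cache-utilization effect the abstract measures.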

Efficient Deferred Incremental Refresh of XML Query Cache Using ORDBMS (ORDBMS를 사용한 XML 질의 캐쉬의 효율적인 지연 갱신)

  • Hwang Dae-Hyun;Kang Hyun-Chul
    • The KIPS Transactions: Part D, v.13D no.1 s.104, pp.11-22, 2006
  • As more and more XML documents need to be handled, research on storing and managing XML documents in databases is being actively conducted. Employing an RDBMS or ORDBMS as a repository for XML documents is currently regarded as the most practical approach. Query results over XML documents stored in the database can be cached to improve query performance, although this incurs the cost of keeping the cache consistent when the underlying data is updated. In this paper, we assume that an ORDBMS is used as the repository for both the XML query cache and its underlying XML documents, and that the XML query cache is refreshed in a deferred way using an update log. When the same XML document is updated multiple times, the deferred refresh of the XML query cache may become inefficient. We propose an algorithm that removes or filters such duplicate updates, and based on it, the optimal SQL statements to be executed for XML query cache consistency are generated. Through experiments, we show the efficiency of the proposed deferred refresh of the XML query cache.
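
A minimal sketch of duplicate-update filtering for a deferred refresh: the update log is collapsed so each document contributes at most one action, and one SQL statement is emitted per surviving action. The collapsing rules and the table and column names are assumptions for illustration, not the statements generated in the paper.

```python
def collapse_log(update_log):
    """update_log: ordered list of (doc_id, op, payload), op in {'I','U','D'}.
    Returns {doc_id: (op, payload)} with duplicate updates filtered out."""
    net = {}
    for doc_id, op, payload in update_log:
        prev = net.get(doc_id)
        if prev and prev[0] == "I":
            # the document was inserted within this refresh window
            if op == "D":
                net.pop(doc_id)                  # insert then delete: nothing to refresh
            else:
                net[doc_id] = ("I", payload)     # fold later updates into the insert
        elif prev and op == "D":
            net[doc_id] = ("D", None)
        else:
            net[doc_id] = (op, payload)          # a later update supersedes earlier ones
    return net

def refresh_sql(net, cache_table="xml_query_cache"):
    """One statement per surviving action (table/column names are hypothetical)."""
    stmts = []
    for doc_id, (op, payload) in net.items():
        if op == "D":
            stmts.append(f"DELETE FROM {cache_table} WHERE doc_id = {doc_id}")
        elif op == "I":
            stmts.append(f"INSERT INTO {cache_table}(doc_id, result) VALUES ({doc_id}, '{payload}')")
        else:
            stmts.append(f"UPDATE {cache_table} SET result = '{payload}' WHERE doc_id = {doc_id}")
    return stmts

log = [(7, "U", "<a/>"), (7, "U", "<b/>"), (9, "I", "<c/>"), (9, "D", None)]
print(refresh_sql(collapse_log(log)))   # only one UPDATE, for doc 7, survives
```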

CL-Tree: B+ tree for NAND Flash Memory using Cache Index List (CL 트리: 낸드 플래시 시스템에서 캐시 색인 리스트를 활용하는 B+ 트리)

  • Hwang, Sang-Ho;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information, v.20 no.4, pp.1-10, 2015
  • NAND flash requires erase operations and does not support in-place updates, so storage systems use a Flash Translation Layer (FTL). However, the mapping table in the FTL consumes a great deal of memory, so many recent studies have tried to reduce this overhead. These studies attempt to solve the update propagation problem in NAND flash systems that do not use a mapping table. In this paper, we present a novel index structure, called the CL-Tree (Cache List Tree), to solve the update propagation problem. The proposed index structure reduces the write operations caused by update propagation, and it performs well for search operations because it uses a multi-list structure. In our experimental evaluation, we show that the scheme yields about 173% and 179% improvements in insertion speed and search speed, respectively, compared to the traditional B+-tree and other works.
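
A rough sketch of absorbing update propagation with an in-memory list of remapped node pages that is consulted during search, in the spirit of the cache index list described above. The page model, flush policy, and names are assumptions, not the CL-Tree's actual design.

```python
class FlashTree:
    def __init__(self, flush_threshold=8):
        self.pages = {}          # page_no -> node content (simulated flash pages)
        self.next_page = 0
        self.remap = {}          # cache index list: old page -> current page
        self.flush_threshold = flush_threshold

    def _write(self, node):
        # NAND flash: no in-place update, every write goes to a fresh page
        self.pages[self.next_page] = node
        self.next_page += 1
        return self.next_page - 1

    def resolve(self, page_no):
        # follow the cache index list instead of rewriting parent nodes
        while page_no in self.remap:
            page_no = self.remap[page_no]
        return page_no

    def read(self, page_no):
        return self.pages[self.resolve(page_no)]

    def update_node(self, page_no, node):
        new_page = self._write(node)
        self.remap[self.resolve(page_no)] = new_page
        if len(self.remap) > self.flush_threshold:
            # a real CL-Tree would now rewrite the affected ancestors and clear
            # the list; that propagation step is beyond this sketch
            pass

# The parent stores a child page number that stays valid across child rewrites.
tree = FlashTree()
child = tree._write({"keys": [10, 20]})
parent = tree._write({"children": [child]})
tree.update_node(child, {"keys": [10, 20, 30]})        # child moves to a new page
print(tree.read(tree.read(parent)["children"][0]))     # parent page left untouched
```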

Update Frequency Cache Consistency for Reducing Wait Time in Mobile Computing (이동 컴퓨팅 환경에서 대기 시간을 감소시키는 갱신 빈도 캐쉬 일관성 기법)

  • Lee, Chan-Seob;Kim, Dong-Hyuk;Baek, Joo-Hyun;Choi, Eui-In
    • The KIPS Transactions: Part D, v.9D no.6, pp.1017-1024, 2002
  • As mobile computing environments become common with the development of wireless networking technology and communication devices, mobile hosts use local caches for scalability, fast response time, and reduced consumption of limited bandwidth. Consistency of the cached data must then be maintained between the mobile host and the mobile support station, and many techniques have been proposed for this. Existing consistency techniques are mainly detection-based and broadcast periodic invalidation messages to cope with frequent disconnection. However, these techniques increase abort steps through the increase or delay of the messages needed to verify data accuracy, and scalability decreases because the mobile host deletes its cached data. To solve these problems, the technique proposed in this paper refers to the update frequency of an object when a page is requested or when an update operation completes. As a result, when an update occurs, the write intention declaration or the update itself can be performed selectively and asynchronously according to the update frequency, so responses are fast and abort steps decrease. Scalability is also improved by selectively deleting or propagating cached data according to its update frequency when the periodic invalidation messages arrive after a disconnection.
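
A minimal sketch of handling a periodic invalidation report by update frequency, as the abstract suggests: frequently updated objects are dropped, rarely updated ones are refreshed in place so the cache keeps serving them. The report format, threshold, and fetch interface are assumptions for illustration only.

```python
def apply_invalidation_report(cache, report, fetch, freq_threshold=5):
    """cache: {oid: value}; report: [(oid, updates_since_last_report, new_value_or_None)].
    Hot objects are simply dropped; cold objects are refreshed in place so the
    cache can keep answering reads for them without another round trip."""
    for oid, freq, new_value in report:
        if oid not in cache:
            continue
        if freq > freq_threshold:
            del cache[oid]                       # frequently updated: not worth keeping
        else:
            cache[oid] = new_value if new_value is not None else fetch(oid)

cache = {"a": 1, "b": 2, "c": 3}
report = [("a", 12, None), ("b", 1, 20)]
apply_invalidation_report(cache, report, fetch=lambda oid: f"fresh-{oid}")
print(cache)   # {'b': 20, 'c': 3}: 'a' was dropped, 'b' was propagated, 'c' untouched
```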

An XML Proxy Cache System for XML Documents with Update Locality in Shipbuilding Information Management System (조선정보관리시스템에서의 갱신의 지역편중성을 갖는 XML문서를 위한 XML 프록시 캐쉬 시스템)

  • Kim Nak Hyun;Lee Dong-Ho;Choi Il-Hwan;Kim Hyoung-Joo
    • Journal of KIISE: Computing Practices and Letters, v.11 no.5, pp.393-400, 2005
  • XML makes it possible to query information created and managed by different applications, which would not be possible if the information were expressed in another structure or language. In the shipbuilding information management system, storing and querying such large XML documents in XDBox is inefficient, and an XML proxy cache system is suggested to improve this. In this paper, we propose an XML proxy cache system that exploits the update locality observed in the use of the shipbuilding information management system.

A Comparative Analysis on Page Caching Strategies Affecting Energy Consumption in the NAND Flash Translation Layer (NAND 플래시 변환 계층에서 전력 소모에 영향을 미치는 페이지 캐싱 전략의 비교·분석)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications, v.13 no.3, pp.109-116, 2018
  • SSDs do not allow in-place updates within an allocated page, so the moment data is modified, a new page must be allocated to replace the previous one. This intrinsic characteristic of SSDs requires many changes to the existing HDD-based I/O theory. In this paper, we compare the performance of FTL caching strategies through simulation, from the perspectives of cache hashing (global vs. grouped) and caching algorithm (LRU vs. NUR). Experimental results show that, in terms of energy consumption for flash operations, grouped cache management is not suitable and the NUR algorithm is superior to the LRU algorithm. In particular, we found that the cache hit ratio of the LRU algorithm is about 10 percentage points higher than that of the NUR algorithm, while the energy consumption of the LRU algorithm is about 32% higher.
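
A minimal sketch of the two replacement policies the abstract compares, LRU and a reference-bit NUR, as small mapping-cache classes. The victim-selection details are simplifications, not the paper's simulator; they only illustrate that NUR keeps one reference bit per entry instead of a full recency ordering.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.d = capacity, OrderedDict()
    def get(self, key):
        if key not in self.d:
            return None
        self.d.move_to_end(key)              # most recently used goes to the back
        return self.d[key]
    def put(self, key, value):
        self.d[key] = value
        self.d.move_to_end(key)
        if len(self.d) > self.capacity:
            self.d.popitem(last=False)       # evict the least recently used entry

class NURCache:
    """Not-Recently-Used: one reference bit per entry, cleared in sweeps; any
    entry with a clear bit is an eviction candidate (no full recency ordering)."""
    def __init__(self, capacity):
        self.capacity, self.d, self.ref = capacity, {}, {}
    def get(self, key):
        if key not in self.d:
            return None
        self.ref[key] = 1                    # mark as recently used
        return self.d[key]
    def put(self, key, value):
        if key not in self.d and len(self.d) >= self.capacity:
            victim = next((k for k in self.d if self.ref[k] == 0), None)
            if victim is None:               # every bit set: clear them all, then pick
                for k in self.ref:
                    self.ref[k] = 0
                victim = next(iter(self.d))
            del self.d[victim], self.ref[victim]
        self.d[key], self.ref[key] = value, 1

for c in (LRUCache(2), NURCache(2)):
    c.put("p1", "A"); c.put("p2", "B"); c.get("p1"); c.put("p3", "C")
    print(type(c).__name__, sorted(c.d))   # LRU keeps the re-referenced p1;
                                           # the coarser NUR sweep happens to evict it
```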