• Title/Summary/Keyword: cache-hit

A Web Cache Replacement Technique of the Divided Scope Base that Considered a Size Reference Characteristics of Web Object

  • Seok, Ko-Il
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.335-339
    • /
    • 2003
  • We previously proposed a divided-scope web cache replacement technique that considers the size reference characteristics of web objects for the efficient operation of web-based systems; in this study, we analyze the performance of the proposed technique through experiments. Through log analysis of a web-based system, we examined the size reference characteristics that arise from user reference behavior. Based on this analysis, we divided the storage scope of the cache server and tested the replacement technique on the divided scopes. The proposed technique is flexible with respect to changes in user reference characteristics. In the experiments, we compared it with the existing LRU and LRUMIN replacement techniques and confirmed an improvement in the object hit ratio.

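The divided-scope idea above can be pictured as a cache whose storage is partitioned by object size, with an independent LRU list per partition. The sketch below is a minimal illustration under assumed scope boundaries and a plain LRU policy inside each scope; the paper's actual boundaries and per-scope policy are not given in the abstract.

```python
from collections import OrderedDict

class DividedScopeCache:
    """Minimal sketch: cache storage partitioned into scopes by object size,
    each scope managed with its own LRU replacement."""

    def __init__(self, scopes):
        # scopes: list of (max_object_size, capacity_in_bytes); boundaries are illustrative
        self.scopes = [{"limit": limit, "capacity": cap, "used": 0,
                        "entries": OrderedDict()} for limit, cap in scopes]

    def _scope_for(self, size):
        for scope in self.scopes:
            if size <= scope["limit"]:
                return scope
        return self.scopes[-1]                      # oversized objects fall into the last scope

    def get(self, key):
        for scope in self.scopes:
            if key in scope["entries"]:
                scope["entries"].move_to_end(key)   # refresh LRU position on a hit
                return scope["entries"][key][0]
        return None                                 # miss

    def put(self, key, obj, size):
        scope = self._scope_for(size)
        if size > scope["capacity"]:
            return                                  # never cache an object larger than its scope
        if key in scope["entries"]:                 # re-insertion of an existing object
            scope["used"] -= scope["entries"].pop(key)[1]
        while scope["used"] + size > scope["capacity"]:
            _, (_, victim_size) = scope["entries"].popitem(last=False)  # evict LRU victim
            scope["used"] -= victim_size
        scope["entries"][key] = (obj, size)
        scope["used"] += size

# Example with three hypothetical scopes: small (<=10 KB), medium (<=100 KB), and large objects.
cache = DividedScopeCache([(10_240, 1_000_000), (102_400, 4_000_000), (float("inf"), 8_000_000)])
```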

An Efficient Flash Translation Layer Considering Temporal and Spacial Localities for NAND Flash Memory Storage Systems

  • Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.9-15
    • /
    • 2017
  • This paper presents an efficient FTL for NAND flash based SSDs. In page-mapping FTLs, address translation information is stored on flash memory pages, and an address translation cache keeps frequently accessed entries. The FTL proposed in this paper reduces response time by considering both the temporal and spatial localities of page access patterns in translation cache management. The localities of several well-known traces are evaluated to determine a cache structure that achieves a high hit ratio. Simulations with these traces show that the presented FTL reduces response time compared with previous FTLs and can be used with relatively small caches.
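
The abstract does not spell out the cache structure, so the sketch below only illustrates the two localities it names: on a translation-cache miss the whole translation page is loaded (spatial locality), and cached translation pages are kept in LRU order (temporal locality). The page size and the `read_translation_page` callback are assumptions.

```python
from collections import OrderedDict

ENTRIES_PER_TRANSLATION_PAGE = 512     # assumed number of mapping entries per flash page

class TranslationCache:
    """Sketch of a page-mapping FTL translation cache: whole translation pages
    are cached (spatial locality) and replaced in LRU order (temporal locality)."""

    def __init__(self, max_pages, read_translation_page):
        self.max_pages = max_pages
        self.read_translation_page = read_translation_page   # reads one mapping page from flash
        self.pages = OrderedDict()                            # translation-page no. -> {lpn: ppn}

    def translate(self, lpn):
        tpage = lpn // ENTRIES_PER_TRANSLATION_PAGE
        if tpage in self.pages:                       # hit: recently used page stays cached
            self.pages.move_to_end(tpage)
            return self.pages[tpage][lpn]
        mapping = self.read_translation_page(tpage)   # miss: fetch the whole translation page
        if len(self.pages) >= self.max_pages:
            self.pages.popitem(last=False)            # evict the least recently used page
        self.pages[tpage] = mapping
        return mapping[lpn]
```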

User Centric Cache Allocation Schemes in Infrastructure Wireless Mesh Networks (인프라스트럭처 무선 메쉬 네트워크에서 사용자 중심 캐싱 할당 기법)

  • Jeon, Seung Hyun
    • Journal of Industrial Convergence
    • /
    • v.17 no.4
    • /
    • pp.131-137
    • /
    • 2019
  • In infrastructure wireless mesh networks (WMNs), we investigate a User-centric Cache Allocation (UCA) scheme that improves mobile users' satisfaction for a given cache hit ratio while reducing the cache cost at a mesh router (MR) and the expected transmission time (ETT) for content search in the cache. To minimize the ETT values of mobile users, a genetic algorithm based UCA (GA-UCA) scheme is also provided. The goal is to maximize mobile users' satisfaction via a utility function that considers content popularity and the number of mobile users. By solving the optimization problem, we show that the optimal cache allocation can be obtained for both UCA and GA-UCA. In addition, a WMN provider can determine the optimal number of mobile users for user-centric cache allocation in infrastructure WMNs.
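
The utility function and constraints of GA-UCA are not given in the abstract; the sketch below is only a generic genetic-algorithm loop for splitting a total cache budget across mesh routers, with `fitness` standing in for the paper's satisfaction/utility model (content popularity, number of users, ETT, cache cost).

```python
import random

def ga_cache_allocation(num_routers, total_cache, fitness, generations=200,
                        population_size=30, mutation_rate=0.1):
    """Generic GA sketch: a chromosome is a vector of non-negative weights, and the
    actual allocation is the weights normalized to the cache budget, so crossover
    and mutation always yield feasible allocations. Assumes at least two routers."""

    def to_allocation(weights):
        total = sum(weights) or 1.0
        return [total_cache * w / total for w in weights]

    population = [[random.random() for _ in range(num_routers)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(to_allocation(w)), reverse=True)
        survivors = population[: population_size // 2]
        children = []
        while len(survivors) + len(children) < population_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, num_routers)
            child = p1[:cut] + p2[cut:]                                 # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(num_routers)] = random.random()  # mutate one gene
            children.append(child)
        population = survivors + children
    best = max(population, key=lambda w: fitness(to_allocation(w)))
    return to_allocation(best)
```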

Performance Improvement of SVLIW Architectures by Removing LNOPs from An Object Code (목적 코드에서 LNOP 코드가 제거됨에 따른 SVLIW 구조의 성능 향상)

  • Jeong, Bo-Yun;Jeon, Joong-Nam;Kim, Suk-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.9
    • /
    • pp.2269-2279
    • /
    • 1997
  • The SVLIW (Superscalar VLIW) processor, a member of the VLIW processor family, schedules very long instruction words at runtime. If a very long instruction word that is about to be issued has data dependences and/or resource conflicts with the words still under execution, a long NOP (LNOP) word is issued in its place until all dependences and conflicts have been resolved. LNOPs can therefore be removed from object codes compiled for SVLIW processors. In this paper, we measure the improvement in the cache hit ratio obtained by removing LNOPs from the object code and analyze the resulting improvement in processor performance. Benchmark tests indicate that the performance of SVLIW processors improves by more than 5% compared with traditional VLIW processors.


Forecasting Load Balancing Method by Prediction Hot Spots in the Shared Web Caching System

  • Jung, Sung-C.;Chong, Kil-T.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2137-2142
    • /
    • 2003
  • One of the important performance metrics of the World Wide Web is how quickly and reliably user requests are serviced. Shared Web Caching (SWC) is one technique for improving the performance of the network system. In shared web caching systems, the key issues are deciding when and where an item is cached and how to deliver correct and reliable information to users quickly. SWC distributes items to proxies that have sufficient capacity in terms of processing time and cache size. In this study, a Hot Spot Prediction Algorithm (HSPA) is proposed to improve the consistent hashing algorithm in terms of load balancing and hit rate with a shorter response time. The method predicts popular hot spots using a prediction model, and the predicted hot spots are dispatched to the appropriate proxies according to the load-balancing algorithm. A simulator implementing the suggested algorithm was developed in Perl, and the simulation results demonstrate its performance; the algorithm is compared against consistent hashing in terms of load balancing and hit rate.

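The HSPA prediction model itself is not described in the abstract; the sketch below only shows the consistent-hashing side, with predicted hot spots replicated to several proxies so their load is spread. Proxy names and the replica count are illustrative.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Sketch of consistent hashing over proxies; predicted hot spots are
    replicated to several proxies to spread their load."""

    def __init__(self, proxies, vnodes=100):
        self.ring = sorted((self._hash(f"{proxy}#{i}"), proxy)
                           for proxy in proxies for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(text):
        return int(hashlib.md5(text.encode()).hexdigest(), 16)

    def proxies_for(self, item, replicas=1):
        """Proxies that should cache `item`; predicted hot spots use replicas > 1."""
        chosen = []
        start = bisect.bisect(self.keys, self._hash(item))
        for step in range(len(self.ring)):          # walk the ring at most once
            _, proxy = self.ring[(start + step) % len(self.ring)]
            if proxy not in chosen:
                chosen.append(proxy)
                if len(chosen) == replicas:
                    break
        return chosen

# Illustrative use: a predicted hot spot is cached on three proxies, a cold item on one.
ring = ConsistentHashRing(["proxy-a", "proxy-b", "proxy-c", "proxy-d"])
hot_proxies = ring.proxies_for("/hot/popular-item", replicas=3)
cold_proxy = ring.proxies_for("/cold/rare-item")
```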

Cache Management using a Adaptive Parity Group Configuration in RAID 5 Controller (적응형 패리티 그룹 구성을 이용한 RAID 5 제어기에서의 캐시 운영)

  • Huh, Jung-Ho;Song, Ja-Young;Chang, Tae-Mu
    • The KIPS Transactions:PartA
    • /
    • v.10A no.2
    • /
    • pp.83-92
    • /
    • 2003
  • RAID 5 is a widely used technique for constructing disk systems with high reliability and performance. This paper proposes APGOC (Adaptive Parity Group On Cache), a cache organization that addresses the "small write" problem of RAID 5, especially in OLTP (On-Line Transaction Processing) environments. In our approach, when a user process requests a file from the kernel, information on the file's read/write characteristics is added to the file data structure of the file system. With this information, the data and parity caches can be managed interchangeably through parity fetching, which enhances cache utilization and improves disk request response time. Our method is analyzed and evaluated by simulation; compared with previous work, we observed a performance improvement of about 6~13%.
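
The adaptive grouping details of APGOC are not in the abstract; the sketch below only illustrates the parity-fetching idea it mentions: for files the kernel flags as write-intensive, the parity block is cached along with the data so a RAID-5 small write can update parity by XOR without the usual read-modify-write disk accesses. The `disk` interface here is hypothetical.

```python
class ParityAwareCache:
    """Sketch of parity fetching in a RAID-5 controller cache: data and parity
    blocks share the cache, and write-intensive files prefetch their parity."""

    def __init__(self, disk):
        self.disk = disk                 # hypothetical interface: read_block, read_parity, write
        self.data = {}                   # (stripe, disk_no) -> block bytes
        self.parity = {}                 # stripe -> parity bytes

    def open_file(self, blocks, write_intensive):
        # The file system passes the file's read/write characteristic with the request.
        for stripe, disk_no in blocks:
            self.data[(stripe, disk_no)] = self.disk.read_block(stripe, disk_no)
            if write_intensive and stripe not in self.parity:
                self.parity[stripe] = self.disk.read_parity(stripe)   # parity fetching

    def small_write(self, stripe, disk_no, new_block):
        old = self.data.get((stripe, disk_no)) or self.disk.read_block(stripe, disk_no)
        parity = self.parity.get(stripe) or self.disk.read_parity(stripe)
        # RAID-5 small-write parity update: new_parity = old_parity XOR old_data XOR new_data
        new_parity = bytes(p ^ o ^ n for p, o, n in zip(parity, old, new_block))
        self.data[(stripe, disk_no)] = new_block
        self.parity[stripe] = new_parity
        self.disk.write(stripe, disk_no, new_block, new_parity)
```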

An Energy Efficient and High Performance Data Cache Structure Utilizing Tag History of Cache Addresses (캐시 주소의 태그 이력을 활용한 에너지 효율적 고성능 데이터 캐시 구조)

  • Moon, Hyun-Ju;Jee, Sung-Hyun
    • The KIPS Transactions:PartA
    • /
    • v.14A no.1 s.105
    • /
    • pp.55-62
    • /
    • 2007
  • The uptime of embedded processors for mobile devices depends on battery consumption, and a large portion of power consumption is known to be due to cache management in embedded processors. This paper proposes an energy-efficient data cache structure for high-performance embedded processors. A high-performance prefetching data cache issues prefetch instructions before demand-fetch instructions, based on reference predictions. These prefetch instructions reduce memory delay by improving the cache hit ratio, but they also increase energy consumption in proportion to the number of prefetch instructions issued. In this paper, we adopt a tag history table in the prefetching data cache to reduce energy consumption by minimizing parallel tag comparisons. Experimental results show that the proposed data cache improves energy consumption as well as memory delay.
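
The abstract only says that a tag history table minimizes parallel tag comparisons; one plausible reading, sketched below, is a per-set predictor that remembers the way that matched last time, so most hits cost a single tag comparison and the full comparison is only a fallback. The geometry and replacement policy are assumptions.

```python
class TagHistoryCache:
    """Sketch of a set-associative cache with a per-set tag history entry:
    the remembered way is compared first, and only on a mispredict are the
    remaining ways compared (the energy-costly parallel case)."""

    def __init__(self, num_sets=64, ways=4, block_bits=5):
        self.num_sets, self.ways, self.block_bits = num_sets, ways, block_bits
        self.tags = [[None] * ways for _ in range(num_sets)]
        self.history = [0] * num_sets           # way that matched most recently, per set
        self.fill = [0] * num_sets              # round-robin fill pointer, per set
        self.tag_compares = 0                   # comparison count as a proxy for tag-array energy

    def access(self, addr):
        index = (addr >> self.block_bits) % self.num_sets
        tag = (addr >> self.block_bits) // self.num_sets
        predicted = self.history[index]
        self.tag_compares += 1
        if self.tags[index][predicted] == tag:          # predicted way hits: one comparison
            return True
        for way in range(self.ways):                    # fallback: compare the other ways
            if way == predicted:
                continue
            self.tag_compares += 1
            if self.tags[index][way] == tag:
                self.history[index] = way
                return True
        victim = self.fill[index]                       # miss: fill a way round-robin
        self.fill[index] = (victim + 1) % self.ways
        self.tags[index][victim] = tag
        self.history[index] = victim
        return False
```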

Popularity Based Cache Replacement Scheme to Enhance Performance in Content Centric Networks (콘텐츠 중심 네트워크에서 성능 향상을 위한 인기도 기반 캐시 교체 기법)

  • Woo, Taehee;Park, Heungsoon;Kim, Hogil
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.11
    • /
    • pp.2151-2159
    • /
    • 2015
  • Unlike existing IP routing methods, the Content Centric Network (CCN) is a new networking paradigm that locates contents by content name. The CCN can effectively serve content requested repeatedly by users because each node has a cache that can store contents. This paper proposes a popularity based cache replacement scheme. The proposed scheme improves the hit rate over existing schemes, thereby reducing the server load and the Round Trip Time (RTT).
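
The abstract does not define how popularity is measured; the sketch below uses a simple per-name request counter and evicts the least popular content from the content store, which is the general shape of a popularity-based replacement scheme.

```python
class PopularityContentStore:
    """Sketch of a CCN content store whose replacement victim is the content
    with the lowest observed popularity (request count) rather than the LRU one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # content name -> data
        self.requests = {}       # content name -> number of Interests seen

    def on_interest(self, name, fetch_upstream):
        self.requests[name] = self.requests.get(name, 0) + 1
        if name in self.store:                           # cache hit: served from the content store
            return self.store[name]
        data = fetch_upstream(name)                      # cache miss: forward toward the producer
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.requests.get(n, 0))
            del self.store[victim]                       # evict the least popular content
        self.store[name] = data
        return data
```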

A Design and Performance Analysis of Web Cache Replacement Policy Based on the Size Heterogeneity of the Web Object (웹 객체 크기 이질성 기반의 웹 캐시 대체 기법의 설계와 성능 평가)

  • Na Yun Ji;Ko Il Seok;Cho Dong Uk
    • The KIPS Transactions:PartC
    • /
    • v.12C no.3 s.99
    • /
    • pp.443-448
    • /
    • 2005
  • Efficient use of the web cache has become an important factor in determining the management efficiency of web-based systems. Cache performance depends heavily on the replacement algorithm, which dynamically selects a suitable subset of objects for caching in a finite cache space. In this paper, a web caching algorithm is proposed for the efficient operation of web-based systems. The algorithm is designed around a divided scope that considers the size reference characteristics and heterogeneity of web objects. In our experimental environment, the algorithm is compared with conventional replacement algorithms, and we confirmed a performance improvement of more than 15%.

A Modified LRU Page Replacement Policy with LMF for Web Proxy Cache (LMF로 수정된 웹 프락시 캐쉬용 LRU페이지 교체 정책)

  • 이용임;김주균
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.7_8
    • /
    • pp.426-433
    • /
    • 2003
  • Management policies for web proxy caches, aimed at the QoS of web users, mainly focus on page replacement and data consistency policies. However, the two subjects have been studied independently of each other, without regard for their possible cooperation. In this paper, we present the performance improvement obtained by adapting the LMF characteristic used in the data consistency policy to LRU, obtaining a performance synergy from their complementary cooperation. Since various policies for the management of web proxy caches are being developed, this study can serve as a performance guide for increasing the cache hit ratio and reducing the transmission overhead of the web server.
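
The abstract does not define LMF beyond its role in the consistency policy, so the sketch below is only one possible reading: keep the LRU ordering, but when evicting, look at a few objects at the LRU end and prefer the one whose last-modified time suggests it will go stale soonest. The candidate-window size and the staleness heuristic are assumptions.

```python
import time
from collections import OrderedDict

class LruLmfCache:
    """Sketch of an LRU proxy cache biased by a last-modified factor: among the
    least recently used objects, evict the one most likely to be stale soon."""

    def __init__(self, capacity, candidates=4):
        self.capacity = capacity
        self.candidates = candidates          # how many LRU-end objects to consider as victims
        self.entries = OrderedDict()          # url -> (object, last_modified timestamp)

    def get(self, url):
        if url in self.entries:
            self.entries.move_to_end(url)     # LRU refresh on a hit
            return self.entries[url][0]
        return None

    def put(self, url, obj, last_modified):
        if url in self.entries:
            del self.entries[url]
        elif len(self.entries) >= self.capacity:
            now = time.time()
            pool = list(self.entries)[: self.candidates]     # least recently used candidates
            # A recently modified object is assumed likely to change again soon,
            # so it is the preferred victim (smallest now - last_modified).
            victim = min(pool, key=lambda u: now - self.entries[u][1])
            del self.entries[victim]
        self.entries[url] = (obj, last_modified)
```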