• Title/Summary/Keyword: LRU algorithm


Web Caching using File Type (파일 타입을 이용한 웹 캐싱)

  • Lim, Jae-Hyun; Lee, Jun-Yeon
    • The KIPS Transactions: Part C / v.9C no.6 / pp.961-968 / 2002
  • This paper proposes a new cache management method that takes into account the high variability of the World Wide Web when managing web cache space. Instead of using a single cache, we partition the cache and store documents according to their file types. The proposed method is compared with current cache management policies based on the LFU, LRU, and SIZE algorithms. Using two different workloads, we show through simulation of file-type caching that both the hit ratio and the byte hit ratio improve.
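
A minimal sketch of the partition-by-file-type idea described above, assuming a fixed byte budget per file type and LRU eviction inside each partition (the in-partition policy is an illustrative choice, not prescribed by the paper):

```python
from collections import OrderedDict

class FileTypeCache:
    """Web cache split into per-file-type partitions (illustrative sketch)."""

    def __init__(self, partition_bytes):
        # partition_bytes: dict mapping a file type (e.g. "html", "image")
        # to the byte budget reserved for that type -- assumed parameters.
        self.capacity = dict(partition_bytes)
        self.used = {t: 0 for t in partition_bytes}
        self.parts = {t: OrderedDict() for t in partition_bytes}  # url -> size

    def get(self, url, ftype):
        part = self.parts[ftype]
        if url in part:
            part.move_to_end(url)          # refresh recency on a hit
            return True
        return False

    def put(self, url, ftype, size):
        if size > self.capacity[ftype]:
            return                         # document larger than its partition
        part = self.parts[ftype]
        # Evict least recently used documents of the SAME type until it fits;
        # other partitions are never disturbed.
        while self.used[ftype] + size > self.capacity[ftype] and part:
            _, old_size = part.popitem(last=False)
            self.used[ftype] -= old_size
        part[url] = size
        self.used[ftype] += size
```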

PMU (Performance Monitoring Unit)-Based Dynamic XIP (eXecute In Place) Technique for Embedded Systems (내장형 시스템을 위한 PMU (Performance Monitoring Unit) 기반 동적 XIP (eXecute In Place) 기법)

  • Kim, Dohun; Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.3 / pp.158-166 / 2008
  • These days, mobile embedded systems adopt flash memory capable of XIP, since it can reduce memory usage, power consumption, and software load time. XIP provides processors with direct access to ROM and flash memory. However, using XIP incurs unnecessary degradation of application performance, because direct access to ROM and flash memory has higher latency than access to main memory. In this paper, we propose a memory management framework, dynamic XIP, which resolves this performance degradation. Using a constrained RAM cache, dynamic XIP dynamically changes the XIP region according to page access patterns, reducing the execution-time and energy overheads of native XIP. The proposed framework consists of a page profiler that gathers an application's memory access pattern using the PMU, and an XIP manager that decides whether a page is accessed in main memory or in flash memory. The framework is implemented and evaluated in the Linux kernel. Our evaluation shows that it can reduce execution time by up to 25% and energy consumption by up to 22% compared with the XIP-only configuration adopted in general mobile embedded systems. Moreover, our modified LRU algorithm with code-page filters reduces execution time and energy consumption by up to 90% and 80%, respectively, compared with applying the existing LRU algorithm to dynamic XIP.

  • PDF
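
A rough user-level sketch of the placement decision described above, assuming hypothetical access counters in place of real PMU samples and a fixed RAM-cache budget; the actual framework is implemented inside the Linux kernel and is not reproduced here:

```python
from collections import OrderedDict

RAM_CACHE_PAGES = 64  # assumed budget for the constrained RAM cache

class DynamicXIPManager:
    """Keeps the hottest code pages in RAM; other pages stay XIP in flash."""

    def __init__(self, ram_pages=RAM_CACHE_PAGES):
        self.ram_pages = ram_pages
        self.ram_cache = OrderedDict()   # page -> access count, LRU order

    def on_profile_sample(self, page, is_code_page):
        # Code-page filter: only executable pages are considered for the
        # RAM cache (a modified-LRU policy sketched from the abstract).
        if not is_code_page:
            return "flash-XIP"
        if page in self.ram_cache:
            self.ram_cache[page] += 1
            self.ram_cache.move_to_end(page)
        else:
            if len(self.ram_cache) >= self.ram_pages:
                self.ram_cache.popitem(last=False)   # evict LRU page to flash
            self.ram_cache[page] = 1
        return "ram"
```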

WWCLOCK: Page Replacement Algorithm Considering Asymmetric I/O Cost of Flash Memory (WWCLOCK: 플래시 메모리의 비대칭적 입출력 비용을 고려한 페이지 교체 알고리즘)

  • Park, Jun-Seok; Lee, Eun-Ji; Seo, Hyun-Min; Koh, Kern
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.913-917 / 2009
  • Flash memories have asymmetric I/O costs for reads and writes in terms of latency and energy consumption, and the ratio of these costs depends on the type of storage. Moreover, it is becoming more common for a system to use two flash memories, as an internal memory and an external memory card. For these reasons, buffer cache replacement algorithms should consider the I/O costs of devices as well as the likelihood of reference. This paper presents the WWCLOCK (Write-Weighted CLOCK) algorithm, which directly uses the I/O costs of devices along with the recency and frequency of cache blocks to select a victim to evict from the buffer cache. WWCLOCK can be used for a wide range of storage devices with different I/O costs and for systems that use two or more memory devices at the same time. In addition, it has low time and space complexity, comparable to the CLOCK algorithm. Trace-driven simulations show that the proposed algorithm reduces total I/O time by 36.2% on average compared with LRU.
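
The abstract does not spell out the full algorithm; one hedged reading, in which dirty blocks earn extra sweep passes in proportion to an assumed write/read cost ratio while a reference bit tracks recency, might look like this:

```python
class WWClockSketch:
    """CLOCK-style buffer where dirty blocks are harder to evict (sketch)."""

    def __init__(self, capacity, write_cost=3, read_cost=1):
        self.capacity = capacity
        self.weight_dirty = write_cost // read_cost  # assumed cost ratio
        self.frames = []          # list of dicts: {block, ref, dirty, credit}
        self.hand = 0

    def _find(self, block):
        return next((f for f in self.frames if f["block"] == block), None)

    def access(self, block, is_write=False):
        frame = self._find(block)
        if frame:                                  # hit: set reference bit
            frame["ref"] = True
            frame["dirty"] |= is_write
            return
        if len(self.frames) >= self.capacity:
            self._evict()
        credit = self.weight_dirty if is_write else 1
        self.frames.append({"block": block, "ref": False,
                            "dirty": is_write, "credit": credit})

    def _evict(self):
        while True:
            f = self.frames[self.hand]
            if f["ref"]:                 # second chance for recently used blocks
                f["ref"] = False
            elif f["credit"] > 1:        # extra chances for costly write-backs
                f["credit"] -= 1
            else:
                self.frames.pop(self.hand)
                self.hand %= max(len(self.frames), 1)
                return
            self.hand = (self.hand + 1) % len(self.frames)
```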

A Cost-Based Buffer Replacement Algorithm in Object-Oriented Database Systems (객체지향 데이타베이스에서의 비용기반 버퍼 교체 알고리즘)

  • Park, Chong-Mok; Han, Wook-Shin; Whang, Kyu-Young
    • Journal of KIISE: Databases / v.27 no.1 / pp.1-12 / 2000
  • Many object-oriented database systems manage object buffers to provide fast access to objects. Traditional buffer replacement algorithms based on fixed-length pages simply assume that the cost incurred by operating a buffer is proportional to the number of buffer faults. However, this assumption no longer holds in an object buffer, where objects are of variable length and the cost of replacing an object varies from object to object. In this paper, we propose a cost-based replacement algorithm for object buffers. The proposed algorithm replaces the objects that have the minimum cost per unit time and unit space. The cost model extends the previous page-based one to include the replacement costs and sizes of objects. Performance tests show that the proposed algorithm is almost always superior to the LRU-2 algorithm and in some cases is more than twice as fast. The idea of cost-based replacement can be applied to any buffer management architecture that adopts earlier algorithms. It is especially useful in object-oriented database systems, where there is significant variation in replacement costs.

  • PDF
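
A hedged sketch of "replace the objects with the minimum cost per unit time and unit space", assuming each object's replacement cost is supplied by the caller and using time since last access as the time term; the paper's actual cost model is richer than this:

```python
import time

class CostBasedObjectBuffer:
    """Variable-length object buffer with cost-per-time-per-space eviction."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = {}   # oid -> (size, replacement_cost, last_access)

    def access(self, oid, size, replacement_cost):
        now = time.monotonic()
        if oid in self.objects:
            s, c, _ = self.objects[oid]
            self.objects[oid] = (s, c, now)    # refresh last access time
            return
        if size > self.capacity:
            return                             # object larger than the buffer
        while self.used + size > self.capacity and self.objects:
            self._evict(now)
        self.objects[oid] = (size, replacement_cost, now)
        self.used += size

    def _evict(self, now):
        # Victim = object whose replacement cost, normalised by its size and
        # by how long it has sat unused, is smallest.
        def score(item):
            _, (size, cost, last) = item
            age = max(now - last, 1e-9)
            return cost / (size * age)
        victim, (size, _, _) = min(self.objects.items(), key=score)
        del self.objects[victim]
        self.used -= size
```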

Analyzing Virtual Memory Write Characteristics and Designing Page Replacement Algorithms for NAND Flash Memory (NAND 플래시메모리를 위한 가상메모리의 쓰기 참조 분석 및 페이지 교체 알고리즘 설계)

  • Lee, Hye-Jeong; Bahn, Hyo-Kyung
    • Journal of KIISE: Computer Systems and Theory / v.36 no.6 / pp.543-556 / 2009
  • Recently, NAND flash memory has been used as the swap device of virtual memory as well as the file storage of mobile systems. Since temporal locality is dominant in the page references of virtual memory, LRU and its approximation, the CLOCK algorithm, are widely used. However, the cost of a write operation in flash memory is much larger than that of a read operation, and a page replacement algorithm should take this into account. This paper analyzes virtual memory read and write reference patterns separately and observes a ranking inversion problem of temporal locality in write references that is not observed in read references. Based on this observation, we present a new page replacement algorithm that considers write frequency as well as temporal locality when estimating write reference behavior. The new algorithm dynamically allocates memory space to read and write operations based on their reference patterns and I/O costs. Although the algorithm has no external parameter to tune, it supports optimized implementations for virtual memory systems and performs 20-66% better than the CLOCK, CAR, and CFLRU algorithms.
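
The abstract outlines the policy but not its details; as a rough illustration only, the sketch below assumes read pages are ranked purely by recency, write pages by write count and recency, and the split between the two regions is biased by an assumed flash write/read cost ratio:

```python
from collections import OrderedDict

class ReadWriteAwareCache:
    """Split page cache: recency for reads, recency plus frequency for writes."""

    def __init__(self, total_pages, write_cost_ratio=4):
        # write_cost_ratio is an assumed flash write/read cost factor used to
        # bias space toward write pages, whose misses are more expensive.
        write_share = write_cost_ratio / (write_cost_ratio + 1)
        self.write_cap = max(1, int(total_pages * write_share))
        self.read_cap = max(1, total_pages - self.write_cap)
        self.reads = OrderedDict()    # page -> None, LRU order
        self.writes = {}              # page -> (write_count, last_tick)
        self.tick = 0

    def reference(self, page, is_write):
        self.tick += 1
        if is_write:
            self.reads.pop(page, None)
            if page not in self.writes and len(self.writes) >= self.write_cap:
                # Prefer evicting a write page that is both rarely written and cold.
                victim = min(self.writes,
                             key=lambda p: (self.writes[p][0], self.writes[p][1]))
                del self.writes[victim]
            count, _ = self.writes.get(page, (0, 0))
            self.writes[page] = (count + 1, self.tick)
        else:
            if page in self.reads:
                self.reads.move_to_end(page)        # refresh recency
            else:
                self.reads[page] = None
                if len(self.reads) > self.read_cap:
                    self.reads.popitem(last=False)  # evict LRU read page
```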

A Study on Demand Paging For NAND Flash Memory Storages (NAND 플래시 메모리 저장장치를 위한 요구 페이징 기법 연구)

  • Yoo, Yoon-Suk; Ryu, Yeon-Seung
    • Journal of Korea Multimedia Society / v.10 no.5 / pp.583-593 / 2007
  • We study page replacement algorithms for demand paging, called CFLRU/C, CFLRU/E, and DL-CFLRU/E, that reduce the number of erase operations and improve the wear-leveling of flash memory. Under the CFLRU/C and CFLRU/E algorithms, the victim page is the least recently used clean page within a pre-specified window. However, when there is no clean page within the window, CFLRU/C evicts the dirty page with the lowest frequency, while CFLRU/E evicts the dirty page with the highest number of erase operations. The DL-CFLRU/E algorithm maintains two page lists, a clean page list and a dirty page list, and first searches the clean page list when selecting a victim. When it cannot find any clean page in the clean page list, it evicts the dirty page with the highest number of erase operations within the window of the dirty page list. We show through simulation that the proposed schemes reduce the number of erase operations and improve wear-leveling compared with existing schemes such as LRU.

  • PDF
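
A compact sketch of the CFLRU/C and CFLRU/E victim selection described above, assuming an LRU list ordered from least to most recently used and a window covering its first `window` entries (field names are illustrative):

```python
def select_victim(lru_list, window, variant="CFLRU/C"):
    """Pick an eviction victim following the CFLRU/C / CFLRU/E description.

    lru_list: pages ordered from least to most recently used; each entry is a
    dict with 'page', 'dirty', 'freq' (reference count), and 'erases'
    (erase count) -- the field names are assumptions, not the paper's.
    """
    candidates = lru_list[:window]

    # First choice: the least recently used clean page inside the window.
    for entry in candidates:
        if not entry["dirty"]:
            return entry["page"]

    # No clean page in the window: fall back to a dirty page.
    dirty = [e for e in candidates if e["dirty"]]
    if variant == "CFLRU/C":
        return min(dirty, key=lambda e: e["freq"])["page"]     # lowest frequency
    return max(dirty, key=lambda e: e["erases"])["page"]       # highest erase count
```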

Modeling of Data References with Temporal Locality and Popularity Bias (시간 지역성과 인기 편향성을 가진 데이터 참조의 모델링)

  • Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.119-124 / 2023
  • This paper proposes a new reference model that can represent data accesses with temporal locality and popularity bias. Among existing reference models, the LRU-stack model can express temporal locality, the property that more recently referenced data has a higher probability of being referenced again, but it cannot account for differences in the popularity of data. Conversely, the independent reference model can reflect the differing popularity of data, but it has the limitation of not being able to model changes in data reference trends over time. The reference model presented in this paper overcomes the limitations of these two models and reflects both the popularity bias of data and its change over time. This paper also examines the relationship between cache replacement algorithms and the reference model, and shows the optimality of the proposed model.
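
The abstract does not give the model's exact form; purely as an illustration of combining the two ingredients it names, the sketch below generates a reference string in which an item's re-reference probability is proportional to the product of an assumed Zipf-like popularity weight and a recency weight tied to its LRU-stack depth:

```python
import random

def generate_trace(num_items, length, skew=1.0, recency_bias=0.9):
    """Synthetic reference string with popularity bias (Zipf-like weights)
    and temporal locality (recently used items are boosted)."""
    popularity = [1.0 / (rank + 1) ** skew for rank in range(num_items)]
    stack = list(range(num_items))          # index 0 = most recently used
    trace = []
    for _ in range(length):
        weights = [popularity[item] * (recency_bias ** depth)
                   for depth, item in enumerate(stack)]
        item = random.choices(stack, weights=weights, k=1)[0]
        stack.remove(item)                  # move-to-front, as in an LRU stack
        stack.insert(0, item)
        trace.append(item)
    return trace
```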

A Study on Performance of Content Store Replacement Algorithms over Vehicular CCN (VCCN에서 Content Store 교체 알고리즘의 성능에 관한 연구)

  • Choi, Jong-In; Kang, Seung-Seok
    • The Journal of the Convergence on Culture Technology / v.6 no.1 / pp.495-500 / 2020
  • VANET (Vehicular Ad Hoc Network), an example of an ad hoc vehicular network, has become a popular research area together with self-driving and connected cars. The traditional TCP/IP protocol stack can be applied to VANET implementations; recently, however, CCN (Content Centric Networking) has shown greater promise when applied to VANET, a combination called VCCN (VANET over CCN). CCN maintains several data tables, including the CS (Content Store), which keeps track of the currently requested content segments. When the CS becomes full and new content must be stored, a replacement algorithm is needed. This paper compares and contrasts four replacement algorithms and analyzes their transmission characteristics under diverse network conditions. According to the simulation results, the LRU replacement algorithm performs better than the remaining three algorithms. In addition, even when the size of the CS is small, the network maintains reasonable transmission performance, and as the CS size becomes larger, the transmission rate increases proportionally. Transmission performance decreases when the network is crowded and when the number of transmission hops becomes large.
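
For reference, a minimal sketch of an LRU-managed Content Store of the kind compared in the paper, assuming content is keyed by segment name and capacity is counted in segments:

```python
from collections import OrderedDict

class LRUContentStore:
    """Content Store holding the most recently requested content segments."""

    def __init__(self, capacity_segments):
        self.capacity = capacity_segments
        self.store = OrderedDict()          # segment name -> payload

    def lookup(self, name):
        if name in self.store:
            self.store.move_to_end(name)    # Interest satisfied: refresh recency
            return self.store[name]
        return None                         # miss: forward the Interest upstream

    def insert(self, name, payload):
        if name in self.store:
            self.store.move_to_end(name)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used segment
        self.store[name] = payload
```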

Web Caching Strategy based on Documents Popularity (선호도 기반 웹 캐싱 전략)

  • Yoo, Hae-Young; Park, Chel
    • Journal of KIISE: Computer Systems and Theory / v.29 no.9 / pp.530-538 / 2002
  • In this paper, we propose a new caching strategy for web servers. The proposed algorithm collects only the statistics of requested files, such as their popularity, as requests arrive; then, periodically, only the files with the highest popularity are cached, all at once. Because the cache remains unchanged until it is rebuilt, the web server can use a very efficient data structure to determine whether a file is in the cache, which greatly increases the efficiency of cache manipulation. Furthermore, experiments performed with real log files produced by web servers show that the cache hit ratio is better than that produced by LRU. The proposed algorithm has a drawback: the cache hit ratio may decrease when the popularity of files not in the cache explodes instantaneously. In our opinion, however, such explosions happen infrequently, and it is easy to adapt web servers to handle such unusual cases.
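
A minimal sketch of the rebuild-by-popularity strategy described above, assuming request counts as the popularity measure and a byte budget for the cache (both are illustrative choices, not the paper's exact parameters):

```python
from collections import Counter

class PopularityCache:
    """Web cache rebuilt periodically from request-popularity statistics."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.request_counts = Counter()     # per-file popularity statistics
        self.cached = set()                 # static between rebuilds

    def on_request(self, path):
        self.request_counts[path] += 1
        return path in self.cached          # cheap membership test on a set

    def rebuild(self, file_sizes):
        # Re-populate the cache all at once with the most popular files
        # that fit into the byte budget; file_sizes maps path -> bytes.
        self.cached.clear()
        used = 0
        for path, _ in self.request_counts.most_common():
            size = file_sizes.get(path)
            if size is not None and used + size <= self.capacity:
                self.cached.add(path)
                used += size
```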

CPWL: Clock and Page Weight based Disk Buffer Management Policy for Flash Memory Systems

  • Kang, Byung Kook; Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.25 no.2 / pp.21-29 / 2020
  • The use of NAND flash memory continues to increase with the demand for mobile data in the IT industry. However, erase operations in flash memory require longer latency and higher power consumption, and each cell has a limited lifetime, so frequent write and erase operations reduce both the performance and the lifetime of the flash memory. To address this problem, management techniques have been studied that improve the performance of flash-based storage by using disk buffers to reduce write and erase operations. In this paper, we propose CPWL, a disk buffer management policy that minimizes the number of write operations by separating read and write pages according to the characteristics of buffer memory access patterns. By arranging pages according to the access mode of the requested pages, this technique reduces the number of writes, increases the lifespan of the flash memory, and decreases energy consumption.
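
The abstract gives only the outline of CPWL; as a loose illustration of the read/write separation rather than the paper's exact clock-and-weight policy, the sketch below keeps read and write pages in separate second-chance regions and evicts clean read pages first:

```python
class ReadWriteClockBuffer:
    """Disk buffer that keeps read pages and write pages in separate
    second-chance regions, evicting read pages first to limit flash writes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.read_pages = {}     # page -> reference bit (clean pages)
        self.write_pages = {}    # page -> reference bit (dirty pages)

    def access(self, page, is_write):
        if is_write:
            self.read_pages.pop(page, None)     # page migrates to write region
            region = self.write_pages
        else:
            region = self.write_pages if page in self.write_pages else self.read_pages
        if page in region:
            region[page] = True                 # set reference bit on a hit
            return
        if len(self.read_pages) + len(self.write_pages) >= self.capacity:
            self._evict()
        region[page] = False

    def _evict(self):
        # Prefer evicting a clean read page; touch dirty pages only if needed.
        for region in (self.read_pages, self.write_pages):
            for page, ref in list(region.items()):
                if ref:
                    region[page] = False        # second chance
                else:
                    del region[page]
                    return
        # Every page had its bit set (now cleared): drop an arbitrary page.
        victim_region = self.read_pages or self.write_pages
        victim_region.pop(next(iter(victim_region)))
```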