• Title/Summary/Keyword: page cache (페이지 캐시)


Refreshing technique based on reference period and update period (접근 주기와 실제 갱신 주기에 기초한 리프레싱 기법)

  • 김인태;김기창
    • Proceedings of the Korean Information Science Society Conference / 1998.10a / pp.512-514 / 1998
  • Abstract: Recently, the use of Internet cache servers, which reduce Internet bottlenecks by copying data to locations close to users, has been increasing. A cache server's performance is determined by its cache hit ratio and the freshness of the cached data, where freshness is the probability that the cached data matches the most recently updated original. The most common way to maintain freshness is to assign each piece of data an expiration time and request fresh data once that time has passed; however, a poorly chosen expiration time either increases network traffic or delivers stale data to users. This paper proposes a technique that sets each page's refresh time based on its access period and its actual update period, so that only pages likely to be requested are selected for prefetching.

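The refresh rule sketched in this abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the moving-average estimators for the access and update periods and the exact prefetch condition are assumptions made for the example.

```python
import time

class PageStats:
    """Tracks the observed access period and update period of one cached page."""

    def __init__(self):
        self.last_access = None
        self.last_update = None
        self.access_interval = None  # moving average of gaps between accesses
        self.update_interval = None  # moving average of gaps between origin updates

    @staticmethod
    def _ema(old, gap, alpha=0.5):
        return gap if old is None else alpha * gap + (1 - alpha) * old

    def record_access(self, now=None):
        now = now or time.time()
        if self.last_access is not None:
            self.access_interval = self._ema(self.access_interval, now - self.last_access)
        self.last_access = now

    def record_update(self, now=None):
        """Called when a revalidation finds that the origin copy has changed."""
        now = now or time.time()
        if self.last_update is not None:
            self.update_interval = self._ema(self.update_interval, now - self.last_update)
        self.last_update = now

    def should_prefetch(self, now=None):
        """Refresh a page ahead of time only if it is probably stale AND it is
        accessed at least as often as it changes (likely to be requested again)."""
        now = now or time.time()
        if self.access_interval is None or self.update_interval is None:
            return False
        likely_stale = (now - self.last_update) >= self.update_interval
        worth_prefetching = self.access_interval <= self.update_interval
        return likely_stale and worth_prefetching
```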

An Optimal Technic to Utilize Resource on Extended Web Cache Server (확장된 웹 캐시 서버에서 자원이용률 최적화 기법)

  • 김원기;김두상;김성락;구용완
    • Proceedings of the Korean Information Science Society Conference / 2002.10e / pp.184-186 / 2002
  • The resource utilization of a large-scale web cache server depends mainly on network and disk I/O latency, and its workload varies sharply between peak hours, when network use is heavy, and idle hours such as early morning. The aim of this work is therefore to maximize resource utilization within a limited resource budget by lowering resource usage during peak periods and shifting that workload to off-peak periods. To this end, a cache compression technique is applied to disk I/O work during off-peak periods; the prediction technique showed that the actual working set at a given point in time was small, and that accurate prediction of page reuse patterns can produce a high hit ratio with a cache the size of physical memory.

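The off-peak compression idea can be pictured with the toy cache below. The peak-hour window and the use of zlib are assumptions for the sketch; the point is only that compression work (and the disk I/O it saves) is deferred to idle hours.

```python
import zlib
from datetime import datetime

PEAK_HOURS = range(9, 23)  # assumed busy window; tune to the real workload

class CompressingCache:
    """Toy cache that compresses cold entries during off-peak hours."""

    def __init__(self):
        self.entries = {}  # key -> (is_compressed, bytes)

    def put(self, key, data: bytes):
        self.entries[key] = (False, data)

    def get(self, key):
        compressed, data = self.entries[key]
        if compressed:  # decompress lazily on the next access
            data = zlib.decompress(data)
            self.entries[key] = (False, data)
        return data

    def off_peak_maintenance(self, now=None):
        """Compress stored entries, but only outside the peak window,
        shifting this CPU and I/O work away from the busy period."""
        hour = (now or datetime.now()).hour
        if hour in PEAK_HOURS:
            return
        for key, (compressed, data) in self.entries.items():
            if not compressed:
                self.entries[key] = (True, zlib.compress(data))
```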

A Policy of Page Management Using Double Cache for NAND Flash Memory File System (NAND 플래시 메모리 파일 시스템을 위한 더블 캐시를 활용한 페이지 관리 정책)

  • Park, Myung-Kyu;Kim, Sung-Jo
    • Journal of KIISE: Computer Systems and Theory / v.36 no.5 / pp.412-421 / 2009
  • Due to the physical characteristics of NAND flash memory, overwrite operations are not permitted at the same location, so erase operations are required before rewriting. These extra operations degrade the performance of a NAND flash memory file system. Since each location also has an upper limit on the number of erase operations, frequent erases shorten the lifetime of NAND flash memory. These problems can be mitigated by delaying write operations to improve I/O performance; however, doing so lowers the cache hit ratio. This paper proposes a page management policy using a double cache for NAND flash memory file systems. The double cache consists of a Real cache and a Ghost cache and is used to analyze page reference patterns. The policy delays write operations in the Ghost cache in order to maintain the hit ratio in the Real cache. It can also improve write performance by reducing the search time for dirty pages, since the Ghost cache is divided into Dirty and Clean lists. We find that the hit ratio and I/O performance of our policy improve by 20.57% and 20.59% on average, respectively, compared with existing policies, and that the number of write operations is reduced by 30.75% on average.
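
The Real/Ghost structure described in this abstract can be condensed into the sketch below. Cache sizes and the flush policy are illustrative assumptions; the points taken from the abstract are that evicted pages go to a Ghost cache split into Dirty and Clean lists, and that write-back is delayed until a dirty page finally leaves the Ghost cache.

```python
from collections import OrderedDict

class DoubleCache:
    def __init__(self, real_size=64, ghost_size=64):
        self.real = OrderedDict()         # page -> dirty flag, in LRU order
        self.ghost_dirty = OrderedDict()  # evicted dirty pages, write delayed
        self.ghost_clean = OrderedDict()  # evicted clean pages
        self.real_size, self.ghost_size = real_size, ghost_size

    def access(self, page, write=False):
        if page in self.real:                       # Real-cache hit
            self.real[page] |= write
            self.real.move_to_end(page)
            return
        # Ghost hit (or cold miss): promote the page into the Real cache
        dirty = self.ghost_dirty.pop(page, None) is not None
        self.ghost_clean.pop(page, None)
        self.real[page] = dirty or write
        self.real.move_to_end(page)
        if len(self.real) > self.real_size:
            victim, was_dirty = self.real.popitem(last=False)
            ghost = self.ghost_dirty if was_dirty else self.ghost_clean
            ghost[victim] = True
            if len(ghost) > self.ghost_size:
                old, _ = ghost.popitem(last=False)
                if ghost is self.ghost_dirty:
                    self.flush(old)                 # write-back happens only now

    def flush(self, page):
        print(f"writing page {page} to flash")      # placeholder write-back
```

Keeping dirty pages on their own list means the write-back path never scans clean pages, which is the search-time saving the abstract credits to the Dirty/Clean split.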

Framework-assisted Selective Page Protection for Improving Interactivity of Linux Based Mobile Devices (리눅스 기반 모바일 기기에서 사용자 응답성 향상을 위한 프레임워크 지원 선별적 페이지 보호 기법)

  • Kim, Seungjune;Kim, Jungho;Hong, Seongsoo
    • Journal of KIISE / v.42 no.12 / pp.1486-1494 / 2015
  • While Linux-based mobile devices such as smartphones are increasingly used, they often exhibit poor response times. One factor that influences user-perceived interactivity is the high page fault rate of interactive tasks. Pages owned by interactive tasks can be evicted from main memory due to memory contention between interactive and background tasks. Since this increases the page fault rate of the interactive tasks, their execution tends to suffer increased delays. This paper proposes a framework-assisted selective page protection mechanism for improving the interactivity of Linux-based mobile devices. Framework-assisted selective page protection enables the run-time system to identify interactive tasks at the framework level and deliver their IDs to the kernel. As a result, the kernel can keep the pages owned by the identified interactive tasks resident and avoid page faults. The experimental results demonstrate that the selective page protection technique reduces response time by up to 11% by reducing the page fault rate by 37%.
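
A schematic of the mechanism, with Python standing in for kernel code (an obvious simplification): the framework layer reports interactive task IDs, and the reclaim path skips pages owned by those tasks. The page data layout is an assumption made for the example.

```python
protected_tasks = set()   # task IDs reported by the framework layer

def mark_interactive(task_id):
    """Framework-level hook: identify an interactive task to the 'kernel'."""
    protected_tasks.add(task_id)

def reclaim_pages(lru_pages, need):
    """Evict up to `need` pages, skipping pages owned by interactive tasks
    so their working set stays resident and page faults are avoided."""
    evicted = []
    for page in lru_pages:
        if len(evicted) >= need:
            break
        if page["owner"] in protected_tasks:
            continue  # selective protection: keep interactive pages in memory
        evicted.append(page)
    return evicted

# usage
mark_interactive(1001)  # e.g. the foreground UI task
pages = [{"id": i, "owner": 1001 if i % 2 else 2002} for i in range(8)]
print([p["id"] for p in reclaim_pages(pages, need=3)])  # only owner-2002 pages
```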

A Multi-Level Flash Translation Layer for Large Capacity Solid State Drives

  • Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.11-18 / 2021
  • The flash translation layer (FTL) of an SSD maps the logical page number requested by the host to the physical flash memory page number where the data is actually recorded. Reducing the amount of RAM used to manage this mapping information is very important. In existing demand-based FTLs, a two-level method is applied in which the mapping information is also recorded in flash memory pages and only their addresses are managed as a table in RAM. As SSD capacities grow to tens of terabytes, the RAM required for this mapping table becomes too large. This paper proposes ML-FTL, a method that manages mapping information in three levels to drastically reduce the amount of RAM required. An evaluation shows that, by properly utilizing a cache, the increase in overhead is minimal compared with the conventional two-level method.
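
The three-level translation can be shown with plain address arithmetic. The geometry below (4 KB map pages holding 1024 entries each) is an assumed figure typical of demand-based FTL descriptions, not taken from the paper; the point is that only the small top-level table must stay resident in RAM, while lower-level map pages are fetched through a cache.

```python
ENTRIES = 1024  # mapping entries per 4 KB flash map page (assumed geometry)

class MLFTL:
    """Toy three-level translation: top table in RAM, lower-level map pages
    read from 'flash' (a dict here) through a small map-page cache."""

    def __init__(self):
        self.top = {}        # L1 index -> flash address of an L2 map page
        self.flash = {}      # flash address -> list of ENTRIES entries
        self.map_cache = {}  # flash address -> cached map page

    def _read_map_page(self, addr):
        if addr not in self.map_cache:   # a cache miss costs one flash read
            self.map_cache[addr] = self.flash[addr]
        return self.map_cache[addr]

    def translate(self, lpn):
        l1, rest = divmod(lpn, ENTRIES * ENTRIES)
        l2, l3 = divmod(rest, ENTRIES)
        l2_page = self._read_map_page(self.top[l1])
        l3_page = self._read_map_page(l2_page[l2])
        return l3_page[l3]               # physical page number

# usage: one top entry now covers ENTRIES * ENTRIES data pages
ftl = MLFTL()
ftl.flash[100] = [200] + [0] * (ENTRIES - 1)  # an L2 map page at flash addr 100
ftl.flash[200] = [777] + [0] * (ENTRIES - 1)  # an L3 map page at flash addr 200
ftl.top[0] = 100
print(ftl.translate(0))  # -> 777
```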

An Efficient Address Mapping Table Management Scheme for NAND Flash Memory File System Exploiting Page Address Cache (페이지 주소 캐시를 활용한 NAND 플래시 메모리 파일시스템에서의 효율적 주소 변환 테이블 관리 정책)

  • Kim, Cheong-Ghil
    • Journal of Digital Contents Society / v.11 no.1 / pp.91-97 / 2010
  • Flash memory is used by many digital devices for data storage, exploiting its non-volatility, low power consumption, and stability, along with high integration, large capacity, and low price. With the fast-growing popularity of flash memory, its density has increased so significantly that the entire address mapping table becomes too big to be stored in SRAM. This paper proposes a page address cache together with an efficient table management scheme for hybrid flash translation layer mapping. For this purpose, all tables are integrated into a map block containing the entire set of physical page tables. Simulation results show that the proposed scheme saves extra memory area and decreases search time, with a miss ratio below 2.5% on a PC workload, and decreases write overhead, with write operations amounting to 33% of the total writes requested.
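
A minimal sketch of a page address cache sitting in front of a flash-resident map block. The LRU policy, the capacity, and the 512-entries-per-map-page geometry are assumptions for the illustration, not details from the paper.

```python
from collections import OrderedDict

MAP_ENTRIES = 512  # mapping entries per map page (assumed)

class PageAddressCache:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.cache = OrderedDict()   # map-page index -> list of entries (LRU)
        self.map_block = {}          # 'flash' copy of all map pages
        self.misses = 0

    def lookup(self, lpn):
        """Translate a logical page number via the cached map pages."""
        idx, off = divmod(lpn, MAP_ENTRIES)
        if idx in self.cache:
            self.cache.move_to_end(idx)          # LRU hit
        else:
            self.misses += 1                     # one flash read on a miss
            self.cache[idx] = self.map_block[idx]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the coldest map page
        return self.cache[idx][off]
```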

A Buffer Cache Replacement Algorithm for Considering both Hybrid Main Memory and Storage (하이브리드 메인 메모리와 스토리지의 특성을 고려한 버퍼 캐시 교체 정책)

  • Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE / v.42 no.8 / pp.947-953 / 2015
  • PRAM is being considered as a potential successor to DRAM because of characteristics such as byte-addressability, non-volatility, and high density. To exploit these benefits, buffer cache replacement algorithms based on PRAM have been actively studied. However, most previous studies exploit the byte-level performance of PRAM only to a limited extent, focusing instead on its limited lifetime and its slower access latency compared with DRAM. In this paper, we propose a novel buffer cache replacement algorithm that fully considers the byte-level performance of PRAM as well as the performance of secondary storage. To take advantage of small writes on PRAM, the proposed scheme keeps pages that are frequently accessed with small writes on PRAM and allows selective page migration from DRAM to PRAM. As a result, our scheme significantly reduces the number of PRAM writes. Our experimental results on real workloads indicate that our scheme reduces the number of PRAM writes by up to 92% and improves performance by up to 62% compared with CLOCK.
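
The placement rule, keep small-write-hot pages on PRAM and migrate selectively from DRAM, can be sketched as follows. The small-write threshold and the migration trigger are assumed values; the paper's actual bookkeeping is richer than this.

```python
SMALL_WRITE = 64    # bytes; assumed threshold for a 'small' write
MIGRATE_AFTER = 4   # small writes before a DRAM page moves to PRAM (assumed)

class HybridBufferCache:
    """Toy DRAM/PRAM placement: PRAM absorbs frequent small in-place writes
    cheaply (byte-addressable), DRAM keeps pages written in large chunks."""

    def __init__(self):
        self.dram = {}  # page -> count of consecutive small writes
        self.pram = {}  # page -> resident on PRAM

    def write(self, page, size):
        small = size <= SMALL_WRITE
        if page in self.pram:
            if not small:                 # a large write: demote back to DRAM
                self.pram.pop(page)
                self.dram[page] = 0
            return
        self.dram[page] = (self.dram.get(page, 0) + 1) if small else 0
        if self.dram[page] >= MIGRATE_AFTER:
            self.dram.pop(page)           # selective DRAM -> PRAM migration
            self.pram[page] = True
```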

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE: Computer Systems and Theory / v.35 no.6 / pp.293-303 / 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency is becoming more important. This is especially the case in embedded hand-held systems, where the power consumed by the memory system is the second largest portion overall. To save energy in the memory system, we can utilize the low-power modes of SDRAM; in the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller by itself cannot use this facility efficiently unless the operating system cooperates. In this paper we focus on minimizing the number of active SDRAM units. The operating system allocates physical pages so that only a few SDRAM units need to be active and the unused units can be put into nap mode. This work can be considered a generalized, system-wide version of the PAVM (Power-Aware Virtual Memory) research. We take all physical memory into account, especially the buffer cache, which accounts for about half of total memory usage on average. Because of the buffer cache's share of memory and its importance, the PAVM approach cannot be robust without taking it into account. We analyze RAM usage and propose a power-aware page allocation policy; in particular, pages mapped into process address spaces and buffer cache pages are considered, and the relationship and interactions between these two kinds of pages are analyzed and exploited for energy saving.
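
The allocation goal, activate as few SDRAM units as possible and nap the rest, can be demonstrated with a toy first-fit allocator. The unit count and per-unit capacity are assumed figures; the nap-power figure comes from the abstract.

```python
UNITS = 4
PAGES_PER_UNIT = 1024  # assumed capacity per SDRAM unit

used = [0] * UNITS     # allocated pages per unit

def alloc_page():
    """Pack allocations into the lowest-numbered non-full unit, so higher
    units stay empty and can be kept in nap mode (<5% of active power)."""
    for u in range(UNITS):
        if used[u] < PAGES_PER_UNIT:
            used[u] += 1
            return u
    raise MemoryError("out of physical pages")

def power_state():
    return ["active" if n else "nap" for n in used]

for _ in range(1500):
    alloc_page()
print(used, power_state())  # pages packed into units 0-1; units 2-3 napping
```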

Scheduling based on Cache Utilization in a Cache Server Cluster for Wireless Internet (무선 인터넷을 위한 캐시 서버 클러스터 환경에서 캐시 이용률 기반의 스케줄링)

  • Kwak, Hu-Keun;Chung, Kyu-Sik
    • Journal of KIISE: Computer Systems and Theory / v.34 no.9 / pp.435-444 / 2007
  • Caching web pages is an important part of web infrastructures, and the effects of caching are even more pronounced for wireless infrastructures due to their limited bandwidth. Medium- to large-scale infrastructures deploy a cluster of servers to solve the scalability and hot spot problems inherent in caching. In this paper we present a scheduling scheme based on cache utilization for a wireless Internet proxy server cluster environment. The proposed method uses cache utilization to distribute client requests evenly across a cluster of cache servers and to address the hot spot problem. We implemented our approach and performed various experiments using publicly available traces. Experimental results on a cluster of 16 cache servers demonstrate that the proposed hashing method gives a 45% to 114% performance improvement over other widely used methods while addressing the hot spot problem.
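
One way to read the scheme is hash affinity tempered by utilization feedback: a request normally goes to the server its URL hashes to (preserving hit ratio), but a server above a utilization threshold spills requests to the least-utilized server. The threshold and the spill rule are assumptions for this sketch, not the paper's exact policy.

```python
import hashlib

class ClusterScheduler:
    def __init__(self, n_servers, threshold=0.8):
        self.n = n_servers
        self.threshold = threshold
        self.utilization = [0.0] * n_servers  # updated from server feedback

    def _home(self, url):
        digest = hashlib.md5(url.encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.n

    def pick_server(self, url):
        home = self._home(url)            # hash affinity keeps hit ratio high
        if self.utilization[home] < self.threshold:
            return home
        # hot-spot relief: spill to the least-utilized server in the cluster
        return min(range(self.n), key=lambda s: self.utilization[s])

# usage: a hot URL gets redirected away from its overloaded home server
sched = ClusterScheduler(16)
sched.utilization[sched._home("/hot/page")] = 0.95
print(sched.pick_server("/hot/page"))
```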

Prefetch R-tree: A Disk and Cache Optimized Multidimensional Index Structure (Prefetch R-tree: 디스크와 CPU 캐시에 최적화된 다차원 색인 구조)

  • Park Myung-Sun
    • The KIPS Transactions: Part D / v.13D no.4 s.107 / pp.463-476 / 2006
  • R-trees have traditionally been optimized for I/O performance, with the disk page as the tree node. Recently, researchers have proposed cache-conscious variants of R-trees optimized for CPU cache performance in main-memory environments, where a node is several cache lines wide and more entries are packed into a node by compressing MBR keys. However, because the node sizes of the two types differ greatly, disk-optimized R-trees show poor cache performance while cache-optimized R-trees exhibit poor disk performance. In this paper, we propose a cache- and disk-optimized R-tree called the PR-tree (Prefetching R-tree). For cache performance, a PR-tree node is wider than a cache line, and prefetch instructions are used to reduce the number of cache misses. For I/O performance, PR-tree nodes are fitted into one disk page. We present a detailed analysis of cache misses for range queries and enumerate all reasonable in-page leaf and non-leaf node sizes and in-page tree heights to determine the tree parameters with the best cache and I/O performance. The PR-tree achieves better cache performance than the disk-optimized R-tree: improvements by a factor of 3.5-15.1 for one-by-one insertions, 6.5-15.1 for deletions, 1.3-1.9 for range queries, and 2.7-9.7 for k-nearest-neighbor queries, with no notable decline in I/O performance in any experiment.
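
The parameter space the paper enumerates can be made concrete with a little arithmetic: choose an in-page node size that is a multiple of the cache line, then check which in-page tree heights still fit within one disk page. The 64-byte line, 8 KB page, and 20-byte entry below are assumed figures for illustration only.

```python
CACHE_LINE = 64    # bytes (assumed)
DISK_PAGE = 8192   # bytes (assumed)
ENTRY = 20         # bytes per entry: compressed MBR + child pointer (assumed)

# Enumerate candidate in-page node sizes (multiples of a cache line) and the
# tallest complete in-page tree that still fits in a single disk page.
for lines in (1, 2, 4, 8):
    node = lines * CACHE_LINE
    fanout = node // ENTRY
    height = 1
    while node * sum(fanout ** h for h in range(height + 1)) <= DISK_PAGE:
        height += 1
    print(f"node={node:4d}B fanout={fanout:2d} max in-page height={height - 1}")
```

Wider nodes raise fanout (fewer cache misses per level, helped further by prefetching) but shrink the number of levels that fit in a page, which is exactly the trade-off the paper's enumeration resolves.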