  • Title/Summary/Keyword: cache replacement (캐쉬교체)

59 search results

Multiple Prefetching of Objects Considering Write Information (쓰기 정보를 감안한 객체들의 다중 선채취)

  • 도용석;박경렬;남인길
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 1998.10a
    • /
    • pp.815-825
    • /
    • 1998
  • This paper presents a set of techniques for improving cache efficiency in object-oriented database management systems. The proposed method consists of two phases. In the first phase, the disk access frequencies of various queries are analyzed; in the second phase, pages with high access frequencies are prefetched based on the results of that analysis. This study proposes a variety of prefetching/caching schemes that extend existing prefetching techniques by taking write information into account. The basic idea is that pages whose contents have been modified incur a write-back cost, so their replacement is delayed. Experimental results consistently show better performance than existing algorithms.
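
A minimal sketch of the delayed-replacement idea described above, assuming an LRU-ordered buffer with a per-page dirty flag (both assumptions; this is an illustration of the technique, not the authors' implementation): pages that have been written carry a write-back cost, so eviction prefers clean pages and falls back to dirty ones only when necessary.

```python
from collections import OrderedDict

class WriteAwareCache:
    """Illustrative LRU cache that delays replacement of dirty pages."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> dirty flag, in LRU order

    def access(self, page_id, is_write=False):
        if page_id in self.pages:
            self.pages[page_id] |= is_write
            self.pages.move_to_end(page_id)       # refresh recency
            return True                           # hit
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_id] = is_write
        return False                              # miss

    def _evict(self):
        # Prefer the least-recently-used *clean* page; evict the LRU
        # dirty page (paying the write-back cost) only if all are dirty.
        for pid, dirty in self.pages.items():
            if not dirty:
                del self.pages[pid]
                return
        self.pages.popitem(last=False)            # write back, then evict
```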

An Efficient Buffer Cache Management Algorithm based on Prefetching (선반입을 이용한 효율적인 버퍼 캐쉬 관리 알고리즘)

  • Jeon, Heung-Seok;Noh, Sam-Hyeok
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.5
    • /
    • pp.529-539
    • /
    • 2000
  • This paper proposes a prefetch-based disk buffer management algorithm, which we call W2R (Weighing Room/Waiting Room). Instead of using elaborate prefetching schemes to decide which block to prefetch and when, we simply follow the LRU-OBL (One Block Lookahead) approach and prefetch the logical next block along with the block that is being referenced. The basic difference is that the W2R algorithm logically partitions the buffer into two rooms, namely, the Weighing Room and the Waiting Room. The referenced, hence fetched, block is placed in the Weighing Room, while the prefetched logical next block is placed in the Waiting Room. By so doing, we alleviate some inherent deficiencies of blindly prefetching the logical next block of a referenced block. Specifically, a prefetched block that is never used may replace a possibly valuable block, and a prefetched block, though referenced in the future, may replace a block that is used earlier than itself. We show through trace-driven simulation that, for the workloads and environments considered, the W2R algorithm improves the hit rate by a maximum of 23.19 percentage points compared to the 2Q algorithm and a maximum of 10.25 percentage points compared to the LRU-OBL algorithm.
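
A short sketch of the W2R structure as the abstract describes it: the Weighing Room is an LRU list of demand-fetched blocks, and the Waiting Room is a FIFO of one-block-lookahead prefetches that are promoted only when actually referenced. The integer block addresses and fixed room sizes are assumptions for illustration.

```python
from collections import OrderedDict, deque

class W2RBuffer:
    def __init__(self, weighing_size, waiting_size):
        self.weighing = OrderedDict()             # LRU: block -> None
        self.waiting = deque(maxlen=waiting_size) # FIFO; old prefetches age out
        self.weighing_size = weighing_size

    def reference(self, block):
        hit = True
        if block in self.weighing:
            self.weighing.move_to_end(block)      # move to MRU position
        elif block in self.waiting:
            self.waiting.remove(block)            # promote on first real use
            self._admit(block)
        else:
            hit = False
            self._admit(block)                    # demand fetch
        # One Block Lookahead: prefetch the logical next block into the
        # Waiting Room, where it cannot displace proven-useful blocks.
        nxt = block + 1
        if nxt not in self.weighing and nxt not in self.waiting:
            self.waiting.append(nxt)
        return hit

    def _admit(self, block):
        if len(self.weighing) >= self.weighing_size:
            self.weighing.popitem(last=False)     # evict LRU block
        self.weighing[block] = None
```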


A mobile data caching synchronization strategy based on in-demand replacement priority (수요에 따른 교체 우선 순위 기반 모바일 데이터베이스 캐쉬 동기화 정책)

  • Zhao, Jinhua;Xia, Ying;Lee, Soon-Jo;Bae, Hae-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.2
    • /
    • pp.13-21
    • /
    • 2012
  • Mobile data caching is usually used as an effective way to improve the speed of local transaction processing and reduce server load. In mobile database environments, caching is especially crucial due to their characteristics: low bandwidth, excessive latency, and intermittent connectivity. A number of mobile data caching strategies have been proposed to handle these problems over the last few years. However, with the widespread adoption of smartphones, these approaches cannot efficiently support the vast data requirements. In this paper, to make full use of cached data, lower the amount of wireless transmission, and raise the transaction success rate, we design a new mobile data caching synchronization strategy based on in-demand replacement priority. We experimentally verify that our techniques significantly reduce the quantity of wireless transmission and improve the transaction success rate, especially when mobile clients request large amounts of data.

An Efficient Caching Strategy in Data Broadcasting (데이터 방송 환경에서의 효율적인 캐슁 정책)

  • Kim, Su-Yeon;Choe, Yang-Hui
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.12
    • /
    • pp.1476-1484
    • /
    • 1999
  • Recently, many television broadcasters have tried to disseminate digital multimedia data in addition to the traditional content (the audio-visual stream). The broadcast data need to be cached by a client system to provide a reasonable response time for a user request. Previous studies assumed the dissemination of a fixed set of items, so their results are not suitable when broadcast items change frequently. In this paper, we propose a novel cache management scheme for the client system that admits every received item into the cache and, when replacement is needed, evicts the page with the shortest remaining time to its next broadcast instance. The proposed scheme reduces response time in situations where data accesses are unlikely to repeat and the probability distribution of user accesses is hard to predict. The caching policy presented here significantly reduces expected response time by minimizing the expected cache miss penalty, and can be applied without difficulty in environments where multiple broadcast providers use different scheduling algorithms.
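
The victim-selection rule above (evict the page that will be rebroadcast soonest, since it is the cheapest to re-acquire) fits in a few lines. The `next_broadcast(page, now)` oracle below is an assumed stand-in for the broadcast schedule:

```python
def choose_victim(cached_pages, now, next_broadcast):
    # Evict the page with the minimum remaining time until its next
    # broadcast instance.
    return min(cached_pages, key=lambda p: next_broadcast(p, now) - now)

# Hypothetical periodic schedule for illustration: page p is rebroadcast
# every p seconds, so at time 5 the page rebroadcast soonest is page 10.
pages = [10, 30, 60]
victim = choose_victim(pages, 5, lambda p, t: ((t // p) + 1) * p)
assert victim == 10
```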

Block Replacement Scheme based on Reuse Interval for Hybrid SSD System (Hybrid SSD 시스템을 위한 재사용 간격 기반 블록 교체 기법)

  • Yoo, Sanghyun;Kim, Kyung Tae;Youn, Hee Yong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.5
    • /
    • pp.19-27
    • /
    • 2015
  • Due to the advantages of fast read/write operations and low power consumption, SSDs (Solid State Drives) are now widely adopted as the storage devices of smartphones, laptop computers, servers, etc. However, the shortcomings of SSDs, such as the limited number of write operations and asymmetric read/write performance, lead to a shortened SSD lifespan. Therefore, the block replacement policy of an SSD used as a cache for an HDD is very important. Existing solutions for improving the lifespan of SSDs, including the LARC scheme, typically employ the LRU algorithm to manage the SSD blocks, which may increase the miss rate in the SSD by replacing frequently used blocks instead of rarely used ones. In this paper we propose a novel block replacement scheme which considers the block reuse interval to effectively handle various data read/write patterns. The proposed scheme replaces the block in the SSD based on recency, determined by the reuse interval and age, along with the hit ratio. Computer simulation using workload trace files reveals that the proposed scheme consistently improves the performance and lifespan of the SSD by increasing the hit ratio and decreasing the number of write operations compared to existing schemes, including LARC.
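
The abstract does not give the exact scoring formula, so the sketch below combines reuse interval, age, and hit count with an assumed, purely illustrative weighting: blocks with a long reuse interval, high age, and few hits are preferred as victims.

```python
import time

class BlockStats:
    def __init__(self):
        self.last_access = time.monotonic()
        self.reuse_interval = float("inf")  # unknown until the 2nd access
        self.hits = 0

    def touch(self):
        now = time.monotonic()
        self.reuse_interval = now - self.last_access  # time between accesses
        self.last_access = now
        self.hits += 1

def choose_victim(blocks):
    # blocks: dict of block_id -> BlockStats
    now = time.monotonic()
    def score(s):
        age = now - s.last_access
        return s.reuse_interval + age - s.hits  # higher = better victim
    return max(blocks, key=lambda b: score(blocks[b]))
```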

HIPSS : A RAID System for SPAX (HIPSS : SPAX(주전산기 IV) RAID시스템)

  • 이상민;안대영;김중배;김진표;이해동
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.6
    • /
    • pp.9-19
    • /
    • 1998
  • RAID technology that provides the disk I/O system with high performance and high availability is essential for an OLTP server. This paper describes the design and implementation of the HIPSS RAID system, which has been developed for the SPAX OLTP server. HIPSS has the following design objectives: high performance, high availability, standardization and modularization of the external interface, and ease of maintenance. It guarantees high performance by providing 10 independent I/O channels, a large data cache, and a parity calculation engine. Hardware modularization of the host interface makes it easy to replace the host interface hardware module. By providing a dual power supply, a dual array controller, and disk hot swapping, it provides the system with high availability. Implementation of HIPSS and integration testing on SPAX have been completed, and performance measurement of HIPSS is currently under way. In this paper, we provide a detailed description of the HIPSS system architecture and the implementation results.
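
The parity calculation that the abstract attributes to a hardware engine is, in parity-based RAID levels, a bytewise XOR across the data stripes. The snippet below is a generic illustration of that operation, not HIPSS-specific code:

```python
def xor_parity(stripes):
    # Parity is the bytewise XOR of equal-length data stripes.
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = xor_parity(data)
# Any single lost stripe can be rebuilt from the parity and the rest:
assert xor_parity([data[0], data[2], p]) == data[1]
```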


Design of an Asynchronous Instruction Cache based on a Mixed Delay Model (혼합 지연 모델에 기반한 비동기 명령어 캐시 설계)

  • Jeon, Kwang-Bae;Kim, Seok-Man;Lee, Je-Hoon;Oh, Myeong-Hoon;Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.3
    • /
    • pp.64-71
    • /
    • 2010
  • Recently, to achieve high processor performance, the cache is physically split into two parts, one for instructions and one for data. This paper proposes an asynchronous instruction cache architecture based on a mixed delay model: a DI (delay-insensitive) model for cache hits and a bundled delay model for cache misses. We synthesized the instruction cache at the gate level and constructed a test platform with the 32-bit embedded processor EISC to evaluate performance. The cache communicates with the main memory and the CPU using a 4-phase handshake protocol. It has an 8-KB, 4-way set-associative memory that employs the Pseudo-LRU replacement algorithm. As a result, the designed cache shows a 99% cache hit ratio and reduces latency to 68%, tested on the platform with the MiBench benchmark programs.
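
Pseudo-LRU for a 4-way set, as named in the abstract, is commonly realized as a three-bit binary tree per set; the sketch below shows that standard tree variant (the paper's exact encoding is not given here, so this is an assumption):

```python
class PseudoLRU4:
    """Tree-based Pseudo-LRU for one 4-way set (3 state bits)."""

    def __init__(self):
        # b[0]: which half holds the victim (0 = left pair, 1 = right pair);
        # b[1]/b[2]: which way within the left/right pair.
        self.b = [0, 0, 0]

    def touch(self, way):           # way in 0..3, called on every access
        if way < 2:
            self.b[0] = 1           # victim now in the right half
            self.b[1] = 1 - way     # ...and the other way of the left pair
        else:
            self.b[0] = 0           # victim now in the left half
            self.b[2] = 3 - way     # ...and the other way of the right pair

    def victim(self):               # follow the bits to the victim way
        return self.b[1] if self.b[0] == 0 else 2 + self.b[2]
```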

Modeling of Data References with Temporal Locality and Popularity Bias (시간 지역성과 인기 편향성을 가진 데이터 참조의 모델링)

  • Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.119-124
    • /
    • 2023
  • This paper proposes a new reference model that can represent data accesses with temporal locality and popularity bias. Among existing reference models, the LRU-stack model can express temporal locality, the property that more recently referenced data has a higher probability of being referenced again. However, it cannot take into account differences in the popularity of the data. Conversely, the independent reference model can reflect the different popularity of data items, but has the limitation of not being able to model changes in data reference trends over time. The reference model presented in this paper overcomes the limitations of these two models and reflects both the popularity bias of data and its changes over time. This paper also examines the relationship between cache replacement algorithms and the reference model, and shows the optimality of the proposed model.
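
The combination the abstract describes can be illustrated with a toy reference generator. The mixture below (probability `alpha` of an LRU-stack-style reference, otherwise a Zipf-style popularity draw) is an assumption for illustration, not the paper's exact model:

```python
import random

def generate_references(items, n, alpha=0.5, seed=0):
    # items: assumed sorted by decreasing popularity (rank 1 first).
    rng = random.Random(seed)
    stack = list(items)                                   # front = most recent
    zipf_w = [1.0 / rank for rank in range(1, len(items) + 1)]
    depth_w = [1.0 / (d + 1) for d in range(len(items))]  # shallow = likelier
    refs = []
    for _ in range(n):
        if rng.random() < alpha:
            item = rng.choices(stack, weights=depth_w)[0]  # temporal locality
        else:
            item = rng.choices(items, weights=zipf_w)[0]   # popularity bias
        stack.remove(item)
        stack.insert(0, item)          # move referenced item to stack top
        refs.append(item)
    return refs
```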

Segment-based Cache Replacement Policy in Transcoding Proxy (트랜스코딩 프록시에서 세그먼트 기반 캐쉬 교체 정책)

  • Park, Yoo-Hyun;Kim, Hag-Young;Kim, Kyong-Sok
    • The KIPS Transactions:PartA
    • /
    • v.15A no.1
    • /
    • pp.53-60
    • /
    • 2008
  • Streaming media accounts for a significant amount of today's Internet traffic. Like traditional web objects, rich media objects can benefit from proxy caching, but caching streaming media is more challenging than caching simple web objects, because streaming media objects are huge and require high bandwidth. Moreover, to support the various bandwidth requirements of heterogeneous ubiquitous devices, a transcoding proxy is usually necessary, not only to adapt multimedia streams to the client by transcoding but also to cache them for later use. A traditional proxy considers only a single version of each object when deciding whether it is to be cached or not. The transcoding proxy, however, has to evaluate the aggregate effect of caching multiple versions of the same object to determine an optimal set of cache objects. Recent research on multimedia caching frequently stores the initial parts of videos on the proxy to reduce playback latency and achieve better performance, and many studies manage the contents in segments for efficient storage management. In this paper, we define the nine events of the transcoding proxy using four atomic events. According to these events, the transcoding proxy can determine its next actions. We then propose a segment-based caching policy for the transcoding proxy system. The performance results show that the proposed policy achieves a low delayed start time, a high byte-hit ratio, and less transcoding data.
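
A hedged sketch of segment-based replacement in a transcoding proxy: each cached unit is one segment of one transcoded version, and eviction prefers late, rarely used segments so that the initial segments that hide startup latency survive longest. The scoring rule is an assumption, not the paper's exact policy:

```python
class SegmentCache:
    def __init__(self, capacity_segments):
        self.capacity = capacity_segments
        self.segments = {}  # (object_id, version, seg_index) -> access count

    def access(self, object_id, version, seg_index):
        key = (object_id, version, seg_index)
        hit = key in self.segments
        if not hit and len(self.segments) >= self.capacity:
            self._evict()
        self.segments[key] = self.segments.get(key, 0) + 1
        return hit

    def _evict(self):
        # Higher segment index and lower access count make a better
        # victim, so initial segments of popular versions are kept.
        victim = max(self.segments, key=lambda k: (k[2], -self.segments[k]))
        del self.segments[victim]
```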