• Title/Summary/Keyword: Cache Hit-Rate

Block Level Refinement of Popularity-Aware Interval Caching for Multimedia Streaming Servers (멀티미디어 스트리밍 서버를 위한 인기도 기반 인터벌 캐슁의 블록 수준 세분화 기법)

  • Kwon, Oh-Hoon;Kim, Tae-Seok;Bahn, Hyo-Kyung;Koh, Kern
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.4
    • /
    • pp.138-144
    • /
    • 2007
  • With the recent proliferation of video-on-demand services, caching in multimedia streaming servers is becoming increasingly important. Previous studies have shown that request-interval-based caching, and its extension that accounts for differences in video popularity, performs well in various streaming environments. In this paper, we show that a block-level refinement of this existing scheme can further improve the performance of streaming servers. Trace-driven simulations with real-world VOD traces show that the proposed scheme improves both the cache hit rate and the startup latency.
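
The abstract names interval caching as the baseline being refined. As a point of reference, the sketch below shows the classic interval caching selection rule (keep the smallest gaps between consecutive streams of the same video so that as many trailing streams as possible are served from memory); the tuple layout and block-granularity accounting are illustrative assumptions, and the paper's popularity-aware, block-level refinement is not reproduced here.

```python
def select_intervals(intervals, cache_blocks):
    """Greedy baseline interval caching: each interval is the gap, in
    blocks, between two consecutive streams of the same video; caching
    it lets the trailing stream read from memory instead of disk."""
    selected, used = [], 0
    # Smallest intervals first: every cached interval serves one stream,
    # so small intervals maximize streams served per cached block.
    for video_id, size_in_blocks in sorted(intervals, key=lambda iv: iv[1]):
        if used + size_in_blocks <= cache_blocks:
            selected.append((video_id, size_in_blocks))
            used += size_in_blocks
    return selected

# Example: a 1000-block cache keeps the two smallest intervals.
print(select_intervals([("A", 400), ("A", 900), ("B", 550)], 1000))
```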

Proxy Caching Grouping by Partition and Mapping for Distributed Multimedia Streaming Service (분산 멀티미디어 스트리밍 서비스를 위한 분할과 사상에 의한 프록시 캐싱 그룹화)

  • Lee, Chong-Deuk
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.40-47
    • /
    • 2009
  • Recently, dynamic proxy caching has been proposed for distributed environments so that media objects requested by users can be served directly from the proxy without contacting the server. However, caching remains challenging because of the large size of media objects and their low-latency, continuous streaming demands. To address the problems caused by the streaming demands of media objects, this paper proposes a grouping scheme with fuzzy filtering based on partition and mapping. For partition and mapping, the paper divides media block segments into a fixed partition reference block (R_fP) and a variable partition reference block (R_vP). For the semantic relationship, it establishes a fuzzy relationship that operates according to the fixed partition temporal synchronization (T_f) and the variable partition temporal synchronization (T_v). Simulation results show that the proposed scheme provides streaming service efficiently, with a higher average request response rate and cache hit rate and a lower delayed-startup ratio than other schemes.

An Efficient Buffer Cache Management Algorithm based on Prefetching (선반입을 이용한 효율적인 버퍼 캐쉬 관리 알고리즘)

  • Jeon, Heung-Seok;Noh, Sam-Hyeok
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.5
    • /
    • pp.529-539
    • /
    • 2000
  • This paper proposes a prefetch-based disk buffer management algorithm, which we call W2R (Weighing/Waiting Room). Instead of using elaborate prefetching schemes to decide which block to prefetch and when, we simply follow the LRU-OBL (One Block Lookahead) approach and prefetch the logical next block along with the block that is being referenced. The basic difference is that the W2R algorithm logically partitions the buffer into two rooms, namely, the Weighing Room and the Waiting Room. The referenced, hence fetched, block is placed in the Weighing Room, while the prefetched logical next block is placed in the Waiting Room. By doing so, we alleviate some inherent deficiencies of blindly prefetching the logical next block of a referenced block. Specifically, a prefetched block that is never used may replace a possibly valuable block, and a prefetched block, though referenced in the future, may replace a block that is used earlier than itself. We show through trace-driven simulation that, for the workloads and environments considered, the W2R algorithm improves the hit rate by a maximum of 23.19 percentage points compared to the 2Q algorithm and a maximum of 10.25 percentage points compared to the LRU-OBL algorithm.
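
The abstract gives enough of the W2R structure to sketch it: two logical rooms, with the referenced block going to the Weighing Room and the OBL-prefetched next block going to the Waiting Room. The room sizes, LRU ordering within each room, and the "block + 1" next-block function below are illustrative assumptions rather than the authors' exact parameters.

```python
from collections import OrderedDict

class W2RCache:
    """Sketch of the W2R buffer: a Weighing Room for referenced blocks and
    a Waiting Room for OBL-prefetched blocks, each managed with LRU."""

    def __init__(self, weighing_size, waiting_size):
        self.weighing = OrderedDict()   # referenced blocks, LRU order
        self.waiting = OrderedDict()    # prefetched blocks, LRU order
        self.weighing_size = weighing_size
        self.waiting_size = waiting_size
        self.hits = self.refs = 0

    def _put(self, room, size, block):
        room[block] = True
        room.move_to_end(block)
        if len(room) > size:
            room.popitem(last=False)    # evict that room's LRU victim

    def reference(self, block):
        self.refs += 1
        if block in self.weighing:          # hit on a referenced block
            self.hits += 1
            self.weighing.move_to_end(block)
        elif block in self.waiting:         # prefetched block is now used
            self.hits += 1
            del self.waiting[block]
            self._put(self.weighing, self.weighing_size, block)
        else:                               # miss: fetch into the Weighing Room
            self._put(self.weighing, self.weighing_size, block)
        nxt = block + 1                     # OBL: prefetch the logical next block
        if nxt not in self.weighing and nxt not in self.waiting:
            self._put(self.waiting, self.waiting_size, nxt)

    def hit_rate(self):
        return self.hits / self.refs if self.refs else 0.0
```

Because an unused prefetched block can only evict other Waiting Room entries, it never displaces a referenced block in the Weighing Room, which is the deficiency of blind OBL prefetching that the abstract points out.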

A Transaction Level Simulator for Performance Analysis of Solid-State Disk (SSD) in PC Environment (PC향 SSD의 성능 분석을 위한 트랜잭션 수준 시뮬레이터)

  • Kim, Dong;Bang, Kwan-Hu;Ha, Seung-Hwan;Chung, Sung-Woo;Chung, Eui-Young
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.12
    • /
    • pp.57-64
    • /
    • 2008
  • In this paper, we propose a system-level simulator for the performance analysis of a Solid-State Disk (SSD) in a PC environment using TLM (Transaction Level Modeling). Our method provides quantitative analysis for a variety of architectural choices of the PC system as well as the SSD. It also drastically reduces analysis time compared to the conventional RTL (Register Transfer Level) modeling method. To show the effectiveness of the proposed simulator, we performed several explorations of the PC architecture as well as the SSD. More specifically, we measured the performance impact of the hit rate of a cache buffer which temporarily stores data from the PC. We also used our simulator to analyze the performance variation of the SSD for various NAND flash memories with different response times. These experimental results show that our simulator can be effectively utilized for the architecture exploration of SSDs as well as PCs.

Content Distribution for 5G Systems Based on Distributed Cloud Service Network Architecture

  • Jiang, Lirong;Feng, Gang;Qin, Shuang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4268-4290
    • /
    • 2015
  • Future mobile communications face enormous challenges as traditional voice services are replaced with increasing mobile multimedia and data services. To address the vast data traffic volume and the user Quality of Experience (QoE) requirements in next-generation mobile networks, it is imperative to develop efficient content distribution techniques aimed at significantly reducing redundant data transmissions and improving content delivery performance. On the other hand, in recent years cloud computing, as a promising content-centric paradigm, has been exploited to fulfil multimedia requirements by provisioning data and computing resources on demand. In this paper, we propose a cooperative caching framework which implements a State based Content Distribution (SCD) algorithm for future mobile networks. In our proposed framework, cloud service providers deploy multiple cloudlets in the network, forming a Distributed Cloud Service Network (DCSN), and pre-allocate content services in local cloudlets to avoid redundant content transmissions. We use content popularity and content state, which is determined by content requests, editorial updates, and new arrivals, to formulate a content distribution optimization model. Data contents are deployed in local cloudlets according to the optimal solution to achieve the lowest average content delivery latency. We use simulation experiments to validate the effectiveness of our proposed framework. Numerical results show that the proposed framework can significantly improve the content cache hit rate and reduce content delivery latency and outbound traffic volume in comparison with known existing caching strategies.

A Cache Management Technique for an Efficient Video Proxy Server (효율적인 비디오 프록시 서버를 위한 캐시 관리 방법)

  • Lee, Jun-Pyo;Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.4
    • /
    • pp.82-88
    • /
    • 2009
  • A video proxy server located near clients can store frequently requested video data in its storage space to significantly reduce initial latency and network traffic. However, because the storage space of a video proxy server is limited, an appropriate video selection method is needed to store the videos that users request frequently. We therefore present a virtual caching technique to store videos efficiently in the video proxy server. For this purpose, we employ virtual memory in the video proxy server. When a video is requested by a user, it is first loaded into virtual memory and then delivered to the user. A video loaded in virtual memory is later deleted or moved into the storage space of the video proxy server depending on the request condition. In addition, the virtual memory is divided into segment areas in order to store segments efficiently and avoid fragmentation. Simulation results show that the proposed method outperforms other methods in terms of block hit rate and the number of block deletions.
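
The abstract outlines a two-stage flow: a requested video is first loaded into virtual memory, delivered to the user, and then either promoted into the proxy's storage space or deleted depending on the request condition. The sketch below, at segment granularity, assumes a simple request-count threshold for promotion and a least-requested eviction rule in the staging area; both are illustrative stand-ins for the paper's actual policy.

```python
class StagingProxyCache:
    """Two-level video proxy cache: a staging area (the 'virtual memory')
    in front of the persistent storage space."""

    def __init__(self, staging_capacity, storage_capacity, promote_after=2):
        self.staging = {}        # segment -> request count while staged
        self.storage = set()     # segments kept in proxy storage
        self.staging_capacity = staging_capacity
        self.storage_capacity = storage_capacity
        self.promote_after = promote_after

    def request(self, segment):
        if segment in self.storage:
            return "storage hit"
        if segment in self.staging:
            self.staging[segment] += 1
            if (self.staging[segment] >= self.promote_after
                    and len(self.storage) < self.storage_capacity):
                del self.staging[segment]
                self.storage.add(segment)   # promote a popular segment
            return "staging hit"
        # Miss: fetch from the origin server into the staging area first.
        if len(self.staging) >= self.staging_capacity:
            victim = min(self.staging, key=self.staging.get)
            del self.staging[victim]        # drop the least-requested segment
        self.staging[segment] = 1
        return "miss"
```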

Web Prefetching Scheme for Efficient Internet Bandwidth Usage (효율적인 인터넷 대역폭 사용을 위한 웹 프리페칭 기법)

  • Kim, Suk-Hyang;Hong, Won-Gi
    • Journal of KIISE:Information Networking
    • /
    • v.27 no.3
    • /
    • pp.301-314
    • /
    • 2000
  • As the number of World Wide Web (Web) users grows, Web traffic continues to increase at an exponential rate. Currently, Web traffic is one of the major components of Internet traffic. Also, high bandwidth usage due to Web traffic is observed during peak periods, while bandwidth remains idle during off-peak periods. One of the solutions to reduce Web traffic and speed up Web access is Web caching. Unfortunately, Web caching has limitations for reducing network bandwidth usage during peak periods. In this paper, we focus our attention on a prefetching algorithm that reduces peak-period bandwidth by using off-peak bandwidth. We propose a statistical, batch, proxy-side prefetching scheme that improves the cache hit rate while requiring only a small amount of storage. In our scheme, Web objects that were accessed many times in the previous 24 hours but will expire within the next 24 hours are selected and prefetched. We present Web-proxy-based simulation results and show that this prefetching algorithm can reduce peak-time bandwidth by using off-peak bandwidth.
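
The selection rule in the abstract is concrete enough to sketch: prefetch, during off-peak hours, the objects that were requested many times in the previous 24 hours and whose cached copy will expire within the next 24 hours. The request-count threshold and the URL-to-(timestamps, expiry) layout below are illustrative assumptions.

```python
DAY = 24 * 3600  # seconds

def select_prefetch_candidates(access_log, now, min_hits=3):
    """Return URLs worth refreshing in the coming off-peak window:
    popular over the last day, but expiring within the next day."""
    candidates = []
    for url, (timestamps, expiry) in access_log.items():
        recent_hits = sum(1 for t in timestamps if now - DAY <= t <= now)
        expires_soon = now <= expiry <= now + DAY
        if recent_hits >= min_hits and expires_soon:
            candidates.append(url)
    return candidates
```

Prefetching only objects that are both popular and about to expire keeps the extra storage small while still converting likely peak-time misses into hits, which is the trade-off the abstract emphasizes.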

A Dynamic Transaction Routing Algorithm with Primary Copy Authority (주사본 권한을 이용한 동적 트랜잭션 분배 알고리즘)

  • Kim, Ki-Hyung;Cho, Hang-Rae;Nam, Young-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.10D no.7
    • /
    • pp.1067-1076
    • /
    • 2003
  • A database sharing system (DSS) refers to a system for high-performance transaction processing. In a DSS, the processing nodes are locally coupled via a high-speed network and share a common database at the disk level. Each node has its own local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. In this paper, we propose a dynamic transaction routing algorithm to balance the load across the nodes of the DSS. The proposed algorithm is novel in that it supports node-specific locality of reference by utilizing the primary copy authority assigned to each node; hence, it can achieve better cache hit ratios and thus fewer disk I/Os. Furthermore, the proposed algorithm prevents any specific node from being overloaded by considering the current workload of each node. To evaluate the performance of the proposed algorithm, we develop a simulation model of the DSS and analyze the simulation results. The results show that the proposed algorithm outperforms existing algorithms in transaction processing rate. In particular, the proposed algorithm performs better when the number of concurrently executing transactions is high and the data page access patterns of the transactions are not evenly distributed.
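
The abstract describes the routing decision in terms of two signals: the primary copy authority each node holds (to exploit node-specific locality and buffer hits) and the node's current workload (to avoid overload). A hedged sketch of such a decision, with an assumed affinity score and load threshold that are not taken from the paper, might look like this:

```python
def route_transaction(txn_pages, primary_copy_of, node_load, max_load):
    """Pick a processing node for a transaction: prefer the node holding
    primary copy authority for most of the transaction's pages, unless
    that node is already loaded beyond max_load."""
    scores = {}
    for page in txn_pages:
        node = primary_copy_of.get(page)
        if node is not None:
            scores[node] = scores.get(node, 0) + 1
    # Highest-affinity node first, skipping overloaded ones.
    for node in sorted(scores, key=scores.get, reverse=True):
        if node_load.get(node, 0) < max_load:
            return node
    # Fall back to the least-loaded node if every affine node is busy.
    return min(node_load, key=node_load.get)
```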