• Title/Summary/Keyword: cache management scheme


Proxy-based Caching Optimization for Mobile Ad Hoc Streaming Services (모바일 애드 혹 스트리밍 서비스를 위한 프록시 기반 캐싱 최적화)

  • Lee, Chong-Deuk
    • Journal of Digital Convergence, v.10 no.4, pp.207-215, 2012
  • This paper proposes a proxy-based caching optimization scheme for improving streaming media services in wireless mobile ad hoc networks. The proposed scheme uses a proxy, located near the wireless access point, for data packet transmission between the media server and the nodes in the WLAN. For caching optimization, the paper proposes the NFCO (non-full cache optimization) and CFO (cache-full optimization) schemes, which optimize caching performance while the proxy performs streaming. The proposed scheme is compared with a server-based scheme and a rate-distortion scheme; simulation results show that it outperforms both.

Low-power Buffer Cache Management for Mixed HDD and SSD Storage Systems (HDD와 SSD의 혼합형 저장 시스템을 위한 절전형 버퍼 캐쉬 관리)

  • Kang, Hyo-Jung;Park, Jun-Seok;Koh, Kern;Bahn, Hyo-Kyung
    • Journal of KIISE: Computing Practices and Letters, v.16 no.4, pp.462-466, 2010
  • A new buffer cache management scheme that aims at reducing power consumption in mixed HDD and NAND flash memory storage systems is presented. The proposed scheme reduces power consumption by considering the different energy-consumption rates of the storage devices, the I/O operation type (read or write), and the reference potential of cached blocks in terms of both recency and frequency (see the sketch below). Simulation shows that the proposed scheme reduces power consumption by 18.0% on average and by up to 58.9%.
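The abstract does not spell out the exact cost model, so the following is only a minimal sketch of the idea: an eviction score that combines per-device energy cost, operation type (a dirty block implies a write-back), and a recency/frequency estimate. The energy figures, the Block layout, and the hotness formula are illustrative assumptions, not the paper's definitions.

```python
import time

# Illustrative energy costs (joules per block I/O); real values are device-specific.
ENERGY_COST = {("hdd", "read"): 0.9, ("hdd", "write"): 1.0,
               ("ssd", "read"): 0.1, ("ssd", "write"): 0.3}

class Block:
    def __init__(self, device, dirty=False):
        self.device = device           # "hdd" or "ssd"
        self.dirty = dirty             # True if a write-back is needed on eviction
        self.last_access = time.time()
        self.freq = 1

    def touch(self, write=False):
        self.last_access = time.time()
        self.freq += 1
        self.dirty |= write

def eviction_score(block, now):
    """Lower score = better eviction candidate.

    Combines the energy to service a future miss on this block, the energy to
    write it back now if dirty, and a crude recency*frequency hotness estimate.
    """
    refetch = ENERGY_COST[(block.device, "read")]
    writeback = ENERGY_COST[(block.device, "write")] if block.dirty else 0.0
    hotness = block.freq / (1.0 + (now - block.last_access))
    return hotness * refetch + writeback

def choose_victim(cache):
    """Pick the cached block whose eviction costs the least energy overall."""
    now = time.time()
    return min(cache, key=lambda b: eviction_score(b, now))
```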

An Efficient Cache Management Scheme for Load Balancing in Distributed Environments with Different Memory Sizes (상이한 메모리 크기를 가지는 분산 환경에서 부하 분산을 위한 캐시 관리 기법)

  • Choi, Kitae;Yoon, Sangwon;Park, Jaeyeol;Lim, Jongtae;Lee, Seokhee;Bok, Kyoungsoo;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices, v.21 no.8, pp.543-548, 2015
  • Recently, the volume of data has been growing dramatically along with the growth of social media and digital devices. However, existing disk-based distributed file systems have limited data-processing and data-access performance due to I/O processing costs and bottlenecks. To solve this problem, caching techniques are used to manage data in memory. In this paper, we propose a cache management scheme to handle load balancing in a distributed memory environment. The proposed scheme distributes the data according to memory size in distributed environments with different memory sizes, and if overloaded nodes occur, it redistributes the cached data based on their access times (see the sketch below). In order to show the superiority of the proposed scheme, we compare it with an existing distributed cache management scheme through performance evaluation.
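As a rough sketch of the stated idea only: cached entries are partitioned in proportion to each node's memory size, and overloaded nodes shed their coldest entries to under-filled ones. It assumes each node's cache is a list kept oldest-access-first; the overload_ratio and the redistribution order are invented for illustration.

```python
def proportional_targets(total_items, node_mem):
    """Give each node a share of the cached entries proportional to its memory size."""
    total_mem = sum(node_mem.values())
    return {n: round(total_items * m / total_mem) for n, m in node_mem.items()}

def rebalance(cache_by_node, node_mem, overload_ratio=1.2):
    """Shed the coldest entries from overloaded nodes and hand them to under-filled ones."""
    total = sum(len(v) for v in cache_by_node.values())
    target = proportional_targets(total, node_mem)
    spill = []
    for node, entries in cache_by_node.items():
        while len(entries) > overload_ratio * target[node]:
            spill.append(entries.pop(0))       # entries are kept oldest-access-first
    # Hand spilled entries to the nodes that are furthest below their target.
    for node in sorted(cache_by_node, key=lambda n: len(cache_by_node[n]) / node_mem[n]):
        while spill and len(cache_by_node[node]) < target[node]:
            cache_by_node[node].append(spill.pop())
    if spill:                                   # anything left goes to the emptiest node
        least = min(cache_by_node, key=lambda n: len(cache_by_node[n]) / node_mem[n])
        cache_by_node[least].extend(spill)
    return cache_by_node

# Example: node "b" has twice the memory, so it ends up holding twice the entries.
caches = {"a": list(range(12)), "b": [], "c": []}
print({n: len(v) for n, v in rebalance(caches, {"a": 1, "b": 2, "c": 1}).items()})
# -> {'a': 3, 'b': 6, 'c': 3}
```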

Design and evaluation of a fuzzy cooperative caching scheme for MANETs

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society, v.21 no.3, pp.605-619, 2010
  • Caching of frequently accessed data in a multi-hop ad hoc environment is a technique that can improve data access performance and availability. Cooperative caching, which allows sharing and coordination of cached data among several clients, can further enhance the potential of caching techniques. In this paper, we propose a fuzzy cooperative caching scheme for mobile ad hoc networks. The cache management of the proposed scheme not only adaptively uses CacheData or CachePath based on data similarity and data utility, but also uses a replacement manager based on data profit. The proposed scheme also uses a prefetch manager: when the TTL of a cached data item expires, the prefetch manager evaluates the item's popularity index; if the popularity index is larger than a threshold, the item is prefetched, otherwise its space is released (see the sketch below). The performance of the proposed scheme is evaluated analytically and compared to that of other cooperative caching schemes.
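The TTL-expiry behaviour described above can be sketched as follows. The popularity-index formula (accesses per second over the item's lifetime, capped at 1), the threshold value, and the entry layout are assumptions for illustration; the paper defines its own fuzzy index.

```python
def on_ttl_expired(entry, cache, fetch_from_source, popularity_threshold=0.5):
    """When a cached item's TTL expires, either prefetch a fresh copy or
    release its space, depending on a popularity index."""
    lifetime = max(entry["expires_at"] - entry["inserted_at"], 1e-9)
    popularity = min(entry["access_count"] / lifetime, 1.0)

    if popularity > popularity_threshold:
        entry["data"] = fetch_from_source(entry["key"])   # prefetch a fresh copy
        entry["inserted_at"] = entry["expires_at"]
        entry["expires_at"] += entry["ttl"]
        entry["access_count"] = 0
    else:
        del cache[entry["key"]]                           # release the space
```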

An Efficient Location Cache Scheme for 3-level Database Architecture in PCS Networks (PCS 네트워크에서 3-레벨 데이터베이스 구조를 위한 효과적인 위치 캐시 기법)

  • Han, Youn-Hee;Song, Ui-Sung;Hwang, Chong-Sun;Jeong, Young-Sik
    • Journal of KIISE: Information Networking, v.29 no.3, pp.253-264, 2002
  • Recently, hierarchical database architectures for location management have been proposed in order to accommodate the increase in user population in future personal communication systems. In particular, a 3-level hierarchical database architecture is compatible with current cellular mobile systems. In this architecture, a newly introduced database, the regional location database (RLR), is positioned between the HLR and the VLRs. We propose an efficient cache scheme, called the Double T-thresholds Location Cache Scheme, which extends the existing T-threshold location cache scheme that is competent only under the 2-level architecture of location databases currently adopted by IS-41 and GSM. The idea behind our scheme is to cache two pieces of information, the VLR and the RLR serving called portables. The two pieces are required in order to exploit not only the locality of the registration area (RA) but also the locality of the regional registration area (RRA), the wider area covered by an RLR. We also use two threshold values to determine whether each of the two pieces is obsolete (see the sketch below). To model the RRA residence time, the branching Erlang-$\infty$ distribution is introduced. Our detailed cost analysis shows that the double T-threshold location cache scheme yields a significant reduction of network and database costs for most patterns of portables.
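A minimal sketch of the two-threshold lookup, assuming a cache entry stores the cached VLR and RLR pointers plus the time they were cached, and that the query_vlr/query_rlr/query_hlr callables are supplied by the caller; the actual threshold values and cost model come from the paper.

```python
import time

def locate(portable, cache, query_vlr, query_rlr, query_hlr, t_vlr, t_rlr):
    """Resolve the location of a called portable.

    A cached VLR pointer is trusted while it is younger than t_vlr, and a cached
    RLR pointer while younger than t_rlr (t_vlr <= t_rlr); otherwise the lookup
    falls back to the HLR.
    """
    entry = cache.get(portable)
    now = time.time()
    if entry is not None:
        age = now - entry["cached_at"]
        if age <= t_vlr:
            return query_vlr(entry["vlr"], portable)   # exploit RA-level locality
        if age <= t_rlr:
            return query_rlr(entry["rlr"], portable)   # exploit RRA-level locality
    return query_hlr(portable)                         # cache miss or obsolete entry
```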

Hybrid Buffer Replacement Scheme Considering Reference Pattern in Multimedia Storage Systems (멀티미디어 저장 시스템에서 참조 유형을 고려한 혼성 버퍼 교체 기법)

  • 류연승
    • Journal of Korea Multimedia Society, v.5 no.1, pp.47-56, 2002
  • Previous buffer cache schemes for multimedia storage systems exploited only the sequential references of multimedia files and did not consider looping references. However, in some video applications such as foreign language learning, users mark a scene as a loop area and the application automatically plays back the scene several times. In this paper, we propose a new buffer replacement scheme, called HBM (Hybrid Buffer Management), for multimedia storage systems that exhibit both sequential and looping references. The proposed scheme assumes that the application layer informs the file system of each file's reference pattern; HBM then applies an appropriate replacement policy to each file (see the sketch below). Our simulation experiments show that HBM outperforms previous buffer cache schemes such as DISTANCE and LRU.
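A minimal sketch of per-file policy dispatch in the spirit of HBM. The two stand-in policies (MRU-like eviction for sequential files, LRU otherwise) and the class layout are assumptions, not the paper's exact DISTANCE/loop-aware policies.

```python
from collections import OrderedDict

class HybridBuffer:
    """Per-file replacement: the application declares each file's reference
    pattern, and eviction uses a pattern-appropriate policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pattern = {}               # file_id -> "sequential" | "looping"
        self.blocks = OrderedDict()     # (file_id, block_no) -> data, LRU order

    def declare(self, file_id, pattern):
        self.pattern[file_id] = pattern  # hint passed down from the application layer

    def access(self, file_id, block_no, load_block):
        key = (file_id, block_no)
        if key in self.blocks:
            self.blocks.move_to_end(key)          # mark as most recently used
            return self.blocks[key]
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[key] = load_block(file_id, block_no)
        return self.blocks[key]

    def _evict(self):
        # Prefer evicting a block of a sequential file (it will not be re-read),
        # choosing from the MRU end; otherwise fall back to plain LRU.
        for key in reversed(self.blocks):
            if self.pattern.get(key[0]) == "sequential":
                del self.blocks[key]
                return
        self.blocks.popitem(last=False)           # LRU among looping-reference blocks
```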


2Q-CFP: A Client Cache Management Scheme for Broadcast-based Information Systems (2Q-CFP: 방송에 기초한 정보 시스템을 위한 클라이언트 캐쉬 관리 기법)

  • 권혁민
    • Journal of KIISE: Databases, v.30 no.6, pp.561-572, 2003
  • Broadcast-based data delivery has attracted a lot of attention as an efficient way of disseminating data to very large client populations. The main motivation of broadcast-based information systems (BBISs) is that the number of clients they serve can grow arbitrarily large without any effect on their performance. The performance of BBISs depends mainly on client caching strategies and on data broadcast scheduling mechanisms. This paper addresses the former issue and proposes a new client cache management scheme, named 2Q-CFP, that is suitable for BBISs. The paper also evaluates the performance of 2Q-CFP on the basis of a simulation model. The performance results indicate that the 2Q-CFP scheme shows superior performance to GRAY, LRU, and CF in average response time.

Client Cache Management Scheme For Data Broadcasting Environments (LRU-CFP: 데이터 방송 환경을 위한 클라이언트 캐쉬 관리 기법)

  • Kwon, Hyeok-Min
    • The KIPS Transactions: Part D, v.10D no.6, pp.961-970, 2003
  • In data broadcasting environments, the server periodically broadcasts data items on the broadcast channel. When a client wants to access a data item, it must monitor the broadcast channel and wait for the desired item to arrive. Client data caching is a very effective technique for reducing the time spent waiting for the desired item to be broadcast. This paper proposes a new client cache management scheme, named LRU-CFP, to reduce this waiting time, and evaluates its performance on the basis of a simulation model (see the sketch below). The performance results indicate that the LRU-CFP scheme shows superior performance to LRU, GRAY, and CF in average response time.
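A minimal sketch of client-side access in such a broadcast environment, using plain LRU for the cache since the abstract does not detail the CFP refinement; the OrderedDict cache, the slot-based schedule, and the wait computation are illustrative assumptions.

```python
from collections import OrderedDict

def access(client_cache, broadcast_schedule, current_slot, item, capacity):
    """Serve an item from the local cache when possible; otherwise return the
    number of slots the client must wait for the item's next broadcast, and
    cache the item (evicting the LRU entry if the cache is full)."""
    if item in client_cache:
        client_cache.move_to_end(item)          # LRU hit, no waiting
        return 0
    cycle = len(broadcast_schedule)
    wait = (broadcast_schedule.index(item) - current_slot) % cycle
    if len(client_cache) >= capacity:
        client_cache.popitem(last=False)        # evict the least recently used item
    client_cache[item] = True
    return wait

# Example: items 0..9 broadcast cyclically, cache holds 3 items.
cache = OrderedDict()
print(access(cache, list(range(10)), current_slot=2, item=7, capacity=3))  # -> 5
```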

Analysis of a Cache Management Protocol Using a Back-shifting Approach (백쉬프팅 기법을 이용한 캐쉬 유지 규약의 분석)

  • Cho Sung-Ho
    • The Journal of the Korea Contents Association, v.5 no.6, pp.49-56, 2005
  • To reduce server bottlenecks in client-server computing, each client may keep its own cache for later reuse. A pessimistic cache management protocol leads to unnecessary waits, because a transaction cannot commit until it has obtained all of the locks it requested. In addition, an optimistic approach tends to cause needless aborts. This paper suggests an efficient optimistic protocol that overcomes these shortcomings. We present a simulation-based analysis of the performance of our scheme against other well-known protocols. The analysis was executed under a Zipf workload, which represents the popularity distribution on the Web. The simulation experiments show that our scheme performs as well as or better than the other schemes, with low overhead.


Flash-Aware Transaction Management Scheme for flash Memory Database (플래시 메모리 데이터베이스를 위한 플래시인지 트랜잭션 관리 기법)

  • Byun Si Woo
    • Journal of Internet Computing and Services, v.6 no.1, pp.65-72, 2005
  • Flash memory is one of the best media for supporting portable computers in mobile computing environments. Its non-volatility, low power consumption, and fast access times for read operations are sufficient grounds for adopting flash memory as a major database storage component of portable computers. However, the traditional transaction management scheme needs to be improved because flash operations are relatively slow compared to RAM. To achieve this goal, we devise a new scheme called flash-aware transaction management (FATM). FATM improves transaction performance by exploiting SRAM and a W-Cache. We also propose a simulation model to show the performance of FATM. Based on the results of the performance evaluation, we conclude that the FATM scheme outperforms the traditional scheme.
