• Title/Summary/Keyword: Disk cache

The Load Balancing Destage Algorithm of RAID5 Controller using Reference History (참조 정보를 이용한 RAID5 제어기의 부하 균형 반출 기법)

  • Jang, Yun-Seok;Kim, Bo-Yeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.3
    • /
    • pp.776-787
    • /
    • 2000
  • Write requests stored in the disk cache of a RAID5 controller must be destaged to the disk array according to a destage algorithm. Because destaging affects the response time of parallel I/O requests, several destage algorithms have been studied to improve RAID5 controller performance. Among them, the load balancing destage algorithm performs better than the others when the system load is high. However, because it gives priority to balancing the load across the disks in the array, it cannot effectively improve the performance of parallel I/O requests when some disks are heavily loaded by small data requests: it works to maintain load balance without exploiting the locality of the write requests. This paper proposes a new RAID5 controller that applies a reference-load balancing destage algorithm, which decides the destage priority based on both the reference history and the load distribution of the disks (a minimal sketch follows this entry). Simulation results show that a RAID5 controller using the reference-load balancing destage algorithm outperforms the previous load balancing destage algorithm.

  • PDF
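
The decision rule described above can be pictured roughly as follows. This is a minimal sketch, assuming a simple weighted score over per-disk load and a recent-reference list; the class name, weights, and data structures are illustrative assumptions, not the paper's actual design.

```python
from collections import deque

class DestageScheduler:
    def __init__(self, num_disks, history_len=64, load_weight=0.5):
        self.loads = [0] * num_disks              # outstanding requests per disk
        self.history = deque(maxlen=history_len)  # stripe ids referenced recently
        self.load_weight = load_weight            # trade-off: balance vs. locality

    def record_reference(self, stripe_id):
        # Remember a recently referenced stripe (the reference history).
        self.history.append(stripe_id)

    def destage_priority(self, entry):
        # entry: {'stripe_id': ..., 'disks': [...]}; lower score = destage sooner.
        # Cold stripes on lightly loaded disks are preferred, so recently
        # referenced (hot) data stays in the cache and busy disks are avoided.
        hot = entry["stripe_id"] in self.history
        disk_load = max(self.loads[d] for d in entry["disks"])
        return self.load_weight * disk_load + (10.0 if hot else 0.0)

    def pick_victim(self, dirty_entries):
        # Choose the dirty cache entry to destage next.
        return min(dirty_entries, key=self.destage_priority)
```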

A Study of Working Algorithm which makes silent Hard Disk Drive (저소음 HDD 구현을 위한 동작 알고리즘에 관한 연구)

  • Byun, Sang-Don;Chung, Kee-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.274-276
    • /
    • 2010
  • A notebook PC has three main noise sources: the HDD, the system cooling fan, and the ODD. Users generally accept the fan and ODD as normal operating noise, since the fan is needed to cool the CPU and chipset and the ODD runs only when it is actually in use. The HDD, however, sometimes makes noise without any user access, for example while reorganizing its cache or defragmenting, and users find this noise hard to accept. In quiet surroundings such as a library or a room at dawn, the noise is easily noticed and causes dissatisfaction. This paper studies an operating algorithm and method for reducing HDD noise to improve user satisfaction.

  • PDF

A Recovery Scheme of Single Node Failure using Version Caching in Database Sharing Systems (데이타베이스 공유 시스템에서 버전 캐싱을 이용한 단일 노드 고장 회복 기법)

  • 조행래;정용석;이상호
    • Journal of KIISE:Databases
    • /
    • v.31 no.4
    • /
    • pp.409-421
    • /
    • 2004
  • A database sharing system (DSS) couples a number of computing nodes for high-performance transaction processing, and each node in the DSS shares the database at the disk level. When a node fails, a database recovery algorithm is required to restore the database to a consistent state. Database recovery in a DSS takes considerably longer than in a single-node database system, because it must merge the separate log records of several nodes and perform REDO using the merged log records. In this paper, we propose a two-version caching (2VC) algorithm that improves on the cache fusion algorithm introduced in Oracle 9i Real Application Cluster (ORAC). The 2VC algorithm achieves faster database recovery by eliminating the need for merged log records in the case of a single node failure (a rough sketch of the idea follows this entry). Furthermore, it improves normal transaction processing performance by reducing the unnecessary disk force overhead that occurs in ORAC.
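
One way to read the abstract's core idea: a node keeps, besides the current page image, an older version known to be safe, so a single node failure can be recovered from cached versions rather than from merged log records. The sketch below is an interpretation under that assumption; the real 2VC protocol, its coherency messages, and its interaction with ORAC's cache fusion are not reproduced here.

```python
class TwoVersionCache:
    def __init__(self):
        self.current = {}   # page_id -> latest (possibly dirty) page image
        self.stable = {}    # page_id -> older version assumed safe for recovery

    def write(self, page_id, data):
        # Keep the previous image as the second version before overwriting it,
        # so recovery after a single node failure does not need merged logs.
        if page_id in self.current:
            self.stable[page_id] = self.current[page_id]
        self.current[page_id] = data

    def flush(self, page_id, disk):
        # Writing the page to disk makes the current version the stable one.
        disk[page_id] = self.current[page_id]
        self.stable[page_id] = self.current[page_id]

    def recover_page(self, page_id):
        # A surviving node can hand out its cached stable version instead of
        # replaying REDO from the merged log records of all nodes.
        return self.stable.get(page_id)
```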

The Pre-Service and Post-Transcoding Method for Enhancing the Response Time of Mobile Web Service (모바일 웹 서비스의 응답시간을 향상시키기 위한 선 서비스 후 변환 방법)

  • Kang, Eui-Sun;Park, Dae-Hyuck;Lim, Young-Hwan
    • The KIPS Transactions: Part D
    • /
    • v.14D no.7
    • /
    • pp.783-790
    • /
    • 2007
  • One issue to consider when providing PC web pages as a wireless web service is the difference in hardware environment between a PC and a mobile device, because producing mobile content suited to the connected terminal's environment takes time. The server side must therefore consider both response time and disk capacity: response time is delayed by content conversion, and disk capacity is needed to store multiple versions of the same content. This paper proposes a pre-service and post-transcoding method to provide faster response times for mobile terminals. The pre-service minimizes response time by giving top priority to serving content already saved in the cache, even if the quality delivered to the mobile terminal may be lower; after the pre-service, the content is transcoded for that terminal (a simplified sketch of this flow follows this entry). The performance of the proposed method was compared experimentally and the analysis results are described.
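
A simplified sketch of the "serve a cached variant now, transcode for this terminal later" flow described above. The function names, the cache key, and the use of a background thread are assumptions made for illustration, not the paper's implementation.

```python
import threading

cache = {}  # (url, device_profile) -> content adapted for that profile

def request_page(url, device_profile, fetch_original, transcode):
    key = (url, device_profile)
    if key in cache:
        return cache[key]                         # exact match already cached
    # Pre-service: return the closest cached variant immediately, even if its
    # quality is not ideal for this terminal.
    for (cached_url, _profile), content in list(cache.items()):
        if cached_url == url:
            def adapt_later():
                # Post-transcoding: prepare an exact match in the background
                # so the next request from this terminal is served instantly.
                cache[key] = transcode(fetch_original(url), device_profile)
            threading.Thread(target=adapt_later, daemon=True).start()
            return content
    # Nothing cached for this URL: transcode synchronously this one time.
    cache[key] = transcode(fetch_original(url), device_profile)
    return cache[key]
```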

Design and Implementation of Buffer Cache for EXT3NS File System (EXT3NS 파일 시스템을 위한 버퍼 캐시의 설계 및 구현)

  • Sohn, Sung-Hoon;Jung, Sung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.12
    • /
    • pp.2202-2211
    • /
    • 2006
  • EXT3NS is a special-purpose file system for large-scale multimedia streaming servers. It is built on top of a streaming acceleration hardware device called the Network-Storage card. The EXT3NS file system significantly improves streaming performance by eliminating memory-to-memory copy operations, i.e. sending video/audio from disk directly to the network interface with no main-memory buffering. In this paper, we design and implement a buffer cache mechanism, called PMEMCACHE, for the EXT3NS file system. We also propose a buffer cache replacement method called ONS for this mechanism. The ONS algorithm outperforms existing buffer replacement algorithms in a distributed multimedia streaming environment. In EXT3NS with PMEMCACHE, throughput is 33MB/sec and random read throughput is 2.4MB/sec. The ONS replacement algorithm also performs better than other buffer cache replacement policies by 600KB/sec. As a result, PMEMCACHE and ONS can greatly improve the performance of a multimedia streaming server that must support multiple client requests at the same time.

Acceleration Method of RAID Level 5 for DDR-SSD (DDR-SSD를 위한 RAID 레벨 5의 고속화 방법)

  • Gu, Bon-Gen;Kwak, Yun-Sik;Jeong, Seung-Kook;Hwang, Jung-Yeon
    • Journal of Advanced Navigation Technology
    • /
    • v.13 no.5
    • /
    • pp.684-690
    • /
    • 2009
  • In this paper, we propose an acceleration method for DDR-SSD RAID level 5. The DDR-SSD is the storage device of the Next Generation Storage (NGS) system, and it has characteristics different from both HDDs and flash SSDs, which is why DDR-SSD RAID level 5 does not achieve its best performance under conventional acceleration methods. To accelerate DDR-SSD RAID level 5 operation, we propose a parity cache and a parity cell architecture. The parity cache stores only parity blocks (an illustrative sketch follows this entry). The proposed acceleration method reduces the number of disk accesses and the overhead of parity operations.

  • PDF
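
The benefit of caching only parity blocks can be illustrated with the standard RAID5 read-modify-write update: if the stripe's old parity is already cached, one of the two pre-reads disappears. The sketch below assumes a generic raid object with read/write helpers; it is illustrative only, not the paper's design.

```python
class ParityCache:
    """Caches parity blocks only, so a RAID5 small write can often skip the
    old-parity read of the usual read-modify-write sequence."""

    def __init__(self):
        self.parity = {}  # stripe_id -> cached parity block (bytes)

    def small_write(self, raid, stripe_id, disk_idx, new_data):
        old_data = raid.read_block(stripe_id, disk_idx)      # read old data block
        if stripe_id in self.parity:
            old_parity = self.parity[stripe_id]              # parity read avoided
        else:
            old_parity = raid.read_parity(stripe_id)         # parity read from disk
        # Standard RAID5 parity update: P' = P xor D_old xor D_new
        new_parity = bytes(p ^ a ^ b
                           for p, a, b in zip(old_parity, old_data, new_data))
        self.parity[stripe_id] = new_parity                  # keep parity cached
        raid.write_block(stripe_id, disk_idx, new_data)
        raid.write_parity(stripe_id, new_parity)
```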

Dynamic Buffer Allocation Scheme for Caching in Realtime Multimedia Systems (실시간 멀티미디어 시스템에서의 캐슁을 위한 동적 버퍼 할당 기법)

  • Kwon, Jin-Baek;Yeom, Heon-Young;Lee, Kyung-Oh
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.4
    • /
    • pp.420-430
    • /
    • 2000
  • Several caching schemes for real-time multimedia systems have been proposed, but they focus only on increasing the hit ratio without providing any means to utilize the disk bandwidth saved by cache hits. One of the most important metrics in multimedia systems is the number of clients the system can serve simultaneously while guaranteeing Quality of Service (QoS). Preemptive but Safe Interval Caching (PSIC) was proposed as a caching scheme that makes it possible to provide deterministic QoS. However, it cannot adapt to changes in the system environment because it has no mechanism for changing the cache size. In this paper, we present a new caching scheme, Dynamic Interval Caching (DIC), which maximizes performance regardless of changes in the system environment and provides hiccup-free service by managing memory buffers dynamically (a conceptual sketch of interval caching follows this entry). We demonstrate that DIC allocates buffer cache optimally by comparing it with PSIC through trace-driven simulations.

  • PDF
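
Interval caching, which DIC builds on, caches the data between a leading and a trailing stream of the same file so the trailing stream reads from memory instead of disk. The sketch below shows only that selection step, assuming a block-granularity cache; DIC's dynamic buffer allocation across files and its QoS guarantees are not modeled.

```python
class IntervalCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cached_intervals = []   # (file_id, follower_pos, leader_pos)

    def select_intervals(self, file_id, positions):
        """positions: playback positions of concurrent streams on file_id,
        sorted in increasing order. Cache the gaps between consecutive streams,
        smallest first, so each trailing stream is served from memory."""
        gaps = [(positions[i + 1] - positions[i], positions[i], positions[i + 1])
                for i in range(len(positions) - 1)]
        used = 0
        self.cached_intervals = []
        for size, follower, leader in sorted(gaps):
            if used + size > self.capacity:
                break                      # no room for this interval's blocks
            self.cached_intervals.append((file_id, follower, leader))
            used += size
        return self.cached_intervals
```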

Data Deduplication Method using PRAM Cache in SSD Storage System (SSD 스토리지 시스템에서 PRAM 캐시를 이용한 데이터 중복제거 기법)

  • Kim, Ju-Kyeong;Lee, Seung-Kyu;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.4
    • /
    • pp.117-123
    • /
    • 2013
  • In recent cloud storage environments, SSDs (Solid-State Drives) are increasingly replacing traditional hard disk drives. Managing SSD space efficiently has become important because an SSD provides fast I/O performance due to the absence of mechanical movement, yet it wears out and does not support in-place updates. Data deduplication is frequently used to manage SSD space efficiency, but it incurs significant overhead because it consists of data chunking, hashing, and hash matching operations. In this paper, we propose a new data deduplication method using a PRAM cache. The proposed method uses hierarchical hash tables and LRU (Least Recently Used) replacement for data in PRAM. The first hash table, in DRAM, stores the hash values of data cached in PRAM, and the second hash table, in PRAM, stores the hash values of data in SSD storage (a hedged sketch of this lookup follows this entry). The method also enhances data reliability against power failure by maintaining a backup of the first hash table in PRAM. Experimental results show that the average write frequency and operation time of the proposed method are 44.2% and 38.8% lower, respectively, than those of the existing data deduplication method across three workloads.
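
A hedged sketch of the two-level lookup the abstract describes: a DRAM table over chunks currently cached in PRAM, a PRAM table over chunks already stored on the SSD, and LRU replacement inside the PRAM cache. The hash function, capacities, and method names are assumptions, not the authors' code.

```python
import hashlib
from collections import OrderedDict

class DedupPramCache:
    def __init__(self, pram_capacity):
        self.dram_table = set()         # DRAM: hashes of chunks cached in PRAM
        self.ssd_table = {}             # PRAM: hash -> SSD address of stored chunk
        self.pram_data = OrderedDict()  # PRAM: hash -> chunk data, in LRU order
        self.capacity = pram_capacity

    def write_chunk(self, chunk, ssd_write):
        digest = hashlib.sha1(chunk).hexdigest()
        if digest in self.dram_table or digest in self.ssd_table:
            if digest in self.pram_data:
                self.pram_data.move_to_end(digest)   # refresh LRU position
            return "duplicate"                       # no SSD write issued
        if len(self.pram_data) >= self.capacity:     # evict least recently used chunk
            victim, _ = self.pram_data.popitem(last=False)
            self.dram_table.discard(victim)
        self.pram_data[digest] = chunk
        self.dram_table.add(digest)
        self.ssd_table[digest] = ssd_write(chunk)    # unique chunk goes to the SSD
        return "written"
```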

PMS : Prefetching Strategy for Multi-level Storage System (PMS : 다단계 저장장치를 고려한 효율적인 선반입 정책)

  • Lee, Kyu-Hyung;Lee, Hyo-Jeong;Noh, Sam-Hyuk
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.1
    • /
    • pp.26-32
    • /
    • 2009
  • The multi-level storage architecture has been widely adopted in servers and data centers. However, while prefetching has been shown to be a crucial technique for exploiting the sequentiality of accesses common in such systems and for hiding the increasing relative cost of disk I/O, existing multi-level storage studies have focused mostly on cache replacement strategies. In this paper, we show that prefetching algorithms designed for single-level systems may have their limitations magnified when applied to multi-level systems: overly conservative prefetching cannot effectively use the lower-level cache space, while overly aggressive prefetching is compounded across levels and generates large amounts of wasted prefetch. We design and implement a hierarchy-aware lower-level prefetching strategy called PMS (Prefetching strategy for Multi-level Storage system) that is applicable to any upper-level prefetching algorithm. PMS requires no application hints, no a priori knowledge from the application, and no modification to the interface. Instead, it monitors the upper-level access patterns as well as the lower-level cache status and dynamically adjusts the aggressiveness of the lower-level prefetching activities (a simplified sketch follows this entry). We evaluated PMS through extensive simulation studies using a verified multi-level storage simulator, an accurate disk simulator, and access traces with different access patterns. Our results indicate that PMS dynamically controls the aggressiveness of lower-level prefetching in reaction to multiple system and workload parameters, improving overall system performance in all 32 test cases. Working with four well-known prefetching algorithms adopted in real systems, PMS improves average request response time by up to 35%, with an average improvement of 16.56% over all cases.
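
The adjustment loop can be sketched as follows, assuming the lower level tracks how much of what it prefetched the upper level actually consumed and scales its prefetch depth accordingly. The real PMS policy, its thresholds, and the signals it monitors are not reproduced here.

```python
class LowerLevelPrefetcher:
    def __init__(self, min_depth=1, max_depth=64):
        self.depth = min_depth                   # blocks to prefetch ahead
        self.min_depth, self.max_depth = min_depth, max_depth
        self.prefetched, self.used = 0, 0

    def on_access(self, hit_was_prefetched):
        # Count how many prefetched blocks the upper level actually requested.
        self.used += 1 if hit_was_prefetched else 0

    def adjust(self):
        # Periodically grow or shrink the prefetch depth based on accuracy.
        if self.prefetched == 0:
            return self.depth
        accuracy = self.used / self.prefetched
        if accuracy > 0.8:                       # sequential-looking: be aggressive
            self.depth = min(self.depth * 2, self.max_depth)
        elif accuracy < 0.3:                     # mostly wasted prefetch: back off
            self.depth = max(self.depth // 2, self.min_depth)
        self.prefetched, self.used = 0, 0
        return self.depth

    def issue_prefetch(self, next_block, read_blocks):
        blocks = [next_block + i for i in range(self.depth)]
        read_blocks(blocks)                      # fetch the chosen blocks
        self.prefetched += len(blocks)
```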

Affinity-based Dynamic Transaction Routing in a Shared Disk Cluster (공유 디스크 클러스터에서 친화도 기반 동적 트랜잭션 라우팅)

  • 온경오;조행래
    • Journal of KIISE:Databases
    • /
    • v.30 no.6
    • /
    • pp.629-640
    • /
    • 2003
  • A shared disk (SD) cluster couples multiple nodes for high-performance transaction processing, and all the coupled nodes share a common database at the disk level. In an SD cluster, transaction routing means selecting the node on which an incoming transaction will be executed. Affinity-based routing can increase the local buffer hit ratio of each node by clustering transactions that reference similar data onto the same node. However, affinity-based routing adapts poorly to changes in system load, so a specific node becomes overloaded when transactions of some class are congested. In this paper, we propose a dynamic transaction routing scheme that achieves an optimal balance between affinity-based routing and dynamic load balancing across all the nodes in the SD cluster (a minimal sketch follows this entry). The proposed scheme is novel in that it improves system performance by increasing the local buffer hit ratio and reducing the buffer invalidation overhead.
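
A minimal sketch of the routing decision, assuming a per-class affinity map, per-node load counters, and a simple overload threshold; the paper's actual balance criterion is more refined than this.

```python
def route_transaction(txn_class, affinity_map, node_loads, overload_factor=1.5):
    """Prefer the node with buffer affinity for this transaction class, but fall
    back to the least loaded node when the affinity node is overloaded, trading
    some buffer hit ratio for balanced load."""
    avg_load = sum(node_loads.values()) / len(node_loads)
    preferred = affinity_map.get(txn_class)
    if preferred is not None and node_loads[preferred] <= overload_factor * avg_load:
        return preferred                          # keep locality: warm buffers, few invalidations
    target = min(node_loads, key=node_loads.get)  # rebalance: least loaded node
    affinity_map[txn_class] = target              # shift affinity so future txns follow
    return target
```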