• Title/Summary/Keyword: Disk cache


A Design and Implementation on Large Data File Management Using Buffer Cache and Virtual Memory File (버퍼 캐쉬와 가상메모리 파일을 이용한 대형 데이터화일의 처리방법 설계 및 구현)

  • 김병철;신병석;조동섭;황희영
    • The Transactions of the Korean Institute of Electrical Engineers, v.41 no.7, pp.784-792, 1992
  • In this paper we design and implement a method that allows application programs to handle large data files in the DOS environment. The method uses extended memory and the hard disk as a data buffer, and a part of conventional DOS memory as a buffer cache, which lets the application program use extended memory and the hard disk transparently. Using the buffer cache also provides some speed improvement for the application program.

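To make the mechanism in the abstract above concrete, the following is a minimal sketch in C of a transparent, direct-mapped page cache fronting a larger backing store (a plain file stands in for extended memory or the hard disk). All names, sizes, and the mapping policy are illustrative assumptions, not the paper's implementation.

```c
/* Minimal sketch: a small in-memory cache transparently fronts a large
 * backing file; all sizes and names are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  512
#define NUM_SLOTS  8              /* cache slots kept in "conventional memory" */

typedef struct {
    long page_no;                 /* backing page held in this slot, -1 if free */
    unsigned char data[PAGE_SIZE];
} CacheSlot;

static CacheSlot slots[NUM_SLOTS];

/* Read one page; the caller never knows whether it came from the
 * in-memory cache or from the backing file. */
static int cached_read(FILE *backing, long page_no, unsigned char *out)
{
    CacheSlot *s = &slots[page_no % NUM_SLOTS];     /* direct-mapped slot */

    if (s->page_no != page_no) {                    /* miss: fill the slot */
        if (fseek(backing, page_no * PAGE_SIZE, SEEK_SET) != 0) return -1;
        if (fread(s->data, 1, PAGE_SIZE, backing) != PAGE_SIZE) return -1;
        s->page_no = page_no;
    }
    memcpy(out, s->data, PAGE_SIZE);                /* hit or freshly filled */
    return 0;
}

int main(void)
{
    unsigned char page[PAGE_SIZE];
    FILE *backing = fopen("large_data.bin", "rb");  /* stands in for the disk */

    for (int i = 0; i < NUM_SLOTS; i++) slots[i].page_no = -1;
    if (backing && cached_read(backing, 0, page) == 0)
        printf("first byte: %u\n", page[0]);
    if (backing) fclose(backing);
    return 0;
}
```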

Forgetting based File Cache Management Scheme for Non-Volatile Memory (데이터 망각을 활용한 비휘발성 메모리 기반 파일 캐시 관리 기법)

  • Kang, Dongwoo;Choi, Jongmoo
    • Journal of KIISE, v.42 no.8, pp.972-978, 2015
  • Non-volatile memory (NVM) supports both byte addressability and non-volatility. These characteristics make it feasible to employ NVM at any layer of the memory hierarchy, such as cache, memory, and disk. An interesting characteristic of NVM is that, even though it supports non-volatility, its retention capability is limited. Furthermore, NVM has a tradeoff between retention capability and write latency. In this paper, we propose a novel NVM-based file cache management scheme that exploits this limited retention capability to improve cache performance. Experimental results with real workloads show that our scheme can reduce access latency by up to 31% (24.4% on average) compared with a conventional LRU-based cache management scheme.
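
A hedged sketch of the forgetting idea described above: when a cached block nears its retention deadline, a hot block is refreshed while a cold block is simply dropped and re-fetched later on demand. The structures, thresholds, and refresh rule are assumptions for illustration, not the authors' scheme.

```c
/* Illustrative retention-aware decision for an NVM file cache entry. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

typedef struct {
    long   block_no;
    time_t written_at;     /* when the short-retention write was made   */
    long   retention_sec;  /* how long that write is guaranteed to last */
    long   ref_count;      /* crude hotness measure                     */
    bool   valid;
} NvmCacheEntry;

/* Decide what to do with one entry as its retention deadline nears. */
static void expire_or_refresh(NvmCacheEntry *e, time_t now, long hot_threshold)
{
    if (!e->valid || now < e->written_at + e->retention_sec)
        return;                                   /* still safely retained */

    if (e->ref_count >= hot_threshold) {
        e->written_at = now;                      /* hot: pay for a refresh write */
        printf("refresh block %ld\n", e->block_no);
    } else {
        e->valid = false;                         /* cold: forget it; a later
                                                     miss re-reads it from disk */
        printf("forget block %ld\n", e->block_no);
    }
}

int main(void)
{
    NvmCacheEntry hot  = { 1, 0, 10, 50, true };
    NvmCacheEntry cold = { 2, 0, 10,  1, true };
    expire_or_refresh(&hot,  20, 5);              /* kept and rewritten  */
    expire_or_refresh(&cold, 20, 5);              /* dropped (forgotten) */
    return 0;
}
```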

CC-GiST: A Generalized Framework for Efficiently Implementing Arbitrary Cache-Conscious Search Trees (CC-GiST: 임의의 캐시 인식 검색 트리를 효율적으로 구현하기 위한 일반화된 프레임워크)

  • Loh, Woong-Kee;Kim, Won-Sik;Han, Wook-Shin
    • The KIPS Transactions:PartD, v.14D no.1 s.111, pp.21-34, 2007
  • Owing to the recent rapid price drop and capacity growth of main memory, the number of applications based on main memory databases is increasing dramatically. A cache miss, the event in which data required by the CPU is not resident in the cache and must be fetched from main memory, is one of the major causes of performance degradation in main memory databases. Several cache-conscious trees have been proposed for reducing cache misses and making the best use of the cache in main memory databases. Since each cache-conscious tree has its own unique features, more than one cache-conscious tree may be used in a single application depending on the application's requirements. Moreover, if no existing cache-conscious tree satisfies the application's requirements, a new cache-conscious tree must be implemented solely for that application. In this paper, we propose the cache-conscious generalized search tree (CC-GiST). The CC-GiST extends the disk-based generalized search tree (GiST) [HNP95] to be cache-conscious, and provides all of the common features and algorithms of existing cache-conscious trees, including pointer compression and key compression techniques. To implement a cache-conscious tree based on the CC-GiST, one needs to implement only a few functions specific to that tree. We show how to implement the most representative cache-conscious trees, such as the CSB+-tree, the pkB-tree, and the CR-tree, on top of the CC-GiST. The CC-GiST eliminates the burden of managing more than one cache-conscious tree in an application, and provides a framework for efficiently implementing arbitrary cache-conscious trees with new features.
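
The extension-interface idea can be illustrated with a small sketch: the framework supplies the generic algorithms once, and a concrete cache-conscious tree registers only a few tree-specific callbacks. This is an assumed, simplified interface, not the actual CC-GiST API.

```c
/* Sketch of a GiST-style extension interface: generic code dispatches
 * through a per-tree callback table.  Names are illustrative only. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    int    (*compare_keys)(const void *a, const void *b);
    size_t (*compress_key)(const void *key, void *out, size_t out_len);
    /* further hooks (pointer compression, node split, ...) would go here */
} TreeOps;

/* One generic routine shared by every tree built on the framework. */
static int generic_key_equal(const TreeOps *ops, const void *a, const void *b)
{
    return ops->compare_keys(a, b) == 0;
}

/* A concrete tree (say, a CSB+-tree-like variant) supplies its callbacks. */
static int int_cmp(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static size_t int_compress(const void *key, void *out, size_t out_len)
{
    if (out_len < sizeof(int)) return 0;
    memcpy(out, key, sizeof(int));          /* trivial "compression" */
    return sizeof(int);
}

int main(void)
{
    TreeOps csb_like = { "csb-like", int_cmp, int_compress };
    int a = 7, b = 7;
    printf("%s: equal=%d\n", csb_like.name, generic_key_equal(&csb_like, &a, &b));
    return 0;
}
```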

In-depth Analysis and Performance Improvement of a Flash Disk-based Matrix Transposition Algorithm (플래시 디스크 기반 행렬전치 알고리즘 심층 분석 및 성능개선)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications, v.12 no.6, pp.377-384, 2017
  • Matrices are applied in so many areas that their scope can hardly be delimited. A typical matrix application area in computer science is image processing. In particular, radar scanning equipment implemented on a small embedded system requires real-time matrix transposition for image processing, and since its memory is small, a general in-memory transposition algorithm cannot be applied. In this case, the matrix must be transposed in disk space, such as a flash disk, using a limited memory buffer. In this paper, we analyze and improve a recently published flash disk-based matrix transposition algorithm called the asymmetric sub-matrix transposition algorithm. The performance analysis shows that the asymmetric sub-matrix transposition algorithm performs worse than the conventional sub-matrix transposition algorithm, but that the improved asymmetric sub-matrix transposition algorithm outperforms the sub-matrix transposition algorithm on 13 of the 16 experimental data sets.
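
For readers unfamiliar with disk-based transposition, the following sketch shows the baseline sub-matrix (tiled) scheme that the paper analyzes and improves upon: only one B x B tile is held in memory at a time, and each tile is written transposed to its mirrored position in a destination file. The sizes, file names, and two-file layout are illustrative assumptions, not the asymmetric algorithm itself.

```c
/* Baseline tiled (sub-matrix) out-of-core transposition sketch. */
#include <stdio.h>
#include <stdlib.h>

#define N 1024      /* the matrix is N x N doubles, stored row-major on disk */
#define B 64        /* tile edge; memory use is only B*B*sizeof(double)      */

static void transpose_tiled(FILE *src, FILE *dst)
{
    double *tile = malloc((size_t)B * B * sizeof *tile);
    if (!tile) return;

    for (long bi = 0; bi < N; bi += B)
        for (long bj = 0; bj < N; bj += B) {
            /* read tile (bi, bj): B partial rows from the source file */
            for (long r = 0; r < B; r++) {
                fseek(src, ((bi + r) * N + bj) * (long)sizeof(double), SEEK_SET);
                fread(tile + r * B, sizeof(double), B, src);
            }
            /* write the tile's transpose at position (bj, bi) in dst */
            for (long c = 0; c < B; c++) {
                double col[B];
                for (long r = 0; r < B; r++) col[r] = tile[r * B + c];
                fseek(dst, ((bj + c) * N + bi) * (long)sizeof(double), SEEK_SET);
                fwrite(col, sizeof(double), B, dst);
            }
        }
    free(tile);
}

int main(void)
{
    FILE *src = fopen("matrix.bin", "rb");
    FILE *dst = fopen("matrix_t.bin", "w+b");
    if (src && dst) transpose_tiled(src, dst);
    if (src) fclose(src);
    if (dst) fclose(dst);
    return 0;
}
```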

A User-Level File System for Streaming Media Caching (스트리밍 미디어 캐슁을 위한 사용자 수준 화일 시스템)

  • Oh, Jae-Hak;Cha, Ho-Jeong
    • Journal of KIISE:Computer Systems and Theory, v.29 no.8, pp.472-483, 2002
  • This paper presents the design and implementation of a cache file system, umcFS, which is specifically designed to provide efficient caching and transmission of streaming media. The proposed file system is based on the concept of a file disk and is implemented at the application level on top of a general-purpose file system. The file disk favors the continuity of cached media and provides an efficient I/O mechanism for the cache server. umcFS statically allocates control blocks as well as media cache blocks, and these blocks are referenced through a single-level indirect management structure. Because the file system is designed at the application level, it is easy to develop and to port to other systems. Performance measurements show that umcFS performs about 13% better than the native file system when randomly accessing cache blocks of 1024 KB.
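
A minimal sketch of the single-level indirect structure described above, assuming a statically allocated control block whose index entries map (media, block) pairs to offsets in the file disk; field names and sizes are illustrative, not umcFS's actual layout.

```c
/* Illustrative single-level indirect table for a file-disk media cache. */
#include <stdio.h>

#define CACHE_BLOCK_SIZE  (1024 * 1024)   /* 1 MB media cache blocks */
#define MAX_CACHE_BLOCKS  256

typedef struct {
    long media_id;        /* cached media object, -1 if the entry is free */
    long block_in_media;  /* block number within that media               */
    long filedisk_offset; /* byte offset inside the file disk             */
} IndexEntry;

typedef struct {
    IndexEntry index[MAX_CACHE_BLOCKS];   /* single-level indirect table */
} ControlBlock;

/* Resolve (media, block) to a file-disk offset with one table lookup. */
static long lookup_block(const ControlBlock *cb, long media_id, long block)
{
    for (int i = 0; i < MAX_CACHE_BLOCKS; i++)
        if (cb->index[i].media_id == media_id &&
            cb->index[i].block_in_media == block)
            return cb->index[i].filedisk_offset;
    return -1;   /* not cached */
}

int main(void)
{
    static ControlBlock cb;                       /* statically allocated */
    for (int i = 0; i < MAX_CACHE_BLOCKS; i++) cb.index[i].media_id = -1;
    cb.index[0] = (IndexEntry){ 7, 3, 5L * CACHE_BLOCK_SIZE };
    printf("offset = %ld\n", lookup_block(&cb, 7, 3));
    return 0;
}
```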

A Cache Consistency Control for B-Tree Indices in a Database Sharing System (데이타베이스 공유 시스템에서 B-트리 인덱스를 위한 캐쉬 일관성 제어)

  • On, Gyeong-O;Jo, Haeng-Rae
    • The KIPS Transactions:PartD, v.8D no.5, pp.593-604, 2001
  • A database sharing system (DSS) is a system for high-performance transaction processing. In a DSS, the processing nodes are coupled via a high-speed network and share a common database at the disk level. Each node has its own local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches data pages and index pages in its memory buffer. In general, B-tree index pages are accessed more often, and are thus cached at more processing nodes, than their corresponding data pages. There are also complex operations on the B-tree, such as Fetch, Fetch Next, Insertion, and Deletion. Therefore, an efficient cache consistency scheme supporting a high level of concurrency is required. In this paper, we propose cache consistency schemes that use identifiers of index pages and the page_LSN of leaf pages. The proposed schemes can improve system throughput by reducing the message traffic required between nodes and the number of index re-traversals.

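A small sketch of the basic page_LSN test that such consistency schemes rely on: before using a cached index page, a node compares the LSN stored in its copy with the newest LSN known for that page; a smaller cached LSN means the copy is stale and must be re-read from the shared disk. The structures are assumptions for illustration, not the proposed protocol.

```c
/* Illustrative page_LSN staleness check for a cached B-tree page. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    long page_id;     /* identifier of the index page  */
    long page_lsn;    /* LSN stored in the cached copy */
} CachedPage;

/* latest_lsn would arrive in a message from the coordinator/lock manager */
static bool cached_copy_is_valid(const CachedPage *p, long latest_lsn)
{
    return p->page_lsn >= latest_lsn;
}

int main(void)
{
    CachedPage leaf = { 4711, 100 };
    long latest = 120;                       /* another node updated the page */
    if (!cached_copy_is_valid(&leaf, latest))
        printf("page %ld is stale: re-read from shared disk\n", leaf.page_id);
    return 0;
}
```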

Disk Cache Operating Strategy Using Hints in Disk Drive (디스크 드라이브 레벨에서 힌트정보를 이용한 디스크 캐쉬 운영 방안)

  • 조재동;장태무
    • Proceedings of the Korean Information Science Society Conference, 2000.10c, pp.27-29, 2000
  • The performance gap between microprocessor speed and disk access speed is pointed out as one of the major factors limiting computer system performance. Disk caching has been studied as a technique for narrowing this gap, and prefetching has been widely studied as a way to improve disk cache performance. In this paper, we propose a prefetching method for a cache implemented on the disk drive that uses the characteristic type of each disk request as a hint. The validity of the proposed method is demonstrated by simulation, which shows that an adaptively adjusted prefetching scheme can improve performance.

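A rough sketch of hint-driven prefetching at the drive level, under the assumption that each request carries (or the drive infers) a sequential/random hint: the cache prefetches ahead only for sequential streams and adapts the prefetch depth while the hint keeps holding. The adaptation rule is an illustrative assumption, not the paper's.

```c
/* Illustrative hint-based adaptive prefetch at the drive cache. */
#include <stdio.h>

enum hint { HINT_RANDOM, HINT_SEQUENTIAL };

static int prefetch_depth = 1;               /* blocks to read ahead */

static void handle_request(long lba, enum hint h)
{
    /* ... service the demand read for 'lba' from cache or media ... */

    if (h == HINT_SEQUENTIAL) {
        if (prefetch_depth < 32) prefetch_depth *= 2;   /* grow on streaks */
        printf("prefetch %d blocks after LBA %ld\n", prefetch_depth, lba);
    } else {
        prefetch_depth = 1;                             /* reset on random I/O */
    }
}

int main(void)
{
    handle_request(1000, HINT_SEQUENTIAL);
    handle_request(1001, HINT_SEQUENTIAL);
    handle_request(5000, HINT_RANDOM);
    return 0;
}
```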

A study on High-availability Disk Cache Manager for distributed shared-disk (분산 공유 디스크를 위한 고 가용성 디스크 캐쉬 관리자에 관한 연구)

  • Jin, Kwang-Youn;Han, Pan-Am
    • Proceedings of the Korea Information Processing Society Conference, 2001.04b, pp.985-988, 2001
  • This study aims to improve the high availability of a disk cache manager (DCM) running on a microkernel-based distributed operating system for a clustering system of multiple PC nodes, and to speed up I/O to the shared disks. Using message passing over the different disks provided by the microkernel, a highly available DCM is designed and implemented with the goals of improving I/O performance and guaranteeing the integrity of data on disk, thereby improving overall system productivity.


A Study on Large Data File Management Using Buffer Cache and Virtual Memory File (가상메모리 화일과 버퍼캐쉬를 이용한 대형 데이타 화일의 처리에 관한 연구)

  • Kim, Byeong-Chul;Shin, Byeong-Seok;Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference, 1991.11a, pp.185-188, 1991
  • In this paper we have designed and implemented a method that uses extended memory and hard disk space as a data buffer, allowing application programs to handle large data files in the DOS environment. A part of conventional DOS memory is used as a buffer cache, which allows the application program to use extended memory and the hard disk transparently. Using the buffer cache also provides some speed improvement for the application program. We have also implemented a number of functions that make the pointer operations used by application programs easier to handle.

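The pointer-handling helpers mentioned above might look roughly like the following sketch, in which the application reads and writes whole records through get/put calls keyed by record number instead of manipulating raw buffer pointers; a plain in-memory array stands in for the cache-backed store, and nothing here is the paper's actual API.

```c
/* Illustrative record-level accessors that hide pointer arithmetic. */
#include <stdio.h>
#include <string.h>

#define RECORD_SIZE 64
#define NUM_RECORDS 16

static unsigned char store[NUM_RECORDS][RECORD_SIZE];  /* stands in for the
                                                           cache-backed file */

static int vm_get_record(long rec_no, void *out)
{
    if (rec_no < 0 || rec_no >= NUM_RECORDS) return -1;
    memcpy(out, store[rec_no], RECORD_SIZE);            /* no raw pointer exposed */
    return 0;
}

static int vm_put_record(long rec_no, const void *in)
{
    if (rec_no < 0 || rec_no >= NUM_RECORDS) return -1;
    memcpy(store[rec_no], in, RECORD_SIZE);
    return 0;
}

int main(void)
{
    unsigned char rec[RECORD_SIZE] = "hello";
    vm_put_record(3, rec);
    vm_get_record(3, rec);
    printf("%s\n", (char *)rec);
    return 0;
}
```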

Low-power Buffer Cache Management for Mixed HDD and SSD Storage Systems (HDD와 SSD의 혼합형 저장 시스템을 위한 절전형 버퍼 캐쉬 관리)

  • Kang, Hyo-Jung;Park, Jun-Seok;Koh, Kern;Bahn, Hyo-Kyung
    • Journal of KIISE:Computing Practices and Letters, v.16 no.4, pp.462-466, 2010
  • A new buffer cache management scheme is presented that aims at reducing power consumption in mixed HDD and NAND flash memory storage systems. The proposed scheme reduces power consumption by considering the different energy consumption rates of the storage devices, the I/O operation type (read or write), and the reference potential of cached blocks in terms of both recency and frequency. Simulations show that the proposed scheme reduces power consumption by 18.0% on average and by up to 58.9%.
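
A hedged sketch of the kind of eviction decision such a scheme implies: each cached block is scored by how costly it would be to evict, with the cost growing with the energy of the device it would be re-read from (or written back to) and with the block's recency and frequency. The weights and formula are illustrative assumptions, not the authors' algorithm.

```c
/* Illustrative energy-aware victim selection for a mixed HDD/SSD cache. */
#include <stdio.h>

enum device { DEV_SSD, DEV_HDD };

typedef struct {
    long  block_no;
    enum device dev;     /* where the block lives on stable storage   */
    int   dirty;         /* evicting a dirty block costs a write back */
    long  last_access;   /* recency (logical clock)                   */
    long  access_count;  /* frequency                                 */
} Block;

static double eviction_cost(const Block *b, long now)
{
    /* assumed per-operation energy costs, HDD much higher than SSD */
    double read_energy  = (b->dev == DEV_HDD) ? 10.0 : 1.0;
    double write_energy = b->dirty ? ((b->dev == DEV_HDD) ? 12.0 : 2.0) : 0.0;
    double hotness = (double)b->access_count / (double)(now - b->last_access + 1);
    return (read_energy + write_energy) * (1.0 + hotness);
}

int main(void)
{
    Block a = { 1, DEV_HDD, 0, 90, 5 }, b = { 2, DEV_SSD, 1, 99, 2 };
    long now = 100;
    const Block *victim = eviction_cost(&a, now) < eviction_cost(&b, now) ? &a : &b;
    printf("evict block %ld\n", victim->block_no);   /* cheapest block to evict */
    return 0;
}
```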