• Title/Summary/Keyword: Level 2 Cache

An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.20 no.11 / pp.31-38 / 2015
  • Nowadays, flash memory drives with capacities of several hundred gigabytes are common. This paper presents an efficient cache management scheme for the flash translation layer, called TPC-FTL, aimed at large flash memory drives. Because large flash drives usually contain a large amount of RAM, the performance of the page-mapping cache can be improved by devoting more RAM to it. Once the cache exceeds a certain size, however, the existing schemes become impractical for real devices because cache manipulation takes too long. TPC-FTL therefore manages the cache in units of translation pages rather than in units of logical page numbers as existing schemes do. Since one translation page covers a large number of logical page numbers (for example, 512 for a 2KB page), the number of cache elements can be reduced to a practical level (sketched below). A performance evaluation shows that the average response time, an important performance measure, is better than that of existing schemes because spatial locality is exploited in addition to temporal locality.
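
The translation-page-unit cache described in this abstract can be pictured as an LRU map keyed by translation-page number rather than by logical page number. The sketch below is a minimal illustration under that reading; `ENTRIES_PER_TP = 512` is the figure quoted in the abstract for 2KB pages, and all class and method names are hypothetical rather than taken from TPC-FTL.

```python
from collections import OrderedDict

ENTRIES_PER_TP = 512              # logical pages covered by one translation page

class TranslationPageCache:
    """LRU mapping cache whose unit of caching is a whole translation page."""

    def __init__(self, capacity_tps):
        self.capacity = capacity_tps   # capacity counted in translation pages
        self.cache = OrderedDict()     # tp_number -> {lpn: ppn} slice of the map

    def lookup(self, lpn):
        """Return the cached physical page for lpn, or None on a cache miss."""
        tp = lpn // ENTRIES_PER_TP
        if tp in self.cache:
            self.cache.move_to_end(tp)          # refresh LRU position
            return self.cache[tp].get(lpn)
        return None

    def fill(self, lpn, mapping_slice):
        """Install the whole translation page that covers lpn."""
        tp = lpn // ENTRIES_PER_TP
        if tp not in self.cache and len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)      # evict the least recently used page
        self.cache[tp] = mapping_slice
        self.cache.move_to_end(tp)

# One cached translation page answers lookups for all 512 logical pages it
# covers, which is where the spatial-locality benefit mentioned above comes from.
cache = TranslationPageCache(capacity_tps=4)
cache.fill(1000, {lpn: lpn + 7 for lpn in range(512, 1024)})   # TP 1 covers LPN 512..1023
print(cache.lookup(1023))    # hit within the same translation page -> 1030
```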

Improving Performance of Internet by Using Hierarchical Proxy Cache (계층적 프록시 캐쉬를 이용한 인터넷 성능 향상 기법)

  • 이효일;김종현
    • Journal of the Korea Society for Simulation / v.9 no.2 / pp.1-14 / 2000
  • Recently, as the construction of an information infrastructure including high-speed communication networks has expanded remarkably, a wider variety of information services has become available. The number of Internet users has therefore increased rapidly, which results in heavy load on Web servers and higher traffic on networks. These phenomena lengthen response times, which means a lower quality of service. To address these problems, much effort has gone into relieving the bottleneck at the Web server, reducing network traffic, and shortening response times by caching frequently accessed information at proxy servers located close to the clients. Internet performance can be improved further by allowing clients to share the information stored in the proxy caches. In this paper, we simulate hierarchical proxy caches with a 3-level, 4-ary tree structure using real Web traces, and analyze the cache hit ratio for various cache replacement policies and cache sizes when the delayed-store scheme is applied (a simplified lookup path is sketched below). According to the simulation results, the delayed-store scheme increases the remote cache hit ratio, which improves quality of service by shortening the service response time.
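
The exact delayed-store policy is defined in the paper itself; the sketch below only shows a lookup climbing a proxy hierarchy, with a plausible stand-in rule (an object is stored at a level only after that level has seen it requested once before) that is purely an assumption for illustration. All names are hypothetical.

```python
class ProxyCache:
    """One node of a proxy hierarchy; the study uses a 3-level 4-ary tree."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # next level up; None at the root proxy
        self.store = {}               # url -> cached object
        self.seen = set()             # urls requested before but not yet stored

    def fetch_from_origin(self, url):
        return f"<object {url}>"      # placeholder for the real origin-server fetch

    def get(self, url):
        if url in self.store:                         # local hit
            return self.store[url]
        obj = (self.parent.get(url) if self.parent
               else self.fetch_from_origin(url))      # miss: climb the hierarchy
        if url in self.seen:                          # "delayed" store: only keep an
            self.store[url] = obj                     # object seen at least twice here
        else:
            self.seen.add(url)
        return obj

root = ProxyCache("root")
mid = ProxyCache("mid-1", parent=root)
leaf = ProxyCache("leaf-1-1", parent=mid)
leaf.get("/index.html")     # first request travels all the way to the origin
leaf.get("/index.html")     # second request is stored along the path on its way back
```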

A New Hybrid Architecture for Cooperative Web Caching

  • Baek, Jin-Suk;Kaur, Gurpreet;Yang, Jung-Hoon
    • Journal of Ubiquitous Convergence Technology / v.2 no.1 / pp.1-11 / 2008
  • An effective solution to the problems caused by the explosive growth of the World Wide Web is web caching, which employs an additional server, called a proxy cache, between the clients and the origin server to cache popular web objects near the clients. However, a single proxy cache can easily become a bottleneck. Deploying groups of cooperating caches provides scalability and robustness by removing the limitations of a single proxy cache. Two common architectures for cooperative caching are hierarchical and distributed caching systems; unfortunately, both suffer from performance limitations. We propose an efficient hybrid caching architecture that eliminates these limitations by using both hierarchical and same-level caches (the lookup order is sketched below). A performance evaluation with our simulator shows that the proposed architecture offers the best of both existing architectures in terms of cache hit rate, the number of query messages from clients, and response time.
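
A minimal sketch of the hybrid lookup order the abstract describes: local cache first, then cooperating caches at the same level, then the parent level, and the origin server last. Class and method names are illustrative only, not the paper's.

```python
class HybridCache:
    """A cache node that asks same-level peers before going up the hierarchy."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # upper-level cache, None at the top
        self.siblings = []        # cooperating caches at the same level
        self.store = {}

    def has(self, url):
        return url in self.store

    def get(self, url):
        if url in self.store:                        # 1. local hit
            return self.store[url]
        for peer in self.siblings:                   # 2. query same-level caches
            if peer.has(url):
                obj = peer.store[url]
                break
        else:
            if self.parent is not None:              # 3. fall back to the parent level
                obj = self.parent.get(url)
            else:
                obj = f"<object {url}>"              # 4. origin server (stubbed out)
        self.store[url] = obj
        return obj

parent = HybridCache("parent")
a = HybridCache("proxy-a", parent=parent)
b = HybridCache("proxy-b", parent=parent)
a.siblings, b.siblings = [b], [a]
a.get("/logo.png")          # miss everywhere: resolved through the parent
print(b.get("/logo.png"))   # served by sibling a without another parent query
```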

An Efficient Location Cache Scheme for 3-level Database Architecture in PCS Networks (PCS 네트워크에서 3-레벨 데이터베이스 구조를 위한 효과적인 위치 캐시 기법)

  • Han, Youn-Hee;Song, Ui-Sung;Hwang, Chong-Sun;Jeong, Young-Sik
    • Journal of KIISE:Information Networking / v.29 no.3 / pp.253-264 / 2002
  • Recently, hierarchical database architectures for location management have been proposed to accommodate the growing user population of future personal communication systems. In particular, a 3-level hierarchical database architecture is compatible with current cellular mobile systems. In this architecture, newly introduced databases, the regional location registers (RLRs), are positioned between the HLR and the VLRs. We propose an efficient cache scheme, called the double T-threshold location cache scheme, which extends the existing T-threshold location cache scheme that is applicable only to the 2-level architecture of location databases currently adopted by IS-41 and GSM. The idea behind our scheme is to cache two pieces of information, the VLR and the RLR serving the called portable. The two pieces are needed to exploit not only the locality of the registration area (RA) but also the locality of the regional registration area (RRA), the wider area covered by an RLR. We also use two threshold values to decide whether each of the two cached pieces is obsolete (a simplified decision procedure is sketched below). To model the RRA residence time, the branching Erlang-$\infty$ distribution is introduced. Our detailed cost analysis shows that the double T-threshold location cache scheme yields a significant reduction of network and database costs for most patterns of portables.
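
A rough sketch of a call-delivery decision based on two cached pointers (VLR and RLR) guarded by two staleness thresholds. The concrete threshold values and the use of elapsed time as the staleness test are simplifying assumptions for illustration; the paper derives its thresholds from residence-time distributions.

```python
import time

T_RA = 600.0      # assumed threshold (seconds) for trusting the cached VLR pointer
T_RRA = 3600.0    # assumed threshold (seconds) for trusting the cached RLR pointer

def locate(portable, cache, now=None):
    """Return which database to query first for the called portable."""
    now = time.time() if now is None else now
    entry = cache.get(portable)            # {'vlr': ..., 'rlr': ..., 'stamp': ...}
    if entry is None:
        return ("HLR", None)               # no cached information at all
    age = now - entry["stamp"]
    if age <= T_RA:
        return ("VLR", entry["vlr"])       # RA-level pointer still trusted
    if age <= T_RRA:
        return ("RLR", entry["rlr"])       # fall back to the regional database
    return ("HLR", None)                   # both cached pieces considered obsolete

cache = {"user42": {"vlr": "VLR-7", "rlr": "RLR-2", "stamp": time.time() - 1200}}
print(locate("user42", cache))             # -> ('RLR', 'RLR-2') with these numbers
```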

Web Proxy Cache Replacement Algorithms using Object Type Partition (개체 타입별 분할공간을 이용한 웹 프락시 캐시의 대체 알고리즘)

  • Soo-haeng, Lee;Sang-bang, Choi
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.5C / pp.399-410 / 2002
  • A web cache, which is in effect another term for a proxy server, is located between the clients and the server. A web cache has a limited storage area, although it has broad bandwidth to the clients, which are usually connected through a LAN. Because of the limited storage capacity, existing objects in the web cache may be deleted to make room for new objects according to rules called replacement algorithms. Hit rate and byte-hit rate are the usual metrics for evaluating replacement algorithms, yet most replacement algorithms satisfy only one of the two, and sometimes neither. In this paper, we propose two replacement algorithms that achieve both a high hit rate and a high byte-hit rate. In the first algorithm, used as the basic model, the cache is partitioned according to file types. In the second algorithm, the cache consists of two levels: the upper level is managed by the basic algorithm, while the lower level is used collectively for all file types as a shared area (a sketch of this two-level layout follows below). To show the performance of the proposed algorithms, we evaluate their hit rates and byte-hit rates using trace-driven simulation.
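
A minimal sketch of the two-level idea: the upper level is split into per-type partitions, and the lower level is a single shared area that absorbs objects evicted from the partitions. The partition quotas and the use of LRU in both levels are assumptions made to keep the sketch short.

```python
from collections import OrderedDict

class PartitionedProxyCache:
    """Upper level split by object type; lower level shared by every type."""

    def __init__(self, type_quotas, shared_quota):
        self.type_quotas = type_quotas          # e.g. {"image": 4096, "html": 2048} bytes
        self.shared_quota = shared_quota
        self.partitions = {t: OrderedDict() for t in type_quotas}   # url -> size
        self.shared = OrderedDict()

    @staticmethod
    def _make_room(area, quota, need):
        """Evict LRU entries from `area` until `need` bytes fit; yield the victims."""
        used = sum(area.values())
        while area and used + need > quota:
            url, size = area.popitem(last=False)
            used -= size
            yield url, size

    def insert(self, url, obj_type, size):
        partition = self.partitions[obj_type]
        # Objects squeezed out of a partition get a second chance in the shared area.
        for victim_url, victim_size in self._make_room(
                partition, self.type_quotas[obj_type], size):
            for _ in self._make_room(self.shared, self.shared_quota, victim_size):
                pass                             # shared-area victims leave the cache
            self.shared[victim_url] = victim_size
        partition[url] = size

    def lookup(self, url, obj_type):
        for area in (self.partitions[obj_type], self.shared):
            if url in area:
                area.move_to_end(url)            # refresh LRU position
                return True
        return False
```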

Fault Tolerant Cache for Soft Error (소프트에러 결함 허용 캐쉬)

  • Lee, Jong-Ho;Cho, Jun-Dong;Pyo, Jung-Yul;Park, Gi-Ho
    • The Transactions of The Korean Institute of Electrical Engineers / v.57 no.1 / pp.128-136 / 2008
  • In this paper, we propose a new cache structure for effective correction of soft errors. We add a check bit and an SEEB (soft error evaluation block) to track the status of each cache line. The SEEB stores the result of each parity check in a two-bit shift register and sets the check bit to '1' when the parity check fails twice for the same cache line; such a line is treated as vulnerable to soft errors. When data is filled into the cache, a new replacement algorithm is applied that uses only the valid blocks identified by the SEEB (sketched below). This structure keeps vulnerable lines from being used, while lines whose parity check has failed only once can still be reused, contributing to efficient use of the cache. We tried to minimize the side effects of the proposed cache; experimental results using the SPEC2000 benchmark showed a 3% degradation in hit rate, a 15% timing overhead due to the parity logic, and a 2.7% area overhead. The overhead of the SEEB itself can be considered trivial, since almost every fault-tolerant design inevitably adopts this kind of parity method despite some overhead, and using parity logic alone has a 5% to 10% advantage over ECC logic. With the proposed cache, the system is protected from the threat of soft errors in the cache, and the hit rate is maintained at the level of a cache without soft errors.
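
A rough sketch of the per-line bookkeeping the abstract describes: a two-bit history of parity results per line, a check bit set once parity has failed twice on that line, and a victim selection that skips lines marked vulnerable. Whether the two failures must be consecutive is an assumption here, and all structure names are illustrative.

```python
class CacheLineState:
    def __init__(self):
        self.history = [0, 0]     # 2-bit shift register of parity results (1 = fail)
        self.check_bit = 0        # 1 once the line is treated as soft-error prone

    def record_parity(self, failed):
        self.history = [self.history[1], 1 if failed else 0]
        if self.history == [1, 1]:
            self.check_bit = 1    # two failures on the same line: mark it vulnerable

class SoftErrorAwareSet:
    """One cache set whose victim selection skips vulnerable lines."""

    def __init__(self, ways):
        self.lines = [CacheLineState() for _ in range(ways)]
        self.lru = list(range(ways))            # least recently used way first

    def choose_victim(self):
        for way in self.lru:
            if self.lines[way].check_bit == 0:  # only non-vulnerable lines are filled
                return way
        return None                             # every way is marked vulnerable

    def touch(self, way):
        self.lru.remove(way)
        self.lru.append(way)                    # most recently used way at the end
```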

A Level One Cache Organization for Chip-Size Limited Single Processor (칩의 크기가 제한된 단일칩 프로세서를 위한 레벨 1 캐시구조)

  • Ju YoungKwan;Kim Sukil
    • The KIPS Transactions:PartA / v.12A no.2 s.92 / pp.127-136 / 2005
  • This paper measures, by simulation, a proper ratio between the size of the demand-fetch cache $L_1$ and the prefetch cache $L_P$ when their combined size is fixed, for the space-limited level 1 cache of a single-chip microprocessor. The analysis of our experiments shows that, when the combined size of $L_1$ and $L_P$ is 16 KB, the best-performing level 1 cache organization allots 4 KB to $L_P$ and employs OBL as the prefetch technique and FIFO as the cache replacement policy. The analysis also shows that, when the combined size of $L_1$ and $L_P$ is 32 KB or more, dynamic filtering is more advantageous as the prefetch technique for $L_P$; the best performance is obtained by splitting the level 1 cache into a 28 KB $L_1$ and a 4 KB $L_P$ when 32 KB of space is available, and into a 48 KB $L_1$ and a 16 KB $L_P$ when 64 KB is available (the demand/prefetch split is sketched below).
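
A minimal sketch of the split level-1 arrangement: a demand-fetch cache $L_1$ and a small FIFO prefetch buffer $L_P$ filled by one-block-lookahead (OBL). The promote-on-hit behaviour and the FIFO eviction in $L_1$ are assumptions made to keep the sketch short; the block-count parameters are illustrative.

```python
from collections import deque

class SplitL1:
    def __init__(self, l1_blocks, lp_blocks):
        self.l1 = deque(maxlen=l1_blocks)   # demand-fetch cache (FIFO for brevity)
        self.lp = deque(maxlen=lp_blocks)   # prefetch cache, FIFO-managed

    def access(self, block):
        if block in self.l1:
            return "L1 hit"
        if block in self.lp:                # prefetched block actually used:
            self.lp.remove(block)           # promote it into the demand cache
            self.l1.append(block)
            self.lp.append(block + 1)       # and keep looking one block ahead
            return "LP hit"
        self.l1.append(block)               # demand miss: fetch the block itself
        self.lp.append(block + 1)           # OBL: prefetch the next sequential block
        return "miss"

caches = SplitL1(l1_blocks=7, lp_blocks=1)  # e.g. a 28KB/4KB split with 4KB blocks
print([caches.access(b) for b in (10, 11, 12, 20)])
# -> ['miss', 'LP hit', 'LP hit', 'miss'] for this access pattern
```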

Acceleration of LU-SGS Code on Latest Microprocessors Considering the Increase of Level 2 Cache Hit-Rate (최신 마이크로프로세서에서 2차 캐쉬 적중률 증가를 고려한 LU-SGS 코드의 가속)

  • Choi, J.Y.;Oh, Se-Jong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.30 no.7 / pp.68-80 / 2002
  • An approach for composing performance-optimized computational code on the latest microprocessors is suggested. The concept behind the code optimization, called localization here, is to maximize the utilization of the second level cache, which is common to all recent computer systems, and to minimize accesses to main memory (the general idea is sketched below). In this study, the localized optimization of an LU-SGS (Lower-Upper Symmetric Gauss-Seidel) code for the solution of fluid dynamic equations was carried out at three different levels and tested on several of the microprocessor architectures most widely used today. The test results showed that the localized optimization produces exactly the same solution as the baseline algorithm while running up to 7.35 times faster, depending on the system.
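
The paper's three-level LU-SGS restructuring is not reproduced here; the sketch below only illustrates the generic localization idea (cache blocking, or tiling) on a simple relaxation sweep. In compiled code this change of traversal order is what raises the L2 hit rate; written in Python it merely shows the structure.

```python
import numpy as np

def blocked_jacobi_sweep(a, tile=64):
    """One Jacobi relaxation sweep over a 2D grid, processed tile by tile."""
    out = a.copy()
    n, m = a.shape
    for ib in range(1, n - 1, tile):                 # walk the grid in tiles that
        for jb in range(1, m - 1, tile):             # fit comfortably in the L2 cache
            for i in range(ib, min(ib + tile, n - 1)):
                for j in range(jb, min(jb + tile, m - 1)):
                    out[i, j] = 0.25 * (a[i - 1, j] + a[i + 1, j]
                                        + a[i, j - 1] + a[i, j + 1])
    return out

grid = np.random.rand(256, 256)
result = blocked_jacobi_sweep(grid, tile=64)   # identical result to an untiled sweep;
                                               # only the memory access order changes
```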

A Design and Implementation of Cache Coherence Protocol for Hierarchical Cluster Architecture (계층 클러스터 구조를 위한 캐쉬 일관성 프로토콜의 설계 및 구현)

  • 박신민;최창훈;김성천
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.7 / pp.1282-1295 / 1994
  • In this paper, a hierarchical cluster multiprocessor system based on a hierarchical bus system is proposed, and its cache coherence protocol is designed and implemented. The hierarchical cluster architecture aims to eliminate the system bottleneck of existing single-bus systems by adding a hierarchy of buses as the number of clusters increases, so the system scales easily to a large number of processors. The proposed cache protocol is designed to fit a general N-level (N > 2) hierarchical cluster architecture. The original pended protocol is extended to implement the cache protocol on the system bus, and the cache coherence operations of this protocol are explained (a generic write-invalidate illustration follows below).
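
The following is a toy write-invalidate illustration of the kind of coherence traffic a hierarchical bus must carry: a write inside one cluster invalidates copies of the same block held by caches in other clusters, reached through the upper-level bus. It is a generic sketch, not the pended protocol designed in the paper.

```python
class Cache:
    def __init__(self):
        self.blocks = {}                      # address -> "S" (shared) or "M" (modified)

class Cluster:
    def __init__(self, n_caches):
        self.caches = [Cache() for _ in range(n_caches)]

    def invalidate(self, addr, except_cache=None):
        for c in self.caches:
            if c is not except_cache:
                c.blocks.pop(addr, None)      # drop stale copies over the cluster bus

class HierarchicalSystem:
    def __init__(self, n_clusters, caches_per_cluster):
        self.clusters = [Cluster(caches_per_cluster) for _ in range(n_clusters)]

    def write(self, cluster_id, cache_id, addr):
        # Invalidate inside the writer's cluster over the local bus ...
        cluster = self.clusters[cluster_id]
        writer = cluster.caches[cache_id]
        cluster.invalidate(addr, except_cache=writer)
        # ... and broadcast the invalidation to other clusters over the upper-level bus.
        for other in self.clusters:
            if other is not cluster:
                other.invalidate(addr)
        writer.blocks[addr] = "M"             # the writer now holds the only valid copy
```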

i$^2$SCSI: Intelligent iSCSI Shared Disk Providing Cache Consistency in Storage Area Network (i$^2$SCSI: Storage Area Network에서 캐시 일관성을 제공하는 지능적인 iSCSI 공유 디스크)

  • 이주평;황주영;임승호;박규호
    • Proceedings of the IEEK Conference / 2003.07d / pp.1327-1330 / 2003
  • The Internet SCSI (iSCSI) disk has been studied as a storage system that can be connected directly to a TCP/IP network. We designed and implemented a shared disk that follows the iSCSI protocol and provides cache consistency, named the intelligent iSCSI (i$^2$SCSI) disk. The i$^2$SCSI disk provides cache consistency for all blocks belonging to the disk using a conventional lease method, and it uses 'contiguous blocks-level locking' (sketched below). A prototype of the i$^2$SCSI disk emulator and its client was designed and implemented on Linux 2.4.
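
A rough sketch of lease-based consistency over contiguous block ranges: a client may serve a block from its cache only while it holds an unexpired lease covering that block, and a conflicting range is not leased out until the earlier lease expires. The lease duration and the API shape are assumptions; the paper's protocol lives at the iSCSI target.

```python
import time

LEASE_SECONDS = 5.0                       # assumed lease duration

class RangeLeaseManager:
    def __init__(self):
        self.leases = []                  # (start_block, end_block, holder, expiry)

    def _drop_expired(self, now):
        self.leases = [l for l in self.leases if l[3] > now]

    def acquire(self, holder, start, end, now=None):
        """Grant a lease on blocks [start, end] unless another client holds one."""
        now = time.time() if now is None else now
        self._drop_expired(now)
        for s, e, h, _ in self.leases:
            if h != holder and not (end < s or start > e):
                return None               # overlapping range leased elsewhere
        lease = (start, end, holder, now + LEASE_SECONDS)
        self.leases.append(lease)
        return lease

    def may_use_cache(self, holder, block, now=None):
        """True if the client's cached copy of `block` is still covered by a lease."""
        now = time.time() if now is None else now
        self._drop_expired(now)
        return any(s <= block <= e and h == holder for s, e, h, _ in self.leases)
```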
