• Title/Summary/Keyword: prefetch

Prefetching Mechanism using the User's File Access Pattern Profile in Mobile Computing Environment (이동 컴퓨팅 환경에서 사용자의 FAP 프로파일을 이용한 선인출 메커니즘)

  • Choi, Chang-Ho; Kim, Myung-Il; Kim, Sung-Jo
    • Journal of KIISE: Information Networking / v.27 no.2 / pp.138-148 / 2000
  • In the mobile computing environment, in order to make copies of important files available when disconnected, the mobile host (client) must store them in its local cache while the connection is maintained. In this paper, we propose a prefetching mechanism for the client to save files that may be accessed in the near future. Our mechanism utilizes an analyzer, a prefetch-list producer, and a prefetch manager. The analyzer records the user's file access patterns in a FAP (File Access Patterns) profile. Using this profile, the prefetch-list producer creates a prefetch-list, and the prefetch manager requests the files on this list from a file server. We set the parameter TRP (Threshold of Reference Probability) to ensure that only reasonably related files are prefetched: the prefetch-list producer adds a file to the prefetch-list only if its reference probability exceeds the TRP. We also use the parameter TACP (Threshold of Access Counter Probability) to reduce the hoarding size required to store the prefetched files. Finally, we measure metrics such as the cache hit ratio, the number of files referenced by the client after disconnection, and the hoarding size. The simulation results show that the performance of our mechanism is superior to that of the LRU caching mechanism. They also show that prefetching with the TACP can reduce the hoarding size while maintaining performance similar to prefetching without it.
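
As a rough illustration of the thresholding idea in this abstract, the Python sketch below builds a prefetch list from an access log: a file qualifies only if its reference probability exceeds a TRP-style cutoff, and a second cutoff trims the low-probability tail to bound the hoarding size. This is not the paper's algorithm; the function name, defaults, and the simplified use of TACP are assumptions.

```python
from collections import Counter

def build_prefetch_list(access_log, trp=0.05, tacp=0.01):
    """Illustrative prefetch-list construction from a file access log.

    trp:  Threshold of Reference Probability -- a file becomes a candidate
          only if its share of all accesses exceeds this value.
    tacp: stands in for the paper's Threshold of Access Counter Probability,
          used here simply to trim rarely-counted files and shrink the hoard.
    """
    counts = Counter(access_log)
    total = sum(counts.values())
    # TRP filter: keep files whose reference probability clears the threshold.
    candidates = {f: c / total for f, c in counts.items() if c / total > trp}
    # TACP-style trim: drop the low-probability tail to bound the hoard size.
    return sorted((f for f, p in candidates.items() if p > tacp),
                  key=lambda f: counts[f], reverse=True)

log = ["a.txt", "b.txt", "a.txt", "c.txt", "a.txt", "b.txt"]
print(build_prefetch_list(log, trp=0.2))  # ['a.txt', 'b.txt']
```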

T-Tree Index Structures Utilizing Prefetch Methods (프리패치 기법을 적용한 T-트리 인덱스 구조)

  • Lee, Ig-Hoon; Shim, Jun-Ho
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.119-131 / 2009
  • Over the past decade, e-Commerce environments supporting real-time transaction processing have grown considerably. In telecommunication and financial environments, main-memory database systems have been researched and built to support real-time transaction processing. Research on indexing for fast transaction support focuses on reducing cache misses or on reducing the memory access latency incurred when cache misses happen. In this paper, we propose a prefetch method for tree index structures to reduce memory access latency. We present a prefetch-efficient pCST-tree and show the superiority of the proposed tree through experiments.
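
The latency-hiding effect that node prefetching targets can be pictured with a toy cost model (not the pCST-tree itself): fetching a node stalls the search unless a prefetch for it overlapped with the key comparisons on the previous node. All cycle counts below are invented for illustration.

```python
# Toy cost model: walking index nodes where each node fetch costs
# MISS_LATENCY cycles unless a prefetch overlapped with the previous
# node's key-comparison work. Purely illustrative numbers.
MISS_LATENCY = 100   # cycles to fetch a node from memory
WORK_PER_NODE = 60   # cycles of comparison work per node

def walk_cost(path_len, prefetch):
    stall = 0
    prefetched = False   # was the node we are about to visit prefetched?
    for _ in range(path_len):
        # A prefetched node's fetch overlaps the previous node's work,
        # hiding up to WORK_PER_NODE cycles of its latency.
        stall += MISS_LATENCY - WORK_PER_NODE if prefetched else MISS_LATENCY
        prefetched = prefetch   # issue a prefetch for the next node
    return stall + path_len * WORK_PER_NODE

print(walk_cost(10, prefetch=False))  # 1600 cycles
print(walk_cost(10, prefetch=True))   # 1060 cycles
```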

A Prefetch Algorithm for a Mobile Host using Association Rules (연관 규칙을 이용한 이동 호스트의 선반입 알고리즘)

  • Kim, Ho-Sook; Yong, Hwan-Seung
    • Journal of KIISE: Databases / v.31 no.2 / pp.163-173 / 2004
  • Recently, location-based services have become very popular in mobile environments. In this paper, we propose a new association-based prefetch algorithm (called STAP) that efficiently supports information services over large spatial databases in mobile environments. We apply spatio-temporal relations that are meaningful for location-based queries in mobile environments. Moreover, STAP considers the user's mobility and the weight of spatial data. The relationship between services is an aspect not considered in previous cache policies; STAP is the first prefetch algorithm to consider spatio-temporal relations, giving cache policy a new dimension. We evaluate the performance of STAP and demonstrate its efficiency.
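
A hedged sketch of association-rule-driven prefetching in the spirit of this abstract: rules mined offline map a current request to likely next requests with a confidence value, and only consequents above a threshold are prefetched. STAP's spatio-temporal relations, mobility handling, and data weights are not modeled; the rule table and names are illustrative.

```python
# Rules of the form: current request -> [(likely next request, confidence)],
# e.g. mined offline with Apriori over past request sequences.
RULES = {
    "map:sector_12": [("map:sector_13", 0.8), ("poi:gas_stations", 0.4)],
    "poi:gas_stations": [("poi:fuel_prices", 0.7)],
}

def prefetch_candidates(current_request, min_confidence=0.5):
    """Items worth prefetching into the client cache after this request."""
    return [item for item, conf in RULES.get(current_request, [])
            if conf >= min_confidence]

print(prefetch_candidates("map:sector_12"))  # ['map:sector_13']
```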

Resolving Memory Bottlenecks in Hardware Accelerators with Data Prefetch

  • Hyein Lee; Jinoo Joung
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.1-12 / 2024
  • Deep learning with faster and more accurate results requires large amounts of storage space and computation. Accordingly, many studies use hardware accelerators for quick and accurate calculations. However, a performance bottleneck arises from data movement between the hardware accelerator and the CPU. In this paper, we propose a data prefetch strategy that can efficiently reduce such bottlenecks. The core idea is to predict the data needed for the next task and upload it to local memory while the hardware accelerator (a Matrix Multiplication Unit, MMU) performs the current task. This strategy can be enhanced with a dual buffer that allows read and write operations to be performed simultaneously, reducing the latency and execution time of data transfers. Through simulations, we demonstrate a 24% improvement in hardware accelerator performance by maximizing parallel processing with dual buffers and reducing memory bottlenecks with data prefetch.
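
The dual-buffer idea can be sketched as follows: while the accelerator computes on one buffer, a loader thread fills the other, so the transfer of the next tile overlaps the current matrix multiplication. The functions below merely stand in for DMA and the MMU; they are assumptions, not the paper's implementation.

```python
import threading

def load(buf_id):        # stands in for DMA of the next data tile into local memory
    print(f"loading tile into buffer {buf_id}")

def compute(buf_id):     # stands in for the MMU's matrix multiplication
    print(f"computing on buffer {buf_id}")

def run(num_tiles):
    load(0)              # prime the first buffer before the loop
    for t in range(num_tiles):
        cur, nxt = t % 2, (t + 1) % 2
        loader = None
        if t + 1 < num_tiles:
            # Prefetch the next tile into the idle buffer, in parallel
            # with the computation on the current buffer.
            loader = threading.Thread(target=load, args=(nxt,))
            loader.start()
        compute(cur)
        if loader:
            loader.join()   # ensure the next tile is ready before swapping

run(4)
```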

A Dynamic Prefetch Filtering Scheme to Enhance the Usefulness of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk; Lee Byung-Kwon; Lee Chun-Hee; Kim Suk-Il; Jeon Joong-Nam
    • The KIPS Transactions: Part A / v.13A no.2 s.99 / pp.123-136 / 2006
  • Prefetching is an effective way to reduce the latency caused by memory access. However, excessively aggressive prefetching not only causes cache pollution, canceling out the benefits of prefetching, but also increases bus traffic, degrading overall performance. In this paper, a prefetch filtering scheme is proposed that dynamically decides whether to commence prefetching by consulting a filtering table, thereby reducing the cache pollution due to unnecessary prefetches. First, a prefetch hashing table 1bitSC filtering scheme (PHT1bSC) is analyzed to expose the problems of the conventional scheme: like the conventional scheme it uses N:1 mapping, but each entry holds a two-state 1-bit value. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT) is then proposed as the main idea of this paper and exhibits the most exact filtering performance: its table has the same length as in the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and recently referenced data block addresses are mapped 1:1 onto entries of the filter table. We simulate commonly used prefetch schemes on general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme shows an improvement of up to 22%, and its cache miss ratio is 7.9% lower than that of the conventional PHT2bSC thanks to its enhanced filtering accuracy. The MADT of the proposed PBALT scheme is also decreased by 6.1% compared with conventional schemes, reducing the total execution time.
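
A simplified sketch of table-based prefetch filtering as described above: before a prefetch is issued, its block address is looked up in a table of recently referenced blocks, and the prefetch is suppressed on a hit, since the block is likely already cached. The FIFO-evicted table below stands in for the paper's 1:1 lookup table; the hashed 1-bit-state variants are not modeled.

```python
class PrefetchFilter:
    """Suppress prefetches for blocks referenced recently (a toy 1:1
    lookup table in the spirit of PBALT; not the paper's structure)."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.recent = {}      # block address -> insertion tick (FIFO eviction)
        self.tick = 0

    def record_reference(self, block):
        if block not in self.recent and len(self.recent) >= self.capacity:
            del self.recent[min(self.recent, key=self.recent.get)]
        self.recent[block] = self.tick
        self.tick += 1

    def should_prefetch(self, block):
        # A hit means the block was referenced recently and is likely
        # already cached: prefetching it again would only pollute the cache.
        return block not in self.recent

f = PrefetchFilter()
f.record_reference(0x40)
print(f.should_prefetch(0x40), f.should_prefetch(0x44))  # False True
```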

A File System for User Special Functions using Speed-based Prefetch in Embedded Multimedia Systems (임베디드 멀티미디어 재생기에서 속도기반 미리읽기를 이용한 사용자기능 지원 파일시스템)

  • Choe, Tae-Young; Yoon, Hyeon-Ju
    • Journal of KIISE: Computing Practices and Letters / v.14 no.7 / pp.625-635 / 2008
  • Portable multimedia players have different properties from general multimedia file servers: single-user ownership, relatively low hardware performance, I/O bursts caused by user special functions, and short software development cycles. Though suitable for processing multiple user requests at a time, general multimedia file systems are not efficient for special user functions such as fast forward/backward. Some methods have been proposed to improve performance and functionality by having application programs give prediction hints to the file system; unfortunately, they require modification and recompilation of all applications. In this paper, we present a file system that efficiently supports user special functions in embedded multimedia systems using file block allocation, buffer cache, and prefetch. The prefetch algorithm, SPRA (SPeed-based PRefetch Algorithm), predicts the next block using I/O patterns instead of hints from applications; because it resides in the file system, it does not affect the application development process. An experimental implementation and comparison with Linux readahead-based algorithms show that the proposed system achieves a 4.29% to 52.63% improvement in turnaround time and 1.01 to 3.09 times the throughput on average.
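
A minimal sketch of speed-based read-ahead in the spirit of SPRA: the file system estimates the playback stride from recent block requests and prefetches along that stride, so a 2x fast-forward prefetches every other block rather than the sequential next ones. The function names and the averaging heuristic are assumptions, not the published algorithm.

```python
def estimate_stride(recent_blocks):
    """Playback stride: 1 = normal play, 2 = 2x fast-forward, -1 = rewind."""
    diffs = [b - a for a, b in zip(recent_blocks, recent_blocks[1:])]
    return round(sum(diffs) / len(diffs)) if diffs else 1

def blocks_to_prefetch(recent_blocks, depth=4):
    stride = estimate_stride(recent_blocks)
    return [recent_blocks[-1] + stride * i for i in range(1, depth + 1)]

# The player touched blocks 10, 12, 14 -> it is fast-forwarding at stride 2.
print(blocks_to_prefetch([10, 12, 14]))  # [16, 18, 20, 22]
```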

A Study on the Prediction Accuracy Bounds of Instruction Prefetching (명령어 선인출 예측 정확도의 한계에 관한 연구)

  • Kim, Seong-Baeg; Min, Sang-Lyul; Kim, Chong-Sang
    • Journal of KIISE: Computer Systems and Theory / v.27 no.8 / pp.719-729 / 2000
  • Prefetching aims at reducing memory latency by fetching, in advance, data that are likely to be requested by the processor in the near future. The effectiveness of prefetching is determined by how accurately the needed instructions and data are predicted. Most previous studies on prefetching were limited to proposing a particular prefetch scheme and evaluating its performance, paying little attention to the theoretical aspects of prefetching. This paper focuses on the theoretical aspects of instruction prefetching. For this purpose, we propose a clairvoyant prefetch model that makes use of perfect history information. Based on this theoretical model, we analyze upper limits on the prefetch prediction accuracies for the SPEC benchmarks. The results show that the prefetch prediction accuracy is very high when there is no cache. However, as the size of the instruction cache increases, the prediction accuracy drops drastically. For example, in the case of the spice benchmark, it drops from 53% to 39% when the cache size increases from 2 Kbytes to 16 Kbytes (assuming a 16-byte block size). These results indicate that as the cache size increases, most localities are captured by the cache, and that instruction prefetching based on information extracted from references that missed in the cache suffers from prediction inaccuracies.
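
The experimental setup can be pictured with a small trace replay: run an address trace through an LRU instruction cache and keep only the misses, which form the residual stream a miss-driven prefetcher must predict; as the cache grows, the easy local references vanish from that stream, which is why the measured accuracy bound falls. The sketch below models only this filtering step, with an invented trace and sizes.

```python
from collections import OrderedDict

def miss_stream(trace, cache_blocks, block_size=16):
    """Replay an address trace through a fully associative LRU cache and
    return the block addresses that missed (what a prefetcher would see)."""
    cache, misses = OrderedDict(), []
    for addr in trace:
        block = addr // block_size
        if block in cache:
            cache.move_to_end(block)        # LRU update on a hit
        else:
            misses.append(block)
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict the least recently used
    return misses

trace = [0, 4, 8, 64, 0, 4, 128, 64, 0]
print(len(miss_stream(trace, cache_blocks=2)))  # 5: small cache, rich miss stream
print(len(miss_stream(trace, cache_blocks=8)))  # 3: locality captured by the cache
```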

Performance improvement on mobile devices using MVC+Prefetch Controller Pattern (MVC+Prefetch Controller 패턴을 사용한 모바일 기기의 성능향상 기법)

  • Im, Byung-Jai; Lee, Eun-Seok
    • The KIPS Transactions: Part D / v.18D no.3 / pp.179-184 / 2011
  • Current mobile devices have outgrown their role as mere communication tools and become smart devices with many additional features. These features support users' smart lives but are constrained by low-performance processors and short battery life. The issues could be resolved by implementing higher-performing hardware, but at the burden of high cost. This paper introduces a new way of managing computing resources in a mobile device by enhancing the quality of human-computer interaction. The speed perceived by users is mainly determined by the time from a user's input to the device displaying the completed result on the screen. Since mobile device screens are small, if the processor fetches only the data needed to render the screen, this time can be significantly reduced. The MVC+Prefetch Controller pattern accomplishes this by fetching the minimum amount of data from the DB needed for display while still supporting high-speed data transfer for a seamless display. The idea was validated on a Samsung S8500 mobile phone, demonstrating superior performance from the user's perspective.
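
A rough shape of the pattern (illustrative class names, not the paper's code): the controller asks the model only for the rows that fit the visible screen and warms a cache with the next screenful in the background, so a scroll completes without a synchronous DB round trip.

```python
import threading

SCREEN_ROWS = 8   # a small mobile screen shows only a few rows at once

class Model:
    def __init__(self, rows):
        self.rows = rows
    def fetch(self, start, count):        # stands in for a DB query
        return self.rows[start:start + count]

class PrefetchController:
    """Fetch only what fits the screen; prefetch the next page early."""
    def __init__(self, model):
        self.model, self.cache = model, {}

    def page(self, n):
        data = self.cache.pop(n, None)
        if data is None:                  # cold: synchronous fetch
            data = self.model.fetch(n * SCREEN_ROWS, SCREEN_ROWS)
        # Warm the cache for the page the user will likely scroll to next.
        threading.Thread(target=self._prefetch, args=(n + 1,)).start()
        return data

    def _prefetch(self, n):
        self.cache[n] = self.model.fetch(n * SCREEN_ROWS, SCREEN_ROWS)

print(PrefetchController(Model(list(range(100)))).page(0))  # rows 0..7
```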

Mechanism for Improving Data Rate on PCI 2.2 Interface (PCI 2.2 Data 전송 효율을 향상시키기 위한 메커니즘)

  • Hyun, Yu-Jin; Sung, Kwang-Soo
    • Proceedings of the IEEK Conference / 2003.07b / pp.807-810 / 2003
  • The PCI 2.2 specification introduces the Delayed Transaction mechanism to improve system performance for target devices with slow local buses. But this mechanism has a limitation, since the target device does not know the prefetch data size. We therefore propose a new mechanism, in which the target device prefetches exactly the needed data from the local bus, to improve the data rate on the PCI and local interfaces. Simulation results show that the proposed mechanism improves system performance more than the Delayed Transaction mechanism.
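
The limitation and the fix can be shown with a toy model: under Delayed Transactions the target must guess how much to prefetch from its slow local bus, so it either wastes local-bus work or forces the master to retry, whereas a target that knows the exact size fetches just that much. All counts and the interface below are inventions for illustration.

```python
def delayed_transaction_fetch(requested_words, guess=16):
    """Target must guess how much to prefetch from the slow local bus."""
    wasted = max(0, guess - requested_words)       # over-fetch: wasted local-bus work
    retries = 1 if guess < requested_words else 0  # under-fetch: master must retry
    return guess, wasted, retries

def exact_size_fetch(requested_words):
    """Proposed style: the target learns the exact transfer size up front."""
    return requested_words, 0, 0

print(delayed_transaction_fetch(24))  # (16, 0, 1): too little fetched, retry needed
print(delayed_transaction_fetch(8))   # (16, 8, 0): 8 words fetched in vain
print(exact_size_fetch(24))           # (24, 0, 0): no waste, no retry
```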

Implementation of a Prefetch method for Secondary Index Scan in MySQL InnoDB Engine (MySQL InnoDB엔진의 Secondary Index Scan을 위한 Prefetch 기능 구현)

  • Hwang, Dasom; Lee, Sang-Won
    • Journal of KIISE / v.44 no.2 / pp.208-212 / 2017
  • Flash SSDs have many advantages over hard disks, such as energy efficiency, shock resistance, and high I/O throughput. For these reasons, combined with innovative technologies such as 3D-NAND and V-NAND that lower the cost per byte, flash SSDs have been rapidly replacing hard disks in many areas. However, existing database engines, developed mainly with hard disks in mind as the storage medium, cannot fully exploit the characteristics of flash SSDs (e.g., internal parallelism). In this paper, in order to exploit the internal parallelism of modern flash SSDs for faster query processing, we implement a prefetching method using asynchronous I/O as a new functionality for secondary index scans in the MySQL InnoDB engine. Compared with the original InnoDB engine, the proposed prefetching-based scan scheme shows three-fold higher performance with a 16KB page size, and about 4.2-fold higher performance with a 4KB page size.
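
The core idea, issuing the data-page reads of a secondary index scan ahead of time so the SSD's internal parallelism is exercised, can be approximated in user space with posix_fadvise(POSIX_FADV_WILLNEED) hints before the synchronous reads. This is a Linux/Unix-only sketch under an assumed page numbering, not the InnoDB patch itself.

```python
import os

PAGE_SIZE = 16 * 1024   # assumed InnoDB-style page size

def scan_pages_with_prefetch(path, page_numbers):
    """Read the given pages, first hinting the kernel to fetch them all,
    so a flash SSD can serve the requests in parallel."""
    fd = os.open(path, os.O_RDONLY)
    try:
        for p in page_numbers:   # page numbers as the secondary index yields them
            os.posix_fadvise(fd, p * PAGE_SIZE, PAGE_SIZE,
                             os.POSIX_FADV_WILLNEED)   # asynchronous readahead hint
        return [os.pread(fd, PAGE_SIZE, p * PAGE_SIZE) for p in page_numbers]
    finally:
        os.close(fd)
```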