• Title/Summary/Keyword: Prefetch


An Efficient Buffer Cache Management Algorithm based on Prefetching (선반입을 이용한 효율적인 버퍼 캐쉬 관리 알고리즘)

  • Jeon, Heung-Seok;Noh, Sam-Hyeok
    • Journal of KIISE: Computer Systems and Theory / v.27 no.5 / pp.529-539 / 2000
  • This paper proposes a prefetch-based disk buffer management algorithm, which we call W2R (Weighing Room/Waiting Room). Instead of using elaborate prefetching schemes to decide which block to prefetch and when, we simply follow the LRU-OBL (One Block Lookahead) approach and prefetch the logical next block along with the block that is being referenced. The basic difference is that the W2R algorithm logically partitions the buffer into two rooms, namely, the Weighing Room and the Waiting Room. The referenced, hence fetched, block is placed in the Weighing Room, while the prefetched logical next block is placed in the Waiting Room. By so doing, we alleviate some inherent deficiencies of blindly prefetching the logical next block of a referenced block. Specifically, a prefetched block that is never used may replace a possibly valuable block, and a prefetched block, though referenced in the future, may replace a block that is used earlier than itself. We show through trace-driven simulation that, for the workloads and environments considered, the W2R algorithm improves the hit rate by a maximum of 23.19 percentage points compared to the 2Q algorithm and a maximum of 10.25 percentage points compared to the LRU-OBL algorithm.

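The abstract above fully determines only the high-level structure, so the following is a minimal Python sketch of a W2R-style buffer, assuming simple LRU ordering inside each room and arbitrary room sizes; it is an illustration of the two-room idea, not the paper's exact algorithm.

```python
from collections import OrderedDict

class W2RBuffer:
    """Toy W2R-style buffer: a Weighing Room for referenced blocks and a
    Waiting Room for OBL-prefetched blocks (room sizes are arbitrary here)."""

    def __init__(self, weighing_size=8, waiting_size=4):
        self.weighing = OrderedDict()   # LRU order: oldest first
        self.waiting = OrderedDict()
        self.weighing_size = weighing_size
        self.waiting_size = waiting_size
        self.hits = self.refs = 0

    def _evict(self, room, limit):
        while len(room) > limit:
            room.popitem(last=False)    # drop the LRU block

    def reference(self, block):
        self.refs += 1
        if block in self.weighing:
            self.hits += 1
            self.weighing.move_to_end(block)   # refresh LRU position
        elif block in self.waiting:
            self.hits += 1                     # prefetch paid off:
            del self.waiting[block]            # promote to the Weighing Room
            self.weighing[block] = True
        else:
            self.weighing[block] = True        # miss: fetch on demand
        # One Block Lookahead: prefetch the logical next block into the Waiting Room.
        nxt = block + 1
        if nxt not in self.weighing and nxt not in self.waiting:
            self.waiting[nxt] = True
        self._evict(self.weighing, self.weighing_size)
        self._evict(self.waiting, self.waiting_size)

buf = W2RBuffer()
for b in [1, 2, 3, 4, 1, 2, 5, 6, 7, 8, 3, 4]:
    buf.reference(b)
print(f"hit rate: {buf.hits / buf.refs:.2f}")
```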

High Performance Data Cache Memory Architecture (고성능 데이터 캐시 메모리 구조)

  • Kim, Hong-Sik;Kim, Cheong-Ghil
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.945-951 / 2008
  • In this paper, a new high performance data cache scheme that improves exploitation of both spatial and temporal locality is proposed. The proposed data cache consists of a hardware prefetch unit and two sub-caches: a direct-mapped (DM) cache with a large block size and a fully associative buffer with a small block size. Spatial locality is exploited by fetching and storing large blocks into the direct-mapped cache, and is enhanced by prefetching a neighboring block when a DM cache hit occurs. Temporal locality is exploited by storing small blocks from the DM cache in the fully associative buffer, according to their activity in the DM cache, when they are replaced. Experimental results on SPEC2000 programs show that the proposed scheme can reduce the average miss ratio by 12.53%~23.62% and the AMAT by 14.67%~18.60% compared to previous schemes such as the direct-mapped cache, the 4-way set associative cache, and the SMI (selective mode intelligent) cache[8].
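
The comparison above is stated in terms of miss ratio and AMAT; as a quick reminder of how AMAT is usually computed, here is a minimal sketch with illustrative latency numbers that are assumptions, not values from the paper.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers only (cycles); not taken from the paper.
baseline = amat(hit_time=1, miss_rate=0.08, miss_penalty=50)
improved = amat(hit_time=1, miss_rate=0.08 * (1 - 0.20), miss_penalty=50)
print(f"baseline AMAT: {baseline:.2f} cycles")
print(f"improved AMAT: {improved:.2f} cycles "
      f"({100 * (1 - improved / baseline):.1f}% lower)")
```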

Simplified Forensic Analysis Using List of Deleted Files in IoT Environment (사물인터넷 환경에서 삭제된 파일의 목록을 이용한 포렌식 분석 간편화)

  • Lim, Jeong-Hyeon;Lee, Keun-Ho
    • Journal of Internet of Things and Convergence / v.5 no.1 / pp.35-39 / 2019
  • With the rapid development of the information society, the use of digital devices has increased dramatically, and so has the importance of techniques for analyzing them. Digital evidence remains in many places such as Prefetch, Recent, Registry, and Event Log files even after the user has deleted the original data. However, a forensic analyst cannot easily grasp at the outset which files the user actually used. In this paper, we therefore propose a method in which a RemoveList folder is maintained so that information about deleted files can be identified first; information about each deleted file is automatically saved in RemoveList, encrypted with AES. Through this, the analyst can be expected to spend less effort on the initial examination of the user's PC.
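
The abstract does not specify an implementation, but as a loose sketch of automatically recording deleted-file metadata in encrypted form, one might imagine something like the following; the RemoveList path, the record format, and the use of Fernet (AES-based) from the `cryptography` package are all assumptions made for illustration.

```python
import json
import time
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography; AES-based authenticated encryption

REMOVELIST = Path("RemoveList")          # folder name borrowed from the abstract; layout assumed
REMOVELIST.mkdir(exist_ok=True)
key = Fernet.generate_key()              # in practice the key would be managed securely
cipher = Fernet(key)

def record_deletion(file_path: str) -> None:
    """Append an encrypted record describing a file that is about to be deleted."""
    record = {"path": file_path, "deleted_at": time.time()}
    token = cipher.encrypt(json.dumps(record).encode())
    out = REMOVELIST / f"{int(time.time() * 1000)}.bin"
    out.write_bytes(token)

record_deletion(r"C:\Users\demo\Documents\report.docx")
print("records:", [p.name for p in REMOVELIST.iterdir()])
```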

Method of estimating the deleted time of applications using Amcache.hve (앰캐시(Amcache.hve) 파일을 활용한 응용 프로그램 삭제시간 추정방법)

  • Kim, Moon-Ho;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.3 / pp.573-583 / 2015
  • Amcache.hve is a registry hive file related to the Program Compatibility Assistant that stores execution information about applications. From the Amcache.hve file we can determine the execution path and the first execution time as well as the deletion time. Since it records both the first installation time and the deletion time, the Amcache.hve file can be used to draw up an overall timeline of applications when used together with Prefetch files and Iconcache.db files. The Amcache.hve file is also an important artifact for recording traces of anti-forensic programs, portable programs, and external storage devices. This paper illustrates the features of the Amcache.hve file and methods for utilizing it in digital forensics, such as estimating the deletion time of applications.
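
As a hedged illustration of how such a hive is typically read, the snippet below walks an Amcache.hve file with the third-party python-registry package; the key path and value names vary between Windows versions, so the ones used here ("Root\File", value "15" for the full path) are assumptions, not a definitive layout or the paper's method.

```python
from Registry import Registry   # pip install python-registry

def list_amcache_entries(hive_path="Amcache.hve", key_path="Root\\File"):
    """Enumerate application entries recorded in an Amcache.hve hive.

    'Root\\File' is one common layout; newer systems use
    'Root\\InventoryApplicationFile' with different value names.
    """
    reg = Registry.Registry(hive_path)
    root = reg.open(key_path)
    for volume in root.subkeys():
        for entry in volume.subkeys():
            values = {v.name(): v.value() for v in entry.values()}
            # The key's last-written timestamp is what is typically correlated
            # with Prefetch and Iconcache.db when estimating install/delete times.
            yield entry.timestamp(), values

if __name__ == "__main__":
    for ts, vals in list_amcache_entries():
        print(ts, vals.get("15"))   # value "15" often holds the full path (assumption)
```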

A Hardware Cache Prefetching Scheme for Multimedia Data with Intermittently Irregular Strides (단속적(斷續的) 불규칙 주소간격을 갖는 멀티미디어 데이타를 위한 하드웨어 캐시 선인출 방법)

  • Chon Young-Suk;Moon Hyun-Ju;Jeon Joongnam;Kim Sukil
    • Journal of KIISE: Computer Systems and Theory / v.31 no.11 / pp.658-672 / 2004
  • Multimedia applications are required to process a huge amount of data at high speed in real time. Memory reference instructions such as loads and stores are the main factor limiting high-speed execution by the processor. To improve memory reference speed, cache prefetch schemes are used to reduce the cache miss ratio and the total execution time by fetching data into the cache ahead of time when it is expected to be referenced in the future. In this study, we present an advanced data cache prefetching scheme that improves on the conventional RPT (reference prediction table) based scheme. We consider the cache line size when calculating the address stride referenced by the same instruction, and enhance the prefetching algorithm so that the benefit of prefetching is maintained even if an irregular address stride is inserted into a series of uniform strides. According to experimental results on multimedia benchmark programs, the cache miss ratio is improved by 29% on average compared to the conventional RPT scheme, while bus usage increases by a relatively small amount (0.03%).
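
To make the RPT idea concrete, here is a minimal Python sketch of a reference prediction table that tracks per-instruction strides in units of cache lines and tolerates a single intermittent irregular stride before resetting a stream; the line size, confidence threshold, and tolerance are simplifications rather than the scheme in the paper.

```python
LINE_SIZE = 64  # bytes per cache line (assumed)

class RPTEntry:
    def __init__(self, addr):
        self.prev_line = addr // LINE_SIZE
        self.stride = 0          # stride measured in cache lines
        self.confidence = 0      # how many times the stride has repeated
        self.irregular = 0       # irregular strides tolerated so far

class RPT:
    """Toy reference prediction table keyed by the load/store PC."""
    def __init__(self):
        self.table = {}

    def access(self, pc, addr):
        line = addr // LINE_SIZE
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = RPTEntry(addr)
            return None
        stride = line - entry.prev_line
        if stride == entry.stride and stride != 0:
            entry.confidence += 1
            entry.irregular = 0
            entry.prev_line = line
        elif entry.confidence >= 2 and entry.irregular < 1:
            # Tolerate one intermittent irregular access: keep the learned
            # stride and leave the stream state untouched.
            entry.irregular += 1
            return None
        else:
            entry.stride, entry.confidence, entry.irregular = stride, 0, 0
            entry.prev_line = line
            return None
        # Prefetch the next predicted line once the stride has repeated enough.
        if entry.confidence >= 2:
            return (entry.prev_line + entry.stride) * LINE_SIZE
        return None

rpt = RPT()
for a in [0, 256, 512, 768, 4096, 1024, 1280]:   # one irregular access inside a uniform stream
    p = rpt.access(pc=0x400, addr=a)
    print(f"access {a:5d} -> prefetch {p}")
```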

Dual Cache Architecture for Low Cost and High Performance

  • Lee, Jung-Hoon;Park, Gi-Ho;Kim, Shin-Dug
    • ETRI Journal / v.25 no.5 / pp.275-287 / 2003
  • We present a high performance cache structure with a hardware prefetching mechanism that enhances exploitation of spatial and temporal locality. Temporal locality is exploited by selectively moving small blocks into the direct-mapped cache after monitoring their activity in the spatial buffer. Spatial locality is enhanced by intelligently prefetching a neighboring block when a spatial buffer hit occurs. We show that the prefetch operation is highly accurate: over 90% of all prefetches generated are for blocks that are subsequently accessed. Our results show that the system enables the cache size to be reduced by a factor of four to eight relative to a conventional direct-mapped cache while maintaining similar performance.

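A rough sketch, under assumed sizes and a simplified promotion/replacement policy, of the lookup flow the abstract describes: small blocks live in a direct-mapped cache, large blocks in a small fully associative spatial buffer, a neighboring block is prefetched on a spatial-buffer hit, and active blocks are promoted to the direct-mapped cache when evicted from the buffer.

```python
from collections import OrderedDict

LARGE = 256          # spatial-buffer block size in bytes (assumed)
SMALL = 32           # direct-mapped cache block size in bytes (assumed)
DM_SETS = 64
SB_ENTRIES = 8

class DualCache:
    """Toy dual cache: a direct-mapped cache of small blocks plus a small
    fully associative spatial buffer of large blocks with neighbor prefetch."""

    def __init__(self):
        self.dm = {}                                  # set index -> small-block number
        self.sb = OrderedDict()                       # large-block number -> hit count
        self.prefetches = self.hits = self.accesses = 0

    def _dm_insert(self, addr):
        small = addr // SMALL
        self.dm[small % DM_SETS] = small

    def _sb_insert(self, large_addr):
        if large_addr in self.sb:
            return
        if len(self.sb) >= SB_ENTRIES:
            victim, hit_count = self.sb.popitem(last=False)
            if hit_count >= 2:                        # "active" victim: promote its small blocks
                for off in range(0, LARGE, SMALL):
                    self._dm_insert(victim * LARGE + off)
        self.sb[large_addr] = 0

    def access(self, addr):
        self.accesses += 1
        small, large = addr // SMALL, addr // LARGE
        if self.dm.get(small % DM_SETS) == small:
            self.hits += 1
        elif large in self.sb:
            self.hits += 1
            self.sb[large] += 1
            self.sb.move_to_end(large)
            # Spatial-buffer hit: prefetch the neighboring large block.
            self.prefetches += 1
            self._sb_insert(large + 1)
        else:
            self._sb_insert(large)                    # miss: fetch the large block

cache = DualCache()
for a in list(range(0, 2048, 32)) * 2:                # sequential pass, then repeated
    cache.access(a)
print(f"hit rate {cache.hits / cache.accesses:.2f}, prefetches issued {cache.prefetches}")
```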

Design and Analysis of a VOD Buffer Scheduler Using a Fixed Prefetch and Media Drop (고정 선반입과 미디어 Drop을 이용한 VOD 버퍼 스케쥴러의 설계 및 분석)

  • 문병철;박규석
    • Proceedings of the Korean Information Science Society Conference / 2000.04a / pp.110-112 / 2000
  • Because multimedia data compressed with VBR has a bit rate that varies with very large deviation, resource reservation and management is very difficult. Prefetching schemes therefore raise system utilization by prefetching data ahead of overloaded intervals, based on a meta table built by offline analysis of the MPEG data's reference pattern. Existing prefetching schemes do not cause playback failures, but as the prefetch threshold slot grows, buffer occupancy increases and the minimum loading time and loading cost cannot be maintained. In this paper, we therefore propose a method that keeps the loading cost and loading time below a fixed bound while raising the utilization of system resources by fixing the prefetch interval. We also propose restricting the playback failures that arise from using a fixed prefetch interval to the B frames within a GOP, and using a Drop module to distribute the degradation of media quality across all users.

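As a loose sketch of the drop policy described above, the snippet below drops only B frames in an overloaded slot and spreads the drops round-robin over all users; the GOP pattern, frame sizes, and bandwidth budget are illustrative assumptions, not the scheduler in the paper.

```python
GOP = ["I", "B", "B", "P", "B", "B", "P", "B", "B"]   # assumed GOP pattern

def schedule_slot(users, bandwidth_budget, frame_size):
    """Drop only B frames when the slot is overloaded, distributing the drops
    across users so no single stream absorbs all of the quality loss."""
    demand = {u: list(GOP) for u in users}
    total = sum(len(frames) for frames in demand.values()) * frame_size
    overload = max(0, total - bandwidth_budget)
    drops_needed = -(-overload // frame_size)          # ceiling division
    dropped = []
    while drops_needed > 0:
        progressed = False
        for u in users:                                # round-robin over users
            b_frames = [i for i, f in enumerate(demand[u]) if f == "B"]
            if b_frames and drops_needed > 0:
                demand[u].pop(b_frames[-1])
                dropped.append(u)
                drops_needed -= 1
                progressed = True
        if not progressed:
            break                                      # no B frames left to drop
    return demand, dropped

demand, dropped = schedule_slot(users=["u1", "u2", "u3"],
                                bandwidth_budget=20 * 1500, frame_size=1500)
print("drops per user:", {u: dropped.count(u) for u in demand})
```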

Recommendation Method for Mobile Contents Service based on Context Data in Ubiquitous Environment (유비쿼터스 환경에서 상황 데이터 기반 모바일 콘텐츠 서비스를 위한 추천 기법)

  • Kwon, Joon Hee;Kim, Sung Rim
    • Journal of Korea Society of Digital Industry and Information Management / v.6 no.2 / pp.1-9 / 2010
  • The increasing popularity of mobile devices, such as cellular phones, smartphones, and PDAs, has fostered the need to recommend more effective information in ubiquitous environments. We propose a recommendation method for mobile contents services that uses contexts and prefetching in a ubiquitous environment. The proposed method makes it possible to find information relevant to a specific user's contexts and the computing system's contexts, and prefetching is applied so that recommendations reach the user more effectively. The proposed method is conceptually comprised of three main tasks. The first task is to build a prefetching zone based on the user's current contexts. The second task is to extract candidate information for each of the user's contexts. The final task is to prefetch the information considering the mobile device's resources. We describe this new recommendation method.
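
A minimal sketch of the three tasks listed above, with the context model, matching rule, and resource check all invented for illustration rather than taken from the paper:

```python
def build_prefetch_zone(location, radius, items):
    """Task 1: restrict candidates to items near the user's current context."""
    return [i for i in items if abs(i["location"] - location) <= radius]

def extract_candidates(zone, interests):
    """Task 2: keep items matching the user's contexts (here, declared interests)."""
    return [i for i in zone if i["category"] in interests]

def prefetch(candidates, free_kb):
    """Task 3: prefetch as much as the device's free memory allows."""
    fetched, used = [], 0
    for item in sorted(candidates, key=lambda i: i["size_kb"]):
        if used + item["size_kb"] <= free_kb:
            fetched.append(item["name"])
            used += item["size_kb"]
    return fetched

items = [
    {"name": "menu.jpg",   "location": 2, "category": "food",      "size_kb": 300},
    {"name": "map.png",    "location": 3, "category": "transport", "size_kb": 800},
    {"name": "coupon.pdf", "location": 9, "category": "food",      "size_kb": 120},
]
zone = build_prefetch_zone(location=1, radius=4, items=items)
candidates = extract_candidates(zone, interests={"food"})
print(prefetch(candidates, free_kb=500))   # -> ['menu.jpg']
```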

An Adaptable Object Prefetch for Enhancing OODBMS Performance (OODBMS 성능향상을 위한 객체 선인출 전략)

  • An, Jeong-Ho;Kim, Hyeong-Ju
    • Journal of KIISE: Software and Applications / v.26 no.2 / pp.191-202 / 1999
  • In an object-oriented database, the performance of object access can be improved through efficient object prefetching. In this work, we devise a dynamic SEOF (Selective Eager Object Fetch) scheme that performs selective object prefetching at the granularity of a segment without using high-level object semantics. The algorithm considers both the correlation and the frequency of object fetches and, unlike other existing object prefetching schemes, dynamically adjusts the degree of prefetching according to the system load, thereby using the client's memory and swap resources efficiently and improving system performance. The proposed scheme also limits the use of the object buffer to prevent resource exhaustion, and copes effectively with the degree of clustering and the size of the database. In this paper, we evaluate the performance of the proposed algorithm through simulations in various multi-client environments.
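
A small sketch of the load-adaptive aspect described above: the number of segments fetched eagerly shrinks as client memory pressure grows. The thresholds and segment granularity are illustrative assumptions, not the SEOF algorithm itself.

```python
def prefetch_degree(free_memory_ratio, max_segments=4):
    """Scale the number of segments fetched eagerly with available client memory."""
    if free_memory_ratio < 0.1:      # heavy load: no eager fetching
        return 0
    if free_memory_ratio < 0.3:      # moderate load: be conservative
        return 1
    return max_segments              # light load: prefetch aggressively

def fetch_object(oid, segment_of, objects_in, free_memory_ratio):
    """Fetch the requested object plus whole segments chosen by the current load."""
    degree = prefetch_degree(free_memory_ratio)
    segments = [segment_of(oid) + k for k in range(degree)]
    fetched = {oid}
    for seg in segments:
        fetched.update(objects_in(seg))
    return fetched

# Toy database: 10 objects per segment.
segment_of = lambda oid: oid // 10
objects_in = lambda seg: range(seg * 10, seg * 10 + 10)
print(len(fetch_object(42, segment_of, objects_in, free_memory_ratio=0.5)))   # eager: 40 objects
print(len(fetch_object(42, segment_of, objects_in, free_memory_ratio=0.05)))  # loaded: just the one
```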

Efficient Back-end Prefetching Scheme in Cluster-based Web Servers (클러스터 기반 웹 서버에서 실제 서버간 효율적인 선인출 기법)

  • 박선영;박도현;이준원;조정완
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.532-534 / 2001
  • As the number of Internet users grows rapidly, user demand for web services is also increasing. Cluster-based web servers, a topic of recent research, are introduced as a technology that can reliably handle a large volume of web requests. A cluster-based web server consists of several server nodes; when the data for a request arriving at a server node is not in that node's local memory, a disk access or a data transfer from another server node is required. In this paper, we propose a data prefetching scheme between server nodes to reduce service delay in cluster-based web servers. That is, when a user request arrives, the data likely to be requested next is predicted and read into each server's local memory in advance, thereby reducing the service delay time. Simulation results for the three algorithms proposed in this paper show that TAP² (Time and Access Probability-based Prefetch), a prefetch algorithm that considers the access probability of each item and the delay time between user requests, performs best. The service delay is reduced by about 20.1% when the local memory of each server node is made small (8 MB).
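
A rough sketch of the kind of ranking a TAP²-style prefetcher implies from the abstract: documents with high access probability whose expected inter-request gap leaves enough time to fetch them from another node are prefetched first. The exact formula is not given in the abstract, so the weighting below is an assumption.

```python
def tap2_score(access_probability, expected_gap, fetch_time):
    """Favor documents that are likely to be requested and whose expected
    inter-request gap leaves enough time to fetch them from another node."""
    if expected_gap <= fetch_time:
        return 0.0                      # cannot be fetched in time anyway
    return access_probability / expected_gap

def choose_prefetch(candidates, memory_slots, fetch_time):
    ranked = sorted(candidates,
                    key=lambda d: tap2_score(d["prob"], d["gap"], fetch_time),
                    reverse=True)
    return [d["name"] for d in ranked[:memory_slots]]

candidates = [
    {"name": "/index.html", "prob": 0.6, "gap": 2.0},   # seconds between requests (assumed)
    {"name": "/logo.png",   "prob": 0.3, "gap": 0.4},
    {"name": "/news.html",  "prob": 0.2, "gap": 5.0},
]
print(choose_prefetch(candidates, memory_slots=2, fetch_time=0.5))
# -> ['/index.html', '/news.html']
```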