• Title/Summary/Keyword: Data Prefetching

A data prefetching scheme to improve response time of Video Streaming service (비디오 스트리밍 응답 시간 개선을 위한 데이터 사전 배치 방법)

  • Min, Ji-won;Mun, Hyun-su;Lee, Young-seok
    • KNOM Review / v.22 no.1 / pp.52-59 / 2019
  • As video streaming services are supported by a variety of devices, usage has increased, and efforts to improve the service from the user's point of view have continued. When a user watches a video, a response time elapses between input and playback, and the longer this response time becomes, the lower the user's satisfaction with the service. In this paper, we propose a method that reduces the response time by prefetching to the device each user's preferred video data, obtained by analyzing the user's past viewing history. We show that prefetching can improve the response time by up to 41%. We analyzed real video streaming viewing records to obtain each user's preferred video list, and investigated how the response time changes with the hit ratio and with the amount of overhead data that was prefetched to the device but never viewed. The results show that the higher the hit ratio, the greater the improvement in response time.
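
As a rough illustration of the idea (not the paper's implementation; the data model, latency figures, and function names are assumptions for this example), a preference list can be derived from past viewing records and the mean response time estimated from the prefetch hit ratio:

```python
# Minimal sketch: rank videos by a user's viewing history and estimate the
# mean response time for a given prefetch hit ratio. All numbers are
# illustrative, not the paper's measurements.
from collections import Counter

def preference_list(history, top_n=10):
    """history: list of (video_id, watch_seconds); returns the top-n video ids."""
    score = Counter()
    for video_id, seconds in history:
        score[video_id] += seconds
    return [vid for vid, _ in score.most_common(top_n)]

def expected_response_ms(hit_ratio, local_ms=100, remote_ms=800):
    """Mean input-to-playback delay when prefetched data is hit with
    probability hit_ratio (latency values are assumptions)."""
    return hit_ratio * local_ms + (1.0 - hit_ratio) * remote_ms

history = [("v1", 300), ("v2", 50), ("v1", 200), ("v3", 120)]
print(preference_list(history, top_n=2))      # ['v1', 'v3']
print(expected_response_ms(hit_ratio=0.6))    # 380.0 ms vs. 800 ms with no prefetch hit
```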

A Prefetching and Memory Management Policy for Personal Solid State Drives (개인용 SSD를 위한 선반입 및 메모리 관리 정책)

  • Baek, Sung-Hoon
    • The KIPS Transactions: Part A / v.19A no.1 / pp.35-44 / 2012
  • Traditional technologies used to improve the performance of hard disk drives often have negative effects when applied to solid state drives (SSDs). In hard disk drives, which consist of mechanical components, access time and block address sequence are very important performance factors. An SSD, by contrast, provides superior random read performance that is unaffected by block address sequence, owing to the characteristics of flash memory. In practice, it is often recommended to disable prefetching when an SSD is installed in a personal computer. This paper, however, presents a combination of a prefetching scheme and a memory management scheme that considers the internal structure of the SSD and the characteristics of NAND flash memory. An SSD must operate multiple flash memory chips concurrently, and the I/O unit size of NAND flash memory keeps increasing and now exceeds the block size of operating systems; hence, the proposed scheme performs prefetching in the operating unit of the SSD. To complement a weak point of the prefetching scheme, the proposed memory management scheme adaptively evicts uselessly prefetched data so as to maximize the sum of the cache hit rate and the prefetch hit rate. We implemented the proposed schemes as a Linux kernel module and evaluated them using a commercial SSD. The schemes improved I/O performance by up to 26% in the given experiment.
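
A rough sketch of these two ideas, under assumed sizes and with a hypothetical `read_unit` callback (this is not the authors' kernel module): reads are issued in whole SSD operating units, and prefetched blocks that were never referenced are evicted first.

```python
# Sketch: fetch whole SSD operating units on a miss and evict
# prefetched-but-unreferenced blocks before anything else.
from collections import OrderedDict

OS_BLOCK = 4096               # assumed OS block size (bytes)
SSD_UNIT = 64 * 1024          # assumed SSD operating unit spanning several flash chips
BLOCKS_PER_UNIT = SSD_UNIT // OS_BLOCK

class UnitPrefetchCache:
    def __init__(self, read_unit, capacity_blocks=1024):
        self.read_unit = read_unit            # callback issuing one device request per unit
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()           # block_no -> referenced flag, in LRU order

    def read(self, block_no):
        if block_no in self.blocks:           # cache hit or prefetch hit
            self.blocks[block_no] = True
            self.blocks.move_to_end(block_no)
            return
        first = (block_no // BLOCKS_PER_UNIT) * BLOCKS_PER_UNIT
        self.read_unit(first)                 # fetch the whole operating unit at once
        for b in range(first, first + BLOCKS_PER_UNIT):
            self._insert(b, referenced=(b == block_no))

    def _insert(self, block_no, referenced):
        if block_no in self.blocks:
            self.blocks[block_no] = self.blocks[block_no] or referenced
            self.blocks.move_to_end(block_no)
            return
        while len(self.blocks) >= self.capacity:
            self._evict_one()
        self.blocks[block_no] = referenced

    def _evict_one(self):
        # Prefer the oldest block that was prefetched but never used;
        # fall back to plain LRU when every cached block has been referenced.
        for b, used in self.blocks.items():
            if not used:
                del self.blocks[b]
                return
        self.blocks.popitem(last=False)
```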

Dynamical Polynomial Regression Prefetcher for DRAM-PCM Hybrid Main Memory (DRAM-PCM 하이브리드 메인 메모리에 대한 동적 다항식 회귀 프리페처)

  • Zhang, Mengzhao;Kim, Jung-Geun;Kim, Shin-Dug
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.20-23 / 2020
  • This research designs an effective prefetching method for DRAM-PCM hybrid main memory systems, which are used especially for big data applications and massive-scale computing environments. Conventional prefetchers perform well with regular memory access patterns; however, workloads such as graph processing show extremely irregular memory access characteristics and thus cannot be prefetched accurately. Therefore, this research proposes an efficient dynamic prefetching algorithm based on regression. We have designed an intelligent prefetch engine that identifies the characteristics of the memory access sequence, performs regular, linear regression, or polynomial regression predictive analysis based on those characteristics, and dynamically determines the number of pages to prefetch. In addition, we present a DRAM-PCM hybrid memory structure that reduces energy cost and alleviates the thermal problem of conventional DRAM memory systems. Experimental results show that performance increases by 40% compared with a conventional DRAM memory structure.
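
A simplified sketch of the regression-based prediction (the window size, degrees, and thresholds are assumptions, not the authors' engine): a low-degree fit is tried first, the degree is raised if the residual is large, and the prefetch depth shrinks when the pattern is irregular.

```python
# Sketch: fit the recent page-address sequence with linear or polynomial
# regression and predict the pages to prefetch; irregular sequences get
# a smaller prefetch depth.
import numpy as np

def predict_prefetch_pages(recent_pages, max_prefetch=4, tol=0.5):
    """recent_pages: the last k accessed page numbers, oldest first."""
    x = np.arange(len(recent_pages), dtype=float)
    y = np.asarray(recent_pages, dtype=float)
    best_coeffs, best_residual = None, float("inf")
    for degree in (1, 2):                       # try linear first, then quadratic
        coeffs = np.polyfit(x, y, degree)
        residual = float(np.max(np.abs(np.polyval(coeffs, x) - y)))
        if residual < best_residual:
            best_coeffs, best_residual = coeffs, residual
        if residual <= tol:                     # lower degree already fits well
            break
    depth = max_prefetch if best_residual <= tol else 1   # back off for irregular patterns
    future_x = np.arange(len(recent_pages), len(recent_pages) + depth, dtype=float)
    return [int(round(p)) for p in np.polyval(best_coeffs, future_x)]

print(predict_prefetch_pages([10, 12, 14, 16, 18]))   # stride access -> [20, 22, 24, 26]
```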

Design of A Media Processor Equipped with Dual Cache (복수 캐시로 구성한 미디어 프로세서의 설계)

  • Moon, Hyun-Ju;Jeon, Joong-Nam;Kim, Suk-Il
    • Journal of KIISE: Computer Systems and Theory / v.29 no.10 / pp.573-581 / 2002
  • In this paper, we propose a media processor with a dual-cache architecture, composed of a multimedia data cache and a general-purpose data cache, to prevent the performance degradation caused by memory delay. In the proposed processor architecture, multimedia data referenced by subword instructions are loaded into the multimedia data cache, and the remaining data are loaded into the general-purpose data cache. We also use a multi-block prefetching scheme that fetches two consecutive data blocks into the cache at a time to exploit the locality of multimedia data. Experimental results on MPEG and JPEG benchmark programs show that the proposed architecture performs better than a processor equipped with a single data cache.
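
A toy sketch of the routing idea (the block size, cache model, and interfaces are assumptions, not the paper's design): subword multimedia loads go to the multimedia cache, which fetches two consecutive blocks on a miss, while other loads use the general-purpose cache.

```python
# Sketch: a dual data-cache front end where multimedia (subword) accesses
# use a cache with two-block prefetching and all other accesses use a
# general-purpose cache.
BLOCK_BYTES = 64   # assumed cache block size

class ToyCache:
    def __init__(self, prefetch_blocks=0):
        self.prefetch_blocks = prefetch_blocks
        self.resident = set()                 # set of resident block numbers

    def access(self, addr):
        """Return True on a hit; on a miss, fetch the block plus
        prefetch_blocks consecutive blocks after it."""
        blk = addr // BLOCK_BYTES
        hit = blk in self.resident
        if not hit:
            for b in range(blk, blk + 1 + self.prefetch_blocks):
                self.resident.add(b)
        return hit

media_cache = ToyCache(prefetch_blocks=1)     # fetch two consecutive blocks at a time
general_cache = ToyCache()

def load(addr, is_subword_media):
    cache = media_cache if is_subword_media else general_cache
    return cache.access(addr)
```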

An Area Efficient Low Power Data Cache for Multimedia Embedded Systems (멀티미디어 내장형 시스템을 위한 저전력 데이터 캐쉬 설계)

  • Kim Cheong-Ghil;Kim Shin-Dug
    • The KIPS Transactions: Part A / v.13A no.2 s.99 / pp.101-110 / 2006
  • One of the most effective ways to improve cache performance is to exploit both the temporal and the spatial locality exhibited by a program's execution characteristics. This paper proposes a data cache that occupies little space and achieves low power yet high performance for multimedia applications. The basic architecture is a split cache consisting of a direct-mapped cache with a small block size and a fully associative buffer with a large block size. To overcome the disadvantage of the small cache space, two mechanisms are added that exploit the operational behavior of multimedia applications: adaptive multi-block prefetching, which initiates fetches of various sizes, and efficient block filtering, which removes rarely reused data. Simulations on MediaBench show that the proposed 5 KB cache provides equivalent performance and reduces energy consumption by up to 40% compared with a 16 KB 4-way set-associative cache.
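
The two mechanisms might look roughly like the sketch below (the sequentiality thresholds, fetch sizes, and reuse threshold are illustrative assumptions, not the paper's parameters): the fetch size grows with observed sequentiality, and blocks that show no reuse are filtered out of the small cache.

```python
# Sketch: pick a fetch size from recent sequentiality (adaptive multi-block
# prefetching) and keep only blocks that show reuse (block filtering).
from collections import defaultdict

def fetch_size(recent_blocks):
    """recent_blocks: block numbers of the last few accesses, oldest first."""
    sequential = sum(1 for a, b in zip(recent_blocks, recent_blocks[1:]) if b == a + 1)
    if sequential >= 3:
        return 4          # strong spatial locality: fetch four blocks
    if sequential >= 1:
        return 2
    return 1              # irregular pattern: fetch only the demanded block

reuse_count = defaultdict(int)

def keep_in_cache(block_no):
    """Filter rarely reused data: a block stays only after its second touch."""
    reuse_count[block_no] += 1
    return reuse_count[block_no] >= 2

print(fetch_size([7, 8, 9, 10]))   # 4 (fully sequential window)
print(fetch_size([7, 20, 3, 11]))  # 1 (no sequential pairs)
```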

Design of Caching Scheme for Mobile Underground Geospatial Information Map System (모바일용 지하공간정보지도 관리 시스템에서 응답속도 향상을 위한 캐싱 기법)

  • Kim, Yong-Tae;Kouh, Hoon-Joon
    • Journal of Convergence for Information Technology / v.12 no.1 / pp.7-14 / 2022
  • Unlike general maps, underground geospatial information is served by a system built to view underground information in 3D. The system manages tile maps to lighten the data, but the various underground structures are 3D data, so the data size is large. Therefore, when a client mobile program requests a tile map, the service server fetches the requested tile map from the DB server and transmits it to the client, which introduces a transmission delay problem. In this paper, we design a tile caching method to improve the response speed for the tile map data provided to the client in a mobile underground geospatial information system. We propose a method in which the service server predicts and prefetches the next tile map while the client is viewing the current one and stores the prefetched data in the memory of the client's mobile terminal, thereby resolving the transmission delay problem.
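
A small sketch of the prediction step (the tile keys, movement model, and `fetch_tile` callback are hypothetical, not the paper's API): while the current tile is being viewed, the tiles ahead of the client's movement are fetched into the client's in-memory cache.

```python
# Sketch: predict the next tiles from the client's movement and prefetch
# them into the mobile terminal's in-memory cache.
def predicted_tiles(x, y, dx, dy):
    """Tiles ahead of the movement direction (dx, dy); the 4-neighbourhood
    is used when the client is not moving."""
    if dx == 0 and dy == 0:
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    ahead = {(x + dx, y + dy), (x + dx, y), (x, y + dy)}
    ahead.discard((x, y))                     # never "prefetch" the tile already shown
    return list(ahead)

class TileCache:
    def __init__(self, fetch_tile):
        self.fetch_tile = fetch_tile          # callback that requests a tile from the service server
        self.tiles = {}                       # (x, y) -> tile data kept in client memory

    def view(self, x, y, dx=0, dy=0):
        if (x, y) not in self.tiles:          # cache miss: pay the transmission delay once
            self.tiles[(x, y)] = self.fetch_tile(x, y)
        for nx, ny in predicted_tiles(x, y, dx, dy):
            if (nx, ny) not in self.tiles:    # prefetch while the user is still looking
                self.tiles[(nx, ny)] = self.fetch_tile(nx, ny)
        return self.tiles[(x, y)]
```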

Data Prefetching and Streaming for Improving the Performance of Mapreduce of Hadoop (하둡 맵리듀스 성능 향상을 위한 데이터 프리패칭과 스트리밍)

  • Lee, Jung June;Kim, Kyung Tae;Youn, Hee Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2015.01a / pp.151-154 / 2015
  • With the recent emergence of social networks, bio-computing, and the Internet of Things, far more data is being generated than in conventional IT environments, and research on efficient techniques for processing such large volumes of data is under way. MapReduce is a programming model well suited to data-intensive applications; a representative MapReduce implementation is Hadoop, developed and maintained by the Apache Software Foundation. This paper proposes a data prefetching technique and a streaming technique to improve the performance of Hadoop MapReduce. One of Hadoop MapReduce's performance issues is the job delay caused by transferring input data during the MapReduce process. To minimize this transfer time, and unlike conventional MapReduce, we create a separate prefetching thread dedicated to data transfer, so that data can be transferred even while MapReduce tasks are running and the total processing time is reduced. Even with this prefetching technique, the nature of Hadoop MapReduce means that tasks still wait for the initial data transfer; to shorten this wait, we apply a streaming technique that further reduces the waiting time caused by data transfer. We evaluated the proposed techniques with a mathematical model, and the results confirm that MapReduce with both prefetching and streaming outperforms conventional Hadoop MapReduce as well as MapReduce with prefetching alone.
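
A bare-bones sketch of the overlap idea (the split model, callbacks, and queue depth are assumptions; it is not the authors' Hadoop modification): a dedicated prefetching thread pulls the next input splits while the worker is still processing the current one.

```python
# Sketch: a prefetching thread transfers upcoming input splits into a bounded
# queue so that data transfer overlaps with map-side processing instead of
# blocking it.
import threading
import queue

def run_job(split_ids, fetch_split, process_split, queue_depth=2):
    """fetch_split simulates pulling an input split over the network;
    process_split is the map-side work done on that split."""
    splits = queue.Queue(maxsize=queue_depth)

    def prefetcher():
        for split_id in split_ids:
            splits.put(fetch_split(split_id))   # blocks only when the queue is full
        splits.put(None)                        # end-of-input marker

    threading.Thread(target=prefetcher, daemon=True).start()
    results = []
    while True:
        data = splits.get()                     # only the first split causes a real wait
        if data is None:
            break
        results.append(process_split(data))
    return results
```

The streaming idea described in the abstract would go further and hand over each split in chunk-sized pieces so that processing can start before the first split has fully arrived; that refinement is omitted from the sketch for brevity.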

A Study of the Improvement of Execution Speed and Loading of Java Card Program by applying prefetching LRU-OBL Buffer Technique (선반입 LRU-OBL 버퍼 기법을 적용한 자바 카드 프로그램 적재 및 실행 속도 개선에 관한 연구)

  • Oh, Se-Won;Choi, Won-Ho;Jung, Min-Soo
    • Journal of Korea Multimedia Society / v.10 no.9 / pp.1197-1208 / 2007
  • These days, the Java Card platform, adopted by most smart cards, has established itself as a standard. Java Card technology provides applet installation, platform portability, and strong security functions to smart cards. Compared with an ordinary smart card, however, a Java Card has the drawback of low execution speed, caused by distinctive features of the Java programming language. Factors that affect Java Card execution speed include how data are stored and how applets are installed by the Java Card installer. In this paper, we present a scheme to improve the loading and execution speed of Java Card programs. In a Java Card program, the speed of writing, updating, and deleting data in EEPROM can be improved by using high-speed RAM. To this end, we present a prefetching LRU-OBL buffer cache technique that uses RAM and is suited to the Java Card environment. By managing all data created by the Java Card in the buffer cache according to its characteristics, the number of EEPROM write operations is minimized, so that the loading and execution speed of Java Card programs is improved.
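
An illustrative sketch of an LRU buffer with one-block-lookahead (OBL) prefetching (the EEPROM interface, block granularity, and capacity are assumptions, not the paper's implementation): reads also pull in the next block, writes stay in RAM, and EEPROM is written only when a dirty block is evicted.

```python
# Sketch: LRU buffer cache with one-block-lookahead prefetching that batches
# writes in RAM and writes back to EEPROM only on eviction.
from collections import OrderedDict

class LruOblBuffer:
    def __init__(self, eeprom, capacity=8):
        self.eeprom = eeprom                  # object exposing read_block / write_block
        self.capacity = capacity
        self.cache = OrderedDict()            # block_no -> (data, dirty), LRU order

    def read(self, block_no):
        data = self._get(block_no)
        self._get(block_no + 1)               # OBL: prefetch the next block
        return data

    def write(self, block_no, data):
        self._get(block_no)                   # ensure residency, then mark dirty
        self.cache[block_no] = (data, True)   # stays in RAM until eviction

    def _get(self, block_no):
        if block_no not in self.cache:
            if len(self.cache) >= self.capacity:
                old, (old_data, dirty) = self.cache.popitem(last=False)
                if dirty:                     # write back only when necessary
                    self.eeprom.write_block(old, old_data)
            self.cache[block_no] = (self.eeprom.read_block(block_no), False)
        self.cache.move_to_end(block_no)
        return self.cache[block_no][0]
```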

Transmission and Rendering of Massive Terrain Data in Network Environment (네트웍 환경에서의 대규모 지형 데이터 전송 및 렌더링)

  • 김대성;한정현
    • Proceedings of the Korean Information Science Society Conference / 2003.04c / pp.184-186 / 2003
  • This paper proposes a multi-resolution technique and a prefetching technique for terrain navigation in a network environment using massive terrain data. DEM data in the form of right-isosceles-triangle meshes, which are widely used for terrain rendering, are reorganized into equilateral-triangle mesh data and structured into multiple resolutions, which mitigates the bandwidth and latency problems that are the main issues in a network environment. The proposed technique can be applied to online games and other applications that use 3D terrain data.
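
As a bare-bones illustration of the two ideas (the distance thresholds, tile keys, and movement model are assumptions, not the paper's structure): coarser mesh levels are requested for distant terrain, and tiles along the viewing direction are queued for prefetching before they become visible.

```python
# Sketch: choose a terrain tile's resolution level by viewer distance and
# queue prefetch requests for tiles along the viewing direction.
def lod_for_distance(distance_m):
    """Coarser mesh levels keep transmissions small for far-away terrain."""
    if distance_m < 500:
        return 0          # finest mesh
    if distance_m < 2000:
        return 1          # half resolution
    return 2              # coarsest level

def prefetch_requests(camera_tile, view_dir, tile_size_m=800.0, depth=3):
    """Return (tile, lod) pairs for tiles ahead of the camera."""
    (cx, cy), (dx, dy) = camera_tile, view_dir
    return [((cx + dx * i, cy + dy * i), lod_for_distance(tile_size_m * i))
            for i in range(1, depth + 1)]

print(prefetch_requests((10, 10), (1, 0)))
# [((11, 10), 1), ((12, 10), 1), ((13, 10), 2)]
```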

Location-Based Services for Dynamic Range Queries

  • Park Kwangjin;Song Moonbae;Hwang Chong-Sun
    • Journal of Communications and Networks / v.7 no.4 / pp.478-488 / 2005
  • To conserve the usage of energy, indexing techniques have been developed in a wireless mobile environment. However, the use of interleaved index segments in a broadcast cycle increases the average access latency for the clients. In this paper, we present the broadcast-based location dependent data delivery scheme (BBS) for dynamic range queries. In the BBS, broadcasted data objects are sorted sequentially based on their locations, and the server broadcasts the location dependent data along with an index segment. Then, we present a data prefetching and caching scheme, designed to reduce the query response time. The performance of this scheme is investigated in relation to various environmental variables, such as the distributions of the data objects, the average speed of the clients, and the size of the service area.