• Title/Summary/Keyword: 2-level data cache


Improving Flash Translation Layer for Hybrid Flash-Disk Storage through Sequential Pattern Mining based 2-Level Prefetching Technique (하이브리드 플래시-디스크 저장장치용 Flash Translation Layer의 성능 개선을 위한 순차패턴 마이닝 기반 2단계 프리패칭 기법)

  • Chang, Jae-Young;Yoon, Un-Keum;Kim, Han-Joon
    • The Journal of Society for e-Business Studies / v.15 no.4 / pp.101-121 / 2010
  • This paper presents an intelligent prefetching technique that significantly improves the performance of hybrid flash-disk storage, a combination of flash memory and a hard disk. Since the flash memory embedded in a hybrid device is much faster than the hard disk in terms of I/O operations, it can be utilized as 'cache' space to improve system performance. The basic strategy for prefetching is sequential pattern mining, which extracts object access patterns from historical access sequences. Two techniques enhance the performance of hybrid storage with prefetching: one modifies the FAST algorithm for mapping the flash memory, and the other extends the unit of prefetching from the file level down to the block level so that flash memory space is used effectively. The proposed technique is evaluated with experiments on synthetic data and real UCC data, which demonstrate its usability.
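
The block-versus-file prefetching decision described in the abstract can be pictured with a small sketch. The pattern table, support threshold, buffer sizes, and the file/block fallback below are illustrative assumptions, not the authors' implementation:

```c
/* Hypothetical sketch: choosing what to prefetch into the flash "cache"
 * from mined sequential patterns. All names and thresholds are assumed. */
#include <stdio.h>
#include <string.h>

#define FLASH_FREE_BLOCKS 128          /* assumed free space in flash     */
#define FILE_BLOCKS        96          /* assumed size of a whole file    */
#define MIN_SUPPORT        0.30        /* assumed minimum pattern support */

struct pattern {                       /* "after obj_a, obj_b follows"    */
    const char *obj_a;
    const char *obj_b;
    double      support;               /* frequency mined from history    */
};

/* A toy pattern table that sequential pattern mining might produce. */
static const struct pattern patterns[] = {
    { "clip_001", "clip_002", 0.62 },
    { "clip_001", "clip_007", 0.41 },
    { "clip_002", "clip_003", 0.18 },
};

/* On an access to `just_accessed`, prefetch likely successors.
 * Whole files are prefetched while flash space allows; otherwise only the
 * leading blocks of the file are staged (the two-level idea). */
static void prefetch_after(const char *just_accessed)
{
    int free_blocks = FLASH_FREE_BLOCKS;

    for (size_t i = 0; i < sizeof patterns / sizeof patterns[0]; i++) {
        const struct pattern *p = &patterns[i];
        if (strcmp(p->obj_a, just_accessed) != 0 || p->support < MIN_SUPPORT)
            continue;

        if (free_blocks >= FILE_BLOCKS) {          /* file-level prefetch  */
            printf("prefetch FILE  %s (support %.2f)\n", p->obj_b, p->support);
            free_blocks -= FILE_BLOCKS;
        } else if (free_blocks > 0) {              /* block-level prefetch */
            printf("prefetch FIRST %d blocks of %s (support %.2f)\n",
                   free_blocks, p->obj_b, p->support);
            free_blocks = 0;
        }
    }
}

int main(void)
{
    prefetch_after("clip_001");
    return 0;
}
```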

Real-time Implementation of MPEG-4 HVXC Encoder and Decoder on Floating Point DSP (부동 소수점 DSP를 이용한 MPEG-4 HVXC 인코더 및 디코더의 실시간 구현)

  • Kang, Kyeong-ok;Na, Hoon;Hong, Jin-Woo;Jeong, Dae-Gwon
    • The Journal of the Acoustical Society of Korea / v.19 no.4 / pp.37-44 / 2000
  • This paper describes the real-time implementation of the MPEG-4 audio HVXC (Harmonic Vector eXcitation Coding) algorithm for very low bit rates, whose target applications range from mobile communications to Internet telephony, on a high-performance floating-point TMS320C6701 DSP. A hardware structure suited to real-time operation was adopted. For software optimization, C- and assembly-language level optimizations were applied to time-critical functions. By using the internal program memory of the DSP as a program cache, together with an internal data memory overlap technique and DMA functionality, real-time operation of the HVXC codec was achieved at both 2 kbit/s and 4 kbit/s. For the encoder at 2 kbit/s, the optimization ratio relative to the original code is about 96%. Finally, an informal quality test yielded a subjective quality of MOS 2.45 at 2 kbit/s.
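
The data memory overlap and DMA techniques mentioned above amount to the familiar ping-pong (double-buffered) pattern: fetch the next frame while the current one is encoded. The sketch below assumes hypothetical dma_start/dma_wait hooks (stubbed as plain copies) and a placeholder codec routine; it is not the TMS320C6701 interface or the authors' code:

```c
/* Hypothetical ping-pong DMA sketch: overlap data transfer with computation. */
#include <stddef.h>
#include <string.h>

#define FRAME 160                        /* assumed samples per frame */

static float buf[2][FRAME];              /* two on-chip frame buffers */

/* Stand-in DMA hooks, stubbed as synchronous copies for illustration. */
static void dma_start(float *dst, const float *src, size_t n) { memcpy(dst, src, n * sizeof *dst); }
static void dma_wait(void) { }

/* Stand-in for the time-critical codec routine. */
static void encode_frame(const float *frame) { (void)frame; }

/* Overlap fetching frame f+1 with encoding frame f. */
static void encode_stream(const float *ext_pcm, size_t frames)
{
    int cur = 0;

    dma_start(buf[cur], ext_pcm, FRAME);              /* prime the first frame    */
    for (size_t f = 0; f < frames; f++) {
        dma_wait();                                   /* frame f has arrived      */
        if (f + 1 < frames)                           /* start fetching frame f+1 */
            dma_start(buf[cur ^ 1], ext_pcm + (f + 1) * FRAME, FRAME);
        encode_frame(buf[cur]);                       /* compute while DMA runs   */
        cur ^= 1;                                     /* swap ping/pong buffers   */
    }
}

int main(void)
{
    static float pcm[3 * FRAME];                      /* three dummy input frames */
    encode_stream(pcm, 3);
    return 0;
}
```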


The Bit-Map Trie Structure for Giga-Bit Forwarding Lookup in High-Speed Routers (고속 라우터의 기가비트 포워딩 검색을 위한 비트-맵 트라이 구조)

  • Oh, Seung-Hyun;Ahn, Jong-Suk
    • Journal of KIISE: Information Networking / v.28 no.2 / pp.262-276 / 2001
  • Recently, much research has aimed at developing forwarding tables that support fast routing without employing special hardware or new protocols. This article introduces a new software-based forwarding data structure that enables forwarding lookups at gigabit speed. The forwarding table is known to be a bottleneck of router performance because its lookup complexity grows with the table size. Recent software-based approaches use a Patricia trie and its variants, or hash functions keyed on prefix length, among others. The proposed structure builds the forwarding table as a bit-stream array: a trie is constructed from the routing table prefix entries, and each pointer to a child node and to the associated forwarding table entry is represented with a single bit. Representing the trie structure and routing prefix pointers with linked lists or arrays requires a large amount of memory, but the proposed data structure needs only a small footprint since each item of information is encoded in one bit. In addition, a lookup method that starts searching at a chosen middle level shortens the search path. The introduced data structure, called a bit-map trie, shows that a fast forwarding engine can be implemented on a conventional Pentium processor: the compacted backbone routing table fits into the Level 2 cache of a Pentium II processor, and the search path is shortened. Experiments evaluating the proposed method show that the bit-map trie achieves 5.7 million lookups per second.
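
The one-bit-per-pointer encoding can be sketched as follows. The 4-bit stride, field names, and popcount-based indexing are assumptions used for illustration, not the paper's exact node layout:

```c
/* Hypothetical bit-map trie node: one bitmap marks which of the node's
 * 2^STRIDE branches have children, another marks which carry a forwarding
 * entry. Children and entries are packed in arrays and located by counting
 * set bits, so no per-branch pointers are stored. */
#include <stdint.h>
#include <stdio.h>

#define STRIDE 4                          /* bits of address consumed per node */

struct bmt_node {
    uint16_t child_bits;                  /* bit i set => branch i has a child   */
    uint16_t entry_bits;                  /* bit i set => branch i has an entry  */
    uint16_t child_base;                  /* index of first child in node array  */
    uint16_t entry_base;                  /* index of first entry in entry array */
};

/* Count set bits below position `pos` (GCC/Clang builtin) to find the
 * offset of this branch within the packed array. */
static unsigned rank_below(uint16_t bits, unsigned pos)
{
    return (unsigned)__builtin_popcount(bits & ((1u << pos) - 1u));
}

/* Follow one STRIDE-bit branch; returns the child node index, or -1 if none.
 * If the branch carries a next-hop entry, record it as the best match so far. */
static int step(const struct bmt_node *n, unsigned branch, int *best_entry)
{
    if (n->entry_bits & (1u << branch))
        *best_entry = n->entry_base + (int)rank_below(n->entry_bits, branch);
    if (n->child_bits & (1u << branch))
        return n->child_base + (int)rank_below(n->child_bits, branch);
    return -1;
}

int main(void)
{
    /* Toy node: branches 0x3 and 0xA have children; branch 0xA has an entry. */
    struct bmt_node node = {
        .child_bits = (1u << 0x3) | (1u << 0xA),
        .entry_bits = (1u << 0xA),
        .child_base = 1, .entry_base = 0,
    };
    int best = -1;
    int next = step(&node, 0xA, &best);
    printf("next node index %d, best entry %d\n", next, best);
    return 0;
}
```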


A Bit-Map Trie for the High-Speed Longest Prefix Search of IP Addresses (고속의 최장 IP 주소 프리픽스 검색을 위한 비트-맵 트라이)

  • Oh, Seung-Hyun;Ahn, Jong-Suk
    • Journal of KIISE: Information Networking / v.30 no.2 / pp.282-292 / 2003
  • This paper proposes an efficient data structure for forwarding IPv4 and IPv6 packets at gigabit speeds in backbone routers. The LPM (Longest Prefix Matching) search becomes a bottleneck of router performance since its complexity grows in proportion to the forwarding table size and the address length. To speed up the forwarding process, this paper introduces a data structure named BMT (Bit-Map Trie) that minimizes frequent main-memory accesses. All of the necessary search computations in BMT are performed over a small index table stored in cache. To build the small index table from the trie representation of the forwarding table, BMT represents a link pointer to each child node and a node pointer to the corresponding forwarding table entry with one bit each. To avoid the poor performance of conventional tries when their height grows with the address length, BMT adopts a binary search algorithm for determining the trie level at which to start the lookup. Simulation experiments show that BMT compacts an IPv4 backbone router's forwarding table to less than 512 KB and achieves an average speed of 250 ns per packet on a Pentium II processor, which is almost the same performance as the fastest conventional lookup algorithms.
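
The level-selection step can be illustrated with a binary search over prefix lengths. In the sketch below, level_has_prefix() is a hypothetical stand-in for the cached per-level index that BMT consults, and the marker assumption follows the general binary-search-on-prefix-lengths idea rather than the paper's exact procedure:

```c
/* Hypothetical sketch: binary search over trie levels (prefix lengths) to
 * pick the level from which the longest-match lookup starts, instead of
 * descending from the root one level at a time. */
#include <stdint.h>
#include <stdio.h>

#define MAX_LEVEL 32                       /* IPv4 prefix lengths 1..32 */

/* Stand-in: does any stored prefix of length `level` match the address?
 * Assumes "marker" prefixes were inserted so that a hit means a longer
 * match may still exist at a deeper level. */
static int level_has_prefix(uint32_t addr, int level)
{
    /* toy table: pretend only prefixes of length 8 and 16 are populated */
    static const int populated[MAX_LEVEL + 1] = { [8] = 1, [16] = 1 };
    (void)addr;
    return populated[level];
}

/* Returns the deepest level with a match; the trie walk starts there. */
static int find_start_level(uint32_t addr)
{
    int lo = 1, hi = MAX_LEVEL, best = 0;

    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (level_has_prefix(addr, mid)) {
            best = mid;                    /* match here: try longer prefixes */
            lo = mid + 1;
        } else {
            hi = mid - 1;                  /* no match: try shorter prefixes  */
        }
    }
    return best;
}

int main(void)
{
    printf("start lookup at level %d\n", find_start_level(0x0A000001u));
    return 0;
}
```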