• Title/Summary/Keyword: Cache Data Structures (캐시 자료구조)

Web Traffic Analysis using URL-tree and URL-net (URL-tree와 URL-net를 사용한 인터넷 트래픽 분석)

  • 안광림;김기창
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10a
    • /
    • pp.486-488
    • /
    • 1998
  • Cache servers are used to relieve the congestion caused by growing Internet use. To use a cache server efficiently, it is essential to understand the traffic of the network it serves; that is, traffic analysis must precede the design of a more aggressive caching strategy. This paper proposes two data structures, URL-tree and URL-net, and uses them to analyze web traffic. These structures expose a property of web traffic called 'reference connectivity.' The paper shows how the two data structures help analyze Internet traffic and how that analysis can be used to build efficient caching strategies.

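The abstract does not spell out the structures, but a URL-tree is naturally read as a trie keyed on URL path segments with per-node hit counts, so that related requests (a page and its embedded objects, say) cluster under a common subtree. A minimal sketch along those lines, with all names hypothetical:

```python
from collections import defaultdict
from urllib.parse import urlparse

class URLTreeNode:
    """One URL path segment; children are keyed by the next segment."""
    def __init__(self):
        self.children = defaultdict(URLTreeNode)
        self.hits = 0  # how often this prefix appeared in the trace

class URLTree:
    """Trie over host + path segments of requested URLs."""
    def __init__(self):
        self.root = URLTreeNode()

    def add(self, url):
        parts = urlparse(url)
        node = self.root.children[parts.netloc]
        node.hits += 1
        for seg in filter(None, parts.path.split("/")):
            node = node.children[seg]
            node.hits += 1

tree = URLTree()
for u in ("http://a.com/news/1.html", "http://a.com/news/img/x.gif"):
    tree.add(u)
# Requests under http://a.com/news/ now share one subtree, the kind of
# clustering the paper's 'reference connectivity' points at.
```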

Design and Evaluation of a Web Cache Architecture for Audio-On-Demand Systems (주문형 오디오 시스템을 위한 웹 캐시 구조의 설계 및 평가)

  • Lee, Tae-Won;Shim, Ma-Ro;Bae, Jin-Uk;Lee, Suk-Ho
    • Journal of KIISE:Databases
    • /
    • v.27 no.2
    • /
    • pp.209-215
    • /
    • 2000
  • In on-demand services such as AOD (Audio On Demand) over the Internet, existing operating systems cannot efficiently serve repeatedly requested data. This paper proposes a web cache architecture that predicts the songs likely to be requested in the near future, based on the intervals between past requests for the same song, and keeps those songs in the web cache. As the replacement strategy of the web cache, LFRR (Least Frequently Requested Recently) is proposed. LFRR replaces the song with the lowest probability of being requested in the near future, where the average of the intervals between past requests is used as the estimate of that probability: the smaller the average, the more likely the song is to be requested soon. The web cache dramatically decreases the number of disk accesses, allowing more users to be served with limited resources. Simulation results based on data from an AOD site in operation show that a large performance enhancement is achieved.

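The abstract gives enough of LFRR to sketch it: each song keeps a running average of its inter-request intervals, and the song with the largest average (least likely to be requested soon) is evicted. A rough sketch under those assumptions, with hypothetical names:

```python
class LFRRCache:
    """Sketch of an LFRR-style replacement policy as described in the
    abstract; the implementation details are assumptions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # song id -> cached audio
        self.avg_gap = {}     # song id -> average inter-request interval
        self.gaps = {}        # song id -> number of intervals observed
        self.last_seen = {}   # song id -> time of the previous request

    def request(self, song, load_fn, now):
        # Fold the newest interval into the running average for this song.
        if song in self.last_seen:
            gap = now - self.last_seen[song]
            g = self.gaps[song]
            self.avg_gap[song] = gap if g == 0 else \
                (self.avg_gap[song] * g + gap) / (g + 1)
            self.gaps[song] = g + 1
        else:
            self.gaps[song] = 0
        self.last_seen[song] = now

        if song not in self.data:
            if len(self.data) >= self.capacity:
                # Evict the song least likely to be requested soon: the
                # one with the LARGEST average interval. Songs with no
                # interval yet are treated as least likely (an assumption).
                victim = max(self.data,
                             key=lambda s: self.avg_gap.get(s, float("inf")))
                del self.data[victim]
            self.data[song] = load_fn(song)
        return self.data[song]
```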

Web Caching Strategy based on Documents Popularity (선호도 기반 웹 캐싱 전략)

  • Yoo, Hae-Young;Park, Chel
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.9
    • /
    • pp.530-538
    • /
    • 2002
  • In this paper, we propose a new caching strategy for web servers. The proposed algorithm collects only the statistics of the requested file, such as its popularity, when a request arrives; then, at intervals, only the files with the highest popularity are cached all together. Because the cache remains unchanged until it is rebuilt, the web server can use a very efficient data structure to determine whether a file is in the cache, which greatly increases the efficiency of cache manipulation. Furthermore, an experiment performed with real log files produced by web servers shows that the cache hit ratio is better than that produced by LRU. The proposed algorithm has a drawback: the hit ratio may decrease when the popularity of files not in the cache surges suddenly. In our opinion, however, such surges happen infrequently, and it is easy to adapt web servers to such unusual cases.
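
Because the cache stays read-only between rebuilds, membership can be tested against an immutable structure with no bookkeeping on hits. A minimal sketch of that idea (the rebuild trigger and the counter layout are assumptions, not the paper's exact design):

```python
from collections import Counter

class PopularityCache:
    """Requests only bump counters; the cache itself is rebuilt
    periodically from the most popular files and is immutable (and thus
    cheap to probe) in between."""
    def __init__(self, capacity, rebuild_every=10_000):
        self.capacity = capacity
        self.rebuild_every = rebuild_every
        self.popularity = Counter()
        self.cached = frozenset()   # unchanged between rebuilds
        self.requests = 0

    def on_request(self, path):
        self.popularity[path] += 1
        self.requests += 1
        hit = path in self.cached   # O(1), no per-hit bookkeeping
        if self.requests % self.rebuild_every == 0:
            top = self.popularity.most_common(self.capacity)
            self.cached = frozenset(p for p, _ in top)
        return hit
```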

Performance Analysis of Caching Instructions on SVLIW Processor and VLIW Processor (SVLIW 프로세서와 VLIW 프로세서의 명령어 캐싱에 따른 성능 분석)

  • Ji, Sung-Hyun;Park, No-Kwang;Kim, Suk-Il
    • Journal of IKEEE
    • /
    • v.1 no.1 s.1
    • /
    • pp.101-110
    • /
    • 1997
  • SVLIW processor architectures resolve resource collisions and data dependencies between instructions while scheduling VLIW instructions at run time. As a result, long NOP words can be removed from the object code produced for the processor, so cache misses on an SVLIW processor occur less often than on a VLIW processor with the same cache size. Less frequent cache misses mean less frequent memory accesses, and thus the total execution cycles needed to complete an application are shortened compared with the VLIW processor. This feature eventually compensates for the SVLIW processor's longer instruction pipeline. In this paper, we formulate and compare execution cycle models of the two architectures. Simulation results show that the longer the memory access takes on a cache miss, the shorter the total execution cycles of the SVLIW processor become relative to those of the VLIW processor.

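The abstract's argument can be captured by a toy cycle model; the model's form and all numbers below are illustrative assumptions, not the paper's actual formulation:

```python
def total_cycles(base_cycles, misses, miss_penalty, pipeline_overhead=0):
    """Toy model: useful work + cache-miss stalls + extra pipeline cost."""
    return base_cycles + misses * miss_penalty + pipeline_overhead

# Assumed numbers, for illustration only: SVLIW object code has no long
# NOP words, so fewer cache misses, but it pays extra pipeline stages.
for penalty in (10, 50, 100):
    vliw = total_cycles(1_000_000, misses=20_000, miss_penalty=penalty)
    svliw = total_cycles(1_000_000, misses=12_000, miss_penalty=penalty,
                         pipeline_overhead=200_000)
    print(penalty, "VLIW:", vliw, "SVLIW:", svliw)
# As the miss penalty grows, the saved misses outweigh the fixed
# pipeline overhead, matching the abstract's qualitative claim.
```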

A High-Speed Routing Lookups Using 2-Level Trie (2-Level Trie를 이용한 고속 라우팅 검색)

  • 오승현
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2003.11b
    • /
    • pp.790-793
    • /
    • 2003
  • IP address lookup in a router uses the destination address of each arriving IP packet to find and select the proper output link, and high-speed IP address lookup is an essential part of building high-speed routers. This paper introduces a trie-based data structure for IP address lookup, built around a 2-level trie, that makes high-speed routing lookups possible even on a commodity PC. The 2-level trie builds a forwarding table of minimal size and stores it in fast cache memory, thereby supporting high-speed lookups.

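One common way to realize a 2-level trie over 32-bit addresses is a 2^16-entry first level indexed by the top 16 bits, whose entries either hold a next hop directly or point to a second-level array for the low 16 bits. A sketch under that assumption (the table shape and insertion-order rule are not from the paper):

```python
class TwoLevelTrie:
    """Level 1 is a 2^16 array indexed by the top 16 address bits; an
    entry holds a next hop directly, or a 2^16 second-level array that
    is expanded on demand for prefixes longer than 16 bits."""
    def __init__(self):
        self.level1 = [None] * (1 << 16)

    def insert(self, prefix, plen, nexthop):
        """Insert prefixes in increasing length order, so longer
        prefixes overwrite shorter ones (longest-prefix match)."""
        if plen <= 16:
            span = 1 << (16 - plen)
            base = (prefix >> 16) & ~(span - 1)
            for i in range(base, base + span):
                self.level1[i] = nexthop
        else:
            hi = prefix >> 16
            if not isinstance(self.level1[hi], list):
                # Expand: level 2 inherits the shorter prefix's hop.
                self.level1[hi] = [self.level1[hi]] * (1 << 16)
            span = 1 << (32 - plen)
            lo = (prefix & 0xFFFF) & ~(span - 1)
            chunk = self.level1[hi]
            for i in range(lo, lo + span):
                chunk[i] = nexthop

    def lookup(self, addr):
        entry = self.level1[addr >> 16]
        return entry[addr & 0xFFFF] if isinstance(entry, list) else entry

t = TwoLevelTrie()
t.insert(0x0A000000, 8, "if0")    # 10.0.0.0/8
t.insert(0x0A010000, 24, "if1")   # 10.1.0.0/24
assert t.lookup(0x0A010005) == "if1"   # longest match wins
assert t.lookup(0x0A020005) == "if0"
```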

Performance Evaluation of Caching in PON-based 5G Fronthaul (PON기반 5G 프론트홀의 캐싱 성능 평가)

  • Jung, Bokrae
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.1
    • /
    • pp.22-27
    • /
    • 2020
  • With the deployment of 5G infrastructure, content delivery networks (CDN) will play a key role in providing the explosively growing services of independent media and YouTube, which carry high-speed mobile content. Without a local cache, the mobile backhaul and fronthaul must bear a huge bandwidth burden as the number of direct accesses to content providers grows. To address this issue, this paper first presents fronthaul solutions for CDN that use dark fiber and a passive optical network (PON). On top of that, we propose an aggregated content request scheme specialized for PON caching, and evaluate and compare its performance against legacy schemes through simulation. Compared to the no-caching scheme, the proposed PON caching scheme reduces average access time by up to 0.5 seconds, cuts received request packets to 1/n, and saves 60% of backhaul bandwidth. This work can serve as a useful reference for service providers and will be extended to further improve the cache hit ratio in the future.
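
The abstract does not detail the aggregated content request scheme; one plausible reading is request coalescing at the PON head-end, where identical requests within a window travel upstream once and fan out to all waiters. A loose sketch of that reading only, with hypothetical names:

```python
from collections import defaultdict

class AggregatingCache:
    """Coalesce identical content requests arriving in one window."""
    def __init__(self):
        self.cache = {}                   # content id -> cached data
        self.pending = defaultdict(list)  # content id -> waiting users

    def enqueue(self, content_id, user):
        """Record one downstream user request in the current window."""
        self.pending[content_id].append(user)

    def flush(self, fetch_upstream):
        """End of window: send ONE upstream request per distinct content
        id that misses the cache, then fan the data out to all waiters
        (this is where a 1/n reduction in request packets comes from)."""
        delivered = {}
        for cid, users in self.pending.items():
            if cid not in self.cache:
                self.cache[cid] = fetch_upstream(cid)
            for u in users:
                delivered[u] = self.cache[cid]
        self.pending.clear()
        return delivered

olt = AggregatingCache()
for user in ("onu1", "onu2", "onu3"):
    olt.enqueue("clip-42", user)                    # three downstream requests...
out = olt.flush(lambda cid: f"bytes of {cid}")      # ...one upstream fetch
assert len(out) == 3
```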

Web-Based Distributed Visualization System for Large Scale Geographic Data (대용량 지형 데이터를 위한 웹 기반 분산 가시화 시스템)

  • Hwang, Gyu-Hyun;Yun, Seong-Min;Park, Sang-Hun
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.6
    • /
    • pp.835-848
    • /
    • 2011
  • In this paper, we propose a client-server based distributed/parallel system to visualize huge geographic datasets effectively. The system consists of a web-based client GUI program and a distributed/parallel server program that runs on multiple PC clusters. So that the client program runs on mobile devices as well as PCs, the graphical user interface was built with JOGL, the Java-based OpenGL graphics library; by sending the server information about the currently available memory space and the maximum display resolution, the client lets the server minimize the amount of work it performs. The PC clusters acting as the server read the requested geographic data from distributed disks, re-sample them appropriately, and send the results back to the client. To minimize the latency of repeatedly accessing the distributed geographic data, cache data structures are maintained both on every server node and on the client.
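
The abstract does not describe the cache structures kept on the server nodes and the client; a plain LRU cache at each tier is one simple possibility, sketched here purely as an assumption:

```python
from collections import OrderedDict

class TileCache:
    """Plain LRU cache over (dataset, level, x, y)-style keys."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key, load_fn):
        if key in self.items:
            self.items.move_to_end(key)      # refresh recency
            return self.items[key]
        value = load_fn(key)                 # e.g., read + re-sample
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used
        return value

# Two tiers: a client miss falls through to a server-node cache, which
# falls through to the distributed disks.
client, node = TileCache(256), TileCache(4096)
tile = client.get(("dem", 3, 5, 7),
                  lambda k: node.get(k, lambda k2: f"re-sampled {k2}"))
```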

A Fast IP Lookups using Dynamic Trie Compression (능동적 트라이 압축을 이용한 고속 IP 검색)

  • Oh, Seung-Hyun
    • The KIPS Transactions:PartA
    • /
    • v.10A no.5
    • /
    • pp.453-462
    • /
    • 2003
  • IP address lookup in a router uses the destination address of each IP packet arriving at the router to search for and decide the proper output link. As one of the bottlenecks of router performance, IP address lookup is an essential part of developing the high-speed routers needed for high-speed backbone networks. This paper introduces the DTC data structure, which supports gigabit IP address lookup on a conventional Pentium CPU using only a small amount of memory, through a dynamic trie compression technique. When building a forwarding table by trie compression, DTC dynamically selects the size of the data structure, considering the correlation between table size and search speed. Also, when compressing the prefix trie, DTC minimizes the size of the data structure while reflecting the structure of the trie, so that lookups on the forwarding table run out of the high-speed SRAM cache. In experiments, the DTC data structure achieved up to $12.5{\times}10^5$ LPS (lookups per second) on a conventional Pentium CPU by dynamically building the most suitable compression for a variety of routing tables.
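
The size-versus-speed trade-off can be illustrated with a toy rule that picks the largest root stride whose table still fits a given SRAM budget; this stands in for, and is not, DTC's actual selection logic:

```python
def choose_root_stride(sram_bytes, entry_bytes=4, max_stride=24):
    """Pick the largest root-table stride whose one-level table still
    fits the SRAM budget: a bigger stride means fewer memory probes per
    lookup, but a table that may no longer fit in the fast cache."""
    best = 8
    for stride in range(8, max_stride + 1):
        if (1 << stride) * entry_bytes <= sram_bytes:
            best = stride
    return best

# With a 512 KB SRAM budget and 4-byte entries, a 17-bit root stride
# is the largest that fits (2^17 * 4 bytes = 512 KB).
print(choose_root_stride(512 * 1024))   # -> 17
```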

Improvement of Partial Update for the Web Map Tile Service (실시간 타일 지도 서비스를 위한 타일이미지 갱신 향상 기법)

  • Cho, Sunghwan;Ga, Chillo;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.5
    • /
    • pp.365-373
    • /
    • 2013
  • Tile caching is a commonly used technique that optimizes the delivery of map imagery across the Internet in modern WebGIS systems. However, the poor performance of map tile cache updates is one of the major obstacles to the wider use of this technique for datasets with frequent updates. In this paper, we introduce a new algorithm, Partial Area Cache Update (PACU), that minimizes redundant updates of map tiles when the source map data change very frequently. The performance of the algorithm is verified with the cadastral map data of Pyeongtaek, Gyeonggi Province, where approximately 3,100 changes occur per day among 331,594 parcels. The experimental results show that PACU is 6.6 times faster than ESRI ArcGIS Server®. The algorithm contributes significantly to solving the frequent-update problem and enables Web Map Tile Services for data that require frequent updates.
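
The core of any partial tile update scheme is mapping a changed feature's extent to the tile indices it touches, so only those tiles are re-rendered; the sketch below illustrates that step with assumed constants, not PACU's actual algorithm:

```python
def affected_tiles(bbox, zoom, tile_size=256, resolution0=156543.03):
    """Return the (zoom, x, y) tile indices that intersect a changed
    feature's bounding box. resolution0 is the assumed map units per
    pixel at zoom 0 (a web-mercator-style pyramid)."""
    minx, miny, maxx, maxy = bbox
    res = resolution0 / (2 ** zoom)   # map units per pixel at this zoom
    span = res * tile_size            # map units covered by one tile
    x0, x1 = int(minx // span), int(maxx // span)
    y0, y1 = int(miny // span), int(maxy // span)
    return [(zoom, x, y)
            for x in range(x0, x1 + 1)
            for y in range(y0, y1 + 1)]

# A day's ~3,100 parcel edits mark only the tiles they touch as dirty,
# instead of regenerating the whole tile cache.
dirty = set()
for change_bbox in [(949000.0, 1950000.0, 949120.0, 1950080.0)]:
    for z in range(10, 15):
        dirty.update(affected_tiles(change_bbox, z))
```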

A Performance Improvement of Linux TCP/IP Stack based on Flow-Level Parallelism in a Multi-Core System (멀티코어 시스템에서 흐름 수준 병렬처리에 기반한 리눅스 TCP/IP 스택의 성능 개선)

  • Kwon, Hui-Ung;Jung, Hyung-Jin;Kwak, Hu-Keun;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.16A no.2
    • /
    • pp.113-124
    • /
    • 2009
  • As multicore systems spread, much effort has been put into improving the performance of their applications. Because a multicore system has multiple processing units, its processing power is higher than that of a single-core system. In many cases, however, the advantages of multicore cannot be fully exploited, because the existing software and hardware were designed for a single core. When such software runs on a multicore system, its performance gain is limited by contention for shared resources and by inefficient use of cache memory, so as the number of cores increases it shows no improvement and, in the worst case, performance drops. In this paper, we propose a method of improving the performance of a multicore system by applying flow-level parallelism to the existing TCP/IP network application and operating system. The proposed method sets up the execution environment so that each core operates as independently as possible across the network application, the TCP/IP stack in the operating system, the device driver, and the network interface, and it distributes network traffic to the cores through an L2 switch. This minimizes the sharing of application data, data structures, sockets, device drivers, and network interfaces between cores, minimizes the competition among cores for resources, and increases the cache hit ratio. We implemented the proposed method on an 8-core system and performed experiments. Experimental results show that network access speed and bandwidth increase linearly with the number of cores.
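
The dispatch step of flow-level parallelism can be illustrated by hashing a flow's 5-tuple to a core, so all packets of a connection stay core-local; the paper performs this distribution in the L2 switch and driver, and everything below (names, hash choice, core count) is an illustrative assumption:

```python
import zlib

NUM_CORES = 8

def core_for_flow(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Hash the 5-tuple so every packet of one connection lands on the
    same core, keeping socket and stack state core-local."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    return zlib.crc32(key) % NUM_CORES

# All packets of one connection map to one core: no cross-core sharing
# of the socket, and a warmer per-core cache.
assert core_for_flow("10.0.0.1", 12345, "10.0.0.2", 80) == \
       core_for_flow("10.0.0.1", 12345, "10.0.0.2", 80)
```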