• Title/Summary/Keyword: caching algorithm


Reducing Outgoing Traffic of Proxy Cache by Using Client-Cluster

  • Kim Kyung-Baek;Park Dae-Yeon
    • Journal of Communications and Networks
    • /
    • v.8 no.3
    • /
    • pp.330-338
    • /
    • 2006
  • Many web cache systems and policies concerning them have been proposed. These studies, however, consider large objects less useful than small objects in terms of performance and evict them as soon as possible. Even if this approach increases the hit rate, the byte hit rate decreases, and connections over congested links to outside networks waste more bandwidth obtaining large objects. This paper puts forth a client-cluster approach for improving the web cache system. The client-cluster is composed of the residual resources of clients and uses them as exclusive storage for large objects. The proposed system achieves not only a high hit rate but also a high byte hit rate, while reducing outgoing traffic. A distributed hash table (DHT) based peer-to-peer lookup protocol is used to manage the client-cluster. Owing to the natural characteristics of this protocol, the proposed system with the client-cluster is self-organizing, fault-tolerant, well-balanced, and scalable. Additionally, the large objects are managed with an index-based allocation method, which balances the load across all clients. The performance of the cache system is examined via a trace-driven simulation, and an effective enhancement of proxy cache performance is demonstrated.
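
A minimal sketch of how a DHT-style lookup could map a large object to a member of the client-cluster, in the spirit of the abstract above. The consistent-hashing ring, the client names, and the single-replica placement are illustrative assumptions, not the paper's exact protocol.

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    """Map a key (object URL or client id) onto the DHT ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ClientClusterRing:
    """Consistent-hashing ring: each client stores the large objects
    whose hashes fall in the arc it is responsible for."""

    def __init__(self, clients):
        self._ring = sorted((_hash(c), c) for c in clients)
        self._keys = [k for k, _ in self._ring]

    def locate(self, object_url: str) -> str:
        """Return the client responsible for storing this object."""
        h = _hash(object_url)
        idx = bisect_right(self._keys, h) % len(self._ring)
        return self._ring[idx][1]

# Example: the proxy evicts a large object into the client-cluster
ring = ClientClusterRing(["client-a", "client-b", "client-c"])
print(ring.locate("http://example.com/big-video.mpg"))
```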

An LFU Based on Real-time Producer Popularity in Content Centric Networks (CCN에서 실시간 생성자 인기도 기반의 LFU 정책)

  • Choi, Jong-Hyun;Kwon, Tea-Wook
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1113-1120
    • /
    • 2021
  • Content Centric Networking (CCN) appeared in order to improve network efficiency by transforming the IP-based network into a content-name-based network structure. Each router in the CCN performs caching to improve network efficiency, and the cache replacement policy applied to a CCN router is an important factor that determines the overall performance of the CCN. Therefore, various studies have been conducted on CCN cache replacement policies. In this paper, we propose a cache replacement policy that improves on the limitations of the LFU policy. The proposed algorithm applies a variable based on real-time producer popularity. Through experiments, we show that the proposed policy achieves a better cache hit ratio than existing policies.
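
A minimal sketch of an LFU-style replacement policy whose eviction score is weighted by a real-time producer-popularity value, as the abstract describes. The exact weighting and popularity update used in the paper are not given in the abstract, so the score below (frequency multiplied by popularity) is an assumption.

```python
class PopularityWeightedLFU:
    """LFU-style cache whose eviction victim is the entry with the lowest
    access_count * producer_popularity score (illustrative weighting)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}   # content name -> content
        self.freq = {}    # content name -> access count
        self.pop = {}     # content name -> real-time producer popularity

    def update_popularity(self, name: str, popularity: float):
        """Called as fresh producer-popularity information arrives."""
        if name in self.pop:
            self.pop[name] = popularity

    def get(self, name: str):
        if name in self.store:
            self.freq[name] += 1
            return self.store[name]
        return None   # cache miss

    def put(self, name: str, content, popularity: float = 1.0):
        if name not in self.store and len(self.store) >= self.capacity:
            # Evict the entry with the lowest popularity-weighted frequency.
            victim = min(self.store,
                         key=lambda n: self.freq[n] * self.pop.get(n, 1.0))
            for table in (self.store, self.freq, self.pop):
                table.pop(victim, None)
        self.store[name] = content
        self.freq[name] = self.freq.get(name, 0) + 1
        self.pop[name] = popularity
```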

Performance of the Finite Difference Method Using Cache and Shared Memory for Massively Parallel Systems (대규모 병렬 시스템에서 캐시와 공유메모리를 이용한 유한 차분법 성능)

  • Kim, Hyun Kyu;Lee, Hyo Jong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.4
    • /
    • pp.108-116
    • /
    • 2013
  • Many algorithms have been introduced to improve performance by using massively parallel systems, which consist of several hundred processors. A typical example is a GPU system with many processors that use shared memory. In the case of image filtering algorithms, which reference neighboring points, shared memory helps improve performance because adjacent pixels are accessed frequently. However, using shared memory requires rewriting the existing code and consequently increases its complexity. Recent GPU systems support both L1 and L2 caches along with shared memory. Since the L1 cache is located in the same area as the shared memory, an improvement in performance can be expected when the cache is used instead. In this paper, the performance of the cache-based and shared-memory approaches was compared. In conclusion, the performance of the cache-based algorithm is very similar to that of the shared-memory version, while the code complexity that appears in a shared-memory implementation is avoided with the cache-based algorithm.
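
The comparison above hinges on how a stencil's neighbor reads are served (staged in shared memory versus served by the L1/L2 cache). The NumPy sketch below only illustrates that neighbor-referencing access pattern for a five-point finite-difference step; it is not the GPU kernels compared in the paper.

```python
import numpy as np

def five_point_stencil(u: np.ndarray) -> np.ndarray:
    """One Jacobi-style finite-difference step: every interior point reads
    its four neighbors, which is why reusing adjacent values (via shared
    memory or cache) pays off on a GPU."""
    out = u.copy()
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return out

u = np.random.rand(256, 256)
u_next = five_point_stencil(u)
```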

Advanced Disk Block Caching Algorithm for Disk I/O sub-system (디스크 입출력 서브시스템을 위한 개선된 디스크 블록 캐싱 알고리즘)

  • Jung, Soo-Mok;Rho, Kyung-Taeg
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.6
    • /
    • pp.139-146
    • /
    • 2007
  • A hard disk, which can be classified as external storage, is usually capacious and economical. In spite of these attractive characteristics and efforts to improve its performance, however, the hard disk remains far slower than the processor, and its advancement has been slow because it relies on a mechanical process, whereas the processor has advanced drastically along with semiconductor technology. As a result, the disk I/O subsystem has become a bottleneck for computer system performance, and research on the disk I/O subsystem continues in order to relieve it. In this paper, we propose a multi-level LRU scheme and apply it to computer systems with both a buffer cache and a disk cache. By applying the proposed scheme, the average access time to disk blocks can be decreased. The efficiency of the proposed algorithm was verified by simulation results.
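
A minimal sketch of a two-level LRU hierarchy in the spirit of the multi-level scheme described above: blocks evicted from the (smaller) buffer cache are demoted into the disk cache instead of being discarded outright. The level sizes and the demotion rule are assumptions for illustration.

```python
from collections import OrderedDict

class TwoLevelLRU:
    """Blocks live in an upper LRU (buffer cache); victims are demoted
    into a lower LRU (disk cache) before being dropped entirely."""

    def __init__(self, upper_size: int, lower_size: int):
        self.upper = OrderedDict()
        self.lower = OrderedDict()
        self.upper_size = upper_size
        self.lower_size = lower_size

    def access(self, block, load_block):
        if block in self.upper:                 # hit in buffer cache
            self.upper.move_to_end(block)
            return self.upper[block]
        if block in self.lower:                 # hit in disk cache: promote
            data = self.lower.pop(block)
        else:                                   # miss: read from the disk itself
            data = load_block(block)
        self._insert_upper(block, data)
        return data

    def _insert_upper(self, block, data):
        self.upper[block] = data
        self.upper.move_to_end(block)
        if len(self.upper) > self.upper_size:
            victim, vdata = self.upper.popitem(last=False)
            self.lower[victim] = vdata          # demote instead of discard
            if len(self.lower) > self.lower_size:
                self.lower.popitem(last=False)
```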


Caching Framework for Multimedia (멀티미디어를 위한 캐슁 기술)

  • Kim, Baek-Hyeon;U, Yo-Seop;Kim, Ik-Su
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.507-514
    • /
    • 2001
  • In a VOD (Video-on-Demand) system, real-time interactive service is one of the most important factors determining the degree of QoS (Quality of Service). In this paper, we propose a head-end system consisting of a switching agent and a head-end node, which needs to receive only one video stream for the multiple users that have requested the same video, in order to provide unlimited interactive service with no service delay or blocking. Unlimited VCR services can be provided by buffering the video stream at both the client and the head-end node. The proposed algorithm improves efficiency through this buffering and offers true interactive VOD service to users, because every service request issued by a client is processed immediately. We implemented a VOD system that provides VCR functions without service delay or blocking. Simulation results indicate that the proposed algorithm performs better in terms of the number of service requests handled and the service time interval.
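
A minimal sketch of the buffering idea: the head-end node keeps a window of recent stream segments so that clients watching the same video can join or rewind without pulling a second stream from the VOD server. The window size and segment granularity below are assumptions, not the paper's parameters.

```python
from collections import deque

class HeadEndBuffer:
    """Sliding window of recent video segments kept at the head-end node;
    clients sharing the same video are served from this window."""

    def __init__(self, window_segments: int):
        self.window = deque(maxlen=window_segments)
        self.first_seq = 0   # sequence number of the oldest buffered segment

    def push(self, segment: bytes):
        """Append the newest segment from the single upstream stream."""
        if len(self.window) == self.window.maxlen:
            self.first_seq += 1          # oldest segment falls out of the window
        self.window.append(segment)

    def read(self, seq: int):
        """Serve a (possibly rewound) segment if it is still buffered."""
        idx = seq - self.first_seq
        if 0 <= idx < len(self.window):
            return self.window[idx]
        return None   # outside the buffered window: must fetch upstream
```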


Design of Web Content Update Algorithm to Reduce Communication Data Consumption using Service Worker and Hash (서비스워커와 해시를 이용한 통신 데이터 소모 감소를 위한 웹 콘텐츠 갱신 알고리즘 설계)

  • Kim, Hyun-gook;Park, Jin-tae;Choi, Moon-Hyuk;Moon, Il-young
    • Journal of Advanced Navigation Technology
    • /
    • v.23 no.2
    • /
    • pp.158-165
    • /
    • 2019
  • In the existing web, a page is downloaded and delivered to the user every time the user requests it. Therefore, if the same page is repeatedly requested by the user, the download of the same resources is repeated, which causes unnecessary consumption of data. We focus on reducing the data consumption caused by unnecessary requests between users and servers and on improving content delivery speed. Therefore, in this paper, we propose a caching system and an algorithm that reduce data consumption while keeping the cache up to date, by comparing hash values computed with a hash function that can detect changes in the files requested by the user.
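
A minimal sketch of the hash comparison the abstract describes: the client keeps a cached copy plus its hash and re-downloads only when the server reports a different hash. The actual system runs inside a browser Service Worker; this Python sketch only shows the decision logic, and the `fetch_body`/`fetch_hash` callables stand in for hypothetical server endpoints.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def refresh_if_changed(url: str, cache: dict, fetch_body, fetch_hash) -> bytes:
    """cache maps url -> (body, hash). fetch_hash returns the server-side
    hash of the resource; fetch_body downloads the full resource."""
    remote_hash = fetch_hash(url)
    if url in cache and cache[url][1] == remote_hash:
        return cache[url][0]          # unchanged: no body is transferred
    body = fetch_body(url)            # changed or uncached: download once
    cache[url] = (body, sha256_of(body))
    return body
```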

Development of a Kernel Thread Web Accelerator (SCALA-AX) (커널 쓰레드 웹가속기(SCALA-AX) 개발)

  • Park, Jong-Gyu;Min, Byung-Jo;Lim, Han-Na;Park, Jang-Hoon;Chang, Whi;Kim, Hag-Bae
    • The KIPS Transactions:PartA
    • /
    • v.9A no.3
    • /
    • pp.327-332
    • /
    • 2002
  • A conventional proxy web cache, which is generally used as a caching server, is a content-copy-based system. This method focuses on speeding up page delivery, not on improving web server performance. However, if a huge number of clients attempt to connect to the web server simultaneously, the proxy web cache cannot achieve the desired result. In this paper, we propose a web accelerator called SCALA-AX, which improves web server performance by accelerating content delivery. SCALA-AX is built into the Linux kernel as a kernel module and works in combination with a conventional web server program. SCALA-AX speeds up the processing rate of the web server because it processes requests using kernel threads. It also applies a well-developed cache algorithm to this processing, and thus obtains the advantages of a caching server without installing additional hardware. A benchmarking test demonstrates that SCALA-AX improves web server content-delivery performance by up to 500%.
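
SCALA-AX itself is a Linux kernel module, but the core idea of worker threads serving responses from an in-memory content cache in front of the web server can be sketched in user space. Everything below (thread pool size, cache policy, class and function names) is an illustrative assumption, not the SCALA-AX implementation.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class CachedAccelerator:
    """Worker threads answer requests from an in-memory cache and fall
    back to the backend web server only on a cache miss."""

    def __init__(self, backend, workers: int = 8):
        self.backend = backend            # callable: path -> content
        self.cache = {}
        self.lock = threading.Lock()
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def _handle(self, path: str):
        with self.lock:
            if path in self.cache:        # fast path: serve cached content
                return self.cache[path]
        content = self.backend(path)      # miss: ask the real web server
        with self.lock:
            self.cache[path] = content
        return content

    def request(self, path: str):
        """Dispatch a request to a worker thread; returns a Future."""
        return self.pool.submit(self._handle, path)

# Usage: accel = CachedAccelerator(backend=my_server_handler)
#        body = accel.request("/index.html").result()
```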

Improvement of Partial Update for the Web Map Tile Service (실시간 타일 지도 서비스를 위한 타일이미지 갱신 향상 기법)

  • Cho, Sunghwan;Ga, Chillo;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.5
    • /
    • pp.365-373
    • /
    • 2013
  • Tile caching is a commonly used technique that optimizes the delivery of map imagery across the internet in modern WebGIS systems. However, the poor performance of map tile cache updates is one of the major causes hampering wider use of this technique for datasets with frequent updates. In this paper, we introduce a new algorithm, Partial Area Cache Update (PACU), that significantly minimizes redundant updates of map tiles when the update frequency of the source map data is very high. The performance of the algorithm is verified with the cadastral map data of Pyeongtaek, Gyeonggi Province, where approximately 3,100 changes occur per day among 331,594 parcels. The experimental results show that the PACU algorithm is 6.6 times faster than ESRI ArcGIS Server. This algorithm contributes significantly to solving the frequent-update problem and enables Web Map Tile Services for data that require frequent updates.
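
A minimal sketch of partial tile invalidation in the spirit of PACU: only the tiles whose extents intersect the bounding boxes of changed parcels are marked dirty and re-rendered, instead of regenerating the whole cache. The fixed-origin grid math below is a simplifying assumption and not the paper's exact algorithm.

```python
import math

def tiles_covering(bbox, tile_size: float, origin=(0.0, 0.0)):
    """Return the set of (col, row) tiles intersecting bbox = (minx, miny, maxx, maxy)."""
    minx, miny, maxx, maxy = bbox
    ox, oy = origin
    c0 = math.floor((minx - ox) / tile_size)
    c1 = math.floor((maxx - ox) / tile_size)
    r0 = math.floor((miny - oy) / tile_size)
    r1 = math.floor((maxy - oy) / tile_size)
    return {(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}

def dirty_tiles(changed_bboxes, tile_size: float):
    """Union of tiles touched by any changed feature; only these are re-rendered."""
    dirty = set()
    for bbox in changed_bboxes:
        dirty |= tiles_covering(bbox, tile_size)
    return dirty

# Example: two edited parcels touch only a handful of tiles
print(dirty_tiles([(10.2, 4.1, 10.9, 4.6), (55.0, 30.0, 55.3, 30.1)], tile_size=5.0))
```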

Fips : Dynamic File Prefetching Scheme based on File Access Patterns (Fips : 파일 접근 유형을 고려한 동적 파일 선반입 기법)

  • Lee, Yoon-Young;Kim, Chei-Yol;Seo, Dae-Wha
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.7
    • /
    • pp.384-393
    • /
    • 2002
  • A parallel file system is normally used to support the heavy file requests of parallel applications in a cluster system, and prefetching is useful for improving file system performance. This paper proposes a new prefetching method, Fips (dynamic File prefetching scheme based on file access patterns), that is particularly suitable for parallel scientific applications and multimedia web services on a parallel file system. The proposed method introduces a dynamic prefetching scheme that predicts data blocks precisely at run time even when the file access patterns are irregular. In addition, it includes an algorithm that determines whether and when prefetching should be performed, based on the currently available I/O bandwidth. Experimental results confirm that using the proposed prefetching policy in a parallel file system yields higher file system performance.
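
A minimal sketch of the two decisions the abstract describes: detect the current access pattern (here, a simple constant-stride check) and prefetch the predicted next blocks only if enough I/O bandwidth is free. The stride detector, the prefetch depth, and the bandwidth accounting are assumptions, not Fips itself.

```python
def detect_stride(history, window: int = 4):
    """Return the constant stride of the last `window` block accesses, or None."""
    if len(history) < window:
        return None
    recent = history[-window:]
    strides = {b - a for a, b in zip(recent, recent[1:])}
    return strides.pop() if len(strides) == 1 else None

def plan_prefetch(history, free_bandwidth: float, block_cost: float, depth: int = 2):
    """Prefetch up to `depth` predicted blocks if spare I/O bandwidth allows it."""
    stride = detect_stride(history)
    if stride is None:
        return []                           # irregular pattern: skip prefetching
    affordable = int(free_bandwidth // block_cost)
    n = min(depth, affordable)
    last = history[-1]
    return [last + stride * i for i in range(1, n + 1)]

# Example: sequential reads 10,11,12,13 with spare bandwidth for two more blocks
print(plan_prefetch([10, 11, 12, 13], free_bandwidth=2.5, block_cost=1.0))
```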

DNS-based Dynamic Load Balancing Method on a Distributed Web-server System (분산 웹 서버 시스템에서의 DNS 기반 동적 부하분산 기법)

  • Moon, Jong-Bae;Kim, Myung-Ho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.193-204
    • /
    • 2006
  • In most existing distributed web systems, incoming requests are distributed to servers via the Domain Name System (DNS). Although such systems are simple to implement, the address caching mechanism easily results in load imbalance among servers, and modifying the DNS is necessary to distribute load according to each server's state. In this paper, we propose a new dynamic load balancing method that uses dynamic DNS updates and a round-robin mechanism. The proposed method performs effective load balancing without modification of the DNS: a server can dynamically be added to or removed from the DNS list according to its load, and removing overloaded servers from the DNS list shortens the response time. For dynamic scheduling, we propose a scheduling algorithm that considers CPU, memory, and network usage, and a scheduling policy can be selected based on resource usage. The proposed system can easily be managed with a GUI-based management tool. Experiments show that the modules implemented in this paper have a low impact on the proposed system, and that both the response time and the file transfer rate of the proposed system are better than those of a pure round-robin DNS.
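
A minimal sketch of the add/remove decision: each server's CPU, memory, and network usage are combined into a load score, and servers crossing a high-water mark are dropped from the round-robin DNS list, then restored once their load falls back below a low-water mark. The weights, thresholds, and the `update_dns` hook are assumptions; a real deployment would issue RFC 2136 dynamic DNS updates.

```python
def load_score(cpu: float, mem: float, net: float,
               weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted combination of CPU, memory, and network utilisation (each 0..1)."""
    wc, wm, wn = weights
    return wc * cpu + wm * mem + wn * net

def rebalance(servers, dns_list, update_dns, high=0.8, low=0.5):
    """Drop overloaded servers from the round-robin DNS list and
    re-add them once their load falls back below `low`."""
    for name, usage in servers.items():
        score = load_score(*usage)
        if score > high and name in dns_list:
            dns_list.remove(name)
            update_dns(name, present=False)   # hypothetical dynamic-update hook
        elif score < low and name not in dns_list:
            dns_list.append(name)
            update_dns(name, present=True)
    return dns_list

# Example: (cpu, mem, net) usage per server; web1 is overloaded and gets removed
servers = {"web1": (0.9, 0.7, 0.8), "web2": (0.3, 0.4, 0.2)}
print(rebalance(servers, ["web1", "web2"], lambda name, present: None))
```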