• Title/Summary/Keyword: In-network Caching

Neighbor Cooperation Based In-Network Caching for Content-Centric Networking

  • Luo, Xi;An, Ying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.5
    • /
    • pp.2398-2415
    • /
    • 2017
  • Content-Centric Networking (CCN) is a new Internet architecture with routing and caching centered on contents. Through its receiver-driven and connectionless communication model, CCN natively supports seamless node mobility and scalable content acquisition. In-network caching is one of the core technologies in CCN, and research on efficient caching schemes has become increasingly attractive. To address the unbalanced cache load distribution of some existing caching strategies, this paper presents a neighbor cooperation based in-network caching scheme. In this scheme, the node with the highest betweenness centrality on the content delivery path is selected as the central caching node, and its ego network is selected as the caching area. When the caching node does not have sufficient cache resources, part of its cached content is picked out and transferred to an appropriate neighbor, chosen by comprehensively considering factors such as available node cache, cache replacement rate, and link stability between nodes. Simulation results show that our scheme can effectively enhance the utilization of cache resources, improve the cache hit rate, and reduce the average access cost.
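
A minimal Python sketch of the selection step described in this abstract may help make it concrete: pick the on-path node with the highest betweenness centrality as the central caching node, then rank its ego-network neighbors for content offloading. The toy topology, the scoring function, and the 0.4/0.3/0.3 weights are illustrative assumptions, not the authors' implementation.

```python
# Sketch of betweenness-based caching-node selection and neighbor ranking
# (assumed weights and toy data; see the note above).
import networkx as nx

def select_central_cache_node(graph, delivery_path):
    """Return the on-path node with the highest betweenness centrality."""
    centrality = nx.betweenness_centrality(graph)
    return max(delivery_path, key=lambda node: centrality[node])

def rank_offload_neighbors(graph, cache_node, free_cache, replace_rate, link_stability):
    """Rank ego-network neighbors by available cache (more is better),
    replacement rate (less is better), and link stability (more is better)."""
    def score(n):
        return 0.4 * free_cache[n] - 0.3 * replace_rate[n] + 0.3 * link_stability[n]
    return sorted(graph.neighbors(cache_node), key=score, reverse=True)

# Example usage on a toy topology.
G = nx.path_graph(6)                              # nodes 0..5 in a line
path = [0, 2, 3, 5]                               # hypothetical content delivery path
central = select_central_cache_node(G, path)
free_cache     = {n: 10.0 for n in G}             # hypothetical per-node metrics
replace_rate   = {n: 0.5 for n in G}
link_stability = {n: 1.0 for n in G}
print("central caching node:", central)
print("offload order:", rank_offload_neighbors(G, central, free_cache, replace_rate, link_stability))
```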

An ICN In-Network Caching Policy for Butterfly Network in DCN

  • Jeon, Hongseok;Lee, Byungjoon;Song, Hoyoung;Kang, Moonsoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.7
    • /
    • pp.1610-1623
    • /
    • 2013
  • In-network caching is a key component of information-centric networking (ICN) for reducing content download time, network traffic, and server workload. A data center network (DCN) is an ideal candidate for applying the ICN design principles. In this paper, we evaluate the effectiveness of cache placement and replacement in a DCN with a butterfly topology. We also suggest a new cache placement policy based on the number of routing nodes (i.e., the hop count) that the content travels through. The policy makes each routing node cache content chunks with a probability inversely proportional to the hop count. Simulation results lead us to conclude that (i) the cache placement policy has a greater effect on cache performance than the cache replacement policy, (ii) the suggested cache placement policy yields better caching performance for butterfly-type DCNs than traditional cache placement policies such as ALWAYS and FIX(P), and (iii) a high cache hit ratio does not always imply a low average hop count.
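
As a rough illustration of the placement rule sketched in this abstract, the snippet below caches a passing chunk with probability inversely proportional to the number of hops it has travelled. Reading "inversely proportional" as exactly 1 / hop_count is an assumption made only for this sketch.

```python
# Hop-count-based probabilistic cache placement (illustrative reading of the policy).
import random

def should_cache(hop_count: int) -> bool:
    """Cache a passing chunk with probability 1 / hop_count (hop_count >= 1)."""
    if hop_count < 1:
        raise ValueError("hop_count must be at least 1")
    return random.random() < 1.0 / hop_count

# Chunks close to their source are cached almost always; far-travelled chunks rarely.
for hops in (1, 2, 4, 8):
    cached = sum(should_cache(hops) for _ in range(10_000))
    print(f"hops={hops}: cached {cached / 10_000:.1%} of the time")
```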

Institutional Complement on In-Network Caching of Copyrighted Works (저작물의 In-network Caching에 관한 제도적 보완)

  • Cho, Eun-Sang;Hwang, Ji-Hyun;Kwon, Ted Tae-Kyoung;Choi, Yang-Hee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.8C
    • /
    • pp.703-710
    • /
    • 2012
  • A new article concerning temporary copies made in the course of exploiting copyrighted works was introduced in the copyright law partially revised on December 2, 2011. While research on in-network caching, including Content-Centric Networking, has been conducted quite actively in recent years, the need for legal and institutional consideration has arisen because temporary storage (i.e., temporary copies) may occur not only on user devices but also in network equipment such as routers. This paper examines issues concerning temporary copies of copyrighted works, focusing mainly on the relevant articles of the recently revised copyright law and of the Free Trade Agreement between the Republic of Korea and the United States of America, and further studies the institutional arrangements required to put in-network caching into practice.

Dynamic Probabilistic Caching Algorithm with Content Priorities for Content-Centric Networks

  • Sirichotedumrong, Warit;Kumwilaisak, Wuttipong;Tarnoi, Saran;Thatphitthukkul, Nattanun
    • ETRI Journal
    • /
    • v.39 no.5
    • /
    • pp.695-706
    • /
    • 2017
  • This paper presents a caching algorithm that offers better reconstructed data quality to requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet should be cached based on a dynamic caching probability, which is adjusted according to the priorities of the content carried by the data packet, the uncertainty of content popularities, and the record of cache events in the router. The adaptation of the caching probability depends on the content priorities, a multiplication-factor adaptation, and an addition-factor adaptation. The multiplication-factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition-factor adaptation relies on the multiplication factor, the popularities of requested contents, the cache-hit ratio, and the cache-miss ratio. We evaluate the caching algorithm by comparing it with previous caching schemes through network simulations. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance.
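
The abstract does not give the exact update equations, so the sketch below only illustrates the mechanism: a router keeps a dynamic caching probability, updates it from the instantaneous hit/miss record via a multiplication factor and an addition factor scaled by content priority, and caches arriving data with that probability. The specific update forms and constants are assumptions.

```python
# Illustrative dynamic probabilistic cache (assumed update rules; see the note above).
import random

class DynamicProbabilisticCache:
    def __init__(self, capacity, base_prob=0.5):
        self.capacity = capacity
        self.store = {}
        self.prob = base_prob          # dynamic caching probability
        self.hits = 0
        self.misses = 0

    def _hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

    def request(self, name, priority=1.0):
        """Serve a request; on a miss, probabilistically cache the arriving data."""
        if name in self.store:
            self.hits += 1
            return True
        self.misses += 1
        hit_ratio = self._hit_ratio()
        mult = max(0.1, 1.0 - hit_ratio)           # multiplication factor from the hit ratio
        add = 0.05 * priority * (1.0 - hit_ratio)  # addition factor from miss ratio and priority
        self.prob = min(1.0, self.prob * mult + add)
        if random.random() < self.prob:
            if len(self.store) >= self.capacity:   # evict an arbitrary entry (LRU etc. in practice)
                self.store.pop(next(iter(self.store)))
            self.store[name] = True
        return False
```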

A Data Caching Management Scheme for NDN (데이터 이름 기반 네트워킹의 데이터 캐싱 관리 기법)

  • Kim, DaeYoub
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.291-299
    • /
    • 2016
  • To enhance network efficiency, named-data networking (NDN) implements data caching functionality on intermediate network nodes so that the nodes can respond directly to request messages for cached data. By processing request messages at intermediate nodes, NDN can efficiently reduce the amount of network traffic and relieve congestion near data sources. NDN also provides a data authentication mechanism to prevent the various Internet incidents caused by the absence of such a mechanism. Hence, by applying NDN to various smart IT convergence services, it is expected that the explosive growth of network traffic can be controlled efficiently and that more secure services can be provided. Fundamentally, which data is cached and where the caching nodes are located in the network topology are important factors in NDN. This paper first analyzes previous work that caches content based on its popularity, then investigates the cache hit rate at each node of a network topology, proposes an improved caching scheme based on the results of this analysis, and finally evaluates the performance of the proposal.
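
As a simple illustration of popularity-driven caching of the kind analyzed here, the sketch below counts Interests per content name at a node and caches returning Data only once a name has proven popular enough; the threshold and eviction rule are assumptions, not the scheme proposed in the paper.

```python
# Popularity-threshold caching at a single NDN node (illustrative assumptions).
from collections import Counter

class PopularityCache:
    def __init__(self, capacity, threshold=3):
        self.capacity = capacity
        self.threshold = threshold
        self.requests = Counter()      # Interests seen per content name
        self.store = {}                # cached Data packets

    def on_interest(self, name):
        """Record an Interest and answer from the cache when possible (None on miss)."""
        self.requests[name] += 1
        return self.store.get(name)

    def on_data(self, name, data):
        """Cache returning Data only if the name has been requested often enough."""
        if self.requests[name] < self.threshold:
            return
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.requests[n])  # evict least-requested
            del self.store[victim]
        self.store[name] = data
```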

Development for a Simple Client-based Distributed Web Caching System

  • Park, Jong-Ho;Chong, Kil-To
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2131-2136
    • /
    • 2003
  • Since the number of user requests on the Internet increases dramatically, servers and networks can be swamped unexpectedly and without prior notice, so end users must wait for, or are refused, responses from the originating servers. To solve this problem, distributed web caching systems that efficiently utilize the structural elements of the network have been considered. Because a distributed web caching system uses caches close to end users on the network, it delivers content to users faster than the original network does. This paper proposes a simple client-based distributed web caching system (2HRCS) in which clients directly perform object allocation and load balancing, without the additional load-balancing DNS required by CARP (Cache Array Routing Protocol) and GHS (Global Hosting System), which are recent distributed web caching protocols. The proposed system reduces setup and operating costs by removing the DNS that balances the load in existing systems. Because clients use a consistent hashing method, the system also extends to distributed web caching environments whose caches have different capacities. A distributed web caching system was built and tested to evaluate the performance, and the results show performance superior to the existing consistent hashing system. Because the proposed system maintains the performance of the existing system while reducing costs, it is well suited to building small- and medium-scale CDNs (content delivery networks).
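
The client-side mapping that makes such a DNS-free scheme possible can be sketched with a consistent-hashing ring: each client hashes an object URL onto the ring and contacts the cache that owns that point. The virtual-node count and hash function below are illustrative choices, not the 2HRCS specification.

```python
# Client-side consistent hashing of object URLs to caches (illustrative sketch).
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, caches, vnodes=100):
        points = []
        for cache in caches:
            for i in range(vnodes):                       # virtual nodes smooth the load
                points.append((self._hash(f"{cache}#{i}"), cache))
        points.sort()
        self._keys = [h for h, _ in points]
        self._caches = [c for _, c in points]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def cache_for(self, url: str) -> str:
        """Map an object URL to the first cache clockwise on the ring."""
        idx = bisect.bisect(self._keys, self._hash(url)) % len(self._keys)
        return self._caches[idx]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.cache_for("http://example.com/objects/page.html"))
```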

Performance Impact of Large File Transfer on Web Proxy Caching: A Case Study in a High Bandwidth Campus Network Environment

  • Kim, Hyun-Chul;Lee, Dong-Man;Chon, Kil-Nam;Jang, Beak-Cheol;Kwon, Tae-Kyoung;Choi, Yang-Hee
    • Journal of Communications and Networks
    • /
    • v.12 no.1
    • /
    • pp.52-66
    • /
    • 2010
  • Since large objects consume substantial resources, web proxy caching incurs a fundamental trade-off between performance (i.e., hit ratio and latency) and overhead (i.e., resource usage) when caching and relaying large objects to users. This paper investigates how, and to what extent, the current dedicated-server based web proxy caching scheme is affected by large file transfers in a high bandwidth campus network environment, using a series of trace-based performance analyses and profiling of various resource components in our experimental Squid proxy cache server. Large file transfers often overwhelm our cache server, causing a bottleneck in the web network by saturating the cache server's network bandwidth. Due to requests for large objects, the response times for delivering concurrently requested small objects increase by a factor as high as a few million in the worst cases. We argue that this cache bandwidth bottleneck problem stems from the fundamental limitations of the current centralized web proxy caching model, which scales poorly when only a limited amount of dedicated resources is available. This is a serious threat to the viability of the current web proxy caching model, particularly in a high bandwidth access network, since it leads to sporadic disconnections of the downstream access network from the global web network. We propose a peer-to-peer cooperative web caching scheme to address the cache bandwidth bottleneck problem and show that it performs the task of caching and delivering large objects in an efficient and cost-effective manner, without generating significant overhead for participating peers.
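
One way to picture the proposed division of labor is a size-based redirect: the dedicated proxy keeps serving small objects, while requests for large objects are pushed to cooperating peers that hold them. The 10 MB threshold and the least-loaded-peer rule below are assumptions for illustration only, not the paper's protocol.

```python
# Size-threshold redirect of large-object requests to peer caches (illustrative).
LARGE_OBJECT_BYTES = 10 * 1024 * 1024      # assumed cut-off between "small" and "large"

def choose_server(obj_size, proxy, peers_with_object):
    """Return the node that should serve the object."""
    if obj_size < LARGE_OBJECT_BYTES or not peers_with_object:
        return proxy                                        # small objects stay on the proxy
    return min(peers_with_object, key=lambda p: p["load"])  # spread large transfers over peers

proxy = {"name": "squid-proxy", "load": 0.9}
peers = [{"name": "peer-1", "load": 0.2}, {"name": "peer-2", "load": 0.6}]
print(choose_server(200 * 1024 * 1024, proxy, peers)["name"])   # -> peer-1
```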

Analysis of Usefulness of Domain-Based Network Caching in Mobile Environment (이동 환경에서 영역기반의 네트워크 캐슁 효용성 분석)

  • 이화세;이승원;박성호;정기동
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.5
    • /
    • pp.668-679
    • /
    • 2004
  • This study examines whether caching is useful in mobile environments when users move, quickly or slowly, across a number of base stations (BSs) and request continuous media services such as video or audio. Specifically, to reduce packet disconnections and network overhead in mobile environments and to minimize transmission delay, we propose a domain-based hierarchical caching structure and study whether applying caching is beneficial. We build a model based on user environments and a hierarchical network structure for processing continuous media services, and analyze how the usefulness of caching depends on user mobility patterns and the locations of caching nodes. As a result, we find that caching should be applied adaptively, because the hit ratio and the number of replacements vary widely with user mobility patterns and caching locations.

Performance analysis of private multimedia caching network based on wireless local area network (WLAN 기반 개인형 멀티미디어 캐싱 네트워크 성능 분석)

  • Ban, Tae-Won;Kim, Seong Hwan;Ryu, Jongyeol;Lee, Woongsup
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.8
    • /
    • pp.1486-1491
    • /
    • 2017
  • In this paper, we propose a private multimedia caching scheme based on wireless local area network (WLAN) technology to improve the quality of service for the recently increasing high-capacity, high-quality multimedia streaming services and to reduce the traffic load on core networks. The proposed caching scheme stores multimedia in storage devices mounted on WLAN APs and serves client streaming requests on its own, without an Internet connection. We implemented a test network based on real commercial networks and measured the performance of the proposed caching scheme in terms of frames per second (FPS) and buffering time. According to the measurement results, the proposed caching scheme reduces the average buffering time by 73.3% and improves the average FPS by 71.3% compared to the conventional streaming scheme.
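
The AP-side behavior described here can be sketched as a simple cache-then-fallback routine: the AP serves a requested media file from its mounted storage when present and only contacts the Internet origin on a miss. The cache directory and fetch callback are hypothetical names used for illustration.

```python
# AP-local media caching with fallback to the origin (hypothetical paths/functions).
import os

CACHE_DIR = "/var/cache/ap-media"          # hypothetical storage mounted on the AP

def serve_media(filename, fetch_from_origin):
    """Return the bytes of a media file, preferring the AP's local cache."""
    local_path = os.path.join(CACHE_DIR, filename)
    if os.path.exists(local_path):                 # cache hit: no Internet traffic needed
        with open(local_path, "rb") as f:
            return f.read()
    data = fetch_from_origin(filename)             # cache miss: fetch, then store locally
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(local_path, "wb") as f:
        f.write(data)
    return data
```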

Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks

  • Choi, Yoonjeong;Lim, Yujin
    • Journal of Information Processing Systems
    • /
    • v.18 no.6
    • /
    • pp.803-812
    • /
    • 2022
  • With the increasing number of mobile device users worldwide, utilizing mobile edge computing (MEC) devices close to users for content caching can reduce transmission latency compared with receiving content from a remote server or the cloud. However, because MEC has limited storage capacity, it is necessary to determine the types and sizes of content to be cached. In this study, we investigate a caching strategy that increases the hit ratio of small base stations (SBSs) for mobile users in a heterogeneous network consisting of one macro base station (MBS) and multiple SBSs. If there are several SBSs that a user can access, the hit ratio can be improved by reducing duplicate content and increasing the diversity of content across the SBSs. We propose a Deep Q-Network (DQN)-based caching strategy that considers time-varying content popularity and content redundancy across multiple SBSs. Content is stored in the SBSs in divided form using maximum distance separable (MDS) codes to enhance content diversity. Experiments in various environments show that the proposed caching strategy outperforms the other methods in terms of hit ratio.
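
The MDS-coded hit condition underlying this setting is easy to state in code: an item split with an (n, k) MDS code can be reconstructed from any k distinct coded fragments, so a request hits at the edge if the SBSs the user can reach jointly hold at least k fragments. The DQN agent's caching decisions themselves are abstracted away in this sketch.

```python
# MDS-coded edge-cache hit check (the DQN placement decisions are out of scope here).
def mds_hit(content, k, reachable_sbs_caches):
    """reachable_sbs_caches: list of dicts mapping content name -> set of fragment ids."""
    fragments = set()
    for cache in reachable_sbs_caches:
        fragments |= cache.get(content, set())
    return len(fragments) >= k      # any k distinct MDS fragments suffice to reconstruct

# Example: the item uses k = 3; two reachable SBSs hold fragments {0, 1} and {1, 2},
# giving 3 distinct fragments, so the request is served at the edge.
sbs_caches = [{"video-42": {0, 1}}, {"video-42": {1, 2}}]
print(mds_hit("video-42", k=3, reachable_sbs_caches=sbs_caches))   # -> True
```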