• Title/Abstract/Keyword: In-network Caching


Neighbor Cooperation Based In-Network Caching for Content-Centric Networking

  • Luo, Xi;An, Ying
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 5 / pp. 2398-2415 / 2017
  • Content-Centric Networking (CCN) is a new Internet architecture with routing and caching centered on content. Through its receiver-driven and connectionless communication model, CCN natively supports seamless node mobility and scalable content acquisition. In-network caching is one of the core technologies in CCN, and research on efficient caching schemes is increasingly attractive. To address the unbalanced cache load distribution of some existing caching strategies, this paper presents a neighbor-cooperation-based in-network caching scheme. In this scheme, the node with the highest betweenness centrality on the content delivery path is selected as the central caching node, and its ego network is selected as the caching area. When the caching node does not have sufficient resources, part of its cached content is picked out and transferred to an appropriate neighbor by comprehensively considering factors such as available cache space, cache replacement rate, and link stability between nodes. Simulation results show that our scheme can effectively enhance the utilization of cache resources and improve the cache hit rate and average access cost.
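
The core selection step, picking the on-path node with the highest betweenness centrality and treating its ego network as the caching area, can be illustrated with a minimal sketch; the topology, delivery path, and use of networkx below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of central-caching-node selection by betweenness centrality.
import networkx as nx

def select_caching_node(topology: nx.Graph, delivery_path: list):
    """Return the on-path node with the highest betweenness centrality."""
    centrality = nx.betweenness_centrality(topology)      # global centrality scores
    return max(delivery_path, key=lambda n: centrality[n])

def caching_area(topology: nx.Graph, caching_node):
    """The caching area is the ego network of the central node (the node plus its neighbors)."""
    return nx.ego_graph(topology, caching_node)

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(30, 2, seed=1)            # toy topology (assumption)
    path = nx.shortest_path(g, source=0, target=17)        # toy content delivery path
    node = select_caching_node(g, path)
    print("central caching node:", node)
    print("caching area:", sorted(caching_area(g, node).nodes))
```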

An ICN In-Network Caching Policy for Butterfly Network in DCN

  • Jeon, Hongseok;Lee, Byungjoon;Song, Hoyoung;Kang, Moonsoo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 7 / pp. 1610-1623 / 2013
  • In-network caching is a key component of information-centric networking (ICN) for reducing content download time, network traffic, and server workload. A data center network (DCN) is an ideal candidate for applying ICN design principles. In this paper, we evaluate the effectiveness of cache placement and replacement in a DCN with a butterfly topology. We also suggest a new cache placement policy based on the number of routing nodes (i.e., the hop count) through which the content travels. The policy makes each routing node cache content chunks with a probability inversely proportional to the hop count. Simulation results lead us to conclude that (i) the cache placement policy is more effective for cache performance than the cache replacement policy, (ii) the suggested cache placement policy yields better caching performance for butterfly-type DCNs than traditional placement policies such as ALWAYS and FIX(P), and (iii) a high cache hit ratio does not always imply a low average hop count.
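
The placement rule itself is simple enough to sketch: each routing node caches a chunk with probability inversely proportional to the hop count the chunk has traveled. The 1/hop_count form below is an assumed instance of "inversely proportional"; the paper's exact normalization may differ.

```python
import random

def should_cache(hop_count: int) -> bool:
    """Cache an arriving chunk with probability inversely proportional to the
    hop count it has traveled (1/hop_count is an assumption for illustration)."""
    if hop_count <= 0:
        return False
    return random.random() < 1.0 / hop_count

# Toy example: a chunk traversing four routers; later hops cache less often.
for hop, router in enumerate(["r1", "r2", "r3", "r4"], start=1):
    print(router, "caches chunk:", should_cache(hop))
```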

Institutional Complement on In-Network Caching of Copyrighted Works

  • 조은상;황지현;권태경;최양희
    • 한국통신학회논문지 / Vol. 37, No. 8C / pp. 703-710 / 2012
  • The Copyright Act, partially amended on December 2, 2011, introduced a new provision on temporary reproduction in the course of using copyrighted works. With research on in-network caching technologies such as Content-Centric Networking now active, temporary storage of copyrighted works, that is, temporary reproduction, can occur not only at user terminals but also at routers, which are network equipment, so legal and institutional review of this issue is needed. This paper therefore examines temporary reproduction of copyrighted works, focusing on the relevant provisions of the recently amended Copyright Act and the Korea-U.S. FTA agreement, and considers what institutional arrangements must be in place for in-network caching to become feasible.

Dynamic Probabilistic Caching Algorithm with Content Priorities for Content-Centric Networks

  • Sirichotedumrong, Warit;Kumwilaisak, Wuttipong;Tarnoi, Saran;Thatphitthukkul, Nattanun
    • ETRI Journal / Vol. 39, No. 5 / pp. 695-706 / 2017
  • This paper presents a caching algorithm that offers better reconstructed data quality to the requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet must be cached based on the dynamic caching probability, which is adjusted according to the priorities of content carried by the data packet, the uncertainty of content popularities, and the records of cache events in the router. The adaptation of caching probability depends on the priorities of content, the multiplication factor adaptation, and the addition factor adaptation. The multiplication factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition factor adaptation relies on a multiplication factor, popularities of requested contents, a cache-hit ratio, and a cache-miss ratio. We evaluate the performance of the caching algorithm by comparing it with previous caching schemes in network simulation. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance.
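
A rough sketch of one adaptation step is given below; the abstract does not reproduce the exact update formulas, so the expressions here are assumptions that only mirror the stated dependencies (the instantaneous hit ratio for the multiplication factor; the multiplication factor, content popularity, and hit/miss ratios for the addition factor).

```python
def update_caching_probability(p, priority, hit_ratio, miss_ratio, popularity):
    """One illustrative adaptation step for the caching probability of a priority class.

    The multiplication factor follows the instantaneous cache-hit ratio; the addition
    factor combines the multiplication factor, content popularity, and the hit/miss
    ratios. Both forms are assumptions, not the paper's exact equations.
    """
    mult = 1.0 + hit_ratio                                      # multiplication factor adaptation
    add = 0.01 * mult * popularity * (miss_ratio - hit_ratio)   # addition factor adaptation
    p_new = p * mult + priority * add                           # higher-priority content adapts faster
    return min(1.0, max(0.0, p_new))                            # keep probability in [0, 1]

# Toy usage: popular, high-priority content under many misses gets a higher probability.
p = update_caching_probability(p=0.3, priority=2, hit_ratio=0.1, miss_ratio=0.9, popularity=0.8)
print("updated caching probability:", round(p, 3))
```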

A Data Caching Management Scheme for NDN

  • 김대엽
    • 한국멀티미디어학회논문지 / Vol. 19, No. 2 / pp. 291-299 / 2016
  • To enhance network efficiency, named-data networking (NDN) implements data caching functionality on intermediate network nodes, which then directly respond to request messages for cached data. By processing request messages at intermediate nodes, NDN can efficiently reduce the amount of network traffic and relieve congestion near data sources. NDN also provides a data authentication mechanism to prevent the kinds of Internet incidents caused by the absence of authentication. Hence, by applying NDN to various smart IT convergence services, it is expected to efficiently control the explosive growth of network traffic as well as to provide more secure services. Fundamentally, which data is cached and where the caching nodes are located in the network topology are important factors in NDN. This paper first analyzes previous work that caches content based on content popularity. It then investigates the cache hit rate at each node of a network topology, proposes an improved caching scheme based on the results of this analysis, and finally evaluates the performance of the proposal.
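
As a concrete reference point for "caching content based on content popularity," a minimal popularity-counting content store at an NDN node could look like the sketch below; the eviction rule (replace the least-requested name) is an illustrative assumption, not the scheme proposed in the paper.

```python
from collections import Counter

class PopularityCache:
    """Minimal content store that caches by request popularity (illustrative only)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}              # content name -> data
        self.requests = Counter()    # content name -> request count seen at this node

    def on_interest(self, name: str):
        """Handle an Interest: count the request and answer from the cache if possible."""
        self.requests[name] += 1
        return self.store.get(name)  # None means a cache miss (forward upstream)

    def on_data(self, name: str, data: bytes):
        """Cache incoming Data if it is at least as popular as the least popular entry."""
        if name in self.store:
            return
        if len(self.store) < self.capacity:
            self.store[name] = data
            return
        victim = min(self.store, key=lambda n: self.requests[n])
        if self.requests[name] >= self.requests[victim]:
            del self.store[victim]
            self.store[name] = data
```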

Development for a Simple Client-based Distributed Web Caching System

  • Park, Jong-Ho;Chong, Kil-To
    • 제어로봇시스템학회 2003년도 ICCAS 학술대회논문집 / pp. 2131-2136 / 2003
  • Since the number of user requests on the Internet increases dramatically, servers and networks can be swamped unexpectedly and without prior notice. End users then wait for, or are refused, responses from the originating servers. To address this problem, distributed web caching systems have been considered as a way to efficiently utilize structural elements of the network. Because a distributed web caching system uses caches that are close to end users, it delivers content to users faster than the original network path. This paper proposes a simple client-based distributed web caching system (2HRCS) in which clients directly perform object allocation and load balancing, without the additional DNS used for load balancing in recent distributed web caching protocols such as CARP (Cache Array Routing Protocol) and GHS (Global Hosting System). The proposed system reduces setup and operating costs by removing the DNS that balances the load in existing systems. Because clients use a consistent hashing method, the system also extends to distributed web caching environments whose caches have different capacities. A distributed web caching system was composed and tested to evaluate its performance, and it showed performance superior to a consistent hashing system. Because the proposed system maintains the performance of the existing system while reducing costs, it is advantageous for constructing small or medium-scale CDNs (content delivery networks).
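
The client-side object allocation rests on consistent hashing; a minimal hash ring is sketched below (the cache names, hash function, and virtual-node count are illustrative assumptions).

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring mapping object URLs to caches (illustrative)."""
    def __init__(self, caches, vnodes=100):
        self.ring = []                          # sorted list of (hash, cache) points
        for cache in caches:
            for i in range(vnodes):             # virtual nodes smooth the load
                self.ring.append((self._hash(f"{cache}#{i}"), cache))
        self.ring.sort()

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def cache_for(self, url: str) -> str:
        """Pick the first cache clockwise from the URL's position on the ring."""
        h = self._hash(url)
        idx = bisect.bisect(self.ring, (h, ""))
        return self.ring[idx % len(self.ring)][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.cache_for("http://example.com/big-object"))
```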


Performance Impact of Large File Transfer on Web Proxy Caching: A Case Study in a High Bandwidth Campus Network Environment

  • Kim, Hyun-Chul;Lee, Dong-Man;Chon, Kil-Nam;Jang, Beak-Cheol;Kwon, Tae-Kyoung;Choi, Yang-Hee
    • Journal of Communications and Networks / Vol. 12, No. 1 / pp. 52-66 / 2010
  • Since large objects consume substantial resources, web proxy caching incurs a fundamental trade-off between performance (i.e., hit ratio and latency) and overhead (i.e., resource usage) in caching and relaying large objects to users. This paper investigates how and to what extent the current dedicated-server-based web proxy caching scheme is affected by large file transfers in a high-bandwidth campus network environment. We use a series of trace-based performance analyses and profiling of various resource components in our experimental Squid proxy cache server. Large file transfers often overwhelm our cache server. This creates a bottleneck in the web network by saturating the network bandwidth of the cache server. Due to requests for large objects, the response times for delivering concurrently requested small objects increase by a factor as high as a few million in the worst cases. We argue that this cache bandwidth bottleneck is due to the fundamental limitations of the current centralized web proxy caching model, which scales poorly when only a limited amount of dedicated resources is available. This is a serious threat to the viability of the current web proxy caching model, particularly in a high-bandwidth access network, since it leads to sporadic disconnections of the downstream access network from the global web network. We propose a peer-to-peer cooperative web caching scheme to address the cache bandwidth bottleneck problem. We show that it performs the caching and delivery of large objects in an efficient and cost-effective manner, without generating significant overhead for participating peers.
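
One way to realize the described peer-to-peer cooperation is to divert large-object requests to cooperating peers so they do not saturate the dedicated proxy's bandwidth; the size threshold and the peer-selection rule below are assumptions for illustration only, not the paper's scheme.

```python
import hashlib

LARGE_OBJECT_THRESHOLD = 10 * 1024 * 1024   # 10 MB cut-off (assumption)

def choose_server(url: str, size_hint: int, proxy: str, peers: list) -> str:
    """Route large objects to a cooperating peer chosen by hashing the URL;
    keep small objects on the dedicated proxy (illustrative sketch)."""
    if size_hint >= LARGE_OBJECT_THRESHOLD and peers:
        digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
        return peers[digest % len(peers)]
    return proxy

print(choose_server("http://example.com/linux.iso", 700_000_000,
                    proxy="squid-proxy", peers=["peer-1", "peer-2", "peer-3"]))
```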

Analysis of Usefulness of Domain-Based Network Caching in Mobile Environment

  • 이화세;이승원;박성호;정기동
    • 한국멀티미디어학회논문지 / Vol. 7, No. 5 / pp. 668-679 / 2004
  • This study investigates whether caching is useful when users in a mobile environment request continuous media services such as video and audio while moving quickly or slowly across multiple base stations. To reduce packet disconnection and network overhead and to minimize transmission delay in a mobile environment, we propose a domain-based hierarchical cache structure and examine whether applying caching in this structure is beneficial. We model the user environment receiving continuous media services and the hierarchical network structure in a mobile environment, and analyze the usefulness of caching according to user mobility patterns and cache locations. The results show that the hit ratio and the number of relocations vary widely depending on cache location and user mobility, so caching needs to be applied adaptively in mobile environments.
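
A two-level lookup of the kind described above (a base-station cache backed by a domain-level cache) can be sketched as follows; the structure and the relocation-on-domain-hit behavior are assumptions for illustration, not the paper's exact design.

```python
class HierarchicalCache:
    """Domain-based two-level cache: per-base-station caches under a shared domain cache
    (illustrative sketch only)."""
    def __init__(self, base_stations):
        self.domain_cache = {}                              # shared within the domain
        self.bs_caches = {bs: {} for bs in base_stations}   # one cache per base station

    def get(self, bs, media_id):
        if media_id in self.bs_caches[bs]:                  # hit at the base station
            return self.bs_caches[bs][media_id], "bs-hit"
        if media_id in self.domain_cache:                   # hit at the domain level
            self.bs_caches[bs][media_id] = self.domain_cache[media_id]  # relocate toward the user
            return self.domain_cache[media_id], "domain-hit"
        return None, "miss"                                 # fetch from the origin server

    def put(self, bs, media_id, data):
        self.bs_caches[bs][media_id] = data
        self.domain_cache[media_id] = data

cache = HierarchicalCache(["bs1", "bs2"])
cache.put("bs1", "video-42", b"...")
print(cache.get("bs2", "video-42")[1])   # domain-hit: relocated as the user moves to bs2
```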


Performance Analysis of Private Multimedia Caching Network Based on Wireless Local Area Network

  • 반태원;김성환;류종열;이웅섭
    • 한국정보통신학회논문지 / Vol. 21, No. 8 / pp. 1486-1491 / 2017
  • This paper proposes a wireless local area network (WLAN)-based caching scheme that can improve the quality of large-volume, high-definition multimedia streaming services, which have been growing rapidly, and relieve the traffic burden on the core network. The proposed caching scheme stores multimedia content on a storage device attached to a WLAN access point (AP) and, upon a client's streaming request, provides the streaming service independently without a connection to the Internet. We built a testbed on a real commercial network and measured the performance of the proposed scheme in terms of frames per second (FPS) and buffering time. The results show that the proposed caching scheme reduces the average buffering time by 73.3% and improves the average FPS by about 71.3% compared with the conventional streaming scheme.

Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks

  • Choi, Yoonjeong;Lim, Yujin
    • Journal of Information Processing Systems / Vol. 18, No. 6 / pp. 803-812 / 2022
  • With the increasing number of mobile device users worldwide, utilizing mobile edge computing (MEC) devices close to users for content caching can reduce transmission latency compared with receiving content from a server or the cloud. However, because MEC has limited storage capacity, it is necessary to determine which content types and sizes to cache. In this study, we investigate a caching strategy that increases the hit ratio of small base stations (SBSs) for mobile users in a heterogeneous network consisting of one macro base station (MBS) and multiple SBSs. If there are several SBSs that users can access, the hit ratio can be improved by reducing duplicate content and increasing the diversity of content across SBSs. We propose a Deep Q-Network (DQN)-based caching strategy that considers time-varying content popularity and content redundancy across multiple SBSs. Content is stored in the SBSs in divided form using maximum distance separable (MDS) codes to enhance content diversity. Experiments in various environments show that the proposed caching strategy outperforms the other methods in terms of hit ratio.
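
A minimal skeleton of a DQN-style caching decision at an SBS is sketched below using PyTorch; the state features, action space, and network sizes are assumptions, and the training loop, replay buffer, and MDS fragment handling are omitted.

```python
# Minimal DQN decision skeleton for SBS caching (illustrative only).
import random
import torch
import torch.nn as nn

class CachingDQN(nn.Module):
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),      # Q-value per candidate caching action
        )

    def forward(self, state):
        return self.net(state)

def choose_action(model: CachingDQN, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice among candidate contents (or coded fragments) to cache."""
    if random.random() < epsilon:
        return random.randrange(model.net[-1].out_features)
    with torch.no_grad():
        return int(model(state).argmax().item())

# Toy state: popularity estimates plus the fraction already cached in neighboring SBSs.
state = torch.tensor([0.4, 0.3, 0.2, 0.1, 0.5])
model = CachingDQN(state_dim=5, num_actions=4)
print("caching action:", choose_action(model, state))
```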