• Title/Summary/Keyword: In-network Caching

Search results: 169

A Study on the Performance of Prefetching Web Cache Proxy (Prefetch하는 웹 캐쉬 프록시의 성능에 대한 연구)

  • 백윤철
    • Journal of the Korea Computer Industry Society / v.2 no.11 / pp.1453-1464 / 2001
  • The explosive growth of the Internet population has degraded web service performance. Popular web sites cannot serve their many incoming requests at an adequate level, and such poor service leaves users unsatisfied. Web caching, a remedy for this problem, reduces the amount of network traffic and gives users fast responses. In this paper, we analyze the characteristics of web cache traffic using traces from the NLANR (National Laboratory for Applied Network Research) root caches and the education network cache at Seoul National University. Based on this analysis, we propose a prefetching method and discuss its gains. We find that the proposed prefetching improves the hit rate by up to 3% and the response time by up to 5%.
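As a rough illustration of the prefetching idea summarized above, the sketch below combines an LRU proxy cache with a simple successor-frequency predictor: on each request, the cache also fetches the URLs most often requested next. All class and method names are hypothetical, and the predictor is an assumption; the paper's actual prefetch policy is derived from its NLANR and SNU trace analysis.

```python
from collections import OrderedDict, defaultdict

class PrefetchingProxyCache:
    """Minimal sketch of a proxy cache that prefetches likely next requests."""

    def __init__(self, capacity, fetch_fn, prefetch_width=1):
        self.capacity = capacity
        self.fetch_fn = fetch_fn              # fetches a URL from the origin server
        self.prefetch_width = prefetch_width  # how many predicted URLs to prefetch
        self.cache = OrderedDict()            # URL -> object, kept in LRU order
        self.followers = defaultdict(lambda: defaultdict(int))  # url -> next_url -> count
        self.last_url = None

    def _store(self, url, obj):
        self.cache[url] = obj
        self.cache.move_to_end(url)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used entry

    def get(self, url):
        # Record the observed transition so future predictions improve.
        if self.last_url is not None:
            self.followers[self.last_url][url] += 1
        self.last_url = url

        if url in self.cache:                 # cache hit
            self.cache.move_to_end(url)
            obj = self.cache[url]
        else:                                 # cache miss: fetch from the origin
            obj = self.fetch_fn(url)
            self._store(url, obj)

        # Prefetch the most frequent successors of this URL.
        ranked = sorted(self.followers[url].items(), key=lambda kv: -kv[1])
        for next_url, _ in ranked[: self.prefetch_width]:
            if next_url not in self.cache:
                self._store(next_url, self.fetch_fn(next_url))
        return obj
```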

Dynamic Caching Routing Strategy for LEO Satellite Nodes Based on Gradient Boosting Regression Tree

  • Yang Yang;Shengbo Hu;Guiju Lu
    • Journal of Information Processing Systems / v.20 no.1 / pp.131-147 / 2024
  • A routing strategy based on traffic prediction and dynamic cache allocation for satellite nodes is proposed to address the issues of high propagation delay and overall delay of inter-satellite and satellite-to-ground links in low Earth orbit (LEO) satellite systems. The spatial and temporal correlations of satellite network traffic were analyzed, and the relevant traffic through the target satellite was extracted as raw input for traffic prediction. An improved gradient boosting regression tree algorithm was used for traffic prediction. Based on the traffic prediction results, a dynamic cache allocation routing strategy is proposed. The satellite nodes periodically monitor the traffic load on inter-satellite links (ISLs) and dynamically allocate cache resources for each ISL with neighboring nodes. Simulation results demonstrate that the proposed routing strategy effectively reduces packet loss rate and average end-to-end delay and improves the distribution of services across the entire network.
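A minimal sketch of the two-stage idea (predict ISL traffic with gradient boosting, then allocate cache in proportion to the prediction) is given below, assuming scikit-learn's GradientBoostingRegressor, synthetic traffic features, and a simple proportional allocation rule; none of these details are taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic history: for each sample, traffic seen on 4 neighboring ISLs over
# the last 3 periods (12 features) and the traffic observed in the next period.
X = rng.uniform(0.0, 1.0, size=(500, 12))
y = X[:, :4].mean(axis=1) + 0.1 * rng.normal(size=500)   # toy ground truth

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X, y)

def allocate_cache(recent_traffic, total_cache_slots, model):
    """Allocate cache slots to each ISL in proportion to its predicted traffic."""
    predicted = np.maximum(model.predict(recent_traffic), 1e-6)   # one row per ISL
    shares = predicted / predicted.sum()
    return np.floor(shares * total_cache_slots).astype(int)

# Example: predict for the 4 ISLs of one satellite node and split 1000 cache slots.
recent = rng.uniform(0.0, 1.0, size=(4, 12))
print(allocate_cache(recent, 1000, model))
```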

Analysis of MANET Protocols Using OPNET (OPNET을 이용한 MANET 프로토콜 분석)

  • Zhang, Xiao-Lei;Wang, Ye;Ki, Jang-Geun;Lee, Kyu-Tae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.4 / pp.87-97 / 2009
  • A mobile ad hoc network (MANET) is characterized by multi-hop wireless connectivity and a frequently changing topology of mobile nodes, so the efficiency of the dynamic routing protocol plays an important role in network performance. In this paper, the performance of five MANET routing protocols is compared using the OPNET modeler: AODV, DSR, GRP, OLSR and TORA. Various performance metrics are examined, such as packet delivery ratio, end-to-end delay and routing overhead, under varying data traffic, numbers of nodes and mobility. In our simulation results, OLSR shows the best data delivery ratio in static networks, while AODV performs best in mobile networks with moderate data traffic. When comparing the proactive protocols (OLSR, GRP) and the reactive protocols (AODV, DSR) under varying data traffic in static networks, the proactive protocols consistently present almost constant overhead, while the reactive protocols show a sharp increase to some extent. Comparing the proactive protocols in static and mobile networks, OLSR achieves a better delivery ratio than GRP but at higher overhead. As for the reactive protocols, DSR outperforms AODV under moderate data traffic in static networks because it exploits caching aggressively and maintains multiple routes per destination. However, this advantage turns into a disadvantage in high-mobility networks, since the chance of the cached routes becoming stale increases.

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE: Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects such as holes and curvatures, as well as other potential causes of gas explosions. Two major data access patterns appear when an analyzer accesses the pipeline signal data. The first is a sequential pattern, where an analyst reads the sensor data only once in sequential fashion. The second is a repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by treating pipeline sensor data as multiple time series and efficiently caching the time-series data in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of a signal cache line as the caching unit, which is a set of time-series signal data for a fixed distance. We also describe the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead of T-Cache is negligible.
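The signal-cache-line concept, caching time-series signal data in fixed-distance units on the client side with LRU replacement, can be sketched roughly as follows; the line length, the eviction policy and all names are assumptions for illustration and do not reproduce T-Cache's actual structures such as its smart cursors.

```python
from collections import OrderedDict

LINE_LENGTH_M = 10.0   # assumed fixed pipeline distance covered by one cache line

class SignalCacheLine:
    """One caching unit: all sensor samples for a fixed distance range."""
    def __init__(self, start_m, samples):
        self.start_m = start_m
        self.samples = samples            # e.g., list of per-sensor readings

class TimeSeriesCache:
    """Client-side cache keyed by cache-line index; LRU eviction (assumed)."""
    def __init__(self, capacity, load_from_server):
        self.capacity = capacity
        self.load_from_server = load_from_server   # (start_m, end_m) -> samples
        self.lines = OrderedDict()                 # line index -> SignalCacheLine

    def _line_index(self, position_m):
        return int(position_m // LINE_LENGTH_M)

    def read_range(self, start_m, end_m):
        """Return all cache lines covering [start_m, end_m), fetching misses."""
        result = []
        for idx in range(self._line_index(start_m), self._line_index(end_m) + 1):
            if idx in self.lines:                        # hit: reuse the cached line
                self.lines.move_to_end(idx)
            else:                                        # miss: one server round trip
                lo = idx * LINE_LENGTH_M
                samples = self.load_from_server(lo, lo + LINE_LENGTH_M)
                self.lines[idx] = SignalCacheLine(lo, samples)
                if len(self.lines) > self.capacity:
                    self.lines.popitem(last=False)       # evict the oldest line
            result.append(self.lines[idx])
        return result
```

Repeated reads of the same distance range then hit the cached lines and avoid both the network transfer and the disk access, which mirrors the repetitive access pattern described in the abstract.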

Cache Algorithm in Reverse Connection Setup Protocol(CRCP) for effective Location Management in PCS Network (PCS 네트워크 상에서 효율적인 위치관리를 위한 역방향 호설정 캐쉬 알고리즘(CRCP)에 관한 연구)

  • Ahn, Yun-Shok;An, Seok;Bae, Yun-Jeong;Jo, Jea-Jun;Kim, Jae-Ha;Kim, Byung-Gi
    • Proceedings of the KIEE Conference / 1998.11b / pp.630-632 / 1998
  • The basic user location strategies proposed in current PCS (Personal Communication Services) networks are two-level database strategies. These databases, which reside in the signalling network, always maintain each user's current location information, which is used when setting up a call to a mobile user. As the number of PCS users increases, these strategies cause problems such as concentrating signalling traffic on the databases and increasing call setup delay. In this paper, we propose the RCP (Reverse Connection setup Protocol) model, which applies the RVC (Reverse Virtual Call setup) algorithm to the PCS reference model, and the CRCP (Cache algorithm in RCP) model, which adds caching strategies to the RCP model. When a cache miss occurs, the CRCP model requires a smaller miss penalty than the PCS model. We also show that the proposed models are likely to yield better performance in terms of reduced location tracking delay.
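A toy version of location caching in call setup is sketched below: the originating switch keeps a cache of callee locations and falls back to the two-level database on a miss or a stale entry. The delay constants and all names are illustrative assumptions and do not model the CRCP signalling flow itself.

```python
class LocationCache:
    """Per-switch cache mapping a subscriber ID to its last known VLR area."""

    # Illustrative delays (ms) for comparing hit and miss paths; not measured values.
    CACHE_LOOKUP_MS = 1
    VLR_QUERY_MS = 20
    HLR_QUERY_MS = 50

    def __init__(self, hlr):
        self.entries = {}          # subscriber_id -> cached VLR area
        self.hlr = hlr             # subscriber_id -> current VLR area (two-level DB)

    def locate(self, subscriber_id):
        """Return (vlr_area, setup_delay_ms) for a call to subscriber_id."""
        delay = self.CACHE_LOOKUP_MS
        cached_area = self.entries.get(subscriber_id)
        if cached_area is not None:
            delay += self.VLR_QUERY_MS                     # try the cached VLR first
            if self.hlr[subscriber_id] == cached_area:     # toy stand-in for "still registered there"
                return cached_area, delay                  # hit: no HLR query needed
        delay += self.HLR_QUERY_MS                         # miss or stale entry: query the HLR
        area = self.hlr[subscriber_id]
        self.entries[subscriber_id] = area                 # refresh the cache for later calls
        return area, delay

hlr = {"010-1234": "VLR-A"}
cache = LocationCache(hlr)
print(cache.locate("010-1234"))   # first call: miss, pays the HLR query
print(cache.locate("010-1234"))   # second call: hit via the cached VLR area
```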

A Heuristic Algorithm for Optimal Facility Placement in Mobile Edge Networks

  • Jiao, Jiping;Chen, Lingyu;Hong, Xuemin;Shi, Jianghong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3329-3350 / 2017
  • Installing caching and computing facilities in mobile edge networks is a promising solution to cope with the challenging capacity and delay requirements imposed on future mobile communication systems. The problem of optimal facility placement in mobile edge networks has not been fully studied in the literature. This is a non-trivial problem because the mobile edge network has a unidirectional topology, making existing solutions inapplicable. This paper considers the problem of optimal placement of a fixed number of facilities in a mobile edge network with an arbitrary tree topology and an arbitrary demand distribution. A low-complexity sequential algorithm is proposed and proved to be convergent and optimal in some cases. The complexity of the algorithm is shown to be O(H²γ), where H is the height of the tree and γ is the number of facilities. Simulation results confirm that the proposed algorithm is effective in producing near-optimal solutions.
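The underlying placement problem (choose a fixed number of tree nodes to host facilities so that the total demand-weighted distance travelled by requests before they meet a facility, or the root, is minimized) can be stated concretely with the brute-force sketch below. This is only a problem illustration under assumed definitions, not the paper's O(H²γ) sequential algorithm.

```python
from itertools import combinations

# Toy tree given as child -> parent; node 0 is the root (origin server).
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
demand = {0: 0, 1: 3, 2: 5, 3: 2, 4: 7, 5: 1, 6: 4}   # requests originating at each node

def path_to_root(node):
    """Nodes visited (in order) when a request travels upward to the root."""
    path = [node]
    while node != 0:
        node = parent[node]
        path.append(node)
    return path

def total_cost(facilities):
    """Demand-weighted hops until each request meets a facility or the root."""
    cost = 0
    for node, d in demand.items():
        for hops, visited in enumerate(path_to_root(node)):
            if visited in facilities or visited == 0:
                cost += d * hops
                break
    return cost

def brute_force_placement(num_facilities):
    candidates = [n for n in demand if n != 0]
    return min(combinations(candidates, num_facilities), key=total_cost)

best = brute_force_placement(2)
print(best, total_cost(best))
```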

Development of Directed Diffusion Algorithm with Enhanced Performance (향상된 성능을 갖는 Directed Diffusion 알고리즘의 개발)

  • Kim Sung-Ho;Kim Si-Hwan
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.7 / pp.858-863 / 2005
  • A sensor network is subject to novel problems and constraints because it is composed of thousands of tiny devices with very limited resources. The large number of motes in a sensor network means that some nodes will fail as their batteries are depleted, so it is imperative to save as much energy as possible. In this work, we propose an energy-efficient routing algorithm based on the directed diffusion scheme. In the proposed scheme, some of the overhead required for reinforcing the gradient path can be effectively eliminated. Several simulations are executed to verify the usefulness of the proposed algorithm.

Social-relation Aware Routing Protocol in Mobile Ad hoc Networks (이동 애드 혹 네트워크를 위한 사회적 관계 인식 라우팅 프로토콜)

  • An, Ji-Sun;Ko, Yang-Woo;Lee, Dong-Man
    • Journal of KIISE: Computing Practices and Letters / v.14 no.8 / pp.798-802 / 2008
  • In this paper, we consider mobile ad hoc network routing protocols for content sharing applications. We show that by utilizing the social relations among participants, our routing protocol can improve performance and caching efficiency. Moreover, in situations where the pattern of users' content consumption can be anticipated, our scheme helps such applications become more efficient in terms of access time and network overhead. Using the NS-2 simulator, we compare our scheme with DSDV and a shortest-path routing protocol.

Multicast VOD System for Interactive Services in the Head-End-Network (Head-End-Network에서 대화형 서비스를 위한 멀티캐스트 VOD 시스템)

  • Kim, Back-Hyun;Hwang, Tae-June;Kim, Ik-Soo
    • The KIPS Transactions: Part B / v.11B no.3 / pp.361-368 / 2004
  • This paper proposes an interactive VOD system that provides truly interactive VCR services using multicast delivery, client buffering and a web-caching technique that implements a distributed proxy in the Head-End-Network (HNET). The technique places caches in the HNET, which consists of a Switching Agent (SA), several Head-End-Nodes (HENs) and many clients. In this model, the HENs distributively store the requested video under the control of the SA. The client buffer also expands dynamically to support various VCR playback rates. Interactive services are thus offered by transmitting video streams from the network, the HENs and the streams stored in the buffer. As a result, the technique confines the network load to a limited area, minimizes additional channel allocation from the server and restricts the transmission of duplicated video content.

Game Theoretic Cache Allocation Scheme in Wireless Networks (게임이론 기반 무선 통신에서의 캐시 할당 기법)

  • Le, Tra Huong Thi;Kim, Do Hyeon;Hong, Choong Seon
    • Journal of KIISE / v.44 no.8 / pp.854-859 / 2017
  • Caching popular videos in the storage of base stations is an efficient method to reduce transmission latency. This paper proposes an incentive-based proactive cache mechanism in wireless networks to motivate content providers (CPs) to participate in the caching procedure. The system consists of one or more Infrastructure Providers (InPs) and many CPs. The InP aims to set the price it charges the CPs so as to maximize its revenue, while the CPs compete to determine the number of files they cache at the InP's base stations (BSs). We model this system as a Stackelberg game in which the InP is the leader and the CPs are the followers. Using backward induction, we derive a closed-form expression for the amount of cache space that each CP rents on each base station, and then solve the optimization problem to compute the price the InP charges each CP. This differs from existing works in that we consider a non-uniform pricing scheme. The numerical results show that the InP's profit under the proposed scheme is higher than under uniform pricing.
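A toy numerical version of the backward-induction argument is sketched below: for a posted per-unit price, each CP chooses how much cache space to rent by maximizing an assumed concave utility, and the InP then searches over the price to maximize its revenue. The utility form, the parameters, the grid search and the use of a single uniform price (the paper studies non-uniform pricing) are all illustrative assumptions rather than the paper's closed-form solution.

```python
import numpy as np

# Assumed follower utility: CP i renting x units at price p gains
# benefit_i * log(1 + x) - p * x   (concave benefit from cached files minus rent).
benefits = np.array([4.0, 6.0, 9.0])      # heterogeneous CPs
capacity = 20.0                           # total cache space at the base station

def best_response(price):
    """Backward induction, step 1: each CP's optimal rented space at a given price."""
    # d/dx [b*log(1+x) - p*x] = 0  ->  x = b/p - 1, clipped at 0.
    return np.clip(benefits / price - 1.0, 0.0, None)

def inp_revenue(price):
    """Backward induction, step 2: leader's revenue anticipating the responses."""
    demand = best_response(price).sum()
    if demand > capacity:                 # infeasible: demand exceeds the cache space
        return -np.inf
    return price * demand

prices = np.linspace(0.1, 10.0, 1000)     # simple grid search over the uniform price
best_price = max(prices, key=inp_revenue)
print(best_price, best_response(best_price), inp_revenue(best_price))
```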