• Title/Summary/Keyword: Network Cache

A Study on Secure Cooperative Caching Technique in Wireless Ad-hoc Network (Wireless Ad-hoc Network에서 보안 협력 캐싱 기법에 관한 연구)

  • Yang, Hwan Seok
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.3
    • /
    • pp.91-98
    • /
    • 2013
  • A wireless ad-hoc network consisting only of mobile nodes has no node that acts as a dedicated cache server, and even if one exists, node mobility makes it difficult to provide cache services. A cooperative caching technique is therefore needed to improve the efficiency of information access by reducing data access time and bandwidth use in the wireless ad-hoc network. In this paper, the whole network is divided into non-overlapping zones and a master node is elected for each zone. Each general node in a zone maintains a ZICT and manages its cache data for cooperative caching, while gateway nodes use an NZCT to manage cache information of neighboring zones. To counter cache-consistency attacks, a known vulnerability of distributed caching techniques, we propose a security structure in which only nodes that have been issued an ID key by the elected master node can send and receive data. Comparative experiments against the GCC and GC techniques confirm the excellent performance of the proposed method.
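
The following is a minimal sketch of the zone-based, key-gated cooperation described in this abstract. The class names, the ZICT/NZCT layouts, and the key-issuing scheme are illustrative assumptions, not the authors' implementation.

```python
import secrets

class MasterNode:
    """Elected master of a zone; issues ID keys so that only authorized nodes
    may exchange cache data (guards against cache-consistency attacks)."""
    def __init__(self):
        self.issued = set()

    def issue_key(self, node_id):
        key = secrets.token_hex(8)
        self.issued.add((node_id, key))
        return key

    def is_authorized(self, node_id, key):
        return (node_id, key) in self.issued

class ZoneNode:
    """General node: keeps local cache data plus a ZICT mapping data id -> in-zone holder."""
    def __init__(self, node_id, master):
        self.node_id = node_id
        self.master = master
        self.key = master.issue_key(node_id)
        self.local_cache = {}   # data_id -> payload
        self.zict = {}          # data_id -> node_id of the in-zone holder

    def serve(self, requester_id, requester_key, data_id):
        # Serve cached data only to nodes holding a key issued by the zone master.
        if not self.master.is_authorized(requester_id, requester_key):
            return None
        return self.local_cache.get(data_id)

class GatewayNode(ZoneNode):
    """Gateway node: additionally keeps an NZCT describing cached data in neighboring zones."""
    def __init__(self, node_id, master):
        super().__init__(node_id, master)
        self.nzct = {}          # data_id -> neighboring zone that caches it
```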

Design and Implementation of an SCI-Based Network Cache Coherent NUMA System for High-Performance PC Clustering (고성능 PC 클러스터 링을 위한 SCI 기반 Network Cache Coherent NUMA 시스템의 설계 및 구현)

  • Oh Soo-Cheol;Chung Sang-Hwa
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.12
    • /
    • pp.716-725
    • /
    • 2004
  • It is extremely important to minimize network access time in constructing a high-performance PC cluster system. For PC cluster systems, network access time can be reduced by maintaining a network cache in each cluster node. This paper presents a Network Cache Coherent NUMA (NCC-NUMA) system that utilizes a network cache by locating shared memory on the PCI bus, and develops the NCC-NUMA card, which is the core module of the system. The NCC-NUMA card is plugged directly into the PCI slot of each node and contains shared memory, a network cache, a shared-memory control module, and a network control module. The network cache is maintained for the shared memory on the PCI bus of the cluster nodes, and the coherency mechanism between the network cache and the shared memory is based on the IEEE SCI standard. In SPLASH-2 benchmark experiments, the NCC-NUMA system showed improvements of 56% compared with an SCI-based cluster without a network cache.
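
IEEE SCI keeps caches coherent with a distributed, doubly linked sharing list; the toy model below ignores that machinery and shows only the basic effect the abstract relies on: reads of remote shared memory are served from a per-node network cache when possible, and writes invalidate remote copies. All names are hypothetical and this is not the NCC-NUMA hardware design.

```python
class NccNumaNode:
    """Simplified model of a cluster node with home shared memory and a network cache
    for remote lines.  Real SCI coherence uses distributed sharing lists; this sketch
    simply invalidates every remote copy on a write."""
    def __init__(self, node_id, cluster):
        self.node_id = node_id
        self.cluster = cluster            # node_id -> NccNumaNode
        self.shared_memory = {}           # addresses homed at this node
        self.network_cache = {}           # addr -> value cached from remote homes
        cluster[node_id] = self

    def read(self, addr, home_id):
        if home_id == self.node_id:
            return self.shared_memory.get(addr)
        if addr in self.network_cache:    # network-cache hit avoids a remote access
            return self.network_cache[addr]
        value = self.cluster[home_id].shared_memory.get(addr)
        self.network_cache[addr] = value  # install the remote line locally
        return value

    def write(self, addr, home_id, value):
        for node in self.cluster.values():
            node.network_cache.pop(addr, None)   # invalidate every cached copy
        self.cluster[home_id].shared_memory[addr] = value
```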

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei;Huang, Wei;Han, Li;Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.3892-3912
    • /
    • 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base-station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate cache content according to actual needs and determining the most appropriate location to optimize cache performance have emerged as serious issues in multi-access edge computing that must be solved urgently. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. Firstly, the cache value generated by cache placement is calculated using the cache capacity, data popularity, and node replacement rate. Secondly, the cache placement problem is modeled according to the cache value and the data object acquisition and replacement costs. The cache placement model is then transformed into a combinatorial optimization problem and the cache objects are placed on the appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU provides a significant improvement in cache service rate, data response time, and replacement number compared with other cache placement algorithms.
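
A compact, single-node simplification of the placement idea: score each object with an illustrative utility built from popularity, node replacement rate, and capacity pressure, then search for a feasible placement with a small tabu loop. The utility formula, the parameters, and the restriction to one node are assumptions; this does not reproduce the paper's CPBCU model.

```python
def cache_value(popularity, replacement_rate, size, capacity):
    # Illustrative utility: popular objects on a stable (low replacement-rate) node
    # are worth more; larger objects consume more of the limited capacity.
    return popularity * (1.0 - replacement_rate) * (1.0 - size / capacity)

def tabu_place(objects, capacity, iters=200, tenure=5):
    """objects: name -> (size, popularity, replacement_rate).
    Returns the set of objects to place in one node's cache."""
    names = list(objects)
    current, best, best_score = set(), set(), 0.0
    tabu = {}                                        # name -> iteration until which it is tabu

    def score(sol):
        if sum(objects[n][0] for n in sol) > capacity:
            return float("-inf")                     # infeasible: exceeds cache capacity
        return sum(cache_value(objects[n][1], objects[n][2], objects[n][0], capacity)
                   for n in sol)

    for it in range(iters):
        # Neighborhood: toggle a single non-tabu object in or out of the cache.
        moves = [(score(current ^ {n}), n) for n in names if tabu.get(n, -1) < it]
        if not moves:
            continue
        s, n = max(moves)
        current ^= {n}
        tabu[n] = it + tenure                        # forbid undoing this move for a while
        if s > best_score:
            best, best_score = set(current), s
    return best

# e.g. tabu_place({"a": (2, 0.9, 0.1), "b": (3, 0.5, 0.4), "c": (4, 0.8, 0.2)}, capacity=6)
```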

WWW Cache Replacement Algorithm Based on the Network-distance

  • Kamizato, Masaru;Nagata, Tomokazu;Taniguchi, Yuji;Tamaki, Shiro
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.238-241
    • /
    • 2002
  • With the growing popularity of the Internet, the amount of data in the network has increased rapidly. As a result, the fall in response time from WWW servers, caused by network traffic and the load on WWW servers, has become more of an issue. The problem is aggravated by the redundancy of many people requesting the same pages, even though they browse the same ones. To reduce this redundancy, a WWW cache server is commonly used to store WWW page data and reuse it. However, WWW caching differs from CPU and disk caching and is known for the difficulty of improving its cache hit rate; consequently, it is difficult to choose, from all the data flowing through the WWW cache server, which WWW data is worth storing. On the other hand, there is room for improvement in the cache replacement algorithms commonly used by WWW cache servers. In our study, we aim to realize a WWW cache server that stresses the improvement of response time. To this end, we propose a new cache replacement algorithm that focuses on the network distance from the WWW cache server to the WWW server holding the page data requested by the user.
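
One way to read the proposal, as a toy sketch: keep an estimate of the network distance to each page's origin server and, when the cache is full, evict the entry that is cheapest to re-fetch. The distance metric and the staleness weighting below are illustrative assumptions, not the authors' algorithm.

```python
import time

class DistanceAwareCache:
    """Toy replacement policy: evict the entry with the lowest
    distance / (1 + staleness) score, i.e. pages that are both nearby
    (cheap to re-fetch) and stale are dropped first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                 # url -> (data, distance, last_access)

    def get(self, url):
        if url not in self.entries:
            return None
        data, dist, _ = self.entries[url]
        self.entries[url] = (data, dist, time.monotonic())
        return data

    def put(self, url, data, distance):
        if url not in self.entries and len(self.entries) >= self.capacity:
            now = time.monotonic()
            victim = min(self.entries, key=lambda u:
                         self.entries[u][1] / (1.0 + now - self.entries[u][2]))
            del self.entries[victim]      # drop the cheapest-to-re-fetch, stalest page
        self.entries[url] = (data, distance, time.monotonic())
```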

An ICN In-Network Caching Policy for Butterfly Network in DCN

  • Jeon, Hongseok;Lee, Byungjoon;Song, Hoyoung;Kang, Moonsoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.7
    • /
    • pp.1610-1623
    • /
    • 2013
  • In-network caching is a key component of information-centric networking (ICN) for reducing content download time, network traffic, and server workload. The data center network (DCN) is an ideal candidate for applying ICN design principles. In this paper, we evaluate the effectiveness of cache placement and replacement in a DCN with a butterfly topology. We also suggest a new cache placement policy based on the number of routing nodes (i.e., hop counts) through which the content travels. The policy makes each routing node cache content chunks with a probability inversely proportional to the hop count. Simulation results lead us to conclude that (i) the cache placement policy is more effective for cache performance than cache replacement, (ii) the suggested cache placement policy yields better caching performance for butterfly-type DCNs than traditional cache placement policies such as ALWAYS and FIX(P), and (iii) a high cache hit ratio does not always imply low average hop counts.
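
The placement rule can be stated in a few lines: each routing node on the delivery path caches a passing chunk with probability inversely proportional to the number of hops the chunk has traversed. The scaling constant and the direction in which hops are counted are assumptions in this sketch.

```python
import random

def should_cache(hop_count, alpha=1.0):
    # Cache with probability proportional to 1 / hop_count (clamped to at most 1).
    return random.random() < min(1.0, alpha / max(1, hop_count))

def deliver_chunk(chunk_id, path_nodes, store):
    """path_nodes: routing nodes in the order the chunk traverses them.
    store: node -> set of cached chunk ids."""
    for hops, node in enumerate(path_nodes, start=1):
        if should_cache(hops):
            store.setdefault(node, set()).add(chunk_id)
```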

A Network-path based Web Cache Algorithm (네트워크 경로에 기초한 웹 캐쉬 알고리즘)

  • Min, Gyeong-Hun;Jang, Hyeok-Su
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.7
    • /
    • pp.2161-2168
    • /
    • 2000
  • Since most existing web cache structures are static, they cannot adequately support the dynamically changing requests of current WWW users, who generally use multiple programs in several different windows, with rapid preference changes over relatively short periods of time. We develop a network-path based algorithm. It organizes a cache according to the network path of the requested URLs and builds a network cache farm in which the caches are logically connected with each other and each cache has its own preference over certain network paths. The algorithm has been implemented and tested on a real site. The performance results show that the new algorithm dramatically outperforms the existing static algorithms in hit ratio and response time.
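
A toy dispatcher illustrating the idea of a network-path based cache farm: each cache accumulates a preference for certain paths, and a request is routed to the cache that already prefers the path toward the origin server. The path_of() heuristic (domain suffix) is a placeholder assumption for a real route or topology lookup, and the routing rule is not the authors' algorithm.

```python
from urllib.parse import urlparse

class NetworkPathCacheFarm:
    """Toy cache farm: logically connected caches, each preferring certain network paths."""
    def __init__(self, cache_names):
        self.preferences = {name: set() for name in cache_names}   # cache -> preferred paths

    def path_of(self, url):
        # Placeholder for a real network-path lookup: use the origin's domain suffix.
        host = urlparse(url).hostname or ""
        return ".".join(host.split(".")[-2:])

    def route(self, url):
        path = self.path_of(url)
        for name, prefs in self.preferences.items():
            if path in prefs:
                return name                        # a cache already specializes in this path
        name = min(self.preferences, key=lambda n: len(self.preferences[n]))
        self.preferences[name].add(path)           # assign the new path to the least-loaded cache
        return name
```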

Design and Performance of a CC-NUMA Prototype Card for SCI-Based PC Clustering (SCI 기반 PC 클러스터링을 위한 CC-NUMA 프로토타입 카드의 설계와 성능)

  • Oh, Soo-Cheol;Chung, Sang-Hwa
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.1
    • /
    • pp.35-41
    • /
    • 2002
  • It is extremely important to minimize network access time in constructing a high-performance PC cluster system. For an SCI-based PC cluster, the network access time can be reduced by maintaining a network cache in each cluster node. This paper presents a CC-NUMA card that utilizes a network cache for SCI-based PC clustering. The CC-NUMA card is plugged directly into the PCI slot of each node and contains shared memory, a network cache, and interconnection modules. The network cache is maintained for the shared memory on the PCI bus of the cluster nodes, and the coherency mechanism between the network cache and the shared memory is based on the IEEE SCI standard. A CC-NUMA prototype card was developed to evaluate the performance of the system. According to the experiments, the cluster system with the CC-NUMA card showed considerable improvements compared with an SCI-based cluster without a network cache.

Continuous Media Delivery Scheme using Proxy Cache over WAN(Wide Area Network) (WAN(Wide Area Network)상에서 Proxy Cache를 이용한 연속미디어 전송 기법)

  • 최태욱;박성호;김영주;정기동
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04a
    • /
    • pp.379-381
    • /
    • 2000
  • In WAN environments such as the Internet, proxy caches are widely used to improve multimedia delivery quality, and on this basis various delivery schemes are required for serving continuous media to clients in real time. In this paper, we allocate a fixed amount of memory inside the proxy cache as a relay buffer and implement a flow-control scheme that uses it, and we propose and implement a retransmission scheme that uses the idle time of the RTSP server's transmission scheduler. Experimental results confirm that the proposed scheme significantly reduces packet loss between the RTSP server and the proxy cache over a WAN.
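
A minimal sketch of the relay-buffer idea described in the abstract: the proxy keeps recently relayed packets in a fixed-size buffer and uses idle slots of the transmission schedule to retransmit packets that were reported lost. The buffer size, the loss-report mechanism, and the callbacks are illustrative assumptions.

```python
from collections import OrderedDict

class RelayBuffer:
    """Toy relay buffer in a proxy cache: forwarded media packets are kept so that
    packets reported lost can be re-sent during idle slots of the transmission schedule."""
    def __init__(self, capacity_pkts):
        self.capacity = capacity_pkts
        self.buffer = OrderedDict()       # seq -> packet payload
        self.missing = set()              # sequence numbers reported lost

    def relay(self, seq, payload, send):
        send(seq, payload)                # forward immediately
        self.buffer[seq] = payload        # keep a copy for possible retransmission
        if len(self.buffer) > self.capacity:
            self.buffer.popitem(last=False)   # drop the oldest buffered packet

    def report_loss(self, seq):
        self.missing.add(seq)

    def on_idle(self, send):
        # Use scheduler idle time to retransmit any buffered packets reported lost.
        for seq in sorted(self.missing):
            if seq in self.buffer:
                send(seq, self.buffer[seq])
                self.missing.discard(seq)
```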

Cache Management Method for Query Forwarding Optimization in the Grid Database (그리드 데이터베이스에서 질의 전달 최적화를 위한 캐쉬 관리 기법)

  • Shin, Soong-Sun;Jang, Yong-Il;Lee, Soon-Jo;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.1
    • /
    • pp.13-25
    • /
    • 2007
  • A cache is used to optimize query forwarding in a Grid database: to decrease network transmission cost, frequently used data is cached from the meta database. The existing cache management method suffers from an unbalanced-resource problem because it does not manage the data replicated at each node, and it increases network cost through cache misses. Moreover, when data is modified and the cache is not updated, queries can be forwarded to the wrong nodes, and the same stale cache can exist at other nodes. It is therefore necessary to solve the existing method's problems of unbalanced use of replica resources and increased network cost due to cache misses. In this paper, a cache management method for query forwarding optimization is proposed. The proposed method manages caches through a cache manager; to optimize query forwarding, the cache manager caches data from the replicated node with the lower load. Query processing cost and network cost decrease because wrong query forwarding is reduced. The performance evaluation shows that the proposed method performs better than the existing method.
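
A toy version of the cache-manager behavior described above: an item is cached from the replica node that currently reports the lowest load, and the cache entry is invalidated when the item is modified so queries are not forwarded to the wrong node. The load metric and the data structures are assumptions, not the paper's design.

```python
class GridCacheManager:
    """Toy cache manager for query forwarding in a replicated Grid database."""
    def __init__(self):
        self.replica_load = {}     # node -> current load (e.g., pending queries)
        self.replica_data = {}     # node -> set of data items replicated there
        self.cache = {}            # data item -> node chosen to serve it

    def register(self, node, items, load):
        self.replica_data[node] = set(items)
        self.replica_load[node] = load

    def forward(self, item):
        if item in self.cache:                      # cache hit: forward directly
            return self.cache[item]
        holders = [n for n, items in self.replica_data.items() if item in items]
        if not holders:
            return None
        node = min(holders, key=lambda n: self.replica_load[n])
        self.cache[item] = node                     # remember the least-loaded holder
        return node

    def invalidate(self, item):
        # On data modification, drop the cache entry so queries are not misrouted.
        self.cache.pop(item, None)
```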

Request Deduplication Scheme in Cache-Enabled 5G Network Using PON

  • Jung, Bokrae
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.2
    • /
    • pp.100-105
    • /
    • 2020
  • With the advent of the 5G era, the rapid growth in demand for mobile content services has increased the need for additional backhaul investment. To meet this demand, employing a content delivery network (CDN) and an optical access solution near the last mile has become essential in the configuration of 5G networks. In this paper, a cache-enabled architecture using a passive optical network (PON) is presented to serve video on demand (VoD) to users. For efficient use of the mobile backhaul, I propose a request deduplication scheme (RDS) that can provide all requested services missed in the cache with minimum bandwidth by eliminating duplicate requests for movies within a tolerable range of quality of service (QoS). The performance of the proposed architecture is compared with and without RDS in terms of the number of requests arriving at the origin server (OS), hit ratio, and improvement ratio, according to user requests and cache sizes.
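
The core of the deduplication idea can be sketched as follows: requests for the same movie that arrive within a QoS-tolerable window are merged so that a cache miss sends only one request upstream to the origin server. The window length and the callback interfaces are assumptions, not the paper's exact scheme.

```python
import time

class RequestDeduplicator:
    """Toy request deduplication: duplicate VoD requests arriving within a
    QoS-tolerable window trigger only one upstream fetch."""
    def __init__(self, window_s=1.0):
        self.window = window_s
        self.pending = {}          # movie_id -> (first_request_time, [waiting clients])

    def request(self, movie_id, client, fetch_from_origin):
        now = time.monotonic()
        if movie_id in self.pending and now - self.pending[movie_id][0] <= self.window:
            self.pending[movie_id][1].append(client)    # duplicate: wait for the pending fetch
            return 0                                    # no new upstream request
        self.pending[movie_id] = (now, [client])
        fetch_from_origin(movie_id)                     # single request sent to the origin server
        return 1

    def deliver(self, movie_id, content, send):
        # One origin response satisfies every client merged into the request.
        _, clients = self.pending.pop(movie_id, (None, []))
        for c in clients:
            send(c, content)
```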