• Title/Summary/Keyword: caching performance

Search Results: 280

Cache Management Method for Query Forwarding Optimization in the Grid Database (그리드 데이터베이스에서 질의 전달 최적화를 위한 캐쉬 관리 기법)

  • Shin, Soong-Sun;Jang, Yong-Il;Lee, Soon-Jo;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.1
    • /
    • pp.13-25
    • /
    • 2007
  • A cache is used to optimize query forwarding in a Grid database. To reduce network transmission cost, frequently used data from the meta database is cached. The existing cache management method suffers from unbalanced resource usage because it does not manage the data replicated at each node, and it increases network cost through cache misses. When data is modified and the cache is not updated, queries can be forwarded to the wrong nodes, and the same problem recurs at other nodes holding the same cache. It is therefore necessary to resolve the existing method's unbalanced use of replicas and the network cost incurred by cache misses. This paper proposes a cache management method for query forwarding optimization. The proposed method manages caches through a cache manager which, to optimize query forwarding, caches data from the least-loaded replicated node. Query processing cost and network cost decrease as incorrect query forwarding is reduced. The performance evaluation shows that the proposed method outperforms the existing method. (A minimal replica-selection sketch follows below.)

  • PDF
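
The abstract above describes the cache manager only at a high level; as a hedged illustration of its central idea, caching data from the least-loaded node that holds a replica, the following Python sketch uses a hypothetical CacheManager with invented node names and a simple load metric.

```python
# Illustrative sketch only: a cache manager that, for each data item, caches
# from the least-loaded node among those holding a replica. Node names, the
# load metric, and the CacheManager interface are hypothetical, not taken
# from the paper.

class CacheManager:
    def __init__(self, replica_map, node_load):
        # replica_map: data key -> set of node ids holding a replica
        # node_load:   node id  -> current load (e.g. outstanding queries)
        self.replica_map = replica_map
        self.node_load = node_load
        self.cache = {}  # data key -> node id chosen as the caching source

    def cache_from_least_loaded(self, key):
        """Pick the least-loaded replica holder for `key` and record it."""
        replicas = self.replica_map.get(key, set())
        if not replicas:
            raise KeyError(f"no replica registered for {key!r}")
        source = min(replicas, key=lambda node: self.node_load[node])
        self.cache[key] = source
        return source

    def forward_query(self, key):
        """Forward a query using the cache, refreshing it on a miss."""
        return self.cache.get(key) or self.cache_from_least_loaded(key)


if __name__ == "__main__":
    mgr = CacheManager(
        replica_map={"table_A": {"node1", "node2", "node3"}},
        node_load={"node1": 12, "node2": 3, "node3": 7},
    )
    print(mgr.forward_query("table_A"))  # -> node2 (lowest load)
```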

An Efficient Contents Verification Scheme for Distributed Networking/Data Store (분산 환경에서의 효율적인 콘텐츠 인증 기술)

  • Kim, DaeYoub
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.4
    • /
    • pp.839-847
    • /
    • 2015
  • To provide content seamlessly over the Internet, content requests that converge on the original content providers are commonly handled by distributed processing, as in P2P, CDN, and ICN. That is, other nodes temporarily store content and then answer content requests on behalf of the original providers. In this case, however, the sender of a content item may differ from its original provider, exposing users to various risks. To address this problem, received content should be verified before use, but verification can increase network traffic and cause serious service delay. This paper proposes an efficient content verification scheme for distributed networking/data store environments and analyzes its performance. (A minimal verification sketch is given below.)
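
The paper's own verification scheme is not detailed in the abstract; as a baseline for what verifying cached content means in such settings, the sketch below re-hashes delivered data against a digest published by the original provider. The manifest format and function names are illustrative assumptions, and the paper aims to be more efficient than this per-object check.

```python
# Baseline illustration only: verify content fetched from an untrusted cache
# against a digest published by the original provider. The function names
# are hypothetical; the paper proposes a scheme more efficient than the
# per-object verification shown here.

import hashlib

def publish_manifest(content: bytes) -> str:
    """Original provider computes and publishes the content digest."""
    return hashlib.sha256(content).hexdigest()

def verify_content(content: bytes, published_digest: str) -> bool:
    """Consumer re-hashes what an intermediate node delivered and compares."""
    return hashlib.sha256(content).hexdigest() == published_digest

if __name__ == "__main__":
    original = b"video segment #42"
    digest = publish_manifest(original)           # distributed out of band
    delivered = b"video segment #42"              # received from a cache node
    print(verify_content(delivered, digest))      # True
    print(verify_content(b"tampered", digest))    # False
```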

A Multicast-Based Handover Scheme for the IEEE WAVE Networks (IEEE WAVE 네트워크를 위한 멀티캐스트 기반 핸드오버 기법)

  • Lee, Hyuk-Joon;Yoon, Seok-Young;Lee, Sang-Joon
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.10 no.2
    • /
    • pp.112-121
    • /
    • 2011
  • The IEEE WAVE standard does not support handover, since it is designed mainly to transmit length-limited ITS messages. More advanced multimedia applications such as Internet browsing and streaming of video clips from CCTVs, however, require handover support so that a sequence of data packets can be received seamlessly while an OBU's association moves between RSUs. This paper presents a new handover scheme that operates without performance degradation when multiple RSUs cover the handover area, by using IEEE 802.11f IAPP Move-notify messages on top of the previously introduced fast handover scheme with proactive caching triggered by disassociation messages. Simulation results show that the proposed handover scheme outperforms a scheme based solely on multicast. (A hedged sketch of the proactive-caching idea follows below.)
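
The IEEE WAVE and 802.11f specifics are beyond what an abstract conveys, so the sketch below only illustrates the generic proactive-caching idea the scheme builds on: on disassociation, the serving RSU pushes an OBU's buffered packets to candidate RSUs so delivery can resume right after re-association. The RSU/OBU classes and method names are hypothetical.

```python
# Generic illustration of proactive caching during handover; it does not
# implement IEEE WAVE or 802.11f IAPP. RSU/OBU identifiers and method names
# are hypothetical.

from collections import deque

class RSU:
    def __init__(self, name):
        self.name = name
        self.buffer = {}  # obu id -> deque of pending packets

    def enqueue(self, obu_id, packet):
        self.buffer.setdefault(obu_id, deque()).append(packet)

    def on_disassociation(self, obu_id, candidate_rsus):
        """Push the OBU's pending packets to every candidate RSU."""
        pending = self.buffer.pop(obu_id, deque())
        for rsu in candidate_rsus:
            rsu.buffer[obu_id] = deque(pending)  # proactive cache copy

    def on_association(self, obu_id):
        """Deliver whatever was proactively cached for the arriving OBU."""
        return list(self.buffer.pop(obu_id, deque()))

if __name__ == "__main__":
    rsu_a, rsu_b, rsu_c = RSU("A"), RSU("B"), RSU("C")
    rsu_a.enqueue("obu-1", "pkt-7")
    rsu_a.enqueue("obu-1", "pkt-8")
    rsu_a.on_disassociation("obu-1", [rsu_b, rsu_c])
    print(rsu_b.on_association("obu-1"))  # ['pkt-7', 'pkt-8']
```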

Caching Framework for Multimedia (멀티미디어를 위한 캐슁 기술)

  • Kim, Baek-Hyeon;U, Yo-Seop;Kim, Ik-Su
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.507-514
    • /
    • 2001
  • In a VOD (Video-on-Demand) system, real-time interactive service is one of the most important factors determining the quality of service (QoS). This paper proposes a head-end system, consisting of a switching agent and a head-end node, that needs to receive only a single video stream for the multiple users who have requested the same video, and that serves unlimited interactive service without delay or blocking. Unlimited VCR services are provided by buffering the video stream at both the client and the head-end node. The proposed algorithm improves efficiency through this buffering and offers truly interactive VOD service, because every request issued by a client is processed immediately. We implemented a VOD system that provides VCR functions without service delay or blocking. Simulation results indicate that the proposed algorithm performs better in terms of the number of service requests and the service time interval. (A minimal client-buffer sketch appears below.)

  • PDF
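
As a toy illustration of the buffering idea in the abstract, the sketch below keeps a sliding window of stream segments at the client so simple VCR operations can be served locally, without a round trip to the head-end node. The window size and interface are assumptions, not the paper's head-end/switching-agent design.

```python
# Toy illustration of a client-side stream buffer that serves simple VCR
# operations locally. Window size and method names are assumptions; the
# paper's head-end/switching-agent design is not reproduced here.

class ClientBuffer:
    def __init__(self, window=30):
        self.window = window      # number of buffered segments
        self.segments = []        # most recent `window` segments
        self.position = -1        # index of the segment being played

    def receive(self, segment):
        """Append a newly streamed segment, dropping the oldest if full."""
        self.segments.append(segment)
        if len(self.segments) > self.window:
            self.segments.pop(0)
            self.position = max(self.position - 1, 0)

    def play_next(self):
        if self.position + 1 < len(self.segments):
            self.position += 1
            return self.segments[self.position]
        return None  # would trigger a request to the head-end node

    def rewind(self, n=1):
        """Served immediately from the buffer, with no head-end round trip."""
        self.position = max(self.position - n, 0)
        return self.segments[self.position]

if __name__ == "__main__":
    buf = ClientBuffer(window=3)
    for seg in ["s1", "s2", "s3"]:
        buf.receive(seg)
    print(buf.play_next(), buf.play_next())  # s1 s2
    print(buf.rewind())                      # s1, served with no delay
```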

I/O Scheme of Hybrid Hard Disk Drive for Low Power Consumption and Effective Response Time (저전력과 응답시간 향상을 위한 하이브리드 하드디스크의 입출력 기법)

  • Kim, Jeong-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.10
    • /
    • pp.23-31
    • /
    • 2011
  • Recently, solid-state disks have come into wide use because they offer lower power consumption and faster response times. However, they are more expensive and perform worse on erase and write operations than HDDs. To compensate for these drawbacks, hybrid hard disks (H-HDDs) with internal non-volatile flash memory were introduced; this NVCache serves as a cache for disk blocks. This paper proposes an I/O scheme for H-HDDs that improves both power consumption and response time. The method uses the NVCache mainly as a read cache, and as a write cache only when write requests are concentrated. For the read cache, disk blocks are prefetched according to a priority determined from temporal and spatial locality, which improves response time. Writes are directed to the NVCache only at write peaks, since spinning up the disk costs both battery power and response time. Experimental results show that the proposed method improves the response time of the H-HDD and lowers its power consumption. (A minimal dispatch-policy sketch follows below.)
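
The abstract states the policy only at a high level (NVCache as the default read cache, and as a write cache during write bursts so the spindle can stay down); the dispatcher below is a hedged sketch of that policy with invented thresholds and class names.

```python
# Hedged sketch of the high-level dispatch policy described in the abstract:
# serve reads from the NVCache when possible, and absorb writes into the
# NVCache during write bursts so the platters need not spin up. Thresholds,
# class names, and the burst detector are invented for illustration.

import time

class HybridDiskDispatcher:
    def __init__(self, burst_threshold=8, burst_window=1.0):
        self.nvcache = {}                 # block id -> data
        self.recent_writes = []           # timestamps of recent write requests
        self.burst_threshold = burst_threshold
        self.burst_window = burst_window  # seconds

    def _write_burst(self):
        now = time.monotonic()
        self.recent_writes = [t for t in self.recent_writes
                              if now - t <= self.burst_window]
        return len(self.recent_writes) >= self.burst_threshold

    def read(self, block, read_from_disk):
        """Prefer the NVCache; fall back to the spinning disk on a miss."""
        if block in self.nvcache:
            return self.nvcache[block]
        data = read_from_disk(block)      # may require spin-up
        self.nvcache[block] = data        # simple cache fill
        return data

    def write(self, block, data, write_to_disk):
        """Buffer writes in the NVCache during bursts; otherwise go to disk."""
        self.recent_writes.append(time.monotonic())
        if self._write_burst():
            self.nvcache[block] = data    # flushed to disk later
        else:
            write_to_disk(block, data)
```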

An Address Translation Technique for Large NAND Flash Memory using Page Level Mapping (페이지 단위 매핑 기반 대용량 NAND플래시를 위한 주소변환기법)

  • Seo, Hyun-Min;Kwon, Oh-Hoon;Park, Jun-Seok;Koh, Kern
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.3
    • /
    • pp.371-375
    • /
    • 2010
  • An SSD is a storage medium based on NAND flash memory. Because of its short latency, low power consumption, and resistance to shock, it is used not only in PCs but also in servers. Most SSDs use an FTL to overcome the erase-before-overwrite characteristic of NAND flash. Among the several types of FTL, page-mapped FTLs show better performance than the others, but their usefulness is limited by the large memory footprint of the mapping table; for example, 64 MB of memory is required just for the mapping table of a 64 GB MLC SSD. This paper proposes a novel caching scheme for the mapping table: using mapping-table metadata, a fully associative cache is constructed and addresses are translated in O(1) time. Simulation results show a hit ratio above 80% with a 32 KB cache and above 90% with a 512 KB cache, while the overall memory footprint is only 1.9% of 64 MB. The time overhead of a cache miss was measured below 2% for most workloads. (A minimal mapping-table-cache sketch follows below.)
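
The abstract describes a fully associative cache over the page-level mapping table with O(1) translation; the sketch below realizes that idea with a dictionary of mapping-table segments and LRU eviction. Segment size, capacity, and the loader interface are assumptions rather than the paper's exact design.

```python
# Illustrative sketch of caching part of a page-level FTL mapping table.
# A dict-backed, fully associative cache gives O(1) average-case lookup of
# logical-to-physical page mappings; segment size and eviction policy are
# assumptions, not the paper's exact design.

from collections import OrderedDict

class MappingTableCache:
    def __init__(self, capacity, load_segment):
        self.capacity = capacity          # number of cached map segments
        self.load_segment = load_segment  # reads a segment from flash
        self.segments = OrderedDict()     # segment id -> {lpn: ppn}

    def translate(self, lpn, pages_per_segment=1024):
        seg_id = lpn // pages_per_segment
        if seg_id in self.segments:               # hit: O(1)
            self.segments.move_to_end(seg_id)
        else:                                     # miss: fetch from flash
            if len(self.segments) >= self.capacity:
                self.segments.popitem(last=False) # evict LRU segment
            self.segments[seg_id] = self.load_segment(seg_id)
        return self.segments[seg_id][lpn]

if __name__ == "__main__":
    # Hypothetical loader: identity mapping, for demonstration only.
    def load_segment(seg_id, pages_per_segment=1024):
        base = seg_id * pages_per_segment
        return {base + i: base + i for i in range(pages_per_segment)}

    cache = MappingTableCache(capacity=8, load_segment=load_segment)
    print(cache.translate(4097))  # loads segment 4, then answers from cache
```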

Trend and Improvement for Privacy Protection of Future Internet (미래 인터넷 기술의 Privacy 보호 기술 동향 및 개선)

  • Kim, DaeYoub
    • Journal of Digital Convergence
    • /
    • v.14 no.6
    • /
    • pp.405-413
    • /
    • 2016
  • To solve various problems of the Internet and to enhance network performance, several future Internet architectures utilize data cached in network nodes or proxy servers. Named-data networking (NDN), one such architecture, implements in-network data caching and answers request messages itself. However, this can invade users' privacy, because once data has been cached in-network, its publisher can no longer take part in how the data is shared and used. NDN therefore provides both encryption-based access control and group access control, but these schemes require additional message exchanges to locate the proper access control lists and keys, which causes inefficiency. This paper surveys the access control schemes of NDN and proposes an improvement.

An Adaptive Cache Replacement Policy for Web Proxy Servers (웹 프락시 서버를 위한 적응형 캐시 교체 정책)

  • Choi, Seung-Lak;Kim, Mi-Young;Park, Chang-Sup;Cho, Dae-Hyun;Lee, Yoon-Joon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.6
    • /
    • pp.346-353
    • /
    • 2002
  • The explosive growth of World Wide Web usage has incurred a significant amount of network traffic and server load. To mitigate this, web proxy caching replicates frequently requested documents in a web proxy closer to the users. Cache utilization depends on the replacement policy, which tries to keep the documents most likely to be requested in the near future. Temporal locality and the Zipf frequency distribution, both commonly observed in web proxy workloads, are considered important properties for predicting document popularity. This paper proposes a novel cache replacement policy, Adaptive LFU (ALFU), which incorporates 1) the Zipf frequency distribution by building on LFU and 2) temporal locality adaptively, by efficiently measuring how much a document's popularity declines over time. ALFU is evaluated against other policies via trace-driven simulation, and the experimental results show that it outperforms them. (A minimal aging-LFU sketch follows below.)
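
The abstract names ALFU's two ingredients, LFU for the Zipf-like popularity and an adaptive discount of popularity over time; the sketch below combines them as an LFU cache with periodic frequency decay. The decay factor and interface are illustrative, not the paper's exact formulation.

```python
# Illustrative LFU-with-aging cache in the spirit of the abstract: frequency
# counts capture Zipf-like popularity, and periodic decay captures temporal
# locality. The decay factor and interface are assumptions, not ALFU's
# exact formulation.

class AgingLFUCache:
    def __init__(self, capacity, decay=0.5):
        self.capacity = capacity
        self.decay = decay      # how strongly old popularity is discounted
        self.data = {}          # key -> document
        self.freq = {}          # key -> decayed access frequency

    def age(self):
        """Periodically discount all frequencies (temporal locality)."""
        for key in self.freq:
            self.freq[key] *= self.decay

    def get(self, key):
        if key in self.data:
            self.freq[key] += 1.0
            return self.data[key]
        return None

    def put(self, key, doc):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)  # least popular
            del self.data[victim], self.freq[victim]
        self.data[key] = doc
        self.freq[key] = self.freq.get(key, 0.0) + 1.0

if __name__ == "__main__":
    cache = AgingLFUCache(capacity=2)
    cache.put("/a.html", "A"); cache.put("/b.html", "B")
    cache.get("/a.html"); cache.get("/a.html")
    cache.age()                        # old popularity fades
    cache.put("/c.html", "C")          # evicts /b.html, the least popular
    print(sorted(cache.data))          # ['/a.html', '/c.html']
```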

Efficient Schemes for Cache Consistency Maintenance in a Mobile Database System (이동 데이터베이스 시스템에서 효율적인 캐쉬 일관성 유지 기법)

  • Lim, Sang-Min;Kang, Hyun-Chul
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.221-232
    • /
    • 2001
  • Due to rapid advances in wireless communication technology, demand for data services in mobile environments is steadily increasing. Caching at a mobile client can reduce bandwidth consumption and query response time, yet the client must maintain cache consistency. It can be efficient for the server to broadcast a periodic cache invalidation report within a cell. When a long disconnection prevents a mobile client from checking the validity of its cache from the invalidation report alone, the client can ask the server to check cache validity, and some schemes for doing so are more efficient than others depending on the number of available channels and the mobile clients involved. This paper proposes new cache consistency schemes that are efficient especially (1) when channel capacity is sufficient for the mobile clients involved and (2) when it is not, and evaluates their performance. (A minimal invalidation-report sketch follows below.)

  • PDF
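
The paper's specific schemes are not spelled out in the abstract; the sketch below only illustrates the baseline they build on: the server periodically broadcasts an invalidation report covering a recent window, and a client either invalidates from the report or, after a disconnection longer than the window, asks the server directly. The report format and window length are assumptions.

```python
# Baseline illustration of cache consistency via a periodic invalidation
# report, the mechanism the abstract builds on. The report format, window
# length, and uplink-check function are assumptions for illustration.

REPORT_WINDOW = 60.0  # seconds of update history covered by each report

def broadcast_report(updated, now, window=REPORT_WINDOW):
    """Server side: IDs of items updated within the last `window` seconds."""
    return {"time": now,
            "ids": {i for i, t in updated.items() if now - t <= window}}

def refresh_cache(cache, last_sync, report, ask_server, window=REPORT_WINDOW):
    """Client side: drop invalidated entries, or fall back after a long sleep."""
    if report["time"] - last_sync > window:
        # Disconnected too long: the report may have missed some updates,
        # so ask the server which cached IDs are still valid (uplink request).
        valid = ask_server(set(cache))
        return {i: v for i, v in cache.items() if i in valid}
    return {i: v for i, v in cache.items() if i not in report["ids"]}

if __name__ == "__main__":
    updated = {"x": 95.0, "y": 10.0}        # item -> last update time
    report = broadcast_report(updated, now=100.0)
    cache = {"x": "old-x", "y": "old-y", "z": "old-z"}
    print(refresh_cache(cache, last_sync=80.0, report=report,
                        ask_server=lambda ids: ids))  # drops 'x' only
```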

Fault Tolerant Data Aggregation for Reliable Data Gathering in Wireless Sensor Networks (무선센서네트워크에서 신뢰성있는 데이터수집을 위한 고장감내형 데이터 병합 기법)

  • Baek, Jang-Woon;Nam, Young-Jin;Jung, Seung-Wan;Seo, Dae-Wha
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9B
    • /
    • pp.1295-1304
    • /
    • 2010
  • This paper proposes a fault-tolerant data aggregation scheme that provides energy-efficient and reliable data collection in wireless sensor networks. Traditional aggregation schemes either provide no countermeasure against packet loss or require a large amount of energy for their countermeasures. The proposed scheme adds caching and retransmission based on a track topology to adaptive timeout scheduling. Under normal conditions it uses single-path routing over the traditional tree topology, which reduces the energy dissipated in sensor nodes but offers no countermeasure against packet loss; when events occur, however, it retransmits lost packets over the track topology to achieve more accurate aggregation. Extensive simulation under various workloads shows that, when events may occur, the proposed scheme dissipates 8% less energy and improves data accuracy by 41% compared with TAG data aggregation, and dissipates 53% less energy with similar data accuracy compared with PERLA data aggregation. (A minimal cache-and-retransmit sketch follows below.)
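
The tree/track routing and adaptive timeout scheduling do not fit in a short sketch; the code below illustrates only the cache-and-retransmit step, where a forwarding node keeps recently sent packets so a lost one can be resent on request instead of being dropped from the aggregate. All names and the cache size are hypothetical.

```python
# Illustration of the cache-and-retransmit step only; the tree/track routing
# and adaptive timeout scheduling from the paper are not modeled. Names and
# the cache size are hypothetical.

from collections import OrderedDict

class ForwardingNode:
    def __init__(self, cache_size=16):
        self.sent_cache = OrderedDict()   # seq -> packet recently forwarded
        self.cache_size = cache_size

    def forward(self, seq, packet, send):
        """Send a packet upstream and keep a copy in case it is lost."""
        send(seq, packet)
        self.sent_cache[seq] = packet
        if len(self.sent_cache) > self.cache_size:
            self.sent_cache.popitem(last=False)

    def retransmit(self, seq, send):
        """Resend a cached packet when the parent reports it missing."""
        if seq in self.sent_cache:
            send(seq, self.sent_cache[seq])
            return True
        return False  # too old: the aggregate loses this reading

if __name__ == "__main__":
    delivered = {}
    node = ForwardingNode()
    node.forward(1, {"temp": 21.5}, send=lambda s, p: delivered.update({s: p}))
    delivered.pop(1)                                   # simulate a lost packet
    node.retransmit(1, send=lambda s, p: delivered.update({s: p}))
    print(delivered)                                   # {1: {'temp': 21.5}}
```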