• Title/Summary/Keyword: cache management

Efficient Cache Management Scheme with Maintaining Strong Data Consistency in a VANET (VANET에서 효율적이며 엄격한 데이터 일관성을 유지하는 캐쉬 관리 기법)

  • Moon, Sung-Hoon;Park, Kwang-Jin
    • Journal of the Korea Society of Computer and Information / v.17 no.5 / pp.41-48 / 2012
  • A Vehicular Ad-hoc Network (VANET) is a vehicle-specific type of mobile ad-hoc network that provides temporary communications among nearby vehicles. A mobile node in a VANET consumes energy and resources by participating as a member of the network. In a VANET, data replication and cooperative caching have been used as promising solutions to improve system performance. However, existing cooperative caching schemes in VANETs mostly focus on weak consistency, which is not always satisfactory. In this paper, we propose an efficient cache management scheme that maintains strong data consistency in a VANET. We design an adaptive scheduling scheme for broadcasting Invalidation Reports (IRs) in order to reduce the query delay and communication overhead of maintaining strong data consistency. Simulation results show that the proposed method performs well in terms of query delay and communication overhead.
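
A minimal sketch of the invalidation-report idea, assuming a per-node cache that answers a query from its local copy only after an IR received since the entry was cached confirms the copy is still valid; the names (IRCache, adaptive_ir_interval) and the adaptive-interval policy are illustrative assumptions, not the authors' design:

```python
import collections

CacheEntry = collections.namedtuple("CacheEntry", "value version cached_at")

class IRCache:
    """Illustrative invalidation-report (IR) based cache, not the paper's implementation."""

    def __init__(self):
        self.store = {}          # item_id -> CacheEntry
        self.last_ir_time = 0.0  # timestamp of the most recent IR received

    def on_invalidation_report(self, ir):
        """ir: {'timestamp': float, 'updated': {item_id: new_version}}"""
        for item_id, version in ir['updated'].items():
            entry = self.store.get(item_id)
            if entry is not None and entry.version < version:
                del self.store[item_id]          # drop the stale copy
        self.last_ir_time = ir['timestamp']

    def query(self, item_id, now, fetch_from_server):
        """Strong consistency: answer from cache only if an IR seen after the entry
        was cached confirms it has not been invalidated in the meantime."""
        entry = self.store.get(item_id)
        if entry is not None and self.last_ir_time >= entry.cached_at:
            return entry.value
        value, version = fetch_from_server(item_id)
        self.store[item_id] = CacheEntry(value, version, cached_at=now)
        return value

def adaptive_ir_interval(update_rate, base_interval=10.0, min_interval=1.0):
    """Hypothetical adaptive schedule: broadcast IRs more often when updates are frequent."""
    return max(min_interval, base_interval / (1.0 + update_rate))
```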

Buffer Cache Management based on Nonvolatile Memory to Improve the Performance of Smartphone Storage (스마트폰 저장장치의 성능개선을 위한 비휘발성메모리 기반의 버퍼캐쉬 관리)

  • Choi, Hyunkyoung;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.3 / pp.7-12 / 2016
  • DRAM is commonly used as the smartphone memory medium, but extending its capacity is challenging due to DRAM's large battery consumption and density limit. Meanwhile, smartphone applications such as social network services require increasingly large amounts of memory, resulting in long latency due to additional storage accesses. To alleviate this situation, we adopt emerging nonvolatile memory (NVRAM) as the smartphone's buffer cache and propose an efficient management scheme. The proposed scheme stores all dirty data in NVRAM, thereby reducing the number of storage accesses. Moreover, it exploits the read and write histories of data accesses separately, leading to more efficient management of the volatile and nonvolatile buffer caches, respectively. Trace-driven simulations show that the proposed scheme improves I/O performance significantly.
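
A minimal sketch of a DRAM+NVRAM buffer cache in the spirit of the scheme above, assuming dirty blocks are kept in NVRAM and clean blocks in DRAM, each with its own LRU history; the class name, the plain-LRU policies, and the storage interface (read/write by block number) are illustrative assumptions:

```python
from collections import OrderedDict

class HybridBufferCache:
    """Illustrative DRAM+NVRAM buffer cache: dirty blocks live in NVRAM, clean blocks
    in DRAM, and each side keeps its own LRU history (a sketch, not the paper's policy)."""

    def __init__(self, dram_slots, nvram_slots, storage):
        self.dram = OrderedDict()   # clean blocks, ordered by read recency
        self.nvram = OrderedDict()  # dirty blocks, ordered by write recency
        self.dram_slots = dram_slots
        self.nvram_slots = nvram_slots
        self.storage = storage      # object with read(block_no) / write(block_no, data)

    def read(self, block_no):
        for cache in (self.nvram, self.dram):
            if block_no in cache:
                cache.move_to_end(block_no)          # update recency
                return cache[block_no]
        data = self.storage.read(block_no)           # miss: go to storage
        self._insert_clean(block_no, data)
        return data

    def write(self, block_no, data):
        self.dram.pop(block_no, None)                # a written block becomes dirty
        self.nvram[block_no] = data
        self.nvram.move_to_end(block_no)
        if len(self.nvram) > self.nvram_slots:       # NVRAM full: flush the coldest dirty block
            victim, vdata = self.nvram.popitem(last=False)
            self.storage.write(victim, vdata)
            self._insert_clean(victim, vdata)        # keep it around as a clean copy

    def _insert_clean(self, block_no, data):
        self.dram[block_no] = data
        self.dram.move_to_end(block_no)
        if len(self.dram) > self.dram_slots:
            self.dram.popitem(last=False)            # clean victim needs no writeback
```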

Improving QoS using Cellular-IP/PRC in Wireless Internet Environment (Cellular-IP/PRC에서 핸드오프 상태 머신에 의한 QoS 개선)

  • Kim Dong-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.6 / pp.1302-1308 / 2005
  • This paper proposes a Cellular-IP/PRC network for wireless Internet and micro-cell environments, which integrates paging and routing information management and combines it with a call admission method that secures QoS. Assuming that the home base station has sufficient capacity, the proposed admission method decides whether to admit a new or handoff call at the home base station based on an estimate of the mobile node's transmit power, taking into account the increase in interference that occurs when handoff calls from neighboring cells are admitted, so that speech quality is maintained. The PC (Paging Cache) and RC (Routing Cache) used to manage paging and routing in the wireless Internet network are integrated into a single PRC (Paging Router Cache) managed at every node, and a handoff state machine is added to the mobile node so that its handoff and roaming states can be managed efficiently and connectivity maintained. The factors that affect traffic in the system environment are analyzed to predict the load and imbalance of each link, and the call blocking probability, call dropping probability, GoS (Grade of Service), and cell capacity efficiency are estimated. Applying the proposed algorithm, which admits or blocks calls based on base-station transmit/receive power and on whether the downlink capacity is limited, shows an improvement in QoS performance.
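
A minimal sketch of the unified paging/routing cache (PRC) idea, assuming a single per-node table that holds both the downlink forwarding information and the mobile's handoff/roaming state; the state names and table layout are illustrative assumptions, not the paper's protocol:

```python
from enum import Enum, auto

class MobileState(Enum):
    ACTIVE = auto()
    HANDOFF = auto()
    ROAMING = auto()
    IDLE = auto()

class PagingRouterCache:
    """Illustrative unified PRC: one table per node holding both paging and routing
    state for each mobile host (a sketch of the idea, not the paper's protocol)."""

    def __init__(self):
        self.entries = {}   # mobile_id -> {'downlink_port': ..., 'state': MobileState}

    def update(self, mobile_id, downlink_port, state):
        """Refresh the entry whenever a packet or paging/route update arrives from the mobile."""
        self.entries[mobile_id] = {'downlink_port': downlink_port, 'state': state}

    def route(self, mobile_id):
        """Forward downlink packets if the mobile is active; otherwise the caller must page it."""
        entry = self.entries.get(mobile_id)
        if entry is None or entry['state'] in (MobileState.IDLE, MobileState.ROAMING):
            return None
        return entry['downlink_port']
```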

Cache Algorithm in Reverse Connection Setup Protocol(CRCP) for effective Location Management in PCS Network (PCS 네트워크 상에서 효율적인 위치관리를 위한 역방향 호설정 캐쉬 알고리즘(CRCP)에 관한 연구)

  • Ahn, Yun-Shok;An, Seok;Bae, Yun-Jeong;Jo, Jea-Jun;Kim, Jae-Ha;Kim, Byung-Gi
    • Proceedings of the KIEE Conference / 1998.11b / pp.630-632 / 1998
  • The basic user location strategy proposed for current PCS (Personal Communication Services) networks is a two-level database strategy. These databases, which reside in the signalling network, always maintain the user's current location information, which is used when setting up a call to a mobile user. As the number of PCS users increases, this strategy yields problems such as concentrating signalling traffic on the databases and increasing call setup delay. In this paper, we propose the RCP (Reverse Connection setup Protocol) model, which applies the RVC (Reverse Virtual Call setup) algorithm to the PCS reference model, and the CRCP (Cache algorithm in RCP) model, which adopts caching strategies within the RCP model. When a cache miss occurs, the CRCP model incurs a smaller miss penalty than the PCS model. We also show that the proposed models are likely to yield better performance in terms of reduced location tracking delay.
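
A minimal sketch of the caching idea behind CRCP, assuming the originating switch keeps the callee's last known registration area and falls back to the two-level database (HLR) only on a miss; the names (LocationCache, probe_area) and the probe step standing in for the reverse call-setup attempt are illustrative assumptions:

```python
class LocationCache:
    """Illustrative per-switch location cache for call setup (a sketch of the caching
    idea in CRCP, not the protocol itself)."""

    def __init__(self, hlr):
        self.hlr = hlr        # home location register: maps user_id -> current registration area
        self.cache = {}       # user_id -> last known registration area

    def locate(self, user_id, probe_area):
        """probe_area(area) returns True if the user is really there, e.g. the
        reverse call-setup attempt toward that area succeeds."""
        area = self.cache.get(user_id)
        if area is not None and probe_area(area):
            return area                       # cache hit: no HLR query needed
        area = self.hlr.lookup(user_id)       # miss or stale entry: pay the miss penalty
        self.cache[user_id] = area
        return area
```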

Implementation of Memory Efficient Flash Translation Layer for Open-channel SSDs

  • Oh, Gijun;Ahn, Sungyong
    • International journal of advanced smart convergence / v.10 no.1 / pp.142-150 / 2021
  • Open-channel SSD is a new type of Solid-State Disk (SSD) that reduces the garbage collection overhead and write amplification caused by the physical constraints of NAND flash memory by exposing the internal structure of the SSD to the host. However, the host-level Flash Translation Layer (FTL) provided for open-channel SSDs in the current Linux kernel consumes host memory excessively because it uses a page-level mapping table to translate logical addresses to physical addresses. Therefore, in this paper, we implement a selective mapping table loading scheme that loads only the currently required part of the mapping table from the SSD into a mapping table cache, instead of the entire mapping table. In addition, to increase the hit ratio of the mapping table cache, filesystem information and the mapping table access history are utilized in the cache replacement policy. The proposed scheme is implemented in the host-level FTL of the Linux kernel and evaluated using an open-channel SSD emulator. According to the evaluation results, the proposed scheme achieves 80% of the I/O performance of the previous host-level FTL while using only 32% of its memory.
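
A minimal sketch of demand-loading mapping segments into a host-side cache, assuming a fixed segment granularity and plain LRU replacement (the paper additionally uses filesystem information and access history in its replacement policy, which this sketch omits); the names and the 1024-entry segment size are illustrative assumptions:

```python
from collections import OrderedDict

class MappingTableCache:
    """Illustrative demand-loaded logical-to-physical mapping cache for a host-level FTL:
    only the mapping segments that are currently needed are kept in host memory
    (a sketch, not the kernel implementation described in the paper)."""

    ENTRIES_PER_SEGMENT = 1024    # assumed granularity of one mapping segment

    def __init__(self, capacity_segments, load_segment):
        self.capacity = capacity_segments
        self.load_segment = load_segment      # reads one mapping segment from the SSD
        self.segments = OrderedDict()         # segment_id -> list of physical addresses (LRU order)

    def translate(self, lpn):
        seg_id, offset = divmod(lpn, self.ENTRIES_PER_SEGMENT)
        seg = self.segments.get(seg_id)
        if seg is None:                       # segment miss: fetch it from the SSD
            seg = self.load_segment(seg_id)
            self.segments[seg_id] = seg
            if len(self.segments) > self.capacity:
                self.segments.popitem(last=False)    # evict the least recently used segment
        else:
            self.segments.move_to_end(seg_id)        # record the access for replacement
        return seg[offset]
```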

Gated Recurrent Unit based Prefetching for Graph Processing (그래프 프로세싱을 위한 GRU 기반 프리페칭)

  • Shivani Jadhav;Farman Ullah;Jeong Eun Nah;Su-Kyung Yoon
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.6-10 / 2023
  • Data that is likely to be accessed can be predicted and stored in the cache to prevent cache misses, reducing the processor's request and wait times; as a result, the processor can keep working without stalling, hiding memory latency. By exploiting the temporal and spatial locality of memory accesses, a prefetcher introduced to improve performance predicts the memory address that will be accessed next. We propose a prefetcher that applies the GRU model, which is well suited to handling time-series data. The currently accessed address is represented in binary and used as training data, and the Gated Recurrent Unit model is trained on the differences (deltas) between consecutive memory accesses. Finally, using the GRU model with the learned memory access patterns, the proposed data prefetcher predicts the memory address to be accessed next. We compared the model with a multi-layer perceptron, and our prefetcher showed better results than the multi-layer perceptron.
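
A minimal sketch of a delta-based GRU prefetcher along the lines described above, written with PyTorch; the model dimensions, the delta-vocabulary encoding, and the helper names are illustrative assumptions rather than the paper's exact model:

```python
import torch
import torch.nn as nn

class DeltaGRUPrefetcher(nn.Module):
    """Illustrative GRU prefetcher that predicts the next address delta from a window
    of recent deltas (a sketch of the approach, not the paper's exact model)."""

    def __init__(self, num_delta_classes, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_delta_classes, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_delta_classes)

    def forward(self, delta_ids):
        # delta_ids: (batch, seq_len) indices into a table of frequent deltas
        x = self.embed(delta_ids)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])   # logits over the next delta class

def predict_next_address(model, recent_addrs, delta_table):
    """Map the predicted delta class back to a concrete prefetch address.
    Assumes at least two recent addresses and a list of frequent deltas."""
    deltas = [b - a for a, b in zip(recent_addrs, recent_addrs[1:])]
    ids = torch.tensor([[delta_table.index(d) if d in delta_table else 0
                         for d in deltas]])
    with torch.no_grad():
        next_delta = delta_table[model(ids).argmax(dim=-1).item()]
    return recent_addrs[-1] + next_delta
```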

A Reconfigurable, General-purpose DSM-CC Architecture and User Preference-based Cache Management Strategy (재구성이 가능한 범용 DSM-CC 아키텍처와 사용자 선호도 기반의 캐시 관리 전략)

  • Jang, Jin-Ho;Ko, Sang-Won;Kim, Jung-Sun
    • The KIPS Transactions:PartC / v.17C no.1 / pp.89-98 / 2010
  • In current digital broadcasting systems, GEM (Globally Executable MHP)-based middlewares such as MHP (Multimedia Home Platform), OCAP (OpenCable Application Platform), and ACAP (Advanced Common Application Platform) are the norm. Although they share many common characteristics, such as the MPEG-2 and DSM-CC (Digital Storage Media-Command and Control) protocols, the information and data structures they need are slightly different, which results in incompatibility issues. In this paper, as part of an effort to develop an integrated DTV middleware, we propose a general-purpose, reconfigurable DSM-CC architecture that supports various standard GEM-based middlewares without code modifications. First, we identify the DSM-CC components that are common and can therefore be shared by all GEM-based middlewares. Next, the system is provided with middleware-specific information and data structures in the form of XML. Since the XML information can be parsed dynamically at run time, it can be exchanged either statically or dynamically for a specific target middleware. As for performance, the response time and usage frequency of the DSM-CC module contribute heavily to the performance of the STB (Set-Top-Box). In this paper, we also propose an efficient application cache management strategy and evaluate its performance. The results show that a cache strategy reflecting user preferences greatly helps to reduce the response time for executing applications.
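
A minimal sketch of an application cache whose eviction decision mixes recency with a per-application user-preference weight, in the spirit of the strategy above; the scoring formula, the alpha parameter, and the class name are illustrative assumptions, not the paper's policy:

```python
import time

class PreferenceCache:
    """Illustrative application cache: eviction prefers entries with low user preference
    and long idle time (a sketch of the strategy's spirit, not the paper's formula)."""

    def __init__(self, capacity, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha            # assumed weight between preference and recency
        self.entries = {}             # app_id -> {'data': ..., 'last_used': ..., 'pref': ...}

    def put(self, app_id, data, preference):
        if len(self.entries) >= self.capacity and app_id not in self.entries:
            self._evict()
        self.entries[app_id] = {'data': data, 'last_used': time.time(), 'pref': preference}

    def get(self, app_id):
        entry = self.entries.get(app_id)
        if entry is not None:
            entry['last_used'] = time.time()
            return entry['data']
        return None                   # miss: caller reloads the app from the DSM-CC carousel

    def _evict(self):
        now = time.time()
        def score(item):              # low preference and long idle time -> evict first
            _, e = item
            return self.alpha * e['pref'] - (1 - self.alpha) * (now - e['last_used'])
        victim, _ = min(self.entries.items(), key=score)
        del self.entries[victim]
```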

A User-Level File System for Streaming Media Caching (스트리밍 미디어 캐슁을 위한 사용자 수준 화일 시스템)

  • Oh, Jae-Hak;Cha, Ho-Jeong
    • Journal of KIISE:Computer Systems and Theory / v.29 no.8 / pp.472-483 / 2002
  • This paper presents the design and implementation of a cache file system, umcFS, which is specifically designed to provide efficient caching and transmission of streaming media. The proposed file system is based on the concept of a file disk and is implemented at the application level on top of a general-purpose file system. The file disk favors the continuity of cached media and provides an efficient I/O mechanism for the cache server. umcFS statically allocates control blocks as well as media cache blocks, and these blocks are referenced through a single-level indirect management structure. Since the file system is implemented at the application level, it is easy to develop and port to other systems. Performance measurements of the implemented system show that umcFS performs about 13% better than the native file system when randomly accessing 1024 KB cache blocks.
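
A minimal sketch of the file-disk concept: one large, statically preallocated file divided into fixed-size cache blocks and indexed by a single-level block table; the 1 MB block size, the class name, and the free-list allocation are illustrative assumptions, not umcFS itself:

```python
import os

BLOCK_SIZE = 1024 * 1024   # assumed 1 MB cache block

class FileDisk:
    """Illustrative 'file disk': a large preallocated file divided into fixed-size
    cache blocks, indexed by a single-level block table (a sketch inspired by the
    umcFS description, not its implementation)."""

    def __init__(self, path, num_blocks):
        self.num_blocks = num_blocks
        self.block_table = {}                 # media_id -> list of block numbers
        self.free_blocks = list(range(num_blocks))
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        os.ftruncate(self.fd, num_blocks * BLOCK_SIZE)   # statically allocate the file disk

    def write_block(self, media_id, data):
        if not self.free_blocks:
            raise RuntimeError("file disk full")
        block_no = self.free_blocks.pop()
        os.pwrite(self.fd, data[:BLOCK_SIZE], block_no * BLOCK_SIZE)
        self.block_table.setdefault(media_id, []).append(block_no)

    def read_media(self, media_id):
        for block_no in self.block_table.get(media_id, []):
            yield os.pread(self.fd, BLOCK_SIZE, block_no * BLOCK_SIZE)
```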

A Multimedia Data Prefetching Based on 2 Dimensional Block Structure (이차원 블록 구조에 근거한 선인출 기법)

  • Kim, Seok-Ju
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1086-1096 / 2004
  • For multimedia applications that deal with streaming data, the cache loses its efficiency because of the weak temporal locality of the data: once data has been brought into the cache, much of it is replaced without ever being accessed again during service. However, such multimedia data often contains strong regularity in its reference pattern. In this paper, a method is suggested that takes advantage of the memory-reference regularity that is typically inherent even in multimedia data with weak temporal locality. The suggested method performs dynamic regular-stride reference prefetching and can identify 2-dimensional array formats (block patterns). It is named the block reference prediction technique (BRPT), since it identifies a block pattern and determines the address to be prefetched according to the rules of the block format. BRPT proved to reduce memory reference time significantly for applications with abundant block patterns, although the new rule further complicates the prefetching system.
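
A minimal sketch of two-level stride prediction in the spirit of BRPT, assuming one small stride inside a row of a 2-D block and one larger stride between rows; the learning heuristic and the names are illustrative assumptions, not the paper's prediction table:

```python
class BlockPrefetcher:
    """Illustrative two-level stride predictor: tracks the small stride inside a row
    of a 2-D block and the larger stride between rows (a sketch, not BRPT itself)."""

    def __init__(self):
        self.last_addr = None
        self.row_stride = None     # stride between consecutive accesses inside a row
        self.col_stride = None     # stride observed when the access jumps to the next row

    def access(self, addr):
        """Return a predicted prefetch address for this access, or None."""
        prediction = None
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.row_stride:
                prediction = addr + self.row_stride          # continuing along the row
            elif stride == self.col_stride:
                prediction = addr + self.row_stride          # start of a new row
            elif self.row_stride is None:
                self.row_stride = stride                     # learn the in-row stride
            else:
                self.col_stride = stride                     # learn the row-to-row stride
        self.last_addr = addr
        return prediction
```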

An Effective Cache Test Algorithm and BIST Architecture (효율적인 캐쉬 테스트 알고리듬 및 BIST 구조)

  • Kim, Hong-Sik;Yoon, Do-Hyun;Kang, Sing-Ho
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.12 / pp.47-58 / 1999
  • As the performance of processors improves, cache memories are used to overcome the speed gap between processors and main memories. Since cache memories are generally embedded and small in size, fault coverage is a more important factor than test time from a testing point of view. A new test algorithm and a new BIST architecture are developed to detect various fault models with relatively small overhead. The new concurrent BIST architecture uses the comparator of the cache management block as the response analyzer for the tag memories. A modified scan chain is used for pre-testing the comparators, which reduces test clock cycles. In addition, several boundary-scan instructions are provided to control the internal test circuitry. The results show that the new algorithm can detect SAFs, AFs, TFs linked with CFs, CFins, CFids, SCFs, CFdyns, and DRFs with complexity O(12N), where N is the memory size, and that the new BIST architecture has about 11% lower overhead than the traditional architecture.
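
A minimal sketch of a march-style memory test, showing the general structure of such algorithms (this is the well-known March C- element sequence, roughly 10N operations, not the paper's O(12N) algorithm); the mem.read/mem.write interface is an assumed abstraction of the memory under test:

```python
def march_test(mem, n):
    """Illustrative March C- style RAM test over n cells: detects SAFs, AFs, TFs and
    several coupling faults; shown only to illustrate the structure of march tests."""
    def fail(addr, expected, got):
        raise AssertionError(f"fault at {addr}: expected {expected}, read {got}")

    for a in range(n):                    # M0: ascending, write 0
        mem.write(a, 0)
    for a in range(n):                    # M1: ascending, read 0 / write 1
        if (v := mem.read(a)) != 0: fail(a, 0, v)
        mem.write(a, 1)
    for a in range(n):                    # M2: ascending, read 1 / write 0
        if (v := mem.read(a)) != 1: fail(a, 1, v)
        mem.write(a, 0)
    for a in reversed(range(n)):          # M3: descending, read 0 / write 1
        if (v := mem.read(a)) != 0: fail(a, 0, v)
        mem.write(a, 1)
    for a in reversed(range(n)):          # M4: descending, read 1 / write 0
        if (v := mem.read(a)) != 1: fail(a, 1, v)
        mem.write(a, 0)
    for a in range(n):                    # M5: ascending, read 0
        if (v := mem.read(a)) != 0: fail(a, 0, v)
```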
