• Title/Summary/Keyword: Hybrid cache


CACHE:Context-aware Clustering Hierarchy and Energy efficient for MANET (CACHE:상황인식 기반의 계층적 클러스터링 알고리즘에 관한 연구)

  • Mun, Chang-min;Lee, Kang-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.571-573 / 2009
  • Mobile ad-hoc networks (MANETs) need efficient node management because the wireless nodes operate under tight energy constraints. Node mobility also causes the topology to change far more frequently than in a static network, so an improved MANET routing protocol must be energy efficient while taking mobility into account. The previously proposed hybrid routing scheme CACH prolongs the network lifetime and decreases latency, but its performance degrades as node density increases. In this paper, we propose CACHE (Context-aware Clustering Hierarchy and Energy efficient), a new clustering algorithm. The accompanying analysis not only helps define the optimum depth of the hierarchy that CACH utilizes, but also mitigates the node-density problem. (A minimal cluster-head selection sketch is given below.)

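The abstract above describes cluster formation driven by node context and energy. As a rough, hypothetical illustration of that idea, the following Python sketch elects cluster heads from a score that mixes residual energy with local node density. The scoring weights, radio range, and greedy coverage rule are assumptions for illustration, not the CACHE algorithm itself.

```python
# Hypothetical sketch of context-aware cluster-head election in a MANET:
# score = residual energy + local density, then cover all nodes greedily.
# Weights and the coverage rule are illustrative assumptions.
import math

class Node:
    def __init__(self, node_id, x, y, energy):
        self.id, self.x, self.y, self.energy = node_id, x, y, energy

def neighbors(node, nodes, radio_range):
    """Nodes within radio range (the node's local density context)."""
    return [n for n in nodes
            if n.id != node.id
            and math.hypot(n.x - node.x, n.y - node.y) <= radio_range]

def elect_cluster_heads(nodes, radio_range, alpha=0.7):
    """Greedy election: rank nodes by a mixed energy/density score, then
    pick heads so that every node is covered by at least one head."""
    scored = sorted(nodes,
                    key=lambda n: alpha * n.energy
                                  + (1 - alpha) * len(neighbors(n, nodes, radio_range)),
                    reverse=True)
    heads, covered = [], set()
    for n in scored:
        if n.id not in covered:
            heads.append(n)
            covered.add(n.id)
            covered.update(m.id for m in neighbors(n, nodes, radio_range))
    return heads

# Example: four nodes on a line; the best-scoring uncovered nodes become heads.
nodes = [Node(i, x=10.0 * i, y=0.0, energy=50 + 10 * i) for i in range(4)]
print([h.id for h in elect_cluster_heads(nodes, radio_range=15.0)])
```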

Locally weighted linear regression prefetching method for hybrid memory system (하이브리드 메모리 시스템의 지역 가중 선형회귀 프리페치 방법)

  • Tang, Qian;Kim, Jeong-Geun;Kim, Shin-Dug
    • Annual Conference of KIPS / 2020.11a / pp.12-15 / 2020
  • Data access characteristics directly affect the efficiency of system execution. This research designs an accurate predictor based on historical memory access information, so that frequently accessed data can be migrated in advance from slow storage (SSD/HDD) to fast memory (main memory/CPU cache), reducing data access latency and improving overall performance. To this end, we design a locally weighted linear regression prefetching scheme that copes with the irregular access patterns of large graph-processing applications on a DRAM-PCM hybrid memory structure. By analyzing the test results, appropriate structural parameters can be selected, which greatly improves cache prefetching performance and, in turn, overall performance.
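
As a rough illustration of the idea above, the following hypothetical Python sketch fits a locally weighted linear regression over a short window of recent addresses and extrapolates one step ahead as the prefetch target. The window, the Gaussian bandwidth `tau`, and the address-versus-index formulation are assumptions, not the paper's actual predictor.

```python
# Hypothetical locally weighted linear regression (LWLR) over an address
# history; recent accesses get higher weight via a Gaussian kernel.
import numpy as np

def lwlr_predict(history, tau=2.0):
    """Fit address = w0 + w1*index with weights centred on the newest access,
    then extrapolate one step ahead as the prefetch candidate."""
    t = np.arange(len(history), dtype=float)           # access indices 0..n-1
    y = np.asarray(history, dtype=float)                # addresses (or block ids)
    X = np.column_stack([np.ones_like(t), t])           # design matrix [1, t]
    w = np.exp(-((t - t[-1]) ** 2) / (2.0 * tau ** 2))  # weight recent points more
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)  # weighted LS
    next_t = np.array([1.0, t[-1] + 1.0])
    return int(round(next_t @ theta))                   # predicted next address

# Example: a drifting, slightly irregular stride pattern.
history = [100, 108, 117, 124, 133, 141]
print("prefetch candidate:", lwlr_predict(history))
```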

A Register-Based Caching Technique for the Advanced Performance of Multithreaded Models (다중스레드 모델의 성능 향상을 위한 가용 레지스터 기반 캐슁 기법)

  • Go, Hun-Jun;Gwon, Yeong-Pil;Yu, Won-Hui
    • The KIPS Transactions:PartA / v.8A no.2 / pp.107-116 / 2001
  • A multithreaded model is a hybrid that combines the execution locality of the von Neumann model with the asynchronous data availability and implicit parallelism of the dataflow model. Much of the research aimed at improving the performance of multithreaded models concerns cache memory, which has proven effective in the von Neumann model. To use an instruction cache or an operand cache, a multithreaded model must include cache memories, but adding them carries a high implementation cost. To avoid this cost, we do not add cache memory; instead, caching is performed using the registers of the multithreaded model that are not in use. This available-register-based caching method uses registers that sit idle during thread execution and can achieve the same effect as a cache memory. Because the model can compute the number of available registers during register optimization, the method is easy to apply, and it also removes the access conflicts and bottlenecks of frame memories. Applying the proposed method improved the performance of the multithreaded model, and compared with cache-memory-based caching it incurred almost the same execution overhead. (A minimal sketch of this idea is given below.)

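A minimal, hypothetical sketch of the available-register caching idea: frame-memory slots are mirrored in registers a thread leaves unused, so repeated frame accesses stay on-chip. The register names, frame layout, and write-through policy are illustrative assumptions rather than the paper's design.

```python
# Hypothetical "available-register-based caching": idle registers hold copies
# of hot frame-memory slots so repeated accesses avoid slow frame traffic.

class RegisterFrameCache:
    def __init__(self, free_registers, frame_memory):
        self.free_regs = list(free_registers)   # registers idle in this thread
        self.frame = frame_memory               # slot -> value (slow storage)
        self.reg_of = {}                        # frame slot -> register name
        self.reg_val = {}                       # register name -> cached value

    def load(self, slot):
        """Read a frame slot, serving it from a register when possible."""
        if slot in self.reg_of:                 # register hit: no frame access
            return self.reg_val[self.reg_of[slot]]
        value = self.frame[slot]                # register miss: slow frame read
        if self.free_regs:                      # cache it if a register is idle
            reg = self.free_regs.pop()
            self.reg_of[slot], self.reg_val[reg] = reg, value
        return value

    def store(self, slot, value):
        """Write through to the frame, keeping any cached register coherent."""
        self.frame[slot] = value
        if slot in self.reg_of:
            self.reg_val[self.reg_of[slot]] = value

# Example: two idle registers cache the two hottest frame slots.
cache = RegisterFrameCache(["r14", "r15"], {0: 7, 1: 11, 2: 42})
print(cache.load(0), cache.load(0), cache.load(2))   # second load of slot 0 is a hit
```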

A Neighbor Prefetching Scheme for a Hybrid Storage System (SSD 캐시를 위한 이웃 프리페칭 기법)

  • Baek, Sung Hoon
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.40-52 / 2018
  • Solid-state drive (SSD) cache technologies used as a second-tier cache between main memory and the hard disk drive (HDD) have been widely studied. An SSD cache requires a new prefetching scheme as well as cache replacement algorithms. This paper presents a prefetching scheme for a storage-class cache based on SSD. The scheme relies on long-term scheduling, in contrast to the short-term prefetching used for main memory. Traditional prefetching algorithms consider only reads, whereas the presented scheme considers both reads and writes. An experimental evaluation using a 14-day I/O trace shows hit rates of 2.3% to 17.8% with a 64 GB SSD and a 4 GiB prefetching size. The proposed scheme significantly improves the cache hit rate and can be easily implemented in storage-class cache systems.
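
A minimal, hypothetical sketch of neighbor prefetching into an SSD cache: a block miss queues its neighboring blocks for background prefetch, and both reads and writes feed the prefetcher. The neighborhood width, the LRU structure, and the background drain step are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical neighbor prefetcher for an SSD cache in front of an HDD.
from collections import OrderedDict, deque

class SSDCache:
    def __init__(self, capacity_blocks, neighbor_width=2):
        self.cache = OrderedDict()              # block id -> True (LRU order)
        self.capacity = capacity_blocks
        self.width = neighbor_width
        self.prefetch_queue = deque()           # drained by a background task

    def _insert(self, block):
        self.cache[block] = True
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:  # evict least recently used
            self.cache.popitem(last=False)

    def access(self, block, is_write=False):
        """Read or write access; both feed the neighbour prefetcher."""
        hit = block in self.cache
        self._insert(block)                     # miss fill / LRU refresh
        if not hit:                             # schedule neighbours on a miss
            for d in range(1, self.width + 1):
                for nb in (block - d, block + d):
                    if nb >= 0 and nb not in self.cache:
                        self.prefetch_queue.append(nb)
        return hit

    def drain_prefetch(self):
        """Background (long-term) step: move queued neighbours into the cache."""
        while self.prefetch_queue:
            self._insert(self.prefetch_queue.popleft())

# Example: the miss on block 100 prefetches 98, 99, 101, 102.
c = SSDCache(capacity_blocks=8)
c.access(100); c.drain_prefetch()
print(c.access(101))                            # True: served by the prefetch
```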

Improving Reliability and Security in IEEE 802.15.4 Wireless Sensor Networks (IEEE 802.15.4 센서 네트워크에서의 신뢰성 및 보안성 향상 기법)

  • Shon, Tae-Shik;Park, Yong-Suk
    • The KIPS Transactions:PartC / v.16C no.3 / pp.407-416 / 2009
  • Recently, a wider range of application services has been considered for wireless sensor networks, so reliable and secure communication has become an essential issue. This paper studies such communication in IEEE 802.15.4-based sensor networks. We present IMHRS (IEEE 802.15.4 MAC-based Hybrid hop-by-hop Reliability Scheme), which employs EHHR (Enhanced Hop-by-Hop Reliability), built on a Hop-cache and Hop-acks, and ALC (Adaptive Link Control), which considers link status and packet type. In addition, energy efficiency is addressed by selecting a security suite according to the network and application type under the HAS (Hybrid Adaptive Security) framework. The presented schemes are evaluated through simulations and experiments, and a prototype system is developed and tested to demonstrate their efficiency.
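
A minimal, hypothetical sketch of the hop-by-hop reliability idea (Hop-cache plus Hop-ack): each relay caches a packet until the next hop acknowledges it and retransmits on failure. The retry limit and the send/ack API are illustrative assumptions, not the IMHRS/EHHR specification.

```python
# Hypothetical hop-by-hop reliability: cache the packet at each hop and
# retransmit until a per-hop acknowledgement arrives.

class HopReliableRelay:
    def __init__(self, send_fn, max_retries=3):
        self.send = send_fn            # function(packet_id, payload) -> bool (acked?)
        self.max_retries = max_retries
        self.hop_cache = {}            # packet_id -> payload, kept until acked

    def forward(self, packet_id, payload):
        """Cache the packet, forward it, and retransmit until a hop-ack arrives."""
        self.hop_cache[packet_id] = payload
        for _ in range(self.max_retries):
            if self.send(packet_id, payload):      # hop-ack received
                del self.hop_cache[packet_id]      # safe to drop from the cache
                return True
        return False                               # left cached for later recovery

# Example: a lossy next hop that acknowledges only the second transmission.
attempts = {"n": 0}
def lossy_send(pid, data):
    attempts["n"] += 1
    return attempts["n"] >= 2

relay = HopReliableRelay(lossy_send)
print(relay.forward(7, b"sensor-reading"), attempts["n"])   # True 2
```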

Hybrid Value Predictor in Wide-Issue Superscalar Processor (슈퍼스칼라 프로세서에서 명령 윈도우 크기에 따른 혼합형 값 예측기)

  • Jeon, Byoung-Chan;Choi, Gyoo-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.2 / pp.97-103 / 2009
  • This paper evaluates the performance of a hybrid value predictor as a function of the instruction fetch rate and instruction window size in wide-issue superscalar processors. In general, the data dependences among instructions grow with the number of fetched instructions, so the benefit of a value predictor is expected to increase with the instruction fetch rate. Performance is studied for a machine using a collapsing buffer and for one using a trace cache as the instruction fetch mechanism. The experiments show that the benefit of the value predictor, measured by IPC and prediction rate with and without the trace cache, increases as the instruction fetch rate and window size grow. (A minimal hybrid last-value/stride predictor sketch is given below.)

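A minimal, hypothetical sketch of a hybrid value predictor: a last-value component and a stride component are trained side by side, and saturating confidence counters select between them. Table organisation and counter thresholds are assumptions for illustration; the predictor evaluated in the paper may be organised differently.

```python
# Hypothetical hybrid value predictor: last-value + stride components with
# saturating confidence counters choosing the prediction.

class HybridValuePredictor:
    def __init__(self):
        self.last = {}        # pc -> last result value
        self.stride = {}      # pc -> (last value, stride)
        self.conf = {}        # pc -> [last-value confidence, stride confidence]

    def predict(self, pc):
        """Return the prediction from whichever component is more confident."""
        c_last, c_stride = self.conf.get(pc, [0, 0])
        if pc in self.stride and c_stride >= c_last:
            value, stride = self.stride[pc]
            return value + stride
        return self.last.get(pc)

    def update(self, pc, actual):
        """Train both components and their confidence counters with the outcome."""
        c = self.conf.setdefault(pc, [0, 0])
        c[0] = min(c[0] + 1, 3) if self.last.get(pc) == actual else max(c[0] - 1, 0)
        if pc in self.stride:
            value, stride = self.stride[pc]
            c[1] = min(c[1] + 1, 3) if value + stride == actual else max(c[1] - 1, 0)
            self.stride[pc] = (actual, actual - value)
        else:
            self.stride[pc] = (actual, 0)
        self.last[pc] = actual

# Example: a loop counter is captured by the stride component.
p = HybridValuePredictor()
for v in (10, 14, 18, 22):
    p.update(0x40, v)
print(p.predict(0x40))        # 26
```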

Embedded Node Cache Management for Hybrid Storage Systems (하이브리드 저장 시스템을 위한 내장형 노드 캐시 관리)

  • Byun, Si-Woo;Hur, Moon-Haeng;Roh, Chang-Bae
    • Proceedings of the KIEE Conference / 2007.04a / pp.157-159 / 2007
  • The conventional hard disk has been the dominant database storage device for over 25 years. Recently, hybrid systems that incorporate the advantages of flash memory into conventional hard disks have come to be regarded as the next dominant storage systems for desktop and server databases: they satisfy requirements such as improved data I/O, lower energy consumption, and reduced boot time, which makes them suitable as primary database storage. However, traditional B-Tree-based index node management schemes must be improved because hard disk operations are relatively slow compared with flash memory. To achieve this, we propose a new index node management scheme called FNC-Tree. FNC-Tree-based index node management enhances search and update performance by caching data objects in the unused free area of flash leaf nodes, reducing slow hard disk I/Os during index accesses. (A minimal free-area caching sketch is given below.)

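A minimal, hypothetical sketch of the free-area caching idea behind FNC-Tree: objects are stashed in the unused bytes of a flash leaf node so that repeated lookups avoid a hard disk read. The node size, entry size, and `read_from_hdd` hook are illustrative assumptions, not the actual FNC-Tree layout.

```python
# Hypothetical caching of data objects in a flash leaf node's free area,
# so repeated index lookups skip the slow HDD.

NODE_SIZE = 4096          # bytes per flash leaf node (assumed)
KEY_ENTRY_SIZE = 16       # bytes per index entry (assumed)

class FlashLeafNode:
    def __init__(self, keys):
        self.keys = sorted(keys)                      # index entries (on flash)
        self.object_cache = {}                        # key -> object bytes
        self.free_bytes = NODE_SIZE - len(keys) * KEY_ENTRY_SIZE

    def cache_object(self, key, obj_bytes):
        """Stash an object in the leaf's free area if it still fits."""
        if key in self.keys and len(obj_bytes) <= self.free_bytes:
            self.object_cache[key] = obj_bytes
            self.free_bytes -= len(obj_bytes)
            return True
        return False

    def lookup(self, key, read_from_hdd):
        """Serve the object from the flash leaf if cached, else fall back to HDD."""
        if key not in self.keys:
            return None
        if key in self.object_cache:                  # fast path: flash only
            return self.object_cache[key]
        obj = read_from_hdd(key)                      # slow path: HDD I/O
        self.cache_object(key, obj)                   # populate the free area
        return obj

# Example: the second lookup of key 42 never touches the HDD.
hdd_reads = []
leaf = FlashLeafNode(keys=[7, 42, 99])
read = lambda k: (hdd_reads.append(k), b"row-%d" % k)[1]
leaf.lookup(42, read); leaf.lookup(42, read)
print(hdd_reads)                                      # [42]
```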

Hybrid Multicast and Segment-Based Caching for VoD Services in LTE Networks

  • Choi, Kwangjin;Choi, Seong Gon;Choi, Jun Kyun
    • ETRI Journal / v.37 no.4 / pp.685-695 / 2015
  • This paper proposes a novel video delivery scheme that reduces the bandwidth consumption cost from a video server to terminals in Long-Term Evolution networks. The proposed scheme combines optimized hybrid multicast with a segment-based caching strategy for environments where the maximum number of multicast channels is limited. The hybrid multicast optimization, the allocation of multicast channels, and the cache allocation are determined on the basis of each video's request rate, its length, and the variable cost per unit size of the segments belonging to it. Performance evaluation results show that the proposed scheme reduces video delivery costs. This work is applicable to on-demand TV services that feature asynchronous video content requests.
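
A minimal, hypothetical sketch of the segment-based caching side of such a scheme: with a limited cache budget, segments are cached greedily by the delivery cost they are expected to save per cached byte (request rate times unit cost). The scoring rule and data layout are assumptions for illustration, not the paper's optimization model.

```python
# Hypothetical greedy segment-cache allocation for VoD delivery.

def allocate_segment_cache(videos, cache_budget):
    """videos: list of dicts with 'name', 'request_rate', 'segments',
    where each segment is (size, unit_cost). Returns (cached segments, used space)."""
    candidates = []
    for v in videos:
        for idx, (size, unit_cost) in enumerate(v["segments"]):
            saving = v["request_rate"] * unit_cost * size   # cost avoided if cached
            candidates.append((saving / size, size, v["name"], idx))
    cached, used = [], 0
    for _, size, name, idx in sorted(candidates, reverse=True):
        if used + size <= cache_budget:                     # greedy knapsack-style fill
            cached.append((name, idx))
            used += size
    return cached, used

# Example: the popular video's segments win the limited cache space.
videos = [
    {"name": "popular", "request_rate": 50, "segments": [(100, 1.0), (100, 0.8)]},
    {"name": "niche",   "request_rate": 5,  "segments": [(100, 1.0)]},
]
print(allocate_segment_cache(videos, cache_budget=200))
```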

An Efficient Address Mapping Table Management Scheme for NAND Flash Memory File System Exploiting Page Address Cache (페이지 주소 캐시를 활용한 NAND 플래시 메모리 파일시스템에서의 효율적 주소 변환 테이블 관리 정책)

  • Kim, Cheong-Ghil
    • Journal of Digital Contents Society / v.11 no.1 / pp.91-97 / 2010
  • Flash memory is used by many digital devices for data storage, exploiting its non-volatility, low power consumption, and stability, together with high integration density, large capacity, and low price. As flash memory has rapidly grown in popularity, its density has increased so much that the entire address mapping table has become too large to store in SRAM. This paper proposes a page address cache together with an efficient table management scheme for hybrid flash translation layer mapping. For this purpose, all tables are integrated into a map block containing the entire set of physical page tables. Simulation results show that the proposed scheme saves extra memory area and decreases search time, with a miss ratio below 2.5% on a PC workload, and reduces the write overhead, with write operations amounting to 33% of the total writes requested.
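
A minimal, hypothetical sketch of a page address cache in front of a mapping table kept in a flash map block: hot logical-to-physical translations stay in a small in-memory cache, and misses read the map block. The cache size and the map-block read hook are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical LRU page address cache over an FTL mapping table stored in flash.
from collections import OrderedDict

class PageAddressCache:
    def __init__(self, capacity, read_map_block):
        self.capacity = capacity
        self.read_map_block = read_map_block   # logical page -> physical page (flash read)
        self.entries = OrderedDict()           # LRU-ordered cached translations
        self.map_block_reads = 0

    def translate(self, logical_page):
        """Return the physical page for a logical page, caching the translation."""
        if logical_page in self.entries:                 # in-memory hit
            self.entries.move_to_end(logical_page)
            return self.entries[logical_page]
        self.map_block_reads += 1                        # miss: read the map block
        physical = self.read_map_block(logical_page)
        self.entries[logical_page] = physical
        if len(self.entries) > self.capacity:            # evict least recently used
            self.entries.popitem(last=False)
        return physical

# Example: repeated translations of the same pages hit the cache.
pac = PageAddressCache(capacity=2, read_map_block=lambda lpn: lpn * 4 + 1024)
for lpn in (3, 3, 5, 3):
    pac.translate(lpn)
print(pac.map_block_reads)    # 2 (only the first accesses of pages 3 and 5 miss)
```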

Comparison of Message Passing Interface and Hybrid Programming Models to Solve Pressure Equation in Distributed Memory System (분산 메모리 시스템에서 압력방정식의 해법을 위한 MPI와 Hybrid 병렬 기법의 비교)

  • Jeon, Byoung Jin;Choi, Hyoung Gwon
    • Transactions of the Korean Society of Mechanical Engineers B / v.39 no.2 / pp.191-197 / 2015
  • The message passing interface (MPI) and hybrid programming models for the parallel computation of a pressure equation were compared on a distributed memory system. Both models were based on domain decomposition, and two sub-domain counts were selected by considering the efficiency of the hybrid model. The parallel performance for various problem sizes was measured using up to 96 threads. It was found that, in addition to the cache-memory size, the overhead of MPI communication and OpenMP directives affected the parallel performance. For small problems, the parallel performance was low because the share of MPI communication/OpenMP directive overhead grew as the number of threads increased, and MPI outperformed the hybrid model because it had a smaller communication overhead. For large problems, the parallel performance was high because, in addition to the cache effect, the communication overhead was relatively low compared with that for small problems, and the hybrid model outperformed MPI because the communication overhead of MPI was more dominant than that of the OpenMP directives in the hybrid model.
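
A minimal, hypothetical sketch of the MPI side of the comparison, written with mpi4py: a 1-D domain-decomposed Jacobi sweep for a model pressure (Poisson) equation with halo exchange between neighboring ranks. The hybrid model in the paper additionally threads the work inside each rank with OpenMP, which has no direct Python analogue; the grid size, boundary conditions, and fixed sweep count are assumptions.

```python
# Hypothetical MPI-only domain decomposition of a 1-D pressure (Poisson)
# equation -p'' = f with Jacobi sweeps and halo exchange between ranks.
# Run with:  mpiexec -n 4 python pressure_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 64                                     # interior points per rank (assumed)
h = 1.0 / (N * size + 1)                   # grid spacing over the global domain
f = np.ones(N)                             # right-hand side
p = np.zeros(N + 2)                        # local solution with 2 ghost cells

for _ in range(200):                       # fixed number of Jacobi sweeps
    # Halo exchange: share boundary values with left/right neighbouring ranks.
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    comm.Sendrecv(p[1:2], dest=left, recvbuf=p[N + 1:N + 2], source=right)
    comm.Sendrecv(p[N:N + 1], dest=right, recvbuf=p[0:1], source=left)
    # Jacobi update on the interior of this rank's sub-domain.
    p[1:N + 1] = 0.5 * (p[0:N] + p[2:N + 2] + h * h * f)

local_max = np.max(np.abs(p[1:N + 1]))
global_max = comm.allreduce(local_max, op=MPI.MAX)
if rank == 0:
    print("max |p| =", global_max)
```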