• Title/Summary/Keyword: hybrid in-memory


Way-set Associative Management for Low Power Hybrid L2 Cache Memory (고성능 저전력 하이브리드 L2 캐시 메모리를 위한 연관사상 집합 관리)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.3 / pp.125-131 / 2018
  • STT-RAM is attracting attention as a next-generation non-volatile memory for replacing cache memory, owing to its low leakage energy, high density, and memory access performance close to that of SRAM. Like other non-volatile memories, however, it suffers from costly write operations. A hybrid cache that combines SRAM and STT-RAM is therefore attracting attention as a low-power cache structure. Even so, reducing leakage energy through STT-RAM does not by itself address the dynamic energy consumed by writes. In this paper, we propose an energy management method consisting of a way-selection approach for a hybrid SRAM/STT-RAM L2 cache and a memory selection method for write/read operations. According to the simulation results, the proposed hybrid cache memory reduced the average energy consumption by 40% on SPEC CPU 2006 compared with an SRAM-only cache memory.
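
The way-selection idea in this abstract can be illustrated with a small sketch. The code below is a hypothetical model, not the authors' implementation: it assumes a set-associative L2 cache set in which a few ways are SRAM and the rest are STT-RAM, and it steers write allocations to SRAM ways and read allocations to STT-RAM ways so that expensive STT-RAM writes stay rare.

```python
# Hypothetical sketch of read/write-aware way selection in one set of a
# hybrid SRAM/STT-RAM L2 cache. Not the paper's implementation.

SRAM_WAYS = 2      # assumed: ways 0..1 are SRAM (cheap writes)
STT_WAYS = 6       # assumed: ways 2..7 are STT-RAM (low leakage, costly writes)

class HybridSet:
    def __init__(self):
        self.tags = [None] * (SRAM_WAYS + STT_WAYS)
        self.lru = list(range(SRAM_WAYS + STT_WAYS))  # front = least recently used

    def _touch(self, way):
        self.lru.remove(way)
        self.lru.append(way)

    def access(self, tag, is_write):
        if tag in self.tags:                 # hit: only update recency
            self._touch(self.tags.index(tag))
            return "hit"
        # miss: pick the LRU victim inside the region matching the access type
        region = range(SRAM_WAYS) if is_write else range(SRAM_WAYS, SRAM_WAYS + STT_WAYS)
        victim = next(w for w in self.lru if w in region)
        self.tags[victim] = tag
        self._touch(victim)
        return "miss"

s = HybridSet()
print(s.access(0x1A, is_write=True))   # write miss -> fills an SRAM way
print(s.access(0x2B, is_write=False))  # read miss  -> fills an STT-RAM way
```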

Active Page Replacement Policy for DRAM & PCM Hybrid Memory System (DRAM&PCM 하이브리드 메모리 시스템을 위한 능동적 페이지 교체 정책)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.5 / pp.261-268 / 2018
  • Phase Change Memory (PCM), with its low power consumption and high density, attracts attention as a next-generation nonvolatile memory for replacing DRAM. However, PCM suffers from long latency and high energy consumption on write operations. A PCM & DRAM hybrid memory structure is an effective way to overcome these disadvantages of PCM, but the page replacement algorithm becomes critical because the structure combines two memories with different characteristics. The purpose of this paper is to manage referenced pages effectively, taking the characteristics of DRAM and PCM into account. To do so, it proposes a page replacement algorithm based on access frequency and recency. According to our simulation, the proposed algorithm for the DRAM & PCM hybrid reduces the energy-delay product by around 10% compared with Clock-DWF and CLOCK-HM.
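
The victim-selection idea, combining how recently and how frequently a page was referenced, can be sketched as follows. This is a hedged illustration under assumed data structures, not the paper's algorithm: each resident page keeps a reference bit (recency) and an access counter (frequency), and the victim is a page that is neither recently nor frequently used.

```python
# Hypothetical recency+frequency victim selection for a DRAM/PCM hybrid
# memory; an illustration only, not the proposed algorithm.

class Page:
    def __init__(self, pid):
        self.pid = pid
        self.ref_bit = 1   # set on every access (recency)
        self.freq = 1      # incremented on every access (frequency)

def access(pages, pid):
    p = pages.get(pid)
    if p:
        p.ref_bit = 1
        p.freq += 1

def pick_victim(pages):
    # Prefer pages that are not recently used (ref_bit == 0); among the
    # candidate pool, evict the least frequently used one.
    candidates = [p for p in pages.values() if p.ref_bit == 0]
    pool = candidates if candidates else list(pages.values())
    victim = min(pool, key=lambda p: p.freq)
    # Clear reference bits so recency information decays (clock-like sweep).
    for p in pages.values():
        p.ref_bit = 0
    return victim.pid

pages = {i: Page(i) for i in range(4)}
for pid in (0, 0, 1, 2):
    access(pages, pid)
print(pick_victim(pages))   # evicts the cold page 3
```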

Efficient Hybrid Transactional Memory Scheme using Near-optimal Retry Computation and Sophisticated Memory Management in Multi-core Environment

  • Jang, Yeon-Woo;Kang, Moon-Hwan;Chang, Jae-Woo
    • Journal of Information Processing Systems / v.14 no.2 / pp.499-509 / 2018
  • Recently, hybrid transactional memory (HyTM) has gained much interest from researchers because it combines the advantages of hardware transactional memory (HTM) and software transactional memory (STM). To provide concurrency control of transactions, the existing HyTM-based studies use a bloom filter; however, they fail to overcome the typical false-positive errors of a bloom filter. In addition, the existing studies use a global lock, and global lock-based memory allocation is significantly inefficient in a multi-core environment. In this paper, we propose an efficient hybrid transactional memory scheme using near-optimal retry computation and sophisticated memory management in order to process transactions efficiently in a multi-core environment. First, we propose a near-optimal retry computation algorithm that provides an efficient HTM configuration using machine learning algorithms, according to the characteristics of a given workload. Second, we provide efficient concurrency control for transactions in different environments by using a sophisticated bloom filter. Third, we propose a memory management scheme optimized for the CPU cache line, in order to provide fast transaction processing. Finally, our performance evaluation shows that our HyTM scheme achieves up to 2.5 times better performance than the state-of-the-art algorithms on the Stanford transactional applications for multi-processing (STAMP) benchmarks.
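
The role of a bloom filter in detecting conflicts between transactions can be illustrated with a small sketch. This is an assumed, simplified model of the general HyTM idea, not this paper's scheme: each transaction summarizes its read and write sets in bloom filters, and two transactions are treated as conflicting when one's write filter may overlap the other's read or write filter; false positives are possible, which is exactly the weakness the paper targets.

```python
# Minimal bloom-filter-based conflict check between two transactions.
# A simplified illustration of the general HyTM idea, not this paper's scheme.
import hashlib

BITS = 256      # assumed filter size
HASHES = 3      # assumed number of hash functions

def _positions(addr):
    for i in range(HASHES):
        h = hashlib.sha256(f"{i}:{addr}".encode()).digest()
        yield int.from_bytes(h[:4], "little") % BITS

class TxFilters:
    def __init__(self):
        self.read_set = 0    # bloom filters packed into integers
        self.write_set = 0

    def record_read(self, addr):
        for pos in _positions(addr):
            self.read_set |= 1 << pos

    def record_write(self, addr):
        for pos in _positions(addr):
            self.write_set |= 1 << pos

def may_conflict(a, b):
    # write/write or write/read overlap => possible conflict (may be a false positive)
    return bool(a.write_set & (b.write_set | b.read_set) or
                b.write_set & a.read_set)

t1, t2 = TxFilters(), TxFilters()
t1.record_write(0x1000)
t2.record_read(0x1000)
print(may_conflict(t1, t2))   # True: t1 wrote what t2 read
t3 = TxFilters()
t3.record_read(0x2000)
print(may_conflict(t1, t3))   # likely False (unless a false positive occurs)
```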

Hybrid in-memory storage for cloud infrastructure

  • Kim, Dae Won;Kim, Sun Wook;Oh, Soo Cheol
    • Journal of Internet Computing and Services / v.22 no.5 / pp.57-67 / 2021
  • Modern cloud computing is rapidly changing from traditional hypervisor-based virtual machines to container-based cloud-native environments. Due to the I/O performance required by both virtual machines and containers, the use of high-speed storage (SSD, NVMe, etc.) is increasing, and in-memory computing using main memory is also emerging. Running a virtual environment in main memory gives better performance than other storage devices. However, RAM used as main memory is expensive, and because it is volatile, data is lost when the system goes down. Therefore, additional work is required to run a virtual environment in main memory. In this paper, we propose a hybrid in-memory storage that combines block storage such as a high-speed SSD with main memory to operate virtual machines and containers safely in main memory. The proposed storage showed 6 times faster writes and 42 times faster reads than a regular disk for virtual machines, and an average improvement of 12% in container performance tests.
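
The core idea, serving I/O from main memory while keeping a persistent copy on a block device such as an SSD, can be sketched as follows. The sketch is a loose, hypothetical illustration; the file name, block size, and flush policy are assumptions, not the paper's design. Reads are served from an in-memory dictionary, and every write is also mirrored to a backing file so the data survives a restart.

```python
# Hypothetical sketch of a RAM-first block store backed by a slower
# persistent device (e.g., a file on an SSD). Not the paper's implementation.
import os

BLOCK_SIZE = 4096
BACKING_FILE = "backing.img"   # assumed name of the SSD-resident backing file

class HybridStore:
    def __init__(self, path=BACKING_FILE):
        self.cache = {}                        # block number -> bytes (RAM copy)
        mode = "r+b" if os.path.exists(path) else "w+b"
        self.backing = open(path, mode)        # persistent copy on block storage

    def write(self, block_no, data):
        data = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]
        self.cache[block_no] = data            # fast path: main memory
        self.backing.seek(block_no * BLOCK_SIZE)
        self.backing.write(data)               # durability: block device
        self.backing.flush()

    def read(self, block_no):
        if block_no in self.cache:             # served from RAM when possible
            return self.cache[block_no]
        self.backing.seek(block_no * BLOCK_SIZE)
        return self.backing.read(BLOCK_SIZE)

store = HybridStore()
store.write(0, b"hello container image")
print(store.read(0).rstrip(b"\0"))
store.backing.close()
os.remove(BACKING_FILE)
```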

High Performance PCM&DRAM Hybrid Memory System (고성능 PCM&DRAM 하이브리드 메모리 시스템)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.2 / pp.117-123 / 2016
  • In general, PCM (Phase Change Memory) is unsuitable as a main memory on its own because of its high read/write latency and low endurance. However, a DRAM & PCM hybrid memory placed at the same level is an effective structure for a next-generation main memory because it can exploit the advantages of both DRAM and PCM. Such a structure needs a page management method that exploits the characteristics of each memory dynamically and adaptively. We therefore aim to reduce the PCM access time and write count through an effective page replacement policy. According to our simulation, the proposed algorithm for the DRAM & PCM hybrid reduces the PCM access count by around 60% and the PCM write count by 42% for the same PCM size, compared with the Clock-DWF algorithm.
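
One common way to cut PCM accesses and writes in such a same-level DRAM & PCM hybrid is to migrate pages that start receiving writes from PCM into DRAM. The sketch below is a generic, hypothetical illustration of that idea; the threshold, capacities, and demotion rule are assumptions, not the proposed algorithm.

```python
# Hypothetical write-triggered migration from PCM to DRAM in a
# same-level hybrid main memory. Illustration only.

MIGRATE_THRESHOLD = 2          # assumed: PCM writes to a page before migration
DRAM_CAPACITY = 2              # assumed: DRAM frames available (tiny for the demo)

dram, pcm = {}, {}             # page -> write count
pcm_writes = 0

def write(page):
    global pcm_writes
    if page in dram:
        dram[page] += 1                        # cheap DRAM write
        return
    pcm.setdefault(page, 0)
    pcm[page] += 1
    pcm_writes += 1                            # every PCM write is costly
    if pcm[page] >= MIGRATE_THRESHOLD:         # page looks write-hot: promote it
        if len(dram) >= DRAM_CAPACITY:         # make room: demote the coldest DRAM page
            cold = min(dram, key=dram.get)
            pcm[cold] = 0
            del dram[cold]
        dram[page] = pcm.pop(page)

for p in (1, 1, 1, 2, 2, 3):
    write(p)
print("in DRAM:", sorted(dram), "| PCM writes so far:", pcm_writes)
```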

Synthesis and application of Pt and hybrid Pt-$SiO_2$ nanoparticles and control of particles layer thickness (Pt 나노입자와 Hybrid Pt-$SiO_2$ 나노입자의 합성과 활용 및 입자박막 제어)

  • Choi, Byung-Sang
    • The Journal of the Korea institute of electronic communication sciences / v.4 no.4 / pp.301-305 / 2009
  • Pt nanoparticles with a narrow size distribution (dia. ~4 nm) were synthesized via an alcohol reduction method and used for the fabrication of hybrid Pt-$SiO_2$ nanoparticles. A self-assembled monolayer of Pt nanoparticles (NPs) was also studied as a charge trapping layer for non-volatile memory (NVM) applications. A metal-oxide-semiconductor (MOS) type memory device with Pt NPs exhibits a relatively large memory window, indicating that the self-assembled Pt NPs can be utilized for NVM devices. In addition, control of the thin-film thickness of the hybrid Pt-$SiO_2$ nanoparticles was demonstrated, suggesting further applications for MOS-type memory devices.


Page Replacement Algorithm for Improving Performance of Hybrid Main Memory (하이브리드 메인 메모리의 성능 향상을 위한 페이지 교체 기법)

  • Lee, Minhoe;Kang, Dong Hyun;Kim, Junghoon;Eom, Young Ik
    • KIISE Transactions on Computing Practices / v.21 no.1 / pp.88-93 / 2015
  • In modern computer systems, DRAM is commonly used as main memory due to its low read/write latency and high endurance. However, DRAM is a volatile memory that requires a periodic power supply (i.e., memory refresh) to sustain the data stored in it. On the other hand, PCM is a promising candidate to replace DRAM because it is a non-volatile memory that retains the stored data without memory refresh. PCM also supports byte-addressable access and in-place updates. However, PCM is unsuitable for use as the sole main memory of a computer system because of two limitations: high read/write latency and low endurance. To take advantage of both DRAM and PCM, a hybrid main memory consisting of DRAM and PCM has been suggested and actively studied. In this paper, we propose a novel page replacement algorithm for hybrid main memory. To cope with the weaknesses of PCM, our scheme focuses on reducing the number of PCM writes in the hybrid main memory. Experimental results show that our proposed page replacement algorithm reduces the number of PCM writes by up to 80.5% compared with other page replacement algorithms.

Bandwidth-aware Memory Placement on Hybrid Memories targeting High Performance Computing Systems

  • Lee, Jongmin
    • Journal of the Korea Society of Computer and Information / v.24 no.8 / pp.1-8 / 2019
  • Modern computers provide tremendous computing capability and a large memory system. Hybrid memories consisting of next-generation memory devices are being adopted in high performance systems. However, the increased complexity of the microprocessor makes it difficult to operate such a system effectively. In this paper, we propose a simple data migration method called Bandwidth-aware Data Migration (BDM) to use memory systems efficiently on high performance processors with hybrid memory. BDM monitors the status of applications running on the system using hardware performance monitoring tools and migrates the appropriate pages of selected applications to High Bandwidth Memory (HBM). BDM selects applications whose bandwidth usage is high and evenly distributed among their threads. Experimental results show that BDM improves execution time by an average of 20% over the baseline execution.
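
The selection rule described above, picking applications whose bandwidth demand is both high and evenly spread across their threads, can be sketched as follows. The sample numbers, thresholds, scoring rule, and the `migrate_to_hbm` hook are all assumptions for illustration; the paper's BDM relies on hardware performance counters and actual page migration.

```python
# Hypothetical selection of candidate applications for HBM migration,
# inspired by the bandwidth-aware idea above (not the BDM implementation).
from statistics import mean, pstdev

# assumed sampled per-thread bandwidth (GB/s) per running application
apps = {
    "stream-like":  [18.0, 17.5, 18.2, 17.9],  # high and even -> good candidate
    "pointer-chase": [2.1, 1.9, 2.0, 2.2],     # low bandwidth  -> leave in DRAM
    "imbalanced":   [40.0, 2.0, 2.0, 2.0],     # high but skewed to one thread
}

BW_THRESHOLD = 10.0      # GB/s, assumed cut-off for "high bandwidth"
CV_THRESHOLD = 0.5       # assumed cut-off for "evenly distributed"

def candidates(apps):
    picked = []
    for name, bw in apps.items():
        avg = mean(bw)
        cv = pstdev(bw) / avg if avg else float("inf")  # coefficient of variation
        if avg >= BW_THRESHOLD and cv <= CV_THRESHOLD:
            picked.append(name)
    return picked

def migrate_to_hbm(app):
    # placeholder: a real system would move the app's hot pages to HBM,
    # e.g. via NUMA page-migration facilities on the target platform
    print(f"migrating pages of {app} to HBM")

for app in candidates(apps):
    migrate_to_hbm(app)      # prints only "stream-like"
```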

Cost-effective multistage interconnection network for NUMA model system (NUMA(non-uniform memory access) 모델 시스템을 위한 cost-effective한 다단계 상호연결망)

  • 최창훈;김성천
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.5 / pp.19-32 / 1997
  • So far, multipath MINs that add redundant paths to the traditional unique-path (UPP) MINs have been realized by adding extra hardware such as additional stages, duplicated data links, or multiple copies of the MIN. Moreover, traditional MINs do not exploit locality: communication between any processor-memory pair takes the same amount of time, and little progress has been made on exploiting locality of reference in MINs. In this paper, we present a new MIN topology, the hybrid MIN, constructed with 2N-3 switching elements (SEs), far fewer than in traditional MINs. Although it uses only 2N-3 SEs, the hybrid MIN satisfies full access capability (FAC) and provides redundant paths (with only a single path to two of each processor's memory modules). Furthermore, the hybrid MIN provides a shortcut path between pairs that communicate frequently, exploiting locality of reference. Its performance under varying degrees of localized communication is analyzed.
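
To put the 2N-3 figure in context, a traditional N x N unique-path MIN built from 2 x 2 switching elements, such as an Omega network, needs (N/2) log2 N SEs, so the saving grows with N. The comparison below uses that standard textbook formula as background; it is not taken from this abstract.

```latex
% SE count of a traditional 2x2-switch Omega network vs. the hybrid MIN
\[
  SE_{\text{Omega}}(N) = \frac{N}{2}\log_2 N, \qquad SE_{\text{hybrid}}(N) = 2N - 3
\]
% Example for N = 64 processors/memory modules:
\[
  SE_{\text{Omega}}(64) = 32 \times 6 = 192, \qquad SE_{\text{hybrid}}(64) = 2 \times 64 - 3 = 125
\]
```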


A Page Placement Scheme of Smartphone Memory with Hybrid Memory (이기종 메모리로 구성된 스마트폰 메모리의 페이지 배치 기법)

  • Lee, Soyoon;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.1 / pp.149-153 / 2020
  • This paper presents a new page placement policy for DRAM/NVRAM hybrid main memory in smartphones. Unlike previous studies on hybrid memory systems, it places pages based on an offline analysis of memory access behavior, since a smartphone's memory accesses, especially writes, are skewed toward certain address ranges regardless of the application. We thus aim to reduce the write traffic to NVRAM using the offline analysis results. Experimental results show that the proposed policy reduces the write traffic to NVRAM by 61% on average without performance degradation.
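
The placement idea, deciding offline which address ranges go to DRAM because they absorb most of the writes, can be sketched as follows. This is a hypothetical illustration; the trace format, region size, and DRAM budget are assumptions, not the paper's policy. An offline pass builds a write histogram per address region, and the hottest regions up to the DRAM budget are placed in DRAM while everything else goes to NVRAM.

```python
# Hypothetical offline placement: put the most write-intensive address
# regions in DRAM and the rest in NVRAM. Illustration, not the paper's policy.
from collections import Counter

REGION_SIZE = 1 << 20        # assumed 1 MiB regions
DRAM_REGIONS = 2             # assumed DRAM budget, in regions

# assumed offline trace: (address, is_write) pairs
trace = [(0x0010_0000, True), (0x0010_2000, True), (0x0010_4000, True),
         (0x0230_0000, True), (0x0230_1000, True),
         (0x0550_0000, False), (0x0990_0000, True)]

writes_per_region = Counter(addr // REGION_SIZE
                            for addr, is_write in trace if is_write)

# pick the regions with the most writes, up to the DRAM budget
dram_regions = {r for r, _ in writes_per_region.most_common(DRAM_REGIONS)}

def place(addr):
    return "DRAM" if addr // REGION_SIZE in dram_regions else "NVRAM"

for addr in (0x0010_1000, 0x0230_8000, 0x0990_0000):
    print(hex(addr), "->", place(addr))   # write-hot ranges land in DRAM
```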