• Title/Summary/Keyword: Memory Management Unit


An Algorithm For Approximating The Reliability of Network with Multistate Units (다중상태 유닛들의 망 신뢰도 근사 계산을 위한 알고리즘)

  • 오대호;염준근
    • Journal of Korean Society for Quality Management, v.30 no.1, pp.162-171, 2002
  • A practical algorithm that generates the most probable system states in decreasing order of probability, given the probability of each unit's state, is proposed for approximating the reliability (performability) of a network with multistate (multimode) units. The method of approximating network reliability for a given measure using the most probable states is illustrated with a numerical example. The proposed method is compared with a previous method in terms of memory requirement. Computational experiments show that our method has advantages in computation and, under certain conditions, improves on the memory requirement of the previous method.
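
The abstract above does not spell out the enumeration algorithm, but the general idea of producing system states in decreasing order of probability can be sketched as a best-first search over the state lattice. The Python sketch below assumes independent units whose per-state probabilities are sorted in decreasing order; the function name and interface are hypothetical illustrations, not the authors' implementation.

```python
import heapq

def most_probable_states(unit_probs, k):
    """Yield the k most probable joint states of a system of independent
    multistate units, in decreasing order of probability.

    unit_probs: list over units; each entry is a list of state probabilities
                sorted in decreasing order (e.g. [0.9, 0.08, 0.02]).
    """
    def prob(state):
        p = 1.0
        for u, s in enumerate(state):
            p *= unit_probs[u][s]
        return p

    start = tuple(0 for _ in unit_probs)   # every unit in its most probable state
    heap = [(-prob(start), start)]
    seen = {start}
    produced = 0
    while heap and produced < k:
        neg_p, state = heapq.heappop(heap)
        yield -neg_p, state
        produced += 1
        # Successors: degrade exactly one unit to its next most probable state.
        for u in range(len(unit_probs)):
            if state[u] + 1 < len(unit_probs[u]):
                nxt = state[:u] + (state[u] + 1,) + state[u + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(heap, (-prob(nxt), nxt))

# Example: three units, each with per-state probabilities in decreasing order.
if __name__ == "__main__":
    units = [[0.9, 0.1], [0.8, 0.15, 0.05], [0.95, 0.05]]
    for p, s in most_probable_states(units, 5):
        print(s, round(p, 4))
```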

Gated Recurrent Unit based Prefetching for Graph Processing (그래프 프로세싱을 위한 GRU 기반 프리페칭)

  • Shivani Jadhav;Farman Ullah;Jeong Eun Nah;Su-Kyung Yoon
    • Journal of the Semiconductor & Display Technology, v.22 no.2, pp.6-10, 2023
  • Data that is likely to be accessed can be predicted and stored in the cache in advance to prevent cache misses, reducing the processor's request and wait times; as a result, the processor can keep working, hiding memory latency. By exploiting the temporal and spatial locality of memory accesses, a prefetcher predicts which memory address will be accessed next. We propose a prefetcher that applies a Gated Recurrent Unit (GRU) model, which is well suited to time-series data. The currently accessed address is represented in binary and used as training data: the GRU model is trained on the differences (deltas) between consecutive memory accesses. The proposed data prefetcher then uses the trained GRU model to predict the memory address that will be accessed next. Compared with a multi-layer perceptron, our prefetcher showed better results.
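
As a rough illustration of the delta-based GRU prefetching idea described above, the sketch below (assuming PyTorch) encodes consecutive address deltas in binary and trains a GRU to predict the next delta. The bit width, model size, and all identifiers are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

BITS = 16  # width of the binary delta encoding (assumption)

def delta_to_bits(delta, bits=BITS):
    """Encode a (bounded) address delta as a fixed-width binary vector."""
    delta &= (1 << bits) - 1                      # two's-complement style wrap
    return torch.tensor([(delta >> i) & 1 for i in range(bits)], dtype=torch.float32)

class DeltaGRUPrefetcher(nn.Module):
    def __init__(self, bits=BITS, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=bits, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, bits)       # predicts the bits of the next delta

    def forward(self, delta_seq):                 # delta_seq: (batch, seq_len, bits)
        out, _ = self.gru(delta_seq)
        return torch.sigmoid(self.head(out[:, -1]))  # predicted next-delta bits

# Build one training pair from a small trace of accessed addresses.
addresses = [0x1000, 0x1040, 0x1080, 0x10C0, 0x1100]
deltas = [b - a for a, b in zip(addresses, addresses[1:])]
x = torch.stack([delta_to_bits(d) for d in deltas[:-1]]).unsqueeze(0)  # delta history
y = delta_to_bits(deltas[-1]).unsqueeze(0)                             # next delta

model = DeltaGRUPrefetcher()
loss = nn.BCELoss()(model(x), y)   # loss of one illustrative training step
loss.backward()
```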


An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information, v.20 no.11, pp.31-38, 2015
  • Nowadays, large flash memory drives with capacities of several hundred gigabytes are common. This paper presents an efficient cache management scheme for the flash translation layer, called TPC-FTL, for large flash memory drives. Since large flash drives usually contain a large RAM, the performance of the page mapping cache can be enhanced by devoting more RAM to the cache. However, if the cache size exceeds a threshold, existing schemes become impractical for real devices because cache manipulation takes too long. TPC-FTL manages the cache in translation-page units rather than the logical-page-number units used in existing schemes. Since a translation page covers a large number of logical page numbers (for example, 512 entries for a 2KB page), the number of cache elements can be reduced to a practical level. A performance evaluation shows that average response time, an important performance measure, is better than in existing schemes, owing to the exploitation of spatial locality in addition to temporal locality.
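
The key idea of caching the mapping table in translation-page units rather than per logical page can be illustrated with a small sketch. The class below is a hypothetical, simplified model (LRU over whole translation pages of 512 entries), not the TPC-FTL implementation.

```python
from collections import OrderedDict

ENTRIES_PER_TPAGE = 512   # e.g. a 2KB translation page holds 512 mapping entries

class TPageMappingCache:
    """Illustrative page-mapping cache managed in translation-page units:
    the cache is keyed by translation-page number instead of by individual
    logical page number, so far fewer cache elements are needed."""
    def __init__(self, capacity_tpages, read_tpage_from_flash):
        self.capacity = capacity_tpages
        self.read_tpage = read_tpage_from_flash     # callback: tpn -> list of PPNs
        self.cache = OrderedDict()                  # tpn -> list of 512 PPNs (LRU order)

    def lookup(self, lpn):
        tpn, offset = divmod(lpn, ENTRIES_PER_TPAGE)
        if tpn in self.cache:
            self.cache.move_to_end(tpn)             # mark as most recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)      # evict least recently used tpage
            self.cache[tpn] = self.read_tpage(tpn)  # load the whole translation page
        return self.cache[tpn][offset]

# Usage with a fake flash: translation page tpn maps lpn -> lpn + 100000.
fake_flash = lambda tpn: [tpn * ENTRIES_PER_TPAGE + i + 100000 for i in range(ENTRIES_PER_TPAGE)]
cache = TPageMappingCache(capacity_tpages=4, read_tpage_from_flash=fake_flash)
print(cache.lookup(1000))   # 101000; nearby lookups hit the same cached tpage
```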

GPU Memory Management Technique to Improve the Performance of GPGPU Task of Virtual Machines in RPC-Based GPU Virtualization Environments (RPC 기반 GPU 가상화 환경에서 가상머신의 GPGPU 작업 성능 향상을 위한 GPU 메모리 관리 기법)

  • Kang, Jihun
    • KIPS Transactions on Computer and Communication Systems, v.10 no.5, pp.123-136, 2021
  • RPC (Remote Procedure Call)-based Graphics Processing Unit (GPU) virtualization is one technology for sharing a GPU among multiple user virtual machines. However, in a cloud environment, general-purpose GPUs, unlike CPUs or memory, do not provide a resource isolation mechanism that can limit the resource usage of virtual machines. In particular, in an RPC-based virtualization environment, the GPU tasks of the virtual machines run as multiple processes, so the lack of resource isolation causes performance degradation through resource competition. GPU memory contention further accelerates this degradation as the virtual machines' resource demands grow, and fairness decreases because equal performance between virtual machines cannot be guaranteed. This paper analyzes the performance degradation caused by resource contention in an RPC-based GPU virtualization environment when the GPU memory requirements of the virtual machines exceed the available GPU memory capacity, and proposes a GPU memory management technique to solve this problem. Experiments show that the proposed technique improves the performance of GPGPU tasks.
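
The abstract does not describe the internals of the proposed technique; as a loosely related illustration of keeping virtual machines' aggregate GPU memory demand within the available capacity, here is a generic admission-control sketch. It is not the paper's mechanism, and all names and sizes are hypothetical.

```python
import threading

class GpuMemoryManager:
    """Illustrative admission control for GPU memory in a shared-GPU setting:
    a VM's GPGPU task is admitted only when its requested GPU memory fits in
    the remaining capacity; otherwise it waits, so tasks never oversubscribe
    physical GPU memory. This is a generic sketch, not the paper's mechanism."""
    def __init__(self, total_bytes):
        self.free = total_bytes
        self.cond = threading.Condition()

    def acquire(self, requested_bytes):
        with self.cond:
            while requested_bytes > self.free:
                self.cond.wait()                 # queue until memory is released
            self.free -= requested_bytes

    def release(self, requested_bytes):
        with self.cond:
            self.free += requested_bytes
            self.cond.notify_all()

# Usage: two "VM tasks" sharing an 8 GiB GPU; a task that does not fit waits
# until the other task releases its memory.
mgr = GpuMemoryManager(total_bytes=8 << 30)

def vm_task(name, need):
    mgr.acquire(need)
    print(name, "running with", need >> 30, "GiB")
    mgr.release(need)

t1 = threading.Thread(target=vm_task, args=("vm1", 6 << 30))
t2 = threading.Thread(target=vm_task, args=("vm2", 4 << 30))
t1.start(); t2.start(); t1.join(); t2.join()
```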

The Conceptual Design of Mass Memory Unit for High Speed Data Processing in the STSAT-3 (고속 데이터 처리를 위한 과학기술위성 3호 대용량 메모리 유닛의 개념 설계)

  • Seo, In-Ho;Oh, Dae-Soo;Myung, Noh-Hoon
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.38 no.4, pp.389-394, 2010
  • This paper describes the conceptual design of the mass memory unit for high-speed data processing and mass memory management in STSAT-3, in comparison with that of STSAT-2. To satisfy these requirements, an FPGA directly controls data reception from the two payloads at speeds of up to 100 Mbps and manages 32 Gb of mass memory. We used an SRAM-based XILINX FPGA, which offers a fast operating speed and a large number of logic cells. Because SRAM-based FPGAs are susceptible to Single Event Upset (SEU) in space, Triple Modular Redundancy (TMR) and configuration memory scrubbing techniques will also be used to protect the FPGA.
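
Triple Modular Redundancy protects a value by keeping three copies and taking a bitwise majority vote, so a single upset bit is outvoted. On the FPGA this is implemented in logic; the short Python sketch below only illustrates the voting function.

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote of three redundant copies: each output bit is 1
    iff at least two of the three corresponding input bits are 1, so a single
    upset copy is outvoted."""
    return (a & b) | (b & c) | (a & c)

# A single-event upset flips one bit in one copy; the vote still recovers the word.
word = 0b10110010
upset = word ^ 0b00001000          # bit 3 flipped in one redundant copy
assert tmr_vote(word, upset, word) == word
```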

PMU (Performance Monitoring Unit)-Based Dynamic XIP (eXecute In Place) Technique for Embedded Systems (내장형 시스템을 위한 PMU (Performance Monitoring Unit) 기반 동적 XIP (eXecute In Place) 기법)

  • Kim, Dohun;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications, v.3 no.3, pp.158-166, 2008
  • Mobile embedded systems today adopt flash memory with the XIP (eXecute In Place) feature because it can reduce memory usage, power consumption, and software load time. XIP lets the processor access ROM and flash memory directly. However, XIP can unnecessarily degrade application performance because direct access to ROM and flash memory is slower than access to main memory. In this paper, we propose a memory management framework, dynamic XIP, that resolves this performance degradation. Using a constrained RAM cache, dynamic XIP changes the XIP region dynamically according to the page access pattern, reducing the loss in execution time and energy consumption inherent to native XIP. The proposed framework consists of a page profiler that gathers an application's memory access pattern using the PMU, and an XIP manager that decides whether a page is accessed from main memory or from flash memory. The framework is implemented and evaluated in the Linux kernel. Our evaluation shows that it can reduce execution time by up to 25% and energy consumption by up to 22% compared with the XIP-only configuration adopted in typical mobile embedded systems. Moreover, compared with applying the existing LRU algorithm to dynamic XIP, our modified LRU algorithm with code page filters reduces execution time and energy consumption by up to 90% and 80%, respectively.
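
A highly simplified sketch of the dynamic XIP idea follows: an access profiler (standing in for PMU samples) counts page accesses, and hot code pages are promoted into a small RAM cache while cold pages keep executing in place from flash. The threshold, eviction policy details, and names are assumptions, not the paper's implementation.

```python
from collections import Counter, OrderedDict

class DynamicXipManager:
    """Illustrative sketch of the dynamic-XIP idea: a profiler counts page
    accesses (a stand-in for PMU samples), and hot code pages are copied into
    a small RAM cache while cold pages keep executing in place from flash."""
    def __init__(self, ram_pages, hot_threshold=4):
        self.ram_pages = ram_pages
        self.hot_threshold = hot_threshold
        self.access_count = Counter()       # page profiler (PMU-sample stand-in)
        self.ram_cache = OrderedDict()      # page -> True, in LRU order

    def on_access(self, page, is_code_page=True):
        self.access_count[page] += 1
        if page in self.ram_cache:
            self.ram_cache.move_to_end(page)
            return "RAM"
        # Only code pages that proved hot are promoted out of the XIP region.
        if is_code_page and self.access_count[page] >= self.hot_threshold:
            if len(self.ram_cache) >= self.ram_pages:
                self.ram_cache.popitem(last=False)   # evict least recently used page
            self.ram_cache[page] = True
            return "promoted to RAM"
        return "XIP from flash"

# Usage: page 7 becomes hot after repeated accesses and is promoted.
mgr = DynamicXipManager(ram_pages=2)
print([mgr.on_access(7) for _ in range(5)])
```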


Adaptive Memory Management Method based on Utilization Ratio to Process Continuous Query (연속질의의 처리를 위한 이용률 기반의 적응적 메모리 관리 기법)

  • Baek, Sung-Ha;Lee, Dong-Wook;Eo, Sang-Hun;Chung, Weon-Il;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society, v.11 no.2, pp.79-88, 2009
  • The amount of memory needed to store a real-time data stream varies dynamically, so continuous queries that process the stream must manage their storage dynamically. Previous research proposed a general memory manager that allocates and releases memory in page units according to the current volume of data. However, that method allocates and releases pages frequently while storing the data stream, and delayed queries in particular can monopolize many pages because pages are allocated directly whenever a query runs short of memory. Focusing on these problems, this research proposes a memory management method that reduces the frequency of allocation and release and distributes pages evenly among queries. The method reduces the allocation and release frequency by allocating pages based on each query's page utilization ratio, and prevents memory monopolization through allocation that takes query delay into account.
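
The utilization-ratio idea can be sketched as follows: a query is granted an additional page only when the pages it already holds are nearly full, and no query may exceed a fair share. This is a hypothetical simplification of the scheme described above, with invented names and thresholds.

```python
PAGE_SIZE = 4096

class UtilizationBasedAllocator:
    """Illustrative sketch of utilization-ratio-based page allocation for
    continuous queries: a query receives an extra page only when its current
    pages are nearly full, and no query may exceed a fair share, so a delayed
    query cannot monopolize memory."""
    def __init__(self, total_pages, num_queries, grow_threshold=0.8):
        self.free_pages = total_pages
        self.fair_share = total_pages // num_queries
        self.grow_threshold = grow_threshold
        self.held = {}   # query_id -> [pages_held, bytes_used]

    def request_page(self, query_id):
        pages, used = self.held.setdefault(query_id, [0, 0])
        utilization = used / (pages * PAGE_SIZE) if pages else 1.0
        if pages and utilization < self.grow_threshold:
            return False            # existing pages still have room: no new page
        if pages >= self.fair_share or self.free_pages == 0:
            return False            # fairness cap reached or no memory left
        self.held[query_id][0] += 1
        self.free_pages -= 1
        return True

    def record_usage(self, query_id, nbytes):
        """Called by the query after it stores nbytes into its pages."""
        self.held[query_id][1] += nbytes

alloc = UtilizationBasedAllocator(total_pages=100, num_queries=4)
print(alloc.request_page("q1"))   # True: q1 holds no pages yet
print(alloc.request_page("q1"))   # False: its page is still mostly empty
alloc.record_usage("q1", 3500)    # the page is now about 85% full
print(alloc.request_page("q1"))   # True: utilization passed the threshold
```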


Efficient FTL Mapping Management for Multiple Sector Size-based Storage Systems with NAND Flash Memory (다중 섹터 사이즈를 지원하는 낸드 플래시 메모리 기반의 저장장치를 위한 효율적인 FTL 매핑 관리 기법)

  • Lim, Seung-Ho;Choi, Min
    • Journal of KIISE: Computing Practices and Letters, v.16 no.12, pp.1199-1203, 2010
  • Data transfer between a host system and a storage device is based on a data unit called the sector, whose size can vary between computer systems. When NAND flash memory is used as the storage device, the varying sector size can affect storage system performance because flash operations are closely tied to the sector size and page size. In this paper, we propose an efficient FTL mapping management scheme that supports multiple sector sizes within a single NAND flash memory based storage device, and analyze its effect on performance and its management overhead. With the proposed scheme, the management overhead is lower than that of the conventional scheme when various sector sizes are configured, while performance degrades only slightly compared with a system that supports a single sector size.
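
The front-end translation that lets one mapping table serve hosts with different sector sizes can be illustrated in a few lines: the sector address is converted to a byte address and then to a (logical page number, in-page offset) pair. The page and sector sizes below are assumptions for illustration.

```python
PAGE_SIZE = 4096   # NAND page size in bytes (assumption)

def sector_to_page(sector_no, sector_size):
    """Map a host sector (whose size may differ between systems, e.g. 512 B or
    4 KB) onto the flash page that contains it. Returns (logical page number,
    byte offset within the page). Illustrative only."""
    byte_addr = sector_no * sector_size
    return divmod(byte_addr, PAGE_SIZE)

# The FTL mapping table is indexed by logical page number regardless of the
# host's sector size; only this front-end translation differs.
print(sector_to_page(9, 512))    # (1, 512) -> second flash page, offset 512
print(sector_to_page(9, 4096))   # (9, 0)   -> tenth flash page, offset 0
```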

Real-time Storage Manager to Store Very Large Data Using Block Transactions (블록 단위 트랜잭션을 이용한 대용량 데이터의 실시간 저장관리기)

  • Baek, Sung-Ha;Lee, Dong-Wook;Eo, Sang-Hun;Chung, Warn-Ill;Kim, Gyoung-Bae;Oh, Young-Hwan;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society, v.10 no.2, pp.1-12, 2008
  • An automated semiconductor manufacturing system that generates 50,000 to 500,000 transactions per second needs a storage management system that can process very large volumes of data at once. Many storage management systems have been studied for storing very large data. The typical existing storage management system is a disk-based DBMS, but it is difficult for a disk-based DBMS to process 500,000 insert transactions per second. Main-memory DBMSs appeared to exploit memory, but it is difficult to store very large data in a main-memory DBMS because of the limited amount of memory. In this paper we propose a storage management system using block-unit insert transactions that can process more than 50,000 insert transactions per second and store data at low storage cost. A block-unit transaction reduces the per-tuple cost of logging and indexing by transforming tuple-unit transactions into block-unit transactions. However, because field-level information is lost when a block is compressed, the system would otherwise have to decompress entire blocks of data for searching; to solve this problem, the proposed system builds an index for each compressed block so that search speed is not reduced. The proposed system can store the very large data generated in semiconductor systems while reducing storage cost.
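
The block-unit transaction idea can be sketched briefly: tuples are buffered into a block and committed together, so logging and indexing costs are paid once per block rather than once per tuple, with one index entry kept per block. The sketch below is illustrative only; all names and the block size are invented.

```python
class BlockTransactionWriter:
    """Illustrative sketch of block-unit insert transactions: incoming tuples
    are buffered into a block and committed together, so the logging and
    indexing cost is paid once per block instead of once per tuple."""
    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.buffer = []
        self.blocks = []        # committed blocks (stand-in for compressed storage)
        self.block_index = {}   # minimum key of a block -> block id, one entry per block

    def insert(self, key, tuple_data):
        self.buffer.append((key, tuple_data))
        if len(self.buffer) >= self.block_size:
            self._commit_block()

    def _commit_block(self):
        block_id = len(self.blocks)
        self.blocks.append(list(self.buffer))                # one block-unit "transaction"
        self.block_index[min(k for k, _ in self.buffer)] = block_id
        self.buffer.clear()

# 5,000 tuples become 5 block transactions and 5 index entries, not 5,000.
w = BlockTransactionWriter(block_size=1000)
for i in range(5000):
    w.insert(i, {"sensor": i % 7})
print(len(w.blocks), len(w.block_index))   # 5 5
```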


An Efficient Index Buffer Management Scheme for a B+ tree on Flash Memory (플래시 메모리상에 B+트리를 위한 효율적인 색인 버퍼 관리 정책)

  • Lee, Hyun-Seob;Joo, Young-Do;Lee, Dong-Ho
    • The KIPS Transactions: Part D, v.14D no.7, pp.719-726, 2007
  • Recently, NAND flash memory has been used as a storage device in various mobile computing devices such as MP3 players, mobile phones, and laptops because of its shock resistance, low power consumption, and non-volatility. However, due to the very distinct characteristics of flash memory, directly adopting disk-based systems and applications on flash memory storage can cause severe performance degradation. In particular, when a B+-tree is constructed, record insertion, deletion, and reorganization cause intensive overwrite operations, which can severely degrade performance on NAND flash memory. In this paper, we propose an efficient buffer management scheme, called IBSF, which eliminates redundant index units in the index buffer and thereby delays the point at which the index buffer fills up. Consequently, IBSF significantly reduces the number of write operations to flash memory when constructing a B+-tree. We also show through various experiments that IBSF yields better performance on flash memory than the related technique BFTL.
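
The idea of eliminating redundant index units before they reach flash can be sketched as follows; the cancellation and replacement rules below are a simplified interpretation of the abstract, not the published IBSF algorithm, and all identifiers are hypothetical.

```python
class IndexBufferIBSF:
    """Illustrative sketch of the IBSF idea: B+-tree index operations are
    buffered in RAM, and a later operation on the same key replaces (or, for an
    insert followed by a delete, cancels) the buffered one, so redundant index
    units never reach flash and the buffer fills up more slowly."""
    def __init__(self, capacity, flush_to_flash):
        self.capacity = capacity
        self.flush_to_flash = flush_to_flash   # callback taking a list of buffered ops
        self.pending = {}                      # key -> latest ('insert'/'delete', payload)

    def add(self, op, key, payload=None):
        if op == "delete" and self.pending.get(key, ("",))[0] == "insert":
            del self.pending[key]              # insert followed by delete cancels out
        else:
            self.pending[key] = (op, payload)  # a later op supersedes the earlier one
        if len(self.pending) >= self.capacity:
            self.flush_to_flash(list(self.pending.items()))   # one batched flash write
            self.pending.clear()

# Usage: repeated updates to key 42 leave only one buffered index unit.
writes = []
buf = IndexBufferIBSF(capacity=4, flush_to_flash=writes.append)
for v in range(10):
    buf.add("insert", 42, v)
print(len(buf.pending), len(writes))   # 1 0 -> buffer not yet full, nothing written
```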