• Title/Summary/Keyword: memory allocation techniques (메모리 할당 기법)

127 search results

SAPA : State-Aware Page Allocation Scheme for SSD Based Virtual Memory Systems (SSD 기반의 가상메모리 시스템을 위한 상태기반 페이지 할당 기법)

  • Kim, Hyun-Wook; Ahn, Woo-Hyun
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.438-441 / 2011
  • With the recent popularity of advanced mobile devices such as tablet computers and the widespread use of high-performance laptops, the number of systems that use an SSD (Solid State Drive) as the primary storage device is increasing. Because these systems use the SSD as the swap area for virtual memory, a virtual memory policy suited to SSDs is needed. Since SSD manufacturers do not disclose detailed information about SSD internals, optimized allocation is difficult. This paper proposes a scheme that records the internal state of the SSD and, when the SSD is used as the VM swap space, selects optimized allocation pages by considering the state of each block.
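The paper does not disclose what it records per SSD block, so the following is only a minimal C sketch of the idea: the host keeps a hypothetical per-block state table (free pages, invalid pages, erase count here) and picks the cheapest block when a page is swapped out. All constants and field names are illustrative assumptions, not the paper's data structures.

#include <stdio.h>
#include <limits.h>

#define NUM_BLOCKS      1024   /* hypothetical number of SSD blocks tracked by the host */
#define PAGES_PER_BLOCK 64

/* Hypothetical per-block state record kept by the host. */
struct block_state {
    int free_pages;      /* clean pages still writable in this block       */
    int invalid_pages;   /* pages whose contents were superseded elsewhere */
    int erase_count;     /* wear indicator                                 */
};

static struct block_state blocks[NUM_BLOCKS];

/* Assumed cost of placing one more swapped-out page in a block:
 * prefer blocks with free space, few invalid pages, and low wear. */
static int block_cost(const struct block_state *b)
{
    if (b->free_pages == 0)
        return INT_MAX;                      /* cannot allocate here */
    return b->invalid_pages * 4 + b->erase_count;
}

/* Choose the block that will receive the next swapped-out page. */
int select_swap_block(void)
{
    int best = -1, best_cost = INT_MAX;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        int c = block_cost(&blocks[i]);
        if (c < best_cost) {
            best_cost = c;
            best = i;
        }
    }
    return best;                             /* -1 if every block is full */
}

int main(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        blocks[i] = (struct block_state){ PAGES_PER_BLOCK, 0, 0 };
    blocks[0].erase_count = 10;              /* pretend block 0 is already worn */
    printf("next swap-out goes to block %d\n", select_swap_block());
    return 0;
}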

A Study of Object Pooling Scheme for Efficient Online Gaming Server (효율적인 온라인 게임 서버를 위한 객체풀링 기법에 관한 연구)

  • Kim, Hye-Young; Ham, Dae-Hyeon; Kim, Moon-Seong
    • Journal of Korea Game Society / v.9 no.6 / pp.163-170 / 2009
  • Most online game server engines handle client connection requests by repeatedly calling Accept() and dynamically allocating memory for each connection, which lets them process many client connections synchronously. However, for an online game server that must support and process a large number of clients, this approach causes long loading times and bottlenecks. In this paper we therefore propose an object pooling scheme that uses AcceptEx() and static allocation to minimize memory fragmentation and per-client initialization overhead, yielding an efficient online game server. We design and implement a game server that applies the proposed scheme, and we demonstrate its efficiency through performance analysis.
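As a rough illustration of the pooling side of this idea (AcceptEx() is a Windows-specific API, so socket handling is omitted), the sketch below pre-allocates a fixed array of per-connection session objects and hands them out from a free list instead of calling malloc() for every accepted client. The structure fields and pool size are assumptions, not the paper's implementation.

#include <stdio.h>
#include <string.h>

#define POOL_SIZE 256                 /* hypothetical maximum concurrent clients */

/* Per-connection context that would otherwise be malloc'ed on every Accept(). */
struct session {
    int  in_use;
    char recv_buf[4096];
};

static struct session pool[POOL_SIZE];   /* allocated once, up front */
static int free_list[POOL_SIZE];
static int free_top = 0;

void pool_init(void)
{
    for (int i = 0; i < POOL_SIZE; i++)
        free_list[free_top++] = i;        /* every slot starts free */
}

struct session *session_acquire(void)
{
    if (free_top == 0)
        return NULL;                      /* pool exhausted */
    struct session *s = &pool[free_list[--free_top]];
    s->in_use = 1;
    memset(s->recv_buf, 0, sizeof s->recv_buf);
    return s;
}

void session_release(struct session *s)
{
    s->in_use = 0;
    free_list[free_top++] = (int)(s - pool);   /* return slot index to the free list */
}

int main(void)
{
    pool_init();
    struct session *s = session_acquire();     /* would be done per accepted socket */
    printf("acquired slot %ld\n", (long)(s - pool));
    session_release(s);
    return 0;
}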


Garbage Collection Technique for Non-volatile Memory by Using Tree Data Structure (트리 자료구조를 이용한 비 휘발성 메모리의 가비지 수집 기법)

  • Lee, Dokeun; Won, Youjip
    • Journal of KIISE / v.43 no.2 / pp.152-162 / 2016
  • Most traditional garbage collectors rely on language-level metadata designed for locating pointer-typed fields. Because this metadata is difficult to use on non-volatile memory allocation platforms, a new garbage collection technique is essential for non-volatile memory. In this paper, we design new metadata, called the "Allocation Tree", for managing information about non-volatile memory allocations. The metadata is a tree data structure for fast information lookup whose nodes hold an allocation address and an object ID as a key-value pair. The garbage collector starts collecting when non-volatile memory space becomes insufficient, and it compares user data against the allocation tree to detect garbage. We implement this algorithm in HEAPO, a persistent-heap-based non-volatile memory allocation platform, for demonstration.
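A minimal sketch of the allocation-tree idea as described in the abstract: a search tree keyed by allocation address whose nodes carry object IDs, scanned against a set of IDs still reachable from user data. HEAPO's actual metadata layout and garbage-detection pass are not given in the abstract, so every detail below is an illustrative assumption.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* One node of the illustrative allocation tree: a binary search tree keyed
 * by allocation address, holding the object ID recorded at allocation time. */
struct alloc_node {
    void  *addr;
    long   obj_id;
    struct alloc_node *left, *right;
};

static struct alloc_node *tree_insert(struct alloc_node *root, void *addr, long id)
{
    if (root == NULL) {
        struct alloc_node *n = malloc(sizeof *n);
        n->addr = addr;
        n->obj_id = id;
        n->left = n->right = NULL;
        return n;
    }
    if ((uintptr_t)addr < (uintptr_t)root->addr)
        root->left = tree_insert(root->left, addr, id);
    else
        root->right = tree_insert(root->right, addr, id);
    return root;
}

/* Garbage detection sketch: an allocation whose object ID is no longer in
 * the set of IDs reachable from user data is reported as garbage. */
static int id_is_live(long id, const long *live, int n)
{
    for (int i = 0; i < n; i++)
        if (live[i] == id)
            return 1;
    return 0;
}

static void collect(const struct alloc_node *root, const long *live, int n)
{
    if (root == NULL)
        return;
    collect(root->left, live, n);
    if (!id_is_live(root->obj_id, live, n))
        printf("reclaim object %ld at %p\n", root->obj_id, root->addr);
    collect(root->right, live, n);
}

int main(void)
{
    struct alloc_node *root = NULL;
    int a, b, c;                          /* stand-ins for persistent allocations */
    root = tree_insert(root, &a, 1);
    root = tree_insert(root, &b, 2);
    root = tree_insert(root, &c, 3);
    long live_ids[] = { 1, 3 };           /* IDs still referenced by user data */
    collect(root, live_ids, 2);           /* object 2 is reported as garbage */
    return 0;
}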

A Dynamic Buffer Allocation Scheme in Video-on-Demand System (주문형 비디오 시스템에서의 동적 버퍼 할당 기법)

  • Lee, Sang-Ho; Moon, Yang-Sae; Whang, Kyu-Young; Cho, Wan-Sup
    • Journal of KIISE: Computer Systems and Theory / v.28 no.9 / pp.442-460 / 2001
  • In video-on-demand (VOD) systems it is important to minimize initial latency and memory requirements. Minimizing initial latency allows the system to respond to requests quickly, and minimizing memory requirements allows it to serve more concurrent user requests with the same amount of memory. Since initial latency and memory requirements grow with the buffer size allocated to user requests, that buffer size must be minimized. The existing static buffer allocation scheme, however, determines the buffer size under the assumption that the system is fully loaded, so when the system is only partially loaded it allocates unnecessarily large buffers to user requests. This paper proposes a dynamic buffer allocation scheme that allocates the minimum buffer size in both fully loaded and partially loaded states. The scheme determines the buffer size dynamically from the number of user requests currently in service and the number of requests arriving while the current requests are being serviced. Through analysis and simulation, we validate that dynamic buffer allocation outperforms static buffer allocation in initial latency and in the number of concurrent user requests that can be supported. Our simulation results show that, compared with the static scheme, the dynamic buffer allocation scheme reduces average initial latency by 29%~65% and, in systems with several disks, increases the average number of concurrent user requests by 48%~68%. These results show that dynamic buffer allocation significantly improves the performance and reduces the capacity requirements of VOD systems.
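The abstract does not give the sizing formula, so the following is only a caricature of the contrast between the two schemes under assumed constants: the static scheme always sizes a stream's buffer for a fully loaded service round, while the dynamic scheme sizes it for the requests actually in service plus the arrivals expected during the current round.

#include <stdio.h>

/* Illustrative constants; the paper derives the exact sizes analytically. */
#define MAX_STREAMS     100      /* streams the disks can sustain at full load     */
#define RATE_KBPS       192      /* playback consumption rate of one stream (KB/s) */
#define BLOCK_TIME_SEC  0.05     /* disk time to fetch one stream's block          */

/* A stream's buffer must hold enough data to last one service round
 * (double-buffered), and a round grows with the number of active streams. */
static double buffer_kb(int streams_in_round)
{
    double round_sec = streams_in_round * BLOCK_TIME_SEC;
    return 2.0 * RATE_KBPS * round_sec;
}

int main(void)
{
    int in_service = 25, expected_arrivals = 5;     /* hypothetical current load */

    /* Static scheme: always sized for the fully loaded system. */
    printf("static : %.0f KB per stream\n", buffer_kb(MAX_STREAMS));

    /* Dynamic scheme (sketch): sized for the requests actually in service plus
     * those expected to arrive while the current requests are serviced. */
    int effective = in_service + expected_arrivals;
    if (effective > MAX_STREAMS)
        effective = MAX_STREAMS;
    printf("dynamic: %.0f KB per stream at this load\n", buffer_kb(effective));
    return 0;
}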


Bus Splitting Techniques for MPSoC to Reduce Bus Energy (MPSoC 플랫폼의 버스 에너지 절감을 위한 버스 분할 기법)

  • Chung, Chun-Mok; Kim, Jin-Hyo; Kim, Ji-Hong
    • Journal of KIISE: Computer Systems and Theory / v.33 no.9 / pp.699-708 / 2006
  • Bus splitting reduces bus energy by placing modules that communicate frequently close to each other and driving only the bus segments needed for each transfer. Previous bus splitting techniques, however, cannot be used on an MPSoC platform, because such platforms use a cache coherency protocol and every processor must be able to observe all bus transactions. In this paper, we propose a bus splitting technique for MPSoC platforms that reduces bus energy. The proposed technique divides a bus into several segments, some for private memory and others for shared memory, thereby minimizing the bus energy consumed by private memory accesses without breaking cache coherency. We also propose a task allocation technique that takes the cache coherency protocol into account: it assigns tasks to processors according to their bus transaction counts and the coherence protocol, reducing the bus energy consumed by shared memory references. Simulation results show that the bus splitting technique reduces the bus energy consumed by private memory accesses by up to 83%, and that the task allocation technique reduces the bus energy consumed by shared memory references by up to 30%. We expect both techniques to be applicable to multiprocessor platforms for reducing bus energy without interfering with the cache coherency protocol.
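The task allocation side of this work can be caricatured as a co-location heuristic: keep the task pairs that exchange the most bus transactions on the same processor so their traffic stays off the shared segment. The sketch below is such a greedy heuristic under an assumed communication matrix; it is not the paper's algorithm, which also accounts for the cache coherence protocol.

#include <stdio.h>

#define NUM_TASKS 4
#define NUM_CPUS  2

/* Illustrative bus-transaction counts between task pairs (symmetric):
 * the more two tasks exchange, the more energy is saved by keeping that
 * traffic local instead of on the shared bus segment. */
static const int comm[NUM_TASKS][NUM_TASKS] = {
    {   0, 900,  10,   5 },
    { 900,   0,  20,  15 },
    {  10,  20,   0, 700 },
    {   5,  15, 700,   0 },
};

int main(void)
{
    int cpu_of[NUM_TASKS];
    for (int t = 0; t < NUM_TASKS; t++)
        cpu_of[t] = -1;

    /* Greedy pairing: repeatedly co-locate the unassigned pair with the
     * heaviest communication, one pair per processor. */
    for (int c = 0; c < NUM_CPUS; c++) {
        int bi = -1, bj = -1, best = -1;
        for (int i = 0; i < NUM_TASKS; i++)
            for (int j = i + 1; j < NUM_TASKS; j++)
                if (cpu_of[i] < 0 && cpu_of[j] < 0 && comm[i][j] > best) {
                    best = comm[i][j];
                    bi = i;
                    bj = j;
                }
        if (bi >= 0) {
            cpu_of[bi] = c;
            cpu_of[bj] = c;
        }
    }
    for (int t = 0; t < NUM_TASKS; t++)
        printf("task %d -> CPU %d\n", t, cpu_of[t]);
    return 0;
}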

A Study of Purity-based Page Allocation Scheme for Flash Memory File Systems (플래시 메모리 파일 시스템을 위한 순수도 기반 페이지 할당 기법에 대한 연구)

  • Baek, Seung-Jae; Choi, Jong-Moo
    • The KIPS Transactions: Part A / v.13A no.5 s.102 / pp.387-398 / 2006
  • In this paper, we propose a new page allocation scheme for flash memory file systems. The proposed scheme allocates pages by exploiting the concept of purity, defined as the fraction of blocks in which valid and invalid pages coexist. Purity determines the cost of block cleaning, that is, the proportion of pages that must be copied and blocks that must be erased during cleaning. To improve purity, the scheme classifies data as hot-modified or cold-modified and allocates the two classes to different blocks. The hot/cold classification is based on both static properties, such as the attributes of the data, and dynamic properties, such as the frequency of modification. We implemented the proposed scheme in YAFFS and evaluated its performance on an embedded board equipped with a 400MHz XScale CPU, 64MB of SDRAM, and 64MB of NAND flash memory. Performance measurements show that the proposed scheme reduces block cleaning time by up to 15.4 seconds, with an average of 7.8 seconds, compared with stock YAFFS, and that the improvement grows as flash memory utilization increases.
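A minimal sketch of the hot/cold separation described above, with assumed classification rules and block numbers: each temperature class writes into its own open block, so valid and invalid pages tend not to coexist within one block.

#include <stdio.h>

#define HOT_THRESHOLD 4     /* hypothetical: modifications seen before data counts as hot */

enum temp { COLD, HOT };

/* Hypothetical descriptor for a piece of file data about to be written. */
struct write_req {
    int is_metadata;        /* static property: metadata tends to be hot       */
    int mod_count;          /* dynamic property: modifications observed so far */
};

/* Hot/cold classification combining a static and a dynamic property,
 * in the spirit of the paper's classifier (the exact rules differ). */
enum temp classify(const struct write_req *r)
{
    if (r->is_metadata || r->mod_count >= HOT_THRESHOLD)
        return HOT;
    return COLD;
}

/* Two open blocks, one per temperature class, so hot and cold pages are
 * never mixed in the same block; blocks then tend to become all-valid or
 * all-invalid, which keeps block cleaning cheap. */
static int current_block[2] = { 100, 200 };   /* dummy block numbers */

int allocate_page(const struct write_req *r)
{
    return current_block[classify(r)];
}

int main(void)
{
    struct write_req inode = { 1, 0 };
    struct write_req mp3   = { 0, 1 };
    printf("inode -> block %d\n", allocate_page(&inode));
    printf("mp3   -> block %d\n", allocate_page(&mp3));
    return 0;
}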

A New Register Allocation Technique for Performance Enhancement of Embedded Software (내장형 소프트웨어의 성능 향상을 위한 새로운 레지스터 할당 기법)

  • Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SD / v.41 no.10 / pp.85-94 / 2004
  • In this paper, a register allocation technique that translates memory accesses into register accesses is presented to enhance embedded software performance. In the proposed method, the source code is profiled to generate a memory trace. From the profiling results, target functions with high dynamic call counts are selected, and the register allocation technique is applied only to those functions to save compilation time. The memory trace of the target functions is searched for memory accesses whose replacement by register accesses reduces the cycle count, and these accesses are translated into register accesses by modifying the intermediate code and allocating promotion registers. Experiments measuring cycle counts on MediaBench and DSPstone benchmark programs show that the proposed method improves performance by 14% and 18% on average for ARM and MCORE, respectively.
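The abstract describes a profile-guided selection step; the sketch below caricatures it as a greedy choice of the memory locations with the largest estimated cycle savings, limited by an assumed number of free promotion registers. The candidate names and numbers are invented for illustration and are not from the paper.

#include <stdio.h>
#include <stdlib.h>

#define NUM_FREE_REGS 4      /* hypothetical registers available for promotion */

/* One candidate memory location found in the profiled trace of a hot function. */
struct candidate {
    const char *name;        /* variable or spill slot (illustrative)        */
    long access_count;       /* dynamic loads/stores observed in the profile */
    long cycles_saved;       /* estimated cycles saved if kept in a register */
};

/* Sort candidates by estimated savings, largest first. */
static int by_savings(const void *a, const void *b)
{
    const struct candidate *x = a, *y = b;
    return (y->cycles_saved > x->cycles_saved) - (y->cycles_saved < x->cycles_saved);
}

int main(void)
{
    /* Illustrative candidates; a real pass would derive them from the trace. */
    struct candidate c[] = {
        { "coef[i]", 120000, 240000 },
        { "state",    90000, 180000 },
        { "tmp",       5000,  10000 },
        { "hist[j]",  70000, 140000 },
        { "flag",      2000,   4000 },
    };
    int n = sizeof c / sizeof c[0];

    /* Greedy selection: promote the candidates with the largest savings
     * until the promotion registers run out. */
    qsort(c, n, sizeof c[0], by_savings);
    for (int i = 0; i < n && i < NUM_FREE_REGS; i++)
        printf("promote %-8s (saves %ld cycles)\n", c[i].name, c[i].cycles_saved);
    return 0;
}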

Efficient Page Allocation Method Considering Update Pattern in NAND Flash Memory (NAND 플래시 메모리에서 업데이트 패턴을 고려한 효율적인 페이지 할당 기법)

  • Kim, Hui-Tae; Han, Dong-Yun; Kim, Kyong-Sok
    • Journal of KIISE: Computer Systems and Theory / v.37 no.5 / pp.272-284 / 2010
  • Flash memory differs from a hard disk in that it cannot be overwritten in place, so most flash memory file systems use not-in-place update mechanisms. While performing not-in-place updates, a flash memory file system must sometimes run a block cleaning process to create writable space: it collects invalid pages and converts them into free pages. Block cleaning directly affects the performance of flash memory. This paper therefore proposes an efficient page allocation method that reduces block cleaning cost by minimizing the number of blocks that hold valid and invalid pages at the same time. Simulation results show that the method reduces block cleaning cost compared with the original YAFFS.
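The quantity this scheme minimizes can be made concrete with a small sketch: count the blocks that hold valid and invalid pages at the same time, since only those blocks force valid-page copying before an erase. The block layout below is an assumed toy example, not the paper's allocator.

#include <stdio.h>

#define NUM_BLOCKS      8
#define PAGES_PER_BLOCK 4

/* Page states as the simulation sees them. */
enum page_state { FREE, VALID, INVALID };

/* The quantity the allocation scheme tries to minimize: the number of
 * blocks holding both valid and invalid pages, since cleaning such a
 * block forces its valid pages to be copied out first. */
int mixed_block_count(enum page_state p[NUM_BLOCKS][PAGES_PER_BLOCK])
{
    int mixed = 0;
    for (int b = 0; b < NUM_BLOCKS; b++) {
        int has_valid = 0, has_invalid = 0;
        for (int i = 0; i < PAGES_PER_BLOCK; i++) {
            if (p[b][i] == VALID)   has_valid = 1;
            if (p[b][i] == INVALID) has_invalid = 1;
        }
        mixed += (has_valid && has_invalid);
    }
    return mixed;
}

int main(void)
{
    enum page_state pages[NUM_BLOCKS][PAGES_PER_BLOCK] = { { FREE } };
    /* Frequently updated data packed into block 0: its pages invalidate together. */
    pages[0][0] = INVALID; pages[0][1] = INVALID; pages[0][2] = INVALID;
    /* Rarely updated data packed into block 1: its pages stay valid together. */
    pages[1][0] = VALID;   pages[1][1] = VALID;
    printf("blocks needing copy-before-erase: %d\n", mixed_block_count(pages));
    return 0;
}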

Fixed Size Memory Pool Management Method for Mobile Game Servers (모바일 게임 서버를 위한 고정크기 메모리 풀 관리 방법)

  • Park, Seyoung; Choi, Jongsun; Choi, Jaeyoung; Kim, Eunhoe
    • KIPS Transactions on Computer and Communication Systems / v.4 no.9 / pp.327-336 / 2015
  • Mobile game servers typically perform frequent dynamic memory allocation to create the buffers that handle client requests, which degrades server performance by increasing system workload and memory fragmentation. In this paper, we propose a fixed-size memory pool management method. The memory pool has a sequential memory layout organized as a circular linked list, which avoids memory fragmentation and saves the time spent searching for memory blocks during allocation and deallocation. We show the efficiency of the proposed method by comparing its dynamic memory allocation performance against the memory pool management method of the Boost open-source library.
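A minimal sketch of a fixed-size pool built over one contiguous arena with a circular linked free list, in the spirit of the description above; the block size, block count, and exact splice operations are assumptions rather than the paper's implementation.

#include <stdio.h>

#define BLOCK_SIZE  512          /* hypothetical fixed block size in bytes */
#define BLOCK_COUNT 128

/* Each free block's first bytes hold the link, so the list costs no extra memory. */
struct free_node { struct free_node *next; };

/* One contiguous, suitably aligned arena carved into fixed-size blocks. */
static _Alignas(struct free_node) unsigned char arena[BLOCK_COUNT][BLOCK_SIZE];
static struct free_node *cursor;                  /* current position in the ring */

/* Chain every block into a circular free list over the contiguous arena. */
void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT; i++) {
        struct free_node *n = (struct free_node *)arena[i];
        n->next = (struct free_node *)arena[(i + 1) % BLOCK_COUNT];
    }
    cursor = (struct free_node *)arena[0];
}

/* O(1) allocation: take the block after the cursor out of the ring. */
void *pool_alloc(void)
{
    if (cursor == NULL || cursor->next == cursor) {
        void *last = cursor;                      /* ring empty or down to one block */
        cursor = NULL;
        return last;
    }
    struct free_node *taken = cursor->next;
    cursor->next = taken->next;
    return taken;
}

/* O(1) deallocation: splice the block back in right after the cursor. */
void pool_free(void *p)
{
    struct free_node *n = p;
    if (cursor == NULL) {
        n->next = n;
        cursor = n;
    } else {
        n->next = cursor->next;
        cursor->next = n;
    }
}

int main(void)
{
    pool_init();
    void *a = pool_alloc();                       /* e.g. a per-request receive buffer */
    void *b = pool_alloc();
    pool_free(a);
    pool_free(b);
    printf("blocks are %d bytes, %d of them pre-allocated\n", BLOCK_SIZE, BLOCK_COUNT);
    return 0;
}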

An Efficient Page Allocation and Garbage Collection Scheme for a NAND Flash Memory-based Multimedia File Systems (낸드 플래시 메모리 기반 멀티미디어 파일 시스템에서의 효율적인 페이지 할당 및 회수 기법)

  • Han, Jun-Yeong; Kim, Sung-Jo
    • Proceedings of the Korean Information Science Society Conference / 2007.06b / pp.289-293 / 2007
  • Because NAND flash memory cannot be overwritten in place, a dirty page holding invalid data must first be returned to the clean state by an erase operation before new data can be written. When many dirty pages exist in NAND flash memory, many blocks must be erased at write time, which lengthens write latency. This paper therefore proposes a new page allocation and garbage collection scheme that guarantees a constant write latency. When a file is deleted, its dirty pages are erased and returned to the clean state, keeping the NAND flash memory free of the dirty pages that would otherwise lengthen write latency. Moreover, since erase operations are performed in block units and the valid pages of a block to be erased must be copied to another block, we also present an allocation algorithm that stores as few files as possible in a single block.
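A minimal sketch of the two ideas in the abstract, with an assumed toy geometry: pages are allocated so that a block holds as few files as possible (here, exactly one), and a file's blocks are erased immediately on deletion, so no valid pages need copying and later writes never wait for cleaning. The structures and sizes are illustrative, not the paper's.

#include <stdio.h>

#define NUM_BLOCKS      16
#define PAGES_PER_BLOCK 4

/* Which file owns each block (-1 = free). Keeping one file per block means
 * deleting that file leaves no valid pages to copy before erasing. */
static int block_owner[NUM_BLOCKS];
static int block_used[NUM_BLOCKS];      /* pages written so far in each block */

void fs_init(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++) {
        block_owner[b] = -1;
        block_used[b] = 0;
    }
}

/* Allocate a page for a file, reusing that file's partially filled block
 * if one exists, otherwise opening a fresh block for it. */
int alloc_page(int file_id)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (block_owner[b] == file_id && block_used[b] < PAGES_PER_BLOCK)
            return b * PAGES_PER_BLOCK + block_used[b]++;
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (block_owner[b] == -1) {
            block_owner[b] = file_id;
            return b * PAGES_PER_BLOCK + block_used[b]++;
        }
    return -1;                            /* no clean block available */
}

/* On file deletion, erase its blocks right away so later writes never
 * stall on cleaning dirty blocks. */
void delete_file(int file_id)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (block_owner[b] == file_id) {
            printf("erase block %d (no valid pages to copy)\n", b);
            block_owner[b] = -1;
            block_used[b] = 0;
        }
}

int main(void)
{
    fs_init();
    for (int i = 0; i < 6; i++)
        alloc_page(7);                    /* file 7 spans blocks 0 and 1 */
    delete_file(7);
    return 0;
}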
