• Title/Summary/Keyword: page-fault overhead

7 search results

Efficient Management of PCM-based Swap Systems with a Small Page Size

  • Park, Yunjoo; Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.5 / pp.476-484 / 2015
  • Due to recent advances in non-volatile memory technologies such as PCM, a new memory hierarchy is expected to appear in computer systems. In this paper, we explore the performance of PCM-based swap systems and discuss how such systems can be managed efficiently. Specifically, we introduce three management techniques. First, we show that page fault handling time can be reduced by attaching PCM to DIMM slots, thereby eliminating the software stack overhead of block I/O and the context switch time. Second, we show that it is effective to reduce the page size and turn off the read-ahead option under a PCM swap system, where the page fault handling time is sufficiently small. Third, we show that performance is not degraded even with a small DRAM under a PCM swap device; this significantly reduces DRAM energy consumption compared to HDD-based swap systems. We expect the results of this paper to drive a transition from the legacy swap structure of "large memory - slow swap" to a new paradigm of "small memory - fast swap."
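
The read-ahead option the paper turns off has a stock-Linux counterpart: the vm.page-cluster sysctl, which sets how many pages (2^page-cluster) are brought in per swap-in, with 0 disabling swap read-ahead. A minimal sketch of setting it from C follows; it only approximates the paper's modified-kernel setup and requires root:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* vm.page-cluster = 0 means 2^0 = 1 page per swap-in, i.e. swap
     * read-ahead is off - the setting the paper argues for when the
     * swap device (PCM) has near-memory latency. */
    FILE *f = fopen("/proc/sys/vm/page-cluster", "w");
    if (!f) {
        perror("open /proc/sys/vm/page-cluster");
        return EXIT_FAILURE;
    }
    fputs("0\n", f);
    fclose(f);
    return EXIT_SUCCESS;
}
```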

Efficient Process Checkpointing through Fine-Grained COW Management in New Memory based Systems (뉴메모리 기반 시스템에서 세밀한 COW 관리 기법을 통한 효율적 프로세스 체크포인팅 기법)

  • Park, Jay H.; Moon, Young Je; Noh, Sam H.
    • Journal of KIISE / v.44 no.2 / pp.132-138 / 2017
  • We design and implement a process-based fault recovery system to increase the reliability of new-memory-based computer systems. A rollback point is created at every context switch, to which a process can roll back upon a fault. In this study, a clone of the original process, which we refer to as a P-process (Persistent-process), is created as the rollback point. This design minimizes losses when a fault does occur. Specifically, first, execution loss is minimized because rollback points are created only at context switches, which bounds the lost execution. Second, because we make use of the COW (Copy-On-Write) mechanism, only those parts of the process memory state that are modified (in page units) are copied, decreasing the overhead of creating the P-process. Our experimental results show that the overhead is approximately 5% in 8 out of 11 PARSEC benchmark workloads when a P-process is created at every context switch. Even for workloads that incur considerable overhead, we show that the overhead can be reduced by increasing the P-process generation interval.
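
The COW mechanism the paper leverages is the same one fork() provides: the clone shares every page with the original until one side writes to it. The sketch below uses a plain fork()-based snapshot to illustrate the idea; it is an illustration only, not the authors' P-process implementation, and the helper name take_rollback_point is hypothetical:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Create a COW clone that parks itself; the original keeps running. */
static pid_t take_rollback_point(void)
{
    pid_t pid = fork();
    if (pid == 0) {                  /* clone: sleep until needed */
        raise(SIGSTOP);              /* resumed only if a fault occurs */
        return 0;                    /* continue from the snapshot */
    }
    waitpid(pid, NULL, WUNTRACED);   /* ensure the clone is parked */
    return pid;
}

int main(void)
{
    int state = 42;                  /* state captured by the snapshot */
    pid_t cp = take_rollback_point();
    if (cp > 0) {                    /* original process path */
        state = -1;                  /* "work" that we pretend corrupted us */
        kill(cp, SIGCONT);           /* fault detected: wake the snapshot */
        _exit(1);                    /* abandon the corrupted original */
    }
    /* Only the resumed clone reaches here; COW preserved its pages,
     * so it still sees the pre-fault value. */
    printf("rolled back, state = %d\n", state);
    return 0;
}
```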

Analyzing the Overhead of the Memory Mapped File I/O for In-Memory File Systems (메모리 파일시스템에서 메모리 매핑을 이용한 파일 입출력의 오버헤드 분석)

  • Choi, Jungsik; Han, Hwansoo
    • KIISE Transactions on Computing Practices / v.22 no.10 / pp.497-503 / 2016
  • Emerging next-generation storage technologies such as non-volatile memory will help eliminate almost all of the storage latency that has plagued previous storage devices. In conventional storage systems, the latency of slow storage devices dominates access latency; hence, software efficiency is not critical. With low-latency storage, software costs can quickly come to dominate access latency. Hence, researchers have proposed memory-mapped file I/O to avoid the software overhead. Mapping a file into the user memory space enables users to access the file directly, making it possible to avoid the complicated I/O stack. This minimizes the number of user/kernel mode switches, and there is no data copy between kernel and user areas. Despite the benefits of memory-mapped file I/O, its overhead still needs to be addressed, as the existing mechanism was designed for slow block devices. In this paper, we identify the overheads of memory-mapped file I/O via experiments.
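
For concreteness, a minimal sketch of the access pattern under study: mapping a file and touching it page by page skips the read() copy and the per-call mode switch, but still pays one page fault per first-touched page, which is the residual software cost the paper measures. The file name comes from the command line; everything else is standard POSIX:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* No read() calls, no kernel-to-user copy: each first touch of a
     * page raises a page fault that the kernel serves by mapping the
     * file page directly into our address space. */
    long pagesz = sysconf(_SC_PAGESIZE);
    long sum = 0;
    for (off_t off = 0; off < st.st_size; off += pagesz)
        sum += p[off];

    printf("touched %lld bytes, sum %ld\n", (long long)st.st_size, sum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```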

Page-level Incremental Checkpointing for Efficient Use of Stable Storage (안정 저장장치의 효율적 사용을 위한 페이지 기반 점진적 검사점 기법)

  • Heo, Jun-Young; Yi, Sang-Ho; Gu, Bon-Cheol; Cho, Yoo-Kun; Hong, Ji-Man
    • Journal of KIISE: Computer Systems and Theory / v.34 no.12 / pp.610-617 / 2007
  • Incremental checkpointing, which is intended to minimize checkpointing overhead, saves only the modified pages of a process. However, the cumulative size of incremental checkpoints increases at a steady rate over time because a number of updated values may be saved for the same page. In this paper, we present a comprehensive overview of Pickpt, a page-level incremental checkpointing facility. Pickpt provides space-efficient techniques aimed at minimizing the use of disk space. In our experiments, the results showed that disk space usage under Pickpt was significantly reduced compared with existing incremental checkpointing.
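
A common user-level way to obtain the dirty-page information a page-level scheme needs is to write-protect the address space after each checkpoint and catch the first write to each page. The sketch below shows only that tracking mechanism; it is not Pickpt itself, and the names begin_interval and on_write_fault are hypothetical:

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 16
static char *region;
static size_t pagesz;
static int dirty[NPAGES];              /* dirty bitmap for the region */

static void on_write_fault(int sig, siginfo_t *si, void *uc)
{
    (void)sig; (void)uc;
    char *addr = (char *)si->si_addr;
    if (addr < region || addr >= region + NPAGES * pagesz)
        _exit(2);                      /* a real crash, not our tracking */
    size_t idx = (size_t)(addr - region) / pagesz;
    dirty[idx] = 1;                    /* record for the next checkpoint */
    /* Unprotect so the faulting write can retry and succeed. */
    mprotect(region + idx * pagesz, pagesz, PROT_READ | PROT_WRITE);
}

static void begin_interval(void)       /* call after each checkpoint */
{
    memset(dirty, 0, sizeof dirty);
    mprotect(region, NPAGES * pagesz, PROT_READ);  /* re-arm tracking */
}

int main(void)
{
    pagesz = (size_t)sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * pagesz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_write_fault;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    begin_interval();
    region[0] = 'x';                   /* dirties page 0 only */
    region[5 * pagesz] = 'y';          /* dirties page 5 only */

    for (int i = 0; i < NPAGES; i++)   /* checkpoint: save dirty pages */
        if (dirty[i])
            printf("would save page %d\n", i);
    return 0;
}
```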

Taking Point Decision Mechanism of Page-level Incremental Checkpointing based on Cost Analysis of Process Execution Time (프로세스 수행 시간의 비용 분석에 기반을 둔 페이지 단위 점진적 검사점의 작성 시점 결정 기법)

  • Yi, Sang-Ho; Heo, Jun-Young; Hong, Ji-Man
    • The KIPS Transactions: Part A / v.13A no.4 s.101 / pp.289-294 / 2006
  • Checkpointing is an effective mechanism that allows a process to resume execution that was discontinued by a system failure without having to restart from the beginning. In particular, page-level incremental checkpointing saves only the modified pages of a process to minimize the checkpointing overhead. This means that in incremental checkpointing, the time consumed for checkpointing varies according to the number of modified pages. Thus, an efficient checkpointing interval must be determined at run time. In this paper, we present an efficient and adaptive page-level incremental checkpointing facility based on a cost analysis of process execution time. Our simulation results show that the proposed mechanism significantly reduced the average process execution time compared with existing fixed-interval page-level incremental checkpointing.
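
The paper's taking-point decision comes from its own run-time cost model; as a familiar fixed-interval reference point (explicitly not the paper's mechanism), Young's approximation picks the interval T = sqrt(2CM) from the checkpoint cost C and the mean time between failures M. A small worked example with illustrative numbers:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Young's first-order approximation for a fixed checkpoint
     * interval. The adaptive scheme in the paper effectively revisits
     * this choice as the dirty-page count (and hence C) changes. */
    double C = 2.0;        /* seconds to write one checkpoint (example) */
    double M = 86400.0;    /* mean time between failures: one day */
    printf("Young interval: %.1f s\n", sqrt(2.0 * C * M)); /* ~587.9 s */
    return 0;
}
```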

Comparison of performance between MariaDB and PostgreSQL in terms of CPU overhead (CPU 오버헤드 분석을 통한 MariaDB와 PostgreSQL 성능 비교)

  • Lee, Dong-Ho; Song, Min-Chang; Cho, Young-Tae; Kim, Seung-Won
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.297-299 / 2018
  • Not only IT companies but companies across many sectors now offer services built on technologies that demand large amounts of computing resources (CPU, RAM, etc.), such as big data, artificial intelligence, and blockchain. Operating services efficiently with limited resources has therefore become a major issue. In this paper, we profile the open-source RDBMSs MariaDB and PostgreSQL and compare them in terms of CPU resource efficiency. Our results demonstrate that, in an Internet service environment, MariaDB incurs less CPU overhead than PostgreSQL because its buffer pool yields a lower page cache reference rate and fewer page faults.
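
One concrete way to collect the page-fault side of such a comparison, assuming a Linux host, is getrusage(), which separates minor faults (served without device I/O) from major faults (requiring device I/O). A minimal sketch for the calling process:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* ru_minflt: faults satisfied from memory (e.g. the page cache);
     * ru_majflt: faults that had to go to the device. Profiling the
     * database server processes with counters like these is one way to
     * reproduce the kind of numbers the study compares. */
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("minor faults: %ld, major faults: %ld\n",
               ru.ru_minflt, ru.ru_majflt);
    return 0;
}
```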

Improving the Read Performance of Compressed File Systems Considering Kernel Read-ahead Mechanism (커널의 미리읽기를 고려한 압축파일시스템의 읽기성능향상)

  • Ahn, Sung-Yong; Hyun, Seung-Hwan; Koh, Kern
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.678-682 / 2010
  • Compressed file systems are frequently used in embedded systems to increase cost efficiency. One of their drawbacks is low read performance. Moreover, the read-ahead mechanism, which improves the read throughput of storage devices, has a negative effect on the read performance of compressed file systems, increasing read latency. The main reason is that compressed file systems pay too large a read-ahead miss penalty due to decompression overhead. To solve this problem, this paper proposes a new read technique for compressed file systems that takes the kernel read-ahead mechanism into account. The proposed technique improves device read throughput through bulk reads from the device and reduces decompression overhead through selective decompression. We implement the proposed technique by modifying CramFS and evaluate our implementation in Linux kernel 2.6.21. Performance evaluation results show that the proposed technique reduces the average major page fault handling latency by 28%.
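
The paper's change lives inside the kernel's CramFS driver, but the trade-off it tunes is also visible from user space: posix_fadvise() lets an application grow or disable a file's read-ahead window. A minimal sketch, offered as an analogue rather than the paper's technique:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* POSIX_FADV_SEQUENTIAL asks the kernel to enlarge this file's
     * read-ahead window (bigger bulk reads, but more decompression per
     * miss on a compressed file system); POSIX_FADV_RANDOM would
     * disable read-ahead for the file instead. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    if (err != 0)
        fprintf(stderr, "posix_fadvise: error %d\n", err);

    close(fd);
    return 0;
}
```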