Title/Summary/Keyword: page copy overhead


Buffer Invalidation Schemes for High Performance Transaction Processing in Shared Database Environment (공유 데이터베이스 환경에서 고성능 트랜잭션 처리를 위한 버퍼 무효화 기법)

  • 김신희;배정미;강병욱
    • The Journal of Information Systems / v.6 no.1 / pp.159-180 / 1997
  • Database sharing system (DBSS) refers to a system for high performance transaction processing. In DBSS, the processing nodes are locally coupled via a high speed network and share a common database at the disk level. Each node has a local memory, a separate copy of the operating system, and a DBMS. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. However, since multiple nodes may cache a page simultaneously, cache consistency must be ensured so that every node can always access the latest version of pages. In this paper, we propose efficient buffer invalidation schemes in DBSS, where the database is logically partitioned using primary copy authority to reduce locking overhead. The proposed schemes can improve performance by reducing the disk access overhead and the message overhead due to maintaining cache consistency. Furthermore, they show good performance even when database workloads vary dynamically. (An illustrative sketch of such an invalidation check follows this entry.)

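The invalidation idea in the abstract above can be illustrated with a small C sketch. This is not the paper's algorithm: the helpers `global_version_of()` (imagined as a committed-version number piggy-backed on a lock grant) and `read_page_from_disk()` are hypothetical stand-ins, and the point is only that a node re-reads a page from disk when its buffered copy is missing or stale, and otherwise keeps serving the local copy.

```c
/* Illustrative sketch only: detect a stale buffered page by comparing the
 * version cached with the page against the committed version learned when
 * the lock is granted.  All helper names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 8192

typedef struct {
    uint64_t page_id;
    uint64_t version;        /* version seen when the page was buffered      */
    bool     valid;          /* cleared when an invalidation notice arrives  */
    char     data[PAGE_SIZE];
} cached_page;

/* Assumed to be provided by the node's lock/buffer manager. */
extern uint64_t global_version_of(uint64_t page_id);
extern void     read_page_from_disk(uint64_t page_id, char *buf);

/* Ensure the slot holds the latest committed version of page_id,
 * touching the disk only when the buffered copy cannot be trusted. */
void fetch_page(cached_page *slot, uint64_t page_id)
{
    uint64_t committed = global_version_of(page_id);

    if (!slot->valid || slot->page_id != page_id || slot->version < committed) {
        read_page_from_disk(page_id, slot->data);
        slot->page_id = page_id;
        slot->version = committed;
        slot->valid   = true;
    }
    /* otherwise the locally buffered page is already the latest version */
}
```

Skipping the disk read in the common case is where such schemes save disk-access overhead; the version check itself adds no extra message if it rides on the lock reply.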

Cache Coherency Schemes for Database Sharing Systems with Primary Copy Authority (주사본 권한을 지원하는 공유 데이터베이스 시스템을 위한 캐쉬 일관성 기법)

  • Kim, Shin-Hee;Cho, Haeng-Rae;Kim, Byeong-Uk
    • The Transactions of the Korea Information Processing Society / v.5 no.6 / pp.1390-1403 / 1998
  • Database sharing system (DSS) refers to a system for high performance transaction processing. In DSS, the processing nodes are locally coupled via a high speed network and share a common database at the disk level. Each node has a local memory, a separate copy of the operating system, and a DBMS. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. However, since multiple nodes may cache a page simultaneously, cache consistency must be ensured so that every node can always access the latest version of pages. In this paper, we propose efficient cache consistency schemes in DSS, where the database is logically partitioned using primary copy authority to reduce locking overhead. The proposed schemes can improve performance by reducing the disk access overhead and the message overhead due to maintaining cache consistency. Furthermore, they show good performance even when database workloads vary dynamically. (An illustrative sketch of the primary copy authority routing follows this entry.)

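Both of the abstracts above rely on primary copy authority (PCA): the database is logically partitioned so that a single node arbitrates the locks for each partition. The sketch below illustrates only that routing decision; mapping pages to owners by hashing the page id, and the `send_lock_request()` / `grant_lock_locally()` primitives, are assumptions for the example, not the papers' design.

```c
/* Illustrative sketch of primary copy authority routing: each page belongs
 * to exactly one owning node, so lock requests for "local" pages need no
 * network message.  Identifiers below are hypothetical. */
#include <stdint.h>

#define NUM_NODES 4

/* A real system would partition by table or key range; hashing the page id
 * is just the simplest stand-in for the logical partitioning. */
static inline int pca_owner(uint64_t page_id)
{
    return (int)(page_id % NUM_NODES);
}

/* Assumed messaging and local lock-manager primitives. */
extern void send_lock_request(int owner_node, uint64_t page_id, int mode);
extern void grant_lock_locally(uint64_t page_id, int mode);

void request_page_lock(int self_node, uint64_t page_id, int mode)
{
    int owner = pca_owner(page_id);

    if (owner == self_node)
        grant_lock_locally(page_id, mode);   /* no message: the saving PCA aims for */
    else
        send_lock_request(owner, page_id, mode);
}
```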

Efficient Process Checkpointing through Fine-Grained COW Management in New Memory based Systems (뉴메모리 기반 시스템에서 세밀한 COW 관리 기법을 통한 효율적 프로세스 체크포인팅 기법)

  • Park, Jay H.;Moon, Young Je;Noh, Sam H.
    • Journal of KIISE / v.44 no.2 / pp.132-138 / 2017
  • We design and implement a process-based fault recovery system to increase the reliability of new memory based computer systems. A rollback point is made at every context switch, to which a process can roll back upon a fault. In this study, a clone of the original process, which we refer to as a P-process (Persistent-process), is created as the rollback point. Such a design minimizes losses when a fault does occur. Specifically, first, execution loss is bounded because rollback points are created only at context switches. Second, since we make use of the COW (Copy-On-Write) mechanism, only those parts of the process memory state that are modified (in page units) are copied, decreasing the overhead of creating the P-process. Our experimental results show that the overhead is approximately 5% in 8 out of 11 PARSEC benchmark workloads when a P-process is created at every context switch. Even for workloads that incur considerable overhead, we show that this overhead can be reduced by increasing the P-process generation interval.
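On commodity POSIX systems, the closest analogue to the P-process idea is a forked child: `fork()` snapshots the parent's address space copy-on-write, so only pages the parent later modifies are actually duplicated. The sketch below is merely that analogue, not the paper's kernel-level mechanism for new memory; rollback itself (resuming the clone in place of the faulty parent) is left as a comment.

```c
/* Illustrative only: approximate a P-process rollback point with a
 * suspended fork()ed clone.  The paper's system works inside the kernel at
 * a finer COW granularity; this just shows why COW keeps the copy cheap. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t rollback_point = -1;   /* pid of the most recent clone */

/* Called at a scheduling point: discard the old clone, take a new one. */
void take_checkpoint(void)
{
    if (rollback_point > 0) {
        kill(rollback_point, SIGKILL);   /* previous rollback point is obsolete */
        waitpid(rollback_point, NULL, 0);
    }

    pid_t pid = fork();                  /* COW: pages are shared until modified */
    if (pid < 0)
        return;                          /* checkpoint simply skipped on failure */
    if (pid == 0) {
        pause();                         /* clone sleeps, holding the snapshot;
                                            rollback would resume it and retire
                                            the faulty parent instead            */
        _exit(0);
    }
    rollback_point = pid;
}
```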

Performance Comparison between Hardware & Software Cache Partitioning Techniques (하드웨어 캐시 파티셔닝과 소프트웨어 캐시 파티셔닝의 성능 비교)

  • Park, JiWoong;Yeom, HeonYoung;Eom, Hyeonsang
    • Journal of KIISE / v.42 no.2 / pp.177-182 / 2015
  • The era of multi-core processors has begun now that the limit of clock speed has been reached. These days, multi-core technology is used not only in desktops, servers, and tablet PCs, but also in smartphones. In this architecture, there is always interference between processes because system resources are shared. To address this problem, cache partitioning is used, and it can be roughly divided into two types: software and hardware cache partitioning. When it comes to dynamic cache partitioning, hardware cache partitioning is superior to software cache partitioning because it requires no page copy. In this paper, we compare the effectiveness of hardware and software cache partitioning on the AMD Opteron 6282 SE, which is the only commodity processor providing hardware cache partitioning, to see whether this technique can be effectively deployed in dynamic environments.
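The "page copy" that makes software partitioning expensive comes from page coloring: a page's physical frame number fixes which last-level-cache sets it can occupy, so changing a process's cache share means migrating its pages to frames of a different color. The sketch below only illustrates that relationship; the cache geometry constants are assumptions for the example, not the Opteron 6282 SE's actual configuration.

```c
/* Illustrative page-coloring arithmetic for software cache partitioning.
 * The geometry below (16 MiB, 16-way LLC, 4 KiB pages) is an assumption
 * used for the example, not a description of any particular CPU. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT    12                          /* 4 KiB pages            */
#define CACHE_LINE    64
#define LLC_SIZE      (16u << 20)                 /* assumed 16 MiB LLC     */
#define LLC_WAYS      16
#define LLC_SETS      (LLC_SIZE / (CACHE_LINE * LLC_WAYS))
#define SETS_PER_PAGE ((1u << PAGE_SHIFT) / CACHE_LINE)
#define NUM_COLORS    (LLC_SETS / SETS_PER_PAGE)  /* distinct page colors   */

/* The color of a physical frame decides which group of LLC sets it maps to,
 * so giving a process only frames of certain colors partitions the cache. */
static inline unsigned page_color(uint64_t phys_frame_number)
{
    return (unsigned)(phys_frame_number % NUM_COLORS);
}

/* Dynamic software repartitioning must physically move a page to a frame of
 * the new color; this memcpy is the page-copy cost the abstract mentions,
 * which hardware (way-based) partitioning avoids entirely. */
void recolor_page(void *dst_frame, const void *src_frame)
{
    memcpy(dst_frame, src_frame, (size_t)1 << PAGE_SHIFT);
}
```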

Analyzing the Overhead of the Memory Mapped File I/O for In-Memory File Systems (메모리 파일시스템에서 메모리 매핑을 이용한 파일 입출력의 오버헤드 분석)

  • Choi, Jungsik;Han, Hwansoo
    • KIISE Transactions on Computing Practices / v.22 no.10 / pp.497-503 / 2016
  • Emerging next-generation storage technologies such as non-volatile memory will help eliminate almost all of the storage latency that has plagued previous storage devices. In conventional storage systems, the latency of slow storage devices dominates access latency; hence, software efficiency is not critical. With low-latency storage, however, software costs can quickly dominate overall access latency. Hence, researchers have proposed memory-mapped file I/O to avoid the software overhead. Mapping a file into the user memory space enables users to access the file directly, making it possible to avoid the complicated I/O stack. This minimizes the number of user/kernel mode switches, and no data is copied between kernel and user areas. Despite the benefits of memory-mapped file I/O, its overhead still needs to be addressed, as the existing mechanism for memory-mapped file I/O was designed for slow block devices. In this paper, we identify the overheads of memory-mapped file I/O via experiments.
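A minimal example of the memory-mapped file I/O path the abstract discusses: one `mmap()` call replaces repeated `read()` calls, and the file is then consumed through ordinary loads, with page faults filling pages on demand. The file name and the checksum loop are only illustrative.

```c
/* Minimal memory-mapped read: no per-access system call and no copy into a
 * user buffer; page faults bring the file's pages in on first touch. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);            /* illustrative file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* One system call maps the whole file into the user address space. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];     /* plain loads; first touch of each page faults it in */

    printf("checksum: %ld\n", sum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

The per-page fault handling visible in this loop is exactly the kind of overhead the paper sets out to measure once slow block devices are no longer the bottleneck.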