• Title/Summary/Keyword: Main-memory

Search results: 762

A Survey of the Index Schemes based on Flash Memory (NAND 플래쉬메모리 기반 색인에 관한 연구)

  • Kim, Dong-Hyun;Ban, Chae-Hoon
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.10 / pp.1529-1534 / 2013
  • Since NAND flash memory can store large amounts of data in a small chip and consumes little power, it is used in various hand-held devices such as smart phones and sensor nodes. To process the mass of data stored in flash memory efficiently, an index is required. However, because the write operation of flash memory is slower than the read operation and overwriting in place is not supported, using existing index schemes as-is degrades index performance. In this paper, we survey previous research on index schemes for flash memory and classify the studies by the methods used to solve these problems. We also present the performance factors to consider when designing an index scheme for flash memory.
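
One common response to the problem this survey classifies, slow out-of-place flash writes, is to buffer index updates in RAM and flush them to flash in batches. The sketch below illustrates that general idea only; the flash interface and all names are invented for illustration, not taken from any surveyed scheme.

```python
# Sketch: defer index node writes in a RAM buffer and flush them in batches,
# so slow, out-of-place flash page programs are amortized over many updates.
# The `flash` object and its methods are hypothetical.

class FlashIndexWriteBuffer:
    def __init__(self, flash, batch_size=64):
        self.flash = flash            # assumed to expose allocate_page/program_page/remap
        self.batch_size = batch_size
        self.pending = {}             # node id -> latest in-memory node image

    def update_node(self, node_id, node_image):
        # Overwrite only the RAM copy; flash is untouched until flush().
        self.pending[node_id] = node_image
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # Write each dirty node once, to a fresh page (no in-place overwrite).
        for node_id, image in self.pending.items():
            page_no = self.flash.allocate_page()
            self.flash.program_page(page_no, image)
            self.flash.remap(node_id, page_no)   # update logical-to-physical mapping
        self.pending.clear()
```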

Design of Virtual Memory Compression System on the Embedded System (임베디드 시스템에서 가상 메모리 압축 시스템 설계)

  • Jeong, Jin-Woo;Jang, Seung-Ju
    • The KIPS Transactions:PartA / v.9A no.4 / pp.405-412 / 2002
  • An embedded system has a slower CPU and less memory than a PC (personal computer) or workstation, so an embedded operating system must be designed to use the system's limited resources efficiently. Virtual memory management in embedded Linux is inefficient when a page fault requires data to be fetched from an I/O device, because the data must be moved from the swap device into main memory. This paper suggests a virtual memory compression algorithm that improves virtual memory management and the effective capacity of memory space. We present the design and implementation of a virtual memory compression system that achieves a significant performance improvement for embedded systems.
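
As a rough illustration of the idea (not the authors' implementation), a compressed swap cache keeps evicted pages in main memory in compressed form, so a later page fault can be served by decompressing in RAM instead of reading the swap device. A minimal sketch with invented names:

```python
import zlib

class CompressedSwapCache:
    """Keep swapped-out pages in RAM in compressed form (illustrative only)."""

    def __init__(self):
        self.store = {}                      # page number -> compressed bytes

    def swap_out(self, page_no, page_bytes):
        # Fast compression level: trade ratio for CPU time on a small device.
        self.store[page_no] = zlib.compress(page_bytes, level=1)

    def swap_in(self, page_no):
        data = self.store.pop(page_no, None)
        if data is None:
            return None                      # miss: fall back to the real swap device
        return zlib.decompress(data)         # served from RAM, no device I/O
```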

A Disk Group Commit Protocol for Main-Memory Database Systems (주기억 장치 데이타베이스 시스템을 위한 디스크 그룹 완료 프로토콜)

  • 이인선;염헌영
    • Journal of KIISE:Databases / v.31 no.5 / pp.516-526 / 2004
  • A main-memory database (MMDB) system, in which all data reside in main memory, offers a tremendous performance boost since it needs no disk access during transaction processing. However, because an MMDB still requires disk logging for transaction commit, logging becomes the bottleneck for transaction throughput, and the commit protocol must be examined carefully. There have been several attempts to reduce the logging overhead; pre-commit and group commit are two well-known techniques that require no additional hardware, but their effect on MMDB systems has not been analyzed. In this paper, we identify the possibility of deadlock resulting from group commit and propose a disk group commit protocol that can be readily deployed. Using extensive simulation, we show that group commit is effective in improving MMDB transaction performance and that the proposed disk group commit almost always outperforms a carefully tuned group commit. We also note that pre-commit has no effect when used alone but shows some improvement when used in conjunction with group commit.
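
For context, group commit amortizes the commit-time log flush by letting several transactions share one disk write. The sketch below is a generic illustration of that technique, not the paper's disk group commit protocol: commit records are batched and flushed when the group fills or a short timeout expires.

```python
import threading

class GroupCommitLog:
    """Batch commit records so several transactions share one log flush (sketch)."""

    def __init__(self, log_file, group_size=8, timeout=0.005):
        self.log_file = log_file
        self.group_size = group_size
        self.timeout = timeout
        self.lock = threading.Lock()
        self.flushed = threading.Condition(self.lock)
        self.pending = []                       # commit records awaiting flush

    def commit(self, record):
        with self.lock:
            self.pending.append(record)
            if len(self.pending) >= self.group_size:
                self._flush()                   # group is full: flush immediately
            else:
                # Wait briefly so more transactions can join the group.
                self.flushed.wait(self.timeout)
                if record in self.pending:      # nobody flushed on our behalf
                    self._flush()

    def _flush(self):
        # One buffered write covers every record in the group
        # (a real system would also fsync here for durability).
        for rec in self.pending:
            self.log_file.write(rec)
        self.log_file.flush()
        self.pending.clear()
        self.flushed.notify_all()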

Design and Implementation of a Concurrency Control Manager for Main Memory Databases (주기억장치 데이터베이스를 위한 동시성 제어 관리자의 설계 및 구현)

  • Kim, Sang-Wook;Jang, Yeon-Jeong;Kim, Yun-Ho;Kim, Jin-Ho;Lee, Seung-Sun;Choi, Wan
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.4B / pp.646-680 / 2000
  • In this paper, we discuss the design and implementation of a concurrency control manager for a main-memory DBMS (MMDBMS). Since an MMDBMS, unlike a disk-based DBMS, performs all data update and retrieval operations by accessing main memory only, the portion of the total cost of an update or retrieval spent on concurrency control is fairly high. Thus, an efficient concurrency control manager greatly accelerates the performance of the entire system. Our concurrency control manager employs the two-phase locking protocol and has the following characteristics. First, it adopts the partition, an allocation unit of main memory, as the locking granule, and thus effectively adjusts the trade-off between system concurrency and locking cost through analysis of the applications. Second, it keeps locking costs low by maintaining the lock information directly in the partition itself. Third, it provides the latch as a mechanism for physical consistency of system data; our latch supports both shared and exclusive modes, and maximizes CPU utilization by combining the Bakery algorithm with the Unix semaphore facility. Fourth, to solve the deadlock problem, it periodically examines whether the system is in a deadlock state using lock-waiting information. In addition, we discuss various issues arising during development, such as mutual exclusion on the transaction table, mutual exclusion on indexes and system catalogs, and support for real-time applications.
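
The paper's latch combines the Bakery algorithm with Unix semaphores; the sketch below only illustrates the shared/exclusive semantics such a latch exposes, using ordinary Python threading primitives rather than the paper's mechanism.

```python
import threading

class SXLatch:
    """Shared/exclusive latch sketch (not the paper's Bakery/semaphore design)."""

    def __init__(self):
        self.mutex = threading.Lock()
        self.changed = threading.Condition(self.mutex)
        self.readers = 0
        self.writer = False

    def acquire_shared(self):
        with self.mutex:
            while self.writer:                  # wait out an exclusive holder
                self.changed.wait()
            self.readers += 1

    def release_shared(self):
        with self.mutex:
            self.readers -= 1
            if self.readers == 0:
                self.changed.notify_all()

    def acquire_exclusive(self):
        with self.mutex:
            while self.writer or self.readers > 0:
                self.changed.wait()
            self.writer = True

    def release_exclusive(self):
        with self.mutex:
            self.writer = False
            self.changed.notify_all()
```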

CC-GiST: A Generalized Framework for Efficiently Implementing Arbitrary Cache-Conscious Search Trees (CC-GiST: 임의의 캐시 인식 검색 트리를 효율적으로 구현하기 위한 일반화된 프레임워크)

  • Loh, Woong-Kee;Kim, Won-Sik;Han, Wook-Shin
    • The KIPS Transactions:PartD / v.14D no.1 s.111 / pp.21-34 / 2007
  • With the recent rapid drop in price and growth in capacity of main memory, the number of applications built on main-memory databases is increasing dramatically. A cache miss, the phenomenon in which data required by the CPU is not resident in the cache and must be fetched from main memory, is one of the major causes of performance degradation in main-memory databases. Several cache-conscious trees have been proposed to reduce cache misses and make the most of the cache in main-memory databases. Since each cache-conscious tree has its own unique features, more than one such tree may be used in a single application depending on its requirements; moreover, if no existing cache-conscious tree satisfies the application's requirements, a new one must be implemented solely for that application. In this paper, we propose the cache-conscious generalized search tree (CC-GiST). The CC-GiST extends the disk-based generalized search tree (GiST) [HNP95] to be cache-conscious, and provides all the common features and algorithms of existing cache-conscious trees, including pointer compression and key compression. To implement a cache-conscious tree based on the proposed CC-GiST, one only needs to implement a few functions specific to that tree. We show how to implement the most representative cache-conscious trees, such as the CSB+-tree, the pkB-tree, and the CR-tree, on top of the CC-GiST. The CC-GiST eliminates the trouble of managing more than one cache-conscious tree in an application, and provides a framework for efficiently implementing arbitrary cache-conscious trees with new features.
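
GiST-style frameworks split the work so the framework owns the generic search and insert logic while each concrete tree supplies a handful of callbacks. The sketch below mimics that split with invented method names; it is not the actual CC-GiST API.

```python
from abc import ABC, abstractmethod

class CacheConsciousTree(ABC):
    """Framework side: generic traversal; subclasses supply tree-specific hooks."""

    @abstractmethod
    def consistent(self, entry, query):
        """Return True if the subtree behind `entry` may contain matches."""

    @abstractmethod
    def compress(self, key):
        """Tree-specific key/pointer compression (e.g. prefix or MBR deltas)."""

    @abstractmethod
    def decompress(self, stored):
        """Inverse of compress()."""

    def search(self, node, query):
        # Generic algorithm shared by every tree built on the framework.
        results = []
        for stored in node.entries:
            entry = self.decompress(stored)
            if self.consistent(entry, query):
                if node.is_leaf:
                    results.append(entry)
                else:
                    results.extend(self.search(entry.child, query))
        return results
```

A concrete tree such as a CSB+-tree-like or CR-tree-like variant would then only override the three hooks, which is the point the abstract makes about implementing "only a few functions".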

An Efficient MBR Compression Technique for Main Memory Multi-dimensional Indexes (메인 메모리 다차원 인덱스를 위한 효율적인 MBR 압축 기법)

  • Kim, Joung-Joon;Kang, Hong-Koo;Kim, Dong-Oh;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.9 no.2 / pp.13-23 / 2007
  • Recently, there has been growing interest in LBS (Location-Based Services), which require real-time service, and in spatial main-memory DBMSs for efficient telematics services. To optimize the existing disk-based multi-dimensional indexes of a spatial main-memory DBMS for main memory, multi-dimensional index structures have been proposed that minimize cache misses by reducing the entry size. However, because reducing the entry size requires compression relative to the MBR of the parent node or removal of redundant MBRs, the cost of MBR reconstruction rises during index updates and search efficiency falls during index search. To reduce the cost of MBR reconstruction, this paper proposes the RSMBR (Relative-Sized MBR) compression technique, which applies the base point of compression differently for broad and narrow distributions. For a broad distribution, compression is based on the left-bottom point of the extended MBR of the parent node; for a narrow distribution, the whole MBR is divided into cells of equal size and compression is based on the left-bottom point of each cell. In addition, MBRs are compressed using relative coordinates and sizes to reduce the cost of index search. Finally, we evaluate the performance of the proposed RSMBR compression technique using real data and demonstrate its superiority.
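
As a rough illustration of relative MBR compression (quantized offsets from a parent's left-bottom corner, not the exact RSMBR encoding), the sketch below packs each child MBR into four small integers and recovers a conservative, slightly enlarged MBR on decompression.

```python
def compress_mbr(child, parent_origin, cell=16):
    """Store a child MBR as quantized offsets from the parent's left-bottom point.

    child = (xmin, ymin, xmax, ymax) in absolute coordinates.
    Returns four small integers: relative position plus relative size.
    Illustrative only; the paper's RSMBR encoding differs in detail.
    """
    px, py = parent_origin
    rx = int((child[0] - px) // cell)             # relative x of left-bottom corner
    ry = int((child[1] - py) // cell)             # relative y of left-bottom corner
    w  = int(-(-(child[2] - child[0]) // cell))   # width in cells, rounded up
    h  = int(-(-(child[3] - child[1]) // cell))   # height in cells, rounded up
    return rx, ry, w, h

def decompress_mbr(packed, parent_origin, cell=16):
    """Recover a conservative (never smaller) MBR from the packed form."""
    rx, ry, w, h = packed
    px, py = parent_origin
    xmin = px + rx * cell
    ymin = py + ry * cell
    return xmin, ymin, xmin + (w + 1) * cell, ymin + (h + 1) * cell
```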

Performance improvement study for MRP part explosion in ERP environment (ERP 환경에서 MRP 부품전개의 성능향상을 위한 연구)

  • Lee H.G.;Na H.B.;Park J.W.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.06a / pp.187-190 / 2005
  • There have been many studies on improving database system performance by modifying data structures, partitioning data, or changing materialization strategies. The main contribution of this study is to propose a new alternative for improving database performance: designing a single-table schema and processing queries virtually in main-memory space. The Material Requirements Planning (MRP) part explosion process ran almost twice as fast under the DB schema we suggest, and more than ten times faster when a separating and filtering policy for the DB archiving process is assumed. Several experimental results are shown to illustrate the merit of our solution.
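
MRP part explosion itself is a recursive walk over the bill of materials. The sketch below shows an in-memory version with an invented BOM layout, purely to make concrete the computation whose database-side performance the paper improves.

```python
from collections import defaultdict

# Hypothetical in-memory bill of materials:
# parent part -> list of (child part, quantity per parent)
BOM = {
    "bicycle": [("frame", 1), ("wheel", 2)],
    "wheel":   [("rim", 1), ("spoke", 32)],
}

def explode(part, qty, requirements=None):
    """Accumulate gross requirements for every component of `part`."""
    if requirements is None:
        requirements = defaultdict(int)
    for child, per_parent in BOM.get(part, []):
        requirements[child] += qty * per_parent
        explode(child, qty * per_parent, requirements)
    return requirements

# explode("bicycle", 10) -> frame: 10, wheel: 20, rim: 20, spoke: 640
```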

Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

  • SAINT-DIZIER, Patrick
    • International Journal of Knowledge Content Development & Technology / v.5 no.2 / pp.75-101 / 2015
  • In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.
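
Conceptually, a correction memory is a lookup from a detected error and its context to the rewrites writers previously accepted. The toy sketch below, with invented fields, shows how such a memory could rank recommendations for a flagged fuzzy term; it is not the LELIE implementation.

```python
from collections import defaultdict

class CorrectionMemory:
    """Toy correction memory: remembers past corrections and suggests them again."""

    def __init__(self):
        # (error term, document type) -> {replacement: times accepted}
        self.memory = defaultdict(lambda: defaultdict(int))

    def record(self, error, doc_type, replacement):
        self.memory[(error, doc_type)][replacement] += 1

    def recommend(self, error, doc_type):
        # Recommendations ordered by how often writers accepted them before.
        seen = self.memory.get((error, doc_type), {})
        return sorted(seen, key=seen.get, reverse=True)

mem = CorrectionMemory()
mem.record("approximately", "maintenance manual", "within 5 mm")
mem.record("approximately", "maintenance manual", "within 5 mm")
mem.record("approximately", "maintenance manual", "about")
print(mem.recommend("approximately", "maintenance manual"))  # ['within 5 mm', 'about']
```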

A Shared Library as an Active Memory Object for Application Software Development of Large Scale Real-time Systems (대형 실시간 시스템의 응용 소프트웨어 개발을 위한 능동적 메모리 개체로서의 공유 라이브러리)

• 정부금;차영준;김형환;임동선
    • Proceedings of the IEEK Conference / 1998.10a / pp.233-236 / 1998
  • In this paper, we present a novel approach, a shared library as an active memory object, for application software development in large-scale real-time systems. Unlike general passive shared memory, the shared library proposed in this paper can be activated as an execution object, and unlike normal libraries, it is not tightly coupled with the application programs. To implement this mechanism, the operating system turns the shared memory into an active object and the shared library realizes an indirect call structure. The mechanism enhances main-memory utilization and communication performance, and it has been successfully applied to the HANbit ACE ATM switching system and the TDX-10 switching system.
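
The decoupling comes from the indirect call structure: applications invoke library services through a dispatch table in the shared region rather than binding to fixed entry points. The sketch below is only a schematic of that indirection, with invented names, not the switching-system implementation.

```python
# Schematic of an indirect call structure: the application never binds to the
# library's entry points directly; it looks them up in a shared dispatch table,
# so the resident library can be replaced without relinking the application.

shared_dispatch_table = {}           # stands in for a table kept in shared memory

def export(service_name):
    """Used by the resident shared library to publish an entry point."""
    def register(func):
        shared_dispatch_table[service_name] = func
        return func
    return register

@export("route_call")
def route_call(subscriber, destination):
    return f"routing {subscriber} -> {destination}"

def call_service(service_name, *args):
    """Application side: indirect call through the shared table."""
    return shared_dispatch_table[service_name](*args)

print(call_service("route_call", "A1001", "B2002"))
```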

Divided Disk Cache and SSD FTL for Improving Performance in Storage

  • Park, Jung Kyu;Lee, Jun-yong;Noh, Sam H.
    • JSTS:Journal of Semiconductor Technology and Science / v.17 no.1 / pp.15-22 / 2017
  • Although there are many techniques to minimize the speed gap between the processor and memory, it remains a bottleneck in various commercial implementations. Since secondary storage technologies are much slower than main memory, it is challenging to match storage speed to the processor. Hard disk drives usually include semiconductor caches to improve their performance; a hit in the disk cache eliminates the mechanical seek time and rotational latency. To further improve performance, a divided disk cache, subdivided between metadata and data, has been proposed previously. We propose a new algorithm that applies this approach to an SSD, a flash-memory-based solid-state drive, through its FTL. First, this paper evaluates the performance of such a disk cache via simulations using DiskSim. Then, we perform an experiment to evaluate the performance of the proposed algorithm.
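
The "divided" part means metadata blocks and data blocks get separate cache partitions, so small, frequently reused metadata is not evicted by bulk data traffic. The sketch below shows that split with two independent LRU partitions; it is illustrative only and does not reflect the paper's algorithm or its FTL integration.

```python
from collections import OrderedDict

class LRUPartition:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block number -> block contents

    def get(self, block_no):
        if block_no not in self.blocks:
            return None                      # miss: caller reads the device
        self.blocks.move_to_end(block_no)    # mark as most recently used
        return self.blocks[block_no]

    def put(self, block_no, data):
        self.blocks[block_no] = data
        self.blocks.move_to_end(block_no)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block

class DividedDiskCache:
    """Separate partitions so bulk data cannot evict hot metadata blocks."""

    def __init__(self, meta_capacity, data_capacity):
        self.meta = LRUPartition(meta_capacity)
        self.data = LRUPartition(data_capacity)

    def lookup(self, block_no, is_metadata):
        part = self.meta if is_metadata else self.data
        return part.get(block_no)

    def insert(self, block_no, contents, is_metadata):
        part = self.meta if is_metadata else self.data
        part.put(block_no, contents)
```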