• Title/Summary/Keyword: Block unit transaction

Search Results: 5 (processing time: 0.025 seconds)

Real-time Storage Manager to Store Very Large Data Using Block Transactions (블록 단위 트랜잭션을 이용한 대용량 데이터의 실시간 저장관리기)

  • Baek, Sung-Ha; Lee, Dong-Wook; Eo, Sang-Hun; Chung, Warn-Ill; Kim, Gyoung-Bae; Oh, Young-Hwan; Bae, Hae-Young
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.1-12 / 2008
  • An automatic semiconductor manufacturing system that generates 50,000 to 500,000 transactions per second needs a storage management system capable of processing very large volumes of data at once. Many storage management systems have been studied for storing very large data. Existing systems are typically disk-based DBMSs, and a disk-based DBMS has difficulty processing 500,000 insert transactions per second. Main-memory DBMSs were introduced to exploit memory, but the limited amount of memory makes it difficult to store very large data in a main-memory DBMS. In this paper we propose a storage management system using block-unit insert transactions that can process more than 50,000 insert transactions per second and store data at low storage cost. A block-unit transaction reduces the per-tuple logging and indexing cost by transforming tuple-unit transactions into block-unit transactions. In addition, because per-field information is lost when a block is compressed, searching would otherwise require decompressing every block; to solve this problem, the proposed system builds an index for each compressed block so that search speed is not degraded. The proposed system can store the very large data generated by semiconductor manufacturing systems while reducing storage cost.

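As a rough illustration of the block-unit insert idea described in the abstract above, the following Python sketch (not the authors' implementation) buffers tuple inserts into a block, compresses and logs each block as a single unit, and keeps a per-block min/max key index so that a search does not have to decompress every block. The block size, the choice of the first field as the key, and the use of zlib compression are assumptions made for the example.

```python
# Minimal sketch of block-unit insert transactions with per-block indexing.
import pickle
import zlib


class BlockStore:
    def __init__(self, block_size=1000):
        self.block_size = block_size      # tuples per block-unit transaction
        self.buffer = []                  # tuples waiting to be committed
        self.blocks = []                  # list of (key_min, key_max, compressed_bytes)
        self.log = []                     # one log record per block, not per tuple

    def insert(self, tup):
        """Buffer a tuple; commit a whole block when the buffer is full."""
        self.buffer.append(tup)
        if len(self.buffer) >= self.block_size:
            self._commit_block()

    def _commit_block(self):
        keys = [t[0] for t in self.buffer]            # assume the first field is the key
        compressed = zlib.compress(pickle.dumps(self.buffer))
        self.blocks.append((min(keys), max(keys), compressed))
        self.log.append(("BLOCK_INSERT", len(self.buffer)))  # single log entry per block
        self.buffer = []

    def search(self, key):
        """Use the per-block min/max index to skip blocks that cannot match."""
        hits = []
        for key_min, key_max, compressed in self.blocks:
            if key_min <= key <= key_max:             # only these blocks are decompressed
                for t in pickle.loads(zlib.decompress(compressed)):
                    if t[0] == key:
                        hits.append(t)
        return hits


store = BlockStore(block_size=3)
for i in range(9):
    store.insert((i, f"sensor-{i % 2}", i * 0.5))
print(len(store.log), "block-unit log records instead of 9 tuple-unit records")
print(store.search(4))
```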

An Efficient Storing Scheme of Real-time Large Data to improve Semiconductor Process Productivities (반도체 공정의 생산성 향상을 위한 실시간 대용량 데이터의 효율적인 저장 기법)

  • Chung, Weon-Il; Kim, Hwan-Koo
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.11 / pp.3207-3212 / 2009
  • Automatic semiconductor manufacturing systems are required to improve the efficiency of the semiconductor production process. To enhance productivity, these systems must include functionality such as analysis and management schemes for very large volumes of real-time data, which in turn requires an efficient storage management system. Traditional database management systems (e.g., Oracle, MySQL, MS-SQL) are disk-based, and such DBMSs are limited by low storing performance. In this paper, we propose a compress-merge storing method for very large real-time data using block-unit insert transactions. The proposed method shows better processing performance than conventional DBMSs, and the compress-merge method makes it possible to store large real-time data at low storage cost. Therefore, the proposed method can be applied as an efficient storage management system in the semiconductor production process.
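
The compress-merge idea above can be pictured with a small sketch. The Python below is an assumed interpretation, not the paper's algorithm: block-unit inserts are compressed individually, and once a threshold number of small compressed blocks accumulates they are merged and recompressed into a larger segment to reduce per-block overhead and storage cost. The merge threshold and data layout are illustrative choices.

```python
# Hedged sketch of a "compress-merge" style store for block-unit inserts.
import pickle
import zlib

MERGE_THRESHOLD = 4          # assumed policy: merge after this many small blocks


class CompressMergeStore:
    def __init__(self):
        self.small_blocks = []   # recently inserted, individually compressed blocks
        self.segments = []       # large, merged-and-recompressed segments

    def insert_block(self, tuples):
        """One block-unit insert transaction: compress the block as a whole."""
        self.small_blocks.append(zlib.compress(pickle.dumps(tuples)))
        if len(self.small_blocks) >= MERGE_THRESHOLD:
            self._merge()

    def _merge(self):
        """Decompress the small blocks once and recompress them as one segment."""
        merged = []
        for blk in self.small_blocks:
            merged.extend(pickle.loads(zlib.decompress(blk)))
        self.segments.append(zlib.compress(pickle.dumps(merged)))
        self.small_blocks = []

    def scan(self):
        """Read back every tuple from both merged segments and small blocks."""
        for comp in self.segments + self.small_blocks:
            yield from pickle.loads(zlib.decompress(comp))


store = CompressMergeStore()
for b in range(6):
    store.insert_block([(b * 10 + i, "lot-%d" % b) for i in range(10)])
print(len(store.segments), "merged segment(s),", len(store.small_blocks), "small block(s)")
print(sum(1 for _ in store.scan()), "tuples stored")
```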

General-purpose Transaction Management Technique for Data Stability of NoSQL on Distributed File System (분산 파일 시스템 기반 NoSQL의 데이터 안정성을 위한 범용 트랜잭션 관리 기법)

  • Kwon, Younghyun; Yun, Do-hyun; Park, Hojin
    • Journal of Digital Contents Society / v.16 no.2 / pp.299-306 / 2015
  • In this paper, we study how to secure the stability of data storing and searching in a NoSQL system implemented on a distributed file system. When implementing NoSQL on a distributed file system, we found that random writes to the distributed file system are almost impossible. To solve this problem, the concept of an intermediate file was employed, and as a result our system can withstand failure conditions. Additionally, since we found that its performance could not be as fast as a general file system, we redefined the file block unit for our NoSQL system and thereby prevented a slowdown in system performance. As a result, we were able to develop a highly scalable NoSQL system on a distributed file system that fulfills the basic conditions of a transaction: Atomicity, Consistency, Isolation, and Performance.
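
The intermediate-file idea can be sketched as an append-only update log that is later folded into a new base file, since random writes to a distributed file system are effectively unavailable. The Python below is a simplified local-file illustration under assumed file names (kv_base.jsonl, kv_delta.jsonl) and an assumed compaction step; the paper's actual mechanism on a distributed file system may differ.

```python
# Hedged sketch: append-only intermediate file plus periodic compaction.
import json
import os

BASE = "kv_base.jsonl"           # immutable base file (assumed name)
INTERMEDIATE = "kv_delta.jsonl"  # append-only intermediate file (assumed name)


def put(key, value):
    """Append the update to the intermediate file; never rewrite the base in place."""
    with open(INTERMEDIATE, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")


def get(key):
    """Latest value wins: read the base first, then replay the intermediate file."""
    value = None
    for path in (BASE, INTERMEDIATE):
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    rec = json.loads(line)
                    if rec["key"] == key:
                        value = rec["value"]
    return value


def compact():
    """Fold the intermediate file into a fresh base, then swap it in atomically."""
    state = {}
    for path in (BASE, INTERMEDIATE):
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    rec = json.loads(line)
                    state[rec["key"]] = rec["value"]
    tmp = BASE + ".tmp"
    with open(tmp, "w") as f:
        for k, v in state.items():
            f.write(json.dumps({"key": k, "value": v}) + "\n")
    os.replace(tmp, BASE)                      # atomic swap on the local file system
    if os.path.exists(INTERMEDIATE):
        os.remove(INTERMEDIATE)


put("wafer-17", {"status": "etched"})
put("wafer-17", {"status": "inspected"})
compact()
print(get("wafer-17"))
```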

A Study on the DB-IR Integration: Per-Document Basis Online Index Maintenance

  • Jin, Du-Seok; Jung, Hoe-Kyung
    • Journal of information and communication convergence engineering / v.7 no.3 / pp.275-280 / 2009
  • While database (DB) and information retrieval (IR) systems have been developed independently, there are emerging requirements that both data management and efficient text retrieval be supported simultaneously in information systems such as health care, customer support, XML data management, and digital libraries. The great divide between DB and IR has led to different approaches to index maintenance for newly arriving documents. DB systems have extended their SQL layer to cope with text fields because they lack a native mechanism for building IR-like indexes, whereas IR systems usually treat a block of new documents as the logical unit of index maintenance since they have no concept of integrity constraints. In DB-IR integration, however, a transaction that adds or updates a document should also include maintenance of the posting lists associated with that document. Although DB-IR integration has begun to bud as a research field, the issue will remain a difficult and rewarding area for a while, primarily because of the lack of efficient online transactional index maintenance. In this paper, the performance of several strategies for per-document transactional index maintenance - direct index update, pulsing auxiliary index, and posting segmentation index - is evaluated. The results show that the pulsing auxiliary strategy and the posting segmentation indexing scheme can be promising candidates for text field indexing in DB-IR integration.
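
Of the three strategies named in the abstract, the pulsing auxiliary index is the easiest to sketch: each per-document transaction updates a small auxiliary index, which is periodically flushed (pulsed) into the main inverted index, and queries consult both. The pulse threshold and merge details in the Python below are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal sketch of a pulsing auxiliary index for per-document index maintenance.
from collections import defaultdict

PULSE_THRESHOLD = 3        # assumed: pulse after this many buffered documents


class PulsingIndex:
    def __init__(self):
        self.main = defaultdict(list)       # term -> sorted list of doc ids
        self.aux = defaultdict(list)        # small auxiliary index for new docs
        self.aux_docs = 0

    def add_document(self, doc_id, text):
        """Per-document transaction: postings go into the auxiliary index."""
        for term in set(text.lower().split()):
            self.aux[term].append(doc_id)
        self.aux_docs += 1
        if self.aux_docs >= PULSE_THRESHOLD:
            self._pulse()

    def _pulse(self):
        """Merge the auxiliary postings into the main index in one pass."""
        for term, postings in self.aux.items():
            self.main[term] = sorted(self.main[term] + postings)
        self.aux.clear()
        self.aux_docs = 0

    def search(self, term):
        """Queries consult the main index plus the not-yet-pulsed auxiliary index."""
        term = term.lower()
        return sorted(self.main[term] + self.aux[term])


idx = PulsingIndex()
idx.add_document(1, "transactional index maintenance")
idx.add_document(2, "online index update")
idx.add_document(3, "posting segmentation")
idx.add_document(4, "auxiliary index pulsing")
print(idx.search("index"))    # [1, 2, 4] - found whether pulsed or still in aux
```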

Design and Implementation of Transactional Write Buffer Cache with Storage Class Memory (트랜잭션 단위 쓰기를 보장하는 스토리지 클래스 메모리 쓰기 버퍼캐시의 설계 및 구현)

  • Kim, Young-Jin; Doh, In-Hwan; Kim, Eun-Sam; Choi, Jong-Moo; Lee, Dong-Hee; Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters / v.16 no.2 / pp.247-251 / 2010
  • Using storage class memory (SCM) in storage systems introduces new potential for improving I/O performance and reliability. In this paper, we study the use of SCM as a buffer cache that guarantees transaction-unit writes. Our proposed method can improve storage system reliability and performance at the same time and can recover the storage system immediately after a system crash. The proposed method is based on the Linux JBD (Journaling Block Device), so its reliability is equivalent to that of JBD. In our experiments, a file system that adopts our method shows better I/O performance while guaranteeing high reliability, and shows a fast file system recovery time (about 0.2 seconds).
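
The transactional write buffer cache can be pictured schematically: dirty blocks of a transaction are staged in non-volatile SCM, a commit record marks the durability point, and recovery after a crash replays only committed transactions. The Python below simulates the SCM region with in-memory dictionaries and is not the JBD-based implementation described in the paper; names and structure are assumptions for illustration.

```python
# Schematic sketch of a transactional write buffer cache with crash recovery.
scm_buffer = {}          # simulated storage class memory: txid -> staged blocks
scm_committed = set()    # simulated durable commit records
disk = {}                # simulated backing storage: block number -> data


def tx_begin(txid):
    scm_buffer[txid] = {}


def tx_write(txid, block_no, data):
    """Stage the dirty block in SCM instead of writing it to disk immediately."""
    scm_buffer[txid][block_no] = data


def tx_commit(txid):
    """Writing the commit record is the durability point of the transaction."""
    scm_committed.add(txid)


def recover():
    """After a crash: checkpoint committed transactions, drop uncommitted ones."""
    for txid in sorted(scm_committed):
        disk.update(scm_buffer.get(txid, {}))
    for txid in list(scm_buffer):
        if txid not in scm_committed:
            del scm_buffer[txid]     # uncommitted updates are simply discarded


tx_begin(1); tx_write(1, 7, b"committed data"); tx_commit(1)
tx_begin(2); tx_write(2, 8, b"lost on crash")          # crash before commit
recover()
print(disk)          # {7: b'committed data'} - block 8 never reaches the disk
```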