• Title/Summary/Keyword: insert transaction (삽입 트랜잭션)

Search Results: 10, Processing Time: 0.035 seconds

Real time Storage Manager to store very large data using block transaction (블록 단위 트랜잭션을 이용한 대용량 데이터의 실시간 저장관리기)

  • Baek, Sung-Ha;Lee, Dong-Wook;Eo, Sang-Hun;Chung, Warn-Ill;Kim, Gyoung-Bae;Oh, Young-Hwan;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.1-12 / 2008
  • Automatic semiconductor manufacturing systems, which generate from 50,000 to 500,000 transactions per second, need a storage management system that can process very large volumes of data at once. Many storage management systems have been studied for storing very large data. The typical existing storage management system is a disk-based DBMS, but it is difficult for a disk-based DBMS to process 500,000 insert transactions per second. Main-memory DBMSs appeared in order to exploit memory, but storing very large data in a main-memory DBMS is difficult because of the limited amount of memory. In this paper we propose a storage management system using block-unit insert transactions that can process more than 50,000 insert transactions per second and store data at low storage cost. A block-unit transaction reduces the per-tuple logging and indexing cost by transforming tuple-unit transactions into block-unit transactions. However, because per-field information is lost when blocks are compressed, searches would otherwise incur the cost of decompressing whole blocks. To solve this problem, the proposed system generates an index for each compressed block so that search speed is not reduced. The proposed system can store the very large data generated in semiconductor systems and reduce storage cost.

  • PDF
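
A minimal sketch of the block-unit insert idea summarized above, assuming an in-memory store where tuples are buffered, flushed as one compressed block per transaction, and indexed per block by that block's key range so that searches need not decompress unrelated blocks. The class and field names are illustrative assumptions, not the authors' implementation:

```python
import zlib
import json

class BlockStore:
    """Buffers tuples and flushes them as one compressed block per insert transaction."""

    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.buffer = []          # tuples waiting to be flushed
        self.blocks = []          # list of (block_index, compressed_bytes)

    def insert(self, tup):
        """Append a tuple; flush a whole block once the buffer is full."""
        self.buffer.append(tup)
        if len(self.buffer) >= self.block_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        keys = [t["key"] for t in self.buffer]
        # Per-block index: the key range of the block, so searches can skip blocks.
        block_index = {"min_key": min(keys), "max_key": max(keys)}
        payload = zlib.compress(json.dumps(self.buffer).encode())
        self.blocks.append((block_index, payload))
        self.buffer = []

    def search(self, key):
        """Decompress only blocks whose index range may contain the key."""
        self.flush()
        for block_index, payload in self.blocks:
            if block_index["min_key"] <= key <= block_index["max_key"]:
                for tup in json.loads(zlib.decompress(payload)):
                    if tup["key"] == key:
                        yield tup

store = BlockStore(block_size=3)
for i in range(7):
    store.insert({"key": i, "sensor": f"s{i}", "value": i * 0.1})
print(list(store.search(4)))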

Database Transaction Routing Algorithm Using AOP (AOP를 사용한 데이터베이스 트랜잭션 라우팅 알고리즘)

  • Kang, Hyun Sik;Lee, Sukhoon;Baik, Doo-Kwon
    • KIPS Transactions on Software and Data Engineering / v.3 no.11 / pp.471-478 / 2014
  • Database replication is used to increase the reliability and availability of distributed databases and to prevent overload. Two replication models currently exist: Master/Slave and Multi-Master. Since the Multi-Master model increases the complexity and cost of coordinating updates and inserts among multiple databases, the Master/Slave model is more appropriate when frequent data inserts and updates are required. However, the Master/Slave model has the problem that there is no exact criterion for deciding whether a transaction should connect to the Master or to a Slave. Therefore, this research suggests a routing algorithm based on AOP (Aspect-Oriented Programming) for the Master/Slave database model. The algorithm classifies routing as a cross-cutting concern based on AOP, modularizes each concern, and routes transactions between the Master and Slave databases. This paper evaluates the stability and performance of the suggested algorithm through scenario-based integration tests.
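
A minimal Python sketch of transaction routing as a cross-cutting concern, using a decorator as a stand-in for an AOP advice: statements classified as writes go to the Master, reads go to a Slave. The connection handles and the classification rule are assumptions for illustration only, not the paper's algorithm:

```python
from functools import wraps

# Hypothetical connection handles standing in for real database connections.
MASTER = "master-db-connection"
SLAVES = ["slave-db-1", "slave-db-2"]

WRITE_PREFIXES = ("INSERT", "UPDATE", "DELETE")

def route_transaction(func):
    """Cross-cutting routing concern: pick the Master for writes, a Slave for reads."""
    @wraps(func)
    def wrapper(sql, *args, **kwargs):
        is_write = sql.lstrip().upper().startswith(WRITE_PREFIXES)
        connection = MASTER if is_write else SLAVES[hash(sql) % len(SLAVES)]
        return func(sql, *args, connection=connection, **kwargs)
    return wrapper

@route_transaction
def execute(sql, connection=None):
    # A real implementation would run the statement on `connection`.
    print(f"{connection}: {sql}")

execute("INSERT INTO orders VALUES (1, 'book')")   # routed to the Master
execute("SELECT * FROM orders WHERE id = 1")       # routed to a Slave
```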

A Study on Multimedia Database Transmission Algorithm (멀티미디어 데이터베이스 전송 알고리즘에 관한 연구)

  • 최진탁
    • Journal of the Korea Computer Industry Society / v.3 no.7 / pp.921-926 / 2002
  • The B+-Tree is the most popular indexing method in a DBMS for managing large data more efficiently. However, the existing B+-Tree has shortcomings: there is disk I/O overhead when a database is first constructed or an index is first built, and concurrency degrades when frequent delete operations force frequent changes to the index structure. To solve these problems, most DBMSs use a batch construction method and a lazy deletion method. However, applying a B+-Tree with batch construction and lazy deletion to a DBMS requires concurrency control and recovery techniques, and research on them is still insufficient, which makes it hard to apply these methods to actual systems. This paper suggests concurrency control and recovery techniques for implementing the batch construction method and the lazy deletion method in an actual DBMS. With the suggested technique, a pending list prevents cascading rollback, concurrency is enhanced by allowing insertions and deletions on the base table during every reconstruction, and transaction response time is shortened by using a system queue so that batch construction is processed at the system-transaction level rather than within the user's transaction.

  • PDF
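
A rough sketch, under assumed names, of the lazy deletion and pending-list ideas mentioned above: deletions only mark entries, user transactions only append to a pending list, and a separate system-level batch step reorganizes the index. A sorted Python list stands in for the B+-Tree leaves:

```python
from bisect import insort
from threading import Lock

class LazyIndex:
    """Lazy deletion plus pending-list (batch) insertion for an index structure."""

    def __init__(self):
        self.keys = []            # sorted keys (stand-in for B+-Tree leaves)
        self.deleted = set()      # lazily deleted keys
        self.pending = []         # pending list of keys awaiting batch insertion
        self.lock = Lock()

    def insert(self, key):
        # The user's transaction only appends to the pending list: no restructuring.
        with self.lock:
            self.pending.append(key)

    def delete(self, key):
        # Lazy deletion: mark the key, do not restructure the index.
        with self.lock:
            self.deleted.add(key)

    def batch_apply(self):
        # System-level step: merge pending inserts and purge lazily deleted keys.
        with self.lock:
            for key in self.pending:
                insort(self.keys, key)
            self.pending.clear()
            self.keys = [k for k in self.keys if k not in self.deleted]
            self.deleted.clear()

    def search(self, key):
        if key in self.deleted:
            return False
        return key in self.keys or key in self.pending

idx = LazyIndex()
for k in (5, 1, 3):
    idx.insert(k)
idx.batch_apply()
idx.delete(1)
print(idx.search(3), idx.search(1))   # True False
```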

"Q-Bone", a 3rd Generation Blockchain Platform with Enhanced Security and Flexibility (보안성 및 범용성이 강화된 3세대 블록체인 플랫폼 "큐본")

  • Im, Noh-Gan;Lee, Yo-Han;Cho, Ji-Yeon;Lee, Seongsoo
    • Journal of IKEEE / v.24 no.3 / pp.791-796 / 2020
  • In this paper, "Q-Bone", a 3rd generation blockchain platform with enhanced security and flexibility, was developed. As a 3rd generation blockchain platform, it exploits a BP (block producer) to increase processing speed. It has the following advantages. It improves both security and speed by combining RSA (Rivest-Shamir-Adleman) and AES (Advanced Encryption Standard). It improves flexibility by using a gateway to convert between apps and blockchains written in different programming languages. It increases processing speed by combining all pending transactions into one block and distributing it when too many transactions occur. It improves search speed by inserting a sequence hash into the transaction data. It was implemented and applied to a pet communication service and an academy-instructor-student matching service, and it was verified to work correctly and effectively. Its processing speed is 3,357 transactions per second, which shows excellent performance.
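
The RSA/AES mix described above is a standard hybrid-encryption pattern; a generic sketch (not Q-Bone's actual code) using the third-party cryptography package might look like this, with the bulk payload encrypted under AES-GCM and only the AES key wrapped with RSA-OAEP:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt(payload: bytes, public_key):
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, payload, None)  # fast bulk encryption
    wrapped_key = public_key.encrypt(aes_key, oaep)             # only the small key uses RSA
    return wrapped_key, nonce, ciphertext

def decrypt(wrapped_key, nonce, ciphertext, private_key):
    aes_key = private_key.decrypt(wrapped_key, oaep)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

wrapped, nonce, ct = encrypt(b"transaction data", rsa_key.public_key())
print(decrypt(wrapped, nonce, ct, rsa_key))
```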

An Efficient Storing Scheme of Real-time Large Data to improve Semiconductor Process Productivities (반도체 공정의 생산성 향상을 위한 실시간 대용량 데이터의 효율적인 저장 기법)

  • Chung, Weon-Il;Kim, Hwan-Koo
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.11 / pp.3207-3212 / 2009
  • Automatic semiconductor manufacturing systems are demanded to improve the efficiency of the semiconductor production process. These systems include functionalities such as analysis and management schemes for very large amounts of real-time data in order to enhance productivity, so an efficient storage management system for storing very large real-time data is required. Traditional database management systems (e.g. Oracle, MySQL, MS-SQL) are disk-based; however, such DBMSs are limited by low storing performance. In this paper, we propose a compress-merge storing method for very large real-time data using block-unit insert transactions. The proposed method shows better processing performance compared to conventional DBMSs. The compress-merge method also makes it possible to store large real-time data at low storage cost. Therefore, the proposed method can be applied to an efficient storage management system for the semiconductor production process.
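
The abstract only outlines the compress-merge scheme; one plausible reading, sketched below under assumed names, compresses each incoming batch immediately on the real-time path and periodically merges several small compressed chunks into one larger block to keep storage cost low:

```python
import zlib

class CompressMergeStore:
    def __init__(self, merge_threshold=4):
        self.merge_threshold = merge_threshold
        self.small_chunks = []    # recently written compressed batches
        self.merged_blocks = []   # larger blocks produced by merging

    def store_batch(self, rows):
        """Compress an incoming batch of rows right away (real-time path)."""
        raw = "\n".join(rows).encode()
        self.small_chunks.append(zlib.compress(raw))
        if len(self.small_chunks) >= self.merge_threshold:
            self.merge()

    def merge(self):
        """Merge small chunks into one larger, better-compressed block (background path)."""
        combined = b"\n".join(zlib.decompress(c) for c in self.small_chunks)
        self.merged_blocks.append(zlib.compress(combined))
        self.small_chunks = []

store = CompressMergeStore(merge_threshold=2)
store.store_batch([f"sensor,{i},{i * 0.5}" for i in range(1000)])
store.store_batch([f"sensor,{i},{i * 0.7}" for i in range(1000)])
print(len(store.merged_blocks), "merged block(s)")
```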

A Cache Consistency Control for B-Tree Indices in a Database Sharing System (데이타베이스 공유 시스템에서 B-트리 인덱스를 위한 캐쉬 일관성 제어)

  • On, Gyeong-O;Jo, Haeng-Rae
    • The KIPS Transactions:PartD / v.8D no.5 / pp.593-604 / 2001
  • A database sharing system (DSS) refers to a system for high-performance transaction processing. In a DSS, the processing nodes are coupled via a high-speed network and share a common database at the disk level. Each node has local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches data pages and index pages in its memory buffer. In general, B-tree index pages are accessed more often, and thus cached at more processing nodes, than their corresponding data pages. The B-tree also involves complicated operations such as Fetch, Fetch Next, Insertion and Deletion. Therefore, an efficient cache consistency scheme supporting a high level of concurrency is required. In this paper, we propose cache consistency schemes using the identifiers of index pages and the page_LSN of leaf pages. The proposed schemes can improve system throughput by reducing the required message traffic between nodes and the number of index re-traversals.

  • PDF
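
A minimal sketch of page_LSN-based cache validation between nodes, with a hypothetical coherency manager holding the latest LSN per page; a node reuses a cached index page only if its cached LSN is still current. Names and structure are assumptions, not the paper's protocol:

```python
class CoherencyManager:
    """Global table of the latest LSN per page (stand-in for a coherency server)."""
    def __init__(self):
        self.latest_lsn = {}

    def note_update(self, page_id, lsn):
        self.latest_lsn[page_id] = lsn

    def current_lsn(self, page_id):
        return self.latest_lsn.get(page_id, 0)

class NodeCache:
    """Per-node buffer; a cached page is reused only if its page_LSN is still current."""
    def __init__(self, manager, fetch_from_disk):
        self.manager = manager
        self.fetch_from_disk = fetch_from_disk
        self.cache = {}           # page_id -> (page_lsn, page_bytes)

    def get_page(self, page_id):
        cached = self.cache.get(page_id)
        if cached and cached[0] == self.manager.current_lsn(page_id):
            return cached[1]                       # cache hit: page is still valid
        lsn, page = self.fetch_from_disk(page_id)  # stale or missing: re-read
        self.cache[page_id] = (lsn, page)
        return page

disk = {"leaf-17": (1, b"index entries v1")}
mgr = CoherencyManager()
mgr.note_update("leaf-17", 1)
node = NodeCache(mgr, lambda pid: disk[pid])
print(node.get_page("leaf-17"))
disk["leaf-17"] = (2, b"index entries v2")         # another node updates the page
mgr.note_update("leaf-17", 2)
print(node.get_page("leaf-17"))                    # detected as stale and refetched
```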

A Method of Generating Theme, Background and Signal Music Usage Monitoring Information Based on Blockchain

  • Kim, Young-Mo;Park, Byeong-Chan;Bang, Kyung-Sik;Kim, Seok-Yoon
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.45-52 / 2021
  • In this paper, we propose a blockchain-based method of generating usage monitoring information for theme, background and signal music, in which music usage information is recorded by a monitoring tool that uses the feature-based filtering of monitoring organizations. Theme, background and signal music are pieces of music inserted into broadcasters' broadcasting content. Since they are recognized as created content just like ordinary music, there are lyricists and composers who hold the rights to them, and all of their copyright holders must receive the corresponding copyright fees once the music is used in a broadcast. However, monitoring results for music usage are inaccurate because usage details are omitted and the settlement method is not transparent. With the information generation method proposed in this paper, an accurate music usage history can be created, the details are stored in the blockchain without alteration or omission, and transparent settlement and distribution become possible through smart contracts, replacing the current non-transparent settlement method.
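
A small sketch of recording usage details so that alteration or omission is detectable, using a simple hash-linked list of records as a stand-in for the blockchain; the record fields are hypothetical:

```python
import hashlib
import json
import time

def record_usage(chain, usage):
    """Append one music-usage record, linking it to the previous entry by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"usage": usage, "prev_hash": prev_hash, "timestamp": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify(chain):
    """Detect any alteration or omission of recorded usage details."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
record_usage(chain, {"title": "opening theme", "program": "evening news", "seconds": 35})
record_usage(chain, {"title": "signal music A", "program": "drama ep.3", "seconds": 12})
print(verify(chain))                      # True
chain[0]["usage"]["seconds"] = 300
print(verify(chain))                      # False: tampering is detected
```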

Implementation of Rule Management System for Validating Spatial Object Integrity (공간 객체 무결성 검증을 위한 규칙 관리 시스템의 구현)

  • Go, Goeng-Uk;Yu, Sang-Bong;Kim, Gi-Chang;Cha, Sang-Gyun
    • Journal of KIISE:Software and Applications / v.26 no.12 / pp.1393-1403 / 1999
  • It is necessary that the integrity of spatial data shared through a spatial database system be validated and appropriately maintained; otherwise, the behavior of the whole application system becomes unpredictable. In particular, the integrity of spatial data stored in a public GIS has to be validated, because those data are used by various applications that support important regional and national policy decisions such as land-use evaluation, city planning, resource management, facility management, risk and safety management, and national defense. In this paper, we propose a rule management system that supports validating the integrity of spatial objects by using the active rule technique of active DBMSs. Validating data integrity with active rules frees database application programmers from the burden of integrity validation. The system is an independent, external system that is not tied to a specific DBMS and consists of three parts: the active rule manager, the rule base, and the triggered rule generator. When a user manipulates spatial objects through a spatial database application program, the system efficiently manages the integrity rules to be inserted into the application program in order to validate the integrity constraints of all spatial objects manipulated by each database transaction.
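
A minimal sketch of rule-based integrity validation applied per transaction, with a registry of named predicates per spatial object type; the closed-ring rule and all names are illustrative assumptions, not the system's rule language:

```python
class RuleManager:
    """Active-rule style integrity checking, applied to one database transaction at a time."""

    def __init__(self):
        self.rules = {}   # object type -> list of (rule_name, predicate)

    def register(self, obj_type, rule_name, predicate):
        self.rules.setdefault(obj_type, []).append((rule_name, predicate))

    def validate_transaction(self, manipulated_objects):
        """Check every spatial object touched by the transaction; report violations."""
        violations = []
        for obj in manipulated_objects:
            for rule_name, predicate in self.rules.get(obj["type"], []):
                if not predicate(obj):
                    violations.append((obj["id"], rule_name))
        return violations

manager = RuleManager()
# Example integrity rule: a polygon ring must be closed (first vertex == last vertex).
manager.register("polygon", "ring_is_closed",
                 lambda o: o["vertices"][0] == o["vertices"][-1])

tx_objects = [
    {"id": "p1", "type": "polygon", "vertices": [(0, 0), (1, 0), (1, 1), (0, 0)]},
    {"id": "p2", "type": "polygon", "vertices": [(0, 0), (2, 0), (2, 2)]},
]
print(manager.validate_transaction(tx_objects))   # [('p2', 'ring_is_closed')]
```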

Page Logging System for Web Mining Systems (웹마이닝 시스템을 위한 페이지 로깅 시스템)

  • Yun, Seon-Hui;O, Hae-Seok
    • The KIPS Transactions:PartC / v.8C no.6 / pp.847-854 / 2001
  • The Web continues to grow at a fast rate in both the volume of traffic and the size and complexity of Web sites. Along with this growth, the complexity of tasks such as Web site design, Web server design, and even simply navigating through a Web site has increased. An important input to these design tasks is an analysis of how a Web site is being used. This paper proposes a Page Logging System (PLS) that reliably identifies the user sessions required by Web mining systems. PLS consists of a Page Logger that captures all of a user's page accesses, a Log Processor that produces user sessions from these data, and statements that incorporate a call to the page logger applet. The proposed PLS eliminates several time-consuming preprocessing tasks that must otherwise be performed in Web mining systems. In particular, it simplifies the transaction identification phase by directly acquiring the amount of time a user stays on each page. PLS also resolves the problems that local cache hits and proxy IPs cause when identifying user sessions from Web server logs.

  • PDF
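
A minimal sketch of turning page-access records, including time spent on each page, into user sessions with an assumed inactivity timeout; the field names and the 30-minute threshold are illustrative assumptions, not the paper's Log Processor:

```python
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60   # assumed: a 30-minute gap closes a session

def build_sessions(page_log):
    """Group (user_id, url, timestamp, seconds_on_page) records into user sessions."""
    by_user = defaultdict(list)
    for record in sorted(page_log, key=lambda r: (r["user_id"], r["timestamp"])):
        by_user[record["user_id"]].append(record)

    sessions = []
    for user_id, records in by_user.items():
        current = [records[0]]
        for prev, rec in zip(records, records[1:]):
            # A new session starts when the gap since the previous page exceeds the timeout.
            if rec["timestamp"] - prev["timestamp"] > SESSION_TIMEOUT:
                sessions.append((user_id, current))
                current = []
            current.append(rec)
        sessions.append((user_id, current))
    return sessions

log = [
    {"user_id": "u1", "url": "/a", "timestamp": 0,    "seconds_on_page": 40},
    {"user_id": "u1", "url": "/b", "timestamp": 50,   "seconds_on_page": 15},
    {"user_id": "u1", "url": "/a", "timestamp": 4000, "seconds_on_page": 20},
]
print([[r["url"] for r in s] for _, s in build_sessions(log)])   # [['/a', '/b'], ['/a']]
```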