• Title/Summary/Keyword: Update Transaction

Selective Redo Recovery Scheme for Fine-Granularity Locking in Database Management Systems (데이터베이스 관리 시스템에서 섬세 입자 잠금기법을 위한 선택적 재수행 회복기법)

  • 이상희
    • Journal of the Korea Society of Computer and Information / v.6 no.2 / pp.27-33 / 2001
  • In this paper, we present a simple and efficient recovery method, called ARIES/SR (ARIES/Selective Redo), which is based on ARIES (Algorithms for Recovery and Isolation Exploiting Semantics). ARIES performs redo for all updates made by both nonloser and loser transactions, and thus significant overhead arises during restart after a system failure. To reduce this overhead, we propose the ARIES/SR recovery algorithm. In this algorithm, redo is performed using log records only for updates made by nonloser transactions. Selective undo is also performed, using log records only for updates made by loser transactions, to further reduce recovery work.
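
As a rough illustration of the selective-redo idea summarized above, the Python sketch below replays only the logged updates of nonloser transactions during restart and then undoes only those loser updates that actually reached the pages. The `LogRecord`, `Page`, and `restart_selective` names and structures are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Page:
    page_id: str
    page_lsn: int = 0                      # LSN of the last update reflected on the page
    data: dict = field(default_factory=dict)

@dataclass
class LogRecord:
    lsn: int
    txn: str
    page_id: str
    redo: Callable[[Page], None]           # reapplies the update to the page
    undo: Callable[[Page], None]           # reverses the update on the page

def restart_selective(log: List[LogRecord], losers: set, pages: Dict[str, Page]) -> None:
    """Illustrative ARIES/SR-style restart pass.

    Plain ARIES repeats history (redoes every logged update) before undoing
    losers; the selective variant sketched here redoes only nonloser updates
    and undoes only those loser updates that actually reached the pages.
    """
    # Selective redo: replay nonloser updates in LSN order,
    # skipping updates already reflected on the page.
    for rec in sorted(log, key=lambda r: r.lsn):
        page = pages[rec.page_id]
        if rec.txn not in losers and page.page_lsn < rec.lsn:
            rec.redo(page)
            page.page_lsn = rec.lsn

    # Selective undo: roll back loser updates, newest first, but only
    # where the page actually contains the loser's update.
    for rec in sorted(log, key=lambda r: r.lsn, reverse=True):
        page = pages[rec.page_id]
        if rec.txn in losers and page.page_lsn >= rec.lsn:
            rec.undo(page)
```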

Recovery Schemes for Spatial Data Update Transactions in Client-Server Computing Environments (클라이언트-서버 환경에서 공간 데이터의 변경 트랜잭션을 위한 회복 기법)

  • 박재관;최진오;홍봉희
    • Journal of KIISE:Databases / v.30 no.1 / pp.64-79 / 2003
  • In client-server computing environments, update transactions on spatial data have the following characteristics. First, a transaction that updates maps requires interactive work and therefore may take a long time to finish. Second, a long transaction should be allowed to read dirty data to enhance the parallelism of concurrent transactions; when such a transaction is rolled back, the cascading rollback of all dependent transactions must be guaranteed. Finally, two spatial objects may have a weak dependency constraint, called the spatial relationship, based on geometric topology. Existing recovery approaches cannot be directly applied to this environment because of the high rollback cost and the overhead of cascading rollbacks. Furthermore, the previous approaches cannot guarantee data integrity because the spatial relationship, a new consistency constraint of spatial data, is not considered. This paper presents new recovery schemes for update transactions on spatial data. To guarantee data integrity, this paper defines recovery dependency as a condition for cascading rollbacks. Partial rollback is also suggested to solve the problem of high rollback cost. The recovery schemes proposed in this paper can remove unnecessary cascading rollbacks by using undo-delta, partial-redo, and partial-undo. Finally, the schemes are shown to ensure correctness.
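
The notions of recovery dependency and partial rollback can be pictured with a small sketch. The `RecoveryDependencyTracker` below is a hypothetical structure, not the paper's scheme: it records which transactions read dirty objects from which writers, so a partial rollback only cascades to the readers of the objects actually undone.

```python
from collections import defaultdict

class RecoveryDependencyTracker:
    """Illustrative tracker of recovery dependencies (not the paper's API).

    A transaction that reads an object dirtied by an uncommitted writer
    becomes recovery-dependent on that writer for that object.  When the
    writer rolls back, only dependents whose dirty reads are actually undone
    need to cascade; a partial rollback of other objects leaves them alone.
    """

    def __init__(self):
        # writer -> object id -> set of dependent readers
        self.deps = defaultdict(lambda: defaultdict(set))

    def record_dirty_read(self, reader, writer, obj):
        self.deps[writer][obj].add(reader)

    def cascading_set(self, writer, undone_objects):
        """Readers that must roll back when `writer` undoes exactly
        `undone_objects` (a partial rollback)."""
        victims = set()
        for obj in undone_objects:
            victims |= self.deps[writer].get(obj, set())
        return victims
```

For instance, if a long map-editing transaction undoes only the objects of one region, a dependent transaction that read objects from a different region is not forced to roll back.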

Metadata Management Method for Consistency and Recency in Digital Library (디지탈 도서관 환경에서 일관성과 최근성을 고려한 메타데이타 관리 방법)

  • Lee, Hai-Min;Park, Seog
    • Journal of KIISE:Databases / v.27 no.1 / pp.22-32 / 2000
  • The Digital Library is an integrated system of an Information Retrieval System (IRS) and a Database Management System (DBMS). In the Digital Library environment, where dynamic query and update processing is required, however, the existing transaction management methods cause the following problems. First, since the traditional consistency criterion is too restrictive, it increases query processing time and cannot guarantee that recency is reflected. Second, query results can be unreliable because the consistency criterion between source data and metadata is not defined. This paper models accesses to metadata based on Dublin Core as query transactions and update transactions, and gives an efficient method to manage them. In particular, this paper describes a consistency criterion for metadata that takes into consideration the consistency between the result of a query transaction and the status of the source data in the Digital Library, which differs from the consistency criteria of traditional transaction management. It also analyzes query transactions from the viewpoint of recency and proposes metadata management that guarantees recency within metadata consistency.
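
As a loose illustration of a consistency criterion that also accounts for recency, the sketch below tags each metadata record with the version of the source data it was derived from and rejects queries whose metadata has fallen too far behind the source. The schema, staleness bound, and method names are assumptions made for illustration; they do not reproduce the paper's model.

```python
from dataclasses import dataclass, field
import time

@dataclass
class MetadataRecord:
    """Dublin Core-style metadata entry tagged with the source version it
    reflects (illustrative structure, not the paper's schema)."""
    identifier: str
    fields: dict
    source_version: int                  # version of the source data it reflects
    harvested_at: float = field(default=0.0)

class MetadataStore:
    def __init__(self, max_staleness: int = 1):
        self.records = {}                # identifier -> MetadataRecord
        self.source_versions = {}        # identifier -> current source version
        self.max_staleness = max_staleness

    def update_source(self, identifier: str, version: int):
        """An update transaction bumps the version of the source data."""
        self.source_versions[identifier] = version

    def put_metadata(self, rec: MetadataRecord):
        rec.harvested_at = time.time()
        self.records[rec.identifier] = rec

    def query(self, identifier: str):
        """A query transaction returns metadata only if it is within the allowed
        staleness of the source; otherwise it signals that re-extraction is
        needed to guarantee recency."""
        rec = self.records.get(identifier)
        current = self.source_versions.get(identifier, 0)
        if rec is None or current - rec.source_version > self.max_staleness:
            raise LookupError(f"metadata for {identifier} is stale; refresh required")
        return rec.fields
```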

Concurrency Control for Updating a Large Spatial Object (큰 공간 객체의 변경을 위한 동시성 제어)

  • Seo Young Duk;Kim DongHyun;Hong Bong Hee
    • Journal of KIISE:Databases / v.32 no.1 / pp.100-110 / 2005
  • Update transactions executed in spatial databases are usually known to be interactive, long-duration jobs. To improve the parallelism of concurrent updates, multiple transactions need to be able to concurrently update a large spatial object whose spatial extent is larger than the workspace of a client. Under the existing locking protocols, however, it is not possible to concurrently update a large spatial object because of write-lock conflicts. This paper proposes a partial locking scheme that enables a transaction to set locks on parts of a large object. The partial locking scheme, an exclusive locking scheme set by the user, acquires locks on a part of the large object, restricting the unit of concurrency control to a partial object of the large object. The scheme improves the concurrency of update jobs on a large object because it makes the lock granularity finer.
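
A minimal sketch of the partial-locking idea follows, assuming the large spatial object is divided into named parts (for example, tiles): a transaction locks only the parts it edits, so transactions editing disjoint parts of the same object proceed concurrently. The `PartialLockManager` class and the tile names are illustrative, not the paper's protocol.

```python
import threading

class PartialLockManager:
    """Illustrative partial-locking scheme for one large spatial object.

    Instead of a single exclusive lock on the whole object, a transaction
    locks only the parts it edits, so two transactions updating disjoint
    parts of the same large object can proceed concurrently.
    """

    def __init__(self):
        self._mutex = threading.Lock()
        self._part_owner = {}              # part id -> owning transaction id

    def lock_parts(self, txn_id, part_ids):
        """Atomically acquire exclusive locks on a set of parts, or fail."""
        with self._mutex:
            if any(self._part_owner.get(p, txn_id) != txn_id for p in part_ids):
                return False               # some part is held by another transaction
            for p in part_ids:
                self._part_owner[p] = txn_id
            return True

    def unlock_all(self, txn_id):
        with self._mutex:
            for p in [p for p, t in self._part_owner.items() if t == txn_id]:
                del self._part_owner[p]

# Two editors of the same large object succeed as long as their parts are disjoint.
mgr = PartialLockManager()
assert mgr.lock_parts("T1", {"tile_3_4", "tile_3_5"})
assert mgr.lock_parts("T2", {"tile_9_9"})          # concurrent, no conflict
assert not mgr.lock_parts("T2", {"tile_3_4"})      # conflicts with T1's part
```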

Distributed Mobile Streaming Service using Grouping-based Fuzzy Reference Scheme (그룹화 기반의 퍼지 참조 기법을 이용한 분산 모바일 스트리밍 서비스)

  • Jeong, Taeg-Won;Lee, Chong-Deuk
    • Journal of Digital Contents Society / v.9 no.4 / pp.533-541 / 2008
  • In distributed mobile systems, congestion control and disconnection problems are currently major issues. This paper proposes a grouping-based fuzzy reference streaming method to improve the performance of systems supporting distributed mobile transactions. The proposed method resolves transaction requests issued by mobile clients using the GS interface. In this paper, disconnection problems are resolved efficiently using transaction read and update operations for improved streaming service. Experimental results show that the proposed method significantly outperforms existing methods.

A Recovery Scheme of Mobile Transaction Based on Updates Propagation for Updating Spatial Data (공간데이터를 변경하는 모바일 트랜잭션의 변경 전파 회복 기법)

  • Kim, Dong-Hyun;Kang, Ju-Ho;Hong, Bong-Hee
    • Journal of Korea Spatial Information System Society / v.5 no.2 s.10 / pp.69-82 / 2003
  • Mobile transactions that update spatial objects are long transactions that update local objects on mobile clients during disconnection. Since a recovered transaction cannot read the write sets of other transactions committed before its recovery due to the disconnection, the recovered transaction may conflict with them. However, aborting the recovered long transaction leads to the cancellation of all of its updates, including the recovered ones, and it is clearly unsuitable to cancel the recovered updates because of these conflicts. In this paper, we propose a recovery scheme that retrieves foreign conflictive objects from the write sets of other transactions to reduce aborts of a recovered transaction. The foreign conflictive objects are the part of the data committed by other transactions that may conflict with the objects updated by the recovered transaction. In the scheme, since the recovered transaction can read both the foreign conflictive objects and the most recent checkpointed read set, it can properly re-update the potentially conflicting objects.
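
The reconciliation step described above can be sketched as follows: the recovered transaction collects, from the write sets committed during its disconnection, the objects that overlap its own updates (the foreign conflictive objects) and re-updates them instead of aborting. The function names and the caller-supplied merge rule are assumptions, not the paper's scheme.

```python
def foreign_conflictive_objects(recovered_write_set, committed_write_sets):
    """Collect objects committed by other transactions during the disconnection
    that overlap the recovered transaction's local updates (illustrative names)."""
    foreign = {}
    for txn_id, write_set in committed_write_sets.items():
        for oid, value in write_set.items():
            if oid in recovered_write_set:
                foreign[oid] = value       # latest committed state of a conflicting object
    return foreign

def reconcile(recovered_write_set, checkpoint_read_set, committed_write_sets, merge):
    """Instead of aborting the recovered long transaction, re-apply its updates
    on top of the foreign conflictive objects using a caller-supplied merge rule."""
    foreign = foreign_conflictive_objects(recovered_write_set, committed_write_sets)
    result = dict(recovered_write_set)
    for oid, committed_value in foreign.items():
        base = checkpoint_read_set.get(oid)
        result[oid] = merge(base, committed_value, recovered_write_set[oid])
    return result
```

Here `merge` stands in for an application-level rule (for example, re-applying the mobile edit on top of the latest committed geometry), corresponding to what the abstract calls properly re-updating the potentially conflicting objects.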

A database design using denormalization in relational database (관계형 데이터베이스에서 비정규화를 사용한 데이터베이스 설계)

  • 장영관;강맹규
    • Proceedings of the Korean Operations and Management Science Society Conference / 1996.04a / pp.172-178 / 1996
  • Databases are critical to business information systems, and RDBMSs are the most widely used database systems. Normalization has been designed to control various anomalies (insert, update, and delete anomalies). However, a normalized database design does not account for the tradeoffs necessary for performance. In this research, we develop a model for database design by denormalization, duplicating attributes in order to reduce frequent join processing. In this model, we consider insert, update, and delete costs. The anomalies are treated as additional disk I/O required for each insert and update transaction. We propose a branch-and-bound method for this model and show considerable cost reduction.
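
The trade-off that drives such a denormalization decision can be sketched for a single candidate attribute: duplicating it saves join I/O on frequent queries but adds I/O to every insert and update that must keep the copies consistent. The formula below is a simplification for illustration, not the paper's exact cost model.

```python
def denormalization_gain(query_freq, join_io, duplicate_read_io,
                         insert_freq, update_freq, extra_io_per_write):
    """Net disk-I/O benefit of duplicating one attribute (illustrative only).

    Duplicating the attribute removes a join from frequent queries but adds
    extra I/O to every insert and update that must keep the copies consistent.
    """
    saved = query_freq * (join_io - duplicate_read_io)        # cheaper reads
    paid = (insert_freq + update_freq) * extra_io_per_write   # dearer writes
    return saved - paid   # duplicate the attribute only if this is positive

# Example: a frequently joined attribute with modest write traffic.
print(denormalization_gain(query_freq=1000, join_io=4, duplicate_read_io=1,
                           insert_freq=50, update_freq=120, extra_io_per_write=2))
```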

A Database Design without Storage Constraint Considering Denormalization in Relational Database (관계형 데이터베이스에서 저장용량에 제약이 없는 경우 비 정규화를 고려한 데이터베이스 설계)

  • 장영관;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering / v.19 no.37 / pp.251-261 / 1996
  • Databases are critical to business information systems, and RDBMSs are the most widely used database systems. Normalization was designed to control various anomalies (insert, update, and delete anomalies). However, a normalized database design does not account for the tradeoffs necessary for performance reasons. In this research, we model a database design problem without a storage constraint. Given a normalized database design, in this model we denormalize by duplicating columns in order to reduce frequent join processing. In this paper, we consider insert, update, delete, and storage costs, and the anomalies are treated as the additional disk I/O cost required for each insert and update transaction. We propose a branch-and-bound method and show considerable cost reduction.

An Optimal Database Design Considering Denormalization in Relational Database (관계형 데이터베이스에서 비정규화를 고려한 최적 데이터베이스 설계)

  • 장영관;강맹규
    • The Journal of Information Technology and Database / v.3 no.1 / pp.3-24 / 1996
  • Databases are critical to business information systems, and RDBMSs are the most widely used database systems. Normalization has been designed to control various anomalies (insert, update, and delete anomalies). However, a normalized database design does not account for the tradeoffs necessary for performance. In this research, we develop a model for database design by denormalization, duplicating attributes in order to reduce frequent join processing. In this model, we consider insert, update, delete, and query costs. Anomalies and data inconsistency are removed by additional disk I/O required for each update and insert transaction. We propose a branch-and-bound method for this model and show considerable cost reduction.
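
A generic branch-and-bound skeleton over which attributes to duplicate is sketched below. The pruning relies on a caller-supplied lower bound, and neither the bound nor the toy cost function reproduces the paper's exact model; they only illustrate how partial designs are cut off once they cannot beat the best complete design found so far.

```python
def branch_and_bound(candidates, cost, lower_bound):
    """Branch-and-bound over which attributes to duplicate (illustrative skeleton).

    cost(selection) is the total cost of a design that duplicates exactly the
    attributes in `selection`; lower_bound(selection, remaining) must never
    exceed the cost of any design extending `selection` with attributes from
    `remaining`, so branches that cannot beat the incumbent are pruned.
    """
    best_sel, best_cost = frozenset(), cost(frozenset())

    def explore(selection, remaining):
        nonlocal best_sel, best_cost
        c = cost(selection)
        if c < best_cost:
            best_sel, best_cost = selection, c
        if not remaining or lower_bound(selection, remaining) >= best_cost:
            return                                  # bound: prune this branch
        attr, rest = remaining[0], remaining[1:]
        explore(selection | {attr}, rest)           # branch: duplicate attr
        explore(selection, rest)                    # branch: keep attr normalized

    explore(frozenset(), list(candidates))
    return best_sel, best_cost

# Toy example: three candidate attributes with additive per-attribute effects.
effect = {"a": -5, "b": 3, "c": -2}                 # negative = net saving
toy_cost = lambda sel: 100 + sum(effect[x] for x in sel)
toy_bound = lambda sel, rem: toy_cost(sel) + sum(min(0, effect[x]) for x in rem)
print(branch_and_bound(["a", "b", "c"], toy_cost, toy_bound))  # duplicates 'a' and 'c'
```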

Concurrency Control Using the Update Graph in Replicated Database Systems (중복 데이터베이스 시스템에서 갱신그래프를 이용한 동시성제어)

  • Choe, Hui-Yeong;Lee, Gwi-Sang;Hwang, Bu-Hyeon
    • The KIPS Transactions:PartD / v.9D no.4 / pp.587-602 / 2002
  • Replicated database systems emerged to resolve the reduction in availability and reliability caused by communication failures and site errors in centralized database systems. However, if many update transactions occur, every update must be executed identically on all replicated data, which causes problems such as the message overhead generated by synchronization and reduced concurrency due to delayed transactions. In this paper, I propose a new concurrency control algorithm that enhances the degree of transaction parallelism in a fully replicated database designed to improve availability and reliability. To improve system performance in the replicated database, the final operations should be performed at the site where a transaction is submitted, and update-only transactions composed of write-only operations should be executed independently at all sites. I propose a concurrency control method that maintains the consistency of the replicated database and reflects the results of update-only transactions at all sites. The superiority of the proposed method has been tested in terms of the response and withdrawal rates, and the results confirm its superiority over the classical correlation-based method.
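
The update-graph idea can be pictured with a small sketch: nodes are update-only transactions, a directed edge connects two transactions whose write sets overlap (from the earlier submission to the later one), and each site applies conflicting updates in an order consistent with the graph so that all replicas converge. The construction below is an illustrative guess at such a structure, not the paper's algorithm.

```python
from collections import defaultdict, deque

def build_update_graph(transactions):
    """Build an illustrative update graph: a directed edge t1 -> t2 is added
    when the write sets of t1 and t2 overlap and t1 was submitted first, so
    every site must apply t1 before t2 on the shared data items."""
    edges = defaultdict(set)
    ordered = sorted(transactions, key=lambda t: t["submitted_at"])
    for i, t1 in enumerate(ordered):
        for t2 in ordered[i + 1:]:
            if t1["write_set"].keys() & t2["write_set"].keys():
                edges[t1["id"]].add(t2["id"])
    return edges, [t["id"] for t in ordered]

def apply_order(edges, node_ids):
    """Topological order of the update graph; every replica applying
    conflicting updates in this order converges to the same state."""
    indegree = {n: 0 for n in node_ids}
    for src in edges:
        for dst in edges[src]:
            indegree[dst] += 1
    queue = deque([n for n in node_ids if indegree[n] == 0])
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for dst in edges.get(n, ()):
            indegree[dst] -= 1
            if indegree[dst] == 0:
                queue.append(dst)
    return order

# U1 and U2 share item 'y', so U1 must precede U2 everywhere; U3 is unconstrained.
txns = [
    {"id": "U1", "submitted_at": 1, "write_set": {"x": 10, "y": 2}},
    {"id": "U2", "submitted_at": 2, "write_set": {"y": 7}},
    {"id": "U3", "submitted_at": 3, "write_set": {"z": 5}},
]
edges, ids = build_update_graph(txns)
print(apply_order(edges, ids))
```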