• Title/Summary/Keyword: Database Transaction (데이타베이스 트랜잭션)

Search Results: 107, Processing Time: 0.02 seconds

Development of a Storage System for Mass Location Information (대용량 위치정보 저장시스템 개발)

  • Kim, Dong-Oh;Ju, Sung-Wan;Hong, Dong-Sook;Han, Ki-Joon
    • 한국공간정보시스템학회:학술대회논문집
    • /
    • 2004.12a
    • /
    • pp.105-112
    • /
    • 2004
  • Recently, location-based services that use the location information of moving objects, such as location finding, traffic information, emergency rescue, and mobile advertising services, have been gaining attention. To provide such diverse location-based services, a storage system that can rapidly store, retrieve, and update the location information of a large number of moving objects is essential. However, when an existing database system is used as the storage system for moving-object location information, unnecessary transaction operations cause overhead during storage and retrieval, and the various queries and requirements of location-based services cannot be supported. Therefore, this paper develops a mass location information storage system that can effectively store and retrieve large volumes of location information and that supports the trajectory query, location trigger, location correction, and moving-object-ID-based clustering functions required by location-based services. In addition, to evaluate its performance, comparative experiments against SQL Server, a commercial database system, demonstrated the superior performance of the developed system. (An illustrative code sketch follows this entry.)

  • PDF
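
The entry above describes clustering location records by moving-object ID and answering trajectory queries. The following is a minimal sketch of that idea, assuming hypothetical names (LocationStore, insert, trajectory); it is an illustration only, not the storage system described in the paper.

```python
from bisect import insort
from collections import defaultdict

class LocationStore:
    """Toy in-memory store that clusters location records by moving-object ID."""

    def __init__(self):
        # object_id -> list of (timestamp, x, y), kept sorted by timestamp
        self._tracks = defaultdict(list)

    def insert(self, object_id, timestamp, x, y):
        # Clustering by object ID keeps each trajectory contiguous,
        # so a trajectory query touches only one cluster.
        insort(self._tracks[object_id], (timestamp, x, y))

    def trajectory(self, object_id, t_start, t_end):
        """Return the positions of one object within [t_start, t_end]."""
        return [(t, x, y) for (t, x, y) in self._tracks[object_id]
                if t_start <= t <= t_end]

# Example usage
store = LocationStore()
store.insert("bus-42", 10, 127.02, 37.50)
store.insert("bus-42", 20, 127.03, 37.51)
print(store.trajectory("bus-42", 0, 15))   # [(10, 127.02, 37.5)]
```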

Data-Driven Exploration for Transient Association Rules (한시적 연관규칙을 위한 데이타 주도 탐사 기법)

  • Cho, Il-Rae;Kim, Jong-Deok;Lee, Do-Heon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.4
    • /
    • pp.895-907
    • /
    • 1997
  • The mining of association rules discovers the tendency of events occurring simultaneously in large databases. Previously announced research on association rules deals with associations with respect to the whole transaction. However, some association rules could have very high confidence in a sub-range of the time domain, even though they do not have quite high confidence in the whole time domain. Such kinds of association rules are expected to be very useful in various decision-making problems. In this paper, we define a transient association rule as an association with high confidence worthy of special attention in a partial time interval, and propose an efficient algorithm which finds the time intervals appropriate to transient association rules in large databases. We propose a data-driven retrieval method that excludes unnecessary interval search, and design an effective data structure, manageable in main memory and obtained by one scan of the database, which offers the necessary information to the next retrieval phase. In addition, our simulation shows that the suggested algorithm has reliable performance at a time cost acceptable in application areas. (An illustrative code sketch follows this entry.)

  • PDF
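
The abstract above concerns rules whose confidence is high only within a sub-range of the time domain. The sketch below is a simplified stand-in rather than the paper's data-driven algorithm: it bins transactions into equal-width time buckets and reports the buckets in which a given rule X -> Y exceeds a confidence threshold. All names and the bucketing scheme are assumptions.

```python
from collections import defaultdict

def transient_intervals(transactions, antecedent, consequent,
                        bucket_width, min_conf):
    """transactions: iterable of (timestamp, set_of_items).
    Returns (start, end, conf) for buckets where conf(antecedent -> consequent) >= min_conf."""
    support_x = defaultdict(int)   # per-bucket count of transactions containing X
    support_xy = defaultdict(int)  # per-bucket count containing both X and Y
    for t, items in transactions:
        bucket = t // bucket_width
        if antecedent <= items:
            support_x[bucket] += 1
            if consequent <= items:
                support_xy[bucket] += 1
    result = []
    for bucket, n_x in support_x.items():
        conf = support_xy[bucket] / n_x
        if conf >= min_conf:
            result.append((bucket * bucket_width, (bucket + 1) * bucket_width, conf))
    return sorted(result)

# Example: {a} -> {b} holds strongly only in the 0-10 interval.
data = [(1, {"a", "b"}), (2, {"a", "b"}), (12, {"a"}), (13, {"a", "c"})]
print(transient_intervals(data, {"a"}, {"b"}, bucket_width=10, min_conf=0.8))
# [(0, 10, 1.0)]
```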

Four Consistency Levels in Trigger Processing (트리거 처리 4 단계 일관성 레벨)

  • ;Eric Hanson
    • Journal of KIISE:Databases
    • /
    • v.29 no.6
    • /
    • pp.492-501
    • /
    • 2002
  • An asynchronous trigger processor (ATP) is a software system that processes triggers after update transactions to databases are complete. In an ATP, discrimination networks are used to check trigger conditions efficiently. Discrimination networks store their internal states in memory nodes. TriggerMan is an ATP and uses the Gator network as its discrimination network. Changes in databases are delivered to TriggerMan in the form of tokens. Processing tokens against a Gator network updates the memory nodes of the network and checks the condition of the trigger for which the network is built. Parallel token processing is one of the methods that can improve system performance. However, uncontrolled parallel processing breaks the semantic consistency of trigger processing. In this paper, we propose four trigger processing consistency levels that allow parallel token processing with minimal anomalies. For each consistency level, a parallel token processing technique is developed. The techniques are proven to be valid and are also applicable to materialized view maintenance.
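
As a rough illustration of the token-processing idea above (not TriggerMan's actual Gator network or its consistency levels), the sketch below applies change tokens to a memory node under a per-node lock, one simple way to keep concurrent token processing from interleaving inconsistently. Class and method names are invented.

```python
import threading

class MemoryNode:
    """Toy discrimination-network memory node: stores rows that matched so far."""

    def __init__(self, predicate):
        self._predicate = predicate
        self._rows = set()
        self._lock = threading.Lock()   # serializes token processing per node

    def process_token(self, op, row):
        """op is '+' (insert) or '-' (delete); row is a hashable tuple."""
        if not self._predicate(row):
            return False                 # token does not affect this node
        with self._lock:                 # uncontrolled parallelism here could
            if op == '+':                # leave the node in an inconsistent state
                self._rows.add(row)
            else:
                self._rows.discard(row)
        return True

    def matches(self):
        with self._lock:
            return set(self._rows)

# A node that tracks accounts whose balance is below zero.
node = MemoryNode(lambda row: row[1] < 0)
node.process_token('+', ('acct-7', -120))
print(node.matches())   # {('acct-7', -120)}
```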

Dynamic Multiversion Control in Multilevel Security Environments (다단계 보안 환경에서 동적 다중 버전 제어)

  • Jeong, Hyeon-Cheol;Hwang, Bu-Hyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.3
    • /
    • pp.659-669
    • /
    • 1997
  • Security as well as consistency of data is a very important issue in database security. Thus the serializability of transactions must be maintained, and in particular a covert channel must not be caused between a high-level transaction and a low-level one. In this paper, we propose a secure transaction management algorithm using a dynamic version control method that can solve the disk space overhead of maintaining multiple versions and the problem that transactions read too-old versions when only two versions are maintained. The disk space overhead can be solved by properly creating versions and dynamically maintaining the number of versions, and the problem of reading too-old versions can be solved by having transactions read versions that are as recent as possible. (An illustrative code sketch follows this entry.)

  • PDF
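
The entry above concerns multiversion control in which keeping too few versions forces transactions to read stale data. Below is a minimal, generic multiversion-read sketch (not the paper's secure algorithm): each write appends a timestamped version, the number of versions kept per key is bounded, and a reader picks the newest version no later than its start timestamp. All names are hypothetical.

```python
import bisect

class MultiVersionStore:
    """Toy multiversion store: each key keeps a sorted list of (commit_ts, value)."""

    def __init__(self, max_versions=4):
        self._versions = {}              # key -> sorted list of (commit_ts, value)
        self._max_versions = max_versions

    def write(self, key, commit_ts, value):
        versions = self._versions.setdefault(key, [])
        bisect.insort(versions, (commit_ts, value))
        # Dynamically bound the number of versions kept per key.
        if len(versions) > self._max_versions:
            versions.pop(0)

    def read(self, key, start_ts):
        """Return the most recent value committed at or before start_ts."""
        candidate = None
        for commit_ts, value in self._versions.get(key, []):
            if commit_ts <= start_ts:
                candidate = value
            else:
                break
        return candidate

store = MultiVersionStore()
store.write("x", 10, "v1")
store.write("x", 20, "v2")
print(store.read("x", 15))   # 'v1' -- newest version no later than ts=15
```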

Implementation of Rule Management System for Validating Spatial Object Integrity (공간 객체 무결성 검증을 위한 규칙 관리 시스템의 구현)

  • Go, Goeng-Uk;Yu, Sang-Bong;Kim, Gi-Chang;Cha, Sang-Gyun
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.12
    • /
    • pp.1393-1403
    • /
    • 1999
  • It is necessary that the integrity of spatial data shared through a spatial database system be validated and appropriately maintained; otherwise, the behavior of the whole application system becomes unpredictable. In particular, the integrity of spatial data stored in a public GIS has to be validated, because those data are used by various applications that make decisions on important policies of a region or of the whole nation, such as evaluation of land use, city planning, resource management, facility management, risk and safety management, and national defense. In this paper, we propose a rule management system that supports validating the integrity of spatial objects by using the active rule technique of active DBMSs. Validating data integrity with active rules frees application programmers from the burden of integrity validation. The system exists as an independent, external system not tied to a specific DBMS, and consists of three parts: the active rule manager, the rule base, and the triggered rule generator. When a user manipulates spatial objects through a spatial database application program, the system efficiently manages the integrity-constraint rules to be inserted into the application program so that the integrity of all spatial objects manipulated by each database transaction is validated.
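
Purely as an illustration of the active-rule idea above (not the implemented system), the sketch below registers integrity rules and checks every spatial object touched by a transaction before it is allowed to commit; all identifiers are made up.

```python
class RuleManager:
    """Toy rule manager: each rule is (name, predicate) over a spatial object."""

    def __init__(self):
        self._rules = []

    def register(self, name, predicate):
        self._rules.append((name, predicate))

    def check_transaction(self, objects):
        """Validate every object touched by the transaction; return violations."""
        violations = []
        for obj in objects:
            for name, predicate in self._rules:
                if not predicate(obj):
                    violations.append((name, obj))
        return violations

# Example rule: a polygon must have at least three vertices.
manager = RuleManager()
manager.register("polygon-has-3-vertices",
                 lambda obj: obj["type"] != "polygon" or len(obj["vertices"]) >= 3)

touched = [{"type": "polygon", "vertices": [(0, 0), (1, 0)]}]
print(manager.check_transaction(touched))
# [('polygon-has-3-vertices', {...})] -> the transaction should be rejected
```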

An Efficient Scheme of Performing Pending Actions for the Removal of Database Files (데이터베이스 파일의 삭제를 위한 미처리 연산의 효율적 수행 기법)

  • Park, Jun-Hyun;Park, Young-Chul
    • Journal of KIISE:Databases
    • /
    • v.28 no.3
    • /
    • pp.494-511
    • /
    • 2001
  • In an environment where database management systems manage disk spaces for storing databases directly, this paper proposes a correct and efficient scheme of performing pending actions for the removal of database files. As for performing pending actions, the recovery process must, upon performing recovery, identify the unperformed pending actions of not-yet-terminated transactions and then perform those actions completely. Making the recovery process identify those actions through the analysis of log records in the log file is the basic idea of this paper. The scheme, as an extension of the transaction execution, fuzzy checkpointing, and recovery of ARIES, uses the following methods. First, to identify not-yet-terminated transactions during recovery, transactions perform pending actions after writing 'pa_start' log records, which signify both the commit of the transaction and the start of executing its pending actions, and then write 'end' log records. Second, to restore the pending-actions lists of not-yet-terminated transactions during recovery, each transaction records its pending-actions list in the 'pa_start' log record, and the checkpoint process records the pending-actions lists of transactions that are decided to be committed in the 'end_chkpt' log record. Third, to identify the next pending action to perform during recovery, whenever a page is updated during the execution of pending actions, transactions record the information identifying the next pending action to perform in the log record that holds the redo information for that page. (An illustrative code sketch follows this entry.)

  • PDF
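
The scheme above hinges on writing a 'pa_start' record (carrying the pending-actions list) before executing pending actions and an end record afterwards, so that recovery can find unfinished pending actions by reading the log. The sketch below mimics that logging pattern over an in-memory log; it is a hedged illustration, not the paper's ARIES extension, and the record layout is invented.

```python
def run_pending_actions(log, txn_id, pending_actions):
    """Write pa_start with the full list, execute, then write the end record."""
    log.append(("pa_start", txn_id, [name for name, _ in pending_actions]))
    for index, (name, action) in enumerate(pending_actions):
        action()
        # Record which pending action to resume from if we crash after this point.
        log.append(("pa_done", txn_id, index))
    log.append(("end", txn_id))

def unfinished_pending_actions(log):
    """Recovery pass: find transactions whose pending actions did not all finish."""
    started, done, ended = {}, {}, set()
    for record in log:
        if record[0] == "pa_start":
            started[record[1]] = record[2]
            done[record[1]] = -1
        elif record[0] == "pa_done":
            done[record[1]] = record[2]
        elif record[0] == "end":
            ended.add(record[1])
    return {txn: actions[done[txn] + 1:]
            for txn, actions in started.items() if txn not in ended}

log = []
run_pending_actions(log, "T1", [("remove-file-A", lambda: None)])
log.append(("pa_start", "T2", ["remove-file-B", "remove-file-C"]))
log.append(("pa_done", "T2", 0))          # T2 crashed before finishing
print(unfinished_pending_actions(log))    # {'T2': ['remove-file-C']}
```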

Recovery Schemes for Spatial Data Update Transactions in Client-Server Computing Environments (클라이언트-서버 환경에서 공간 데이터의 변경 트랜잭션을 위한 회복 기법)

  • 박재관;최진오;홍봉희
    • Journal of KIISE:Databases
    • /
    • v.30 no.1
    • /
    • pp.64-79
    • /
    • 2003
  • In client-server computing environments, update transactions of spatial data have the following characteristics. First, a transaction to update maps needs interactive work, and therefore it may take a long time to finish. Second, a long transaction should be allowed to read dirty data to enhance the parallelism of executing concurrent transactions; when such a transaction is rolled back, the cascading rollback of all of its dependent transactions should be guaranteed. Finally, two spatial objects may have a weak dependency constraint, called the spatial relationship, based on geometric topology. The existing recovery approaches cannot be directly applied to this environment, due to the high rollback cost and the overhead of cascading rollbacks. Furthermore, the previous approaches cannot guarantee data integrity because the spatial relationship, a new consistency constraint of spatial data, is not considered. This paper presents new recovery schemes for update transactions of spatial data. To guarantee data integrity, this paper defines recovery dependency as a condition for cascading rollbacks. Partial rollback is also suggested to solve the problem of high rollback cost. The recovery schemes proposed in this paper can remove unnecessary cascading rollbacks by using undo-delta, partial-redo, and partial-undo. Finally, the schemes are shown to ensure correctness.
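
To make the notion of recovery dependency above concrete, here is a toy sketch (not the paper's scheme) that records which transactions read another transaction's uncommitted updates and, when a transaction aborts, returns the set of dependents that would have to be rolled back with it. Everything here is hypothetical.

```python
from collections import defaultdict

class DependencyTracker:
    """Toy tracker of read-from dependencies on uncommitted (dirty) spatial objects."""

    def __init__(self):
        self._writers = {}                    # object_id -> uncommitted writer txn
        self._dependents = defaultdict(set)   # writer txn -> txns that read its dirty data

    def write(self, txn, object_id):
        self._writers[object_id] = txn

    def read(self, txn, object_id):
        writer = self._writers.get(object_id)
        if writer is not None and writer != txn:
            # txn read dirty data written by writer: a rollback of writer must cascade.
            self._dependents[writer].add(txn)

    def cascading_rollback_set(self, txn):
        """All transactions that must roll back if txn rolls back."""
        to_visit, result = [txn], set()
        while to_visit:
            current = to_visit.pop()
            for dependent in self._dependents.get(current, ()):
                if dependent not in result:
                    result.add(dependent)
                    to_visit.append(dependent)
        return result

tracker = DependencyTracker()
tracker.write("T1", "road-17")
tracker.read("T2", "road-17")    # T2 reads T1's dirty update
print(tracker.cascading_rollback_set("T1"))   # {'T2'}
```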

Avoidance-Based Cache Consistency Technique Using an Asynchronous Write Intension Declaration (비동기적 갱신 선언을 이용한 회피-기반 캐쉬 일관성 유지 기법)

  • Jang, Chang-Bok;Cho, Sung-Hoon;Kang, Woo-Suck;Kim, Dong-Hyuk;Lee, Chan-Seob;Park, Yong-Moon;Choi, Eui-In
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.04a
    • /
    • pp.23-26
    • /
    • 2002
  • As client/server database system environments have become widespread, many techniques have been proposed for maintaining the consistency of client-cached data. Existing avoidance-based techniques have been studied on the basis of the CB (Callback) technique, which declares write intentions synchronously, and the O2PL (Optimistic 2-Phase Locking) technique, which defers them. This paper proposes an avoidance-based cache consistency technique in which write intentions are declared to the server asynchronously. Because the proposed technique declares write intentions asynchronously, transactions proceed without waiting for the server's response, which yields good performance and a low transaction abort rate. (An illustrative code sketch follows this entry.)

  • PDF
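
As a rough illustration of the asynchronous declaration idea translated above (not the protocol proposed in the paper), the client below pushes a write-intention message onto a queue handled by a background thread and continues its transaction immediately instead of waiting for the server's reply. All names are hypothetical.

```python
import queue
import threading

class AsyncIntentionClient:
    """Toy client that declares write intentions to the server asynchronously."""

    def __init__(self, server):
        self._server = server
        self._outbox = queue.Queue()
        threading.Thread(target=self._sender, daemon=True).start()

    def _sender(self):
        while True:
            txn_id, page_id = self._outbox.get()
            # The server is notified here, in the background; the client never blocked.
            self._server.declare_write_intention(txn_id, page_id)
            self._outbox.task_done()

    def update_cached_page(self, txn_id, page_id, new_value, cache):
        self._outbox.put((txn_id, page_id))   # asynchronous intention declaration
        cache[page_id] = new_value            # continue the transaction immediately

class FakeServer:
    def declare_write_intention(self, txn_id, page_id):
        print(f"server noted write intention: {txn_id} -> {page_id}")

cache = {}
client = AsyncIntentionClient(FakeServer())
client.update_cached_page("T1", "page-9", "new value", cache)
client._outbox.join()   # demo only: wait so the background send is visible before exit
```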

A Vertical File Partitioning Method Allowing Attribute Replications (속성 중복을 허용한 파일 수직분할 방법)

  • 유종찬;김재련
    • The Journal of Information Technology and Database
    • /
    • v.4 no.2
    • /
    • pp.3-19
    • /
    • 1998
  • One of the important factors in improving the performance of a relational database is the number of disk accesses required to move data from disk to main memory in order to process transactions. This study investigates a method that partitions a relation while allowing replicated assignment of attributes and stores the partitions on disk as fragments, so that only the necessary fragments are accessed and the number of disk accesses is reduced when processing transactions. A mathematical model of the vertical partitioning problem that minimizes the number of disk accesses while allowing attribute replication is formulated as a 0-1 integer program, considering both retrieval and update transactions. A branch-and-bound method is proposed as an exact solution procedure, and, since solving large problems with branch and bound takes a long time, an initial processing method and a cost computation method are proposed to reduce the amount of computation. The solutions obtained with attribute replication showed fewer disk accesses than those obtained without replication, and the number of replicated attributes decreased as the number of update transactions increased.
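
The model described above assigns attributes to fragments, possibly replicating them, so as to minimize disk accesses over retrieval and update transactions. The brute-force sketch below is only a toy stand-in for the paper's 0-1 integer program and branch-and-bound procedure: it enumerates every replication-allowed assignment of a few attributes to two fragments and scores each by how many fragments the transactions must touch. The cost model and all names are invented.

```python
from itertools import product

FRAGMENTS = (0, 1)

def fragments_touched(assignment, attrs, is_update):
    """Fragments a transaction must access, given where each attribute is stored."""
    if is_update:
        # An update must write every replica of every attribute it changes.
        return len({f for a in attrs for f in assignment[a]})
    # A retrieval is satisfied by one fragment if some fragment holds all its attributes.
    for f in FRAGMENTS:
        if all(f in assignment[a] for a in attrs):
            return 1
    return len(FRAGMENTS)

def best_partition(attributes, transactions):
    choices = [frozenset(s) for s in ({0}, {1}, {0, 1})]   # {0, 1} means replicated
    best = None
    for combo in product(choices, repeat=len(attributes)):
        assignment = dict(zip(attributes, combo))
        cost = sum(freq * fragments_touched(assignment, attrs, is_update)
                   for attrs, freq, is_update in transactions)
        if best is None or cost < best[0]:
            best = (cost, assignment)
    return best

attrs = ["name", "salary", "dept"]
txns = [({"name", "dept"}, 10, False),    # frequent retrieval transaction
        ({"name"}, 3, True)]              # occasional update transaction
print(best_partition(attrs, txns))        # minimal weighted fragment accesses
```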

A Dynamic Transaction Routing Algorithm with Primary Copy Authority (주사본 권한을 이용한 동적 트랜잭션 분배 알고리즘)

  • Kim, Ki-Hyung;Cho, Hang-Rae;Nam, Young-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.10D no.7
    • /
    • pp.1067-1076
    • /
    • 2003
  • Database sharing system (DSS) refers to a system for high performance transaction processing. In a DSS, the processing nodes are locally coupled via a high speed network and share a common database at the disk level. Each node has a local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. In this paper, we propose a dynamic transaction routing algorithm to balance the load of each node in the DSS. The proposed algorithm is novel in the sense that it can support node-specific locality of reference by utilizing the primary copy authority assigned to each node; hence, it can achieve better cache hit ratios and thus fewer disk I/Os. Furthermore, the proposed algorithm avoids a specific node being overloaded by considering the current workload of each node. To evaluate the performance of the proposed algorithm, we develop a simulation model of the DSS and then analyze the simulation results. The results show that the proposed algorithm outperforms the existing algorithms in the transaction processing rate. In particular, the proposed algorithm shows better performance when the number of concurrently executed transactions is high and the data page access patterns of the transactions are not equally distributed.
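
As a hedged illustration of the routing idea in the last entry (not the algorithm evaluated in the paper), the function below prefers the node holding primary copy authority for the pages a transaction will touch, but falls back to the least-loaded node when that node is already overloaded. The overload threshold and all names are assumptions.

```python
from collections import Counter

def route_transaction(pages, primary_copy, node_load, overload_factor=1.5):
    """pages: page ids the transaction will access.
    primary_copy: page id -> node holding primary copy authority for that page.
    node_load: node -> number of currently running transactions."""
    # Favor the node that owns the primary copies of most accessed pages,
    # since its buffer is most likely to already cache them.
    owners = Counter(primary_copy[p] for p in pages if p in primary_copy)
    average_load = sum(node_load.values()) / len(node_load)
    for node, _ in owners.most_common():
        if node_load[node] <= overload_factor * average_load:
            return node
    # Every affinity candidate is overloaded: balance the load instead.
    return min(node_load, key=node_load.get)

primary_copy = {"p1": "node-A", "p2": "node-A", "p3": "node-B"}
node_load = {"node-A": 12, "node-B": 3}
print(route_transaction(["p1", "p2", "p3"], primary_copy, node_load))
# 'node-B' -- node-A has the page affinity but is overloaded, so load balancing wins
```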