• Title/Summary/Keyword: replicated server


Concurrency Control and Consistency Maintenance of Cached Spatial Data in Client-Server Environment (클라이언트-서버 환경에서 캐쉬된 공간 데이터의 동시성 제어 및 일관성 유지 기법)

  • Shin, Young-Sang;Hong, Bong-Hee
    • Journal of KIISE:Databases / v.28 no.3 / pp.512-527 / 2001
  • In a client-server spatial database, it is desirable to keep cached data on the client side to minimize communication overhead across the network. This paper deals with the concurrency and consistency of map updates in such an environment. A client transaction that updates map data is interactive and long-running, and an update made at one client site may affect updates at other sites because of dependencies between spatial data stored at different sites. Concurrent updates must be propagated to the other clients as well as to the server to keep the map replicated in each client cache consistent, while the communication overhead of this propagation must stay small enough not to lose the benefit of caching. The newly proposed cache region locking with CR and CX locks controls update dependencies caused by spatial relationships, and CS and COD locks are suggested for optimistic, detection-based approaches that guarantee the consistency of cached client data. The cooperative update protocol combines these extended locking primitives with Spatial Relationship-based 2PC (SR-based 2PC). The paper argues that concurrent updates of cached client spatial data can be carried out by deciding between collaborative and independent updates based on spatial relationships; a hedged sketch of such a decision step appears after this entry.

  • PDF
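The abstract above names the CR, CX, CS, and COD locks and SR-based 2PC, but not the protocol details. The Python sketch below is only a rough illustration of the decision it describes, choosing between an independent update and a collaborative one from spatial relationships; the CacheRegion structure, the bounding-box intersects() test, and the plan_update() helper are assumptions made for this example, not the paper's actual design.

```python
# Hedged sketch: CR/CX lock names come from the abstract; the data structures,
# the intersects() predicate, and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CacheRegion:
    client_id: str
    bbox: tuple  # (min_x, min_y, max_x, max_y) of the map region cached by a client

def intersects(a: tuple, b: tuple) -> bool:
    """Axis-aligned bounding-box overlap, standing in for a real spatial predicate."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def plan_update(update_bbox: tuple, my_client: str, regions: list[CacheRegion]) -> dict:
    """Decide between an independent update (CX-style lock) and a collaborative
    update (CR-style locks plus SR-based 2PC) based on spatial relationships."""
    affected = [r.client_id for r in regions
                if r.client_id != my_client and intersects(update_bbox, r.bbox)]
    if not affected:
        # No other cache region overlaps: update independently under an exclusive lock.
        return {"mode": "independent", "lock": "CX", "participants": [my_client]}
    # Overlapping cache regions imply update dependencies: coordinate those clients.
    return {"mode": "collaborative", "lock": "CR",
            "protocol": "SR-based 2PC", "participants": [my_client, *affected]}

if __name__ == "__main__":
    regions = [CacheRegion("client-A", (0, 0, 10, 10)),
               CacheRegion("client-B", (8, 8, 20, 20))]
    print(plan_update((9, 9, 12, 12), "client-A", regions))
```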

Data Replicas Relocation Strategy in Mobile Computing System Environment (이동 컴퓨팅 시스템 환경에서 데이터 복제 재배치 기법)

  • Choe, Gang-Hui;Jo, Tae-Nam
    • The Transactions of the Korea Information Processing Society / v.6 no.10 / pp.2587-2596 / 1999
  • Recently, advances in technologies such as wireless LAN, wireless telecommunication networks, and satellite services have made it possible for mobile computer users to access databases. Methods that replicate a database on servers so that users can obtain up-to-date data without losing information have been studied. So far, Static Replica Allocation (SRA) has been used, in which data are replicated on a replica server after a mobile host moves into a cell. As long as the network is reliable and there are few moving users, SRA causes no problems, but when no mobile user remains in a cell, the data replicated there are no longer shared. This paper therefore studies a method that relocates replicas after replicating data to the cells selected for their users (User Select Replica Allocation, USRA). We also analyze the access rate and probability, which are closely related to the movement frequency of the mobile hosts and the number of cells. As a result, we show that a 120% lower access cost and 40%∼50% gains are achieved under low mobility. A rough illustrative sketch of this relocation idea follows this entry.

  • PDF
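Only the high-level idea of USRA is given in the abstract, so the sketch below is an assumption-laden illustration of replica relocation: a replica follows mobile hosts from cell to cell and is dropped from cells that no longer serve any host. The ReplicaManager class, its host_moved() method, and the cell and host identifiers are invented for this example and are not the paper's algorithm.

```python
# Hedged sketch: the relocation rule below (replica follows the users' cells)
# is an illustrative assumption, not the USRA algorithm from the paper.
from collections import defaultdict

class ReplicaManager:
    def __init__(self):
        self.hosts_in_cell = defaultdict(set)  # cell id -> ids of mobile hosts in it
        self.replica_cells = set()             # cells currently holding a replica

    def host_moved(self, host: str, old_cell: str | None, new_cell: str) -> None:
        """Track a handoff and relocate replicas toward cells that actually have users."""
        if old_cell is not None:
            self.hosts_in_cell[old_cell].discard(host)
            # Drop the replica from a cell that no longer serves any mobile host.
            if not self.hosts_in_cell[old_cell]:
                self.replica_cells.discard(old_cell)
        self.hosts_in_cell[new_cell].add(host)
        # Keep (or place) a replica where the user now is, so access stays local.
        self.replica_cells.add(new_cell)

if __name__ == "__main__":
    rm = ReplicaManager()
    rm.host_moved("host-1", None, "cell-A")
    rm.host_moved("host-1", "cell-A", "cell-B")
    print(rm.replica_cells)  # {'cell-B'}: the replica has followed the user
```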

A Dual Processing Load Shedding to Improve The Accuracy of Aggregate Queries on Clustering Environment of GeoSensor Data Stream (클러스터 환경에서 GeoSensor 스트림 데이터의 집계질의의 정확도 향상을 위한 이중처리 부하제한 기법)

  • Ji, Min-Sub;Lee, Yeon;Kim, Gyeong-Bae;Bae, Hae-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.1 / pp.31-40 / 2012
  • u-GIS DSMSs have been studied to handle the diverse sensor data produced by GeoSensors in a ubiquitous environment, and high availability has become increasingly important for them. GeoSensor data can grow explosively, which may cause memory overflow and data loss. To solve this problem, various load shedding methods have been studied. Traditional methods drop overloaded tuples according to a particular criterion on a single server, so deletion-sensitive queries such as aggregation struggle to remain accurate. In this paper, a dual processing load shedding method is proposed to improve the accuracy of aggregate queries in a clustering environment. Two nodes hold replicated stream data for high availability and exploit the fact that they share the stream to process it on both nodes; the stream is synchronized between them in window units, and the processed results are then merged. We obtain improved query accuracy without data loss. A minimal sketch of this window split-and-merge idea appears below.
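The abstract describes two nodes that hold the same stream windows, divide the processing between them, and merge the results, but not the exact split or merge rules. The Python below is therefore a hedged sketch under assumed details: each synchronized window is split between the two nodes in an interleaved fashion, and the per-node (sum, count) partials are merged to answer an average query; split_window(), partial_aggregate(), and merge() are names invented for this illustration.

```python
# Hedged sketch: the interleaved split and the (sum, count) merge are
# illustrative assumptions, not the paper's exact dual-processing scheme.

def split_window(window: list[float], node_count: int = 2) -> list[list[float]]:
    """Both nodes hold the replicated window; each takes every n-th tuple,
    so shedding load on one node does not delete tuples from the query's view."""
    return [window[i::node_count] for i in range(node_count)]

def partial_aggregate(tuples: list[float]) -> tuple[float, int]:
    """Per-node partial result for an average query: (sum, count)."""
    return sum(tuples), len(tuples)

def merge(partials: list[tuple[float, int]]) -> float:
    """Merge the nodes' partial (sum, count) results into the window's average."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count if count else 0.0

if __name__ == "__main__":
    window = [3.0, 5.0, 7.0, 9.0, 11.0]   # one synchronized window of sensor readings
    partials = [partial_aggregate(part) for part in split_window(window)]
    print(merge(partials))                # 7.0, identical to averaging the full window
```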