• Title/Summary/Keyword: Data replication


Efficient Method for Improving Data Accessibility in VANET (VANET환경에서의 효율적인 데이터 접근성 향상기법)

  • Shim, Kyu-Sun;Lee, Myong-Soo;Lee, Sang-Keun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.8 no.1 / pp.65-75 / 2009
  • A Vehicular Ad-hoc Network (VANET) is a form of mobile ad-hoc network that provides temporary communication among nearby vehicles. A mobile node in a VANET consumes energy and resources by participating as a member of the network, so some nodes tend to behave selfishly, placing their own profit above cooperation with others. Such selfish nodes reduce data accessibility and network efficiency. In this paper, we propose a novel method, Friendship-VaR, that excludes selfish nodes from a VANET group. Friendship-VaR improves data accessibility by eliminating selfish nodes and sharing data among reliable nodes, and it determines the selfishness of a node through a simple data exchange. Experiments show that the proposed method outperforms the existing method in terms of data accessibility.
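The exclusion idea in the abstract can be sketched as follows. This is only a hypothetical illustration of detecting selfishness by simple data exchange: the probe bookkeeping, the reply-ratio threshold, and the function names are assumptions, not the paper's actual Friendship-VaR criteria.

```python
# Hypothetical sketch: a node is flagged selfish when it answers too few of
# the probe data exchanges directed at it. Threshold is illustrative.

def detect_selfish(nodes, probes, min_reply_ratio=0.5):
    """probes maps a node to a list of booleans (True = probe answered)."""
    selfish = set()
    for node in nodes:
        replies = sum(1 for answered in probes.get(node, []) if answered)
        total = len(probes.get(node, []))
        if total == 0 or replies / total < min_reply_ratio:
            selfish.add(node)
    return selfish

def reliable_group(nodes, probes):
    """Data is then shared only among nodes not flagged as selfish."""
    return [n for n in nodes if n not in detect_selfish(nodes, probes)]
```

A node that never responds (or was never reachable) is treated as selfish here; a real protocol would need to distinguish unreachability from refusal.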


Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, storage that can hold large amounts of unstructured data has recently become increasingly important. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received much attention for their scale-out capability and low cost. For data fault tolerance, most of these file systems initially used replication, but as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has come to be seen as a problem. This paper applies an erasure-coding fault-tolerance policy to MAHA-FS for high space efficiency and introduces the VDelta technique to solve the data-consistency problem. We compare the performance of two file systems with different IO-processing architectures: MAHA-FS, which is server-centric, and GlusterFS, which is client-centric. We found that the erasure-coding performance of MAHA-FS is better than that of GlusterFS.
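The space-efficiency argument behind erasure coding can be illustrated with a minimal single-parity sketch: k data blocks are stored with one XOR parity block (k+1 blocks total, versus 2k or 3k under 2x/3x replication), and any one lost block can be rebuilt. Production systems such as MAHA-FS use Reed-Solomon-style codes tolerating multiple failures; this XOR example only shows the principle.

```python
# Single-parity erasure coding sketch: tolerate the loss of any ONE block.

def encode(blocks):
    """Return the data blocks plus one XOR parity block (all equal length)."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return blocks + [parity]

def recover(stripe, lost_index):
    """Rebuild a single lost block by XOR-ing all surviving blocks."""
    rebuilt = bytes(len(stripe[0]))
    for i, b in enumerate(stripe):
        if i != lost_index:
            rebuilt = bytes(x ^ y for x, y in zip(rebuilt, b))
    return rebuilt
```

For 3 data blocks this stores 4 blocks instead of the 9 that triple replication would need, at the cost of a decode computation on failure.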

A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System

  • Liao, Jianwei;Peng, Xiaoning
    • Journal of Computing Science and Engineering / v.11 no.3 / pp.92-99 / 2017
  • Memcached, commonly used to speed up data access in big-data and Internet-web applications, is system software implementing a distributed-cache mechanism. It is, however, subject to the severe challenge of losing recently uncommitted updates when Memcached servers crash. Although the replica scheme and the disk-log-based replay mechanism have been proposed to overcome this problem, they incur either replica-synchronization overhead or the persistent-storage overhead caused by flushing the related logs. This paper proposes a scheme of backing up write requests (i.e., set and add) on the Memcached client side, to reduce the overhead of making disk-log records or maintaining replica consistency. If a Memcached server fails, a timestamp-based recovery mechanism replays the write requests buffered by the relevant clients to regain the lost updates on the rebooted server, thereby meeting the data-consistency requirement. More importantly, compared with logging write requests to the persistent storage of the master server and with the server-replication scheme, the proposed client-side backup approach can decrease the time overhead by up to 116.8% when processing write workloads.
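The client-side backup-and-replay idea can be sketched as below. The class, its in-memory stand-in for the server, and the recovery interface are illustrative assumptions; the paper's actual protocol and data layout are not given in the abstract.

```python
import time

# Sketch: buffer write requests (set/add) with timestamps on the client, and
# after a server crash replay those newer than the server's last checkpoint.

class BackupClient:
    def __init__(self):
        self.log = []     # (timestamp, op, key, value) kept on the client
        self.cache = {}   # stand-in for the remote Memcached server

    def set(self, key, value):
        self.log.append((time.time(), "set", key, value))
        self.cache[key] = value

    def replay(self, since):
        """Simulate server reboot, then re-apply buffered writes newer than `since`."""
        self.cache = {}   # the rebooted server comes up empty
        for ts, op, key, value in self.log:
            if ts > since and op in ("set", "add"):
                self.cache[key] = value
        return self.cache
```

Replaying in timestamp order makes the recovery idempotent for `set`; handling `add` semantics (fail if the key exists) would need the extra check a real implementation performs against the rebuilt cache.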

The Mechanism of Poly I:C-Induced Antiviral Activity in Peritoneal Macrophage

  • Pyo, Suh-Kenung
    • Archives of Pharmacal Research / v.17 no.2 / pp.93-99 / 1994
  • Macrophages play an important role in defense against virus infection through both intrinsic and extrinsic resistance. Since the interferon-induced enzymes 2'-5' oligoadenylate synthetase and the p1/eIF-2 protein kinase have been shown to be involved in the inhibition of viral replication, I examined the mechanism by which poly I:C, an interferon inducer, exerts its antiviral effects in inflammatory macrophages infected with herpes simplex virus type 1 (HSV-1). The data presented here demonstrate that poly I:C-induced antiviral activity is partially due to the activation of 2'-5' oligoadenylate synthetase, and that this activation is at least in part mediated by the production of interferon-β. Taken together, these data indicate that interferon-β produced in response to poly I:C acts in an autocrine manner to activate 2'-5' oligoadenylate synthetase and to induce resistance to HSV-1.


A Study on Real Time Asynchronous Data Duplication Method for the Combat System (전투체계 시스템을 위한 실시간 환경에서의 비동기 이중화 기법 연구)

  • Lee, Jae-Sung;Ryu, Jon-Ha
    • Journal of the Korea Institute of Military Science and Technology / v.10 no.2 / pp.61-68 / 2007
  • In a naval combat system, the information processing node is a key piece of equipment that performs major combat-management functions, including controlling sensor and weapon systems. A failure of one of these nodes therefore has a fatal impact on overall combat-system capability. Many methodologies, such as fault-tolerant methods, have been proposed to enhance system availability by reducing the impact of system failures. This paper proposes a fault-tolerance mechanism for the information processing node that uses a replication algorithm with hardware duplication. The mechanism is designed as a generic algorithm and does not require any special hardware, so every application in the combat system can use it. Its asynchronous characteristic makes the algorithm adaptable even to modules running on low-performance hardware.
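The asynchronous duplication described above can be sketched with an active node that only enqueues state updates, never blocking on the standby, which applies them at its own pace. The node model, queue, and names are illustrative assumptions, not the paper's design.

```python
from collections import deque

# Sketch of asynchronous state duplication between an active node and a
# standby: the active node never waits for the (possibly slower) standby.

class ActiveNode:
    def __init__(self):
        self.state = {}
        self.outbox = deque()

    def update(self, key, value):
        self.state[key] = value
        self.outbox.append((key, value))   # non-blocking: just enqueue

class StandbyNode:
    def __init__(self):
        self.state = {}

    def drain(self, outbox):
        """Apply queued updates in order, whenever the standby gets to them."""
        while outbox:
            key, value = outbox.popleft()
            self.state[key] = value
```

Because updates are applied in FIFO order, the standby converges to the active node's state once it drains the queue; the trade-off versus synchronous duplication is that updates still in the queue are lost if the active node fails.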

Stochastic simulation based on copula model for intermittent monthly streamflows in arid regions

  • Lee, Taesam;Jeong, Changsam;Park, Taewoong
    • Proceedings of the Korea Water Resources Association Conference / 2015.05a / pp.488-488 / 2015
  • Intermittent streamflow is a common phenomenon in arid and semi-arid regions. Stochastic simulation data are essential for managing the water resources of intermittent streams, but seasonal stochastic modeling of intermittent streamflow is a difficult task. In this study, we simulate intermittent monthly streamflow using a periodic Markov chain model for occurrence, and the periodic gamma autoregressive (PGAR) and copula models for amount. The copula models were tested in a previous study for the simulation of yearly streamflow, successfully replicating the key and operational statistics of historical data, but they had never been tested on a monthly time scale. The intermittent models were applied to the Colorado River system. A few drawbacks of the PGAR model were identified, such as significant underestimation of minimum values on an aggregated yearly time scale and restrictions on the parameter boundaries. The copula models do not present such drawbacks and reproduce the key and operational statistics well. We conclude that the periodic Markov chain combined with the copula models is a practicable method for simulating intermittent monthly streamflow time series.
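The occurrence component alone can be sketched as a periodic two-state Markov chain: each month has its own wet-after-wet and wet-after-dry transition probabilities. The probabilities below are made up for illustration; the paper fits them to Colorado River data.

```python
import random

# Periodic two-state Markov chain for monthly flow occurrence (0 = dry, 1 = wet).
# p_wet_given_wet[m] and p_wet_given_dry[m] are month m's transition probabilities.

def simulate_occurrence(p_wet_given_wet, p_wet_given_dry, n_years, seed=1):
    rng = random.Random(seed)   # seeded for reproducibility
    state, series = 0, []
    for _ in range(n_years):
        for m in range(12):
            p = p_wet_given_wet[m] if state == 1 else p_wet_given_dry[m]
            state = 1 if rng.random() < p else 0
            series.append(state)
    return series
```

In the full model, each simulated wet month would then receive an amount drawn from the PGAR or copula model; dry months stay at zero, which is what makes the series intermittent.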


Change Reconciliation on XML Repetitive Data (XML 반복부 데이터의 변경 협상 방법)

  • Lee Eunjung
    • The KIPS Transactions:PartA / v.11A no.6 / pp.459-468 / 2004
  • Sharing XML trees on mobile devices has become more and more popular, and optimistic replication of XML trees on mobile devices raises the need to reconcile concurrently modified data. Reconciling modified tree structures in particular requires comparing trees by node mapping, which takes O($n^2$) time, and semantics-based conflict-resolution policies are often discussed in the literature. This research focuses on an efficient reconciliation method for mobile environments that uses the edit scripts of the XML data sent from each device. To keep the model simple for mobile devices, we use an XML list-data sharing model that allows inserting or deleting subtrees only in the repetitive parts of the tree, as determined by the document type. We also use keys for the repetitive-part subtrees; keys are unique among nodes with the same parent. This model not only guarantees that every edit action results in a valid tree but also admits a linear-time reconciliation algorithm based on key-based list reconciliation. The proposed algorithm takes time linear in the length of the edit scripts, assuming there are no insertion key conflicts. Since previous methods take time linear in the size of the tree, the proposed method is expected to provide a more efficient reconciliation model for mobile environments.
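Key-based reconciliation of the repetitive parts can be sketched as merging two edit scripts over a keyed list, in time linear in the combined script length. The script representation (tuples of insert/delete actions) and the conflict check are illustrative assumptions, not the paper's actual format.

```python
# Sketch: merge two devices' edit scripts over a keyed repetitive part.
# Each script is a list of ("insert", key, subtree) or ("delete", key) actions.

def reconcile(base, script_a, script_b):
    items = dict(base)                                   # key -> subtree
    inserted_a = {k for op, k, *_ in script_a if op == "insert"}
    for script in (script_a, script_b):
        for op, key, *rest in script:
            if op == "insert":
                # the linear-time guarantee assumes no insertion key conflict
                if script is script_b and key in inserted_a:
                    raise ValueError("insert key conflict: " + key)
                items[key] = rest[0]
            elif op == "delete":
                items.pop(key, None)
    return items
```

Because keys are unique among siblings, each action touches one dictionary entry, so no O(n^2) node mapping between the two trees is ever needed.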

Data Replicas Relocation Strategy in Mobile Computing System Environment (이동 컴퓨팅 시스템 환경에서 데이터 복제 재배치 기법)

  • Choe, Gang-Hui;Jo, Tae-Nam
    • The Transactions of the Korea Information Processing Society / v.6 no.10 / pp.2587-2596 / 1999
  • Recently, with the spread of technologies such as wireless LAN, wireless telecommunication networks and satellite services have made it possible for mobile computer users to access databases. Methods of using a replicated database on a server, so that new data can be obtained without missing any information, have been studied. The replication method used so far is Static Replica Allocation (SRA), which replicates data on a replica server after a mobile host has moved into a cell. If the network is good and there are few moving users, no trouble arises, but when there are no moving users in a cell, the data are not shared. This paper therefore studies a method of relocating data after replicating it to the cells selected by the users (User Select Replica Allocation, USRA). We also analyze the access rate and cost, which are closely related to the mobility frequency of the mobile hosts and the number of cells. As a result, we show that under low mobility USRA achieves 120% lower access cost and 40%∼50% gains.
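The relocation idea can be sketched as moving a data item's replica toward the cell where it is actually accessed, instead of pinning it statically. The bookkeeping, the most-accesses placement rule, and all names below are hypothetical illustrations, not the USRA algorithm itself.

```python
# Sketch: relocate each item's replica to the cell with the most accesses,
# approximating "place replicas where the mobile users actually are".

class ReplicaManager:
    def __init__(self):
        self.replica_cell = {}    # item -> cell currently holding its replica
        self.access_counts = {}   # (item, cell) -> number of accesses

    def access(self, item, cell):
        key = (item, cell)
        self.access_counts[key] = self.access_counts.get(key, 0) + 1

    def relocate(self, item):
        """Move the item's replica to its most-accessing cell, if any."""
        counts = {c: n for (i, c), n in self.access_counts.items() if i == item}
        if counts:
            self.replica_cell[item] = max(counts, key=counts.get)
        return self.replica_cell.get(item)
```

Under low mobility, access counts concentrate in few cells, so relocation converges quickly; under high mobility the counts spread out and relocation pays off less, matching the abstract's finding.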


A COMPARATIVE STUDY ON BLOCKCHAIN DATA MANAGEMENT SYSTEMS: BIGCHAINDB VS FALCONDB

  • Abrar Alotaibi;Sarah Alissa;Salahadin Mohammed
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.128-134 / 2023
  • The widespread use of blockchain technology in cryptocurrencies has led to the adoption of the blockchain concept in data-storage management systems for secure and effective data storage and management. Several innovative studies have proposed solutions that integrate blockchain with distributed databases. In this article, we review current blockchain databases and then focus on two well-known ones, BigchainDB and FalconDB, to illustrate their architecture and design in more detail. BigchainDB is a distributed database that integrates blockchain properties to enhance immutability and decentralization while offering a high transaction rate, low latency, and accurate queries. Its architecture consists of three layers: the transaction layer, the consensus layer, and the data-model layer. FalconDB, on the other hand, is a shared database that allows multiple clients to collaborate on the database securely and efficiently even if they have limited resources; it has two layers, the authentication layer and the consensus layer, which handle client requests and results. Finally, a comparison of the two databases reveals that they share some characteristics, such as immutability, low latency, permissioning, horizontal scalability, decentralization, and the same consensus protocol, but vary in database type, concurrency mechanism, replication model, cost, and the use of smart contracts.

Methods to Enhance Service Scalability Using Service Replication and Migration (서비스 복제 및 이주를 이용한 서비스 확장성 향상 기법)

  • Kim, Ji-Won;Lee, Jae-Yoo;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications / v.37 no.7 / pp.503-517 / 2010
  • Service-oriented computing, an effective paradigm for developing service applications from reusable services, has become popular. In service-oriented computing, the service consumer takes no responsibility for managing services and simply invokes the services that providers offer; the providers, in turn, must manage all resources and data so that consumers can use a service anytime and anywhere. However, it is hard for providers to manage service quality because the number of consumers is unspecified, so service scalability, i.e., providing services at the quality levels specified in a service-level agreement, becomes a potential problem in service-oriented computing. There has been much research on scalability in the networking, database, and distributed-computing areas, but research on defining service scalability and on metrics for measuring it is still immature in service engineering. In this paper, we construct a service network that connects multiple service nodes and integrates all their resources for management, and we present a service-scalability framework that manages scalability through service migration and replication. Section 3 presents the structure of the scalability-management framework and its basic functionality. Section 4 proposes the scalability-enhancement mechanism needed to realize that functionality. Section 5 describes the design and implementation of the framework using the proposed mechanism. Section 6 demonstrates a case study that dynamically manages services in a multi-node environment, showing the applicability of our scalability-management framework and mechanism.
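The replicate-or-migrate choice at the heart of such a framework can be sketched as a simple policy function. The watermarks and the load model are invented for illustration; the paper's actual decision mechanism is not specified in the abstract.

```python
# Hypothetical policy sketch: replicate a service whose own load is high,
# migrate a light service away from an overloaded node, otherwise do nothing.
# Loads are normalized to [0, 1]; watermarks are illustrative.

def decide(service_load, node_load, high=0.8, low=0.3):
    if service_load > high:
        return "replicate"   # add another instance of the hot service
    if node_load > high and service_load < low:
        return "migrate"     # move the light service off the hot node
    return "stay"
```

A real framework would add hysteresis and SLA-derived thresholds so that services do not oscillate between nodes when load hovers near a watermark.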