• Title/Summary/Keyword: Recovery Log


Adaptive Link Recovery Period Determination Algorithm for Structured Peer-to-peer Networks (구조화된 Peer-to-Peer 네트워크를 위한 적응적 링크 복구 주기 결정 알고리듬)

  • Kim, Seok-Hyun;Kim, Tae-Eun
    • Journal of Digital Contents Society / v.12 no.1 / pp.133-139 / 2011
  • Structured P2P (peer-to-peer) networks have received much attention in research communities and industry. Data stored in a structured P2P network can be located in log-scale time without using central servers. The link structure of a structured P2P network must be maintained to keep this log-scale search performance. When nodes join or leave the network frequently, some links become unavailable and search performance is degraded. To sustain search performance, a periodic link recovery scheme is generally used. However, when the link recovery period is too short or too long relative to the node join and leave rates, either an insufficient number of links is restored or excessive maintenance messages are sent after the link structure has already been restored. We propose an adaptive link recovery period determination algorithm that maintains the link structure of structured P2P networks when the node join and leave rates change dynamically. The simulation results show that the proposed algorithm can maintain similar QoS under various node leaving rates.
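The core idea above is tuning the recovery period to the observed churn. A minimal sketch of one way such a controller could work is given below; the class name, thresholds, and scaling factors are illustrative assumptions, not the algorithm from the paper.

```python
# Hypothetical sketch of an adaptive link-recovery period controller for a
# structured P2P node. Thresholds and scaling factors are assumptions.

class AdaptiveRecoveryTimer:
    def __init__(self, period=30.0, min_period=5.0, max_period=300.0):
        self.period = period            # seconds between recovery rounds
        self.min_period = min_period
        self.max_period = max_period

    def next_period(self, checked_links, stale_links):
        """Adjust the period from the stale-link ratio observed in one round."""
        if checked_links == 0:
            return self.period
        stale_ratio = stale_links / checked_links
        if stale_ratio > 0.2:           # churn is high: recover more often
            self.period = max(self.min_period, self.period / 2)
        elif stale_ratio < 0.05:        # churn is low: save maintenance messages
            self.period = min(self.max_period, self.period * 1.5)
        return self.period

timer = AdaptiveRecoveryTimer()
print(timer.next_period(checked_links=32, stale_links=10))   # high churn -> 15.0
print(timer.next_period(checked_links=32, stale_links=1))    # low churn  -> 22.5
```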

A Recovery Scheme of Single Node Failure using Version Caching in Database Sharing Systems (데이타베이스 공유 시스템에서 버전 캐싱을 이용한 단일 노드 고장 회복 기법)

  • 조행래;정용석;이상호
    • Journal of KIISE: Databases / v.31 no.4 / pp.409-421 / 2004
  • A database sharing system (DSS) couples a number of computing nodes for high-performance transaction processing, and each node in a DSS shares the database at the disk level. When a node fails in a DSS, database recovery algorithms are required to bring the database back to a consistent state. A database recovery process in a DSS takes considerably longer than in a single-node database system, since it must merge the separate log records of several nodes and perform REDO tasks using the merged log records. In this paper, we propose a two-version caching (2VC) algorithm that improves on the cache fusion algorithm introduced in Oracle 9i Real Application Cluster (ORAC). The 2VC algorithm achieves faster database recovery by eliminating the need for merged log records in the case of a single node failure. Furthermore, it improves normal transaction processing performance by reducing the unnecessary disk force overhead that occurs in ORAC.
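As a rough illustration of the general two-version idea (keeping the last committed page image alongside the working copy so single-node recovery does not need merged logs), consider the sketch below. It is not the 2VC protocol itself; all names and the structure are assumptions.

```python
# Illustrative sketch only: a node keeps the last committed image of each page
# next to the working copy, so a single-node failure can be recovered from the
# committed image instead of replaying merged REDO logs.

class TwoVersionCache:
    def __init__(self):
        self.working = {}     # page_id -> uncommitted (dirty) image
        self.committed = {}   # page_id -> last committed image

    def write(self, page_id, data):
        self.working[page_id] = data

    def commit(self, page_id):
        # promote the working image to the committed version
        self.committed[page_id] = self.working.pop(page_id)

    def recover_page(self, page_id):
        # after a node fails mid-update, fall back to the committed image
        return self.committed.get(page_id)

cache = TwoVersionCache()
cache.write("P1", b"v1")
cache.commit("P1")
cache.write("P1", b"v2-in-flight")      # failure before commit
print(cache.recover_page("P1"))          # b'v1'
```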

Efficient Algorithms for Causal Message Logging and Recovery (인과적 메시지 로그 및 복구를 위한 효율적인 알고리즘)

  • Lee, Byeong-Ju;Park, Tae-Sun;Yeom, Heon-Yeong;Jo, Yu-Geun
    • Journal of KIISE: Computer Systems and Theory / v.26 no.7 / pp.767-777 / 1999
  • Causal message logging has many good properties, such as nonblocking message logging and no rollback propagation. However, it requires a large amount of information to be piggybacked on each message, which may incur severe performance degradation. This paper presents an efficient causal logging algorithm based on a new message log structure, LogOn, which represents the causal inter-process dependency relation with much smaller overhead than existing algorithms. The proposed algorithm is efficient in the sense that it carries no additional information other than LogOn in each message, while existing algorithms require extra information beyond the message logs. This paper also presents an efficient recovery algorithm that reduces the number and size of messages exchanged during recovery, and it is proven that the proposed algorithm is efficient in terms of message log size. To verify the performance of the algorithm, we give an analysis and perform two kinds of simulations, comparing the log size with other causal logging protocols.
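The general causal-logging mechanism the paper optimizes, piggybacking determinants on application messages so that receivers accumulate the logs they causally depend on, can be sketched as follows. The LogOn structure itself is not reproduced; the names and the unpruned piggyback set are simplifying assumptions.

```python
# Toy sketch of piggybacking message logs on application messages, the general
# mechanism behind causal logging. Names and fields are illustrative only.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.log = []            # determinants this process knows about
        self.seen = set()        # determinants already stored locally

    def send(self, payload):
        # piggyback every determinant we hold; a real protocol prunes this set
        # using the causal-dependency information it tracks
        return {"src": self.pid, "payload": payload, "piggyback": list(self.log)}

    def receive(self, msg):
        det_id = (msg["src"], msg["payload"])
        for det in msg["piggyback"] + [det_id]:
            if det not in self.seen:       # log only what we have not seen
                self.seen.add(det)
                self.log.append(det)
        return msg["payload"]

p, q = Process("P"), Process("Q")
q.receive(p.send("m1"))
print(q.log)    # [('P', 'm1')]
```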

Comparison of Remaining Data According to Deletion Events on Microsoft SQL Server (Microsoft SQL Server 삭제 이벤트의 데이터 잔존 비교)

  • Shin, Jiho
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.2 / pp.223-232 / 2017
  • Previous research on data recovery in Microsoft SQL Server has focused on restoring data from the transaction log, in which deleted records may still exist. However, this approach is not applicable when the relevant transaction log does not exist or the physical database file is not attached to the server. Since a suspect at a crime scene may delete data records using deletion statements other than "delete", we need to examine the remaining data and the possibility of recovering the deleted records. In this paper, we examined how the table's page allocation information, the unallocated deleted data, and the row offset array within the page change after "delete", "truncate", and "drop" events. Finally, we confirmed the possibility of data recovery and the availability of management tools in Microsoft SQL Server digital forensic investigations.
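For reference, these are the three deletion events the paper compares, issued here through pyodbc only to show what each statement removes at the storage level; the connection string, database, and table names are hypothetical.

```python
# The three deletion events compared above, issued against a hypothetical
# SQL Server database. Names and connection details are illustrative.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=localhost;DATABASE=EvidenceDB;Trusted_Connection=yes")
cur = conn.cursor()

cur.execute("DELETE FROM dbo.Suspect_Table WHERE id = 1")  # row-level delete; row slots are freed within the page
cur.execute("TRUNCATE TABLE dbo.Suspect_Table")            # deallocates the table's data pages
cur.execute("DROP TABLE dbo.Suspect_Table")                # removes the table object and its allocation metadata
conn.commit()
```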

Data Consistency-Control Scheme Using a Rollback-Recovery Mechanism for Storage Class Memory (스토리지 클래스 메모리를 위한 롤백-복구 방식의 데이터 일관성 유지 기법)

  • Lee, Hyun Ku;Kim, Junghoon;Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE / v.42 no.1 / pp.7-14 / 2015
  • Storage Class Memory (SCM) has been considered a next-generation storage device because it can serve both as memory and as storage. However, recently proposed file systems for SCM have significant data-consistency problems, such as insufficient consistency guarantees or excessive consistency-control overhead. This paper proposes a novel data consistency-control scheme that changes the write mode for log data depending on the ratio of modified data in a block, using a rollback-recovery scheme instead of the Write Ahead Logging (WAL) scheme. The proposed scheme reduces the log data size and the synchronization cost of maintaining data consistency. To evaluate the proposed scheme, we implemented it on a Linux 3.10.2-based system and measured its performance. The experimental results show that our scheme improves write throughput by 9 times on average compared to the legacy data consistency-control scheme.
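A minimal sketch of the write-mode switch described above, choosing between a delta-style log record and a full-block log record based on the modified-data ratio, is shown below. The 50% threshold, block size, and names are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: switch the log record format by how much of the block
# changed, so small updates produce small log records.

BLOCK_SIZE = 4096

def make_log_record(block_id, old_block, new_block, threshold=0.5):
    """Return a log record whose form depends on the modified-data ratio."""
    dirty = [i for i in range(BLOCK_SIZE) if old_block[i] != new_block[i]]
    ratio = len(dirty) / BLOCK_SIZE
    if ratio < threshold:
        # log only the changed bytes (small log, cheap rollback)
        deltas = [(i, old_block[i]) for i in dirty]
        return {"block": block_id, "mode": "delta", "undo": deltas}
    # most of the block changed: log the whole old image once
    return {"block": block_id, "mode": "full", "undo": bytes(old_block)}

old = bytearray(BLOCK_SIZE)
new = bytearray(BLOCK_SIZE)
new[0:8] = b"\x01" * 8
print(make_log_record(7, old, new)["mode"])   # 'delta'
```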

Metadata Log Management for Full Stripe Parity in Flash Storage Systems (플래시 저장 시스템의 Full Stripe Parity를 위한 메타데이터 로그 관리 방법)

  • Lim, Seung-Ho
    • The Journal of Korean Institute of Information Technology / v.17 no.11 / pp.17-26 / 2019
  • RAID-5 is a common choice for enhancing the reliability of flash storage devices. However, RAID-5 has an inherent parity update overhead; in particular, the parity overhead for partial stripe writes is one of the crucial issues for flash-based RAID-5. In this paper, we design an efficient parity log architecture for RAID-5 that eliminates the runtime partial-parity overhead. At runtime, partial parity is retained in buffer memory until the full stripe write completes, and the parity is written together with the full stripe write. In addition, the parity log is maintained in memory until the whole stripe group has been used for data writes; with this parity log, partial parity can be recovered after a power loss. In the experiments, the parity log method eliminates the partial parity write overhead at the cost of a small number of parity log writes, and thus reduces write amplification while providing the same reliability.
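The buffering idea can be sketched as follows: partial parity is accumulated in memory by XOR-ing incoming chunks and only written out once the stripe is full. The stripe geometry and names are assumptions, and the persistent parity log used in the paper for power-loss recovery is not modeled here.

```python
# Rough sketch of buffering partial parity until a stripe completes, in the
# spirit of the approach described above. Geometry and names are illustrative.

CHUNK = 4
STRIPE_WIDTH = 4                     # data chunks per stripe (parity excluded)

class StripeBuffer:
    def __init__(self):
        self.parity = bytearray(CHUNK)
        self.filled = 0

    def write_chunk(self, data):
        # accumulate partial parity in buffer memory instead of writing it out
        for i in range(CHUNK):
            self.parity[i] ^= data[i]
        self.filled += 1
        if self.filled == STRIPE_WIDTH:
            return bytes(self.parity)    # full stripe: parity is written now
        return None                      # partial stripe: no parity write yet

s = StripeBuffer()
for chunk in (b"\x01\x00\x00\x00", b"\x02\x00\x00\x00",
              b"\x04\x00\x00\x00", b"\x08\x00\x00\x00"):
    parity = s.write_chunk(chunk)
print(parity)   # b'\x0f\x00\x00\x00'
```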

Development of a Broadband High-Sensitivity DLVA (광대역 고감도 DLVA 개발)

  • 이두훈;김상진;김재연;조현룡;이정문;김상기
    • The Proceeding of the Korean Institute of Electromagnetic Engineering and Science / v.11 no.4 / pp.39-52 / 2000
  • A design of a two-stage S-DLVA (successive detector log video amplifier) was studied to detect radar pulses over a wide dynamic range from -70 dBm to 0 dBm. The basic design goal was linear detection, on a logarithmic scale, of radar pulses ranging from noise-like weak power of -70 dBm to a relatively high power of 0 dBm. This is highly demanding, since it requires detection faster than 10 ns over the operating frequency range of 6 to 18 GHz. A limiter diode, a tunnel diode, and an L17-C were used as the protection device, the detector diode, and the log video amplifier, respectively, forming a single-stage detector that gives a voltage output proportional to the input power over about a 35 dB dynamic range. A prototype of the two-stage DLVA, with one more single-stage detector, was fabricated together with a 32 dB low-noise amplifier and a 3 dB hybrid coupler to provide detection over a total 70 dB dynamic range. The logging characteristics were measured to be a log slope of 25 mV/dB over the 70 dB logging range from -55 dBm to +15 dBm, log linearity within +/- 1.5 dB, and a tangential sensitivity of -63 dBm. The pulse rise time and recovery time were measured as 50 ns and 1.2 µs, respectively; the latter is likely due to the parasitic capacitances of the packaged limiter, tunnel diode, and L17-C.
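Assuming the 25 mV/dB slope holds across the whole logging range, the reported numbers imply a video-output swing of:

```latex
% Video-output swing implied by the measured logging characteristics
\[
  25~\mathrm{mV/dB} \times \bigl(+15~\mathrm{dBm} - (-55~\mathrm{dBm})\bigr)
  = 25~\mathrm{mV/dB} \times 70~\mathrm{dB}
  = 1.75~\mathrm{V}
\]
```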


A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System

  • Liao, Jianwei;Peng, Xiaoning
    • Journal of Computing Science and Engineering / v.11 no.3 / pp.92-99 / 2017
  • Memcached, commonly used to speed up data access in big-data and Internet web applications, is system software implementing a distributed-cache mechanism. However, it faces the severe challenge of losing recently uncommitted updates when a Memcached server crashes for some reason. Although a replica scheme and a disk-log-based replay mechanism have been proposed to overcome this problem, they incur either replica-synchronization overhead or the persistent-storage overhead caused by flushing the related logs. This paper proposes a scheme that backs up the write requests (i.e., set and add) on the Memcached client side, to reduce the overhead of writing disk-log records or maintaining replica consistency. If a Memcached server fails, a timestamp-based recovery mechanism replays the write requests buffered by the relevant clients, regaining the lost data updates on the rebooted Memcached server and thereby meeting the data-consistency requirement. More importantly, compared with logging the write requests to the persistent storage of the master server and with the server-replication scheme, the proposed approach of backing up the logs on the client side can greatly decrease the time overhead, by up to 116.8%, when processing write workloads.
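A simplified sketch of the client-side backup idea follows: the client records its own set/add requests with timestamps and replays recent ones against a rebooted server. The wrapper is generic over any client object exposing set()/add(); the class name and the replay policy are assumptions rather than the paper's protocol.

```python
# Illustrative client-side write backup with timestamp-based replay.
import time

class BackedUpClient:
    def __init__(self, client):
        self.client = client                   # any object with set()/add()
        self.backlog = []                      # (timestamp, op, key, value)

    def set(self, key, value):
        self.backlog.append((time.time(), "set", key, value))
        return self.client.set(key, value)

    def add(self, key, value):
        self.backlog.append((time.time(), "add", key, value))
        return self.client.add(key, value)

    def replay_since(self, since_ts):
        """Re-apply buffered writes issued after since_ts, oldest first."""
        for ts, op, key, value in sorted(self.backlog, key=lambda e: e[0]):
            if ts >= since_ts:
                getattr(self.client, op)(key, value)
```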

An Efficient Recovery System for Spatial Main Memory DBMS (공간 메인 메모리 DBMS를 위한 효율적인 회복 시스템)

  • Kim, Joung-Joon;Ju, Sung-Wan;Kang, Hong-Koo;Hong, Dong-Sook;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.8 no.3 / pp.1-14 / 2006
  • Recently, interest in spatial main-memory DBMSs has been rising as a way to efficiently support the real-time requirements of LBS and telematics services. In a spatial main-memory DBMS, all spatial data can be lost when a system failure occurs, so the recovery system is very important for the stability of the database. In particular, disk I/O for logging and checkpointing becomes a bottleneck that drags down overall system performance. Therefore, research on recovery systems that reduce disk I/O in the spatial main-memory DBMS is urgently needed. In this paper, we study an efficient recovery system for the spatial main-memory DBMS. First, a pre-commit logging method is used to reduce disk I/O and improve transaction concurrency. In addition, we propose a fuzzy-shadow checkpoint method for the recovery system of the spatial main-memory DBMS. This method avoids the duplicated disk I/O on the same page that occurs in the existing fuzzy-pingpong checkpoint method, improving overall system performance. Finally, we report experimental results confirming the benefit of the proposed recovery system.
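As a sketch of pre-commit logging in general (not necessarily the exact variant used in the paper), a transaction appends its commit record to an in-memory log queue and releases its locks immediately, while a background flusher writes queued records to disk in batches before the commits are acknowledged:

```python
# Compact sketch of pre-commit logging in a main-memory DBMS. The paper's
# exact variant and its fuzzy-shadow checkpoint are not modeled; names are
# illustrative.
from collections import deque

class PreCommitLog:
    def __init__(self):
        self.queue = deque()          # commit records waiting for the flusher
        self.durable = []             # records already forced to disk (simulated)

    def pre_commit(self, txn_id, records):
        # transaction is pre-committed: its locks can be released now, but the
        # user acknowledgment waits until the flusher makes the record durable
        self.queue.append((txn_id, records))

    def flush_batch(self):
        acked = []
        while self.queue:             # one group write instead of one I/O per txn
            txn_id, records = self.queue.popleft()
            self.durable.append((txn_id, records))
            acked.append(txn_id)
        return acked                  # these transactions may now be acknowledged

log = PreCommitLog()
log.pre_commit("T1", ["update R1"])
log.pre_commit("T2", ["update R2"])
print(log.flush_batch())   # ['T1', 'T2']
```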


Injury and Recovery of Pathogenic Bacteria Isolated from Seafoods - Changes in the Viability of Staphylococcus aureus and Listeria monocytogenes in Some Fish Homogenates during Cold Storage - (해산물에서 분리된 식중독세균의 손상 및 회복 -생선 homogenate에서 Staphylococcus aureus와 Listeria monocytogenes의 저온저장중 세균수 변화 -)

  • 박찬성
    • Korean journal of food and cookery science / v.11 no.3 / pp.261-266 / 1995
  • The survival and growth of Staphylococcus aureus and Listeria monocytogenes in fish homogenates (flounder, shrimp, and oyster) and tryptic soy broth (TSB) were tested during storage at simulated ambient (35°C), refrigerated (5°C), and frozen (-20°C) temperatures. S. aureus showed a similar growth pattern at 35°C in the fish homogenates and TSB. Survival of S. aureus decreased at the refrigerated and frozen temperatures, and survival was greater at -20°C (0.3-1.2 log reduction over 6 weeks) than at 5°C (1-1.6 log reduction over 3 weeks). Viable cells of L. monocytogenes increased rapidly at 35°C in flounder homogenate, shrimp homogenate, and TSB, but only after a prolonged lag period in oyster homogenate. During 3 weeks of storage at 5°C, the levels of L. monocytogenes increased 3.8-5.0 log cycles in flounder homogenate, shrimp homogenate, and TSB, whereas they increased 2.2 log cycles in oyster homogenate. During 6 weeks of frozen storage, viable cells of L. monocytogenes decreased 1.5-1.8 log cycles in flounder homogenate, shrimp homogenate, and TSB, while they decreased 2.8 log cycles in oyster homogenate.
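Throughout this abstract, an n-log reduction means the viable count fell by a factor of 10^n; for instance, the 2.8-log decrease reported for L. monocytogenes in oyster homogenate corresponds to a surviving fraction of:

```latex
% Surviving fraction implied by a 2.8-log reduction
\[
  \frac{N}{N_0} = 10^{-2.8} \approx 1.6 \times 10^{-3} \quad (\text{about } 0.16\%)
\]
```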
