• Title/Summary/Keyword: Data Consistency


DJFS: Providing Highly Reliable and High-Performance File System with Small-Sized NVRAM

  • Kim, Junghoon;Lee, Minho;Song, Yongju;Eom, Young Ik
    • ETRI Journal
    • /
    • v.39 no.6
    • /
    • pp.820-831
    • /
    • 2017
  • File systems and applications implement their own update protocols to guarantee data consistency, which is one of the most crucial aspects of computing systems. However, we found that storage devices are substantially under-utilized when preserving data consistency, because doing so generates massive storage write traffic with many disk cache flush operations and force-unit-access (FUA) commands. In this paper, we present DJFS (Delta-Journaling File System), which provides both a high level of performance and data consistency for different applications. We make three technical contributions to achieve this goal. First, to remove all storage accesses with disk cache flush operations and FUA commands, DJFS uses small-sized NVRAM for the file system journal. Second, to reduce the access latency and space requirements of NVRAM, DJFS journals the compressed differences of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects checkpoint transactions to the file system area in units of a specified region. Our evaluation with the TPC-C benchmark on SQLite shows that, using our novel optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.
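The delta-journaling idea in this abstract — journal a compressed difference of each modified block rather than the full block image — can be illustrated with a minimal sketch. This is not DJFS's actual implementation; the XOR-then-compress encoding and the function names are our own assumptions for illustration.

```python
import zlib

def make_delta_record(old_block: bytes, new_block: bytes) -> bytes:
    """Build a compressed delta journal record for one modified block.

    XOR the old and new block images so unchanged bytes become zeros,
    then compress; long zero runs shrink dramatically, so the record is
    far smaller than journaling the full block image.
    """
    assert len(old_block) == len(new_block)
    delta = bytes(a ^ b for a, b in zip(old_block, new_block))
    return zlib.compress(delta)

def apply_delta_record(old_block: bytes, record: bytes) -> bytes:
    """Recover the new block image from the old image and the record."""
    delta = zlib.decompress(record)
    return bytes(a ^ b for a, b in zip(old_block, delta))
```

For a 4 KB block with only a few modified bytes, the record typically compresses to a few dozen bytes, which is why a small NVRAM journal can absorb the write traffic.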

Structural Relationship between Self-Leadership and Grit and Performance of Taekwondo Players: Focusing on the Multiple Mediations of Grit

  • Kim, Moo-Young
    • International journal of advanced smart convergence
    • /
    • v.10 no.2
    • /
    • pp.194-200
    • /
    • 2021
  • The purpose of this study is to analyze the structural relationship among self-leadership, grit, and performance. Specifically, self-leadership was selected as an independent variable, two dimensions of grit were selected as mediator variables, and performance was selected as a dependent variable. This structural equation model is based on previous studies. The subjects of this study were middle and high school Taekwondo players, and the survey was collected using the online survey system KSDC (Korean Social-Science Data Center). The sampling method was a non-probability convenience sampling method. A total of 367 copies were collected through this process, and 355 copies were used as the final valid sample after excluding insincere data. Data processing was done with SPSS 23 for frequency analysis, exploratory factor analysis, and reliability analysis; AMOS 21 was used for confirmatory factor analysis and structural equation model analysis. The results of the analysis are as follows. First, self-leadership had a positive effect on both interest consistency grit and effort persistence grit. Second, both interest consistency grit and effort persistence grit had a positive effect on performance. Third, self-leadership had a positive effect on performance.

Efficient Schemes for Cache Consistency Maintenance in a Mobile Database System (이동 데이터베이스 시스템에서 효율적인 캐쉬 일관성 유지 기법)

  • Lim, Sang-Min;Kang, Hyun-Chul
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.221-232
    • /
    • 2001
  • Due to the rapid advance of wireless communication technology, demand for data services in mobile environments is gradually increasing. Caching at a mobile client can reduce bandwidth consumption and query response time, yet the mobile client must maintain cache consistency. It can be efficient for the server to broadcast a periodic cache invalidation report for cache consistency in a cell. In case a long period of disconnection prevents a mobile client from checking the validity of its cache based solely on the invalidation report received, the mobile client can request the server to check cache validity. In doing so, some schemes may be more efficient than others depending on the number of available channels and the mobile clients involved. In this paper, we propose new cache consistency schemes that are efficient especially (1) when channel capacity is enough to deal with the mobile clients involved or (2) when it is not, and evaluate their performance.
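The invalidation-report mechanism this abstract builds on can be sketched as follows. The class name, the report shape (a timestamp window plus the set of updated items), and the drop-on-long-disconnection policy are our assumptions, not the paper's exact protocol.

```python
class MobileClientCache:
    """Sketch of invalidation-report-based cache consistency."""

    def __init__(self):
        self.cache = {}          # item id -> cached value
        self.last_report = None  # timestamp of the last report processed

    def on_invalidation_report(self, report_time, window_start, updated_ids):
        """Process a broadcast report covering [window_start, report_time]."""
        if self.last_report is not None and self.last_report < window_start:
            # Disconnected longer than the report window covers: the report
            # cannot prove our entries are valid, so drop them (or fall back
            # to an explicit validity-check request to the server).
            self.cache.clear()
        else:
            for item in updated_ids:
                self.cache.pop(item, None)  # invalidate updated entries
        self.last_report = report_time
```

The key efficiency question the paper studies is what the client should do in the `clear()` branch — uplink channel capacity determines whether per-client validity checks at the server are affordable.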


Consistency check algorithm for validation and re-diagnosis to improve the accuracy of abnormality diagnosis in nuclear power plants

  • Kim, Geunhee;Kim, Jae Min;Shin, Ji Hyeon;Lee, Seung Jun
    • Nuclear Engineering and Technology
    • /
    • v.54 no.10
    • /
    • pp.3620-3630
    • /
    • 2022
  • The diagnosis of abnormalities in a nuclear power plant is essential to maintaining plant safety. When an abnormal event occurs, the operator diagnoses the event and selects the appropriate abnormal operating procedures (AOPs) and sub-procedures to implement the necessary measures. To support this, abnormality diagnosis systems using data-driven methods such as artificial neural networks and convolutional neural networks have been developed. However, data-driven models cannot always guarantee an accurate diagnosis because they cannot simulate all possible abnormal events. Therefore, abnormality diagnosis systems should be able to detect their own potential misdiagnoses. This paper proposes a rule-based diagnostic validation algorithm using a previously developed two-stage diagnosis model for abnormal situations. We analyzed the diagnostic results of the sub-procedure stage when the first diagnostic results were inaccurate and derived a rule to filter out inconsistent sub-procedure diagnostic results, which may be inaccurate diagnoses. In a case study, two abnormality diagnosis models were built using gated recurrent units and long short-term memory cells, and consistency checks on the diagnostic results from both models were performed to detect any inconsistencies. Based on this, a re-diagnosis was performed by selecting the label with the second-best value in the first diagnosis, after which the diagnosis accuracy increased. That is, the proposed model makes it possible to detect diagnostic failures through the developed consistency check of the sub-procedure diagnostic results. The consistency check process has the advantage that the operator can review the results and increase the diagnosis success rate by performing additional re-diagnoses.
The developed model is expected to have increased applicability as an operator support system in terms of selecting the appropriate AOPs and sub-procedures with re-diagnosis, thereby further increasing abnormal event diagnostic accuracy.
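The consistency-check-then-re-diagnose step described above can be sketched generically: compare the top-1 labels of the two models, and on disagreement fall back to the second-best label of the first diagnosis. The function names and the exact fallback rule are our simplification of the paper's rule-based algorithm.

```python
def rank(scores):
    """Labels sorted by descending score."""
    return sorted(scores, key=scores.get, reverse=True)

def diagnose_with_consistency_check(scores_a, scores_b):
    """Return (label, consistent) for two models' score dictionaries.

    If the two models' top-1 labels agree, accept that label; otherwise
    flag the inconsistency and re-diagnose with the second-best label
    of the first model, as in the abstract's re-diagnosis step.
    """
    top_a, top_b = rank(scores_a)[0], rank(scores_b)[0]
    if top_a == top_b:
        return top_a, True
    return rank(scores_a)[1], False
```

In an operator-support setting, the `consistent` flag is what would trigger human review before the re-diagnosed label is trusted.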

3D Line Segment Extraction Based on Line Fitting of Elevation Data

  • Woo, Dong-Min
    • Journal of IKEEE
    • /
    • v.13 no.2
    • /
    • pp.181-185
    • /
    • 2009
  • In this paper, we are concerned with a 3D line segment extraction method using an area-based stereo matching technique. The main idea is line fitting of elevation data on the 2D line coordinates of an ortho-image. Elevation data and the ortho-image can be obtained by the well-known area-based stereo matching technique. In order to use elevation in line fitting, the elevation itself should be reliable. To measure the reliability of elevation, we employ the concept of self-consistency. We test the effectiveness of the proposed method with a quantitative accuracy analysis using synthetic images generated from the Avenches data set of Ascona aerial images. Experimental results indicate that our method generates 3D line segments almost 7.5 times more accurate than the raw elevations obtained by the area-based method.
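The core fitting step — regressing noisy elevation samples against their position along a 2D line so the segment's height profile becomes a clean linear function — reduces to an ordinary least-squares line fit. Below is a minimal pure-Python sketch of that step (the paper additionally weights samples by self-consistency, which we omit).

```python
def fit_line_elevation(ts, zs):
    """Least-squares fit z = m*t + c of elevation samples taken at
    parameter positions ts along a 2D line segment.

    Returns (m, c, rms): slope, intercept, and RMS residual; a large
    residual signals unreliable elevations along the segment.
    """
    n = len(ts)
    st, sz = sum(ts), sum(zs)
    stt = sum(t * t for t in ts)
    stz = sum(t * z for t, z in zip(ts, zs))
    m = (n * stz - st * sz) / (n * stt - st * st)
    c = (sz - m * st) / n
    rms = (sum((m * t + c - z) ** 2
               for t, z in zip(ts, zs)) / n) ** 0.5
    return m, c, rms
```

Replacing each raw elevation with `m*t + c` is what averages out per-pixel matching noise and yields the reported accuracy gain over raw elevations.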


A Reconstruction Method of HQC structure for Improving Availability of Data in Distributed Environment (분산환경에서의 데이터 가용성 향상을 위한 HQC 구조의 재고성 방법)

  • 유현창;조동영;손진곤;황종선
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.2
    • /
    • pp.1-9
    • /
    • 1994
  • In distributed environments, data replication increases availability and decreases communication cost. However, it is difficult to maintain the consistency and availability of data if site failures occur. When the conventional hierarchical quorum consensus (HQC) method is used to maintain the consistency of data, site failures make it harder to perform operations on replicated data because of insufficient votes. The objective of this paper is to improve the possibility of retaining the necessary votes by reconstructing the HQC structure when a site failure occurs. Furthermore, we compare the modified HQC method with the conventional HQC and QC methods in terms of the improvement in availability.
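The baseline HQC vote-gathering that the paper's reconstruction improves on can be sketched recursively: sites are leaves of a logical tree, and an internal node grants a quorum iff a majority of its children do. The tree shape and the strict-majority rule below are the standard HQC formulation, not this paper's modified structure.

```python
def has_quorum(node, up_sites):
    """Hierarchical quorum consensus check (sketch).

    A leaf (site name) grants its vote iff the site is up; an internal
    node (a list of children) grants a quorum iff a strict majority of
    its children do. Site failures deep in the tree can thus deny a
    quorum even when many sites are up - the problem the paper's
    HQC-structure reconstruction addresses.
    """
    if isinstance(node, str):              # leaf: a replica site
        return node in up_sites
    grants = sum(has_quorum(child, up_sites) for child in node)
    return grants > len(node) // 2
```

With nine sites in three groups, five well-placed live sites suffice, but three live sites spread one per group retain no quorum at all, which motivates reconstructing the tree after failures.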


Verification of the Suitability of Fine Dust and Air Quality Management Systems Based on Artificial Intelligence Evaluation Models

  • Heungsup Sim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.8
    • /
    • pp.165-170
    • /
    • 2024
  • This study aims to verify the accuracy of the air quality management system in Yangju City using an artificial intelligence (AI) evaluation model. The consistency and reliability of fine dust data were assessed by comparing public data from the Ministry of Environment with data from Yangju City's air quality management system. To this end, we analyzed the completeness, uniqueness, validity, consistency, accuracy, and integrity of the data. Exploratory statistical analysis was employed to compare data consistency. The results of the AI-based data quality index evaluation revealed no statistically significant differences between the two datasets. Among AI-based algorithms, the random forest model demonstrated the highest predictive accuracy, with its performance evaluated through ROC curves and AUC. Notably, the random forest model was identified as a valuable tool for optimizing the air quality management system. This study confirms that the reliability and suitability of fine dust data can be effectively assessed using AI-based model performance evaluation, contributing to the advancement of air quality management strategies.
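The ROC/AUC evaluation mentioned above has a compact pure-Python form via the rank-sum (Mann-Whitney) formulation: AUC is the probability that a randomly chosen positive outscores a randomly chosen negative, counting ties as half. The function name and toy data below are ours; the study itself used standard model-evaluation tooling.

```python
def auc(labels, scores):
    """ROC AUC via the rank-sum formulation.

    labels: 1 for positive, 0 for negative; scores: model outputs.
    Counts each positive/negative pair where the positive scores
    higher as a win, and ties as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```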

Data Interoperability Framework based on XMDR Data Hub using Proxy DataBase (XMDR 데이터 허브 기반의 Proxy 데이터베이스를 이용한 데이터 상호운용 프레임워크)

  • Moon, Seok-Jae;Jung, Gye-Dong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.8
    • /
    • pp.1463-1472
    • /
    • 2008
  • In this paper, we propose a framework that provides data interoperability between legacy databases using proxy databases based on an XMDR (eXtended Meta-Data Registry) data hub. Problems of heterogeneous data structures and semantics may occur when legacy databases interoperate in a cooperative environment. It is also hard to keep data that changes in real time consistent, regardless of its variety and type. Using the XMDR data hub, we solve the problems that occur in data integration and interoperability between legacy databases. We suggest a framework that is compatible with any class and type of interoperable data and offers accurate, consistent information in real time using a proxy database.

Performance Evaluation of Deferred Locking for Maintaining Transactional Cache Consistency (트랜잭션 캐쉬 일관성을 유지하기 위한 지연 로킹 기법의 성능 평가)

  • Kwon, Hyeok-Min
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.8
    • /
    • pp.2310-2326
    • /
    • 2000
  • A client-server DBMS based on a data-shipping model can exploit client resources effectively by allowing inter-transaction caching. However, inter-transaction caching raises the need for a transactional cache consistency maintenance (TCCM) protocol, since each client is able to cache a portion of the database dynamically. Deferred locking (DL) is a new detection-based TCCM scheme designed on the basis of a primary copy locking algorithm. In DL, a number of lock requests and a data-shipping request are combined into a single message packet to minimize the communication overhead required for consistency checking. Using a simulation model, the performance of the proposed scheme is compared with those of two representative detection-based schemes, adaptive optimistic concurrency control and caching two-phase locking. The performance results indicate that DL improves the overall system throughput with a reasonable transaction abort ratio over other detection-based schemes.
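The message-batching idea at the heart of deferred locking — hold lock requests locally and piggyback them on the next data-shipping request so consistency checking costs one round trip instead of one per lock — can be sketched as below. The class, message shape, and server callback are our illustrative assumptions, not the paper's protocol definition.

```python
class DeferredLockingClient:
    """Sketch of deferred locking's message batching."""

    def __init__(self, server):
        self.server = server       # callable taking one message dict
        self.pending_locks = []    # deferred (page_id, mode) requests

    def lock(self, page_id, mode):
        # Defer: record the request locally, send no message yet.
        self.pending_locks.append((page_id, mode))

    def fetch(self, page_id):
        # One combined message: all deferred lock requests plus the
        # data-shipping request, minimizing communication overhead.
        msg = {"locks": self.pending_locks, "fetch": page_id}
        self.pending_locks = []
        return self.server(msg)
```

The trade-off is that conflicts are detected later than with immediate locking, which is why the paper reports throughput against the transaction abort ratio.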


Version Management Method for Consistency in the Grid Database (그리드 데이터베이스에서 일관성 유지를 위한 버전 관리 기법)

  • Shin, Soong-Sun;Jang, Yong-Il;Chung, Weon-Il;Lee, Dong-Wook;Eo, Sang-Hun;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.7
    • /
    • pp.928-939
    • /
    • 2008
  • The grid database management system is used for large-scale data processing, high availability, and data integration in a grid environment. Each node has a replica for processing performance and high availability. However, the grid database suffers from inconsistency when frequent updates occur concurrently. To solve this problem, this paper proposes a version management method for consistency in the grid database. The proposed version manager manages a version of each replica and keeps update operations consistent when they occur at each node by using a pending queue and a waiting queue. Each node also keeps consistency using a priority queue, so the proposed method achieves stable and fast update propagation. Performance evaluation shows that the proposed method propagates updates more stably and faster than the traditional method.
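The version-ordered propagation this abstract describes — updates that arrive out of order wait in a queue until every earlier version has been applied — can be sketched with a priority queue. This collapses the paper's pending/waiting queue pair into a single min-heap for brevity; the class and field names are ours.

```python
import heapq

class VersionManager:
    """Sketch of version-ordered update propagation at one replica."""

    def __init__(self):
        self.applied = 0     # highest version applied so far
        self.waiting = []    # min-heap of (version, update) not yet applicable
        self.log = []        # updates applied, in version order

    def receive(self, version, update):
        """Buffer an incoming update; apply every update that is next in
        version order, so replicas converge to the same state."""
        heapq.heappush(self.waiting, (version, update))
        while self.waiting and self.waiting[0][0] == self.applied + 1:
            v, u = heapq.heappop(self.waiting)
            self.log.append(u)
            self.applied = v
```

Because each replica applies updates in the same total version order, concurrent frequent updates no longer produce divergent states — the inconsistency problem the method targets.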
