• Title/Summary/Keyword: Dirty Data

A Concurrency Control Method for Data Broadcasting in Mobile Computing Environment (이동 컴퓨팅 환경에서 데이타 방송을 위한 동시성 제어 기법)

  • 윤혜숙;김영국
    • Journal of KIISE:Databases / v.31 no.2 / pp.140-149 / 2004
  • Data broadcast has received much attention as an efficient method for disseminating data items in mobile environments with large numbers of mobile clients. In this approach, a database server periodically and continuously broadcasts data items over wireless channels, and clients perform read-only transactions by picking up the data items they need from the air. While broadcasting, the server must also process update transactions on the database, which makes it difficult for clients to read consistent data. In this research, we propose a new algorithm, SCDSC (Serialization Checking with DirtySet on Commit), as an efficient alternative for solving this concurrency control problem. SCDSC is a form of optimistic concurrency control in which a client checks the consistency of its data against a DirtySet, delivered as part of the data broadcast, when it commits its transaction. In each broadcast cycle, the server updates and disseminates the DirtySet with the data items changed during the last few cycles, following a sliding-window approach. We perform an analysis and a simulation study to evaluate the performance of the SCDSC algorithm in terms of data consistency and data currency.
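
The commit-time check described above reduces to a set-intersection test. The sketch below is reconstructed from the abstract alone, assuming a read-set/DirtySet disjointness test and a fixed window of broadcast cycles; the class names and the window size are illustrative, not the authors' interface.

```python
# Hedged sketch of SCDSC-style optimistic validation; names and the
# window size are assumptions reconstructed from the abstract.
from collections import deque

class BroadcastServer:
    def __init__(self, window_cycles=3):
        # Sliding window holding the update sets of the last few cycles.
        self.window = deque(maxlen=window_cycles)

    def end_cycle(self, updated_items):
        """Record the items updated this cycle; the oldest cycle expires."""
        self.window.append(set(updated_items))

    @property
    def dirty_set(self):
        """The DirtySet broadcast each cycle: union over the window."""
        merged = set()
        for cycle_updates in self.window:
            merged |= cycle_updates
        return merged

def client_commit(read_set, dirty_set):
    """Commit succeeds only if nothing read was recently updated."""
    return read_set.isdisjoint(dirty_set)

server = BroadcastServer(window_cycles=3)
server.end_cycle({"x"})   # cycle 1: item x updated
server.end_cycle({"y"})   # cycle 2: item y updated
print(client_commit({"a", "b"}, server.dirty_set))  # True: commit
print(client_commit({"x", "b"}, server.dirty_set))  # False: abort, restart
```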

Exterior egg quality as affected by enrichment resources layout in furnished laying-hen cages

  • Li, Xiang;Chen, Donghua;Meng, Fanyu;Su, Yingying;Wang, Lisha;Zhang, Runxiang;Li, Jianhong;Bao, Jun
    • Asian-Australasian Journal of Animal Sciences / v.30 no.10 / pp.1495-1499 / 2017
  • Objective: This study aimed to investigate the effects of the layout of enrichment resources (a perch, dustbath, and nest) in furnished laying-hen cages (FC) on the exterior quality of eggs. Methods: One hundred and sixty-eight (168) Hy-Line Brown laying hens at 16 weeks of age were randomly distributed to four treatments: small furnished cages (SFC), medium furnished cages type I (MFC-I), medium furnished cages type II (MFC-II), and medium furnished cages type III (MFC-III). Each treatment had 4 replicate cages, with 6 hens per cage in SFC (24 birds in total) and 12 hens per cage in MFC-I, -II, and -III (48 birds in total for each). Following a 2-week acclimation, data collection started at 18 weeks of age and continued until 52 weeks of age. Dirtiness of the egg surface and cracked shells, as indicators of exterior egg quality, were recorded each week. Results: The proportion of cracked or dirty eggs was significantly affected by FC type (p<0.01): the highest proportion of cracked or dirty eggs was found in MFC-I and the lowest proportion of dirty eggs in SFC. The results also indicated that the dustbath, not the nest, led to more dirty eggs; only MFC-I had more dirty eggs at the nest than the other FC types (p<0.01). The proportions of dirty eggs in MFC-I and MFC-II, compared with SFC and MFC-III, suggest that a low dustbath position leads to more dirty eggs. Conclusion: FC design affected exterior egg quality, and a low dustbath position in FC resulted in a higher proportion of dirty eggs.

A Technique for Accurate Detection of Container Attacks with eBPF and AdaBoost

  • Hyeonseok Shin;Minjung Jo;Hosang Yoo;Yongwon Lee;Byungchul Tak
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.39-51 / 2024
  • This paper proposes a novel approach to enhancing the security of container-based systems by analyzing system calls to dynamically detect race conditions without modifying the kernel. Container escape attacks allow attackers to break out of a container's isolation and access other systems, exploiting vulnerabilities such as race conditions that can occur in parallel computing environments. To effectively detect and defend against such attacks, this study uses eBPF to observe system call patterns during attack attempts and employs an AdaBoost model to detect them. For this purpose, system calls invoked during attacks such as Dirty COW and Dirty Cred, as well as by popular applications such as MongoDB, PostgreSQL, and Redis, were used as training data. The experimental results show that this method achieved a precision of 99.55%, a recall of 99.68%, and an F1-score of 99.62%, with a system overhead of 8%.
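
The detection stage lends itself to a compact illustration. The sketch below assumes syscall-count feature vectors and scikit-learn's AdaBoostClassifier; the syscall list and the toy traces are invented for illustration, and the paper's eBPF collection step is omitted entirely.

```python
# Hedged sketch of the classification stage only: AdaBoost over
# system-call frequency vectors. Feature layout and traces are
# illustrative assumptions, not the paper's dataset.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

SYSCALLS = ["read", "write", "mmap", "madvise", "clone", "openat"]

def to_features(trace):
    """Turn a raw syscall trace (list of names) into a count vector."""
    return [trace.count(s) for s in SYSCALLS]

# Toy training data: benign application traces vs. race-style attack
# traces (e.g., Dirty COW hammers write/madvise in tight loops).
benign = [["read", "write", "openat"] * 10, ["read", "mmap", "openat"] * 10]
attack = [["write", "madvise"] * 50 + ["mmap"], ["write", "madvise"] * 60]

X = np.array([to_features(t) for t in benign + attack])
y = np.array([0, 0, 1, 1])  # 0 = benign, 1 = attack

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.predict([to_features(["write", "madvise"] * 40)]))  # -> [1]
```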

Improving Log-Structured File System Performance by Utilizing Non-Volatile Memory (비휘발성 메모리를 이용한 로그 구조 파일 시스템의 성능 향상)

  • Kang, Yang-Wook;Choi, Jong-Moo;Lee, Dong-Hee;Noh, Sam-H.
    • Journal of KIISE:Computing Practices and Letters / v.14 no.5 / pp.537-541 / 2008
  • Log-Structured File System (LFS) is a disk-based file system optimized for write performance. LFS gathers dirty data in memory as long as possible and flushes all of it to disk sequentially at once. In a real system, however, dirty data kept in memory must be flushed to disk to meet file system consistency requirements even if more memory is still available. These synchronizations increase the cleaner overhead of LFS and force LFS to write more metadata to disk. In this paper, by adopting non-volatile RAM (NV-RAM), we modify LFS and the virtual memory subsystem to guarantee that LFS can gather enough dirty data in memory, reducing small disk writes. By doing so, we improve the performance of LFS to around 2.5 times that of the original LFS.
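
A toy model makes the mechanism concrete. Here NV-RAM is modeled as a durable in-memory buffer and the segment size is a made-up constant; this is a sketch of the idea under those assumptions, not the authors' kernel modification.

```python
# Hedged sketch: dirty blocks accumulate in NV-RAM (durable by
# assumption), so a sync no longer forces a small partial-segment
# write; the disk only sees full sequential segment writes.
SEGMENT_BLOCKS = 8  # illustrative segment size

class NvramLFS:
    def __init__(self):
        self.nvram = {}     # block_no -> data, durable by assumption
        self.disk_log = []  # append-only log of flushed segments

    def write(self, block_no, data):
        self.nvram[block_no] = data            # durable at once: no disk I/O
        if len(self.nvram) >= SEGMENT_BLOCKS:
            self._flush_segment()

    def fsync(self):
        # Data already sits in durable NV-RAM, so a sync is a no-op
        # instead of a forced small write of a partial segment.
        pass

    def _flush_segment(self):
        self.disk_log.append(sorted(self.nvram.items()))  # one sequential write
        self.nvram.clear()

fs = NvramLFS()
for i in range(10):
    fs.write(i, b"x")
    fs.fsync()              # never triggers a small write
print(len(fs.disk_log))     # -> 1 segment flushed; 2 blocks still in NV-RAM
```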

A Study on Improvement of Handling Dirty Bulk Cargo in Busan Port (부산항의 기피화물 취급 개선에 관한 연구)

  • Song, Gye-Eui
    • Journal of Korea Port Economic Association / v.26 no.3 / pp.114-129 / 2010
  • Busan port's main function is handling container cargo, and compared with the world's major ports its share of general cargo, including dirty bulk cargo, is very low. Although Busan port handled 13.29 million TEU in 2008 and its total cargo weight reached 113.05 million tons, general cargo accounted for only 15.31 million tons, so container cargo made up 88.1% of the total. However, it is now time to create high added value by expanding the handling and marketing of dirty bulk cargo. Dirty bulk cargo was not an avoided commodity from the start; it is a high-value-added cargo and an essential strategic material for the nation's basic industries. It came to be avoided because companies are reluctant to handle it, owing to environmental problems, its distinctive handling characteristics, and an uncertain break-even point caused by the imbalance between supply and demand relative to container cargo. Nevertheless, the items now classified as dirty bulk cargo remain strategic materials necessary for national basic industries and for daily life, and they promise high added value. Therefore, it is time for Busan port to increase its handling of dirty bulk cargo through marketing and to build systems for handling it efficiently: constructing dedicated wharves, modernizing facilities and equipment, securing stable space for storage and handling through the development of distribution complexes by item, processing data efficiently, and fostering closer cooperation among the related authorities by setting up a shared SCM.

A Working-set Sensitive Page Replacement Policy for PCM-based Swap Systems

  • Park, Yunjoo;Bahn, Hyokyung
    • JSTS:Journal of Semiconductor Technology and Science / v.17 no.1 / pp.7-14 / 2017
  • Due to recent advances in Phase-Change Memory (PCM) technologies, a new memory hierarchy for computer systems incorporating PCM is expected to appear. In this paper, we present a new page replacement policy that adopts PCM as a high-speed swap device. As PCM has limited write endurance, our goal is to minimize the amount of data written to PCM. To do so, we defer the eviction of dirty pages in proportion to their dirtiness. However, preserving dirty pages in memory too long may worsen the page fault rate, especially when memory capacity is not sufficient to accommodate the full working set. Thus, our policy monitors the current working-set size of the system and controls the deferring level of dirty pages so as not to degrade system performance. Simulation experiments show that the proposed policy reduces the write traffic to PCM by 160% without performance degradation.
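
One plausible reading of the policy is sketched below: an LRU list in which a dirty page earns eviction-deferral credits in proportion to its dirtiness, with deferral switched off once the estimated working set no longer fits in memory. Every name and parameter here is an assumption, not the authors' implementation.

```python
# Hedged sketch of a working-set-sensitive, dirtiness-aware LRU,
# reconstructed from the abstract; parameters are illustrative.
from collections import OrderedDict, deque

class DirtinessAwareLRU:
    def __init__(self, capacity, max_defer=3, window=64):
        self.capacity = capacity
        self.max_defer = max_defer
        self.recent = deque(maxlen=window)  # access window: working-set estimate
        self.pages = OrderedDict()          # page -> [dirtiness, defer_credits]

    def working_set_fits(self):
        return len(set(self.recent)) <= self.capacity

    def access(self, page, dirty=False):
        self.recent.append(page)
        if page in self.pages:
            entry = self.pages.pop(page)
            entry[1] = None                 # recency resets pending deferral
        else:
            self._evict_if_full()
            entry = [0.0, None]
        if dirty:
            entry[0] = min(1.0, entry[0] + 0.25)  # coarse dirtiness estimate
        self.pages[page] = entry            # move to MRU position

    def _evict_if_full(self):
        while len(self.pages) >= self.capacity:
            page, entry = next(iter(self.pages.items()))   # LRU candidate
            if entry[1] is None:
                entry[1] = int(entry[0] * self.max_defer)  # credits ∝ dirtiness
            if self.working_set_fits() and entry[1] > 0:
                entry[1] -= 1
                self.pages.move_to_end(page)  # defer eviction of dirty page
            else:
                del self.pages[page]          # evict (PCM write only if dirty)
                return
```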

Developing dirty data cleansing service between SOA-based services (SOA 기반 서비스 사이의 오류 데이터 정제 서비스 개발)

  • Ji, Eun-Mi;Choi, Byoung-Ju;Lee, Jung-Won
    • The KIPS Transactions:PartD / v.14D no.7 / pp.829-840 / 2007
  • Dirty data cleansing techniques have so far aimed at integrating large amounts of data from various sources and managing the quality of data residing in databases so that meaningful information can be extracted. Prompt response to a varying environment is required to survive in a rapidly changing business environment and an age of limitless competition. As system requirements have grown more complex, Service Oriented Architecture (SOA) has proliferated for integrating and implementing massive distributed systems, so SOA necessarily needs reliable data exchange among services through data cleansing techniques. In this paper, we manage the quality of the XML data transmitted through events between services while they are integrated into a single system. As a result, we developed an SOA-based Dirty Data Cleansing Service that focuses on cleansing data exchanged between interacting services rather than on detecting data errors in an already integrated database.
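
A minimal sketch of event-level XML cleansing in this spirit appears below; the element names, constraints, and cleansing rules are invented for illustration and are not the paper's actual DDCS rule set.

```python
# Hedged sketch: detect and cleanse constraint violations in an
# inter-service XML event. Constraints are illustrative assumptions.
import xml.etree.ElementTree as ET

# Per-field constraint: (required, validator, cleanser)
CONSTRAINTS = {
    "email": (True, lambda v: "@" in v, lambda v: v.strip().lower()),
    "age":   (False, lambda v: v.isdigit() and 0 < int(v) < 130, str.strip),
}

def cleanse_message(xml_text):
    """Return the cleansed XML plus a report of the dirty fields found."""
    root = ET.fromstring(xml_text)
    report = []
    for field, (required, valid, clean) in CONSTRAINTS.items():
        node = root.find(field)
        if node is None or node.text is None:
            if required:
                report.append((field, "missing"))
            continue
        cleaned = clean(node.text)
        if not valid(cleaned):
            report.append((field, f"invalid: {node.text!r}"))
        node.text = cleaned
    return ET.tostring(root, encoding="unicode"), report

msg = "<customer><email>  Kim@Example.COM </email><age>abc</age></customer>"
cleaned, report = cleanse_message(msg)
print(report)   # [('age', "invalid: 'abc'")]
print(cleaned)  # email normalized to kim@example.com
```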

A Data Quality Measuring Tool (데이타 품질 측정 도구)

  • 양자영;최병주
    • Journal of KIISE:Computing Practices and Letters / v.9 no.3 / pp.278-288 / 2003
  • The quality of software is affected by the quality of the data it operates on. It is especially important to assure data quality in a knowledge-engineering system that extracts meaningful knowledge from stored data. In this paper, we present DAQUM, a tool that can measure the quality of data. This paper shows: 1) the main contents of the implementation of the DAQUM tool; 2) the detection of dirty data via the DAQUM tool through a case study, and the measurement of data quality in a form quantifiable from the end-user's point of view. The DAQUM tool will greatly contribute to improving the quality of data-centric software products through the control and measurement of data quality.
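
A toy version of rule-based quality measurement gives the flavor; the rules and the scoring formula (one minus the fraction of dirty cells) are illustrative assumptions, not DAQUM's published metrics.

```python
# Hedged sketch of rule-based dirty-data detection and a simple
# end-user quality score; rules are invented for illustration.
RULES = {
    "name":  lambda v: bool(v and v.strip()),                # completeness
    "email": lambda v: v is not None and v.count("@") == 1,  # format validity
    "age":   lambda v: isinstance(v, int) and 0 < v < 130,   # range validity
}

def measure_quality(records):
    """Return (quality score in [0, 1], list of dirty cells)."""
    dirty = [(i, field)
             for i, rec in enumerate(records)
             for field, rule in RULES.items()
             if not rule(rec.get(field))]
    total_cells = len(records) * len(RULES)
    return 1 - len(dirty) / total_cells, dirty

records = [
    {"name": "Kim", "email": "kim@example.com", "age": 34},
    {"name": "",    "email": "not-an-email",    "age": 250},
]
score, dirty = measure_quality(records)
print(f"quality = {score:.2f}")  # quality = 0.50
print(dirty)                     # [(1, 'name'), (1, 'email'), (1, 'age')]
```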

Developing the SOA-based Dirty Data Cleansing Service (SOA에서의 오류 데이터 정제를 위한 서비스 개발)

  • Ji, Eun-Mi;Choi, Byoung-Ju;Lee, Jung-Won
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.649-652 / 2007
  • Recently, distributed software integration technology based on the principles of Service Oriented Architecture (SOA) has spread widely as a way of integrating e-Business applications, and reliable data exchange between services through data cleansing techniques has become essential. In this paper, we implement a Dirty Data Cleansing Service (DDCS) for detecting and cleansing errors in the data exchanged when systems interact. It consists of a transformation step that binds the user's data constraints, a detection step that finds errors, and a cleansing step that cleanses the detected errors and reports the results. Using this service, we develop a service that guarantees the cleansing of dirty data exchanged among the systems integrated on an SOA-based ESB.
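
The three steps named here compose naturally. The sketch below assumes simple phase signatures layered on the constraint-table idea shown for the journal version above; it is a reading of the abstract, not the published DDCS design.

```python
# Hedged sketch of the transform -> detect -> cleanse pipeline; all
# signatures, defaults, and the example constraint are assumptions.
def transform(message, user_constraints):
    """Bind user-supplied constraints to the raw inter-service message."""
    return {"payload": message, "constraints": user_constraints}

def detect(bound):
    """Run each constraint and collect the fields that violate it."""
    payload, constraints = bound["payload"], bound["constraints"]
    return [f for f, rule in constraints.items() if not rule(payload.get(f))]

def cleanse(bound, errors, defaults):
    """Repair detected errors (here: substitute defaults) and report."""
    for field in errors:
        bound["payload"][field] = defaults.get(field)
    return bound["payload"], errors

constraints = {"qty": lambda v: isinstance(v, int) and v >= 0}
bound = transform({"qty": -3}, constraints)
payload, report = cleanse(bound, detect(bound), defaults={"qty": 0})
print(payload, report)   # {'qty': 0} ['qty']
```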

An efficient caching scheme for replacing a dirty block in software RAID file systems (소프트웨어 RAID 파일 시스템에서 오손 블록 교체시에 효율적인 캐슁 기법)

  • 김종훈;노삼혁;원유헌
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.7 / pp.1599-1606 / 1997
  • The software RAID file system is defined as a system that distributes data redundantly across an array of disks attached to workstations connected by a high-speed network. This provides high throughput as well as higher availability. In this paper, we present an efficient caching scheme for the software RAID file system. The performance of this scheme is compared with two other schemes previously proposed for conventional file systems and adapted to the software RAID file system. As in hardware RAID systems, small writes are the performance bottleneck in software RAID file systems. To tackle this problem, we logically divide the cache into two levels. By keeping old data and parity values in the second-level cache, we are able to eliminate much of the extra disk reading and writing necessary for the write-back of dirty blocks. Using trace-driven simulations, we show that the proposed scheme improves performance in both average response time and average system busy time.
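
The arithmetic behind the second-level cache is the RAID small-write parity update, P' = D_old XOR P_old XOR D_new: with the old data and the old parity cached, the two extra disk reads of a small write disappear. A sketch under that assumption, with block contents modeled as small integers:

```python
# Hedged sketch of why caching old data and parity helps small
# writes; the cache layout and block encoding are assumptions.
def new_parity(old_data, old_parity, new_data):
    """RAID small-write parity update: P' = D_old ^ P_old ^ D_new."""
    return old_data ^ old_parity ^ new_data

second_level_cache = {("stripe0", "data3"): 0b1010,
                      ("stripe0", "parity"): 0b0110}

def write_block(stripe, block, new_data, cache):
    reads = 0
    old_data = cache.get((stripe, block))
    old_parity = cache.get((stripe, "parity"))
    if old_data is None:
        old_data, reads = 0b0000, reads + 1    # stand-in for a disk read
    if old_parity is None:
        old_parity, reads = 0b0000, reads + 1  # stand-in for a disk read
    cache[(stripe, block)] = new_data
    cache[(stripe, "parity")] = new_parity(old_data, old_parity, new_data)
    return reads  # extra disk reads avoided when both values were cached

print(write_block("stripe0", "data3", 0b1111, second_level_cache))  # -> 0
```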
