Title/Summary/Keyword: Dirty Data

47 results

Enhancing LRU Buffer Replacement Policy with Delayed Write of Not-cold-dirty-pages for Flash Memory (플래시 메모리를 위한 Not-cold-Page 쓰기지연을 통한 LRU 버퍼교체 정책 개선)

  • Jung Ho-Young; Park Sung-Min; Cha Jae-Hyuk; Kang Soo-Yong
    • Journal of KIISE: Computer Systems and Theory, v.33 no.9, pp.634-641, 2006
  • Flash memory has many advantages, such as non-volatility and fast I/O speed, but it also has disadvantages, such as the lack of in-place update and asymmetric read/write/erase speeds. For the performance of flash memory storage, it is essential that buffer replacement algorithms reduce the number of write operations, which in turn affects the number of erase operations. This paper proposes a new buffer replacement algorithm that delays the writes of not-cold dirty pages in the buffer cache of flash storage. We show that this algorithm effectively decreases the number of write and erase operations without much degradation of the hit ratio, improving overall flash I/O performance (a sketch of the idea follows below).
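
The abstract states the policy only at a high level. The following minimal Python sketch shows one plausible reading: on eviction, prefer clean or cold dirty pages over not-cold dirty ones, so write-backs of hot dirty pages are delayed and can absorb further writes in RAM. The class name and the access-count cold test are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import OrderedDict

class DelayedWriteLRU:
    """LRU buffer cache that prefers evicting clean or cold dirty pages,
    delaying the flash write-back of not-cold (hot) dirty pages."""

    def __init__(self, capacity, cold_threshold=1):
        self.capacity = capacity
        self.cold_threshold = cold_threshold   # <= threshold accesses => "cold"
        self.cache = OrderedDict()             # page_id -> [dirty, access_count]
        self.flash_writes = 0

    def access(self, page_id, write=False):
        if page_id in self.cache:
            dirty, count = self.cache.pop(page_id)       # move to MRU position
            self.cache[page_id] = [dirty or write, count + 1]
            return
        if len(self.cache) >= self.capacity:
            self._evict()
        self.cache[page_id] = [write, 1]

    def _evict(self):
        victim = None
        for pid, (dirty, count) in self.cache.items():   # LRU -> MRU order
            if not dirty or count <= self.cold_threshold:
                victim = pid                             # clean or cold dirty page
                break
        if victim is None:           # only not-cold dirty pages remain: plain LRU
            victim = next(iter(self.cache))
        if self.cache[victim][0]:
            self.flash_writes += 1   # evicting a dirty page costs a flash write
        del self.cache[victim]
```

Because hot dirty pages stay resident longer, several logical writes to the same page can collapse into one eventual flash write, which is where the reduction in write and erase counts comes from.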

Design of an Asynchronous Data Cache with FIFO Buffer for Write Back Mode (Write Back 모드용 FIFO 버퍼 기능을 갖는 비동기식 데이터 캐시)

  • Park, Jong-Min; Kim, Seok-Man; Oh, Myeong-Hoon; Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association, v.10 no.6, pp.72-79, 2010
  • In this paper, we propose a data cache architecture with a write buffer for a 32-bit asynchronous embedded processor. The data cache consists of a CAM and data memory. It accelerates the data load cycle between the processor and the main memory, which improves processor performance. The proposed data cache has 8 KB of cache memory. It uses 4-way set-associative mapping with a line size of 4 words (16 bytes) and a pseudo-LRU replacement algorithm for data replacement in the memory. A dirty register and a write buffer implement the write policy of the cache. The designed data cache is synthesized to a gate-level design using a 0.13-µm process. Its average hit rate is 94%, and system performance is improved by 46.53%. The proposed data cache with a write buffer is well suited to a 32-bit asynchronous processor (a pseudo-LRU sketch follows below).
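
The abstract names pseudo-LRU without detail. A common hardware-friendly realization for a 4-way set is the 3-bit tree pseudo-LRU sketched below in Python; this illustrates the generic technique, not the paper's RTL, and the class name is hypothetical.

```python
class TreePLRU4:
    """Tree pseudo-LRU state for one 4-way set (3 bits per set)."""

    def __init__(self):
        # bits[0]: which half holds the next victim (0 = ways 0/1, 1 = ways 2/3)
        # bits[1]: victim within the left pair; bits[2]: within the right pair
        self.bits = [0, 0, 0]

    def touch(self, way):
        """On a hit or fill, point the victim path away from `way`."""
        if way < 2:
            self.bits[0] = 1              # just used the left half
            self.bits[1] = 1 - way        # point at the other way in the pair
        else:
            self.bits[0] = 0
            self.bits[2] = 1 - (way - 2)

    def victim(self):
        """Follow the bits to the approximately least-recently-used way."""
        if self.bits[0] == 0:
            return self.bits[1]
        return 2 + self.bits[2]

# Usage: on each access to a way, call touch(way); on a miss with the set
# full, evict victim() -- writing it back first if its dirty bit is set,
# which in a write-back design goes through the FIFO write buffer.
```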

Establishing Data Quality Metric from Dirty Data (오류 데이터로부터의 데이터 품질 메트릭의 정립)

  • 김수경; 최병주
    • Proceedings of the Korean Information Science Society Conference, 2000.10a, pp.409-411, 2000
  • Assuring the quality of a software product is very important, and the international standard ISO/IEC 9126 provides standards for software quality characteristics and measurement metrics. However, because ISO/IEC 9126 limits software to programs, procedures, rules, and related documents, it cannot be applied to data quality. In this paper, to evaluate and control data quality, we classify the types of dirty data and, based on this classification, extract data quality characteristics. To measure the extracted characteristics, data quality metrics are then established with the dirty-data types as quality attributes. The data quality metrics presented in this paper serve as a criterion for measuring and controlling the quality of the data or knowledge that a knowledge engineering system provides to end users (an illustrative sketch follows below).
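
The abstract does not give the metric formulas. Purely as an illustration of the shape such metrics can take, the Python sketch below maps hypothetical dirty-data types onto quality characteristics and scores each characteristic as the fraction of records free of the corresponding errors; the mapping and detectors are assumptions, not the paper's definitions.

```python
# Hypothetical mapping from dirty-data types to quality characteristics.
ERROR_TO_CHARACTERISTIC = {
    "missing_value": "completeness",
    "wrong_type": "accuracy",
    "out_of_range": "accuracy",
    "duplicate_key": "consistency",
}

def quality_metrics(records, detectors):
    """Score each characteristic as the fraction of records free of the
    dirty-data types mapped to it. detectors: {error_type: record -> bool}."""
    flawed = {c: set() for c in set(ERROR_TO_CHARACTERISTIC.values())}
    for i, rec in enumerate(records):
        for etype, detect in detectors.items():
            if detect(rec):
                flawed[ERROR_TO_CHARACTERISTIC[etype]].add(i)
    n = max(len(records), 1)
    return {c: 1 - len(bad) / n for c, bad in flawed.items()}

# Example: quality_metrics(rows, {"missing_value": lambda r: None in r.values()})
```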


Extraction of Data Quality Characteristics from Dirty Data (데이터 오류에서 추출한 데이터 품질 특성)

  • 김수경; 최병주
    • Proceedings of the Korean Information Science Society Conference, 2000.04a, pp.549-551, 2000
  • Assuring the quality of a software product is very important, and the international standard ISO/IEC 9126 provides standards for software quality characteristics and measurement metrics. However, because ISO/IEC 9126 limits software to programs, procedures, rules, and related documents, it cannot be applied to data quality. In this paper, to evaluate and control data quality, we classify the types of data errors and, based on this classification, derive data quality characteristics and sub-characteristics. The quality characteristics are extracted by mapping the software quality characteristics defined in ISO/IEC 9126 onto the data error types. The classification of data quality characteristics presented in this paper serves as a criterion for measuring and controlling the quality of the data or knowledge that a knowledge engineering system provides to end users.


LDF-CLOCK: The Least-Dirty-First CLOCK Replacement Policy for PCM-based Swap Devices

  • Yoo, Seunghoon; Lee, Eunji; Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science, v.15 no.1, pp.68-76, 2015
  • Phase-change memory (PCM) is a promising technology that is anticipated to be used in the memory hierarchy of future computer systems. However, its access time is slower than that of DRAM, and it has a limited endurance cycle. For this reason, PCM is being considered as a high-speed storage medium (such as a swap device) or as long-latency memory. In this paper, we adopt PCM as a virtual-memory swap device and present a new page replacement policy that considers the characteristics of PCM. Specifically, we aim to reduce the write traffic to PCM by considering the dirtiness of pages when making a replacement decision. The proposed replacement policy tracks the dirtiness of a page at the granularity of a sub-page and replaces the least dirty page among the pages not recently used (see the sketch below). Experimental results with various workloads show that the proposed policy reduces the amount of data written to PCM by 22.9% on average and up to 73.7% compared to CLOCK. It also extends the lifespan of PCM by 49.0% and reduces the energy consumption of PCM by 3.0% on average.
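
A minimal Python sketch of the mechanism described above, assuming a per-page reference bit and a set of dirty sub-page indices per page; the names and the one-sweep candidate collection are illustrative choices, not taken from the paper.

```python
class LDFClock:
    """Least-dirty-first CLOCK (sketch): among pages whose reference bit is
    clear after one sweep, evict the page with the fewest dirty sub-pages."""

    def __init__(self, frames, subpages_per_page=8):
        self.frames = frames
        self.subpages = subpages_per_page
        self.meta = {}     # page_id -> {"ref": 0/1, "dirty": set(sub-page idx)}
        self.ring = []     # resident pages in clock order
        self.hand = 0

    def access(self, page_id, written_subpage=None):
        if page_id not in self.meta:
            if len(self.ring) >= self.frames:
                self._evict()
            self.meta[page_id] = {"ref": 0, "dirty": set()}
            self.ring.append(page_id)
        self.meta[page_id]["ref"] = 1
        if written_subpage is not None:           # a write dirties one sub-page
            self.meta[page_id]["dirty"].add(written_subpage)

    def _evict(self):
        candidates = []
        for _ in range(len(self.ring)):           # one full sweep of the hand
            pid = self.ring[self.hand % len(self.ring)]
            if self.meta[pid]["ref"]:
                self.meta[pid]["ref"] = 0         # second chance
            else:
                candidates.append(pid)
            self.hand += 1
        if not candidates:                        # everything was referenced
            candidates = list(self.ring)
        # PCM write cost grows with dirty sub-pages, so pick the least dirty.
        victim = min(candidates, key=lambda p: len(self.meta[p]["dirty"]))
        self.ring.remove(victim)
        del self.meta[victim]
```

Only the dirty sub-pages of the victim need to be written to PCM, which is why sub-page granularity tracking reduces write traffic beyond what whole-page dirty bits allow.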

The Least-Dirty-First CLOCK Replacement Policy for Phase-Change Memory based Swap Devices (PCM 기반 스왑 장치를 위한 클럭 기반 최소 쓰기 우선 교체 정책)

  • Yoo, Seunghoon; Lee, Eunji; Bahn, Hyokyung
    • Journal of KIISE, v.42 no.9, pp.1071-1077, 2015
  • In this paper, we adopt PCM (phase-change memory) as a virtual memory swap device and present a new page replacement policy that considers the characteristics of PCM. Specifically, we aim to reduce the write traffic to PCM by considering the dirtiness of pages when making a replacement decision. The proposed policy tracks the dirtiness of a page at the granularity of a sub-page and replaces the least dirty page among the pages not recently used. Experimental results show that the proposed policy reduces the amount of data written to PCM by 22.9% on average and up to 73.7% compared to CLOCK. It also extends the lifespan of PCM by 49.0% and reduces the energy consumption of PCM by 3.0% on average.

Data Reduction Method in Massive Data Sets

  • Namo, Gecynth Torre; Yun, Hong-Won
    • Journal of information and communication convergence engineering, v.7 no.1, pp.35-40, 2009
  • Many researchers are studying ways to improve the performance of RFID systems, and many papers have addressed data management, one of the major drawbacks of this potent technology. As RFID systems capture billions of data records, problems arising from dirty data and large data volumes have caused an uproar in the RFID community, and researchers are seeking ways to address the issue. Effective data management is especially important for handling large volumes of data. This paper presents data reduction techniques that attempt to address these issues and introduces a new data reduction algorithm that may serve as an alternative way to reduce data in RFID systems (an illustrative sketch of RFID data reduction follows below). A process for extracting data from the reduced database is also presented. A performance study is conducted to analyze the new data reduction algorithm; our analysis shows the utility and feasibility of the categorization reduction algorithm.
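
The abstract does not specify the reduction algorithm itself. As a hedged illustration of what data reduction means in an RFID setting, the sketch below collapses repeated (tag, reader) reads within a time window into interval records, a common RFID smoothing/reduction approach; it is not the paper's categorization algorithm.

```python
from datetime import timedelta

def reduce_readings(readings, window=timedelta(seconds=30)):
    """Collapse repeated (tag, reader) reads that fall within `window` of the
    previous kept read into one interval record:
    (tag, reader, first_seen, last_seen, count)."""
    readings = sorted(readings, key=lambda r: (r[0], r[1], r[2]))
    reduced = []
    for tag, reader, ts in readings:
        if (reduced
                and reduced[-1][0] == tag and reduced[-1][1] == reader
                and ts - reduced[-1][3] <= window):
            t, r, first, _, n = reduced[-1]
            reduced[-1] = (t, r, first, ts, n + 1)   # extend the open interval
        else:
            reduced.append((tag, reader, ts, ts, 1))  # start a new interval
    return reduced

# Example: three reads of tag "E200" at reader "dock-1" in quick succession
# collapse into a single interval row with count == 3.
```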

Automatic Algorithm for Cleaning Asset Data of Overhead Transmission Line (가공송전 전선 자산데이터의 정제 자동화 알고리즘 개발 연구)

  • Mun, Sung-Duk; Kim, Tae-Joon; Kim, Kang-Sik; Hwang, Jae-Sang
    • KEPCO Journal on Electric Power and Energy, v.7 no.1, pp.73-77, 2021
  • As big data analysis technologies develop worldwide, the importance of data-driven asset management for electric power facilities is increasing. Securing data quality is essential, since it determines the performance of the risk evaluation algorithm used for asset management. To improve the reliability of asset management, asset data must be preprocessed. In particular, a process for cleaning dirty data is required, and an algorithm that reduces processing time and improves accuracy is also urgently needed. This paper presents an automatic cleaning algorithm specialized for overhead transmission line asset data. The algorithm enables data cleaning by analyzing the quality and overall pattern of the raw data (a sketch follows below).
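
The abstract describes rule-based cleaning at a high level. Below is a minimal pandas sketch under assumed column names (`line_id`, `install_year`, `conductor_type`) and made-up rules: rule violations are marked missing and then filled from other records of the same line. It illustrates the general approach, not the paper's actual rule set.

```python
import pandas as pd

# Hypothetical validity rules; the paper's rule set is not given in the abstract.
RULES = {
    "install_year": lambda s: s.between(1960, 2021),
    "conductor_type": lambda s: s.isin(["ACSR", "HACSR", "STACIR"]),
}

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates().copy()
    for col, rule in RULES.items():
        df.loc[~rule(df[col]), col] = pd.NA      # rule violations become missing
    # Fill missing values from other records of the same transmission line.
    def fill_mode(s):
        m = s.mode()
        return s.fillna(m.iloc[0]) if not m.empty else s
    df["conductor_type"] = df.groupby("line_id")["conductor_type"].transform(fill_mode)
    return df
```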

Automatic Cleaning Algorithm of Asset Data for Transmission Cable (지중 송전케이블 자산데이터의 자동 정제 알고리즘 개발연구)

  • Hwang, Jae-Sang; Mun, Sung-Duk; Kim, Tae-Joon; Kim, Kang-Sik
    • KEPCO Journal on Electric Power and Energy, v.7 no.1, pp.79-84, 2021
  • Data quality is the fundamental element underpinning big data analysis, artificial intelligence technologies, and asset management systems, and it directly affects the reliability of the entire system. For this reason, momentum for data cleaning work has recently increased, and data cleaning methods are being investigated around the world. In the field of electric power, however, methods for cleaning asset data have not been fully established; therefore, this paper studies an automatic cleaning algorithm for transmission cable asset data. The cleaning algorithm consists of missing-data treatment and outlier-data treatment, and rule-based and expert-opinion-based cleaning methods are combined to handle these dirty data (an outlier-test sketch follows below).
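
For the outlier-treatment half, one standard test (used here only as an illustration; the paper's specific method is not given in the abstract) is Tukey's IQR fence:

```python
import pandas as pd

def flag_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Tukey's rule: values outside [Q1 - k*IQR, Q3 + k*IQR] are outliers."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

# Usage: mark outlying cable lengths missing, then fill them with the same
# rule-based or expert-default treatment used for ordinary missing data.
# df.loc[flag_outliers(df["cable_length_m"]), "cable_length_m"] = pd.NA
```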

The Taxonomy of Dirty Data for MPEG-2 TS (MPEG-2 표준을 위한 오류 데이터 분류)

  • 곽태희; 최병주
    • Proceedings of the Korean Information Science Society Conference, 2001.04a, pp.691-693, 2001
  • DASE (Digital TV Application Software Environment) is an international standard for data broadcasting that processes data in the MPEG-2 TS (Moving Picture Experts Group-2 Transport Stream) format. Because only the input data specification, rather than the source code, is published, injecting errors into test data is a suitable way to test a DASE system for faults, and this requires a set of dirty-data items for the MPEG-2 standard. In this paper, we develop dirty-data items for the MPEG-2 standard based on the taxonomy of Kim et al. for relational databases. The taxonomy can be usefully applied to fault-injection testing of DASE systems (an injection sketch follows below).
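
As a concrete illustration of injecting dirty data into an MPEG-2 TS (the paper's actual error items are not listed in the abstract), the sketch below corrupts two well-known invariants of the format: the 0x47 sync byte that starts every 188-byte packet, and the 4-bit continuity counter in the low nibble of packet byte 3.

```python
import random

TS_PACKET_SIZE = 188     # every MPEG-2 TS packet is 188 bytes
SYNC_BYTE = 0x47         # and begins with the sync byte 0x47

def inject_sync_error(ts: bytes, packet_index: int) -> bytes:
    """Dirty-data item: corrupt the sync byte of one packet."""
    buf = bytearray(ts)
    off = packet_index * TS_PACKET_SIZE
    assert buf[off] == SYNC_BYTE, "stream not aligned to packet boundaries"
    buf[off] ^= random.randrange(1, 256)    # guaranteed to differ from 0x47
    return bytes(buf)

def inject_cc_error(ts: bytes, packet_index: int) -> bytes:
    """Dirty-data item: jump the 4-bit continuity counter (low nibble of
    byte 3), which a compliant demultiplexer must flag as a discontinuity."""
    buf = bytearray(ts)
    off = packet_index * TS_PACKET_SIZE
    buf[off + 3] = (buf[off + 3] & 0xF0) | ((buf[off + 3] + 2) & 0x0F)
    return bytes(buf)
```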
