• Title/Summary/Keyword: Data Cleaning (데이터 클리닝)


Design of SQL Based RFID Cleaning Module (SQL 기반 RFID 클리닝 모듈 설계)

  • Yun, Hee-Sung;Kim, Dong-Kyun;Lee, Sang-Jung
    • Proceedings of the Korea Information Processing Society Conference / 2007.11a / pp.1088-1091 / 2007
  • We design a cleaning module that compensates for the tag read-rate problem, one of the factors hindering the commercialization of RFID technology. The cleaning module refines raw data from RFID readers into information usable at the application level. To verify the module's performance, logical zones are defined for the tags and tag movements are tracked. The experimental results compare the system before and after applying the cleaning module to evaluate its performance. (A sketch of this style of SQL-based cleaning follows below.)

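The abstract does not give the module's actual schema or queries; the following is a minimal sketch, assuming a hypothetical `raw_reads(tag_id, zone, ts)` table and a simple time-window threshold rule in SQLite, to illustrate what SQL-based RFID cleaning can look like.

```python
import sqlite3

# Hypothetical schema: raw_reads(tag_id, zone, ts) holds raw reader events.
# The "cleaning" rule here is a sketch: a tag counts as present in a zone during
# a window only if it was read at least MIN_READS times in that window, which
# smooths over the missed reads that lower raw RFID detection rates.
WINDOW_SECONDS = 5
MIN_READS = 2

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_reads (tag_id TEXT, zone TEXT, ts INTEGER);
INSERT INTO raw_reads VALUES
  ('T1', 'A', 0), ('T1', 'A', 1), ('T1', 'A', 3),   -- T1 reliably seen in zone A
  ('T2', 'A', 2);                                    -- T2 read only once (likely noise)
""")

cleaned = conn.execute("""
SELECT tag_id, zone, ts / ? AS win, COUNT(*) AS reads
FROM raw_reads
GROUP BY tag_id, zone, win
HAVING reads >= ?
""", (WINDOW_SECONDS, MIN_READS)).fetchall()

for tag_id, zone, win, reads in cleaned:
    print(f"tag {tag_id} accepted in zone {zone} during window {win} ({reads} reads)")
```

A real deployment would also track tag movement across logical zones, but the window-and-threshold pattern above is the core of this kind of cleaning.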

Cleaning Noises from Time Series Data with Memory Effects

  • Cho, Jae-Han;Lee, Lee-Sub
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.37-45 / 2020
  • The deep learning development process is an iterative task that requires considerable manual work. Among its steps, pre-processing of training data is particularly costly and significantly affects the learning results. In the early days of AI algorithm research, training data was drawn mainly from public databases provided by data scientists. Training data collected in real environments, by contrast, is mostly operational sensor data and inevitably contains various kinds of noise. Accordingly, a variety of data cleaning frameworks and noise removal methods have been studied. In this paper, we propose a method for detecting and removing noise from time-series data, such as the sensor data that arises in IoT environments. The method uses linear regression so that the system repeatedly finds noisy points and provides replacement values to clean the training data. To verify its effectiveness, a simulation method is proposed, along with a way to determine the factors that yield optimal cleaning results. (A minimal regression-based sketch follows below.)
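
The abstract does not specify the regression windows or thresholds; the following is a minimal sketch, assuming a sliding window, a residual threshold of k standard deviations, and replacement of flagged points by the fitted value, to illustrate the iterative find-and-replace style of cleaning described.

```python
import numpy as np

def clean_series(y, window=20, k=3.0, max_iter=5):
    """Iteratively flag points whose residual from a local linear fit exceeds
    k standard deviations and replace them with the fitted value.
    (Window size, threshold, and iteration count are illustrative assumptions.)"""
    y = np.asarray(y, dtype=float).copy()
    x = np.arange(len(y))
    for _ in range(max_iter):
        changed = False
        for start in range(0, len(y), window):
            xs, ys = x[start:start + window], y[start:start + window]
            if len(xs) < 3:
                continue
            slope, intercept = np.polyfit(xs, ys, 1)   # local linear trend
            fitted = slope * xs + intercept
            resid = ys - fitted
            outliers = np.abs(resid) > k * resid.std()
            if outliers.any():
                y[start:start + window][outliers] = fitted[outliers]
                changed = True
        if not changed:
            break
    return y

# Example: a noisy ramp with two injected spikes.
rng = np.random.default_rng(0)
signal = np.linspace(0, 10, 100) + rng.normal(0, 0.1, 100)
signal[30] += 5.0
signal[70] -= 4.0
print(np.abs(clean_series(signal) - np.linspace(0, 10, 100)).max())
```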

An Efficient Cleaning Scheme for File Defragmentation on Log-Structured File System (로그 구조 파일 시스템의 파일 단편화 해소를 위한 클리닝 기법)

  • Park, Jonggyu;Kang, Dong Hyun;Seo, Euiseong;Eom, Young Ik
    • Journal of KIISE / v.43 no.6 / pp.627-635 / 2016
  • When many processes issue write operations alternately on a Log-structured File System (LFS), the created files can become fragmented at the file system layer even though LFS allocates new blocks for each process sequentially. Unfortunately, this file fragmentation degrades read performance because it increases the number of block I/Os. In addition, read-ahead, which increases the amount of data requested at a time, exacerbates the degradation. In this paper, we suggest a new cleaning method for LFS that minimizes file fragmentation. During the cleaning process, our method sorts valid data blocks by inode number before copying them to a new segment, so that fragmented blocks are relocated contiguously. Experimentally, our cleaning method eliminates 60% of the file fragmentation present before cleaning and, as a result, improves sequential read throughput by 21% when read-ahead is applied. (A simplified sketch of the sorting step follows below.)
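
The paper's implementation lives inside the kernel's LFS cleaner; the following is a minimal user-level sketch, assuming a hypothetical `Block(inode, offset, data)` record, to show the core idea of ordering a victim segment's valid blocks by (inode, offset) before writing them into the new segment.

```python
from dataclasses import dataclass

@dataclass
class Block:
    inode: int      # owning file's inode number
    offset: int     # block offset within that file
    data: bytes

def clean_segment(valid_blocks):
    """Copy the valid blocks of a victim segment into a new segment, but first
    sort them by (inode, offset) so each file's blocks land contiguously."""
    new_segment = []
    for blk in sorted(valid_blocks, key=lambda b: (b.inode, b.offset)):
        new_segment.append(blk)          # sequential append = sequential write in LFS
    return new_segment

# Blocks of two files interleaved by alternating writers:
victim = [Block(7, 0, b"a"), Block(9, 0, b"x"), Block(7, 1, b"b"), Block(9, 1, b"y")]
print([(b.inode, b.offset) for b in clean_segment(victim)])
# -> [(7, 0), (7, 1), (9, 0), (9, 1)] : defragmented layout
```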

A Study of Purity-based Page Allocation Scheme for Flash Memory File Systems (플래시 메모리 파일 시스템을 위한 순수도 기반 페이지 할당 기법에 대한 연구)

  • Baek, Seung-Jae;Choi, Jong-Moo
    • The KIPS Transactions: Part A / v.13A no.5 s.102 / pp.387-398 / 2006
  • In this paper, we propose a new page allocation scheme for flash memory file systems. The proposed scheme allocates pages by exploiting the concept of purity, defined as the fraction of blocks in which valid pages and invalid pages coexist. The purity determines the cost of block cleaning, that is, the portion of pages to be copied and blocks to be erased during block cleaning. To enhance purity, the scheme classifies data as hot-modified or cold-modified and allocates the two classes to different blocks. The hot/cold classification is based on both static properties, such as the attributes of the data, and dynamic properties, such as the frequency of modification. We have implemented the proposed scheme in YAFFS and evaluated its performance on an embedded board equipped with a 400MHz XScale CPU, 64MB SDRAM, and 64MB NAND flash memory. Performance measurements show that the proposed scheme reduces block cleaning time by up to 15.4 seconds, with an average of 7.8 seconds, compared to standard YAFFS, and the improvement grows as flash memory utilization increases. (A sketch of purity and hot/cold allocation follows below.)
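
The paper's exact purity formula and classifier are not given in the abstract; the following is a minimal sketch under the quoted definitions, assuming a hypothetical block model and a modification-count threshold for the hot/cold split.

```python
# Minimal sketch of purity-based allocation (hypothetical model, not YAFFS code).

def purity(blocks):
    """Fraction of blocks in which valid and invalid pages coexist
    (the definition quoted in the abstract)."""
    mixed = sum(1 for b in blocks if b["valid"] > 0 and b["invalid"] > 0)
    return mixed / len(blocks) if blocks else 0.0

def classify(page, mod_counts, hot_threshold=4):
    """Hot/cold split: a static hint (e.g. metadata is hot) plus dynamic
    modification frequency. The threshold is an illustrative assumption."""
    if page["is_metadata"]:
        return "hot"
    return "hot" if mod_counts.get(page["id"], 0) >= hot_threshold else "cold"

def allocate(page, mod_counts, hot_block, cold_block):
    """Place hot and cold pages into different blocks so that, when hot data is
    invalidated, whole blocks become invalid and can be erased without copying."""
    target = hot_block if classify(page, mod_counts) == "hot" else cold_block
    target.append(page["id"])
    return target

hot_blk, cold_blk = [], []
allocate({"id": "p1", "is_metadata": False}, {"p1": 9}, hot_blk, cold_blk)
print(hot_blk, cold_blk)   # -> ['p1'] [] : a frequently modified page goes to the hot block
```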

Data Quality Management: Operators and a Matching Algorithm with a CRM Example (데이터 품질 관리 : CRM을 사례로 연산자와 매칭기법 중심)

  • 심준호
    • The Journal of Society for e-Business Studies / v.8 no.3 / pp.117-130 / 2003
  • It is not unusual to observe a great amount of redundant or inconsistent data even within an e-business system such as a CRM (Customer Relationship Management) system. The problem is aggravated when we build a system whose information is gathered from different sources. Data quality management is needed to avoid redundant or inconsistent data in such information systems. A data quality process generally consists of three phases: data cleaning (scrubbing), matching, and integration. In this paper, we introduce and categorize data quality operators for each phase. We then describe the distance function used in the matching phase and present a matching algorithm, PRIMAL (a PRactical Matching Algorithm). Finally, we discuss related work and future research. (A sketch of distance-based matching follows below.)

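PRIMAL itself and the paper's distance function are not spelled out in the abstract; the following is a minimal sketch, assuming a simple normalized string similarity over selected CRM fields and a fixed match threshold, to illustrate the matching phase of such a pipeline.

```python
from difflib import SequenceMatcher

def field_similarity(a, b):
    """Similarity of two field values in [0, 1] (difflib's ratio is a stand-in
    for the paper's distance function, which the abstract does not specify)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def record_similarity(r1, r2, fields=("name", "email", "city")):
    return sum(field_similarity(r1[f], r2[f]) for f in fields) / len(fields)

def match_pairs(records, threshold=0.85):
    """Naive all-pairs matching; a practical algorithm would block or index first."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if record_similarity(records[i], records[j]) >= threshold:
                pairs.append((i, j))
    return pairs

customers = [
    {"name": "Kim, Minsu", "email": "minsu.kim@example.com", "city": "Seoul"},
    {"name": "Kim Minsu",  "email": "minsu.kim@example.com", "city": "Seoul"},
    {"name": "Lee, Jiwon", "email": "jiwon@example.com",     "city": "Busan"},
]
print(match_pairs(customers))   # -> [(0, 1)] : the two near-duplicate records
```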

A Segment Space Recycling Scheme for Optimizing Write Performance of LFS (LFS의 쓰기 성능 최적화를 위한 세그먼트 공간 재활용 기법)

  • Oh, Yong-Seok;Kim, Eun-Sam;Choi, Jong-Moo;Lee, Dong-Hee;Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.963-967 / 2009
  • The Log-structured File System (LFS) collects all modified data into a memory buffer and writes it sequentially to a segment on disk, so it can exploit the maximum bandwidth of storage devices on which sequential writes are much faster than random writes. However, since disk space is finite, LFS must perform cleaning to produce free segments, and this cleaning is the main reason LFS performance deteriorates when file system utilization is high. To avoid costly cleaning and the resulting performance loss, we propose a segment space recycling (SSR) scheme that writes modified data directly into the invalid areas of segments, together with a classification of data and segments that exploits locality of reference to optimize the scheme. We implement U-LFS, which employs the SSR scheme in LFS, and experimental results show that the scheme increases performance by up to 1.9 times on HDDs and 1.6 times on SSDs compared to WOLF when file system utilization is high. (A sketch of segment space recycling follows below.)
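
The real scheme operates on on-disk segments inside the file system; the following is a minimal in-memory sketch, assuming a hypothetical segment model, to contrast ordinary append-after-cleaning with writing new data directly into a segment's invalid slots. It omits the paper's locality-based classification of data and segments.

```python
# Hypothetical in-memory model of LFS segments, for illustration only.
SEGMENT_SLOTS = 4

class Segment:
    def __init__(self):
        self.slots = [None] * SEGMENT_SLOTS      # None marks an invalid/free slot

    def invalid_slots(self):
        return [i for i, s in enumerate(self.slots) if s is None]

def write_with_recycling(segments, block):
    """Segment space recycling: reuse an invalid slot in an existing segment
    instead of forcing a cleaning pass to create a fresh free segment."""
    for seg in segments:
        free = seg.invalid_slots()
        if free:
            seg.slots[free[0]] = block
            return seg
    new_seg = Segment()                          # no invalid space left: append a new segment
    new_seg.slots[0] = block
    segments.append(new_seg)
    return new_seg

segs = [Segment()]
segs[0].slots = ["a", None, "c", None]           # a partially invalidated segment
write_with_recycling(segs, "d")
print(segs[0].slots)                             # -> ['a', 'd', 'c', None]
```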

Design and Implementation of Cleaning Policy for Flash Memory (플래쉬 메모리를 위한 클리닝 정책 설계 및 구현)

  • 임대영;윤기철;김길용
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.217-219 / 2001
  • Flash memory is a non-volatile memory that can store and modify data, and its light weight, low power consumption, shock resistance, and fast data processing make it well suited to mobile computer systems. However, flash memory does not support update-in-place, and the number of erase operations per memory cell is limited. Taking these drawbacks into account, we propose a new cleaning policy that considers the ratio of valid data within a segment, the amount of hot data (data expected to be updated in the near future), and the number of times each segment has been erased. (A sketch of such a victim-selection policy follows below.)
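
The abstract lists the factors but not how they are combined; the following is a minimal sketch, assuming a hypothetical weighted score over valid-data ratio, hot-data count, and erase count, to illustrate cleaning victim selection on those three criteria.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    valid_ratio: float   # fraction of live data (copying cost)
    hot_count: int       # pages expected to be updated again soon
    erase_count: int     # wear so far (erases per cell are limited)

def cleaning_score(seg, w_valid=1.0, w_hot=0.5, w_wear=0.2, max_hot=64, max_erase=10_000):
    """Lower score = better cleaning victim. The weights and normalizers are
    illustrative assumptions, not the paper's actual policy."""
    return (w_valid * seg.valid_ratio                 # little live data means little to copy
            + w_hot * seg.hot_count / max_hot         # hot data will invalidate itself soon, so wait
            + w_wear * seg.erase_count / max_erase)   # spread wear across segments

def pick_victim(segments):
    return min(segments, key=cleaning_score)

segs = [Segment(0.9, 2, 100), Segment(0.2, 40, 5000), Segment(0.3, 1, 200)]
print(pick_victim(segs))   # the mostly invalid, cold, lightly worn segment wins
```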

Development of PTSD Web-based learning (소방공무원을 위한 외상후스트레스장애(PTSD) 웹기반교육 개발)

  • Kim, Jee-Hee
    • Proceedings of the KAIS Fall Conference / 2009.12a / pp.212-213 / 2009
  • The purpose of this study is to provide baseline data for a critical-incident stress education program by analyzing posttraumatic stress disorder (PTSD) caused by the traumatic stress firefighters experience in the field. To this end, the influence of three independent variables (work burden, stress, stress coping), one mediating variable (critical-incident stress), and the dependent variable (physical symptoms) was examined. The subjects were 970 firefighters nationwide, surveyed with a structured questionnaire from March to December 2007. Data were analyzed with SPSS 14.0 and the structural equation modeling package AMOS 7.0, and data cleaning was performed to verify that the coded data had been entered correctly. Structural equation model analysis for hypothesis testing showed that physical symptoms decrease when work burden, stress, and critical-incident stress are low and stress coping is high. A web-based curriculum of ten sessions was developed jointly with the Seoul Fire Academy, and web-based education for all firefighters is planned to begin in January 2010.


UPnP Services for RFID Context-Aware System (RFID 상황인식 시스템을 위한 UPnP 서비스)

  • Kim, Dong-Kyun;Jeon, Byung-Chan;Lee, Sang-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.11 / pp.2005-2014 / 2008
  • In this paper, UPnP is employed as the service discovery and control mechanism for RFID context-aware services. Using UPnP makes it possible to deploy context-aware services easily and to provide zero-configuration for RFID services. In addition, an SQL-based cleaning module that raises detection rates is developed, since context-aware applications rely heavily on the streams of data gathered from RFID tags; with this cleaning technique, detection rates improve from 60-80% to 98% or more. To verify the UPnP-based RFID context-aware service, a sample context-aware scenario for physical distribution services is implemented on the UPnP-over-RFID system. The impact of UPnP messages for service advertisements on network congestion and the behavior of the SQL cleaning module are measured and analyzed, and the results show the correctness and validity of the proposed system. (A sketch of window-based tag-stream smoothing follows below.)
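
The paper's module is expressed in SQL over the tag stream; the following is a minimal stream-side sketch in plain Python, assuming a sliding window of recent read cycles and a simple "seen at least once per window" rule, to illustrate how smoothing raw read streams can raise effective detection rates.

```python
from collections import deque

class TagSmoother:
    """Report a tag as present if it was read at least once within the last
    `window` reader cycles; this papers over the 60-80% raw read rates
    mentioned in the abstract. The window length is an illustrative assumption."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)   # recent raw read sets

    def update(self, raw_reads):
        """raw_reads: set of tag ids reported by the reader in one cycle."""
        self.history.append(set(raw_reads))
        present = set()
        for reads in self.history:
            present |= reads
        return present

smoother = TagSmoother(window=3)
cycles = [{"T1", "T2"}, {"T1"}, {"T1", "T2"}, set(), {"T2"}]   # T2 is read intermittently
for cycle in cycles:
    print(sorted(smoother.update(cycle)))   # both tags stay "present" despite missed reads
```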

Efficient Approximate String Searches with Inverted Lists through Search Range Reduction (효율적인 유사문자열 검색을 위한 역리스트 탐색 기법)

  • Lee, Eun-Seok;Kim, Jong-Ik
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.1310-1313 / 2011
  • Approximate string search finds, within a string collection, the strings that are similar to a given query string, and is used in fields such as information retrieval and data cleaning. For efficient approximate search, inverted lists are built over the string collection in advance; when a query arrives, the inverted lists related to it are merged to find the strings that satisfy the similarity criterion. To reduce cost, one approach merges only some of the inverted lists and performs binary searches on the remaining lists. This paper proposes a method that reduces the cost of probing the inverted lists by eliminating unnecessary search ranges during those binary searches. (A sketch of the merge-plus-binary-search baseline follows below.)
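
The paper's range-reduction rule itself is not given in the abstract; the following is a minimal sketch of the baseline it builds on, assuming 2-gram inverted lists and a simplified count filter over distinct grams, where the short lists are merged and the long lists are probed with `bisect` binary searches.

```python
from bisect import bisect_left
from collections import defaultdict

Q = 2  # gram length (illustrative assumption)

def grams(s):
    return [s[i:i + Q] for i in range(len(s) - Q + 1)]

def build_index(strings):
    index = defaultdict(list)
    for sid, s in enumerate(strings):
        for g in set(grams(s)):
            index[g].append(sid)          # sids arrive in increasing order, so lists stay sorted
    return index

def probe(sorted_ids, sid):
    i = bisect_left(sorted_ids, sid)      # binary search on a long inverted list
    return i < len(sorted_ids) and sorted_ids[i] == sid

def candidates(query, index, tau):
    """Count filter: a string within edit distance tau of the query must share
    at least T of the query's q-grams (a simplified, distinct-gram version)."""
    lists = sorted((index[g] for g in set(grams(query)) if g in index), key=len)
    T = (len(query) - Q + 1) - Q * tau
    if T <= 0:                            # the filter cannot prune anything
        return {sid for lst in lists for sid in lst}
    n_long = min(T - 1, len(lists))
    long_lists = lists[len(lists) - n_long:]
    short_lists = lists[:len(lists) - n_long]
    counts = defaultdict(int)
    for lst in short_lists:               # merge only the short lists
        for sid in lst:
            counts[sid] += 1
    result = set()
    for sid, c in counts.items():         # complete the count by probing the long lists
        c += sum(probe(lst, sid) for lst in long_lists)
        if c >= T:
            result.add(sid)
    return result

strings = ["cleaning", "clearing", "learning", "meaning", "database"]
print(sorted(candidates("cleanin", build_index(strings), tau=1)))
# -> [0, 1, 2, 3]: 'database' shares too few 2-grams and is pruned; the surviving
#    candidates would still be verified against the actual similarity criterion.
```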