• Title/Summary/Keyword: File Time

A File Clustering Algorithm for Wear-leveling (마모도 평준화를 위한 File Clustering 알고리즘)

  • Lee, Taehwa;Cha, Jaehyuk
    • Journal of Digital Contents Society / v.14 no.1 / pp.51-57 / 2013
  • Storage devices based on flash memory have many attractive features such as high performance, low power consumption, shock resistance, and low weight, so they are replacing HDDs to a certain extent. A flash-memory-based storage device has an FTL (Flash Translation Layer), which emulates a block storage device such as an HDD. Garbage collection, one of the major functions of the FTL, strongly affects both the performance and the lifetime of the device. However, there is no de facto standard garbage collection algorithm. To address this problem, we propose a File Clustering Algorithm. The algorithm expects pages from the same file to be updated at about the same time, so such pages are clustered into the same block. To support this, we propose a page allocation policy in the FTL and use the MIN-MAX GAP to guarantee wear leveling. To verify the algorithm, we use the TPC benchmark. The performance evaluation reveals that the proposed algorithm is comparable to the existing algorithms (no wear leveling, hot/cold) and shows approximately a 690% improvement in terms of wear leveling.
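
The page allocation idea can be illustrated with a small sketch. The following Python fragment is a hypothetical illustration (class and method names are invented, not taken from the paper): it directs consecutive pages of one file into the same flash block and computes the min-max gap of per-block erase counts as a simple wear-leveling indicator.

```python
from collections import defaultdict

class FileClusteringAllocator:
    """Hypothetical sketch of a page allocation policy that clusters
    pages of the same file into the same flash block."""

    def __init__(self, pages_per_block):
        self.pages_per_block = pages_per_block
        self.open_blocks = {}              # file_id -> (block_id, used_pages)
        self.erase_counts = defaultdict(int)
        self.next_block = 0

    def _new_block(self):
        block = self.next_block
        self.next_block += 1
        return block

    def allocate(self, file_id):
        """Return the block that should receive the next page of file_id."""
        block, used = self.open_blocks.get(file_id, (None, self.pages_per_block))
        if used >= self.pages_per_block:   # current block is full, open a new one
            block, used = self._new_block(), 0
        self.open_blocks[file_id] = (block, used + 1)
        return block

    def erase(self, block):
        self.erase_counts[block] += 1

    def min_max_gap(self):
        """Wear-leveling indicator: gap between most- and least-erased blocks."""
        counts = list(self.erase_counts.values())
        return max(counts) - min(counts) if counts else 0
```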

A Study on Edit Order of Text Cells on the MS Excel Files (MS 엑셀 파일의 텍스트 셀 입력 순서에 관한 연구)

  • Lee, Yoonmi;Chung, Hyunji;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.2 / pp.319-325 / 2014
  • Since smart phones and tablet PCs have come into wide use, users can create and edit documents anywhere in real time. If the input and edit flow of a document can be traced, it can be used as evidence in a digital forensic investigation. The typical document application is MS (Microsoft) Office. MS Office applications use two file formats: the Compound Document File Format, used from version 97 to 2003, and the OOXML (Office Open XML) File Format, used from version 2007 to the present. Previous studies on MS Office files focused on deciding whether a file had been tampered with, through detection of concealed items or analysis of document properties. This paper analyzes the input order of text cells in MS Excel files and shows, from a digital forensic perspective, how to determine which cell was edited last.
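
The OOXML format itself hints at how such ordering can be recovered: an .xlsx file is a ZIP archive, and unique text values live in xl/sharedStrings.xml. The sketch below is my own illustration, not the authors' tool; it lists the shared strings in storage order, on the assumption (worth verifying per Excel version) that new strings are appended as they are first typed.

```python
import zipfile
import xml.etree.ElementTree as ET

NS = {"m": "http://schemas.openxmlformats.org/spreadsheetml/2006/main"}

def shared_strings_in_order(xlsx_path):
    """Return the unique text values of an .xlsx workbook in storage order.

    An OOXML workbook is a ZIP archive; text cells reference entries in
    xl/sharedStrings.xml, which is assumed here to grow roughly in the
    order the strings were first entered.
    """
    with zipfile.ZipFile(xlsx_path) as zf:
        root = ET.fromstring(zf.read("xl/sharedStrings.xml"))
    strings = []
    for si in root.findall("m:si", NS):
        # Concatenate all text runs inside one shared-string item.
        strings.append("".join(t.text or "" for t in si.iter(f"{{{NS['m']}}}t")))
    return strings

if __name__ == "__main__":
    for i, s in enumerate(shared_strings_in_order("sample.xlsx")):
        print(i, s)
```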

A Study of Verification Methods for File Carving Tools by Scenario-Based Image Creation (시나리오 기반 이미지 개발을 통한 파일 카빙 도구 검증 방안 연구)

  • Kim, Haeni;Kim, Jaeuk;Kwon, Taekyoung
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.4 / pp.835-845 / 2019
  • File carving is a technique for recovering files without metadata, for example from formatted storage media or a damaged file system; it generally looks for a file's specific header/footer signatures and data structures. However, file carving has long struggled with recovering fragmented files, and since important files are relatively often fragmented, a solution is very important for digital forensics. To overcome these limitations, various carving techniques and tools are continuously being developed, and data sets from various studies and institutions are provided for functional verification. However, existing data sets are ineffective for verifying tools because of their limited environmental conditions. Therefore, this paper discusses the importance of fragmented file carving and develops 16 scenario-based images for carving tool verification. The carving rate and accuracy for each medium of the developed images are shown using Foremost, a well-known carving tool.
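
As a concrete instance of the header/footer approach such tools rely on, the sketch below scans a raw image for JPEG signatures (SOI FF D8 FF, EOI FF D9) and extracts the bytes in between. It is a minimal, non-fragmentation-aware carver written for illustration, not the verification images or Foremost itself.

```python
def carve_jpegs(image_path, out_prefix="carved"):
    """Minimal signature-based carver: find JPEG header/footer pairs in a
    raw disk image and write each candidate file out.
    Contiguous files only; fragmented files are not reassembled."""
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"
    data = open(image_path, "rb").read()
    count, pos = 0, 0
    while True:
        start = data.find(SOI, pos)
        if start == -1:
            break
        end = data.find(EOI, start + len(SOI))
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as f:
            f.write(data[start:end + len(EOI)])
        count += 1
        pos = end + len(EOI)
    return count
```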

A Design of a Flash Memory Swapping File System using LFM (LFM 기법을 이용한 플래시 메모리 스와핑 파일 시스템 설계)

  • Han, Dae-Man;Koo, Yong-Wan
    • Journal of Internet Computing and Services / v.6 no.4 / pp.47-58 / 2005
  • There are two major types of flash memory products, namely NAND-type and NOR-type flash memory. NOR-type flash memory is generally deployed as ROM BIOS code storage because it offers byte-level I/O and fast read operations. However, NOR-type flash memory is more expensive than NAND-type flash memory in terms of cost per byte, so NAND-type flash memory is more widely used for large data storage such as embedded Linux file systems. In this paper, we design an efficient flash memory file system for embedded systems and present a swapping scheme that compensates for the weak system performance of a flash file system built on NAND-type flash memory, together with a swapping algorithm with a bounded execution time. Through implementation and simulation studies, we show improved performance on NAND-type flash memory that meets the requirements of the embedded system.

Dynamic File Allocation Problems In Distributed Systems (분산 시스템의 동적 파일 할당 연구)

  • Seo, Pil-Kyo
    • The Transactions of the Korea Information Processing Society / v.4 no.7 / pp.1681-1693 / 1997
  • In a distributed system, the simple file allocation problem determines the placement of copies of a file so as to minimize the operating costs. The simple file allocation problem assumes the cost parameters to be fixed; in practice, these parameters change over time. In this research, dynamic file allocation problems for both single and multiple files are considered, which account for these changing parameters. A model for the dynamic file allocation problem is formulated as a mixed integer program, for which a Lagrangian-relaxation-based branch-and-bound algorithm is developed. The algorithm is implemented and its efficiency is tested on medium to large test problems.
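
For orientation, a static version of the underlying model can be written as a small integer program; the dynamic problem studied in the paper adds a time index and reallocation costs. The formulation below is a generic, textbook-style sketch rather than the authors' exact model: x_j = 1 if a copy is stored at node j, y_ij = 1 if node i is served from the copy at node j, s_j is the storage cost, a_i the access rate of node i, and c_ij the per-access communication cost.

```latex
\min \sum_{j} s_j x_j \;+\; \sum_{i}\sum_{j} a_i\, c_{ij}\, y_{ij}
\quad\text{s.t.}\quad
\sum_{j} y_{ij} = 1 \;\;\forall i,\qquad
y_{ij} \le x_j \;\;\forall i,j,\qquad
x_j,\, y_{ij} \in \{0,1\}
```

In the dynamic variant, each variable gains a time index and migration terms couple consecutive periods, which is what makes Lagrangian relaxation combined with branch-and-bound attractive.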

Performance Enhancement and Evaluation of Distributed File System for Cloud (클라우드 분산 파일 시스템 성능 개선 및 평가)

  • Lee, Jong Hyuk
    • KIPS Transactions on Computer and Communication Systems / v.7 no.11 / pp.275-280 / 2018
  • A suitable distributed file system is required for loading large data sets and processing them at high speed through subsequent applications in a cloud environment. In this paper, we propose a write performance improvement method based on GlusterFS and evaluate the performance of MapRFS, CephFS, and GlusterFS among existing distributed file systems in a cloud environment. The proposed method improves response time by changing the synchronization level used by the synchronous replication method from disk to memory. Experimental results show that the distributed file system with the proposed method applied outperforms the other distributed file systems for sequential write, random write, and random read.
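
The idea of moving the synchronization point from disk to memory can be illustrated outside GlusterFS as well. The sketch below is a generic illustration, not GlusterFS code: a "disk-level" replica write forces the data to stable storage with fsync before acknowledging, while a "memory-level" write acknowledges once the data is in the kernel page cache, trading durability for response time.

```python
import os

def replicate_write(path, data, sync_level="disk"):
    """Illustrative replica write with two synchronization levels.

    sync_level="disk"   -> acknowledge only after fsync (durable, slower)
    sync_level="memory" -> acknowledge once the kernel holds the data in
                           its page cache (faster, risk of loss on crash)
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
        if sync_level == "disk":
            os.fsync(fd)        # force data to the device before returning
    finally:
        os.close(fd)

# A replicated write would call replicate_write() once per replica and
# report completion when the chosen synchronization level is reached.
```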

Detecting TOCTOU Race Condition on UNIX Kernel Based File System through Binary Analysis (바이너리 분석을 통한 UNIX 커널 기반 File System의 TOCTOU Race Condition 탐지)

  • Lee, SeokWon;Jin, Wen-Hui;Oh, Heekuck
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.4 / pp.701-713 / 2021
  • A race condition is a vulnerability in which two or more processes access or manipulate a common resource at the same time, producing unintended results. This vulnerability can lead to problems such as denial of service or elevation of privilege. When a vulnerability occurs in software, the relevant information is documented, but often the cause of the vulnerability or the source code is not disclosed; in that case, analysis at the binary level is necessary to detect the vulnerability. This paper aims to detect the Time-Of-Check Time-Of-Use (TOCTOU) race condition vulnerability of UNIX kernel-based file systems at the binary level. Various static and dynamic detection techniques have been studied for this vulnerability; existing detection tools based on static analysis work on source code, and few studies have so far been conducted at the binary level. In this paper, we propose a method for detecting TOCTOU race conditions in file systems based on the Control Flow Graph and Call Graph obtained through the Binary Analysis Platform (BAP), a binary static analysis tool.
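
The vulnerability class itself is easiest to show at the source level (the paper works at the binary level, where the same pattern appears as separate check and use system calls). The sketch below is a standard textbook example, not code from the paper: the gap between os.access() and open() lets an attacker swap the checked file, e.g. for a symlink to a privileged one.

```python
import os

def write_report_vulnerable(path, text):
    """TOCTOU pattern: check, then use. Between the two calls an attacker
    can replace `path` (e.g. with a symlink), so the check is meaningless."""
    if os.access(path, os.W_OK):          # time of check
        with open(path, "w") as f:        # time of use
            f.write(text)

def write_report_safer(path, text):
    """Avoid the separate check: open with flags that refuse to follow a
    symlink or reuse an existing file, letting the kernel decide atomically."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_NOFOLLOW, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(text)
```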

Construction of rapid earthquake damage evaluation system - Real-time two-dimensional visualization of ground motion (지진신속피해평가시스템 구축 - 실시간 지진동의 2차원적 영상화)

  • 지헌철;전정수;이희일;박정호;임인섭
    • Proceedings of the Earthquake Engineering Society of Korea Conference / 2002.09a / pp.51-60 / 2002
  • In this study we developed a scheme for visualizing spatial ground-motion measurements in real time using DSS data. Although this scheme is useful for national earthquake mitigation plans on its own, it could also serve as the crucial core of a rapid earthquake damage evaluation system. DSS stands for Data Subscription Service: a pre-assigned request for seismic stations to send very limited, brief data with high priority and negligible transmission load. In addition to visualizing the damaged area with intensity, the corresponding epicenter can be estimated roughly for a quick event alarm. For the interpolation of spatially irregular PGA data, the GMT program 'surface' was used with the NetCDF grid file format. Since a grid file is similar to a PostScript file, a program called 'shading' was coded in C using the Matpak library in order to convert grid files into image files.
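
The interpolation step described above, where GMT's surface program turns scattered PGA readings into a NetCDF grid, can be approximated in a few lines for illustration. The sketch below uses SciPy's griddata instead of GMT and is not the authors' pipeline.

```python
import numpy as np
from scipy.interpolate import griddata

def pga_to_grid(station_xy, pga_values, nx=200, ny=200):
    """Interpolate irregular station PGA readings onto a regular grid,
    roughly the role played by GMT's `surface` in the original system."""
    station_xy = np.asarray(station_xy, dtype=float)
    pga_values = np.asarray(pga_values, dtype=float)
    xs = np.linspace(station_xy[:, 0].min(), station_xy[:, 0].max(), nx)
    ys = np.linspace(station_xy[:, 1].min(), station_xy[:, 1].max(), ny)
    grid_x, grid_y = np.meshgrid(xs, ys)
    grid = griddata(station_xy, pga_values, (grid_x, grid_y), method="cubic")
    return xs, ys, grid   # the grid can then be rendered as an intensity image
```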

Benchmarks for Performance Testing of MPI-IO on the General Parallel File System (범용 병렬화일 시스템 상에서 MPI-IO 방안의 성능 평가 벤티마크)

  • Park, Seong-Sun
    • The KIPS Transactions:PartA / v.8A no.2 / pp.125-132 / 2001
  • IBM developed MPI-IO, part of the MPI-2 standard, on the General Parallel File System. We designed and implemented various matrix multiplication benchmarks to evaluate its performance. MPI-IO on the General Parallel File System provides four kinds of data access methods: non-collective blocking, collective blocking, non-collective non-blocking, and split collective operations. In this paper, we propose benchmarks that measure the I/O time and the computation time for these data access methods. We describe not only their implementation but also the performance evaluation results.
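
Two of the four access methods can be contrasted in a few lines with mpi4py, which wraps the same MPI-IO calls; this is an illustrative sketch, not the paper's GPFS benchmark code. Write_at is the non-collective blocking call, and Write_at_all is its collective counterpart in which all ranks participate.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank writes one contiguous slice of a shared file.
block = np.full(1024, rank, dtype=np.int32)
offset = rank * block.nbytes

fh = MPI.File.Open(comm, "matrix.bin",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)

t0 = MPI.Wtime()
fh.Write_at(offset, block)        # non-collective, blocking
t_noncoll = MPI.Wtime() - t0

t0 = MPI.Wtime()
fh.Write_at_all(offset, block)    # collective, blocking: all ranks take part
t_coll = MPI.Wtime() - t0

fh.Close()
if rank == 0:
    print(f"non-collective {t_noncoll:.6f}s, collective {t_coll:.6f}s")
```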

An efficient caching scheme for replacing a dirty block in software RAID file systems (소프트웨어 RAID 파일 시스템에서 오손 블록 교체시에 효율적인 캐슁 기법)

  • 김종훈;노삼혁;원유헌
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.7 / pp.1599-1606 / 1997
  • The software RAID file system is defined as a system which distributes data redundantly across an array of disks attached to workstations connected by a high-speed network. This provides high throughput as well as higher availability. In this paper, we present an efficient caching scheme for the software RAID file system. The performance of this scheme is compared to two other schemes previously proposed for conventional file systems and adapted to the software RAID file system. As in hardware RAID systems, small writes are the performance bottleneck in software RAID file systems. To tackle this problem, we logically divide the cache into two levels. By keeping old data and parity values in the second-level cache, we were able to eliminate much of the extra disk reads and writes necessary for the write-back of dirty blocks. Using trace-driven simulations, we show that the proposed scheme improves performance in both average response time and average system busy time.
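
The role of the second-level cache can be sketched briefly. The fragment below is a hypothetical illustration of the idea (names invented, not from the paper): when a dirty block is written back, a RAID-5-style small write needs the old data and old parity to compute the new parity, so keeping both in a second-level cache turns two extra disk reads into cache hits.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class TwoLevelCache:
    """Hypothetical sketch: the first level holds current blocks, the second
    level keeps old data/parity so small-write parity updates avoid disk reads."""

    def __init__(self, disk):
        self.disk = disk          # object with read(addr) / write(addr, data)
        self.l1 = {}              # addr -> current (possibly dirty) data
        self.l2 = {}              # addr -> old data or old parity

    def write(self, addr, data):
        self.l1[addr] = data      # dirty block stays in the first level

    def write_back(self, addr, parity_addr):
        """RAID-5 small write: new_parity = old_parity XOR old_data XOR new_data."""
        new_data = self.l1.pop(addr)
        old_data = self.l2.get(addr) or self.disk.read(addr)
        old_parity = self.l2.get(parity_addr) or self.disk.read(parity_addr)
        new_parity = xor(xor(old_parity, old_data), new_data)
        self.disk.write(addr, new_data)
        self.disk.write(parity_addr, new_parity)
        # Remember values that will be "old" the next time around.
        self.l2[addr], self.l2[parity_addr] = new_data, new_parity
```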
