• Title/Summary/Keyword: file I/O

Management Technique of Buffer Cache for Rendering Systems (렌더링 시스템을 위한 버퍼캐쉬 관리기법)

  • Shin, Donghee;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.5 / pp.155-160 / 2018
  • In this paper, we found that the buffer cache of general-purpose systems does not perform well for rendering software, and presented a new buffer cache management scheme that resolves this problem. To do so, we collected various file I/O traces of rendering software and analyzed their characteristics. From this analysis, we observed that file I/Os in rendering consist of long loops, short loops, random accesses, and write-once accesses. Based on this observation, we presented a buffer cache management scheme that allocates cache space to each access type and manages each partition appropriately, thereby improving buffer cache performance by 19% on average and by up to 55%.
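
A minimal sketch of the idea behind this scheme is given below: the buffer cache is split into per-access-type partitions, each managed independently. The partition sizes, the type labels, and the LRU policy within each partition are illustrative assumptions; the abstract does not specify the paper's actual allocation or replacement policy.

```python
from collections import OrderedDict

# Sketch of a buffer cache partitioned by access type. The capacities and the
# type labels ("long_loop", "short_loop", "random", "write_once") are assumed.
class PartitionedBufferCache:
    def __init__(self, capacities):
        self.capacities = capacities
        self.partitions = {t: OrderedDict() for t in capacities}

    def access(self, access_type, block_id, data=None):
        part = self.partitions[access_type]
        if block_id in part:                      # cache hit: refresh recency
            part.move_to_end(block_id)
            return part[block_id]
        if len(part) >= self.capacities[access_type]:
            part.popitem(last=False)              # evict only within this partition
        part[block_id] = data                     # miss: fetch and insert
        return data

cache = PartitionedBufferCache(
    {"long_loop": 256, "short_loop": 64, "random": 32, "write_once": 8})
cache.access("short_loop", block_id=42, data=b"...")
```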

A Study on I/O performance analysis Architectures for file system based on SSD (리눅스 기반의 SSD 상에서 동작하는 파일 시스템의 I/O 분석 모듈 설계)

  • So-Yeon Kim;Chi-Hyun Park;Hong-Chan Roh;Sang-Hyun Park
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.257-260 / 2008
  • An SSD has a different structure from a hard disk and differs from the conventional environment in several respects, such as its vulnerability to frequent fragmented write operations, so analysis of the operations that occur in this environment, particularly I/O operations, is essential. When SSD performance is measured with the benchmarks conventionally used as I/O measurement tools, only read and write performance at the upper layers can be analyzed, so it is difficult to accurately measure the performance of the I/O operations actually executed on the SSD at the lower layers. Because SSD performance varies widely with the efficiency of its internal storage algorithms, an analysis tool that can measure performance accurately is required. In this paper, we propose a module for layer-by-layer analysis of I/O operations on an SSD.
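
To illustrate the kind of layer-by-layer observation the abstract argues for, here is a minimal sketch that compares what an application writes at the file level with what actually reaches the block device, read from /proc/diskstats on Linux. The device name, test path, and write size are assumptions; this is not the proposed module itself.

```python
import os

# Compare an upper-layer (file) write with lower-layer (block device) traffic.
DEV = "sda"   # assumed device name

def sectors_written(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])   # field 10: sectors written to the device
    raise ValueError(dev)

before = sectors_written(DEV)
fd = os.open("/tmp/testfile", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"x" * (1 << 20))          # 1 MiB write at the file (upper) layer
os.fsync(fd)                            # force it down to the device layer
os.close(fd)
after = sectors_written(DEV)
print("application wrote 1 MiB; device saw", (after - before) * 512, "bytes")
```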

Design and Implementation of Autonomic De-fragmentation for File System Aging (파일 시스템 노화를 해소하기 위한 자동적인 단편화 해결 시스템의 설계와 구현)

  • Lee, Jun-Seok;Park, Hyun-Chan;Yoo, Chuck
    • The KIPS Transactions:PartA / v.16A no.2 / pp.101-112 / 2009
  • Existing techniques for defragmenting the file system require intensive disk operations for a period of time at a specific moment, as in a disk defragmentation program. In this paper, to solve this problem, we design and implement an automatic and continuous defragmentation system that spreads out the disk operations. We propose the Automatic Layout Scoring (ALS) mechanism for measuring the degree of fragmentation, and suggest the Lazy Copy mechanism, which copies fragmented data at idle time in order to distribute the disk operations. We find fragmented files with the Automatic Layout Scoring mechanism and then search for empty space for each such file. After lazily copying the found file to the empty space, so that the file is not lost, the algorithm resolves the fragmentation by updating the file's i-node. We implement these algorithms in Linux and evaluate them on small, fragmented files to obtain layout scores. We outperform the Linux EXT2 file system by 2.4%~10.4% in the layout scoring evaluation. The read and write performance for various file sizes is also better than EXT2, by 1%~8.5% for writes and by 1.2%~7.5% for reads. We suggest this system for solving the fragmentation problem automatically, without disturbing I/O tasks and without manual management.
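
As a rough illustration of a layout score, the sketch below computes the fraction of logically adjacent file blocks that are also physically contiguous on disk. The abstract does not give the exact ALS formula, so this metric is an assumption.

```python
# Illustrative layout score: share of adjacent logical blocks that are also
# physically contiguous. 1.0 means fully contiguous, lower means fragmented.
def layout_score(physical_blocks):
    """physical_blocks[i] is the on-disk block number of logical block i."""
    if len(physical_blocks) < 2:
        return 1.0
    contiguous = sum(
        1 for a, b in zip(physical_blocks, physical_blocks[1:]) if b == a + 1)
    return contiguous / (len(physical_blocks) - 1)

print(layout_score([100, 101, 102, 250, 251]))   # 0.75: one discontinuity
```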

An Efficient Encryption/Decryption Approach to Improve the Performance of Cryptographic File System in Embedded System (내장형 시스템에서 암호화 파일 시스템을 위한 효율적인 암복호화 기법)

  • Heo, Jun-Young;Park, Jae-Min;Cho, Yoo-Kun
    • Journal of KIISE:Computer Systems and Theory / v.35 no.2 / pp.66-74 / 2008
  • Since modern embedded systems need to access, manipulate, or store sensitive information, they need to be equipped with cryptographic file systems. However, cryptographic file systems perform poorly, so they have not been widely adopted in embedded systems. Most cryptographic file systems degrade performance unnecessarily because of their system architecture. This paper proposes ISEA (Indexed and Separated Encryption Approach), which supports encryption/decryption within the system architecture and removes this redundant performance loss. ISEA carries out encryption and decryption at different layers relative to the page cache: encryption is performed below the page cache layer, while decryption is performed above it. ISEA stores decrypted data in the page cache so that it can be reused by subsequent I/O requests without decryption. ISEA also provides page indexing, which divides each cached page into cipher blocks and manages them block by block, decrypting pages partially so that unnecessary decryption is eliminated. In synthetic read/write experiments with various cache hit rates, the results suggest that ISEA improves the performance of a cryptographic file system efficiently.
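
A minimal sketch of the page-indexing idea follows: a cached page is divided into fixed-size cipher blocks and only the blocks a request touches are decrypted, with already-decrypted blocks reused. The XOR "cipher" is a placeholder for a real block cipher, and the 512-byte block size is an assumption.

```python
BLOCK_SIZE = 512

def decrypt_block(data, key):
    return bytes(b ^ key for b in data)      # placeholder, not real crypto

class IndexedPage:
    def __init__(self, ciphertext, key):
        self.ciphertext = ciphertext
        self.key = key
        self.plain = {}                       # block index -> decrypted bytes

    def read(self, offset, length):
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for i in range(first, last + 1):
            if i not in self.plain:           # decrypt only the missing blocks
                chunk = self.ciphertext[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
                self.plain[i] = decrypt_block(chunk, self.key)
        page = b"".join(self.plain[i] for i in range(first, last + 1))
        start = offset - first * BLOCK_SIZE
        return page[start:start + length]

page = IndexedPage(decrypt_block(b"A" * 4096, 0x5A), key=0x5A)
print(page.read(1000, 100))                  # touches only blocks 1 and 2
```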

Performance Optimization in GlusterFS on SSDs (SSD 환경 아래에서 GlusterFS 성능 최적화)

  • Kim, Deoksang;Eom, Hyeonsang;Yeom, Heonyoung
    • KIISE Transactions on Computing Practices / v.22 no.2 / pp.95-100 / 2016
  • In the current era of big data and cloud computing, the amount of data being used is increasing, and various systems for processing this big data rapidly are being developed. A distributed file system is often used to store the data, and GlusterFS is one of the popular distributed file systems. As computer technology has advanced, NAND flash SSDs (Solid State Drives), which are high-performance storage devices, have become cheaper. For this reason, datacenter operators attempt to use SSDs in their systems and to install GlusterFS on them. However, since GlusterFS is designed for HDDs (Hard Disk Drives), when SSDs are used instead of HDDs, performance is degraded due to structural problems. These problems include the use of the I/O-cache, Read-ahead, and Write-behind translators. By removing these features, which do not fit SSDs that excel at random I/O, we achieved performance improvements of up to 255% for 4KB random reads and up to 50% for 64KB random reads.
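
One plausible way to express the change described above on a GlusterFS volume is to turn the named translators off through the gluster CLI, as in the sketch below. The volume name is an assumption, and the paper may have removed the translators from the volume graph differently.

```python
import subprocess

# Disable the SSD-unfriendly performance translators on an assumed volume.
VOLUME = "ssdvol"
for option in ("performance.io-cache",
               "performance.read-ahead",
               "performance.write-behind"):
    subprocess.run(["gluster", "volume", "set", VOLUME, option, "off"],
                   check=True)
```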

Analyses of the Effect of System Environment on Filebench Benchmark (시스템 환경이 Filebench 벤치마크에 미치는 영향 분석)

  • Song, Yongju;Kim, Junghoon;Kang, Dong Hyun;Lee, Minho;Eom, Young Ik
    • Journal of KIISE / v.43 no.4 / pp.411-418 / 2016
  • In recent times, NAND flash memory has become widely used as secondary storage in computing devices. Accordingly, to take advantage of NAND flash memory, new file systems have been actively studied and proposed. The performance of these file systems is generally measured with benchmark tools. However, since benchmark tools are executed as software simulations, many researchers obtain non-uniform benchmark results depending on the system environment. In this paper, we use Filebench, one of the most popular and representative benchmark tools, to analyze benchmark results and study why these variations occur. Our experimental results show differences in benchmark results depending on the system environment. In addition, this study substantiates that the measured performance is affected mainly by background I/O requests and fsync operations.
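
The sketch below illustrates one of the factors named above, the cost of fsync, by timing small buffered writes with and without an fsync per write. The file paths, write count, and sizes are arbitrary choices for illustration.

```python
import os, time

def timed_writes(path, do_fsync):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.perf_counter()
    for _ in range(1000):
        os.write(fd, b"x" * 4096)
        if do_fsync:
            os.fsync(fd)              # force each write to stable storage
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

print("buffered:", timed_writes("/tmp/bench_buffered", False))
print("fsync   :", timed_writes("/tmp/bench_fsync", True))
```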

Design and Implementation of a Metadata Structure for Large-Scale Shared-Disk File System (대용량 공유디스크 파일 시스템에 적합한 메타 데이타 구조의 설계 및 구현)

  • 이용주;김경배;신범주
    • Journal of KIISE:Computer Systems and Theory / v.30 no.1 / pp.33-49 / 2003
  • Recently, there has been large storage demand for manipulating multimedia data. One major line of research addressing this demand is the SAN (Storage Area Network), which serves local file requests directly from shared-disk storage and eliminates server bottlenecks to performance and availability. A SAN also improves network latency and bandwidth through new channel interfaces such as FC (Fibre Channel). However, traditional local file systems and distributed file systems are not well suited to an efficient storage network like a SAN, and there has been little research on metadata structures for large-scale inode objects such as files and directories. In this paper, we describe the architecture and design issues of our shared-disk file system, and provide an efficient bitmap for well-formed block allocation on each host, an extent-based semi-flat structure for storing large-scale file data, and a two-phase directory structure that uses extendible hashing. We also describe a detailed algorithm for implementing the file system's device driver in the Linux kernel, and compare our file system with a general-purpose file system (EXT2) and a shared-disk file system (GFS) in terms of file creation, directory creation, and I/O rate.
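
The sketch below gives a minimal extendible-hashing directory, the technique named for the two-phase directory structure. The bucket capacity and the use of Python's built-in hash are illustrative assumptions, not the paper's design.

```python
BUCKET_CAPACITY = 4

class Bucket:
    def __init__(self, depth):
        self.depth = depth          # local depth
        self.items = {}             # name -> inode number

class ExtendibleDirectory:
    def __init__(self):
        self.global_depth = 1
        self.buckets = [Bucket(1), Bucket(1)]

    def _index(self, name):
        return hash(name) & ((1 << self.global_depth) - 1)

    def lookup(self, name):
        return self.buckets[self._index(name)].items.get(name)

    def insert(self, name, inode):
        while True:
            bucket = self.buckets[self._index(name)]
            if len(bucket.items) < BUCKET_CAPACITY or name in bucket.items:
                bucket.items[name] = inode
                return
            self._split(bucket)

    def _split(self, bucket):
        if bucket.depth == self.global_depth:   # directory must double first
            self.buckets.extend(self.buckets)
            self.global_depth += 1
        bucket.depth += 1
        sibling = Bucket(bucket.depth)
        high_bit = 1 << (bucket.depth - 1)
        # redistribute entries between the old bucket and its new sibling
        for name in list(bucket.items):
            if hash(name) & high_bit:
                sibling.items[name] = bucket.items.pop(name)
        # repoint directory slots whose distinguishing bit selects the sibling
        for i in range(len(self.buckets)):
            if self.buckets[i] is bucket and i & high_bit:
                self.buckets[i] = sibling

d = ExtendibleDirectory()
for i in range(100):
    d.insert(f"file{i}.dat", 1000 + i)
print(d.lookup("file42.dat"))                   # -> 1042
```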

An Empirical Study on Linux I/O stack for the Lifetime of SSD Perspective (SSD 수명 관점에서 리눅스 I/O 스택에 대한 실험적 분석)

  • Jeong, Nam Ki;Han, Tae Hee
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.9 / pp.54-62 / 2015
  • Although a NAND flash-based SSD (Solid-State Drive) provides superior performance compared to an HDD (Hard Disk Drive), it has a major drawback in write endurance. As a result, the lifetime of an SSD is determined by the workload, which becomes a big challenge given current technology trends such as the shift from SLC (Single Level Cell) to MLC (Multi Level Cell) and even TLC (Triple Level Cell). Most previous studies have dealt with wear-leveling or with improving SSD lifetime through the hardware architecture. In this paper, we propose an optimal configuration of the host I/O stack, focusing on the file system, I/O scheduler, and link power management, using JEDEC enterprise workloads and evaluating WAF (Write Amplification Factor), which captures SSD lifetime efficiency for host writes processed into flash memory. Experimental analysis shows that the optimal I/O stack configuration from the perspective of SSD lifetime is MinPower-Dead-XFS, which prolongs the lifetime of the SSD approximately 2.6 times compared to MaxPower-Cfq-Ext4, the best-performing combination. Although performance was reduced by 13%, this contribution demonstrates a considerable aspect of SSD lifetime in relation to I/O stack optimization.
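
A minimal sketch of applying MinPower-Dead-XFS style settings on Linux follows, together with the WAF definition used above. The device and host names, and the scheduler name (deadline vs. mq-deadline on newer kernels), are assumptions; the file system choice is made at mkfs/mount time and is not shown.

```python
# Select the deadline I/O scheduler and the min_power SATA link power policy
# through sysfs (requires root; device/host names are assumed).
def write_sysfs(path, value):
    with open(path, "w") as f:
        f.write(value)

write_sysfs("/sys/block/sda/queue/scheduler", "deadline")
write_sysfs("/sys/class/scsi_host/host0/link_power_management_policy",
            "min_power")

# WAF as used above: NAND bytes actually written divided by host bytes written.
def write_amplification_factor(nand_bytes_written, host_bytes_written):
    return nand_bytes_written / host_bytes_written
```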

An Efficient Cleaning Scheme for File Defragmentation on Log-Structured File System (로그 구조 파일 시스템의 파일 단편화 해소를 위한 클리닝 기법)

  • Park, Jonggyu;Kang, Dong Hyun;Seo, Euiseong;Eom, Young Ik
    • Journal of KIISE / v.43 no.6 / pp.627-635 / 2016
  • When many processes issue write operations alternately on a Log-structured File System (LFS), the created files can become fragmented at the file system layer even though LFS sequentially allocates new blocks for each process. Unfortunately, this file fragmentation degrades read performance because it increases the number of block I/Os. Additionally, read-ahead operations, which increase the amount of data requested at a time, exacerbate the performance degradation. In this paper, we suggest a new cleaning method for LFS that minimizes file fragmentation. During the LFS cleaning process, our method sorts valid data blocks by inode number before copying the valid blocks to a new segment. This sorting relocates fragmented blocks contiguously. Our cleaning method experimentally eliminates 60% of file fragmentation compared to the fragmentation present before cleaning. Consequently, our cleaning method improves sequential read throughput by 21% when read-ahead is applied.
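
A minimal sketch of the cleaning step follows: valid blocks from victim segments are sorted by inode number (and, as an additional assumption, by file offset) before being copied into the new segment, so blocks of the same file become contiguous.

```python
from collections import namedtuple

Block = namedtuple("Block", ["inode", "offset", "data", "valid"])

def clean_segments(victim_segments):
    live = [b for seg in victim_segments for b in seg if b.valid]
    live.sort(key=lambda b: (b.inode, b.offset))   # group blocks per file
    return live                                    # blocks for the new segment

seg1 = [Block(7, 1, b"a", True), Block(3, 0, b"b", True), Block(7, 0, b"c", False)]
seg2 = [Block(3, 1, b"d", True), Block(7, 0, b"e", True)]
print([(b.inode, b.offset) for b in clean_segments([seg1, seg2])])
# -> [(3, 0), (3, 1), (7, 0), (7, 1)]
```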

The development of the high effective and stoppageless file system for high performance computing (High Performance Computing 환경을 위한 고성능, 무정지 파일시스템 구현)

  • Park, Yeong-Bae;Choe, Seung-Hwan;Lee, Sang-Ho;Kim, Gyeong-Su;Gong, Yong-Jun
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.395-401 / 2004
  • In today's highly network-centric computing and enterprise environment, it is becoming essential to transmit data reliably at very high rates. The traditional client/server-based NFS (Network File System) and AFS (Andrew File System) have met various demands so far, but they can no longer satisfy the needs of today's scalable high-performance computing environments. Not only performance but also redundancy of the data sharing service has become a serious problem. In the case of NFS, locking and cache issues cause the file system to reboot and create problems when it is used simply with IP takeover for an H/A service. In the case of AFS, file sharing redundancy is provided, but only once storage supporting redundancy and the necessary equipment are prepared. Lustre is an open-source cluster file system developed to meet both demands. Lustre consists of three types of subsystems: the MDS (Meta-Data Server), which offers the metadata services; OSTs (Object Storage Targets), which provide file I/O; and Lustre clients, which interact with the OSTs and the MDS. These subsystems exchange messages to pursue a scalable, high-performance file system service. In this paper, we compare the transmission speed of gigabyte-sized files between Lustre and NFS as the number of concurrent users varies, and also demonstrate the high availability of the file system by removing one or more OSTs during operation.
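
The sketch below illustrates the comparison methodology in the last sentence: measure aggregate throughput while N concurrent users each write a large file to a mount point such as a Lustre or NFS mount. The mount path, file size (64 MiB rather than gigabytes, to keep the sketch quick), and user counts are assumptions.

```python
import os, time
from concurrent.futures import ThreadPoolExecutor

MOUNT = "/mnt/testfs"          # assumed Lustre or NFS mount point
FILE_SIZE = 64 * 1024 * 1024
CHUNK = 1024 * 1024

def write_one(user_id):
    path = os.path.join(MOUNT, f"user{user_id}.bin")
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(b"x" * CHUNK)
        f.flush()
        os.fsync(f.fileno())   # count only data that reached the server/disk

for users in (1, 2, 4, 8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(write_one, range(users)))
    elapsed = time.perf_counter() - start
    print(f"{users} users: {users * FILE_SIZE / elapsed / 1e6:.1f} MB/s aggregate")
```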
