• Title/Summary/Keyword: I/O Performance (I/O 성능)

A Study of HDD Performance Improvement through Filter Driver & NAND FLASH Memory (Filter Driver 와 NAND FLASH Memory를 이용한 HDD 장치의 성능 개선에 관한 연구)

  • Kim, Jae-Kyung; Kim, Woo-Gil; Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.8 / pp.1635-1641 / 2011
  • In this paper, we study a method for improving HDD I/O performance using a filter driver and NAND flash memory. The work is motivated by the fact that NAND flash memory cannot simply replace the HDD because of its high cost, so we instead use NAND flash memory as a cache for the HDD. With a filter driver, high HDD performance can be achieved at low cost.
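
The caching idea described in this entry can be pictured with a short user-space sketch. The code below is a minimal illustration, not the authors' filter driver: a small direct-mapped read cache (standing in for the NAND flash) sits in front of a slower backing file descriptor (standing in for the HDD). All names, sizes, and the direct-mapped policy are hypothetical.

```c
#include <fcntl.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE   4096
#define CACHE_BLOCKS 1024   /* direct-mapped; ~4 MiB of "flash" cache */

struct cache_slot {
    uint64_t lba;           /* which backing-store block this slot holds */
    bool     valid;
    uint8_t  data[BLOCK_SIZE];
};

static struct cache_slot cache[CACHE_BLOCKS];

/* Read one block through the cache; hdd_fd stands in for the slow device. */
static ssize_t cached_read(int hdd_fd, uint64_t lba, uint8_t *buf)
{
    struct cache_slot *slot = &cache[lba % CACHE_BLOCKS];

    if (slot->valid && slot->lba == lba) {            /* hit: serve from cache */
        memcpy(buf, slot->data, BLOCK_SIZE);
        return BLOCK_SIZE;
    }

    ssize_t n = pread(hdd_fd, slot->data, BLOCK_SIZE, /* miss: fill from "HDD" */
                      (off_t)(lba * BLOCK_SIZE));
    if (n != BLOCK_SIZE)
        return -1;

    slot->lba = lba;
    slot->valid = true;
    memcpy(buf, slot->data, BLOCK_SIZE);
    return BLOCK_SIZE;
}
```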

A Study on Virtual Machine Consolidation According to DISK I/O Performance (디스크 I/O 성능에 따른 가상 서버 통합에 대한 고찰)

  • Han, Sung-Geun; Shin, Young-Ho; Kim, Gyu-Seok; Kim, Joong-Baek; Kim, Joo-Yeong
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.1599-1602 / 2012
  • With the spread of mobile devices such as smartphones and tablet PCs, mobile cloud computing is advancing. The core technology of such cloud computing is virtualization, with server virtualization as its foundation. Virtual servers aim for performance at or above that of physical servers, and their performance depends heavily on disk I/O. In this paper, we test the I/O performance of various disks, such as NAS, local SAS, and PCI-SSD, on virtual servers, and based on the results we examine virtual machine consolidation according to disk I/O performance.
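
A sequential-read throughput probe of the kind used in such disk comparisons can be sketched briefly. This is only an illustration, not the authors' test procedure; real disk benchmarks (e.g., fio) control caching and queue depth much more carefully, and the path and buffer size here are hypothetical.

```c
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)   /* 1 MiB per read */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    static char buf[BUF_SIZE];
    struct timespec t0, t1;
    long long total = 0;
    ssize_t n;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = read(fd, buf, BUF_SIZE)) > 0)   /* sequential read to EOF */
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%lld bytes in %.2f s = %.1f MiB/s\n",
           total, sec, total / sec / (1 << 20));

    close(fd);
    return 0;
}
```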

Browser I/O Patterns of Android Devices Analysis and Improvement Using Linux Kernel Block I/O Profiling Techniques (리눅스 커널 블록 I/O 패턴 Profiling 기법을 이용한 안드로이드 장치의 Browser I/O 패턴 분석과 개선 방안)

  • Jang, Bo-Gil; Lee, Sung-Woo; Lim, Seung-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.30-32 / 2011
  • In today's computer systems, the block I/O subsystem is a major source of performance degradation, and mobile devices such as Android have the same performance issue. In this paper, we apply blktrace, which traces I/O in the Linux block layer, to an Android device, analyze the I/O patterns produced by web browsing with SQLite, and propose ways to improve performance.

Dynamic Core Affinity for High-Performance I/O Devices Supporting Multiple Queues (다중 큐를 지원하는 고속 I/O 장치를 위한 동적 코어 친화도)

  • Cho, Joong-Yeon; Uhm, Junyong; Jin, Hyun-Wook; Jung, Sungin
    • Journal of KIISE / v.43 no.7 / pp.736-743 / 2016
  • Several studies have reported the impact of core affinity on the network I/O performance of multi-core systems. As the network bandwidth increases significantly, it becomes more important to determine the effective core affinity. Although a framework for dynamic core affinity that considers both network and disk I/O has been suggested, the multiple queues provided by high-speed I/O devices are not properly supported. In this paper, we extend the existing framework of dynamic core affinity to efficiently support the multiple queues of high-speed I/O devices, such as 40 Gigabit Ethernet and NVM Express. Our experimental results show that the extended framework can improve the HDFS file upload throughput by up to 32%, and can provide improved scalability in terms of the number of cores. In addition, we analyze the impact of the assignment policy of multiple I/O queues across a number of cores.
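
The affinity-setting building block of such a framework can be shown in a few lines. The sketch below covers only the pinning step via sched affinity, not the paper's dynamic framework; the queue-to-core mapping policy and the worker function are hypothetical placeholders.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one CPU core. */
static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Hypothetical per-queue worker: handles completions of one I/O queue. */
static void *queue_worker(void *arg)
{
    int queue_id = *(int *)arg;
    int core = queue_id % 4;   /* toy static policy: queue i -> core i % 4 */

    if (pin_to_core(core) != 0)
        fprintf(stderr, "queue %d: failed to pin to core %d\n", queue_id, core);

    /* ... poll the queue, process completions, and periodically re-evaluate
     *     the mapping (the dynamic part studied in the paper) ... */
    return NULL;
}

int main(void)
{
    pthread_t tids[4];
    int ids[4] = {0, 1, 2, 3};

    for (int i = 0; i < 4; i++)
        pthread_create(&tids[i], NULL, queue_worker, &ids[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```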

Performance Evaluation and Analysis on Single and Multi-Network Virtualization Systems with Virtio and SR-IOV (가상화 시스템에서 Virtio와 SR-IOV 적용에 대한 단일 및 다중 네트워크 성능 평가 및 분석)

  • Jaehak Lee; Jongbeom Lim; Heonchang Yu
    • The Transactions of the Korea Information Processing Society / v.13 no.2 / pp.48-59 / 2024
  • As hardware functions that natively support virtualization have been developed, user applications with various workloads run efficiently in virtualization systems. SR-IOV is a virtualization support feature that provides direct access to PCI devices, delivering high I/O performance by minimizing hypervisor and operating system intervention. With SR-IOV, network I/O acceleration can be achieved in virtualization systems, which, compared to bare-metal systems, have relatively long I/O paths and frequent context switches between user space and kernel space. To take advantage of SR-IOV, network resource management policies that derive optimal network performance when SR-IOV is applied to an instance such as a virtual machine (VM) or container are being actively studied. This paper evaluates and analyzes the network performance of SR-IOV, which implements I/O acceleration, against Virtio in terms of 1) network delay, 2) network throughput, 3) network fairness, 4) performance interference, and 5) multi-network configurations. The contributions of this paper are as follows. First, the network I/O paths of Virtio and SR-IOV in a virtualization system are clearly explained; second, the network performance of Virtio and SR-IOV is analyzed based on various performance metrics; third, the system overhead of the SR-IOV network and the possibility of optimizing it in a virtualization system with high VM density are experimentally confirmed. The experimental results and analysis are expected to serve as a reference for network resource management policies in virtualization systems that run network-intensive services such as smart factories, connected cars, deep learning inference models, and crowdsourcing.
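
The "network delay" metric compared between Virtio and SR-IOV instances is typically measured with a round-trip probe. The sketch below is illustrative only, not the paper's benchmark setup: it connects to a TCP echo server and reports average RTT. The server address, port, and iteration count are hypothetical.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *server_ip = "192.0.2.10";   /* hypothetical echo server */
    const int   port = 7;                   /* classic echo service port */
    const int   iters = 1000;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    char msg[64] = "ping", buf[64];
    struct timespec t0, t1;
    double total_us = 0;

    for (int i = 0; i < iters; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        send(fd, msg, sizeof(msg), 0);                  /* request */
        recv(fd, buf, sizeof(buf), MSG_WAITALL);        /* echoed reply */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_us += (t1.tv_sec - t0.tv_sec) * 1e6
                  + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    }

    printf("average RTT: %.1f us over %d iterations\n", total_us / iters, iters);
    close(fd);
    return 0;
}
```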

Improving the I/O Performance of Disk-Based Graph Engine by Graph Ordering (디스크 기반 그래프 엔진의 입출력 성능 향상을 위한 그래프 오더링)

  • Lim, Keunhak; Kim, Junghyun; Lee, Eunjae; Seo, Jiwon
    • KIISE Transactions on Computing Practices / v.24 no.1 / pp.40-45 / 2018
  • With the advent of big data and social networks, large-scale graph processing has become a popular research topic. Recently, an optimization technique called Gorder was proposed to improve the performance of in-memory graph processing; it improves performance by optimizing the in-memory graph layout for better cache locality. However, since it is designed for in-memory graph processing systems, the technique is not suitable for disk-based graph engines, and the cost of applying it is significantly high. To solve this problem, we propose a new graph ordering called I/O Order. I/O Order considers the characteristics of I/O accesses on SSDs and HDDs to improve the performance of disk-based graph engines. In addition, the algorithmic complexity of I/O Order is low compared to Gorder, so it is cheaper to apply. I/O Order reduces the pre-processing cost by up to 9.6 times compared to Gorder, while its performance is up to 2 times higher than random ordering for low-locality graph algorithms.
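
The general idea behind graph ordering can be sketched with a simple BFS relabeling, which is not the paper's I/O Order algorithm: vertices visited together receive adjacent IDs, so they tend to land in the same disk blocks. The CSR representation and the example graph are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>

/* Relabel n vertices of a CSR graph in BFS order starting from `root`.
 * new_id[v] receives the new label of original vertex v. */
static void bfs_order(int n, const int *row_ptr, const int *col_idx,
                      int root, int *new_id)
{
    int *queue = malloc(n * sizeof(int));
    int head = 0, tail = 0, next_label = 0;

    for (int v = 0; v < n; v++)
        new_id[v] = -1;

    new_id[root] = next_label++;
    queue[tail++] = root;

    while (head < tail) {
        int v = queue[head++];
        for (int e = row_ptr[v]; e < row_ptr[v + 1]; e++) {
            int u = col_idx[e];
            if (new_id[u] == -1) {
                new_id[u] = next_label++;   /* neighbors get adjacent labels */
                queue[tail++] = u;
            }
        }
    }
    /* Vertices unreachable from root keep label -1; a full ordering pass
     * would restart BFS from the next unvisited vertex. */
    free(queue);
}

int main(void)
{
    /* Tiny example graph: edges 0-1, 0-2, 1-3, 2-3 (undirected, CSR form). */
    int row_ptr[] = {0, 2, 4, 6, 8};
    int col_idx[] = {1, 2, 0, 3, 0, 3, 1, 2};
    int new_id[4];

    bfs_order(4, row_ptr, col_idx, 0, new_id);
    for (int v = 0; v < 4; v++)
        printf("vertex %d -> new id %d\n", v, new_id[v]);
    return 0;
}
```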

Optimizing I/O Stack for Fast Storage Devices (고속 저장 장치를 위한 입출력 스택 최적화)

  • Han, Hyuck
    • The Journal of the Korea Contents Association / v.16 no.5 / pp.251-258 / 2016
  • Recently, the demand for fast storage devices has been increasing rapidly in cloud platforms, social network services, and similar workloads. Despite the development of fast storage devices, the traditional Linux I/O stack cannot exploit their full performance because it has been optimized for disk-based storage devices. In this paper, we propose an optimized I/O stack that can fully utilize the bandwidth and latency of fast storage devices. To this end, we design a new I/O interface to replace the current block I/O interface and optimize it; our optimized I/O interface bypasses operations and layers in the block I/O subsystem of the current Linux I/O stack to fully exploit fast storage devices. We also adapt Linux file systems such as ext2 and ext4 to run on our I/O interface. We evaluate our I/O stack with multiple benchmarks, and the experimental results show that it achieves 1.7 times better throughput than the traditional Linux I/O stack.
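
The paper's replacement of the block I/O interface is kernel work and is not reproduced here. As a stock-Linux point of comparison, the sketch below shortens the I/O path in the usual way available to applications: an O_DIRECT read with a properly aligned buffer, bypassing the page cache. The device path and sizes are hypothetical.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN   4096          /* typical logical-block / page alignment */
#define IO_SIZE (64 * 1024)

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);  /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, ALIGN, IO_SIZE) != 0) {      /* O_DIRECT needs alignment */
        close(fd);
        return 1;
    }

    ssize_t n = pread(fd, buf, IO_SIZE, 0);                /* bypasses the page cache */
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes directly from the device\n", n);

    free(buf);
    close(fd);
    return 0;
}
```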

Analyses of the Effect of System Environment on Filebench Benchmark (시스템 환경이 Filebench 벤치마크에 미치는 영향 분석)

  • Song, Yongju; Kim, Junghoon; Kang, Dong Hyun; Lee, Minho; Eom, Young Ik
    • Journal of KIISE / v.43 no.4 / pp.411-418 / 2016
  • In recent times, NAND flash memory has become widely used as secondary storage for computing devices. Accordingly, new file systems that take advantage of NAND flash memory have been actively studied and proposed. The performance of these file systems is generally measured with benchmark tools. However, since benchmark tools simulate workloads in software, many researchers obtain non-uniform benchmark results depending on the system environment. In this paper, we use Filebench, one of the most popular and representative benchmark tools, to analyze benchmark results and to study why these variations occur. Our experimental results show how benchmark results differ depending on the system environment. In addition, this study substantiates that the measured performance is affected mainly by background I/O requests and fsync operations.
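
The fsync effect highlighted in this study can be observed with a small probe. This is not Filebench itself, only an illustration: it times a sequence of write-plus-fsync pairs, whose latency varies strongly with the file system and with competing background I/O. File name and sizes are hypothetical.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("fsync_probe.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    memset(buf, 'x', sizeof(buf));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < 100; i++) {
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            break;
        }
        fsync(fd);                       /* force the block down the I/O stack */
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("100 write+fsync pairs took %.1f ms (%.2f ms each)\n", ms, ms / 100);

    close(fd);
    unlink("fsync_probe.dat");
    return 0;
}
```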

A Performance Analysis of I/O Scheduler for NAND Flash File System (NAND 플래시 파일시스템의 I/O 스케줄러 성능분석)

  • Lee, Yeongseok; Lee, Changhee; Chung, Kyungho; Kim, Yonghwan; Ahn, Kwangseon
    • Journal of Korea Society of Industrial Information Systems / v.18 no.2 / pp.27-34 / 2013
  • NAND flash memory is used in many devices because of its low cost and high capacity, and demand for large-capacity NAND flash memory has grown with the multimedia capabilities of mobile devices. The JFFS2, NILFS2, and YAFFS2 file systems are mainly used on NAND flash memory. In this paper, the sequential read/write performance of these three file systems is analyzed under four I/O schedulers: CFQ (Complete Fair Queuing), NOOP (No Operation), Anticipatory, and Deadline. On JFFS2, the Anticipatory I/O scheduler performs best, reducing write time by 8% and read time by 1.5% compared to the other schedulers. On YAFFS2, read and write results are similar across the four I/O schedulers. On NILFS2, the NOOP I/O scheduler is 2% faster for writes and the Deadline I/O scheduler is 6% faster for reads than the other schedulers.
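
On Linux, the I/O scheduler compared in such studies is selected per block device through sysfs. The small helper below is illustrative and not from the paper: it reads the current scheduler from /sys/block/&lt;dev&gt;/queue/scheduler and writes a new one. The device name and scheduler choice are hypothetical, and changing the scheduler requires root privileges.

```c
#include <stdio.h>

static int set_scheduler(const char *dev, const char *sched)
{
    char path[256], current[256];
    snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);

    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(current, sizeof(current), f))
            printf("%s current schedulers: %s", dev, current);  /* e.g. "noop [cfq] deadline" */
        fclose(f);
    }

    f = fopen(path, "w");
    if (!f) {
        perror("fopen for write (need root?)");
        return -1;
    }
    fprintf(f, "%s\n", sched);     /* activate the requested scheduler */
    fclose(f);
    return 0;
}

int main(void)
{
    /* Hypothetical: switch mmcblk0 to the deadline scheduler. */
    return set_scheduler("mmcblk0", "deadline") ? 1 : 0;
}
```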

Improving the Read Performance of OneNAND Flash Memory using Virtual I/O Segment (가상 I/O 세그먼트를 이용한 OneNAND 플래시 메모리의 읽기 성능 향상 기법)

  • Hyun, Seung-Hwan; Koh, Kern
    • Journal of KIISE: Computing Practices and Letters / v.14 no.7 / pp.636-645 / 2008
  • OneNAND flash is a high-performance hybrid flash memory that combines the advantages of NAND flash and NOR flash. It has all the virtues of NAND flash plus greatly enhanced read performance, which is considered a weakness of NAND flash. As a result, it is widely used in mobile applications such as mobile phones, digital cameras, PMPs, and portable game players. However, most general-purpose operating systems, such as Linux, cannot exploit the read performance of OneNAND flash because of restrictions imposed by their virtual memory system and block I/O architecture. To solve this problem, we suggest a new approach called the virtual I/O segment. Using virtual I/O segments, the superior read performance of OneNAND flash can be exploited without modifying the existing block I/O architecture and MTD subsystem. Experiments with our implementation show that this approach reduces the read latency of OneNAND flash by as much as 54%.
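
The actual virtual I/O segment mechanism lives inside the kernel block/MTD path and cannot be shown from user space. As a loose application-level analog of the underlying idea, the sketch below contrasts many page-sized reads with a single large read over the same range; the file name and sizes are hypothetical, and without dropping the page cache or using O_DIRECT between runs the second measurement mostly hits the cache, so this only illustrates the difference in request count.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define TOTAL (4 * 1024 * 1024)   /* 4 MiB */
#define PAGE  4096

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    int fd = open("testfile.bin", O_RDONLY);   /* hypothetical test file */
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(TOTAL);
    struct timespec t0, t1;

    /* Many small, page-sized reads. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (off_t off = 0; off < TOTAL; off += PAGE)
        pread(fd, buf + off, PAGE, off);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("page-sized reads: %.1f ms\n", elapsed_ms(t0, t1));

    /* One large read covering the same range. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pread(fd, buf, TOTAL, 0);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("single large read: %.1f ms\n", elapsed_ms(t0, t1));

    free(buf);
    close(fd);
    return 0;
}
```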