http://dx.doi.org/10.7472/jksii.2017.18.2.53

Research for Efficient Massive File I/O on Parallel Programs  

Hwang, Gyuhyeon (IT Division, U2Bio Co.)
Kim, Youngtae (Department of Computer Engineering, Gangneung-Wonju National University)
Publication Information
Journal of Internet Computing and Services / v.18, no.2, 2017, pp. 53-60
Abstract
On distributed-memory computers, each processor handles its input and output independently, so several different file I/O methods are in use. In this paper, we implemented and compared various file I/O methods to show their efficiency on distributed-memory parallel computers. The implemented I/O schemes are as follows: (i) parallel I/O using NFS, (ii) sequential I/O on the host processor with domain decomposition, and (iii) MPI-IO. For the performance analysis, we used a separate file server and multiple processors on one or two computational servers. The results show that parallel I/O over NFS is the most efficient for input, while sequential output with domain decomposition is the most efficient for output. Unexpectedly, MPI-IO showed the lowest performance.
Keywords
Parallel I/O; Collective I/O; Distributed memory computer; MPI-IO; NFS;