Title/Summary/Keyword: distributed file system

A File Merging Scheme for Efficient Handling of Small Files in Hadoop Distributed File System (Hadoop Distribute file system에서 Small file을 효과적으로 처리하기 위한 파일 병합 기법 연구)

  • Park, Jong-Chang; Youn, Hee-Yong
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.15-17 / 2013
  • HDFS (Hadoop Distributed File System) was designed for handling large files and is currently in the spotlight as an ideal distributed file system. HDFS shares many similarities with existing distributed file systems, but it is distinguished by providing fault tolerance and a streaming data-access pattern, which let it store large files efficiently. In practice, however, small files account for a large share of HDFS data sets, and such large numbers of small files not only make data processing expensive but also degrade the file handling and memory performance of the master node. This paper therefore analyzes the impact of small files on HDFS and proposes a file merging scheme based on a local index file to resolve these problems.
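The merging idea in this abstract (pack many small files into one container file and keep a local index so each original file stays addressable by offset) can be sketched in a few lines. The function names and JSON index layout below are illustrative assumptions, not the paper's implementation:

    import json
    import os

    def merge_small_files(small_file_paths, container_path, index_path):
        """Append each small file to one container file and record
        (offset, length) per original name in a local index file."""
        index = {}
        with open(container_path, "wb") as container:
            for path in small_file_paths:
                with open(path, "rb") as f:
                    data = f.read()
                index[os.path.basename(path)] = (container.tell(), len(data))
                container.write(data)
        with open(index_path, "w") as f:
            json.dump(index, f)

    def read_small_file(name, container_path, index_path):
        """Fetch one original file's bytes back out of the container."""
        with open(index_path) as f:
            offset, length = json.load(f)[name]
        with open(container_path, "rb") as container:
            container.seek(offset)
            return container.read(length)

With this layout the name node (or any metadata server) tracks one container instead of thousands of small files, which is the memory saving the abstract targets.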

Torus Network Based Distributed Storage System for Massive Multimedia Contents (토러스 연결망 기반의 대용량 멀티미디어용 분산 스토리지 시스템)

  • Kim, Cheiyol; Kim, Dongoh; Kim, Hongyeon; Kim, Youngkyun; Seo, Daewha
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1487-1497 / 2016
  • The explosive growth of digital multimedia services increases the need for highly scalable, low-cost storage. This paper proposes a new storage architecture that combines a torus network, which requires no network switches and therefore scales well, with erasure coding for efficient disk utilization. The proposed model must compensate for the torus network's long network latency and network-processing overhead. Through a prototype implementation, the proposed storage model was compared with two of the most popular distributed file systems, GlusterFS and Ceph. The prototype outperforms the erasure-coding policies of both file systems and, in most cases, even their replication policies.
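The disk-utilization argument here rests on erasure coding storing data more cheaply than replication for the same fault tolerance. A back-of-the-envelope comparison, with illustrative (k, m) parameters:

    def storage_overhead(data_blocks, parity_blocks):
        """Raw bytes stored per byte of user data for a (k, m) erasure code;
        k data blocks plus m parity blocks tolerate the loss of any m blocks."""
        return (data_blocks + parity_blocks) / data_blocks

    # 3-way replication stores every byte three times and tolerates 2 losses.
    print(storage_overhead(1, 2))   # 3.0x
    # An (8, 2) erasure code also tolerates 2 lost blocks at far lower cost.
    print(storage_overhead(8, 2))   # 1.25x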

A Secure Model for Reading and Writing in Hadoop Distributed File System and its Evaluation (하둡 분산파일시스템에서 안전한 쓰기, 읽기 모델과 평가)

  • Pang, Sechung; Ra, Ilkyeun; Kim, Yangwoo
    • Journal of Internet Computing and Services / v.13 no.5 / pp.55-64 / 2012
  • Nowadays, as Cloud computing becomes popular, the need for a DFS (distributed file system) is growing. In current Cloud computing environments, however, no DFS framework is sufficient to protect sensitive private information from attackers. We therefore designed and propose a secure scheme for distributed file systems that provides confidentiality and availability using a secret sharing method. In this paper, we measure the speed of encryption and decryption for the proposed method and compare it with that of the SEED algorithm, the most popular algorithm in this field, which demonstrates the computational efficiency of our method. Moreover, the proposed secure read/write model is independent of the Hadoop DFS structure, so our modified algorithm can be easily adapted for use in HDFS. Finally, the proposed model is evaluated theoretically using a performance measurement method for distributed secret sharing models.
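The abstract does not specify which secret sharing method is used, so purely as a generic illustration of the technique, here is a minimal XOR-based (n, n) secret sharing sketch, in which every share is needed for reconstruction and any smaller subset is indistinguishable from random noise:

    import secrets

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_shares(data: bytes, n: int) -> list[bytes]:
        """Split data into n shares: n-1 uniformly random pads plus
        the XOR of the data with all pads."""
        pads = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
        last = data
        for p in pads:
            last = xor_bytes(last, p)
        return pads + [last]

    def reconstruct(shares: list[bytes]) -> bytes:
        """XOR of all shares cancels every pad and recovers the data."""
        out = shares[0]
        for s in shares[1:]:
            out = xor_bytes(out, s)
        return out

    secret = b"sensitive block"
    shares = make_shares(secret, 3)
    assert reconstruct(shares) == secret

Stored on three different nodes, no single compromised node reveals anything, which is the confidentiality property the paper builds on; threshold schemes such as Shamir's additionally tolerate lost shares.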

Development of HDF Browser for the Utilization of EOC Imagery

  • Seo, Hee-Kyung; Ahn, Seok-Beom; Park, Eun-Chul; Hahn, Kwang-Soo; Choi, Joon-Soo; Kim, Choen
    • Korean Journal of Remote Sensing / v.18 no.1 / pp.61-69 / 2002
  • The purpose of the Electro-Optical Camera (EOC), the primary payload of KOMPSAT-1, is to collect high-resolution visible imagery of the Earth, including the Korean Peninsula. EOC images will be distributed to the public and to many user groups, including government, public corporations, and academic or research institutes. KARI will offer an online service to users through the Internet. Some applications, e.g., generation of a Digital Elevation Model (DEM), need secondary data such as satellite ephemeris and attitude data to process the EOC imagery. EOC imagery with this ancillary information will be distributed as a Hierarchical Data Format (HDF) file. HDF is a physical file format that allows storage of many different types of scientific data, including images, multidimensional data arrays, record-oriented data, and point data. Because of the lack of public-domain software supporting the HDF file format, many public users cannot easily access EOC data. The purpose of this research is to develop a browsing system for EOC data aimed at general users, not only the scientists who are the main users of HDF. The system is PC-based and has a user-friendly interface.
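The kind of HDF inspection such a browser performs can be sketched with the pyhdf package (HDF4 bindings, matching the HDF generation of that era); the file name and dataset name below are placeholders, not actual EOC product names:

    from pyhdf.SD import SD, SDC  # pip install pyhdf (HDF4 bindings)

    # "EOC_scene.hdf" is a placeholder file name.
    hdf = SD("EOC_scene.hdf", SDC.READ)

    # List every scientific dataset with its shape, as a browser would.
    for name, info in hdf.datasets().items():
        dim_names, shape, dtype, index = info
        print(name, shape)

    # Pull one dataset (placeholder name) into a numpy array for display.
    image = hdf.select("EOC_image").get()
    print(image.shape, image.min(), image.max())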

File System Snapshot (파일 시스템 스냅샷)

  • Suk, Jin-Sun; No, Jae-Chun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.4 / pp.88-95 / 2010
  • With the development of IT technologies, storage systems have come to hold very sensitive data that must not be damaged, which has increased the importance of data backup and made backup time a key issue. A snapshot is a backup technology that requires only a short downtime to maintain data consistency during backup. In this paper, we study two kinds of snapshots: local file system based snapshots and network file system based snapshots. For the local file system case, we propose PSnap, a snapshot library for file systems without native snapshot support, such as Ext2, Ext3, and XFS. For the network file system case, we propose GlorySnap, a set of snapshot utilities for GloryFS, a distributed file system developed by ETRI.
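The abstract does not detail how PSnap works internally; as a generic illustration of the copy-on-write idea underlying most snapshot implementations, here is a toy block store in which taking a snapshot freezes the block map without copying any data (all names are illustrative):

    class COWBlockStore:
        """Toy copy-on-write store: a snapshot freezes the current block map;
        later writes allocate new blocks instead of overwriting old ones."""

        def __init__(self):
            self.blocks = []        # append-only block storage
            self.live = {}          # block number -> index into self.blocks
            self.snapshots = {}     # snapshot name -> frozen block map

        def write(self, blockno, data):
            self.blocks.append(data)
            self.live[blockno] = len(self.blocks) - 1

        def snapshot(self, name):
            self.snapshots[name] = dict(self.live)  # copies the map, not the data

        def read(self, blockno, snapshot=None):
            table = self.snapshots[snapshot] if snapshot else self.live
            return self.blocks[table[blockno]]

    store = COWBlockStore()
    store.write(0, b"v1")
    store.snapshot("before-upgrade")
    store.write(0, b"v2")
    assert store.read(0) == b"v2"
    assert store.read(0, "before-upgrade") == b"v1"

Because a snapshot only duplicates the small block map, the downtime is bounded by map size rather than data size, which is why snapshots keep backup windows short.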

A Design for a Hyperledger Fabric Blockchain-Based Patch-Management System

  • Song, Kyoung-Tack; Kim, Shee-Ihn; Kim, Seung-Hee
    • Journal of Information Processing Systems / v.16 no.2 / pp.301-317 / 2020
  • An enterprise patch-management system (PMS) typically has a single point of failure (SPOF) because of its centralized structure, whereas a Blockchain system offers decentralization, transaction integrity, user certification, and smart chaincode. This study proposes a Hyperledger Fabric Blockchain-based distributed patch-management system and verifies its technological feasibility through prototyping, so that all participating users can be protected from various threats. In particular, by adopting a private chain for patch file set management, the system is designed as a Blockchain system that can enhance security, log management, latest-status supervision, and monitoring functions. It uses Hyperledger Fabric, which provides a practical Byzantine fault-tolerant consensus algorithm, and implements the major PMS features of uploading a patch file set, downloading a patch file set, and auditing patch file history as a smart contract (chaincode), whose operation was verified. The distributed ledger structure of a Blockchain-based PMS can solve the problems of distributor and client authentication, forgery, SPOF, and distribution record reliability. It not only presents an alternative for dealing with central management server loads and failures but also provides a higher level of security and availability.
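Fabric chaincode is normally written in Go, Java, or Node.js; purely as a language-neutral illustration of the tamper-evident patch history such a PMS maintains, here is a toy hash-chained record log (not the paper's chaincode; all names are assumptions):

    import hashlib, json, time

    def record_patch(chain, patch_name, patch_bytes, publisher):
        """Append a tamper-evident patch record: each entry commits to the
        patch file's digest and to the hash of the previous entry."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        entry = {
            "patch": patch_name,
            "sha256": hashlib.sha256(patch_bytes).hexdigest(),
            "publisher": publisher,
            "time": time.time(),
            "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        chain.append(entry)
        return entry

    def verify_chain(chain):
        """Audit the history: any altered record breaks the hash links."""
        prev = "0" * 64
        for e in chain:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

In a real Fabric deployment the ordering service and endorsement policies replace this single in-memory list, but the forgery-detection property, i.e. that a client can recompute digests to verify a downloaded patch, is the same.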

BeanFS: A Distributed File System for Large-scale E-mail Services (BeanFS: 대규모 이메일 서비스를 위한 분산 파일 시스템)

  • Jung, Wook; Lee, Dae-Woo; Park, Eun-Ji; Lee, Young-Jae; Kim, Sang-Hoon; Kim, Jin-Soo; Kim, Tae-Woong; Jun, Sung-Won
    • Journal of KIISE: Computer Systems and Theory / v.36 no.4 / pp.247-258 / 2009
  • Distributed file systems running on a cluster of inexpensive commodity hardware are being recognized as an effective solution to support the explosive growth of storage demand in large-scale Internet service companies. This paper presents the design and implementation of BeanFS, a distributed file system for large-scale e-mail services. BeanFS is adapted to e-mail services as follows. First, its volume-based replication scheme alleviates the metadata management overhead of the central metadata server in dealing with a very large number of small files. Second, BeanFS employs a lightweight consistency maintenance protocol tailored to the simple access patterns of e-mail messages. Third, transient and permanent failures are treated separately, and recovery from transient failures is quick and incurs little overhead.
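Volume-based replication saves metadata because the metadata server tracks replica placement per volume rather than per file. A toy sketch of that mapping, with hypothetical server names and a hash-based file-to-volume function (BeanFS's actual placement policy is not described in the abstract):

    import hashlib

    class VolumeDirectory:
        """Toy volume-based placement: the metadata server stores replica
        locations per volume, so millions of small files share a handful
        of placement entries."""

        def __init__(self, num_volumes, servers, replicas=3):
            # Round-robin volume -> server assignment; real systems balance load.
            self.placement = {
                v: [servers[(v + i) % len(servers)] for i in range(replicas)]
                for v in range(num_volumes)
            }
            self.num_volumes = num_volumes

        def volume_of(self, path):
            # Only this per-file mapping exists, and it is computed, not stored.
            h = int(hashlib.md5(path.encode()).hexdigest(), 16)
            return h % self.num_volumes

        def replicas_of(self, path):
            return self.placement[self.volume_of(path)]

    d = VolumeDirectory(num_volumes=64, servers=["s1", "s2", "s3", "s4"])
    print(d.replicas_of("/mail/user42/inbox/msg-0001"))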

Performance Analysis of Distributed Hadoop Systems (분산 하둡 시스템의 성능 비교 분석)

  • Bae, Byoung-Jin; Kim, Young-Joo; Kim, Young-Kuk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.479-482 / 2014
  • Nowadays, open-source Hadoop systems are widely used to efficiently manage fast-growing big data. A Hadoop system consists of a distributed file system called HDFS (Hadoop Distributed File System) and a distributed parallel processing system called MapReduce. MapReduce reads big data from HDFS, processes it, and writes the results back to HDFS. The system structure behind this processing method differs across Hadoop versions. This paper therefore presents a comparative performance analysis of Hadoop systems. To this end, we devise a way to monitor Hadoop systems and use it to measure the occurrence frequency of processes, threads, and variables generated inside the Hadoop system itself. The measured results then serve as indicators for predicting the internal performance of Hadoop systems.
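A minimal sketch of the kind of process and thread monitoring described here, using the psutil package; matching Hadoop daemons by a command-line keyword is a simplifying assumption about how they appear in the process table, not the paper's method:

    import psutil  # pip install psutil

    def hadoop_process_stats(keyword="hadoop"):
        """Count Hadoop-related processes and their threads by scanning
        the OS process table for a keyword in the command line."""
        stats = []
        for proc in psutil.process_iter(["pid", "name", "cmdline", "num_threads"]):
            cmd = " ".join(proc.info["cmdline"] or [])
            if keyword in cmd.lower():
                stats.append((proc.info["pid"], proc.info["name"],
                              proc.info["num_threads"]))
        return stats

    for pid, name, threads in hadoop_process_stats():
        print(f"pid={pid} name={name} threads={threads}")

Sampling these counts periodically during a MapReduce job would yield the occurrence-frequency indicators the paper uses for comparison across Hadoop versions.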

Data Access Frequency based Data Replication Method using Erasure Codes in Cloud Storage System (클라우드 스토리지 시스템에서 데이터 접근빈도와 Erasure Codes를 이용한 데이터 복제 기법)

  • Kim, Ju-Kyeong; Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.85-91 / 2014
  • Cloud storage systems use a distributed file system for storing and managing data. A traditional distributed file system keeps three copies of each piece of data in order to recover from data loss on disk failure. However, this replication method increases storage consumption and causes extra I/O operations during the replication process. In this paper, we propose a data replication method using erasure codes in a cloud storage system to improve storage space efficiency and I/O performance. In particular, the proposed method reduces the number of data replicas according to data access frequency while using erasure codes to keep the same data recovery capability. Experimental results show that the proposed method improves on HDFS by 40% in storage efficiency, 11% in read throughput, and 10% in write throughput.
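A sketch of the decision rule the abstract implies (frequently accessed data stays fully replicated for cheap reads, while cold data moves to erasure coding to save space); the threshold and (k, m) values are illustrative assumptions, not the paper's parameters:

    def choose_redundancy(access_count, hot_threshold=100, k=6, m=3):
        """Pick a redundancy scheme per object from its access frequency.
        Returns (scheme, storage overhead vs. raw data size)."""
        if access_count >= hot_threshold:
            # Hot object: 3-way replication serves reads from any replica.
            return "replicate-3x", 3.0
        # Cold object: a (k, m) erasure code tolerates m lost blocks at
        # (k + m) / k overhead instead of 3x.
        return f"erasure-{k}+{m}", (k + m) / k

    for count in (5000, 12):
        print(count, choose_redundancy(count))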

MAHA-FS : A Distributed File System for High Performance Metadata Processing and Random IO (MAHA-FS : 고성능 메타데이터 처리 및 랜덤 입출력을 위한 분산 파일 시스템)

  • Kim, Young Chang; Kim, Dong Oh; Kim, Hong Yeon; Kim, Young Kyun; Choi, Wan
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.91-96 / 2013
  • The application fields of supercomputing systems are changing to support both large-volume data processing and high-performance computing at the same time, as in bio-applications. These applications require a high-performance distributed file system for storage management and efficient high-speed processing of the large amounts of data they generate. In this paper, we introduce MAHA-FS, a distributed file system for supercomputing systems that process large amounts of data alongside high-performance computing, and show that it provides excellent metadata operation and IO performance. Our performance analysis demonstrates that MAHA-FS performs very well in terms of metadata processing and random IO processing.