• Title/Summary/Keyword: 대용량 스토리지 (Mass Storage)


A method of Securing Mass Storage for SQL Server by Sharing Network Disks - on the Amazon EC2 Windows Environments - (네트워크 디스크를 공유하여 SQL 서버의 대용량 스토리지 확보 방법 - Amazon EC2 Windows 환경에서 -)

  • Kang, Sungwook;Choi, Jungsun;Choi, Jaeyoung
    • Journal of Internet Computing and Services / v.17 no.2 / pp.1-9 / 2016
  • In cloud computing environments, users are provided with infrastructure such as CPU, memory, network, and storage as IaaS (Infrastructure as a Service). However, storage instances cannot supply the maximum storage capacity that SQL Server can use, because the capacity of the instances offered by service providers is usually limited. In this paper, we propose a method of securing mass storage capacity for SQL Server by sharing network disks that individually have limited storage capacity. We confirmed through experiments on Amazon EC2 Windows environments that the method secures storage capacity exceeding the maximum provided by a single instance with Amazon EBS, and that increasing disk capacity and performance improves the overall performance of SQL Server.
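
The paper's actual disk-sharing mechanism is not reproduced here; as a rough illustration only (not the authors' method), the Python sketch below generates a T-SQL CREATE DATABASE statement that spreads data files across several hypothetical UNC shares so that the total capacity exceeds any single volume. The share paths, file names, and sizes are placeholders, and real SQL Server deployments on network shares require additional configuration.

```python
# Illustrative only: spread SQL Server data files across several shared
# network disks so the combined capacity exceeds a single volume's limit.
# The UNC paths and sizes are hypothetical placeholders.
shared_disks = [
    r"\\node1\sqlshare",
    r"\\node2\sqlshare",
    r"\\node3\sqlshare",
]

def build_create_database(db_name: str, file_size_mb: int = 10240) -> str:
    """Build a CREATE DATABASE statement with one data file per network share."""
    files = [
        f"(NAME = {db_name}_data{i}, "
        f"FILENAME = '{disk}\\{db_name}_data{i}.ndf', "
        f"SIZE = {file_size_mb}MB)"
        for i, disk in enumerate(shared_disks, start=1)
    ]
    return (
        f"CREATE DATABASE {db_name}\n"
        "ON PRIMARY\n    " + ",\n    ".join(files) + ";"
    )

if __name__ == "__main__":
    print(build_create_database("MassStorageDB"))
```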

A Study on the Application of Zero Copy Technology to Improve the Transmission Efficiency and Recording Performance of Massive Data (대용량 데이터의 전송 효율 및 기록 성능 향상을 위한 Zero Copy 기술 적용에 관한 연구)

  • Song, Min-Gyu;Kim, Hyo-Ryoung;Kang, Yong-Woo;Je, Do-Heung;Wi, Seog-Oh;Lee, Sung-Mo;Kim, Seung-Rae
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.6 / pp.1133-1144 / 2021
  • Zero-copy, also called no-memory-copy, reduces context switching between user space and kernel space and thereby minimizes CPU load. To date, however, the technique has mostly been used to transmit small random files and has not been widely applied to large file transfers. This paper discusses the practical application of zero-copy to processing large files over a network. To this end, we first developed a small test bed and a program that can transmit and store data based on zero-copy. We then verify the usefulness of the applied technology through a detailed performance evaluation.
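
As a minimal illustration of the zero-copy idea discussed above (not the authors' test-bed code), the following Python sketch uses os.sendfile(), which maps to the Linux sendfile(2) system call, to push a large file to a TCP peer without copying the data through user space. The host, port, and chunk size are placeholder assumptions.

```python
import os
import socket

def send_file_zero_copy(path: str, host: str, port: int,
                        chunk: int = 64 * 1024 * 1024) -> int:
    """Send a file over TCP with os.sendfile(); on Linux this avoids
    copying the data through user-space buffers."""
    sent_total = 0
    with open(path, "rb") as f, socket.create_connection((host, port)) as sock:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            # The kernel moves data directly from the page cache to the socket.
            sent = os.sendfile(sock.fileno(), f.fileno(), offset,
                               min(chunk, size - offset))
            if sent == 0:
                break
            offset += sent
            sent_total += sent
    return sent_total
```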

The Study on the Design and Optimization of Storage for the Recording of High Speed Astronomical Data (초고속 관측 데이터 수신 및 저장을 위한 기록 시스템 설계 및 성능 최적화 연구)

  • Song, Min-Gyu;Kang, Yong-Woo;Kim, Hyo-Ryoung
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.1 / pp.75-84 / 2017
  • Storage that supports high-speed recording and stable access over the network is becoming increasingly important. VLBI (Very Long Baseline Interferometer), a field of basic science that produces massive astronomical data, demands ever higher data-writing performance, which is directly related to astronomical observation with high resolution and sensitivity. Most existing storage, however, is based on cloud models aimed at the high throughput of general IT, finance, and administrative services, and is therefore not the best choice for recording large data streams. In this study, we design a storage system optimized for high I/O performance and concurrency. We implement a packet read/write module using the libpcap and pf_ring APIs on a multi-core CPU environment, and build scalable storage based on software RAID (Redundant Array of Inexpensive Disks) to process incoming data from the external network efficiently.
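
The paper's recorder is built on libpcap/pf_ring with multi-core processing; the sketch below is only a simplified, single-threaded stand-in that reads raw frames from an AF_PACKET socket (Linux, root required) and appends them to a file, e.g. on a software-RAID volume, in large sequential blocks. The block size, capture limit, and output path are assumptions.

```python
import socket

ETH_P_ALL = 0x0003             # capture every protocol (Linux AF_PACKET)
BLOCK_SIZE = 64 * 1024 * 1024  # flush to disk in large sequential blocks

def capture_to_disk(out_path: str, max_bytes: int) -> None:
    """Simplified stand-in for a libpcap/pf_ring recorder: read raw frames
    from the NIC and append them sequentially to a file in large blocks."""
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.ntohs(ETH_P_ALL))   # requires root on Linux
    buf = bytearray()
    written = 0
    with open(out_path, "wb") as out:
        while written < max_bytes:
            buf += sock.recv(65535)
            if len(buf) >= BLOCK_SIZE:
                out.write(buf)          # one large sequential write
                written += len(buf)
                buf.clear()
        if buf:
            out.write(buf)
    sock.close()
```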

Study of Optimization through Performance Analysis of Parallel Distributed Filesystem (병렬 분산파일시스템의 성능 분석을 통한 최적화 연구)

  • Yoon, JunWeon;Song, Ui-Sung
    • Journal of Digital Contents Society / v.17 no.5 / pp.409-416 / 2016
  • Big data has recently become a major issue, and universities, industries, and research institutes are making efforts to collect and analyze various kinds of data. This includes data accumulated in the past that cannot be analyzed immediately but still holds potential value, and valuable results can be obtained from large collections of data through semantic analysis. Worldwide demand for high-performance storage systems that can handle large amounts of data is therefore increasing. Such systems must also provide a parallel distributed file system stable enough for multiple users to run a variety of analyses simultaneously on the accumulated data. In this study, we examine the storage system's I/O bandwidth and metadata performance, which must be considered to provide a stable file system, and propose a method for configuring the optimal environment.
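
The abstract mentions measuring I/O bandwidth and metadata performance; a minimal, generic benchmark along those lines (not the paper's tool) might look like the Python sketch below, where the mount point, file sizes, and file counts are placeholders.

```python
import os
import time

def write_bandwidth(mount: str, total_mb: int = 1024, block_mb: int = 4) -> float:
    """Measure sequential write bandwidth (MB/s) on the given mount point."""
    path = os.path.join(mount, "bench.dat")
    block = b"\0" * (block_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure data actually reaches storage
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

def metadata_rate(mount: str, n_files: int = 10000) -> float:
    """File creations per second, a rough proxy for metadata performance."""
    start = time.time()
    for i in range(n_files):
        open(os.path.join(mount, f"meta_{i}"), "w").close()
    rate = n_files / (time.time() - start)
    for i in range(n_files):
        os.remove(os.path.join(mount, f"meta_{i}"))
    return rate
```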

Design and performance evaluation of a storage cloud service model over KREONET (KREONET 기반의 스토리지 클라우드 서비스 모델 설계 및 성능평가)

  • Hong, Wontaek;Chung, Jinwook
    • Journal of the Korea Convergence Society / v.8 no.7 / pp.29-37 / 2017
  • Compared to commercial networks, R&E (research and education) networks offer strengths such as flexible network engineering and design. Based on these features, we propose a storage cloud service model that simultaneously supports general-purpose network users in a central region and experimental network users in distributed regions. We prototype the model with multiple proxy controllers of the OpenStack Swift service in order to deploy several regions over the experimental backbone network. Our experiments on the influence of network latency and transmitted data size show that larger objects are preferable to smaller ones when the network latency grows up to 10 ms, because the rate of throughput decline for larger objects is comparatively small. This means the service model suits experimental network users who access the service directly to move large volumes of data intermittently, as well as normal users in the central region who access the service frequently.
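
As a hedged sketch of the kind of measurement described (throughput versus object size against an OpenStack Swift endpoint), the following uses python-swiftclient; the auth URL, credentials, container name, and object sizes are placeholders, and the KREONET multi-region setup is not modeled.

```python
import time
from swiftclient.client import Connection   # pip install python-swiftclient

# Placeholder endpoint and credentials (TempAuth v1 style).
conn = Connection(authurl="http://swift.example.org/auth/v1.0",
                  user="test:tester", key="testing", auth_version="1")

def upload_throughput(container: str, size_mb: int) -> float:
    """Upload one object of size_mb MB and return throughput in MB/s."""
    data = b"\0" * (size_mb * 1024 * 1024)
    conn.put_container(container)
    start = time.time()
    conn.put_object(container, f"obj_{size_mb}mb", contents=data)
    return size_mb / (time.time() - start)

if __name__ == "__main__":
    for size in (1, 16, 64, 256):
        print(size, "MB ->", round(upload_throughput("bench", size), 1), "MB/s")
```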

Memory Allocation and Reclamation Policies for Fast Swap Support in Mobile Systems (모바일 시스템의 고속 스왑 지원을 위한 메모리 할당 및 회수 기법)

  • Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.4 / pp.29-33 / 2024
  • Recent advancements in mobile apps have led to continuously increasing memory demands on smartphone systems. Unlike desktops, which use swap to back up the entire memory footprint to storage when memory space is exhausted, smartphones terminate apps and lose significant context. This occurs because large-scale I/O operations to flash memory cause severe delays when swap is enabled on smartphones. This paper discusses how memory can be managed efficiently by using eMRAM, whose write operations are faster than flash memory's, as the swap area in mobile systems. Considering the characteristics of the backup storage (flash memory for the file system and eMRAM for the swap area) as well as the reference characteristics of each page, we demonstrate that the proposed memory allocation and reclamation policies improve the smartphone's I/O performance by an average of 15%.
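
The paper's exact allocation and reclamation policies are not reproduced here; the toy Python simulation below only illustrates the general idea of steering reclaimed anonymous pages to a fast eMRAM swap area and file-backed pages back to flash, choosing victims by recency. The class, page kinds, and capacity are hypothetical.

```python
from collections import OrderedDict

class ToyReclaimer:
    """Toy illustration (not the paper's policy): anonymous pages are swapped
    to a fast eMRAM area, file-backed pages are written back to flash, and
    victims are chosen by recency (LRU)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> "anon" | "file"

    def access(self, page_id: int, kind: str) -> None:
        self.pages.pop(page_id, None)
        self.pages[page_id] = kind              # most recently used at the end
        if len(self.pages) > self.capacity:
            victim, vkind = self.pages.popitem(last=False)   # least recent
            target = "eMRAM swap" if vkind == "anon" else "flash file system"
            print(f"reclaim page {victim} -> {target}")
```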

A Method to Manage Local Storage Capacity Using Data Locality Mechanism (데이터 지역성 메커니즘을 이용한 지역 스토리지 용량 관리 방법)

  • Kim, Baul;Ku, Mino;Min, Dugki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.324-327 / 2013
  • Owing to evolving cloud computing technology, we can now easily and transparently utilize both local and remote computing resources in everyday life. In particular, advances in smart devices and network infrastructure are increasing the need to share files between local smart devices and cloud storage. However, because smart devices have limited storage space, storing files in cloud storage can cause local storage starvation: users may face a shortage of local storage even when the cloud storage service provides a huge amount of space. In this research, we propose a method to manage files between smart devices and cloud storage. Our approach calculates file usage patterns based on the most recent access date and then selects which local files to migrate. As a result, the approach is sufficient for handling data synchronization between a big-data storage farm and a local thin client with limited storage space.
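
A minimal sketch of a recency-based selection step consistent with the idea described (not the authors' algorithm): pick the least-recently-accessed local files until enough capacity would be freed for migration to cloud storage. The directory root and free-space target are assumptions.

```python
import os

def files_to_migrate(root: str, target_free_bytes: int) -> list:
    """Pick least-recently-accessed files under `root` until migrating them
    would release at least `target_free_bytes` of local capacity."""
    candidates = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            candidates.append((st.st_atime, st.st_size, path))
    candidates.sort()                        # oldest access time first
    chosen, freed = [], 0
    for _, size, path in candidates:
        if freed >= target_free_bytes:
            break
        chosen.append(path)
        freed += size
    return chosen
```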


Evaluating Computational Efficiency of Spatial Analysis in Cloud Computing Platforms (클라우드 컴퓨팅 기반 공간분석의 연산 효율성 분석)

  • CHOI, Changlock;KIM, Yelin;HONG, Seong-Yun
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.119-131 / 2018
  • The increase in high-resolution spatial data and recent methodological developments have enabled detailed analysis of individual experiences in space and over time. Despite the growing availability of data and technological advances, however, such individual-level analysis is not always possible in practice because of its computing requirements. To overcome this limitation, there has been a considerable amount of research on using high-performance public cloud computing platforms for spatial analysis and simulation. The purpose of this paper is to empirically evaluate the efficiency and effectiveness of spatial analysis on cloud computing platforms. We compare the computing time for calculating a measure of spatial autocorrelation and for performing geographically weighted regression between a local machine and spot instances on the cloud. The results indicate significant improvements in computing time when the analysis is performed in parallel on the cloud.
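
As a small, self-contained example of one of the measures compared in the paper, the following computes global Moran's I with NumPy; the values and the weight matrix are made-up illustrative data.

```python
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I for values x and spatial weight matrix w:
    I = (n / S0) * (z' W z) / (z' z), with z = x - mean(x), S0 = sum(w)."""
    n = x.size
    z = x - x.mean()
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Hypothetical example: 4 locations on a line with rook-contiguity weights.
x = np.array([1.0, 2.0, 2.5, 4.0])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(x, w), 3))
```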

The Method for Data Acquisition on a Live NAS System (활성 상태의 NAS 시스템 상에서 내부 데이터 수집 기법 연구)

  • Seo, Hyeong-Min;Kim, Dohyun;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.3 / pp.585-594 / 2015
  • As the storage market expands with growing data sizes, research on various kinds of storage, such as cloud, USB, and external HDDs (hard disk drives), has been conducted from a digital forensics perspective. NAS (Network-Attached Storage) devices can store more than one TB (terabyte) of data and are widely used for both private and enterprise storage, yet there has been almost no research on NAS. This paper selects the three NAS products with the highest market shares in the domestic and foreign markets and suggests a process and method for data acquisition on a live NAS system.
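
The paper's product-specific acquisition procedures are not reproduced here; as a generic illustration of one common step, the Python sketch below copies files from a mounted NAS share to local evidence storage while recording SHA-256 hashes for later integrity verification. The source and destination paths are placeholders.

```python
import hashlib
import os
import shutil

def acquire_with_hash(src_root: str, dst_root: str) -> dict:
    """Copy every file under a mounted NAS share to local evidence storage
    and record a SHA-256 hash per file for integrity verification."""
    hashes = {}
    for dirpath, _, names in os.walk(src_root):
        for name in names:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, src_root)
            dst = os.path.join(dst_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)                  # preserve timestamps
            h = hashlib.sha256()
            with open(dst, "rb") as f:
                for block in iter(lambda: f.read(1 << 20), b""):
                    h.update(block)
            hashes[rel] = h.hexdigest()
    return hashes
```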

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, the importance of storage that can hold large amounts of unstructured data has recently been increasing. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and Ceph have therefore received much attention because of their scale-out and low-cost properties. For data fault tolerance, most of these file systems initially used replication, but as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has become a problem. This paper applies an erasure-coding fault tolerance policy to MAHA-FS for high space efficiency and introduces the VDelta technique to solve the data consistency problem. We compare the performance of two file systems, MAHA-FS and GlusterFS, which have different I/O processing architectures: the former is server-centric and the latter is client-centric. We found that the erasure-coding performance of MAHA-FS is better than that of GlusterFS.
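
Distributed file systems such as MAHA-FS and GlusterFS typically use Reed-Solomon-style (k+m) codes; as a much simpler stand-in that still shows the space-efficiency argument versus replication, the sketch below implements single-parity (XOR) erasure coding, which tolerates one lost chunk at 1/k storage overhead instead of the two extra copies needed for 3-way replication. The chunk contents are illustrative.

```python
from functools import reduce

def xor_parity(chunks: list) -> bytes:
    """XOR parity over k equal-sized data chunks (tolerates one lost chunk)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def recover_missing(chunks: list, parity: bytes) -> bytes:
    """Rebuild a single missing chunk (marked None) from survivors and parity."""
    survivors = [c for c in chunks if c is not None]
    return xor_parity(survivors + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]    # k = 3 data chunks
parity = xor_parity(data)             # overhead 1/3 vs. 2x extra for 3-way replication
data[1] = None                        # simulate one lost chunk
print(recover_missing(data, parity))  # b'BBBB'
```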