• Title/Summary/Keyword: File Storage

An Efficient Integrity Auditing System for Cloud Storage (클라우드 스토리지를 위한 효율적인 데이터 검증 시스템)

  • Son, Junggab; Hussain, Rasheed; Oh, Heekuck
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.835-838 / 2013
  • Cloud computing has the advantage of reducing the cost of building computing resources. The problem is that clients cannot fully trust the data center and the service provider. For example, when a file stored in the cloud is lost, the service provider may conceal the loss to keep the service's reputation from suffering. If the client cannot prove that the data was lost after being stored, the damage falls on the client. Therefore, an appropriate scheme that can verify integrity must be applied to protect clients' data. Many schemes based on homomorphic tags have been proposed, but they require many exponentiation operations and are not efficient enough to be commercialized; in particular, the client bears a heavy computational load to generate evidence. This paper proposes an integrity auditing scheme that focuses on efficiency, especially on the client side. The proposed scheme is designed on modular arithmetic, and it supports not only integrity verification but also environments where data is frequently updated. Simulation results show that the proposed scheme is far more efficient than existing schemes.
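
The abstract does not spell out the construction, but modular-arithmetic audits typically follow a challenge-response pattern. Below is a minimal sketch of that pattern, assuming an illustrative modulus, a SHA-256-based PRF, and a linear tag equation; it is a generic illustration, not the authors' actual protocol.

```python
import hashlib
import secrets

N = 2**256 - 189  # large public prime modulus (illustrative choice)

def prf(key: bytes, i: int) -> int:
    """Pseudorandom value per block index, derived from the client's secret key."""
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % N

# --- setup (client): tag each block, upload blocks + tags, keep only (key, alpha) ---
key = secrets.token_bytes(32)
alpha = secrets.randbelow(N)
blocks = [b"block-0 data", b"block-1 data", b"block-2 data"]
b = [int.from_bytes(blk, "big") % N for blk in blocks]
tags = [(alpha * b_i + prf(key, i)) % N for i, b_i in enumerate(b)]

# --- challenge (client): random coefficients for a sample of block indices ---
chal = {i: secrets.randbelow(N) for i in (0, 2)}

# --- proof (server): aggregate the challenged blocks and tags; no secrets needed ---
mu = sum(c * b[i] for i, c in chal.items()) % N
sigma = sum(c * tags[i] for i, c in chal.items()) % N

# --- verify (client): one linear relation checks all sampled blocks at once ---
assert sigma == (alpha * mu + sum(c * prf(key, i) for i, c in chal.items())) % N
```

Note that the client's verification cost here is a handful of modular multiplications, which is the kind of client-side saving (relative to exponentiation-heavy homomorphic tags) the abstract emphasizes.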

Data Deduplication Method using Locality-based Chunking policy for SSD-based Server Storages (SSD 기반 서버급 스토리지를 위한 지역성 기반 청킹 정책을 이용한 데이터 중복 제거 기법)

  • Lee, Seung-Kyu; Kim, Ju-Kyeong; Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.2 / pp.143-151 / 2013
  • NAND flash-based SSDs (Solid State Drives) offer fast I/O performance and low power consumption, so they are widely used as storage in tablets, desktop PCs, smartphones, and servers. However, an SSD wears out as the number of writes grows, so wear-leveling is required. To extend SSD lifespan, a variety of data deduplication techniques have been introduced. The common fixed-size chunking method allocates chunks of a fixed size without considering the locality of the data, so it may perform unnecessary chunking and hash-key generation, while the variable-size chunking method incurs excessive computation because it compares data byte by byte for deduplication. This paper proposes an adaptive chunking method based on the application locality and file-name locality of data written to SSD-based server storage. The proposed method splits data into 4KB or 64KB chunks adaptively according to the application locality and file-name locality of duplicated data, reducing the overhead of chunking and hash-key generation and preventing duplicated data from being written. Experimental results show that the proposed method improves write performance and reduces power consumption and operation time compared to the existing variable-size chunking method and fixed-size chunking with 4KB chunks.
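
As a rough illustration of locality-based adaptive chunking, the sketch below picks a 4KB or 64KB chunk size from a hypothetical file-name hint and deduplicates chunks by SHA-256 hash key; the extension-based classifier is an assumption standing in for the paper's actual locality policy.

```python
import hashlib
import os

# Hypothetical locality hint: extensions assumed to show strong file-name/
# application locality get large chunks; everything else gets small chunks.
LARGE_CHUNK_EXTS = {".vmdk", ".iso", ".bak"}   # assumption for illustration
SMALL, LARGE = 4 * 1024, 64 * 1024             # 4KB / 64KB, as in the paper

def chunk_size_for(filename: str) -> int:
    ext = os.path.splitext(filename)[1].lower()
    return LARGE if ext in LARGE_CHUNK_EXTS else SMALL

def dedup_write(filename: str, data: bytes, store: dict) -> int:
    """Split data into chunks of the chosen size; write only unseen chunks.
    Returns the number of chunks actually written."""
    size = chunk_size_for(filename)
    written = 0
    for off in range(0, len(data), size):
        chunk = data[off:off + size]
        key = hashlib.sha256(chunk).hexdigest()   # hash key identifies the chunk
        if key not in store:                      # duplicate chunks are skipped
            store[key] = chunk
            written += 1
    return written

store = {}
n1 = dedup_write("disk.vmdk", b"A" * (128 * 1024), store)  # two identical 64KB chunks
n2 = dedup_write("note.txt", b"A" * (8 * 1024), store)     # two identical 4KB chunks
print(n1, n2)  # 1 1 -- each file's duplicate chunk is never written
```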

Open Platform for Improvement of e-Health Accessibility (의료정보서비스 접근성 향상을 위한 개방형 플랫폼 구축방안)

  • Lee, Hyun-Jik; Kim, Yoon-Ho
    • Journal of Digital Contents Society / v.18 no.7 / pp.1341-1346 / 2017
  • In this paper, we design an open service platform that combines individually customized services with intelligent information technology to handle individuals' complex attributes and requests. First, the data collection phase repeats extraction, transformation, and loading quickly and accurately. The data generated by the extraction-transformation-loading (ETL) module is stored in a distributed data system. The data analysis phase generates a variety of patterns using field-specific analysis algorithms. The data processing phase uses distributed parallel processing to improve performance. Data provisioning operates independently on a device-specific management platform and is exposed as a form of Open API.
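
A minimal sketch of the extraction-transformation-loading loop described above, with an invented record layout and normalization rule standing in for the platform's real collectors:

```python
from typing import Iterable

def extract(source: Iterable[str]) -> Iterable[dict]:
    """Parse raw 'patient_id,metric,value' lines from a collected source (assumed layout)."""
    for line in source:
        pid, metric, value = line.strip().split(",")
        yield {"patient_id": pid, "metric": metric, "value": float(value)}

def transform(records: Iterable[dict]) -> Iterable[dict]:
    """Normalize units before loading (illustrative rule: pounds -> kilograms)."""
    for r in records:
        if r["metric"] == "weight_lb":
            r = {**r, "metric": "weight_kg", "value": round(r["value"] * 0.4536, 2)}
        yield r

def load(records: Iterable[dict], store: list) -> None:
    """Append into the (stand-in for a distributed) data store."""
    store.extend(records)

raw = ["p001,weight_lb,154.0", "p002,weight_kg,61.3"]
store: list = []
load(transform(extract(raw)), store)
print(store)  # both records now carry the unified weight_kg metric
```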

A Novel Auditing System for Dynamic Data Integrity in Cloud Computing (클라우드 컴퓨팅에서 동적 데이터 무결성을 위한 개선된 감사 시스템)

  • Kim, Tae-yeon; Cho, Gi-hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.8 / pp.1818-1824 / 2015
  • Cloud computing draws attention as an application that provides dynamically scalable infrastructure for applications, data, and file storage. An untrusted remote server can cause a variety of data protection problems. It may, intentionally or accidentally, perform operations on a user's data (modify, insert, delete) without the user's permission, and it may provide false information during the auditing process to hide its mistakes. Therefore, it is necessary to audit the integrity of data stored on the cloud server. In this paper, we propose a new data auditing system that can verify whether the server has behaved maliciously. Performance and security analyses show that our scheme is suitable for cloud computing environments in both respects.
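
The abstract leaves the scheme unspecified; one common building block for auditing data that is modified, inserted, and deleted is a hash tree over the blocks, as in the generic sketch below (an illustration of the idea, not necessarily the authors' construction).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root over block hashes; an odd node at a level is promoted unchanged."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            nxt.append(h(pair[0] + pair[1]) if len(pair) == 2 else pair[0])
        level = nxt
    return level[0]

# The client keeps only the root; the server holds the blocks.
blocks = [b"block-0", b"block-1", b"block-2"]
root = merkle_root(blocks)

# Dynamic update: the client authorizes a modify and tracks the new expected root.
blocks[1] = b"block-1 v2"            # server applies the authorized update
root = merkle_root(blocks)

# Audit: the server recomputes the root from what it actually stores.
assert merkle_root(blocks) == root   # fails if the server lost or altered data
```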

Storage I/O Subsystem for Guaranteeing Atomic Write in Database Systems (데이터베이스 시스템의 원자성 쓰기 보장을 위한 스토리지 I/O 서브시스템)

  • Han, Kyuhwa; Shin, Dongkun; Kim, Yongserk
    • Journal of KIISE / v.42 no.2 / pp.169-176 / 2015
  • The atomic write technique is a good solution to the double-write-buffer problem. It requires modified I/O subsystems (i.e., a file system and I/O scheduler) and a special SSD that guarantees the atomicity of write requests. In this paper, we propose a write-unit-aligned block allocation technique for the EXT4 file system and a request-merge prevention technique for the CFQ scheduler. We also propose an atomic-write-supporting SSD that stores atomicity information in the spare area of each flash memory page. We evaluate the proposed atomic write scheme in MariaDB using the tpcc-mysql and SysBench benchmarks. The experimental results show that the proposed technique improves performance by 1.4~1.5 times compared to the double-write-buffer technique.
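
The alignment idea can be illustrated independently of EXT4 and CFQ: if a database page never straddles an atomic-write-unit boundary, the device can commit it all-or-nothing and the double write buffer becomes unnecessary. A small sketch with an assumed 16KB atomic unit:

```python
ATOMIC_UNIT = 16 * 1024   # bytes the device can write atomically (assumption)
PAGE = 16 * 1024          # database page size (e.g., InnoDB's default)

def aligned_alloc(next_free: int) -> int:
    """Round the allocation offset up so the page starts on a unit boundary."""
    return -(-next_free // ATOMIC_UNIT) * ATOMIC_UNIT

def crosses_boundary(offset: int, length: int) -> bool:
    """True if a write of `length` bytes at `offset` spans two atomic units."""
    return offset // ATOMIC_UNIT != (offset + length - 1) // ATOMIC_UNIT

off = aligned_alloc(20_000)               # -> 32768, the next unit boundary
assert not crosses_boundary(off, PAGE)    # the page can be written atomically
print(off)
```

Preventing the I/O scheduler from merging such a request with its neighbors serves the same goal: the request submitted to the SSD stays exactly one atomic unit.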

Log processing using messaging system in SSD Storage Tester (SSD Storage Tester에서 메시징 시스템을 이용한 로그 처리)

  • Nam, Ki-ahn; Kwon, Oh-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.8 / pp.1531-1539 / 2017
  • The existing SSD storage tester processed logs in a 1-N structure between server and clients using TCP and a network file system. This method causes problems such as increased CPU usage and difficulty in exception handling. In this paper, we implement a log-processing message layer that supports asynchronous distributed processing using open-source messaging systems such as Kafka and RabbitMQ, and compare it with the existing log transmission method. A log simulator was implemented to compare transmission bandwidth and CPU usage. Test results show that transmission through the message layer achieves higher performance than the existing transmission method, while CPU usage shows no significant difference. The message layer is also easier to implement and more efficient than the conventional method.
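
A minimal sketch of the message-layer idea using the kafka-python package; the broker address, topic name, and log format are assumptions for illustration, not the paper's setup.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",               # assumed broker location
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def emit_log(tester_id: str, level: str, message: str) -> None:
    """Fire-and-forget: send() is asynchronous, so the tester thread is not blocked."""
    producer.send("ssd-tester-logs", {"tester": tester_id, "level": level, "msg": message})

emit_log("tester-01", "INFO", "sequential write pass finished")
producer.flush()  # block only at shutdown to drain buffered messages
```

Because the broker buffers and distributes messages, N testers can publish concurrently without the server-side file-system contention of the 1-N TCP/NFS design.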

Database Reverse Engineering Using Master Data in Microservice Architecture (마스터 데이터를 활용한 마이크로 서비스 아키텍처에서의 데이터베이스 리버스 엔지니어링)

  • Shin, Kwang-chul; Lee, Choon Y.
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.5 / pp.523-532 / 2019
  • Microservice architecture divides a system into small, lightweight services, each built to perform a narrowly scoped business function, and thus tends to concentrate on the agility, productivity, reliability, and ease of deployment of software development. It treats the database as just a file or storage layer for storing and extracting data, so data quality may be sacrificed for the convenience and scalability of software development. Database reverse engineering, which recovers the database structure and data semantics, is needed to use the data for business decision making, but it is difficult to apply in a microservice architecture that neglects data quality. This study proposes a database reverse engineering method that uses master data to restore the conceptual data model. The proposed method is applied to a return-service database implemented with microservice architecture, and its applicability is verified.
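
One way master data can drive the restoration is foreign-key inference: a service-database column whose values all fall within a master key set is a candidate reference, from which entities and relationships of a conceptual model can be rebuilt. The sketch below illustrates that single step with invented tables; it is not the paper's full method.

```python
master_customer_ids = {"C001", "C002", "C003"}        # master-data key values

return_service_tables = {
    "returns": {"ret_no": ["R1", "R2"], "cust": ["C001", "C003"]},
    "pickups": {"pickup_no": ["P1"], "zone": ["A-2"]},
}

def candidate_fks(tables: dict, master_keys: set, min_rows: int = 1):
    """Yield (table, column) pairs whose non-null values are all master keys."""
    for table, cols in tables.items():
        for col, values in cols.items():
            vals = [v for v in values if v is not None]
            if len(vals) >= min_rows and set(vals) <= master_keys:
                yield table, col

print(list(candidate_fks(return_service_tables, master_customer_ids)))
# -> [('returns', 'cust')]: 'returns' likely references the customer master entity
```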

Automatic Generation of Diverse Cartoons using User's Profiles and Cartoon Features (사용자 프로파일 및 만화 요소를 활용한 다양한 만화 자동 생성)

  • Song, In-Jee; Jung, Myung-Chul; Cho, Sung-Bae
    • Journal of KIISE: Software and Applications / v.34 no.5 / pp.465-475 / 2007
  • With the spread of the Internet, web users record their daily lives in articles, pictures, and cartoons to recollect personal memories or share their experiences. To make recollection and sharing easier, this paper proposes methods for generating diverse cartoons from landmark lists that represent the user's behavior and emotional status. Critical landmarks are selected by the priority and causality of each landmark to compose the cartoon scenario, which is then revised using a story ontology. A suitable cartoon cut for each landmark in the revised scenario is composed using the similarity between cartoon images and the landmark. To make the story more diverse, weather, nightscape, supporting characters, exaggeration, and animation effects are additionally applied. The diversity of the generated cartoons is verified through example scenarios and usability tests.
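
As a loose illustration of the selection and matching steps, the sketch below scores landmarks by priority and causal links, keeps the top ones for the scenario, and matches each selected landmark to the cartoon cut with the most keyword overlap; the weights and similarity measure are assumptions, not the paper's measures.

```python
landmarks = [
    {"id": "wake_up", "priority": 1, "causes": []},
    {"id": "exam", "priority": 5, "causes": ["study"]},
    {"id": "study", "priority": 3, "causes": []},
]

def score(lm: dict) -> float:
    """Causal links raise a landmark's importance (weight 2.0 is illustrative)."""
    return lm["priority"] + 2.0 * len(lm["causes"])

scenario = sorted(landmarks, key=score, reverse=True)[:2]   # critical landmarks

# Match each landmark to the cartoon image with the most overlapping keywords.
image_keywords = {
    "img_desk.png": {"study", "desk", "book"},
    "img_school.png": {"exam", "school", "test"},
}

def best_cut(lm_id: str) -> str:
    return max(image_keywords, key=lambda img: len({lm_id} & image_keywords[img]))

for lm in scenario:
    print(lm["id"], "->", best_cut(lm["id"]))
# exam -> img_school.png, study -> img_desk.png
```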

An LDPC Code Replication Scheme Suitable for Cloud Computing (클라우드 컴퓨팅에 적합한 LDPC 부호 복제 기법)

  • Kim, Se-Hoe; Lee, Won-Joo; Jeon, Chang-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.2 / pp.134-142 / 2012
  • This paper analyzes an LDPC code replication method suitable for cloud computing. First, we determine the number of blocks suitable for cloud computing by analyzing file availability and storage overhead. We also determine the appropriate type of LDPC code by comparing the performance of three types of LDPC codes. Finally, we present a random graph generation method and a method for comparing the performance of each generated LDPC code through iterative decoding. Simulations confirmed that the best graphs are left-regular or close to left-regular, and that their total number of edges is at or near the minimum.
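
The file-availability side of the analysis can be illustrated with an idealized coded-storage model: a file recoverable from any k of n stored blocks on independently available nodes. The sketch below assumes MDS-like recovery, so it is an upper bound for LDPC codes, which typically need slightly more than k surviving blocks.

```python
from math import comb

def file_availability(n: int, k: int, p: float) -> float:
    """P(at least k of n independently available blocks survive)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9                      # per-node availability (assumed parameter)
for n, k in [(3, 1), (6, 3), (12, 6)]:
    overhead = n / k         # storage overhead relative to the raw file size
    print(f"n={n:2d} k={k} overhead={overhead:.1f}x "
          f"availability={file_availability(n, k, p):.6f}")
```

Sweeping (n, k) this way shows the trade-off the paper evaluates: more blocks raise availability but also raise storage overhead.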

A Study on Methodology of Media Contents Automatically Collect and Transform based IP (IP 기반 미디어 콘텐츠 자동 수집 및 변환 기법 연구)

  • Kim, Sang-Soo; Park, Koo-Rack; Kim, Dong-Hyun
    • Journal of Digital Convergence / v.13 no.9 / pp.287-295 / 2015
  • An IPTV service must convert bulk, high-capacity media contents into a unified media format that fits a variety of terminal devices, and it spends a great deal of time on content conversion, including collecting the media contents and extracting the information needed for conversion. To solve this problem, this paper designs a database for automatic, scheduled collection and proposes a system that increases content productivity by automating the entire process using a media server and a transcoder. The media server automatically collects contents and extracts information from content servers in specific locations and from media files in storage, while the transcoder automatically converts the contents and uploads the results to a specific server. As a result, compared to the existing conversion method, the proposed automation minimizes unnecessary waste of time.
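
A minimal sketch of the collect-convert-upload automation, assuming ffmpeg is installed and using invented directory paths in place of the media server's real storage layout:

```python
import shutil
import subprocess
from pathlib import Path

WATCH_DIR = Path("/srv/media/incoming")      # where collected media files land
OUT_DIR = Path("/srv/media/converted")
UPLOAD_DIR = Path("/srv/media/upload")       # stand-in for "a specific server"

def collect() -> list:
    """Gather newly collected media files (extension filter is illustrative)."""
    return [p for p in WATCH_DIR.glob("*") if p.suffix in {".mpg", ".avi", ".mov"}]

def transcode(src: Path) -> Path:
    """Convert to a unified H.264/AAC MP4 via ffmpeg."""
    dst = OUT_DIR / (src.stem + ".mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx264", "-c:a", "aac", str(dst)],
        check=True,
    )
    return dst

for source in collect():
    converted = transcode(source)
    shutil.copy2(converted, UPLOAD_DIR / converted.name)   # automatic "upload"
```

Running such a loop on a schedule removes the manual collect-and-convert steps that the paper identifies as the main source of wasted time.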