• Title/Summary/Keyword: 로그 블록 (log block)

66 search results

Framework for Securing Accountability of Cloud Storage Data by using Blockchain Transaction (블록체인 트랜잭션을 활용한 클라우드 스토리지 데이터 책임 추적성 확보 방안 연구)

  • Park, Byeong-ju; Kwak, Jin
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.326-329 / 2017
  • With the development of ICT, the use of cloud services has become widespread and their range of applications keeps growing. Clouds store a variety of data according to each use case, and the importance of cloud storage and of the data stored in it is increasing as well. As the number of cloud users grows, so does the amount of data outsourced to CSPs, yet security incidents keep occurring; in an untrusted cloud environment, data access logs can be forged or omitted by malicious users or by the CSP itself, so accountability must be secured through tamper-resistant logging. To solve this problem and secure the accountability of cloud storage data, this paper proposes a framework that achieves data accountability through trustworthy data access logging, exploiting the tamper-proof property of the blockchain.
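
The abstract above describes the framework only at a high level. A minimal sketch of the core mechanism (anchoring a hash of each access-log entry in a blockchain transaction so that later forgery or omission becomes detectable) might look like the following; the `chain_client` object and its `submit_transaction` and `get_transaction_data` methods are hypothetical placeholders, not an API from the paper.

```python
import hashlib
import json
import time

def log_entry_hash(entry: dict) -> str:
    """Deterministically hash one access-log entry."""
    canonical = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class AccountabilityLogger:
    """Sketch: anchor each access-log hash in a blockchain transaction."""

    def __init__(self, chain_client):
        self.chain_client = chain_client  # hypothetical blockchain client
        self.local_log = []               # the CSP-side log being protected

    def record_access(self, user: str, obj: str, action: str) -> str:
        entry = {"user": user, "object": obj,
                 "action": action, "ts": time.time()}
        self.local_log.append(entry)
        # The transaction makes the hash immutable; the raw log stays off-chain.
        return self.chain_client.submit_transaction(log_entry_hash(entry))

    def verify(self, entry: dict, tx_id: str) -> bool:
        # Tampering with the local entry changes its hash and fails this check.
        return self.chain_client.get_transaction_data(tx_id) == log_entry_hash(entry)
```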

Development on Assessment system App application of Internet Lectures using App Inventor (앱 인벤터를 활용한 인터넷 강의평가 공유 앱 어플리케이션 개발)

  • Yoon, Tae-Ung; Jang, Jin-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.515-517 / 2015
  • This study developed an Internet lecture evaluation app that helps students resolve questions about their studies, built with App Inventor, a block-based programming tool. During development, a login function and a bulletin board that users can share and browse were added using a web database and Google Fusion Tables. Through these functions, users share information with one another, providing a way to support the selection of Internet lectures.

Partial Garbage Collection Technique for Improving Write Performance of Log-Structured File Systems (부분 가비지 컬렉션을 이용한 로그 구조 파일시스템의 쓰기 성능 개선)

  • Gwak, Hyunho; Shin, Dongkun
    • Journal of KIISE / v.41 no.12 / pp.1026-1034 / 2014
  • Recently, flash storage devices have become popular. Log-structured file systems (LFS) are suitable for flash storage since they generate only sequential writes to the flash device and can therefore provide high write performance. However, an LFS must perform garbage collection (GC) in order to reclaim obsolete space. Recently, a slack space recycling (SSR) technique was proposed to reduce the GC overhead. However, since SSR generates random writes, write performance can suffer if the target device's random write performance is significantly lower than its sequential write performance. This paper proposes a partial garbage collection technique that copies only a part of the valid blocks in a victim segment in order to enlarge the contiguous invalid space available to SSR. The experiments performed in this study show that the partial GC technique significantly improves write performance on an SD card.
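
As a toy illustration of this idea (not the paper's implementation; the segment model and the greedy block choice are invented for brevity), the point is that copying just a few valid blocks can open a contiguous invalid region large enough for SSR:

```python
from typing import List

def longest_invalid_run(seg: List[bool]) -> int:
    """Length of the longest run of invalid (False) blocks."""
    best = cur = 0
    for valid in seg:
        cur = 0 if valid else cur + 1
        best = max(best, cur)
    return best

def partial_gc(seg: List[bool], need: int) -> List[int]:
    """Toy partial GC over one victim segment (True = valid block).
    Full GC would copy every valid block; here we copy valid blocks only
    until a contiguous invalid region of length `need` exists for SSR."""
    moved = []
    for i, valid in enumerate(seg):
        if longest_invalid_run(seg) >= need:
            break                 # enough contiguous space for SSR already
        if valid:
            moved.append(i)       # block i is copied elsewhere (not modeled)
            seg[i] = False        # its old slot becomes invalid, extending runs
    return moved

# A fragmented victim segment; SSR wants 4 contiguous invalid blocks.
segment = [True, False, True, False, False, True, True, False]
print(partial_gc(segment, need=4))   # -> [0, 2]
```

A full GC of this victim segment would copy all four valid blocks; the sketch moves only two before a four-block invalid run exists.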

Comparison of log-logistic and generalized extreme value distributions for predicted return level of earthquake (지진 재현수준 예측에 대한 로그-로지스틱 분포와 일반화 극단값 분포의 비교)

  • Ko, Nak Gyeong; Ha, Il Do; Jang, Dae Heung
    • The Korean Journal of Applied Statistics / v.33 no.1 / pp.107-114 / 2020
  • Extreme value distributions have often been used for the analysis (e.g., prediction of return levels) of data observed from natural disasters. By extreme value theory, block maxima asymptotically follow the generalized extreme value distribution as the sample size increases; however, this may not hold for small samples. To address this problem, this paper proposes the use of a log-logistic (LLG) distribution, whose validity is evaluated through a goodness-of-fit test and model selection. The proposed method is illustrated with annual maximum earthquake magnitudes in China. We present the predicted return level and confidence interval for each return period using the LLG distribution.
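
For reference, the T-year return level is the (1 - 1/T) quantile of the block-maximum distribution. Assuming the standard parameterizations (the abstract does not state which ones the paper uses), the two candidate formulas being compared are:

```latex
% Log-logistic with scale alpha > 0 and shape beta > 0,
% i.e. F(x) = 1 / (1 + (x/alpha)^(-beta)); solving F(x) = 1 - 1/T gives
\[
  x_T^{\mathrm{LLG}} = \alpha \,(T - 1)^{1/\beta}
\]
% GEV with location mu, scale sigma > 0, and shape xi (xi != 0):
\[
  x_T^{\mathrm{GEV}}
    = \mu + \frac{\sigma}{\xi}
      \left[ \bigl( -\log(1 - 1/T) \bigr)^{-\xi} - 1 \right]
\]
```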

Adaptive Garbage Collection Technique for Hybrid Flash Memory (하이브리드 플래시 메모리를 위한 적응적 가비지 컬렉션 기법)

  • Im, Soo-Jun; Shin, Dong-Kun
    • The KIPS Transactions: Part A / v.15A no.6 / pp.335-344 / 2008
  • We propose an adaptive garbage collection technique for hybrid flash memory that contains both SLC and MLC areas. Since the SLC area is fast and the MLC area has low cost, the proposed scheme uses the SLC area as a log buffer and the MLC area as data blocks. Considering the high write cost of MLC flash, garbage collection on the SLC log buffer moves a page into an MLC data block only when the page is cold or the migration incurs a small cost; other pages are moved within the SLC log buffer. The scheme also adapts the parameter values that govern garbage collection to the observed I/O pattern. Experiments show that the proposed scheme outperforms previous flash management schemes for hybrid flash and finds garbage collection parameter values close to the optimal ones.
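
A compact sketch of the migration decision described above; the cold-age threshold, the cost model, and the surrounding data structures are invented for illustration and are not taken from the paper:

```python
import time
from dataclasses import dataclass

@dataclass
class Page:
    last_write: float       # timestamp of the most recent write
    migration_cost: int     # e.g., extra copies a merge into MLC would need

def on_slc_gc(page, now, slc_log, mlc_blocks,
              cold_age=60.0, max_cheap_cost=2):
    """Toy SLC log-buffer GC decision: evict to MLC only when the page is
    cold or cheap to migrate; hot, expensive pages stay in the fast SLC."""
    is_cold = (now - page.last_write) > cold_age
    if is_cold or page.migration_cost <= max_cheap_cost:
        mlc_blocks.append(page)      # write into an MLC data block
    else:
        slc_log.append(page)         # recycle within the SLC log buffer

# Example: one cold page, one hot-but-cheap page, one hot-and-expensive page.
now = time.time()
slc_log, mlc_blocks = [], []
for p in [Page(now - 300, 5), Page(now - 1, 1), Page(now - 1, 9)]:
    on_slc_gc(p, now, slc_log, mlc_blocks)
print(len(mlc_blocks), "to MLC;", len(slc_log), "kept in SLC")  # 2 to MLC; 1 kept
```

In the real scheme, `cold_age` and `max_cheap_cost` would be the parameters tuned adaptively to the I/O pattern.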

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic recovery functions that let it continue operating after a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for unstructured log data and that hinder node expansion by distributing stored data across nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases do, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented data model with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log data insert and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert performance evaluation over various chunk sizes.
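
As a small, purely illustrative example of the document-oriented storage step (the database, collection, and field names are invented, and the `pymongo` driver is assumed), a schema-free collection is what lets log records of different shapes coexist:

```python
from datetime import datetime, timezone

from pymongo import MongoClient  # assumed driver; pip install pymongo

# Connect to a (possibly sharded) MongoDB cluster.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["raw_logs"]   # database/collection names are invented

# Unstructured entries with differing fields fit one schema-free collection.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "login",
     "branch": "Seoul-01", "client_id": "c-1042"},
    {"ts": datetime.now(timezone.utc), "type": "transfer",
     "amount": 250_000, "currency": "KRW", "latency_ms": 41},
])

# Per-type counts, the kind of aggregate a log graph generator would plot.
per_type = logs.aggregate([
    {"$group": {"_id": "$type", "count": {"$sum": 1}}}
])
print(list(per_type))
```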

Tracking of cryptocurrency moved through blockchain Bridge (블록체인 브릿지를 통해 이동한 가상자산의 추적 및 검증)

  • Donghyun Ha; Taeshik Shon
    • Journal of Platform Technology / v.11 no.3 / pp.32-44 / 2023
  • A blockchain bridge (hereinafter "bridge") is a service that enables the transfer of assets between blockchains. A bridge accepts virtual assets from users and delivers the same virtual assets to users on other blockchains. Users rely on bridges because each blockchain environment is independent, so assets cannot be transferred to other blockchains in the usual way. Consequently, the movement of assets through bridges is not traceable in the usual way, and if a malicious actor moves funds through a bridge, existing asset tracking tools are limited in their ability to trace them. This paper therefore proposes a method to obtain bridge usage information by identifying the structure of a bridge and analyzing the event logs of bridge requests. First, to understand the structure of bridges, we analyzed bridges operating on Ethereum Virtual Machine (EVM) based blockchains. Based on the analysis, we applied the method to arbitrary bridge events. Furthermore, we created an automated tool that continuously collects and stores bridge usage information so that it can be used for actual tracking, and we validated the tool and the tracking method against an asset transfer scenario. By extracting usage information through the tool after using a bridge, we were able to recover the information needed for tracking, such as the sending blockchain, the receiving blockchain, the receiving wallet address, and the type and quantity of tokens transferred. This shows that the limitations of tracking asset movements through blockchain bridges can be overcome.
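
As an illustrative sketch of the event-log analysis step (not the paper's tool): on an EVM chain, bridge deposits surface as contract event logs that can be pulled over RPC. The bridge address, event signature, RPC URL, and block range below are placeholders, since every bridge defines its own ABI; the `web3` library is assumed.

```python
from web3 import Web3  # assumed client library; pip install web3

w3 = Web3(Web3.HTTPProvider("https://ethereum.example/rpc"))  # placeholder RPC

# Placeholder deposit event; real bridges each define their own ABI.
BRIDGE = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")
TOPIC = Web3.keccak(text="Deposit(address,address,uint256,uint256)").hex()

logs = w3.eth.get_logs({
    "address": BRIDGE,
    "topics": [TOPIC],
    "fromBlock": 19_000_000,
    "toBlock": 19_000_100,
})

for log in logs:
    # Indexed fields land in topics; the rest is ABI-encoded in log["data"].
    print(log["transactionHash"].hex(), log["topics"][1:], log["data"])
```

Decoding `data` against the bridge's ABI is what yields the destination chain, recipient address, and token amount the abstract mentions.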

EAST: An Efficient and Advanced Space-management Technique for Flash Memory using Reallocation Blocks (재할당 블록을 이용한 플래시 메모리를 위한 효율적인 공간 관리 기법)

  • Kwon, Se-Jin; Chung, Tae-Sun
    • Journal of KIISE: Computing Practices and Letters / v.13 no.7 / pp.476-487 / 2007
  • Flash memory offers attractive features for data storage, such as non-volatility, shock resistance, fast access, and low power consumption. However, it has one main drawback: a block must be erased before its contents can be updated. Furthermore, flash memory can only be erased a limited number of times. To overcome these limitations, flash memory needs a software layer called the flash translation layer (FTL). The basic function of the FTL is to translate logical addresses from a file system, such as the file allocation table (FAT), into physical addresses in flash memory. In this paper, a new FTL algorithm called the efficient and advanced space-management technique (EAST) is proposed. EAST improves performance by optimizing the number of log blocks, applying state transitions, and using reallocation blocks. Experimental results show that EAST outperforms FAST, an enhanced log block scheme, particularly when the usage of flash memory is not full.
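
To make the log-block idea concrete, here is a toy write path for a log-block FTL under invented simplifications (one log region per logical block, merge when the log fills, no wear leveling); EAST's actual state transitions and reallocation blocks are beyond this sketch:

```python
class ToyLogBlockFTL:
    """Toy log-block FTL: updates go to a per-data-block log region and are
    merged back when the log fills. All sizes and policies are invented."""

    PAGES_PER_BLOCK = 4

    def __init__(self):
        self.data = {}   # logical block no. -> list of pages
        self.log = {}    # logical block no. -> list of (page_no, payload)

    def write(self, lbn, page_no, payload):
        updates = self.log.setdefault(lbn, [])
        updates.append((page_no, payload))   # out-of-place update, no erase
        if len(updates) == self.PAGES_PER_BLOCK:
            self._merge(lbn)                 # log full: merge (erase + copy)

    def _merge(self, lbn):
        block = self.data.setdefault(lbn, [None] * self.PAGES_PER_BLOCK)
        for page_no, payload in self.log.pop(lbn):
            block[page_no] = payload         # the latest copy of a page wins

ftl = ToyLogBlockFTL()
for i in range(4):
    ftl.write(0, i % 2, f"v{i}")   # repeated updates to pages 0 and 1
print(ftl.data[0])                 # ['v2', 'v3', None, None]
```

Repeated updates land in the log region and only the latest copies survive the merge; reducing the cost of these merges is what log-block schemes such as FAST and EAST optimize.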

Cubical User Interface for Toy Block Composition in Augmented Reality (증강 현실에서의 장난감 블록 결합을 위한 큐브형 사용자 인터페이스)

  • Lee, Hyeong-Mook; Lee, Young-Ho; Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.363-367 / 2009
  • We propose a Cubical User Interface (CUI) for toy block composition in augmented reality. Creating new objects by composing virtual objects makes it possible to build various AR contents effectively. However, existing GUI methods require learning time or lack an intuitive mapping between the user's actions and the offered interface, and existing AR interfaces have mainly supported one-handed operation without considering the composition property well. The CUI therefore provides a tangible cube as the manipulation tool for virtual toy block composition in AR. The tangible cube, fitted with multi-markers, magnets, and buttons, supports free rotation, combination, and button input. We also propose two kinds of two-handed composing interactions based on the CUI: the Screw Driving (SD) method, which allows free 3-D positioning, and the Block Assembly (BA) method, which provides visual guidance and is fast and intuitive. We expect that the proposed interface can serve as an authoring system for content in areas such as education, entertainment, and Digilogbook.

A Segment Algorithm for Extracting Item Blocks based on Mobile Devices in the Web Contents (웹 콘텐츠에서 모바일 디바이스 기반 아이템 블록을 추출하기 위한 세그먼트 알고리즘)

  • Kim, Su-Do; Park, Tae-Jin; Park, Man-Gon
    • Journal of Korea Multimedia Society / v.12 no.3 / pp.427-435 / 2009
  • Users search and read interesting items and then click the hyperlink attached to an item, where an item is a detailed content unit such as a menu, login box, news article, or video. On a small screen such as a mobile device, it is very difficult to view all web content at once, and browsing for interesting items by scrolling left and right or up and down is inconvenient for users. Directly finding and displaying the items users prefer can reduce the difficulty of manipulating a mobile device's interface. To achieve this, desktop-oriented web content must be segmented on a per-item basis, the item being the component unit of web content. Most segmentation algorithms segment by analyzing HTML code or by mobile screen size, but these make it difficult to extract item blocks because present web content, such as web portal services, is increasingly complicated and diversified in structure. The segmentation algorithm suggested in this paper is based on extracting the item blocks that are the component units of web content.
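
The abstract does not spell out the algorithm, so the following is only an invented heuristic sketch of what item-block extraction can look like (BeautifulSoup is assumed; the size threshold and the contains-a-link rule are mine, not the paper's):

```python
from bs4 import BeautifulSoup  # assumed; pip install beautifulsoup4

def extract_item_blocks(html: str, max_text_len: int = 300):
    """Invented heuristic, not the paper's algorithm: an item block is a
    small tag (fits a mobile viewport) that contains at least one link."""
    soup = BeautifulSoup(html, "html.parser")
    candidates = [
        tag for tag in soup.find_all(["div", "section", "li", "article"])
        if tag.find("a") and 0 < len(tag.get_text(strip=True)) <= max_text_len
    ]
    # Prefer the innermost blocks: any ancestor of a candidate is treated
    # as a container rather than an item and is dropped.
    container_ids = {id(p) for c in candidates for p in c.parents}
    return [t for t in candidates if id(t) not in container_ids]

html = ("<div><li><a href='/news/1'>Headline</a> short summary</li>"
        "<li><a href='/login'>Login</a></li></div>")
for block in extract_item_blocks(html):
    print(block.get_text(" ", strip=True))
```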
