• Title/Summary/Keyword: Log-based file system


A Technique to Enhance Performance of Log-based Flash Memory File Systems (로그기반 플래시 메모리 파일 시스템 성능 향상 기법)

  • Ryu, Junkil;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.2 no.3
    • /
    • pp.184-193
    • /
    • 2007
  • Flash memory adoption in mobile devices is increasing for various multimedia services such as audio, video, and games. Although traditional research issues such as out-of-place update, garbage collection, and wear-leveling remain important, the performance, memory usage, and fast-mount issues of flash memory file systems are becoming more important than ever because flash memory capacity is rapidly increasing. In this paper, we analyze the problems of the existing log-based flash memory file systems and propose an efficient log-based file system that achieves higher performance, lower memory usage, and shorter mount time than the existing log-based file systems. Our ideas are applied to a well-known log-based flash memory file system (YAFFS2), and performance tests are conducted by comparing our prototype with YAFFS2. The experimental results show that our prototype achieves higher performance, lower system memory usage, and faster mounting than YAFFS2, which in turn performs better than JFFS2.
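
The mount-time cost mentioned in this abstract comes from scanning the on-flash log to rebuild in-memory metadata. The following is a minimal, hypothetical sketch of that replay step, which is the work a faster-mounting design tries to reduce; it is illustrative only and is not the technique proposed in the paper.

```python
# Hypothetical sketch of mount-time log replay in a log-based flash file
# system: every chunk header in the log is scanned so the newest chunk for
# each (file_id, chunk_offset) pair can be rebuilt in RAM.
from collections import defaultdict

def replay_log(chunk_headers):
    """chunk_headers: iterable of (sequence_no, file_id, chunk_offset) tuples
    in on-flash order. Returns {file_id: {chunk_offset: sequence_no}} with the
    newest chunk winning, mimicking a full-scan mount."""
    files = defaultdict(dict)
    for seq, file_id, offset in chunk_headers:
        current = files[file_id].get(offset)
        if current is None or seq > current:  # newer data overrides older log entries
            files[file_id][offset] = seq
    return files

if __name__ == "__main__":
    # Two files; file 1's chunk 0 was rewritten later (seq 5 overrides seq 1).
    log = [(1, 1, 0), (2, 1, 1), (3, 2, 0), (5, 1, 0)]
    print(replay_log(log))
```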


HFAT: Log-Based FAT File System Using Dynamic Allocation Method

  • Kim, Nam Ho;Yu, Yun Seop
    • Journal of information and communication convergence engineering
    • /
    • v.10 no.4
    • /
    • pp.405-410
    • /
    • 2012
  • Several attempts have been made to add journaling capability to the traditional file allocation table (FAT) file system, but they encountered issues such as excessive system load or instability of the journaling data itself. If journaling data is saved as a regular file, it can be corrupted by a user application; if it is instead saved in a fixed area such as the reserved area, that part of the storage can be physically damaged by the excessive load. To solve this problem, a new method that dynamically allocates journaling data is introduced. In this method, the journaling data is not saved in file form. Using the reserved area and a reserved FAT status entry of the FAT file system specification, the journaling data can be dynamically allocated and cannot be accessed by user applications. The experimental results show that this method is more stable and scalable than other log-based FAT file systems: HFAT was tested with more than 12,000 power failures and remained stable.
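
A rough illustration of the allocation idea described above: journal clusters are tagged in the FAT with a reserved status value so that the normal allocator skips them and they never appear in any directory entry. The marker value and data structures below are illustrative assumptions, not values taken from the paper or from the FAT specification.

```python
# Hypothetical sketch of dynamically allocating journaling clusters by tagging
# them with a reserved FAT status value. All constants are illustrative only.
FREE        = 0x00000000
JOURNAL_TAG = 0x0FFFFFF6   # assumed reserved marker, not a real spec value
EOC         = 0x0FFFFFFF   # end-of-chain marker for ordinary file data

class TinyFat:
    def __init__(self, n_clusters):
        self.fat = [FREE] * n_clusters

    def alloc_data(self):
        """Normal allocator: only hands out FREE clusters, so clusters tagged
        as journal space stay invisible to user-level file allocation."""
        for i, entry in enumerate(self.fat):
            if entry == FREE:
                self.fat[i] = EOC
                return i
        raise RuntimeError("volume full")

    def alloc_journal(self):
        """Journal allocator: grabs a FREE cluster and tags it so it is neither
        free nor part of any file chain reachable by applications."""
        for i, entry in enumerate(self.fat):
            if entry == FREE:
                self.fat[i] = JOURNAL_TAG
                return i
        raise RuntimeError("no space for journal")

    def release_journal(self, i):
        assert self.fat[i] == JOURNAL_TAG
        self.fat[i] = FREE             # journal space shrinks back on demand

if __name__ == "__main__":
    fat = TinyFat(8)
    j = fat.alloc_journal()
    d = fat.alloc_data()
    print("journal cluster:", j, "data cluster:", d, "fat:", fat.fat)
```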

FUSE-based Syslog Agent for File Access Log (파일 접근 로그를 위한 FUSE 기반의 Syslog 에이전트)

  • Son, Tae-Yeong;Rim, Seong-Rak
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.7
    • /
    • pp.623-628
    • /
    • 2016
  • Because log information provides critical clues for resolving illegal system access, it is very important for a system administrator to gather and analyze log data. In a Linux system, the syslog utility has been used to gather various kinds of log data. Unfortunately, it has the limitation that a system administrator can rely only on the services provided by the syslog utility. To overcome this limitation, this paper suggests a syslog agent that allows the system administrator to gather file-access log information that is not serviced by the syslog utility. The basic concept of the suggested syslog agent is that, after creating a FUSE-based file system, it records the access information of files under the directory on which the FUSE file system is mounted into a log file via the syslog utility. To review its functional validity, a FUSE file system was implemented on Linux (Ubuntu 14.04), and the log information for file accesses was collected and confirmed.
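
To make the agent concept concrete, here is a minimal sketch of a read-only passthrough FUSE file system that reports opens and reads to syslog. It assumes the third-party fusepy package and is only an approximation of the idea, not the authors' implementation.

```python
# Minimal sketch (assumes `pip install fusepy`): a read-only passthrough FUSE
# file system that records file opens/reads to syslog.
import os
import sys
import syslog
from fuse import FUSE, Operations

class LoggingPassthrough(Operations):
    def __init__(self, root):
        self.root = root
        syslog.openlog("fuse-file-audit", syslog.LOG_PID, syslog.LOG_USER)

    def _real(self, path):
        return os.path.join(self.root, path.lstrip("/"))

    def getattr(self, path, fh=None):
        st = os.lstat(self._real(path))
        return {k: getattr(st, k) for k in
                ("st_mode", "st_size", "st_uid", "st_gid",
                 "st_atime", "st_mtime", "st_ctime", "st_nlink")}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self._real(path))

    def open(self, path, flags):
        # Every open under the mount point is reported via syslog.
        syslog.syslog(syslog.LOG_INFO, "open %s flags=%#x" % (path, flags))
        return os.open(self._real(path), flags)

    def read(self, path, size, offset, fh):
        syslog.syslog(syslog.LOG_INFO,
                      "read %s offset=%d size=%d" % (path, offset, size))
        os.lseek(fh, offset, os.SEEK_SET)
        return os.read(fh, size)

    def release(self, path, fh):
        os.close(fh)
        return 0

if __name__ == "__main__":
    # usage: python agent.py <source-dir> <mount-point>
    FUSE(LoggingPassthrough(sys.argv[1]), sys.argv[2], foreground=True, ro=True)
```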

Improving Log-Structured File System Performance by Utilizing Non-Volatile Memory (비휘발성 메모리를 이용한 로그 구조 파일 시스템의 성능 향상)

  • Kang, Yang-Wook;Choi, Jong-Moo;Lee, Dong-Hee;Noh, Sam-H.
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.5
    • /
    • pp.537-541
    • /
    • 2008
  • Log-Structured File System(LFS) is a disk based file system that is optimized for improving the write performance. LFS gathers dirty data in memory as long as possible, and flushes all dirty data sequentially at once. In a real system, however, maintaining dirty data in memory should be flushed into a disk to meet file system consistency issues even if more memory is still available. This synchronizations increase the cleaner overhead of LFS and make LFS to write down more metadata into a disk. In this paper, by adapting Non-volatile RAM(NV-RAM) we modifies LFS and virtual memory subsystem to guarantee that LFS could gather enough dirty data in the memory and reduce small disk writes. By doing so, we improves the performance of LFS by around 2.5 times than the original LFS.

An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin;Cui, Yun;Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.3182-3202
    • /
    • 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system providing features that categorize and analyze unstructured log data is extremely difficult. To overcome such limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated from banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic system recovery against system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. To evaluate the proposed system, we conducted three different performance tests on a local test bed consisting of twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and changing the chunk size option. From the experiments, we found that our system showed better performance in processing unstructured log data.
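
A minimal sketch of the storage side of such a system, assuming a local MongoDB instance and the pymongo driver; the database, collection, and field names below are made up for illustration and are not taken from the paper.

```python
# Minimal sketch: store unstructured log lines as MongoDB documents and run a
# simple aggregation. Assumes a MongoDB server on localhost and pymongo;
# names (logdb, bank_logs, category, ...) are illustrative only.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["logdb"]["bank_logs"]

raw_lines = [
    "2015-08-01 09:12:01 ATM withdrawal failed code=51",
    "2015-08-01 09:12:05 login success branch=seoul",
]

# Unstructured lines are kept as-is; only a few fields are lifted out so the
# documents can be categorized and queried later.
docs = [{
    "raw": line,
    "category": "auth" if "login" in line else "transaction",
    "collected_at": datetime.now(timezone.utc),
} for line in raw_lines]
collection.insert_many(docs)

# Count documents per category, the kind of categorization step the analysis
# module would feed with far larger volumes.
for row in collection.aggregate([{"$group": {"_id": "$category", "n": {"$sum": 1}}}]):
    print(row)
```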

HBase based Business Process Event Log Schema Design of Hadoop Framework

  • Ham, Seonghun;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.49-55
    • /
    • 2019
  • Organizations design and operate business process models to achieve their goals efficiently and systematically. With the advancement of IT, the number of tasks in which computer systems can participate has grown, and processes have become huge and complicated. This phenomenon has created a more complex and subdivided flow of business processes, and the process instances that contain workcases and events have become larger and carry more data. This event log is an essential resource for process mining and is used directly in model discovery, analysis, and improvement of processes. As the log grows bigger and broader, managing it with existing file-based programs or through a relational database leads to problems such as capacity management and I/O load. In this paper, we identify these management limits of file-based and relational-database-based approaches as the event log becomes big data, and we design and apply schemes to archive and analyze large event logs through Hadoop, an open-source distributed processing framework, and HBase, a NoSQL database system.
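
A sketch of how a business-process event log might be written to HBase, assuming the happybase client and an existing table; the row-key layout (process id, then a zero-padded timestamp, then the event id) and the column family name are illustrative assumptions, not the schema designed in the paper.

```python
# Illustrative sketch, not the paper's schema: append process events to an
# HBase table whose row key is "<process_id>#<zero-padded timestamp>#<event_id>",
# so a prefix scan returns one process instance's events in time order.
# Assumes the happybase client and a table 'bp_event_log' with column family 'e'.
import happybase

connection = happybase.Connection("localhost")
table = connection.table("bp_event_log")

def put_event(process_id, timestamp_ms, event_id, activity, performer):
    row_key = f"{process_id}#{timestamp_ms:013d}#{event_id}".encode()
    table.put(row_key, {
        b"e:activity": activity.encode(),
        b"e:performer": performer.encode(),
    })

put_event("proc-0001", 1569300000000, "ev-42", "approve_order", "alice")

# All events of one process instance, in timestamp order, via a prefix scan.
for key, data in table.scan(row_prefix=b"proc-0001#"):
    print(key, data)
```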

Study on Windows Event Log-Based Corporate Security Audit and Malware Detection (윈도우 이벤트 로그 기반 기업 보안 감사 및 악성코드 행위 탐지 연구)

  • Kang, Serim;Kim, Soram;Park, Myungseo;Kim, Jongsung
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.3
    • /
    • pp.591-603
    • /
    • 2018
  • Windows Event Log is a format in which the Windows operating system records system logs and methodically manages information about system operation. An event can be caused by the system itself or by a user's specific actions, and some event logs can be used for corporate security audits, malware detection, and so on. In this paper, we choose actions related to corporate security auditing and malware detection (external storage connection, application installation, shared folder usage, printer usage, remote connection/disconnection, file/registry manipulation, process creation, DNS query, Windows service, PC startup/shutdown, log on/off, power-saving mode, network connection/disconnection, event log deletion, and system time change) that can be detected through event log analysis, and we classify the event IDs that occur in each situation. In addition, existing event log tools only provide functions related to EVTX file parsing, which makes it difficult to track user behavior in a forensic investigation, so in this study we implemented a new analysis tool that parses EVTX files and reconstructs user behavior.
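
To make the classification idea concrete, the sketch below maps a few audit categories to well-known Windows Security/System event IDs (e.g., 4624 for a successful logon); the selection is only a small illustrative subset, not the full mapping built in the paper.

```python
# Small illustrative subset of the kind of event-ID classification described
# above. The IDs listed are well-known Windows event IDs (Security/System
# channels); the paper's mapping covers many more actions than shown here.
AUDIT_EVENT_IDS = {
    "logon": {4624},               # an account was successfully logged on
    "logoff": {4634},              # an account was logged off
    "process_creation": {4688},    # a new process has been created
    "service_install": {7045},     # a service was installed (System log)
    "log_cleared": {1102},         # the audit log was cleared
    "time_change": {4616},         # the system time was changed
}

def classify(event_id):
    """Return the audit categories an event ID belongs to (usually one)."""
    return [cat for cat, ids in AUDIT_EVENT_IDS.items() if event_id in ids]

if __name__ == "__main__":
    for eid in (4624, 1102, 9999):
        print(eid, "->", classify(eid) or ["unclassified"])
```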

Real time predictive analytic system design and implementation using Bigdata-log (빅데이터 로그를 이용한 실시간 예측분석시스템 설계 및 구현)

  • Lee, Sang-jun;Lee, Dong-hoon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.6
    • /
    • pp.1399-1410
    • /
    • 2015
  • Gartner argues that companies must considerably change their survival paradigms, insisting that they need to understand and prepare for the coming era of data competition. As successful business cases built on statistical, algorithm-based predictive analytics have been revealed, leading enterprises are also converting from follow-up action based on after-the-fact data analysis to preemptive countermeasures based on predictive analysis. This trend is influencing security analysis and log analysis, and cases applying big data analysis frameworks to large-scale log analysis and to intelligent, long-term security analysis are being reported one after another. However, not all the functions and techniques required for a big data log analysis system can be accommodated in a Hadoop-based big data platform, so stand-alone, platform-based big data log analysis products are still being supplied to the market. This paper suggests a framework for such stand-alone big data log analysis systems that is equipped with real-time and non-real-time predictive analysis engines and can cope with cyber attacks preemptively.

A Comparison of Data Extraction Techniques and an Implementation of Data Extraction Technique using Index DB -S Bank Case- (원천 시스템 환경을 고려한 데이터 추출 방식의 비교 및 Index DB를 이용한 추출 방식의 구현 -ㅅ 은행 사례를 중심으로-)

  • 김기운
    • Korean Management Science Review
    • /
    • v.20 no.2
    • /
    • pp.1-16
    • /
    • 2003
  • Previous research on data extraction and integration for data warehousing has concentrated mainly on relational DBMSs or, in part, on object-oriented DBMSs. Mostly, it describes issues related to change data (delta) capture and incremental update using the triggering technique of active database systems. However, little attention has been paid to data extraction approaches for other types of source systems, such as hierarchical DBMSs, and for source systems without triggering capability. This paper argues, from a practical point of view, that we need to consider not only the types of information sources and the capabilities of ETT tools but also other factors of the source systems, such as operational characteristics (i.e., whether they support a DBMS log, a user log or no log, and timestamps) and DBMS characteristics (i.e., whether they have triggering capability, etc.), in order to identify appropriate data extraction techniques for different source systems. Having applied several different data extraction techniques (e.g., DBMS log, user log, triggering, timestamp-based extraction, and file comparison) to S bank's source systems (e.g., IMS, DB2, ORACLE, and SAM files), we discovered that the data extraction techniques available in a commercial ETT tool do not completely support data extraction from the DBMS log of an IMS system. For such IMS systems, a new data extraction technique is proposed which first creates an index database and then updates the data warehouse using it. We illustrate this technique using an example application.
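
Among the techniques compared in this abstract, timestamp-based extraction is the simplest to show in code. The sketch below, with made-up table and column names, pulls only the rows changed since the last extraction; it illustrates that general technique, not the Index-DB method implemented for the IMS source.

```python
# Illustrative sketch of timestamp-based change extraction (one of the
# techniques compared above), using SQLite and made-up table/column names.
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, balance INTEGER, last_modified TEXT)")

now = datetime.now(timezone.utc)
conn.executemany(
    "INSERT INTO account VALUES (?, ?, ?)",
    [
        (1, 100, (now - timedelta(days=2)).isoformat()),  # unchanged old row
        (2, 250, now.isoformat()),                        # changed since last run
    ],
)

# The extractor remembers when it last ran and pulls only newer rows (deltas).
last_extract_time = (now - timedelta(days=1)).isoformat()
deltas = conn.execute(
    "SELECT id, balance, last_modified FROM account WHERE last_modified > ?",
    (last_extract_time,),
).fetchall()

print("rows to load into the warehouse:", deltas)
```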

Design and Implementation of Intrusion Detection System of Packet Reduction Method (패킷 리덕션 방식의 침입탐지 시스템 설계 및 구현)

  • JUNG, Shin-Il;KIM, Bong-Je;KIM, Chang-Soo
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.17 no.2
    • /
    • pp.270-280
    • /
    • 2005
  • Many researchers have proposed various methods of detecting illegal intrusions in order to improve the Internet environment. Among these, the intrusion detection system (IDS) is the most common model for protecting network security. In this paper, we propose a new log format, in place of the Apache log format, for SSL integrity verification, and we translate the file-based log format into a relational database (R-DB) log format. Using these methods we can manage the Web server's integrity, and the log data is transmitted to a verification system so that the primary function of an IDS and the management of the Web server's integrity can be performed at the same time. The proposed system can also be used in wired and wireless environments based on PDAs.
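
The translation of a file-based Web log into relational form, as mentioned above, can be sketched as follows; the regular expression targets the standard Apache common log format, and the table layout is an assumption for illustration, not the SSL-integrity log format proposed in the paper.

```python
# Illustrative sketch of turning file-based Apache common-log lines into rows
# of a relational table (here SQLite). The schema is assumed for illustration.
import re
import sqlite3

LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE access_log (host TEXT, user TEXT, time TEXT,"
    " request TEXT, status INTEGER, size INTEGER)"
)

sample = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'
m = LOG_LINE.match(sample)
if m:
    row = m.groupdict()
    conn.execute(
        "INSERT INTO access_log VALUES (?, ?, ?, ?, ?, ?)",
        (row["host"], row["user"], row["time"], row["request"],
         int(row["status"]), 0 if row["size"] == "-" else int(row["size"])),
    )

print(conn.execute("SELECT * FROM access_log").fetchall())
```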