• Title/Summary/Keyword: log file

161 search results

Implementation of A Log-Based Intrusion Recovery Module for Linux File System (리눅스 파일시스템을 위한 로그 기반 침입 복구 모듈의 구현)

  • 이재국;김형식
    • Proceedings of the Korean Information Science Society Conference / 2004.04a / pp.244-246 / 2004
  • Because users want to obtain reliable information at all times, even when an intrusion has occurred, a method is needed that can recover files damaged by an intrusion in a way that is transparent to the user. In this paper, we implement a module that recovers the parts damaged by an intrusion using a log file in which every change to a Linux-based file system is recorded, and we analyze, through experiments, the overhead incurred in managing the log on a system loaded with the log-based intrusion recovery module.

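The recovery approach described above keeps a pre-image of every file modification so that damage from an intrusion can later be undone. The paper implements this inside the Linux kernel; the sketch below is only a user-space Python illustration of the same idea, and every name in it (log format, function names) is a hypothetical stand-in rather than the authors' interface.

    import base64
    import json
    import time

    CHANGE_LOG = "change.log"   # hypothetical location of the change log

    def logged_write(path, offset, data, log_path=CHANGE_LOG):
        # Before overwriting, save the bytes that are about to be destroyed.
        with open(path, "r+b") as f:
            f.seek(offset)
            old = f.read(len(data))
            with open(log_path, "a") as log:
                log.write(json.dumps({
                    "time": time.time(),
                    "file": path,
                    "offset": offset,
                    "old": base64.b64encode(old).decode(),
                }) + "\n")
            f.seek(offset)
            f.write(data)

    def undo_changes(path, since, log_path=CHANGE_LOG):
        # Replay the log backwards and restore every region of `path`
        # that was modified after the timestamp `since`.
        with open(log_path) as log:
            records = [json.loads(line) for line in log]
        for rec in reversed(records):
            if rec["file"] == path and rec["time"] >= since:
                with open(path, "r+b") as f:
                    f.seek(rec["offset"])
                    f.write(base64.b64decode(rec["old"]))

The overhead the paper measures corresponds to the extra read and the log append performed on every write.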

A Log-Based Intrusion Recovery Scheme for the Linux File System (리눅스 파일시스템에서의 로그 기반 침입 복구 기법)

  • 이재국;김형식
    • Proceedings of the Korean Information Science Society Conference / 2003.04a / pp.413-415 / 2003
  • Because ordinary users want to obtain reliable information at all times, even when an intrusion has occurred, a method is needed that can recover files damaged by an intrusion in a way that is transparent to the user. In this paper, we propose a scheme that recovers the parts damaged by an intrusion using duplicate files stored in log form whenever a change occurs in a Linux-based file system. Since it is impossible to tell whether a change is malicious or legitimate at the moment the file is actually modified, change information is kept in log form every time a change occurs, so that it can be applied selectively when recovery is needed. We also present a log compaction method to cope effectively with the continual growth of the log.

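Log compaction is the part of this scheme that keeps the ever-growing change log manageable. Below is a minimal sketch of one possible compaction policy, assuming the same kind of per-change pre-image records as in the previous sketch (all field names hypothetical): when the same region of a file is modified repeatedly, only the earliest pre-image is needed to roll that region back to its original state.

    import json

    def compact_log(log_path, compacted_path):
        # Keep only the earliest pre-image per (file, offset) region; later
        # pre-images of the same region are redundant for rollback.
        earliest = {}
        with open(log_path) as log:
            for line in log:
                rec = json.loads(line)
                key = (rec["file"], rec["offset"])
                if key not in earliest or rec["time"] < earliest[key]["time"]:
                    earliest[key] = rec
        with open(compacted_path, "w") as out:
            for rec in sorted(earliest.values(), key=lambda r: r["time"]):
                out.write(json.dumps(rec) + "\n")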

Study on Recovery Techniques for the Deleted or Damaged Event Log(EVTX) Files (삭제되거나 손상된 이벤트 로그(EVTX) 파일 복구 기술에 대한 연구)

  • Shin, Yonghak;Cheon, Junyoung;Kim, Jongsung
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.2 / pp.387-396 / 2016
  • As the number of people using digital devices has increased, digital forensics, which aims at finding clues to crimes in digital data, has developed and become more important, especially in court. Along with the development of digital forensics, anti-forensics, which aims at thwarting forensic investigation, has also developed. For example, with anti-forensic techniques a criminal can delete digital evidence, making it hard for the investigator to find any clue to the crime. In such cases, techniques for recovering deleted or damaged information become very important in the field of digital forensics. Although recovery techniques for deleted EVTX (event log) files have been presented, there has been no study on recovering the event log data itself. In this paper, we propose recovery algorithms for deleted or damaged event log (EVTX) files and show through experiments that our recovery algorithms achieve a high success rate.
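
The paper's recovery algorithms are not reproduced here, but the sketch below illustrates the kind of signature-based carving such work builds on: EVTX files are organized into 64 KiB chunks, each starting with the ASCII signature "ElfChnk", so candidate chunks of a deleted event log can be located by scanning raw data for that signature. This is a simplified, hypothetical illustration; a real recovery tool would also validate the checksums and event records inside each chunk.

    CHUNK_SIGNATURE = b"ElfChnk\x00"   # marks the start of every 64 KiB EVTX chunk
    CHUNK_SIZE = 0x10000

    def carve_evtx_chunks(image_path):
        # Scan a raw image (or carved unallocated space) and yield the offset
        # of each candidate EVTX chunk together with its 64 KiB of data.
        # The whole image is read at once: fine for a sketch, not for large disks.
        with open(image_path, "rb") as img:
            data = img.read()
        pos = data.find(CHUNK_SIGNATURE)
        while pos != -1:
            chunk = data[pos:pos + CHUNK_SIZE]
            if len(chunk) == CHUNK_SIZE:
                yield pos, chunk
            pos = data.find(CHUNK_SIGNATURE, pos + 1)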

Log processing using messaging system in SSD Storage Tester (SSD Storage Tester에서 메시징 시스템을 이용한 로그 처리)

  • Nam, Ki-ahn;Kwon, Oh-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.8 / pp.1531-1539 / 2017
  • The existing SSD storage tester processed logs in a 1-N structure between server and clients using TCP and a network file system. This method causes problems such as increased CPU usage and difficulty in exception handling. In this paper, we implement a log-processing message layer that supports asynchronous distributed processing using open-source messaging systems such as Kafka and RabbitMQ, and we compare this layer with the existing log transmission method. A log simulator was implemented to compare transmission bandwidth and CPU usage. Test results show that transmission using the message layer achieves higher performance than the existing transmission method, while CPU usage shows no significant difference. The message layer can also be implemented more easily than the conventional method, and its efficiency is higher.
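
As one concrete example of the messaging approach, the sketch below uses the kafka-python client to publish tester log lines to a Kafka topic asynchronously; the broker address, topic name, and file name are hypothetical, and the paper's actual message layer (which also considers RabbitMQ) is not reproduced here.

    from kafka import KafkaProducer   # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",   # hypothetical broker address
        acks=1,
    )

    def publish_log(line, topic="ssd-test-logs"):
        # send() is asynchronous, so the tester is not blocked per line
        # the way a synchronous TCP/NFS transfer would block it.
        producer.send(topic, line.encode("utf-8"))

    with open("tester.log") as f:          # hypothetical tester log file
        for line in f:
            publish_log(line.rstrip("\n"))
    producer.flush()                       # wait for delivery once, at the end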

Consideration of fsync() of the Ext4 File System According to Kernel Version (커널 버전 별 Ext4 파일 시스템의 fsync()에 대한 고찰)

  • Son, Seongbae;Noh, Yoenjin;Lee, Dokeun;Park, Sungsoon;Won, Youjip
    • Journal of KIISE / v.44 no.4 / pp.363-373 / 2017
  • The Ext4 file system is widely used in various computing environments such as PCs, servers, and Linux-based embedded systems. Ext4, which uses a buffer for block I/O, provides the fsync() system call to applications to guarantee the consistency of a specific file. A number of analytical studies on the behavior of Ext4 and on improving its performance have been published, but its behavior has not been studied in detail across kernel versions. We find that the behavior of the fsync() system call differs depending on the kernel version. Among the kernel versions from 3.4.0 to 4.7.2, versions 3.4.0, 3.8.0, and 4.6.2 show behavioral differences with respect to fsync(). The latency of fsync() in kernel 3.4.0 is longer than in the later 3.7.10; in 3.8.0, the Ext4 journaling order could be violated, and this ordering defect was resolved in 4.6.2.
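
A simple way to observe such version-dependent behavior is to time fsync() directly. The sketch below is a minimal, hypothetical micro-benchmark (not the authors' tooling): it appends a small buffer and measures the latency of each fsync() call, which can then be compared across kernel versions.

    import os
    import time

    def measure_fsync_latency(path="fsync_test.dat", size=4096, iterations=100):
        # Append `size` bytes, then force them (and the Ext4 journal) to
        # storage with fsync(), timing each call.
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_APPEND, 0o644)
        payload = b"\0" * size
        latencies = []
        try:
            for _ in range(iterations):
                os.write(fd, payload)
                start = time.perf_counter()
                os.fsync(fd)
                latencies.append(time.perf_counter() - start)
        finally:
            os.close(fd)
        return sum(latencies) / len(latencies)

    if __name__ == "__main__":
        print("average fsync latency: %.1f us" % (measure_fsync_latency() * 1e6))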

Analyzing Past User History through Recovering Deleted $UsnJrnl file (삭제된 $UsnJrnl 파일 복구를 통한 과거 사용자 행위 확인)

  • Kim, Dong-Geon;Park, Seok-Hyeon;Jo, Ohyun
    • Journal of Convergence for Information Technology / v.10 no.5 / pp.23-29 / 2020
  • These days, digital forensic technologies are frequently used at crime scenes. Various electronic devices are present at a crime scene, and the digital forensic results obtained from them are used as important evidence. In particular, the user's actions and the times at which they took place are critical. However, because records of user actions are kept only for a short period, their use in real forensic analyses is limited. This paper proposes an efficient method for recovering deleted user-behavior records and applying them to forensic investigations, and compares the proposed method with previous ones. Although recovery results differ depending on the storage device, the experiments show that the amount of user history data increases by a minimum of 6% and a maximum of 539% when the recovered user behavior is applied to the forensic investigation.
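
The user-behavior records in question are USN journal ($UsnJrnl:$J) entries. As a rough illustration of what recovering them involves, the sketch below scans a buffer of carved data for structures that look like USN_RECORD_V2 entries and decodes the timestamp, reason flags, and file name; the offsets follow the documented V2 layout, but the validation here is deliberately minimal and hypothetical compared with the paper's method.

    import struct
    from datetime import datetime, timedelta

    def filetime_to_dt(ft):
        # FILETIME counts 100 ns intervals since 1601-01-01.
        return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

    def scan_usn_v2(blob):
        # Yield (timestamp, reason, file name) for plausible USN_RECORD_V2
        # structures found in carved or unallocated data.
        pos = 0
        while pos + 60 <= len(blob):
            rec_len, major, minor = struct.unpack_from("<IHH", blob, pos)
            if major == 2 and minor == 0 and 60 <= rec_len <= 1024:
                ts, reason = struct.unpack_from("<QI", blob, pos + 32)
                name_len, name_off = struct.unpack_from("<HH", blob, pos + 56)
                name = blob[pos + name_off:pos + name_off + name_len]
                try:
                    yield filetime_to_dt(ts), reason, name.decode("utf-16-le")
                except (UnicodeDecodeError, OverflowError):
                    pass
                pos += (rec_len + 7) & ~7   # records are 8-byte aligned
            else:
                pos += 8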

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides a method of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand across nodes by distributing the stored data when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log-data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
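
As a small illustration of the document-oriented storage the system relies on, the sketch below inserts raw log lines into MongoDB with pymongo; the connection string, database, and collection names are hypothetical, and the collector, graph generator, and Hadoop modules of the actual system are not shown.

    from datetime import datetime
    from pymongo import MongoClient   # pip install pymongo

    client = MongoClient("mongodb://localhost:27017")   # hypothetical server
    logs = client["banking"]["unstructured_logs"]       # hypothetical names

    def store_log(raw_line, source):
        # Each log line becomes a schema-free document; fields may differ from
        # line to line, which is what makes a document store a good fit for
        # unstructured log data.
        logs.insert_one({
            "source": source,
            "collected_at": datetime.utcnow(),
            "raw": raw_line,
        })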

MOdel-based KERnel Testing (MOKERT) Framework (모델기반의 커널 테스팅 프레임워크)

  • Kim, Moon-Zoo;Hong, Shin
    • Journal of KIISE:Software and Applications / v.36 no.7 / pp.523-530 / 2009
  • Despite the growing need for customized operating system kernels for embedded devices, kernel development continues to suffer from insufficient reliability and high testing cost for several reasons, such as the high complexity of the kernel code. To alleviate these difficulties, this study proposes the MOdel-based KERnel Testing (MOKERT) framework for the detection of concurrency bugs in the kernel. MOKERT translates a given C program into a corresponding Promela model and then tries to find a counterexample with regard to a given requirement property. If one is found, MOKERT executes that counterexample on the real kernel code to check whether it is a false alarm or not. The MOKERT framework was applied to the Linux proc file system and confirmed that the bug reported in a ChangeLog actually caused a data race problem. In addition, a new data race bug in the Linux proc file system was found, which causes a kernel panic.
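
For readers unfamiliar with the underlying toolchain, the sketch below shows the standard SPIN verification steps that a framework like MOKERT automates: generate the pan verifier from a Promela model, run it, and replay any counterexample trail. The model file name is hypothetical, and the C-to-Promela translation step itself is not shown.

    import os
    import subprocess

    def verify_promela_model(pml_path="proc_fs.pml"):
        # Generate the verifier (pan.c) from the Promela model, compile and
        # run it; SPIN writes a .trail file if a counterexample is found.
        subprocess.run(["spin", "-a", pml_path], check=True)
        subprocess.run(["cc", "-o", "pan", "pan.c"], check=True)
        subprocess.run(["./pan"], check=False)
        trail = pml_path + ".trail"
        if os.path.exists(trail):
            # Replay the counterexample step by step so it can be mapped
            # back onto the original kernel code.
            subprocess.run(["spin", "-t", "-p", pml_path], check=False)
            return False
        return True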

Web Caching Strategy based on Documents Popularity (선호도 기반 웹 캐싱 전략)

  • Yoo, Hae-Young;Park, Chel
    • Journal of KIISE:Computer Systems and Theory / v.29 no.9 / pp.530-538 / 2002
  • In this paper, we propose a new caching strategy for web servers. The proposed algorithm collects only the statistics of the requested files, for example their popularity, when requests arrive. Then, periodically, only the files with the highest popularity are cached all together. Because the cache remains unchanged until it is rebuilt, the web server can use a very efficient data structure to determine whether a file is in the cache or not, which greatly increases the efficiency of cache manipulation. Furthermore, experiments performed with real log files produced by web servers show that the cache hit ratio is better than that produced by LRU. The proposed algorithm has a drawback in that the cache hit ratio may decrease when the popularity of files that are not in the cache explodes instantaneously. In our opinion, however, such explosions happen infrequently, and it is easy to adapt web servers to such unusual cases.
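
A minimal sketch of the strategy described above, with hypothetical names: request statistics are gathered continuously, but the cache contents are rebuilt only at intervals, so lookups between rebuilds can use a simple read-only structure.

    from collections import Counter

    class PopularityCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.counts = Counter()    # per-file request statistics
            self.cached = set()        # unchanged between rebuilds

        def on_request(self, filename):
            # Record the request and report whether it was a cache hit.
            self.counts[filename] += 1
            return filename in self.cached

        def rebuild(self):
            # Called at fixed intervals: keep only the most popular files.
            self.cached = {name for name, _ in self.counts.most_common(self.capacity)}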

Data Consistency-Control Scheme Using a Rollback-Recovery Mechanism for Storage Class Memory (스토리지 클래스 메모리를 위한 롤백-복구 방식의 데이터 일관성 유지 기법)

  • Lee, Hyun Ku;Kim, Junghoon;Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE / v.42 no.1 / pp.7-14 / 2015
  • Storage Class Memory (SCM) has been considered a next-generation storage device because it has the advantage of being usable both as memory and as storage. However, recently proposed file systems for SCM have significant data-consistency problems, such as insufficient consistency guarantees or excessive consistency-control overhead. This paper proposes a novel data consistency-control scheme that changes the write mode for log data depending on the ratio of modified data in a block, using a rollback-recovery mechanism instead of the Write-Ahead Logging (WAL) scheme. The proposed scheme reduces both the log data size and the synchronization cost for data consistency. To evaluate the proposed scheme, we implemented it on a Linux 3.10.2-based system and measured its performance. The experimental results show that our scheme improves write throughput by 9 times on average compared to the legacy data consistency-control scheme.
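
The core idea, switching how a block's log entry is written based on how much of the block changed, can be sketched as follows; the threshold value and record format are hypothetical, and the actual scheme operates inside an SCM file system rather than in user space.

    BLOCK_SIZE = 4096
    DELTA_THRESHOLD = 0.5    # hypothetical ratio at which full-block logging wins

    def make_log_entry(old_block, new_block):
        # Both arguments are assumed to be BLOCK_SIZE bytes. Log either a
        # compact delta (few bytes changed) or the full block (many changed).
        changed = [(i, new_block[i]) for i in range(BLOCK_SIZE)
                   if old_block[i] != new_block[i]]
        if len(changed) / BLOCK_SIZE < DELTA_THRESHOLD:
            return {"mode": "delta", "changes": changed}
        return {"mode": "full", "block": new_block}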