• Title/Summary/Keyword: Log System

Search Results: 1,503

A Study on the Improvement of Information Service Using Information System Log Analysis (정보 시스템 이용기록 분석을 통한 정보 서비스 개선방안 연구)

  • Jho, Jae-Hyeong
    • Journal of Information Management / v.36 no.4 / pp.137-153 / 2005
  • To improve information services, users' transaction logs should be stored in the system, and log analysis should be included in the service-improvement process. The kinds of log records collected and the methods of analysis also differ according to each institution's strategy. This paper describes the kinds of log records produced by user behavior on an information system. Its goal is to examine the case of an information center that performs log analysis and to derive a plan for improving its information services.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing users with customized services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult both to realize flexible storage expansion for processing a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let it continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, their strict schemas make it difficult to add nodes and distribute the stored data across them when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance is carried out against a log data processing system that uses only MySQL, demonstrating the proposed system's advantages. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
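The collector's routing of real-time versus aggregated logs can be pictured in a few lines. This is an illustrative sketch, not code from the paper; the log types and the in-memory lists standing in for the MySQL and MongoDB modules are hypothetical:

```python
# Hypothetical log-type sets: which records need real-time graphs (relational
# path) and which are aggregated per unit time (document/batch path).
REALTIME_TYPES = {"transaction", "error"}
BULK_TYPES = {"access", "batch"}

def route_logs(logs):
    """Classify raw log records by type, as a collector module would,
    and split them between the relational and document stores."""
    mysql_store, mongo_store = [], []
    for record in logs:
        if record["type"] in REALTIME_TYPES:
            mysql_store.append(record)   # real-time analysis path
        elif record["type"] in BULK_TYPES:
            mongo_store.append(record)   # MongoDB/Hadoop batch path
    return mysql_store, mongo_store
```

In the actual system the two destinations would be database clients rather than lists; the split itself is the point of the sketch.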

An Efficient Log Data Processing Architecture for Internet Cloud Environments

  • Kim, Julie;Bahn, Hyokyung
    • International Journal of Internet, Broadcasting and Communication / v.8 no.1 / pp.33-41 / 2016
  • Big data management is becoming an increasingly important issue in both industry and academia today. One important category of big data generated by software systems is log data. Log data are generally used by service providers to deliver better services and can also be used to improve system reliability. In this paper, we propose a novel big data management architecture specialized for log data. The proposed architecture provides a scalable log management system that consists of client- and server-side modules for efficient handling of log data. To support large volumes of simultaneous log data from multiple clients, we adopt the Hadoop infrastructure in the server-side file system for storing and managing log data efficiently. We implement the proposed architecture to support various client environments and validate its efficiency through measurement studies. The results show that the proposed architecture performs better than the existing logging architecture by 42.8% on average. All components of the proposed architecture are implemented based on open source software, and the developed prototypes are now publicly available.
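A client-side module that batches log entries before shipping them to the server might look like the following minimal sketch; the class name, batch size, and sender callback are assumptions for illustration, not the paper's actual interface:

```python
class LogClient:
    """Illustrative client-side buffer: accumulate log lines and hand them
    to a sender callback in batches, reducing per-entry round trips."""

    def __init__(self, sender, batch_size=3):
        self.sender = sender          # e.g. a network call in a real system
        self.batch_size = batch_size
        self.buffer = []

    def log(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sender(list(self.buffer))
            self.buffer.clear()
```

Batching is one plausible way a client module keeps many simultaneous writers from overwhelming a server-side HDFS store, since HDFS favors fewer, larger writes.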

An EM-Log Aided Navigation Filter Design for Maritime Environment (해상환경용 EM-Log 보정항법 필터 설계)

  • Jo, Minsu
    • Journal of Advanced Navigation Technology / v.24 no.3 / pp.198-204 / 2020
  • This paper designs an electromagnetic-log (EM-Log) aided navigation filter for maritime environments without a global navigation satellite system (GNSS). When navigation is performed for a long time, the error of an inertial navigation system (INS) gradually diverges, so an integrated navigation method is used to solve this problem. The EM-Log sensor measures the velocity of the vehicle; however, since the velocity measured by the EM-Log contains the speed of the sea current, the aided navigation filter is required to estimate the sea current. This paper proposes a single-model filter and an interacting multiple model (IMM) filter to estimate the sea current and analyzes the influence of the sea current model on the filter. The performance of the designed aided navigation filter is verified in simulation, and the improvement rate of the filter compared to pure navigation is analyzed. The performance of the single-model filter improves when the sea current model is correct but degrades when the model is incorrect. The IMM filter, on the other hand, shows stable performance compared to the single-model filter.
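The single-model case can be illustrated with a scalar Kalman filter that models the sea current as a random walk. This is a simplified sketch of the idea, not the paper's filter; the noise parameters q and r are arbitrary assumptions:

```python
def estimate_current(velocity_diffs, q=1e-4, r=0.04):
    """Scalar Kalman filter. Each measurement z is the INS velocity minus
    the EM-Log velocity, which in this simplified model equals the sea
    current plus measurement noise. q and r are assumed process and
    measurement noise variances."""
    x, p = 0.0, 1.0              # current estimate and its variance
    for z in velocity_diffs:
        p += q                   # predict: random-walk process noise
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # measurement update
        p *= 1.0 - k
    return x
```

The paper's point about model mismatch shows up here too: if the true current is not well described by the assumed random walk, a single filter tuned this way degrades, which motivates running several models in an IMM bank.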

Security Audit System for Secure Router

  • Doo, So-Young;Kim, Ki-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.1602-1605 / 2005
  • An audit trail is one of the last lines of defense against attacks on network equipment. Firewalls and IDSs, which block an attack in advance, are active defenses, whereas audit tracing is a passive defense that infers the type and circumstances of an attack from logs after it has occurred. This paper explains the importance of the audit trail function in network equipment for security and defines the events that must be recorded in the security audit log. We design and implement a security audit system for a secure router and explain why we separate the general audit log from the security audit log.
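The separation of general and security audit logs can be pictured as a simple routing step. The event set below is a hypothetical illustration, not the paper's actual event list:

```python
# Hypothetical set of security-relevant event types for a router.
SECURITY_EVENTS = {"login_fail", "config_change", "acl_update"}

def append_audit(event, general_log, security_log):
    """Write every event to the general audit log, and additionally record
    security-relevant events in the separate security audit log, so the
    security trail stays small and reviewable on its own."""
    general_log.append(event)
    if event["type"] in SECURITY_EVENTS:
        security_log.append(event)
```

Keeping the security trail separate means it can be protected, retained, and reviewed under stricter rules than the high-volume general log.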

A Stability Verification of Backup System for Disaster Recovery (재해 복구를 위한 백업 시스템의 안정성 검증)

  • Lee, Moon-Goo
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.205-214 / 2012
  • The main concern of IT operations managers is protecting corporate assets from system failures and disasters. Therefore, this research proposes a backup system for disaster recovery. In the conventional backup method, when a database update occurs, the record is saved in the redo log, and when the log file exceeds its expected size it is saved in the archive log in turn. Thus, data-loss errors can occur in the backup process, which changes in real time while the database itself is changing. The proposed backup system backs up the redo log to a transaction-log database in real time, and backs up records that the conventional method can miss to the archive log. When recovering data, the redo log can be recovered online in real time, which minimizes data loss. In addition, data recovery is performed using a multi-threaded processing method, and the system is designed so that performance is improved. To verify the stability of the backup system, CPN (Coloured Petri Nets) are introduced: each step of the backup system is displayed in diagram form, and its stability is verified based on the definitions and theorems of CPN.
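The recovery side of such a scheme can be sketched as replaying archived records and then the real-time redo log over the last snapshot. This is a minimal key-value illustration under assumed record shapes, not the paper's implementation:

```python
def recover(snapshot, archive_log, redo_log):
    """Restore a key-value 'database' from its last snapshot by replaying
    archived records first, then the real-time redo log, in order.
    Each record is a (op, key, value) tuple - an assumed format."""
    db = dict(snapshot)
    for op, key, value in archive_log + redo_log:
        if op == "put":
            db[key] = value
        elif op == "delete":
            db.pop(key, None)
    return db
```

The ordering matters: the archive log carries older committed changes, while the redo log carries the most recent ones, so replaying redo last keeps the restored state current.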

XML-based Windows Event Log Forensic Tool Design and Implementation (XML기반 Windows Event Log Forensic 도구 설계 및 구현)

  • Kim, Jongmin;Lee, DongHwi
    • Convergence Security Journal / v.20 no.5 / pp.27-32 / 2020
  • The Windows Event Log is a log that records the overall behavior of the system, and these files contain data that can reveal various user behaviors and signs of anomalies. However, since an event record is generated for each action, it takes a considerable amount of time to analyze the logs. Therefore, in this study, we designed and implemented an XML-based Event Log analysis tool based on the main Event Log list of "Spotting the Adversary with Windows Event Log Monitoring" published by the NSA.
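Filtering exported Event Log XML against a watch list can be sketched with the standard library. The namespace below is the real Windows Event schema URI, but the watched IDs and sample records are only examples of the kind of entries such a list contains, not the tool's actual configuration:

```python
import xml.etree.ElementTree as ET

# Windows Event XML schema namespace.
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

# Example watch list (e.g. 4625 = failed logon, 4720 = account created).
WATCHED = {"4625", "4720"}

def filter_events(xml_text):
    """Return the EventIDs of records whose ID is on the watch list."""
    root = ET.fromstring(xml_text)
    hits = []
    for event in root.findall("e:Event", NS):
        event_id = event.findtext("e:System/e:EventID", namespaces=NS)
        if event_id in WATCHED:
            hits.append(event_id)
    return hits
```

Pre-filtering by ID this way is what makes an analysis tool fast: only the small security-relevant subset of a large log needs deeper inspection.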

A Digital Forensic Method for File Creation using Journal File of NTFS File System (NTFS 파일 시스템의 저널 파일을 이용한 파일 생성에 대한 디지털 포렌식 방법)

  • Kim, Tae Han;Cho, Gyu Sang
    • Journal of Korea Society of Digital Industry and Information Management / v.6 no.2 / pp.107-118 / 2010
  • This paper proposes a digital forensic method for analyzing file-creation transactions using the journal file ($LogFile) of the NTFS file system. The journal file contains a great deal of information that can help recover the file system after a system failure, so knowledge of its structure is very helpful for forensic analysis. The structure of the journal file, however, is not officially documented. We determined the journal file structure by reverse-engineering the structure of its log records. We show a digital forensic procedure for extracting information from the log records of a sample file created on an NTFS volume. The related log records include: bitmap and segment allocation information of the MFT entry, index entry allocation information, and resident value update information ($FILE_NAME, $STANDARD_INFORMATION, $INDEX_ALLOCATION attributes, etc.).
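Since the on-disk layout is undocumented, any parser must be built from reverse-engineered structures. The sketch below walks fixed-size record headers using a deliberately simplified, hypothetical layout (LSN, redo/undo opcodes, payload length) purely to illustrate the approach; it is not the actual $LogFile format:

```python
import struct

# Hypothetical header: lsn (u64), redo op (u16), undo op (u16), length (u32),
# little-endian. The real $LogFile record header differs.
HEADER = struct.Struct("<QHHI")

def parse_records(buf):
    """Walk a buffer of variable-length log records, reading each header
    and slicing out its payload."""
    records, offset = [], 0
    while offset + HEADER.size <= len(buf):
        lsn, redo_op, undo_op, length = HEADER.unpack_from(buf, offset)
        data = buf[offset + HEADER.size : offset + HEADER.size + length]
        records.append({"lsn": lsn, "redo_op": redo_op,
                        "undo_op": undo_op, "data": data})
        offset += HEADER.size + length
    return records
```

A forensic tool built this way recovers redo/undo operation pairs per LSN, from which a file-creation transaction can be reconstructed step by step.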

Design and Implementation of Intrusion Detection System of Packet Reduction Method (패킷 리덕션 방식의 침입탐지 시스템 설계 및 구현)

  • JUNG, Shin-Il;KIM, Bong-Je;KIM, Chang-Soo
    • Journal of Fisheries and Marine Sciences Education / v.17 no.2 / pp.270-280 / 2005
  • Many researchers have proposed various methods to detect illegal intrusions in order to improve the internet environment. Among these, the IDS (Intrusion Detection System) is the most common model for protecting network security. In this paper, we propose a new log format, replacing the Apache log format, for SSL integrity verification, and we translate the file-DB log format into an R-DB log format. Using these methods we can manage the Web server's integrity, and the log data are transmitted to a verification system so that the primary function of the IDS and the Web server's integrity management can be performed at the same time. The proposed system can also be used in wired and wireless environments based on a PDA.
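Translating a file-based Apache access-log line into a relational-style row can be sketched with a regular expression. This illustrates the spirit of the file-DB to R-DB conversion; the field set is an assumption, not the paper's schema:

```python
import re

# Common Log Format fields: host, timestamp, request line, status, size.
APACHE_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def to_row(line):
    """Parse one Apache access-log line into a dict suitable for inserting
    as a relational row; return None for lines that do not match."""
    m = APACHE_RE.match(line)
    if m is None:
        return None
    row = m.groupdict()
    row["status"] = int(row["status"])
    return row
```

Once in row form, the entries can be indexed and queried, which is what makes integrity checks and IDS queries practical compared to scanning flat files.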

Spark-based Network Log Analysis System for Detecting Network Attack Patterns Using Snort (Snort를 이용한 비정형 네트워크 공격패턴 탐지를 수행하는 Spark 기반 네트워크 로그 분석 시스템)

  • Baek, Na-Eun;Shin, Jae-Hwan;Chang, Jin-Su;Chang, Jae-Woo
    • The Journal of the Korea Contents Association / v.18 no.4 / pp.48-59 / 2018
  • Recently, network technology has been used in various fields as it has continued to develop. However, attacks targeting public institutions and companies by exploiting this evolving network technology have increased. Meanwhile, existing network intrusion detection systems take much time to process logs as the amount of network log data increases. Therefore, in this paper, we propose a Spark-based network log analysis system that detects unstructured network attack patterns using Snort. The proposed system extracts and analyzes the elements required for network attack pattern detection from a large amount of network log data. For the analysis, we propose rules to detect network attack patterns for port scanning, host scanning, DDoS, and worm activity, and show that applying them to real log data detects real attack patterns well. Finally, our performance evaluation shows that the proposed Spark-based log analysis system achieves more than twice the log data processing performance of a Hadoop-based system.
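A rule of the port-scanning kind described above can be sketched in plain Python; the threshold and event shape are hypothetical illustrations, and the paper's system applies such rules over Snort logs inside Spark rather than in a single process:

```python
from collections import defaultdict

def detect_port_scans(events, port_threshold=10):
    """Flag source IPs that probe at least port_threshold distinct
    destination ports - a simple port-scanning rule. Each event is an
    assumed (src_ip, dst_port) pair extracted from network logs."""
    ports_by_src = defaultdict(set)
    for src_ip, dst_port in events:
        ports_by_src[src_ip].add(dst_port)
    return {src for src, ports in ports_by_src.items()
            if len(ports) >= port_threshold}
```

In a distributed setting the same logic becomes a group-by-source aggregation with a distinct-count, which is exactly the shape of computation Spark parallelizes well.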