• Title/Summary/Keyword: event log

Analysis of Network Log based on Hadoop (하둡 기반 네트워크 로그 시스템)

  • Kim, Jeong-Joon; Park, Jeong-Min; Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.125-130 / 2017
  • Since field control equipment such as PLCs has no function for logging key event information, it is difficult to analyze an accident after the fact. It is therefore necessary to secure information for analyzing a cyber incident when it occurs, by logging the main event information of field control equipment such as PLCs and IEDs. A protocol analyzer is required to analyze the communication protocols of these embedded field control devices for event logging. However, conventional analyzers such as Wireshark have difficulty identifying and extracting data from the wide variety of protocols involved, which makes payload-based analysis and classification for event logging difficult. In this paper, we develop a Big Data based system that extracts payload data from field control device communication protocols for large-scale event logging.
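As a concrete sketch of the extraction stage, the snippet below shows a minimal Hadoop Streaming mapper in Python that pulls the payload field out of raw capture lines; the comma-separated record format and field names are illustrative assumptions, not the protocol layout used in the paper.

```python
# Minimal Hadoop Streaming mapper sketch. The assumed input format
# "timestamp,src_ip,dst_ip,hex_payload" is a placeholder, not the
# field-control protocol layout described in the paper.
import sys

def extract_payload(line):
    parts = line.rstrip("\n").split(",")
    if len(parts) != 4:
        return None  # skip malformed records
    ts, src, dst, payload_hex = parts
    return src, payload_hex

if __name__ == "__main__":
    for line in sys.stdin:
        record = extract_payload(line)
        if record:
            # Hadoop Streaming expects tab-separated key/value pairs on stdout,
            # so a reducer can group extracted payloads per source device.
            print(f"{record[0]}\t{record[1]}")
```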

A Study on Data Pre-filtering Methods for Fault Diagnosis (시스템 결함원인분석을 위한 데이터 로그 전처리 기법 연구)

  • Lee, Yang-Ji; Kim, Duck-Young; Hwang, Min-Soon; Cheong, Young-Soo
    • Korean Journal of Computational Design and Engineering / v.17 no.2 / pp.97-110 / 2012
  • High-performance sensors and modern data-logging technology with real-time telemetry facilitate system fault diagnosis in a very precise manner. Fault detection, isolation, and identification are the typical steps a fault diagnosis system takes to analyze the root cause of failures. This systematic failure analysis provides not only useful clues for rectifying the abnormal behaviors of a system, but also key information for redesigning the current system for retrofit. The main barriers to effective failure analysis are that (i) the gathered data (event) logs are generally too large, and (ii) they usually contain noise and redundant data that make precise analysis difficult. This paper therefore applies suitable pre-processing techniques for data reduction and feature extraction, and then converts the reduced data log into a new format of event sequence information. Finally, the event sequence information is decoded to investigate the correlation between specific event patterns and various system faults. The efficiency of the developed pre-filtering procedure is examined with a terminal box data log of a marine diesel engine.
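To make the two pre-filtering steps concrete, here is a small Python sketch of (i) redundancy reduction and (ii) conversion of the reduced log into symbolic event sequences; the data shape, thresholds, and discretization rule are invented for illustration.

```python
# Pre-filtering sketch: `log` is assumed to be a list of
# (timestamp, sensor_id, value) tuples; eps/hi/lo are illustrative.
def reduce_log(log, eps=0.01):
    """Step (i): drop consecutive readings whose value barely changes."""
    reduced, last = [], {}
    for ts, sensor, value in log:
        if sensor not in last or abs(value - last[sensor]) > eps:
            reduced.append((ts, sensor, value))
            last[sensor] = value
    return reduced

def to_event_sequence(log, hi=0.8, lo=0.2):
    """Step (ii): discretize readings into symbolic events such as 'S1_HIGH'."""
    events = []
    for ts, sensor, value in sorted(log):
        if value > hi:
            events.append((ts, f"{sensor}_HIGH"))
        elif value < lo:
            events.append((ts, f"{sensor}_LOW"))
    return events
```

The resulting event sequence can then be mined for patterns that co-occur with known fault labels.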

A Multiclass Classification of the Security Severity Level of Multi-Source Event Log Based on Natural Language Processing (자연어 처리 기반 멀티 소스 이벤트 로그의 보안 심각도 다중 클래스 분류)

  • Seo, Yangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.1009-1017 / 2022
  • Log data has been used as a basis for understanding and deciding the main functions and state of information systems, and as an important input for various applications in cybersecurity. Extracting the necessary information from log data, making decisions with that information, and taking suitable countermeasures accordingly are essential for protecting and operating systems stably and reliably, but due to the explosive increase in the variety and volume of logs, it is quite challenging to handle them effectively and efficiently with existing tools. This study therefore suggests a multiclass classification of the security severity level of multi-source event logs, using machine learning based on natural language processing. Experimental results with 472,972 training and test samples show that our approach achieved an accuracy of 99.59%.
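Below is a minimal sketch of this kind of NLP-based severity classifier, using TF-IDF features and logistic regression from scikit-learn; the sample messages, labels, and model choice are placeholders, not the paper's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example log messages and multiclass severity labels.
messages = [
    "failed password for root from 10.0.0.5",
    "service nginx restarted successfully",
    "multiple authentication failures, account locked",
]
severity = ["high", "info", "critical"]

# Word/bigram TF-IDF features feeding a multiclass linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(messages, severity)
print(clf.predict(["root login failure from unknown host"]))
```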

Selecting probability distribution of event mean concentrations from paddy fields (논으로부터 배출되는 유량가중평균 수질농도의 적정 확률분포 선정)

  • Jung, Jaewoon; Choi, Dongho; Yoon, Kwangsik
    • Journal of Environmental Impact Assessment / v.23 no.4 / pp.285-295 / 2014
  • In this study, we analyzed the probability distributions of the EMCs (Event Mean Concentrations) of COD, TOC, T-N, T-P, and SS from rice paddy fields and compared the mean values of the observed EMCs with the median values of the EMCs estimated through the fitted probability distributions ($EMC_{50}$). Field monitoring was conducted over four crop years (from May 1, 2008 to September 30, 2011) in a rice cultivation area located in Emda-myun, Hampyeong-gun, Jeollanam-do, Korea. Four probability distributions, the Normal, Log-normal, Gamma, and Weibull distributions, were used to fit the EMC values from the rice paddy fields. Our results showed that the applicable probability distributions were the Normal, Log-normal, and Gamma distributions for COD; the Normal, Log-normal, Gamma, and Weibull distributions for T-N; the Log-normal, Gamma, and Weibull distributions for T-P and TOC; and the Log-normal and Gamma distributions for SS. The Log-normal and Gamma distributions were acceptable for the EMCs of all water quality constituents (COD, TOC, T-N, T-P, and SS). Meanwhile, the mean value of the observed COD was similar to the median value estimated by the Gamma distribution, while those of TOC, T-N, T-P, and SS were similar to the median values estimated by the Log-normal distribution.
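The fitting-and-comparison procedure can be sketched with scipy.stats as below: fit each candidate distribution, screen it with a Kolmogorov-Smirnov test, and compare the fitted median ($EMC_{50}$) with the observed mean. The EMC array is a made-up placeholder, not the paper's monitoring data.

```python
import numpy as np
from scipy import stats

emc = np.array([3.1, 4.5, 2.8, 6.0, 3.9, 5.2, 4.1, 7.3])  # placeholder COD EMCs

for dist in (stats.norm, stats.lognorm, stats.gamma, stats.weibull_min):
    params = dist.fit(emc)                      # maximum-likelihood fit
    _, p_value = stats.kstest(emc, dist.name, args=params)
    verdict = "applicable" if p_value > 0.05 else "rejected"
    print(f"{dist.name:12s} p={p_value:.3f} ({verdict}) "
          f"median={dist.median(*params):.2f}")

print(f"observed mean = {emc.mean():.2f}")
```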

Diagnosis Analysis of Patient Process Log Data (환자의 프로세스 로그 정보를 이용한 진단 분석)

  • Bae, Joonsoo
    • Journal of Korean Society of Industrial and Systems Engineering / v.42 no.4 / pp.126-134 / 2019
  • Nowadays, with so much big data available everywhere, analysis methods such as data mining can be used to find useful information for improving design and operation. In particular, if we have event log data containing the execution history of an organization, such as case_id, event_time, event (activity), and performer, we can apply process mining to discover the main process model of the organization. Once the main process is discovered, it can be used to improve the current working environment. In this paper, we develop a new method for finding the final diagnosis of a patient who needs several procedures (medical tests and examinations) to diagnose the disease, using a process mining approach. Some patients can be diagnosed with only one procedure, but some are very difficult to diagnose and must take several procedures before the exact disease is identified. We used 2 million procedure log records, in which 397 thousand patients took two or more procedures to reach a final diagnosis. These multi-procedure patients are not the frequent case, but handling them is critical to preventing misdiagnosis. From these multi-procedure patients, four procedures were discovered to form the main process model of the hospital. Using this main process model, we can understand the sequence of procedures in the hospital and, furthermore, the relationship between diagnoses and the corresponding procedures.
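The variant-discovery step can be sketched as follows: group procedure events by case_id, order them by event_time, and count the most frequent procedure sequences. The sample rows are invented placeholders following the event log schema mentioned above.

```python
from collections import Counter, defaultdict

# (case_id, event_time, procedure) rows, as in the event log schema above.
rows = [
    ("p1", "2019-01-02", "blood_test"), ("p1", "2019-01-05", "ct_scan"),
    ("p2", "2019-01-03", "blood_test"), ("p2", "2019-01-04", "biopsy"),
    ("p3", "2019-01-01", "blood_test"), ("p3", "2019-01-06", "ct_scan"),
]

# Order each patient's events by time and collect the procedure trace.
traces = defaultdict(list)
for case_id, ts, activity in sorted(rows, key=lambda r: (r[0], r[1])):
    traces[case_id].append(activity)

# The dominant sequences approximate the hospital's main process model.
variants = Counter(tuple(t) for t in traces.values())
print(variants.most_common())
```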

Intrusion Detection on IoT Services using Event Network Correlation (이벤트 네트워크 상관분석을 이용한 IoT 서비스에서의 침입탐지)

  • Park, Boseok; Kim, Sangwook
    • Journal of Korea Multimedia Society / v.23 no.1 / pp.24-30 / 2020
  • As the number of internet-connected appliances and the variety of IoT services rapidly increase, it is hard to protect IT assets with traditional network security techniques. Most traditional network log analysis systems use rule-based mechanisms to reduce the raw logs, but predefined rules cannot detect new attack patterns, so a mechanism is needed that both reduces the congested raw logs and detects new attack patterns. This paper suggests enterprise security management for IoT services using graph and network measures. We model an event network as a graph of interconnected logs between network devices and IoT gateways, and we suggest a network clustering algorithm that estimates the attack probability of log clusters and detects new attack patterns.
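One way to sketch the event-network idea in Python is with networkx: nodes are log events, weighted edges record how often two events are observed together, and graph communities become candidate clusters to score. The edge data and the scoring rule here are illustrative, not the paper's algorithm.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# (event_a, event_b, co-occurrence count) from correlated device/gateway logs.
edges = [("login_fail", "port_scan", 5), ("login_fail", "new_device", 2),
         ("fw_drop", "port_scan", 7), ("dns_burst", "new_device", 4)]
G.add_weighted_edges_from(edges)

for cluster in greedy_modularity_communities(G, weight="weight"):
    # Toy attack score: total edge weight internal to the cluster.
    score = sum(d["weight"] for u, v, d in G.edges(cluster, data=True)
                if u in cluster and v in cluster)
    print(sorted(cluster), "score:", score)
```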

A Model for Illegal File Access Tracking Using Windows Logs and Elastic Stack

  • Kim, Jisun; Jo, Eulhan; Lee, Sungwon; Cho, Taenam
    • Journal of Information Processing Systems / v.17 no.4 / pp.772-786 / 2021
  • The process of manually tracking suspicious behavior on a system and gathering evidence is labor-intensive, variable, and experience-dependent. System logs are the most important source of evidence in this process. However, in the Microsoft Windows operating system, action events are irregular and the log structure is difficult to audit. In this paper, we propose a model that overcomes these problems and efficiently analyzes Microsoft Windows logs. The proposed model extracts lists of both common and key events from the Microsoft Windows logs to determine detailed actions. In addition, we show how the proposed model can be applied to track illegal file access. The proposed approach employs three-step tracking templates using Elastic Stack, together with the key-event and common-event lists, to identify events, and enables visualization of the data for analysis. Using the three-step model, analysts can adjust the depth of their analysis.
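As a sketch of one tracking step, the query below pulls file-access events (Windows event ID 4663, object-access auditing) from an Elastic index using the elasticsearch-py 8.x client; the index pattern and field names follow common Winlogbeat conventions but are assumptions, as is the target file name.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="winlogbeat-*",  # assumed Winlogbeat index pattern
    query={"bool": {"must": [
        {"term": {"winlog.event_id": 4663}},   # "object access" audit events
        {"match": {"winlog.event_data.ObjectName": "secret.docx"}},
    ]}},
    sort=[{"@timestamp": "asc"}],
)
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src["@timestamp"], src["winlog"]["event_data"].get("SubjectUserName"))
```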

Event Log Analysis Framework Based on the ATT&CK Matrix in Cloud Environments (클라우드 환경에서의 ATT&CK 매트릭스 기반 이벤트 로그 분석 프레임워크)

  • Yeeun Kim; Junga Kim; Siyun Chae; Jiwon Hong; Seongmin Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.2 / pp.263-279 / 2024
  • With the increasing trend of cloud migration, security threats in the cloud computing environment have also increased significantly; consequently, the importance of efficient incident investigation through log data analysis is being emphasized. In cloud environments, the diversity of services and the ease of resource creation generate a large volume of log data. It remains difficult to determine which events to investigate when an incident occurs, and examining all of the extensive log data requires considerable time and effort, so a systematic approach to efficient data investigation is necessary. CloudTrail, the Amazon Web Services (AWS) logging service, collects logs of all API call events occurring in an account, but it offers little insight into which logs to analyze in the event of an incident. This paper proposes an automated analysis framework that integrates the Cloud Matrix and event information for efficient incident investigation. The framework enables simultaneous examination of user behavior log events, event frequency, and attack information. We believe the proposed framework contributes to cloud incident investigations by efficiently identifying critical events based on the ATT&CK Framework.
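The event-to-technique tagging step might look like the sketch below: read a (decompressed) CloudTrail log file, map each eventName to an ATT&CK technique, and rank by frequency. The three-entry mapping table is a tiny illustrative excerpt, not the framework's actual mapping.

```python
import json
from collections import Counter

# Hypothetical excerpt of an eventName -> ATT&CK technique table.
ATTACK_MAP = {
    "ConsoleLogin": "T1078 Valid Accounts",
    "CreateAccessKey": "T1098 Account Manipulation",
    "StopLogging": "T1562 Impair Defenses",
}

def analyze(cloudtrail_file):
    with open(cloudtrail_file) as f:
        records = json.load(f)["Records"]  # CloudTrail log file layout
    hits = Counter()
    for rec in records:
        technique = ATTACK_MAP.get(rec["eventName"])
        if technique:
            hits[(rec["eventName"], technique)] += 1
    for (event, technique), n in hits.most_common():
        print(f"{n:4d}  {event:20s} -> {technique}")
```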

Defining and Discovering Cardinalities of the Temporal Workcases from XES-based Workflow Logs

  • Yun, Jaeyoung; Ahn, Hyun; Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services / v.20 no.3 / pp.77-84 / 2019
  • A workflow management system manages workflow models that define real-world work processes. A workflow process is defined by sequencing the jobs performed by performers. Using a workflow management system, we can also analyze the flow of a process and revise it to be more efficient. Much research has focused on building workflow process models more efficiently and managing them more easily. Recently, many studies have used workflow log files, the execution histories of workflow process models recorded by workflow management systems. Our research group is interested in extracting useful knowledge from workflow event logs. In this paper we use XES log files, because a large amount of data is available in this format. This paper defines the cardinalities of temporal workcases and shows how to obtain them from workflow event logs. Cardinalities of temporal workcases are the occurrence patterns of critical elements in a workflow process. We discover instance cardinalities, activity cardinalities, and organizational resource cardinalities from several XES-based workflow event logs and visualize them. The instance cardinality captures the occurrence of workflow process instances, the activity cardinality the occurrence of activities, and the organizational cardinality the occurrence of organizational resources. From them, we expect to obtain useful knowledge such as control-flow patterns of the process, frequently executed events, and frequently working performers. Furthermore, we expect to be able to reconstruct the original process model using only the workflow event logs.
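Activity cardinalities, for example, can be sketched directly from the XES XML: count how often each activity (the concept:name attribute) occurs within each trace. The file path is a placeholder, and namespace handling is kept deliberately permissive.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def activity_cardinalities(xes_path):
    root = ET.parse(xes_path).getroot()
    per_trace = []
    for trace in root.iter():
        if not trace.tag.endswith("trace"):  # tolerate XES namespaces
            continue
        counts = Counter()
        for event in trace:
            if not event.tag.endswith("event"):
                continue
            for attr in event:  # e.g. <string key="concept:name" value="A"/>
                if attr.get("key") == "concept:name":
                    counts[attr.get("value")] += 1
        per_trace.append(counts)
    return per_trace

# Occurrence pattern of activity "A" across traces, for visualization:
# [c["A"] for c in activity_cardinalities("log.xes")]
```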

Method for Finding Related Object File for a Computer Forensics in a Log Record of $LogFile of NTFS File System (NTFS 파일시스템의 $LogFile의 로그레코드에 연관된 컴퓨터 포렌식 대상 파일을 찾기 위한 방법)

  • Cho, Gyu-Sang
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.4 / pp.1-8 / 2012
  • The NTFS journaling file ($LogFile) is used to keep the file system consistent in the event of a system crash or power failure. Operations on files leave large amounts of information in the $LogFile. Despite the importance of the journal file as a forensic evidence repository, its structure is not well documented. The researchers used reverse engineering to gain a better understanding of the address parts of the log record structures, and utilized the addresses to identify object files and obtain forensic information.
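A first parsing step might scan an extracted $LogFile copy for its 4 KB log-record pages by their "RCRD" signature, as sketched below; the page size and the last-LSN offset follow commonly documented layouts but should be treated as assumptions, since (as the paper notes) the structure is not officially documented.

```python
import struct

PAGE_SIZE = 4096  # assumed $LogFile page size

def scan_logfile(path):
    with open(path, "rb") as f:
        offset = 0
        while True:
            page = f.read(PAGE_SIZE)
            if len(page) < PAGE_SIZE:
                break
            if page[:4] == b"RCRD":  # log-record page signature
                # Last LSN of the page, assumed at offset 8 as a 64-bit value.
                (last_lsn,) = struct.unpack_from("<Q", page, 8)
                print(f"RCRD page at 0x{offset:08x}, last LSN {last_lsn}")
            offset += PAGE_SIZE

# scan_logfile("LogFile.bin")  # a copy extracted with a forensic imager
```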