• Title/Summary/Keyword: Event Logs


Mining Social Networks from Business Process Logs (A Study on Social Network Mining of Business Process Performers)

  • Song, Min-Seok;van der Aalst, W.M.P.;Choe, In-Jun
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2004.05a
    • /
    • pp.544-547
    • /
    • 2004
  • Increasingly, information systems log historic information in a systematic way. Not only workflow management systems, but also ERP, CRM, SCM, and B2B systems often provide a so-called 'event log'. Unfortunately, the information in these event logs is rarely used to analyze the underlying processes. Process mining aims to address this by providing techniques and tools for discovering process, control, data, organizational, and social structures from event logs. This paper focuses on mining social networks. This is possible because event logs typically record information about the users executing the activities recorded in the log. To do this, we combine concepts from workflow management and social network analysis. This paper introduces the approach and presents a tool to mine social networks from event logs.

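A minimal sketch of the handover-of-work idea behind this kind of social network mining, assuming a simple in-memory event log rather than the authors' tool: two performers are linked whenever one performer's activity is directly followed by another's within the same case.

    from collections import Counter
    from itertools import groupby

    # Hypothetical event log: one dict per event, already sorted by case and timestamp.
    events = [
        {"case": "c1", "activity": "register", "performer": "Ann"},
        {"case": "c1", "activity": "check",    "performer": "Bob"},
        {"case": "c1", "activity": "decide",   "performer": "Carol"},
        {"case": "c2", "activity": "register", "performer": "Ann"},
        {"case": "c2", "activity": "check",    "performer": "Carol"},
    ]

    def handover_of_work(events):
        """Count direct handovers of work: who passes a case to whom."""
        handovers = Counter()
        for _, case_events in groupby(events, key=lambda e: e["case"]):
            case_events = list(case_events)
            for prev, curr in zip(case_events, case_events[1:]):
                if prev["performer"] != curr["performer"]:
                    handovers[(prev["performer"], curr["performer"])] += 1
        return handovers

    print(handover_of_work(events))
    # Counter({('Ann', 'Bob'): 1, ('Bob', 'Carol'): 1, ('Ann', 'Carol'): 1})

The resulting counts can be read as a weighted, directed social network over the performers.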

A Model for Illegal File Access Tracking Using Windows Logs and Elastic Stack

  • Kim, Jisun;Jo, Eulhan;Lee, Sungwon;Cho, Taenam
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.772-786
    • /
    • 2021
  • Manually tracking suspicious behavior on a system and gathering evidence is labor-intensive, inconsistent, and experience-dependent. System logs are the most important source of evidence in this process. However, in the Microsoft Windows operating system, action events are recorded irregularly and the log structure is difficult to audit. In this paper, we propose a model that overcomes these problems and efficiently analyzes Microsoft Windows logs. The proposed model extracts lists of both common and key events from the Microsoft Windows logs to determine detailed actions. In addition, we show an approach based on the proposed model applied to tracking illegal file access. The approach employs three-step tracking templates built on the Elastic Stack together with key-event, common-event, and identify-event lists, which enables the data to be visualized for analysis. Using the three-step model, analysts can adjust the depth of their analysis.
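A hedged sketch of how key events might be pulled from Windows logs stored in the Elastic Stack, assuming a Winlogbeat-style index and field names; the event IDs, index pattern, and query are illustrative, not the paper's tracking templates.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Illustrative object-access event IDs (handle requested, access attempt, object deleted).
    KEY_EVENT_IDS = [4656, 4663, 4660]

    query = {
        "bool": {
            "filter": [
                {"terms": {"winlog.event_id": KEY_EVENT_IDS}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    }

    # elasticsearch-py 8.x style; older clients take the query inside a `body` argument.
    resp = es.search(index="winlogbeat-*", query=query, size=100,
                     sort=[{"@timestamp": "asc"}])
    for hit in resp["hits"]["hits"]:
        src = hit["_source"]
        print(src["@timestamp"], src["winlog"]["event_id"])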

Tailoring Operations based on Relational Algebra for XES-based Workflow Event Logs

  • Yun, Jaeyoung;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.21-28
    • /
    • 2019
  • Process mining is a state-of-the-art technology in the workflow field. It has recently become more important because it reveals the actual behavior of a workflow model. However, as process mining has gained attention and matured, its raw material - the workflow event log - has also grown rapidly, and process mining algorithms cannot handle some logs because they are too large. To solve this problem, either a lightweight process mining algorithm is needed, or the event log must be divided and processed in parts. In this paper, we suggest a set of operations that control and edit XES-based event logs for process mining. They are designed on the basis of relational algebra, which is used in database management systems. We designed three operations for tailoring XES event logs. The select operation keeps specific attributes and excludes the others; the output file has the same structure and content as the original file, but each element retains only the attributes the user selected. The union operation merges two input XES files into one; the two input files must come from the same process, and their contents are integrated into a single file. The final operation, slice, divides an XES file into several files by the number of traces. We present the design methods and details in the paper.
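As a rough illustration of the slice operation (select and union follow the same parse-and-rewrite pattern), here is a sketch that splits an XES log into parts by trace count using plain XML parsing; the helper name and output file naming are assumptions, not the paper's implementation.

    import xml.etree.ElementTree as ET

    def slice_xes(path, traces_per_file):
        """Split one XES log into several logs, each holding at most `traces_per_file` traces."""
        tree = ET.parse(path)
        log = tree.getroot()  # the <log> element of an XES document
        ns = log.tag.rsplit("}", 1)[0] + "}" if log.tag.startswith("{") else ""
        traces = log.findall(f"{ns}trace")
        header = [child for child in log if child not in traces]  # extensions, globals, classifiers

        for i in range(0, len(traces), traces_per_file):
            part = ET.Element(log.tag, log.attrib)
            part.extend(header)  # keep log-level metadata in every part
            part.extend(traces[i:i + traces_per_file])
            ET.ElementTree(part).write(f"part_{i // traces_per_file}.xes",
                                       xml_declaration=True, encoding="utf-8")

    # slice_xes("some_log.xes", traces_per_file=1000)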

A MapReduce-Based Workflow BIG-Log Clustering Technique

  • Jin, Min-Hyuck;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.1
    • /
    • pp.87-96
    • /
    • 2019
  • In this paper, we propose a MapReduce-supported clustering technique for collecting and classifying distributed workflow enactment event logs as a preprocessing tool. We call these distributed workflow enactment event logs workflow BIG-Logs, because they fit the 5V properties of big data: volume, velocity, variety, veracity, and value. The clustering technique developed in this paper is intentionally devised for the preprocessing phase of a specific workflow process mining and analysis algorithm that operates on workflow BIG-Logs. In other words, it uses the MapReduce framework as the workflow BIG-Log processing platform, it supports the IEEE XES standard data format, and it is dedicated to the preprocessing phase of the ρ-Algorithm, a typical workflow process mining algorithm based on structured information control nets. More precisely, workflow BIG-Logs can be classified into two types, activity-based clustering patterns and performer-based clustering patterns, and we implement an activity-based clustering pattern algorithm on the MapReduce framework. Finally, we verify the proposed clustering technique through an experimental study on a workflow enactment event log dataset released by the BPI Challenges.
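A small sketch of the activity-based clustering pattern expressed as plain-Python map and reduce steps over a toy list of (case, activity) records; the paper itself runs on an actual MapReduce platform with XES input, so this only illustrates the grouping idea.

    from collections import defaultdict

    # Hypothetical (case_id, activity) event records extracted from a workflow BIG-Log.
    records = [
        ("c1", "A"), ("c1", "B"), ("c1", "C"),
        ("c2", "A"), ("c2", "C"), ("c2", "B"),
        ("c3", "A"), ("c3", "D"),
    ]

    # Map step: collect the set of activities observed per case.
    case_activities = defaultdict(set)
    for case_id, activity in records:
        case_activities[case_id].add(activity)

    # Reduce step: cases with an identical activity set fall into the same cluster.
    clusters = defaultdict(list)
    for case_id, activities in case_activities.items():
        clusters[frozenset(activities)].append(case_id)

    for pattern, cases in clusters.items():
        print(sorted(pattern), "->", cases)
    # ['A', 'B', 'C'] -> ['c1', 'c2']
    # ['A', 'D'] -> ['c3']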

Intrusion Detection on IoT Services using Event Network Correlation

  • Park, Boseok;Kim, Sangwook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.1
    • /
    • pp.24-30
    • /
    • 2020
  • As the number of internet-connected appliances and the variety of IoT services rapidly increase, it is hard to protect IT assets with traditional network security techniques. Most traditional network log analysis systems use rule-based mechanisms to reduce the raw logs, but predefined rules cannot detect new attack patterns. So there is a need for a mechanism that reduces congested raw logs and detects new attack patterns. This paper suggests enterprise security management for IoT services using graph and network measures. We model an event network based on a graph of interconnected logs between network devices and IoT gateways, and we suggest a network clustering algorithm that estimates the attack probability of log clusters and detects new attack patterns.
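A hedged sketch of the event-network idea using networkx, where connected components stand in for the paper's clustering algorithm and the attack probability of a cluster is approximated by its share of anomalous events; all names and values are illustrative.

    import networkx as nx

    # Hypothetical correlated log events: event_id -> anomalous flag.
    events = {
        "e1": False, "e2": True, "e3": True,   # activity around a suspicious gateway
        "e4": False, "e5": False,              # ordinary traffic
    }
    correlations = [("e1", "e2"), ("e2", "e3"), ("e4", "e5")]

    G = nx.Graph()
    G.add_nodes_from(events)
    G.add_edges_from(correlations)

    # Score each event cluster by the share of anomalous events it contains.
    for component in nx.connected_components(G):
        score = sum(events[n] for n in component) / len(component)
        print(sorted(component), f"attack probability ~ {score:.2f}")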

Defining and Discovering Cardinalities of the Temporal Workcases from XES-based Workflow Logs

  • Yun, Jaeyoung;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.3
    • /
    • pp.77-84
    • /
    • 2019
  • A workflow management system manages workflow models that define real-world work processes. We define a workflow process by sequencing the jobs performed by performers. Using a workflow management system, we can also analyze the flow of the process and revise it to be more efficient. Much research has focused on how to build workflow process models more efficiently and manage them more easily. Recently, many studies have used workflow log files, the execution histories of workflow process models performed by a workflow management system. Our research group is interested in extracting useful knowledge from workflow event logs. In this paper we use XES log files because much data is available in this format. This paper defines the cardinalities of temporal workcases and shows how to obtain them from workflow event logs. Cardinalities of temporal workcases are the occurrence patterns of critical elements in the workflow process. We discover instance cardinalities, activity cardinalities, and organizational resource cardinalities from several XES-based workflow event logs and visualize them. The instance cardinality describes the occurrence of workflow process instances, the activity cardinality describes the occurrence of activities, and the organizational cardinality describes the occurrence of organizational resources. From them, we expect to obtain useful knowledge such as control-flow patterns of the process, frequently executed events, and frequently working performers. Furthermore, we expect to be able to reconstruct the original process model using only the workflow event logs.
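A minimal sketch of the three kinds of cardinality read as simple occurrence counts over a toy log; this is one plausible reading of the abstract, not the paper's formal definitions.

    from collections import Counter

    # Hypothetical events: (case_id, activity, performer)
    log = [
        ("c1", "register", "Ann"), ("c1", "check", "Bob"),
        ("c2", "register", "Ann"), ("c2", "check", "Ann"), ("c2", "decide", "Carol"),
    ]

    instance_cardinality = Counter(case for case, _, _ in log)          # events per workcase
    activity_cardinality = Counter(activity for _, activity, _ in log)  # occurrences per activity
    resource_cardinality = Counter(performer for _, _, performer in log)  # events per performer

    print(instance_cardinality)   # Counter({'c2': 3, 'c1': 2})
    print(activity_cardinality)   # Counter({'register': 2, 'check': 2, 'decide': 1})
    print(resource_cardinality)   # Counter({'Ann': 3, 'Bob': 1, 'Carol': 1})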

Event Log Validity Analysis for Detecting Threats by Insiders in Control System

  • Kim, Jongmin;Kang, Jiwon;Lee, DongHwi
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.1
    • /
    • pp.16-21
    • /
    • 2020
  • Owing to the convergence of the communication network with the control system and the public network, security threats such as information leakage and falsification have become possible through various routes. Looking closely at the security posture of current control systems, security operations focus on threats coming from the outside, so research on detecting security threats posed by insiders is inadequate. Thus, this study, based on "Spotting the Adversary with Windows Event Log Monitoring" published by the National Security Agency, shows that event logs can be utilized to detect and counter insider threats, by analyzing the validity of detecting insider threats to the control system with a list of important event logs.
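A small sketch of one way such a validity check could look: counting how many of a list of important Windows event IDs (an illustrative subset inspired by the NSA guidance, not the paper's list) actually appear in an exported log.

    import csv
    from collections import Counter

    # Illustrative important event IDs: logons, failed logons, special privileges,
    # process creation, account creation, audit log cleared.
    IMPORTANT_EVENT_IDS = {4624, 4625, 4672, 4688, 4720, 1102}

    def coverage(csv_path):
        """Report which important event IDs occur in a log exported as CSV with an EventID column."""
        seen = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                event_id = int(row["EventID"])
                if event_id in IMPORTANT_EVENT_IDS:
                    seen[event_id] += 1
        missing = IMPORTANT_EVENT_IDS - set(seen)
        return seen, missing

    # seen, missing = coverage("security_log_export.csv")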

Discovering Redo-Activities and Performers' Involvements from XES-Formatted Workflow Process Enactment Event Logs

  • Pham, Dinh-Lam;Ahn, Hyun;Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.4108-4122
    • /
    • 2019
  • Workflow process mining is becoming an increasingly valuable activity in workflow-supported enterprises. Through it, high-level qualitative business goals can be achieved: improving the effectiveness and efficiency of workflow-supported information systems, increasing their operational performance, reducing completion times by minimizing redundant time, and saving managerial costs. One of the critical challenges in workflow process mining is devising a reasonable approach to discover and recognize the bottleneck points of workflow process models from their enactment event histories. We have observed that the iterative process pattern of redo-activities is highly likely to become a bottleneck point of a workflow process model. Hence, in this paper, we propose an algorithmic approach, and its implementation, to discover redo-activities and their performers' involvement patterns from workflow process enactment event logs. Additionally, we carry out a series of experimental analyses by applying the implemented algorithm to four workflow process enactment event log datasets released by the BPI Challenges. Finally, the discovered redo-activities and their performers' involvement patterns are visualized graphically as information control nets and in tabular form as involvement percentages, respectively.
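A compact sketch of the core detection step as read from the abstract: an activity that occurs more than once within a single workcase is treated as a redo-activity, and the performers executing it are recorded. The full algorithm additionally projects these results onto information control nets.

    from collections import Counter, defaultdict

    # Hypothetical traces: case_id -> ordered list of (activity, performer) events.
    traces = {
        "c1": [("A", "Ann"), ("B", "Bob"), ("B", "Bob"), ("C", "Carol")],
        "c2": [("A", "Ann"), ("B", "Bob"), ("C", "Carol")],
    }

    redo_counts = Counter()
    redo_performers = defaultdict(set)

    for case, events in traces.items():
        per_trace = Counter(activity for activity, _ in events)
        for activity, count in per_trace.items():
            if count > 1:  # activity repeated within one workcase -> redo-activity
                redo_counts[activity] += count - 1
                redo_performers[activity].update(p for a, p in events if a == activity)

    print(redo_counts)            # Counter({'B': 1})
    print(dict(redo_performers))  # {'B': {'Bob'}}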

Correlation Analysis of Event Logs for System Fault Detection

  • Park, Ju-Won;Kim, Eunhye;Yeom, Jaekeun;Kim, Sungho
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.39 no.2
    • /
    • pp.129-137
    • /
    • 2016
  • To identify the cause of an error and maintain the health of a system, an administrator usually analyzes event log data, since it contains useful information for inferring the cause of the error. However, because today's systems are huge and complex, it is almost impossible for administrators to manually analyze event log files to identify the cause of an error. In particular, because OpenStack, which is widely used as a cloud management system, runs various service modules linked across multiple servers, it is hard to access each node and analyze the event log messages of each service module when an error occurs. For this reason, we propose a novel message-based log analysis method that enables the administrator to find the cause of an error quickly. Specifically, the proposed method 1) consolidates event log data generated at the system level and the application service level, 2) clusters the consolidated data based on messages, and 3) analyzes the interrelations among message groups in order to promptly identify the cause of a system error. This study is significant in three respects. First, the root cause of an error can be identified by collecting event logs from both the system level and the application service level and analyzing the interrelations among them. Second, administrators do not need to classify messages for training, since unsupervised learning is applied to the event log messages. Third, using Dynamic Time Warping, an algorithm for measuring the similarity of dynamic patterns over time, increases the accuracy of analyzing patterns generated by a distributed system in which time synchronization is not exactly consistent.
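The third point relies on Dynamic Time Warping; below is a self-contained DTW distance between two numeric series, here imagined as per-minute counts of one message group on two nodes whose clocks drift slightly (the feature representation is an assumption, not the paper's).

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance between two numeric series."""
        inf = float("inf")
        dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
        dp[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                      dp[i][j - 1],      # deletion
                                      dp[i - 1][j - 1])  # match
        return dp[len(a)][len(b)]

    # Per-minute counts of one message group on two nodes with a small clock offset.
    node_a = [0, 3, 7, 7, 2, 0]
    node_b = [0, 0, 3, 7, 7, 2]
    print(dtw_distance(node_a, node_b))  # stays small despite the time shift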

Refining Massive Event Logs to Evaluate Performance Measures of the Container Terminal

  • Park, Eun-Jung;Bae, Hyerim
    • The Journal of Bigdata
    • /
    • v.4 no.1
    • /
    • pp.11-27
    • /
    • 2019
  • The earnings rate of container terminals is gradually decreasing because of a worsening business environment. To enhance the global competitiveness of a terminal, container terminal operators have been trying to resolve operational problems by analyzing terminal operations as a whole. To improve container terminal operations, the operators work to analyze and utilize data from the database that collects and stores, in real time, the data generated during terminal operation. In this paper, we analyze the characteristics of operating processes and define the event log data needed to generate container processes and CKO processes from data stored in the TOS (terminal operating system). We explain how imperfect event logs that create non-normal processes can be refined effectively by analyzing the container and CKO processes, and we propose a framework to refine the event logs easily and quickly. To validate the proposed framework, we implemented it in Python 2.7 and tested it using data collected from a real container terminal as input. As a result, we verified that the non-normal processes in the terminal operations were greatly reduced.

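A hedged sketch of the kind of refinement rule such a framework might apply, assuming toy TOS-style records: events without a usable timestamp are dropped and the remaining events are rebuilt into per-container sequences. The actual refinement rules for container and CKO processes are specific to the paper.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical raw TOS event records; real logs carry many more attributes.
    raw_events = [
        {"container": "TCLU1", "activity": "discharge", "ts": "2019-03-01 09:00:00"},
        {"container": "TCLU1", "activity": "load",      "ts": "2019-03-01 10:00:00"},
        {"container": "TCLU2", "activity": "load",      "ts": None},  # imperfect record
        {"container": "TCLU3", "activity": "load",      "ts": "2019-03-01 11:00:00"},
    ]

    def refine(events):
        """Drop events without a valid timestamp and rebuild per-container event sequences."""
        processes = defaultdict(list)
        for e in events:
            if not e["ts"]:
                continue  # imperfect event: cannot be placed in a process
            e = dict(e, ts=datetime.strptime(e["ts"], "%Y-%m-%d %H:%M:%S"))
            processes[e["container"]].append(e)
        for seq in processes.values():
            seq.sort(key=lambda e: e["ts"])  # restore the actual handling order
        return processes

    for container, seq in refine(raw_events).items():
        print(container, [e["activity"] for e in seq])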