• Title/Summary/Keyword: Log File Analysis

The Service Log Analyser for Blocking Unused Account on Internet Services (인터넷 서비스 미 사용 계정 차단을 위한 서비스 로그 분석기)

  • Jung, Kyu-Cheol; Lee, Jin-Kwan; Lee, Dae-Hyung; Jang, Hae-Suk; Lee, Jong-Chan; Park, Ki-Hong
    • Convergence Security Journal / v.7 no.2 / pp.73-80 / 2007
  • As the Internet has spread widely, security problems have grown with it, and administrators bear an ever-increasing responsibility for keeping networks and services secure. This paper presents a method for blocking accounts that have gone unused for a long period in a multi-domain environment by means of service log analysis, so that administrators can find and deal with this security hole in their systems. The Service Log Analyzer loads the log files written by each service and analyzes them, producing a Used User List containing the account names that actually use specific services. When the scheduled cron job fires, the User Usage Shifter then identifies the accounts that have not used a service for a specific period of time and modifies the expiry date in the account information.
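
The abstract describes a two-stage pipeline, and a minimal sketch helps make it concrete. The Python sketch below assumes a simple whitespace-delimited log format, a 90-day inactivity threshold, and a Linux `chage`-based expiry step; these details are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two components described above. The log format,
# paths, and threshold are assumptions for illustration only.
import re
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

INACTIVITY_LIMIT = timedelta(days=90)   # assumed threshold

def build_used_user_list(log_dir: str) -> dict[str, datetime]:
    """Service Log Analyzer: scan each service's log and record the
    most recent activity timestamp per account name."""
    last_seen: dict[str, datetime] = {}
    for log_file in Path(log_dir).glob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            m = re.match(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+(\S+)", line)
            if m:
                ts, user = datetime.fromisoformat(m.group(1)), m.group(2)
                if user not in last_seen or ts > last_seen[user]:
                    last_seen[user] = ts
    return last_seen

def shift_unused_accounts(last_seen: dict[str, datetime]) -> None:
    """User Usage Shifter: run from cron; expire accounts idle too long."""
    cutoff = datetime.now() - INACTIVITY_LIMIT
    for user, ts in last_seen.items():
        if ts < cutoff:
            # Set the account expiry date to today (Linux example).
            subprocess.run(["chage", "-E", datetime.now().strftime("%Y-%m-%d"), user])

if __name__ == "__main__":
    shift_unused_accounts(build_used_user_list("/var/log/services"))
```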

MOdel-based KERnel Testing (MOKERT) Framework (모델기반의 커널 테스팅 프레임워크)

  • Kim, Moon-Zoo; Hong, Shin
    • Journal of KIISE: Software and Applications / v.36 no.7 / pp.523-530 / 2009
  • Despite the growing need for customized operating system kernels for embedded devices, kernel development continues to suffer from insufficient reliability and high testing cost, owing to factors such as the high complexity of the kernel code. To alleviate these difficulties, this study proposes the MOdel-based KERnel Testing (MOKERT) framework for detecting concurrency bugs in the kernel. MOKERT translates a given C program into a corresponding Promela model and then tries to find a counterexample with regard to a given requirement property. If one is found, MOKERT executes that counterexample on the real kernel code to check whether it is a false alarm. The MOKERT framework was applied to the Linux proc file system and confirmed that the bug reported in a ChangeLog actually caused a data race problem. In addition, a new data race bug that causes a kernel panic was found in the Linux proc file system.
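
To make the check-then-replay loop concrete, here is a hedged Python sketch driving the Spin model checker, assuming a Promela model has already been generated from the C source; the C-to-Promela translation and the kernel replay harness are the paper's contribution and are not reproduced here.

```python
# Hedged sketch of a model-check-then-replay loop around Spin. Assumes
# model.pml already exists; MOKERT's own tooling is not reproduced.
import subprocess
from pathlib import Path

def model_check(model: str) -> bool:
    """Run Spin's verifier; return True if a counterexample trail exists."""
    subprocess.run(["spin", "-a", model], check=True)       # generate pan.c
    subprocess.run(["cc", "-o", "pan", "pan.c"], check=True)
    subprocess.run(["./pan"], check=False)                  # verification run
    return Path(model + ".trail").exists()                  # trail => violation

def show_counterexample(model: str) -> str:
    """Replay the trail in guided simulation to print the interleaving."""
    out = subprocess.run(["spin", "-t", "-p", model],
                         capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    if model_check("model.pml"):
        # In MOKERT this interleaving would next be executed against the
        # real kernel code to decide whether it is a false alarm.
        print(show_counterexample("model.pml"))
    else:
        print("property holds; no counterexample found")
```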

Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators

  • Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon
    • Restorative Dentistry and Endodontics / v.40 no.2 / pp.113-122 / 2015
  • Objectives: The aim of this paper was to evaluate, through a correlation analysis, the ratios of electrical impedance measurements reported in previous studies, in order to establish the ratio as a contributing factor to the accuracy of electronic apex locators (EALs). Materials and Methods: The literature on electrical property measurements of EALs was screened using Medline and Embase. All acquired data were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting, and changes in the impedance ratios at various frequencies were evaluated for a variety of file positions. Results: Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance all had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones differed significantly in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Conclusions: Using the ratio method, the APC was located within a linear interval. Therefore, taking the ratio between electrical impedance measurements at different frequencies is a robust method for detecting the APC.
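
To illustrate the ratio method and the linear ramp model, the following Python sketch fits a ramp to a synthetic ratio-versus-file-position curve; all numbers are fabricated placeholders, not data from the reviewed studies.

```python
# Sketch of the impedance ratio method with a linear ramp fit.
# Frequencies, positions, and ratio values are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def impedance_ratio(z_low: np.ndarray, z_high: np.ndarray) -> np.ndarray:
    """Ratio of impedance magnitudes measured at two frequencies."""
    return np.abs(z_low) / np.abs(z_high)

def linear_ramp(x, x0, x1, y0, y1):
    """Left plateau y0, linear transition on [x0, x1], right plateau y1."""
    return np.where(x < x0, y0,
           np.where(x > x1, y1, y0 + (y1 - y0) * (x - x0) / (x1 - x0)))

# Hypothetical ratio-versus-file-position curve: the ratio decreases as the
# file advances toward the apex.
np.random.seed(0)
position = np.linspace(-3.0, 1.0, 50)            # mm relative to the APC
ratio = linear_ramp(position, -1.0, 0.5, 0.9, 0.55) + 0.01 * np.random.randn(50)

params, _ = curve_fit(linear_ramp, position, ratio, p0=(-1.5, 0.8, 1.0, 0.5))
x0, x1, y0, y1 = params
# Per the paper's finding, the APC falls inside the linear interval [x0, x1].
print(f"linear interval: [{x0:.2f}, {x1:.2f}] mm, plateaus {y0:.2f} -> {y1:.2f}")
```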

MLOps workflow language and platform for time series data anomaly detection

  • Sohn, Jung-Mo; Kim, Su-Min
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.19-27 / 2022
  • In this study, we propose a language and platform for describing and managing MLOps (Machine Learning Operations) workflows for time series anomaly detection. Time series data is collected in many fields, such as IoT sensors, system performance indicators, and user access, and is used in many applications such as system monitoring and anomaly detection. To perform prediction and anomaly detection on time series data, an MLOps platform is required that can quickly and flexibly apply analyzed models to the production environment. Since Python is widely used in data analysis, we developed the Python-based AI/ML Modeling Language (AMML) to make it easy to configure and execute MLOps workflows. The proposed MLOps platform can extract and preprocess time series data from various data sources (R-DB, NoSQL DB, log files, etc.) using AMML and run predictions through a deep learning model. To verify the applicability of AMML, a workflow for generating a transformer oil temperature prediction deep learning model was configured with AMML, and it was confirmed that training was performed normally.
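
AMML's concrete syntax is not shown in the abstract, so the following is a purely hypothetical Python-flavored workflow definition in the spirit described (extract, then detect); the `Workflow` class, step names, and data are invented for illustration.

```python
# Hypothetical sketch of a Python-based workflow language; not AMML itself.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    name: str
    steps: list[tuple[str, Callable]] = field(default_factory=list)

    def step(self, name: str):
        def register(fn: Callable):
            self.steps.append((name, fn))
            return fn
        return register

    def run(self, data=None):
        for name, fn in self.steps:
            print(f"[{self.name}] running step: {name}")
            data = fn(data)
        return data

wf = Workflow("oil-temp-anomaly")

@wf.step("extract")
def extract(_):
    # Stand-in for reading from an R-DB, NoSQL DB, or log file.
    return [20.1, 20.3, 20.2, 35.9, 20.4]

@wf.step("detect")
def detect(series):
    mean = sum(series) / len(series)
    return [x for x in series if abs(x - mean) > 5.0]  # naive anomaly rule

print("anomalies:", wf.run())
```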

The Proactive Threat Protection Method from Predicting Resignation Throughout DRM Log Analysis and Monitor (DRM 로그분석을 통한 퇴직 징후 탐지와 보안위협 사전 대응 방법)

  • Hyun, Miboon; Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.2 / pp.369-375 / 2016
  • Most companies are willing to spend money on security systems such as DRM, mail filtering, DLP, and USB blocking for data leakage prevention. However, in many cases it is difficult for the legal team to take action on a data leak, because the company usually recognizes it only after the employee has left. Therefore, perceiving an employee's impending resignation beforehand and building an adequate response process are very important. By analyzing the DRM log, which records every change to every file related to user behavior, the company can predict an employee's resignation and prevent data leakage before it happens. This study suggests how to prevent damage from leaked confidential information by building a DRM monitoring process that can predict employee resignations.
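
As an illustration of the kind of indicator such monitoring might compute, the sketch below flags users whose busiest day of DRM-logged file operations far exceeds their own baseline; the log schema (timestamp, user, action, path) and the spike threshold are assumptions, not the paper's model.

```python
# Hedged sketch of a resignation-signal heuristic over DRM logs.
# The CSV schema and the 5x spike factor are illustrative assumptions.
import csv
from collections import defaultdict
from datetime import date

def daily_counts(drm_log_csv: str) -> dict[str, dict[date, int]]:
    counts: dict[str, dict[date, int]] = defaultdict(lambda: defaultdict(int))
    with open(drm_log_csv, newline="") as f:
        for row in csv.DictReader(f):     # columns: timestamp,user,action,path
            day = date.fromisoformat(row["timestamp"][:10])
            counts[row["user"]][day] += 1
    return counts

def flag_resignation_signals(counts, spike_factor: float = 5.0) -> list[str]:
    """Flag users whose busiest day exceeds spike_factor x their daily mean."""
    flagged = []
    for user, per_day in counts.items():
        values = list(per_day.values())
        mean = sum(values) / len(values)
        if len(values) > 1 and max(values) > spike_factor * mean:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    print(flag_resignation_signals(daily_counts("drm_log.csv")))
```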

A Study for Detection of the Kernel Backdoor Attack and Design of the restoration system (커널 백도어 공격 탐지 및 복구시스템 설계에 관한 연구)

  • Jeon, Wan-Keun; Oh, Im-Geol
    • Journal of Korea Society of Industrial Information Systems / v.12 no.3 / pp.104-115 / 2007
  • As soon as a kernel backdoor intrusion is detected, the proposed method can preserve secure and trustworthy evidence even on a damaged system. As an experimental tool, we implement a backup and analysis system that can respond quickly, in order to minimize the damage. In this paper, we propose a method that can restore deleted log files and analyze the hard disk image, so as to expose the location of the intruder.
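
One step the paper describes, restoring deleted log data from a disk image, can be sketched as a signature scan; real recovery must walk the file system's structures, so the pattern-based carving below (with its syslog timestamp regex and chunked scan) is a simplifying assumption.

```python
# Simplified sketch: carve syslog-looking lines out of a raw disk image,
# which can surface entries from deleted log files. Deduplication and
# file-system-aware recovery are omitted for brevity.
import re

SYSLOG_LINE = re.compile(
    rb"(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) "
    rb"[ \d]\d \d{2}:\d{2}:\d{2} \S+ .{0,200}")

def carve_log_lines(image_path: str, chunk_size: int = 1 << 20) -> list[bytes]:
    """Scan a disk image in chunks and return byte spans that look like
    syslog entries, including ones from deleted log files."""
    hits: list[bytes] = []
    with open(image_path, "rb") as img:
        tail = b""
        while chunk := img.read(chunk_size):
            data = tail + chunk
            hits.extend(m.group(0) for m in SYSLOG_LINE.finditer(data))
            tail = data[-256:]   # overlap so lines spanning chunks still match
    return hits
```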

Web-Server Security Management system using the correlation analysis (상호연관성 분석을 이용한 웹서버 보안관리 시스템)

  • Kim Sung-Rak
    • Journal of the Korea Society of Computer and Information / v.9 no.4 s.32 / pp.157-165 / 2004
  • This paper presents a web-server security management system (WSM) that can accurately and swiftly detect web service attacks, which keep increasing, while reducing the possibility of false positive detections. The system gathers the results of many unit security modules in real time and enhances detection accuracy through a correlation analysis procedure. The unit security modules consist of a network-based intrusion detection module, a file integrity check module, a system log analysis module, and a web log analysis module, and a correlation analysis module analyzes the correlations among the results of each unit module on the spot. The suggested system provides a feasible framework for extending the range of correlation analysis and adding unit security modules, as well as for accurate attack detection. In addition, the attack detection module achieves faster detection times by restructuring Snort as a multi-threaded system. WSM will be further improved by shortening the processing time of the unit security modules under heavy traffic.
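
A minimal sketch of the correlation step might look as follows: alerts from the unit modules are grouped by source IP within a time window, and only multi-module agreement is escalated. The `Alert` fields and the two-module quorum are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of cross-module alert correlation to cut false positives.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Alert:
    module: str        # e.g. "nids", "file_integrity", "syslog", "weblog"
    src_ip: str
    timestamp: float   # seconds since epoch

def correlate(alerts: list[Alert], window: float = 60.0, quorum: int = 2):
    """Escalate source IPs reported by >= quorum distinct modules in-window."""
    by_ip: dict[str, list[Alert]] = defaultdict(list)
    for a in alerts:
        by_ip[a.src_ip].append(a)
    escalated = []
    for ip, group in by_ip.items():
        group.sort(key=lambda a: a.timestamp)
        for i, first in enumerate(group):
            inside = [a for a in group[i:] if a.timestamp - first.timestamp <= window]
            if len({a.module for a in inside}) >= quorum:
                escalated.append(ip)
                break
    return escalated

alerts = [Alert("nids", "10.0.0.7", 100.0),
          Alert("weblog", "10.0.0.7", 130.0),
          Alert("syslog", "10.0.0.9", 400.0)]
print(correlate(alerts))   # ['10.0.0.7'] -- two modules agree within 60 s
```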

Drone Flight Record Forensic System through DUML Packet Analysis (DUML 패킷 분석을 통한 드론 비행기록 포렌식 시스템)

  • YeoHoon Yoon; Joobeom Yun
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.1 / pp.103-114 / 2024
  • In a situation where drone-related crimes continue to rise, research in drone forensics becomes crucial for preventing and responding to incidents involving drones. Conducting forensic analysis on the flight record files stored inside a drone is essential for investigating illegal activities. However, analyzing flight record files generated through the proprietary DUML protocol requires a deep understanding of the protocol's structure and characteristics, and a forensic analysis tool capable of handling encrypted payloads and analyzing various drone models is imperative. Therefore, this study presents how drones generate flight record files and what characterizes them, and explains the structure of the flight record file and the features of the DUML packet. Finally, based on the presented DUML packet structure, we conduct forensic analysis and propose an extended forensic analysis system that operates more universally than existing tools and performs expanded syntactic analysis.
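
As a rough illustration of DUML frame parsing, the sketch below follows commonly published reverse-engineered descriptions of the frame layout (0x55 magic, 10-bit length, header CRC8, trailing CRC16); every offset here should be treated as an assumption rather than vendor documentation or this paper's exact parser.

```python
# Hedged sketch of fixed-header DUML frame parsing; offsets are assumptions
# based on public reverse-engineering write-ups, and CRC checks are omitted.
import struct

def parse_duml_frame(buf: bytes) -> dict:
    if len(buf) < 13 or buf[0] != 0x55:
        raise ValueError("not a DUML frame")
    length_version = struct.unpack_from("<H", buf, 1)[0]
    length = length_version & 0x03FF          # low 10 bits: frame length
    version = length_version >> 10
    return {
        "length": length,
        "version": version,
        "header_crc8": buf[3],
        "source": buf[4],
        "target": buf[5],
        "sequence": struct.unpack_from("<H", buf, 6)[0],
        "cmd_type": buf[8],
        "cmd_set": buf[9],
        "cmd_id": buf[10],
        "payload": buf[11:length - 2],
        "frame_crc16": struct.unpack_from("<H", buf, length - 2)[0],
    }
```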

An Efficient Scheme of Performing Pending Actions for the Removal of Database Files (데이터베이스 파일의 삭제를 위한 미처리 연산의 효율적 수행 기법)

  • Park, Jun-Hyun; Park, Young-Chul
    • Journal of KIISE: Databases / v.28 no.3 / pp.494-511 / 2001
  • In an environment where the database management system directly manages the disk space for storing databases, this paper proposes a correct and efficient scheme for performing pending actions for the removal of database files. Upon recovery, the recovery process must identify the unperformed pending actions of not-yet-terminated transactions and then perform those actions completely. The basic idea of this paper is to let the recovery process identify those actions by analyzing the log records in the log file. The scheme, an extension of the transaction execution, fuzzy checkpointing, and recovery of ARIES, uses the following methods. First, to identify not-yet-terminated transactions during recovery, transactions perform pending actions after writing a 'pa_start' log record, which signifies both the commit of the transaction and the start of executing its pending actions, and then write an 'end' log record. Second, to restore the pending-actions-lists of not-yet-terminated transactions during recovery, each transaction records its pending-actions-list in the 'pa_start' log record, and the checkpoint process records the pending-actions-lists of transactions that are decided to be committed in the 'end_chkpt' log record. Third, to identify the next pending action to perform during recovery, whenever a page is updated during the execution of pending actions, the transaction records the information identifying the next pending action to perform in the log record that carries the redo information for that page.
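
The logging discipline is easiest to see in miniature. The sketch below writes a 'pa_start' record carrying the pending-actions-list, logs progress after each action, and writes 'end' when done, so recovery can finish what remains; the JSON-lines log format and file-removal actions are illustrative assumptions, a simplification of the ARIES-style scheme above.

```python
# Minimal sketch of the 'pa_start' / progress / 'end' discipline described
# above. A 'pa_start' without a matching 'end' means recovery must complete
# the remaining pending actions.
import json, os

LOG = "wal.log"

def log_record(rec: dict) -> None:
    with open(LOG, "a") as f:
        f.write(json.dumps(rec) + "\n")
        f.flush()
        os.fsync(f.fileno())              # force the record to stable storage

def commit_with_pending_actions(txn_id: int, files_to_remove: list[str]) -> None:
    # 'pa_start' marks both commit and the start of the pending actions,
    # and carries the pending-actions-list itself.
    log_record({"type": "pa_start", "txn": txn_id, "pending": files_to_remove})
    for i, path in enumerate(files_to_remove):
        if os.path.exists(path):
            os.remove(path)
        # Record which action to perform next if we crash here.
        log_record({"type": "pa_progress", "txn": txn_id, "next": i + 1})
    log_record({"type": "end", "txn": txn_id})

def recover() -> None:
    """Finish the pending actions of transactions lacking an 'end' record."""
    state: dict[int, dict] = {}
    for line in open(LOG):
        rec = json.loads(line)
        if rec["type"] == "pa_start":
            state[rec["txn"]] = {"pending": rec["pending"], "next": 0}
        elif rec["type"] == "pa_progress":
            state[rec["txn"]]["next"] = rec["next"]
        elif rec["type"] == "end":
            state.pop(rec["txn"], None)
    for txn, s in state.items():
        commit_with_pending_actions(txn, s["pending"][s["next"]:])
```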

Improving Performance based on Processing Analysis of Big data log file (빅데이터 로그파일 처리 분석을 통한 성능 개선 방안)

  • Lee, Jaehan; Yu, Heonchang
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.539-541 / 2016
  • Recently, Apache Hadoop-based ecosystems have been widely used for big data analysis. In this paper, we perform a performance evaluation of the process of transforming collected log data and loading it into a database efficiently. Based on this, we propose a way to improve the performance of inserting log data from text files into an Oracle database via JDBC from a program developed in Java. To process large log files efficiently, we improve the processing speed using the Hadoop ecosystem, and we also compare performance against processing with Apache Spark, which has recently drawn attention for its fast in-memory processing. Through this study, we propose a way to build an optimal log data processing system.
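
As a rough illustration of the Spark-based variant evaluated in the paper, the sketch below reads a large log file, parses it into columns, and bulk-loads it into Oracle over JDBC; the log layout, table name, and connection details are placeholder assumptions, and the Oracle JDBC driver jar must be on Spark's classpath.

```python
# Hedged PySpark sketch: text log -> parsed columns -> Oracle via JDBC.
# Schema, paths, and credentials are placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql.functions import split, col

spark = SparkSession.builder.appName("log-to-oracle").getOrCreate()

parts = split(col("value"), r"\s+", 3)        # assumed: timestamp level message
logs = (spark.read.text("hdfs:///logs/app.log")
        .select(parts.getItem(0).alias("ts"),
                parts.getItem(1).alias("level"),
                parts.getItem(2).alias("message")))

(logs.write
     .format("jdbc")
     .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
     .option("dbtable", "APP_LOG")
     .option("user", "loguser")
     .option("password", "secret")
     .option("driver", "oracle.jdbc.OracleDriver")
     .mode("append")
     .save())
```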