• Title/Summary/Keyword: Data log


A Non-fixed Log Area Management Technique in Block for Flash Memory DBMS (플래시메모리 DBMS를 위한 블록의 비고정적 로그 영역 관리 기법)

  • Cho, Bye-Won;Han, Yong-Koo;Lee, Young-Koo
    • Journal of KIISE:Databases / v.37 no.5 / pp.238-249 / 2010
  • Flash memory has been studied as a storage medium for improving system performance in the DBMS field, where frequent data access is required, thanks to its high access speed. The main difficulty in using flash memory is the performance degradation and shortened life span caused by inefficient in-place updates. Log-based approaches have been studied to solve the inefficient in-place update problem in DBMSs, where write operations frequently touch data smaller than a page. However, the existing log-based approaches suffer from frequent merge operations, which are the principal cause of performance deterioration. This is because their fixed log area management cannot guarantee sufficient space for logs. In this paper, we propose a non-fixed log area management technique that minimizes the occurrence of merge operations by guaranteeing enough space for logs. We also suggest a cost model for calculating the optimal number of log sectors in a block, i.e., the number that minimizes the system operation cost. Experiments show that our non-fixed log area management technique improves performance compared to existing approaches.
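
The abstract does not give the cost model itself, but the idea of choosing the log sector count that minimizes per-block cost can be sketched as follows. This is a minimal illustration with a hypothetical cost function in which a larger log area defers merges; the constants and the formula are placeholders, not the authors' model.

```python
# Hypothetical per-block cost model for choosing the number of log sectors.
# All constants and the cost formula below are illustrative assumptions,
# not the model proposed in the paper.

SECTORS_PER_BLOCK = 128   # total sectors in one flash block (assumed)
C_WRITE = 1.0             # cost of one sector write (assumed)
C_MERGE = 200.0           # cost of one block merge/erase cycle (assumed)

def block_cost(log_sectors: int, updates: int) -> float:
    """Estimated cost of serving `updates` small writes with `log_sectors`
    reserved for logs: each update appends one log sector, and a merge is
    triggered every time the log area fills up."""
    data_sectors = SECTORS_PER_BLOCK - log_sectors
    if data_sectors <= 0:
        return float("inf")
    merges = updates // log_sectors          # merge whenever the log overflows
    return updates * C_WRITE + merges * C_MERGE

def optimal_log_sectors(updates: int) -> int:
    """Pick the log area size that minimizes the estimated block cost."""
    return min(range(1, SECTORS_PER_BLOCK), key=lambda k: block_cost(k, updates))

print(optimal_log_sectors(updates=1000))
```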

FUSE-based Syslog Agent for File Access Log (파일 접근 로그를 위한 FUSE 기반의 Syslog 에이전트)

  • Son, Tae-Yeong;Rim, Seong-Rak
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.7 / pp.623-628 / 2016
  • Because log information provides critical clues for investigating illegal system access, it is very important for a system administrator to gather and analyze log data. In a Linux system, the syslog utility has traditionally been used to gather various kinds of log data. Unfortunately, the administrator is limited to the services that the syslog utility provides. To overcome this limitation, this paper suggests a syslog agent that allows the system administrator to gather file access logs, which the syslog utility does not service. The basic concept of the suggested agent is that, after creating a FUSE file system, it records accesses to the files under the directory on which the FUSE file system is mounted into the log file via the syslog utility. To review its functional validity, a FUSE file system was implemented on Linux (Ubuntu 14.04), and the log information for file accesses was collected and confirmed.
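
A minimal sketch of this idea, assuming the third-party fusepy package: a passthrough filesystem that reports each open() to syslog. The mirrored-directory layout and the log message format are assumptions; the paper's agent may differ in detail.

```python
# Minimal passthrough FUSE filesystem that logs file opens via syslog.
# Assumes the third-party `fusepy` package; run as:
#   python agent.py /real/dir /mnt/point
import os, sys, syslog
from fuse import FUSE, Operations  # fusepy

class LoggingFS(Operations):
    def __init__(self, root):
        self.root = root

    def _full(self, path):
        return os.path.join(self.root, path.lstrip("/"))

    def getattr(self, path, fh=None):
        st = os.lstat(self._full(path))
        return {k: getattr(st, k) for k in
                ("st_mode", "st_size", "st_uid", "st_gid",
                 "st_atime", "st_mtime", "st_ctime", "st_nlink")}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self._full(path))

    def open(self, path, flags):
        # The logging hook: record the access via the syslog utility.
        syslog.syslog(syslog.LOG_INFO, f"file access: {path}")
        return os.open(self._full(path), flags)

    def read(self, path, size, offset, fh):
        os.lseek(fh, offset, os.SEEK_SET)
        return os.read(fh, size)

    def release(self, path, fh):
        return os.close(fh)

if __name__ == "__main__":
    syslog.openlog("fuse-agent")
    FUSE(LoggingFS(sys.argv[1]), sys.argv[2], foreground=True)
```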

Bioequivalence of Shinil Cefadroxil Capsule to Duricef Capsule (cefadroxil 500 mg) (듀리세프 캅셀(세파드록실 500 mg)에 대한 신일 세파드록실 캅셀의 생물학적 동등성)

  • 유호정;최민구;김경식;정석재;심창구
    • Biomolecules & Therapeutics / v.10 no.4 / pp.303-308 / 2002
  • A bioequivalence study of Shinil Cefadroxil capsule (Shinil Pharm. Co. Ltd.) versus Duricef capsule (Bo Ryung Pharm. Co. Ltd.), each containing 500 mg of cefadroxil, was conducted. Twenty-three healthy Korean male subjects received each formulation at a dose of one capsule (500 mg as cefadroxil) in a 2 $\times$ 2 cross-over study, with a one-week washout period between the doses. Plasma concentrations of cefadroxil were monitored for 8 hr after each administration by an LC/UV method. The area under the plasma concentration-time curve up to 8 hr ($AUC_t$) was calculated by the linear trapezoidal method, and $C_{max}$ was taken from the plasma concentration-time data. An ANOVA test was conducted on the logarithmically transformed $AUC_t$ and $C_{max}$. The results showed no significant differences in $AUC_t$ and $C_{max}$ between the two formulations: the differences between the formulations in these log-transformed parameters were all less than 20% (-0.57% and 3.84% for $AUC_t$ and $C_{max}$, respectively). The 90% confidence intervals for the log-transformed data were within the acceptance range of log 0.8 to log 1.25 (log 0.94~log 1.04 and log 0.95~log 1.10 for $AUC_t$ and $C_{max}$, respectively). Based on the bioequivalence criteria of the KFDA guidelines, the two formulations of cefadroxil were concluded to be bioequivalent.
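
The acceptance test used here (90% CI of the log-transformed parameter within log 0.8 to log 1.25) can be sketched as below. This is a simplified paired analysis with made-up numbers; a real bioequivalence analysis fits the full crossover ANOVA with sequence, period, and subject effects.

```python
# Simplified 90% confidence interval for the test/reference ratio of a
# log-transformed PK parameter (AUC or Cmax). Illustrative only: the data
# are invented and the full crossover ANOVA is omitted.
import numpy as np
from scipy import stats

# Hypothetical per-subject AUC values (test and reference formulations).
auc_test = np.array([18.2, 21.5, 19.8, 23.1, 20.4, 17.9, 22.6, 19.1])
auc_ref  = np.array([19.0, 20.8, 20.5, 22.0, 19.7, 18.5, 23.2, 18.8])

diff = np.log(auc_test) - np.log(auc_ref)   # within-subject log differences
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)           # two-sided 90% CI

lo, hi = diff.mean() - t90 * se, diff.mean() + t90 * se
print(f"90% CI for the ratio: {np.exp(lo):.3f} .. {np.exp(hi):.3f}")
print("bioequivalent" if np.exp(lo) >= 0.8 and np.exp(hi) <= 1.25
      else "not shown bioequivalent")
```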

A Study on implementation model for security log analysis system using Big Data platform (빅데이터 플랫폼을 이용한 보안로그 분석 시스템 구현 모델 연구)

  • Han, Ki-Hyoung;Jeong, Hyung-Jong;Lee, Doog-Sik;Chae, Myung-Hui;Yoon, Cheol-Hee;Noh, Kyoo-Sung
    • Journal of Digital Convergence / v.12 no.8 / pp.351-359 / 2014
  • The log data generated by security equipment have so far been analyzed in an integrated fashion on an ESM (Enterprise Security Management) basis, but due to limitations in capacity and processing performance, ESM is not suited for big data processing, so an alternative technology based on a big data platform is necessary. A big data platform can achieve large-scale data collection, storage, processing, retrieval, analysis, and visualization by using the Hadoop Ecosystem. ESM technology is currently evolving toward SIEM (Security Information & Event Management), and implementing security technology in the SIEM fashion requires a big data platform that can handle the large volume of log data produced by current security devices. In this paper, we study a system implementation model for security log analysis based on the Hadoop Ecosystem big data platform.
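
A platform of this kind ultimately runs batch jobs over the collected logs. As a minimal illustration (not the paper's system), here is a Hadoop Streaming mapper/reducer pair in Python that counts security events per source IP, assuming the source IP is the first whitespace-separated field of each log line.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming mapper: emit "<source_ip>\t1" per log line.
# Assumes the source IP is the first whitespace-separated field (an
# illustrative log layout, not one prescribed by the paper).
import sys

for line in sys.stdin:
    fields = line.split()
    if fields:
        print(f"{fields[0]}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming reducer: sum the counts per source IP.
# Streaming delivers mapper output sorted by key, so equal keys are adjacent.
import sys

current_ip, count = None, 0
for line in sys.stdin:
    ip, _, n = line.rstrip("\n").partition("\t")
    if ip != current_ip and current_ip is not None:
        print(f"{current_ip}\t{count}")
        count = 0
    current_ip = ip
    count += int(n)
if current_ip is not None:
    print(f"{current_ip}\t{count}")
```

Such a pair is typically submitted with the Hadoop Streaming jar, e.g. `hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /logs -output /out` (paths illustrative).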

Design and Implementation of a Hadoop-based Efficient Security Log Analysis System (하둡 기반의 효율적인 보안로그 분석시스템 설계 및 구현)

  • Ahn, Kwang-Min;Lee, Jong-Yoon;Yang, Dong-Min;Lee, Bong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.8 / pp.1797-1804 / 2015
  • An integrated log management system can help predict security risks, contributes to improving the security level of an organization, and leads to an appropriate security policy. In this paper, we design and implement a Hadoop-based log analysis system using a distributed database model, which can store large amounts of data and reduce analysis time by automating the log collection procedure. In the proposed system, we use HBase in order to store a large amount of data efficiently in a scale-out fashion, and we propose a simple data storing scheme for analyzing the data using Hadoop-based regular expressions, which improves data processing speed compared to the existing system.
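
A minimal sketch of such a storing scheme, assuming the third-party happybase client, a running HBase Thrift server, and a pre-created table 'seclog' with column family 'f'; the log pattern and the row-key design (timestamp-prefixed so rows scan in time order) are assumptions, not the paper's scheme.

```python
# Parse a firewall-style log line with a regular expression and store it
# in HBase. Assumes the third-party `happybase` client, a running HBase
# Thrift server, and a pre-created table 'seclog' with column family 'f'.
import re
import happybase

# Hypothetical log format: "2015-08-01 12:00:01 DENY 10.0.0.5 203.0.113.9"
LOG_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<action>\w+) (?P<src>[\d.]+) (?P<dst>[\d.]+)")

def store(line: str, table) -> None:
    m = LOG_RE.match(line)
    if not m:
        return                      # skip lines that do not match
    # Row key: timestamp + source IP, so rows scan in time order.
    row = f"{m['ts']}|{m['src']}".encode()
    table.put(row, {b"f:action": m["action"].encode(),
                    b"f:dst": m["dst"].encode()})

conn = happybase.Connection("localhost")   # HBase Thrift endpoint
store("2015-08-01 12:00:01 DENY 10.0.0.5 203.0.113.9", conn.table("seclog"))
```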

Correlation Analysis of Event Logs for System Fault Detection (시스템 결함 분석을 위한 이벤트 로그 연관성에 관한 연구)

  • Park, Ju-Won;Kim, Eunhye;Yeom, Jaekeun;Kim, Sungho
    • Journal of Korean Society of Industrial and Systems Engineering / v.39 no.2 / pp.129-137 / 2016
  • To identify the cause of an error and maintain the health of a system, an administrator usually analyzes event log data, since it contains useful information for inferring the cause of the error. However, because today's systems are huge and complex, it is almost impossible for administrators to manually analyze event log files to identify the cause of an error. In particular, since OpenStack, which is widely used as a cloud management system, operates with various service modules linked across multiple servers, it is hard to access each node and analyze the event log messages of each service module when an error occurs. For this reason, we propose a novel message-based log analysis method that enables the administrator to find the cause of an error quickly. Specifically, the proposed method 1) consolidates event log data generated at the system level and the application service level, 2) clusters the consolidated data based on messages, and 3) analyzes the interrelations among message groups in order to promptly identify the cause of a system error. This study is significant in three aspects. First, the root cause of an error can be identified by collecting event logs at both the system level and the application service level and analyzing the interrelations among them. Second, administrators do not need to classify messages for training, since unsupervised learning is applied to the event log messages. Third, Dynamic Time Warping, an algorithm for measuring the similarity of dynamic patterns over time, increases the accuracy of analysis on patterns generated from a distributed system in which time synchronization is not exactly consistent.
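
The Dynamic Time Warping similarity mentioned in the third point can be computed as below: a minimal pure-NumPy implementation of the classic DTW distance between two event-frequency sequences. The input sequences are made up; the paper's feature construction is not specified here.

```python
# Classic Dynamic Time Warping distance between two 1-D sequences,
# e.g. per-interval event counts from two log sources whose clocks
# are not exactly synchronized.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of diagonal match, insertion, and deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return float(D[n, m])

# Same burst pattern shifted in time: DTW stays small where a
# pointwise (Euclidean) comparison would not.
x = np.array([0, 0, 5, 9, 5, 0, 0], dtype=float)
y = np.array([0, 5, 9, 5, 0, 0, 0], dtype=float)
print(dtw_distance(x, y))
```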

Analysis of Web Log Using Clementine Data Mining Solution (클레멘타인 데이터마이닝 솔루션을 이용한 웹 로그 분석)

  • Kim, Jae-Kyeong;Lee, Kun-Chang;Chung, Nam-Ho;Kwon, Soon-Jae;Cho, Yoon-Ho
    • Information Systems Review / v.4 no.1 / pp.47-67 / 2002
  • Since the mid-1990s, most firms using the web as a communication vehicle with customers have been keenly interested in the web log file, which contains many of the trails customers leave on the web, such as IP address, referrer address, cookie file, and duration time. An appropriate analysis of the web log file therefore leads to an understanding of customers' behavior on the web, and the analysis results can be used as effective marketing information for locating potential target customers. In this study, we introduce a web mining technique using Clementine from SPSS and analyze a set of real web log data from an Internet hub site. We also suggest a process for building various strategies based on the web mining results.
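
The fields mentioned (IP address, referrer, etc.) come from the web server's access log. A minimal sketch of extracting them in Python follows, assuming the common Apache "combined" log format rather than Clementine itself.

```python
# Extract basic fields from an Apache combined-format access log line.
# A pre-processing step like this feeds the kind of web mining the paper
# performs with Clementine; the regex assumes the combined log format.
import re

COMBINED_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

line = ('203.0.113.7 - - [10/Mar/2002:13:55:36 +0900] '
        '"GET /products/1 HTTP/1.0" 200 2326 '
        '"http://hub.example.com/" "Mozilla/4.0"')

m = COMBINED_RE.match(line)
if m:
    print(m["ip"], m["status"], m["referrer"])
```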

Testing of a discontinuity point in the log-variance function based on likelihood (가능도함수를 이용한 로그분산함수의 불연속점 검정)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society / v.20 no.1 / pp.1-9 / 2009
  • Consider a regression model whose variance function has a discontinuity (change) point at an unknown location. Yu and Jones (2004) proposed a local polynomial fit for estimating the log-variance function, which preserves the positivity of the variance. Using the local polynomial fit, Huh (2008) estimated the discontinuity point of the log-variance function. We propose a test for the existence of a discontinuity point in the log-variance function based on the jump size estimated in Huh (2008). The proposed method rests on the asymptotic distribution of the estimated jump size. Numerical studies demonstrate the performance of the method.
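
A rough sketch of the quantity being tested: estimate the log-variance just left and right of a candidate point from log squared residuals and compare the jump to its variability. This is a simplified kernel-average stand-in for the local polynomial fit of Yu and Jones (2004), with made-up data; it is not the test statistic of the paper.

```python
# Crude one-sided estimates of the log-variance around a candidate
# discontinuity point x0, using log squared residuals. Illustrative
# stand-in for the local polynomial fit; not the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)
n, x0, h = 500, 0.5, 0.1
x = rng.uniform(0, 1, n)
sigma = np.where(x < x0, 0.5, 1.5)           # true jump in the sd at x0
residuals = rng.normal(0, sigma)             # residuals from a mean fit

logsq = np.log(residuals**2 + 1e-12)         # log squared residuals
left  = logsq[(x0 - h <= x) & (x < x0)]
right = logsq[(x0 <= x) & (x < x0 + h)]

jump = right.mean() - left.mean()
se = np.sqrt(left.var(ddof=1)/len(left) + right.var(ddof=1)/len(right))
print(f"estimated jump {jump:.2f}, z = {jump/se:.2f}")  # large |z| => jump
```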

Bioequivalence Study of Enalapril Tablet to $Lenipril^{(R)}$ Tablet (레니프릴정(에날라프릴 10 mg)에 대한 에날라프릴정의 생물학적 동등성 평가)

  • Noh, Keum-Han;Bae, Kyoung-Jin;Kang, Won-Ku
    • Korean Journal of Clinical Pharmacy / v.19 no.1 / pp.61-64 / 2009
  • This study was designed to compare the rate and extent of absorption of two enalapril tablets (10 mg), a drug widely used for the treatment of hypertension. The bioequivalence study was conducted with a standard preparation as the reference and a generic as the test formulation in 24 healthy male volunteers. After an overnight fast, a single dose of the test or reference drug was given, with a washout period of 7 days. Heparinized blood samples were serially collected up to 10 hr, and plasma enalapril concentrations were quantified using a validated LC-MS/MS method. The data obtained for each subject were evaluated for $C_{max}$ and $AUC_{10hr}$ with respect to the 90% confidence intervals of the log-transformed data. The 90% confidence intervals were log(0.9384)~log(1.1160) for $AUC_{10hr}$ and log(0.9482)~log(1.1474) for $C_{max}$. Thus, we conclude that the test and reference formulations are bioequivalent in terms of the rate and extent of absorption.

A Study on Data Pre-filtering Methods for Fault Diagnosis (시스템 결함원인분석을 위한 데이터 로그 전처리 기법 연구)

  • Lee, Yang-Ji;Kim, Duck-Young;Hwang, Min-Soon;Cheong, Young-Soo
    • Korean Journal of Computational Design and Engineering / v.17 no.2 / pp.97-110 / 2012
  • High-performance sensors and modern data logging technology with real-time telemetry facilitate very precise system fault diagnosis. Fault detection, isolation, and identification are the typical steps of a fault diagnosis system for analyzing the root cause of failures. This systematic failure analysis provides not only useful clues for rectifying the abnormal behavior of a system, but also key information for redesigning the current system for retrofit. The main barriers to effective failure analysis are that (i) the gathered data (event) logs are generally too large, and (ii) they usually contain noise and redundant data that make precise analysis difficult. This paper therefore applies suitable pre-processing techniques for data reduction and feature extraction, and then converts the reduced data log into a new format of event sequence information. Finally, the event sequence information is decoded to investigate the correlation between specific event patterns and various system faults. The efficiency of the developed pre-filtering procedure is examined with the terminal box data log of a marine diesel engine.
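
The reduction-then-encoding step described here can be sketched in a few lines: collapse consecutive duplicate events (a simple form of redundancy removal) and encode the remaining events as a compact symbol sequence. The event names and the encoding scheme are illustrative assumptions, not the paper's format.

```python
# Reduce a raw event log by collapsing consecutive duplicates, then
# encode the result as a symbol sequence for pattern analysis.
# Event names and the encoding scheme are illustrative assumptions.
from itertools import groupby

raw_events = ["TEMP_HIGH", "TEMP_HIGH", "TEMP_HIGH",
              "PRESSURE_DROP", "TEMP_HIGH", "TEMP_HIGH", "SHUTDOWN"]

# 1) Data reduction: runs of identical events become a single event.
reduced = [event for event, _ in groupby(raw_events)]

# 2) Encoding: map each distinct event type to a short symbol.
symbols = {e: chr(ord("A") + i) for i, e in enumerate(dict.fromkeys(reduced))}
sequence = "".join(symbols[e] for e in reduced)

print(reduced)     # ['TEMP_HIGH', 'PRESSURE_DROP', 'TEMP_HIGH', 'SHUTDOWN']
print(sequence)    # 'ABAC'
```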