• Title/Abstract/Keywords: Log data analysis


로그형 평균값함수를 고려한 소프트웨어 신뢰성모형에 대한 비교연구 (A Comparative Study of Software Reliability Model Considering Log Type Mean Value Function)

  • 신현철;김희철
    • 디지털산업정보학회논문지
    • /
    • Vol. 10, No. 4
    • /
    • pp.19-27
    • /
    • 2014
  • Software reliability is an important issue in the software development process, and software process improvement helps deliver a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log-type mean value functions (the Musa-Okumoto and log-power models), which are efficient in software reliability applications. Model parameters were estimated by maximum likelihood using the bisection method, and model selection was based on the mean squared error (MSE) and the coefficient of determination ($R^2$). Failure behavior was analyzed using a real data set and compared across the log-type mean value functions, and the Laplace trend test was applied to verify the reliability of the data. The study confirms that the log-type models are also efficient in terms of reliability (the coefficient of determination is 70% or more) and can be used as alternatives to conventional models. Software developers should therefore use prior knowledge of the software to choose a growth model, which can help identify failure modes.
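The Musa-Okumoto mean value function and the MSE/$R^2$ selection criteria named in the abstract can be sketched as follows; the failure counts and parameter values below are hypothetical illustrations, not the paper's data or estimates (which it obtains by maximum likelihood with the bisection method):

```python
import numpy as np

def musa_okumoto(t, lam, theta):
    """Musa-Okumoto (logarithmic Poisson) mean value function:
    mu(t) = (1/theta) * ln(1 + lam*theta*t)."""
    return (1.0 / theta) * np.log(1.0 + lam * theta * t)

# Hypothetical cumulative failure counts observed at test times t.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
m_obs = np.array([8, 13, 17, 20, 22, 24], dtype=float)

# Illustrative parameter values (the paper estimates these from data).
lam, theta = 10.0, 0.05
m_fit = musa_okumoto(t, lam, theta)

# Model-selection criteria used in the paper.
mse = np.mean((m_obs - m_fit) ** 2)
ss_res = np.sum((m_obs - m_fit) ** 2)
ss_tot = np.sum((m_obs - m_obs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A smaller MSE and a larger $R^2$ indicate the better-fitting mean value function when comparing candidate models on the same failure data.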

강우자료의 분리효과 (Separation Effect Analysis for Rainfall Data)

  • 김양수;허준행
    • 물과 미래
    • /
    • Vol. 26, No. 4
    • /
    • pp.73-83
    • /
    • 1993
  • This study examines the separation effect in Korean rainfall data. Seven distribution functions were selected: the two-parameter lognormal, three-parameter lognormal, Type-I extreme value, two-parameter Gamma, three-parameter Gamma, Log-Pearson Type-III, and GEV distributions. Monte Carlo experiments were used to compare the mean and standard deviation of the skewness obtained from historical rainfall records with the mean and standard deviation of the skewness simulated from each distribution. The results show that Korean rainfall data exhibit the separation phenomenon for the six distributions other than the three-parameter Gamma distribution.

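The Monte Carlo procedure summarized above — simulating many records from a candidate distribution and collecting the mean and standard deviation of their skewness for comparison with the historical statistics — can be sketched as follows; the record count, record length, and two-parameter lognormal parameters are assumptions for illustration:

```python
import numpy as np

def skewness(x):
    """Sample coefficient of skewness."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std(ddof=0)
    return np.mean((x - m) ** 3) / s ** 3

rng = np.random.default_rng(0)

# Simulate many records from one candidate distribution (2-parameter
# lognormal here) and collect the skewness of each simulated record.
n_records, record_len = 500, 30  # hypothetical sizes
sim_skews = np.array([
    skewness(rng.lognormal(mean=3.0, sigma=0.5, size=record_len))
    for _ in range(n_records)
])

# The separation test compares these simulated statistics against the mean
# and standard deviation of skewness from the observed rainfall records.
sim_mean, sim_std = sim_skews.mean(), sim_skews.std(ddof=1)
```

A systematic gap between the simulated and observed skewness statistics is what signals the separation phenomenon for that candidate distribution.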

석탄층 검층자료의 정량적 해석법 연구 (Quantitative Analysis of Coal Logging Data)

  • 권병두;손세조;손정우
    • 자원환경지질
    • /
    • Vol. 21, No. 1
    • /
    • pp.85-96
    • /
    • 1988
  • Geophysical well logging was carried out at various coal fields to study the characteristic responses of domestic coal seams, and a computer program was developed for quantitative analysis of coal logging data. Most coal seams penetrated by the logged drill holes were thin, of poor quality, and severely altered, so the majority of the log data are inadequate for detailed quantitative analysis. The logs nevertheless show typical coal-seam signatures, but they should be interpreted with caution because certain log responses of domestic coals, mostly anthracite, are quite different from those of foreign coals, mostly bituminous. The developed computer program has proved effective for identifying coal seams and analyzing lithology, and is expected to be used successfully for coal quality analysis once more diverse log data of good quality are obtained.


불교란 점토 압밀시험 결과의 새로운 해석법 (A New Analysis Method of the Consolidation Test Data for an Undisturbed Clay)

  • 박종화;고우모또타쯔야
    • 한국농공학회지
    • /
    • Vol. 44, No. 6
    • /
    • pp.106-114
    • /
    • 2002
  • In this study, the results of a series of consolidation tests on undisturbed Ariake clay from Japan were analyzed by three methods: e-log p (e: void ratio, p: consolidation pressure), log e-log p, and n-log p (n: porosity), and the characteristics of each analysis method were examined. For undisturbed Ariake clay, the log e-log p and n-log p relationships appear as two groups of straight lines with different gradients, whereas both the elastic and plastic consolidation regions of the e-log p relationship are expressed as curves. In this paper, the porosity at consolidation yield $n_y$, the consolidation yield stress $p_y$, and the gradient of the plastic consolidation region $C_p$ were obtained by the log e-log p method, and $n_c$, $p_{cn}$, and $C_{cn}$ by the n-log p method. The meaning of and relationships among these values were studied, and the interrelationships among the compression indices $C_{cn}$, $C_p$, and $C_c$ obtained from the respective analysis methods were expressed as functions of the initial porosity $n_0$.
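The three plotting variables are linked by the identity n = e/(1+e), and each compression index is the slope of the corresponding curve against log p. A minimal sketch of both relations, using made-up consolidation readings rather than the Ariake test data:

```python
import math

def porosity_from_void_ratio(e):
    """n = e / (1 + e): converts void ratio to porosity."""
    return e / (1.0 + e)

def compression_index(p1, e1, p2, e2):
    """Slope of the e-log p line between two readings:
    Cc = -(e2 - e1) / (log10(p2) - log10(p1))."""
    return -(e2 - e1) / (math.log10(p2) - math.log10(p1))

# Two hypothetical readings on the plastic (normally consolidated) branch.
p1, e1 = 100.0, 1.60  # pressure in kPa, void ratio
p2, e2 = 200.0, 1.45

Cc = compression_index(p1, e1, p2, e2)
n1 = porosity_from_void_ratio(e1)
```

The same slope construction on a log e axis or an n axis gives $C_p$ and $C_{cn}$, which is why the indices can be interrelated through the initial porosity.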

XML기반 Windows Event Log Forensic 도구 설계 및 구현 (XML-based Windows Event Log Forensic tool design and implementation)

  • 김종민;이동휘
    • 융합보안논문지
    • /
    • Vol. 20, No. 5
    • /
    • pp.27-32
    • /
    • 2020
  • The Windows Event Log records the overall behavior of the system, and its files store data from which various user actions and anomalous signs can be detected. However, because an event log entry is generated for every action, analyzing these logs takes considerable time. This study therefore designed and implemented an XML-based event log analysis tool built on the key event log list in "Spotting the Adversary with Windows Event Log Monitoring" published by the NSA.
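A minimal sketch of this kind of filtering — matching exported Event Log XML against a watch-list of event IDs (4625: failed logon; 1102: audit log cleared; both are among the events the NSA document recommends monitoring) — assuming events exported in the standard Windows Event XML schema; the sample export below is hand-made:

```python
import xml.etree.ElementTree as ET

# Subset of event IDs worth monitoring, per "Spotting the Adversary ...".
WATCH_IDS = {"1102", "4625"}

NS = {"ev": "http://schemas.microsoft.com/win/2004/08/events/event"}

def suspicious_events(xml_text):
    """Return (EventID, TimeCreated) pairs for watched events."""
    root = ET.fromstring(xml_text)
    hits = []
    for event in root.findall("ev:Event", NS):
        eid = event.findtext("ev:System/ev:EventID", namespaces=NS)
        if eid in WATCH_IDS:
            ts = event.find("ev:System/ev:TimeCreated", NS).get("SystemTime")
            hits.append((eid, ts))
    return hits

# Tiny hand-made export: one failed logon and one unwatched event.
SAMPLE = """<Events xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <Event><System><EventID>4625</EventID>
    <TimeCreated SystemTime="2020-01-01T00:00:00Z"/></System></Event>
  <Event><System><EventID>4688</EventID>
    <TimeCreated SystemTime="2020-01-01T00:00:05Z"/></System></Event>
</Events>"""
```

Filtering down to a short watch-list is what cuts the analysis time: only the few event types with forensic value survive to the reporting stage.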

방화벽 로그를 이용한 침입탐지기법 연구 (A Study on the Intrusion Detection Method using Firewall Log)

  • 윤성종;김정호
    • Journal of Information Technology Applications and Management
    • /
    • Vol. 13, No. 4
    • /
    • pp.141-153
    • /
    • 2006
  • With the spread of high-speed Internet services, the importance of security is increasingly emphasized, and a flawless security solution is needed to block information leakage when data are sent or received. Large enterprises and public organizations can respond to this problem, but small organizations with limited staff and capital cannot; they therefore need to raise their level of information security by improving their existing security systems without additional expense. No hacking can be carried out without passing the intrusion blocking system installed at the very front of the network, so if the blocking logs are managed effectively, hacking attempts can be recognized at the pre-detection stage. This paper supports information security managers in performing blocking-log analysis effectively, and provides a blocking-log analysis module that reports hacking attacks by analyzing the blocking logs.

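The pre-detection idea above — spotting repeated blocked connections from one source before a successful break-in — can be sketched by counting deny entries per source IP in a firewall log. The log layout below (ACTION SRC DST PORT) and the threshold are assumptions for illustration; real firewall log formats vary:

```python
from collections import Counter

def flag_sources(log_lines, threshold=3):
    """Return source IPs with at least `threshold` DENY entries."""
    denies = Counter(
        line.split()[1]              # field 1: source IP (assumed layout)
        for line in log_lines
        if line.startswith("DENY")
    )
    return {ip for ip, n in denies.items() if n >= threshold}

# Hypothetical blocking-log excerpt: ACTION SRC DST PORT
LOG = [
    "DENY 10.0.0.9 192.168.1.5 22",
    "DENY 10.0.0.9 192.168.1.5 23",
    "DENY 10.0.0.9 192.168.1.5 445",
    "ALLOW 10.0.0.7 192.168.1.5 443",
    "DENY 10.0.0.8 192.168.1.5 80",
]
```

A burst of denies against several ports from one source, as in the excerpt, is the classic port-scan signature that pre-detection aims to surface.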

남한지역 검층밀도 자료의 특성 분석 (Frequency Distribution Characteristics of Formation Density Derived from Log and Core Data throughout the Southern Korean Peninsula)

  • 김영화;김기환;김종만;황세호
    • 지질공학
    • /
    • Vol. 25, No. 2
    • /
    • pp.281-290
    • /
    • 2015
  • Density log data acquired throughout southern Korea were collected and compared with core density data. The comparison first revealed abnormally low log densities, which proved to be associated with the abnormally low values in data from weak-source density logging. All results — the large density difference between standard-source and weak-source data, as well as the shape, mean, and standard deviation of the distribution curves obtained when correlating standard-source log density with core density — indicate a quality problem in the weak-source density data. This quality problem is attributed to the source characteristics of the weak-source density sonde in determining log density; it is concluded that the weak-source density data acquired to date were obtained without satisfying the minimum conditions required for accuracy. Finally, the density distribution characteristics of the major formations in southern Korea were determined using the core data and the standard-source data.

MLE for Incomplete Contingency Tables with Lagrangian Multiplier

  • Kang, Shin-Soo
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 17, No. 3
    • /
    • pp.919-925
    • /
    • 2006
  • The maximum likelihood estimate (MLE) is obtained from the partial log-likelihood function for the cell probabilities of two-way incomplete contingency tables proposed by Chen and Fienberg (1974). The partial log-likelihood function is modified by adding a Lagrangian multiplier so that constraints can be incorporated. Variances of the MLEs of the population proportions are derived from the matrix of second derivatives of the log-likelihood with respect to the cell probabilities. Simulation results, when data are missing at random (MAR), reveal that complete-case (CC) analysis produces biased estimates of the joint probabilities and is less efficient than either MLE or multiple imputation (MI); MLE and MI both give consistent results under MAR. MLE provides more efficient estimates of the population proportions than either MI based on data augmentation or CC analysis, and the standard errors of the MLE from the proposed Lagrangian-multiplier method are valid and vary less than the standard errors from MI and CC.

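In outline, with complete-case counts $n_{ij}$ and supplementary margin counts $r_{i\cdot}$, $r_{\cdot j}$ for records classified on only one variable (this notation is assumed here, following the Chen-Fienberg setup), the Lagrangian-augmented partial log-likelihood for the constraint $\sum_{i,j} p_{ij} = 1$ has the form:

```latex
\ell(p,\lambda)
  = \sum_{i,j} n_{ij}\log p_{ij}
  + \sum_{i} r_{i\cdot}\log p_{i\cdot}
  + \sum_{j} r_{\cdot j}\log p_{\cdot j}
  + \lambda\Bigl(1 - \sum_{i,j} p_{ij}\Bigr),
\qquad
p_{i\cdot}=\sum_{j} p_{ij},\quad p_{\cdot j}=\sum_{i} p_{ij}.
```

Setting $\partial\ell/\partial p_{ij}=0$ together with $\partial\ell/\partial\lambda=0$ yields the constrained MLE, and the variance estimates follow from the matrix of second derivatives of $\ell$ with respect to the cell probabilities, as described in the abstract.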

A Log Analysis System with REST Web Services for Desktop Grids and its Application to Resource Group-based Task Scheduling

  • Gil, Joon-Min;Kim, Mi-Hye
    • Journal of Information Processing Systems
    • /
    • Vol. 7, No. 4
    • /
    • pp.707-716
    • /
    • 2011
  • It is important that desktop grids be able to deal aggressively with the dynamic properties that arise from the volatility and heterogeneity of resources; task scheduling should therefore positively take into account the execution behavior characterized by each individual resource. In this paper, we implement a log analysis system with REST web services that can analyze execution behavior using the actual log data of desktop grid systems. To verify the log analysis system, we conducted simulations and showed that resource group-based task scheduling, based on the analysis of execution behavior, offers a faster turnaround time than the existing approach even when few resources are used.
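A minimal sketch of resource group-based scheduling of the kind described — grouping desktop resources by availability and task-success statistics extracted from execution logs, then preferring the most reliable group. The grouping thresholds and statistics are assumptions for illustration, not the paper's actual criteria:

```python
def group_resources(stats):
    """Group resources by log-derived behavior.
    stats: {resource_id: (availability, success_rate)}."""
    groups = {"reliable": [], "volatile": []}
    for rid, (avail, succ) in stats.items():
        key = "reliable" if avail >= 0.8 and succ >= 0.8 else "volatile"
        groups[key].append(rid)
    return groups

def pick_resource(groups):
    """Schedule on a reliable resource first, falling back to volatile ones."""
    for key in ("reliable", "volatile"):
        if groups[key]:
            return groups[key][0]
    return None

# Hypothetical per-resource (availability, success_rate) from log analysis.
STATS = {"r1": (0.95, 0.90), "r2": (0.40, 0.85), "r3": (0.85, 0.60)}
```

Steering tasks toward the reliable group reduces re-executions caused by volatile hosts, which is what shortens turnaround time in the simulations.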

Auto Configuration Module for Logstash in Elasticsearch Ecosystem

  • Ahmed, Hammad;Park, Yoosang;Choi, Jongsun;Choi, Jaeyoung
    • 한국정보처리학회:학술대회논문집
    • /
    • Proceedings of the 2018 Fall Conference of the Korea Information Processing Society
    • /
    • pp.39-42
    • /
    • 2018
  • Log analysis and monitoring are significantly important in most systems. Log management is of core importance in distributed applications, cloud-based applications, and applications designed for big data, which produce large numbers of log files containing essential information. This information can be used for log analytics to understand the relevant patterns in varying log data, but tools are needed for parsing, storing, and visualizing the log information. "Elasticsearch, Logstash, and Kibana" (the ELK Stack) is one of the most popular tool sets for log management. For the ingestion of log files, configuration files are key, as they cover all the services needed to input, process, and output the log files. However, creating configuration files is often complicated and time-consuming, as it requires domain expertise and manual work. In this paper, an auto-configuration module for Logstash is proposed which aims to auto-generate the Logstash configuration files for the corresponding log files in less time, and thereby to improve the overall efficiency of the log management system.
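For reference, a Logstash pipeline configuration of the kind such a module would generate has three sections — input, filter, and output. A minimal hand-written example is shown below; the file path, grok pattern, and index name are illustrative, not output of the proposed module:

```conf
input {
  file {
    path => "/var/log/app/*.log"        # illustrative path
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs"                 # illustrative index name
  }
}
```

Auto-generating this structure from the shape of the log files removes the manual, expertise-heavy step the abstract identifies as the bottleneck.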