• Title/Summary/Keyword: Log data

The Comparative Study of NHPP Software Reliability Model Exponential and Log Shaped Type Hazard Function from the Perspective of Learning Effects (지수형과 로그형 위험함수 학습효과에 근거한 NHPP 소프트웨어 신뢰성장모형에 관한 비교연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.2
    • /
    • pp.1-10
    • /
    • 2012
  • In this study, software reliability during testing is examined from the perspective of learning effects, in which test managers and test tools become more effective as testing experience accumulates, using NHPP software reliability models. Finite-failure nonhomogeneous Poisson process models are presented, with exponential and log-shaped hazard functions applied as the life distribution. The models compare an autonomous error-detection factor, reflecting errors found automatically, with a learning factor that represents the prior experience the testing manager brings to locating errors. As a result, models in which the learning factor exceeds the autonomous error-detection factor were generally confirmed to be more efficient. Failure data were analyzed using times between failures, parameters were estimated by the maximum likelihood method, and after checking the data through trend analysis, model selection was carried out using the mean squared error and the coefficient of determination.
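
As a rough, non-authoritative illustration of the estimation step described in the abstract above (not the paper's own code), the sketch below fits a finite-failure NHPP model with an exponential-type mean value function m(t) = a(1 - e^{-bt}) to a small set of made-up failure times by maximum likelihood, then reports the mean squared error used as one of the selection criteria; all data values and starting points are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cumulative failure times (hours); replace with real data.
failure_times = np.array([5.0, 12.0, 21.0, 33.0, 48.0, 70.0, 98.0, 135.0])
T = failure_times[-1]  # end of the observation period

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # NHPP with exponential mean value function m(t) = a * (1 - exp(-b t))
    # and intensity lambda(t) = a * b * exp(-b t).
    log_intensity = np.log(a * b) - b * failure_times
    return -(log_intensity.sum() - a * (1 - np.exp(-b * T)))

result = minimize(neg_log_likelihood, x0=[10.0, 0.01], method="Nelder-Mead")
a_hat, b_hat = result.x
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.5f}")

# Mean squared error between observed cumulative failures and m(t_i),
# one of the selection criteria mentioned in the abstract.
m_t = a_hat * (1 - np.exp(-b_hat * failure_times))
observed = np.arange(1, len(failure_times) + 1)
mse = np.mean((observed - m_t) ** 2)
print(f"MSE = {mse:.3f}")
```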

Estimations of the student numbers by nonlinear regression model (비선형 회귀모형을 이용한 학년별 학생수 추계)

  • Yoon, Yong-Hwa;Kim, Jong-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.1
    • /
    • pp.71-77
    • /
    • 2012
  • This paper introduces projection methods based on nonlinear regression models. To predict student numbers, a log model and an involution (power) model, both kinds of trend-extrapolation methods, are used. Empirical evidence shows that the projection by the log model is better than that by the involution model, judged by the confidence interval estimates and the coefficients of determination.
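
Purely as an illustration of the trend-extrapolation idea (not taken from the paper), the following sketch fits a log model y = a + b*ln(t) and a power ("involution") model y = a*t^b to made-up yearly student counts with scipy's curve_fit and compares their coefficients of determination.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical yearly student counts (made-up data for illustration).
year_index = np.arange(1, 11)  # t = 1..10
students = np.array([620, 605, 588, 570, 560, 548, 541, 533, 527, 522])

def log_model(t, a, b):
    return a + b * np.log(t)

def power_model(t, a, b):  # "involution" model, y = a * t**b
    return a * t**b

for name, model in [("log", log_model), ("power", power_model)]:
    p0 = [600.0, -10.0] if name == "log" else [600.0, -0.05]
    params, _ = curve_fit(model, year_index, students, p0=p0)
    fitted = model(year_index, *params)
    ss_res = np.sum((students - fitted) ** 2)
    ss_tot = np.sum((students - students.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    print(f"{name} model: params={params.round(3)}, R^2={r2:.4f}")
```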

A System for Mining Traversal Patterns from Web Log Files (웹 로그 화일에서 순회 패턴 탐사를 위한 시스템)

  • 박종수;윤지영
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10a
    • /
    • pp.4-6
    • /
    • 2001
  • In this paper, we designed a system that can mine users' traversal patterns from web log files. The system cleans the input data, i.e., the transactions of a web log file, and finds traversal patterns from the transactions, each of which consists of one user's accessed pages. The resulting traversal patterns are displayed on a web browser, so that a system manager or data miner can analyze the patterns in visual form. We implemented the system on an IBM personal computer running Windows 2000 using MS Visual C++, and used MS SQL Server 2000 to store the intermediate files and the traversal patterns, which can easily be applied to a system for knowledge discovery in databases.
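
The system in the paper is written in MS Visual C++ with SQL Server; as a language-agnostic illustration only, here is a small Python sketch of the traversal-pattern idea, counting frequent consecutive page pairs in per-session page sequences (the paths and the support threshold are invented).

```python
from collections import Counter
from itertools import islice

# Hypothetical cleaned web-log transactions: one ordered page list per user session.
sessions = [
    ["/index", "/products", "/cart", "/checkout"],
    ["/index", "/products", "/reviews"],
    ["/index", "/cart", "/checkout"],
]

def ngrams(seq, n):
    """Yield consecutive page subsequences of length n."""
    return zip(*(islice(seq, i, None) for i in range(n)))

min_support = 2
pattern_counts = Counter()
for pages in sessions:
    # Count each distinct consecutive pair once per session.
    pattern_counts.update(set(ngrams(pages, 2)))

frequent = {p: c for p, c in pattern_counts.items() if c >= min_support}
for pattern, count in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(" -> ".join(pattern), f"(support={count})")
```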

Creep Life Prediction of Pure Ti by Monkman-Grant Method (Monkman-Grant법에 의한 순수 Ti의 크리프 수명예측)

  • Won, Bo-Youp;Jeong, Soon-Uk
    • Proceedings of the KSME Conference
    • /
    • 2003.04a
    • /
    • pp.352-357
    • /
    • 2003
  • Creep tests on pure titanium were carried out under constant load at $600^{\circ}C$, $650^{\circ}C$ and $700^{\circ}C$. The material constants needed to predict creep life were obtained from the experimental creep data, and the applicability of the Monkman-Grant (M-G) and modified M-G relationships was discussed. The log-log plot of the M-G relationship between the rupture time ($t_r$) and the minimum creep rate (${\varepsilon}_m$) was found to depend on the test temperature: the slope m was 2.75 at $600^{\circ}C$ and 1.92 at $700^{\circ}C$. However, the log-log plot of the modified M-G relationship between $t_r/\varepsilon_r$ and $\varepsilon_m$ was independent of stress and temperature; the slope m' was almost 3.90 for all the data. Thus, the modified M-G relationship can be used for creep life prediction of pure titanium more reliably than the original M-G relationship. The constant slope regardless of temperature and applied stress in the modified relationship is attributed to intergranular fracture caused by the growth of wedge-type cavities.
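
As a minimal sketch of the Monkman-Grant fitting step (illustrative data only, not the paper's measurements), the following snippet estimates the M-G exponent m and constant C from a log-log fit of rupture time against minimum creep rate and then uses them for a life prediction.

```python
import numpy as np

# Hypothetical (minimum creep rate [1/h], rupture time [h]) pairs -- illustrative only.
eps_min = np.array([1.0e-5, 3.0e-5, 1.0e-4, 3.0e-4])
t_rupture = np.array([4.0e3, 1.1e3, 3.2e2, 9.0e1])

# Monkman-Grant: t_r * eps_min**m = C  ->  log10(t_r) = log10(C) - m * log10(eps_min)
slope, intercept = np.polyfit(np.log10(eps_min), np.log10(t_rupture), 1)
m = -slope
C = 10 ** intercept
print(f"M-G exponent m = {m:.2f}, constant C = {C:.3g}")

# Creep life prediction for a new minimum creep rate:
eps_new = 5.0e-5
t_pred = C * eps_new ** (-m)
print(f"predicted rupture time at eps_min={eps_new:.1e}: {t_pred:.0f} h")
```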

Bayesian and maximum likelihood estimations from exponentiated log-logistic distribution based on progressive type-II censoring under balanced loss functions

  • Chung, Younshik;Oh, Yeongju
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.425-445
    • /
    • 2021
  • A generalization of the log-logistic (LL) distribution called the exponentiated log-logistic (ELL) distribution, constructed along the lines of the exponentiated Weibull distribution, is considered. In this paper, based on progressive type-II censored samples, we derive the maximum likelihood estimators and Bayes estimators for the three parameters, the survival function, and the hazard function of the ELL distribution. Under the balanced squared error loss (BSEL) and balanced linex loss (BLEL) functions, the corresponding Bayes estimators are obtained using Lindley's approximation (see Jung and Chung, 2018; Lindley, 1980), the Tierney-Kadane approximation (see Tierney and Kadane, 1986), and Markov chain Monte Carlo methods (see Hastings, 1970; Gelfand and Smith, 1990). To check the convergence of the MCMC chains, the Gelman and Rubin diagnostic (see Gelman and Rubin, 1992; Brooks and Gelman, 1997) was used. On the basis of their risks, the performances of the Bayes estimators are compared with those of the maximum likelihood estimators in simulation studies. The results support the conclusion that the ELL distribution is an effective distribution for modeling survival data, and that Bayes estimators under various loss functions are useful for many estimation problems.
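
As a simplified, hedged illustration of the maximum likelihood step only, the sketch below assumes complete (uncensored) samples and one common parameterization of the ELL distribution with CDF [x^beta / (x^beta + alpha^beta)]^theta; the paper itself works with progressive type-II censored samples, which would change the likelihood, and the data here are simulated stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Hypothetical complete (uncensored) lifetimes; the paper uses progressive
# type-II censored samples, which would change the likelihood below.
data = rng.weibull(1.5, size=200) * 10.0

def neg_log_lik(params):
    alpha, beta, theta = params          # scale, shape, exponentiation parameters
    if min(alpha, beta, theta) <= 0:
        return np.inf
    z = (data / alpha) ** beta
    F = z / (1.0 + z)                    # log-logistic CDF
    log_f = (np.log(beta) - np.log(alpha)
             + (beta - 1) * (np.log(data) - np.log(alpha))
             - 2 * np.log1p(z))          # log of the log-logistic pdf
    # ELL pdf: theta * F(x)**(theta - 1) * f(x)
    return -np.sum(np.log(theta) + (theta - 1) * np.log(F) + log_f)

res = minimize(neg_log_lik, x0=[5.0, 1.5, 1.0], method="Nelder-Mead")
print("MLEs (alpha, beta, theta):", res.x.round(3))
```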

HBase based Business Process Event Log Schema Design of Hadoop Framework

  • Ham, Seonghun;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.49-55
    • /
    • 2019
  • Organizations design and operate business process models to achieve their goals efficiently and systematically. With the advancement of IT, the number of tasks in which computer systems can participate has grown, and business processes have become huge and complicated, producing more complex and finely subdivided process flows. Process instances, which contain workcases and events, therefore carry ever more data. These event logs are an essential resource for process mining and are used directly in process model discovery, analysis, and improvement. As event logs grow bigger and broader, managing them with conventional row-level programs over flat files or through a relational database runs into problems such as capacity management and I/O load. In this paper, observing these management limits as event logs become big data, we design and apply a schema for archiving and analyzing large event logs with Hadoop, an open-source distributed file system framework, and HBase, a NoSQL database system built on it.
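
As one possible sketch of such a schema (not the paper's actual design), the snippet below uses the third-party happybase client, which the paper does not mention, to create an HBase event-log table with a composite row key so that the events of one process instance can be scanned in time order; table names, column families, and values are invented.

```python
import happybase  # third-party HBase Thrift client, assumed here; not named in the paper

# Connect to a local HBase Thrift server (hypothetical address).
connection = happybase.Connection("localhost", port=9090)

# One possible column-family layout for a process event-log table:
# 'e' holds event attributes, 'c' holds workcase/context attributes.
if b"bp_event_log" not in connection.tables():
    connection.create_table("bp_event_log", {"e": dict(), "c": dict()})

table = connection.table("bp_event_log")

# Row-key sketch: processId + timestamp + eventId, so that the events of one
# process instance are stored contiguously and scanned in time order.
row_key = b"proc0042#20190501T101500#evt0007"
table.put(row_key, {
    b"e:activity":  b"ApproveOrder",
    b"e:performer": b"alice",
    b"e:timestamp": b"2019-05-01T10:15:00",
    b"c:workcase":  b"wc-0042",
})

# Range scan over all events of one process instance.
for key, data in table.scan(row_prefix=b"proc0042#"):
    print(key, data)
```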

Exploring Factors that Affect the Usage of KMS : Using Log Data Analysis (KMS 활성화에 영향을 미치는 요인에 관한 연구 : 로그 데이터 분석을 이용하여)

  • Baek, Seung-Ik;Lim, Gyoo-Gun;Lee, Dae-Chul;Lee, Jin-Suk
    • Knowledge Management Research
    • /
    • v.9 no.3
    • /
    • pp.21-42
    • /
    • 2008
  • As many companies have recognized the importance of Knowledge Management (KM), they have invested substantial resources in developing and deploying Knowledge Management Systems (KMS) to organize and share knowledge. When they implemented KMS in their organizations, most had high expectations at the beginning. However, as time passed, usage rapidly declined, and there have been many attempts to increase it. Many research works have tried to find solutions from the viewpoints of users and organizations rather than from the actual usage data itself. To assess the usage level of KMS, they have typically relied on users' attitudes toward KMS, assuming that these attitudes are strongly related to actual use. The purpose of this study is to assess the impacts of user, organizational, and job characteristics on the satisfaction and usage levels of KMS. Unlike other studies, this study explores the factors that affect KMS usage in organizations by using actual KMS log data as well as users' attitudes.
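
As a toy illustration of deriving the usage level from log data rather than from attitudes alone (not the paper's analysis), the sketch below aggregates a hypothetical KMS access log per user, joins it with hypothetical user characteristics, and fits a simple regression; all variable names and data are invented.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-event KMS access log (one row per user action).
log = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", "u3", "u3", "u4"],
    "action":  ["read", "post", "read", "read", "read", "post", "read"],
})

# Derive each user's actual usage level from the log instead of from self-reports.
usage = log.groupby("user_id").size().rename("usage_count").reset_index()

# Hypothetical user/job characteristics (e.g., from HR data or a short survey).
users = pd.DataFrame({
    "user_id":   ["u1", "u2", "u3", "u4"],
    "tenure":    [3, 7, 1, 5],     # years with the company
    "job_level": [2, 4, 1, 3],
})

df = users.merge(usage, on="user_id", how="left").fillna({"usage_count": 0})

# Simple regression of log-derived usage on user characteristics.
model = smf.ols("usage_count ~ tenure + job_level", data=df).fit()
print(model.params)
```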

The Analysis of Individual Learning Status on Web-Based Instruction (웹기반 교육에서 학습자별 학습현황 분석에 관한 연구)

  • Shin, Ji-Yeun;Jeong, Ok-Ran;Cho, Dong-Sub
    • The Journal of Korean Association of Computer Education
    • /
    • v.6 no.2
    • /
    • pp.107-120
    • /
    • 2003
  • In web-based instruction, evaluation of the learning process concerns each individual student's learning activity, so it requires data on learning time, patterns, participation, and environment for specific learning contents. The purpose of this paper is to reflect the results of analyzing each student's learning status in achievement evaluation, using web log mining, which is well suited to the evaluation of the learning process, an open issue in web-based instruction. The contents and results of this study are as follows. First, the items relevant to learning-status analysis are determined and web log data preprocessing is performed. Second, on the basis of the web log data, a student database is constructed and learning status is analyzed using data mining techniques.
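
Purely as an illustration of the log-preprocessing and learning-status aggregation described above (not the paper's implementation), the following sketch derives per-student page views, distinct contents, and total time on site from a hypothetical cleaned course web log.

```python
import pandas as pd

# Hypothetical pre-cleaned web-log records for a course site:
# one row per page request, already joined to a student id.
log = pd.DataFrame({
    "student_id": ["s01", "s01", "s01", "s02", "s02"],
    "page":       ["ch1/intro", "ch1/quiz", "ch2/intro", "ch1/intro", "ch1/quiz"],
    "timestamp":  pd.to_datetime([
        "2003-03-02 10:00", "2003-03-02 10:12", "2003-03-02 10:30",
        "2003-03-03 21:05", "2003-03-03 21:40",
    ]),
})

# Per-student learning-status indicators: pages visited, distinct contents,
# and total time on site (last request minus first request per day, summed).
log["date"] = log["timestamp"].dt.date
time_per_day = (log.groupby(["student_id", "date"])["timestamp"]
                   .agg(lambda s: s.max() - s.min()))
status = pd.DataFrame({
    "page_views":     log.groupby("student_id").size(),
    "distinct_pages": log.groupby("student_id")["page"].nunique(),
    "total_time":     time_per_day.groupby("student_id").sum(),
})
print(status)
```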

Analysis of Web Log for e-CRM on B2B of the Make-To-Order Company (수주생산기업 B2B에서 e-CRM을 위한 웹 로그 분석)

  • Go, Jae-Moon;Seo, Jun-Yong;Kim, Woon-Sik
    • IE interfaces
    • /
    • v.18 no.2
    • /
    • pp.205-220
    • /
    • 2005
  • This study presents a web log analysis model for e-CRM, which combines online customers' purchasing-pattern data with inter-company transaction data in the B2B environment of a make-to-order company. With this model, customer evaluation and customer segmentation become possible, and demand can be forecast from periodic product sales records. The purchasing rate per product, the purchasing-intention rate, and the purchasing rate per company can also be used as basic data for future order-winning strategies. These measures are used to evaluate the business strategy, product quality capability, customer demands, customer benefits, and customer loyalty, as well as customers' purchasing patterns, response analysis, churn rate, profitability, and needs. This makes it possible to satisfy various customer demands and thereby increase the company's profits. A case study of the 'H' company, which operates in a make-to-order manufacturing environment, is presented to verify the effect of the proposed system.
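
As a small, hypothetical sketch of combining web-log views with transaction data (not the paper's model), the snippet below computes a per-product purchasing rate, the kind of basic measure the abstract mentions; companies, products, and amounts are invented.

```python
import pandas as pd

# Hypothetical web-log views of product pages by B2B customers, and order transactions.
views = pd.DataFrame({
    "company": ["A", "A", "B", "B", "C", "C", "C"],
    "product": ["P1", "P2", "P1", "P1", "P2", "P2", "P3"],
})
orders = pd.DataFrame({
    "company": ["A", "B", "C"],
    "product": ["P1", "P1", "P2"],
    "amount":  [120, 80, 200],
})

# Purchasing rate per product: companies that ordered it / companies that viewed it.
viewers = views.groupby("product")["company"].nunique().rename("viewing_companies")
buyers = orders.groupby("product")["company"].nunique().rename("buying_companies")
per_product = pd.concat([viewers, buyers], axis=1).fillna(0)
per_product["purchase_rate"] = (per_product["buying_companies"]
                                / per_product["viewing_companies"])
print(per_product)
```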

Personalized Product Recommendation Method for Analyzing User Behavior Using DeepFM

  • Xu, Jianqiang;Hu, Zhujiao;Zou, Junzhong
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.369-384
    • /
    • 2021
  • In a personalized product recommendation system, when the amount of log data is large or sparse, the accuracy of the recommendation model is greatly affected. To solve this problem, a personalized product recommendation method that uses a deep factorization machine (DeepFM) to analyze user behavior is proposed. Firstly, the K-means clustering algorithm is used to cluster the original log data by similarity to reduce the data dimension. Then, through the DeepFM parameter-sharing strategy, the relationships between low- and high-order feature combinations are learned from the log data, and a click-through rate prediction model is constructed. Finally, based on the predicted click-through rate, products are recommended to users in sequence and the results are fed back. The area under the curve (AUC) and Logloss of the proposed method are 0.8834 and 0.0253, respectively, on the Criteo dataset, and 0.7836 and 0.0348, respectively, on the KDD2012 Cup dataset. Compared with other recent recommendation methods, the proposed method achieves a better recommendation effect.
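
As a rough structural sketch only (random weights, made-up data, and no training loop, so not the paper's method), the snippet below shows the two steps the abstract names: K-means clustering of log-derived feature vectors with scikit-learn, followed by a minimal factorization-machine score, the "FM" half of DeepFM; a real DeepFM would add a deep network sharing the same feature embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((500, 12))            # hypothetical user-behavior feature vectors from the log

# Step 1 (as in the abstract): K-means clustering to group similar log records
# and reduce the effective data volume.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
centers = kmeans.cluster_centers_    # cluster centroids used as condensed inputs

# Step 2: a minimal factorization-machine score (the "FM" component of DeepFM);
# weights here are random placeholders, not trained parameters.
n_features, k = centers.shape[1], 4
w0, w = 0.0, rng.normal(0, 0.01, n_features)        # bias and first-order weights
V = rng.normal(0, 0.01, (n_features, k))            # latent factor matrix

def fm_score(x):
    linear = w0 + x @ w
    # Second-order interactions: 0.5 * sum_f [ (x V)_f^2 - (x^2)(V^2)_f ]
    pairwise = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + pairwise

ctr_logits = np.array([fm_score(c) for c in centers])
print("predicted CTR (sigmoid of FM score):", (1 / (1 + np.exp(-ctr_logits))).round(4))
```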