• Title/Summary/Keyword: Log data


Development of User Interface and Blog based on Probabilistic Model for Life Log Sharing and Management (라이프 로그 공유 및 관리를 위한 확률모델 기반 사용자 인터페이스 및 블로그 개발)

  • Lee, Jin-Hyung;Noh, Hyun-Yong;Oh, Se-Won;Hwang, Keum-Sung;Cho, Sung-Bae
    • Journal of KIISE:Computing Practices and Letters / v.15 no.5 / pp.380-384 / 2009
  • The log data collected on a mobile device contain diverse and continuous information about the user. From the log data, the user's location, pictures, and the functions and services in use can be obtained, and there has been growing research interest in inferring contexts and understanding the everyday life of mobile users from such data. In this paper, we study methods for real-time collection of log data from mobile devices, analysis of the data, map-based visualization, and effective management of personal everyday-life information. We have also developed an application for sharing the contexts; it infers personal contexts with a Bayesian network probabilistic model. In the experiments, we confirm the usability of the visualization and information-sharing functions on real-world log data.
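
The paper's Bayesian network is not reproduced in the abstract; as a toy illustration of probabilistic context inference from log evidence, here is a naive-Bayes sketch in which every prior and likelihood value is invented for the example:

```python
# Toy naive-Bayes context inference from mobile log evidence.
# All priors and likelihoods below are invented for illustration only.
PRIOR = {"working": 0.7, "commuting": 0.3}
LIKELIHOOD = {
    ("loc=office", "working"): 0.9, ("loc=office", "commuting"): 0.1,
    ("app=navigation", "working"): 0.05, ("app=navigation", "commuting"): 0.6,
}

def infer_context(evidence):
    """Return P(context | evidence), treating evidence items as
    conditionally independent given the context (naive Bayes)."""
    scores = {c: PRIOR[c] for c in PRIOR}
    for e in evidence:
        for c in scores:
            scores[c] *= LIKELIHOOD[(e, c)]
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

posterior = infer_context(["loc=office", "app=navigation"])
```

A full Bayesian network would replace the independence assumption with explicit conditional dependencies between evidence variables.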

Count-Min HyperLogLog : Cardinality Estimation Algorithm for Big Network Data (Count-Min HyperLogLog : 네트워크 빅데이터를 위한 카디널리티 추정 알고리즘)

  • Sinjung Kang;DaeHun Nyang
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.427-435 / 2023
  • Cardinality estimation is a fundamental problem in processing large volumes of data and is used in a wide range of applications. As the internet moves into the era of big data, functions that perform cardinality estimation must rely only on on-chip cache memory, and various methods have been proposed to use that memory efficiently. However, these algorithms lose accuracy because of noise between estimators (the per-flow data structures). In this paper, we focus on minimizing that noise. We propose maintaining multiple data structures so that each estimator obtains as many estimated values as there are structures and chooses the minimum, which is the value with the least noise. Our experiments show that the proposed algorithm outperforms the best existing work under the same tight memory budget, such as 1 bit per flow.
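
The paper's exact structure is not reproduced here; the core idea, keeping several independently hashed sketches and taking the minimum of their estimates to suppress noise, can be sketched with linear-counting bitmaps standing in for HyperLogLog registers:

```python
import hashlib
import math

M = 1024   # bits per bitmap
D = 4      # number of independent bitmaps; the minimum estimate is kept

def _h(seed, item):
    """Deterministic hash of an item under a given seed."""
    return int(hashlib.sha256(f"{seed}:{item}".encode()).hexdigest(), 16)

class MinSketch:
    """D linear-counting bitmaps; min of the D estimates reduces upward noise.
    (Simplified stand-in for the paper's HyperLogLog-based structure.)"""
    def __init__(self):
        self.bitmaps = [[0] * M for _ in range(D)]

    def add(self, item):
        for i in range(D):
            self.bitmaps[i][_h(i, item) % M] = 1

    def estimate(self):
        ests = []
        for bm in self.bitmaps:
            zeros = bm.count(0)
            if zeros == 0:
                zeros = 1  # saturated bitmap: clamp to avoid log(inf)
            ests.append(M * math.log(M / zeros))  # linear counting
        return min(ests)

sk = MinSketch()
for x in range(100):
    sk.add(f"pkt-{x}")
```

With 100 distinct items the estimate lands close to 100; in a shared-memory per-flow setting, other flows' insertions inflate individual bitmaps, which is exactly the noise the minimum suppresses.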

Development of Database System for Ground Exploration (지반조사자료 정보화 시스템 구축)

  • 우철웅;장병욱
    • Proceedings of the Korean Society of Agricultural Engineers Conference / 1998.10a / pp.395-400 / 1998
  • This paper presents a geotechnical information system (HGIS) developed for the Korea Highway Corporation. To build a database for boring information, a characterization study of boring data was carried out. Based on that study, the HGIS database was developed as a relational database consisting of 23 tables, including 13 reference tables. For effective database management, a boring log program called GEOLOG was developed; GEOLOG can print boring logs and store the data in the HGIS database.
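
The actual 23-table HGIS schema is not given in the abstract; a minimal relational sketch of a boring-log store (table and column names are hypothetical) using Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical two-table slice of a boring-log database:
# one row per borehole, many sampled soil layers per borehole.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boring (
    boring_id TEXT PRIMARY KEY,
    station   TEXT,
    depth_m   REAL
);
CREATE TABLE layer (
    boring_id TEXT REFERENCES boring(boring_id),
    top_m     REAL,
    bottom_m  REAL,
    soil_type TEXT
);
""")
conn.execute("INSERT INTO boring VALUES ('BH-1', 'STA 12+300', 15.0)")
conn.executemany(
    "INSERT INTO layer VALUES (?, ?, ?, ?)",
    [("BH-1", 0.0, 3.5, "fill"), ("BH-1", 3.5, 15.0, "weathered rock")],
)
# Reconstruct the layer thicknesses for one borehole's printed log.
rows = conn.execute(
    "SELECT l.soil_type, l.bottom_m - l.top_m FROM layer l "
    "JOIN boring b ON b.boring_id = l.boring_id WHERE b.boring_id = 'BH-1'"
).fetchall()
```

Reference tables (soil classifications, test types, etc.) would hang off these core tables in the same foreign-key style.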

Nonparametric Inference for Accelerated Life Testing (가속화 수명 실험에서의 비모수적 추론)

  • Kim Tai Kyoo
    • Journal of Korean Society for Quality Management / v.32 no.4 / pp.242-251 / 2004
  • Several statistical methods are introduced to analyze accelerated failure time data. The most frequently used is the log-linear approach with a parametric assumption. Since accelerated failure time experiments are exposed to many environmental restrictions, a parametric log-linear relationship might not work properly for analyzing the resulting data. The models proposed by Buckley and James (1979) and Stute (1993) can be useful in situations where the parametric log-linear method is not applicable. These methods are introduced for an accelerated experimental situation under thermal acceleration and discussed through an illustrative example.
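
As a contrast to the nonparametric approaches discussed, the standard parametric log-linear model log T = β0 + β1·s (s = stress level) can be fitted by least squares on log lifetimes. A simulated sketch with invented coefficients and stress levels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated accelerated life test: lifetimes shrink as stress s rises.
beta0, beta1 = 8.0, -0.02                  # invented true parameters
s = np.repeat([300.0, 350.0, 400.0], 30)   # three accelerated stress levels
log_t = beta0 + beta1 * s + rng.normal(0.0, 0.3, s.size)

# Least-squares fit of the log-linear model log T = b0 + b1 * s.
b1, b0 = np.polyfit(s, log_t, 1)

# Extrapolate median life back to a (hypothetical) use-level stress of 250.
median_life_use = np.exp(b0 + b1 * 250.0)
```

The Buckley-James and Stute estimators replace the normal-error assumption behind this fit with a censoring-aware, distribution-free residual step, which is what makes them usable when the parametric model fails.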

A Study of Web Usage Mining for eCRM

  • Hyuncheol Kang;Jung, Byoung-Cheol
    • Communications for Statistical Applications and Methods / v.8 no.3 / pp.831-840 / 2001
  • In this study, we introduce the process of web usage mining, which has lately attracted considerable attention with the fast diffusion of the world wide web, and explain web log data, the main subject of web usage mining. We also illustrate some real examples of analysis of web log data and look into practical applications of web usage mining for eCRM.
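
Web log data of the kind described is typically stored in the Common Log Format; a minimal parsing-and-counting sketch (the sample lines are fabricated):

```python
import re
from collections import Counter

# Common Log Format: host ident user [timestamp] "request" status bytes
CLF = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) \S+" (\d{3}) (\d+|-)')

sample_log = [
    '10.0.0.1 - - [10/Oct/2001:13:55:36 +0900] "GET /index.html HTTP/1.0" 200 2326',
    '10.0.0.2 - - [10/Oct/2001:13:56:01 +0900] "GET /cart.html HTTP/1.0" 200 512',
    '10.0.0.1 - - [10/Oct/2001:13:57:12 +0900] "GET /index.html HTTP/1.0" 304 0',
]

page_hits = Counter()
for line in sample_log:
    m = CLF.match(line)
    if m:
        host, ts, method, path, status, size = m.groups()
        page_hits[path] += 1
```

Session reconstruction and path analysis for eCRM start from exactly this kind of parsed record, grouped by host and time gap.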

Log Analysis System Design using RTMA

  • Park, Hee-Chang;Myung, Ho-Min
    • Proceedings of the Korean Data and Information Science Society Conference / 2004.04a / pp.225-236 / 2004
  • Every web server maintains a repository of all actions and events that occur on the server. Server logs can be used to quantify user traffic, and intelligent analysis of this data provides a statistical baseline that can be used to determine server load, failed requests, and other events that throw light on site usage patterns. This information provides valuable leads for marketing and site management activities. In this paper, we propose a design for a log analysis system using the RTMA (real-time monitoring and analysis) technique.
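
The RTMA design itself is in the paper; as a generic sketch of the kind of baseline statistic such a monitoring system maintains, here is a per-minute failed-request rate computed over a fabricated event stream:

```python
from collections import defaultdict

# Each event: (unix_second, http_status). Fabricated sample stream.
events = [
    (60, 200), (65, 500), (70, 200), (118, 404),
    (125, 200), (130, 503), (170, 200),
]

window = 60  # seconds per monitoring bucket
failed = defaultdict(int)
total = defaultdict(int)
for ts, status in events:
    bucket = ts // window
    total[bucket] += 1
    if status >= 400:          # 4xx/5xx count as failed requests
        failed[bucket] += 1

# Failure rate per window -- the statistical baseline used to flag anomalies.
rates = {b: failed[b] / total[b] for b in total}
```

A real-time system would keep only the live window in memory and emit each closed bucket to storage, rather than batch-scanning the log as above.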

On the Estimation in Regression Models with Multiplicative Errors

  • Park, Cheol-Yong
    • Journal of the Korean Data and Information Science Society / v.10 no.1 / pp.193-198 / 1999
  • The estimation of parameters in regression models with multiplicative errors is usually based on the gamma or log-normal likelihood. Under reciprocal misspecification, we compare the small-sample efficiencies of the two sets of estimators via a Monte Carlo study. We further consider the case where the errors are a random sample from a Weibull distribution. We compute the asymptotic relative efficiency of quasi-likelihood estimators on the original scale to least squares estimators on the log-transformed scale, and perform a Monte Carlo study to compare their small-sample performances.
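
The two estimators being compared can be sketched for a single-covariate model y = exp(β0 + β1·x)·ε: least squares on log y, versus solving the quasi-likelihood estimating equation Σ xᵢ(yᵢ/μᵢ − 1) = 0 (log link, variance ∝ μ²) by Fisher scoring. A simulation sketch with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Multiplicative-error model: y = exp(b0 + b1 * x) * eps with E[eps] = 1.
b0_true, b1_true, sigma = 1.0, 2.0, 0.4
n = 2000
x = rng.uniform(0.0, 1.0, n)
eps = np.exp(rng.normal(-sigma**2 / 2, sigma, n))  # lognormal, mean 1
y = np.exp(b0_true + b1_true * x) * eps

X = np.column_stack([np.ones(n), x])

# (1) Least squares on the log-transformed scale.
beta_ls = np.linalg.lstsq(X, np.log(y), rcond=None)[0]

# (2) Quasi-likelihood on the original scale (variance ~ mu^2, log link):
#     score U(b) = X'(y/mu - 1); Fisher scoring update b += (X'X)^-1 U,
#     since the expected information under this variance function is X'X.
beta_ql = beta_ls.copy()
for _ in range(50):
    mu = np.exp(X @ beta_ql)
    step = np.linalg.solve(X.T @ X, X.T @ (y / mu - 1.0))
    beta_ql = beta_ql + step
```

Both estimators recover the slope; the log-scale intercept absorbs E[log ε] (here −σ²/2), which is the kind of discrepancy the paper's efficiency comparison quantifies.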

Designing Summary Tables for Mining Web Log Data

  • Ahn, Jeong-Yong
    • Journal of the Korean Data and Information Science Society / v.16 no.1 / pp.157-163 / 2005
  • On the Web, data is generally gathered automatically by Web servers and collected in server or access logs. However, as users access larger and larger amounts of data, query response times to extract information inevitably get slower. One method to resolve this issue is the use of summary tables. In this short note, we design a prototype of summary tables that can efficiently extract information from Web log data. We also present the relative performance of the summary tables against a sampling technique and a method that uses raw data.
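
A summary table in this sense is a pre-aggregated view of the raw log, so that common queries never rescan the raw records; a minimal sketch over fabricated hit records:

```python
from collections import defaultdict

# Raw web-log records: (date, page, host). Fabricated sample.
raw = [
    ("2005-01-01", "/a", "h1"), ("2005-01-01", "/a", "h2"),
    ("2005-01-01", "/b", "h1"), ("2005-01-02", "/a", "h3"),
]

# Summary table keyed by (date, page): hit count plus distinct hosts.
summary = defaultdict(lambda: {"hits": 0, "hosts": set()})
for date, page, host in raw:
    cell = summary[(date, page)]
    cell["hits"] += 1
    cell["hosts"].add(host)

# A query answered from the summary without touching the raw log again.
hits_a_0101 = summary[("2005-01-01", "/a")]["hits"]
```

The design trade-off the note studies is exactly this: the summary answers its pre-chosen aggregates instantly but, unlike sampling or raw scans, cannot answer questions outside its key structure.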

A New Analysis Method of the Consolidation Test Data for an Undisturbed Clay (불교란 점토 압밀시험 결과의 새로운 해석법)

  • 박종화;고우모또타쯔야
    • Magazine of the Korean Society of Agricultural Engineers / v.44 no.6 / pp.106-114 / 2002
  • In this study, the results of a series of consolidation tests on undisturbed Ariake clay in Japan were analyzed by three methods, e-log p (e: void ratio, p: consolidation pressure), log e-log p, and n-log p (n: porosity), and the characteristics of each analysis method were studied. For undisturbed Ariake clay, the log e-log p and n-log p relationships appear as two groups of straight lines of different gradients, whereas both the elastic and plastic consolidation regions of the e-log p relationship are expressed as a curve. In this paper, the porosity at consolidation yield n_y, the consolidation yield stress p_y, and the gradient of the plastic consolidation region C_p are obtained from the log e-log p method, and n_c, p_cn, and C_cn from the n-log p method. The meaning and relationships of each value were studied, and the interrelationships among the compression indices C_cn, C_p, and C_c obtained from each analysis method are expressed as functions of the initial porosity n_0.
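
The quantities involved follow from two standard definitions: porosity n = e/(1+e), and a compression index as the slope on a semilog plot, e.g. C_c = −Δe/Δlog10 p on the virgin line. A small sketch with invented test points:

```python
import math

# Two invented points on the virgin (plastic) part of an e-log p curve.
p1, e1 = 100.0, 1.20    # consolidation pressure (kPa), void ratio
p2, e2 = 1000.0, 0.90

# Compression index from the e-log p plot: Cc = -d(e) / d(log10 p).
Cc = -(e2 - e1) / (math.log10(p2) - math.log10(p1))

# The same points in porosity form, n = e / (1 + e), give the slope
# C_cn used by the n-log p method.
n1, n2 = e1 / (1 + e1), e2 / (1 + e2)
Ccn = -(n2 - n1) / (math.log10(p2) - math.log10(p1))
```

Because n and e are linked by n = e/(1+e), C_cn is not a new material constant but a transformation of C_c that depends on the porosity level, which is why the paper expresses the index interrelationships as functions of n_0.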

Disjunctive Process Patterns Refinement and Probability Extraction from Workflow Logs

  • Kim, Kyoungsook;Ham, Seonghun;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services / v.20 no.3 / pp.85-92 / 2019
  • In this paper, we extract quantitative relation data for activities from workflow event logs recorded in the XES standard format and connect them to rediscover the workflow process model, from which we extract workflow process patterns and their proportions. Four types of control-flow elements are needed to extract workflow process patterns and proportions from log files: linear (sequential), disjunctive (selective), conjunctive (parallel), and iterative routing patterns. In this paper, we focus on two of these factors: disjunctive routing and conjunctive routing. A framework implemented by the authors' research group extracts and arranges the activity data from the log and converts repeated duplicate relationships into quantitative values. For accurate analysis, parallel processes are recorded in the log file based on execution time, and algorithms for finding and eliminating the resulting information distortion are designed and implemented. With these refined data, we rediscover the workflow process model from the relationships between activities. The experiments are conducted using the Large Bank Transaction Process Model provided by 4TU, and the experimental process and results are visualized.
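
Extracting disjunctive (XOR-split) branch proportions reduces to counting direct-successor pairs in the traces; a minimal sketch over a fabricated event log (not the 4TU bank log, and without the paper's parallel-routing corrections):

```python
from collections import Counter, defaultdict

# Fabricated traces: each is the ordered activity sequence of one case.
traces = [
    ["register", "check", "approve", "pay"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "pay"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "pay"],
]

# Count direct-successor pairs (a -> b) across all traces.
succ = Counter()
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        succ[(a, b)] += 1

# Branch probability at each split: P(b | a) = count(a->b) / count(a->*).
out_total = defaultdict(int)
for (a, _), c in succ.items():
    out_total[a] += c
branch_prob = {(a, b): c / out_total[a] for (a, b), c in succ.items()}
```

For conjunctive (AND) routing, raw successor counts mislead because interleaved parallel branches produce spurious pairs; that is the distortion the paper's refinement step removes before computing proportions.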