• Title/Abstract/Keywords: Data log


라이프 로그 공유 및 관리를 위한 확률모델 기반 사용자 인터페이스 및 블로그 개발 (Development of User Interface and Blog based on Probabilistic Model for Life Log Sharing and Management)

  • 이진형;노현용;오세원;황금성;조성배
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol.15 No.5 / pp.380-384 / 2009
  • Log data collected on mobile devices contains rich, continuous information about an individual's daily life. Research that infers the user's state and understands personal everyday life from the user's location, photos, and the kinds of device functions and services in use has been attracting much attention. This paper develops an application that collects and analyzes log data from mobile devices in real time, visualizes it on a map so that information about one's daily life can be managed effectively, and lets users share this information with one another. The proposed application adopts a Bayesian network probabilistic model to infer the individual's context. Experiments on actually collected log data confirmed the usefulness of the visualization and of the information-sharing features.
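
The abstract names a Bayesian network as the inference machinery but does not publish the model. As a rough illustration only, the Python sketch below infers a hidden user context from two log-derived observations by direct enumeration; the network structure, states, and probabilities are all invented, not the authors' actual model.

```python
# Minimal sketch of Bayesian-network-style context inference from mobile
# log data. Structure and probabilities are invented for illustration.

# Prior over the hidden user context.
P_context = {"commuting": 0.3, "working": 0.5, "leisure": 0.2}

# Conditional probability tables: P(observation | context).
P_location = {
    "commuting": {"road": 0.7, "office": 0.1, "park": 0.2},
    "working":   {"road": 0.1, "office": 0.8, "park": 0.1},
    "leisure":   {"road": 0.2, "office": 0.1, "park": 0.7},
}
P_service = {
    "commuting": {"music": 0.6, "email": 0.3, "camera": 0.1},
    "working":   {"music": 0.1, "email": 0.8, "camera": 0.1},
    "leisure":   {"music": 0.3, "email": 0.1, "camera": 0.6},
}

def infer_context(location: str, service: str) -> dict:
    """Posterior P(context | location, service) by enumeration."""
    joint = {
        c: P_context[c] * P_location[c][location] * P_service[c][service]
        for c in P_context
    }
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

print(infer_context("office", "email"))  # 'working' dominates (~0.97)
```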

Count-Min HyperLogLog : 네트워크 빅데이터를 위한 카디널리티 추정 알고리즘 (Count-Min HyperLogLog : Cardinality Estimation Algorithm for Big Network Data)

  • 강신정;양대헌
    • 정보보호학회논문지 / Vol.33 No.3 / pp.427-435 / 2023
  • Cardinality estimation is used in many real-world settings and is a fundamental problem in processing large-scale data. As the Internet has entered the big-data era, data keeps growing, yet cardinality estimation must be carried out with only a small on-chip cache memory. Many methods have been proposed to use this memory efficiently, but their accuracy degrades because of noise between estimators. This paper focuses on minimizing that noise: we propose using multiple data structures so that each estimator obtains as many estimates as there are structures and takes the smallest of them, thereby minimizing the noise. Experiments confirmed that, compared with the previous best method, the proposed approach performs better while using as little memory as 1 bit per flow.
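
The abstract gives the key idea: several estimators per flow, keep the minimum. Purely as an illustration of that take-the-minimum principle, here is a stripped-down Python sketch: d register rows indexed by independent hashes of the flow ID, a Flajolet-Martin-style 2^rank estimate per row, and the minimum across rows as the readout. This is not the authors' exact algorithm; real HyperLogLog keeps many registers per estimator and applies bias correction, both omitted here.

```python
import hashlib

D, W = 4, 1024  # number of register rows, registers per row

def h(data: str, seed: int) -> int:
    """Deterministic integer hash with a seed (stand-in for fast hashes)."""
    return int(hashlib.sha256(f"{seed}:{data}".encode()).hexdigest(), 16)

class MinOfRowsSketch:
    def __init__(self):
        self.rows = [[0] * W for _ in range(D)]

    def add(self, flow: str, element: str):
        # Rank = position of the lowest set bit of the element hash,
        # the usual probabilistic-counting observable.
        rank, v = 1, h(element, 999)
        while v % 2 == 0:
            rank += 1
            v //= 2
        for i in range(D):
            j = h(flow, i) % W
            self.rows[i][j] = max(self.rows[i][j], rank)

    def estimate(self, flow: str) -> int:
        # Each row gives a (noisy, possibly inflated) estimate; taking the
        # minimum suppresses noise from other flows sharing a register.
        return min(2 ** self.rows[i][h(flow, i) % W] for i in range(D))

sk = MinOfRowsSketch()
for e in range(1000):
    sk.add("flow-A", f"elem-{e}")
print(sk.estimate("flow-A"))  # order-of-magnitude estimate only
```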

지반조사자료 정보화 시스템 구축 (Development of Database System for Ground Exploration)

  • 우철웅;장병욱
    • 한국농공학회:학술대회논문집 / 한국농공학회 1998년도 학술발표회 발표논문집 / pp.395-400 / 1998
  • This paper presents a geotechnical information system (HGIS) developed for the Korea Highway Corporation. To make a database for boring information, a characterization study of boring data is carried out. Based on this study, the HGIS database is developed as a relational database. The HGIS database consists of 23 tables, including 13 reference tables. For effective database management, a boring log program called GEOLOG is developed. GEOLOG can print boring logs and store the data in the HGIS database.
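
To make the relational design concrete, here is a hypothetical two-table slice of a boring-log schema in SQLite. The abstract does not reproduce the HGIS database's actual 23 tables, so every table and column name below is illustrative.

```python
import sqlite3

# Hypothetical slice of a boring-log schema in the spirit of a relational
# ground-exploration database: one table for boreholes, one for strata.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE borehole (
    hole_id       TEXT PRIMARY KEY,
    project       TEXT NOT NULL,
    easting       REAL,
    northing      REAL,
    ground_elev_m REAL,
    drilled_on    TEXT
);
CREATE TABLE stratum (
    hole_id     TEXT REFERENCES borehole(hole_id),
    top_depth_m REAL NOT NULL,
    bot_depth_m REAL NOT NULL,
    soil_type   TEXT,     -- e.g. a USCS symbol
    spt_n       INTEGER   -- standard penetration test blow count
);
""")
conn.execute("INSERT INTO borehole VALUES "
             "('BH-1', 'HWY-7', 0.0, 0.0, 52.3, '1998-05-01')")
conn.execute("INSERT INTO stratum VALUES ('BH-1', 0.0, 3.5, 'CL', 8)")

# A boring-log printout would join the two tables per hole.
for row in conn.execute(
        "SELECT top_depth_m, bot_depth_m, soil_type, spt_n "
        "FROM stratum WHERE hole_id = 'BH-1'"):
    print(row)
```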

가속화 수명 실험에서의 비모수적 추론 (Nonparametric Inference for Accelerated Life Testing)

  • 김태규
    • 품질경영학회지 / Vol.32 No.4 / pp.242-251 / 2004
  • Several statistical methods are introduced to analyze accelerated failure time data. The most frequently used method is the log-linear approach with a parametric assumption. Since accelerated failure time experiments are exposed to many environmental restrictions, a parametric log-linear relationship might not work properly for analyzing the resulting data. The models proposed by Buckley and James (1979) and Stute (1993) could be useful in situations where the parametric log-linear method is not applicable. Those methods are introduced for accelerated experiments under thermal acceleration and discussed through an illustrative example.
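
For contrast with the nonparametric estimators, the parametric log-linear baseline the abstract mentions can be sketched in a few lines: regress log failure time on a stress covariate and extrapolate to the use condition. The Arrhenius-style 1000/T regressor and the data below are assumptions for illustration, and censoring is ignored; handling censored observations is exactly where the Buckley-James and Stute estimators come in.

```python
import numpy as np

# Synthetic thermal-acceleration data: 5 failure times (hours) at each of
# three test temperatures (Kelvin). Values are made up.
temps_K = np.array([358.0] * 5 + [378.0] * 5 + [398.0] * 5)
times_h = np.array([812, 940, 1103, 1260, 1400,
                    310, 355, 420, 480, 550,
                    120, 140, 165, 190, 215], dtype=float)

# Log-linear model: log T = b0 + b1 * (1000 / temperature), fit by least
# squares on the log scale.
X = np.column_stack([np.ones_like(temps_K), 1000.0 / temps_K])
beta, *_ = np.linalg.lstsq(X, np.log(times_h), rcond=None)

# Extrapolate typical life to a milder 328 K use condition.
pred = np.exp(beta[0] + beta[1] * (1000.0 / 328.0))
print(f"coefficients: {beta}, predicted life at 328 K: {pred:.0f} h")
```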

A Study of Web Usage Mining for eCRM

  • Kang, Hyuncheol;Jung, Byoung-Cheol
    • Communications for Statistical Applications and Methods / Vol.8 No.3 / pp.831-840 / 2001
  • In this study, we introduce the process of web usage mining, which has lately attracted considerable attention with the fast diffusion of the World Wide Web, and explain web log data, the main subject of web usage mining. We also illustrate some real examples of analysis of web log data and look into practical applications of web usage mining for eCRM.
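
As a concrete picture of the web log data the abstract refers to, the sketch below parses access-log lines in the Common Log Format and tallies page popularity, a typical first step of web usage mining. The sample lines are made up.

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "request" status bytes
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

sample = [
    '203.0.113.9 - - [12/Mar/2001:10:01:12 +0900] "GET /index.html HTTP/1.0" 200 1043',
    '203.0.113.9 - - [12/Mar/2001:10:01:40 +0900] "GET /catalog.html HTTP/1.0" 200 2211',
    '198.51.100.4 - - [12/Mar/2001:10:02:03 +0900] "GET /index.html HTTP/1.0" 404 321',
]

pages = Counter()
for line in sample:
    m = LOG_RE.match(line)
    if m and m["status"] == "200":   # count only successful page views
        pages[m["path"]] += 1

print(pages.most_common())  # page popularity feeds eCRM user profiles
```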

Log Analysis System Design using RTMA

  • 박희창;명호민
    • 한국데이터정보과학회:학술대회논문집 / 한국데이터정보과학회 2004년도 춘계학술대회 / pp.225-236 / 2004
  • Every web server keeps a repository of all actions and events that occur on the server. Server logs can be used to quantify user traffic. Intelligent analysis of this data provides a statistical baseline that can be used to determine server load, failed requests, and other events that throw light on site usage patterns. This information provides valuable leads for marketing and site management activities. In this paper, we propose a design method for a log analysis system using the RTMA (real-time monitoring and analysis) technique.
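
The paper's RTMA internals are not given in the abstract. Purely as an illustration of the real-time flavor, this toy sketch keeps recent requests in a sliding window and reports server load and failed-request rate, the kinds of statistics mentioned above.

```python
import time
from collections import deque

WINDOW_S = 60.0          # length of the sliding window, in seconds
events = deque()         # (timestamp, http_status)

def record(status, now=None):
    """Append one request event and drop events older than the window."""
    now = time.time() if now is None else now
    events.append((now, status))
    while events and events[0][0] < now - WINDOW_S:
        events.popleft()

def snapshot():
    """Current load and failure rate over the window."""
    total = len(events)
    failed = sum(1 for _, s in events if s >= 400)
    return {"requests_in_window": total,
            "failure_rate": failed / total if total else 0.0}

t0 = 1_000_000.0  # fixed timestamps keep the example deterministic
for i, status in enumerate([200, 200, 500, 200, 404]):
    record(status, now=t0 + i)
print(snapshot())  # {'requests_in_window': 5, 'failure_rate': 0.4}
```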

On the Estimation in Regression Models with Multiplicative Errors

  • Park, Cheol-Yong
    • Journal of the Korean Data and Information Science Society / Vol.10 No.1 / pp.193-198 / 1999
  • The estimation of parameters in regression models with multiplicative errors is usually based on the gamma or log-normal likelihoods. Under reciprocal misspecification, we compare the small sample efficiencies of two sets of estimators via a Monte Carlo study. We further consider the case where the errors are a random sample from a Weibull distribution. We compute the asymptotic relative efficiency of quasi-likelihood estimators on the original scale to least squares estimators on the log-transformed scale and perform a Monte Carlo study to compare the small sample performances of quasi-likelihood and least squares estimators.
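
A small Monte Carlo in the spirit of the abstract's comparison can be set up as follows: quasi-likelihood estimation on the original scale (gamma variance function, log link, fitted by IRLS, where the working weights conveniently equal one) versus least squares on the log scale, with Weibull multiplicative errors. The model, sample size, and parameter values are invented for illustration; only the slope is compared, since the intercepts of the two fits estimate different quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def quasi_fit(x, y, iters=50):
    """IRLS for the log-link, gamma-variance quasi-likelihood fit.
    With V(mu) = mu^2 and log link, working weights are 1 and the
    working response is eta + (y - mu) / mu."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

b_true = np.array([1.0, 0.5])
errs_q, errs_ls = [], []
for _ in range(500):
    x = rng.uniform(0, 2, 50)
    eps = rng.weibull(1.5, 50)        # multiplicative Weibull errors
    y = np.exp(b_true[0] + b_true[1] * x) * eps
    X = np.column_stack([np.ones_like(x), x])
    b_ls, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    errs_q.append((quasi_fit(x, y)[1] - b_true[1]) ** 2)
    errs_ls.append((b_ls[1] - b_true[1]) ** 2)

print("slope MSE  quasi:", np.mean(errs_q), " log-scale LS:", np.mean(errs_ls))
```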

Designing Summary Tables for Mining Web Log Data

  • Ahn, Jeong-Yong
    • Journal of the Korean Data and Information Science Society / Vol.16 No.1 / pp.157-163 / 2005
  • In the Web, the data is generally gathered automatically by Web servers and collected in server or access logs. However, as users access larger and larger amounts of data, query response times to extract information inevitably get slower. A method to resolve this issue is the use of summary tables. In this short note, we design a prototype of summary tables that can efficiently extract information from Web log data. We also present the relative performance of the summary tables against a sampling technique and a method that uses raw data.
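
The summary-table idea can be made concrete with a toy SQL example: pre-aggregate raw log records into one row per (day, page) so that popularity queries stop scanning the raw table. The schema is illustrative, not the paper's design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_log (ts TEXT, host TEXT, page TEXT, status INTEGER);
INSERT INTO raw_log VALUES
 ('2005-01-10 09:00', 'h1', '/index', 200),
 ('2005-01-10 09:05', 'h2', '/index', 200),
 ('2005-01-10 09:07', 'h1', '/faq',   200),
 ('2005-01-11 10:00', 'h3', '/index', 404);

-- Summary table: one row per (day, page) instead of one per request.
CREATE TABLE daily_page_summary AS
SELECT substr(ts, 1, 10)      AS day,
       page,
       COUNT(*)               AS hits,
       COUNT(DISTINCT host)   AS visitors
FROM raw_log
GROUP BY day, page;
""")

# Popularity queries now read the small summary, not the raw log.
for row in conn.execute(
        "SELECT * FROM daily_page_summary ORDER BY day, page"):
    print(row)
```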

불교란 점토 압밀시험 결과의 새로운 해석법 (A New Analysis Method of the Consolidation Test Data for an Undisturbed Clay)

  • 박종화;고우모또타쯔야
    • 한국농공학회지 / Vol.44 No.6 / pp.106-114 / 2002
  • In this study, the results of a series of consolidation tests on undisturbed Ariake clay in Japan were analyzed by three methods: e-log p (e: void ratio, p: consolidation pressure), log e-log p, and n-log p (n: porosity). Moreover, the characteristics of each analysis method were studied. For undisturbed Ariake clay, the log e-log p and n-log p relationships appear as two groups of straight lines of different gradients, whereas both the elastic and plastic consolidation regions of the e-log p relationship are expressed as a curve. In this paper, the porosity at consolidation yield $n_y$, the consolidation yield stress $p_y$, and the gradient of the plastic consolidation region $C_p$ were obtained by the log e-log p method, and $n_c$, $p_{cn}$, and $C_{cn}$ by the n-log p method. The meaning and the relationships of each value were studied, and the interrelationships among the compression indices, i.e. $C_{cn}$, $C_p$, and $C_c$, obtained from each analysis method are expressed as a function of the initial porosity $n_0$.
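
For readers outside geotechnics, the quantities compared above have standard textbook definitions, and the link among the compression indices through the initial porosity follows from the identity n = e/(1+e). The forms below are general soil-mechanics relations, not values or results taken from the paper.

```latex
% Textbook definitions behind the three plots compared in the abstract.
% e: void ratio, n: porosity, p: consolidation pressure.
\[
  C_c = -\frac{\Delta e}{\Delta \log p}, \qquad
  C_p = -\frac{\Delta \log e}{\Delta \log p}, \qquad
  C_{cn} = -\frac{\Delta n}{\Delta \log p}.
\]
% Since n = e/(1+e), we have dn/de = 1/(1+e)^2 = (1-n)^2, so near an
% initial porosity n_0 the e-based and n-based indices are linked by
\[
  C_{cn} \;\approx\; (1 - n_0)^2 \, C_c ,
\]
% which is why the interrelationships among the compression indices can
% be expressed as a function of the initial porosity n_0.
```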

Disjunctive Process Patterns Refinement and Probability Extraction from Workflow Logs

  • Kim, Kyoungsook;Ham, Seonghun;Ahn, Hyun;Kim, Kwanghoon Pio
    • 인터넷정보학회논문지 / Vol.20 No.3 / pp.85-92 / 2019
  • In this paper, we extract quantitative relation data of activities from workflow event log files recorded in the XES standard format and connect them to rediscover the workflow process model, then extract the workflow process patterns and their proportions from the rediscovered model. There are four types of control-flow elements to consider when extracting workflow process patterns and proportions from log files: linear (sequential), disjunctive (selective), conjunctive (parallel), and iterative routing patterns. In this paper, we focus on two of the four: disjunctive and conjunctive routing. A framework implemented by the authors' research group extracts and arranges the activity data from the log and converts repeated duplicate relationships into a quantitative value. Also, for accurate analysis, parallel processes are recorded in the log file based on execution time, and algorithms for finding and eliminating information distortion are designed and implemented. With these refined data, we rediscover the workflow process model following the relationships between the activities. The series of experiments is conducted using the Large Bank Transaction Process Model provided by 4TU, and the experimental process and results are visualized.
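
As a minimal illustration of extracting disjunctive-routing proportions from a workflow log, the sketch below counts direct-succession relations over invented traces and reads branch probabilities off the counts at each split. A real pipeline would first parse the XES XML (e.g. with xml.etree) into such traces; the activity names here are stand-ins, not the 4TU model.

```python
from collections import Counter, defaultdict

# Invented traces: each is one case's sequence of executed activities.
traces = [
    ["register", "check_credit", "approve", "pay"],
    ["register", "check_credit", "reject"],
    ["register", "check_credit", "approve", "pay"],
    ["register", "check_credit", "approve", "pay"],
]

# Count direct-succession relations: activity -> Counter of successors.
follows = defaultdict(Counter)
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        follows[a][b] += 1

# Activities with more than one distinct successor are candidate
# disjunctive (XOR) splits; relative counts give branch proportions.
for act, succ in follows.items():
    if len(succ) > 1:
        total = sum(succ.values())
        probs = {b: c / total for b, c in succ.items()}
        print(f"{act} -> {probs}")
# e.g. check_credit -> {'approve': 0.75, 'reject': 0.25}
```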