• Title/Summary/Keyword: Log data


Likelihood based inference for the ratio of parameters in two Maxwell distributions (두 개의 맥스웰분포의 모수비에 대한 우도함수 추론)

  • Kang, Sang-Gil;Lee, Jeong-Hee;Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.1
    • /
    • pp.89-98
    • /
    • 2012
  • In this paper, the ratio of parameters in two independent Maxwell distributions is the parameter of interest. We propose test statistics, based on the likelihood function, that converge to the standard normal distribution. Since the exact distribution for testing the ratio is hard to obtain, we propose the signed log-likelihood ratio statistic and the modified signed log-likelihood ratio statistic for testing the ratio. Through simulation, we show that the modified signed log-likelihood ratio statistic converges to the standard normal distribution faster than the signed log-likelihood ratio statistic. We compare the two statistics in terms of type I error and power, and give an example using real data.
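The signed statistic described above can be sketched as follows. This is a minimal illustration, assuming the Maxwell density with scale parameter θ (f(x; θ) ∝ x² exp(−x²/2θ²)), not the authors' code; `signed_llr` is reconstructed from the standard definition r = sign(ψ̂ − ψ₀)·√(2(ℓ(ψ̂) − ℓ_p(ψ₀))), where ℓ_p is the profile log-likelihood under the constrained ratio.

```python
import math

def maxwell_loglik(theta, s2, n):
    # Maxwell log-likelihood up to an additive constant,
    # with s2 = sum of squared observations for a sample of size n
    return -3.0 * n * math.log(theta) - s2 / (2.0 * theta ** 2)

def signed_llr(x, y, psi0=1.0):
    """Signed log-likelihood ratio statistic r for H0: theta_x / theta_y = psi0."""
    n1, n2 = len(x), len(y)
    s1 = sum(v * v for v in x)
    s2 = sum(v * v for v in y)
    # unrestricted MLEs: theta_hat^2 = sum(x^2) / (3n)
    t1 = math.sqrt(s1 / (3 * n1))
    t2 = math.sqrt(s2 / (3 * n2))
    psi_hat = t1 / t2
    l_hat = maxwell_loglik(t1, s1, n1) + maxwell_loglik(t2, s2, n2)
    # profile MLE of theta_y under the constraint theta_x = psi0 * theta_y
    t2c = math.sqrt((s1 / psi0 ** 2 + s2) / (3 * (n1 + n2)))
    l0 = maxwell_loglik(psi0 * t2c, s1, n1) + maxwell_loglik(t2c, s2, n2)
    w = max(0.0, 2.0 * (l_hat - l0))   # guard against tiny negative round-off
    return math.copysign(math.sqrt(w), psi_hat - psi0)
```

Under H0, r is compared with standard normal quantiles; the modified statistic in the paper applies a higher-order (Barndorff-Nielsen-type) correction to r, which is what improves the convergence reported in the simulations.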

Design of a machine learning based mobile application with GPS, mobile sensors, public GIS: real time prediction on personal daily routes

  • Shin, Hyunkyung
    • International journal of advanced smart convergence
    • /
    • v.7 no.4
    • /
    • pp.27-39
    • /
    • 2018
  • Since the global positioning system (GPS) became a standard component of mobile devices (e.g., car navigation systems, smartphones, and smart watches), personal GPS log data have had an unprecedented impact on daily life. For example, such log data have been used to solve public problems, such as analyzing mass transit traffic patterns, finding travelers' optimal routes, and identifying prospective business zones. However, real-time analysis of GPS log data has been unattainable due to theoretical limitations. We introduce a machine learning model to overcome this limitation. This paper presents a new, three-stage real-time prediction model for a person's daily route activity. In the first stage, a machine learning-based clustering algorithm is adopted for place detection, with a personal GPS tracking history as the training data set. In the second stage, prediction of the person's transit mode is studied. In the third stage, inference rules are applied to represent the person's activity on those daily routes.
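The first stage (place detection by clustering a GPS history) can be illustrated with a small density-based clustering sketch. The abstract does not name the algorithm, so the DBSCAN-style procedure and the `eps`/`min_pts` values below are assumptions for illustration only:

```python
import math

def dbscan(points, eps=0.0005, min_pts=3):
    """Label frequently visited places in a GPS track with a tiny DBSCAN.
    points: list of (lat, lon) pairs; returns a cluster id per point, -1 = noise.
    Plain Euclidean distance is fine over small geographic areas."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # too sparse: noise (for now)
            continue
        labels[i] = cid               # i is a core point: start a cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid       # previously noise: becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:
                queue.extend(nbrs_j)  # expand only through core points
        cid += 1
    return labels
```

Each resulting cluster corresponds to a candidate "place" (home, office, station) that the later stages can reason over.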

On Sample Size Calculation in Bioequivalence Trials

  • Kang, Seung-Ho
    • Proceedings of the PSK Conference
    • /
    • 2003.04a
    • /
    • pp.117.2-118
    • /
    • 2003
  • Sample size calculation plays an important role in bioequivalence trials and is determined by considering power under the alternative hypothesis. The regulatory guideline recommends that a $2{\times}2$ crossover design be conducted and that the raw data be log-transformed for statistical analysis. In this paper, we discuss sample size calculation in the $2{\times}2$ crossover design with log-transformed data.
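As a hedged illustration of the kind of calculation discussed, the sketch below uses the textbook normal-approximation formula for the two one-sided tests (TOST) procedure in a 2×2 crossover on log-transformed data, with acceptance limits 0.80-1.25. The paper's own derivation may differ, and exact power calculations use the t distribution, so this approximation slightly underestimates the required size:

```python
import math
from statistics import NormalDist

def be_sample_size(cv, gmr=0.95, alpha=0.05, power=0.8):
    """Approximate total sample size for a 2x2 crossover BE trial
    (TOST on log-transformed data, limits 0.80-1.25, normal approximation).
    cv: within-subject coefficient of variation (e.g. 0.30 for 30%)."""
    sigma_w = math.sqrt(math.log(cv ** 2 + 1.0))  # within-subject SD, log scale
    za = NormalDist().inv_cdf(1 - alpha)
    # when GMR = 1 both one-sided tests matter, so beta is split in half
    zb = NormalDist().inv_cdf(1 - (1 - power) / (2 if gmr == 1.0 else 1))
    margin = math.log(1.25) - abs(math.log(gmr))
    n_total = 2 * (za + zb) ** 2 * sigma_w ** 2 / margin ** 2
    n = math.ceil(n_total)
    return n + (n % 2)   # round up to an even total (two sequences)
```

For example, a 30% CV with an assumed geometric mean ratio of 0.95 gives a total in the high thirties, close to the t-based answer of about 40 reported by standard software.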


Web Navigation Mining by Integrating Web Usage Data and Hyperlink Structures (웹 사용 데이타와 하이퍼링크 구조를 통합한 웹 네비게이션 마이닝)

  • Gu Heummo;Choi Joongmin
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.5
    • /
    • pp.416-427
    • /
    • 2005
  • Web navigation mining is a method of discovering Web navigation patterns by analyzing Web access log data. However, the log data contains noisy information that leads to incorrect recognition of a user's navigation path over the Web's hyperlink structure. As a result, previous Web navigation mining systems that exploited the log data alone have not performed well at discovering correct Web navigation patterns efficiently, mainly due to the complex pre-processing procedure. To resolve this problem, this paper proposes a technique that amalgamates the Web's hyperlink structure information with the Web access log data to discover navigation patterns correctly and efficiently. Our implemented Web navigation mining system, called SPMiner, produces a WebTree from the hyperlink structure of a Web site that is used to eliminate the possible noise in the Web log data caused by users' abnormal navigational activities. SPMiner remarkably reduces the pre-processing overhead by using the structure of the Web and, as a result, can analyze users' search patterns efficiently.
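The idea of checking a logged click stream against the site's hyperlink structure can be sketched as below. SPMiner's actual WebTree procedure is not given in the abstract, so the toy link graph and the segmentation rule (a transition absent from the graph starts a new segment, e.g. a cached back-button move) are assumptions:

```python
def consistent_transitions(path, links):
    """Split a raw click stream into maximal link-consistent segments.
    path:  logged page sequence, e.g. ["A", "B", "D", "C"]
    links: adjacency map of the site's hyperlink structure."""
    segments, current = [], [path[0]]
    for prev, page in zip(path, path[1:]):
        if page in links.get(prev, set()):
            current.append(page)      # transition follows a real hyperlink
        else:
            segments.append(current)  # noise (cache/back-button): cut here
            current = [page]
    segments.append(current)
    return segments
```

Pattern mining would then run on the cleaned segments rather than the raw log, which is where the pre-processing savings come from.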

An Approach to Survey Data with Nonresponse: Evaluation of KEPEC Data with BMI (무응답이 있는 설문조사연구의 접근법 : 한국노인약물역학코호트 자료의 평가)

  • Baek, Ji-Eun;Kang, Wee-Chang;Lee, Young-Jo;Park, Byung-Joo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.35 no.2
    • /
    • pp.136-140
    • /
    • 2002
  • Objectives: A common problem in analyzing survey data is incompleteness due to nonresponse or missing data. The mail questionnaire survey conducted in 1996 to collect lifestyle variables on members of the Korean Elderly Pharmacoepidemiologic Cohort (KEPEC) contains such nonresponse and missing data. A proper statistical method was applied to evaluate the missing pattern of a specific KEPEC data set, which had no missing data in the independent variables but missing data in the response variable, BMI. Methods: The number of study subjects was 8,689 elderly people. Initially, the BMI and the significant variables that influenced the BMI were categorized. After fitting a log-linear model, the probabilities of the people in each category were estimated. The EM algorithm was implemented with the log-linear model to determine the missing mechanism causing the nonresponse. Results: Age, smoking status, and a preference for spicy hot food were chosen as variables that influenced the BMI. As a result of fitting the nonignorable and ignorable nonresponse log-linear models with these variables, the difference in deviance between the two models was 0.0034 (df=1). Conclusion: There is considerable risk in making inferences about the variables, even with large samples, without considering the pattern of missing data. On the basis of these results, the missing data in the BMI are an ignorable nonresponse. Therefore, when analyzing the BMI in the KEPEC data, inferences can be made without modeling the missing-data mechanism.
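The model comparison in the Results can be checked arithmetically: a deviance difference of 0.0034 on 1 degree of freedom is far below the chi-square critical value (3.841 at the 5% level), so the simpler ignorable-nonresponse model is retained. A small sketch, using the closed-form chi-square survival function for 1 df:

```python
import math

def chi2_sf_df1(x):
    """Survival function of the chi-square distribution with 1 df:
    P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))

# deviance difference between the nonignorable and ignorable models (df = 1)
dev_diff = 0.0034
p_value = chi2_sf_df1(dev_diff)
ignorable = p_value > 0.05   # fail to reject the simpler, ignorable model
```

The p-value is close to 1, matching the paper's conclusion that the missing BMI values can be treated as ignorable nonresponse.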

An Optimization Technique for Smart-Walk Systems Using Big Stream Log Data (Smart-Walk 시스템에서 스트림 빅데이터 분석을 통한 최적화 기법)

  • Cho, Wan-Sup;Yang, Kyung-Eun;Lee, Joong-Yeub
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.3
    • /
    • pp.105-114
    • /
    • 2012
  • Various RFID-based smart-walk systems have been developed for guiding disabled people. The system sends an appropriate message whenever a disabled person arrives at a specific point. We propose a universal design concept and optimization techniques for smart-walk systems. The universal design concept allows a single system to support various kinds of users, such as blind people, hearing-impaired people, or foreigners. It is realized by storing the appropriate message sets in the message database table according to the kind of user. System optimization can be achieved by analyzing the operational log (stream) data accumulated in the system; useful information can be extracted by analyzing or mining this accumulated data. We show various analysis results from the operational log data.

Comparison of Parametric and Bootstrap Method in Bioequivalence Test

  • Ahn, Byung-Jin;Yim, Dong-Seok
    • The Korean Journal of Physiology and Pharmacology
    • /
    • v.13 no.5
    • /
    • pp.367-371
    • /
    • 2009
  • The estimation of the 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on the resampled datasets. Histograms and density curves of formulation effects obtained from the resampled datasets were similar to those of a normal distribution. However, in 2 of the 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow a Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one type of nonparametric CI of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when the data do not follow this assumption.
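A minimal sketch of the nonparametric side of this comparison: a percentile bootstrap CI for the mean log(T/R) formulation effect, mapped back to the ratio scale. This is the plain percentile method, not the BCa variant used in the paper, and the data and bootstrap count are placeholders:

```python
import math
import random

def bootstrap_ci(log_ratios, n_boot=1000, level=0.90, seed=1):
    """Nonparametric percentile bootstrap CI for the mean log(T/R)
    formulation effect, returned on the original (ratio) scale."""
    rng = random.Random(seed)
    n = len(log_ratios)
    means = sorted(
        sum(rng.choice(log_ratios) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * (1 - level) / 2)]       # e.g. 5th percentile
    hi = means[int(n_boot * (1 + level) / 2) - 1]   # e.g. 95th percentile
    return math.exp(lo), math.exp(hi)
```

The resulting interval would then be checked against the 0.80-1.25 acceptance range, just as with the parametric 90% CI.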

Development of integrated management solution through log analysis based on Big Data (빅데이터기반의 로그분석을 통한 통합 관리 솔루션 개발)

  • Kang, Sun-Kyoung;Lee, Hyun-Chang;Shin, Seong-Yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.541-542
    • /
    • 2017
  • In this paper, we develop an integrated management solution that makes complex and varied cloud environments easy to operate in a unified way. Its advantage is that users and administrators can conveniently solve problems through big-data-based collection and analysis of structured and unstructured log data, combined with integrated real-time monitoring. Hypervisor log pattern analysis technology will make it possible to manage existing complex and varied cloud environments more efficiently.


A Text Mining-based Intrusion Log Recommendation in Digital Forensics (디지털 포렌식에서 텍스트 마이닝 기반 침입 흔적 로그 추천)

  • Ko, Sujeong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.6
    • /
    • pp.279-290
    • /
    • 2013
  • In digital forensics, log files are stored as a form of large data for the purpose of tracing users' past behaviors. It is difficult for investigators to manually analyze such large log data without clues. In this paper, we propose a text mining technique for extracting intrusion logs from a large log set in order to recommend reliable evidence to investigators. In the training stage, the proposed method extracts intrusion association words from a training log set using the Apriori algorithm after preprocessing, and the probability of intrusion for each association word is computed by combining support and confidence. Robinson's method of computing confidences for filtering spam mail is applied to extracting intrusion logs. As a result, an association word knowledge base is constructed that includes the weighted probability of intrusion for each association word, improving the accuracy. In the test stage, the probability of intrusion logs and the probability of normal logs in a test log set are each computed by Fisher's inverse chi-square classification algorithm based on the association word knowledge base, and intrusion logs are extracted by combining the results. The intrusion logs are then recommended to investigators. The proposed method employs a training method that clearly analyzes the meaning of data in unstructured large log data, which compensates for the loss of accuracy caused by data ambiguity. In addition, by recommending intrusion logs with Fisher's inverse chi-square classification algorithm, it reduces the false positive (FP) rate and the laborious effort of extracting evidence manually.
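The Fisher's inverse chi-square combination used in the test stage can be sketched as follows. The per-word probabilities are assumptions, and the paper combines separate intrusion and normal scores, which this single-score sketch omits:

```python
import math

def chi2_cdf_even(x, df):
    """Chi-square CDF for even df via the closed-form series:
    CDF = 1 - exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!"""
    m = x / 2.0
    term, s = 1.0, 1.0
    for i in range(1, df // 2):
        term *= m / i
        s += term
    return 1.0 - math.exp(-m) * s

def fisher_combine(probs):
    """Fisher's inverse chi-square combination of per-word probabilities
    (Robinson-style, as in statistical spam filtering). Under the null,
    -2 * sum(ln p) is chi-square with 2k df; returns the tail probability,
    so values near 0 mean the words are collectively extreme."""
    if not probs:
        return 0.5   # no evidence either way
    h = -2.0 * sum(math.log(p) for p in probs)
    return 1.0 - chi2_cdf_even(h, 2 * len(probs))
```

In the paper's setting, one such score would be computed from the intrusion-word probabilities and one from the normal-word probabilities of a log line, and the two combined to decide whether to recommend it as evidence.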

Research on Data Acquisition Strategy and Its Application in Web Usage Mining (웹 사용 마이닝에서의 데이터 수집 전략과 그 응용에 관한 연구)

  • Ran, Cong-Lin;Joung, Suck-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.231-241
    • /
    • 2019
  • Web Usage Mining (WUM) is a branch of Web mining and an application of data mining techniques. Web mining technology is used to identify and analyze users' access patterns from the web server log data generated when users access a web site. Therefore, it is important that the data be acquired in a reasonable way before applying data mining techniques to discover user access patterns from the web log. The main task of data acquisition is to efficiently capture users' detailed click behavior while they visit a web site. This paper focuses on the data acquisition stage that precedes the first stage of the web usage mining process, covering the data acquisition strategy and a field extraction algorithm. The field extraction algorithm separates fields from a single line of the log file and is well suited to practical applications with large amounts of user data.
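A field extraction step of the kind described can be sketched for the widely used Common Log Format. The abstract does not specify the log format, so the pattern below is an assumption for illustration:

```python
import re

# host, identity, user, [timestamp], "method url protocol", status, size
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

def extract_fields(line):
    """Split one Common Log Format line into named fields.
    Returns a dict of field names to strings, or None for a malformed line."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None
```

Applied line by line over a large access log, this yields the structured records (host, time, URL, status) on which the later mining stages operate.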