• Title/Summary/Keyword: Log Clustering

Game-bot detection based on Clustering of asset-varied location coordinates (자산변동 좌표 클러스터링 기반 게임봇 탐지)

  • Song, Hyun Min; Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.5 / pp.1131-1141 / 2015
  • In this paper, we propose a new machine-learning-based method for distinguishing game bots from normal players in MMORPGs by inspecting players' action log data, in particular in-game money increase/decrease event logs. DBSCAN (Density-Based Spatial Clustering of Applications with Noise), one of the density-based clustering algorithms, is used to extract spatial characteristics of each player, such as the number of clusters and the ratios of core points, member points, and noise points. Notably, even if game-bot developers know the principles of this detection system, they cannot evade it, because moving over a wide area to hunt monsters is inefficient and unproductive. As a result, game bots show clear differences from normal players in spatial characteristics, such as a very low ratio of noise points (less than 5%), whereas normal players' noise-point ratio is high. In experiments on real MMORPG action log data, our game-bot detection system shows good performance with high detection accuracy.
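Below is a minimal sketch (not the authors' code) of the kind of feature extraction this abstract describes: DBSCAN is run over one player's coordinates logged at money increase/decrease events, and the cluster count plus core/member/noise-point ratios become features. The eps and min_samples values and the synthetic coordinates are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def spatial_features(coords, eps=30.0, min_samples=5):
    """coords: (n, 2) array of (x, y) map positions for one player."""
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(coords)
    labels = db.labels_
    core_mask = np.zeros(len(labels), dtype=bool)
    core_mask[db.core_sample_indices_] = True
    noise_mask = labels == -1
    return {
        "n_clusters": len(set(labels)) - (1 if -1 in labels else 0),
        "core_ratio": core_mask.mean(),
        "noise_ratio": noise_mask.mean(),                  # bots: typically very low
        "member_ratio": (~core_mask & ~noise_mask).mean(), # border (member) points
    }

# Example: a bot grinding in two tight spots vs. a human roaming widely.
rng = np.random.default_rng(0)
bot = np.vstack([rng.normal(100, 5, (200, 2)), rng.normal(400, 5, (200, 2))])
human = rng.uniform(0, 1000, (400, 2))
print("bot:  ", spatial_features(bot))
print("human:", spatial_features(human))
```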

A MapReduce-Based Workflow BIG-Log Clustering Technique (맵리듀스기반 워크플로우 빅-로그 클러스터링 기법)

  • Jin, Min-Hyuck; Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services / v.20 no.1 / pp.87-96 / 2019
  • In this paper, we propose a MapReduce-supported clustering technique for collecting and classifying distributed workflow enactment event logs as a preprocessing tool. We call these distributed workflow enactment event logs Workflow BIG-Logs because they satisfy the 5V properties of big data: Volume, Velocity, Variety, Veracity, and Value. The clustering technique developed in this paper is devised specifically for the preprocessing phase of a workflow process mining and analysis algorithm based on Workflow BIG-Logs. In other words, it uses the MapReduce framework as a Workflow BIG-Log processing platform, supports the IEEE XES standard data format, and is ultimately dedicated to the preprocessing phase of the ρ-Algorithm, a typical workflow process mining algorithm based on structured information control nets. More precisely, Workflow BIG-Logs can be classified into two types, activity-based clustering patterns and performer-based clustering patterns, and we implement an activity-based clustering pattern algorithm on the MapReduce framework. Finally, we verify the proposed clustering technique through an experimental study on the workflow enactment event log dataset released by the BPI Challenges.
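The following is a toy, single-process sketch of an activity-based clustering pattern in MapReduce style; the event dictionaries stand in for IEEE XES records, and the map/shuffle/reduce functions are illustrative, not the paper's ρ-Algorithm preprocessor.

```python
from collections import defaultdict

events = [  # stand-in for IEEE XES <event> records in a workflow enactment log
    {"case": "c1", "activity": "Register", "resource": "alice"},
    {"case": "c1", "activity": "Approve",  "resource": "bob"},
    {"case": "c2", "activity": "Register", "resource": "alice"},
    {"case": "c2", "activity": "Reject",   "resource": "carol"},
]

def map_phase(event):              # mapper: emit an (activity, case) pair per event
    return event["activity"], event["case"]

def shuffle(pairs):                # shuffle: group emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):     # reducer: one activity-based cluster per key
    return key, sorted(set(values))

clusters = dict(reduce_phase(k, v) for k, v in shuffle(map(map_phase, events)).items())
print(clusters)  # {'Register': ['c1', 'c2'], 'Approve': ['c1'], 'Reject': ['c2']}
```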

Anomalous Pattern Analysis of Large-Scale Logs with Spark Cluster Environment

  • Sion Min; Youyang Kim; Byungchul Tak
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.127-136 / 2024
  • This study explores the correlation between system anomalies and large-scale logs within a Spark cluster environment. While research on log-based anomaly detection is growing, existing work remains limited in leveraging logs from the various components of the cluster and in considering the relationship between anomalies and the system. Therefore, this paper analyzes the distribution of normal and abnormal logs and explores the potential for anomaly detection based on the occurrence of log templates. Normal and abnormal log data are generated with Hadoop and Spark, and t-SNE and K-means clustering are used to identify the templates of logs that appear in anomalous situations. Ultimately, log templates that occur only during abnormal situations are identified, demonstrating the potential for anomaly detection.
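A rough sketch of that pipeline on synthetic data (the template-count matrix is invented, not the paper's Hadoop/Spark logs): each log window is represented by a vector of template counts, embedded with t-SNE, and grouped with K-means.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_templates = 20
normal = rng.poisson(5, size=(40, n_templates)).astype(float)
normal[:, -3:] = 0                                # last 3 templates never seen normally
abnormal = rng.poisson(5, size=(10, n_templates)).astype(float)
abnormal[:, -3:] = rng.poisson(15, size=(10, 3))  # ...but frequent during failures
X = np.vstack([normal, abnormal])

emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(labels)  # abnormal windows tend to land in a cluster of their own
```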

A Clustering Algorithm Considering Structural Relationships of Web Contents

  • Kang Hyuncheol; Han Sang-Tae; Sun Young-Su
    • Communications for Statistical Applications and Methods / v.12 no.1 / pp.191-197 / 2005
  • The application of data mining techniques to the world wide web, referred to as web mining, has been the focus of much recent research. With the explosive growth of information sources available on the web, it has become increasingly necessary to track and analyze their usage patterns. In this study, we introduce a process of pre-processing and cluster analysis for web log data and suggest a distance measure that considers the structural relationships between web contents. We also illustrate some real examples of cluster analysis for web log data and look into practical applications of web usage mining for eCRM.
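As an illustration only, the sketch below clusters web-log sessions with a structural distance between URL paths (steps up to the deepest common ancestor in the site hierarchy); this particular measure and the example sessions are hypothetical stand-ins, not the distance proposed in the paper.

```python
import numpy as np
from itertools import product
from scipy.cluster.hierarchy import linkage, fcluster

def page_distance(a, b):
    """Steps from each page up to the deepest common ancestor in the URL hierarchy."""
    pa, pb = a.strip("/").split("/"), b.strip("/").split("/")
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)

def session_distance(s1, s2):
    """Average structural distance between the pages of two sessions."""
    return float(np.mean([page_distance(a, b) for a, b in product(s1, s2)]))

sessions = [
    ["/shop/shoes/nike", "/shop/shoes/adidas"],
    ["/shop/shoes/puma", "/shop/bags"],
    ["/blog/2005/jan", "/blog/2005/feb"],
]
n = len(sessions)
condensed = [session_distance(sessions[i], sessions[j])
             for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(labels)  # the two shopping sessions cluster together, the blog session apart
```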

User modeling based on fuzzy category and interest for web usage mining

  • Lee, Si-Hun; Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.88-93 / 2005
  • Web usage mining is a research field that searches for potentially useful and valuable information in web log files. A web log file is a simple list of the pages that users refer to, so it is not easy to analyze a user's current field of interest from it directly. This paper presents a web usage mining method for finding users' current interests based on fuzzy categories. We consider not only how many times a user visits pages but also when the visits occur. We describe a user's current interest as a fuzzy interest degree over categories. Based on fuzzy categories and fuzzy interest degrees, we also propose a method to cluster users according to their interests for user modeling. For user clustering, we define a category vector space. Experiments show that our method properly reflects the time factor of users' web visits as well as their visit counts.
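A loose sketch of the idea, using a simplified recency weighting of my own (an exponential half-life) rather than the paper's fuzzy membership functions: each visit adds to a category's interest degree, weighted by how recent it is, and users are then clustered in the resulting category vector space.

```python
import numpy as np
from sklearn.cluster import KMeans

CATEGORIES = ["sports", "news", "music"]

def interest_vector(visits, half_life=7.0):
    """visits: list of (category, days_ago). Returns fuzzy interest degrees in [0, 1]."""
    scores = np.zeros(len(CATEGORIES))
    for cat, days_ago in visits:
        scores[CATEGORIES.index(cat)] += 0.5 ** (days_ago / half_life)
    return scores / scores.max() if scores.max() > 0 else scores

users = [
    [("sports", 1), ("sports", 2), ("news", 30)],   # recently into sports
    [("sports", 1), ("sports", 3)],
    [("music", 2), ("music", 5), ("news", 4)],      # music/news reader
]
X = np.array([interest_vector(v) for v in users])
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
```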

Design of a machine learning based mobile application with GPS, mobile sensors, public GIS: real time prediction on personal daily routes

  • Shin, Hyunkyung
    • International Journal of Advanced Smart Convergence / v.7 no.4 / pp.27-39 / 2018
  • Since the global positioning system (GPS) was included in mobile devices (e.g., for car navigation, in smartphones, and in smart watches), the impact of personal GPS log data on daily life has been unprecedented. For example, such log data have been used to solve public problems such as analyzing mass transit traffic patterns, finding optimal travel routes, and determining prospective business zones. However, real-time analysis of GPS log data has been unattainable due to theoretical limitations. We introduce a machine learning model to resolve this limitation. This paper presents a new three-stage real-time prediction model for a person's daily route activity. In the first stage, a machine-learning-based clustering algorithm is adopted for place detection, with a personal GPS tracking history as the training data set. In the second stage, prediction of the person's transit mode is studied. In the third stage, inference rules are applied to represent the person's activity on those daily routes.
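For the place-detection stage, a hedged sketch follows: DBSCAN with a haversine metric over a personal GPS history, with an eps of roughly 100 m. The coordinates and parameter choices are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000.0

track = np.array([             # (lat, lon) samples from one day's GPS log
    [37.5665, 126.9780], [37.5666, 126.9781], [37.5664, 126.9779],  # first frequented place
    [37.5512, 126.9882], [37.5513, 126.9884],                       # second frequented place
    [37.5600, 126.9700],                                            # point in transit
])
eps_m = 100.0                  # points within ~100 m are treated as the same place
db = DBSCAN(eps=eps_m / EARTH_RADIUS_M, min_samples=2, metric="haversine")
labels = db.fit_predict(np.radians(track))
print(labels)                  # two detected places; -1 marks the in-transit point
```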

Screening and Clustering for Time-course Yeast Microarray Gene Expression Data using Gaussian Process Regression (효모 마이크로어레이 유전자 발현데이터에 대한 가우시안 과정 회귀를 이용한 유전자 선별 및 군집화)

  • Kim, Jaehee; Kim, Taehoun
    • The Korean Journal of Applied Statistics / v.26 no.3 / pp.389-399 / 2013
  • This article introduces Gaussian process regression and shows its application to time-course microarray gene expression data. Gene screening for yeast cell-cycle microarray expression data is accomplished with a ratio of log marginal likelihoods obtained from Gaussian process regression with a squared exponential covariance kernel. A Gaussian process regression fit is computed for each gene and shown for the nine top-ranking genes. Gaussian model-based clustering is then performed on the screened data, and silhouette values are calculated for cluster validity.
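The screening idea can be sketched as follows, under the assumption that the log-marginal-likelihood comparison pits a squared exponential (RBF) GP fit against a noise-only constant model; the statistic and the synthetic time courses below are illustrative, not the paper's exact procedure.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

t = np.linspace(0, 120, 18).reshape(-1, 1)     # sampling times (minutes)
rng = np.random.default_rng(1)
cyclic_gene = np.sin(2 * np.pi * t.ravel() / 60) + rng.normal(0, 0.2, t.shape[0])
flat_gene = rng.normal(0, 0.2, t.shape[0])

def gp_score(y):
    """Log marginal likelihood of an RBF GP minus that of a noise-only model."""
    smooth = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=20.0) + WhiteKernel(),
        normalize_y=True).fit(t, y)
    noise_only = GaussianProcessRegressor(
        kernel=ConstantKernel() + WhiteKernel(), normalize_y=True).fit(t, y)
    return smooth.log_marginal_likelihood_value_ - noise_only.log_marginal_likelihood_value_

print("cyclic gene score:", round(gp_score(cyclic_gene), 2))  # large -> keep for clustering
print("flat gene score:  ", round(gp_score(flat_gene), 2))    # near zero -> screen out
```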

Sparse Web Data Analysis Using MCMC Missing Value Imputation and PCA Plot-based SOM (MCMC 결측치 대체와 주성분 산점도 기반의 SOM을 이용한 희소한 웹 데이터 분석)

  • Jun, Sung-Hae; Oh, Kyung-Whan
    • The KIPS Transactions: Part D / v.10D no.2 / pp.277-282 / 2003
  • Knowledge discovery from the web has been studied in many research efforts, but there are difficulties in using web logs as training data for efficient predictive models. In this paper, we study a method to eliminate sparseness from web log data and to perform web user clustering. The sparseness of the web data is removed using missing value imputation by Bayesian inference with MCMC, and web user clustering is performed using self-organizing maps based on a 3-D plot of principal components. Finally, using the KDD Cup data, our experimental results illustrate the problem-solving process and the performance evaluation.
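A sketch of that preprocessing chain with two openly labeled substitutions: scikit-learn's IterativeImputer (with sample_posterior=True, a MICE-style chained-equations imputer) stands in for the paper's MCMC imputation, and the third-party MiniSom package provides the self-organizing map. The sparse user-by-page matrix is a random stand-in.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.decomposition import PCA
from minisom import MiniSom   # assumed dependency: pip install minisom

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(50, 12)).astype(float)   # toy users-by-pages usage counts
X[rng.random(X.shape) < 0.4] = np.nan               # sparse: ~40% of entries missing

X_filled = IterativeImputer(sample_posterior=True, random_state=0).fit_transform(X)
X_pca = PCA(n_components=3).fit_transform(X_filled)  # 3-D principal component scores

som = MiniSom(4, 4, X_pca.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X_pca, 500)
cells = [som.winner(row) for row in X_pca]           # each user's winning SOM cell
print(cells[:5])
```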

Prediction and visualization of CYP2D6 genotype-based phenotype using clustering algorithms

  • Kim, Eun-Young; Shin, Sang-Goo; Shin, Jae-Gook
    • Translational and Clinical Pharmacology / v.25 no.3 / pp.147-152 / 2017
  • This study focused on the role of cytochrome P450 2D6 (CYP2D6) genotypes to predict phenotypes in the metabolism of dextromethorphan. CYP2D6 genotypes and metabolic ratios (MRs) of dextromethorphan were determined in 201 Koreans. Unsupervised clustering algorithms, hierarchical and k-means clustering analysis, and color visualizations of CYP2D6 activity were performed on a subset of 130 subjects. A total of 23 different genotypes were identified, five of which were observed in one subject. Phenotype classifications were based on the means, medians, and standard deviations of the log MR values for each genotype. Color visualization was used to display the mean and median of each genotype as different color intensities. Cutoff values were determined using receiver operating characteristic curves from the k-means analysis, and the data were validated in the remaining subset of 71 subjects. Using the two highest silhouette values, the selected numbers of clusters were three (the best) and four. The findings from the two clustering algorithms were similar to those of other studies, classifying *5/*5 as the lowest-activity group and genotypes containing duplicated alleles (i.e., CYP2D6*1/*2N) as the highest-activity group. The validation of the k-means clustering results with data from the 71 subjects revealed relatively high concordance rates: 92.8% and 73.9% for three and four clusters, respectively. Additionally, color visualization allowed for rapid interpretation of results. Although the clustering approach to predict CYP2D6 phenotype from CYP2D6 genotype is not fully complete, it provides general information about the genotype-phenotype relationship, including rare genotypes with only one subject.
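A small sketch of the clustering step described above: k-means on log metabolic ratios, with silhouette scores used to choose the number of clusters. The log MR values below are simulated, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
log_mr = np.concatenate([         # three simulated activity groups (low / normal / high)
    rng.normal(2.5, 0.3, 15),     # poor metabolizers, e.g. *5/*5
    rng.normal(0.5, 0.3, 90),
    rng.normal(-1.5, 0.3, 25),    # high activity, e.g. duplicated alleles
]).reshape(-1, 1)

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(log_mr)
    print(k, round(silhouette_score(log_mr, labels), 3))   # highest value -> chosen k
```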

A technique to support the personalized learning based on the log data of piano chords practicing (피아노 코드 연습 데이터를 활용한 맞춤형 학습 지원)

  • Woosung, Jung; Eunjoo, Lee; Suah, Choe
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.1 / pp.191-201 / 2023
  • As Edutech, which integrates IT technology into education, emerges, many related attempts have been made in music education. The focus has shifted from teachers to learners, giving rise to personalized learning, for which the learner's proficiency is an essential factor. Chord fingering is an important technique in piano learning. In this paper, a personalized learning tool for piano chords is suggested, and several ways of using it are described through analysis of chord-practice patterns. Specifically, the difficulty of the chords and the proficiency of the learner are derived from the users' accumulated practice log data. A more effective way of learning the chords is presented through hierarchical clustering based on chord similarity. Furthermore, because the suggested approach uses only the practice log data, it lessens the learner's burden: proficiency and chord difficulty are measured without additional effort such as taking tests.
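An illustrative sketch (hypothetical chord encodings and log fields, not the paper's data model) of the two ideas in this abstract: chord difficulty estimated as the error rate in the accumulated practice log, and hierarchical clustering of chords by fingering similarity, here a Jaccard distance between the sets of pressed keys.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

chords = {            # hypothetical encoding: chord name -> MIDI notes pressed
    "C":  {60, 64, 67}, "Am": {57, 60, 64},
    "G":  {55, 59, 62}, "Em": {52, 55, 59},
}
practice_log = [      # hypothetical (chord, played correctly) events from the tool
    ("C", True), ("C", True), ("Am", False), ("Am", True),
    ("G", False), ("G", False), ("Em", True),
]

# Difficulty: observed error rate per chord in the accumulated log.
names = list(chords)
difficulty = {c: float(np.mean([not ok for ch, ok in practice_log if ch == c]))
              for c in names}
print("difficulty:", difficulty)

# Similarity clustering: chords sharing many keys end up in the same group.
def jaccard(a, b):
    return 1 - len(a & b) / len(a | b)

condensed = [jaccard(chords[a], chords[b])
             for i, a in enumerate(names) for b in names[i + 1:]]
groups = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print("clusters:", dict(zip(names, groups)))
```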