• Title/Summary/Keyword: Pattern mining

Search Results: 624

A Study on Behavior Rule Induction Method of Web User Group using 2-tier Clustering (2-계층 클러스터링을 사용한 웹 사용자 그룹의 행동규칙추출방법에 관한 연구)

  • Hwang, Jun-Won;Song, Doo-Heon;Lee, Chang-Hoon
    • The KIPS Transactions:PartD
    • /
    • v.15D no.1
    • /
    • pp.139-146
    • /
    • 2008
  • It is very important to identify useful web user groups and induce their behavior patterns in the eCRM domain. When inducing user groups with similar inclinations, the reliability of the groups decreases because of the uncertainty in online user data. In this paper, we apply 2-tier clustering, which uses the outcome of interaction with data from the other tier. We also propose a method that induces user behavior patterns from a cluster, and compare our method with C4.5.

QP-DTW: Upgrading Dynamic Time Warping to Handle Quasi Periodic Time Series Alignment

  • Boulnemour, Imen;Boucheham, Bachir
    • Journal of Information Processing Systems
    • /
    • v.14 no.4
    • /
    • pp.851-876
    • /
    • 2018
  • Dynamic time warping (DTW) is the main algorithm for time series alignment. However, it is unsuitable for quasi-periodic time series. At present, apart from the recently published shape exchange algorithm (SEA) method and its derivatives, no other technique can handle the alignment of this very complex type of time series. In this work, we propose a novel algorithm that combines the advantages of the SEA and DTW methods. Our main contribution consists in elevating DTW's alignment power from the lowest level (Class A, non-periodic time series) to the highest level (Class C, multiple-period time series each containing a different number of periods), according to the recent classification of time series alignment methods proposed by Boucheham (Int J Mach Learn Cybern, vol. 4, no. 5, pp. 537-550, 2013). The new method (quasi-periodic dynamic time warping [QP-DTW]) was compared to both the SEA and DTW methods on electrocardiogram (ECG) time series selected from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) public database and from the PTB Diagnostic ECG Database. Results show that the proposed algorithm is more effective than DTW and SEA in terms of alignment accuracy, on both qualitative and quantitative levels. QP-DTW would therefore potentially be more suitable for many applications related to time series (e.g., data mining, pattern recognition, search/retrieval, motif discovery, classification, etc.).
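
The classic DTW baseline that QP-DTW builds on can be sketched in a few lines; this is a minimal illustration of standard DTW only, not the QP-DTW algorithm itself:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences (classic DTW)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimum cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]
```

Because the warping path can stretch one sequence against the other, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0: the repeated 2 is absorbed at no cost, which is exactly what makes plain DTW struggle once whole periods, not just samples, repeat.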

Experimental and numerical investigation of the effect of sample shapes on point load index

  • Haeri, Hadi;Sarfarazi, Vahab;Shemirani, Alireza Bagher;Hosseini, Seyed Shahin
    • Geomechanics and Engineering
    • /
    • v.13 no.6
    • /
    • pp.1045-1055
    • /
    • 2017
  • Tensile strength is considered a key property for characterizing rock material in engineering projects. It is determined by direct and indirect methods, and the point load test is a useful method for estimating the tensile strength of rocks. In this paper, the effect of sample shape on the point load index of gypsum is investigated by PFC2D simulation. The PFC model was first calibrated against Brazilian experimental data to ensure the conformity of the simulated numerical model's response. In the second step, nineteen models with different shapes were prepared and tested under point load. According to the results, the point load strength index increases as model size increases, while particle shape has no major effect on tensile strength. The dominant failure pattern for the numerical models is breaking of the model into two pieces. A criterion for determining the tensile strength of gypsum was also derived numerically and cross-checked against the results of experimental point load tests.

Simulation of the tensile behaviour of layered anisotropy rocks consisting internal notch

  • Sarfarazi, Vahab;Haeri, Hadi;Ebneabbasi, P.;Bagheri, Kourosh
    • Structural Engineering and Mechanics
    • /
    • v.69 no.1
    • /
    • pp.51-67
    • /
    • 2019
  • In this paper, the anisotropy of the tensile behaviour of layered rocks containing an internal notch has been investigated using particle flow code. For this purpose, PFC2D was first calibrated using the Brazilian tensile strength. Secondly, Brazilian test models containing bedding layers were simulated numerically. The layer thickness was 10 mm and the layer angles were $90^{\circ}$, $75^{\circ}$, $60^{\circ}$, $45^{\circ}$, $30^{\circ}$, $15^{\circ}$ and $0^{\circ}$. The strength of the bedding interface was very high. Each model contained one internal notch; notch lengths were 1 cm, 2 cm and 4 cm, and notch angles were $60^{\circ}$, $45^{\circ}$, $30^{\circ}$, $15^{\circ}$ and $0^{\circ}$. In total, 90 models were tested. The results show that the failure pattern is affected by notch orientation and notch length, while the layer angle has no effect on the failure pattern. The Brazilian tensile strength is likewise affected by notch orientation and notch length.

Anomaly Detection Method Based on The False-Positive Control (과탐지를 제어하는 이상행위 탐지 방법)

  • 조혁현;정희택;김민수;노봉남
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.13 no.4
    • /
    • pp.151-159
    • /
    • 2003
  • As the Internet becomes widespread, intrusion detection systems are needed to protect computer systems from intrusions comprehensively. We propose an intrusion detection method that identifies and controls the contradictions in self-explanation that arise during the profiling process of anomaly detection. Because the association method can create many patterns during profiling, we present an effective application plan based on clustering the rules. Finally, we propose a similarity function that decides, using the clustered pattern database, whether a user's behavior is anomalous.
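
The abstract does not give the similarity function itself, so the following is a hypothetical sketch of the general idea: compare a user's session against the clustered profile patterns and flag it when no pattern matches well. Jaccard similarity and the 0.3 threshold are illustrative assumptions, not the paper's definitions:

```python
def jaccard(a, b):
    """Jaccard similarity between two item collections (stand-in measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def is_anomalous(session_items, profile_patterns, threshold=0.3):
    """Flag a session as anomalous if it matches no clustered profile pattern well."""
    best = max((jaccard(session_items, p) for p in profile_patterns), default=0.0)
    return best < threshold
```

Clustering the mined rules first keeps `profile_patterns` small, so each incoming session is compared against a handful of cluster representatives rather than every raw rule, which is the false-positive control the paper aims at.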

Sentiment Dictionary Construction Based on Reason-Sentiment Pattern Using Korean Syntax Analysis (한국어 구문분석을 활용한 이유-감성 패턴 기반의 감성사전 구축)

  • Woo Hyun Kim;Heejung Lee
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.142-151
    • /
    • 2023
  • Sentiment analysis is a method used to comprehend feelings, opinions, and attitudes in text, and it is essential for evaluating consumer feedback and social media posts. However, creating the sentiment dictionaries necessary for this analysis is complex and time-consuming because people express their emotions differently depending on the context and domain. In this study, we propose a new method for simplifying this procedure. We utilize syntax analysis of the Korean language to identify and extract sentiment words based on the Reason-Sentiment Pattern, which distinguishes between words expressing feelings and words explaining why those feelings are expressed, making it applicable across contexts and domains. We also define sentiment words as those with clear polarity even when used independently, and exclude words whose polarity varies with context and domain. This approach enables the extraction of explicit sentiment expressions, enhancing the accuracy of sentiment analysis at the attribute level. Our methodology, validated on Korean cosmetics review datasets from Korean online shopping malls, demonstrates how a sentiment dictionary focused solely on clear-polarity words can provide valuable insights for product planners: understanding the polarity of, and reasons behind, specific attributes enables improving product weaknesses and emphasizing strengths. This approach not only reduces dependency on extensive sentiment dictionaries but also offers high accuracy and applicability across various domains.

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in data mining is the size of the data, as most data sets now have huge volumes. Streams of data are normally accumulated into data storages or databases; transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size; others are lost as soon as they are created because they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in data mining. Stream data is a data set accumulated into storage from a data source continuously, and in many cases it becomes increasingly large over time. Mining information from this massive data consumes many resources, such as storage, money, and time, and these characteristics make it difficult and expensive to store all the stream data accumulated over time. Conversely, if one uses only recent or partial data to mine information or patterns, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns, in the form of a rule set, over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One main advantage of this method is that it takes much smaller storage space than the traditional method of saving the whole data set. Another is that the accumulated rule set is used directly as a prediction model: a prompt response to user requests is possible at any time because the rule set is always ready to make decisions. This makes real-time decision making possible, which is the greatest advantage of this method.
Based on theories of ensemble approaches, combining many different models can produce a prediction model with better performance. The consolidated rule set covers the entire data set, while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that the characteristics of the data vary over time: the indexes fluctuate whenever an event influences the stock market, so the variance of each variable is large compared to that of a homogeneous data set, and prediction is naturally much more difficult. This study tests two general mining approaches and compares their prediction performance with the suggested method. The first approach induces a rule set from the most recent data to predict new data; the second induces a rule set from all the data accumulated from the beginning every time a new prediction is needed. Neither performs as well as the accumulated rule set method. Furthermore, the study experiments with different prediction models: the first builds a prediction model using only the more important rule sets, while the second uses all the rule sets, assigning weights to rules based on their performance. The second approach shows better performance. The experiments also show that the suggested method can be an efficient approach to mining information and patterns from stream data. One limitation is that its application here is bounded to stock market data; a more dynamic real-time stream data set would be desirable.
Another open problem is that as the number of rules increases over time, special rules such as redundant or conflicting rules must be managed efficiently.
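
The accumulate-and-weight scheme described above can be sketched as follows. All names and the simple additive weight update are illustrative assumptions; the paper's actual rule induction and weighting are not specified in the abstract:

```python
class AccumulatedRuleModel:
    """Master rule set accumulated from stream chunks; rules vote by weight."""

    def __init__(self):
        self.rules = []  # each entry: [predicate, label, weight]

    def add_rules(self, mined):
        """Merge rules mined from a new data chunk into the master set."""
        for pred, label in mined:  # mined: iterable of (predicate, label)
            self.rules.append([pred, label, 1.0])

    def predict(self, x):
        """Weighted vote over every rule whose predicate fires on x."""
        votes = {}
        for pred, label, w in self.rules:
            if pred(x):
                votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get) if votes else None

    def update_weights(self, x, true_label, lr=0.1):
        """Reward rules that fired correctly, penalize rules that misfired."""
        for rule in self.rules:
            pred, label, _w = rule
            if pred(x):
                rule[2] = rule[2] + lr if label == true_label else max(rule[2] - lr, 0.0)
```

Only the rule set is stored, never the raw stream, which is the storage saving the study claims; keeping all rules but down-weighting poor ones corresponds to the second, better-performing model in the experiments.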

Advanced Improvement for Frequent Pattern Mining using Bit-Clustering (비트 클러스터링을 이용한 빈발 패턴 탐사의 성능 개선 방안)

  • Kim, Eui-Chan;Kim, Kye-Hyun;Lee, Chul-Yong;Park, Eun-Ji
    • Journal of Korea Spatial Information System Society
    • /
    • v.9 no.1
    • /
    • pp.105-115
    • /
    • 2007
  • Data mining extracts interesting knowledge from large databases. Among the numerous data mining techniques, research is primarily concentrated on clustering and association rules. Clustering, an active research topic, mainly deals with analyzing spatial and attribute data, while association rule techniques deal with identifying frequent patterns. An improved Apriori algorithm using bit-clustering already exists. In an effort to find an alternative that improves on Apriori, we investigated FP-Growth and discuss the possibility of adopting bit-clustering to solve FP-Growth's problems. FP-Growth using bit-clustering demonstrated better performance than the existing method. We used chess data, a standard benchmark in pattern mining evaluation, in our experiments, building FP-Trees with different minimum support values. For high minimum support values, the results were similar to those of the existing technique; in other cases, the proposed technique performed better. As a result, the proposed technique is considered to yield higher performance. In addition, a method to apply bit-clustering to GML data is proposed.
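
The minimum-support threshold that drives both Apriori and FP-Growth can be illustrated with a brute-force sketch; real FP-Growth avoids this exhaustive enumeration via the FP-Tree, so this is only the definition of the problem, not the paper's algorithm:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return every itemset whose support (fraction of transactions
    containing it) meets min_support. Brute force, for illustration only."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, len(items) + 1):
            for combo in combinations(items, k):
                counts[combo] = counts.get(combo, 0) + 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}
```

Raising `min_support` prunes the output sharply, which is why the paper sees little difference between methods at high thresholds: almost nothing survives, so there is little work for either algorithm to save.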

Text-mining Techniques for Metabolic Pathway Reconstruction (대사경로 재구축을 위한 텍스트 마이닝 기법)

  • Kwon, Hyuk-Ryul;Na, Jong-Hwa;Yoo, Jae-Soo;Cho, Wan-Sup
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.4
    • /
    • pp.138-147
    • /
    • 2007
  • A metabolic pathway is a series of chemical reactions occurring within a cell and can be used for drug development and for understanding life phenomena. Many biologists are trying to extract metabolic pathway information from a huge body of literature for their studies of metabolic-circuit regulation. We propose a text-mining technique based on keywords and patterns. The proposed technique uses a web robot to collect papers and stores them in a local database. We use the Gene Ontology to increase the compound recognition rate and the NCBI Tokenizer library to recognize useful information without destroying compound names. Furthermore, we obtain useful sentence patterns representing metabolic pathways from papers and from the KEGG database. We extracted 66 patterns from 20,000 documents on glycosphingolipid species from KEGG, a representative metabolic database, and verified our system on nineteen compounds of the glycosphingolipid species. The results show a recall of 95.1%, a precision of 96.3%, and a processing time of 15 seconds. The proposed text-mining system is expected to be used for metabolic pathway reconstruction.
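
The abstract does not list the 66 sentence patterns, so the following is a toy illustration of pattern-based extraction with one invented pattern ("X is converted to Y"); the pattern, function names, and output form are all assumptions:

```python
import re

# One hypothetical reaction pattern standing in for the paper's 66.
PATTERN = re.compile(r"(\w[\w-]*) is converted (?:in)?to (\w[\w-]*)")

def extract_reactions(sentences):
    """Return (substrate, product) pairs matched by the sample pattern."""
    pairs = []
    for s in sentences:
        m = PATTERN.search(s)
        if m:
            pairs.append((m.group(1), m.group(2)))
    return pairs
```

In the full system, compound names recognized via the Gene Ontology would be matched instead of bare `\w` tokens, so multi-word chemical names survive tokenization intact.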

Utilizing the Effect of Market Basket Size for Improving the Practicality of Association Rule Measures (연관규칙 흥미성 척도의 실용성 향상을 위한 장바구니 크기 효과 반영 방안)

  • Kim, Won-Seo;Jeong, Seung-Ryul;Kim, Nam-Gyu
    • The KIPS Transactions:PartD
    • /
    • v.17D no.1
    • /
    • pp.1-8
    • /
    • 2010
  • Association rule mining techniques enable us to acquire knowledge about sales patterns among individual items from voluminous transactional data. One of the major purposes of association rule mining is to use the acquired knowledge for marketing strategies such as catalogue design, cross-selling, and shop allocation. However, extracting only the actionable and profitable knowledge from the tremendous number of discovered patterns requires much time and cost. In the currently available literature, a number of interest measures have been devised to accelerate and systematize pattern evaluation. Unfortunately, most such measures, including support and confidence, are prone to yielding impractical results because they are calculated only from the sales frequencies of items. For instance, traditional measures cannot differentiate between purchases in a small basket and those in a large shopping cart, even though mutually irrelevant items are much more likely to appear together in a large cart; some adjustment should therefore be made for basket size. In contrast to previous approaches, we consider market basket size when calculating interest measures. Because the devised measure assigns different weights to individual purchases according to their basket sizes, we expect it to minimize the distortion of results caused by accidental patterns. Additionally, we performed intensive computer simulations under various environments, along with real case analyses, to verify the correctness and consistency of the devised measure.
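
One simple way to realize the basket-size adjustment described above is to let each co-occurrence count inversely to its basket size; note this exact weighting is an illustrative assumption, not the measure devised in the paper:

```python
def weighted_support(itemset, baskets):
    """Support where each containing basket contributes 1/|basket|:
    co-occurrence in a small basket counts for more than in a big cart."""
    itemset = set(itemset)
    total = 0.0
    for basket in baskets:
        if itemset <= set(basket):
            total += 1.0 / len(set(basket))  # large cart -> smaller weight
    return total / len(baskets)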