• Title/Summary/Keyword: Frequent Itemset Mining

Search Results: 23

PPFP (Push and Pop Frequent Pattern Mining): A Novel Frequent Pattern Mining Method for Bigdata Frequent Pattern Mining

  • Lee, Jung-Hun;Min, Youn-A
    • KIPS Transactions on Software and Data Engineering / v.5 no.12 / pp.623-634 / 2016
  • Most existing frequent pattern mining methods focus on time efficiency and rely heavily on primary memory. In the era of big data, however, the size of the real-world databases to be mined is growing exponentially, so primary memory is no longer sufficient for mining frequent patterns from large real-world data sets. Disk-based frequent pattern mining methods have been studied to improve scalability, but their processing time is much longer than that of memory-based methods. In this paper, we present PPFP, a novel disk-based approach for mining frequent itemsets from big data that relieves the main-memory bottleneck. PPFP is based on FP-growth, one of the most popular and efficient frequent pattern mining approaches. Mining with PPFP consists of two steps. (1) Constructing an IFP-tree: after constructing the FP-tree, we assign an index number to each node using a novel index-numbering method, and store the indexed FP-tree (IFP-tree) on disk as an IFP-table. (2) Mining frequent patterns with PPFP: frequent patterns are mined by expanding patterns with a stack-based PUSH-POP method (the PPFP method). By using only a small amount of memory for the recursive and time-consuming operations of the mining process, this approach improves both the scalability and the time efficiency of frequent pattern mining, as the reported test results demonstrate.
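The FP-growth foundation described in the abstract can be illustrated with a minimal in-memory sketch: build an FP-tree, then assign every node an index in a single traversal and emit a flat table that could be written to disk. The `Node` class and the preorder indexing below are illustrative assumptions, not the paper's actual index-numbering scheme, whose exact rule the abstract does not specify.

```python
from collections import defaultdict

class Node:
    """FP-tree node: item label, parent link, support count, children."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 1
        self.children = {}

def build_fp_tree(transactions, min_support):
    # First pass: count item frequencies and keep the frequent ones.
    freq = defaultdict(int)
    for t in transactions:
        for item in t:
            freq[item] += 1
    frequent = {i for i, c in freq.items() if c >= min_support}
    # Second pass: insert each transaction with its frequent items
    # ordered by descending frequency, sharing prefixes as FP-growth requires.
    root = Node(None, None)
    root.count = 0
    for t in transactions:
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-freq[i], i))
        node = root
        for item in items:
            if item in node.children:
                node.children[item].count += 1
            else:
                node.children[item] = Node(item, node)
            node = node.children[item]
    return root

def index_nodes(root):
    # Assign each node an index in one traversal and emit a flat
    # (index, item, count) table -- the kind of record an on-disk
    # IFP-table could store.
    table, stack = [], [root]
    while stack:
        node = stack.pop()
        node.index = len(table)
        table.append((node.index, node.item, node.count))
        stack.extend(node.children.values())
    return table
```

For example, four transactions over items a, b, c produce a six-node tree (root included) whose flat table has one row per node.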

Finding Frequent Itemsets based on Open Data Mining in Data Streams

  • Chang, Joong-Hyuk;Lee, Won-Suk
    • The KIPS Transactions:PartD / v.10D no.3 / pp.447-458 / 2003
  • The basic assumption of conventional data mining methodology is that the data set of a knowledge discovery process is fixed and available before the process begins; consequently, this assumption holds only when the static knowledge embedded in a specific data set is the target of mining. In addition, a conventional mining method requires considerable computing time to produce results from a large data set. For these reasons, it is almost impossible to apply such methods to real-time analysis of a data stream, where new transactions are generated continuously and the up-to-date mining result including the newly generated transactions is needed as quickly as possible. In this paper, a new mining concept, open data mining in a data stream, is proposed for this purpose. In open data mining, whenever a transaction is newly generated, the updated mining result over all transactions, including the new one, is obtained instantly. To implement this mechanism efficiently, it is necessary to combine the delayed insertion of newly identified information from recent transactions with the pruning of insignificant information from the mining result of past transactions. The proposed algorithm is analyzed through a series of experiments in order to identify its various characteristics.
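The combination of delayed insertion and pruning described above is in the same spirit as lossy counting over a stream. The sketch below is a generic lossy-counting routine for single items, not the paper's algorithm: the bucket width, the `(count, max-undercount)` bookkeeping, and the reporting threshold are standard lossy-counting choices used only to illustrate the idea.

```python
def lossy_count(stream, support, error):
    # One-pass approximate frequent-item counting. An item is inserted
    # only when first seen (delayed insertion) and is pruned at bucket
    # boundaries once its count becomes insignificant.
    width = round(1 / error)          # bucket width
    counts = {}                       # item -> (count, max undercount)
    n = 0
    for item in stream:
        n += 1
        bucket = (n - 1) // width + 1
        if item in counts:
            c, d = counts[item]
            counts[item] = (c + 1, d)
        else:
            counts[item] = (1, bucket - 1)
        if n % width == 0:            # prune insignificant entries
            counts = {i: (c, d) for i, (c, d) in counts.items()
                      if c + d > bucket}
    # Report items guaranteed to have true frequency >= support - error.
    return {i for i, (c, d) in counts.items()
            if c >= (support - error) * n}
```

With support 0.2 and error 0.05, a stream of 100 items in which `'c'` appears only 10 times reports only the genuinely frequent items.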

Detecting Red-Flag Bidding Patterns in Low-Bid Procurement for Highway Projects with Pattern Mining

  • Le, Chau;Nguyen, Trang;Le, Tuyen
    • International conference on construction engineering and project management / 2022.06a / pp.11-17 / 2022
  • Design-bid-build (DBB) is the most common project delivery method among highway projects. State Highway Agencies (SHAs) usually apply a low-bid approach to select contractors for their DBB projects, and the Federal Highway Administration suggests that SHAs heighten competition among contractors to lower bid prices. These attempts may become ineffective, however, due to collusive bidding arrangements among certain contractors. One common strategy is the rotation of winning bidders within a group of contractors who bid on many of the same projects; such arrangements may also be specific to a particular region or vary over time. Despite the adverse effects of these practices on bidding outcomes, an effective model for detecting red-flag bidding patterns is lacking. This study fills the gap by proposing a novel framework that utilizes pattern mining techniques and statistical tests for unusual pattern detection. A case study with historical data from an SHA is conducted to illustrate the proposed framework.


Frequent Closed Itemset Mining by Using a Space Compression and Efficient Search Technique

  • 박귀정;한영우;이수원
    • Proceedings of the Korean Information Science Society Conference / 2003.04c / pp.392-394 / 2003
  • Association rule mining generally produces a large number of frequent itemsets and association rules, and the generated rules often subsume or duplicate one another. This degrades not only the effectiveness of mining but also the practical usefulness of its results. To address this, frequent closed itemset mining has been proposed, which preserves the same information as association rule mining while reducing the number of generated rules. In this study, we propose a frequent closed itemset mining method that improves on the ARCS algorithm, one of the best-performing association rule mining methods.
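The closed-itemset idea behind this entry can be shown with a tiny filter: an itemset is closed when no proper superset has the same support, so keeping only closed itemsets drops subsumed, redundant results without losing information. This is a minimal sketch of the definition, not the ARCS-based method the paper proposes.

```python
def closed_itemsets(supports):
    # supports: dict mapping an itemset (tuple of items) to its support.
    # An itemset is closed iff no proper superset has equal support.
    closed = {}
    for s, sup in supports.items():
        subsumed = any(set(s) < set(t) and sup == sup_t
                       for t, sup_t in supports.items())
        if not subsumed:
            closed[s] = sup
    return closed
```

For instance, if ('a',), ('b',), and ('a','b') all have support 3, only ('a','b') is closed: the two singletons carry no extra information.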


Probabilistic Models for Local Patterns Analysis

  • Salim, Khiat;Hafida, Belbachir;Ahmed, Rahal Sid
    • Journal of Information Processing Systems / v.10 no.1 / pp.145-161 / 2014
  • Recently, many large organizations have multiple data sources (MDSs) distributed over the branches of an interstate company. Local pattern analysis has become an effective strategy for MDS mining in national and international organizations: the individual datasets are mined to obtain frequent patterns, which are forwarded to a centralized site for global pattern analysis. Various synthesizing models [2,3,4,5,6,7,8,26] have been proposed to build global patterns from the forwarded patterns. The rules synthesized from such forwarded patterns should closely match the mono-mining results (i.e., the results that would be obtained if all of the databases were put together and mined as one). When a pattern is present at a site but fails to satisfy the minimum support threshold, it is not allowed to take part in the synthesizing process; the process can therefore lose interesting patterns that could help a decision-maker make the right decision. For such situations we propose applying a probabilistic model in the synthesizing process, since an adequate choice of probabilistic model can improve the quality of the discovered patterns. In this paper, we perform a comprehensive study of the probabilistic models that can be applied in the synthesizing process, and we select and improve the one that best ameliorates the synthesizing results. Finally, experiments on a public database are presented to demonstrate the efficiency of the proposed synthesizing method.

Memory Improvement Method for Extraction of Frequent Patterns in DataBase

  • Park, In-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.127-133 / 2019
  • Because frequent itemset extraction has so far required pattern searches and traversals of an FP-Tree, the mining data are typically stored in a tree, and searching it consumes CPU time. To overcome these drawbacks, this paper gives each item a location identifier within the transaction data, without relying on a conditional FP-Tree, and converts the transaction data into a two-dimensional position-information look-up table, improving both time and space accessibility. We propose an algorithm based on a mapping between items and their locations that guarantees linear time complexity. Experimental results on data sets obtained from the FIMI repository website show that the proposed method reduces execution time and memory usage considerably.
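A position look-up table of this kind can be approximated by a vertical representation that maps each item to the set of transaction ids containing it; co-occurrence then becomes a set intersection rather than a tree traversal. The sketch below is a generic vertical-format miner (pairs only) under that assumption, since the abstract does not give the paper's exact table layout or mapping scheme.

```python
from itertools import combinations

def vertical_table(transactions):
    # item -> set of transaction ids in which the item occurs
    table = {}
    for tid, t in enumerate(transactions):
        for item in t:
            table.setdefault(item, set()).add(tid)
    return table

def frequent_pairs(transactions, min_support):
    table = vertical_table(transactions)
    frequent = sorted(i for i, tids in table.items()
                      if len(tids) >= min_support)
    pairs = {}
    for a, b in combinations(frequent, 2):
        support = len(table[a] & table[b])  # intersection = co-occurrence
        if support >= min_support:
            pairs[(a, b)] = support
    return pairs
```

No tree is built or traversed: each pair's support is answered directly from the positional sets.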

Analysis of Graph Mining based on Free-Tree (자유트리 기반의 그래프마이닝 기법 분석)

  • YoungSang No;Unil Yun;Keun Ho Ryu;Myung Jun Kim
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.275-278 / 2008
  • Recently, there has been much research on data mining. On transaction datasets, association rules are produced by finding interesting patterns. Within this field, sub-structure mining has attracted growing interest and has been applied to many high-technology areas, but graph mining requires more computing time than itemset mining, so efficient ways to avoid duplication are needed. GASTON is among the best duplication-free graph mining algorithms. This paper analyzes GASTON and discusses expected future work.

Efficient Association Rule Mining based on the SON Algorithm for a Bigdata Platform

  • Nguyen, Giang-Truong;Nguyen, Van-Quyet;Nguyen, Sinh-Ngoc;Kim, Kyungbaek
    • Journal of Digital Contents Society / v.18 no.8 / pp.1593-1601 / 2017
  • In a big data platform, association rule mining applications can bring several benefits. For instance, in an agricultural big data platform, an association rule mining application could recommend specific products for farmers to grow, which could increase their income. The key step of association rule mining is frequent itemset mining, which finds sets of products that frequently occur together. Earlier approaches to this problem, e.g. Apriori, are unsatisfactory because the huge number of possible itemsets can overload memory. To deal with this, the SON algorithm was proposed, which divides the dataset into many smaller chunks and handles them sequentially; on a single machine, however, SON is very time-consuming. In this paper, we present a method for finding association rules on our Hadoop-based big data platform by parallelizing the SON algorithm. The entire association rule mining process, including pre-processing, SON-based frequent itemset mining, and association rule generation, is implemented on the Hadoop-based big data platform. Experiments with a real dataset confirm that the proposed method outperforms a brute-force method.
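The two-pass structure of SON can be sketched in plain Python without Hadoop: pass 1 mines each chunk with a proportionally lowered threshold, exploiting the fact that any globally frequent itemset must be frequent in at least one chunk; pass 2 verifies the union of candidates against the whole dataset. The brute-force local miner below (single items and pairs only) is a simplification for brevity, not the paper's parallel implementation.

```python
from itertools import combinations

def frequent_in_chunk(chunk, min_support):
    # Naive local miner: count single items and pairs in one chunk.
    counts = {}
    for t in chunk:
        items = sorted(set(t))
        for i in items:
            counts[(i,)] = counts.get((i,), 0) + 1
        for p in combinations(items, 2):
            counts[p] = counts.get(p, 0) + 1
    return {s for s, c in counts.items() if c >= min_support}

def son(transactions, min_support, num_chunks=2):
    # Pass 1: candidates from each chunk at a scaled-down threshold.
    size = -(-len(transactions) // num_chunks)   # ceil division
    chunks = [transactions[i:i + size]
              for i in range(0, len(transactions), size)]
    candidates = set()
    for chunk in chunks:
        local_min = max(1, min_support * len(chunk) // len(transactions))
        candidates |= frequent_in_chunk(chunk, local_min)
    # Pass 2: count every candidate over the full dataset; no globally
    # frequent itemset can have been missed (no false negatives).
    result = {}
    for cand in candidates:
        c = sum(1 for t in transactions if set(cand) <= set(t))
        if c >= min_support:
            result[cand] = c
    return result
```

In a MapReduce setting each chunk would be one mapper's input and pass 2 one reduce-side count, which is what makes the algorithm parallelizable.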

Partition Algorithm for Updating Discovered Association Rules in Data Mining

  • 이종섭;황종원;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering / v.23 no.54 / pp.1-11 / 2000
  • This study suggests a partition algorithm for updating previously discovered association rules in a large database. A database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. The partition algorithm updates strong association rules efficiently over the whole updated database by reusing information about the old large itemsets. Because it is difficult to find the new set of large itemsets over the whole updated database after an incremental database has been added to the original one, the proposed algorithm scans only the incremental database. This method of generating large itemsets differs from those of FUP (Fast Update) and KDP (Kim Dong Pil).
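The reuse of old large-itemset information can be sketched for single items: counts of previously frequent items are updated by scanning only the increment, and only items that newly become candidates need one extra pass over the old data. This is a generic FUP-flavoured sketch, not the paper's partition algorithm, whose partitioning details the abstract does not give.

```python
def update_frequent(old_db, old_counts, increment, min_ratio):
    """Update frequent single items after appending `increment` to
    `old_db`. `old_counts` holds counts only for items that were
    frequent in old_db, so their old counts need not be recomputed."""
    total = len(old_db) + len(increment)
    # One scan of the increment updates every count we already know.
    inc_counts = {}
    for t in increment:
        for item in set(t):
            inc_counts[item] = inc_counts.get(item, 0) + 1
    new_counts = dict(old_counts)
    for item, c in inc_counts.items():
        new_counts[item] = new_counts.get(item, 0) + c
    # Items absent from old_counts were infrequent (or unseen) before;
    # only these few candidates require a scan of the old database.
    missing = {i for i in inc_counts if i not in old_counts}
    for t in old_db:
        for item in set(t) & missing:
            new_counts[item] += 1
    threshold = min_ratio * total
    return {i: c for i, c in new_counts.items() if c >= threshold}
```

An item such as `'c'` below was weak in the old database but becomes strong after the update, while `'b'` drops out: exactly the two transitions the abstract warns about.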


Real-time Network Traffic Monitoring using Frequent Itemset Mining

  • Lee, Jae-Woo;Lee, Won-Suk
    • Proceedings of the Korea Information Processing Society Conference / 2008.05a / pp.193-196 / 2008
  • As network infrastructure has developed rapidly, much research has been conducted on applying mining techniques to manage the traffic generated on networks. Existing methods, however, store each individual flow in a DBMS before analyzing it, which imposes an enormous load and makes real-time mining difficult. This paper proposes a method for extracting frequent flows from network flow data generated in real time, using only a limited amount of memory. Memory is used efficiently by managing only frequently occurring flows in an in-memory monitoring tree. Compared with existing methods, the proposed method can monitor very-high-bandwidth traffic in real time with little system load.