• Title/Summary/Keyword: Association Rule Mining (연관규칙마이닝)


An Efficient Algorithm for Updating Discovered Association Rules in Data Mining (데이터 마이닝에서 기존의 연관규칙을 갱신하는 효율적인 앨고리듬)

  • 김동필;지영근;황종원;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering, v.21 no.45, pp.121-133, 1998
  • This study suggests an efficient algorithm for updating discovered association rules in a large database. Because a database may be updated frequently or occasionally, such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. FUP and DMI update strong association rules efficiently over the whole updated database by reusing the information of the old large item-sets, and both use a pruning technique to reduce the database size during the update. This study likewise updates strong association rules efficiently over the whole updated database by reusing the old large item-sets, but the suggested algorithm generates the whole set of candidate item-sets at once from the incremental database, since it is difficult to find the new large item-sets over the whole updated database after the incremental database has been added to the original one. This method of generating candidate item-sets differs from that of FUP and DMI. After the candidate item-sets are generated, the original database is scanned and the support of each candidate item-set that is large in the incremental database is updated, so the large item-sets of the whole updated database are found. The suggested algorithm does not use a pruning technique to reduce the database size during the update and, as a result, updates the discovered large item-sets quickly and efficiently.

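The incremental update flow described in the abstract above can be pictured with a short, self-contained sketch. This is not the paper's algorithm; it only mirrors the general idea of generating all candidate item-sets from the incremental database in one pass, reusing the support counts of the previously discovered large item-sets, and scanning the original database only for candidates whose old support is unknown. The function names, thresholds, and toy data are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def itemsets_in(transaction, max_len=3):
    """Yield every non-empty itemset (up to max_len items) in a transaction."""
    items = sorted(set(transaction))
    for k in range(1, min(max_len, len(items)) + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

def update_large_itemsets(original_db, increment_db, old_counts,
                          min_support=0.5, max_len=3):
    """Return the large itemsets of original_db + increment_db.

    old_counts maps each previously discovered large itemset to its support
    count in original_db; those counts are reused, so the original database
    is rescanned only for candidates that are new to the increment."""
    # 1. One pass over the increment: count every itemset it contains.
    inc_counts = Counter()
    for t in increment_db:
        for s in itemsets_in(t, max_len):
            inc_counts[s] += 1

    # 2. Candidates with unknown old support need one scan of original_db.
    unknown = [s for s in inc_counts if s not in old_counts]
    orig_counts = Counter({s: old_counts[s] for s in inc_counts if s in old_counts})
    for t in original_db:
        t = set(t)
        for s in unknown:
            if s <= t:
                orig_counts[s] += 1

    # 3. Combine counts; keep itemsets meeting min_support in the whole database.
    n = len(original_db) + len(increment_db)
    return {s: (orig_counts[s] + c) / n for s, c in inc_counts.items()
            if (orig_counts[s] + c) / n >= min_support}

if __name__ == "__main__":
    old_db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
    inc_db = [{"a", "b"}, {"b", "c"}]
    # Large itemsets previously found in old_db (min_support 0.5) and their counts.
    old_large = {frozenset({"a"}): 3, frozenset({"b"}): 2, frozenset({"c"}): 2,
                 frozenset({"a", "b"}): 2, frozenset({"a", "c"}): 2}
    updated = update_large_itemsets(old_db, inc_db, old_large)
    for s in sorted(updated, key=lambda x: (len(x), sorted(x))):
        print(sorted(s), round(updated[s], 2))
```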

Automated Conceptual Data Modeling Using Association Rule Mining (연관규칙 마이닝을 활용한 개념적 데이터베이스 설계 자동화 기법)

  • Son, Yoon-Ho;Kim, In-Kyu;Kim, Nam-Gyu
    • The Journal of Information Systems, v.18 no.4, pp.59-86, 2009
  • Data modeling can be regarded as a series of processes to abstract real-world business concerns. The conceptual modeling phase is often regarded as the most difficult stage in the entire modeling process, because quite different conceptual models may be produced even for similar business domains, based on users' varying requirements and the data modelers' diverse perceptions of those requirements. This implies that an object considered an entity in one domain may be considered an attribute in another, and vice versa. However, many traditional knowledge-based automated database design systems fail to construct appropriate Entity-Relationship Diagrams (ERDs) for a given set of requirements due to the rigid assumption that an object should be classified as an entity if it was classified as an entity in previous applications. In this paper, we propose an alternative automation system that can generate ERDs from business descriptions using an association rule mining technique. Our system differs from traditional ones in that it performs data modeling based only on business descriptions written by domain workers, rather than relying on any kind of knowledge base. Since the proposed system can produce several versions of ERDs from the same business descriptions simultaneously, users can choose the ERD that is most appropriate for their business environment and requirements. We performed a case study on personnel management in a university to evaluate the practicability of the proposed system; the result is summarized in the experiment section.
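
The toy sketch below illustrates only the general idea of reading asymmetric association rules over terms in business descriptions as entity/attribute hints; it is not the proposed system, and the heuristic, thresholds, function names, and sample requirements are all assumptions made for illustration.

```python
from collections import Counter
from itertools import combinations

def suggest_entity_attribute_pairs(sentences, min_support=0.3, min_confidence=0.7):
    """Toy heuristic: mine pairwise association rules over terms that co-occur
    in requirement sentences and read an asymmetric rule A -> B as a hint that
    B could be an entity owning attribute A. Illustrative only."""
    n = len(sentences)
    term_counts = Counter()
    pair_counts = Counter()
    for sent in sentences:
        terms = set(sent.lower().split())
        term_counts.update(terms)
        pair_counts.update(frozenset(p) for p in combinations(sorted(terms), 2))

    suggestions = []
    for pair, c in pair_counts.items():
        if c / n < min_support:
            continue
        a, b = sorted(pair)
        conf_ab = c / term_counts[a]   # confidence of rule a -> b
        conf_ba = c / term_counts[b]   # confidence of rule b -> a
        if conf_ab >= min_confidence and conf_ab > conf_ba:
            suggestions.append((b, a, round(conf_ab, 2)))   # entity b, attribute a
        elif conf_ba >= min_confidence and conf_ba > conf_ab:
            suggestions.append((a, b, round(conf_ba, 2)))   # entity a, attribute b
    return suggestions

if __name__ == "__main__":
    requirements = [
        "employee name",
        "employee name salary",
        "employee salary department",
        "department location",
    ]
    for entity, attribute, conf in suggest_entity_attribute_pairs(requirements):
        print(f"term '{attribute}' may be an attribute of entity '{entity}' (confidence {conf})")
```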

Negative Selection Algorithm based Multi-Level Anomaly Intrusion Detection for False-Positive Reduction (과탐지 감소를 위한 NSA 기반의 다중 레벨 이상 침입 탐지)

  • Kim, Mi-Sun;Park, Kyung-Woo;Seo, Jae-Hyun
    • Journal of the Korea Institute of Information Security & Cryptology, v.16 no.6, pp.111-121, 2006
  • As the Internet grows rapidly, network attack techniques are changing and new attack types are appearing. Existing network-based intrusion detection systems detect well-known attacks, but their false-positive and false-negative rates against unknown attacks are high. In addition, they have difficulty detecting large volumes of network packet data in real time and recognizing and responding to new attack types. Therefore, a method is required that raises the detection rate over large and varied datasets and reduces false positives. In this paper, we propose a method to reduce false positives using a multi-level detection algorithm that combines the multidimensional Apriori algorithm with a modified Negative Selection algorithm. We apply this algorithm to intrusion detection and confirm that it shows good performance.
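
The sketch below illustrates the two ingredients named in the abstract as an assumption-laden stand-in rather than the paper's method: a negative-selection step that keeps only detectors far from normal ("self") traffic patterns, and a second level that suppresses alerts on records that still contain frequent normal co-occurrence patterns (a crude substitute for the multidimensional Apriori step). The feature encoding, thresholds, and names are assumptions.

```python
import random
from collections import Counter
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def generate_detectors(self_set, n_bits=4, n_detectors=6, radius=1, seed=0):
    """Negative selection: keep random bit patterns that are NOT within
    `radius` of any normal ('self') pattern."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(n_bits))
        if all(hamming(cand, s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def frequent_normal_patterns(normal_records, min_support=0.6):
    """Second-level knowledge: (feature index, value) pairs that frequently
    co-occur in normal traffic -- a crude stand-in for the Apriori step."""
    n = len(normal_records)
    counts = Counter()
    for r in normal_records:
        items = {(i, v) for i, v in enumerate(r)}
        for pair in combinations(sorted(items), 2):
            counts[frozenset(pair)] += 1
    return {p for p, c in counts.items() if c / n >= min_support}

def detect(record, detectors, normal_patterns, radius=1):
    """Alert only if a detector matches AND the record does not also contain a
    frequent normal pattern; the second check trims false positives."""
    if not any(hamming(record, d) <= radius for d in detectors):
        return False
    items = {(i, v) for i, v in enumerate(record)}
    return not any(p <= items for p in normal_patterns)

if __name__ == "__main__":
    normal = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0)]
    detectors = generate_detectors(normal)
    patterns = frequent_normal_patterns(normal)
    for record in [(1, 1, 1, 1), (0, 0, 1, 1)]:
        print(record, "->", "alert" if detect(record, detectors, patterns) else "pass")
```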

Group-wise Keyword Extraction of the External Audit using Text Mining and Association Rules (텍스트마이닝과 연관규칙을 이용한 외부감사 실시내용의 그룹별 핵심어 추출)

  • Seong, Yoonseok;Lee, Donghee;Jung, Uk
    • Journal of Korean Society for Quality Management, v.50 no.1, pp.77-89, 2022
  • Purpose: In order to improve the audit quality of a company, an in-depth analysis is required to categorize audit reports, i.e., text documents containing the details of the external audit. This study introduces a systematic methodology for extracting the keywords of each group that determine the differences between groups such as 'audit plan' and 'interim audit', using audit reports collected in the form of text documents. Methods: The first step of the proposed methodology is to preprocess the documents through text mining. In the second step, the documents are classified into groups using machine learning techniques, and the important terms that have a dominant influence on classification performance are extracted. In the third step, association rules are found for each group's documents. In the last step, the final keywords representing the characteristics of each group are extracted by comparing the terms important for classification with the terms representing each group's association rules. Results: This study quantitatively calculates the importance of the vocabulary used in the audit reports based on machine learning, rather than qualitative research methods such as literature review, expert evaluation, or the Delphi technique. The case study shows that the extracted keywords describe the characteristics of each group well. Conclusion: This study is meaningful in that it lays the foundation for quantitative follow-up studies on the key vocabulary of each stage of auditing.
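
The four-step methodology can be pictured with a simplified stand-in: document-frequency gaps between groups replace the machine-learning importance scores, and frequent pairwise co-occurrences replace the per-group association rules; the final keywords are the terms that survive both filters. Everything below (function names, thresholds, sample documents) is assumed for illustration only.

```python
from collections import Counter
from itertools import combinations

def term_sets(docs):
    """One bag-of-terms set per document."""
    return [set(d.lower().split()) for d in docs]

def discriminative_terms(group_a, group_b, min_gap=0.4):
    """Crude stand-in for model-based importance: terms whose document
    frequency differs strongly between the two groups (positive gap favours
    group_a, negative favours group_b)."""
    a, b = term_sets(group_a), term_sets(group_b)
    vocab = set().union(*a, *b)
    gap = {}
    for t in vocab:
        df_a = sum(t in d for d in a) / len(a)
        df_b = sum(t in d for d in b) / len(b)
        if abs(df_a - df_b) >= min_gap:
            gap[t] = round(df_a - df_b, 2)
    return gap

def rule_terms(group_docs, min_support=0.5):
    """Terms taking part in frequent pairwise co-occurrences inside a group,
    a simplified placeholder for the per-group association rules."""
    docs = term_sets(group_docs)
    counts = Counter()
    for d in docs:
        counts.update(frozenset(p) for p in combinations(sorted(d), 2))
    frequent = {p for p, c in counts.items() if c / len(docs) >= min_support}
    return set().union(*frequent) if frequent else set()

def group_keywords(group_a, group_b):
    """Final keywords: discriminative terms that also appear in the group's
    frequent co-occurrence rules."""
    disc = discriminative_terms(group_a, group_b)
    kw_a = {t for t in rule_terms(group_a) if disc.get(t, 0) > 0}
    kw_b = {t for t in rule_terms(group_b) if disc.get(t, 0) < 0}
    return kw_a, kw_b

if __name__ == "__main__":
    audit_plan = [
        "risk assessment schedule scope",
        "materiality scope schedule",
        "risk schedule staffing",
    ]
    interim_audit = [
        "sampling inventory count procedures",
        "inventory count confirmation",
        "sampling confirmation procedures",
    ]
    plan_kw, interim_kw = group_keywords(audit_plan, interim_audit)
    print("audit plan keywords:", sorted(plan_kw))
    print("interim audit keywords:", sorted(interim_kw))
```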

Trends Analysis on Research Articles of the Sharing Economy through a Meta Study Based on Big Data Analytics (빅데이터 분석 기반의 메타스터디를 통해 본 공유경제에 대한 학술연구 동향 분석)

  • Kim, Ki-youn
    • Journal of Internet Computing and Services, v.21 no.4, pp.97-107, 2020
  • This study aims to conduct a comprehensive meta-study from the perspective of content analysis to explore trends in Korean academic research on the sharing economy using big data analytics. A comprehensive meta-analysis methodology can examine the entire body of research results historically and as a whole to illuminate the tendencies and properties of the overall research trend. Academic research related to the sharing economy first appeared in 2008, the year Professor Lawrence Lessig introduced the concept to the world, but research began in earnest in 2013; between 2006 and 2008 in particular, research grew dramatically. To grasp the overall flow of domestic academic research trends, papers from the eight years from 2013 to the present were selected for analysis, focusing on titles, keywords, and abstracts drawn from electronic journal databases. Big data analysis was performed in the order of cleaning, analysis, and visualization of the collected data to derive research trends and insights by year and type of literature. We used Python 3.7 and the Textom analysis tool for data preprocessing, text mining, and frequency metrics for keyword extraction, together with N-gram charts, centrality and social network analysis, and CONCOR clustering visualization based on UCINET6/NetDraw and Textom; the keywords, clustered into eight groups, were used to derive a typology of each research trend. The outcomes of this study will provide useful theoretical insights and guidelines for future studies.
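
The following small sketch covers only two of the steps mentioned in the abstract, keyword frequency counting and degree centrality over a keyword co-occurrence network; it is an assumption-laden stand-in for the Textom/UCINET pipeline, not a reproduction of it, and the sample abstracts are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def keyword_statistics(abstracts, top_n=5):
    """Term frequency over all abstracts and degree centrality in the keyword
    co-occurrence network (one node per term, one edge per pair of terms that
    appear together in at least one abstract)."""
    docs = [set(a.lower().split()) for a in abstracts]
    freq = Counter(t for d in docs for t in d)

    edges = set()
    for d in docs:
        edges.update(frozenset(p) for p in combinations(sorted(d), 2))
    degree = Counter()
    for e in edges:
        for t in e:
            degree[t] += 1

    n_nodes = len(freq)
    centrality = {t: degree[t] / (n_nodes - 1) for t in freq}
    top_central = sorted(centrality.items(), key=lambda x: -x[1])[:top_n]
    return freq.most_common(top_n), top_central

if __name__ == "__main__":
    abstracts = [
        "sharing economy platform regulation",
        "sharing economy accommodation platform",
        "platform trust consumer",
        "sharing economy business model",
    ]
    top_freq, top_central = keyword_statistics(abstracts)
    print("top frequency:", top_freq)
    print("top centrality:", top_central)
```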

IRFP-tree: Intersection Rule Based FP-tree (IRFP-tree(Intersection Rule Based FP-tree): 메모리 효율성을 향상시키기 위해 교집합 규칙 기반의 패러다임을 적용한 FP-tree)

  • Lee, Jung-Hun
    • KIPS Transactions on Software and Data Engineering, v.5 no.3, pp.155-164, 2016
  • For frequent pattern analysis of large databases, tree-based frequent pattern algorithms that can compensate for the disadvantages of the Apriori method have been studied extensively. In a frequent pattern tree, the number of nodes determines memory allocation and also affects memory consumption and the processing speed of the growth phase, so reducing the number of nodes is very important in frequent pattern mining. However, the absolute criterion used to order transaction items when constructing a frequent pattern tree lowers the compression ratio of the tree nodes, and most frequency-based tree construction methods adopt such a criterion. FP-tree is the typical frequent pattern tree structure, an extended prefix-tree structure for storing compressed, crucial information about frequent patterns; to construct the tree, the frequent items of every transaction are sorted according to an absolute criterion, frequency-descending order. CanTree likewise requires an absolute criterion, canonical order, to construct the tree. In this paper, we propose IRFP-tree (Intersection Rule based FP-tree), a novel frequent pattern tree construction method that does not use an absolute criterion. IRFP-tree is built under a new paradigm of intersection rules instead of an absolute ordering criterion; it increases the compression ratio of the tree nodes and reduces the tree construction time, and it has the additional advantage of supporting incremental mining. The reported test results demonstrate the applicability and effectiveness of the proposed approach.
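
For context, the sketch below builds the conventional FP-tree that the abstract treats as its baseline: infrequent items are pruned and each transaction is inserted in global frequency-descending order (the "absolute criterion" IRFP-tree sets out to replace), so that shared prefixes share nodes. It is not the IRFP-tree intersection rule itself; the names, threshold, and toy database are assumptions.

```python
from collections import Counter

class Node:
    """One node of the prefix tree over transaction items."""
    def __init__(self, item=None):
        self.item = item
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_support_count=2):
    """Baseline FP-tree: prune infrequent items, sort each transaction by the
    global frequency-descending order (the 'absolute criterion'), and insert
    it as a path so that shared prefixes share nodes."""
    freq = Counter(item for t in transactions for item in set(t))
    frequent = {i for i, c in freq.items() if c >= min_support_count}

    root = Node()
    for t in transactions:
        items = sorted((i for i in set(t) if i in frequent),
                       key=lambda i: (-freq[i], i))
        node = root
        for item in items:
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root

def dump(node, depth=0):
    """Print the tree, one 'item:count' per node."""
    if node.item is not None:
        print("  " * depth + f"{node.item}:{node.count}")
    for child in node.children.values():
        dump(child, depth + 1)

def count_nodes(node):
    """Node count -- the quantity IRFP-tree aims to reduce."""
    return (node.item is not None) + sum(count_nodes(c) for c in node.children.values())

if __name__ == "__main__":
    db = [["a", "b", "c"], ["a", "b"], ["a", "c", "d"], ["b", "c"]]
    tree = build_fp_tree(db)
    dump(tree)
    print("nodes:", count_nodes(tree))
```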

Classification and Analysis of Data Mining Algorithms (데이터마이닝 알고리즘의 분류 및 분석)

  • Lee, Jung-Won;Kim, Ho-Sook;Choi, Ji-Young;Kim, Hyon-Hee;Yong, Hwan-Seung;Lee, Sang-Ho;Park, Seung-Soo
    • Journal of KIISE:Databases, v.28 no.3, pp.279-300, 2001
  • Data mining plays an important role in the knowledge discovery process, and various existing algorithms are usually selected for the specific purpose of the mining task. Currently, data mining techniques are actively applied in statistics, business, electronic commerce, biology, and medicine, and numerous algorithms are being researched and developed for these applications. In the long run, however, only a few algorithms that are well suited to specific applications and perform well on large databases will survive, so it is reasonable to focus future effort on those algorithms. This paper classifies about 30 existing algorithms into 7 categories: association rules, clustering, neural networks, decision trees, genetic algorithms, memory-based reasoning, and Bayesian networks. We first analyze the systematic hierarchy and characteristics of the algorithms, then present 14 criteria for classifying them along with the classification results based on these criteria. Finally, we identify the best algorithms among comparable algorithms with different features and performance. The results of this paper can be used as a guideline for data mining research as well as for field applications of data mining.
