• Title/Summary/Keyword: large database

An Efficient Algorithm for Updating Discovered Association Rules in Data Mining (데이터 마이닝에서 기존의 연관규칙을 갱신하는 효율적인 앨고리듬)

  • 김동필;지영근;황종원;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering / v.21 no.45 / pp.121-133 / 1998
  • This study suggests an efficient algorithm for updating discovered association rules in a large database. A database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. FUP and DMI update strong association rules efficiently over the whole updated database by reusing the information of the old large itemsets, and both use a pruning technique to reduce the database size during the update process. The algorithm suggested in this study also reuses the information of the old large itemsets, but, because it is difficult to find the new set of large itemsets over the whole updated database once an incremental database has been added to the original database, it generates the whole set of candidate itemsets at once from the incremental database. This way of generating candidate itemsets differs from that of FUP and DMI. After the candidate itemsets are generated, the original database is scanned for every candidate itemset that is large in the incremental database and the support of that itemset is updated, so that all large itemsets in the whole updated database are found. The suggested algorithm uses no pruning technique to reduce the database size during the update and, as a result, updates the discovered large itemsets quickly and efficiently.
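
A minimal sketch of the increment-driven update step described above, assuming transactions are Python sets and support is a fraction of all transactions. The helper names, the max_len bound on itemset size, and the choice to test only candidates that are large in the increment are illustrative simplifications, not the paper's exact algorithm (which also reuses the old large itemsets).

```python
from itertools import combinations

def itemsets_in(transaction, max_len):
    """All non-empty itemsets of a transaction, up to max_len items."""
    items = sorted(transaction)
    for k in range(1, max_len + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

def update_large_itemsets(original_db, incremental_db, min_support, max_len=3):
    """Recompute the large itemsets of the whole updated database.

    Candidates are generated in one pass over the incremental database;
    only candidates that are large in the increment trigger a scan of the
    original database, and no pruning of the database itself is performed.
    """
    # 1. Count every candidate itemset in the incremental database.
    inc_counts = {}
    for tx in incremental_db:
        for iset in itemsets_in(tx, max_len):
            inc_counts[iset] = inc_counts.get(iset, 0) + 1

    total = len(original_db) + len(incremental_db)
    min_count_inc = min_support * len(incremental_db)

    # 2. For candidates that are large in the increment, add their support
    #    from the original database and test the global threshold.
    large = {}
    for iset, inc_count in inc_counts.items():
        if inc_count < min_count_inc:
            continue
        orig_count = sum(1 for tx in original_db if iset <= tx)
        support = (inc_count + orig_count) / total
        if support >= min_support:
            large[iset] = support
    return large

# Toy example: three original transactions plus a two-transaction increment.
original_db = [{"a", "b", "c"}, {"a", "c"}, {"b", "d"}]
incremental_db = [{"a", "c"}, {"a", "b", "c"}]
print(update_large_itemsets(original_db, incremental_db, min_support=0.4))
```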

Partition Algorithm for Updating Discovered Association Rules in Data Mining (데이터마이닝에서 기존의 연관규칙을 갱신하는 분할 알고리즘)

  • 이종섭;황종원;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering / v.23 no.54 / pp.1-11 / 2000
  • This study suggests a partition algorithm for updating the discovered association rules in a large database. A database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. The partition algorithm updates strong association rules efficiently over the whole updated database by reusing the information of the old large itemsets. Because it is difficult to find the new set of large itemsets over the whole updated database once an incremental database has been added to the original database, the algorithm suggested in this study scans the incremental database. This way of generating large itemsets differs from that of FUP (Fast Update) and KDP (Kim Dong Pil).
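
A minimal sketch of a partition-style update, assuming the increment is treated as a single partition whose locally large itemsets become the global candidates that one counting pass over the whole updated database then confirms. A full partition scheme would union the locally large itemsets of every partition, including those of the original database; the helper names and thresholds here are illustrative and do not reproduce the paper's algorithm.

```python
from itertools import combinations

def candidate_itemsets(transaction, max_len=3):
    """All non-empty itemsets of a transaction, up to max_len items."""
    items = sorted(transaction)
    for k in range(1, max_len + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

def locally_large(partition, min_support):
    """Itemsets that meet the support threshold within one partition."""
    counts = {}
    for tx in partition:
        for iset in candidate_itemsets(tx):
            counts[iset] = counts.get(iset, 0) + 1
    return {i for i, c in counts.items() if c >= min_support * len(partition)}

def update_with_partition(original_db, incremental_db, min_support):
    """Locally large itemsets of the increment are the global candidates;
    one counting pass over the whole updated database confirms them."""
    candidates = locally_large(incremental_db, min_support)
    whole_db = original_db + incremental_db
    large = {}
    for iset in candidates:
        count = sum(1 for tx in whole_db if iset <= tx)
        if count >= min_support * len(whole_db):
            large[iset] = count / len(whole_db)
    return large

print(update_with_partition(
    [{"a", "b"}, {"a", "c"}, {"b", "c"}],   # original database
    [{"a", "b"}, {"a", "b", "c"}],          # incremental database
    min_support=0.5))
```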

Frequent Patterns Mining using only one-time Database Scan (한 번의 데이터베이스 탐색에 의한 빈발항목집합 탐색)

  • Chai, Duck-Jin;Jin, Long;Lee, Yong-Mi;Hwang, Bu-Hyun;Ryu, Keun-Ho
    • The KIPS Transactions:PartD / v.15D no.1 / pp.15-22 / 2008
  • In this paper, we propose an efficient algorithm that uses only a single database scan. The proposed algorithm builds a bipartite graph that records the relationship between the large items and the transactions containing them; this graph is generated while the database is scanned to find the large items. In a large database it is not easy to locate the transactions that contain a given large item, but in the bipartite graph large items and transactions are linked to each other, so the transactions containing a large item can be traced through the link information. The bipartite graph therefore acts as an indexed database of the inclusion relationship between large items and transactions. Large itemsets can be found quickly because the proposed method performs only one database scan and then traverses the indexed bipartite graph, and it does not generate candidate itemsets.
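
A minimal sketch of the item-to-transaction links, assuming the graph is represented as a mapping from each large item to the set of transaction ids containing it (a tid-set view of the bipartite graph). Larger itemsets are grown by intersecting the linked transaction sets instead of re-scanning the database; the names, thresholds, and growth strategy are illustrative rather than the paper's exact structure or traversal.

```python
from collections import defaultdict

def build_bipartite_index(database, min_support):
    """One scan: link each large item to the ids of the transactions containing it."""
    item_to_tids = defaultdict(set)
    for tid, tx in enumerate(database):
        for item in tx:
            item_to_tids[item].add(tid)
    threshold = min_support * len(database)
    return {item: tids for item, tids in item_to_tids.items()
            if len(tids) >= threshold}

def large_itemsets_from_index(index, min_count):
    """Grow itemsets by intersecting the linked transaction sets,
    so no further database scan is needed."""
    large = {frozenset([item]): tids for item, tids in index.items()}
    frontier = dict(large)
    while frontier:
        next_frontier = {}
        for iset, tids in frontier.items():
            for item, item_tids in index.items():
                if item in iset:
                    continue
                new_set = iset | {item}
                if new_set in next_frontier:
                    continue
                new_tids = tids & item_tids
                if len(new_tids) >= min_count:
                    next_frontier[new_set] = new_tids
        large.update(next_frontier)
        frontier = next_frontier
    return large

db = [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}, {"b", "c"}]
index = build_bipartite_index(db, min_support=0.5)
for iset, tids in large_itemsets_from_index(index, min_count=2).items():
    print(set(iset), "appears in transactions", sorted(tids))
```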

An Evaluation of Access Performance of STEP-based CAD Database (STEP 기반 CAD 데이터베이스의 액세스 성능 평가 실험)

  • Kim, Junh-Wan;Han, Soon-Hung
    • IE interfaces / v.17 no.2 / pp.226-232 / 2004
  • In shipbuilding, data sharing is one of the crucial issues. Recently, for collaborative design, ship structural CAD systems have adopted a database as their primary storage. A database is useful for handling the large amount of design information exchanged among heterogeneous design departments and design stages. To build a database-based CAD system, an object-oriented database (OODB) or an object-relational database (ORDB) can be used, and it is important to select the proper database because CAD system performance depends mainly on database access performance. In this research, using a prototype CAD system from other research, the access performance of an OODB and an ORDB from the CAD system was evaluated. A STEP application protocol was used as the database schema, and experiments were made with query by property and query by region. The results give some idea of how to choose a database for CAD systems.

An Efficient Face Recognition using Feature Filter and Subspace Projection Method

  • Lee, Minkyu;Choi, Jaesung;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.2 / pp.64-66 / 2015
  • Purpose: In this paper we propose a cascaded feature filter and subspace projection method for rapid face recognition on large-scale, high-dimensional face databases. Materials and Methods: The relevant features are selected from the large feature set using the Fast Correlation-Based Filter method. After feature selection, the selected features are projected into a discriminant subspace using Principal Component Analysis or Linear Discriminant Analysis. This cascade reduces the time complexity without significant degradation of performance. Results: In our experiments, the ORL database and the Extended Yale Face Database B were used for evaluation. On the ORL database, processing was approximately 30 times faster than the typical approach with a recognition rate of 94.22%, and on the Extended Yale Face Database B it was approximately 300 times faster with a recognition rate of 98.74%. Conclusion: The recognition rate and time complexity of the proposed method are suitable for real-time face recognition on large-scale, high-dimensional face databases.
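
A minimal sketch of the filter-then-project cascade, assuming the faces are flattened pixel vectors in X with identity labels in y. scikit-learn ships no Fast Correlation-Based Filter, so SelectKBest with mutual information stands in for the filter stage and PCA for the subspace projection; the random data, parameter values, and nearest-neighbor matcher are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))      # 200 face images, 32x32 pixels flattened
y = rng.integers(0, 10, size=200)     # 10 hypothetical identities

pipeline = Pipeline([
    ("filter", SelectKBest(mutual_info_classif, k=256)),  # stage 1: drop weakly relevant features
    ("project", PCA(n_components=32)),                    # stage 2: low-dimensional subspace
    ("classify", KNeighborsClassifier(n_neighbors=1)),    # nearest-neighbor matching
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipeline.fit(X_train, y_train)
print("accuracy on held-out samples:", pipeline.score(X_test, y_test))
```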

Development of the design methodology for large-scale database based on MongoDB

  • Lee, Jun-Ho;Joo, Kyung-Soo
    • Journal of the Korea Society of Computer and Information / v.22 no.11 / pp.57-63 / 2017
  • The recent rapid growth of big data is characterized by the continuous generation of data, large volume, and unstructured formats. Existing relational database technologies are inadequate for handling such big data because of their limited processing speed and the significant cost of expanding storage. Thus, big data processing technologies, which are normally based on distributed file systems, distributed database management, and parallel processing, have arisen as core technologies for implementing big data repositories. In this paper, we propose a design methodology for large-scale databases based on MongoDB by extending the information engineering methodology built on the E-R data model.
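
A minimal sketch of mapping an E-R style relationship onto MongoDB documents, assuming a local MongoDB instance reachable through pymongo. The embed-versus-reference rule shown (embed dependent 1:N children, reference shared entities) is a common design convention used here for illustration, not necessarily the rule set of the proposed methodology.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["design_example"]
db.orders.drop()      # start from clean example collections
db.customers.drop()

# Entity "order" with its dependent "order_line" entities embedded,
# because the lines are always accessed together with their order.
db.orders.insert_one({
    "_id": 1001,
    "customer_id": 42,
    "lines": [
        {"product": "bolt", "qty": 100},
        {"product": "nut", "qty": 100},
    ],
})

# Independent entity "customer" kept in its own collection and referenced,
# because it is shared by many orders and updated on its own.
db.customers.insert_one({"_id": 42, "name": "ACME Shipbuilding"})

order = db.orders.find_one({"_id": 1001})
customer = db.customers.find_one({"_id": order["customer_id"]})
print(customer["name"], "ordered", len(order["lines"]), "line items")
```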

GOMS: Large-scale ontology management system using graph databases

  • Lee, Chun-Hee;Kang, Dong-oh
    • ETRI Journal / v.44 no.5 / pp.780-793 / 2022
  • Large-scale ontology management is one of the main issues in using ontology data practically. Although many approaches have been proposed on top of relational database management systems (RDBMSs) or object-oriented DBMSs (OODBMSs) to develop large-scale ontology management systems, they have several limitations because ontology data structures are intrinsically different from the traditional data structures of RDBMSs or OODBMSs. In addition, users have difficulty using ontology data because many terminologies (ontology nodes) in large-scale ontology data match a given string keyword. Therefore, in this study, we propose a graph database-based ontology management system (GOMS) to efficiently manage large-scale ontology data. GOMS uses a graph DBMS and provides new query templates to help users find key concepts or instances. Furthermore, to run queries with multiple joins and path conditions efficiently, we propose GOMS encoding as a filtering tool and develop hash-based join processing algorithms in the graph DBMS. Finally, we experimentally show that GOMS can process various types of queries efficiently.
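
A minimal sketch of hash-based join processing over ontology edges, assuming the ontology is kept as (subject, predicate, object) triples. It only illustrates the general hash-join idea behind multi-join path queries; the GOMS encoding, the query templates, and the actual in-DBMS algorithms are not reproduced here.

```python
from collections import defaultdict

triples = [
    ("Cat", "subClassOf", "Mammal"),
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("Felix", "instanceOf", "Cat"),
]

def hash_join(left, right, left_key, right_key):
    """Join two lists of triples on the chosen positions using a hash table."""
    buckets = defaultdict(list)
    for row in left:
        buckets[row[left_key]].append(row)       # build phase
    return [(l, r)                               # probe phase
            for r in right
            for l in buckets.get(r[right_key], [])]

# Two-hop path query: ?x subClassOf ?y, ?y subClassOf ?z
edges = [t for t in triples if t[1] == "subClassOf"]
for l, r in hash_join(edges, edges, left_key=2, right_key=0):
    print(l[0], "is (transitively) a subclass of", r[2])
```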

Maintaining Integrity Constraints in Spatiotemporal Databases

  • Moon Kyung Do;Woo SungKu;Kim ByungCheol;Ryu KeunHo
    • Proceedings of the KSRS Conference / 2004.10a / pp.726-729 / 2004
  • Spatiotemporal phenomena are ubiquitous aspects of the real world. In spatial and temporal databases, integrity constraints maintain the semantics of a specific application domain, and the relationships between domains, when updates are made to the database. Efficient maintenance of data integrity has become a critical problem, since testing the validity of a large number of constraints in a large database after each transaction is an expensive task. In the spatiotemporal domain especially, data are more complex than in traditional domains and highly dynamic. In addition, unified frameworks that deal with both spatial and temporal properties when handling integrity constraints have not been considered. Therefore, a model is needed to maintain integrity constraints within a unified framework, together with enforcement and management techniques to preserve consistency.

Development of Practical Data Mining Methods for Database Summarization

  • Lee, Do-Heon
    • The Journal of Information Technology and Database / v.4 no.2 / pp.33-45 / 1998
  • Database summarization is the procedure of obtaining generalized, representative descriptions that express the content of a large database at a glance. We present a top-down summary refinement procedure to discover database summaries. The procedure exploits attribute concept hierarchies that represent ISA relationships among domain concepts. It begins with the most generalized summary and proceeds to find more specialized ones by stepwise refinement. This top-down paradigm has at least two important advantages over previous bottom-up methods. Firstly, it provides a natural way to reflect the user's own discovery preferences interactively. Secondly, it does not produce the excessively large intermediate results that make the bottom-up approach hard to apply in practical environments. The proposed procedure can also be easily extended to distributed databases. An information content measure of a database summary is derived in order to identify the more informative summaries among the discovered results.
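
A minimal sketch of top-down refinement over an attribute concept hierarchy (ISA links). The toy hierarchy, the records, and the rule of expanding the most frequent concept first are illustrative assumptions; the paper's refinement procedure and its information content measure are not reproduced here.

```python
children = {
    "Any": ["Beverage", "Snack"],
    "Beverage": ["Coffee", "Tea"],
    "Snack": ["Cookie", "Chip"],
}
parent = {c: p for p, cs in children.items() for c in cs}

def generalize(value, concepts):
    """Climb the ISA hierarchy until the value maps to a concept in the summary."""
    while value not in concepts:
        value = parent[value]
    return value

def summarize(records, concepts):
    """Count how many records each summary concept covers."""
    summary = {}
    for value in records:
        concept = generalize(value, concepts)
        summary[concept] = summary.get(concept, 0) + 1
    return summary

records = ["Coffee", "Coffee", "Tea", "Cookie", "Chip", "Coffee"]

# Start from the most general summary and specialize it step by step.
concepts = {"Any"}
for _ in range(3):
    summary = summarize(records, concepts)
    print(summary)
    # Refinement step: replace the most frequent concept with its children.
    top = max(summary, key=summary.get)
    if top not in children:
        break
    concepts = (concepts - {top}) | set(children[top])
```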

A Method for Distributed Database Processing with Optimized Communication Cost in Dataflow model (데이터플로우 모델에서 통신비용 최적화를 이용한 분산 데이터베이스 처리 방법)

  • Jun, Byung-Uk
    • Journal of Internet Computing and Services / v.8 no.1 / pp.133-142 / 2007
  • Large-database processing is one of the most important techniques in the information society. Since most large databases are regionally distributed, distributed database processing has come to the fore. Communication and data compression are the basic technologies for large-database processing, and to make the most of them, the execution time of each task, the size of the data, and the communication time between processors should be considered. In this paper, the dataflow scheme and a vertically layered allocation algorithm are used to optimize distributed large-database processing. The basic concept of this method is to rearrange processes while taking the communication time between processors into account. The paper also introduces a measurement model for the execution time, the size of the output data, and the communication time in order to implement the proposed scheme.
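
A minimal sketch of trading execution time against communication time when placing dataflow tasks on two sites. The task graph, the timings, the link bandwidth, and the exhaustive placement search are illustrative assumptions; the vertically layered allocation algorithm and the paper's measurement model are not reproduced here.

```python
tasks = {                      # task -> estimated execution time (seconds)
    "scan_A": 4.0, "scan_B": 3.0, "join": 6.0, "aggregate": 1.0,
}
edges = {                      # (producer, consumer) -> transferred data size (MB)
    ("scan_A", "join"): 200.0,
    ("scan_B", "join"): 50.0,
    ("join", "aggregate"): 5.0,
}
BANDWIDTH_MB_PER_S = 10.0      # assumed inter-site link speed

def total_cost(placement):
    """Execution time of all tasks plus transfer time for cross-site edges."""
    compute = sum(tasks.values())
    comm = sum(size / BANDWIDTH_MB_PER_S
               for (src, dst), size in edges.items()
               if placement[src] != placement[dst])
    return compute + comm

# The scans stay where their data lives; try both sites for the remaining tasks.
best = None
for join_site in ("site1", "site2"):
    for agg_site in ("site1", "site2"):
        placement = {"scan_A": "site1", "scan_B": "site2",
                     "join": join_site, "aggregate": agg_site}
        cost = total_cost(placement)
        if best is None or cost < best[0]:
            best = (cost, placement)

print("best placement:", best[1], "estimated cost:", best[0], "seconds")
```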
