• Title/Abstract/Keywords: large database


An Efficient Algorithm for Updating Discovered Association Rules in Data Mining

  • 김동필;지영근;황종원;강맹규
    • 산업경영시스템학회지 / Vol. 21, No. 45 / pp. 121-133 / 1998
  • This study suggests an efficient algorithm for updating discovered association rules in a large database. A database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. FUP and DMI efficiently update strong association rules over the whole updated database by reusing the information of the old large item-sets, and both employ a pruning technique to reduce the database size during the update process. The algorithm suggested in this study likewise updates strong association rules over the whole updated database by reusing the old large item-sets, but it generates all candidate item-sets at once from the incremental database, in view of the fact that it is difficult to find the new set of large item-sets in the whole updated database after an incremental database has been added to the original one. This method of generating candidate item-sets differs from that of FUP and DMI. After the candidate item-sets are generated, each item-set that is large in the incremental database is counted against the original database in a single scan, so its support in the whole updated database is obtained and the complete set of large item-sets is found. The suggested algorithm does not use a pruning technique to reduce the database size during the update, and as a result it updates the discovered large item-sets quickly and efficiently.

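Below is a minimal sketch of the counting flow this abstract describes: generate the whole candidate set at once from the incremental database, then update each candidate's support with a single scan of the original database. All names are illustrative, the itemset length is capped for brevity, and the reuse of the old large item-sets' stored supports is elided.

```python
from itertools import combinations
from collections import Counter

def subsets_of(transaction, max_len):
    """Enumerate every non-empty itemset of a transaction up to max_len."""
    items = sorted(transaction)
    for k in range(1, max_len + 1):
        yield from combinations(items, k)

def update_large_itemsets(original_db, increment_db, min_support, max_len=3):
    """Recompute the large itemsets after increment_db is appended."""
    # Step 1: generate the whole candidate set at once from the increment.
    inc_counts = Counter()
    for t in increment_db:
        inc_counts.update(subsets_of(t, max_len))
    # Step 2: a single scan of the original database updates each
    # candidate's support there.
    orig_counts = Counter()
    for t in original_db:
        ts = set(t)
        orig_counts.update(c for c in inc_counts if ts.issuperset(c))
    total = len(original_db) + len(increment_db)
    return {c: inc_counts[c] + orig_counts[c]
            for c in inc_counts
            if inc_counts[c] + orig_counts[c] >= min_support * total}

original = [{"a", "b"}, {"a", "c"}, {"b", "c"}]
increment = [{"a", "b"}, {"a", "b", "c"}]
print(update_large_itemsets(original, increment, min_support=0.5))
```

Skipping the pruning step trades a larger candidate set for fewer passes over the data, which is the trade-off the abstract claims pays off.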

Partition Algorithm for Updating Discovered Association Rules in Data Mining

  • 이종섭;황종원;강맹규
    • 산업경영시스템학회지 / Vol. 23, No. 54 / pp. 1-11 / 2000
  • This study suggests a partition algorithm for updating the discovered association rules in a large database. A database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. The partition algorithm updates strong association rules efficiently over the whole updated database by reusing the information of the old large itemsets. The algorithm suggested in this study scans the incremental database, in view of the fact that it is difficult to find the new set of large itemsets in the whole updated database after an incremental database has been added to the original one. This method of generating large itemsets differs from that of FUP (Fast Update) and KDP (Kim Dong Pil).

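The classical partition idea the abstract builds on can be sketched as follows: itemsets that are large in some partition form the global candidate set, and one final counting scan establishes true supports. Treating the increment as one more partition gives the update behaviour; the paper's actual variant may differ in its details.

```python
from itertools import combinations
from collections import Counter

def locally_large(partition, min_support, max_len=3):
    """Itemsets whose support clears the threshold within one partition."""
    counts = Counter()
    for t in partition:
        for k in range(1, max_len + 1):
            counts.update(combinations(sorted(t), k))
    return {s for s, n in counts.items() if n >= min_support * len(partition)}

def partition_mine(partitions, min_support):
    # Pass 1: the union of locally large itemsets is the candidate set;
    # any globally large itemset must be large in at least one partition.
    candidates = set()
    for part in partitions:
        candidates |= locally_large(part, min_support)
    # Pass 2: one counting scan over all partitions gives true supports.
    total = sum(len(p) for p in partitions)
    counts = Counter()
    for part in partitions:
        for t in part:
            ts = set(t)
            counts.update(c for c in candidates if ts.issuperset(c))
    return {c: n for c, n in counts.items() if n >= min_support * total}

# An update treats the increment as simply one more partition:
original = [{"a", "b"}, {"a", "c"}, {"b", "c"}]
increment = [{"a", "b"}, {"a", "b"}]
print(partition_mine([original, increment], min_support=0.5))
```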

Frequent Patterns Mining using only one-time Database Scan

  • 채덕진;김룡;이용미;황부현;류근호
    • 정보처리학회논문지D / Vol. 15D, No. 1 / pp. 15-22 / 2008
  • In this paper, we propose an efficient algorithm that can generate frequent itemsets with a single database scan. The proposed algorithm builds a bipartite graph that captures the relationships between the frequent items and the transactions containing them, and uses this graph to extract frequent itemsets without generating candidate itemsets. The bipartite graph is constructed during the scan of the large transaction database that finds the frequent items; its edges connect each frequent item to the transactions to which it belongs. In effect, the graph is an index that makes the item-transaction relationships, which are otherwise hard to recover from a large database, easy to look up. Because the proposed method performs only one database scan and generates no candidate itemsets, it finds frequent itemsets faster than existing methods.
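
The item-to-transaction edges of such a bipartite graph amount to per-item transaction-id sets, so frequent itemsets can be grown by intersecting those sets, with no candidate generation. Below is a minimal sketch under that reading; the paper's exact extraction step may differ.

```python
from collections import defaultdict

def mine_one_scan(db, min_count):
    # The single scan: link every item to the transactions containing it.
    tids = defaultdict(set)
    for tid, transaction in enumerate(db):
        for item in transaction:
            tids[item].add(tid)
    frequent = {i: t for i, t in tids.items() if len(t) >= min_count}

    results = {}
    def grow(prefix, prefix_tids, tail):
        # Extend the current itemset by intersecting transaction-id sets.
        for i, item in enumerate(tail):
            shared = prefix_tids & frequent[item]
            if len(shared) >= min_count:
                itemset = prefix + (item,)
                results[itemset] = len(shared)
                grow(itemset, shared, tail[i + 1:])

    order = sorted(frequent)
    for i, item in enumerate(order):
        results[(item,)] = len(frequent[item])
        grow((item,), frequent[item], order[i + 1:])
    return results

db = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]
print(mine_one_scan(db, min_count=2))
```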

An Evaluation of Access Performance of STEP-based CAD Database

  • 김준환;한순홍
    • 산업공학 / Vol. 17, No. 2 / pp. 226-232 / 2004
  • In shipbuilding, data sharing is one of the crucial issues. Recently, for collaborative design, ship structural CAD systems have adopted databases as their primary storage. A database is useful for handling the large amount of design information shared among heterogeneous design departments and design stages. To build a database-based CAD system, either an object-oriented database (OODB) or an object-relational database (ORDB) can be used, and choosing the proper one is important because the CAD system's performance depends mainly on the access performance of its database. In this research, using a prototype CAD system from earlier work, the access performance of OODB and ORDB from the CAD system was evaluated. A STEP application protocol was used as the database schema, and experiments were made with queries by property and queries by region. The results give some idea of how to choose a database for CAD systems.
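
The two query shapes compared in the experiment can be illustrated with a small timing harness. Everything below is invented for illustration: sqlite3 stands in for the ORDB/OODB back ends, and the table models nothing from the actual STEP application protocol schema.

```python
import sqlite3
import time

def timed(conn, sql, params=()):
    """Run a query and return (row count, elapsed seconds)."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    return len(rows), time.perf_counter() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plate (id INTEGER, thickness REAL, x REAL, y REAL)")
conn.executemany("INSERT INTO plate VALUES (?, ?, ?, ?)",
                 [(i, i % 50, i % 100, i % 100) for i in range(100_000)])

# Query by property: filter on an attribute value.
n, t = timed(conn, "SELECT id FROM plate WHERE thickness = ?", (25,))
print(f"by property: {n} rows in {t:.4f}s")

# Query by region: filter on a spatial bounding box.
n, t = timed(conn,
             "SELECT id FROM plate WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
             (10, 20, 10, 20))
print(f"by region:   {n} rows in {t:.4f}s")
```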

An Efficient Face Recognition using Feature Filter and Subspace Projection Method

  • Lee, Minkyu;Choi, Jaesung;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / Vol. 2, No. 2 / pp. 64-66 / 2015
  • Purpose: In this paper we propose a cascade of a feature filter and a subspace projection method for rapid face recognition on large-scale, high-dimensional face databases. Materials and Methods: The relevant features are selected from the large feature set using the Fast Correlation-Based Filter method. After feature selection, they are projected into a discriminant subspace using Principal Component Analysis or Linear Discriminant Analysis. This cascade reduces the time complexity without significant degradation of performance. Results: In our experiments, the ORL database and the Extended Yale Face Database B were used for evaluation. On the ORL database, processing was approximately 30 times faster than the typical approach with a recognition rate of 94.22%, and on the Extended Yale Face Database B it was approximately 300 times faster with a recognition rate of 98.74%. Conclusion: The recognition rate and time complexity of the proposed method are suitable for a real-time face recognition system on large-scale, high-dimensional face databases.
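
The shape of the cascade (a cheap filter first, subspace projection second, then a simple matcher) can be sketched with scikit-learn. FCBF itself is not available there, so mutual-information ranking stands in for the filter stage; scikit-learn's Olivetti faces are the ORL database the paper evaluates on.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()  # the ORL/AT&T face database
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target,
    random_state=0)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=512),  # filter stage (stand-in for FCBF)
    PCA(n_components=64),                     # subspace projection stage
    KNeighborsClassifier(n_neighbors=1),      # simple matcher
)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```

The point of the ordering is that the cheap filter shrinks the feature set before the comparatively expensive projection is fit, which is where the reported speed-up comes from.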

Development of the design methodology for large-scale database based on MongoDB

  • Lee, Jun-Ho;Joo, Kyung-Soo
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 11 / pp. 57-63 / 2017
  • Big data, which has recently grown sharply, is characterized by continuous generation, large volume, and unstructured formats. Existing relational database technologies are inadequate for such big data because of their limited processing speed and the significant cost of storage expansion. Thus, big data processing technologies, normally based on distributed file systems, distributed database management, and parallel processing, have arisen as core technologies for implementing big data repositories. In this paper, we propose a design methodology for large-scale databases based on MongoDB that extends the information engineering methodology based on the E-R data model.
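
As a taste of what such a design methodology produces, the sketch below maps a hypothetical E-R fragment (an order entity with a 1:1 and a 1:N relationship) onto a single MongoDB document with embedded sub-documents, so the whole aggregate is read without joins. Collection and field names are invented, and a locally running mongod is assumed.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["design_demo"]

# E-R fragment: Order (1)---(1) Customer, Order (1)---(N) OrderLine.
# In an RDBMS this would be three tables; here it is one document.
order = {
    "_id": 1001,
    "customer": {"name": "Hong Gil-dong", "grade": "gold"},  # embedded 1:1
    "lines": [                                               # embedded 1:N
        {"item": "bolt", "qty": 500, "unit_price": 0.12},
        {"item": "nut", "qty": 500, "unit_price": 0.08},
    ],
}
db.orders.insert_one(order)

# The whole aggregate comes back with one query and no join.
print(db.orders.find_one({"lines.item": "bolt"}, {"customer.name": 1}))
```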

GOMS: Large-scale ontology management system using graph databases

  • Lee, Chun-Hee;Kang, Dong-oh
    • ETRI Journal / Vol. 44, No. 5 / pp. 780-793 / 2022
  • Large-scale ontology management is one of the main issues in using ontology data practically. Although many approaches have been proposed for building large-scale ontology management systems on relational database management systems (RDBMSs) or object-oriented DBMSs (OODBMSs), they have several limitations because ontology data structures are intrinsically different from the traditional data structures of RDBMSs and OODBMSs. In addition, users have difficulty using ontology data because many terminologies (ontology nodes) in large-scale ontology data match a given string keyword. Therefore, in this study, we propose a graph database-based ontology management system (GOMS) to manage large-scale ontology data efficiently. GOMS uses a graph DBMS and provides new query templates to help users find key concepts or instances. Furthermore, to run queries with multiple joins and path conditions efficiently, we propose GOMS encoding as a filtering tool and develop hash-based join processing algorithms in the graph DBMS. Finally, we show experimentally that GOMS can process various types of queries efficiently.
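
GOMS's own encoding and join algorithms are not reproduced here, but the general technique the abstract names, hash-based join processing over path conditions, can be sketched on plain (subject, predicate, object) edges.

```python
from collections import defaultdict

edges = [
    ("Person", "subClassOf", "Agent"),
    ("Student", "subClassOf", "Person"),
    ("Agent", "subClassOf", "Thing"),
]

def two_hop(edges, predicate):
    """All (a, c) with a -pred-> b -pred-> c, via a hash join on b."""
    # Build phase: hash the right side on its join key (the subject).
    by_subject = defaultdict(list)
    for s, p, o in edges:
        if p == predicate:
            by_subject[s].append(o)
    # Probe phase: each left edge probes the hash table with its object.
    return [(s, o2)
            for s, p, o in edges if p == predicate
            for o2 in by_subject.get(o, [])]

print(two_hop(edges, "subClassOf"))
# [('Person', 'Thing'), ('Student', 'Agent')]
```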

Maintaining Integrity Constraints in Spatiotemporal Databases

  • Moon Kyung Do;Woo SungKu;Kim ByungCheol;Ryu KeunHo
    • 대한원격탐사학회:학술대회논문집 / Proceedings of ISRS 2004 / pp. 726-729 / 2004
  • Spatiotemporal phenomena are ubiquitous aspects of the real world. In spatial and temporal databases, integrity constraints maintain the semantics of the specific application domain and the relationships between domains when updates occur in the database. Efficient maintenance of data integrity has become a critical problem, since testing the validity of a large number of constraints in a large database after each transaction is an expensive task. In the spatiotemporal domain especially, the data are more complex than in traditional domains and very active. In addition, unified frameworks that handle integrity constraints over both spatial and temporal properties have not been considered. Therefore, a model is needed that maintains integrity constraints in a unified framework, together with enforcement and management techniques to preserve consistency.

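The cost concern raised above is commonly addressed by checking a constraint only against the data an update touches rather than revalidating the whole database. A toy illustration with an invented constraint (one object's valid-time intervals must not overlap):

```python
def violates(existing_rows, new_row):
    """True if new_row's valid time overlaps an existing row of the same
    object; only that object's rows need to be examined, not the table."""
    oid, start, end = new_row
    return any(o == oid and s < end and start < e
               for o, s, e in existing_rows)

table = [("obj1", 0, 10), ("obj1", 10, 20)]
print(violates(table, ("obj1", 15, 25)))  # True: overlaps (10, 20)
print(violates(table, ("obj1", 20, 30)))  # False: intervals only touch
```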

Development of Practical Data Mining Methods for Database Summarization

  • Lee, Do-Heon
    • 정보기술과데이타베이스저널 / Vol. 4, No. 2 / pp. 33-45 / 1998
  • Database summarization is the procedure of obtaining generalized, representative descriptions that express the content of a large database at a glance. We present a top-down summary refinement procedure to discover database summaries. The procedure exploits attribute concept hierarchies that represent ISA relationships among domain concepts. It begins with the most generalized summary and proceeds to find more specialized ones by stepwise refinement. This top-down paradigm has at least two important advantages over previous bottom-up methods. First, it provides a natural way of reflecting the user's own discovery preferences interactively. Second, it does not produce the excessively large intermediate results that make the bottom-up approach hard to apply in practical environments. The proposed procedure can also easily be extended to distributed databases. An information content measure for a database summary is derived in order to identify the more informative summaries among the discovered results.
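
A minimal sketch of the top-down refinement loop, assuming an invented concept hierarchy and a representativeness threshold: the most general summary is emitted first, and a concept is specialized only while its share of the database stays above the threshold.

```python
# ISA hierarchy: parent concept -> list of child concepts (invented).
hierarchy = {
    "any_drink": ["coffee", "tea"],
    "coffee": ["espresso", "latte"],
    "tea": ["green_tea"],
}

def count_covered(db, concept):
    """How many database values fall under a concept (leaf or internal)."""
    children = hierarchy.get(concept)
    if children is None:  # leaf concept: match values directly
        return sum(1 for x in db if x == concept)
    return sum(count_covered(db, c) for c in children)

def refine(db, concept, min_share, depth=0):
    """Emit the summary at `concept`, then specialize while representative."""
    share = count_covered(db, concept) / len(db)
    if share < min_share:
        return  # too specialized: stop refining this branch
    print("  " * depth + f"{concept}: {share:.0%}")
    for child in hierarchy.get(concept, []):
        refine(db, child, min_share, depth + 1)

db = ["espresso", "espresso", "latte", "green_tea", "espresso"]
refine(db, "any_drink", min_share=0.3)
```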

A Method for Distributed Database Processing with Optimized Communication Cost in Dataflow Model

  • 전병욱
    • 인터넷정보학회논문지 / Vol. 8, No. 1 / pp. 133-142 / 2007
  • The processing of large databases is one of the most important technologies in today's information society. Because these huge volumes of information are geographically distributed, distributed processing becomes all the more important. Advances in transmission and data compression are essential for raising the processing speed of large databases, but to maximize their effect one must take into account the execution time required by each task, the amount of data that task produces, and the transfer time needed to move the produced data to another processor or computer for further computation. In this paper, dataflow techniques are used to optimize the processing of large distributed databases, with a vertically layered allocation scheme as the processing method. The basic idea of this scheme is to relocate each process in consideration of the inter-processor communication time. This paper also presents a model that estimates each process's execution time, the size of its output data, and the corresponding transfer time, as needed to realize the technique.

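A hedged sketch of the relocation idea in a vertically layered task graph: keep each task on the processor hosting its heaviest sender unless that processor is already overloaded. The greedy rule and all numbers are invented; the paper instead estimates per-task execution times and transfer volumes with its proposed model.

```python
def assign(layers, data_volume, n_procs):
    """layers: list of [(task, runtime)] lists, topologically ordered.
    data_volume[(u, v)]: data (e.g. MB) task u sends to task v."""
    placement = {}
    load = [0.0] * n_procs
    for layer in layers:
        for task, runtime in layer:
            least = load.index(min(load))
            # Preferred processor: the one hosting the heaviest sender,
            # so the largest transfer becomes a local hand-off.
            senders = [(vol, placement[u])
                       for (u, v), vol in data_volume.items() if v == task]
            preferred = max(senders)[1] if senders else least
            # Fall back to the least-loaded processor when the preferred
            # one is already more than one task's runtime ahead.
            target = preferred if load[preferred] <= load[least] + runtime else least
            placement[task] = target
            load[target] += runtime
    return placement

layers = [[("scan1", 4.0), ("scan2", 4.0)], [("join", 6.0)]]
volume = {("scan1", "join"): 900.0, ("scan2", "join"): 100.0}
print(assign(layers, volume, n_procs=2))  # join lands with scan1
```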