• Title/Abstract/Keyword: Multiple Database

Search results: 705 items (processing time: 0.022 s)

A Multiple Layered Database Design and Maintenance in Object-Oriented Databases

  • 김남진;신동천
    • 한국정보처리학회논문지 / Vol. 5, No. 1 / pp.11-23 / 1998
  • In databases holding the massive volumes of data common today, finding the desired information demands considerable cost and time, so techniques are needed to search large databases effectively. Among knowledge-extraction tools, the multiple layered database, which is based on attribute-oriented generalization (AOG), is a very useful way to extract knowledge effectively in a variety of situations. This paper proposes a methodology for designing a multiple layered database in an object-oriented data model using the AOG technique. It also presents a dynamic schema-evolution model and an implementation strategy as a way to keep the constructed multiple layered database providing information effectively.
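A rough sketch of the attribute-oriented generalization (AOG) step that a multiple layered database is built on: values of one attribute are replaced by their parents in a concept hierarchy, and tuples that become identical are merged with a count. The hierarchy and rows below are hypothetical illustrations, not data from the paper.

```python
# Concept hierarchy: map specific values to more general concepts.
# These values are made-up examples.
hierarchy = {
    "Seoul": "Korea", "Busan": "Korea", "Tokyo": "Japan", "Osaka": "Japan",
}

def generalize(tuples, attr_index):
    """Replace one attribute's values by their parent concept and
    merge duplicate tuples, keeping a count of merged rows."""
    merged = {}
    for t in tuples:
        g = list(t)
        g[attr_index] = hierarchy.get(g[attr_index], g[attr_index])
        key = tuple(g)
        merged[key] = merged.get(key, 0) + 1
    return merged

rows = [("Seoul", "DB"), ("Busan", "DB"), ("Tokyo", "AI")]
# Generalizing city -> country merges the two Korean rows into one.
print(generalize(rows, 0))
```

Each layer of the multiple layered database corresponds to one such generalization pass over the layer below it.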


Content-based Image Retrieval Using Fuzzy Multiple Attribute Relational Graph

  • 정성환
    • 정보처리학회논문지B / Vol. 8B, No. 5 / pp.533-538 / 2001
  • This study extends the single node attribute of the FARG (Fuzzy Attribute Relational Graph) to multiple attributes for real images, and proposes a new image retrieval method based on a multiple attribute relational graph that considers not only node labels but also color, texture, and spatial relationships. Experiments with a synthetic-image database of 1,240 images and a natural-image database of 1,026 images from NETRA and Corel Draw showed that the multiple-attribute approach improved Recall by 6~30% over the single-attribute approach on synthetic images, and for natural images it also achieved better retrieval performance on the Displacement measures and the number of similar images retrieved.


A Study on Hybrid Database Integration Model for Product Data Management

  • 이강찬;이상;유정연;이규철
    • 한국전자거래학회지 / Vol. 3, No. 1 / pp.23-41 / 1998
  • In a centralized database system, all components reside on a single platform. In recent years there has been a rapid trend toward integrating information systems across multiple sites interconnected via a communication network, and users' needs have shifted toward the integration of multiple information sites. A multidatabase system is one solution for integrating distributed heterogeneous databases. However, multidatabase systems suffer from restricted support for distributed environments, limited ability to integrate heterogeneous media types, static integration, and integration of data only. To solve these problems, we propose a hybrid database integration model, HyDIM. HyDIM integrates legacy multimedia data, adopting CORBA, MDS, and a mediator. We demonstrate a prototype system for the PDM application domain.


Methodology for Extended Schema Representation in Database Integration

  • 김철호
    • 한국국방경영분석학회지 / Vol. 23, No. 2 / pp.85-102 / 1997
  • There have been several research efforts to support interoperability among multiple databases. In integrating multiple databases, we must resolve schema conflicts caused by heterogeneity among the databases. Resolving these conflicts requires not only meta-data about the database schemas but also general knowledge expressing the real-world meanings associated with them. This paper presents a uniform method for representing relational schemas and a general knowledge base, composed among other things of a concept hierarchy and thematic roles in relationships, using the knowledge representation language Lk. This representation method has a flexible descriptive power that allows concepts to be expressed at different levels of granularity. The schemas and knowledge described in Lk are then used as input to subsequent steps, such as conflict resolution and query processing over the multiple databases.


APPLICATION AND CROSS-VALIDATION OF SPATIAL LOGISTIC MULTIPLE REGRESSION FOR LANDSLIDE SUSCEPTIBILITY ANALYSIS

  • LEE SARO
    • 대한원격탐사학회 Proceedings of ISRS 2004 / pp.302-305 / 2004
  • The aim of this study is to apply and cross-validate a spatial logistic multiple-regression model at Boun, Korea, using a Geographic Information System (GIS). Landslide locations in the Boun area were identified by interpretation of aerial photographs and field surveys. Maps of the topography, soil type, forest cover, geology, and land use were constructed from a spatial database. The factors that influence landslide occurrence, such as slope, aspect, and curvature of topography, were calculated from the topographic database. Texture, material, drainage, and effective soil thickness were extracted from the soil database, and type, diameter, and density of forest were extracted from the forest database. Lithology was extracted from the geological database, and land use was classified from the Landsat TM satellite image. Landslide susceptibility was analyzed from these landslide-occurrence factors by logistic multiple-regression methods. For validation and cross-validation, the result of the analysis was applied both to the study area, Boun, and to another area, Youngin, Korea. The validation and cross-validation results showed satisfactory agreement between the susceptibility map and the existing data with respect to landslide locations. The GIS was used to analyze the vast amount of data efficiently, and statistical programs were used to maintain specificity and accuracy.
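The core of such a susceptibility analysis can be sketched as a plain logistic regression fitted by gradient ascent: each cell has factor values and a 0/1 landslide label, and the fitted model yields a susceptibility probability. The factors and data below are synthetic stand-ins, not the Boun dataset or the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
slope = rng.uniform(0, 45, n)       # hypothetical slope angle (degrees)
soil = rng.uniform(0.1, 2.0, n)     # hypothetical soil thickness (m)
X = np.column_stack([np.ones(n), slope, soil])  # design matrix with intercept

# Synthetic ground truth: steeper slopes are more landslide-prone.
p_true = 1 / (1 + np.exp(-(-6 + 0.3 * slope)))
y = (rng.uniform(size=n) < p_true).astype(float)

# Plain gradient ascent on the log-likelihood of logistic regression.
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.001 * X.T @ (y - p) / n

# Susceptibility score per cell: predicted landslide probability.
susceptibility = 1 / (1 + np.exp(-X @ w))
```

Validation would then compare these predicted probabilities against held-out landslide locations, as the paper does between Boun and Youngin.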


Prediction of Mammalian MicroRNA Targets - Comparative Genomics Approach with Longer 3' UTR Databases

  • Nam, Seungyoon;Kim, Young-Kook;Kim, Pora;Kim, V. Narry;Shin, Seokmin;Lee, Sanghyuk
    • Genomics & Informatics / Vol. 3, No. 3 / pp.53-62 / 2005
  • MicroRNAs play an important role in regulating gene expression, but identifying their targets is a difficult task due to their short length and imperfect complementarity. Burge and coworkers developed a program called TargetScan that allows imperfect complementarity and established a procedure favoring targets with multiple binding sites conserved in multiple organisms. We improved their algorithm in two major aspects: (i) using a well-defined UTR (untranslated region) database, and (ii) examining the extent of conservation inside the 3' UTR specifically. The average UTR length in our database, based on the ECgene annotation, is more than twice that of Ensembl. TargetScan was then used to identify putative binding sites. Since the extent of conservation varies significantly inside the 3' UTR, we used the 'tight' tracks in the UCSC genome browser to select binding sites conserved in multiple species. By combining the longer 3' UTR data, TargetScan, and tightly conserved blocks of genomic DNA, we identified 107 putative target genes with multiple binding sites conserved in multiple species, of which 85 are novel.
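The seed-match step that TargetScan-style prediction starts from can be sketched as a search for the reverse complement of the miRNA seed region (bases 2-8) inside a 3' UTR. The let-7a sequence below is the well-known one; the UTR string is a made-up illustration.

```python
# RNA complement table: A<->U, C<->G.
COMP = str.maketrans("AUCG", "UAGC")

def seed_sites(mirna, utr):
    """Return positions in the UTR (RNA alphabet) that match the
    reverse complement of the miRNA seed (bases 2-8)."""
    seed = mirna[1:8]                   # bases 2-8 of the miRNA
    site = seed.translate(COMP)[::-1]   # reverse complement = target site
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"        # let-7a
utr = "CCAUCUACCUCAGGAACUACCUCAUU"      # hypothetical 3' UTR with two sites
print(seed_sites(mirna, utr))
```

The paper's pipeline then keeps only sites that fall inside conserved blocks of the UTR across species.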

An Implementation of a Query Processing System for an Integrated Contents Database Retrieval

  • 김영균;이명철;이미영;김명준
    • 한국콘텐츠학회 2003년도 춘계종합학술대회논문집 / pp.356-360 / 2003
  • In Internet portal services and e-commerce applications that build various kinds of content databases and offer them as Internet services, much effort is being made to integrate existing content databases of different forms and provide new content services. This relieves users of the burden of knowing which database or which Internet service provides the content they want, and improves usability by presenting multiple content databases to the user as a single view. This paper designs and implements a query processing system, the core component of an integrated retrieval system that builds and searches a single virtual database based on the XML data model, in an environment where diverse content databases are not only distributed over the Internet but also managed by different database systems (i.e., relational and object databases).


Use of Graph Database for the Integration of Heterogeneous Biological Data

  • Yoon, Byoung-Ha;Kim, Seon-Kyu;Kim, Seon-Young
    • Genomics & Informatics / Vol. 15, No. 1 / pp.19-27 / 2017
  • Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships through multiple-join statements. Recently, a new type of database, the graph database, was developed to natively represent various kinds of complex relationships, and it is widely used in computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph database for complex biological relationships by comparing the performance of MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interactions, drug-target relations, gene-disease associations, etc.) from several existing sources, removed duplicate and redundant data, and constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested query execution performance, Neo4j outperformed MySQL in all cases: while Neo4j responded quickly to various queries, MySQL gave slow or unfinished responses for complex queries with multiple-join statements. These results show that graph databases such as Neo4j are an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
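The contrast the paper measures can be illustrated in miniature: a relational store relates genes to drugs by joining through a shared column, while a graph store walks edges directly. Neo4j and MySQL are not used below; SQLite and a plain adjacency dict stand in, and all data are invented.

```python
import sqlite3

# Relational style: two tables linked only through a shared disease column.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE gene_disease (gene TEXT, disease TEXT);
    CREATE TABLE drug_disease (drug TEXT, disease TEXT);
    INSERT INTO gene_disease VALUES ('TP53', 'cancer'), ('APOE', 'alzheimer');
    INSERT INTO drug_disease VALUES ('aspirin', 'cancer');
""")
rows = db.execute("""
    SELECT g.gene, d.drug FROM gene_disease g
    JOIN drug_disease d ON g.disease = d.disease
""").fetchall()

# Graph style: the same question answered by traversing edges.
edges = {"TP53": ["cancer"], "APOE": ["alzheimer"], "aspirin": ["cancer"]}
pairs = [(g, "aspirin") for g in ("TP53", "APOE")
         if set(edges[g]) & set(edges["aspirin"])]
```

At this scale both approaches are instant; the paper's point is that with tens of millions of relationships the join side degrades while the traversal side does not.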

Database Segment Distributing Algorithm using Graph Theory

  • 김중수
    • 한국멀티미디어학회논문지 / Vol. 22, No. 2 / pp.225-230 / 2019
  • There are several methods for improving the efficiency of a database. One well-known method is to rapidly access and process the segments of the database that satisfy a query. If the multiple database segment types that satisfy a query can be searched completely in parallel, the response time of the query is reduced. The problem of obtaining a CPS (Completely Parallel Searchable) distribution without redundancy can be viewed as a graph-theoretic problem, and the ring-sum operation on graphs is used to obtain a CPS distribution. This paper proposes such a parallel algorithm.
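The ring-sum operation mentioned above is the symmetric difference of edge sets: an edge survives only if it appears in exactly one of the two graphs. A minimal sketch with toy graphs:

```python
def ring_sum(g1, g2):
    """Return the edge set of the ring sum of two undirected graphs,
    edges represented as frozensets of endpoints."""
    e1 = {frozenset(e) for e in g1}
    e2 = {frozenset(e) for e in g2}
    return e1 ^ e2        # symmetric difference of the edge sets

g1 = [("a", "b"), ("b", "c")]
g2 = [("b", "c"), ("c", "d")]
# The shared edge (b, c) cancels out.
print(sorted(tuple(sorted(e)) for e in ring_sum(g1, g2)))
```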

Contribution to Improve Database Classification Algorithms for Multi-Database Mining

  • Miloudi, Salim;Rahal, Sid Ahmed;Khiat, Salim
    • Journal of Information Processing Systems / Vol. 14, No. 3 / pp.709-726 / 2018
  • Database classification is an important preprocessing step for multi-database mining (MDM). When a multi-branch company needs to explore its distributed data for decision making, it is imperative to classify these multiple databases into similar clusters before analyzing the data. To search for the best classification of a set of n databases, existing algorithms generate from 1 to (n^2-n)/2 candidate classifications. Although each candidate classification is included in the next one (i.e., clusters in the current classification are subsets of clusters in the next), existing algorithms generate each classification independently, without reusing clusters from the previous classification. Consequently, existing algorithms are time consuming, especially as the number of candidate classifications grows. To overcome this problem, we propose an efficient approach that represents the problem of classifying multiple databases as the problem of identifying the connected components of an undirected weighted graph. Theoretical analysis and experiments on public databases confirm the efficiency of our algorithm against existing works and show that it overcomes the problem of increasing execution time.
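The approach described above can be sketched as thresholding the similarity graph and reading the classification off its connected components, here with a tiny union-find. Database names and similarity values are made up for illustration.

```python
def classify(names, similarity, threshold):
    """Cluster databases: keep edges with similarity >= threshold,
    then return the connected components as sorted groups."""
    parent = {n: n for n in names}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for (a, b), w in similarity.items():
        if w >= threshold:
            parent[find(a)] = find(b)   # union the two components

    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return sorted(map(sorted, groups.values()))

dbs = ["db1", "db2", "db3", "db4"]
sim = {("db1", "db2"): 0.9, ("db2", "db3"): 0.2, ("db3", "db4"): 0.8}
print(classify(dbs, sim, 0.5))
```

Sweeping the threshold from high to low reproduces the nested sequence of candidate classifications the paper describes, which is why reusing components across steps saves work.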