• Title/Summary/Keyword: databases

Search results: 5,206

An experiment to enhance subject access in Korean online public access catalog (온라인 열람목록의 주제탐색 강화를 위한 실험적 연구)

  • 장혜란;홍지윤
    • Journal of Korean Library and Information Science Society / v.25 / pp.83-107 / 1996
  • The purpose of this study is to experiment with online public access catalog enhancements to improve subject access capability. Three catalog databases were implemented, enhanced with title keywords, controlled vocabulary, and table-of-contents words combined with controlled vocabulary, respectively. Eighteen searchers performed two subject searches against the three catalog databases, and the transaction logs were analyzed. The results can be summarized as follows: the controlled-vocabulary catalog database achieved a 41.8% recall ratio on average; adding table-of-contents words to the controlled vocabulary is an effective technique, raising the recall ratio up to 55% without decreasing precision; and the database enhanced with title keywords showed a 31.7% recall ratio on average. Of the three catalog databases, only the catalog with contents words produced two unique relevant documents. The results indicate that both user training and system development are required to achieve better search performance in online public access catalogs.

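For readers unfamiliar with the recall and precision figures quoted above, the sketch below shows how such ratios are computed from the retrieved and relevant document sets of a single subject search. The numbers and document identifiers are hypothetical, not taken from the study.

```python
def recall_precision(retrieved, relevant):
    """Compute recall and precision for one subject search."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant                          # relevant documents actually found
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical search: 11 of 20 relevant records retrieved, plus 2 non-relevant ones.
retrieved_ids = list(range(1, 12)) + [101, 102]
relevant_ids = list(range(1, 21))
r, p = recall_precision(retrieved_ids, relevant_ids)
print(f"recall={r:.1%}, precision={p:.1%}")              # recall=55.0%, precision=84.6%
```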

MPI: A Practical Index Scheme for XML Data in Object Databases

  • Song Ha-Joo
    • Journal of Korea Multimedia Society / v.8 no.6 / pp.729-734 / 2005
  • In order to access XML data stored in object databases, an efficient index scheme is essential. Several index schemes can be used to retrieve XML data stored in object databases efficiently, but they are all single-path indexes that support indexing along a single schema path. Hence, if a query contains an extended path, denoted by the wildcard character ('*'), the query processor has to examine multiple index objects, resulting in poor performance and inconsistent index management. In this paper, we propose the MPI (Multi-Path Index) scheme, a new index scheme that provides the functionality of multiple path indexes more efficiently while using only one index structure. The proposed scheme is easy to manage since it treats the extended path as a logically single schema path. It is also practical since it can be implemented with little modification of the B-tree index structure.

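The abstract's core idea, treating an extended path as a logically single schema path so that one index structure serves several concrete paths, can be illustrated with a toy sketch. The class, element names, and object identifiers below are invented for illustration; this is not the MPI implementation.

```python
import bisect

class MultiPathIndex:
    """Toy multi-path index: one sorted structure covering several concrete
    schema paths that all match an extended path such as 'book.*.name'."""

    def __init__(self, concrete_paths):
        self.paths = set(concrete_paths)
        self.entries = []                       # sorted list of (key, path, oid)

    def insert(self, key, path, oid):
        bisect.insort(self.entries, (key, path, oid))

    def lookup(self, key):
        """Return oids matching `key` on any indexed path (extended-path query)."""
        i = bisect.bisect_left(self.entries, (key,))
        out = []
        while i < len(self.entries) and self.entries[i][0] == key:
            out.append(self.entries[i][2])
            i += 1
        return out

# 'book.*.name' expands to two concrete paths, but only one index is maintained.
idx = MultiPathIndex(["book.author.name", "book.publisher.name"])
idx.insert("Kim", "book.author.name", oid=1)
idx.insert("Kim", "book.publisher.name", oid=2)
print(idx.lookup("Kim"))   # [1, 2]
```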

Maintaining Integrity Constraints in Spatiotemporal Databases

  • Moon Kyung Do;Woo SungKu;Kim ByungCheol;Ryu KeunHo
    • Proceedings of the KSRS Conference / 2004.10a / pp.726-729 / 2004
  • Spatiotemporal phenomena are ubiquitous aspects of the real world. In spatial and temporal databases, integrity constraints maintain the semantics of a specific application domain and the relationships between domains when updates are made to the database. Efficient maintenance of data integrity has become a critical problem, since testing the validity of a large number of constraints in a large database after each transaction is an expensive task. In the spatiotemporal domain in particular, data is more complex than in traditional domains and very active. In addition, unified frameworks that handle integrity constraints over both spatial and temporal properties have not been considered. Therefore, a model is needed to maintain integrity constraints in a unified framework, along with enforcement and management techniques to preserve consistency.

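To make the cost argument concrete, the sketch below checks only the constraints registered for the updated object's domain instead of revalidating the whole database after each transaction. It is a generic illustration of incremental constraint checking, with invented domain and attribute names, not the authors' framework.

```python
from collections import defaultdict

class ConstraintManager:
    """Toy integrity-constraint manager: constraints are registered per domain
    and only the relevant subset is checked when an object is updated."""

    def __init__(self):
        self.constraints = defaultdict(list)     # domain -> [predicate, ...]

    def register(self, domain, predicate):
        self.constraints[domain].append(predicate)

    def check_update(self, domain, old_obj, new_obj):
        violations = [p.__name__ for p in self.constraints[domain]
                      if not p(old_obj, new_obj)]
        return violations                        # empty list means the update is valid

# Example: a parcel's valid time must not move backwards, and its area stays positive.
def time_monotonic(old, new):
    return new["valid_from"] >= old["valid_from"]

def area_positive(old, new):
    return new["area"] > 0

mgr = ConstraintManager()
mgr.register("parcel", time_monotonic)
mgr.register("parcel", area_positive)
print(mgr.check_update("parcel",
                       {"valid_from": 2003, "area": 120.0},
                       {"valid_from": 2004, "area": 118.5}))   # [] -> update accepted
```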

Design and implementation of a Moving Object Engine

  • Lee Hyun Ah;Kim Jin Suk
    • Proceedings of the KSRS Conference / 2004.10a / pp.272-275 / 2004
  • Recently, services using the position information of moving objects have come to prominence. These services need moving object databases to manage moving object data efficiently. To build such databases, we must develop a moving object engine to manage, store, and search the spatio-temporal data of moving objects. The engine has to support a query syntax for searching data that suits user needs in applications such as LBS, telematics, ITS, and vehicle management systems. In this paper, we design and implement a moving object engine to support services based on moving object data. The engine provides a system environment in which users can access moving object data easily even if they do not know the underlying complex data structures.

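The following sketch illustrates the kind of storage and query interface such an engine exposes: trajectories stored as timestamped points and searched by a time window plus a bounding rectangle. The structure and names are invented for illustration and do not reflect the paper's actual engine.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    obj_id: str
    t: float     # timestamp
    x: float     # longitude (or projected x)
    y: float     # latitude  (or projected y)

class MovingObjectStore:
    """Toy moving-object store: range query over a time window and a rectangle."""

    def __init__(self):
        self.samples = []

    def insert(self, sample):
        self.samples.append(sample)

    def range_query(self, t0, t1, xmin, ymin, xmax, ymax):
        return [s.obj_id for s in self.samples
                if t0 <= s.t <= t1 and xmin <= s.x <= xmax and ymin <= s.y <= ymax]

store = MovingObjectStore()
store.insert(Sample("bus-07", t=10.0, x=127.03, y=37.50))
store.insert(Sample("bus-07", t=20.0, x=127.05, y=37.52))
store.insert(Sample("taxi-3", t=15.0, x=126.90, y=37.40))
# "Which objects were inside this box between t=12 and t=25?"
print(store.range_query(12, 25, 127.00, 37.45, 127.10, 37.60))   # ['bus-07']
```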

ULTRAVIOLET AND VISIBLE SPECTROSCOPIC DATABASE FOR ATOMS AND MOLECULES IN CELESTIAL OBJECTS

  • Kim, Sang-J.
    • Publications of The Korean Astronomical Society / v.9 no.1 / pp.111-166 / 1994
  • I have developed a UV and visible spectroscopic database (UVSD) for atoms and molecules found in the interstellar medium, stars, galaxies, and the atmospheres of the Earth, planets, satellites, and comets. This machine-readable UV and visible database consists of three sub-databases with different characteristics: (A) atomic and molecular line listings from laboratory observations or theoretical studies; (B) absorption spectra measured in laboratories; and (C) solar UV, visible, and infrared spectral atlases. The UVSD is at a very early stage of development compared with other well-organized and established infrared and microwave databases. In order to build a complete, high-quality database, substantial effort should be made to acquire important data scattered across laboratories and institutions, and the acquired heterogeneous data should then be peer-reviewed and standardized.

Currents in Integrative Biochip Informatics

  • Kim, Ju-Han
    • Proceedings of the Korean Society for Bioinformatics Conference / 2001.10a / pp.1-9 / 2001
  • scale genomic and postgenomic data means that many of the challenges in biomedical research are now challenges in computational sciences and information technology. The informatics revolutions in both clinical informatics and bioinformatics will change the current paradigm of biomedical sciences and the practice of clinical medicine, including diagnostics, therapeutics, and prognostics. Postgenome informatics, powered by high-throughput technologies and genomic-scale databases, is likely to transform our biomedical understanding forever, much the same way that biochemistry did a generation ago. In this talk, I will describe how these technologies will impact biomedical research and clinical care, emphasizing recent advances in biochip-based functional genomics. Basic data preprocessing with normalization and filtering, primary pattern analysis, and machine learning algorithms will be presented. Issues of integrated biochip informatics technologies, including multivariate data projection, gene-metabolic pathway mapping, automated biomolecular annotation, text mining of factual and literature databases, and integrated management of biomolecular databases, will be discussed. Each step will be illustrated with real examples from ongoing research activities in the context of clinical relevance. Issues of linking molecular genotype and clinical phenotype information will be discussed.

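As an illustration of the "normalization and filtering" preprocessing step mentioned in the abstract, the sketch below log-transforms microarray-style intensities and drops genes with low variance across arrays before any pattern analysis. The data, threshold, and function names are invented; this is a generic example, not the author's pipeline.

```python
import math
import statistics

def preprocess(expression, min_variance=0.5):
    """Log2-transform each value, then drop genes whose expression varies
    little across arrays.  `expression` maps gene -> list of raw intensities."""
    logged = {g: [math.log2(v) for v in vals] for g, vals in expression.items()}
    return {g: vals for g, vals in logged.items()
            if statistics.pvariance(vals) >= min_variance}

data = {"geneA": [120.0, 480.0, 950.0],   # strongly varying -> kept
        "geneB": [200.0, 210.0, 205.0],   # nearly flat      -> filtered out
        "geneC": [50.0, 400.0, 90.0]}     # varying          -> kept
print(sorted(preprocess(data)))           # ['geneA', 'geneC']
```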

A Formal Presentation of the Extensional Object Model (외연적 객체모델의 정형화)

  • Jeong, Cheol-Yong
    • Asia pacific journal of information systems / v.5 no.2 / pp.143-176 / 1995
  • We present an overview of the Extensional Object Model (ExOM) and describe in detail the learning and classification components which integrate concepts from machine learning and object-oriented databases. The ExOM emphasizes flexibility in information acquisition, learning, and classification which are useful to support tasks such as diagnosis, planning, design, and database mining. As a vehicle to integrate machine learning and databases, the ExOM supports a broad range of learning and classification methods and integrates the learning and classification components with traditional database functions. To ensure the integrity of ExOM databases, a subsumption testing rule is developed that encompasses categories defined by type expressions as well as concept definitions generated by machine learning algorithms. A prototype of the learning and classification components of the ExOM is implemented in Smalltalk/V Windows.

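To give a flavour of what a subsumption test over type expressions can look like, the toy sketch below represents a category as numeric range constraints on attributes and tests whether one category's constraints are implied by another's. This simplified representation is my own; it is not the ExOM subsumption rule.

```python
def subsumes(general, specific):
    """Return True if every constraint of `general` is satisfied by `specific`.
    A category is a dict: attribute -> (low, high) numeric range."""
    for attr, (glo, ghi) in general.items():
        if attr not in specific:
            return False
        slo, shi = specific[attr]
        if slo < glo or shi > ghi:          # specific must fall inside general
            return False
    return True

vehicle       = {"wheels": (2, 18), "max_speed": (0, 400)}
passenger_car = {"wheels": (4, 4),  "max_speed": (0, 250), "seats": (2, 9)}

print(subsumes(vehicle, passenger_car))   # True: every passenger car is a vehicle
print(subsumes(passenger_car, vehicle))   # False
```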

A Study on Quality Evaluation & Improvement of CD-ROM Databases (CD-ROM 데이터베이스의 품질평가 및 개선방안에 관한 연구)

  • Lee Eung-Bong
    • Journal of the Korean Society for Library and Information Science / v.33 no.4 / pp.29-46 / 1999
  • This study evaluates the quality of two CD-ROM databases: Korean MARC on Disc and Korean National Bibliographies on CD-ROM. Five criteria are used to measure their quality: accuracy, completeness, and consistency for the quality of the data itself, and ease of use and variety of retrieval functions for the quality of the database services. The purpose of this study is to diagnose the quality of the two CD-ROM databases mentioned above, to analyze the measured quality results comparatively, to identify their problems, and to provide possible suggestions for improvement.

Scaling Network Information Services to Support HetNets and Dynamic Spectrum Access

  • Piri, Esa;Schulzrinne, Henning
    • Journal of Communications and Networks / v.16 no.2 / pp.202-208 / 2014
  • Wireless network information services allow end systems to discover heterogeneous networks and spectrum available for secondary use at or near their current location, helping them to cope with increasing traffic and finite spectrum resources. We propose a unified architecture that allows end systems to find nearby base stations that are using either licensed, shared or unlicensed spectrum across multiple network operators. Our study evaluates the performance and scalability of spatial databases storing base station coverage area geometries. The measurement results indicate that the current spatial databases perform well even when the number of coverage areas is very large. A single logical spatial database would likely be able to satisfy the query load for a large national cellular network. We also observe that coarse geographic divisions can significantly improve query performance.
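
The observation that coarse geographic divisions improve query performance can be illustrated with a small sketch: coverage bounding boxes bucketed into a coarse grid, so that a location query inspects only the candidates in one cell. The grid size, identifiers, and coordinates are invented; the paper itself measures real spatial databases rather than this toy structure.

```python
from collections import defaultdict

class CoverageIndex:
    """Toy coverage lookup: axis-aligned bounding boxes bucketed into a coarse
    grid, so a location query only inspects candidates in one grid cell."""

    def __init__(self, cell_deg=1.0):
        self.cell_deg = cell_deg
        self.grid = defaultdict(list)          # (cell_x, cell_y) -> [(bbox, bs_id)]

    def _cell(self, lon, lat):
        return (int(lon // self.cell_deg), int(lat // self.cell_deg))

    def add(self, bs_id, lon_min, lat_min, lon_max, lat_max):
        bbox = (lon_min, lat_min, lon_max, lat_max)
        # register the box in every coarse cell it overlaps
        for cx in range(int(lon_min // self.cell_deg), int(lon_max // self.cell_deg) + 1):
            for cy in range(int(lat_min // self.cell_deg), int(lat_max // self.cell_deg) + 1):
                self.grid[(cx, cy)].append((bbox, bs_id))

    def base_stations_at(self, lon, lat):
        return [bs for (x0, y0, x1, y1), bs in self.grid[self._cell(lon, lat)]
                if x0 <= lon <= x1 and y0 <= lat <= y1]

idx = CoverageIndex(cell_deg=1.0)
idx.add("eNB-1001", 126.8, 37.4, 127.2, 37.7)
idx.add("AP-secondary-7", 129.0, 35.1, 129.2, 35.3)
print(idx.base_stations_at(127.0, 37.55))      # ['eNB-1001']
```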

Data Integration for DW Construction

  • Yongmoo Suh;Jung, Chul-Yong
    • The Journal of Information Technology and Database / v.4 no.2 / pp.79-95 / 1998
  • Because useful data are distributed over several systems, there is a problem in accessing and utilizing them. Recognizing this problem, researchers have proposed two concepts as solutions: the multidatabase and the data warehouse. The former provides a virtual view over the distributed data, while the latter is a materialized view of it. Recently, more attention has been paid to the latter, which is a single store of distributed data collected along a time dimension. The major issues in building a data warehouse are therefore 1) how to define a global schema for the data warehouse, 2) how to capture changes from local databases, and 3) how to represent time-varying values of data items. This paper presents an integrated approach to these issues, borrowing research results from such areas as multidatabases, active databases, and temporal databases.
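
A minimal sketch of the last two issues, capturing changes from a local database and representing time-varying values, is shown below: the warehouse keeps one row per value version with valid-from/valid-to timestamps, and each captured change closes the current version and opens a new one. Table and column names are invented for illustration, not taken from the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per value version; the currently open version has valid_to = NULL.
conn.execute("""CREATE TABLE customer_history (
                    cust_id     INTEGER,
                    city        TEXT,
                    valid_from  TEXT,
                    valid_to    TEXT)""")

def apply_change(conn, cust_id, new_city, change_time):
    """Close the currently open version (if any) and open a new one."""
    conn.execute("UPDATE customer_history SET valid_to = ? "
                 "WHERE cust_id = ? AND valid_to IS NULL", (change_time, cust_id))
    conn.execute("INSERT INTO customer_history VALUES (?, ?, ?, NULL)",
                 (cust_id, new_city, change_time))

# Changes captured from a local source database arrive as (id, new value, timestamp).
apply_change(conn, 42, "Seoul", "1998-01-01")
apply_change(conn, 42, "Busan", "1998-06-15")
print(conn.execute("SELECT city, valid_from, valid_to FROM customer_history "
                   "WHERE cust_id = 42 ORDER BY valid_from").fetchall())
# [('Seoul', '1998-01-01', '1998-06-15'), ('Busan', '1998-06-15', None)]
```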