• Title/Summary/Keyword: 데이타 (data)

Search Results: 4,094, Processing Time: 0.028 seconds

Implementation of CORBA based Spatial Data Provider for Interoperability (상호운용을 지원하는 코바 기반 공간 데이터 제공자의 설계 및 구현)

  • Kim, Min-Seok;An, Kyoung-Hwan;Hong, Bong-Hee
    • Journal of Korea Spatial Information System Society, v.1 no.2 s.2, pp.33-46, 1999
  • In distributed computing platforms like CORBA, wrappers are used to integrate heterogeneous systems or databases. A spatial data provider is one such wrapper, because it provides clients with uniform access interfaces to diverse data sources. Implementing a separate spatial data provider for each data source is inefficient, since the wrapper modules end up being coded redundantly. This paper presents a new architecture for the spatial data provider that consists of two layers of objects: independent wrapper components and dependent wrapper components. Independent wrapper components can be reused when implementing a data provider for a new data source, while dependent wrapper components must be newly coded for every data source. The paper further discusses how query results are represented in the middleware. There are two ways of keeping query results in the middleware: keeping them as non-CORBA objects, or transforming them into CORBA objects. An evaluation of the two methods shows that the cost of creating CORBA objects is very high.
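The two-layer wrapper architecture described in the abstract can be sketched as below: a reusable base class carries the source-independent logic, and a per-source subclass supplies only the dependent part. All class, method, and parameter names here are illustrative, not taken from the paper.

```python
from abc import ABC, abstractmethod

class SpatialDataProvider(ABC):
    """Independent wrapper component: source-agnostic logic reused for
    every data source (query pipeline, result packaging)."""

    def query(self, region):
        # Reusable pipeline: delegate the source-specific work,
        # then package results into a uniform representation.
        raw = self._execute_source_query(region)
        return [{"id": fid, "geometry": geom} for fid, geom in raw]

    @abstractmethod
    def _execute_source_query(self, region):
        """Dependent wrapper component: must be coded per data source."""

class PointFileProvider(SpatialDataProvider):
    """Hypothetical source-specific wrapper over an in-memory point set."""

    def __init__(self, features):
        self.features = features  # stand-in for a real spatial data source

    def _execute_source_query(self, region):
        xmin, ymin, xmax, ymax = region
        return [(fid, (x, y)) for fid, (x, y) in self.features.items()
                if xmin <= x <= xmax and ymin <= y <= ymax]

provider = PointFileProvider({1: (2.0, 3.0), 2: (9.0, 9.0)})
print(provider.query((0, 0, 5, 5)))  # only feature 1 falls in the region
```

Only `_execute_source_query` would need rewriting for a new source; the `query` pipeline stays untouched, which is the reuse the layered design is after.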


Object Views in the ODYSSEUS Object-Relational DBMS (오디세우스 객체관계형 DBMS를 위한 오브젝트 뷰)

  • Lee, Jae-Gil;Han, Wook-Shin;Lee, Min-Jae;Lee, Jong-Hak;Whang, Kyu-Young
    • Journal of KIISE:Computing Practices and Letters, v.10 no.1, pp.14-24, 2004
  • Views are essential in providing logical data independence for database systems. Object views in object-oriented/object-relational databases have requirements quite different from those of relational databases due to their support for object-oriented concepts. Although many commercial object-oriented/object-relational database systems support object views, implementation techniques have not been discussed sufficiently in the literature. In this paper, we devise a technique for implementing views in object-oriented/object-relational databases and apply it to the ODYSSEUS object-relational database system. We first analyze the requirements of object views. To implement object views, we then extend the existing query modification algorithm that has been proposed for implementing views in relational databases. Next, we compare the features of the proposed object view with those of object views in commercial object-relational database systems. It is shown that the proposed object view supports all object-oriented concepts such as object identifiers, inheritance, methods, and composite objects, while existing object views support only part of them. Finally, we propose detailed techniques for implementing the extended query modification algorithm in the ODYSSEUS object-relational database system.
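The classical query-modification idea the abstract builds on can be illustrated with a minimal sketch: a query posed against a view is rewritten into a query against base data by splicing the view's definition into the query. The table, view, and predicate names are invented for illustration; ODYSSEUS's actual algorithm operates on real query plans, not Python lambdas.

```python
# A "view" here is a base table plus the predicate the view applies.
VIEW_DEFS = {
    "rich_customers": ("customers", lambda row: row["balance"] > 1000),
}

def modify_query(table, predicate):
    """If `table` is a view, rewrite the query to run on the base table
    with the view's predicate AND-ed onto the user's predicate."""
    if table in VIEW_DEFS:
        base, view_pred = VIEW_DEFS[table]
        return base, (lambda row: view_pred(row) and predicate(row))
    return table, predicate

def run(database, table, predicate):
    base, pred = modify_query(table, predicate)
    return [row for row in database[base] if pred(row)]

db = {"customers": [
    {"name": "kim", "balance": 5000, "city": "Seoul"},
    {"name": "lee", "balance": 200, "city": "Seoul"},
]}
# The user queries the view; execution actually touches only base data.
print(run(db, "rich_customers", lambda r: r["city"] == "Seoul"))
```

The extension the paper describes is making this rewriting respect object-oriented features (identifiers, inheritance, methods), which a relational rewrite like the one above does not have to handle.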

Building a Classifier for Integrated Microarray Datasets through Two-Stage Approach (2 단계 접근법을 통한 통합 마이크로어레이 데이타의 분류기 생성)

  • Yoon, Young-Mi;Lee, Jong-Chan;Park, Sang-Hyun
    • Journal of KIISE:Databases, v.34 no.1, pp.46-58, 2007
  • Since microarray experiments acquire tens of thousands of gene expression values simultaneously, they can be very useful in identifying disease phenotypes. However, several microarray datasets produced independently with the same biological objectives may yield different analysis results. One of the main reasons is the limited number of samples in a single microarray experiment. To increase classification accuracy, it is desirable to enlarge the sample size by integrating and maximizing the use of independently conducted microarray datasets. In this paper, we propose a novel two-stage approach: the first stage integrates individual microarray datasets to overcome the limited-sample problem and identifies informative genes; the second stage builds a classifier using only those informative genes. On an independent test dataset, the classifier built from the enlarged sample of integrated microarray datasets achieves up to 24.19% higher accuracy than the comparison methods, together with improved sensitivity and specificity.

A Data-Centric Clustering Algorithm for Reducing Network Traffic in Wireless Sensor Networks (무선 센서 네트워크에서 네트워크 트래픽 감소를 위한 데이타 중심 클러스터링 알고리즘)

  • Yeo, Myung-Ho;Lee, Mi-Sook;Park, Jong-Guk;Lee, Seok-Jae;Yoo, Jae-Soo
    • Journal of KIISE:Information Networking, v.35 no.2, pp.139-148, 2008
  • Many types of sensor data exhibit strong correlation in both space and time. Suppression, both temporal and spatial, provides opportunities for reducing the energy cost of sensor data collection. Unfortunately, existing clustering algorithms find it difficult to exploit these spatial or temporal opportunities, because they organize clusters based only on the distribution of sensor nodes or the network topology, not on the correlation of the sensor data. In this paper, we propose a novel clustering algorithm combined with suppression techniques. To guarantee independent communication among clusters, we allocate multiple channels based on the sensor data. We also propose a spatio-temporal suppression technique to reduce network traffic. To show the superiority of our clustering algorithm, we compare it with existing suppression algorithms in terms of sensor network lifetime and the size of the data collected at the base station. Our experimental results show that the size of the data was reduced by 4~40% and the whole network lifetime was prolonged by 20~30%.
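The temporal half of such a suppression scheme can be sketched in a few lines: a node transmits a reading only when it deviates from the last reported value by more than a threshold, and the sink otherwise assumes the previous value still holds. The class and parameter names are illustrative, not from the paper, and the spatial/multi-channel side is omitted.

```python
class SuppressingNode:
    """Temporal suppression sketch for one sensor node."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.last_reported = None

    def maybe_report(self, reading):
        """Return the reading if it must be transmitted, else None."""
        if (self.last_reported is None
                or abs(reading - self.last_reported) > self.threshold):
            self.last_reported = reading
            return reading
        return None  # suppressed: sink keeps using last_reported

node = SuppressingNode(threshold=0.5)
readings = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]
sent = [r for r in readings if node.maybe_report(r) is not None]
print(sent)  # [20.0, 21.0, 25.0] -- 6 readings, only 3 transmissions
```

The traffic saving grows with the temporal correlation of the data, which is why clustering nodes by data similarity (rather than topology alone) matters for this class of technique.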

TripleDiff: an Incremental Update Algorithm on RDF Documents in Triple Stores (TripleDiff: 트리플 저장소에서 RDF 문서에 대한 점진적 갱신 알고리즘)

  • Lee, Tae-Whi;Kim, Ki-Sung;Yoo, Sang-Won;Kim, Hyoung-Joo
    • Journal of KIISE:Databases, v.33 no.5, pp.476-485, 2006
  • The Resource Description Framework (RDF), which emerged with the Semantic Web, is settling down as a standard for representing information about resources on the World Wide Web. Hence, much research on storing and querying RDF documents has been done, and several RDF storage systems, such as Sesame and Jena, have been developed. The research on updating RDF documents, however, is still insufficient. When an RDF document changes, the data in the RDF triple store also needs to be updated. However, current RDF triple stores do not support incremental update, so an update can be performed only by deleting the old version and then storing the new document. This method is very inefficient because RDF documents are updated steadily, and it becomes worse when several RDF documents are stored in the same database. In this paper, we propose an incremental update algorithm for RDF documents in triple stores. We use a text matching technique on the two versions of an RDF document and compensate for the text matching result to find the right target triples to be updated. We show through experiments with real-life RDF datasets that our approach updates RDF documents efficiently.
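The core idea of deriving an incremental update from text matching can be sketched with the standard library's `difflib`: diff the two document versions line by line and turn the edit script into delete/insert sets of triples. This is only a minimal illustration of the matching step; TripleDiff's compensation of the matching result, and real N-Triples parsing, are not modeled here.

```python
import difflib

def parse_triples(document):
    """Parse a toy N-Triples-like document: one 'subj pred obj' per line."""
    return [line.strip() for line in document.strip().splitlines()
            if line.strip()]

def triple_diff(old_doc, new_doc):
    """Derive the incremental update (triples to delete / to insert) by
    text matching the two versions instead of reloading everything."""
    old, new = parse_triples(old_doc), parse_triples(new_doc)
    to_delete, to_insert = [], []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("delete", "replace"):
            to_delete += old[i1:i2]
        if op in ("insert", "replace"):
            to_insert += new[j1:j2]
    return to_delete, to_insert

old_doc = """
:alice :knows :bob
:alice :age "30"
"""
new_doc = """
:alice :knows :bob
:alice :age "31"
:alice :knows :carol
"""
dels, ins = triple_diff(old_doc, new_doc)
print(dels)  # [':alice :age "30"']
print(ins)   # [':alice :age "31"', ':alice :knows :carol']
```

Only two triples are touched in the store instead of deleting and reloading all three, which is the saving the abstract is after when documents change steadily.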

Metadata Management Method for Consistency and Recency in Digital Library (디지탈 도서관 환경에서 일관성과 최근성을 고려한 메타데이타 관리 방법)

  • Lee, Hai-Min;Park, Seog
    • Journal of KIISE:Databases, v.27 no.1, pp.22-32, 2000
  • The Digital Library is an integrated system of an Information Retrieval System (IRS) and a Database Management System (DBMS). In the Digital Library environment, where dynamic query and update processing is required, the existing transaction management methods cause the following problems. First, since the traditional consistency criteria are too restrictive, they increase query processing time and cannot guarantee recency. Second, query results can be unreliable because no consistency criteria between source data and metadata are defined. This paper models accesses to Dublin Core-based metadata as query transactions and update transactions, and gives an efficient method to manage them. In particular, this paper describes consistency criteria for metadata that take into consideration the consistency between the result of a query transaction and the status of the source data in the Digital Library, which differs from the consistency criteria in traditional transaction management. It also analyzes the query transaction's view of recency and proposes metadata management that guarantees recency within metadata consistency.


Optimistic Concurrency Control for Secure Real-Time Database Systems (실시간 보안 데이타베이스 시스템을 위한 낙관적 동시성 제어 기법)

  • Kim, Dae-Ho;Jeong, Byeong-Soo;Lee, Sung-Young
    • Journal of KIISE:Databases, v.27 no.1, pp.42-52, 2000
  • In many real-time applications in which the system maintains sensitive information to be shared by multiple users with different security levels, security is another important requirement. A secure real-time database system must satisfy not only logical data consistency but also the timing constraints and security requirements associated with transactions. Even though optimistic concurrency control outperforms locking-based methods in firm real-time database systems, where late transactions are immediately discarded, most existing secure real-time concurrency control methods are based on locking. In this paper, we propose a new optimistic concurrency control protocol for secure real-time database systems and compare its performance characteristics with a locking-based method under varying workloads. The results show that our proposed optimistic concurrency control protocol performs well when data conflicts are frequent.
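The optimistic (validation-based) approach the abstract contrasts with locking can be sketched as follows: transactions run without locks, and at commit time a transaction's read set is checked against the write sets of transactions that already committed; on conflict it is restarted (or, in a firm real-time system, simply discarded once past its deadline). This is a generic backward-validation sketch; the paper's security-level handling and real-time scheduling details are omitted, and all names are illustrative.

```python
class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.read_set, self.write_set = set(), set()

class OptimisticScheduler:
    """Simplified backward validation: every committed write set is kept
    and checked against the validating transaction's read set. A real
    implementation would only check transactions that overlapped in time."""

    def __init__(self):
        self.committed_write_sets = []

    def validate(self, txn):
        for ws in self.committed_write_sets:
            if txn.read_set & ws:   # read an item a committed txn wrote
                return False        # validation fails -> restart/discard
        self.committed_write_sets.append(set(txn.write_set))
        return True

sched = OptimisticScheduler()
t1 = Transaction(1); t1.read_set = {"x"}; t1.write_set = {"y"}
t2 = Transaction(2); t2.read_set = {"y"}; t2.write_set = {"z"}
print(sched.validate(t1))  # True  -- nothing committed yet
print(sched.validate(t2))  # False -- t2 read "y", which t1 wrote
```

Because no locks are held during execution, a late transaction never blocks others, which is why this style suits firm real-time systems where tardy transactions are dropped anyway.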


An Effective Data Analysis System for Improving Throughput of Shotgun Proteomic Data based on Machine Learning (대량의 프로테옴 데이타를 효과적으로 해석하기 위한 기계학습 기반 시스템)

  • Na, Seung-Jin;Paek, Eun-Ok
    • Journal of KIISE:Software and Applications, v.34 no.10, pp.889-899, 2007
  • In proteomics, recent advancements in mass spectrometry technology and in protein extraction and separation technology have made high-throughput analysis possible. This leads to thousands to hundreds of thousands of MS/MS spectra per single LC-MS/MS experiment. Such a large amount of data creates significant computational challenges, so effective data analysis methods that make efficient use of computational resources and, at the same time, provide more peptide identifications are in great need. The SIFTER system is designed to avoid inefficient processing of shotgun proteomic data. SIFTER provides software tools that improve the throughput of mass spectrometry-based peptide identification by filtering out poor-quality tandem mass spectra and estimating a peptide's charge state before the analysis algorithms are applied. SIFTER tools characterize and assess spectral features, and thus significantly reduce computation time and false-positive rates, by screening out spectra that would lead to wrong identifications prior to full-blown analysis. SIFTER enables fast and in-depth interpretation of tandem mass spectra.
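The pre-filtering step can be illustrated with a deliberately crude quality gate: reject spectra with too few peaks or too little total intensity before the expensive database search runs. The thresholds and the two features used here are invented for illustration; SIFTER's machine-learned criteria are far richer.

```python
def passes_quality_filter(spectrum, min_peaks=10, min_total_intensity=100.0):
    """Toy quality gate for a tandem mass spectrum: keep it only if it has
    enough peaks and enough signal to plausibly yield an identification."""
    peaks = spectrum["peaks"]  # list of (m/z, intensity) pairs
    total = sum(intensity for _, intensity in peaks)
    return len(peaks) >= min_peaks and total >= min_total_intensity

spectra = [
    {"id": "s1", "peaks": [(100.0 + i, 20.0) for i in range(50)]},  # rich
    {"id": "s2", "peaks": [(100.0, 1.0), (200.0, 2.0)]},            # sparse
]
kept = [s["id"] for s in spectra if passes_quality_filter(s)]
print(kept)  # ['s1'] -- poor spectra never reach the search engine
```

Since the database search dominates the cost of shotgun analysis, every spectrum rejected by a cheap gate like this saves a full search, which is where the throughput gain comes from.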

Design and Implementation of a Data Conversion System between SDTS and Informap (SDTS와 Informap간의 데이타 변환 시스템의 설계 및 구현)

  • Oh, Byoung-Woo;Lee, Kang-Jun;Han, Ki-Joon
    • Journal of Korean Society for Geospatial Information Science, v.4 no.2 s.8, pp.109-121, 1996
  • It is very difficult to exchange geographical data among geographic information systems that store their spatial data in independent storage structures. Moreover, since a large amount of storage space is necessary to store spatial data and inputting them is expensive, waste grows when they are stored redundantly. Therefore, it is essential to share spatial data with other geographic information systems by exchanging it among them. To exchange spatial data efficiently, several international standards for data exchange formats exist. In this paper, we design and implement a data conversion system that converts geographical data between SDTS (Spatial Data Transfer Standard), which has been adopted as the national standard for a common data exchange format, and Informap, an existing mapping system. We first analyze the storage structures of SDTS and Informap, and develop gateway functions based on these analyses for efficient conversion. Finally, we design and implement the overall data conversion system between SDTS and Informap using the gateway functions.


A Suggestion of a Spatial Data Model for the National Geographic Institute in Korea (지도제작을 수용하는 GIS 데이타모델에 관한 연구)

  • Kim, Eun-Hyung
    • Journal of Korean Society for Geospatial Information Science, v.3 no.2 s.6, pp.115-130, 1995
  • The National Geographic Institute (NGI), the national mapping agency, has begun to digitize the national base maps to vitalize nationwide GIS implementations. However, the NGI's cartographic database design reflects only paper map production and is considered inflexible for various applications. In order to suggest an appropriate data model and database implementation method, the approaches of two mapping agencies are analyzed: the United States Geological Survey and the Ordnance Survey in the United Kingdom. One important finding from the analysis is that each data model is designed to achieve two production purposes at the same time: maps and data. By taking advantageous features from the two approaches, an ideal model is proposed. To adapt the ideal model to the present situation in the Korean GIS community, a realistic model is generated, which is an 'SDTS-oriented' data model. Because SDTS will be a Korean data transfer standard, it will be a common basis for developing other data models for different purposes.
