• Title/Summary/Keyword: databases

Search Result 5,206, Processing Time 0.027 seconds

Computer-Aided Korean Wood Identification (COMPUTER를 이용(利用)한 한국산(韓國産) 목재(木材)의 식별(識別)에 관(關)한 연구(硏究))

  • Lee, Won-Yong;Chun, Su-Kyoung
    • Journal of the Korean Wood Science and Technology
    • /
    • v.18 no.2
    • /
    • pp.49-66
    • /
    • 1990
  • In order to identify unknown wood samples native to Korea, a softwood database (KSWCHUN; Korean SoftWood CHUN) and a hardwood database (KHWCHUN; Korean HardWood CHUN) were built, and new computer search programs (IDINEX; IDentification INformation EXpress) were written in Turbo Pascal (v5.0) and Macro Assembly (v5.0). The character data were based on 74 softwood features and on 148 hardwood features, the latter drawn from the "IAWA list of microscopic features for hardwood identification" published in 1989. For this investigation, the wood anatomy of 25 softwood species (13 genera in 5 families) and 112 hardwood species (57 genera in 31 families) was observed under scanning electron and light microscopes, supplemented by the literature. The IDINEX programs are based on edge-punched card keys, with several improvements. The maximum number of features in IDINEX is 229, fixed for a given database. Large numbers of taxa are handled efficiently, and new taxa are easily added. A search may be based on the sequence numbers of features; comparisons are made sequentially by feature and taxon, using the entire suite of specified features, to produce a list of possible matching taxa. The results are as follows. (1) Databases of Korean wood and the search programs (IDINEX) were built. (2) The databases can serve as a reference for identifying unknown wood. (3) The databases are valuable because new features, not previously reported for Korean wood, were observed in detail. (4) The ultrastructures of the cell walls (warty layer) and crystals observed under the scanning electron microscope will be especially helpful for identifying unknown wood. (5) The search process is quicker and more accurate than the alternatives. (6) Information on how one species differs from another can be obtained, and unknown wood can be identified probabilistically in IDINEX. (7) IDINEX could also be applied to identifying and classifying animals, plants, minerals, and so on.
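
The feature-by-feature matching described above can be sketched roughly as follows. This is an illustrative reconstruction, not the original Turbo Pascal code: the function name, the toy feature numbers, and the two example entries are assumptions, not data from KSWCHUN.

```python
# Hypothetical sketch of IDINEX-style searching: each taxon carries a set of
# feature sequence numbers (at most 229 per database), and a query keeps only
# the taxa compatible with every observed feature.
def identify(taxa, observed_features):
    """Return the taxa whose feature sets contain all observed features."""
    matches = []
    for name, features in taxa.items():              # taxon by taxon
        if all(f in features for f in observed_features):
            matches.append(name)                     # possible matching taxon
    return matches

# Toy database: taxon -> set of feature sequence numbers (values made up).
KSWCHUN = {
    "Pinus densiflora": {1, 5, 12, 40},
    "Abies holophylla": {1, 7, 12, 41},
}
print(identify(KSWCHUN, {1, 12}))   # both taxa carry features 1 and 12
print(identify(KSWCHUN, {5}))       # only Pinus densiflora carries feature 5
```

Narrowing the candidate list by adding observed features mirrors how an edge-punched card key is needled one feature at a time.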


A Study on the Development and Maintenance of Embedded SQL based Information Systems (임베디드 SQL 기반 정보시스템의 개발 및 관리 방법에 대한 연구)

  • Song, Yong-Uk
    • The Journal of Information Systems
    • /
    • v.19 no.4
    • /
    • pp.25-49
    • /
    • 2010
  • As companies have introduced ERP (Enterprise Resource Planning) systems since the mid-1990s, corporate databases have become centralized and gigantic. Companies are now developing data-mining applications on those databases for knowledge management. Almost all of them use Pro*C/C++, an embedded SQL programming language, because Pro*C/C++ is platform-independent and fast. However, they suffer from difficulties in development and maintenance due to the characteristics of corporate databases, which intrinsically have large numbers of tables and fields. The purpose of this research is to design and implement a methodology that makes it easier to develop and maintain embedded SQL applications based on relational databases. First, this article analyzes the syntax of Pro*C/C++ and addresses the concepts of repetition and duplication, which cause the difficulties in developing and maintaining corporate information systems. It then suggests a management architecture for source code and databases in which a preprocessor generates Pro*C/C++ source code by referring to a DB table specification, solving the problem of repetition and duplication. The article also suggests a DB administration architecture in which the same preprocessor generates DB administration commands from the same table specification. The preprocessor, named PrePro*C, has been developed in a UNIX command-line environment to preprocess Pro*C/C++ source code and SQL administration commands, and is being updated for use with other DB interface environments such as ODBC and JDBC.
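
The core idea, one table specification driving both the embedded-SQL code and the administration commands, can be sketched as follows. The spec format and output templates are assumptions for illustration; the abstract does not give PrePro*C's actual input or output syntax.

```python
# Hedged sketch of spec-driven generation in the spirit of PrePro*C: a single
# table specification is the one place where table and column names live, so
# the same edit propagates to both the Pro*C fragment and the DDL command.
spec = {
    "table": "EMP",
    "columns": [("EMPNO", "NUMBER(4)"), ("ENAME", "VARCHAR2(10)")],
}

def gen_select(spec):
    """Generate an embedded-SQL SELECT over every column in the spec."""
    cols = ", ".join(name for name, _ in spec["columns"])
    return f"EXEC SQL SELECT {cols} FROM {spec['table']};"

def gen_create(spec):
    """Generate the matching DDL from the same specification."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in spec["columns"])
    return f"CREATE TABLE {spec['table']} ({cols});"

print(gen_select(spec))
print(gen_create(spec))
```

Because both outputs derive from one spec, adding a column cannot leave the query and the schema out of sync, which is the repetition-and-duplication problem the paper targets.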

The National Clinical Database as an Initiative for Quality Improvement in Japan

  • Murakami, Arata;Hirata, Yasutaka;Motomura, Noboru;Miyata, Hiroaki;Iwanaka, Tadashi;Takamoto, Shinichi
    • Journal of Chest Surgery
    • /
    • v.47 no.5
    • /
    • pp.437-443
    • /
    • 2014
  • The JCVSD (Japan Cardiovascular Surgery Database) was organized in 2000 to improve the quality of cardiovascular surgery in Japan. Web-based data harvesting on adult cardiac surgery was started in 2001 (Japan Adult Cardiovascular Surgery Database, JACVSD), and on congenital heart surgery in 2008 (Japan Congenital Cardiovascular Surgery Database, JCCVSD). Both databases grew to become national databases by the end of 2013, influenced by the success of the Society of Thoracic Surgeons' National Database, which contains comparable input items. In 2011, the Japanese Board of Cardiovascular Surgery announced that JACVSD and JCCVSD data were to be used for board certification, which improved the quality of the first paperless, web-based board certification review undertaken in 2013. These changes led to a further step: in 2011, the National Clinical Database (NCD) was organized to investigate the feasibility of clinical databases in other medical fields, especially surgery. In the NCD, the board certification system of the Japan Surgical Society, the basic surgical association, was set as the first level in the hierarchy of specialties, and nine associations and six board certification systems were set at the second level as subspecialties. The NCD grew rapidly and now covers 95% of all surgical procedures. The participating associations have released or will release risk models, and studies using 'big data' from these databases have been published. The national databases have contributed to evidence-based medicine, to the accountability of medical professionals, and to the quality assessment and improvement of surgery in Japan.

2D-THI: Two-Dimensional Type Hierarchy Index for XML Databases (2D-THI: XML 데이터베이스를 위한 이차원 타입상속 계층색인)

  • Lee Jong-Hak
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.3
    • /
    • pp.265-278
    • /
    • 2006
  • This paper presents a two-dimensional type inheritance hierarchy index (2D-THI) for XML databases. XML Schema is one of the schema models for XML documents that supports type inheritance, but conventional indexing techniques for XML databases cannot support XML queries over type inheritance hierarchies. We construct a two-dimensional index structure using multidimensional file organizations to support type inheritance hierarchies in XML queries. This indexing technique deals with the problem of clustering index entries in a two-dimensional domain space, consisting of a key element domain and a type identifier domain, based on the user query pattern, and enhances query performance by adjusting the degree of clustering between the two domains. For performance evaluation, we compared the proposed 2D-THI with conventional class hierarchy indexing techniques from object-oriented databases, such as the CH-index and the CG-tree, through a cost model. The evaluation verified that the proposed two-dimensional type inheritance indexing technique can efficiently support query processing in XML databases across the query types.
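
The idea of indexing on a key domain and a type-identifier domain at once can be illustrated with a deliberately simplified sketch. A flat list stands in for the paper's multidimensional file organization, and all names and values are made up:

```python
# Toy sketch (not the paper's structure): each index entry is keyed by
# (type_id, key), so one lookup can restrict both the key value and the set
# of types forming a subtree of the type inheritance hierarchy.
entries = sorted([          # (type_id, key, record)
    (1, "a", "r1"),
    (1, "c", "r2"),
    (2, "a", "r3"),
    (3, "b", "r4"),
])

def search(subtree_type_ids, key):
    """All records with the given key whose type lies in the subtree set."""
    return [rec for t, k, rec in entries if t in subtree_type_ids and k == key]

# Types 1 and 2 are assumed to form one inheritance subtree (e.g. a base
# type and one derived type); the query finds key "a" across both.
print(search({1, 2}, "a"))
```

A real 2D-THI clusters entries across these two dimensions according to the query pattern; the point here is only that a type-hierarchy query is a two-dimensional restriction, not a plain key lookup.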


Design and Implementation of Real-Time Static Locking Protocol for Main-memory Database Systems (주기억장치 데이타베이스 시스템을 위한 실시간 정적 로킹 기법의 설계 및 구현)

  • Kim, Young-Chul;You, Han-Yang;Kim, Jin-Ho;Kim, June;Seo, Sang-Ku
    • Journal of KIISE:Databases
    • /
    • v.29 no.6
    • /
    • pp.464-476
    • /
    • 2002
  • Main-memory database systems, which keep entire databases in main memory, are suitable for high-performance real-time transaction processing. If two-phase locking (2PL) is used as the concurrency control protocol for transactions accessing main-memory databases, however, the probability of lock conflict is low, but lock operations become a relatively large overhead in total transaction processing time. In this paper, we designed a real-time static locking (RT-SL) protocol that minimizes lock operation overhead and reflects transaction priorities, and we implemented it on a main-memory real-time database system, Mr.RT. We also evaluated its performance against existing 2PL-based real-time locking protocols such as 2PL-PI and 2PL-HP. Extensive experiments reveal that RT-SL outperforms the existing protocols in most cases.
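
The static-locking idea, a transaction predeclares its whole lock set, which is granted atomically or queued by priority, can be sketched as follows. The class and method names are assumptions for illustration, not code from the paper's Mr.RT implementation.

```python
import heapq

# Minimal sketch of real-time static locking: all locks are requested up
# front in one atomic step, so no lock is acquired piecemeal mid-execution,
# and blocked transactions wait in priority order.
class StaticLockManager:
    def __init__(self):
        self.held = set()     # items currently locked
        self.waiting = []     # max-heap by priority: (-priority, txn, locks)

    def request(self, priority, txn, lock_set):
        """Grant every lock at once, or queue the whole transaction."""
        if self.held.isdisjoint(lock_set):
            self.held |= lock_set
            return True
        heapq.heappush(self.waiting, (-priority, txn, lock_set))
        return False

    def release(self, lock_set):
        """Free locks, then admit the top-priority waiter if it now fits."""
        self.held -= lock_set
        if self.waiting and self.held.isdisjoint(self.waiting[0][2]):
            _, txn, locks = heapq.heappop(self.waiting)
            self.held |= locks

lm = StaticLockManager()
print(lm.request(5, "T1", {"x", "y"}))   # granted: no conflict
print(lm.request(9, "T2", {"y", "z"}))   # queued: conflicts on "y"
lm.release({"x", "y"})                   # T2 admitted on release
print(lm.held)
```

Because the lock set is known before execution, there is exactly one grant decision per transaction, which is where the reduced lock-operation overhead comes from.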

Visualization of Path Expressions with Set Attributes and Methods in Graphical Object Query Languages (그래픽 객체 질의어에서 집합 속성과 메소드를 포함한 경로식의 시각화)

  • 조완섭
    • Journal of KIISE:Databases
    • /
    • v.30 no.2
    • /
    • pp.109-124
    • /
    • 2003
  • Although most commercial relational DBMSs provide a graphical query language as a user-friendly database interface, little research has been done on graphical query languages for object databases. Expressing complex query conditions concisely and intuitively has been an important issue in the design of graphical query languages. Since the object data model and object query languages are more complex than their relational counterparts, a graphical object query language needs a concise and intuitive representation method. We propose a graphical object query language called GOQL (Graphical Object Query Language) for object databases. By employing simple graphical notations, advanced features of object queries, such as path expressions including set attributes, quantifiers, and/or methods, can be represented simply. GOQL has excellent expressive power compared with previous graphical object query languages; we show that path expressions in XSQL(1,2) can be represented by simple graphical notations in GOQL. We also propose an algorithm that translates a graphical GOQL query into a textual object query with the same semantics, and we describe an implementation of GOQL for Internet environments.

RDB-based XML Access Control Model with XML Tree Levels (XML 트리 레벨을 고려한 관계형 데이터베이스 기반의 XML 접근 제어 모델)

  • Kim, Jin-Hyung;Jeong, Dong-Won;Baik, Doo-Kwon
    • Journal of Digital Contents Society
    • /
    • v.10 no.1
    • /
    • pp.129-145
    • /
    • 2009
  • As the secure distribution and sharing of information over the World Wide Web becomes increasingly important, the need for flexible and efficient access control systems naturally arises. Since the eXtensible Markup Language (XML) is emerging as the de facto standard format of the Internet era for storing and exchanging information, there have recently been many proposals to extend the XML model to incorporate security aspects. To a lesser or greater extent, however, such proposals neglect the fact that the data for XML documents will most likely reside in relational databases, and consequently they do not utilize the various security models proposed for and implemented in relational databases. In this paper, we take a rather different approach. We explore how to support security models for XML documents by leveraging techniques developed for relational databases, considering an object perspective. More specifically, in our approach, (1) users make XML queries against a given XML view/schema, (2) access controls for XML data are specified in the relational database, (3) data are stored in relational databases, (4) security checks and query evaluation are also done in relational databases, and (5) access control is enforced considering XML tree levels.
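
Step (5), enforcing access control by XML tree level, can be sketched with a toy rule table. The rule layout, column meanings, and example paths are assumptions for illustration; the paper's actual relational schema is not given in the abstract.

```python
# Hedged sketch: access rules stored as relational rows, where each rule
# limits how deep in the XML tree a user may read under a path prefix.
rules = [
    # (user, path_prefix, max_level, allow)
    ("alice", "/order",         3,  True),
    ("bob",   "/order/payment", 99, False),
]

def can_read(user, path):
    """Check a node path against the rules; tree level = node depth."""
    level = path.count("/")
    allowed = False
    for u, prefix, max_level, allow in rules:   # last matching rule wins
        if u == user and path.startswith(prefix):
            allowed = allow and level <= max_level
    return allowed

print(can_read("alice", "/order/item"))         # depth 2, within the limit
print(can_read("bob", "/order/payment/card"))   # explicitly denied subtree
```

In the paper's setting this check would run inside the relational engine as part of query evaluation, rather than in application code as shown here.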


Transformation of Spatial Query Region for Resolving Mismatches in Distributed Spatial Databases (분산 공간데이타베이스의 위치 불일치 해결을 위한 공간질의영역 변형)

  • 황정래;강혜영;이기준
    • Journal of KIISE:Databases
    • /
    • v.31 no.4
    • /
    • pp.362-372
    • /
    • 2004
  • One of the most difficult problems in building a distributed GIS lies in the heterogeneity of spatial databases. In particular, positional mismatches between spatial databases, which arise for several reasons, may produce incorrect query results and make the outputs of query processing unreliable. One simple solution is to correct the positional data in the spatial database at each site according to the most accurate one. This solution is, however, not practical in cases where the autonomy of each database must be respected. In this paper, we propose a spatial query processing method that does not correct the positional data in each spatial database. Instead, we dynamically transform a given query region or position onto the space in which each site's spatial objects are located. The proposed method is based on an elastic transformation using Delaunay triangulation. Its accuracy is proved mathematically and confirmed by experiment. Moreover, we implemented the method on a commercial database system to verify its usefulness.
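
One plausible reading of the elastic transformation is barycentric mapping between corresponding Delaunay triangles: a query point is located in a triangle of control points in one database's space, and its barycentric coordinates are reapplied in the matching triangle of the other space. This is an assumed formulation for illustration, not the paper's exact algorithm.

```python
# Sketch of triangle-to-triangle elastic mapping via barycentric coordinates.
def barycentric(p, a, b, c):
    """Barycentric weights (w1, w2, w3) of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def transform(p, src_tri, dst_tri):
    """Carry p from the source triangle into the corresponding target one."""
    w1, w2, w3 = barycentric(p, *src_tri)
    (ax, ay), (bx, by), (cx, cy) = dst_tri
    return (w1 * ax + w2 * bx + w3 * cx, w1 * ay + w2 * by + w3 * cy)

src = [(0, 0), (4, 0), (0, 4)]   # control points in database A's space
dst = [(1, 0), (5, 0), (1, 4)]   # the same control points as stored at B
print(transform((1, 1), src, dst))   # -> (2.0, 1.0)
```

Applying this per-triangle over a whole Delaunay triangulation gives a piecewise-linear warp of the query region, so each site's data stays untouched while the query adapts to its local distortion.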

An Improved Approach to Identify Bacterial Pathogens to Human in Environmental Metagenome

  • Yang, Jihoon;Howe, Adina;Lee, Jaejin;Yoo, Keunje;Park, Joonhong
    • Journal of Microbiology and Biotechnology
    • /
    • v.30 no.9
    • /
    • pp.1335-1342
    • /
    • 2020
  • The identification of bacterial pathogens to humans is critical for environmental microbial risk assessment. However, current methods for identifying pathogens in environmental samples are limited in their ability to detect highly diverse bacterial communities and to accurately differentiate pathogens from commensal bacteria. In the present study, we suggest an improved approach that combines identification results obtained from multiple databases, including the multilocus sequence typing (MLST) database, the virulence factor database (VFDB), and the Pathosystems Resource Integration Center (PATRIC) database, to resolve these challenges. By integrating the identification results from multiple databases, potential bacterial pathogens in metagenomes were identified and classified into eight different groups. Based on the distribution of genes in each group, we propose an equation to calculate the metagenomic pathogen identification index (MPII) of each metagenome from the weighted abundance of identified sequences in each database. We found that the accuracy of pathogen identification was improved by using combinations of multiple databases compared to individual databases. When the approach was applied to environmental metagenomes, those associated with activated sludge were estimated to have a higher MPII than other environments (i.e., drinking water, ocean water, ocean sediment, and freshwater sediment). The calculated MPII values were statistically distinguishable among different environments (p < 0.05). These results demonstrate that the suggested approach allows for more accurate identification of the pathogens associated with metagenomes.
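
The shape of the index, a weighted sum of per-database relative abundances, can be sketched as below. The abstract does not give the actual weights or the eight-group scheme, so the weight values, counts, and function name here are placeholders only.

```python
# Hedged sketch of an MPII-style score: weight the relative abundance of
# sequences identified in each database (MLST, VFDB, PATRIC) and sum.
def mpii(hits, total_reads, weights):
    """hits: database name -> count of identified sequences."""
    return sum(weights[db] * hits[db] / total_reads for db in hits)

weights = {"MLST": 1.0, "VFDB": 1.0, "PATRIC": 1.0}   # placeholder weights
sludge  = {"MLST": 30, "VFDB": 50, "PATRIC": 40}      # made-up counts
ocean   = {"MLST": 5,  "VFDB": 8,  "PATRIC": 6}

# Activated sludge scores higher than ocean water, as in the paper's trend.
print(mpii(sludge, 1000, weights) > mpii(ocean, 1000, weights))   # True
```

The weights are what let agreement across databases count for more than a hit in any single database, which is the stated reason combining databases beats using them individually.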

A Study on Introducing and Selecting Online Databases of International Non-Profit Organizations (해외 비영리기관 소장 학술 데이터베이스 현황 조사 및 분석 연구)

  • Hong, Hyun-Jin;Chung, Hye-Kyung;Noh, Young-Hee;Lee, Mi-Young
    • Journal of the Korean Society for information Management
    • /
    • v.22 no.1 s.55
    • /
    • pp.87-104
    • /
    • 2005
  • The purpose of this study was to examine the academic databases of overseas nonprofit organizations, to assess their quality, and to discuss whether and how they could be introduced into Korea. The study also attempted to provide information on the academic databases of nonprofit organizations in non-English-speaking countries, in order to prepare a wide variety of academic materials in broader fields that would be distinguished from those offered by existing academic databases, since it is not currently possible to take advantage of the academic materials possessed by such countries. The efforts of this study were expected to help gather international information at lower cost and in a more efficient way, and eventually to contribute to improving the productivity of academic research.