• Title/Summary/Keyword: database systems

A Study on the Implementation of SQL Primitives for Decision Tree Classification (판단 트리 분류를 위한 SQL 기초 기능의 구현에 관한 연구)

  • An, Hyoung Geun;Koh, Jae Jin
    • KIPS Transactions on Software and Data Engineering / v.2 no.12 / pp.855-864 / 2013
  • Decision tree classification is one of the important problems in data mining, and data mining has become an important task in the field of large database technologies. Efforts to couple data mining systems with database systems have therefore led to the development of database primitives that support data mining functions such as decision tree classification. These primitives are special database operations that support the SQL implementation of decision tree classification algorithms, and they become constituent modules of database systems for implementing specific algorithms. There are two aspects to developing database primitives that support data mining functions. The first is identifying, by analysis, common database primitives that support data mining functions. The other is providing an extended mechanism for implementing these primitives as an interface of database systems. Deciding which primitives should be stored in the DBMS is one of the difficult problems in data mining. To address this problem, this paper describes database primitives that construct and apply optimized decision tree classifiers. We then identify operations useful for various classification algorithms and discuss the implementation of these primitives on a commercial DBMS. We implement these primitives on a commercial DBMS and present experimental results comparing their performance.
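
A minimal sketch of the kind of primitive the abstract describes (not the paper's actual operations): the split-evaluation step of decision tree induction reduced to a single SQL aggregation, with the impurity computed client-side. The table and column names are invented for illustration.

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE train (outlook TEXT, play TEXT)")
conn.executemany("INSERT INTO train VALUES (?, ?)", [
    ("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
    ("rain", "yes"), ("rain", "yes"), ("rain", "no"),
])

# The core primitive: attribute-value/class counts via GROUP BY.
counts = conn.execute(
    "SELECT outlook, play, COUNT(*) FROM train GROUP BY outlook, play"
).fetchall()

# Client-side: turn the counts into a weighted Gini impurity for the split.
by_value = defaultdict(dict)
for value, cls, n in counts:
    by_value[value][cls] = n

def gini(class_counts):
    total = sum(class_counts.values())
    return 1.0 - sum((n / total) ** 2 for n in class_counts.values())

total_rows = sum(sum(c.values()) for c in by_value.values())
weighted = sum(sum(c.values()) * gini(c) for c in by_value.values()) / total_rows
print(round(weighted, 3))  # weighted Gini impurity of splitting on outlook
```

Pushing the GROUP BY into the DBMS is the point of such primitives: only the small count table, not the training data, crosses the database boundary.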

A Database System for High-Throughput Transposon Display Analyses of Rice

  • Inoue, Etsuko;Yoshihiro, Takuya;Kawaji, Hideya;Horibata, Akira;Nakagawa, Masaru
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.15-20 / 2005
  • We developed a database system to enable efficient, high-throughput transposon analyses in rice. We grow large-scale mutant series of rice by taking advantage of an active MITE transposon, mPing, and apply the transposon display method to them to study the correlation between genotypes and phenotypes. However, the analytical phase, in which we find mutation spots in waveform data called fragment profiles, involves several problems in terms of labor, data management, and reliability of the results. As a solution, our database system manages all the analytical data throughout the experiments and provides several functions and well-designed web interfaces for performing the overall analysis reliably and efficiently.

Building A PDM/CE Environment and Validating Integrity Using STEP (STEP을 이용한 PDM/CE환경의 구축과 데이타 무결성 확인)

  • 유상봉;서효원;고굉욱
    • The Journal of Society for e-Business Studies / v.1 no.1 / pp.173-194 / 1996
  • In order to adapt to today's short product life cycles and rapid technology changes, integrated systems should be extended to support PDM (Product Data Management) or CE (Concurrent Engineering). A PDM/CE environment has been developed, and a prototype is presented in this paper. Features of the PDM/CE environment are: 1) the integrated product information model (IPIM) includes both a data model and integrity constraints; 2) database systems are organized hierarchically so that working data cannot be referenced by other application systems until they are released into the global database; and 3) integrity constraints written in EXPRESS are validated both in the local databases and in the global database. By keeping the integrity of the product data, undesirable propagation of illegal data to other application systems can be prevented. For efficient validation, the constraints are distributed into the local and global schemata. Separate triggering mechanisms are devised based on the dependency of constraints on three data operations: insertion, deletion, and update.
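
The triggered validation described above can be illustrated with a toy sqlite3 example, assuming a hypothetical constraint (positive part weight) standing in for one written in EXPRESS; table and column names are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE global_parts (part_id TEXT PRIMARY KEY, weight REAL)")

# A trigger fires on insertion and aborts when the constraint is violated,
# so illegal working data never reaches the released (global) table.
db.execute("""
CREATE TRIGGER check_weight BEFORE INSERT ON global_parts
WHEN NEW.weight <= 0
BEGIN
    SELECT RAISE(ABORT, 'integrity violation: weight must be positive');
END
""")

db.execute("INSERT INTO global_parts VALUES ('P-001', 2.5)")   # accepted
try:
    db.execute("INSERT INTO global_parts VALUES ('P-002', -1.0)")
except sqlite3.IntegrityError as e:
    rejected = str(e)

remaining = db.execute("SELECT COUNT(*) FROM global_parts").fetchone()[0]
print(rejected, remaining)
```

In the paper's setting the same idea is split across local and global schemata; here a single database shows only the trigger mechanism.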

Optimal Savepoint in a Loosely-Coupled Resilient Database System (느슨히 결합된 데이타베이스 시스템에서 최적의 저장점 유도)

  • Choe, Jae-Hwa;Kim, Seong-Eon
    • Asia Pacific Journal of Information Systems / v.6 no.1 / pp.21-38 / 1996
  • This paper investigates opportunities for performance improvement through a resiliency mechanism in a distributed primary/backup database system. Recognizing that a distributed transaction executes at several servers during its lifetime, we propose a resiliency mechanism that allows continuous transaction processing in distributed database server systems in the presence of a server failure. To continue transaction processing despite failures, every state change of a transaction could be saved in the backup server. Obviously, such pessimistic synchronization may burden the system more than it benefits it. Thus, the tracking need not be done synchronously with the transaction's progress; instead, the state of all transaction processing in the system is saved periodically. This activity is referred to as a savepoint. The question, then, is how often the savepoint should be taken. We derive the optimal savepoint to identify the optimization parameters for the resilient transaction processing system.
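
For intuition on the trade-off being optimized, the classical first-order approximation for a checkpoint interval (Young's formula) can be sketched; this is a well-known textbook result, not the paper's own derivation or model.

```python
import math

def optimal_savepoint_interval(savepoint_cost, mtbf):
    """Young's approximation: interval minimizing savepoint overhead plus
    expected rework after a failure. Both arguments share one time unit."""
    return math.sqrt(2.0 * savepoint_cost * mtbf)

# Example: a 5-second savepoint and one server failure per day on average.
interval = optimal_savepoint_interval(5.0, 24 * 3600.0)
print(round(interval, 1))  # optimal interval in seconds
```

Frequent savepoints waste time recording state; rare ones lose more work per failure, and the square-root form balances the two costs.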

Automation of Expert Classification in Knowledge Management Systems Using Text Categorization Technique (문서 범주화를 이용한 지식관리시스템에서의 전문가 분류 자동화)

  • Yang, Kun-Woo;Huh, Soon-Young
    • Asia Pacific Journal of Information Systems / v.14 no.2 / pp.115-130 / 2004
  • This paper proposes how to build an expert profile database in a knowledge management system (KMS) that provides information on the expertise each expert possesses in the organization. Recent research on managing tacit knowledge has shown that it is often more practical to provide expert search mechanisms in a KMS that pinpoint experts with the searched expertise, so that users can contact them for help. In this paper, we develop a framework that automates expert classification using a text categorization technique called the vector space model, through which an expert database composed of the compiled profile information is built. This approach minimizes the maintenance cost of manual expert profiling while eliminating the incorrectness and obsolescence that result from subjective manual processing. We also define the structure of expertise so that the expert classification framework can be implemented to build an expert database in a KMS. The developed prototype system, "Knowledge Portal for Researchers in Science and Technology," is introduced to show the applicability of the proposed framework.
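
A minimal sketch of the vector space model the paper applies: represent each expert's documents and a query expertise area as term-frequency vectors and rank experts by cosine similarity. A real system would use TF-IDF weighting and a controlled expertise vocabulary; the experts and terms here are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

experts = {
    "kim": Counter("database query optimization index database".split()),
    "lee": Counter("protein sequence alignment genome".split()),
}
query = Counter("database index tuning".split())

# Rank experts by similarity of their profile vector to the query vector.
ranked = sorted(experts, key=lambda e: cosine(experts[e], query), reverse=True)
print(ranked[0])
```

Classification then amounts to assigning each expert to the expertise categories whose vectors they are closest to, which is what removes the manual profiling step.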

A Study on the Method to Establish an User Environment of a Requirements Management Database Using Web Access of Cradle(R) (Cradle(R)의 Web Access를 이용한 철도시스템 사양관리환경 구축방안 연구)

  • Chung, Kyung-Ryul;Park, Chul-Ho;Song, Seon-Ho;Hur, Jee-Youl
    • Proceedings of the KSR Conference / 2009.05a / pp.132-140 / 2009
  • Cradle(R) is a specialized systems engineering tool developed by 3SL (Structure Software System Limited), which has headquarters in the U.K. and the U.S.A. It is recognized as one of the fastest growing Computer-Aided Systems Engineering (CASE) tools. We built a requirements management database using Cradle(R) for the urban maglev program in Korea. Because Cradle(R) provides a network function, clients in external organizations associated with the urban maglev program can access the database over the Internet. However, the network function of Cradle(R) requires opening specific network ports and may decrease network speed. In this paper, we propose a method to establish a user environment for a requirements management database that overcomes these network constraints.

External vs. Internal: An Essay on Machine Learning Agents for Autonomous Database Management Systems

  • Fatima Khalil Aljwari
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.164-168 / 2023
  • Database management systems (DBMSs) expose many possible configurations, which makes them challenging to manage and tune. The problem is compounded in large-scale deployments with thousands or millions of individual DBMSs, each with its own configuration requirements. Recent research has explored using machine learning (ML) agents to automate DBMS tuning. These agents extract performance metrics and behavioral information from the DBMS and then train models on these data to select the tuning actions they predict will have the most benefit. This paper discusses two engineering approaches for integrating ML agents in a DBMS. The first is to build an external tuning controller that treats the DBMS as a black box. The second is to incorporate the ML agents natively in the DBMS's architecture.
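
The first approach can be sketched as a loop that only observes the DBMS from outside. Here the "DBMS" is a stand-in function and the search is a naive random search rather than a trained ML model, purely to show the black-box shape of an external controller.

```python
import random

random.seed(0)

def dbms_throughput(buffer_pool_mb):
    # Stand-in for benchmarking a real DBMS; peak performance near 4096 MB.
    return 1000 - abs(buffer_pool_mb - 4096) / 10

best_knob, best_perf = None, float("-inf")
for _ in range(200):
    knob = random.randint(128, 8192)      # propose a candidate configuration
    perf = dbms_throughput(knob)          # observe the black box
    if perf > best_perf:                  # keep the best setting seen so far
        best_knob, best_perf = knob, perf

print(best_knob, round(best_perf, 1))
```

A native agent would instead read internal state (buffer statistics, query plans) directly, trading the portability of the black-box loop for richer signals.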

Utilizing Integrated Public Big Data in the Database System for Analyzing Vehicle Accidents

  • Lee, Gun-woo;Kim, Tae-ho;Do, Songi;Jun, Hyun-jin;Moon, Yoo-Jin
    • Journal of the Korea Society of Computer and Information / v.22 no.9 / pp.99-105 / 2017
  • In this paper, we design and implement a database management system for analyzing vehicle accidents by integrating public big data. The paper aims to provide valuable information for recognizing the seriousness of vehicle accidents and the circumstances at the time of the accident, and to make the produced information useful for insurance company policies as well as government policies. To analyze vehicle accidents, the system integrates big data from the National Indicator System, the Meteorological Office, the National Statistical Office, the Korea Insurance Development Institute, the Road Traffic Authority, the Ministry of Land, Infrastructure and Transport, and the National Police Agency, which differentiates it from previous systems. The system contains data recorded at the time of each accident, including weather conditions, vehicle model, age, sex, and insurance amount, from which users can obtain integrated information about vehicle accidents. The results show that vehicle accidents occur more frequently in clear weather, in vehicle-to-vehicle collisions, and at crosswalks and crossways. They also show that accidents in cloudy weather lead to injury and death more often than those in clear weather. The accident information produced by the system can also be used to help drivers effectively avoid dangerous accidents.

One-Snapshot Algorithm for Secure Transaction Management in Electronic Stock Trading Systems (전자 주식 매매 시스템에서의 보안 트랜잭션 관리를 위한 단일 스냅샷 알고리즘)

  • 김남규;문송천;손용락
    • Journal of KIISE:Databases / v.30 no.2 / pp.209-224 / 2003
  • Recent developments in electronic commerce have expanded the use of Electronic Stock Trading Systems (ESTS). In ESTS, information with various sensitivity levels is shared by multiple users with mutually different clearance levels, so it is necessary to use Multilevel Secure Database Management Systems (MLS/DBMSs) to control concurrent execution among multiple transactions. In ESTS, not only analytical OLAP transactions but also mission-critical OLTP transactions execute concurrently, which makes it difficult to adapt traditional secure transaction management schemes to ESTS environments. In this paper, we propose the Secure One Snapshot (SOS) protocol, devised for secure transaction management in ESTS. By maintaining one additional snapshot alongside the working database, SOS blocks covert channels efficiently, allows various real-time transaction management schemes to be adopted with ease, and reduces the length of the waiting queue managed to maintain the freshness of data, by exploiting less strict correctness criteria. We introduce the SOS protocol with some examples and then analyze the correctness of the devised protocol.
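
A toy sketch of the one-snapshot idea (not the SOS protocol itself): update transactions work on the current database, while reads that could otherwise form a covert channel are served from a single earlier snapshot that advances periodically. The data values are invented.

```python
working = {"AAPL": 150, "TSLA": 200}   # current (working) database
snapshot = dict(working)                # the single maintained snapshot

def write(key, value):
    working[key] = value                # OLTP updates hit the working copy

def secure_read(key):
    return snapshot[key]                # covert-channel-sensitive reads see
                                        # only the snapshot

def refresh_snapshot():
    global snapshot
    snapshot = dict(working)            # periodic snapshot advance

write("AAPL", 155)
stale = secure_read("AAPL")   # update invisible until the next refresh
refresh_snapshot()
fresh = secure_read("AAPL")
print(stale, fresh)
```

Because readers never observe in-progress updates, the timing of high-clearance writes cannot be signaled to low-clearance readers through data visibility.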

Intelligent Query Processing Using a Meta-Database KaDB

  • Huh, Soon-Young;Moon, Kae-Hyun
    • Proceedings of the Korea Database Society Conference / 1999.06a / pp.161-171 / 1999
  • Query language has been widely used as a convenient tool to obtain information from a database. However, users demand more intelligent query processing systems that can understand the intent of an imprecise query and provide additional useful information as well as exact answers. This paper introduces a meta-database and presents a query processing mechanism that supports a variety of intelligent queries in a consistent and integrated way. The meta-database extracts data abstraction knowledge from an underlying database on the basis of a multilevel knowledge representation framework KAH. In cooperation with the underlying database, the meta-database supports four types of intelligent queries that provide approximately or conceptually equal answers as well as exact ones.
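
A hypothetical sketch of answering an "approximately equal" query through a data-abstraction hierarchy, in the spirit of the meta-database described above; the hierarchy and data are invented, not taken from KAH.

```python
# value -> abstract concept, i.e. one level of a data-abstraction hierarchy
hierarchy = {
    "sedan": "car", "suv": "car", "truck": "commercial", "van": "commercial",
}
vehicles = [("v1", "sedan"), ("v2", "suv"), ("v3", "truck")]

def approximate_query(target):
    """Return exact matches plus neighbors under the same abstract concept."""
    concept = hierarchy[target]
    exact = [vid for vid, t in vehicles if t == target]
    approx = [vid for vid, t in vehicles
              if t != target and hierarchy[t] == concept]
    return exact, approx

exact, approx = approximate_query("sedan")
print(exact, approx)
```

The meta-database's role is to supply the `hierarchy` mapping automatically from the underlying data, so the query processor can widen a precise predicate into its conceptual neighborhood.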
