• Title/Summary/Keyword: SQL-database

An Extension of the DBMax for Data Warehouse Performance Administration (데이터 웨어하우스 성능 관리를 위한 DBMax의 확장)

  • Kim, Eun-Ju; Young, Hwan-Seung; Lee, Sang-Won
    • The KIPS Transactions: Part D / v.10D no.3 / pp.407-416 / 2003
  • As the usage of database systems dramatically increases and massive amounts of data pour into them, performance administration techniques for using database systems effectively are becoming more important. In data warehouses especially, performance management is far more significant, mainly because of the large volume of data and the complexity of queries. The objectives and characteristics of data warehouses differ from those of other operational systems, so adequate techniques for performance monitoring and tuning are needed. In this paper we extend the functionality of DBMax, a performance administration tool for Oracle database systems, so that it can be applied to data warehouse systems. First, we analyze requirements based on the summary management and ETL functions that Oracle 9i provides for data warehouse performance improvement. Then we design an architecture for the extended DBMax functionality and implement it. Specifically, we support SQL tuning by providing details of the schema objects for summary management and ETL processes along with statistics information. We also provide a new function that recommends useful materialized views based on the workload extracted from DBMax log files and analyzes the usage of existing materialized views.
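
The workload-based advising described in this abstract can be pictured with a small sketch. This is not DBMax's actual algorithm or interface; the log format, the suggest_materialized_views helper, and the Oracle-style DDL it emits are assumptions for illustration only.

```python
# Illustrative only: not DBMax's algorithm. Scans a workload of SQL text for
# recurring GROUP BY patterns and emits Oracle-style materialized-view DDL
# for the most frequent ones.
import re
from collections import Counter

GROUPING = re.compile(
    r"FROM\s+(?P<table>\w+).*?GROUP\s+BY\s+(?P<cols>[\w,\s]+)",
    re.IGNORECASE | re.DOTALL,
)

def suggest_materialized_views(workload_sql, top_n=3):
    counts = Counter()
    for sql in workload_sql:
        m = GROUPING.search(sql)
        if m:
            counts[(m.group("table").lower(), m.group("cols").strip().lower())] += 1
    for (table, cols), freq in counts.most_common(top_n):
        yield (f"-- pattern seen {freq} times in the workload\n"
               f"CREATE MATERIALIZED VIEW mv_{table}_summary\n"
               f"  ENABLE QUERY REWRITE AS\n"
               f"  SELECT {cols}, COUNT(*) AS cnt FROM {table} GROUP BY {cols};")

workload = [
    "SELECT region, SUM(amount) FROM sales GROUP BY region",
    "SELECT region, COUNT(*) FROM sales GROUP BY region",
    "SELECT product, AVG(amount) FROM sales GROUP BY product",
]
for ddl in suggest_materialized_views(workload):
    print(ddl, end="\n\n")
```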

Testing Transactions based on Verification of Isolation Levels (고립화 수준을 검증하기 위한 트랜잭션의 시험)

  • Hong, Seok-Hee
    • The Journal of the Korea Contents Association / v.8 no.7 / pp.75-84 / 2008
  • Concurrency and synchronization problems are often caused by database applications concurrently accessing databases managed by a DBMS. Most commercial DBMSs support isolation levels to resolve these problems. Verifying isolation levels is important because the consistency and integrity constraints of the database can be violated depending on the isolation levels of the transactions that make up database applications. We propose a test tool set to verify isolation-level settings and reveal faulty ones, and we implement a prototype of the tool set. The proposed tool set analyzes the SQL statements of ESQL/C programs, attaches test code to verify isolation levels, runs the test transactions, and detects errors.
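
As a rough illustration of what verifying an isolation level means in practice, the sketch below checks whether a non-repeatable read can be observed at a given level. It is not the paper's ESQL/C tool set; it assumes a PostgreSQL database reachable via psycopg2, and the DSN and accounts table are hypothetical.

```python
# Sketch of one isolation-level check (not the paper's ESQL/C tool set).
# Assumes a PostgreSQL database reachable with the hypothetical DSN below and
# a pre-existing table accounts(id INTEGER, balance INTEGER).
import psycopg2

DSN = "dbname=test user=test"  # hypothetical connection string

def shows_nonrepeatable_read(level):
    a, b = psycopg2.connect(DSN), psycopg2.connect(DSN)
    try:
        cur_a = a.cursor()
        cur_a.execute(f"SET TRANSACTION ISOLATION LEVEL {level}")
        cur_a.execute("SELECT balance FROM accounts WHERE id = 1")
        first = cur_a.fetchone()[0]

        # A second session commits an update between the two reads.
        cur_b = b.cursor()
        cur_b.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 1")
        b.commit()

        cur_a.execute("SELECT balance FROM accounts WHERE id = 1")
        second = cur_a.fetchone()[0]
        a.rollback()
        return first != second  # True -> a non-repeatable read was observed
    finally:
        a.close()
        b.close()

# Expected: True at READ COMMITTED, False at REPEATABLE READ or SERIALIZABLE.
print("READ COMMITTED :", shows_nonrepeatable_read("READ COMMITTED"))
print("REPEATABLE READ:", shows_nonrepeatable_read("REPEATABLE READ"))
```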

XML Database Setup & Messenger Connection Research for Cyber University Supporting (가상대학 학사지원을 위한 XML 데이터베이스 구축과 메신저 관련 연구)

  • Bang Kee-Chun
    • Journal of Digital Contents Society / v.4 no.1 / pp.115-126 / 2003
  • The objective of this research is to add convenience to the intranet operating within an educational institution by setting up a messenger function that enables faculty and teachers to make better use of the university database via the intranet. With this messenger service, users can view information about the members of a group they belong to and register as friends a set of members they select from their work group. This function reduces the cumbersome chore of adding one individual as a friend at a time. This research enhances the intranet of an educational institution by integrating new functions. In particular, student data entered through input forms is stored via JDBC as XML standard data, and the stored database can be searched from a web client through the web server using SQL queries.
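
The storage path this abstract describes, student records supplied as XML and queried back with SQL, can be sketched conceptually. The paper's implementation uses Java/JDBC; the sketch below uses Python's standard library instead, and the XML layout and student table are invented for illustration.

```python
# Conceptual sketch only: the paper stores XML student data through Java/JDBC;
# here the same idea is shown with Python's standard library. The XML layout
# and the student table are invented for illustration.
import sqlite3
import xml.etree.ElementTree as ET

student_xml = """
<students>
  <student id="20030001"><name>Kim</name><dept>CS</dept></student>
  <student id="20030002"><name>Lee</name><dept>Math</dept></student>
</students>
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE student (id TEXT PRIMARY KEY, name TEXT, dept TEXT)")

for node in ET.fromstring(student_xml).findall("student"):
    db.execute("INSERT INTO student VALUES (?, ?, ?)",
               (node.get("id"), node.findtext("name"), node.findtext("dept")))
db.commit()

# An SQL query issued on behalf of a web client.
for row in db.execute("SELECT id, name FROM student WHERE dept = ?", ("CS",)):
    print(row)
```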

Design and Implementation of School Affairs Management System using PHP on the Internet (인터넷 상에서 PHP를 이용한 학사관리 시스템의 설계 및 구현)

  • Moon, Jin-Yong; Koo, Yong-Wan
    • The Transactions of the Korea Information Processing Society / v.7 no.10 / pp.3148-3154 / 2000
  • In this paper, the design and implementation of an on-line registration system for school affairs is described. The system configuration consists of a PC server running the Linux operating system, the Apache web server, and MySQL as the database engine. In addition, PHP, which has lately become a popular server-side scripting language on the Internet, is used to implement a real-time database. In order to avoid overload problems during the short-term registration period, which shows the typical surge of traffic, the proposed system is designed to minimize unnecessary interfacing tasks. On the administrator side, the system is designed with a separate dedicated server that restricts the scope of specific database tasks. In doing so, it becomes possible to build an optimal system by distributing and balancing the transaction load, maintaining security, and supporting efficient administrative tasks.
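
The real-time registration path can be pictured as one short transaction per request, which is the kind of minimal server-side work the system aims for during the registration surge. The paper's stack is PHP, Apache, and MySQL on Linux; the sketch below only illustrates the idea in Python with sqlite3, and the table layout and capacity check are assumptions.

```python
# Illustrative sketch only (the paper's system is PHP + MySQL behind Apache on
# Linux): each registration request is one short transaction that checks the
# remaining capacity before inserting. Table layout is an assumption.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE course (code TEXT PRIMARY KEY, capacity INTEGER);
  CREATE TABLE enroll (student_id TEXT, code TEXT);
  INSERT INTO course VALUES ('CS101', 2);
""")

def register(student_id, code):
    with db:  # one short transaction per registration request
        (capacity,) = db.execute(
            "SELECT capacity FROM course WHERE code = ?", (code,)).fetchone()
        (enrolled,) = db.execute(
            "SELECT COUNT(*) FROM enroll WHERE code = ?", (code,)).fetchone()
        if enrolled >= capacity:
            return False
        db.execute("INSERT INTO enroll VALUES (?, ?)", (student_id, code))
        return True

print(register("s1", "CS101"))  # True
print(register("s2", "CS101"))  # True
print(register("s3", "CS101"))  # False -- the course is full
```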

HBase based Business Process Event Log Schema Design of Hadoop Framework

  • Ham, Seonghun; Ahn, Hyun; Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services / v.20 no.5 / pp.49-55 / 2019
  • Organizations design and operate business process models to achieve their goals efficiently and systematically. With the advancement of IT technology, the number of items that computer systems can participate in has grown, and processes have become huge and complicated. This phenomenon has created more complex and subdivided business process flows, and the process instances that contain workcases and events are larger and carry more data. The event log is an essential resource for process mining and is used directly in model discovery, analysis, and improvement of processes. As these logs grow bigger and broader, problems such as capacity management and I/O load arise when they are managed with existing row-level programs or through a relational database. In this paper, treating the event log as big data, we identify the management limits of the existing file-based and relational-database approaches, and we design and apply schemes to archive and analyze large event logs through Hadoop, an open source distributed file system, and HBase, a NoSQL database system.
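
A sketch of the general storage idea, event-log rows keyed so that all events of one process instance are contiguous in HBase, is shown below using the happybase client. The table name, column family, and row-key layout are assumptions, not the schema designed in the paper, and a reachable HBase Thrift server is assumed.

```python
# Sketch of the storage idea only: event-log rows keyed so that all events of
# one process instance are contiguous. Uses the happybase HBase client and
# assumes a reachable HBase Thrift server; the table name, column family, and
# row-key layout are assumptions, not the schema designed in the paper.
import happybase

connection = happybase.Connection("localhost")          # hypothetical host
connection.create_table("bp_event_log", {"e": dict()})  # one column family
table = connection.table("bp_event_log")

# Row key: <process instance id>:<zero-padded event sequence number>
table.put(b"proc0001:000001", {b"e:activity": b"approve_order",
                               b"e:worker":   b"alice",
                               b"e:ts":       b"2019-05-01T09:00:00"})
table.put(b"proc0001:000002", {b"e:activity": b"ship_order",
                               b"e:worker":   b"bob",
                               b"e:ts":       b"2019-05-01T10:30:00"})

# Retrieve every event of one process instance with a single prefix scan.
for key, data in table.scan(row_prefix=b"proc0001:"):
    print(key, data[b"e:activity"])
```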

System Implementation and Analysis of Job Analysis for University Curriculum (교육과정체계 수립을 위한 직무분석 시스템 구현 및 적용사레 분석)

  • Hyun, Seung-Ryul; Lee, Sang-Jeong
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.127-134 / 2009
  • Universities and job training institutes need to develop training courses that match the requirements of enterprises. Therefore, a job analysis system that analyzes the skills needed for job performance and allows them to be reflected in curricula is required for deriving a generalized curriculum system. In this paper, to develop a curriculum that fulfills the requirements of enterprises, we implemented a DACUM (Developing A CurriculUM) based process that performs verification of tasks for workers, produces a roadmap for the curriculum, and so on, and we analyzed an instance of applying the system. The proposed system is implemented with Java and an MS Access database, and it can work with a central database realized in MS SQL Server over the Internet.

Construction of Linkage Database on Nursing Diagnoses, Interventions, Outcomes in Abdominal Surgery Patients (복부수술환자의 간호진단, 간호중재, 간호결과 연계 데이터베이스 구축)

  • Yoo, Hyung-Sook; Chi, Sung-Ai
    • Journal of Korean Academy of Nursing Administration / v.7 no.3 / pp.425-437 / 2001
  • This research developed database software to handle a large amount of clinical nursing data covering nursing diagnoses, related factors, defining characteristics, nursing interventions, nursing activities, and nursing outcomes. MS Access 2000 and SQL were selected to provide general-purpose database logic efficiently, and MS Visual Basic 6.0 was used to build the graphical user interface. The linkage database for abdominal surgery patients was constructed from clinical data and questionnaires. The database system can add related factors, defining characteristics, and nursing activities to the database and analyze statistical results through Access queries. In the final stage, an end-user satisfaction analysis using a 5-point Likert scale was done on responses to using the database system. The accuracy/trustworthiness of the database system was rated highest, with an average score of 4.42, followed by efficiency at 4.21 and user-friendliness at 4.1.
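
The linkage structure can be pictured as a bridge table joining diagnoses, interventions, and outcomes. The sketch below uses hypothetical table and column names with sqlite3; the actual system is MS Access 2000 with a Visual Basic 6.0 front end.

```python
# Sketch of the linkage idea with hypothetical table and column names; the
# actual system is MS Access 2000 with a Visual Basic 6.0 front end.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE diagnosis    (dx_id INTEGER PRIMARY KEY, label TEXT);
  CREATE TABLE intervention (ix_id INTEGER PRIMARY KEY, label TEXT);
  CREATE TABLE outcome      (oc_id INTEGER PRIMARY KEY, label TEXT);
  CREATE TABLE linkage      (dx_id INTEGER, ix_id INTEGER, oc_id INTEGER);

  INSERT INTO diagnosis    VALUES (1, 'Acute pain');
  INSERT INTO intervention VALUES (1, 'Pain management');
  INSERT INTO outcome      VALUES (1, 'Pain level');
  INSERT INTO linkage      VALUES (1, 1, 1);
""")

# How often is each diagnosis-intervention-outcome combination recorded?
rows = db.execute("""
  SELECT d.label, i.label, o.label, COUNT(*) AS freq
  FROM linkage l
  JOIN diagnosis    d ON d.dx_id = l.dx_id
  JOIN intervention i ON i.ix_id = l.ix_id
  JOIN outcome      o ON o.oc_id = l.oc_id
  GROUP BY d.label, i.label, o.label
""").fetchall()
print(rows)
```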

NVST DATA ARCHIVING SYSTEM BASED ON FASTBIT NOSQL DATABASE

  • Liu, Ying-Bo; Wang, Feng; Ji, Kai-Fan; Deng, Hui; Dai, Wei; Liang, Bo
    • Journal of The Korean Astronomical Society / v.47 no.3 / pp.115-122 / 2014
  • The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces a maximum of 120 thousand observational records in a day. Given the large number of files, the effective archiving and retrieval of files becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the Fastbit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (i.e., MySQL; My Structured Query Language), the Fastbit database shows distinctive advantages in indexing and querying performance. In a large-scale database of 40 million records, the multi-field combined query response time of the Fastbit database is about 15 times faster and fully meets the requirements of the NVST. Our study brings a new idea for massive astronomical data archiving and may contribute to the design of data management systems for other astronomical telescopes.
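
The speedup FastBit-style bitmap indexing offers on multi-field combined queries comes from turning each predicate into a precomputed bitmap and intersecting them. The sketch below illustrates only that principle with plain Python sets; it is not the FastBit API, and the record fields are invented.

```python
# Conceptual sketch of bitmap-style indexing, not the FastBit API: each
# (field, value) pair gets a precomputed set of matching row ids, so a
# multi-field combined query reduces to cheap set intersections. The record
# fields are invented for illustration.
records = [
    {"instrument": "TiO", "quality": "good", "exp_ms": 20},
    {"instrument": "TiO", "quality": "bad",  "exp_ms": 20},
    {"instrument": "Ha",  "quality": "good", "exp_ms": 40},
    {"instrument": "TiO", "quality": "good", "exp_ms": 40},
]

index = {}
for rid, rec in enumerate(records):
    for field, value in rec.items():
        index.setdefault((field, value), set()).add(rid)

# Combined query: instrument = 'TiO' AND quality = 'good' AND exp_ms = 40
hits = (index[("instrument", "TiO")]
        & index[("quality", "good")]
        & index[("exp_ms", 40)])
print(sorted(hits))  # -> [3]
```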

Representation and Implementation of Graph Algorithms based on Relational Database (관계형 데이타베이스에 기반한 그래프 알고리즘의 표현과 구현)

  • Park, Hyu-Chan
    • Journal of KIISE: Databases / v.29 no.5 / pp.347-357 / 2002
  • Graphs provide a powerful methodology for solving many real-world problems, and therefore there have been many proposals for graph representations and algorithms. However, because most of them consider only memory-resident graphs, they remain difficult to apply to large-scale problems. To cope with this difficulty, this paper proposes a graph representation and graph algorithms based on well-developed relational database theory. Graphs are represented in the form of relations, which can be visualized as relational tables; each vertex and edge of a graph is represented as a tuple in these tables. Graph algorithms are also defined in terms of relational algebraic operations such as projection, selection, and join, and they can be implemented with a database language such as SQL. We also developed a library of basic graph operations for the management of graphs and the development of graph applications. This database approach provides an efficient methodology for dealing with very large-scale graphs, and the graph library supports the development of graph applications. Furthermore, it has many advantages, such as concurrent graph sharing among users, by virtue of the capabilities of the database.
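
A minimal sketch of this representation: an edge relation plus a graph algorithm (single-source reachability) written as a recursive SQL query. Table names are illustrative and sqlite3 stands in for a full relational DBMS; the paper's own library of graph operations is not reproduced here.

```python
# Minimal sketch: vertices and edges stored as relations, and a graph
# algorithm (single-source reachability) expressed as a recursive SQL query.
# Table names are illustrative; sqlite3 stands in for a full relational DBMS.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE vertex (v TEXT PRIMARY KEY);
  CREATE TABLE edge   (src TEXT, dst TEXT);

  INSERT INTO vertex VALUES ('a'), ('b'), ('c'), ('d');
  INSERT INTO edge   VALUES ('a','b'), ('b','c'), ('c','d');
""")

# All vertices reachable from 'a': the transitive closure from one source,
# computed entirely inside the database engine.
rows = db.execute("""
  WITH RECURSIVE reach(v) AS (
    SELECT 'a'
    UNION
    SELECT e.dst FROM edge e, reach r WHERE e.src = r.v
  )
  SELECT v FROM reach
""").fetchall()
print(sorted(row[0] for row in rows))  # ['a', 'b', 'c', 'd']
```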

Evaluating the Performance Quality of Open Source Database Management Systems (오픈소스 DBMS의 성능 품질 평가)

  • Min, Meekyung
    • Journal of Korean Society for Quality Management / v.45 no.4 / pp.933-942 / 2017
  • Purpose: The purpose of this paper is to evaluate the performance quality of open source DBMSs. Performance quality is defined as the processing time for join queries. Query processing time is measured and compared for the most widely used open source DBMSs and a commercial DBMS. Methods: By varying the number of tuples in the two relations to be joined, the average processing time (in seconds) of a join query in each DBMS was obtained experimentally. ANOVA and the Tukey HSD test were used to compare the performance quality of the DBMSs. Results: There was a significant difference between the performance qualities of the three DBMSs at all experimental levels, where the number of tuples was 100, 1,000, 2,000, 10,000, and 50,000. As a result of the Tukey HSD test, the two open source DBMSs (MariaDB, MySQL) were classified in the same group only at the 100-tuple level, with the commercial DBMS (MS-SQL Server) in another group; at levels of 1,000 tuples or more, all three DBMSs belonged to different groups. Conclusion: Within the open source DBMS group, MariaDB showed better performance quality except at small numbers of tuples, so the results show that MariaDB can be an alternative to MySQL, which is currently the most widely used. Between the open source and commercial DBMS groups, MS-SQL Server always shows the best performance quality, but the smaller the number of tuples, the smaller the difference.
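
A minimal sketch of the measurement procedure this abstract describes: build two relations of n tuples, time an equi-join, and repeat over the tuple counts listed above. sqlite3 stands in for the DBMSs actually compared (MariaDB, MySQL, MS-SQL Server), so the absolute numbers are not comparable to the published results.

```python
# Minimal sketch of the measurement procedure: build two relations of n tuples,
# time an equi-join, repeat over the tuple counts used in the paper. sqlite3
# stands in for the DBMSs actually compared (MariaDB, MySQL, MS-SQL Server),
# so the absolute numbers are not comparable to the published results.
import sqlite3
import time

def join_time(n, repeats=3):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE r (id INTEGER PRIMARY KEY, payload TEXT)")
    db.execute("CREATE TABLE s (id INTEGER PRIMARY KEY, r_id INTEGER)")
    db.executemany("INSERT INTO r VALUES (?, ?)",
                   ((i, f"row{i}") for i in range(n)))
    db.executemany("INSERT INTO s VALUES (?, ?)",
                   ((i, i) for i in range(n)))
    db.commit()
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        db.execute("SELECT COUNT(*) FROM r JOIN s ON s.r_id = r.id").fetchone()
        best = min(best, time.perf_counter() - start)
    return best

for n in (100, 1000, 2000, 10000, 50000):
    print(f"{n:>6} tuples: {join_time(n):.4f} s")
```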