Title/Summary/Keyword: Database Workload

An Extension of the DBMax for Data Warehouse Performance Administration (데이터 웨어하우스 성능 관리를 위한 DBMax의 확장)

  • Kim, Eun-Ju; Young, Hwan-Seung; Lee, Sang-Won
    • The KIPS Transactions: Part D / v.10D no.3 / pp.407-416 / 2003
  • As the use of database systems grows dramatically and the amount of data pouring into them becomes massive, techniques for administering database performance effectively are increasingly important. Performance management matters even more in data warehouses because of their large data volumes and complex queries. The objectives and characteristics of data warehouses also differ from those of operational systems, so they need dedicated techniques for performance monitoring and tuning. In this paper we extend the functionality of DBMax, a performance administration tool for Oracle database systems, so that it can be applied to data warehouse systems. First, we analyze requirements based on the summary management and ETL functions that Oracle 9i provides for data warehouse performance improvement. We then design and implement an architecture for the extended DBMax functionality. Specifically, we support SQL tuning by providing details of the schema objects and statistics involved in summary management and ETL processes, and we add a new function that recommends useful materialized views based on the workload extracted from DBMax log files and analyzes how existing materialized views are used.
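
The abstract does not detail how the materialized-view advisor works; as a rough illustration of the general idea (mining a logged workload for frequently repeated aggregation patterns and proposing them as materialized-view candidates), the Python sketch below uses invented names, a toy SQL parser, and an arbitrary frequency threshold:

    import re
    from collections import Counter

    def aggregation_signature(sql):
        """Extract a (table, group-by columns) signature from an aggregate query."""
        m = re.search(r"FROM\s+(\w+).*?GROUP\s+BY\s+([\w,\s]+)", sql, re.I | re.S)
        if not m:
            return None
        table = m.group(1).lower()
        cols = tuple(sorted(c.strip().lower() for c in m.group(2).split(",")))
        return (table, cols)

    def advise_materialized_views(workload, min_count=10):
        """Propose an MV for each aggregation pattern seen at least min_count times."""
        counts = Counter(s for s in map(aggregation_signature, workload) if s)
        return [(t, cols, n) for (t, cols), n in counts.most_common() if n >= min_count]

    # Toy stand-in for a workload extracted from log files.
    workload = ["SELECT region, SUM(amount) FROM sales GROUP BY region"] * 12
    for table, cols, n in advise_materialized_views(workload):
        print(f"candidate MV on {table} grouped by {cols} (seen {n} times)")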

Preliminary Scheduling Based on Historical and Experience Data for Airport Project (초기 기획단계의 실적 및 경험자료 기반 공항사업 기준공기 산정체계)

  • Kang, Seunghee; Jung, Youngsoo; Kim, Sungrae; Lee, Ikhaeng; Lee, Changweon; Jeong, Jinhak
    • Korean Journal of Construction Engineering and Management / v.18 no.6 / pp.26-37 / 2017
  • Preliminary scheduling at the initial stage of the planning phase is usually performed with limited information and detail. Its reliability and accuracy therefore depend on the personal experience and skill of the schedule planners, and it demands enormous managerial effort (workload). Reusing historical data from similar projects is important for efficient preliminary scheduling, but understanding the structure of such data and applying it to a new project requires a great deal of experience and knowledge. In this context, this paper proposes a framework and methodology for automated preliminary schedule generation based on a historical database. The proposed framework automatically generates CPM schedules for airport projects in the early planning stage, enhancing reliability and reducing workload by drawing on structured knowledge and experience.
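
The paper's output is a CPM schedule; as background, a minimal critical-path computation (forward pass for earliest starts, backward pass for latest starts) over an invented activity network might look like this in Python:

    # Minimal CPM forward/backward pass; activities, durations (days), and
    # precedence links are invented for illustration.
    activities = {                     # name: (duration, predecessors)
        "design":    (30, []),
        "earthwork": (20, ["design"]),
        "runway":    (60, ["earthwork"]),
        "terminal":  (90, ["earthwork"]),
        "handover":  (10, ["runway", "terminal"]),
    }

    early = {}                         # earliest start; dict order is topological here
    for name, (dur, preds) in activities.items():
        early[name] = max((early[p] + activities[p][0] for p in preds), default=0)

    finish = max(early[n] + activities[n][0] for n in activities)

    late = {}                          # latest start without delaying the finish date
    for name in reversed(list(activities)):
        dur, _ = activities[name]
        succs = [s for s, (_, ps) in activities.items() if name in ps]
        late[name] = min((late[s] for s in succs), default=finish) - dur

    critical = [n for n in activities if early[n] == late[n]]
    print(f"duration: {finish} days, critical path: {critical}")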

An Analysis of Nursing Needs for Hospitalized Cancer Patients: Using Data Mining Techniques (데이터 마이닝을 이용한 입원 암 환자 간호 중증도 예측모델 구축)

  • Park, Sun-A
    • Asian Oncology Nursing / v.5 no.1 / pp.3-10 / 2005
  • Background: Nurses now account for one third of all hospital human resources, so efficient management of the nursing workforce is increasingly important. Although nursing workload analysis and patient severity classification should clearly be done first for efficient allocation of nursing staff, these processes have been conducted manually with ad hoc rules. Purpose: This study sought to build a model that predicts patient classification according to nursing needs, looking for an easier and faster classification method to support efficient management of the nursing workforce. Methods: Trained nurses assessed the nursing patient classification data of patients hospitalized at one of the largest cancer centers in Korea from January 1 to December 31, 2003. A prediction model was developed and nursing needs were analyzed with three data mining techniques (logistic regression, decision tree, and neural network), and the results were compared. Results: The data set comprised 165,073 records from a classification database of 2,228 patients. The main explanatory variables were: 1) logistic regression: age, month, and section; 2) decision tree: section, month, age, and tumor; 3) neural network: section, diagnosis, age, sex, metastasis, hospital days, and month. Among the three techniques, the neural network showed the best predictive power under ROC curve verification, with a prediction accuracy of 84.06%. Conclusion: A patient classification prediction model was developed and tested on real patient data. The result can be used for more accurate calculation of required nursing staff and more effective use of the workforce.
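
The study's data are not available, but the three-model comparison it describes can be sketched on synthetic data with scikit-learn (sample sizes, feature counts, and hyperparameters below are invented):

    # Compare logistic regression, decision tree, and neural network by ROC AUC,
    # mirroring the abstract's setup on synthetic stand-in data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=5000, n_features=7, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(max_depth=5),
        "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: ROC AUC = {auc:.3f}")   # the study found the NN best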

A group based management method of flash memory for enhancing wear-leveling (Wear-leveling 향상을 위한 플래시 메모리의 그룹단위 관리 방법)

  • Jang, Si-Woong; Kim, Young-Ju; Yu, Yun-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.2 / pp.315-320 / 2009
  • Since flash memory cannot be overwritten in place, updated data are written to a new area while the old data are invalidated and later erased by garbage collection. As flash memory technology advances, flash capacities are growing rapidly, so searching an entire large-capacity flash memory for the block to erase during garbage collection sharply increases CPU time. To solve this problem, we partition the flash memory into several groups and search for the block to erase only within the corresponding group. Under workloads with access locality, we improve wear-leveling within a group by allocating hot data to a hot group and cold data to a cold group, and improve wear-leveling across groups by periodically exchanging the hot and cold groups.
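
The abstract does not specify the victim-selection policy inside a group; the sketch below illustrates the structural point, that garbage collection scans only one group rather than the whole device, using invented sizes and a simple fewest-erases policy:

    # Group-based victim search: scan one group, not the whole flash device.
    # Group count, group size, and the fewest-erases policy are invented here.
    GROUPS, BLOCKS_PER_GROUP = 8, 512
    erase_counts = [[0] * BLOCKS_PER_GROUP for _ in range(GROUPS)]

    def pick_victim(group):
        """Search only `group` for the least-erased block, bounding scan cost."""
        blocks = erase_counts[group]
        victim = min(range(BLOCKS_PER_GROUP), key=blocks.__getitem__)
        blocks[victim] += 1            # erasing the victim bumps its count
        return victim

    hot_group, cold_group = 0, 1       # hot/cold roles, exchanged periodically

    def exchange_roles():
        """Swap the hot and cold groups so wear evens out across groups."""
        global hot_group, cold_group
        hot_group, cold_group = cold_group, hot_group

    print(pick_victim(hot_group))      # scans 512 blocks instead of 4096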

Relevance Verification of Staff Organizations using System Dynamics (시스템 다이내믹스 기법을 활용한 참모부 조직편성 적절성 검증)

  • Lee, Cheong-Su; Kim, Chang-Hoon
    • Journal of the Korea Society for Simulation / v.27 no.3 / pp.53-63 / 2018
  • Because the warfare environment will grow more complex and diverse in the future, designing appropriate structures and organizations for military units is not simple. This study therefore proposes a methodology that uses System Dynamics (SD) to verify the structure and organization of future army staffs by unit. The verification procedure consists of building a database (DB), designing a causal loop diagram, and running and analyzing a simulation. First, the DB, covering items such as individuals' workloads and task times, is compiled by observing a real staff group. Second, the causal loop diagram is derived from the flow of tasks and modeled. Third, the DB is fed into the model and simulated to analyze appropriateness. The SD model was built with the Powersim program. One weakness of the methodology is that results can vary with the observers who compile the DB and the perspectives of the analysts; to compensate, the study supplements the overall analysis with document research and surveys. The significance of this study is that it suggests a warfighting-experimentation methodology that analyzes the structure and organization of military units by quantifying suitability in a scientific way.
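
Powersim models are built graphically, so the paper's model cannot be reproduced here; a minimal text-based stock-and-flow simulation of the same flavor (task backlog as the stock, with invented inflow and staff-capacity parameters) would be:

    # Minimal stock-and-flow sketch of the SD modeling style described;
    # all parameters (staff size, rates, horizon) are invented.
    dt, horizon = 0.25, 30.0               # time step and run length, in days
    staff, per_person = 6, 4.0             # tasks each staff member clears per day
    inflow = 30.0                          # tasks arriving per day
    backlog = 0.0                          # the stock: unfinished tasks

    t = 0.0
    while t < horizon:
        outflow = min(backlog / dt, staff * per_person)   # capacity-limited flow
        backlog += (inflow - outflow) * dt                # Euler integration
        t += dt

    print(f"backlog after {horizon:.0f} days with {staff} staff: {backlog:.1f} tasks")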

A method for improving wear-leveling of flash file systems in workload of access locality (접근 지역성을 가지는 작업부하에서 플래시 파일시스템의 wear-leveling 향상 기법)

  • Jang, Si-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.1 / pp.108-114 / 2008
  • Since flash memory cannot be overwritten in place, updated data are written to a new area, and when data are updated frequently, garbage collection, performed by erasing blocks, must reclaim space. Because each block tolerates only a limited number of erase operations, all blocks should be written and erased evenly. However, when a workload with access locality is handled by the cost-benefit algorithm with hot/cold block separation, processing performance is high but wear-leveling is uneven. In this paper we propose the CB-MB (Cost Benefit between Multi Bank) algorithm, which allocates hot data to one bank and cold data to another and exchanges the roles of the hot and cold banks every period. CB-MB performs about 30% better than the cost-benefit algorithm with hot/cold block separation, and its wear-leveling, measured as the standard deviation of erase counts, is about one third of the latter's.
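
The cost-benefit policy referred to here is commonly scored as age x (1 - u) / 2u, where u is a block's valid-page ratio; the sketch below combines that score with CB-MB's periodic bank exchange (block fields, sizes, and the period are invented):

    # Cost-benefit victim score: age * (1 - u) / (2u), u = valid-page ratio.
    def cb_score(block, now):
        u = block["valid"] / block["pages"]
        if u == 0.0:
            return float("inf")              # nothing valid to copy: ideal victim
        return (now - block["modified"]) * (1.0 - u) / (2.0 * u)

    def pick_victim(bank, now):
        """Pick the block with the best benefit/cost ratio within one bank."""
        return max(bank, key=lambda b: cb_score(b, now))

    hot_bank = [{"valid": 10, "pages": 64, "modified": 5}]
    cold_bank = [{"valid": 60, "pages": 64, "modified": 1}]

    PERIOD = 10_000                          # writes between role exchanges

    def exchange_roles(hot, cold, writes_done):
        """CB-MB: swap hot/cold bank roles every PERIOD writes to even out wear."""
        return (cold, hot) if writes_done % PERIOD == 0 else (hot, cold)

    print(pick_victim(hot_bank, now=100))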

A Study on the Link Server Development Using B-Tree Structure in the Big Data Environment (빅데이터 환경에서의 B-tree 구조 기반 링크정보 관리서버의 개발)

  • Park, Sungbum; Hwang, Jong Sung; Lee, Sangwon
    • Journal of Internet Computing and Services / v.16 no.1 / pp.75-82 / 2015
  • Major corporations and portals operate a link server that connects a Content Management System (CMS) to the physical addresses of content in a database (DB) to support efficient content use in web-based environments. In particular, a link server automatically maps the physical address of content in the DB to the content URL shown in the web browser, and re-establishes the mapping whenever either side changes. In recent years the number of users of digital content on the web has increased sharply with the advent of the Big Data environment, and with it the number of link-validity checks that a CMS and link server must perform. In petabyte or even exabyte environments, the existing URL-based sequential method degrades validity-check performance, lowering the identification rate of dead links, and frequent link checks place a heavy workload on the DB. To resolve these problems, this study presents a link server that recognizes URL deletions and additions by analyzing B-tree-based Information Identifier counts per interval over a large set of URLs. The resulting dead-link check is faster and imposes a lower load than the existing method.
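
The abstract leaves the B-tree mechanics unspecified; the idea of flagging only the identifier intervals whose counts changed can be sketched with a sorted list standing in for the B-tree (the interval width and identifiers are invented):

    # Detect added/deleted URLs by comparing per-interval identifier counts,
    # so only changed intervals need a full re-check.
    import bisect

    INTERVAL = 1000                     # identifiers per interval (invented)

    def interval_counts(sorted_ids):
        """Count identifiers per interval by binary-searching interval bounds."""
        counts = {}
        lo, hi = sorted_ids[0] // INTERVAL, sorted_ids[-1] // INTERVAL
        for k in range(lo, hi + 1):
            left = bisect.bisect_left(sorted_ids, k * INTERVAL)
            right = bisect.bisect_left(sorted_ids, (k + 1) * INTERVAL)
            if right > left:
                counts[k] = right - left
        return counts

    def changed_intervals(old, new):
        """Only intervals whose counts differ need a full URL re-check."""
        return [k for k in old.keys() | new.keys() if old.get(k, 0) != new.get(k, 0)]

    before = interval_counts([1001, 1002, 2500, 2600])
    after = interval_counts([1001, 2500, 2600, 2700])   # one deleted, one added
    print(changed_intervals(before, after))             # -> [1, 2]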