• Title/Summary/Keyword: research data quality management


Proposal of Process Model for Research Data Quality Management (연구데이터 품질관리를 위한 프로세스 모델 제안)

  • Na-eun Han
    • Journal of the Korean Society for Information Management / v.40 no.1 / pp.51-71 / 2023
  • This study analyzed the government data quality management model, the big data quality management model, and the data lifecycle model for research data management, and identified the components common to these data quality management models. Such models are designed around either the data lifecycle or the PDCA (Plan-Do-Check-Act) cycle, depending on the characteristics of the target data under management, and they commonly include components for planning, collection and construction, operation and utilization, and preservation and disposal. On this basis, the study proposed a process model for research data quality management; in particular, the quality management to be performed across the series of processes from collection to service on a research data platform, with research data as the target data, was discussed in the stages of planning, construction and operation, and utilization. The study's significance lies in providing a knowledge base for methods of implementing research data quality management.
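
Not part of the indexed record: the sketch below is a hedged illustration of the kind of staged process the entry above describes, encoding hypothetical lifecycle stages (planning, construction and operation, utilization) as a per-dataset checklist. The stage names, check items, and function are assumptions for illustration, not the authors' specification.

```python
from enum import Enum

class Stage(Enum):
    PLANNING = "planning"
    CONSTRUCTION_AND_OPERATION = "construction and operation"
    UTILIZATION = "utilization"

# Hypothetical per-stage checks; the paper's actual checklist items are not listed here.
CHECKS = {
    Stage.PLANNING: ["quality plan defined", "metadata schema agreed"],
    Stage.CONSTRUCTION_AND_OPERATION: ["mandatory fields populated", "values within valid ranges"],
    Stage.UTILIZATION: ["service-level checks passed", "usage feedback reviewed"],
}

def run_quality_process(dataset_flags: dict) -> dict:
    """Walk a dataset through the staged checks and report pass/fail per stage."""
    report = {}
    for stage, checks in CHECKS.items():
        failed = [c for c in checks if not dataset_flags.get(c, False)]
        report[stage.value] = "pass" if not failed else f"failed: {failed}"
    return report

if __name__ == "__main__":
    flags = {"quality plan defined": True, "metadata schema agreed": True,
             "mandatory fields populated": True, "values within valid ranges": False}
    for stage, result in run_quality_process(flags).items():
        print(f"{stage}: {result}")
```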

A Data Quality Management Maturity Model

  • Ryu, Kyung-Seok; Park, Joo-Seok; Park, Jae-Hong
    • ETRI Journal / v.28 no.2 / pp.191-204 / 2006
  • Many previous studies of data quality have focused on the realization and evaluation of both data value quality and data service quality. These studies revealed that poor data value quality and poor data service quality were caused by poor data structure. In this study, we focus on metadata management, namely data structure quality, and introduce the data quality management maturity model as a preferred maturity model. We empirically show that data quality improves as data management matures.

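As a hedged companion to the maturity model described above, the snippet below shows one hypothetical way a maturity assessment could be scored: average the dimension ratings and map them to a level. The dimension names, level labels, and thresholds are assumptions, not the ETRI model itself.

```python
# Hypothetical maturity scoring: average 1-5 ratings per dimension and map to a level.
LEVELS = ["Initial", "Defined", "Managed", "Optimized"]  # assumed level names

def maturity_level(scores: dict) -> tuple:
    avg = sum(scores.values()) / len(scores)
    # Assumed thresholds; a real maturity model would define these per dimension.
    if avg < 2.0:
        level = LEVELS[0]
    elif avg < 3.0:
        level = LEVELS[1]
    elif avg < 4.0:
        level = LEVELS[2]
    else:
        level = LEVELS[3]
    return avg, level

ratings = {"metadata management": 3.5, "data structure quality": 2.5, "data standards": 4.0}
avg, level = maturity_level(ratings)
print(f"average score {avg:.2f} -> maturity level: {level}")
```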

Developing a Web-based System for Computing Pre-Harvest Residue Limits (PHRLs)

  • Chang, Han Sub; Bae, Hey Ree; Son, Young Bae; Song, In Ho; Lee, Cheol Ho; Choi, Nam Geun; Cho, Kyoung Kyu; Lee, Young Gu
    • Agribusiness and Information Management / v.3 no.1 / pp.11-22 / 2011
  • This study describes the development of a web-based system that collects all data generated in the research conducted to set pre-harvest residue limits (PHRLs) for agricultural product safety control. These data, including concentrations of pesticide residues, limit of detection, limit of quantitation, recoveries, weather charts, and growth rates, are incorporated into a database, a regression analysis of the data is performed using statistical techniques, and the PHRL for an agricultural product is automatically computed. The development and establishment of this system increased the efficiency and improved the reliability of the research in this area by standardizing the data and maintaining its accuracy without temporal or spatial limitations. The system permits automatic computation of the PHRL and a quick review of the goodness of fit of the regression model. By building and analyzing a database, it also allows data accumulated over the last 10 years to be utilized.

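The entry above mentions regression analysis over residue data to compute the PHRL. The sketch below is one plausible reading, assuming a first-order (log-linear) dissipation model fitted to residue concentration versus days before harvest and back-calculating the residue level that would decay to a maximum residue limit by harvest; the model form, MRL value, sampling day, and data are illustrative assumptions, not the system's documented algorithm.

```python
import numpy as np

# Hypothetical residue-trial data: sampling day (days before harvest) vs. residue (mg/kg).
days_before_harvest = np.array([14, 10, 7, 5, 3, 0], dtype=float)
residue = np.array([1.80, 1.20, 0.90, 0.70, 0.45, 0.30])

# Assumed first-order dissipation: ln(C) = a + k * t, fitted by ordinary least squares,
# where t is days before harvest and k > 0 means residues decline toward harvest.
k, a = np.polyfit(days_before_harvest, np.log(residue), 1)

mrl = 0.50          # hypothetical Maximum Residue Limit at harvest (mg/kg)
sampling_day = 10   # hypothetical inspection point, days before harvest

# PHRL-style value: the residue level at `sampling_day` that would decay to the MRL by harvest.
phrl = mrl * np.exp(k * sampling_day)
print(f"fitted dissipation slope k = {k:.3f}/day")
print(f"PHRL-like limit at {sampling_day} days before harvest: {phrl:.2f} mg/kg (MRL = {mrl} mg/kg)")
```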

Pattern Analysis of Nonconforming Farmers in Residual Pesticides using Exploratory Data Analysis and Association Rule Analysis (탐색적 자료 분석 및 연관규칙 분석을 활용한 잔류농약 부적합 농업인 유형 분석)

  • Kim, Sangung; Park, Eunsoo; Cho, Hyunjeong; Hong, Sunghie; Sohn, Byungchul; Hong, Jeehwa
    • Journal of Korean Society for Quality Management / v.49 no.1 / pp.81-95 / 2021
  • Purpose: The purpose of this study was to analyze the patterns of nonconforming farmers, one of the factors behind nonconformity in residual pesticides. Methods: Patterns of nonconforming farmers were analyzed by combining safety data with farmers' DB data, and exploratory data analysis and association rule analysis were used to extract factors related to nonconformity. Results: The exploratory data analysis found nine farmer-related factors influencing nonconformity in residual pesticides: sampling time, gender, age, cultivation region, farming career, agricultural start form, type of agriculture, cultivation area, and classification of agricultural products. The association rule analysis identified nonconformity rules over the past three years, and the pattern of nonconforming farmers differed depending on the cultivation period. Conclusion: Exploratory data analysis and association rule analysis will be useful tools for establishing a more efficient and economical safety management plan for agricultural products.
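
As a minimal sketch of the association-rule step described above, the code below computes support, confidence, and lift for rules predicting a "nonconforming" tag from made-up farmer-attribute transactions; the attributes, thresholds, and data are assumptions for illustration.

```python
from itertools import combinations

# Made-up transactions: attribute tags per sampled farmer, plus a "nonconforming" tag.
transactions = [
    {"age_60s", "leafy_vegetable", "career_lt_5yr", "nonconforming"},
    {"age_60s", "career_lt_5yr", "nonconforming"},
    {"leafy_vegetable"},
    {"age_60s", "leafy_vegetable", "career_lt_5yr", "nonconforming"},
    {"career_lt_5yr"},
    {"age_60s", "leafy_vegetable", "nonconforming"},
]

def support(itemset: set) -> float:
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions) - {"nonconforming"})

# Rules of the form {antecedent} -> {nonconforming}, for 1- and 2-item antecedents.
for size in (1, 2):
    for ante in combinations(items, size):
        a = set(ante)
        s_rule = support(a | {"nonconforming"})
        if support(a) == 0 or s_rule < 0.3:      # assumed minimum support threshold
            continue
        confidence = s_rule / support(a)
        lift = confidence / support({"nonconforming"})
        if confidence >= 0.7:                    # assumed minimum confidence threshold
            print(f"{sorted(a)} -> nonconforming  "
                  f"support={s_rule:.2f} confidence={confidence:.2f} lift={lift:.2f}")
```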

Data Technology: New Interdisciplinary Science & Technology (데이터 기술: 지식창조를 위한 새로운 융합과학기술)

  • Park, Sung-Hyun
    • Journal of Korean Society for Quality Management / v.38 no.3 / pp.294-312 / 2010
  • Data Technology (DT) is a new technology that deals with data collection, data analysis, information generation from data, knowledge generation from modelling, and future prediction. DT is a newly emerging interdisciplinary science and technology in the 21st-century knowledge society. Although the core of DT is applied statistics, it also encompasses management information systems (MIS), quality management, process system analysis, and related fields; it is therefore an interdisciplinary science and technology spanning statistics, management science, industrial engineering, computer science, and social science. This paper first gives the definition of DT and then presents the effects and basic properties of DT, the differences between IT and DT, the six-step process for applying DT, and a DT example. Finally, the relationship among DT, e-Statistics, and Data Mining is explained, and the direction of DT development is proposed.

Component Development and Importance Weight Analysis of Data Governance (Data Governance 구성요소 개발과 중요도 분석)

  • Jang, Kyoung-Ae; Kim, Woo-Je
    • Journal of the Korean Operations Research and Management Science Society / v.41 no.3 / pp.45-58 / 2016
  • Data are important in an organization because they are used in making decisions and obtaining insights. Furthermore, given the increasing importance of data in modern society, data governance is needed to strengthen an organization's competitiveness. However, data governance concepts have caused confusion because of the myriad of guidelines proposed by related institutions and researchers. In this study, we re-established the ambiguous concept of data governance and derived its top-level components by analyzing previous research. The study identified the components of data governance and quantitatively analyzed the relations among them using DEMATEL and context analysis techniques, which are often used to solve complex problems. Three higher components (data compliance management, data quality management, and data organization management) and 13 lower components were derived as data governance components. Furthermore, the importance analysis shows that data quality management, data compliance management, and data organization management are the top components of data governance, in order of priority. This study can be used as a basis for presenting standards or establishing concepts of data governance.
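
The DEMATEL technique mentioned above reduces to a small amount of linear algebra: normalize a direct-influence matrix, compute the total-relation matrix T = N(I − N)⁻¹, and read prominence (D + R) and relation (D − R) for each component. The 3×3 matrix below is fabricated purely to show the computation, not the study's survey data.

```python
import numpy as np

# Hypothetical direct-influence matrix among three governance components
# (data compliance, data quality, data organization management); 0-4 influence scale.
A = np.array([[0.0, 3.0, 2.0],
              [2.0, 0.0, 3.0],
              [1.0, 2.0, 0.0]])

# Normalize by the largest row/column sum (a common DEMATEL convention).
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
N = A / s

# Total-relation matrix: T = N (I - N)^-1
T = N @ np.linalg.inv(np.eye(len(A)) - N)

D = T.sum(axis=1)  # influence exerted (row sums)
R = T.sum(axis=0)  # influence received (column sums)
for name, d, r in zip(["compliance", "quality", "organization"], D, R):
    print(f"{name:>12}: prominence D+R = {d + r:.2f}, relation D-R = {d - r:+.2f}")
```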

A Study on the Suggestions for Standard Flow Conditions considering the Variation of Stream Flow and Water Quality for the Management of Total Maximum Daily Loads (하천 유량.수질변화 특성을 고려한 수질오염총량관리 기준유량 조건에 관한 연구)

  • Park, Jun Dae; Oh, Seung Young; Choi, Yun Ho
    • Journal of Korean Society on Water Environment / v.28 no.3 / pp.426-435 / 2012
  • Variation in stream flow is one of the most important factors influencing variation in water quality in a unit watershed. For the management of Total Maximum Daily Loads (TMDLs), the target water quality is established and the permissible load is allocated on the basis of the standard flow condition and its corresponding water quality. An improperly selected standard flow can cause problems in load allocation. This study reviewed the acquisition of water quality data and the self-variation and retainability of water quality under specific flow conditions, and it proposed the median and the adjusted average flow conditions, among general flow conditions, as alternative standard flow conditions. These alternatives are expected to make water quality data easier to acquire and to enhance the representativeness of water quality at the standard flow conditions.
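
As a hedged illustration of the alternative standard flow conditions proposed above, the snippet computes a median flow and a simple trimmed "adjusted" average from a made-up daily flow series; the trimming rule stands in for the paper's adjusted-average definition, which is not reproduced here.

```python
import numpy as np

# Made-up daily flow record for one unit watershed (m^3/s).
flows = np.array([1.2, 1.4, 1.1, 0.9, 5.8, 1.3, 1.0, 7.2, 1.5, 1.2, 0.8, 1.1])

median_flow = np.median(flows)

# "Adjusted average" is taken here to mean a mean computed after trimming the upper and
# lower 10% of flows so that flood peaks do not dominate; this is an assumed stand-in
# for the paper's adjusted average flow condition.
lo, hi = np.quantile(flows, [0.10, 0.90])
adjusted_average = flows[(flows >= lo) & (flows <= hi)].mean()

print(f"median flow: {median_flow:.2f} m^3/s, adjusted average flow: {adjusted_average:.2f} m^3/s")
```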

Quality Design Support System based on Data Mining Approach (데이터 마이닝 기반의 품질설계지원시스템)

  • 지원철
    • Journal of the Korean Operations Research and Management Science Society / v.28 no.3 / pp.31-47 / 2003
  • Quality design in practice depends heavily on the human designer's intuition and past experience because of a lack of formal knowledge about the relationship among 10 variables. This paper presents a data mining approach to developing a quality design support system that integrates Case-Based Reasoning (CBR) and Artificial Neural Networks (ANN) to effectively support all steps in the quality design process. CBR stores design cases in a systematic way and retrieves them quickly and accurately; ANN predicts the resulting quality attributes of the design alternatives generated by CBR's adaptation process. When the predicted attributes fail to meet the target values, quality design simulation starts to further adapt the alternatives to the customer's new orders. To implement the quality design simulation, this paper suggests (1) a data screening method based on the ξ-δ Ball to obtain robust ANN models from large production databases, (2) a procedure for quality design simulation using ANN, and (3) a model management system that helps users find the appropriate model in the ANN model base. The integration of CBR and ANN gives quality design engineers a way to produce consistent and reliable design solutions in remarkably reduced time.
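
A compact, hypothetical sketch of the CBR-plus-ANN loop outlined above: retrieve the nearest past design case, apply a naive adaptation step, and let a small neural network predict the resulting quality attribute. The scikit-learn estimators, toy data, adaptation rule, and target check are stand-ins for the paper's case base and models, not its actual implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Toy case base: design variables X and a resulting quality attribute y.
X = rng.uniform(0, 1, size=(200, 4))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 - 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)

retriever = NearestNeighbors(n_neighbors=1).fit(X)            # CBR-style case retrieval
predictor = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0).fit(X, y)            # ANN quality predictor

new_order = np.array([[0.7, 0.4, 0.2, 0.9]])
_, idx = retriever.kneighbors(new_order)
candidate = X[idx[0][0]].copy()                               # retrieved case as starting design
candidate[2] -= 0.05                                          # naive adaptation step (illustrative)

predicted_quality = predictor.predict(candidate.reshape(1, -1))[0]
target = 1.5                                                  # assumed target quality value
print(f"predicted quality {predicted_quality:.2f}; "
      f"{'meets' if predicted_quality >= target else 'fails'} target {target}")
```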

Feasibility to Expand Complex Wards for Efficient Hospital Management and Quality Improvement

  • CHOI, Eun-Mee; JUNG, Yong-Sik; KWON, Lee-Seung; KO, Sang-Kyun; LEE, Jae-Young; KIM, Myeong-Jong
    • The Journal of Industrial Distribution & Business / v.11 no.12 / pp.7-15 / 2020
  • Purpose: This study aims to explore the feasibility of expanding complex wards in order to provide efficient hospital management and high-quality medical services to the local residents served by Gangneung Medical Center (GMC). Research Design, Data and Methodology: Four research designs were used to achieve the research objectives: big data from social network services (SNS) were analyzed over three months; a questionnaire survey was conducted with 219 patients visiting GMC; a survey of 20 GMC employees was administered; and the feasibility of expanding the GMC ward was assessed through a focus group interview with 12 internal and external experts. The survey data were analyzed with data mining techniques, frequency analysis, and Importance-Performance Analysis (IPA), using the IBM SPSS statistical package for data processing. Results: The big data analysis showed that GMC's recognition on SNS is high; 95.9% of residents and 100.0% of employees indicated a need for the complex ward extension. In the expert analysis of GMC's future functions, specialized care (△3.3) and public medicine (△1.4) increased significantly. Conclusion: GMC's complex ward extension is an urgent and indispensable project for efficient hospital management and service quality.
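
Importance-Performance Analysis (IPA), one of the methods listed above, amounts to comparing each item's mean importance and mean performance against the grand means and labeling the resulting quadrant. The items and scores below are invented for illustration.

```python
import numpy as np

# Invented survey items with mean importance and mean performance scores (1-5 scale).
items = {"waiting time": (4.6, 3.1), "ward comfort": (4.2, 4.0),
         "parking": (3.2, 2.8), "signage": (3.0, 3.9)}

importance = np.array([v[0] for v in items.values()])
performance = np.array([v[1] for v in items.values()])
imp_mean, perf_mean = importance.mean(), performance.mean()

def quadrant(i: float, p: float) -> str:
    """Classic IPA quadrant labels relative to the grand means."""
    if i >= imp_mean and p < perf_mean:
        return "Concentrate here"
    if i >= imp_mean and p >= perf_mean:
        return "Keep up the good work"
    if i < imp_mean and p < perf_mean:
        return "Low priority"
    return "Possible overkill"

for name, (i, p) in items.items():
    print(f"{name}: importance {i}, performance {p} -> {quadrant(i, p)}")
```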

A study on the data quality management evaluation model (데이터 품질관리 평가 모델에 관한 연구)

  • Kim, Hyung-Sub
    • Journal of the Korea Convergence Society / v.11 no.7 / pp.217-222 / 2020
  • This study concerns a data quality management evaluation model. As information and communication technology advances and the importance of data storage and management grows, interest in data is increasing, particularly with the recent attention to the Fourth Industrial Revolution and artificial intelligence. Data are central to the Fourth Industrial Revolution and the era of artificial intelligence, and in the 21st century data are likely to play the role of a new crude oil, so managing their quality is very important. However, while such work is being carried out at the practical level, research at the academic level remains insufficient. This study therefore examined the factors affecting data quality management through an expert survey and suggested implications. The analysis found differences in the importance attached to data quality management factors.