• Title/Summary/Keyword: data quality management

Developing a Web-based System for Computing Pre-Harvest Residue Limits (PHRLs)

  • Chang, Han Sub; Bae, Hey Ree; Son, Young Bae; Song, In Ho; Lee, Cheol Ho; Choi, Nam Geun; Cho, Kyoung Kyu; Lee, Young Gu
    • Agribusiness and Information Management, v.3 no.1, pp.11-22, 2011
  • This study describes the development of a web-based system that collects all data generated in the research conducted to set pre-harvest residue limits (PHRLs) for agricultural product safety control. These data, including pesticide residue concentrations, limits of detection and quantitation, recoveries, weather charts, and growth rates, are incorporated into a database; a regression analysis of the data is then performed using statistical techniques, and the PHRL for an agricultural product is computed automatically. The development of this system increased the efficiency and improved the reliability of research in this area by standardizing the data and maintaining their accuracy without temporal or spatial limitations. The system permits automatic computation of the PHRL and a quick review of the goodness of fit of the regression model. By building and analyzing a database, it also allows data accumulated over the last 10 years to be utilized.

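As a rough illustration of the kind of computation such a system automates, the sketch below fits a first-order (log-linear) dissipation model to residue data and back-calculates a PHRL from a maximum residue limit (MRL). The model form, example values, and function names are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: log-linear residue dissipation fit and PHRL back-calculation.
import numpy as np

def fit_dissipation(days_before_harvest, residues_mg_kg):
    """Fit ln(residue) = intercept + k * t, where t is days before harvest."""
    t = np.asarray(days_before_harvest, dtype=float)
    y = np.log(np.asarray(residues_mg_kg, dtype=float))
    k, intercept = np.polyfit(t, y, 1)  # slope first, then intercept
    return intercept, k

def phrl(mrl_mg_kg, k, t_days):
    """Residue level t days before harvest that is expected to decline to the MRL by harvest."""
    return mrl_mg_kg * np.exp(k * t_days)

# Example residue data measured 0, 3, 7, and 14 days before harvest (mg/kg).
_, k = fit_dissipation([0, 3, 7, 14], [0.08, 0.12, 0.21, 0.45])
print(f"PHRL at 10 days before harvest: {phrl(0.5, k, 10):.2f} mg/kg")
```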

Investigation on the Scrum-based Standard Management for Efficient Data Quality Control of Small-sized Companies : A Case Study on Distribution Service of Company 'I' (중소기업의 효율적 데이터 품질관리를 위한 스크럼 기반 표준관리 방안 : 'I'사 물류서비스 적용 사례)

  • Kim, Tai-Yun; Kim, Nam-Gyu; Sohn, Yong-Lak
    • Journal of Information Technology Applications and Management, v.17 no.1, pp.83-105, 2010
  • An enterprise's competence in managing information is evaluated not by the amount of information it holds but by the quality of that information, such as response time, data consistency, and data correctness. Degradation of data quality is usually caused by inappropriate processes for managing the structure and values of stored data. According to a recent survey on the actual state of data quality management, correctness and consistency appeared to be the most problematic of the six data quality management criteria: correctness, consistency, availability, timeliness, accessibility, and security. Moreover, the problem was more serious for small and medium-sized companies than for large enterprises. In this paper, therefore, we propose a new data quality control methodology for small and medium-sized companies that can improve the correctness and consistency of data without excessive time and cost. So that the proposed methodology can be adopted in real applications immediately, we provide scripts for as-is analysis and automation tools for managing naming rules for vocabulary, terminology, and data codes. Additionally, we performed a case study on the distribution service of a small-sized company to assess the applicability of our tool and methodology.

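The naming-rule automation mentioned above can be pictured with a small sketch like the one below, which checks physical column names against a standard word dictionary. The dictionary entries and the underscore-joined naming convention are assumptions for illustration, not company 'I''s actual standard.

```python
# Hypothetical sketch: validating physical column names against a standard word dictionary.
STANDARD_WORDS = {     # logical word -> standard abbreviation
    "customer": "CUST",
    "order": "ORD",
    "date": "DT",
    "number": "NO",
    "amount": "AMT",
}
VALID_ABBREVIATIONS = set(STANDARD_WORDS.values())

def check_column_name(column_name):
    """Return the tokens in a physical column name that violate the word standard."""
    return [tok for tok in column_name.upper().split("_") if tok not in VALID_ABBREVIATIONS]

for name in ["CUST_NO", "ORD_DT", "CUSTMR_AMOUNT"]:
    bad = check_column_name(name)
    print(name, "OK" if not bad else f"non-standard tokens: {bad}")
```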

An Implementation of Total Data Quality Management Using an Information Structure Graph (정보 구조 그래프를 이용한 통합 데이터 품질 관리 방안 연구)

  • 이춘열
    • Journal of Information Technology Applications and Management, v.10 no.4, pp.103-118, 2003
  • This study presents a database quality evaluation framework. To build the framework, the study expands data quality management to cover data transformation processes as well as the data themselves. An information structure graph is applied to represent the data transformation processes. Because the information structure graph is based on a relational database schema, data transformation processes can be stored in a relational database. Integrating data transformation metadata with technical metadata in this way makes it easier to evaluate database quality and to trace the causes of quality problems.

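A minimal sketch of the underlying idea, keeping transformation metadata in relational tables so that quality problems can be traced back along the graph, is given below; the table and column names are illustrative assumptions, not the paper's schema.

```python
# Hypothetical sketch: data sets as nodes, transformation steps as edges, stored relationally.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dataset (id TEXT PRIMARY KEY);
CREATE TABLE transformation (
    source_id TEXT REFERENCES dataset(id),
    target_id TEXT REFERENCES dataset(id),
    rule      TEXT
);
""")
conn.executemany("INSERT INTO dataset VALUES (?)",
                 [("raw_orders",), ("clean_orders",), ("monthly_sales",)])
conn.executemany("INSERT INTO transformation VALUES (?, ?, ?)",
                 [("raw_orders", "clean_orders", "drop rows with NULL order_date"),
                  ("clean_orders", "monthly_sales", "aggregate amount by month")])

# Trace the upstream sources of a data set whose quality is in question.
for source_id, rule in conn.execute(
        "SELECT source_id, rule FROM transformation WHERE target_id = ?", ("monthly_sales",)):
    print("monthly_sales depends on", source_id, "via:", rule)
```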

An Efficient Cloud Service Quality Performance Management Method Using a Time Series Framework (시계열 프레임워크를 이용한 효율적인 클라우드서비스 품질·성능 관리 방법)

  • Jung, Hyun Chul; Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology, v.20 no.2, pp.121-125, 2021
  • A cloud service must always be available and must respond immediately to user requests. This study suggests a method for constructing a proactive and autonomous quality and performance management system that meets these characteristics of cloud services. To this end, we identify quantitative measurement factors for cloud service quality and performance management, define a structure for applying a time series framework to the quality and performance management of cloud service applications for proactive management, and then apply big data and artificial intelligence for autonomous management. The flow of data processing and the configuration and flow of the big data and artificial intelligence platforms were defined to combine these intelligent technologies. The effectiveness of the approach was confirmed by applying it to a cloud service quality and performance management system in a case study. Using the methodology presented in this study, a service management system that has so far been operated manually and reactively can be improved through this convergence of technologies. However, because the approach requires collecting and processing various types of data, it has the limitation that data standardization must first be addressed in each technology and industry.
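
As a toy example of the proactive, time-series-based checks the study describes, the sketch below flags a cloud service response-time sample that deviates from its rolling mean; the metric, window size, and threshold are assumptions for illustration, not the paper's framework.

```python
# Hypothetical sketch: rolling-statistics anomaly flag on a service response-time series.
import pandas as pd

def flag_anomalies(response_ms, window=12, sigmas=3.0):
    """Flag points deviating more than `sigmas` standard deviations from the rolling mean."""
    mean = response_ms.rolling(window, min_periods=window).mean()
    std = response_ms.rolling(window, min_periods=window).std()
    return (response_ms - mean).abs() > sigmas * std

ts = pd.Series([100, 102, 98, 101, 99, 103, 100, 97, 102, 99, 101, 100, 250, 101],
               index=pd.date_range("2021-06-01", periods=14, freq="5min"))
print(ts[flag_anomalies(ts)])  # prints the 250 ms outlier
```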

Impacts of Uncertainty of Water Quality Data on Water Quality Management (수질자료의 불확실성이 수질관리에 미치는 영향)

  • Kim, Geonha
    • Journal of Korean Society on Water Environment, v.22 no.3, pp.427-430, 2006
  • Uncertainty is one of the key issues in water quality management. It arises at every stage of water quality management, including monitoring, modeling, and regulation enforcement. To reduce the uncertainties of water quality monitoring, standardized monitoring manuals should be developed and implemented. In addition, long-term monitoring is essential for acquiring the reliable water quality data that sound water quality management requires. For water quality management at the watershed scale, the fate of pollutants, including their generation, transport, and impact, should be considered while regarding each stage of water quality management as a unit process. The uncertainties of each stage should be treated properly so that errors do not propagate to the next stage of management, which is essential for successful water quality conservation.
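
The point about error propagation can be illustrated numerically: if the monitoring, modeling, and load-allocation stages each carry an independent relative uncertainty, the combined relative uncertainty grows roughly in quadrature. The stage names and figures below are assumptions for illustration only, not values from the paper.

```python
# Hypothetical sketch: combining independent per-stage relative uncertainties in quadrature.
import math

stage_rel_uncertainty = {"monitoring": 0.10, "modeling": 0.20, "allocation": 0.15}
combined = math.sqrt(sum(u ** 2 for u in stage_rel_uncertainty.values()))
print(f"combined relative uncertainty: {combined:.1%}")  # about 26.9%
```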

A Study on the Influence Factors in Data Quality of Public Organizations (공공기관의 데이터 품질에 영향을 미치는 요인에 관한 연구)

  • Jung, Seung Ho; Jeong, Duke Hoon
    • KIPS Transactions on Software and Data Engineering, v.2 no.4, pp.251-266, 2013
  • With the progress of informatization, demand to utilize the data held by administrative and public organizations has increased. Nevertheless, most agencies cannot actively participate in sharing and opening their data to the public because of data quality problems. The purpose of this study is to verify the relationships among data quality, the managerial and organizational success factors for data quality management derived from previous studies, and the organization's acceptance of data quality management. The results show that organizational factors, namely an organization's encouragement and support of data quality management, affect data quality through the acceptance of data quality management, whereas managerial factors had no significant effect on that acceptance. The significance of this study is that, for public organizations in the early stages of data quality management, it suggests that organization-wide consensus matters more than a purely managerial approach, and it derives the factors that affect data quality through the acceptance of quality management.

Data Standardization Method for Quality Management of Cloud Computing Services using Artificial Intelligence (인공지능을 활용한 클라우드 컴퓨팅 서비스의 품질 관리를 위한 데이터 정형화 방법)

  • Jung, Hyun Chul; Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology, v.21 no.2, pp.133-137, 2022
  • In the smart industry, where data plays an important role, cloud computing is used in complex and advanced ways as a convergence technology because its strengths fit these environments well. Accordingly, in order to use artificial intelligence rather than human operators for the quality management of cloud computing services, a consistent method for standardizing the data collected from the many nodes in various areas is required. This study therefore analyzed technologies and cases of incorporating artificial intelligence into specific services through previous studies, proposed a plan for using artificial intelligence to comprehensively standardize data in the quality management of cloud computing services, and verified it through case studies. The data standardization method presented here can also feed an artificial intelligence learning model that analyzes the standardized data and predicts the quality risks that are likely to occur. A limitation is that separate policy development for service quality management still needs to be supplemented.
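
A minimal sketch of what such data standardization might look like is shown below: raw, node-specific measurements are mapped into one standard record format and unit so a single model can consume them. The field names, units, and target schema are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: mapping heterogeneous node metrics into one standard record.
from dataclasses import dataclass

@dataclass
class QualityRecord:
    node: str
    metric: str      # standardized metric name
    value: float     # standardized unit (ms for latency, ratio for availability)
    timestamp: str   # ISO-8601

def standardize(raw):
    """Map one raw, node-specific measurement into the standard record."""
    metric_map = {"resp_time_s": ("latency_ms", 1000.0),
                  "latency_ms": ("latency_ms", 1.0),
                  "uptime_pct": ("availability", 0.01)}
    metric, factor = metric_map[raw["name"]]
    return QualityRecord(node=raw["node"], metric=metric,
                         value=raw["value"] * factor, timestamp=raw["ts"])

print(standardize({"node": "ap-1", "name": "resp_time_s", "value": 0.142, "ts": "2022-05-01T00:00:00Z"}))
print(standardize({"node": "eu-2", "name": "uptime_pct", "value": 99.95, "ts": "2022-05-01T00:00:00Z"}))
```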

A study on the data quality management evaluation model (데이터 품질관리 평가 모델에 관한 연구)

  • Kim, Hyung-Sub
    • Journal of the Korea Convergence Society, v.11 no.7, pp.217-222, 2020
  • This study concerns a data quality management evaluation model. As information and communication technology advances and the importance of data storage and management grows, attention to data is increasing. In particular, interest in the Fourth Industrial Revolution and artificial intelligence has grown recently, and data is central to both. In the 21st century, data is likely to play the role of a new crude oil, so managing its quality is very important. However, while such work is being carried out at a practical level, research at the academic level remains insufficient. This study therefore surveyed experts on the factors affecting data quality management and suggests implications. The analysis found differences in the importance attached to the data quality management factors.

Data Technology: New Interdisciplinary Science & Technology (데이터 기술: 지식창조를 위한 새로운 융합과학기술)

  • Park, Sung-Hyun
    • Journal of Korean Society for Quality Management, v.38 no.3, pp.294-312, 2010
  • Data Technology (DT) is a new technology that deals with data collection, data analysis, information generation from data, knowledge generation from modelling, and future prediction. DT is a newly emerged interdisciplinary science and technology in the 21st-century knowledge society. Although the main body of DT is applied statistics, it also encompasses management information systems (MIS), quality management, process system analysis, and so on. It is therefore an interdisciplinary science and technology spanning statistics, management science, industrial engineering, computer science, and social science. In this paper, the definition of DT is given first, and then the effects and basic properties of DT, the differences between IT and DT, a six-step process for applying DT, and a DT example are presented. Finally, the relationship among DT, e-Statistics, and data mining is explained, and a direction for the development of DT is proposed.

A Study on the use of Automotive Testing Data for Updating Quality Assurance Models (새로운 품질보증(品質保證)을 위한 자동검사(自動檢査)데이터의 활용(活用)에 관(關)한 연구(硏究))

  • Jo, Jae-Ip
    • Journal of Korean Society for Quality Management, v.11 no.2, pp.25-31, 1983
  • Arrangements for effective product assessment and audit have often not been completely satisfactory. The underlying reasons are: (a) the lack of early evidence of new unit quality; (b) the collection and processing of data; (c) ineffective data analysis techniques; and (d) the variability of the information on which decision making is based. Because of the nature of the product, the essential outputs from an effective QA organization would be: (a) confirmation of new unit quality; (b) detection of failures that are either epidemic or slowly degrading; (c) identification of failure cases; and (d) provision of management information at the right time to effect the necessary corrective action. The heart of an effective QA scheme is the acquisition and processing of data. With the advent of data processing, quality monitoring becomes feasible in an automotive testing environment. This paper shows how the method enables automotive testing data to be used for the cost benefit of QA management.

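To make the two failure patterns concrete, the sketch below applies two simple checks to batches of automated test results: a p-chart-style limit for an "epidemic" jump in failure rate and a trend-line slope for slow degradation. The limits and batch data are assumptions for illustration, not the 1983 paper's method.

```python
# Hypothetical sketch: detecting epidemic and slowly degrading failures from test batches.
import numpy as np

def epidemic_alert(failures, tested, baseline_rate):
    """3-sigma p-chart style check of the latest batch against a baseline failure rate."""
    p = failures[-1] / tested[-1]
    sigma = np.sqrt(baseline_rate * (1 - baseline_rate) / tested[-1])
    return p > baseline_rate + 3 * sigma

def degradation_alert(batch_failure_rates, slope_limit=0.001):
    """Flag a slowly rising failure rate via the slope of a least-squares trend line."""
    x = np.arange(len(batch_failure_rates))
    slope, _ = np.polyfit(x, batch_failure_rates, 1)
    return slope > slope_limit

failures = [3, 4, 2, 5, 4, 12]
tested = [200, 210, 195, 205, 200, 198]
rates = [f / n for f, n in zip(failures, tested)]
print("epidemic failure:", epidemic_alert(failures, tested, baseline_rate=0.02))
print("slow degradation:", degradation_alert(rates))
```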