• Title/Abstract/Keywords: data quality

Search results: 20,816 items (processing time: 0.039 seconds)

위성항법 지상국 감시제어시스템 품질 감시 기법 분석 (Quality Monitoring Method Analysis for GNSS Ground Station Monitoring and Control Subsystem)

  • 정성균;이상욱
    • 한국항공운항학회지 / Vol. 18, No. 1 / pp. 11-18 / 2010
  • The GNSS (Global Navigation Satellite System) ground station performs GNSS signal acquisition and processing, generates error-correction information, and distributes it to GNSS users. The ground station consists of a sensor station containing the receiver and meteorological sensors, a monitoring and control subsystem that monitors and controls the sensor station, a control center that generates the error-correction information, and an uplink station that transmits the correction information to the navigation satellites. The monitoring and control subsystem acquires and processes navigation data from the sensor station and transmits the processed data to the GNSS control center. It consists of a data acquisition module, a data formatting and archiving module, a data error-correction module, a navigation determination module, an independent quality monitoring module, and a system maintenance and management module. The independent quality monitoring module inspects the navigation signal, data, and measurements. This paper introduces independent quality monitoring and performs an analysis using measurement data.
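
An independent quality check on receiver measurements, as performed by the monitoring module above, can be pictured with a minimal sketch. The pseudorange window and step threshold below are illustrative assumptions, not the ground station's actual limits:

```python
# Flag pseudoranges that fall outside a plausible window or jump
# implausibly fast between epochs (thresholds are hypothetical).
PR_MIN, PR_MAX = 19_000_000.0, 45_000_000.0   # metres, rough plausibility window
MAX_STEP = 10_000.0                            # metres per epoch

def check_pseudoranges(series):
    """Return (index, reason) pairs for suspect measurements."""
    flags = []
    prev = None
    for i, pr in enumerate(series):
        if not (PR_MIN <= pr <= PR_MAX):
            flags.append((i, "out of range"))
        elif prev is not None and abs(pr - prev) > MAX_STEP:
            flags.append((i, "step too large"))
        prev = pr
    return flags

print(check_pseudoranges([22_000_000.0, 22_003_000.0, 22_900_000.0, 1_000.0]))
# → [(2, 'step too large'), (3, 'out of range')]
```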

DTV 화질향상을 위한 자막데이터 전송방법 (Caption Data Transmission Method for HDTV Picture Quality Improvement)

  • 한찬호
    • 한국멀티미디어학회논문지 / Vol. 20, No. 10 / pp. 1628-1636 / 2017
  • Data added for service convenience, such as closed captions, ancillary data, electronic program guides (EPG), and data broadcasting, degrades the video quality of high-definition content. This article proposes a method to transfer the closed-caption data of video content without video quality degradation. Inserting the caption data as a block image in a DTV essential hidden area causes no video quality degradation during video compression. The proposed method also has the advantage of synchronizing video, audio, and captions from the pre-inserted script without time delay.

유통 상품의 데이터 품질 관리를 위한 데이터 표준화에 대한 연구 (An Empirical Study on Quality Improvement by Data Standardization for Distributed Goods)

  • 송장섭;류성렬
    • 한국컴퓨터정보학회논문지 / Vol. 18, No. 9 / pp. 101-109 / 2013
  • Data quality management is very important. This study proposes a data standardization design for efficient quality management of enterprise data, builds it as a distributed-goods case, and verifies its effect. As the standardization design, a data standardization scheme and a data dictionary were designed. For the standardization scheme, the data were classified, attributed, and identified; for the data dictionary, a dictionary design process and word, term, domain, and code dictionaries were built, and a data standardization design method was proposed. Verifying the efficiency of the proposed standardization method quantitatively and qualitatively showed that data quality improved by 24% through data standardization, and the structural quality of the data with respect to consistency (the attribute design of the data dictionary) improved by 7%, demonstrating its validity.
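
The word/term/domain dictionary design described in the abstract can be sketched roughly as follows. The abbreviations, domains, and `validate_column` helper are hypothetical illustrations, not the paper's actual dictionary:

```python
# A toy word dictionary (abbreviation -> standard word) and a domain
# dictionary (standard word -> allowed Python type); both are invented.
STANDARD_WORDS = {"PRD": "product", "NM": "name", "QTY": "quantity", "AMT": "amount"}
DOMAINS = {"quantity": int, "amount": float, "name": str}

def validate_column(column_name, value):
    """Check that a column name is built only from standard words and
    that its value matches the domain of its last word."""
    parts = column_name.split("_")
    for p in parts:
        if p not in STANDARD_WORDS:
            return False, f"non-standard word: {p}"
    domain = DOMAINS.get(STANDARD_WORDS[parts[-1]])
    if domain is not None and not isinstance(value, domain):
        return False, f"domain violation for {column_name}"
    return True, "ok"

print(validate_column("PRD_QTY", 10))   # standard words, value in integer domain
print(validate_column("PRD_CNT", 10))   # CNT is not in the word dictionary
```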

단일편파 레이더자료 품질관리기술 특성 분석 (Analysis of Quality Control Technique Characteristics on Single Polarization Radar Data)

  • 박소라;김헌애;차주완;박종서;한혜영
    • 대기 / Vol. 24, No. 1 / pp. 77-87 / 2014
  • Radar reflectivity is significantly affected by ground clutter, beam blockage, anomalous propagation (AP), birds, insects, chaff, etc., and its quality is very important in quantitative precipitation estimation. The Weather Radar Center (WRC) of the Korea Meteorological Administration (KMA) therefore employed two quality control algorithms to improve the quality of radar reflectivity: 1) the Open Radar Product Generator (ORPG) algorithm and 2) a fuzzy quality control algorithm. In this study, the occurrence of AP echoes and the performance of both quality control algorithms are investigated. AP echoes occur frequently during the spring and fall seasons. While the ORPG QC algorithm has the merit of removing non-precipitation echoes such as AP echoes, it also removes weak rain echoes and snow echoes. In contrast, the fuzzy QC algorithm has the advantage of preserving snow echoes and weak rain echoes, but it removes only part of the contaminated echo area, including the AP echoes.
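
A fuzzy QC algorithm of the kind mentioned above typically combines membership functions over echo features into an aggregate precipitation score. This sketch, with invented feature names, breakpoints, and weights, only illustrates the idea and is not the WRC algorithm:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function returning a value in [0, 1]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def is_precipitation(texture, vertical_gradient, w_tex=0.5, w_vg=0.5):
    """Classify an echo: smooth texture and a small vertical gradient of
    reflectivity suggest precipitation; AP/clutter echoes tend to be rough
    and to decay sharply with height. All breakpoints are hypothetical."""
    m_tex = trapezoid(texture, -1, 0, 5, 15)                # dBZ texture
    m_vg = trapezoid(vertical_gradient, -30, -10, 0, 1)     # dBZ/km
    score = w_tex * m_tex + w_vg * m_vg
    return score >= 0.5, round(score, 3)

print(is_precipitation(2, -5))     # smooth, mild decay -> precipitation
print(is_precipitation(20, -25))   # rough, sharp decay -> non-precipitation
```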

탐진강 수질측정 지점 간 동질성 검정을 위한 비모수적 자료 분석 (A Non-parametric Analysis of the Tam-Jin River : Data Homogeneity between Monitoring Stations)

  • 김미아;이수웅;이재관;이정섭
    • 한국물환경학회지 / Vol. 21, No. 6 / pp. 651-658 / 2005
  • Non-parametric analysis is powerful for testing data, especially non-normal water quality data. Data from three monitoring stations on the Tam-Jin River were evaluated for normality using skewness, Q-Q plots, and Shapiro-Wilk tests. Water quality measurements including temperature, pH, DO, SS, BOD, COD, TN, and TP from January 1994 to December 2004 were used as the dataset. The Shapiro-Wilk normality test was carried out at the 5% significance level. Most water quality data, except DO at monitoring stations 1 and 2, were not normally distributed, indicating that non-parametric methods must be used for these water quality data. Homogeneity was therefore tested with the Mann-Whitney U test (p < 0.05) on the three possible pairs of stations. Differences between stations 1 and 2 and between stations 1 and 3 for pH, BOD, COD, TN, and TP were significant, but the difference between stations 2 and 3 was not; in addition, a narrow gap in the water quality ranges does not constitute a difference. TN and TP were the categories in which all three pairs of stations (1 and 2, 2 and 3, 1 and 3) showed differences in water quality. The results of this research suggest an appropriate analysis for homogeneity testing of water quality data and a reasonable management of pollutant sources.
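
The two-step procedure in this abstract (Shapiro-Wilk normality test at the 5% level, then the Mann-Whitney U test for homogeneity between stations) can be reproduced with SciPy. The lognormal samples below are synthetic stand-ins for the paper's station measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins for a pollutant concentration (e.g., BOD) at two
# monitoring stations; the Tam-Jin data themselves are not reproduced here.
station1 = rng.lognormal(mean=1.0, sigma=0.6, size=120)
station2 = rng.lognormal(mean=1.4, sigma=0.6, size=120)

# Step 1: Shapiro-Wilk normality test at the 5% significance level.
w, p_norm = stats.shapiro(station1)
print(f"Shapiro-Wilk p = {p_norm:.4g}")   # p < 0.05 -> not normal -> non-parametric

# Step 2: Mann-Whitney U test for homogeneity between the two stations.
u, p_homog = stats.mannwhitneyu(station1, station2, alternative="two-sided")
print(f"Mann-Whitney U p = {p_homog:.4g}")  # p < 0.05 -> the stations differ
```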

A Study on Quality Checking of National Scholar Content DB

  • Kim, Byung-Kyu;Choi, Seon-Hee;Kim, Jay-Hoon;You, Beom-Jong
    • International Journal of Contents / Vol. 6, No. 3 / pp. 1-4 / 2010
  • The national management and retrieval service of the national scholar content DB is very important. High-quality content can improve users' utilization and satisfaction and is a strong basis both for building citation indexes and for calculating journal impact factors; a system is therefore needed to check data quality effectively. We have closely studied and developed a web-based data quality checking system that supports everything from raw digital data to its automatic validation as well as hands-on validation, all of which is discussed in this paper.

A Prototyping Framework of the Documentation Retrieval System for Enhancing Software Development Quality

  • Chang, Wen-Kui;Wang, Tzu-Po
    • International Journal of Quality Innovation / Vol. 2, No. 2 / pp. 93-100 / 2001
  • This paper illustrates a prototyping framework for a documentation-standards retrieval system based on data mining, aimed at enhancing software development quality. We first present an approach for designing a retrieval algorithm based on data mining, applying its three basic technologies of machine learning, statistics, and database management to speed up searching and increase fitness. This approach derives from the observation that data mining can discover unsuspected relationships among elements in large databases, which suggests that it can elicit new knowledge about the design of a subject system and can be applied efficiently to large legacy systems. Finally, software development quality is improved at the same time as project managers retrieve the documentation standards.


일반화 선형모형을 통한 품질개선실험 자료분석 (Generalized Linear Models for the Analysis of Data from the Quality-Improvement Experiments)

  • 이영조;임용빈
    • 품질경영학회지 / Vol. 24, No. 2 / pp. 128-141 / 1996
  • The advent of the quality-improvement movement caused a great expansion in the use of statistically designed experiments in industry. The regression method is often used for the analysis of data from such experiments. However, the data for a quality characteristic often take the form of counts or ratios of counts, e.g., the fraction of defectives. For such data, analysis using generalized linear models is preferred to the simple regression model. In this paper we introduce the generalized linear model and show how it can be used for the analysis of non-normal data from quality-improvement experiments.
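
For count data such as defect counts, a Poisson GLM with a log link is the standard alternative to simple regression. The following minimal iteratively reweighted least squares (IRLS) fit in NumPy is a sketch of how such a model is estimated, not the paper's code:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least
    squares, the standard fitting algorithm for generalized linear models."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)              # inverse log link
        z = eta + (y - mu) / mu       # working response
        W = mu                        # IRLS weights for the Poisson family
        XtWX = X.T @ (W[:, None] * X)
        XtWz = X.T @ (W * z)
        beta = np.linalg.solve(XtWX, XtWz)
    return beta

# Simulated defect counts rising with a process factor x:
# y ~ Poisson(exp(0.5 + 0.8 * x))
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=500)
y = rng.poisson(np.exp(0.5 + 0.8 * x))
X = np.column_stack([np.ones_like(x), x])
print(poisson_glm_irls(X, y))   # estimates close to [0.5, 0.8]
```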


3차원 인체치수 조사 자료의 품질 개선을 위한 연구 (A Study for Quality Improvement of Three-dimensional Body Measurement Data)

  • 박선미;남윤자;박진우
    • 대한인간공학회지 / Vol. 28, No. 4 / pp. 117-124 / 2009
  • To inspect the quality of data collected in a large-scale body measurement project, a proper data editing process must be established. Three-dimensional body measurement may contain measuring errors caused by the measurer's proficiency or by changes in the subject's posture. Errors may also arise in the algorithms that convert the information obtained from the three-dimensional scanner into numerical values, and in the processing of data for numerous individuals. Such errors deteriorate the quality of the measured data and consequently reduce the quality of statistics computed from them. This study therefore suggests a way to improve the quality of three-dimensional body measurement data by proposing a working procedure for identifying and correcting data errors across the whole data-processing procedure (collecting, processing, and analyzing) of the 2004 Size Korea Three-dimensional Body Measurement Project. The study was carried out in three stages. First, we detected erroneous data by examining logical relations among variables under each edit rule. Second, we detected suspicious data through independent examination of individual variable values by sex and age. Finally, we examined scatter-plot matrices of many variables to consider the relationships among them; this simple graphical tool helps reveal whether suspicious data exist in the data set. As a result, we detected erroneous data in the raw data and found that the main errors come not from the three-dimensional body measurement system itself but from the subjects' original three-dimensional shape data. By correcting the erroneous data, we enhanced data quality.
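
The first stage (checking logical relations among variables under edit rules) can be sketched as follows. The variable names, units, and rules are hypothetical illustrations, not Size Korea's actual edit rules:

```python
# Each edit rule pairs a human-readable name with a predicate that must
# hold for a valid record (measurements in centimetres; rules invented).
EDIT_RULES = [
    ("stature > sitting_height", lambda r: r["stature"] > r["sitting_height"]),
    ("stature > crotch_height",  lambda r: r["stature"] > r["crotch_height"]),
    ("arm_length < stature/2 + 20",
     lambda r: r["arm_length"] < r["stature"] / 2 + 20),
]

def find_errors(record):
    """Return the names of all edit rules the record violates."""
    return [name for name, rule in EDIT_RULES if not rule(record)]

good = {"stature": 172.0, "sitting_height": 92.0,
        "crotch_height": 79.0, "arm_length": 58.0}
bad = {"stature": 92.0, "sitting_height": 172.0,   # values likely swapped
       "crotch_height": 79.0, "arm_length": 58.0}
print(find_errors(good))  # []
print(find_errors(bad))   # ['stature > sitting_height']
```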

마이크로 서비스 아키텍처를 지원하는 데이터 프로파일링 소프트웨어의 개발 (Development of Data Profiling Software Supporting a Microservice Architecture)

  • 장재영;김지훈;지서우
    • 한국인터넷방송통신학회논문지 / Vol. 21, No. 5 / pp. 127-134 / 2021
  • With the recent expansion of the big data industry, securing high-quality data has emerged as an important issue. To secure high-quality data, an accurate assessment of data quality must come first. Data quality can be evaluated through meta-information such as statistics about the data, and the function of automatically extracting such meta-information is called data profiling. Until now, data profiling software has generally been provided as a component of, or an add-on service to, existing data quality or visualization software, so it has not been suitable for direct use in the various environments where profiling is required. To resolve this, this paper presents the development of data profiling software that can serve various environments by applying a microservice architecture. The developed data profiler provides an easy-to-use service by accepting requests for, and returning, meta-information about data through a RESTful API. It also has the advantage that it is not tied to a specific environment and can be integrated smoothly with various big data platforms and data analysis tools.
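
Column-level profiling of the kind described can be sketched as a pure function whose result a RESTful endpoint would serialize as the JSON response body. The particular statistics chosen here are illustrative assumptions, not the paper's actual metadata set:

```python
import json
from collections import Counter

def profile_column(name, values):
    """Compute the kind of column-level metadata a data profiler returns:
    counts, nulls, distinct values, and min/max/mean for numeric data."""
    non_null = [v for v in values if v is not None]
    numeric = [v for v in non_null if isinstance(v, (int, float))]
    profile = {
        "column": name,
        "count": len(values),
        "null_count": len(values) - len(non_null),
        "distinct_count": len(set(non_null)),
        "most_common": Counter(non_null).most_common(1),
    }
    if numeric and len(numeric) == len(non_null):
        profile.update(min=min(numeric), max=max(numeric),
                       mean=sum(numeric) / len(numeric))
    return profile

# A RESTful wrapper would simply return this dict as JSON:
print(json.dumps(profile_column("price", [100, 250, None, 100, 80])))
```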