• Title/Summary/Keyword: Data Quality Framework

An Evaluation Study on Artificial Intelligence Data Validation Methods and Open-source Frameworks (인공지능 데이터 품질검증 기술 및 오픈소스 프레임워크 분석 연구)

  • Yun, Changhee;Shin, Hokyung;Choo, Seung-Yeon;Kim, Jaeil
    • Journal of Korea Multimedia Society / v.24 no.10 / pp.1403-1413 / 2021
  • In this paper, we investigate automated data validation techniques for artificial intelligence training data and review open-source frameworks, such as Google's TensorFlow Data Validation (TFDV), that support automated data validation in the AI model development process. We also introduce an experimental study using public data sets to demonstrate the effectiveness of the open-source data validation framework. In particular, we present experimental results of the data validation functions for schema testing and discuss the limitations of current open-source frameworks for semantic data. Lastly, we introduce the latest studies on semantic data validation using machine learning techniques.
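
For context, the schema-testing workflow that TFDV supports follows a statistics-infer-validate pipeline. A minimal sketch, assuming TFDV is installed; the CSV paths are hypothetical placeholders:

```python
# Schema testing with TensorFlow Data Validation (TFDV).
import tensorflow_data_validation as tfdv

# Compute descriptive statistics over the training split.
train_stats = tfdv.generate_statistics_from_csv(data_location="train.csv")

# Infer a schema (expected types, domains, presence) from those statistics.
schema = tfdv.infer_schema(statistics=train_stats)

# Validate a new split against the inferred schema and report anomalies
# (missing columns, type mismatches, out-of-domain values). As the paper
# notes, semantic errors beyond the schema are not caught this way.
eval_stats = tfdv.generate_statistics_from_csv(data_location="eval.csv")
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
```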

F_MixBERT: Sentiment Analysis Model using Focal Loss for Imbalanced E-commerce Reviews

  • Fengqian Pang;Xi Chen;Letong Li;Xin Xu;Zhiqiang Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.263-283 / 2024
  • Users' comments after online shopping are critical to product reputation and business improvement. These comments, sometimes known as e-commerce reviews, influence other customers' purchasing decisions. To cope with the large volume of e-commerce reviews, automatic analysis based on machine learning and deep learning has drawn increasing attention. A core task therein is sentiment analysis. However, e-commerce reviews exhibit the following characteristics: (1) inconsistency between comment content and the star rating; (2) a large amount of unlabeled data, i.e., comments without a star rating; and (3) data imbalance caused by sparse negative comments. This paper employs Bidirectional Encoder Representations from Transformers (BERT), one of the best natural language processing models, as the base model. In light of the above data characteristics, we propose the F_MixBERT framework to make more effective use of the inconsistent, low-quality, and unlabeled data and to resolve the problem of data imbalance. In the framework, the proposed MixBERT incorporates the MixMatch approach into BERT's high-dimensional vectors to train on the unlabeled and low-quality data with generated pseudo labels. Meanwhile, data imbalance is resolved by focal loss, which down-weights the contribution of large-scale and easily identifiable data to the total loss. Comparative experiments demonstrate that the proposed framework outperforms BERT and MixBERT for sentiment analysis of e-commerce comments.
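
For reference, focal loss down-weights easy, majority-class examples so that sparse negative reviews contribute relatively more to training. A minimal PyTorch sketch; the hyperparameter values are illustrative, not the paper's settings:

```python
# Focal loss (Lin et al., 2017): FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits: (N, C) class scores; targets: (N,) integer labels."""
    log_probs = F.log_softmax(logits, dim=-1)
    # Log-probability of the true class for each sample.
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # (1 - pt)^gamma shrinks the loss on easy, confidently classified samples.
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

# Example: 4 samples, 2 classes (e.g., positive/negative sentiment).
logits = torch.randn(4, 2)
targets = torch.tensor([0, 1, 1, 0])
print(focal_loss(logits, targets))
```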

Antecedents of Online Shopping Success: A Reexamination and Extension

  • Kang, Young Sik;Kim, Jeoungkun;Min, Jinyoung
    • Asia Pacific Journal of Information Systems / v.26 no.3 / pp.393-426 / 2016
  • The qualities of the technological artifact of online shopping websites and the overall support delivered by the service provider through the website are generally agreed to be crucial elements in creating customer satisfaction and loyalty. However, a lack of consensus exists on how those qualities are related to each other, what they consist of, and how they can be conceptualized. Based on relevant literature and using a servicescape framework as a theoretical lens, we divide online shopping website qualities into information and system qualities and argue that both factors affect service quality. We conceptualize each of the three types of quality as a second-order formative construct comprising its most salient quality dimensions: information quality consisting of reliability, understandability, currency, and relevance; system quality consisting of usability, availability, and responsiveness; and service quality consisting of efficiency and fulfillment. Our model of how information, system, and service qualities are related to one another and to customer satisfaction and loyalty is then tested empirically with a data set of 570 online shopping customers. Our integrated model reconciles the seemingly contradictory conceptualizations of previous researchers and provides an effective way to create customer satisfaction and loyalty.

A Study on the Dimension of Quality Metrics for Information Systems Development and Success : An Application of Information Processing Theory

  • An, Joon M.
    • The Journal of Information Technology and Database / v.3 no.2 / pp.97-118 / 1996
  • Information systems quality engineering is one of the most problematic areas in practice and research and needs cooperative efforts between practice and theory [Glass, 1996]. A model for evaluating the quality of the system development process and ensuing success is proposed based on the information processing theory of project unit design. A nomological net among a set of quality variables is identified from prior research in the areas of organization science, software engineering, and management information systems. More specifically, system development success is modelled as a function of project complexity, system development modelling environment, user participation, project unit structure, resource availability, and the degree of iteration in the development methodology. Based on the model developed from the information processing theory of project unit design in organization science, appropriate quality metrics are matched to each variable in the proposed model. In this way, a framework of relevant quality metrics for controlling systems development processes and ensuing success is proposed. The causal relationships among the constructs in the proposed model are offered as future empirical research for academicians and as managerial tools for quality managers. The framework and propositions help quality managers select more parsimonious quality metrics for controlling information systems development processes and project success in an integrated way. Also, this model can be utilized for evaluating software quality assurance programmes, which are developed and marketed by many vendors.

A Case Study on the Target Sampling Inspection for Improving Outgoing Quality (타겟 샘플링 검사를 통한 출하품질 향상에 관한 사례 연구)

  • Kim, Junse;Lee, Changki;Kim, Kyungnam;Kim, Changwoo;Song, Hyemi;Ahn, Seoungsu;Oh, Jaewon;Jo, Hyunsang;Han, Sangseop
    • Journal of Korean Society for Quality Management / v.49 no.3 / pp.421-431 / 2021
  • Purpose: To improve outgoing quality, this study presents a novel sampling framework based on predictive analytics. Methods: The proposed framework is composed of three steps. The first step is variable selection, in which knowledge-based and data-driven approaches are employed to select important variables. The second step is model learning; here we consider supervised classification methods, anomaly detection methods, and rule-based methods. The third step is applying the model, which includes all processes required to enable real-time prediction. Each prediction model classifies a product as a target sample or a random sample, and intensive quality inspections are then executed on the specified target samples. Results: The inspection data of three Samsung products (mobile, TV, refrigerator) are used to check functional defects in the products by utilizing the proposed method. The results demonstrate that target sampling is more effective and efficient than random sampling. Conclusion: The results of this paper show that the proposed method can efficiently detect products in the lot that are likely to exhibit user-perceived defects. Additionally, our study can guide practitioners on how to easily detect defective products using stratified sampling.
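
To illustrate the target-sampling idea, a classifier trained on historical inspection data can score each product in an outgoing lot and route the highest-risk items to intensive inspection. A minimal sketch with synthetic data; the features, model choice, and 5% cutoff are hypothetical, not the paper's:

```python
# Target sampling via a supervised defect-risk classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))          # historical inspection features
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

X_lot = rng.normal(size=(200, 5))             # current outgoing lot
defect_risk = model.predict_proba(X_lot)[:, 1]

# Route the riskiest 5% of the lot to target inspection; the remainder
# can still be covered by a small random sample.
n_target = max(1, int(0.05 * len(X_lot)))
target_idx = np.argsort(defect_risk)[-n_target:]
print("target samples:", sorted(target_idx.tolist()))
```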

Evidence-based approaches for establishing the 2015 Dietary Reference Intakes for Koreans

  • Shin, Sangah;Kim, Subeen;Joung, Hyojee
    • Nutrition Research and Practice / v.12 no.6 / pp.459-468 / 2018
  • BACKGROUND/OBJECTIVES: The Dietary Reference Intakes for Koreans (KDRIs), a set of reference intake values, have served as a basis for guiding a balanced diet that promotes health and prevents disease in the general Korean population. In the process of developing DRIs, systematic review has played an important role in helping the DRI committees make evidence-based and transparent decisions for updating the next DRIs. Thus, the 2015 KDRI steering committee applied the systematic review framework to the revision process of the KDRIs. The purpose of this article is to summarize the revision process for the 2015 KDRIs by focusing on the systematic review framework. MATERIALS/METHODS: The methods used to develop the systematic review framework for the 2015 KDRIs followed those of the Agency for Healthcare Research and Quality and the Tufts Evidence-based Practice Center. The framework for systematic review of the 2015 KDRIs comprised the following three steps: (1) development of an analytic framework and refinement of key questions and search terms; (2) literature search and data extraction; and (3) appraisal of the literature and summarization of the results. RESULTS: A total of 203,237 studies were retrieved through the above procedure, with 2,324 of these studies included in the analysis. General information, main results, comments of reviewers, and results of quality assessment were extracted and organized by study design. The average points of quality appraisals were 3.0 (range, 0-5) points for intervention, 6.1 (0-9) points for cohort, 6.0 (3-9) points for nested case-control, 5.4 (1-8) points for case-control, 14.6 (0-22) points for cross-sectional studies, and 7.0 (0-11) points for reviews. CONCLUSIONS: Systematic review proved a useful tool for establishing the 2015 KDRIs through an evidence-based approach. Collaborative efforts to improve the framework for systematic review should be continued for future KDRIs.

An Integrated Framework for Data Quality Management of Traffic Data Warehouses (고품질 데이터를 지원하는 교통데이터 웨어하우스 구축 기법)

  • Hwang, Jae-Il;Park, Seung-Yong;Nah, Yun-Mook
    • Journal of Korea Spatial Information System Society / v.10 no.4 / pp.89-95 / 2008
  • In this paper, we propose integrated techniques for managing data quality in traffic data warehousing environments. We describe how to collect traffic data from operational databases, such as FTMS and ARTIS, and construct traffic data warehouses from them, and we explain how to configure the traffic data warehouses efficiently. We also propose quality management techniques to provide high-quality traffic data for various analytical transactions. The proposed techniques can contribute to providing high-quality traffic data to traffic-related users and researchers, thus reducing data preprocessing and evaluation costs.
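
As a rough illustration of such quality management, a warehouse loading step might screen incoming traffic records for completeness and plausibility before they reach analytical users. A minimal pandas sketch; the column names and thresholds are hypothetical, not drawn from the paper:

```python
# Screening traffic records collected from operational systems
# (e.g., FTMS, ARTIS) before loading them into a warehouse.
import pandas as pd

records = pd.DataFrame({
    "link_id":   [101, 102, 103, 104],
    "speed_kmh": [45.0, -3.0, 250.0, 62.5],   # -3 and 250 are implausible
    "timestamp": pd.to_datetime(
        ["2008-05-01 08:00", "2008-05-01 08:05", None, "2008-05-01 08:15"]),
})

# Completeness: drop records with missing timestamps.
complete = records.dropna(subset=["timestamp"])

# Validity: keep only speeds within a plausible range for urban links.
valid = complete[(complete["speed_kmh"] >= 0) & (complete["speed_kmh"] <= 140)]

print(f"records passing quality checks: {len(valid)}/{len(records)}")
```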

Framework of Online Shopping Service based on M2M and IoT for Handheld Devices in Cloud Computing (클라우드 컴퓨팅에서 Handheld Devices 기반의 M2M 및 IoT 온라인 쇼핑 서비스 프레임워크)

  • Alsaffar, Aymen Abdullah;Aazam, Mohammad;Park, Jun-Young;Huh, Eui-Nam
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.179-182 / 2013
  • We develop a framework architecture for online shopping services based on M2M and IoT for handheld devices in cloud computing. The MapReduce model is used to simplify large-scale data processing when users search for products to purchase online, providing efficient and fast response times and thereby an enhanced Quality of Experience (QoE) as well as Quality of Service (QoS) when purchasing or searching for products online from big data.
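
As a sketch of the MapReduce pattern the framework relies on, search-log records can be mapped to key-value pairs and then reduced to per-product counts. A pure-Python simulation with hypothetical data; a real deployment would run on a cluster framework such as Hadoop:

```python
# MapReduce-style aggregation of product search logs.
from collections import defaultdict

search_logs = ["phone", "tv", "phone", "fridge", "tv", "phone"]  # hypothetical

# Map phase: emit a (key, 1) pair for every record.
mapped = [(product, 1) for product in search_logs]

# Shuffle/reduce phase: group by key and sum the counts.
counts = defaultdict(int)
for product, n in mapped:
    counts[product] += n

print(dict(counts))  # {'phone': 3, 'tv': 2, 'fridge': 1}
```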

ASSESSMENT OF PUBLIC PERCEIVED ROADWAY SMOOTHNESS

  • Jamie Miller;Don Chen;Neil Mastin
    • International Conference on Construction Engineering and Project Management / 2013.01a / pp.507-508 / 2013
  • The International Roughness Index (IRI) has been widely used by state DOTs to quantify pavement smoothness. When pavement condition falls below certain IRI thresholds, corresponding pavement maintenance treatments should be considered for application. Selection of appropriate IRI thresholds is essential to tactical allocation of limited resources to improve the conditions of states' roadway systems. This selection process is often challenging, however, because IRI thresholds are largely determined by Perceived Ride Quality (PRQ), and PRQ differs in each state. In this paper, a framework is proposed to address this problem. Passenger raters will be randomly selected from predetermined geographic locations, and their PRQ ratings collected. These perceived ride data, along with other collected data, will then be analyzed statistically to establish the relationship between measured IRI values and PRQ, and appropriate IRI thresholds will be determined. Once this framework is implemented, state DOTs could make informed maintenance decisions, which are expected to greatly enhance the public perception of pavement conditions in today's challenging economy.
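
To make the proposed analysis concrete, the IRI-PRQ relationship could be fitted by regression and inverted to locate a threshold. A minimal sketch with hypothetical ratings and an assumed acceptability cutoff of 3.0; the real study would use collected panel data and possibly a nonlinear fit:

```python
# Deriving an IRI threshold from a fitted IRI-PRQ relationship.
import numpy as np

iri = np.array([60, 80, 100, 120, 150, 180, 220], dtype=float)  # inches/mile
prq = np.array([4.6, 4.2, 3.8, 3.4, 2.9, 2.4, 1.8])             # 1 poor .. 5 excellent

# Simple linear fit of PRQ against measured IRI.
slope, intercept = np.polyfit(iri, prq, 1)

# Invert the fit: the IRI at which PRQ drops to the acceptability cutoff.
acceptable_prq = 3.0
iri_threshold = (acceptable_prq - intercept) / slope
print(f"IRI threshold for PRQ >= {acceptable_prq}: {iri_threshold:.0f} in/mi")
```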

Denoising Diffusion Null-space Model and Colorization based Image Compression

  • Indra Imanuel;Dae-Ki Kang;Suk-Ho Lee
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.22-30 / 2024
  • Image compression-decompression methods have become increasingly crucial in modern times, facilitating the transfer of high-quality images while minimizing file size and internet traffic. Historically, early image compression relied on rudimentary codecs, aiming to compress and decompress data with minimal loss of image quality. Recently, a novel compression framework leveraging colorization techniques has emerged. These methods, originally developed for infusing grayscale images with color, have found application in image compression, leading to colorization-based coding. Within this framework, the encoder plays a crucial role in automatically extracting representative pixels, referred to as color seeds, and transmitting them to the decoder. The decoder, utilizing colorization methods, reconstructs color information for the remaining pixels based on the transmitted data. In this paper, we propose a novel approach to image compression, wherein we decompose the compression task into grayscale image compression and colorization tasks. Unlike conventional colorization-based coding, our method focuses on the colorization process rather than the extraction of color seeds. Moreover, we employ the Denoising Diffusion Null-Space Model (DDNM) for colorization, ensuring high-quality color restoration and contributing to superior compression rates. Experimental results demonstrate that our method achieves higher-quality decompressed images compared to standard JPEG and JPEG2000 compression schemes, particularly in high compression rate scenarios.
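
For intuition, DDNM restores an image by pinning the component that the degradation operator determines and letting a diffusion prior fill the rest. A toy per-pixel sketch of this null-space decomposition for the RGB-to-grayscale case; the actual method operates on full images with a trained diffusion model, and the pixel values here are hypothetical:

```python
# Null-space decomposition behind DDNM: for a linear degradation A, a
# restored pixel x splits into a range-space part A^+ y fixed by the
# measurement y and a null-space part (I - A^+ A) x_hat that the prior fills.
import numpy as np

A = np.array([[0.299, 0.587, 0.114]])   # 1x3 RGB -> grayscale projection
A_pinv = np.linalg.pinv(A)              # 3x1 pseudo-inverse

rgb_true = np.array([0.8, 0.4, 0.2])    # original pixel (hypothetical)
y = A @ rgb_true                        # transmitted grayscale value

x_hat = np.array([0.5, 0.3, 0.9])       # a color guess from the generative prior

# Combine: consistent with y by construction, colorful thanks to x_hat.
x_restored = (A_pinv @ y) + (np.eye(3) - A_pinv @ A) @ x_hat

print("data consistency:", A @ x_restored, "==", y)
```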