• Title/Summary/Keyword: Quantitative Data

Search Result 5,120, Processing Time 0.043 seconds

Randomized Response Model with Discrete Quantitative Attribute by Three-Stage Cluster Sampling

  • Lee, Gi-Sung;Hong, Ki-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.4
    • /
    • pp.1067-1082
    • /
    • 2003
  • In this paper, we propose a randomized response model with a discrete quantitative attribute under three-stage cluster sampling, based on the Liu & Chow (1976) model, for obtaining discrete quantitative data when the population consists of clusters with a sensitive discrete quantitative attribute. We derive the minimum variance by calculating the optimum numbers of first-, second-, and third-stage units (fsu, ssu, tsu) under a given fixed cost, and we derive the minimum cost under a given accuracy.

  • PDF

A Quantitative Assessment Model for Data Governance (Data Governance 정량평가 모델 개발방법의 제안)

  • Jang, Kyoung-Ae;Kim, Woo-Je
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.42 no.1
    • /
    • pp.53-63
    • /
    • 2017
  • Quantitatively measuring data-control activities enterprise-wide is important for the sound management of data governance. However, research on data governance has been limited to concept definitions and components; studies on evaluation models are lacking. In this study, we developed a quantitative assessment model for data governance comprising an assessment area, evaluation indices, and an evaluation matrix, and we propose a method for developing such a model. For this purpose, we drew on previous studies and expert-opinion analyses such as the Delphi technique and the KJ method. This study contributes to the literature by developing a quantitative evaluation model for data governance at an early stage of research in this area. The results can serve as baseline data for objective evidence of performance in companies and agencies operating data governance.

Quantitative Analysis for Plasma Etch Modeling Using Optical Emission Spectroscopy: Prediction of Plasma Etch Responses

  • Jeong, Young-Seon;Hwang, Sangheum;Ko, Young-Don
    • Industrial Engineering and Management Systems
    • /
    • v.14 no.4
    • /
    • pp.392-400
    • /
    • 2015
  • Monitoring of plasma etch processes for fault detection is one of the hallmark procedures in semiconductor manufacturing. Optical emission spectroscopy (OES) has been considered a gold standard for modeling plasma etching processes for on-line diagnosis and monitoring. However, statistical quantitative methods for processing OES data are still lacking, and there is an urgent need for such methods to deal with high-dimensional OES data and improve the quality of etched wafers. Therefore, we propose a robust relevance vector machine (RRVM) for regression with statistical quantitative features to model etch rate and uniformity in plasma etch processes using OES data. To deal effectively with the complexity of OES data, we extract seven statistical features from the raw OES data, reducing its dimensionality. The experimental results demonstrate that the proposed approach is well suited to high-accuracy monitoring of plasma etch responses obtained from OES.
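The dimensionality-reduction step described above can be illustrated briefly. The abstract does not name the seven statistical features, so the seven summary statistics below (and the toy spectrum) are illustrative assumptions, not the authors' actual feature set; the sketch only shows how a high-dimensional OES spectrum collapses to a short feature vector:

```python
import numpy as np

def oes_features(spectrum):
    """Reduce one high-dimensional OES spectrum to a 7-element feature vector.

    The choice of statistics here is a hypothetical stand-in for the
    paper's seven features.
    """
    s = np.asarray(spectrum, dtype=float)
    return np.array([
        s.mean(), s.std(), s.min(), s.max(),
        np.median(s),
        s.max() - s.min(),  # range of emission intensity
        s.sum(),            # total (integrated) intensity
    ])

spectrum = np.sin(np.linspace(0, np.pi, 2048)) + 0.1  # toy 2048-channel spectrum
print(oes_features(spectrum).shape)  # (7,)
```

A regression model (the paper's RRVM, or any off-the-shelf regressor) would then be trained on these 7-dimensional vectors instead of the raw 2048-channel spectra.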

Quantitative Reliability Assessment for Safety Critical System Software

  • Chung, Dae-Won
    • Journal of Electrical Engineering and Technology
    • /
    • v.2 no.3
    • /
    • pp.386-390
    • /
    • 2007
  • In recent years, quantitative software reliability assessment has become an essential issue in the replacement of old analogue I&C systems with computer-based digital systems in nuclear power plants. Software reliability models have been applied successfully in many industrial settings, but they have the unfortunate drawback of requiring failure data from which a model can be formulated. Software developed for safety-critical applications is frequently unable to produce such data, for at least two reasons: first, the software is frequently one-of-a-kind, and second, it rarely fails. Safety-critical software is normally expected to pass every unit test, producing precious little failure data. The basic premise of the rare-events approach is that well-tested software does not fail under routine inputs, so failures must be triggered by unusual input data and computer states. The failure data found under reasonable test cases and testing times for these conditions should be used for the quantitative reliability assessment. In this paper, we present a quantitative reliability assessment methodology for safety-critical software in such rare-failure cases.

A Guiding System of Visualization for Quantitative Bigdata Based on User Intention (사용자 의도 기반 정량적 빅데이터 시각화 가이드라인 툴)

  • Byun, Jung Yun;Park, Young B.
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.261-266
    • /
    • 2016
  • The chart-suggestion methods provided by existing data visualization tools recommend charts without considering user intention. In some tools, visualization is carried out improperly and the results are unclear because the tools do not follow a fine-grained quantitative data classification policy. This paper provides a guideline that clearly classifies quantitative input data and effectively suggests charts based on user intention. The guideline is two-fold: the analysis guideline examines the quantitative data, and the suggestion guideline recommends charts based on the input data type and the user intention. Following this guideline, we excluded charts that disagree with the user intention and confirmed that the time users spend in the chart selection process decreased.
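The two-fold guideline (classify the data, then filter suggestions by intention) can be sketched as a small rule table. The data-type categories, intention names, and chart mapping below are illustrative assumptions, not the paper's actual classification policy:

```python
def classify(rows):
    """Analysis step: classify quantitative input by number of variables per row.

    The two categories here are hypothetical stand-ins for the paper's
    finer-grained classification.
    """
    return "univariate" if all(len(row) == 1 for row in rows) else "multivariate"

def suggest(data_type, intention):
    """Suggestion step: keep only chart types consistent with the user intention."""
    rules = {
        ("univariate", "distribution"): ["histogram", "box plot"],
        ("univariate", "trend"): ["line chart"],
        ("multivariate", "relationship"): ["scatter plot", "bubble chart"],
        ("multivariate", "comparison"): ["grouped bar chart"],
    }
    return rules.get((data_type, intention), [])

data = [(3.2,), (4.1,), (2.8,)]
print(suggest(classify(data), "distribution"))  # ['histogram', 'box plot']
```

Because charts outside the matching rule are never offered, the user's selection set shrinks, which is the mechanism behind the reduced selection time reported in the abstract.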

Comparative Study of Quantitative Data Binning Methods in Association Rule

  • Choi, Jae-Ho;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.19 no.3
    • /
    • pp.903-911
    • /
    • 2008
  • Association rule mining searches for interesting relationships among items in a given large database. Association rules are frequently used by retail stores to assist in marketing, advertising, floor placement, and inventory control. Much real-world data is quantitative, so partitioning techniques for quantitative data are needed. The partitioning process is referred to as binning. We introduce several binning methods: parameter mean binning, equi-width binning, equi-depth binning, and clustering-based binning. We apply these binning methods to several distribution types of quantitative data and present the best binning method for association rule discovery.

  • PDF
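Two of the binning methods compared in the abstract, equi-width and equi-depth, can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' code):

```python
import numpy as np

def equi_width_bins(x, k):
    """Split the range of x into k intervals of equal width."""
    edges = np.linspace(x.min(), x.max(), k + 1)
    # digitize against the interior edges gives a bin index in 0..k-1
    return np.clip(np.digitize(x, edges[1:-1]), 0, k - 1)

def equi_depth_bins(x, k):
    """Split x into k bins each holding roughly the same number of points."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    return np.clip(np.digitize(x, edges[1:-1]), 0, k - 1)

x = np.array([1.0, 2.0, 2.5, 3.0, 10.0, 11.0])
print(equi_width_bins(x, 2))  # [0 0 0 0 1 1]: the gap skews the counts
print(equi_depth_bins(x, 2))  # [0 0 0 1 1 1]: quantile edges balance them
```

The example shows why the best method depends on the distribution type: on skewed or gapped data, equi-width bins can leave some intervals nearly empty, while equi-depth bins keep item supports comparable for the association-rule step.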

Quantitative Linguistic Analysis on Literary Works

  • Choi, Kyung-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.4
    • /
    • pp.1057-1064
    • /
    • 2007
  • From the viewpoint of natural language processing, quantitative linguistic analysis is a linguistic study relying on statistical methods: a branch of mathematical linguistics that attempts to discover various linguistic characteristics by interpreting linguistic facts quantitatively. In this study, I introduce a quantitative linguistic analysis method for literary works that uses a computer and statistical methods. I also introduce the use of SynKDP, a synthesized Korean data processor, and show the relations between the distribution of linguistic unit elements used by the hero of the novel Sassinamjunggi and theme analysis of literary works.

  • PDF

Quantitative Application of TM Data in Shallow Geological Structure Reconstruction

  • Yang, Liu;Liqun, Zou;Mingxin, Liu
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1313-1315
    • /
    • 2003
  • This paper studies a quantitative analysis method using remote-sensing data for shallow geological structure reconstruction, taking TM data from western China as an example. A new method for computing the attitude of geological contacts from remote-sensing data is developed and assessed. We generate several geological profiles from remotely derived measurements to constrain the shallow geological structure reconstruction in three dimensions.

  • PDF
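Computing the attitude of a geological contact from remotely derived points is classically done with the three-point problem: fit a plane through three points on the contact and read the dip off the plane's normal. The sketch below illustrates that classical approach, not necessarily the paper's new method:

```python
import numpy as np

def dip_from_points(p1, p2, p3):
    """Return the dip angle, in degrees, of the plane through three points.

    Points are (x, y, z) with z vertical. The dip is the angle between the
    plane and horizontal, i.e. between the plane's normal and vertical.
    """
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    n = n / np.linalg.norm(n)
    return np.degrees(np.arccos(abs(n[2])))

# Horizontal plane dips at 0 degrees; the plane z = x dips at 45 degrees.
print(dip_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # 0.0
print(dip_from_points((0, 0, 0), (1, 0, 1), (0, 1, 0)))  # about 45.0
```

With elevations taken from a DEM at points where a contact is traced on TM imagery, the same geometry yields the attitudes used to build the cross-sections described in the abstract.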

A Case Study of Fashion Marketing Research using Multiple Methods (마케팅 리서치에서 다중측정방법에 관한 실증적 연구)

  • 박혜정;김혜정;이영주;임숙자
    • The Research Journal of the Costume Culture
    • /
    • v.10 no.6
    • /
    • pp.601-616
    • /
    • 2002
  • Qualitative research is a method widely used in marketing research. However, the method has seldom been used in fashion marketing research in Korea. The purpose of this study was to show that using both qualitative and quantitative research methods in the main stage is more useful than using the qualitative method only in the exploratory stage. Qualitative data were gathered through Focus Group Interviews (FGI) with 48 college students. Quantitative data were gathered by surveying college students; 487 questionnaires were used in the statistical analysis. The data were analyzed using content analysis, means, standard deviations, and t-tests. As a result, FGI, one of the tools of qualitative research, proved useful in revealing consumers' deep emotional needs as well as their purchase motives. FGI also revealed information that quantitative tools such as surveys could have missed. Therefore, it is best to use multiple methods, that is, the simultaneous use of quantitative and qualitative methods, to understand fast-changing consumers' needs and purchase motives.

  • PDF

Quantitative Text Mining for Social Science: Analysis of Immigrant in the Articles (사회과학을 위한 양적 텍스트 마이닝: 이주, 이민 키워드 논문 및 언론기사 분석)

  • Yi, Soo-Jeong;Choi, Doo-Young
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.5
    • /
    • pp.118-127
    • /
    • 2020
  • This paper introduces trends and methodological challenges in quantitative Korean text analysis, using case studies of academic and news-media articles on "migration" and "immigration" from 2017 to 2019. Quantitative text analysis is based on natural language processing (NLP) and has become an essential tool for social science. As a part of data science, it converts documents into structured data, performs hypothesis discovery and verification on that data, and visualizes it. Furthermore, we examine commonly applied social-scientific statistical models of quantitative text analysis using NLP with R programming and Quanteda.
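The core conversion step in such pipelines, turning documents into structured data, is the document-term matrix. The paper uses R and Quanteda; the Python sketch below shows the same idea on a toy corpus (the documents and tokens are invented for illustration):

```python
from collections import Counter

# Toy corpus standing in for the migration/immigration articles.
docs = {
    "paper_1": "migration policy and labor migration in korea",
    "news_1": "immigration policy debate on immigration reform",
}

def term_frequencies(text):
    """Tokenize on whitespace and count term occurrences."""
    return Counter(text.split())

# A document-term matrix is these counts aligned over a shared vocabulary.
dtm = {doc: term_frequencies(text) for doc, text in docs.items()}
vocab = sorted(set().union(*dtm.values()))
for doc, counts in dtm.items():
    print(doc, [counts[w] for w in vocab])
```

Once documents are rows of term counts like this, the statistical models the paper surveys (frequency comparison, topic models, and so on) can be applied to them as ordinary structured data.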