• Title/Summary/Keyword: Statistical data


Development and Application of Statistical Programs Based on Data and Artificial Intelligence Prediction Model to Improve Statistical Literacy of Elementary School Students (초등학생의 통계적 소양 신장을 위한 데이터와 인공지능 예측모델 기반의 통계프로그램 개발 및 적용)

  • Kim, Yunha;Chang, Hyewon
    • Communications of Mathematical Education / v.37 no.4 / pp.717-736 / 2023
  • The purpose of this study is to develop a statistical program using data and artificial intelligence prediction models and to apply it to one sixth-grade elementary school class to see whether it is effective in improving students' statistical literacy. Based on an analysis of the problems in today's elementary school statistics education, a 15-session program was developed to encourage elementary students to experience the entire process of statistical problem solving and to make sound predictions by incorporating data, the core element of the Fourth Industrial Revolution era, into AI education. The main features of this program are its recognition of the importance of data, a key element of artificial intelligence education, and its collection and analysis activities that take context into account using real-life data provided by public data platforms. In addition, because it consists of activities that predict the future from data using tools such as Entry and Easy Statistics and that build an artificial intelligence prediction model, the program focuses on developing communication skills, information processing capabilities, and critical thinking skills. Applying this program not only positively affected the statistical literacy of the elementary school students, but we also observed students' interest, critical inquiry, and mathematical communication throughout the entire process of statistical problem solving.
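The "predicting the future from data" idea at the heart of such a program can be sketched with about the simplest prediction model available, a least-squares trend line; the yearly figures below are hypothetical stand-ins for the kind of series a public data platform would provide.

```python
import numpy as np

# Hypothetical yearly indicator values, e.g. downloaded from a public data platform.
years = np.array([2018, 2019, 2020, 2021, 2022])
values = np.array([52.0, 55.5, 58.1, 61.9, 64.8])

# Fit a straight line y = slope * year + intercept by least squares,
# then extrapolate one year ahead -- the basic "prediction model" step.
slope, intercept = np.polyfit(years, values, deg=1)
pred_2023 = slope * 2023 + intercept
```

A classroom tool like Easy Statistics hides these steps behind a GUI, but the underlying computation is no more than this fit-and-extrapolate loop.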

A Study of Non-parametric Statistical Tests to Analyze Trend in Water Quality Data (수질자료의 추세분석을 위한 비모수적 통계검정에 관한 연구)

  • Lee, Sang-Hoon
    • Journal of Environmental Impact Assessment / v.4 no.2 / pp.93-103 / 1995
  • This study was carried out to suggest the best statistical test for analyzing trends in monthly water quality data. Traditional parametric tests such as the t-test and regression analysis assume that the underlying population has a normal distribution; regression analysis additionally assumes that residual errors are independent. Analyzing nine years of monthly COD data collected at Paldang on the Han River, the underlying population was found to be neither normal nor independent, so parametric tests are invalid for trend detection. Four kinds of nonparametric statistical tests (the run test, the Daniel test, the Mann-Kendall test, and time series residual analysis) were applied to analyze the trend in the COD data; the Daniel test and the Mann-Kendall test indicated an upward trend. The Daniel test was suggested as the best nonparametric test, being simple to compute and intuitive to interpret.
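The Mann-Kendall test mentioned in the abstract is easy to sketch; a minimal version (normal approximation, no correction for tied values) looks like this:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z).
    S counts concordant minus discordant pairs; S > 0 suggests an upward
    trend, and |Z| > 1.96 is significant at the 5% level under the
    normal approximation (no tie correction in this sketch)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A short monthly series with a clear upward drift (hypothetical data)
series = [1.0, 1.2, 1.1, 1.4, 1.5, 1.7, 1.6, 1.9, 2.0, 2.2]
s, z = mann_kendall(series)  # S = 41, Z ≈ 3.58 for this series
```

The appeal for water quality data is exactly what the abstract notes: nothing here assumes normality or independence of the magnitudes, only the ordering of observations.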


Optimization of Robust Design Model using Data Mining (데이터 마이닝을 이용한 로버스트 설계 모형의 최적화)

  • Jung, Hey-Jin;Koo, Bon-Cheol
    • Journal of Korean Society of Industrial and Systems Engineering / v.30 no.2 / pp.99-105 / 2007
  • With the automated manufacturing processes that followed the development of computer manufacturing technologies, products and quality characteristics produced on those processes are measured and recorded automatically. The large amount of data produced daily on the processes may not be efficiently analyzed by current statistical methodologies (i.e., statistical quality control and statistical process control methodologies) because of the dimensionality associated with many input and response variables. Although a number of statistical methods exist to handle this situation, there is room for improvement. To overcome this limitation, this research integrates data mining and the robust design approach. We efficiently find the significant input variables connected with the response variables of interest by using data mining techniques, and we find the optimum operating condition of the process by using response surface methodology (RSM) and the robust design approach.
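The two-step idea (a data-mining step to screen the many inputs, then a response-surface fit to optimize) can be sketched as follows; the screening rule (absolute correlation), the data, and the single-variable quadratic model are illustrative stand-ins, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process data: 5 candidate input variables, one response.
# Only input 1 actually drives the response, with a curved effect.
X = rng.normal(size=(200, 5))
y = 3.0 + 2.0 * X[:, 1] - 1.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Screening step (stand-in for the data-mining step): rank inputs by
# absolute correlation with the response and keep the strongest one.
corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
best = int(np.argmax(corrs))

# Response-surface step: fit a quadratic model in the screened input
# and locate its stationary (optimum) point analytically.
b2, b1, b0 = np.polyfit(X[:, best], y, deg=2)
x_opt = -b1 / (2.0 * b2)   # vertex of the fitted parabola
```

In a real robust-design setting the second step would also model the response variance, but the screen-then-fit structure is the same.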

Statistical Representation Methods of Ground Data (지반조사 데이터의 통계처리기법)

  • Lee, Kyu-Hwan;Yoon, Gil-Lim
    • Proceedings of the Korean Geotechnical Society Conference / 2008.10a / pp.85-110 / 2008
  • Ground investigation data to be used as a basis for geotechnical analysis and foundation design are usually troubled by large uncertainty, due to natural variability and the limited number of data. Statistical methods can be a rational tool for handling such uncertain ground data, in particular with a view to selecting characteristic values for estimating the ground design parameters used in design. The characteristic values of soil properties for use in geotechnical design have often been based not only on subjective judgment but also on the engineer's accumulated experience. This paper discusses some statistical methods that can handle such intrinsically uncertain ground data in a rational manner, together with a design case.
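One common statistical convention for a characteristic value is the estimated 5% fractile of the property under a normality assumption; the sketch below illustrates that rule on hypothetical strength data (the paper does not necessarily adopt this exact convention).

```python
import statistics

def characteristic_value_5pct(values):
    """Characteristic value as the estimated 5% fractile of a soil
    property: mean - 1.645 * s under a normal assumption, a common
    convention for a cautious lower estimate of a design parameter."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return m - 1.645 * s

# Hypothetical undrained shear strength measurements (kPa)
su = [42.0, 38.5, 45.2, 40.1, 39.7, 43.8, 41.5, 37.9]
su_k = characteristic_value_5pct(su)
```

With few borings the sample standard deviation itself is uncertain, which is precisely why the abstract stresses that statistics complements, rather than replaces, engineering judgment.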


Invited Paper: Multivariate Analysis for the Case When the Dimension Is Large Compared to the Sample Size

  • Fujikoshi, Yasunori
    • Journal of the Korean Statistical Society / v.33 no.1 / pp.1-24 / 2004
  • This paper is concerned with statistical methods for multivariate data when the number $p$ of variables is large compared to the sample size $n$. Such data appear typically in the analysis of DNA microarrays, curve data, financial data, etc. However, there is little statistical theory for high-dimensional data. On the other hand, there are some asymptotic results under the assumption that both $n$ and $p$ tend to $\infty$ in some ratio $p/n {\rightarrow} c$. These results suggest that the new asymptotics are more useful and insightful than the classical large-sample asymptotics. The main purpose of this paper is to review some asymptotic results for high-dimensional statistics, as well as for classical statistics, under a high-dimensional asymptotic framework.
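The $p/n \rightarrow c$ regime is easy to see in simulation: with an identity population covariance, classical intuition expects every sample eigenvalue near 1, but the largest sample eigenvalue instead approaches the Marchenko-Pastur edge $(1+\sqrt{c})^2$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Identity population covariance: every true eigenvalue equals 1.
n, p = 400, 200                 # p/n -> c = 0.5: the high-dimensional regime
X = rng.normal(size=(n, p))
S = X.T @ X / n                 # sample covariance matrix (p x p)
eigs = np.linalg.eigvalsh(S)

c = p / n
mp_edge = (1 + np.sqrt(c)) ** 2  # Marchenko-Pastur upper edge, ~2.91 here
largest = eigs.max()             # lands near mp_edge, nowhere near 1
```

This is the kind of discrepancy between classical and high-dimensional asymptotics the paper's review is about.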

Bioequivalence trial with two generic drugs in 2 × 3 crossover design with missing data

  • Park, Sang-Gue;Kim, Seunghyo;Choi, Ikjoon
    • Communications for Statistical Applications and Methods / v.27 no.6 / pp.641-647 / 2020
  • The 2 × 3 crossover design, a modified version of the 3 × 3 crossover design, is considered for comparing the bioavailability of two generic candidates with a reference drug. The 2 × 3 crossover design is more economical, owing to the decrease in the number of sequences, than conducting a 3 × 3 crossover trial or two separate 2 × 2 crossover trials. However, when using a higher-order crossover trial, the risk of drop-outs and withdrawals of subjects increases, so suitable statistical inference for missing data is needed. The bioequivalence model of a 2 × 3 crossover trial with missing data is defined, and a statistical procedure for assessing bioequivalence is proposed. An illustrative example of the 2 × 3 trial with missing data is also presented with discussion.
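Bioequivalence assessment itself typically reduces to checking that a 90% confidence interval for the mean log(test/reference) difference lies within [ln 0.8, ln 1.25]. The sketch below illustrates that check on hypothetical complete data, using a normal critical value for simplicity; it is not the paper's 2 × 3 missing-data procedure.

```python
import math
import statistics

def be_90ci(log_diffs):
    """90% CI for the mean within-subject log(test/reference) difference.
    Uses the normal critical value 1.645 instead of the t quantile --
    a simplification for illustration, not a regulatory procedure."""
    n = len(log_diffs)
    m = statistics.mean(log_diffs)
    se = statistics.stdev(log_diffs) / math.sqrt(n)
    return m - 1.645 * se, m + 1.645 * se

# Hypothetical log(AUC_test) - log(AUC_reference) differences per subject
diffs = [0.05, -0.02, 0.08, 0.01, -0.04, 0.03, 0.06, -0.01, 0.02, 0.04]
lo, hi = be_90ci(diffs)
# Bioequivalence is concluded when the whole CI sits inside [ln 0.8, ln 1.25]
equivalent = math.log(0.8) < lo and hi < math.log(1.25)
```

Missing data, the paper's focus, complicate the variance estimate above, since subjects no longer contribute complete within-subject contrasts.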

A Case Study of the Characteristics of Mathematically Gifted Elementary Students' Statistical Reasoning : Focus on the Recognition of Variability (초등수학영재들의 통계적 사고 특성 사례 분석: 변이성에 대한 인식을 중심으로)

  • Lee, Hyung-Sook;Lee, Kyeong-Hwa;Kim, Ji-Won
    • Journal of Educational Research in Mathematics / v.20 no.3 / pp.339-356 / 2010
  • It is important for children to develop statistical reasoning as they think through data. In particular, it is imperative to provide children with instructional situations in which they are encouraged to consider variability in data, because the ability to reason about variability is fundamental to the development of statistical reasoning. Many researchers argue that even high-performing mathematics students show low levels of statistical reasoning; interventions attending to pedagogical concerns about children's statistical reasoning are thus necessary. The purpose of this study was to investigate 15 gifted elementary students' various ways of understanding important statistical concepts, with particular attention given to three students' reasoning about data that emerged as they engaged in the process of generating and graphing data. The analysis revealed that, in recognizing variability in a context involving data, the mathematically gifted students did not differ from the general students of previous studies. The authors suggest that current statistics education may not help elementary students understand variability in the development of their statistical reasoning.


Reliability Analysis Using Parametric and Nonparametric Input Modeling Methods (모수적·비모수적 입력모델링 기법을 이용한 신뢰성 해석)

  • Kang, Young-Jin;Hong, Jimin;Lim, O-Kaung;Noh, Yoojeong
    • Journal of the Computational Structural Engineering Institute of Korea / v.30 no.1 / pp.87-94 / 2017
  • Reliability analysis (RA) and reliability-based design optimization (RBDO) require statistical modeling of input random variables, which is determined parametrically or nonparametrically from experimental data. For the parametric approach, goodness-of-fit (GOF) tests and model selection methods are widely used, and a sequential statistical modeling (SSM) method combining the merits of the two has recently been proposed. Kernel density estimation (KDE) is often used as a nonparametric method; it describes a distribution well when the number of data is small or the density is multimodal. Although accurate statistical models are needed to obtain accurate RA and RBDO results, accurate statistical modeling is difficult when the number of data is small. In this study, the accuracy of the two statistical modeling methods, SSM and KDE, was compared according to the number of data. Through numerical examples, the RA results using input models built by the two methods were compared, and an appropriate modeling method was proposed according to the number of data.
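The contrast the abstract draws, a parametric fit versus KDE when the density is multimodal, can be sketched with a hand-rolled Gaussian KDE (Silverman's rule-of-thumb bandwidth) on a hypothetical bimodal sample:

```python
import numpy as np

def gaussian_kde_pdf(data, x, h=None):
    """Nonparametric density estimate at points x from 1-D samples,
    using a Gaussian kernel with Silverman's rule-of-thumb bandwidth."""
    data = np.asarray(data, float)
    n = data.size
    if h is None:
        h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)   # Silverman's rule
    u = (np.asarray(x, float)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
# Bimodal sample: a single-normal fit misses the two modes, a KDE keeps them.
data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(2, 0.5, 150)])

xs = np.array([-2.0, 0.0, 2.0])
kde = gaussian_kde_pdf(data, xs)         # high at the modes, low in the valley

# A parametric normal fit to the same sample puts its single peak at the
# overall mean (~0), exactly where the true density has a valley.
mu, sigma = data.mean(), data.std(ddof=1)
normal_pdf = np.exp(-0.5 * ((xs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
```

For unimodal data with few samples the comparison can flip, which is the trade-off the paper's study quantifies against sample size.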

TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data

  • Lim, Jae Hyun;Lee, Soo Youn;Kim, Ju Han
    • Genomics & Informatics / v.15 no.1 / pp.51-53 / 2017
  • High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline and are difficult to integrate appropriately with one another because of their disparate data structures and processing methods. They also lack visualization methods for confirming the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization, allowing researchers to build customized analysis pipelines.
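TRAPR itself is an R package; purely to illustrate the kind of filtering/normalization/transformation steps such a pipeline chains together, here is a generic sketch in Python (the threshold and count matrix are made up, not TRAPR's defaults).

```python
import numpy as np

def filter_and_normalize(counts, min_count=10):
    """Generic RNA-Seq preprocessing sketch for a genes x samples count
    matrix: drop genes whose total count falls below min_count, then
    convert to log2 counts-per-million -- the filter / normalize /
    transform sequence a pipeline like TRAPR chains together."""
    counts = np.asarray(counts, float)
    kept = counts[counts.sum(axis=1) >= min_count]   # low-expression filter
    cpm = kept / kept.sum(axis=0) * 1e6              # per-sample library scaling
    return np.log2(cpm + 1.0), kept.shape[0]

# Tiny hypothetical count matrix: 4 genes, 2 samples
counts = [[100, 120], [0, 1], [50, 40], [3, 2]]
logcpm, n_genes = filter_and_normalize(counts)       # 2 genes survive the filter
```

Real pipelines add replicate-aware statistics and dispersion modeling on top; this only shows the data-shaping stage.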

Inappropriate Survey Design Analysis of the Korean National Health and Nutrition Examination Survey May Produce Biased Results

  • Kim, Yangho;Park, Sunmin;Kim, Nam-Soo;Lee, Byung-Kook
    • Journal of Preventive Medicine and Public Health / v.46 no.2 / pp.96-104 / 2013
  • Objectives: The inherent nature of the Korean National Health and Nutrition Examination Survey (KNHANES) design requires special analysis that incorporates the sample weights, stratification, and clustering not used in ordinary statistical procedures. Methods: This study investigated the proportion of research papers analyzing the KNHANES and cited in the PubMed online system from 2007 to 2012 that used an appropriate statistical methodology. We also compared differences in mean and regression estimates between ordinary statistical analyses without sampling weights and design-based analyses, using the KNHANES 2008 to 2010. Results: Of the 247 research articles cited in PubMed, only 19.8% used survey design analysis, compared with 80.2% that used ordinary statistical analysis, treating the KNHANES data as if they had been collected by simple random sampling. Means and standard errors differed between the ordinary and design-based analyses, and the standard errors in the design-based analyses tended to be larger. Conclusions: Ignoring a complex survey design can result in biased estimates and overstated significance levels. The sample weights, stratification, and clustering of the design must be incorporated into analyses to ensure appropriate estimates and standard errors of those estimates.
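The effect of ignoring sampling weights is easy to illustrate: an oversampled group pulls the unweighted mean away from the design-weighted one. The sketch below handles weights only, with made-up data; it deliberately omits the stratification and clustering that a real KNHANES analysis must also incorporate (e.g., via a dedicated survey package).

```python
import numpy as np

def weighted_mean_and_se(y, w):
    """Design-weighted mean with a Taylor-linearization-style SE sketch.
    Weights only -- no strata or clusters -- so this understates the
    design effect a full complex-survey analysis would capture."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    mean_w = np.sum(w * y) / np.sum(w)
    z = w * (y - mean_w) / np.sum(w)            # linearized contributions
    se = np.sqrt(len(y) / (len(y) - 1) * np.sum(z ** 2))
    return mean_w, se

# Hypothetical: the oversampled group (large weights) has lower values,
# so the unweighted mean overstates the population mean.
y = [10, 11, 12, 20, 21, 22]
w = [3, 3, 3, 1, 1, 1]
m, se = weighted_mean_and_se(y, w)   # weighted mean 13.5 vs unweighted 16.0
```

The gap between 13.5 and 16.0 is exactly the kind of bias the paper warns about when KNHANES data are treated as a simple random sample.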