• Title/Summary/Keyword: method: data analysis


Method of Processing the Outliers and Missing Values of Field Data to Improve RAM Analysis Accuracy (RAM 분석 정확도 향상을 위한 야전운용 데이터의 이상값과 결측값 처리 방안)

  • Kim, In Seok;Jung, Won
    • Journal of Applied Reliability
    • /
    • v.17 no.3
    • /
    • pp.264-271
    • /
    • 2017
  • Purpose: Field operation data contains missing values and outliers arising from various causes in the data collection process, so caution is required when utilizing RAM analysis results based on field operation data. The purpose of this study is to present a method that minimizes the RAM analysis error of field data and thereby improves accuracy. Methods: Statistical methods are presented for processing the outliers and missing values of field operating data, and after RAM analysis, the differences before and after applying the technique are discussed. Results: The estimated availability is 6.8 to 23.5% lower than before processing, indicating that the treatment of missing values and outliers greatly affects the RAM analysis result. Conclusion: RAM analysis of the OO weapon system was performed, and suggestions for improving RAM analysis were presented through comparison of the new and current methods. Data analysis without appropriate treatment of erroneous values may yield incorrect conclusions, leading to inappropriate decisions and actions.
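The abstract does not state which statistical rules the authors used; as a minimal sketch, assuming a conventional 1.5×IQR outlier screen followed by median imputation (both hypothetical choices, not the paper's method), the processing step might look like:

```python
import numpy as np

def clean_field_data(x):
    """Flag 1.5*IQR outliers as missing, then impute every missing
    value with the median of the remaining observations."""
    x = np.asarray(x, dtype=float)
    observed = x[~np.isnan(x)]
    q1, q3 = np.percentile(observed, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    x = np.where((x < lo) | (x > hi), np.nan, x)   # outliers become missing
    return np.where(np.isnan(x), np.nanmedian(x), x)
```

Failure-time data cleaned this way would then feed the usual MTBF/MTTR availability estimates, which explains why removing large outliers can lower the estimated availability.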

Cluster analysis by month for meteorological stations using a gridded data of numerical model with temperatures and precipitation (기온과 강수량의 수치모델 격자자료를 이용한 기상관측지점의 월별 군집화)

  • Kim, Hee-Kyung;Kim, Kwang-Sub;Lee, Jae-Won;Lee, Yung-Seop
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.5
    • /
    • pp.1133-1144
    • /
    • 2017
  • Cluster analysis with meteorological data makes it possible to segment regions by their meteorological characteristics. However, observed meteorological data are not well suited to cluster analysis because the stations that collect them are not uniformly located, so clustering of observed data cannot properly reflect the climate characteristics of South Korea. Clustering of 5 km × 5 km gridded data derived from a numerical model, on the other hand, reflects them evenly. In this study, we analyzed long-term gridded temperature and precipitation data using cluster analysis. Because climate characteristics differ by month, clustering was performed month by month. Since the result of K-means clustering is sensitive to initial values, we obtained initial values from Ward's method, a hierarchical clustering method. Based on the clustering of the gridded data, clusters of meteorological stations were determined. As a result, the clustering of meteorological stations in South Korea provides a spatio-temporal segmentation.
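The Ward-then-K-means initialization described above can be sketched as follows; the three-blob toy data set is a stand-in for the actual 5 km × 5 km gridded temperature/precipitation fields, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# toy stand-in for gridded (temperature, precipitation) features
grid = np.vstack([rng.normal((0.0, 0.0), 0.3, (50, 2)),
                  rng.normal((4.0, 1.0), 0.3, (50, 2)),
                  rng.normal((0.0, 5.0), 0.3, (50, 2))])
k = 3

# Step 1: Ward's hierarchical clustering supplies stable initial centers
ward_labels = fcluster(linkage(grid, method="ward"), t=k, criterion="maxclust")
init = np.vstack([grid[ward_labels == c].mean(axis=0) for c in range(1, k + 1)])

# Step 2: K-means refines the Ward centers (no random initialization)
centers, labels = kmeans2(grid, init, minit="matrix")
```

Because the initial centers are deterministic, repeated runs give the same partition, which is the point of seeding K-means from a hierarchical method.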

A Study on Prescription Similarity Analysis for Efficiency Improvement (처방 유사도 분석의 효율성 향상에 관한 연구)

  • Hwang, SuKyung;Woo, DongHyeon;Kim, KiWook;Lee, ByungWook
    • Journal of Korean Medical classics
    • /
    • v.35 no.4
    • /
    • pp.1-9
    • /
    • 2022
  • Objectives : This study aims to improve the efficiency of a prescription similarity analysis method that uses drug composition ratios. Methods : A controlled experiment compared result generation time, quantity of generated data, and accuracy of results between the previous and new analysis methods on 12,598 formulas and 61 prescription groups. Results : The control group took 346 seconds on average and generated 768,478 results, while the test group took 24 seconds and generated 241,739 results. The test group adopted a selective calculation method that used only the data overlapping between two formulas instead of enumerating all possible cases. This simplified the processing pipeline and reduced the quantity of data to be processed, making the system up to 14.47 times faster than the previous method while producing equal results. Conclusions : The efficiency of similarity analysis can be improved by reducing the data span and simplifying the calculation process.
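The selective-calculation idea, scoring only the drugs two formulas share, can be sketched with a cosine similarity over composition-ratio dictionaries; the similarity measure itself is an assumption here, since the abstract does not give the paper's exact formula:

```python
import math

def composition_similarity(a, b):
    """Cosine similarity of two drug-composition-ratio dictionaries,
    computed only over the drugs the two formulas share instead of
    over the full drug vocabulary."""
    shared = a.keys() & b.keys()          # overlapping drugs only
    if not shared:
        return 0.0                        # nothing in common, skip the math
    dot = sum(a[d] * b[d] for d in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm
```

Terms absent from one formula contribute zero to the dot product anyway, so iterating over the (usually small) key intersection gives the same result with far less work, which is the speed-up the abstract describes.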

A Big Data Analysis of Yumentingzheng: Weiwenqiju as an Example (어문청정 빅데이터 분석: 위문기거 일례)

  • Snowberger, Aaron Daniel;Lee, Choong Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.624-626
    • /
    • 2021
  • Yumentingzheng, which records the contents of the Qing emperors' discussions with their subjects, is an important document, comparable to the Annals of the Joseon Dynasty in Korea. This paper describes the method and steps for big data analysis of Yumentingzheng, which is written in the Manchu alphabet. Big data analysis of documents written in Manchu characters raises many problems that need to be solved in advance, and research on these must come first. In this paper, a method of big data analysis using the R language is proposed for the stage in which text written in Manchu characters has been transliterated into Latin characters, as a preliminary study for research to be conducted in the future. In the proposed method, the Apkai scheme was adopted for the transliteration of Yumentingzheng, and the results of big data analysis are presented using the text of Weiwenqiju.


Research of Late Adolescent Activity Using Big Data Analysis

  • Hye-Sun, Lee
    • International Journal of Advanced Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.361-368
    • /
    • 2022
  • This study seeks to determine research trends concerning late adolescents by utilizing big data, and in particular trends related to activity participation, treatment, and mediation, in order to provide academic implications. To this end, 1,000 academic papers were gathered and analyzed with the TF-IDF method and with topic modeling based on LDA (Latent Dirichlet Allocation) together with co-occurrence word network analysis. The results were presented through visualization. The study is significant in that it analyzed activity, treatment, and mediation factors of late adolescents and provides new analysis methods for establishing basic materials on their activity participation trends, treatment, and mediation.
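As a minimal sketch of the TF-IDF step only (the LDA topic-modeling and co-occurrence-network stages are omitted, and the tokenization is a simplification), a whitespace-tokenized weighting might look like:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights over whitespace tokens.

    TF is the relative term frequency within the document; IDF is
    log(N / document frequency), so a term occurring in every
    document gets weight 0.
    """
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(tokenized)
    return [{w: (cnt / len(toks)) * math.log(n / df[w])
             for w, cnt in Counter(toks).items()}
            for toks in tokenized]
```

Terms with high TF-IDF in a paper's abstract are candidate keywords; those keyword vectors are what downstream steps such as LDA or network analysis typically consume.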

Improving Data Input of ECO2-OD Program Utilizing BIM (BIM을 이용한 ECO2-OD 프로그램의 정보입력 개선)

  • Kang, Min-Su;Kim, Ka-Ram;Yu, Jung-Ho
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2013.05a
    • /
    • pp.205-207
    • /
    • 2013
  • While building energy consumption is increasing worldwide, research utilizing BIM technology to analyze building energy has been actively conducted. Nevertheless, the data for building energy analysis is still entered manually. This paper proposes an improved method of inputting the information required for building energy analysis with the ECO2-OD program. Although most of the required information in the general and architectural sectors can be obtained from BIM-based design software, the HVAC sector remains problematic. In the general and architectural sectors, therefore, the BIM information from the design software can be used to enter the data automatically and systematically. Future research should study the algorithm and the data-exchange method for feeding BIM data into the ECO2-OD input.


Results of Discriminant Analysis with Respect to Cluster Analyses Under Dimensional Reduction

  • Chae, Seong-San
    • Communications for Statistical Applications and Methods
    • /
    • v.9 no.2
    • /
    • pp.543-553
    • /
    • 2002
  • Principal component analysis is applied to reduce p dimensions to q dimensions (q ≤ p). Each partition of a collection of data points with p and q variables generated by the application of six hierarchical clustering methods is re-classified by discriminant analysis. From the application of discriminant analysis to each hierarchical clustering result, correct classification ratios are obtained. The results illustrate which method is more reasonable in exploratory data analysis.
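The reduce-then-reclassify loop can be sketched as below; nearest-centroid assignment stands in for full discriminant analysis (it equals a linear discriminant with identity covariance), which is a simplification of the paper's procedure:

```python
import numpy as np

def pca_scores(X, q):
    """Project centered data onto the first q principal axes (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:q].T

def correct_classification_ratio(Z, labels):
    """Re-classify each point to its nearest class centroid and report
    the fraction of points that keep their original cluster label."""
    classes = np.unique(labels)
    centroids = np.vstack([Z[labels == c].mean(axis=0) for c in classes])
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float(np.mean(classes[d2.argmin(axis=1)] == labels))
```

A clustering method whose partitions survive this re-classification with a high ratio is producing groups that are genuinely separated in the reduced space.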

New fuzzy method in choosing Ground Motion Prediction Equation (GMPE) in probabilistic seismic hazard analysis

  • Mahmoudi, Mostafa;Shayanfar, MohsenAli;Barkhordari, Mohammad Ali;Jahani, Ehsan
    • Earthquakes and Structures
    • /
    • v.10 no.2
    • /
    • pp.389-408
    • /
    • 2016
  • Recently, seismic hazard analysis has become a very significant issue. New systems and data have also become available that help scientists explain earthquake phenomena and physics. Scientists have begun to accept the role of uncertainty in earthquake issues and seismic hazard analysis. However, handling the existing uncertainty is still an important problem, and lack of data makes it difficult to quantify uncertainty precisely. Ground Motion Prediction Equation (GMPE) values are usually obtained by a statistical method, regression analysis, and each GMPE uses the preliminary data of selected earthquakes. In this paper, a new fuzzy method is proposed to select a suitable GMPE at every intensity (earthquake magnitude) and distance (site distance to fault) according to the aggregation of preliminary data in their area, using an α-cut. The results showed that using this method for GMPE selection can make a significant difference in probabilistic seismic hazard analysis (PSHA) results compared with selecting a single equation or using a logic tree. A practical example of the new method is also described for Iran, one of the world's most earthquake-prone areas.
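The α-cut mechanics can be illustrated with triangular membership functions; the function shapes and GMPE names below are hypothetical illustrations, not taken from the paper:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alpha_cut_select(memberships, alpha):
    """Keep only the GMPEs whose membership degree meets the alpha-cut."""
    return {name: m for name, m in memberships.items() if m >= alpha}
```

At a given magnitude, each GMPE's membership reflects how well its underlying data covers that magnitude range; the α-cut then discards equations whose coverage is too weak, rather than committing to a single equation or fixed logic-tree weights.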

Finding the Optimal Data Classification Method Using LDA and QDA Discriminant Analysis

  • Kim, SeungJae;Kim, SungHwan
    • Journal of Integrative Natural Science
    • /
    • v.13 no.4
    • /
    • pp.132-140
    • /
    • 2020
  • With the recent introduction of artificial intelligence (AI) technology, the use of data is increasing rapidly, and newly generated data is growing just as fast. To obtain analysis results from these data, the first task is to classify them well. However, if only one classification technique from machine learning is applied, the analysis can suffer from overfitting. To reduce or minimize problems such as overfitting caused by misclassification, it is necessary to derive an optimal classification by comparing the results of several classification techniques. Interpreting the data with only one technique leads to poor reasoning and poor prediction. This study therefore seeks a method for optimally classifying data by viewing it from various perspectives and applying both linear and nonlinear techniques, such as LDA and QDA, as a step preceding the main analysis. To obtain reliable and sophisticated statistics from big data analysis, the meaning of each variable and the correlations between variables must be analyzed; if the data is classified inconsistently with the hypothesis test from the beginning, even a well-executed analysis will yield unreliable results. In other words, prior to big data analysis, the data must be classified to suit the purpose of the analysis. This is a step that must be performed before reaching any result, and it can serve as a method of optimal data classification.
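A minimal from-scratch sketch of the Gaussian discriminant rule the abstract refers to: with class-specific covariances it behaves as QDA (quadratic boundaries), and substituting one pooled covariance for all classes would yield LDA (linear boundaries):

```python
import numpy as np

def qda_fit(X, y):
    """Estimate per-class Gaussian parameters (mean, covariance, log prior)."""
    return {c: (X[y == c].mean(axis=0),
                np.cov(X[y == c].T),
                np.log(np.mean(y == c)))
            for c in np.unique(y)}

def qda_predict(params, X):
    """Assign each row to the class with the highest Gaussian log score."""
    def score(x, mu, cov, log_prior):
        diff = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return log_prior - 0.5 * (logdet + diff @ np.linalg.solve(cov, diff))
    classes = list(params)
    s = np.array([[score(x, *params[c]) for c in classes] for x in X])
    return np.array(classes)[s.argmax(axis=1)]
```

Fitting both variants and comparing held-out accuracy is one concrete way to carry out the "compare several classification techniques before analysis" step the study advocates.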

On principal component analysis for interval-valued data (구간형 자료의 주성분 분석에 관한 연구)

  • Choi, Soojin;Kang, Kee-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.1
    • /
    • pp.61-74
    • /
    • 2020
  • Interval-valued data, one type of symbolic data, are observed as intervals rather than single values, and each interval-valued observation has an internal variation. Principal component analysis reduces the dimension of data by maximizing its variance, so principal component analysis of interval-valued data should account for the variance between observations as well as the variation within the observed intervals. In this paper, three principal component analysis methods for interval-valued data are summarized. In addition, a new method using a truncated normal distribution is proposed in place of the uniform distribution in the conventional quantile method, because we believe there is more information near the center of the interval. The methods are compared using simulations and a relevant OECD data set. For the quantile method, we draw a scatter plot of the principal components and then identify the position and distribution of the quantiles with the arrow-line representation method.
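The truncated-normal variant of the quantile method might be sketched as follows; the spread heuristic sigma = width/4 is an assumption, since the abstract does not specify how the truncated normal is parameterized:

```python
import numpy as np
from scipy.stats import truncnorm

def interval_quantiles(lower, upper, n_q=3):
    """Representative points inside [lower, upper], taken as quantiles of
    a normal truncated to the interval and centered at its midpoint,
    instead of the uniform quantiles of the classical method."""
    mid = (lower + upper) / 2.0
    sigma = (upper - lower) / 4.0            # heuristic spread (assumption)
    a, b = (lower - mid) / sigma, (upper - mid) / sigma
    probs = np.linspace(1.0 / (n_q + 1), n_q / (n_q + 1), n_q)
    return truncnorm.ppf(probs, a, b, loc=mid, scale=sigma)
```

Each interval-valued observation is expanded into these quantile points, and ordinary PCA is then run on the stacked points; concentrating the quantiles near the midpoint is exactly the "more information near the center" assumption the paper argues for.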