Title/Summary/Keyword: Data-based analysis

K-means Clustering using Grid-based Representatives

  • Park, Hee-Chang; Lee, Sun-Myung
    • Journal of the Korean Data and Information Science Society, v.16 no.4, pp.759-768, 2005
  • K-means clustering has been widely used in many applications, such as pattern analysis, data analysis, and market research. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm requires many hours to obtain k clusters because it is primitive and exploratory. In this paper we propose a new k-means clustering method that uses grid-based representative values (arithmetic and trimmed means) as samples. It is faster than traditional clustering methods while maintaining accuracy.
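
The abstract does not spell out the grid construction, but the idea of clustering grid-cell representatives rather than raw points can be sketched in a few lines. A minimal Python illustration, assuming a uniform grid and the arithmetic mean as the representative (the function name `grid_kmeans` and the bin count are illustrative, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

def grid_kmeans(X, k, bins=10):
    """Run k-means on grid-cell representatives instead of raw points."""
    # Quantize each coordinate onto a uniform grid.
    mins, maxs = X.min(axis=0), X.max(axis=0)
    cells = np.floor((X - mins) / (maxs - mins + 1e-12) * bins).astype(int)
    cells = np.clip(cells, 0, bins - 1)

    # One representative per occupied cell: the arithmetic mean of its points.
    _, inv = np.unique(cells, axis=0, return_inverse=True)
    reps = np.array([X[inv == i].mean(axis=0) for i in range(inv.max() + 1)])

    # Cluster the (much smaller) representative set, then label all points.
    km = KMeans(n_clusters=k, n_init=10).fit(reps)
    return km.predict(X), km.cluster_centers_

labels, centers = grid_kmeans(np.random.rand(10000, 2), k=3)
```

Since the number of occupied cells is typically far smaller than the number of points, the k-means iterations become correspondingly cheaper, which is the source of the claimed speedup.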


A Channel Equalization Algorithm Using Neural Network Based Data Least Squares

  • Lim, Jun-Seok; Pyeon, Yong-Kuk
    • The Journal of the Acoustical Society of Korea, v.26 no.2E, pp.63-68, 2007
  • Using a neural network model for oriented principal component analysis (OPCA), we propose a solution to the data least squares (DLS) problem, in which the error is assumed to lie in the data matrix only. In this paper, we apply this neural network model to channel equalization. Simulations show that the neural-network-based DLS outperforms ordinary least squares in channel equalization problems.
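
The abstract gives the DLS formulation but not the OPCA network's update rule. For a fixed parameter vector x, the smallest data-matrix perturbation E with (A+E)x = b has squared norm ||Ax - b||^2 / ||x||^2, so a direct numerical minimization of that ratio sketches what such a solver converges to. A toy Python comparison against ordinary least squares (invented channel and noise levels, not the paper's neural implementation):

```python
import numpy as np
from scipy.optimize import minimize

# Toy channel-identification setup: the error lives in the data matrix A.
rng = np.random.default_rng(0)
h_true = np.array([1.0, -0.5, 0.25])
A_clean = rng.standard_normal((200, 3))
b = A_clean @ h_true                                     # clean desired output
A = A_clean + 0.1 * rng.standard_normal(A_clean.shape)   # perturbed data matrix

# Ordinary least squares assumes the error is in b.
h_ols, *_ = np.linalg.lstsq(A, b, rcond=None)

# DLS: the smallest perturbation E with (A+E)x = b has squared norm
# ||Ax - b||^2 / ||x||^2, so DLS minimizes exactly that ratio.
dls_cost = lambda x: np.sum((A @ x - b) ** 2) / np.sum(x ** 2)
h_dls = minimize(dls_cost, h_ols).x

print("OLS error:", np.linalg.norm(h_ols - h_true))
print("DLS error:", np.linalg.norm(h_dls - h_true))
```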

Analysis of CSR·CSV·ESG Research Trends - Based on Big Data Analysis -

  • Lee, Eun Ji; Moon, Jaeyoung
    • Journal of Korean Society for Quality Management, v.50 no.4, pp.751-776, 2022
  • Purpose: The purpose of this paper is to present implications by analyzing research trends in CSR, CSV, and ESG through text analysis and visual analysis (comprehensive, field-based, and year-based), which are big data analyses, after collecting data based on previous studies of CSR, CSV, and ESG. Methods: To collect the analysis data, deep learning was used in the integrated search on the Academic Research Information Service (www.riss.kr) with "CSR", "CSV", and "ESG" as search terms, and the Korean abstracts and keywords were scraped from the extracted papers and organized into Excel. In the final step, the 2,847 CSR papers, 395 CSV papers, and 555 ESG papers derived were analyzed with the Rx64 4.0.2 program and RStudio using text mining, one of the big data analysis techniques, and Word Cloud for visualization. Results: The results of this study are as follows: research on CSR, CSV, and ESG was somewhat slow before 2010 but increased rapidly up to 2019. Research was concentrated in the fields of social science, art and physical education, and engineering. The keywords 'corporate', 'social', and 'responsibility' appeared frequently, with similar results in the word cloud analysis. Looking at the frequent keywords and word cloud analysis by field and year, the overall keywords were similar to the keywords by year, although some differences appeared in each field. Conclusion: Government and expert support for CSR, CSV, and ESG should be activated, and research on technology-based strategies is needed. In the future, various approaches to these topics are necessary; if research also considers the environment or energy, bigger implications can be presented.
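
The study ran its text mining and Word Cloud step in R (Rx64 4.0.2 / RStudio). A rough Python equivalent of the frequency-count-then-cloud step, with invented example abstracts standing in for the scraped RISS data (the third-party `wordcloud` package is an assumption about available tooling):

```python
import re
from collections import Counter

# Invented stand-ins for the scraped abstracts/keywords (the study
# exported RISS results to Excel and analyzed them in R).
abstracts = [
    "corporate social responsibility and firm performance",
    "ESG disclosure and corporate value",
    "creating shared value in corporate strategy",
]

# Text mining step: tokenize, drop very short tokens, count frequencies.
tokens = [t for doc in abstracts
          for t in re.findall(r"[a-z]+", doc.lower()) if len(t) > 2]
freq = Counter(tokens)
print(freq.most_common(5))

# Visualization step (assumes the `wordcloud` package is installed).
from wordcloud import WordCloud
WordCloud(width=800, height=400).generate_from_frequencies(freq).to_file("cloud.png")
```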

Bayesian-based seismic margin assessment approach: Application to research reactor

  • Kwag, Shinyoung; Oh, Jinho; Lee, Jong-Min; Ryu, Jeong-Soo
    • Earthquakes and Structures, v.12 no.6, pp.653-663, 2017
  • A seismic margin assessment evaluates how much margin exists for the system under beyond-design-basis earthquake events. Specifically, the seismic margin for the entire system is evaluated by utilizing a systems analysis based on the sub-system and component seismic fragility data. Each seismic fragility curve is obtained by using empirical, experimental, and/or numerical simulation data. The systems analysis is generally performed by employing a fault tree analysis. However, the current practice has clear limitations in that it cannot deal with the uncertainties of basic components or accommodate newly observed data. Therefore, in this paper, we present a Bayesian-based seismic margin assessment that is conducted using seismic fragility data and fault tree analysis including Bayesian inference. The proposed approach is first applied to the pool-type nuclear research reactor system for a quantitative evaluation of the seismic margin. The results show that the approach allows updating with newly available data/information at any level of the fault tree and can identify critical scenarios modified by new information. Also, given seismic hazard information, the approach extends to real-time risk evaluation. Thus, the proposed approach can be expected to resolve the fundamental restrictions of the current method.
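
As a sketch of the mechanics the abstract describes, the fragment below combines lognormal component fragilities through a small fault tree and performs a conjugate Bayesian update of one basic event from hypothetical test evidence. The fragility parameters and tree layout are illustrative, not the research reactor's actual model:

```python
import numpy as np
from scipy.stats import norm, beta

# Lognormal fragility: P(failure | pga) = Phi(ln(pga / Am) / beta_c).
def fragility(pga, Am, beta_c):
    return norm.cdf(np.log(pga / Am) / beta_c)

# Toy fault tree: system fails if the structure OR the core OR both
# redundant pumps fail (parameters and layout are illustrative only).
def system_pof(pga):
    p_struct = fragility(pga, Am=0.9, beta_c=0.40)
    p_core   = fragility(pga, Am=1.2, beta_c=0.35)
    p_pump   = fragility(pga, Am=0.7, beta_c=0.50)
    return 1 - (1 - p_struct) * (1 - p_core) * (1 - p_pump**2)

print("P(system failure | 0.5 g) =", system_pof(0.5))

# Bayesian update of one basic event: Beta prior, then hypothetical new
# evidence of 2 failures in 10 qualification tests (conjugate update).
prior = beta(1, 9)                  # prior mean failure probability 0.1
posterior = beta(1 + 2, 9 + 8)      # Beta(a + k, b + n - k)
print("prior mean:", prior.mean(), "-> posterior mean:", posterior.mean())
```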

A Universal Analysis Pipeline for Hybrid Capture-Based Targeted Sequencing Data with Unique Molecular Indexes

  • Kim, Min-Jung; Kim, Si-Cho; Kim, Young-Joon
    • Genomics & Informatics, v.16 no.4, pp.29.1-29.5, 2018
  • Hybrid capture-based targeted sequencing is being used increasingly for genomic variant profiling in tumor patients. Unique molecular index (UMI) technology has recently been developed and helps to increase the accuracy of variant calling by minimizing polymerase chain reaction biases and sequencing errors. However, UMI-adopted targeted sequencing data analysis differs slightly from the methods for other types of omics data, and its variant-calling pipeline is still being optimized by various study groups for their own purposes. Because of this fragmented tool usage, our group built an analysis pipeline intended for broad application across targeted sequencing studies generated with different methods. First, we generated hybrid capture-based data using genomic DNA extracted from tumor tissues of colorectal cancer patients. Sequencing libraries were prepared and pooled together, and an 8-plexed capture library was processed through the enrichment step before 150-bp paired-end sequencing on the Illumina HiSeq series. For the analysis, we evaluated several published tools, focusing mainly on the compatibility of each tool's input and output. Finally, our laboratory built an analysis pipeline specialized for UMI-adopted data. Through this pipeline, we were able to estimate on-target rates and filter consensus reads for more accurate variant calling. These results suggest the potential of our analysis pipeline for precise examination of the quality and efficiency of the conducted experiments.
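
The core UMI idea, grouping reads by mapping position and molecular index and collapsing each group to a consensus, can be shown in miniature. A toy Python sketch (real pipelines operate on aligned BAM records, e.g., via pysam; the reads here are invented):

```python
from collections import defaultdict, Counter

# Invented reads as (mapping_position, UMI, sequence); a real pipeline
# would pull these from an aligned BAM file (e.g., with pysam).
reads = [
    (100, "ACGT", "TTGCA"),
    (100, "ACGT", "TTGCA"),
    (100, "ACGT", "TTGGA"),   # one base hit by a sequencing error
    (100, "GGTA", "TTGCA"),   # same locus, different source molecule
]

# Reads sharing (position, UMI) originate from the same molecule.
groups = defaultdict(list)
for pos, umi, seq in reads:
    groups[(pos, umi)].append(seq)

# Per-base majority vote collapses PCR and sequencing errors.
def consensus(seqs):
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

for key, seqs in groups.items():
    print(key, "->", consensus(seqs), f"({len(seqs)} reads)")
```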

Improving Social Media Data Quality for Effective Analytics: An Empirical Investigation Based on E-BDMS

  • B. Karthick; T. Meyyappan
    • Journal of Applied Mathematics & Informatics, v.41 no.5, pp.1129-1143, 2023
  • Social media platforms have become an integral part of our daily lives, and they generate vast amounts of data that can be analyzed for various purposes. However, the quality of the data obtained from social media is often questionable due to factors such as noise, bias, and incompleteness. Enhancing data quality is crucial to ensure the reliability and validity of the results obtained from such data. This paper proposes an enhanced decision-making framework based on Business Decision Management Systems (BDMS) that addresses these challenges by incorporating a data quality enhancement component. The framework includes a backtracking method to recover from plan failures and improve risk-taking ability, and a steep optimized strategy to enhance training plans and resource management, all of which contribute to improving data quality. We examine the efficacy of the proposed framework on research data, which provides evidence of its ability to increase effectiveness and performance by enhancing data quality. Additionally, we demonstrate the reliability of the proposed framework through simulation analysis, including true positive analysis, performance analysis, error analysis, and accuracy analysis. This research contributes to the field of business intelligence by providing a framework that addresses critical data quality challenges faced by organizations in decision-making environments.
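
The abstract does not detail the backtracking or steep-optimized components, so only the generic data-quality pass it motivates can be illustrated: removing duplicates, handling missing values, and filtering low-information noise. A hypothetical pandas sketch with invented posts:

```python
import pandas as pd

# Invented posts; only the generic quality pass is shown, since the
# abstract does not specify E-BDMS's backtracking or steep-optimized steps.
posts = pd.DataFrame({
    "user":  ["a", "a", "b", "c", None],
    "text":  ["great!", "great!", "spam spam spam", "useful review", "ok"],
    "likes": [10, 10, 0, 5, None],
})

posts = posts.drop_duplicates()                 # noise: exact repeats
posts = posts.dropna(subset=["user"])           # incompleteness: no author
posts["likes"] = posts["likes"].fillna(posts["likes"].median())

# Crude noise filter: drop posts dominated by one repeated token.
def lexical_diversity(text):
    tokens = text.split()
    return len(set(tokens)) / len(tokens)

posts = posts[posts["text"].apply(lexical_diversity) > 0.5]
print(posts)
```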

Extension and Case Analysis of Topic Modeling for Inductive Social Science Research Methodology

  • Kim, Keun Hyung
    • The Journal of Information Systems, v.31 no.4, pp.25-45, 2022
  • Purpose In this paper, we propose a method to extend topic modeling techniques in order to derive data-based research hypotheses when establishing research hypotheses for the social sciences. As a concept contrasting with the existing deductive hypothesis-establishment methodology in social science research, topic modeling is extended to enable a so-called inductive hypothesis-establishment methodology, and an analysis case of Seongsan Ilchulbong online reviews based on the proposed methodology is presented. Design/methodology/approach We propose an extension architecture and extension algorithm that build on existing topic modeling. They include a data processing method based on the topic ratio in each document, and correlation and regression analysis of the processed data for the topics derived by existing topic modeling. We then present an analysis case of Seongsan Ilchulbong online reviews using the extended algorithm. An exploratory analysis of the reviews was performed through basic text analysis, and the data were transformed to a 5-point scale to enable correlation and regression analysis based on the topic ratio in each review. A regression analysis was performed with the derived topics as independent variables and the review rating as the dependent variable; hypotheses could be derived from this, enabling the so-called inductive hypothesis establishment. Findings This paper is meaningful in that it confirms the possibility of deriving a causal model and setting inductive hypotheses through an extended topic modeling analysis.
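
A compact sketch of the extension the paper describes: fit a standard topic model, take the per-document topic proportions, then regress the review rating on them so that the coefficients suggest inductive hypotheses. The toy reviews and two-topic setting are invented; the study used Seongsan Ilchulbong reviews transformed to a 5-point scale:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

# Invented reviews with 5-point ratings standing in for the
# Seongsan Ilchulbong data.
reviews = ["beautiful sunrise view from the peak",
           "crowded entrance and a long queue",
           "great hiking trail and amazing view",
           "queue was long but the view was worth it"]
ratings = np.array([5, 2, 4, 3])

# Existing step: topic modeling over term counts.
X = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)          # per-review topic proportions

# Proposed extension: regress ratings on topic proportions; each
# coefficient suggests a candidate hypothesis ("topic k drives rating").
reg = LinearRegression().fit(theta, ratings)
print("topic effects on rating:", reg.coef_)
```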

A Study of Main Contents Extraction from Web News Pages based on XPath Analysis

  • Sun, Bok-Keun
    • Journal of the Korea Society of Computer and Information, v.20 no.7, pp.1-7, 2015
  • Data on the internet can be used in various fields, such as a data source for information retrieval (IR), data mining, and knowledge-based information services, but it contains a great deal of unnecessary information. Removing this unnecessary data is a problem to be solved prior to studying knowledge-based information services built on web page data; in this paper, we solve the problem through the implementation of XTractor (XPath Extractor). Since XPath is used to navigate the element and attribute data in an XML document, the XPath analysis is carried out through XTractor. XTractor extracts the main text by HTML parsing, XPath grouping, and detecting the XPath that contains the main data. As a result, the recognition and precision rates were 97.9% and 93.9%, respectively, except for a few cases in a large amount of experimental data, confirming that the main text of news pages can be properly extracted.
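
XTractor's XPath grouping and main-data detection are not fully specified in the abstract. The sketch below reduces them to "parse the HTML, enumerate candidate nodes, keep the XPath whose node carries the most text", which conveys the mechanism without claiming to reproduce the paper's heuristic (uses lxml; the sample page is invented):

```python
from lxml import html

page = """
<html><body>
  <div id="nav"><a href="/">Home</a><a href="/news">News</a></div>
  <div id="article"><p>The main news text lives here.</p>
  <p>It spans several paragraphs of real content.</p></div>
  <div id="footer">Copyright notice</div>
</body></html>
"""

tree = html.fromstring(page)
# Enumerate candidate nodes, then keep the XPath whose node carries
# the most text (a stand-in for XTractor's main-data detection).
candidates = tree.xpath("//div")
main = max(candidates, key=lambda d: len(d.text_content().strip()))
print(tree.getroottree().getpath(main))   # XPath of the selected node
print(main.text_content().strip())
```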

A New Study on Vibration Data Acquisition and Intelligent Fault Diagnostic System for Aero-engine

  • Ding, Yongshan; Jiang, Dongxiang
    • Proceedings of the Korean Society of Propulsion Engineers Conference, 2008.03a, pp.16-21, 2008
  • The aero-engine, a type of rotating machinery with a complex structure and high rotating speed, exhibits complicated vibration faults. Therefore, a condition monitoring and fault diagnosis system is very important for airplane security. In this paper, a vibration data acquisition and intelligent fault diagnosis system is introduced. First, the vibration data acquisition part is described in detail. It consists of hardware acquisition modules and software analysis modules that realize real-time data acquisition and analysis, off-line data analysis, trend analysis, fault simulation, and graphical result display. The acquired vibration data are prepared for the subsequent intelligent fault diagnosis. Secondly, two artificial intelligence (AI) methods, mapping-based and rule-based, are discussed. One is the artificial neural network (ANN), an ideal tool for aero-engine fault diagnosis with a strong ability to learn complex nonlinear functions. The other, data mining, has the advantages of discovering knowledge from massive data and automatically extracting diagnostic rules. Thirdly, large amounts of historical data are used to train the ANN and to extract rules by data mining. Real-time data are then input into the trained ANN for mapping-based fault diagnosis, while the extracted rules, revised by expert experience, are used for rule-based fault diagnosis. The experimental results show that both AI methods are effective for aero-engine vibration fault diagnosis, each with its own strengths. The whole system can be deployed for local vibration monitoring and real-time fault diagnosis of aero-engines.
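
For the mapping-based branch, a minimal version of "train an ANN on historical vibration features, then classify real-time data" looks as follows. The two features, fault classes, and their distributions are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Invented vibration features (RMS amplitude, dominant frequency in Hz)
# for three conditions: 0 = normal, 1 = imbalance, 2 = misalignment.
def make(n, rms, freq, label):
    X = np.column_stack([rng.normal(rms, 0.1, n), rng.normal(freq, 2.0, n)])
    return X, np.full(n, label)

parts = [make(100, 1.0, 50, 0), make(100, 2.5, 50, 1), make(100, 1.2, 100, 2)]
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

# Mapping-based diagnosis: a small feedforward ANN learns the
# feature-to-fault mapping from historical data.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(clf.predict([[2.4, 49.0]]))   # expected: class 1 (imbalance)
```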


Performance Analysis and Identifying Characteristics of Processing-in-Memory System with Polyhedral Benchmark Suite

  • Jeonggeun Kim
    • Journal of the Semiconductor & Display Technology, v.22 no.3, pp.142-148, 2023
  • In this paper, we identify performance issues in executing compute kernels from PolyBench, which collects the core computational kernels of various data-intensive workloads such as deep learning, on processing-in-memory (PIM) devices. Using our in-house simulator, we measured and compared various performance metrics of the workloads on traditional out-of-order and in-order processors and on PIM-based systems. The PIM-based system improves performance over the other computing models for PolyBench kernels with short-term data reuse. However, some kernels perform poorly on PIM-based systems, which lack a multi-level cache hierarchy, because of those kernels' long-term data reuse characteristics. Hence, our evaluation and analysis suggest that further research should consider dynamic, workload-pattern-adaptive approaches to overcome the performance degradation from computational kernels with long-term data reuse and hidden data locality.
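
The short-term versus long-term reuse distinction driving these results can be made concrete with a reuse-distance probe: the number of distinct addresses touched between consecutive uses of the same address (small distances favor cache-less PIM execution). A toy sketch with invented access traces, not the paper's simulator methodology:

```python
def reuse_distances(trace):
    """Distinct addresses touched between consecutive uses of an address."""
    dists = []
    for i, addr in enumerate(trace):
        for j in range(i - 1, -1, -1):        # find the previous use
            if trace[j] == addr:
                dists.append(len(set(trace[j + 1:i])))
                break
    return dists

# Invented access traces: tight reuse vs. strided, delayed reuse.
stream  = [0, 1, 0, 1, 2, 3, 2, 3]    # short-term reuse
strided = [0, 1, 2, 3, 0, 1, 2, 3]    # long-term reuse
print(reuse_distances(stream))    # [1, 1, 1, 1] -- PIM-friendly
print(reuse_distances(strided))   # [3, 3, 3, 3] -- wants a cache
```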
