Title/Summary/Keyword: tabular data

Recent deep learning methods for tabular data

  • Yejin Hwang;Jongwoo Song
    • Communications for Statistical Applications and Methods / v.30 no.2 / pp.215-226 / 2023
  • Deep learning has made great strides in the field of unstructured data such as text, images, and audio. For tabular data analysis, however, machine learning algorithms such as ensemble methods still outperform deep learning. To keep up with the predictive power of these machine learning algorithms, several deep learning methods for tabular data have been proposed recently. In this paper, we review the latest deep learning models for tabular data and compare their performances on several datasets. We also compare the latest boosting methods to these deep learning methods and suggest guidelines for users who analyze tabular datasets. In regression, machine learning methods outperform deep learning methods, but for classification problems, deep learning methods perform better in some cases.
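
The kind of head-to-head comparison the paper reports can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the paper's code: the dataset (scikit-learn's built-in breast-cancer data) and the untuned hyperparameters are stand-ins, and it simply cross-validates a boosting classifier against a small MLP on the same tabular features.

```python
# A minimal sketch (assumed dataset and hyperparameters): compare a
# boosting model with a small neural network on one tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "boosting": GradientBoostingClassifier(random_state=0),
    # Feature scaling matters for neural networks on tabular data.
    "mlp": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                      random_state=0),
    ),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```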

Classification of Tabular Data using High-Dimensional Mapping and Deep Learning Network (고차원 매핑기법과 딥러닝 네트워크를 통한 정형데이터의 분류)

  • Kyeong-Taek Kim;Won-Du Chang
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.119-124 / 2023
  • Deep learning, the most popular approach to pattern recognition, has recently demonstrated conspicuous efficacy across diverse domains compared with traditional machine learning techniques. Classification problems for tabular data, however, remain the province of traditional machine learning. This paper introduces a novel network module that maps tabular data into high-dimensional tensors. The module is integrated into conventional deep learning networks and then applied to the classification of structured data. The proposed method is trained and validated on four datasets, achieving an average accuracy of 90.22%. Notably, this performance surpasses that of the contemporary deep learning model TabNet by 2.55%p. The approach is significant in that it can harness diverse network architectures, renowned for their superior performance in computer vision, for the analysis of tabular data.
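
The abstract does not specify the mapping module's internals, so the sketch below is a hypothetical PyTorch reconstruction of the general idea only: a learned projection reshapes each tabular row into a 2D tensor that a standard convolutional stack can then classify. All layer sizes are invented placeholders.

```python
# A hypothetical reconstruction (the abstract does not give the exact
# design): project each tabular row into a 2D "image" tensor so that a
# standard convolutional network can classify it.
import torch
import torch.nn as nn

class TabularToTensor(nn.Module):
    def __init__(self, n_features: int, channels=1, height=16, width=16):
        super().__init__()
        self.shape = (channels, height, width)
        self.proj = nn.Linear(n_features, channels * height * width)

    def forward(self, x):                      # x: (batch, n_features)
        return self.proj(x).view(-1, *self.shape)

model = nn.Sequential(
    TabularToTensor(n_features=20),
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 2),                   # two-class output
)

logits = model(torch.randn(32, 20))            # dummy batch of 32 rows
print(logits.shape)                            # torch.Size([32, 2])
```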

Test Dataset for validating the meaning of Table Machine Reading Language Model (표 기계독해 언어 모형의 의미 검증을 위한 테스트 데이터셋)

  • YU, Jae-Min;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.164-167 / 2022
  • In table machine reading comprehension, the knowledge required by language models and the structural form of tables change with the domain, so performance degrades more sharply than on plain text data. In this paper, we propose a pre-training data construction method and an adversarial learning method, based on selecting meaningful tabular data, for building a table language model that is robust to such domain changes. To detect tables that merely decorate web documents and carry no structural information among the extracted table data, heuristic rules were defined to identify header data and select genuine table data. An adversarial learning method between tabular data and infobox data, which carries knowledge about entities, was also applied. Training with the refined data instead of the existing unrefined data increased F1 by 3.45 and EM by 4.14 on the KorQuAD table data, and F1 by 19.38 and EM by 4.22 on the Spec table QA data.
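
The paper's heuristic rules are not given in the abstract; the sketch below shows one plausible shape such a filter could take, assuming tables arrive as lists of cell-string rows. The function name and thresholds are invented for illustration.

```python
# A hedged sketch of the kind of heuristic the abstract alludes to (the
# paper's actual rules are not given): keep a table only if it looks like
# a data table with an identifiable header row.
def looks_like_data_table(rows, min_rows=2, min_cols=2):
    """rows: list of lists of cell strings extracted from a web table."""
    if len(rows) < min_rows or not rows[0]:
        return False
    n_cols = len(rows[0])
    if n_cols < min_cols:
        return False
    # Layout/decoration tables often have ragged rows or empty cells.
    if any(len(r) != n_cols for r in rows):
        return False
    filled = sum(1 for r in rows for c in r if c.strip())
    if filled / (len(rows) * n_cols) < 0.8:
        return False
    # Header heuristic: first row should hold short, non-numeric labels.
    return all(c.strip() and not c.strip().isdigit() for c in rows[0])

table = [["Model", "F1"], ["baseline", "72.1"], ["refined", "75.5"]]
print(looks_like_data_table(table))  # True
```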

Korean TableQA: Structured data question answering based on span prediction style with S3-NET

  • Park, Cheoneum;Kim, Myungji;Park, Soyoon;Lim, Seungyoung;Lee, Jooyoul;Lee, Changki
    • ETRI Journal / v.42 no.6 / pp.899-911 / 2020
  • The data in tables are accurate and rich in information, which facilitates the performance of information extraction and question answering (QA) tasks. TableQA, which is based on tables, solves problems by understanding the table structure and searching for answers to questions. In this paper, we introduce both novice and intermediate Korean TableQA tasks that involve deducing the answer to a question from structured tabular data and using it to build a question answering pair. To solve Korean TableQA tasks, we use S3-NET, which has shown a good performance in machine reading comprehension (MRC), and propose a method of converting structured tabular data into a record format suitable for MRC. Our experimental results show that the proposed method outperforms a baseline in both the novice task (exact match (EM) 96.48% and F1 97.06%) and intermediate task (EM 99.30% and F1 99.55%).
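
A common way to realize the table-to-record conversion the authors describe is to pair each column header with its cell value and flatten the row into a string; the sketch below is an assumed, simplified version of that idea, not S3-NET's actual preprocessing.

```python
# A minimal sketch (assumed format): flatten each table row into a
# "record" string pairing every header with its cell, so that a
# span-prediction MRC model can read it like a passage.
def table_to_records(header, rows):
    records = []
    for row in rows:
        pairs = [f"{h} : {c}" for h, c in zip(header, row)]
        records.append(" [SEP] ".join(pairs))
    return records

header = ["Player", "Team", "Goals"]
rows = [["Son", "Tottenham", "17"], ["Kane", "Bayern", "36"]]
for rec in table_to_records(header, rows):
    print(rec)
# Player : Son [SEP] Team : Tottenham [SEP] Goals : 17
# Player : Kane [SEP] Team : Bayern [SEP] Goals : 36
```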

Investigations into Coarsening Continuous Variables

  • Jeong, Dong-Myeong;Kim, Jay-J.
    • The Korean Journal of Applied Statistics / v.23 no.2 / pp.325-333 / 2010
  • Protection against disclosure of survey respondents' identifiable and/or sensitive information is a prerequisite for statistical agencies that release microdata files from their sample surveys. Coarsening is one of the popular methods for protecting the confidentiality of the data. Grouped data can be released in the form of microdata or tabular data. Instead of releasing the data in tabular form only, making microdata available to the public with interval codes and their representative values greatly enhances the utility of the data: it allows researchers to compute covariances between variables, build statistical models, and run a variety of statistical tests on the data. It may be conjectured that the variance of the interval data is lower than that of the ungrouped data, in the sense that the coarsened data lack the within-interval variance. This conjecture is investigated using the uniform and triangular distributions. Traditionally, the midpoint is used to represent all the values in an interval. This approach implicitly assumes that the data are uniformly distributed within each interval. However, this assumption may not hold, especially in the last interval of economic data. In this paper, we use three distributional assumptions - uniform, Pareto, and lognormal - for the last interval and either the midpoint or the median for the other intervals, applied to the wage and food-cost variables of Statistics Korea's 2006 Household Income and Expenditure Survey (HIES) data, and compare these approaches in terms of the first two moments.
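
The conjecture about the variance of interval data is easy to check numerically. The toy example below (not the paper's HIES analysis) coarsens uniformly distributed data to interval midpoints and shows the variance dropping by roughly the within-interval variance.

```python
# A small numeric illustration of the conjecture in the abstract:
# coarsening to interval midpoints removes the within-interval variance,
# so the variance of the coarsened data is lower than that of the raw data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=100_000)          # uniform "wage" data

edges = np.arange(0, 110, 10)                  # ten equal intervals
idx = np.digitize(x, edges) - 1                # interval index per value
midpoints = (edges[:-1] + edges[1:]) / 2
coarsened = midpoints[idx]

print(f"raw variance:       {x.var():.1f}")         # ~ 100^2/12 = 833.3
print(f"coarsened variance: {coarsened.var():.1f}")  # ~ 825.0, smaller
```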

Development of RESTful Web Service for Loading Data focusing on Daily Meteorological Data (데이터 로딩 자동화를 위한 RESTful 웹서비스 개발 - 일별 기상자료 처리를 중심으로 -)

  • Kim, Taegon;Lee, JeongJae;Nam, Won-Ho;Suh, Kyo
    • Journal of The Korean Society of Agricultural Engineers / v.56 no.6 / pp.93-102 / 2014
  • Data loading is generally a laborious part of model development. Meteorological data, the basic input for hydrological models, is provided through the websites of the Korea Meteorological Administration (KMA). The KMA website provides daily meteorological observation data in tabular format, classified by year, item, and station, which is cumbersome to manipulate into model inputs such as time series and multi-item or multi-station data. Such provider-oriented services, which broadcast information in a fixed form, make downstream processing inconvenient. Tim O'Reilly's "Web 2.0" centers on providing services based on data, and top IT companies such as Google, Yahoo, Daum, and Naver provide customer-oriented services through Open APIs (Application Programming Interfaces). A RESTful web service, the typical implementation of an Open API, consists of URI requests and HTTP responses, a simpler and more lightweight protocol than SOAP (Simple Object Access Protocol). The aim of this study is to develop a web-based service that supports data loading for machine use rather than manual human use. The developed RESTful web service provides an Open API for manipulating meteorological data, and the proposed Open API is easily accessed from spreadsheet programs, web browsers, and various programming environments.
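
From the client side, consuming such a RESTful Open API reduces to a parameterized GET request. Everything in the sketch below is a hypothetical placeholder: the base URL, resource path, parameter names, and JSON field names are not the service's real interface.

```python
# A hedged sketch of client-side use of such an Open API; the endpoint
# and parameter names below are hypothetical placeholders, not the
# paper's actual service.
import requests

BASE = "http://example.org/weatherapi"         # placeholder host

resp = requests.get(
    f"{BASE}/daily",
    params={"station": "108", "start": "2014-01-01",
            "end": "2014-01-31", "items": "tavg,rainfall"},
    timeout=10,
)
resp.raise_for_status()
for row in resp.json():                        # assumed JSON time series
    print(row["date"], row["tavg"], row["rainfall"])
```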

Study of oversampling algorithms for soil classifications by field velocity resistivity probe

  • Lee, Jong-Sub;Park, Junghee;Kim, Jongchan;Yoon, Hyung-Koo
    • Geomechanics and Engineering / v.30 no.3 / pp.247-258 / 2022
  • A field velocity resistivity probe (FVRP) can measure compressional waves, shear waves, and electrical resistivity in boreholes. The objective of this study is to classify soils with machine learning techniques using the elastic wave velocities and electrical resistivity measured by the FVRP. Field and laboratory tests are performed, and the measured values are used as input variables to classify silt sand, sand, silty clay, and clay-sand mixture layers. Four algorithms - k-nearest neighbors (KNN), naive Bayes (NB), random forest (RF), and support vector machine (SVM) - are selected for classification, their hyperparameters are optimized, and their accuracy is evaluated. The accuracies are 0.76, 0.91, 0.94, and 0.88 for the KNN, NB, RF, and SVM algorithms, respectively. To increase the amount of data in each soil layer and overcome the imbalance in the dataset, the synthetic minority oversampling technique (SMOTE) and a conditional tabular generative adversarial network (CTGAN) are applied. The CTGAN improves the accuracy of the KNN, NB, RF, and SVM algorithms. The results demonstrate that the three kinds of values measured by the FVRP can classify soil layers with machine learning algorithms.
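
The SMOTE step is available off the shelf in the imbalanced-learn package; the sketch below runs it on random stand-in features, since the paper's FVRP measurements (Vp, Vs, electrical resistivity) are not public.

```python
# A minimal sketch of the oversampling step with imbalanced-learn's
# SMOTE; the feature values are random stand-ins for the FVRP
# measurements (Vp, Vs, electrical resistivity).
from collections import Counter

import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))                  # (Vp, Vs, resistivity)
y = np.array([0] * 100 + [1] * 20)             # imbalanced soil labels

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))
# Counter({0: 100, 1: 20}) -> Counter({0: 100, 1: 100})
```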

Resolving CTGAN-based data imbalance for commercialization of public technology (공공기술 사업화를 위한 CTGAN 기반 데이터 불균형 해소)

  • Hwang, Chul-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.64-69 / 2022
  • Commercialization of public technology is the transfer of government-led scientific and technological innovation and R&D results to the private sector, and it is recognized as a key driver of economic growth. To activate technology transfer, various machine learning methods are therefore being studied to identify success factors or to match public technologies with high commercialization potential to the companies that demand them. However, public technology commercialization data is tabular and highly imbalanced, with a large gap between the success and failure ratios, so machine learning performance on it is poor. In this paper, we present a method of using CTGAN to resolve the imbalance in tabular public technology data. To verify the effectiveness of the proposed method, we also perform a comparative experiment against SMOTE, a statistical approach, using actual public technology commercialization data. In many of the experimental cases, CTGAN is confirmed to predict public technology commercialization successes reliably.
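
A minimal sketch of the CTGAN step, using the open-source ctgan package from the SDV project. The DataFrame columns are hypothetical stand-ins for the (non-public) commercialization data, and the training settings are kept tiny for illustration.

```python
# A hedged sketch using the open-source `ctgan` package; the column
# names are hypothetical stand-ins, and epochs/batch_size are kept
# small purely for this toy example.
import pandas as pd
from ctgan import CTGAN

df = pd.DataFrame({
    "tech_field": ["bio", "ict", "ict", "energy"] * 25,
    "rnd_budget": range(100),
    "success":    [1, 0, 0, 0] * 25,           # imbalanced target
})

model = CTGAN(epochs=10, batch_size=100)
model.fit(df, discrete_columns=["tech_field", "success"])

# Sample synthetic rows, then keep minority-class rows to rebalance.
synth = model.sample(500)
minority = synth[synth["success"] == 1]
balanced = pd.concat([df, minority.head(50)], ignore_index=True)
print(balanced["success"].value_counts())
```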

Generating and Validating Synthetic Training Data for Predicting Bankruptcy of Individual Businesses

  • Hong, Dong-Suk;Baik, Cheol
    • Journal of information and communication convergence engineering / v.19 no.4 / pp.228-233 / 2021
  • In this study, we analyze the credit information (loans, delinquency information, etc.) of individual business owners and generate voluminous training data through a partially synthetic training technique to establish a bankruptcy prediction model. We then evaluate the prediction performance obtained with the newly generated data against that obtained with the actual data. In the experimental results (a logistic regression task), training data generated with a conditional tabular generative adversarial network (CTGAN) improves recall by a factor of 1.75 compared with the actual data. The probability that the actual and generated data are sampled from an identical distribution is verified to be well above 80%. Providing artificial intelligence training data through data synthesis in the fields of credit rating and default risk prediction for individual businesses, where research has been relatively inactive, should promote further in-depth research on such methods.
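
The distributional check the abstract mentions can be approximated per numeric column with a two-sample Kolmogorov-Smirnov test; the sketch below uses lognormal stand-ins for a credit variable and is not the paper's exact verification procedure.

```python
# A minimal sketch of one way to test whether real and synthetic data
# could come from the same distribution: a two-sample KS test per
# numeric column (stand-in data, not the paper's credit records).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
real = rng.lognormal(mean=3.0, sigma=0.5, size=2000)       # e.g., loan amount
synthetic = rng.lognormal(mean=3.0, sigma=0.5, size=2000)  # generated column

stat, p_value = ks_2samp(real, synthetic)
# A large p-value means the test cannot distinguish the two samples.
print(f"KS statistic {stat:.3f}, p-value {p_value:.3f}")
```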

Automatic Generation of Machine Readable Context Annotations for SPARQL Results

  • Choi, Ji-Woong
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.1-10 / 2016
  • In this paper, we propose an approach to generating machine-readable context annotations for SPARQL results. According to the W3C Recommendations, data retrieved from RDF or OWL data sources is represented in tabular form, in which each cell is described only by a type and a value. This simple result form is generally useful, but it is not sufficient to explain the semantics of the data in query results. To explain the meaning of the data, appropriate annotations must be added to the query results. In this paper, we generate the annotations from the basic graph patterns in the user's queries; we can also manipulate the original queries to complete the annotations. The generated annotations are represented in RDFa syntax, and RDFa expressions embedded in HTML are machine-understandable. We believe that our work will improve the trustworthiness of query results and contribute to distributing data in line with the vision of the Semantic Web.
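
As a rough illustration of the output the paper targets, the sketch below wraps SPARQL JSON result cells in HTML annotated with RDFa property attributes. The variable-to-predicate mapping is hard-coded here, whereas the paper derives it from the query's basic graph patterns; all names and URIs are placeholders.

```python
# A hedged sketch of the general idea (not the paper's algorithm): wrap
# each SPARQL result cell in HTML carrying an RDFa `property` attribute
# recovered from the query's basic graph pattern.
SELECT_VARS = ["name", "birthDate"]
# Mapping from result variable to its predicate in the graph pattern;
# the paper extracts this from the query itself (hard-coded here).
VAR_TO_PREDICATE = {"name": "foaf:name", "birthDate": "dbo:birthDate"}

def row_to_rdfa(subject_uri, row):
    cells = "".join(
        f'<td property="{VAR_TO_PREDICATE[v]}">{row[v]["value"]}</td>'
        for v in SELECT_VARS
    )
    return f'<tr about="{subject_uri}">{cells}</tr>'

row = {"name": {"type": "literal", "value": "Ada Lovelace"},
       "birthDate": {"type": "literal", "value": "1815-12-10"}}
print(row_to_rdfa("http://dbpedia.org/resource/Ada_Lovelace", row))
```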