Tabular Data


Recent deep learning methods for tabular data

  • Yejin Hwang;Jongwoo Song
    • Communications for Statistical Applications and Methods / v.30 no.2 / pp.215-226 / 2023
  • Deep learning has made great strides in the field of unstructured data such as text, images, and audio. For tabular data analysis, however, machine learning algorithms such as ensemble methods still outperform deep learning. To keep up with the predictive power of these machine learning algorithms, several deep learning methods for tabular data have been proposed recently. In this paper, we review the latest deep learning models for tabular data and compare their performance on several datasets. We also compare the latest boosting methods to these deep learning methods and suggest guidelines for users who analyze tabular datasets. In regression, machine learning methods are better than deep learning methods, but for classification problems, deep learning methods perform better in some cases.
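As an illustrative companion to the comparison above, the sketch below pits a gradient-boosting model against a small neural network on a public tabular dataset using scikit-learn. The dataset and hyperparameters are arbitrary choices for demonstration, not the paper's benchmark.

```python
# Minimal boosting-vs-deep-learning comparison on tabular data (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Tree-based boosting baseline
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Small neural network; tabular features are scaled first
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
).fit(X_tr, y_tr)

print(f"boosting acc: {boost.score(X_te, y_te):.3f}")
print(f"mlp acc:      {mlp.score(X_te, y_te):.3f}")
```

On such a small dataset both models reach similar accuracy; the paper's point is that the winner depends on the task type (regression vs. classification) and the dataset.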

Incorporating BERT-based NLP and Transformer for An Ensemble Model and its Application to Personal Credit Prediction

  • Sophot Ky;Ju-Hong Lee;Kwangtek Na
    • Smart Media Journal / v.13 no.4 / pp.9-15 / 2024
  • Tree-based algorithms have been the dominant methods used to build prediction models for tabular data, including personal credit data. However, they are compatible only with categorical and numerical data and do not capture relationships between features. In this work, we propose an ensemble model based on the Transformer architecture that incorporates text features and harnesses the self-attention mechanism to address the feature-relationship limitation. We describe a text formatter module that converts the original tabular data into sentence data, which is fed into FinBERT along with the other text features. In addition, we employ an FT-Transformer trained on the original tabular data. We evaluate this multi-modal approach against two popular tree-based algorithms, Random Forest and Extreme Gradient Boosting (XGBoost), as well as TabTransformer. Our proposed method shows superior default recall, F1 score, and AUC results across two public datasets. These results can help financial institutions reduce the risk of financial loss from defaulters.
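A "text formatter" of the kind described above can be sketched as a function that serializes one tabular record into a natural-language sentence for a BERT-style encoder such as FinBERT. The column names and the sentence template below are assumptions for illustration, not the paper's exact format.

```python
# Hedged sketch: serialize a tabular record into a sentence a text encoder can read.
def row_to_sentence(row: dict) -> str:
    """Turn {"age": 35, ...} into a single descriptive sentence."""
    parts = [f"{col.replace('_', ' ')} is {val}" for col, val in row.items()]
    return "The applicant's " + ", ".join(parts) + "."

record = {"age": 35, "annual_income": 48000, "loan_purpose": "education"}
print(row_to_sentence(record))
# The applicant's age is 35, annual income is 48000, loan purpose is education.
```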

Classification of Tabular Data using High-Dimensional Mapping and Deep Learning Network (고차원 매핑기법과 딥러닝 네트워크를 통한 정형데이터의 분류)

  • Kyeong-Taek Kim;Won-Du Chang
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.119-124 / 2023
  • Deep learning, now the most popular approach for pattern recognition, has recently demonstrated conspicuous efficacy across diverse domains compared with traditional machine learning techniques. Classification problems on tabular data, however, remain the territory of traditional machine learning. This paper introduces a novel network module designed to map tabular data into high-dimensional tensors. The module is integrated into conventional deep learning networks and then applied to the classification of structured data. The proposed method was trained and validated on four datasets, achieving an average accuracy of 90.22%. Notably, this performance surpasses that of a contemporary deep learning model, TabNet, by 2.55%p. The proposed approach is significant in that it can harness diverse network architectures, renowned for their superior performance in computer vision, for the analysis of tabular data.
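The core idea, mapping a 1-D tabular feature vector into a higher-dimensional tensor so that image-oriented networks can consume it, can be sketched as below. The zero-pad-and-reshape mapping is an assumption for illustration, not the paper's actual module.

```python
# Illustrative mapping of a tabular feature vector into a 2-D grid (tensor).
import numpy as np

def to_tensor(features, side=4):
    """Pad a feature vector to side*side values and reshape it into a 2-D grid."""
    padded = np.zeros(side * side)
    padded[:len(features)] = features
    return padded.reshape(side, side)

x = np.array([0.2, 0.7, 1.5, 3.0, 0.1])
grid = to_tensor(x)
print(grid.shape)  # (4, 4)
```

A CNN or other vision architecture could then be applied to `grid` exactly as it would be to a small image.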

Test Dataset for validating the meaning of Table Machine Reading Language Model (표 기계독해 언어 모형의 의미 검증을 위한 테스트 데이터셋)

  • YU, Jae-Min;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.164-167 / 2022
  • In table machine reading comprehension, the knowledge required by language models and the structural form of tables change depending on the domain, causing greater performance degradation than on text data. In this paper, we propose a pre-training data construction method and an adversarial learning method, based on selecting meaningful tabular data, for building a table language model robust to these domain changes. To detect tables that merely decorate web documents and carry no structural information, a heuristic rule that identifies header data was defined and applied to select valid tables from the extracted table data. An adversarial learning method between tabular data and infobox data, which carries knowledge about entities, was also applied. Compared with training on the existing unrefined data, training on the refined data increased F1 by 3.45 and EM by 4.14 on the KorQuAD table data, and F1 by 19.38 and EM by 4.22 on the Spec table QA data.
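A heuristic of the kind described, keeping only tables with a plausible header row and discarding decorative layout tables, might look like the loose sketch below. The concrete checks and thresholds are assumptions, not the paper's exact rule.

```python
# Hedged sketch: keep a table only if its first row looks like a real header.
def has_header(table):
    """table: list of rows, each a list of cell strings."""
    if len(table) < 2 or not table[0]:
        return False
    first = table[0]
    # Header cells should be non-empty, non-numeric, and mutually distinct.
    if any(c.strip() == "" or c.strip().replace(".", "").isdigit() for c in first):
        return False
    return len(set(first)) == len(first)

data_table = [["name", "age"], ["Kim", "31"], ["Lee", "28"]]
layout_table = [["", "logo.png"], ["menu", "menu"]]
print(has_header(data_table), has_header(layout_table))  # True False
```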

Korean TableQA: Structured data question answering based on span prediction style with S3-NET

  • Park, Cheoneum;Kim, Myungji;Park, Soyoon;Lim, Seungyoung;Lee, Jooyoul;Lee, Changki
    • ETRI Journal / v.42 no.6 / pp.899-911 / 2020
  • The data in tables are accurate and rich in information, which facilitates information extraction and question answering (QA) tasks. TableQA solves problems by understanding the table structure and searching the table for answers to questions. In this paper, we introduce both novice and intermediate Korean TableQA tasks, which involve deducing the answer to a question from structured tabular data and using it to build question-answer pairs. To solve the Korean TableQA tasks, we use S3-NET, which has shown good performance in machine reading comprehension (MRC), and propose a method for converting structured tabular data into a record format suitable for MRC. Our experimental results show that the proposed method outperforms a baseline in both the novice task (exact match (EM) 96.48% and F1 97.06%) and the intermediate task (EM 99.30% and F1 99.55%).
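Converting a table into a flat record format that a span-prediction MRC model can read can be sketched as below; the "column : value" serialization and delimiters are assumptions for illustration, not S3-NET's actual preprocessing.

```python
# Hedged sketch: flatten each table row into a delimited text record for MRC input.
def table_to_records(header, rows):
    records = []
    for row in rows:
        cells = [f"{h} : {v}" for h, v in zip(header, row)]
        records.append(" | ".join(cells))
    return records

header = ["country", "capital"]
rows = [["Korea", "Seoul"], ["Japan", "Tokyo"]]
for rec in table_to_records(header, rows):
    print(rec)
# country : Korea | capital : Seoul
# country : Japan | capital : Tokyo
```

An MRC model can then treat each record as a short passage and predict answer spans within it.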

Investigations into Coarsening Continuous Variables

  • Jeong, Dong-Myeong;Kim, Jay-J.
    • The Korean Journal of Applied Statistics / v.23 no.2 / pp.325-333 / 2010
  • Protection against disclosure of survey respondents' identifiable and/or sensitive information is a prerequisite for statistical agencies that release microdata files from their sample surveys. Coarsening is one of the popular methods for protecting the confidentiality of the data. Grouped data can be released in the form of microdata or tabular data. Instead of releasing the data in tabular form only, making microdata available to the public with interval codes and their representative values greatly enhances the utility of the data: it allows researchers to compute covariances between variables, build statistical models, and run a variety of statistical tests on the data. It may be conjectured that the variance of the interval data is lower than that of the ungrouped data, in the sense that coarsened data lack the within-interval variance. This conjecture is investigated using the uniform and triangular distributions. Traditionally, the midpoint is used to represent all values in an interval. This implicitly assumes that the data are uniformly distributed within each interval, which may not hold, especially in the last interval of economic data. In this paper, we use three distributional assumptions - uniform, Pareto, and lognormal - in the last interval, and either the midpoint or the median for the other intervals, for the wage and food-cost variables of Statistics Korea's 2006 Household Income and Expenditure Survey (HIES) data, and compare these approaches in terms of the first two moments.
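The variance conjecture above can be checked numerically: coarsen uniform data into intervals, replace each value with its interval midpoint, and compare variances. This sketch uses an arbitrary ten-interval grid, not the paper's actual coarsening scheme.

```python
# Numerical check: midpoint-coarsened data loses the within-interval variance.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=100_000)

edges = np.arange(0, 110, 10)           # ten intervals of width 10
mid = (edges[:-1] + edges[1:]) / 2      # midpoint representative of each interval
idx = np.digitize(x, edges[1:-1])       # interval index for every observation
coarse = mid[idx]                       # replace each value with its midpoint

print(round(x.var(), 1), round(coarse.var(), 1))
```

For width-10 intervals of uniform data, the difference is the within-interval variance 10²/12 ≈ 8.3, so the coarsened variance is indeed lower.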

Comparing the Performance of a Deep Learning Model (TabPFN) for Predicting River Algal Blooms with Varying Data Composition (데이터 구성에 따른 하천 조류 예측 딥러닝 모형 (TabPFN) 성능 비교)

  • Hyunseok Yang;Jungsu Park
    • Journal of Wetlands Research / v.26 no.3 / pp.197-203 / 2024
  • Algal blooms in rivers can negatively affect water source management and water treatment processes, necessitating continuous management. In this study, a multi-class classification model was developed to predict the concentration of chlorophyll-a (chl-a), one of the key indicators of algal blooms, using Tabular Prior-data Fitted Networks (TabPFN), a novel deep learning algorithm known for its relatively superior performance on small tabular datasets. The model was developed using daily observation data collected at the Buyeo water quality monitoring station from January 1, 2014, to December 31, 2022. The collected data were averaged to construct input datasets with measurement frequencies of 1 day, 3 days, 6 days, and 12 days. A comparison of the four models built on these inputs showed that performance remains stable even when the measurement interval is longer and the number of observations is smaller. The macro averages for the four models were: precision 0.77, 0.76, 0.83, 0.84; recall 0.63, 0.65, 0.66, 0.74; F1-score 0.67, 0.69, 0.71, 0.78. The weighted averages were: precision 0.76, 0.77, 0.81, 0.84; recall 0.76, 0.78, 0.81, 0.85; F1-score 0.74, 0.77, 0.80, 0.84. This study demonstrates that a chl-a prediction model built with TabPFN performs stably even with small-scale input data, verifying its feasibility in fields where the input data available for model construction are limited.
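The input-construction step, averaging a daily series into coarser windows to build the 3-, 6-, and 12-day datasets, can be sketched as below. How partial trailing windows are handled (truncated here) is an assumption.

```python
# Sketch: average a daily series into non-overlapping k-day windows.
import numpy as np

def window_average(series, k):
    n = (len(series) // k) * k          # drop the trailing partial window
    return series[:n].reshape(-1, k).mean(axis=1)

daily = np.arange(1.0, 13.0)            # 12 "daily" observations
print(window_average(daily, 3))         # [ 2.  5.  8. 11.]
```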

Development of RESTful Web Service for Loading Data focusing on Daily Meteorological Data (데이터 로딩 자동화를 위한 RESTful 웹서비스 개발 - 일별 기상자료 처리를 중심으로 -)

  • Kim, Taegon;Lee, JeongJae;Nam, Won-Ho;Suh, Kyo
    • Journal of The Korean Society of Agricultural Engineers / v.56 no.6 / pp.93-102 / 2014
  • Data loading is generally a laborious part of developing models. Meteorological data, the basic input for hydrological models, is provided through the websites of the Korea Meteorological Administration (KMA). The KMA website provides daily meteorological observation data in tabular format, classified by year, item, and station. It is cumbersome to manipulate this tabular format into model inputs such as time series and multi-item or multi-station data, and such provider-oriented services, which broadcast information only in a fixed form, make the process inconvenient. Tim O'Reilly's "Web 2.0" focuses on providing services based on data, and top-ranked IT companies such as Google, Yahoo, Daum, and Naver provide customer-oriented services with Open APIs (Application Programming Interfaces). A RESTful web service, the typical implementation of an Open API, consists of a URI request and an HTTP response, a simpler and lighter-weight protocol than SOAP (Simple Object Access Protocol). The aim of this study is to develop a web-based service that helps load data automatically rather than through manual handling. The developed RESTful web service provides an Open API for manipulating meteorological data, and the proposed Open API can easily be accessed from spreadsheet programs, web browsers, and various programming environments.
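The request side of such a RESTful service boils down to composing a URI with query parameters. The endpoint path and parameter names below are invented for illustration; they are not KMA's or the paper's actual API.

```python
# Schematic REST-style request construction (hypothetical endpoint and parameters).
from urllib.parse import urlencode

def build_request(base, station, item, start, end):
    """Compose a GET URI for daily observations of one item at one station."""
    query = urlencode({"stn": station, "item": item, "start": start, "end": end})
    return f"{base}/daily?{query}"

url = build_request("http://example.org/weather", 108, "rainfall",
                    "2014-01-01", "2014-12-31")
print(url)
# http://example.org/weather/daily?stn=108&item=rainfall&start=2014-01-01&end=2014-12-31
```

Because the request is a plain URI, the same call works from a browser, a spreadsheet's web-query feature, or any programming language with an HTTP client.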

Study of oversampling algorithms for soil classifications by field velocity resistivity probe

  • Lee, Jong-Sub;Park, Junghee;Kim, Jongchan;Yoon, Hyung-Koo
    • Geomechanics and Engineering / v.30 no.3 / pp.247-258 / 2022
  • A field velocity resistivity probe (FVRP) can measure compressional waves, shear waves, and electrical resistivity in boreholes. The objective of this study is to classify soils with machine learning techniques using the elastic wave velocities and electrical resistivity measured by the FVRP. Field and laboratory tests are performed, and the measured values are used as input variables to classify silt sand, sand, silty clay, and clay-sand mixture layers. The accuracy of k-nearest neighbors (KNN), naive Bayes (NB), random forest (RF), and support vector machine (SVM) classifiers, selected to perform the classification and tuned through hyperparameter optimization, is evaluated. The accuracies are calculated as 0.76, 0.91, 0.94, and 0.88 for the KNN, NB, RF, and SVM algorithms, respectively. To increase the amount of data in each soil layer and overcome the imbalance in the dataset, the synthetic minority oversampling technique (SMOTE) and a conditional tabular generative adversarial network (CTGAN) are applied. CTGAN improves the accuracy of all four algorithms. The results demonstrate that the values measured by the FVRP, three kinds of data, can classify soil layers with machine learning algorithms.
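SMOTE, one of the two oversampling methods compared above, interpolates synthetic minority samples between a minority point and one of its nearest minority neighbors. The bare-bones sketch below implements that idea directly in NumPy; it is a simplified stand-in for library implementations such as imbalanced-learn's.

```python
# Bare-bones SMOTE-style oversampling: interpolate between minority neighbors.
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic samples from the minority-class matrix X_min."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]       # k nearest (excluding itself)
        j = rng.choice(neighbors)
        lam = rng.random()                       # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote(X_minority, n_new=5)
print(synthetic.shape)  # (5, 2)
```

Every synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original feature region.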

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.121-130 / 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing this large amount of data, there are inevitable limits to processing diverse and complex data in conventional ways. Meanwhile, thanks to gains in computational performance and growing demand for autonomous systems, there are many attempts to apply machine learning (ML) in various fields. In particular, processing the data for model input and designing the model for the objective function are critical to model performance. Data processing methods appropriate to each data type and property have been presented in many studies, and ML performance varies greatly depending on the method chosen. Nevertheless, deciding which data processing method to use is difficult because the types and characteristics of data have become more diverse. Specifically, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input variable filtering based on statistical analysis and normalization according to data properties, and we analyze the data structure using visualization. Lastly, we design an ML model, train it with the proposed multi-variate data processing, and analyze the trained model's passenger-survival prediction performance. We expect the proposed multi-variate data processing and visualization to extend to various environments for ML-based analysis.
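Two of the processing steps discussed above, filtering an uninformative input variable and normalizing a numeric one, can be sketched on toy Titanic-like records. The records, column names, and the choice of min-max normalization are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: drop an uninformative column, then min-max normalize a numeric one.
def min_max(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

records = [{"ticket_id": 101, "age": 22.0, "fare": 7.25},
           {"ticket_id": 102, "age": 38.0, "fare": 71.28},
           {"ticket_id": 103, "age": 26.0, "fare": 7.92}]

# Filtering: an identifier column carries no predictive signal, so drop it.
filtered = [{k: v for k, v in r.items() if k != "ticket_id"} for r in records]

# Normalization: bring "age" onto a common [0, 1] scale for the ML model.
ages = min_max([r["age"] for r in filtered])
print(ages)  # [0.0, 1.0, 0.25]
```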