• Title/Summary/Keyword: Training Data Set


MOTIF BASED PROTEIN FUNCTION ANALYSIS USING DATA MINING

  • Lee, Bum-Ju;Lee, Heon-Gyu;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.812-815
    • /
    • 2006
  • Proteins are essential agents for controlling, effecting, and modulating cellular functions; proteins with similar sequences have diverged from a common ancestral gene and have similar structures and functions. Function prediction of unknown proteins remains one of the most challenging problems in bioinformatics. Recently, various computational approaches have been developed for identifying short sequences that are conserved within a family of closely related protein sequences. Protein function is often correlated with highly conserved motifs. A motif is the smallest unit of protein structure and function, forming a core part of a protein's structural and functional components. Therefore, prediction methods using data mining or machine learning have been developed. In this paper, we describe an approach to protein function prediction with motif-based models using data mining. Our work consists of three phases. We build training and test data sets and construct a classifier using the training set. Finally, through experiments, we compare our classifier with other classifiers in terms of classification accuracy.
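The workflow above (motif features, train/test split, classifier, accuracy comparison) can be sketched minimally. The motif-count features, labels, and the nearest-centroid classifier below are illustrative stand-ins, not the paper's actual data-mining method:

```python
# Toy sketch of a motif-based classification workflow: proteins are
# represented by motif-occurrence feature vectors, split into training and
# test sets, and classified. Features and classifier are illustrative only.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_fit(train):
    """train: list of (feature_vector, label) -> {label: centroid}."""
    by_label = {}
    for x, y in train:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Hypothetical motif-count features: [motif_A, motif_B, motif_C]
data = [
    ([5, 0, 1], "kinase"), ([4, 1, 0], "kinase"), ([6, 0, 0], "kinase"),
    ([0, 4, 2], "protease"), ([1, 5, 1], "protease"), ([0, 6, 2], "protease"),
]
train, test = data[:2] + data[3:5], [data[2], data[5]]
model = nearest_centroid_fit(train)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
```

Any classifier (decision tree, SVM, etc.) could replace the nearest-centroid step; the paper's evaluation compares such alternatives on the same held-out test set.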


Estimating pile setup parameter using XGBoost-based optimized models

  • Xigang Du;Ximeng Ma;Chenxi Dong;Mehrdad Sattari Nikkhoo
    • Geomechanics and Engineering
    • /
    • v.36 no.3
    • /
    • pp.259-276
    • /
    • 2024
  • The undrained shear strength is widely acknowledged as a fundamental mechanical property of soil and is considered a critical engineering parameter. In recent years, researchers have employed various methodologies to evaluate the shear strength of soil under undrained conditions. These methods encompass both numerical analyses and empirical techniques, such as the cone penetration test (CPT), to gain insights into the properties and behavior of soil. However, several of these methods rely on correlation assumptions, which can lead to inconsistent accuracy and precision. This study developed innovative methods using extreme gradient boosting (XGB) to predict the pile set-up component "A" based on two distinct data sets. The first data set includes average modified cone point bearing capacity (qt), average wall friction (fs), and effective vertical stress (σvo), while the second comprises plasticity index (PI), soil undrained shear cohesion (Su), and the overconsolidation ratio (OCR). These data sets were used to develop XGBoost-based methods for predicting the pile set-up component "A". To optimize the internal hyperparameters of the XGBoost model, four optimization algorithms were employed: Particle Swarm Optimization (PSO), Social Spider Optimization (SSO), Arithmetic Optimization Algorithm (AOA), and Sine Cosine Optimization Algorithm (SCOA). The results from the first data set indicate that the XGBoost model optimized using the Arithmetic Optimization Algorithm (XGB-AOA) achieved the highest accuracy, with R2 values of 0.9962 for the training part and 0.9807 for the testing part. The performance of the developed models was further evaluated using the RMSE, MAE, and VAF indices.
The results revealed that the XGBoost model optimized using AOA (XGB-AOA) outperformed the other models in terms of accuracy, with RMSE, MAE, and VAF values of 0.0078, 0.0015, and 99.6189 for the training part and 0.0141, 0.0112, and 98.0394 for the testing part, respectively. These findings suggest that XGB-AOA is the most accurate model for predicting the pile set-up component.
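The models above are scored with RMSE, MAE, and VAF. A minimal sketch of those three indices, assuming the common variance-accounted-for definition of VAF (the paper's exact formula is not stated in the abstract):

```python
# Minimal implementations of the three error indices used above.
# VAF is assumed to be 100 * (1 - Var(y - p) / Var(y)), a common definition.

def rmse(y, p):
    return (sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)) ** 0.5

def mae(y, p):
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def vaf(y, p):
    residuals = [a - b for a, b in zip(y, p)]
    return 100.0 * (1.0 - variance(residuals) / variance(y))

y_true = [1.0, 2.0, 3.0, 4.0]   # toy targets, not the study's pile data
y_pred = [1.1, 1.9, 3.2, 3.8]
```

A lower RMSE/MAE and a VAF closer to 100 indicate a better fit, which is how the XGB-AOA variant is ranked above.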

A Study on the Insolvency Prediction Model for Korean Shipping Companies

  • Myoung-Hee Kim
    • Journal of Navigation and Port Research
    • /
    • v.48 no.2
    • /
    • pp.109-115
    • /
    • 2024
  • To develop a shipping company insolvency prediction model, we sampled shipping companies that closed between 2005 and 2023. For each closed company, a normal company of similar asset size was selected as a paired sample. In this way, data for a total of 82 companies, 42 closed and 42 general, were obtained. These data were randomly divided into a training set (2/3 of the data) and a testing set (1/3 of the data). The training data were used to develop the model, while the test data were used to measure its accuracy. In this study, a prediction model for Korean shipping insolvency was developed using financial ratio variables frequently used in previous studies. First, using the LASSO technique, the 24 independent variables were reduced to 9 main variables. Next, we coded insolvent companies as 1 and normal companies as 0 and fitted logistic regression, LDA, and QDA models. As a result, the accuracy of the prediction model was 82.14% for the QDA model, 78.57% for the logistic regression model, and 75.00% for the LDA model. In addition, the variables 'Current ratio', 'Interest expenses to sales', 'Total assets turnover', and 'Operating income to sales' were found to be the major variables affecting corporate insolvency.
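The final modeling step above (insolvent = 1, normal = 0, fit a classifier, measure holdout accuracy) can be sketched with a minimal logistic regression trained by gradient descent. The synthetic two-feature data below stand in for the study's 9 LASSO-selected financial ratios:

```python
import numpy as np

# Minimal logistic-regression sketch of the insolvency-modeling step:
# insolvent firms are labeled 1, normal firms 0, and a logistic model is
# fitted by gradient descent. Data and features are illustrative only,
# not the study's financial ratios.

rng = np.random.default_rng(0)
n = 60
# Hypothetical standardized ratios: insolvent firms shifted downward.
X_normal = rng.normal(loc=+1.0, size=(n, 2))
X_insolvent = rng.normal(loc=-1.0, size=(n, 2))
X = np.vstack([X_normal, X_insolvent])
y = np.concatenate([np.zeros(n), np.ones(n)])

Xb = np.hstack([X, np.ones((2 * n, 1))])    # add intercept column
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)      # gradient of the log-loss

pred = (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(float)
accuracy = (pred == y).mean()
```

In the study itself, LDA and QDA are fitted to the same coded labels and the three models are ranked by test-set accuracy.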

Characterization of Korean Archaeological Artifacts by Neutron Activation Analysis (II). Multivariate Classification of Korean Ancient Glass Pieces (중성자 방사화분석에 의한 한국산 고고학적 유물의 특성화 연구 (II). 다변량 해석법에 의한 고대 유리제품의 분류 연구)

  • Chul Lee;Oh Cheun Kwun;Ihn Chong Lee;Nak Bae Kim
    • Journal of the Korean Chemical Society
    • /
    • v.31 no.6
    • /
    • pp.567-575
    • /
    • 1987
  • Forty-five ancient Korean glass pieces have been analyzed for 19 elements (Ag, As, Br, Ce, Co, Cr, Eu, Fe, Hf, K, La, Lu, Na, Ru, Sb, Sc, Sm, Th, and Zn) by instrumental neutron activation analysis, and for Pb by atomic absorption spectrometry. The multivariate data have been analyzed for relations among elemental contents through the variance-covariance matrix. The data have been further analyzed by a principal component mapping method. As a result, a training set of 5 classes has been chosen, based on the spread of sample points in an eigenvector plot and on archaeological data. The 5 training sets, consisting of 36 specimens, and a test set consisting of 9 specimens have finally been analyzed for assignment to particular classes or as outliers through statistical isolinear multiple component analysis (SIMCA). The results show that all specimens in the 5 training sets and 3 specimens in the test set are assigned appropriately, in accord with the results of principal component mapping.
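The principal component mapping used above can be sketched in a few lines: center the element-content matrix, eigendecompose its covariance, and project samples onto the leading eigenvectors. The data below are toy values, not the paper's 19-element measurements:

```python
import numpy as np

# Minimal principal-component-mapping sketch: center the data matrix,
# eigendecompose its covariance, and project samples onto the top-k
# eigenvectors for plotting and class selection. Toy data only.

def pca_project(X, k=2):
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # largest variance first
    components = eigvecs[:, order[:k]]
    return Xc @ components, eigvals[order]

rng = np.random.default_rng(1)
# 45 toy "glass pieces" x 4 "element contents", with correlated columns.
base = rng.normal(size=(45, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 2))
               + 0.1 * rng.normal(size=(45, 2))])
scores, variances = pca_project(X, k=2)
```

In the paper, the spread of sample points in exactly such an eigenvector plot, together with archaeological evidence, is what motivates the choice of 5 training classes.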


Prediction of rebound in shotcrete using deep bi-directional LSTM

  • Suzen, Ahmet A.;Cakiroglu, Melda A.
    • Computers and Concrete
    • /
    • v.24 no.6
    • /
    • pp.555-560
    • /
    • 2019
  • During the application of shotcrete, part of the concrete bounces back after hitting the surface, the reinforcement, or previously sprayed concrete. This rebound material cannot be added back to the mixture and is considered waste. In this study, a deep neural network model was developed to predict the rebound material during shotcrete application. The factors affecting rebound and the datasets of these parameters were obtained from previous experiments. The Long Short-Term Memory (LSTM) architecture of the proposed deep neural network model was used in accordance with this data set. In the development of the proposed four-tier prediction model, the dataset was divided into 90% training and 10% test. The deep neural network was modeled with 11 dependent variables and 1 independent variable, with the most appropriate hyperparameter values determined for prediction. The accuracy and error performance of the LSTM model were evaluated using MSE and RMSE. A success rate of 93.2% was achieved at the end of training and 85.6% in the test, a difference of 7.6%. In the following stage, the aim is to increase the success rate of the model by enlarging the data set with synthetic and experimental data. In addition, prediction of the amount of rebound during dry-mix shotcrete application is expected to provide economic gains as well as contribute to environmental protection.
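The bi-directional LSTM mechanism behind the model above can be sketched as a single numpy LSTM cell run over the input sequence forwards and backwards, with the two final hidden states concatenated. Weights are random; this illustrates the mechanism only, not the paper's trained four-tier model:

```python
import numpy as np

# Minimal bi-directional LSTM forward pass: one LSTM cell applied to the
# sequence in both directions, final hidden states concatenated.
# Random weights; illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_pass(seq, W, U, b, hidden):
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        z = W @ x + U @ h + b                 # all four gates in one product
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                     # cell-state update
        h = o * np.tanh(c)                    # hidden-state update
    return h

rng = np.random.default_rng(0)
features, hidden, steps = 11, 8, 5            # 11 inputs, as in the study
W = rng.normal(scale=0.1, size=(4 * hidden, features))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
seq = rng.normal(size=(steps, features))

h_fwd = lstm_pass(seq, W, U, b, hidden)
h_bwd = lstm_pass(seq[::-1], W, U, b, hidden)
h_bi = np.concatenate([h_fwd, h_bwd])         # bi-directional representation
```

A final dense layer on `h_bi` would produce the rebound estimate; in practice such a model is trained with a framework such as Keras rather than by hand.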

Comparison of Machine Learning-Based Radioisotope Identifiers for Plastic Scintillation Detector

  • Jeon, Byoungil;Kim, Jongyul;Yu, Yonggyun;Moon, Myungkook
    • Journal of Radiation Protection and Research
    • /
    • v.46 no.4
    • /
    • pp.204-212
    • /
    • 2021
  • Background: Identification of radioisotopes with plastic scintillation detectors is challenging because their spectra have poor energy resolution and lack photopeaks. To overcome this weakness, many researchers have conducted radioisotope identification studies using machine learning algorithms; however, the effect of data normalization on radioisotope identification has not been addressed yet. Furthermore, studies on machine learning-based radioisotope identifiers for plastic scintillation detectors are limited. Materials and Methods: In this study, machine learning-based radioisotope identifiers were implemented, and their performance under different data normalization methods was compared. Eight classes of radioisotopes, consisting of combinations of 22Na, 60Co, and 137Cs plus the background, were defined. The training set was generated by a random sampling technique based on probability density functions acquired through experiments and simulations, and the test set was acquired by experiments. A support vector machine (SVM), an artificial neural network (ANN), and a convolutional neural network (CNN) were implemented as radioisotope identifiers with six data normalization methods and trained on the generated training set. Results and Discussion: The implemented identifiers were evaluated on test sets acquired by experiments with and without gain shifts to confirm their robustness against the gain shift effect. Among the three machine learning-based radioisotope identifiers, prediction accuracy followed the order SVM > ANN > CNN, while training time followed the order CNN > ANN > SVM. Conclusion: The prediction accuracy for the combined test sets was highest with the SVM. The CNN exhibited the smallest variation in prediction accuracy across classes, even though it had the lowest prediction accuracy for the combined test sets among the three identifiers. The SVM exhibited the highest prediction accuracy for the combined test sets, and its training time was the shortest among the three identifiers.
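The abstract compares six data normalization methods without naming them. Three common spectrum normalizations (peak, area, and L2), shown below as illustrative examples and not as the paper's actual six methods, convey the kind of preprocessing being compared:

```python
import numpy as np

# Three common spectrum normalizations, as illustrative examples of the
# preprocessing compared above. The paper's six methods are not named in
# the abstract, so these are assumptions.

def normalize_max(spectrum):
    return spectrum / spectrum.max()            # peak channel -> 1

def normalize_area(spectrum):
    return spectrum / spectrum.sum()            # total counts -> 1

def normalize_l2(spectrum):
    return spectrum / np.linalg.norm(spectrum)  # unit Euclidean norm

spectrum = np.array([0.0, 3.0, 9.0, 4.0, 2.0])  # toy 5-channel spectrum
```

Because gain shifts stretch the spectrum along the energy axis, the choice of normalization affects how robust a trained identifier is to such shifts, which is why the identifiers are evaluated with and without gain shifts.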

Classification of Korean Ancient Glass Pieces by Pattern Recognition Method (패턴인지법에 의한 한국산 고대 유리제품의 분류)

  • Lee Chul;Czae Myung-Zoon;Kim Seungwon;Kang Hyung Tae;Lee Jong Du
    • Journal of the Korean Chemical Society
    • /
    • v.36 no.1
    • /
    • pp.113-124
    • /
    • 1992
  • The pattern recognition methods of chemometrics have been applied to multivariate data obtained by determining 12 elements, via neutron activation analysis, in ninety-four Korean ancient glass pieces. Principal component analysis and non-linear mapping were used as unsupervised learning methods. As a result, the glass samples were classified into 6 classes. SIMCA (statistical isolinear multiple component analysis), adopted as a supervised learning method, was then applied to the 6 training sets and the test set. The results for the 6 training sets were in accord with those of principal component analysis and non-linear mapping. For the test set, 17 of 33 samples were each allocated to one of the 6 training sets.
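The SIMCA step above, fitting a separate principal-component model per class and assigning each test sample to the class with the smallest reconstruction residual (or flagging it as unassigned when every residual is large), can be sketched as follows. The data, dimensions, and threshold are toy values, not the paper's:

```python
import numpy as np

# SIMCA-style sketch: one principal-component model per class; a sample is
# assigned to the class whose subspace reconstructs it best, or left as an
# outlier if no residual falls below the threshold. Toy data only.

def fit_class_model(X, k=1):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                         # class mean + top-k loadings

def residual(model, x):
    mean, V = model
    xc = x - mean
    return np.linalg.norm(xc - V.T @ (V @ xc))  # distance to class subspace

rng = np.random.default_rng(2)
classes = {
    "A": rng.normal(scale=0.1, size=(12, 3)) + [1, 0, 0],
    "B": rng.normal(scale=0.1, size=(12, 3)) + [0, 1, 0],
}
models = {name: fit_class_model(X) for name, X in classes.items()}

def classify(x, threshold=0.5):
    dists = {name: residual(m, x) for name, m in models.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= threshold else "outlier"
```

The 17-of-33 allocation reported above corresponds to test samples whose residual fell below the class-membership threshold for exactly one of the 6 class models.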


SVM-Based Incremental Learning Algorithm for Large-Scale Data Stream in Cloud Computing

  • Wang, Ning;Yang, Yang;Feng, Liyuan;Mi, Zhenqiang;Meng, Kun;Ji, Qing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.10
    • /
    • pp.3378-3393
    • /
    • 2014
  • We have witnessed the rapid development of information technology in recent years. One of the key phenomena is the fast, near-exponential increase of data. Consequently, most traditional data classification methods fail to meet the dynamic and real-time demands of today's data processing and analysis, especially for continuous data streams. This paper proposes an improved incremental learning algorithm for large-scale data streams, based on the SVM (Support Vector Machine) and named DS-IILS. DS-IILS takes the load condition of the entire system and node performance into consideration to improve efficiency. A threshold on the distance to the optimal separating hyperplane is defined in the DS-IILS algorithm. The samples of the history sample set and the incremental sample set that fall within this threshold are all reserved, and these reserved samples are treated as the training sample set. To design a more accurate classifier, the effects of the data volumes of the history sample set and the incremental sample set are handled by weighted processing. Finally, the algorithm is implemented in a cloud computing system and applied to the study of user behaviors. Experimental results are provided and compared with other incremental learning algorithms. The results show that DS-IILS improves training efficiency while guaranteeing relatively high classification accuracy, which is consistent with the theoretical analysis.
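The sample-reservation rule described above can be sketched directly: given the current hyperplane (w, b), keep only the history and incremental samples whose distance to the hyperplane is within the threshold. The hyperplane and threshold below are toy values, not a trained SVM:

```python
import numpy as np

# Sketch of the DS-IILS reservation idea: samples near the separating
# hyperplane (within a distance threshold) are kept from both the history
# and incremental sets and reused as the next training set. Toy values only.

def reserve(samples, w, b, threshold):
    w = np.asarray(w, dtype=float)
    dists = np.abs(samples @ w + b) / np.linalg.norm(w)
    return samples[dists <= threshold]

history = np.array([[0.2, 0.1], [3.0, 3.0], [-0.1, 0.3]])
incremental = np.array([[0.0, -0.2], [-4.0, -4.0]])
w, b, threshold = [1.0, 1.0], 0.0, 1.0

reserved = np.vstack([reserve(history, w, b, threshold),
                      reserve(incremental, w, b, threshold)])
```

Samples far from the hyperplane contribute little to the next decision boundary, so discarding them keeps the retraining set small as the stream grows; the weighting of history versus incremental volumes mentioned above is omitted here.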

An Improved Deep Learning Method for Animal Images (동물 이미지를 위한 향상된 딥러닝 학습)

  • Wang, Guangxing;Shin, Seong-Yoon;Shin, Kwang-Weong;Lee, Hyun-Chang
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.123-124
    • /
    • 2019
  • This paper proposes an improved deep learning method based on small data sets for animal image classification. First, we use a CNN to build a training model for the small data set and use data augmentation to expand the data samples of the training set. Second, using a network pre-trained on a large-scale dataset, such as VGG16, the bottleneck features of the small dataset are extracted and stored in two NumPy files as new training and test datasets. Finally, a fully connected network is trained on the new datasets. In this paper, we use the well-known Kaggle Dogs vs. Cats dataset, a two-category classification dataset, as the experimental dataset.
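The feature-storage step above (bottleneck features saved in two NumPy files, then reloaded to train the fully connected head) can be sketched as follows, with random arrays standing in for VGG16 outputs:

```python
import numpy as np
import tempfile, os

# Sketch of the bottleneck-feature storage step: features (random arrays
# standing in for VGG16 outputs) are saved to two .npy files, one for
# training and one for test, and reloaded for the classifier head.

rng = np.random.default_rng(0)
train_features = rng.normal(size=(20, 512))   # 20 images x 512-dim features
test_features = rng.normal(size=(5, 512))

outdir = tempfile.mkdtemp()
np.save(os.path.join(outdir, "bottleneck_train.npy"), train_features)
np.save(os.path.join(outdir, "bottleneck_test.npy"), test_features)

reloaded_train = np.load(os.path.join(outdir, "bottleneck_train.npy"))
reloaded_test = np.load(os.path.join(outdir, "bottleneck_test.npy"))
```

Caching features this way means the expensive pre-trained convolutional base runs over each image only once, and the small fully connected network can then be trained cheaply on the cached arrays.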


A Study of Optimal Ratio of Data Partition for Neuro-Fuzzy-Based Software Reliability Prediction (뉴로-퍼지 소프트웨어 신뢰성 예측에 대한 최적의 데이터 분할비율에 관한 연구)

  • Lee, Sang-Un
    • The KIPS Transactions:PartD
    • /
    • v.8D no.2
    • /
    • pp.175-180
    • /
    • 2001
  • This paper investigates the optimal fraction of the validation set for accurately predicting future software failure counts or failure times with a neuro-fuzzy system. Given a fixed amount of training data, the most popular and effective approach to avoiding underfitting and overfitting is early stopping, which yields good generalization. But a practical issue remains unresolved: how many data points should be assigned to the training and validation sets? Rules of thumb abound; in practice the split is found by trial and error, which is time-consuming. To determine the optimal fraction, various fractions for the validation set were examined. The results show that a minimal fraction of the validation data set is sufficient to achieve good next-step prediction. This result can serve as a practical guideline for software reliability prediction with a neuro-fuzzy system.
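The data-partition question above can be sketched as a split function that holds out a given validation fraction, as would precede early stopping. Holding out the tail of the sequence (since failure data are ordered in time) is an assumption made here for illustration:

```python
# Sketch of the train/validation partition examined above: for a candidate
# validation fraction, split a fixed ordered failure-data sequence into a
# training part and a held-out validation part. The tail-holdout rule is an
# illustrative assumption.

def partition(data, val_fraction):
    """Hold out the last val_fraction of an ordered failure-data sequence."""
    n_val = max(1, int(round(len(data) * val_fraction)))
    return data[:-n_val], data[-n_val:]

failure_counts = list(range(1, 21))           # toy cumulative failure data
splits = {frac: partition(failure_counts, frac) for frac in (0.1, 0.2, 0.3)}
```

The study's experiment amounts to sweeping such fractions, training with early stopping on each split, and comparing next-step prediction accuracy; its finding is that a small validation fraction already suffices.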
