• Title/Summary/Keyword: ML classifier


Cognitive Impairment Prediction Model Using AutoML and Lifelog

  • Hyunchul Choi;Chiho Yoon;Sae Bom Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.53-63 / 2023
  • This study developed a cognitive impairment prediction model, intended as a screening test for preventing dementia in the elderly, using Automated Machine Learning (AutoML). We used the 'Wearable lifelog data for high-risk dementia patients' dataset of the National Information Society Agency and carried out the analysis with PyCaret 3.0.0 in the Google Colaboratory environment. The analysis proceeded in two steps: first, five models with excellent classification performance were selected for model development and lifelog data analysis; next, ensemble learning was used to integrate these models and assess their performance. The Voting Classifier, Gradient Boosting Classifier, Extreme Gradient Boosting, Light Gradient Boosting Machine, Extra Trees Classifier, and Random Forest Classifier models showed high predictive performance, in that order. The findings further highlight 'Average respiration per minute during sleep' and 'Average heart rate per minute during sleep' as the most critical feature variables for accurate prediction. Overall, the results suggest that machine learning combined with lifelog data can be a means to more effectively manage and prevent cognitive impairment in the elderly.
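
As a rough illustration of the workflow this abstract describes, a minimal PyCaret 3.x sketch might look as follows; the CSV file name and the `impaired` target column are hypothetical stand-ins for the wearable lifelog dataset, and `blend_models` plays the role of the voting ensemble.

```python
# Minimal sketch, assuming a dataframe of lifelog features and a binary target column.
import pandas as pd
from pycaret.classification import setup, compare_models, blend_models, predict_model

df = pd.read_csv("lifelog.csv")                         # hypothetical lifelog feature table

s = setup(data=df, target="impaired", session_id=42)   # initialize the experiment
top5 = compare_models(n_select=5)                       # keep the five best-performing models
voter = blend_models(estimator_list=top5)               # combine them in a voting ensemble
holdout = predict_model(voter)                          # score the ensemble on the hold-out split
```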

A Minimum-Error-Rate Training Algorithm for Pattern Classifiers and Its Application to the Predictive Neural Network Models (패턴분류기를 위한 최소오차율 학습알고리즘과 예측신경회로망모델에의 적용)

  • 나경민;임재열;안수길
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.12 / pp.108-115 / 1994
  • Most pattern classifiers have been designed based on the ML (Maximum Likelihood) training algorithm, which is simple and relatively powerful. ML training is an efficient algorithm for estimating the model parameters of each class individually, under the assumption that all class models in a classifier are statistically independent. That assumption, however, is not valid in many real situations, which degrades the performance of the classifier. In this paper, we propose a minimum-error-rate training algorithm based on the MAP (Maximum a Posteriori) approach. The algorithm regards the normalized outputs of the classifier as estimates of the a posteriori probabilities and tries to maximize those estimates. According to Bayes decision theory, the proposed algorithm satisfies the condition of minimum-error-rate classification. We apply this algorithm to the NPM (Neural Prediction Model) for speech recognition and derive new discriminative training algorithms. Experiments on the recognition of ten Korean digits showed a 37.5% reduction in the number of recognition errors.
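
The core idea, treating the normalized classifier outputs as posterior estimates and maximizing them for the correct class, can be conveyed with a toy sketch; this is a generic softmax illustration, not the paper's NPM derivation.

```python
# Toy sketch: gradient ascent on the estimated log-posterior of the correct class,
# so all class parameters are trained jointly (discriminatively) rather than
# estimated independently per class as in plain ML training.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def minimum_error_rate_train(X, y, n_classes, lr=0.1, epochs=200):
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)                        # posterior estimates from normalized outputs
        W += lr * X.T @ (Y - P) / len(X)          # gradient of the mean log-posterior
    return W

# usage: W = minimum_error_rate_train(X_train, y_train, n_classes=10)
#        predictions = (X_test @ W).argmax(axis=1)
```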


Academic Registration Text Classification Using Machine Learning

  • Alhawas, Mohammed S;Almurayziq, Tariq S
    • International Journal of Computer Science & Network Security / v.22 no.1 / pp.93-96 / 2022
  • Natural language processing (NLP) is used to understand natural text. Text analysis systems use natural language algorithms to find the meaning of large amounts of text. Text classification is a basic NLP task with a wide range of applications such as topic labeling, sentiment analysis, spam detection, and intent detection; such algorithms can transform a user's unstructured thoughts into more structured data. In this work, a text classifier was developed that takes academic admission and registration texts as input, analyzes their content, and automatically assigns relevant tags such as admission, graduate school, and registration. The well-known support vector machine (SVM) and K-nearest neighbor (kNN) algorithms were used to build the classifier. The results showed that the SVM classifier outperformed the kNN classifier, with an overall accuracy of 98.9%. In addition, the mean absolute error of the SVM was 0.0064, versus 0.0098 for the kNN classifier. Based on these results, the SVM was used to implement the academic text classification in this work.
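
A hedged scikit-learn sketch of this kind of comparison is shown below; the example texts and tags are invented placeholders, not the dataset used in the paper.

```python
# TF-IDF features feeding an SVM and a kNN classifier, as a minimal comparison sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

texts = ["how to apply for admission", "admission requirements for freshmen",
         "graduate school thesis defense schedule", "graduate school scholarship forms",
         "course registration opens monday", "late registration fee policy"]
labels = ["admission", "admission", "graduate school", "graduate school",
          "registration", "registration"]

svm = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)
knn = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3)).fit(texts, labels)

print(svm.predict(["when does course registration close"]))
print(knn.predict(["when does course registration close"]))
```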

Fast Conditional Independence-based Bayesian Classifier

  • Junior, Estevam R. Hruschka;Galvao, Sebastian D. C. de O.
    • Journal of Computing Science and Engineering / v.1 no.2 / pp.162-176 / 2007
  • Machine Learning (ML) has become very popular within Data Mining (KDD) and Artificial Intelligence (AI) research and their applications. In the ML and KDD contexts, two main approaches can be used for inducing a Bayesian Network (BN) from data: Conditional Independence (CI) and Heuristic Search (HS). When a BN is induced for classification purposes (a Bayesian Classifier, BC), it is possible to impose specific constraints aimed at increasing computational efficiency. In this paper a new CI-based approach for inducing BCs from data is proposed and two algorithms are presented. The approach uses the Markov Blanket concept to impose constraints and optimize the traditional PC learning algorithm. Experiments performed on the ALARM domain, as well as six other UCI domains and three artificial domains, revealed that the proposed approach tends to execute fewer comparison tests than the traditional PC algorithm. The experiments also show that the proposed algorithms produce competitive classification rates compared with both PC and Naive Bayes.
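
A greatly simplified sketch of the CI idea is given below: attributes whose chi-square test shows no dependence on the class are pruned before a Bayesian classifier is trained. This only illustrates marginal independence testing; the paper's algorithms additionally exploit the Markov blanket to constrain conditional independence tests within the PC algorithm. File and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

def class_dependent_attrs(df, target, alpha=0.05):
    """Keep attributes whose independence from the class is rejected at level alpha."""
    keep = []
    for col in df.columns.drop(target):
        _, p, _, _ = chi2_contingency(pd.crosstab(df[col], df[target]))
        if p < alpha:
            keep.append(col)
    return keep

df = pd.read_csv("alarm_discretized.csv")        # hypothetical discretized dataset
attrs = class_dependent_attrs(df, target="class")
X = OrdinalEncoder().fit_transform(df[attrs])    # encode categories as 0..n_i-1
clf = CategoricalNB().fit(X, df["class"])
```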

Implementation of ML Algorithm for Mung Bean Classification using Smart Phone

  • Almutairi, Mubarak;Mutiullah, Mutiullah;Munir, Kashif;Hashmi, Shadab Alam
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.89-96 / 2021
  • This work extends our earlier study, which presented a robust and economically efficient method for discriminating four mung-bean varieties [1] based on quantitative parameters. With the advancement of technology, users try to solve their daily-life problems with smartphones, but smartphones remain limited in computing power and memory. Hence, there is a need to find the best classifier for the mung-bean varieties that uses the features already suggested in the previous work while keeping memory requirements and computational cost to a minimum. To that end, we experimented with various supervised classifiers of simple architecture on the 10 most relevant features selected by the Fisher coefficient, probability of error, mutual information, and wavelet features. After the analysis, we replaced the artificial neural network and deep learning models with a classifier that gives approximately the same classification results but is more efficient in terms of resources and time complexity. This classifier is easily implemented in the smartphone environment.
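
A hedged sketch of such a lightweight pipeline is shown below: features are ranked with a Fisher-style score, the ten best are kept, and a small decision tree (cheap enough to run on a phone) is trained on them. The data here are synthetic placeholders for the bean measurements.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fisher_scores(X, y):
    """Between-class variance over within-class variance, per feature."""
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))                   # placeholder for quantitative bean features
y = rng.integers(0, 4, size=200)                 # four varieties

top10 = np.argsort(fisher_scores(X, y))[-10:]    # indices of the 10 most discriminative features
clf = DecisionTreeClassifier(max_depth=5).fit(X[:, top10], y)
```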

TinyML Gamma Radiation Classifier

  • Moez Altayeb;Marco Zennaro;Ermanno Pietrosemoli
    • Nuclear Engineering and Technology / v.55 no.2 / pp.443-451 / 2023
  • Machine Learning has introduced many solutions in data science, but its application in IoT faces significant challenges due to the limited memory size and processing capability of constrained devices. In this paper we design an automatic gamma radiation detection and identification embedded system that exploits the power of TinyML on a SiPM micro radiation sensor, leveraging the Edge Impulse platform. The model is trained using real gamma-source data enhanced by software augmentation algorithms. Tests show high accuracy in real-time processing. This design has promising applications in general-purpose radiation detection and identification, nuclear safety, and medical diagnosis, and it is also amenable to deployment in small satellites.
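
For illustration only (the paper builds its model with the Edge Impulse platform rather than hand-written code), a tiny Keras classifier quantized to TensorFlow Lite conveys the TinyML deployment idea; input size, class count, and data below are hypothetical.

```python
import numpy as np
import tensorflow as tf

n_bins, n_sources = 128, 4                        # hypothetical spectrum bins / gamma sources
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_bins,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_sources, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for the augmented gamma-source training data
X = np.random.rand(1000, n_bins).astype("float32")
y = np.random.randint(0, n_sources, size=1000)
model.fit(X, y, epochs=3, verbose=0)

# Quantize the trained model so it fits a constrained device
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("gamma_classifier.tflite", "wb").write(converter.convert())
```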

Predictive Analysis of Financial Fraud Detection using Azure and Spark ML

  • Priyanka Purushu;Niklas Melcher;Bhagyashree Bhagwat;Jongwook Woo
    • Asia Pacific Journal of Information Systems / v.28 no.4 / pp.308-319 / 2018
  • This paper aims to provide valuable insights on financial fraud detection for mobile money transaction activity. We predicted and classified transactions as normal or fraudulent with a small sample and with a massive data set, using Azure (a traditional system) and Spark ML (a Big Data platform), respectively. Experimenting with the sample dataset in Azure, we found the Decision Forest model to be the most accurate in terms of recall. For the massive data set processed with Spark ML, the Random Forest classifier proved to be the best algorithm. We show that the Spark cluster builds and evaluates models much faster as more servers are added, with no loss of accuracy, demonstrating that large-scale data sets can be handled on a Big Data platform. Finally, we reached a recall score of 0.73, which implies satisfactory quality in predicting fraudulent transactions.
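
A minimal PySpark sketch of the Big Data side of this comparison is given below; the CSV path and column names are hypothetical stand-ins for the mobile-money transaction data.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("fraud-detection").getOrCreate()
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

assembler = VectorAssembler(inputCols=["amount", "old_balance", "new_balance"],
                            outputCol="features")
data = (assembler.transform(df)
        .select(col("features"), col("is_fraud").cast("double").alias("label")))

train, test = data.randomSplit([0.8, 0.2], seed=7)
model = RandomForestClassifier(numTrees=50).fit(train)        # default labelCol="label"
auc = BinaryClassificationEvaluator().evaluate(model.transform(test))
print(f"area under ROC = {auc:.3f}")
```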

Design of RBF Neural Networks Based on Recursive Weighted Least Square Estimation for Processing Massive Meteorological Radar Data and Its Application (방대한 기상 레이더 데이터의 원할한 처리를 위한 순환 가중최소자승법 기반 RBF 뉴럴 네트워크 설계 및 응용)

  • Kang, Jeon-Seong;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.1 / pp.99-106 / 2015
  • In this study, we propose a Radial Basis Function Neural Network (RBFNN) using Recursive Weighted Least Square Estimation (RWLSE) to deal effectively with big-data-scale meteorological radar data. In the condition part of the RBFNN, Fuzzy C-Means (FCM) clustering is used to obtain fitness values that reflect the characteristics of the input data, and the connection weights in the conclusion part are defined as linear polynomial functions. The coefficients of these polynomial functions are estimated with RWLSE in order to cope with big data: as a recursive learning technique based on WLSE, RWLSE processes the data efficiently. The proposed classifier is evaluated on both widely used Machine Learning (ML) datasets and big data obtained from a meteorological radar. The meteorological radar data consist of precipitation echoes and non-precipitation echoes, and the proposed classifier is used to classify these echoes efficiently.
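
The recursive estimation step can be illustrated with a small numpy sketch of exponentially weighted recursive least squares; this covers only the RWLSE update for the linear output weights, not the FCM-based condition part, and uses synthetic data in place of the radar-echo features.

```python
import numpy as np

def rwlse_update(w, P, x, d, lam=0.99):
    """One recursive weighted least-squares step for feature vector x and target d."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    w = w + k * (d - x @ w)            # correct the weights with the prediction error
    P = (P - np.outer(k, Px)) / lam    # update the inverse-correlation matrix
    return w, P

dim = 8                                 # e.g. number of RBF basis outputs (hypothetical)
w, P = np.zeros(dim), np.eye(dim) * 1e3

rng = np.random.default_rng(0)
true_w = rng.normal(size=dim)
for _ in range(1000):                   # synthetic stream standing in for radar features
    x = rng.normal(size=dim)
    d = x @ true_w + 0.1 * rng.normal()
    w, P = rwlse_update(w, P, x, d)
```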

k-Nearest Neighbor Classifier using Local Values of k (지역적 k값을 사용한 k-Nearest Neighbor Classifier)

  • 이상훈;오경환
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.193-195 / 2003
  • In this paper, we propose a new method that optimizes the k-Nearest Neighbor (k-NN) algorithm by using locally different values of k (the number of neighbors considered). When the distribution of noise varies locally across the instance space, the optimal number of neighbor instances to consider at each point depends on the local noise distribution at that point. Existing methods, however, use a single k for the entire instance space and therefore cannot account for these local characteristics. To address the problem of locally varying noise, we propose the Local-k Nearest Neighbor algorithm (LkNN), which divides the instance space into several regions and performs k-NN with a value of k optimized for each region. The set of k values produced by LkNN represents each region of the instance space and determines the number of neighbors that instances in that region should consider. The data domains suited to the proposed algorithm and its improved performance are verified through experiments on data from the UCI ML Data Repository.
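
A hedged sketch of the local-k idea (not the paper's exact LkNN procedure) is given below: the instance space is partitioned with k-means, the best k is chosen per partition on held-out points, and a query is answered with the k of its own partition.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def fit_local_k(X, y, n_regions=4, k_grid=(1, 3, 5, 7, 9)):
    regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X)
    Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)
    region_of_val = regions.predict(Xva)
    best_k = {}
    for r in range(n_regions):
        mask = region_of_val == r
        if not mask.any():                        # no validation points fell in this region
            best_k[r] = k_grid[0]
            continue
        scores = {k: KNeighborsClassifier(n_neighbors=k).fit(Xtr, ytr)
                       .score(Xva[mask], yva[mask]) for k in k_grid}
        best_k[r] = max(scores, key=scores.get)   # k with the best local accuracy
    return regions, best_k, Xtr, ytr

def predict_local_k(x, regions, best_k, Xtr, ytr):
    r = regions.predict(x.reshape(1, -1))[0]      # which region the query falls in
    knn = KNeighborsClassifier(n_neighbors=best_k[r]).fit(Xtr, ytr)
    return knn.predict(x.reshape(1, -1))[0]
```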


A Study on Performance of ML Algorithms and Feature Extraction to detect Malware (멀웨어 검출을 위한 기계학습 알고리즘과 특징 추출에 대한 성능연구)

  • Ahn, Tae-Hyun;Park, Jae-Gyun;Kwon, Young-Man
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.1 / pp.211-216 / 2018
  • In this paper, we studied how to classify whether an unknown PE file is malware or not. In the malware detection domain, both feature extraction and the classifier matter for classification performance. We therefore investigated which features are good for a classifier and which classifier is good for the selected features, in order to find a good combination of feature and classifier for detecting malware. The experiments were conducted in two steps. In step one, we compared the accuracy obtained with Opcode features only, Win.API features only, and both combined, and found that the combined Opcode and Win.API features performed best. In step two, we compared the AUC values of the Bernoulli Naïve Bayes, K-nearest neighbor, Support Vector Machine, and Decision Tree classifiers, and found that the Decision Tree performed best.
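
The second step, comparing classifiers by AUC on the same feature matrix, can be sketched with scikit-learn as below; the binary feature matrix and labels are synthetic placeholders for the Opcode / Win.API indicators of real PE files.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 100))     # stand-in for Opcode / Win.API indicator features
y = rng.integers(0, 2, size=500)            # 1 = malware, 0 = benign (synthetic labels)

models = {
    "BernoulliNB": BernoulliNB(),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "DecisionTree": DecisionTreeClassifier(max_depth=8),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```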