• Title/Summary/Keyword: Distributed Machine Learning

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.742-756 / 2022
  • A flood of information has accompanied the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few examples of devices that generate massive amounts of data in our daily lives. Machine learning is widely used to recognize patterns in these data and to support many sectors, including healthcare, government, banking, and the military. However, the conventional machine learning model requires data owners to upload their information to one central location for model training. This classical model makes data owners worry about the risks of transferring private information, because traditional machine learning requires pushing their data to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new paradigm known as "Federated Learning". Federated learning trains artificial intelligence models over distributed clients while keeping the data owner's information private. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten images while protecting sensitive data, and compares the centralized machine learning model with federated averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but the classical model carries an additional risk, namely privacy concerns, because the data are stored in a data center. The MNIST dataset was used in this experiment.
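
The federated averaging procedure this abstract compares against centralized training can be illustrated with a minimal sketch. The NumPy code below is a generic FedAvg loop, not the authors' implementation: the logistic-regression `local_update` step and the synthetic three-client split are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of logistic-regression SGD."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on local data only
    return w

def fedavg(clients, w, rounds=10):
    """FedAvg: each round, clients train locally and the server averages
    their weights, weighted by local dataset size. Raw data never leaves
    the clients; only model weights are exchanged."""
    for _ in range(rounds):
        local_ws = [local_update(w.copy(), X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(local_ws, axis=0, weights=sizes)
    return w

# Toy example: 3 clients with synthetic MNIST-like flattened images (28*28 = 784).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(100, 784)), rng.integers(0, 2, 100)) for _ in range(3)]
w_global = fedavg(clients, np.zeros(784))
```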

Hierarchical IoT Edge Resource Allocation and Management Techniques based on Convolutional Neural Networks in Distributed AIoT Environments (분산 AIoT 환경에서 합성곱신경망 기반 계층적 IoT Edge 자원 할당 및 관리 기법)

  • Yoon-Su Jeong
    • Advanced Industrial Science / v.2 no.3 / pp.8-14 / 2023
  • The majority of IoT devices already employ AIoT; however, numerous issues still need to be resolved before AI applications can be deployed. To distribute IoT edge resources more effectively, this paper proposes a machine learning-based approach to managing IoT edge resources. The suggested method continuously improves the allocation of IoT resources by identifying IoT edge resource trends using machine learning. The optimized IoT resources use machine-learning convolution to reliably sustain the constantly changing IoT edge resources. By storing each machine learning-based IoT edge resource as a hash value alongside the resource of the previous pattern, the suggested approach effectively verifies whether a resource matches an attack pattern in a distributed AIoT context. Experimental results evaluate energy efficiency in three different test scenarios and verify the integrity of IoT edge resources to confirm that they work well in complex environments with heterogeneous computational hardware.
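
The hash-based integrity check described above can be sketched generically: chaining each resource snapshot's hash to the previous one makes tampering detectable. The snippet below is an illustrative reconstruction, not the paper's code; the snapshot fields are assumptions.

```python
import hashlib, json

def chained_hash(resource: dict, prev_hash: str) -> str:
    """Hash the current resource snapshot together with the previous
    pattern's hash, so any tampering breaks the whole chain."""
    payload = json.dumps(resource, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a chain of edge-resource snapshots (fields are illustrative).
snapshots = [
    {"node": "edge-1", "cpu": 0.42, "mem": 0.71},
    {"node": "edge-1", "cpu": 0.45, "mem": 0.69},
]
chain, prev = [], ""
for snap in snapshots:
    prev = chained_hash(snap, prev)
    chain.append(prev)

# Verification: recompute the chain and compare against the stored hashes.
prev, ok = "", True
for snap, stored in zip(snapshots, chain):
    prev = chained_hash(snap, prev)
    ok &= (prev == stored)
print("resource chain intact:", ok)
```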

Comparison of Scala and R for Machine Learning in Spark (스파크에서 스칼라와 R을 이용한 머신러닝의 비교)

  • Woo-Seok Ryu
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.1 / pp.85-90 / 2023
  • Data analysis methodology in the healthcare field is shifting from traditional statistics-oriented research methods to predictive research using machine learning. In this study, we survey various machine learning tools and compare several programming models that connect R, a statistical tool widely used in the healthcare field, to Spark for machine learning. In addition, we compare the performance of a linear regression model written in Scala, the native language of Spark, against one written in R. The experiments show that the learning execution time with SparkR increased by 10 to 20% compared to Scala. Given this modest performance degradation, SparkR's distributed processing is still useful, since R code written for traditional statistical analysis can largely be used as-is.
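
The paper benchmarks the same Spark MLlib linear regression from Scala and from SparkR. As an illustration of the pipeline being timed, here is the equivalent in PySpark, a third language binding shown only because this document's examples use Python; the file path and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("lr-benchmark").getOrCreate()

# Placeholder health dataset: numeric feature columns plus a "label" column.
df = spark.read.csv("health_data.csv", header=True, inferSchema=True)
features = [c for c in df.columns if c != "label"]
assembled = VectorAssembler(inputCols=features, outputCol="features").transform(df)

# The fit below is the step whose execution time the paper compares
# across language bindings (Scala vs. SparkR).
model = LinearRegression(featuresCol="features", labelCol="label").fit(assembled)
print("RMSE:", model.summary.rootMeanSquaredError)
```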

Privacy-Preserving in the Context of Data Mining and Deep Learning

  • Altalhi, Amjaad; AL-Saedi, Maram; Alsuwat, Hatim; Alsuwat, Emad
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.137-142 / 2021
  • Machine-learning systems have proven their worth in various industries, including healthcare and banking, by assisting in the extraction of valuable inferences. Information in these crucial sectors is traditionally stored in databases distributed across multiple environments, making accessing and extracting the data a tough job. In addition, these data sources contain sensitive information, meaning the data cannot be shared outside the owning organization. Using cryptographic techniques, Privacy-Preserving Machine Learning (PPML) helps solve this challenge, enabling information discovery while maintaining data privacy. In this paper, we discuss privacy preservation in data mining, because data mining has a wide variety of uses, including business intelligence, medical diagnostic systems, image processing, web search, and scientific discovery. We also discuss privacy preservation in deep learning, because deep learning (DL) exhibits exceptional accuracy in image detection, speech recognition, and natural language processing compared to other fields of machine learning, so it can detect errors in the data as well as unauthorized access to systems and unauthorized insertion of data.
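
To give a flavor of the cryptographic techniques PPML builds on, the sketch below shows additive secret sharing, one common primitive: parties jointly compute a sum (for example, an aggregate used in training) without revealing individual values. This is a generic illustration, not a technique the paper specifically implements.

```python
import secrets

P = 2**61 - 1  # a large prime modulus for the share arithmetic

def share(value: int, n_parties: int):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each party holds one share of each input; a single share reveals nothing.
a_shares = share(42, 3)
b_shares = share(100, 3)

# Parties add their shares locally; only the recombined total is revealed.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
print(sum(sum_shares) % P)  # -> 142, computed without exposing 42 or 100
```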

A Study on Patent Literature Classification Using Distributed Representation of Technical Terms (기술용어 분산표현을 활용한 특허문헌 분류에 관한 연구)

  • Choi, Yunsoo; Choi, Sung-Pil
    • Journal of the Korean Society for Library and Information Science / v.53 no.2 / pp.179-199 / 2019
  • In this paper, we propose optimal methodologies for classifying patent literature by examining various feature extraction methods and machine learning and deep learning models, and we demonstrate optimal performance through experiments. As feature extraction, we compared the traditional BoW method and a distributed representation method (word embedding vectors), and compared morphological analysis and multi-grams as methods of constructing the document collection. In addition, classification performance was verified using traditional machine learning models and a deep learning model. Experimental results show that the best performance is achieved when the deep learning model is applied with distributed representations and morphological-analysis-based feature extraction. In the Section, Class, and Subclass classification experiments, we improved performance by 5.71%, 18.84%, and 21.53%, respectively, compared with traditional classification methods.
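
The BoW-versus-embedding comparison at the heart of the paper can be sketched with standard tooling; below, both feature types feed the same classifier so that only the representation differs. The tiny corpus and the use of scikit-learn and Gensim are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative patent-like corpus with two class labels.
docs = ["semiconductor wafer etching process", "wireless antenna signal coding",
        "wafer deposition chamber control", "antenna array beam forming"]
labels = [0, 1, 0, 1]

# Representation 1: traditional bag-of-words counts.
vec = CountVectorizer().fit(docs)
clf_bow = LogisticRegression().fit(vec.transform(docs), labels)

# Representation 2: distributed representation, i.e. averaged word vectors.
tokens = [d.split() for d in docs]
w2v = Word2Vec(sentences=tokens, vector_size=50, min_count=1, seed=0)
emb = np.array([np.mean([w2v.wv[t] for t in doc], axis=0) for doc in tokens])
clf_emb = LogisticRegression().fit(emb, labels)

print(clf_bow.predict(vec.transform(["antenna signal"])))
```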

Handling Method of Imbalance Data for Machine Learning : Focused on Sampling (머신러닝을 위한 불균형 데이터 처리 방법 : 샘플링을 위주로)

  • Lee, Kyunam; Lim, Jongtae; Bok, Kyoungsoo; Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.19 no.11 / pp.567-577 / 2019
  • Recently, more and more attempts have been made to solve the problems faced by academia and industry through machine learning. Accordingly, various attempts are being made to handle non-general situations through machine learning, such as anomaly detection, fraud detection, and fault detection. In such non-normal situations the data are distributed disproportionately, which generally leads to errors. In this paper, we propose a handling method of imbalanced data for machine learning. The proposed method addresses the problem of data imbalance by verifying that the population distribution of the majority class is well preserved in the extracted sample. Performance evaluations show the proposed method to be better than the existing methods.
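
For context, the baseline such work improves on is plain random under-sampling of the majority class. A minimal sketch follows, using scikit-learn's `resample` as a convenience (an assumption, not the paper's code); the distribution check at the end is in the spirit of the paper's verification step.

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(950, 5))   # majority-class samples
X_minor = rng.normal(2.0, 1.0, size=(50, 5))    # minority-class samples

# Random under-sampling: shrink the majority class to the minority size.
X_major_down = resample(X_major, replace=False, n_samples=len(X_minor),
                        random_state=0)

# Sanity check: does the subsample still resemble the majority-class
# population? Compare per-feature means before and after sampling.
drift = np.abs(X_major_down.mean(axis=0) - X_major.mean(axis=0))
print("balanced sizes:", len(X_major_down), len(X_minor))
print("mean drift per feature:", drift.round(3))
```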

Prediction of compressive strength of sustainable concrete using machine learning tools

  • Lokesh Choudhary; Vaishali Sahu; Archanaa Dongre; Aman Garg
    • Computers and Concrete / v.33 no.2 / pp.137-145 / 2024
  • The technique of experimentally determining concrete's compressive strength for a given mix design is time-consuming and difficult. The goal of the current work is to propose the best-performing predictive model, based on machine learning algorithms such as Gradient Boosting Machine (GBM), Stacked Ensemble (SE), Distributed Random Forest (DRF), Extremely Randomized Trees (XRT), Generalized Linear Model (GLM), and Deep Learning (DL), that can forecast the compressive strength of a ternary geopolymer concrete mix without carrying out any experimental procedure. A geopolymer mix uses supplementary cementitious materials obtained as industrial by-products instead of cement. The input variables used for assessing the best machine learning algorithm include not only individual ingredient quantities, but also the molarity of the alkali activator and the age of testing. Across the myriad statistical parameters used to measure the effectiveness of the models in forecasting the compressive strength of the ternary geopolymer concrete mix, GBM performs better than all other algorithms. A sensitivity analysis carried out towards the end of the study suggests that the GBM model predicts results close to the experimental values, with an accuracy between 95.6% and 98.2% for the testing and training datasets.
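
A generic version of the winning GBM setup can be sketched as follows. The feature columns mirror the inputs the abstract lists (ingredient quantities, activator molarity, testing age), but the synthetic data, the scikit-learn implementation, and the hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 200
# Illustrative columns: binder quantities (kg/m^3), activator molarity (M), age (days).
X = np.column_stack([
    rng.uniform(300, 450, n),    # fly ash
    rng.uniform(50, 150, n),     # GGBS
    rng.uniform(0, 50, n),       # silica fume
    rng.uniform(8, 16, n),       # NaOH molarity
    rng.choice([7, 14, 28], n),  # age of testing
])
# Synthetic strength values standing in for lab measurements (MPa).
y = 0.08 * X[:, 0] + 0.1 * X[:, 1] + 1.5 * X[:, 3] + 0.3 * X[:, 4] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(gbm.score(X_te, y_te), 3))
```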

Load Fidelity Improvement of Piecewise Integrated Composite Beam by Irregular Arrangement of Reference Points (참조점의 불규칙적 배치를 통한 PIC보의 하중 충실도 향상에 관한 연구)

  • Ham, Seok Woo; Cho, Jae Ung; Cheon, Seong S.
    • Composites Research / v.32 no.5 / pp.216-221 / 2019
  • A piecewise integrated composite (PIC) beam has different stacking sequences in different regions, chosen for their superior load-resisting capabilities. The interest of the current research is to improve the bending characteristics of the PIC beam by assigning a specific stacking sequence to each region with the help of machine learning techniques. 240 elements of the FE model were chosen as reference points. A preliminary FE analysis computed triaxiality values at those regularly distributed reference points to create the training data for machine learning; the triaxiality value categorizes the type of loading, i.e., tension, compression, or shear. The machine learning model was formulated from the training data together with tuned hyperparameter values, which yielded proper load fidelity; however, regions of comparatively high nonlinearity, such as the side face of the beam, showed poor load fidelity. Therefore, an irregular distribution of reference points was prepared for the machine learning model: dense reference points where the loading changes severely, and a coarse distribution where it changes rarely. The FE model with irregularly distributed reference points showed better load fidelity than the model with regularly distributed reference points.
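
The core learning task the abstract describes, mapping a reference point's triaxiality to a loading type (tension, compression, or shear), can be sketched as a small classifier. The threshold-based synthetic labels and the k-NN model below are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

# Synthetic reference points: 2-D position plus triaxiality from a fictional FE run.
pts = rng.uniform(-1, 1, size=(240, 2))
triax = rng.uniform(-1, 1, 240)

# Illustrative labeling rule: sign/magnitude of triaxiality picks the load type.
# 0 = compression, 1 = shear, 2 = tension
labels = np.where(triax < -0.3, 0, np.where(triax > 0.3, 2, 1))

# Train on the reference points; then any element location + triaxiality
# can be queried for its loading category.
features = np.column_stack([pts, triax])
clf = KNeighborsClassifier(n_neighbors=5).fit(features, labels)
print(clf.predict([[0.1, -0.2, 0.8]]))  # -> likely tension (label 2)
```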

A DDoS attack Mitigation in IoT Communications Using Machine Learning

  • Hailye Tekleselase
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.170-178 / 2024
  • With the growth of fifth-generation networks and artificial intelligence technologies, new threats and challenges have appeared for wireless communication systems, especially in cybersecurity, and IoT networks are increasingly attractive stages for DDoS attacks due to the inherently weak security and resource-constrained nature of IoT devices. This paper focuses on detecting DDoS attacks in wireless networks by categorizing inbound network packets at the transport layer as either "abnormal" or "normal" using machine learning algorithms integrated with a knowledge-based system. Deep learning algorithms and a CNN were independently trained for mitigating DDoS attacks. The paper concentrates on misuse-based DDoS attacks, which comprise TCP SYN flood and ICMP flood. The researcher uses the CICIDS2017 and NSL-KDD datasets to train and test the algorithms (models) during the experimentation phase. The accuracy score is used to measure the classification performance of the four algorithms; the results show that an accuracy of 99.93% is recorded.
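
A minimal version of the kind of CNN packet classifier described (flow features in, binary normal/abnormal out) might look like the Keras sketch below. The input width of 78, matching the CICIDS2017 flow-feature count, the random stand-in data, and the architecture are all assumptions, not the paper's reported model.

```python
import numpy as np
import tensorflow as tf

# Placeholder flow-feature matrix: 78 numeric features per flow (CICIDS2017-like);
# labels 1 = attack (e.g., SYN flood), 0 = benign.
X = np.random.rand(1000, 78, 1).astype("float32")
y = np.random.randint(0, 2, 1000)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(78, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2)
```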

Wellness Prediction in Diabetes Mellitus Risks Via Machine Learning Classifiers

  • Saravanakumar M, Venkatesh; Sabibullah, M.
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.203-208 / 2022
  • The occurrence of Type 2 Diabetes Mellitus (T2DM) is rising globally. Diabetes mellitus of all kinds is estimated to affect over 415 million adults worldwide. It was the seventh leading cause of death worldwide, with an estimated 1.6 million deaths directly caused by diabetes in 2016. Over 90% of diabetes cases are T2DM, and in the UK most of these patients have at least one other chronic condition. In evaluating contemporary applications of Big Data (BD) to diabetes care and its upcoming capabilities, it is necessary to carry out a deep review of the foremost theoretical literature. The long-term growth of medicine and, in particular, of the field of "Diabetology" is strongly driven by a sequence of changes and innovations. Medical and healthcare data from varied sources, such as diagnoses and treatment plans, help healthcare workers gain real insight into the development of the diabetes care measures available to them. Apache Spark provides the "Resilient Distributed Dataset (RDD)", a vital data structure distributed over a cluster of machines. Machine Learning (ML) offers a noteworthy method for building elegant and automatic algorithms. An ML library consisting of common ML algorithms such as Support Vector Classification and Random Forest is investigated in this work using Jupyter Notebook Python code, where the key result (accuracy) is produced by the models.
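
The Spark-plus-ML pipeline the abstract outlines can be illustrated with PySpark's ML library: below, the same assembled features feed both a linear Support Vector Classifier and a Random Forest, mirroring the paper's comparison. The diabetes CSV path and the column names are placeholders, and the evaluation choice (area under the ROC curve) is an assumption.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LinearSVC, RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("t2dm-risk").getOrCreate()

# Placeholder diabetes dataset: numeric risk features plus a 0/1 "label" column.
df = spark.read.csv("diabetes.csv", header=True, inferSchema=True)
features = [c for c in df.columns if c != "label"]
data = VectorAssembler(inputCols=features, outputCol="features").transform(df)
train, test = data.randomSplit([0.8, 0.2], seed=42)

# Fit both classifiers on the same split and compare held-out performance.
evaluator = BinaryClassificationEvaluator(labelCol="label")
for clf in (LinearSVC(labelCol="label"), RandomForestClassifier(labelCol="label")):
    model = clf.fit(train)
    print(type(clf).__name__, evaluator.evaluate(model.transform(test)))
```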