• Title/Summary/Keyword: Machine Learning & Training

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok; Choi, Jong Sun; Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.25-32 / 2020
  • In context-aware systems, rule-based AI techniques have been used in the abstraction process that derives context information. However, as user requirements for a service diversify and data usage grows, the rules become complicated, creating technical limitations in maintaining rule-based models and processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. Using such machine learning models in a context-aware system requires a management process that periodically injects training data. A previous study on machine learning-based context awareness considered a series of management processes, such as generating and supplying training data for operating several machine learning models, but the method was limited to the system it was applied to. In this paper, we propose a training data generation method that extends machine learning-based context-aware systems. The proposed method defines a training data generation model that reflects the requirements of each machine learning model and generates training data accordingly. In the experiment, the training data generation model is defined from the training data schema of a cardiac status analysis model for the elderly in a health status notification service, and training data is generated by applying the defined model in a real software environment. We then compare the accuracy obtained by training the machine learning model on the generated data, verifying the validity of the generated training data.
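
As a concrete illustration of a schema-driven generator of this kind, the Python sketch below maps raw sensor records onto per-model training examples. It is a minimal sketch under assumed names: the schema fields, the `heart_rate`/`age`/`activity_level` features, and the labeling rule are illustrative, not the authors' actual schema.

```python
# Hypothetical sketch of a schema-driven training data generator.
# Field names and the labeling rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class TrainingDataSchema:
    """Describes what one ML model expects as training input."""
    model_name: str
    features: List[str]                        # raw fields to extract
    label_fn: Callable[[Dict[str, Any]], str]  # derives the label

def generate_training_data(records, schema):
    """Project raw sensor records onto one model's schema."""
    dataset = []
    for rec in records:
        if not all(f in rec for f in schema.features):
            continue  # skip records missing required fields
        x = [rec[f] for f in schema.features]
        y = schema.label_fn(rec)
        dataset.append((x, y))
    return dataset

# Example: a cardiac-status model for elderly users (labels assumed).
cardiac_schema = TrainingDataSchema(
    model_name="cardiac_status",
    features=["heart_rate", "age", "activity_level"],
    label_fn=lambda r: "abnormal" if r["heart_rate"] > 100 else "normal",
)

records = [
    {"heart_rate": 72, "age": 70, "activity_level": 1},
    {"heart_rate": 115, "age": 68, "activity_level": 0},
]
print(generate_training_data(records, cardiac_schema))
```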

Design of a ParamHub for Machine Learning in a Distributed Cloud Environment

  • Su-Yeon Kim; Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.161-168 / 2024
  • As the size of big data models grows, distributed training is emerging as an essential element of large-scale machine learning tasks. In this paper, we propose ParamHub for distributed data training. During training, this agent uses the provided data to adjust various aspects of the model, such as its structure, learning algorithm, hyperparameters, and bias, aiming to minimize the error between the model's predictions and the actual values. Furthermore, it operates autonomously, collecting and updating data in a distributed environment, thereby reducing the load-balancing burden that arises in a centralized system. Through communication between agents, resource management and the learning process can be coordinated, enabling efficient management of distributed data and resources. This approach enhances the scalability and stability of distributed machine learning systems while providing the flexibility to be applied in various learning environments.
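
The sketch below illustrates the hub idea in its simplest form: agents push locally computed parameter updates, and the hub averages and redistributes them. The `ParamHub` class and its `push`/`aggregate`/`pull` methods are assumed names for illustration, not the paper's API.

```python
# Minimal sketch of a parameter hub that aggregates updates from
# distributed training agents; API names are illustrative assumptions.
import numpy as np

class ParamHub:
    def __init__(self, dim):
        self.params = np.zeros(dim)   # current global parameters
        self.pending = []             # updates received this round

    def push(self, update):
        """An agent submits its locally computed parameter update."""
        self.pending.append(np.asarray(update))

    def aggregate(self):
        """Average pending updates and apply them to the global model."""
        if self.pending:
            self.params += np.mean(self.pending, axis=0)
            self.pending.clear()

    def pull(self):
        """Agents fetch the latest global parameters."""
        return self.params.copy()

hub = ParamHub(dim=3)
hub.push([0.1, -0.2, 0.05])   # update from agent 1
hub.push([0.3,  0.0, 0.15])   # update from agent 2
hub.aggregate()
print(hub.pull())             # averaged update applied
```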

Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database

  • Ngo, Dat; Kang, Bongsoon
    • Journal of IKEEE / v.22 no.4 / pp.948-952 / 2018
  • Haze removal is an object of scientific interest due to its many practical applications. Existing algorithms are founded upon histogram equalization, contrast maximization, or the growing trend of applying machine learning to image processing. Since machine learning-based algorithms solve problems based on data, they usually perform better than those based on traditional image processing and computer vision techniques. However, achieving such high performance requires a large and reliable training database, which seems unattainable owing to the difficulty of acquiring real pairs of hazy and haze-free images. As a result, researchers currently use synthetic databases, obtained by introducing synthetic haze drawn from the standard uniform distribution into clear images. In this paper, we propose an enhanced equidistribution, improving upon our previous study on equidistribution, and use it to build a new database for training machine learning-based haze removal algorithms. A large number of experiments verify the effectiveness of the proposed methodology.
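
The following sketch contrasts uniform random transmission draws with evenly spaced (equidistributed) values under the standard atmospheric scattering model I = J·t + A·(1 − t). This is a simplified reading of equidistribution for illustration only; the paper's enhanced equidistribution is not reproduced here, and the toy arrays stand in for real images.

```python
# Sketch: synthesizing hazy training images from clear ones using the
# standard atmospheric scattering model I = J*t + A*(1 - t).
# "Equidistributed" here means evenly spaced transmission values, a
# simplified stand-in for the paper's enhanced equidistribution.
import numpy as np

def synthesize_hazy(clear, t, airlight=1.0):
    """Blend a clear image with airlight at transmission t (0..1)."""
    return clear * t + airlight * (1.0 - t)

rng = np.random.default_rng(0)
clear_images = [rng.random((4, 4)) for _ in range(5)]  # toy "images"

# Uniform random transmissions can cluster; equidistributed ones
# cover [t_min, t_max] evenly, one value per training image.
t_random = rng.uniform(0.2, 0.9, size=len(clear_images))
t_equi = np.linspace(0.2, 0.9, num=len(clear_images))

hazy_db = [synthesize_hazy(img, t) for img, t in zip(clear_images, t_equi)]
print(len(hazy_db), hazy_db[0].shape)
```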

Performance Evaluation of Machine Learning Algorithms for Cloud Removal of Optical Imagery: A Case Study in Cropland (광학 영상의 구름 제거를 위한 기계학습 알고리즘의 예측 성능 평가: 농경지 사례 연구)

  • Soyeon Park; Geun-Ho Kwak; Ho-Yong Ahn; No-Wook Park
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.507-519 / 2023
  • Multi-temporal optical images have been utilized for time-series monitoring of croplands. However, the presence of clouds imposes limitations on image availability, often requiring a cloud removal procedure. This study assesses the applicability of various machine learning algorithms for effective cloud removal in optical imagery. We conducted comparative experiments by focusing on two key variables that significantly influence the predictive performance of machine learning algorithms: (1) land-cover types of training data and (2) temporal variability of land-cover types. Three machine learning algorithms, including Gaussian process regression (GPR), support vector machine (SVM), and random forest (RF), were employed for the experiments using simulated cloudy images in paddy fields of Gunsan. GPR and SVM exhibited superior prediction accuracy when the training data had the same land-cover types as the cloud region, and GPR showed the best stability with respect to sampling fluctuations. In addition, RF was the least affected by the land-cover types and temporal variations of training data. These results indicate that GPR is recommended when the land-cover type and spectral characteristics of the training data are the same as those of the cloud region. On the other hand, RF should be applied when it is difficult to obtain training data with the same land-cover types as the cloud region. Therefore, the land-cover types in cloud areas should be taken into account for extracting informative training data along with selecting the optimal machine learning algorithm.
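
A minimal scikit-learn sketch of this comparison setup is shown below: each regressor learns a mapping from a cloud-free reference image to the cloudy-date image over training pixels, then predicts the values under the cloud. The synthetic data, band count, and hyperparameters are assumptions, not the paper's experimental configuration.

```python
# Sketch of the comparison setup: predict cloud-covered reflectance
# from a cloud-free reference image using three regressors.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# X: reflectance at a clear reference date; y: reflectance at the
# cloudy target date for the same (cloud-free) training pixels.
X_train = rng.random((200, 4))                   # 4 spectral bands
y_train = X_train @ np.array([0.5, 0.2, 0.2, 0.1]) \
    + 0.05 * rng.standard_normal(200)
X_cloud = rng.random((50, 4))                    # pixels under cloud

models = {
    "GPR": GaussianProcessRegressor(),
    "SVM": SVR(),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_cloud)                # reconstructed values
    print(name, pred[:3].round(3))
```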

Pipeline wall thinning rate prediction model based on machine learning

  • Moon, Seongin; Kim, Kyungmo; Lee, Gyeong-Geun; Yu, Yongkyun; Kim, Dong-Jin
    • Nuclear Engineering and Technology / v.53 no.12 / pp.4060-4066 / 2021
  • Flow-accelerated corrosion (FAC) of carbon steel piping is a significant problem in nuclear power plants. The basic process of FAC is currently understood relatively well; however, the accuracy of prediction models of the wall-thinning rate under an FAC environment is not reliable. Herein, we propose a methodology to construct pipe wall-thinning rate prediction models using artificial neural networks and a convolutional neural network, which is confined to a straight pipe without geometric changes. Furthermore, a methodology to generate training data is proposed to efficiently train the neural network for the development of a machine learning-based FAC prediction model. Consequently, it is concluded that machine learning can be used to construct pipe wall thinning rate prediction models and optimize the number of training datasets for training the machine learning algorithm. The proposed methodology can be applied to efficiently generate a large dataset from an FAC test to develop a wall thinning rate prediction model for a real situation.
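
The sketch below shows the general shape of such a regression model using a small multilayer perceptron. The input features (flow velocity, temperature, pH, dissolved oxygen) are assumed stand-ins for typical FAC variables, and the synthetic target is illustrative, not the paper's data.

```python
# Sketch of a wall-thinning-rate regressor; features and target
# are illustrative assumptions, not the paper's FAC dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.random((300, 4))          # [velocity, temperature, pH, O2]
# Toy target: thinning rate grows with velocity, drops with pH.
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.standard_normal(300)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                 random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))       # predicted thinning rates
```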

Application Consideration of Machine Learning Techniques in Satellite Systems

  • Jin-keun Hong
    • International journal of advanced smart convergence / v.13 no.2 / pp.48-60 / 2024
  • With the exponential growth of satellite data utilization, machine learning has become pivotal in enhancing innovation and cybersecurity in satellite systems. This paper investigates the role of machine learning techniques in identifying and mitigating vulnerabilities and code smells within satellite software. We explore satellite system architecture and survey applications like vulnerability analysis, source code refactoring, and security flaw detection, emphasizing feature extraction methodologies such as Abstract Syntax Trees (AST) and Control Flow Graphs (CFG). We present practical examples of feature extraction and training models using machine learning techniques like Random Forests, Support Vector Machines, and Gradient Boosting. Additionally, we review open-access satellite datasets and address prevalent code smells through systematic refactoring solutions. By integrating continuous code review and refactoring into satellite software development, this research aims to improve maintainability, scalability, and cybersecurity, providing novel insights for the advancement of satellite software development and security. The value of this paper lies in its focus on addressing the identification of vulnerabilities and resolution of code smells in satellite software. In terms of the authors' contributions, we detail methods for applying machine learning to identify potential vulnerabilities and code smells in satellite software. Furthermore, the study presents techniques for feature extraction and model training, utilizing Abstract Syntax Trees (AST) and Control Flow Graphs (CFG) to extract relevant features for machine learning training. Regarding the results, we discuss the analysis of vulnerabilities, the identification of code smells, maintenance, and security enhancement through practical examples. This underscores the significant improvement in the maintainability and scalability of satellite software through continuous code review and refactoring.
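
As an illustration of AST-based feature extraction feeding a Random Forest classifier, the sketch below counts AST node types in Python snippets and trains on toy examples. The snippets, labels, and feature choice are assumptions for demonstration, not the paper's dataset or pipeline.

```python
# Sketch of AST-based feature extraction for code classification;
# the toy snippets and their labels are illustrative assumptions.
import ast
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

def ast_features(source):
    """Count AST node types as a simple structural feature vector."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

snippets = [
    "def f(x):\n    return eval(x)",        # risky: eval on input
    "def g(x):\n    return x + 1",          # benign arithmetic
    "def h(p):\n    exec(p)",               # risky: exec on input
    "def k(a, b):\n    return a * b",       # benign arithmetic
]
labels = [1, 0, 1, 0]  # 1 = potentially vulnerable (assumed labels)

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(ast_features(s) for s in snippets)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, labels)
new = ast_features("def z(s):\n    return eval(s)")
print(clf.predict(vec.transform([new])))    # expect: [1]
```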

Machine Learning Data Extension Way for Confirming Genuine of Trademark Image which is Rotated (회전한 상표 이미지의 진위 결정을 위한 기계 학습 데이터 확장 방법)

  • Gu, Bongen
    • Journal of Platform Technology / v.8 no.1 / pp.16-23 / 2020
  • To protect trademark copyright, a convolutional neural network can be used to confirm whether a trademark image is genuine. However, repeatedly training on a single trademark image degrades machine learning performance because of overfitting. Therefore, this type of machine learning application generates training data in various ways. Yet if a genuine trademark image is rotated, it is classified as not being a genuine trademark. In this paper, we propose a method for extending training data so that the genuineness of a rotated trademark image can be confirmed. The proposed method generates rotated images from the genuine trademark image as training data. To show its effectiveness, we use a CNN model and evaluate accuracy on test images. The evaluation results show that our method can be used to generate training data for machine learning applications that confirm the genuineness of rotated trademark images.
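
A minimal Pillow-based sketch of this kind of rotation augmentation is shown below; the angle step and the white fill color are illustrative choices, and the file path is hypothetical.

```python
# Sketch of rotation-based training-data extension for one trademark
# image; the 15-degree step is an illustrative choice.
from PIL import Image

def extend_with_rotations(image_path, step_deg=15):
    """Generate rotated copies of one genuine trademark image."""
    base = Image.open(image_path).convert("RGB")
    return [
        base.rotate(angle, expand=True, fillcolor=(255, 255, 255))
        for angle in range(0, 360, step_deg)
    ]

# Usage (path is hypothetical):
# augmented = extend_with_rotations("trademark.png")
# for i, img in enumerate(augmented):
#     img.save(f"trademark_rot_{i:02d}.png")
```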

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.742-756 / 2022
  • A flood of information has accompanied the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few examples of devices that generate massive amounts of data in our daily lives. Machine learning has been adopted to recognize patterns in data across many areas, including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location for model training. This classical model makes data owners worry about the risks of transferring private information, because traditional machine learning pushes their data to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new model known as "Federated Learning". Federated learning trains artificial intelligence models over distributed clients while keeping the data owner's private information secure. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten images while protecting sensitive data, and compares the centralized machine learning model with federated averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but the classical model carries additional risk, such as privacy concerns, because the data is stored in a data center. The MNIST dataset was used in this experiment.
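
The sketch below shows the core of Federated Averaging on a toy linear model: each client trains locally, and the server averages client weights in proportion to local dataset size. It is a NumPy-only illustration, not the paper's DNN/MNIST implementation.

```python
# Minimal FedAvg sketch: clients train locally, the server averages
# their weights, weighted by local dataset size. NumPy-only toy model.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local linear-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: dataset-size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 70):                      # two clients, unequal data
    X = rng.random((n, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                     # communication rounds
    local = [local_train(global_w, X, y) for X, y in clients]
    global_w = fed_avg(local, [len(y) for _, y in clients])
print(global_w.round(2))                # approaches [2.0, -1.0]
```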

Analysis of Machine Learning Education Tool for Kids

  • Lee, Yo-Seob; Moon, Phil-Joo
    • International Journal of Advanced Culture Technology / v.8 no.4 / pp.235-241 / 2020
  • Artificial intelligence and machine learning are used in many parts of our daily lives, but most people are barely exposed to their basic processes and concepts. Understanding these basics is becoming increasingly important, yet kids rarely have the opportunity to explore AI processes and build an understanding of fundamental machine learning concepts and their essential components. Machine learning education tools can help children easily understand artificial intelligence and machine learning. In this paper, we examine machine learning education tools and compare their features.

Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn; Seokil Song
    • Journal of Platform Technology / v.11 no.4 / pp.71-79 / 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computationally intensive task. Running machine learning workloads in a Kubernetes-based cloud environment where the computing framework and storage are separated allows resources to be allocated effectively, but delays can occur because I/O must go through network communication. We propose a distributed in-memory caching technique to improve the performance of machine learning workloads in such an environment. In particular, we propose a new method that precaches the data required by machine learning workloads into the distributed in-memory cache, taking into account Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool.
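
A minimal sketch of the precaching idea is shown below: while one pipeline step runs, a background thread loads the data the next step will need into an in-memory cache. The cache, loader, and step names are assumptions, not the proposed system's API.

```python
# Sketch of precaching for a pipeline: while step N runs, the data
# that step N+1 will need is loaded into an in-memory cache in the
# background. Names are illustrative assumptions.
import threading

class InMemoryCache:
    def __init__(self):
        self._data, self._lock = {}, threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

def prefetch(cache, key, loader):
    """Load data for an upcoming pipeline step in the background."""
    t = threading.Thread(target=lambda: cache.put(key, loader()))
    t.start()
    return t

cache = InMemoryCache()

def load_step2_data():
    return list(range(1000))            # stands in for remote IO

t = prefetch(cache, "step2", load_step2_data)  # started during step 1
# ... step 1 trains here ...
t.join()
print(len(cache.get("step2")))          # step 2 reads from memory
```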
