• Title/Summary/Keyword: Machine Learning

A Study on Adaptive Learning Model for Performance Improvement of Stream Analytics (실시간 데이터 분석의 성능개선을 위한 적응형 학습 모델 연구)

  • Ku, Jin-Hee
    • Journal of Convergence for Information Technology / v.8 no.1 / pp.201-206 / 2018
  • Recently, as technologies for realizing artificial intelligence have become common, machine learning is widely used. Machine learning provides insight by collecting large amounts of data, batch processing it, and taking a final action, but the effects of that work are not immediately fed back into the learning process. This paper proposes an adaptive learning model to improve the performance of real-time stream analysis, a major business issue. Adaptive learning generates an ensemble by adapting to the complexity of the data set, and the algorithm uses the data needed to determine the optimal data points to sample. In experiments on six standard data sets, the adaptive learning model outperformed a simple machine learning model for classification in both learning time and accuracy. In particular, the support vector machine showed excellent performance at the end of all ensembles. Adaptive learning is expected to be applicable to a wide range of problems that must be adaptively updated as various parameters change over time.
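
A minimal sketch of the adaptive-sampling idea the abstract describes, assuming scikit-learn; the synthetic data, the uncertainty-based sampling rule, and the SVM base learner are illustrative choices, not the paper's exact algorithm:

```python
# Editor's sketch: uncertainty-driven adaptive ensemble on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Start from a small random seed set and grow the ensemble adaptively.
rng = np.random.default_rng(0)
idx = rng.choice(len(X_pool), size=100, replace=False)
ensemble = []
for _ in range(5):
    ensemble.append(SVC(probability=True).fit(X_pool[idx], y_pool[idx]))
    # Sample the pool points the current ensemble is least certain about.
    proba = np.mean([m.predict_proba(X_pool)[:, 1] for m in ensemble], axis=0)
    most_uncertain = np.argsort(np.abs(proba - 0.5))[:100]
    idx = np.unique(np.concatenate([idx, most_uncertain]))

votes = np.mean([m.predict(X_test) for m in ensemble], axis=0)
print("ensemble accuracy:", accuracy_score(y_test, (votes > 0.5).astype(int)))
```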

Study on the Improvement of Machine Learning Ability through Data Augmentation (데이터 증강을 통한 기계학습 능력 개선 방법 연구)

  • Kim, Tae-woo;Shin, Kwang-seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.346-347 / 2021
  • In pattern recognition for machine learning, the larger the amount of training data, the better the performance. However, it is not always possible to secure a large amount of training data covering the types of patterns that must be detected in daily life. It is therefore necessary to substantially inflate a small data set before general machine learning can be applied. In this study, we examine techniques for augmenting data so that machine learning can be performed. A representative method of performing machine learning with a small data set is transfer learning, which obtains a result by performing basic training on a general-purpose data set and then retraining the final stage on the target data set. In this study, a model pre-trained on a general-purpose data set such as ImageNet is used as a feature extractor, and augmented data are fed to it to detect the desired pattern.
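
The transfer-learning setup the abstract describes can be sketched as follows, assuming TensorFlow/Keras; MobileNetV2, the augmentation layers, and the binary head are assumptions standing in for the authors' exact model:

```python
# Editor's sketch: frozen ImageNet backbone + augmentation + new head.
import tensorflow as tf
from tensorflow.keras import layers, models

augment = models.Sequential([
    layers.RandomFlip("horizontal"),   # simple augmentations that inflate
    layers.RandomRotation(0.1),        # a small target dataset on the fly
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                 # keep the general-purpose features fixed

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # binary target pattern
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(target_train_ds, epochs=10)  # target_train_ds: hypothetical target data
```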


Systematic Research on Privacy-Preserving Distributed Machine Learning (프라이버시를 보호하는 분산 기계 학습 연구 동향)

  • Min Seob Lee;Young Ah Shin;Ji Young Chun
    • The Transactions of the Korea Information Processing Society / v.13 no.2 / pp.76-90 / 2024
  • Although artificial intelligence (AI) can be utilized in various domains such as smart cities and healthcare, its use is limited by concerns about the exposure of personal and sensitive information. In response, the concept of distributed machine learning has emerged, wherein learning occurs locally before a global model is trained, mitigating the concentration of data on a central server. However, the collaborative learning phase among multiple participants still poses threats to data privacy. In this paper, we systematically analyze recent trends in privacy protection within distributed machine learning, considering factors such as the presence of a central server, the distribution environment of the training datasets, and performance variations among participants. In particular, we focus on key distributed machine learning techniques, including horizontal federated learning, vertical federated learning, and swarm learning. We examine the privacy protection mechanisms within these techniques and explore potential directions for future research.
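
A toy sketch of horizontal federated learning with FedAvg-style weight averaging, one of the techniques the paper surveys; the logistic-regression clients and synthetic data are illustrative, not from the paper:

```python
# Editor's sketch: each participant trains locally; only model weights,
# never raw data, reach the server, which averages them into a global model.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on local data only
    return w

# Three clients with horizontally partitioned data (same features, different rows).
clients = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(3)]

w_global = np.zeros(5)
for _ in range(10):
    local_ws = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)    # server averages the local weights
print("global weights:", w_global)
```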

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.177-192 / 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by the biological neural network. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for the classification and regression analysis that fit our study well. The core principle of SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belong based on the hyperplane obtained from the given data set. Thus, the larger the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of Robot and Advisor, which performs various financial tasks through advanced algorithms over rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index, which is based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: the direction of VKOSPI and option prices shows a positive relation regardless of option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase because the probability that an option will be exercised increases. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, an investor can know in real time how much the option price rises for a given rise in volatility. Accurate forecasting of VKOSPI movements is therefore one of the important factors that can generate profit in option trading. In this study, we verified with real option data that an accurate forecast of VKOSPI can produce a large profit in real option trading. To the best of our knowledge, there have been no studies that predict the direction of VKOSPI with machine learning and apply the prediction to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then took an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether the strategy is applicable to real option trading based on the SVM's predictions. The results showed that the prediction accuracy for VKOSPI was 57.83% on average and that the number of position entries was 43.2, less than half the benchmark (100); a small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
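
A minimal sketch of the direction-forecasting step, assuming scikit-learn; the synthetic volatility series, lag features, and labels are stand-ins for the VKOSPI data and the authors' feature set:

```python
# Editor's sketch: SVM classifier on lagged volatility-index changes.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vkospi = 15 + np.cumsum(rng.normal(0, 0.3, 1000))   # stand-in for the index
change = np.diff(vkospi)

# Features: previous 5 daily changes; label: 1 if the next change is negative
# (a predicted decline is when the paper enters the strangle position).
lags = 5
X = np.array([change[i:i + lags] for i in range(len(change) - lags)])
y = (change[lags:] < 0).astype(int)

split = int(0.8 * len(X))
clf = SVC(kernel="rbf").fit(X[:split], y[:split])
print("directional accuracy:", accuracy_score(y[split:], clf.predict(X[split:])))
```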

Shield TBM disc cutter replacement and wear rate prediction using machine learning techniques

  • Kim, Yunhee;Hong, Jiyeon;Shin, Jaewoo;Kim, Bumjoo
    • Geomechanics and Engineering / v.29 no.3 / pp.249-258 / 2022
  • A disc cutter is an excavation tool on a tunnel boring machine (TBM) cutterhead; it crushes and cuts rock mass through the cutterhead's rotational movement while the machine excavates. Disc cutter wear occurs naturally, so, along with managing downtime and excavation efficiency, worn disc cutters must be replaced at the proper time; otherwise, the construction period can be delayed and the cost can increase. The most common prediction models for TBM performance and disc cutter lifetime have been proposed by the Colorado School of Mines and the Norwegian University of Science and Technology. However, the design parameters of existing models do not correspond well to field values when a TBM encounters complex and difficult ground conditions. This study therefore proposes a series of machine learning models that predict the disc cutter lifetime of a shield TBM from the excavation (machine) data recorded during operation, which reflect the response to the rock mass. The study utilizes five machine learning techniques: four classification models (K-Nearest Neighbors (KNN), Support Vector Machine, Decision Tree, and a Stacking Ensemble Model) and one artificial neural network (ANN) model. The KNN model was found to be the best of the four classification models, affording the highest recall of 81%. The ANN model also predicted the wear rate of disc cutters reasonably well.
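
A hedged sketch of the classification setup, assuming scikit-learn; the TBM operating features and synthetic replacement labels are hypothetical, not the paper's field data:

```python
# Editor's sketch: KNN predicting cutter replacement from operating data.
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
# Hypothetical TBM operating features: thrust, torque, RPM, advance rate.
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0.8).astype(int)  # 1 = replace

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_tr, y_tr)
print("recall:", recall_score(y_te, knn.predict(X_te)))  # the paper reports 81%
```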

Stress Identification and Analysis using Observed Heart Beat Data from Smart HRM Sensor Device

  • Pramanta, SPL Aditya;Kim, Myonghee;Park, Man-Gon
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1395-1405 / 2017
  • In this paper, we analyze heartbeat data to identify each subject's (binary) stress state using heart rate variability (HRV) features extracted from the subjects' heartbeat data, and we implement supervised machine learning techniques to create a mental stress classifier. Four steps are required before performance measurement: data acquisition, data processing (HRV analysis), feature selection, and machine learning. The HRV analysis module generates 56 features, several of which are selected (using our own algorithm) after computing the Pearson correlation matrix (p-values). Models trained on the selected features are compared, by model error, with models trained on all features across several machine learning techniques: support vector machine, decision tree, and discriminant analysis. With the selected features, the SVM and decision tree models show results close to those using all features, differing by only 1%, while the discriminant analysis differs by about 5%. All machine learning methods used in this work achieve a maximum average accuracy of 90%.
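
A sketch of the feature-selection-plus-classification pipeline, assuming scikit-learn and SciPy; the synthetic HRV matrix and the p-value threshold stand in for the authors' 56 real features and their own selection algorithm:

```python
# Editor's sketch: correlation-based feature selection, then SVM comparison.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 56))        # stand-in for 56 HRV features per record
y = (X[:, 0] - X[:, 3] + rng.normal(0, 1, 300) > 0).astype(int)  # stress label

# Keep features whose Pearson correlation with the label is significant.
selected = [j for j in range(X.shape[1]) if pearsonr(X[:, j], y)[1] < 0.05]
print("selected features:", selected)

score_all = cross_val_score(SVC(), X, y, cv=5).mean()
score_sel = cross_val_score(SVC(), X[:, selected], y, cv=5).mean()
print(f"all features: {score_all:.3f}, selected: {score_sel:.3f}")
```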

SVM based Stock Price Forecasting Using Financial Statements (SVM 기반의 재무 정보를 이용한 주가 예측)

  • Heo, Junyoung;Yang, Jin Yong
    • KIISE Transactions on Computing Practices / v.21 no.3 / pp.167-172 / 2015
  • Machine learning is a technique for training computers to perform classification or forecasting. Among the various types, the support vector machine (SVM) is a fast and reliable machine learning mechanism. In this paper, we evaluate the stock price predictability of an SVM based on financial statements, through a fundamental analysis that predicts the stock price from corporate intrinsic value. Corporate financial statements were used as the input for the SVM, and the rise or drop of the stock was predicted from the results. The SVM results were compared with the forecasts of experts as well as with other machine learning methods such as ANN, decision tree, and AdaBoost. The SVM showed good predictive power while requiring less execution time than the other machine learning schemes.
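
A minimal sketch of the setup, assuming scikit-learn; the fundamental ratios and synthetic rise/drop labels are hypothetical placeholders for the financial-statement inputs:

```python
# Editor's sketch: SVM classifying rise/drop from financial-statement ratios.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
# Hypothetical fundamentals: PER, PBR, ROE, debt ratio, operating margin.
X = rng.normal(size=(n, 5))
y = (0.8 * X[:, 2] - 0.4 * X[:, 3] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = rise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm.fit(X_tr, y_tr)
print("test accuracy:", svm.score(X_te, y_te))
```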

Application of Multi-Layer Perceptron and Random Forest Method for Cylinder Plate Forming (Multi-Layer Perceptron과 Random Forest를 이용한 실린더 판재의 성형 조건 예측)

  • Kim, Seong-Kyeom;Hwang, Se-Yun;Lee, Jang-Hyun
    • Journal of the Society of Naval Architects of Korea / v.57 no.5 / pp.297-304 / 2020
  • In this study, a data-driven machine learning approach was reviewed for predicting the conditions of cylindrical plate forming with roll bending equipment. The forming variables were calculated from an analysis of the mechanical relationship between the material properties and the roll bending machine in the bending process. The accuracy of the deformation prediction model was then reviewed by applying the finite element analysis method, and a large data set was generated from the finite element model for use in machine learning. Applying the machine learning models confirmed that their prediction accuracy is slightly higher than that of the linear regression method, and the results confirmed the applicability of the machine learning approach.
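
A sketch comparing the two models named in the title against the linear-regression baseline, assuming scikit-learn; the synthetic nonlinear function stands in for the FEA-generated forming dataset:

```python
# Editor's sketch: MLP and Random Forest vs. linear regression on surrogate data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(size=(n, 3))   # e.g., plate thickness, width, target radius
y = 2 * X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("Linear", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(random_state=0)),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(64, 64),
                                         max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```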

Classification of Fall Direction Before Impact Using Machine Learning Based on IMU Raw Signals (IMU 원신호 기반의 기계학습을 통한 충격전 낙상방향 분류)

  • Lee, Hyeon Bin;Lee, Chang June;Lee, Jung Keun
    • Journal of Sensor Science and Technology / v.31 no.2 / pp.96-101 / 2022
  • As the elderly population gradually increases, so does the risk of fatal fall accidents among the elderly. One way to cope with a fall accident is to determine the fall direction before impact using a wearable inertial measurement unit (IMU). In this context, a previous study proposed classifying fall directions using a support vector machine with sensor velocity, acceleration, and tilt angle as input parameters. In that method, however, the IMU signals pass through several processing steps, including a Kalman filter and the integration of acceleration, which involve a large amount of computation and introduce error factors. This paper therefore proposes a machine learning-based method that classifies the fall direction before impact using raw IMU signals rather than processed data. We investigated the effects of two factors on classification performance: (1) the use of processed versus raw signals and (2) the choice of machine learning technique. First, comparing processed and raw signals, the difference in sensitivity between the two was within 5%, indicating an equivalent level of classification performance. Second, comparing six machine learning techniques, K-nearest neighbor and naive Bayes exhibited excellent performance, with sensitivities of 86.0% and 84.1%, respectively.
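
A sketch of classifying fall direction from raw IMU windows, assuming scikit-learn; the synthetic windows and four direction classes are illustrative stand-ins for the real pre-impact recordings:

```python
# Editor's sketch: flattened raw IMU windows fed to KNN and naive Bayes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, window, channels = 400, 50, 6   # 6 raw channels: 3-axis acc + 3-axis gyro
y = rng.integers(0, 4, n)          # fall direction: front/back/left/right
X = rng.normal(size=(n, window, channels)) + y[:, None, None] * 0.3
X = X.reshape(n, -1)               # raw samples used directly, no Kalman filter

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Naive Bayes", GaussianNB())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```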

Prediction of Cryogenic- and Room-Temperature Deformation Behavior of Rolled Titanium using Machine Learning (타이타늄 압연재의 기계학습 기반 극저온/상온 변형거동 예측)

  • S. Cheon;J. Yu;S.H. Lee;M.-S. Lee;T.-S. Jun;T. Lee
    • Transactions of Materials Processing / v.32 no.2 / pp.74-80 / 2023
  • The deformation behavior of commercially pure titanium (CP-Ti) is highly dependent on material and processing parameters such as deformation temperature, deformation direction, and strain rate. This study aims to predict the multivariable, nonlinear tensile behavior of CP-Ti using machine learning based on three algorithms: an artificial neural network (ANN), a light gradient boosting machine (LGBM), and long short-term memory (LSTM). The predictive accuracy for tensile behavior at cryogenic temperature was lower than at room temperature owing to the larger data scattering in the training dataset used for machine learning. Although LGBM showed the lowest root mean squared error, it was not the best strategy because of overfitting and a step-function morphology that differs from the actual data. LSTM performed best: it effectively learned the continuous character of a flow curve and required less time for machine learning, even without a large database or hyperparameter tuning.
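
A sketch of the LSTM approach, assuming TensorFlow/Keras; the synthetic Hollomon-type flow curves and the next-point prediction setup are assumptions, not the authors' exact architecture or data:

```python
# Editor's sketch: LSTM learning the continuous shape of flow (stress-strain) curves.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n, steps = 200, 100
strain = np.linspace(0, 0.3, steps)
# Synthetic Hollomon-type curves: sigma = K * strain^n_h with varying K, n_h.
K = rng.uniform(600, 900, n)[:, None]
n_h = rng.uniform(0.1, 0.3, n)[:, None]
stress = K * np.maximum(strain, 1e-6) ** n_h

# Predict the next stress value from the preceding window of each curve.
win = 10
X = np.stack([stress[:, i:i + win] for i in range(steps - win)], axis=1)
X = X.reshape(-1, win, 1) / 1000.0
y = stress[:, win:].reshape(-1) / 1000.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(win, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("MSE:", model.evaluate(X, y, verbose=0))
```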