• Title/Summary/Keyword: machine-learning method


Machine Learning-based SOH Estimation Algorithm Using a Linear Regression Analysis (선형 회귀 분석법을 이용한 머신 러닝 기반의 SOH 추정 알고리즘)

  • Kang, Seung-Hyun;Noh, Tae-Won;Lee, Byoung-Kuk
    • The Transactions of the Korean Institute of Power Electronics / v.26 no.4 / pp.241-248 / 2021
  • A battery state-of-health (SOH) estimation algorithm using a machine learning-based linear regression method is proposed for estimating battery aging. The proposed algorithm analyzes the trend of change in the open-circuit voltage (OCV) curve, a parameter related to SOH. A section in which the relationship between SOH and the OCV curve is highly linear is selected and used for SOH estimation. The SOH of the aged battery is then estimated over the selected interval using machine learning-based linear regression. The performance of the proposed SOH estimation algorithm is verified through experiments and simulations using battery packs for electric vehicles.
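The heart of the approach above is an ordinary least-squares line fitted over a segment where SOH varies nearly linearly with an OCV-derived quantity. A minimal sketch, with synthetic data in place of the paper's measurements:

```python
# Least-squares fit of SOH against an OCV-derived feature over a segment
# where the relationship is nearly linear. All numbers here are synthetic
# and illustrative only, not from the paper.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical feature: OCV (V) at a fixed state of charge, vs SOH (%).
ocv = [3.60, 3.62, 3.64, 3.66, 3.68]
soh = [80.0, 85.0, 90.0, 95.0, 100.0]

a, b = fit_linear(ocv, soh)
estimate = a * 3.65 + b  # SOH estimate for a measured OCV of 3.65 V
```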

Machine learning-based Multi-modal Sensing IoT Platform Resource Management (머신러닝 기반 멀티모달 센싱 IoT 플랫폼 리소스 관리 지원)

  • Lee, Seongchan;Sung, Nakmyoung;Lee, Seokjun;Jun, Jaeseok
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.93-100 / 2022
  • In this paper, we propose a machine learning-based method for supporting resource management of IoT software platforms in a multi-modal sensing scenario. We assume that an IoT device running a oneM2M-compatible software platform is connected to various sensors, such as PIR, sound, dust, ambient light, ultrasonic, and accelerometer sensors, through different embedded system interfaces, such as general-purpose input/output (GPIO), I2C, SPI, and USB. Based on a collected dataset including CPU usage and user-defined priority, a machine learning model is trained to estimate the nice-value adjustment required for the observed resource-usage patterns. The proposed method is validated by comparison with a rule-based control strategy, demonstrating its practical capability in a multi-modal sensing scenario on IoT devices.
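As an illustration of the estimation step only (not the paper's actual model or data), one could map (CPU usage, user-defined priority) to a nice value with a simple nearest-neighbour lookup over labelled examples:

```python
# Illustrative sketch: estimate a nice value from (CPU usage %, priority)
# with a 1-nearest-neighbour lookup over a small synthetic training set.
# The samples and the priority weighting factor are assumptions.

TRAIN = [
    # ((cpu_usage, priority), nice value)
    ((80.0, 1), 10),   # heavy, low-priority sensor task: deprioritize
    ((75.0, 3), 5),
    ((20.0, 3), 0),
    ((10.0, 5), -5),   # light, high-priority task: boost
]

def estimate_nice(cpu, priority):
    """Return the nice value of the closest training sample."""
    def dist(sample):
        (c, p), _ = sample
        # Weight priority so it is comparable in scale to CPU percentage.
        return (c - cpu) ** 2 + ((p - priority) * 10) ** 2
    return min(TRAIN, key=dist)[1]
```

In the paper a trained model plays this role; the lookup above only shows the input/output shape of the estimation.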

An Approach to Applying Multiple Linear Regression Models by Interlacing Data in Classifying Similar Software

  • Lim, Hyun-il
    • Journal of Information Processing Systems / v.18 no.2 / pp.268-281 / 2022
  • The development of information technology is bringing many changes to everyday life, and machine learning can be used as a technique to solve a wide range of real-world problems. Analysis and utilization of data are essential processes in applying machine learning to real-world problems. As a method of processing data in machine learning, we propose an approach based on applying multiple linear regression models by interlacing data to the task of classifying similar software. Linear regression is widely used in estimation problems to model the relationship between input and output data. In our approach, multiple linear regression models are generated by training on interlaced feature data. A combination of these multiple models is then used as the prediction model for classifying similar software. Experiments are performed to evaluate the proposed approach as compared to conventional linear regression, and the experimental results show that the proposed method classifies similar software more accurately than the conventional model. We anticipate the proposed approach to be applied to various kinds of classification problems to improve the accuracy of conventional linear regression.
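The combination idea can be sketched roughly as follows; the interlacing used here (even- vs odd-indexed samples) and the one-feature models are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of combining multiple linear models trained on interlaced subsets
# of the data, with predictions averaged. Data is synthetic (roughly y = 2x).

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

# Interlace the training data: even-indexed vs odd-indexed samples.
models = [fit_linear(xs[i::2], ys[i::2]) for i in range(2)]

def predict(x):
    """Average the predictions of the interlaced models."""
    return sum(a * x + b for a, b in models) / len(models)
```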

Digital signal change through artificial intelligence machine learning method comparison and learning (인공지능 기계학습 방법 비교와 학습을 통한 디지털 신호변화)

  • Yi, Dokkyun;Park, Jieun
    • Journal of Digital Convergence / v.17 no.10 / pp.251-258 / 2019
  • In the future, a wide variety of products will be created in many fields using artificial intelligence, so understanding how artificial intelligence learning methods operate, and using them correctly, is a very important problem. This paper introduces the artificial intelligence learning methods known to date. Learning in artificial intelligence is based on the fixed-point iteration method from mathematics: the gradient descent (GD) method, which adjusts the convergence speed based on fixed-point iteration; the momentum method, which accumulates past gradients; and finally the Adam method, which combines these methods. This paper describes the advantages and disadvantages of each method. In particular, the adaptivity of the Adam method controls the learning ability of machine learning. We also analyze how these methods affect digital signals. Understanding how digital signals change during learning is the basis of accurate application and accurate judgment in future work and research using artificial intelligence.
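The three update rules compared above can be written out on a one-dimensional quadratic f(x) = (x - 3)^2 with gradient f'(x) = 2(x - 3); the hyperparameters below are common illustrative defaults, not values from the paper:

```python
# GD, momentum, and Adam update rules on a 1-D quadratic test function.

def grad(x):
    return 2.0 * (x - 3.0)  # derivative of (x - 3)^2

def gd(x, lr=0.1, steps=200):
    """Plain gradient descent."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum(x, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with an accumulated-gradient (momentum) term."""
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x -= lr * v
    return x

def adam(x, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=500):
    """Adam: bias-corrected first and second moment estimates."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        mhat = m / (1 - b1 ** t)
        vhat = v / (1 - b2 ** t)
        x -= lr * mhat / (vhat ** 0.5 + eps)
    return x
```

All three drive x toward the minimizer x = 3, with different transient behaviour along the way.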

Study on the Improvement of Machine Learning Ability through Data Augmentation (데이터 증강을 통한 기계학습 능력 개선 방법 연구)

  • Kim, Tae-woo;Shin, Kwang-seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.346-347 / 2021
  • In pattern recognition for machine learning, the larger the amount of training data, the better the performance. However, it is not always possible to secure a large amount of training data containing the types of patterns that must be detected in daily life. It is therefore necessary to substantially inflate a small data set for general machine learning. In this study, we examine techniques for augmenting data so that machine learning can be performed. A representative way to perform machine learning with a small data set is transfer learning, in which basic learning is first performed on a general-purpose data set and the target data set is then substituted into the final stage. In this study, a model trained on a general-purpose data set such as ImageNet is used as a feature extractor, and augmented data are used to detect the desired pattern.
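A minimal sketch of the data-inflation idea, using simple geometric transforms on a toy image (the paper's actual augmentations may differ):

```python
# Inflate a tiny image dataset 4x with flips and a 180-degree rotation.

def augment(img):
    """Return the original 2-D image (list of rows) plus a horizontal flip,
    a vertical flip, and a 180-degree rotation."""
    hflip = [row[::-1] for row in img]
    vflip = img[::-1]
    rot180 = [row[::-1] for row in img[::-1]]
    return [img, hflip, vflip, rot180]

dataset = [[[1, 2], [3, 4]]]  # one tiny 2x2 "image"
augmented = [a for img in dataset for a in augment(img)]
```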


Wearable Sensor-Based Biometric Gait Classification Algorithm Using WEKA

  • Youn, Ik-Hyun;Won, Kwanghee;Youn, Jong-Hoon;Scheffler, Jeremy
    • Journal of Information and Communication Convergence Engineering / v.14 no.1 / pp.45-50 / 2016
  • Gait-based classification has gained much interest as a possible authentication method because it incorporates an intrinsic personal signature that is difficult to mimic. This study investigates machine learning techniques to mitigate the natural variations in gait among different subjects. We incorporated several machine learning algorithms into this study using the data mining package called Waikato Environment for Knowledge Analysis (WEKA). WEKA's convenient interface enabled us to apply various sets of machine learning algorithms and examine whether each algorithm can capture certain distinctive gait features. First, we defined 24 gait features by analyzing three-axis acceleration data, and then selectively used them to distinguish subjects 10 years of age or younger from those aged 20 to 40. We also applied a machine learning voting scheme to improve the accuracy of the classification. The classification accuracy of the proposed system was about 81% on average.
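WEKA itself is Java-based, but the voting scheme mentioned above is language-independent; a minimal majority-vote combiner, with hypothetical class labels, might look like:

```python
# Combine the outputs of several classifiers by majority vote.
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among per-classifier predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical example: three classifiers label one gait sample.
votes = ["young", "adult", "young"]
decision = majority_vote(votes)
```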

Estimation of various amounts of kaolinite on concrete alkali-silica reactions using different machine learning methods

  • Aflatoonian, Moein;Mirhosseini, Ramin Tabatabaei
    • Structural Engineering and Mechanics / v.83 no.1 / pp.79-92 / 2022
  • In this paper, the impact of a vernacular pozzolanic kaolinite mine on concrete alkali-silica reaction (ASR) and strength has been evaluated. For making the samples, kaolinite powder at various levels has been used in the aggregate quality-specification test based on the ASTM C1260 standard, in order to investigate the effect of kaolinite particles on reducing the reaction of the mortar bars. Compressive strength, X-ray diffraction (XRD), and scanning electron microscope (SEM) experiments have been performed on concrete specimens. The results show that adding kaolinite powder to concrete causes a pozzolanic reaction and decreases the permeability of the concrete samples compared to the reference specimen. Further, various machine learning methods have been used to predict ASR-induced expansion for different amounts of kaolinite. In the modeling process, the optimal method is taken to be the one with the lowest mean square error (MSE) together with the highest correlation coefficient (R). To evaluate the efficiency of the proposed model, the results of the support vector machine (SVM) method were compared with those of the decision tree method, regression analysis, and a neural network algorithm. The comparison of forecasting tools showed that support vector machines outperformed the other methods; the SVM method can therefore be regarded as an effective approach to predicting ASR-induced expansion.
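The model-selection criterion described above, choosing the candidate with the lowest MSE, can be sketched as follows; the candidate predictions are synthetic, not the paper's results:

```python
# Select the best forecasting model by mean square error (MSE).

def mse(y_true, y_pred):
    """Mean square error between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]          # hypothetical measured expansions
candidates = {
    "svm":  [1.1, 2.0, 2.9],      # hypothetical model outputs
    "tree": [1.5, 2.5, 2.5],
}
best = min(candidates, key=lambda name: mse(y_true, candidates[name]))
```

The paper pairs this with the correlation coefficient R; a full selection rule would check both criteria.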

A Study on the Learning Method of Documents for Implementation of Automated Documents Classificator (문서 자동 분류기의 구현을 위한 문서 학습 방법에 관한 연구)

  • 선복근;이인정;한광록
    • Proceedings of the IEEK Conference / 1999.06a / pp.1001-1004 / 1999
  • We study a machine learning method for automatic document categorization using the back-propagation algorithm. Four categories are defined for the experiment, and the system learns from 20 documents per category. As a result of the machine learning, we find that a new document is automatically assigned to one of the predefined categories.
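A toy version of the described setup, with a hypothetical four-word vocabulary and two categories in place of the paper's data: a single sigmoid unit trained by the back-propagation (gradient-descent) rule on bag-of-words vectors:

```python
# One sigmoid neuron trained by gradient descent to separate two document
# categories. Vocabulary and documents are invented for illustration.
import math

# Vocabulary: ["stock", "market", "soccer", "goal"]
docs = [
    ([1, 1, 0, 0], 0),  # finance document
    ([1, 0, 0, 0], 0),
    ([0, 0, 1, 1], 1),  # sports document
    ([0, 0, 1, 0], 1),
]

w = [0.0, 0.0, 0.0, 0.0]
b = 0.0
lr = 1.0

def predict(x):
    """Sigmoid of the weighted sum: probability of the 'sports' category."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(500):
    for x, y in docs:
        err = predict(x) - y  # gradient of cross-entropy w.r.t. the pre-activation
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err
```

A multi-category classifier, as in the paper, would use one output unit per category.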


Development of Galaxy Image Classification Based on Hand-crafted Features and Machine Learning (Hand-crafted 특징 및 머신 러닝 기반의 은하 이미지 분류 기법 개발)

  • Oh, Yoonju;Jung, Heechul
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.1 / pp.17-27 / 2021
  • In this paper, we develop a galaxy image classification method based on hand-crafted features and machine learning techniques. Additionally, we provide an empirical analysis to reveal which combination of techniques is effective for galaxy image classification. To achieve this, we developed a framework consisting of four modules: preprocessing, feature extraction, feature post-processing, and classification. We found that the best technique for galaxy image classification is to use a median filter, ORB feature vectors, and a voting classifier based on an RBF SVM, a random forest, and logistic regression. The final method is efficient, so we believe it is applicable to embedded environments.
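Of the pipeline's stages, the preprocessing step is the simplest to sketch; a plain 3x3 median filter (border pixels left untouched for brevity) might look like:

```python
# 3x3 median filter on a 2-D grayscale image stored as a list of rows.

def median_filter(img, k=3):
    """Replace each interior pixel with the median of its k x k window."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    r = k // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(
                img[yy][xx]
                for yy in range(y - r, y + r + 1)
                for xx in range(x - r, x + r + 1)
            )
            out[y][x] = window[len(window) // 2]
    return out

noisy = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]  # isolated noise pixel
clean = median_filter(noisy)
```

The median filter removes isolated noise pixels while preserving edges, which is why it is a common choice before feature extraction.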

Effectiveness of Normalization Pre-Processing of Big Data to the Machine Learning Performance (빅데이터의 정규화 전처리과정이 기계학습의 성능에 미치는 영향)

  • Jo, Jun-Mo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.14 no.3 / pp.547-552 / 2019
  • Recently, the massive growth in the scale of data has become a major issue in Big Data. Furthermore, Big Data should be preprocessed for normalization to obtain high performance from machine learning, since Big Data is also the input of machine learning. The performance varies with many factors, such as the scope of the columns in the Big Data or the normalization preprocessing method. In this paper, various normalization preprocessing methods and column scopes are applied to a support vector machine (SVM) as the machine learning method, to find an efficient environment for normalization preprocessing. The machine learning experiments were programmed in Python using Jupyter Notebook.
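Two common normalization preprocessing methods of the kind compared in the paper, sketched without library dependencies (the paper's exact variants may differ):

```python
# Min-max scaling maps values to [0, 1]; z-score standardization centers
# them at mean 0 with unit standard deviation.

def min_max(xs):
    """Scale values linearly into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def z_score(xs):
    """Standardize values to zero mean and unit (population) std dev."""
    n = len(xs)
    mean = sum(xs) / n
    std = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / std for x in xs]
```

Either transform is applied column-wise before training, so that no single large-scale feature dominates the SVM's distance computations.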