• Title/Summary/Keyword: machine-learning

Search Results: 5,627

Forecasting Sow's Productivity using the Machine Learning Models (머신러닝을 활용한 모돈의 생산성 예측모델)

  • Lee, Min-Soo;Choe, Young-Chan
    • Journal of Agricultural Extension & Community Development / v.16 no.4 / pp.939-965 / 2009
  • Machine learning has been identified as a promising approach to knowledge-based system development. This study examines the ability of machine learning techniques to support farmers' decision making and develops a reference model for using pig farm data. We compared five machine learning techniques: logistic regression, decision tree, artificial neural network, k-nearest neighbor, and ensemble. All models predicted the sow's productivity well across all parities, with over 87.6% predictability. The predictability of total litter size is highest at 91.3% in the third parity and decreases as parity increases. The ensemble performed well in predicting the sow's productivity. The neural network and logistic regression were excellent classifiers for all parities, whereas the decision tree and k-nearest neighbor were not. Performance varies across models, with up to a 104% difference in lift values. The artificial neural network and ensemble models yielded the highest lift values, implying the best performance among the models.
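
A minimal sketch of the five-model comparison this abstract describes, using scikit-learn on synthetic placeholder data (the study's actual farm records, feature set, and tooling are not specified here, so everything below is an illustrative assumption):

```python
# Illustrative sketch, not the paper's code: compare the five model
# families named in the abstract on placeholder sow-productivity data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for parity records (features -> high/low
# total litter size); the real inputs are pig farm records.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    "k-nearest neighbor": KNeighborsClassifier(n_neighbors=7),
    "ensemble": RandomForestClassifier(n_estimators=200),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scale, then classify
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:20s} accuracy = {acc:.3f}")
```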

Wearable Sensor-Based Biometric Gait Classification Algorithm Using WEKA

  • Youn, Ik-Hyun;Won, Kwanghee;Youn, Jong-Hoon;Scheffler, Jeremy
    • Journal of information and communication convergence engineering / v.14 no.1 / pp.45-50 / 2016
  • Gait-based classification has gained much interest as a possible authentication method because it incorporates an intrinsic personal signature that is difficult to mimic. This study investigates machine learning techniques to mitigate the natural variations in gait among different subjects. We incorporated several machine learning algorithms into this study using the data mining package Waikato Environment for Knowledge Analysis (WEKA). WEKA's convenient interface enabled us to apply various sets of machine learning algorithms to understand whether each algorithm can capture certain distinctive gait features. First, we defined 24 gait features by analyzing three-axis acceleration data, and then used them selectively to distinguish subjects 10 years of age or younger from those aged 20 to 40. We also applied a machine learning voting scheme to improve the classification accuracy. The classification accuracy of the proposed system was about 81% on average.
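
The voting scheme is analogous to WEKA's Vote meta-classifier; the rough scikit-learn equivalent below uses random placeholder features, since the paper's 24 acceleration-derived gait features are not reproduced here:

```python
# Illustrative majority-vote classifier, analogous in spirit to WEKA's
# "Vote" meta-classifier; features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))    # stand-in for the 24 gait features
y = rng.integers(0, 2, size=200)  # 0 = age <= 10, 1 = age 20-40

vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=4)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",  # majority vote over the base models' predictions
)
vote.fit(X, y)
print(vote.predict(X[:5]))
```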

Design of Block-based Modularity Architecture for Machine Learning (머신러닝을 위한 블록형 모듈화 아키텍처 설계)

  • Oh, Yoosoo
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.476-482 / 2020
  • In this paper, we propose a block-based modularity architecture design method for distributed machine learning. The proposed architecture is a block-type module structure that accommodates various machine learning algorithms. It allows free expansion between block-type modules and lets multiple machine learning algorithms interlock organically according to the situation. The architecture enables open data communication using a metadata query protocol. It also makes it easy to implement application services combining various edge computing devices, by providing a communication method suited to the surrounding applications. To confirm the interlocking between the proposed block-type modules, we implemented a hardware-based modularity application system.
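
One way to picture a block-type module with a metadata query is sketched below; the class and method names are illustrative assumptions, not the paper's API:

```python
# Hypothetical sketch of block-type modules discoverable via metadata
# queries, so blocks can be chained freely; all names are illustrative.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Block:
    name: str
    run: Callable[[Any], Any]                     # the block's computation
    metadata: Dict[str, str] = field(default_factory=dict)


class BlockRegistry:
    def __init__(self) -> None:
        self._blocks: List[Block] = []

    def register(self, block: Block) -> None:
        self._blocks.append(block)

    def query(self, **meta: str) -> List[Block]:
        # Metadata query: return blocks whose metadata matches all criteria.
        return [b for b in self._blocks
                if all(b.metadata.get(k) == v for k, v in meta.items())]


registry = BlockRegistry()
registry.register(Block("scaler", lambda x: x, {"stage": "preprocess"}))
registry.register(Block("classifier", lambda x: 0, {"stage": "inference"}))
print([b.name for b in registry.query(stage="preprocess")])
```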

A Study on the Insider Behavior Analysis Using Machine Learning for Detecting Information Leakage (정보 유출 탐지를 위한 머신 러닝 기반 내부자 행위 분석 연구)

  • Kauh, Janghyuk;Lee, Dongho
    • Journal of Korea Society of Digital Industry and Information Management / v.13 no.2 / pp.1-11 / 2017
  • In this paper, we design and implement PADIL (Prediction And Detection of Information Leakage), a system that predicts and detects insider information leakage by analyzing network traffic with a variety of machine learning methods. We defined a five-level information leakage model (Reconnaissance, Scanning, Access and Escalation, Exfiltration, Obfuscation) by referring to the cyber kill-chain model. To perform machine learning for leakage detection, PADIL extracts various features from the network traffic, derives behavioral features by comparing them with personal profile information, and extracts information-leakage-level features. We tested various machine learning methods; the decision tree algorithm showed excellent performance in information leakage detection, and we showed that performance can be further improved by fine-grained feature selection.
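
A minimal sketch of the decision-tree detection step, on synthetic stand-in data; PADIL's actual traffic and behavioral features are not public, so the feature matrix below is an assumption:

```python
# Illustrative decision-tree detector over placeholder traffic/behavior
# features; the real PADIL features come from traffic analysis compared
# against personal profiles, which are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder: rows = sessions, columns = traffic/behavior features,
# label = 1 if the session belongs to a leakage sequence (rare class).
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.9],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

clf = DecisionTreeClassifier(max_depth=6, class_weight="balanced")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```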

Pipeline wall thinning rate prediction model based on machine learning

  • Moon, Seongin;Kim, Kyungmo;Lee, Gyeong-Geun;Yu, Yongkyun;Kim, Dong-Jin
    • Nuclear Engineering and Technology / v.53 no.12 / pp.4060-4066 / 2021
  • Flow-accelerated corrosion (FAC) of carbon steel piping is a significant problem in nuclear power plants. The basic process of FAC is currently understood relatively well; however, prediction models of the wall-thinning rate under an FAC environment are not reliably accurate. Herein, we propose a methodology for constructing pipe wall-thinning rate prediction models using artificial neural networks and a convolutional neural network, confined to a straight pipe without geometric changes. Furthermore, we propose a methodology for generating training data to efficiently train the neural network in developing a machine learning-based FAC prediction model. We conclude that machine learning can be used to construct pipe wall-thinning rate prediction models and to optimize the number of training datasets. The proposed methodology can efficiently generate a large dataset from an FAC test to develop a wall-thinning rate prediction model for real situations.
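
A toy regression sketch in the spirit of the ANN-based thinning-rate model; the inputs (flow velocity, temperature, pH) are typical FAC factors assumed for illustration, and the target function and data are synthetic, not the paper's:

```python
# Illustrative ANN regression of a wall-thinning rate from assumed FAC
# factors; both the inputs and the synthetic target are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
velocity = rng.uniform(1.0, 6.0, n)          # m/s
temperature = rng.uniform(100.0, 250.0, n)   # deg C
ph = rng.uniform(8.5, 10.0, n)
X = np.column_stack([velocity, temperature, ph])
# Synthetic rate: grows with velocity, peaks at mid temperature, noisy.
y = (0.1 * velocity * np.exp(-((temperature - 150.0) / 60.0) ** 2)
     * (10.0 - ph) + rng.normal(0, 0.01, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```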

Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database

  • Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE / v.22 no.4 / pp.948-952 / 2018
  • Haze removal is an active research topic owing to its many practical applications. Existing algorithms are founded upon histogram equalization, contrast maximization, or the growing trend of applying machine learning to image processing. Since machine learning-based algorithms solve problems from data, they usually perform better than those based on traditional image processing and computer vision techniques. However, achieving such high performance requires a large and reliable training database, which is difficult to obtain because acquiring pairs of real hazy and haze-free images is complex. As a result, researchers currently use synthetic databases, obtained by introducing synthetic haze, drawn from the standard uniform distribution, into clear images. In this paper, we propose the enhanced equidistribution, improving upon our previous study on equidistribution, and use it to build a new database for training machine learning-based haze removal algorithms. A large number of experiments verify the effectiveness of the proposed methodology.
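
Synthetic haze is conventionally generated with the atmospheric scattering model I = J·t + A·(1 - t), with the transmission t drawn uniformly per image as the abstract describes for the baseline database; the sketch below shows that baseline (the paper's enhanced-equidistribution sampling itself is not reproduced, and the parameter ranges are assumptions):

```python
# Illustrative synthetic-haze generation via the common scattering model
# I = J*t + A*(1 - t); t is sampled uniformly per image (baseline scheme).
import numpy as np

def add_synthetic_haze(clear: np.ndarray, rng: np.random.Generator):
    """clear: HxWx3 float image in [0, 1]; returns a hazy version."""
    t = rng.uniform(0.1, 1.0)   # per-image transmission (assumed range)
    A = rng.uniform(0.7, 1.0)   # atmospheric light (assumed range)
    return clear * t + A * (1.0 - t)

rng = np.random.default_rng(0)
clear = rng.random((64, 64, 3))  # stand-in for a real clear image
hazy = add_synthetic_haze(clear, rng)
print(hazy.min(), hazy.max())
```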

Comparison of Machine Learning Techniques for Cyberbullying Detection on YouTube Arabic Comments

  • Alsubait, Tahani;Alfageh, Danyah
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.1-5 / 2021
  • Cyberbullying is a problem faced in many cultures. Due to their popularity and interactive nature, social media platforms have also been affected by cyberbullying, and social media users from Arab countries have reported being targets. Machine learning techniques have been a prominent approach used to detect and combat this phenomenon. In this paper, we compare different machine learning algorithms on cyberbullying detection using a labeled dataset of Arabic YouTube comments. Three machine learning models are considered: Multinomial Naïve Bayes (MNB), Complement Naïve Bayes (CNB), and Logistic Regression (LR). In addition, we experiment with two feature extraction methods: Count Vectorizer and TF-IDF Vectorizer. Our results show that, with Count Vectorizer features, the Logistic Regression model outperforms both Naïve Bayes models, whereas with TF-IDF Vectorizer features, the Complement Naïve Bayes model outperforms the other two.
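
A minimal scikit-learn sketch of the model-by-vectorizer grid this abstract compares; the four toy English comments stand in for the labeled Arabic YouTube dataset, which is not public here:

```python
# Illustrative comparison of the three classifiers and two vectorizers
# named in the abstract; the tiny corpus is a placeholder, not the data.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import ComplementNB, MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["you are great", "awful person go away",
         "nice video thanks", "nobody likes you loser"]
labels = [0, 1, 0, 1]  # 1 = bullying, 0 = benign

for vec_name, vec in [("count", CountVectorizer()),
                      ("tfidf", TfidfVectorizer())]:
    for clf_name, clf in [("MNB", MultinomialNB()),
                          ("CNB", ComplementNB()),
                          ("LR", LogisticRegression(max_iter=1000))]:
        pipe = make_pipeline(vec, clf).fit(texts, labels)
        print(vec_name, clf_name, pipe.predict(["you loser"]))
```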

Machine Learning based Bandwidth Prediction for Dynamic Adaptive Streaming over HTTP

  • Yoo, Soyoung;Kim, Gyeongryeong;Kim, Minji;Kim, Yeonjin;Park, Soeun;Kim, Dongho
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.33-48 / 2020
  • Through digital transformation, new technologies such as machine learning (ML), big data, cloud, and VR/AR are being applied to video streaming. We chose ML to provide optimal QoE (Quality of Experience) under various network conditions; in other words, ML helps DASH provide uninterrupted video streaming. In DASH, the source video is segmented into short chunks of 2-10 seconds, each encoded at several different bitrate levels and resolutions. We built five prototypes by applying five different machine learning algorithms to DASH and compared their performance. The prototypes consist of dash.js, a video processing server, web servers, datasets, and the five machine learning models.
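
A minimal sketch of ML-based bandwidth prediction for DASH adaptation: predict the next chunk's throughput from a window of recent throughput samples, then pick the highest bitrate rung that fits. The window size, bitrate ladder, and regressor choice are assumptions, not the paper's configuration:

```python
# Illustrative bandwidth predictor for DASH bitrate adaptation; the
# throughput trace, window, and ladder below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
# Synthetic throughput trace in Mbps, standing in for measured downloads.
throughput = np.clip(4.0 + np.cumsum(rng.normal(0, 0.2, 300)), 0.3, None)

WINDOW = 5
X = np.array([throughput[i:i + WINDOW]
              for i in range(len(throughput) - WINDOW)])
y = throughput[WINDOW:]  # next-chunk throughput to predict

model = GradientBoostingRegressor().fit(X[:-50], y[:-50])
predicted = model.predict(X[-1:])[0]

ladder = [0.5, 1.0, 2.5, 5.0, 8.0]  # assumed bitrate rungs in Mbps
choice = max([b for b in ladder if b <= predicted], default=ladder[0])
print(f"predicted {predicted:.2f} Mbps -> request {choice} Mbps rendition")
```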

Research Trends in Quantum Machine Learning (양자컴퓨팅 & 양자머신러닝 연구의 현재와 미래)

  • J.H. Bang
    • Electronics and Telecommunications Trends / v.38 no.5 / pp.51-60 / 2023
  • Quantum machine learning (QML) is an area of quantum computing that leverages its principles to develop machine learning algorithms and techniques. QML aims to combine traditional machine learning with the capabilities of quantum computing to devise approaches to problem solving and (big) data processing. Nevertheless, QML is at an early stage of research and development; more theoretical studies are needed to understand whether a significant quantum speedup can be achieved compared with classical machine learning and, if so, to explain the underlying physical principles. First, the fundamental concepts and elements of QML should be established. We describe the inception and development of QML, highlighting essential quantum computing algorithms that are integral to it. The advent of the noisy intermediate-scale quantum era and Google's demonstration of quantum supremacy are then addressed. Finally, we briefly discuss research prospects for QML.

Performance Comparison of Machine-learning Models for Analyzing Weather and Traffic Accident Correlations

  • Li Zi Xuan;Hyunho Yang
    • Journal of information and communication convergence engineering / v.21 no.3 / pp.225-232 / 2023
  • Owing to advancements in intelligent transportation systems (ITS) and artificial-intelligence technologies, various machine-learning models can be employed to simulate and predict the number of traffic accidents under different weather conditions. Furthermore, analyzing the relationship between weather and traffic accidents allows us to assess whether the current weather conditions are suitable for travel, which can significantly reduce the risk of traffic accidents. In this study, we analyzed 30,000 traffic flow data points collected by traffic cameras at nearby intersections in Washington, D.C., USA, from October 2012 to May 2017, using a Pearson correlation heat map. We then modeled and compared the relationships between continuous features by applying several machine-learning algorithms commonly used in ITS, including random forest, decision tree, gradient-boosting regression, and support vector regression. The experimental results indicated that the gradient-boosting regression model performed best.
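
An illustrative version of that workflow: compute a Pearson correlation matrix (the basis for a heat map), then compare the four regressors named in the abstract. The column names and synthetic data are assumptions for the sketch, not the study's dataset:

```python
# Illustrative weather-vs-accidents analysis with the four regressors
# named in the abstract; the data frame below is synthetic placeholder.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "temperature": rng.uniform(-5, 35, n),
    "rainfall": rng.exponential(2.0, n),
    "visibility": rng.uniform(1, 10, n),
})
df["accidents"] = (5 + 2 * df["rainfall"] - 0.3 * df["visibility"]
                   + rng.normal(0, 1, n))

print(df.corr(method="pearson"))  # basis for the correlation heat map

X, y = df.drop(columns="accidents"), df["accidents"]
for name, model in [("random forest", RandomForestRegressor()),
                    ("decision tree", DecisionTreeRegressor(max_depth=6)),
                    ("gradient boosting", GradientBoostingRegressor()),
                    ("SVR", SVR())]:
    print(name, cross_val_score(model, X, y, cv=3).mean().round(3))
```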