• Title/Summary/Keyword: Machine Learning


Pipeline wall thinning rate prediction model based on machine learning

  • Moon, Seongin;Kim, Kyungmo;Lee, Gyeong-Geun;Yu, Yongkyun;Kim, Dong-Jin
    • Nuclear Engineering and Technology
    • /
    • v.53 no.12
    • /
    • pp.4060-4066
    • /
    • 2021
  • Flow-accelerated corrosion (FAC) of carbon steel piping is a significant problem in nuclear power plants. The basic process of FAC is now understood relatively well; however, existing models for predicting the wall-thinning rate under FAC conditions are not sufficiently accurate. Herein, we propose a methodology for constructing pipe wall-thinning rate prediction models using artificial neural networks and a convolutional neural network, restricted to a straight pipe without geometric changes. Furthermore, a methodology for generating training data is proposed to efficiently train the neural networks for the development of a machine learning-based FAC prediction model. We conclude that machine learning can be used to construct pipe wall-thinning rate prediction models and to optimize the number of training datasets required to train the algorithm. The proposed methodology can be applied to efficiently generate a large dataset from an FAC test and thus develop a wall-thinning rate prediction model for real situations.
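
A minimal sketch of how such an ANN-based wall-thinning rate regressor could look, assuming hypothetical FAC input features (temperature, pH, flow velocity, dissolved oxygen) and a toy thinning-rate relationship; the paper's actual feature set, data-generation scheme, and network architecture are not given in the abstract.

```python
# Sketch: ANN regression of a pipe wall-thinning rate from synthetic FAC conditions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical operating conditions: temperature (C), pH, flow velocity (m/s), dissolved O2 (ppb)
X = rng.uniform([120, 6.0, 1.0, 1.0], [250, 9.5, 8.0, 50.0], size=(500, 4))
# Toy thinning rate (mm/yr): increases with velocity, decreases with pH, plus noise
y = 0.05 * X[:, 2] / (1.0 + (X[:, 1] - 6.0)) + rng.normal(0, 0.01, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```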

Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database

  • Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE
    • /
    • v.22 no.4
    • /
    • pp.948-952
    • /
    • 2018
  • Haze removal has attracted considerable scientific interest owing to its many practical applications. Existing algorithms are founded upon histogram equalization, contrast maximization, or, increasingly, machine learning applied to image processing. Because machine learning-based algorithms solve problems from data, they usually outperform those based on traditional image processing and computer vision techniques. However, achieving such high performance requires a large and reliable training database, which is difficult to obtain because acquiring matched real hazy and haze-free images is complex. As a result, researchers currently rely on synthetic databases, obtained by introducing synthetic haze, drawn from the standard uniform distribution, into clear images. In this paper, we propose an enhanced equidistribution, improving upon our previous study on equidistribution, and use it to build a new database for training machine learning-based haze removal algorithms. A large number of experiments verify the effectiveness of the proposed methodology.
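
A minimal sketch of the baseline synthetic-haze generation that such databases rely on, using the atmospheric scattering model with a transmission value drawn from a uniform range; the enhanced equidistribution itself is not detailed in the abstract, so the parameter values and array shapes below are illustrative assumptions.

```python
# Sketch: synthesize a hazy image as I = J*t + A*(1 - t) with uniformly drawn transmission t.
import numpy as np

def synthesize_hazy(clear_img: np.ndarray, rng: np.random.Generator,
                    t_range=(0.3, 1.0), airlight=0.9) -> np.ndarray:
    """clear_img: H x W x 3 haze-free image scaled to [0, 1]."""
    t = rng.uniform(*t_range)                    # one global transmission value per image
    hazy = clear_img * t + airlight * (1.0 - t)  # atmospheric scattering model
    return np.clip(hazy, 0.0, 1.0)

rng = np.random.default_rng(0)
clear = rng.random((240, 320, 3))                # placeholder for a real haze-free image
hazy = synthesize_hazy(clear, rng)
```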

Comparison of Machine Learning Techniques for Cyberbullying Detection on YouTube Arabic Comments

  • Alsubait, Tahani;Alfageh, Danyah
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.1-5
    • /
    • 2021
  • Cyberbullying is a problem faced in many cultures. Owing to their popularity and interactive nature, social media platforms have also been affected by cyberbullying, and social media users from Arab countries have reported being targets of it. Machine learning techniques are a prominent approach used to detect and combat this phenomenon. In this paper, we compare the performance of different machine learning algorithms for cyberbullying detection on a labeled dataset of Arabic YouTube comments. Three machine learning models are considered, namely Multinomial Naïve Bayes (MNB), Complement Naïve Bayes (CNB), and Logistic Regression (LR). In addition, we experiment with two feature extraction methods, namely the Count Vectorizer and the Tfidf Vectorizer. Our results show that, with Count Vectorizer features, the Logistic Regression model outperforms both the Multinomial and Complement Naïve Bayes models, whereas with Tfidf Vectorizer features the Complement Naïve Bayes model outperforms the other two.
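
A minimal sketch of the described comparison using scikit-learn, with placeholder comments and labels standing in for the labeled Arabic YouTube dataset; the actual preprocessing and hyperparameters are assumptions.

```python
# Sketch: compare MNB, CNB, and LR with Count and Tfidf features via cross-validation.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import ComplementNB, MultinomialNB
from sklearn.pipeline import make_pipeline

comments = ["تعليق عادي", "تعليق مسيء"] * 50   # placeholder Arabic comments
labels = [0, 1] * 50                            # 1 = cyberbullying

for vec_name, vec in [("Count", CountVectorizer()), ("Tfidf", TfidfVectorizer())]:
    for clf_name, clf in [("MNB", MultinomialNB()), ("CNB", ComplementNB()),
                          ("LR", LogisticRegression(max_iter=1000))]:
        pipe = make_pipeline(vec, clf)
        score = cross_val_score(pipe, comments, labels, cv=5).mean()
        print(f"{vec_name} + {clf_name}: accuracy {score:.3f}")
```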

Machine Learning based Bandwidth Prediction for Dynamic Adaptive Streaming over HTTP

  • Yoo, Soyoung;Kim, Gyeongryeong;Kim, Minji;Kim, Yeonjin;Park, Soeun;Kim, Dongho
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.33-48
    • /
    • 2020
  • With digital transformation, new technologies such as machine learning (ML), big data, cloud computing, and VR/AR are being applied to video streaming. We chose ML to provide optimal QoE (Quality of Experience) under various network conditions; in other words, ML helps DASH deliver uninterrupted video streaming. In DASH, the source video is segmented into short chunks of 2-10 seconds, each encoded at several different bitrate levels and resolutions. We built five prototypes by applying five different machine learning algorithms to DASH and compared their performance. The prototypes consist of dash.js, a video processing server, web servers, data sets, and the five machine learning models.
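
A minimal sketch of one way such a bandwidth predictor can drive bitrate selection, assuming a hypothetical throughput history and encoding ladder; the paper's five prototypes, dash.js integration, and server components are not reproduced here.

```python
# Sketch: predict next-chunk throughput from recent history, then pick the highest safe bitrate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

BITRATES_KBPS = [300, 750, 1200, 2400, 4800]     # hypothetical encoding ladder

def make_windows(throughput, window=5):
    """Turn a throughput series into (past-window, next-value) training pairs."""
    X = [throughput[i:i + window] for i in range(len(throughput) - window)]
    y = throughput[window:]
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
history = 2000 + 500 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 150, 300)  # kbps
X, y = make_windows(history)
model = RandomForestRegressor(random_state=0).fit(X[:-1], y[:-1])

predicted_kbps = model.predict(X[-1:]).item()
next_bitrate = max(b for b in BITRATES_KBPS if b <= predicted_kbps)
print(f"predicted bandwidth {predicted_kbps:.0f} kbps -> request {next_bitrate} kbps chunk")
```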

Research Trends in Quantum Machine Learning (양자컴퓨팅 & 양자머신러닝 연구의 현재와 미래)

  • J.H. Bang
    • Electronics and Telecommunications Trends
    • /
    • v.38 no.5
    • /
    • pp.51-60
    • /
    • 2023
  • Quantum machine learning (QML) is an area of quantum computing that leverages its principles to develop machine learning algorithms and techniques. QML aims to combine traditional machine learning with the capabilities of quantum computing to devise approaches for problem solving and (big) data processing. Nevertheless, QML is still at an early stage of research and development, and more theoretical studies are needed to understand whether a significant quantum speedup can be achieved compared with classical machine learning and, if so, to explain the underlying physical principles. First, the fundamental concepts and elements of QML must be established. We describe the inception and development of QML, highlighting essential quantum computing algorithms that are integral to it. The advent of the noisy intermediate-scale quantum era and Google's demonstration of quantum supremacy are then addressed. Finally, we briefly discuss research prospects for QML.

Performance Comparison of Machine-learning Models for Analyzing Weather and Traffic Accident Correlations

  • Li Zi Xuan;Hyunho Yang
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.3
    • /
    • pp.225-232
    • /
    • 2023
  • Owing to advancements in intelligent transportation systems (ITS) and artificial-intelligence technologies, various machine-learning models can be employed to simulate and predict the number of traffic accidents under different weather conditions. Analyzing the relationship between weather and traffic accidents then allows us to assess whether the current weather conditions are suitable for travel, which can significantly reduce the risk of traffic accidents. In this study, we analyzed 30,000 traffic-flow data points collected by traffic cameras at nearby intersections in Washington, D.C., USA from October 2012 to May 2017, using a Pearson correlation heat map. We then predicted accident counts from the continuous weather features and compared the performance of several machine-learning algorithms commonly used in ITS, including random forest, decision tree, gradient-boosting regression, and support vector regression. The experimental results indicated that the gradient-boosting regression model performed best.
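
A minimal sketch of the four-model comparison on synthetic weather features (temperature, precipitation, visibility) standing in for the Washington, D.C. data; the feature names and the toy weather-accident relationship are assumptions.

```python
# Sketch: compare four regressors on a synthetic weather-vs-accident dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
temp = rng.uniform(-5, 35, 1000)          # temperature (C)
rain = rng.exponential(2.0, 1000)         # precipitation (mm)
visibility = rng.uniform(0.5, 10.0, 1000) # visibility (km)
accidents = 5 + 1.5 * rain - 0.3 * visibility + rng.normal(0, 1, 1000)  # toy relationship

X = np.column_stack([temp, rain, visibility])
X_tr, X_te, y_tr, y_te = train_test_split(X, accidents, random_state=0)

models = {
    "random forest": RandomForestRegressor(random_state=0),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
    "support vector regression": SVR(),
}
for name, model in models.items():
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```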

Compact Modeling for Nanosheet FET Based on TCAD-Machine Learning (TCAD-머신러닝 기반 나노시트 FETs 컴팩트 모델링)

  • Junhyeok Song;Wonbok Lee;Jonghwan Lee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.4
    • /
    • pp.136-141
    • /
    • 2023
  • The continuous shrinking of transistors in integrated circuits makes it increasingly difficult to improve performance, driving the emergence of new device architectures such as nanosheet field-effect transistors (FETs). In this paper, we propose a TCAD-machine learning framework for modeling the current-voltage characteristics of nanosheet FETs. Sentaurus TCAD simulations of nanosheet FETs are performed to obtain a large amount of device data, and a machine learning model of the I-V characteristics is trained on these TCAD data using a multi-layer perceptron. The weights and biases obtained from the multi-layer perceptron are implemented in a PSPICE netlist to verify the accuracy of the I-V curves and the DC transfer characteristics of a CMOS inverter. The proposed machine learning model is found to be applicable to predicting nanosheet FET device and circuit performance.
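
A minimal sketch of an MLP surrogate for I-V characteristics, trained on bias points (V_GS, V_DS) with a toy stand-in for the Sentaurus TCAD currents; the paper's actual TCAD dataset and the export of weights and biases to a PSPICE netlist are not reproduced here.

```python
# Sketch: fit an MLP to a toy I_D(V_GS, V_DS) surface as a stand-in for TCAD data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

vgs, vds = np.meshgrid(np.linspace(0, 1.0, 40), np.linspace(0, 1.0, 40))
X = np.column_stack([vgs.ravel(), vds.ravel()])          # bias points (V_GS, V_DS) in volts
ids = np.maximum(X[:, 0] - 0.3, 0.0) ** 2 * np.tanh(5 * X[:, 1])  # toy drain current (a.u.)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0))
model.fit(X, ids)
print("predicted I_D at V_GS=0.8 V, V_DS=0.5 V:", model.predict([[0.8, 0.5]])[0])
```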


Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification (자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정)

  • Young-Nam Kim
    • 대한상한금궤의학회지
    • /
    • v.14 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • Objective : The purpose of this study is to explore the most suitable machine learning algorithm for classifying the Shanghanlun diagnostic system using natural language processing (NLP). Methods : A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. The data were preprocessed using the Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms, and the accuracy of the models was compared. Results : Ridge regression and the naive Bayes classifier achieved an accuracy of 0.843, logistic regression and random forest achieved 0.804, decision tree achieved 0.745, and lasso regression achieved 0.608. Conclusions : Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.
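
A minimal sketch of the model comparison described above, with placeholder clause texts standing in for the 201 Shanghanlun passages, plain TF-IDF features in place of the Twitter Korean tokenizer preprocessing, and lasso regression approximated by an L1-penalized logistic regression.

```python
# Sketch: cross-validated accuracy of several classifiers on placeholder Korean clause texts.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

texts = ["태양병 조문 예시", "양명병 조문 예시", "소양병 조문 예시"] * 20   # placeholder clauses
labels = ["Taeyang", "Yangmyeong", "Soyang"] * 20                      # diagnostic categories

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "ridge": RidgeClassifier(),
    "lasso (L1 logistic)": LogisticRegression(penalty="l1", solver="liblinear"),
    "naive Bayes": MultinomialNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(make_pipeline(TfidfVectorizer(), clf), texts, labels, cv=5).mean()
    print(f"{name}: accuracy {acc:.3f}")
```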


Machine Learning Techniques for Diabetic Retinopathy Detection: A Review

  • Rachna Kumari;Sanjeev Kumar;Sunila Godara
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.4
    • /
    • pp.67-76
    • /
    • 2024
  • Diabetic retinopathy (DR) is a threatening complication of diabetes caused by damage to the blood vessels of the light-sensitive retina, and it leads to partial or total blindness if left untreated. Because DR shows no symptoms in its early stages, early detection is a major challenge for the proper treatment of the disease. With advances in technology, various computer-aided diagnostic programs using image processing and machine learning approaches have been designed for the early detection of DR so that patients can be treated before harmful effects develop. Machine learning techniques are now widely applied to image processing and have produced impressive results in this field as well. In this paper, we discuss various machine learning and deep learning based techniques developed for the automatic detection of diabetic retinopathy.

Deep Structured Learning: Architectures and Applications

  • Lee, Soowook
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.262-265
    • /
    • 2018
  • Deep learning, a sub-field of machine learning, is changing the prospects of artificial intelligence (AI) because of its recent advancements and applications in various fields. Deep learning deals with algorithms inspired by the structure and function of the brain, called artificial neural networks. This work reviews the basic architectures and recent advancements of deep structured learning. It also describes contemporary applications of deep structured learning and its advantages over traditional learning in artificial intelligence. This study is useful for general readers and students who are in the early stages of deep learning studies.