• Title/Abstract/Keyword: machine learning (ML)

Search results: 268

베이지안 최적화를 통한 저서성 대형무척추동물 종분포모델 개발 (Development of benthic macroinvertebrate species distribution models using the Bayesian optimization)

  • 고병건;신지훈;차윤경
    • 상하수도학회지 / Vol. 35 No. 4 / pp.259-275 / 2021
  • This study explored the usefulness and implications of Bayesian hyperparameter optimization in developing species distribution models (SDMs). A variety of machine learning (ML) algorithms, namely support vector machine (SVM), random forest (RF), boosted regression tree (BRT), XGBoost (XGB), and multilayer perceptron (MLP), were used for predicting the occurrence of four benthic macroinvertebrate species. The Bayesian optimization method successfully tuned the model hyperparameters, with all ML models achieving an area under the curve (AUC) > 0.7. In addition, the searched hyperparameter values generally clustered around the optimal values, suggesting that Bayesian optimization efficiently finds optimal sets of hyperparameters. Tree-based ensemble algorithms (BRT, RF, and XGB) tended to show higher performance than SVM and MLP. Important hyperparameters and optimal values differed by species and ML model, indicating the necessity of hyperparameter tuning for improving individual model performance. The optimization results demonstrate that, for all macroinvertebrate species, SVM and RF required fewer trials to obtain optimal hyperparameter sets, leading to reduced computational cost compared with the other ML algorithms. The results of this study suggest that Bayesian optimization is an efficient method for hyperparameter optimization of machine learning algorithms.
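
As a rough illustration of the Bayesian hyperparameter search described above, the sketch below tunes a random forest occurrence model with Optuna and cross-validated AUC. The library choice, search ranges, and synthetic presence/absence data are assumptions for illustration, not the authors' actual setup.

```python
# Hypothetical sketch: Bayesian-style hyperparameter tuning of a random forest
# species-occurrence model, scored by cross-validated AUC. Library choice (Optuna),
# search ranges, and the synthetic data are illustrative assumptions.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for environmental predictors and presence/absence labels
X, y = make_classification(n_samples=500, n_features=12, random_state=42)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "max_depth": trial.suggest_int("max_depth", 2, 20),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    model = RandomForestClassifier(**params, random_state=42)
    # Mean cross-validated AUC is the objective to maximize
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)
print(study.best_params, round(study.best_value, 3))
```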

Unity ML-Agents Toolkit을 활용한 대상 객체 추적 머신러닝 구현 (Implementation of Target Object Tracking Method using Unity ML-Agent Toolkit)

  • 한석호;이용환
    • 반도체디스플레이기술학회지 / Vol. 21 No. 3 / pp.110-113 / 2022
  • Non-playable characters (NPCs) play an important role in improving the player's immersion and interest in a game, and implementing NPCs with reinforcement learning has recently been in the spotlight. In this paper, we present an AI target-tracking method based on reinforcement learning and implement an agent that tracks a specific target object while avoiding traps, using the Unity ML-Agents Toolkit. The implementation is built in the Unity game engine, and simulations are conducted through a number of experiments. The experimental results show that the agent tracks the target while avoiding traps with satisfactory performance.
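
The reward design behind such a track-and-avoid agent might look roughly like the framework-agnostic toy below; the reward values, radii, and 2D geometry are assumptions for illustration, and in ML-Agents this logic would live in the C# Agent callbacks rather than in Python.

```python
# Hypothetical reward shaping for a "track the target, avoid traps" agent.
# Values and geometry are illustrative only.
import math

def step_reward(agent_pos, target_pos, trap_positions, prev_distance,
                trap_radius=1.0, reach_radius=0.5):
    """Return (reward, episode_done) for one simulation step."""
    distance = math.dist(agent_pos, target_pos)

    if distance < reach_radius:          # reached the target
        return 1.0, True
    if any(math.dist(agent_pos, trap) < trap_radius for trap in trap_positions):
        return -1.0, True                # stepped into a trap

    # Shaping term: reward progress toward the target, plus a small step cost
    return 0.1 * (prev_distance - distance) - 0.001, False

# Example: the agent moved from distance 5.0 to about 4.1 from the target
print(step_reward((1.0, 0.0), (5.0, 1.0), [(3.0, 3.0)], prev_distance=5.0))
```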

Predicting the maximum lateral load of reinforced concrete columns with traditional machine learning, deep learning, and structural analysis software

  • Pelin Canbay;Sila Avgin;Mehmet M. Kose
    • Computers and Concrete / Vol. 33 No. 3 / pp.285-299 / 2024
  • Recently, many engineering computations have undergone a digital transformation to Machine Learning (ML)-based systems. Predicting the behavior of a structure, which is mainly computed with structural analysis software, is an essential step before construction. Especially in the seismic design procedure of structures, predicting the lateral load capacity of reinforced concrete (RC) columns is a vital factor. In this study, a novel ML-based model is proposed to predict the maximum lateral load capacity of RC columns under varying axial loads or cyclic loading. The proposed model is generated with a Deep Neural Network (DNN) and compared with traditional ML techniques as well as a popular commercial structural analysis software package. In the design and test phases of the proposed model, 319 columns with rectangular and square cross-sections are incorporated, and 33 parameters are used to predict the maximum lateral load capacity of each RC column. While some traditional ML techniques predict better than the compared commercial software, the proposed DNN model provides the best prediction results in the analysis. The experimental results suggest that the proposed DNN model can also be used for other engineering purposes.
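
A minimal sketch of such a DNN regressor is shown below, assuming TensorFlow/Keras, an arbitrary layer layout, and random stand-in data for the 33 column parameters; the paper's actual architecture and dataset are not reproduced here.

```python
# Hypothetical sketch: DNN regression of maximum lateral load from 33 column
# parameters. Architecture, scaling, and data are illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(319, 33))   # stand-in for 319 columns x 33 parameters
y = rng.normal(size=(319, 1))    # stand-in for maximum lateral load

X_scaled = StandardScaler().fit_transform(X)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),    # single continuous output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_scaled, y, epochs=100, batch_size=16, validation_split=0.2, verbose=0)
```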

Comparison of machine learning algorithms for regression and classification of ultimate load-carrying capacity of steel frames

  • Kim, Seung-Eock;Vu, Quang-Viet;Papazafeiropoulos, George;Kong, Zhengyi;Truong, Viet-Hung
    • Steel and Composite Structures / Vol. 37 No. 2 / pp.193-209 / 2020
  • In this paper, the efficiency of five Machine Learning (ML) methods, namely Deep Learning (DL), Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), and Gradient Tree Boosting (GTB), for regression and classification of the Ultimate Load Factor (ULF) of nonlinear inelastic steel frames is compared. For this purpose, a two-story, a six-story, and a twenty-story space frame are considered. An advanced nonlinear inelastic analysis is carried out for the steel frames to generate datasets for training the considered ML methods. In each dataset, the input variables are the geometric features of the W-sections and the output variable is the ULF of the frame. The five ML methods are compared in terms of the mean squared error (MSE) for the regression models and the accuracy for the classification models. Moreover, the ULF distribution curve is calculated for each frame and the strength failure probability is estimated. It is found that the GTB method is the most efficient in both regression and classification of the ULF, regardless of the number of training samples and the space frames considered.
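
A scikit-learn-only sketch of this kind of comparison is given below, using synthetic data in place of the frame datasets and cross-validated MSE as the regression score; the deep learning model is omitted, and the classification variant would mirror it with classifiers and accuracy scoring.

```python
# Hypothetical comparison sketch: four ML regressors of the ultimate load factor
# scored by cross-validated MSE. Synthetic data stands in for the W-section
# geometry / ULF datasets described in the paper.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=1000, n_features=8, noise=0.1, random_state=0)

regressors = {
    "SVM": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    "DT": DecisionTreeRegressor(random_state=0),
    "GTB": GradientBoostingRegressor(random_state=0),
}
for name, model in regressors.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: MSE = {mse:.3f}")
```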

Identification of shear transfer mechanisms in RC beams by using machine-learning technique

  • Zhang, Wei;Lee, Deuckhang;Ju, Hyunjin;Wang, Lei
    • Computers and Concrete / Vol. 30 No. 1 / pp.43-74 / 2022
  • Machine learning techniques are opening new opportunities to identify the complex shear transfer mechanisms of reinforced concrete (RC) beam members. This study employed 1224 shear test specimens to train decision-tree-based machine learning (ML) models, which confirmed strong correlations between the shear capacity of RC beams and key input parameters. In addition, the shear contributions of concrete and shear reinforcement (the so-called Vc and Vs) were identified by establishing three independent ML models trained under different strategies with various combinations of datasets. Detailed parametric studies were then conducted using the well-trained ML models. It appeared that the presence of shear reinforcement can make the predicted shear contribution of concrete in RC beams larger than the pure shear contribution of concrete, owing to the interaction between shear reinforcement and concrete. On the other hand, the size effect also has a significant impact on the shear contribution of concrete (Vc), whereas the addition of shear reinforcement can effectively mitigate the size effect. It was also found that concrete tends to be the primary source of shear resistance when the shear span-to-depth ratio a/d < 1.0, while shear reinforcement becomes the primary source of shear resistance when a/d > 2.0.
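
The parametric studies described above can be imitated with the minimal sketch below: a gradient-boosted tree model is trained on shear-test features and the shear span-to-depth ratio is swept while the other inputs are held at their means. The feature list, ranges, and random data are assumptions, not the paper's 1224-specimen database.

```python
# Hypothetical sketch: decision-tree-based shear capacity model followed by a
# simple parametric sweep over the shear span-to-depth ratio (a/d).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

features = ["a_d", "fc", "rho_w", "rho_v", "d", "bw"]   # assumed input parameters
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.uniform(0.5, 5.0, size=(1224, len(features))), columns=features)
y = rng.uniform(50, 500, size=1224)                     # stand-in shear strength

model = GradientBoostingRegressor(random_state=1).fit(X, y)

# Vary a/d while holding the remaining inputs at their mean values
grid = np.linspace(0.5, 5.0, 10)
sweep = pd.DataFrame([X.mean()] * len(grid))
sweep["a_d"] = grid
for ad, vu in zip(grid, model.predict(sweep)):
    print(f"a/d = {ad:.1f} -> predicted capacity = {vu:.1f}")
```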

Machine learning of LWR spent nuclear fuel assembly decay heat measurements

  • Ebiwonjumi, Bamidele;Cherezov, Alexey;Dzianisau, Siarhei;Lee, Deokjung
    • Nuclear Engineering and Technology / Vol. 53 No. 11 / pp.3563-3579 / 2021
  • Measured decay heat data of light water reactor (LWR) spent nuclear fuel (SNF) assemblies are adopted to train machine learning (ML) models. The measured data are available for fuel assemblies irradiated in commercial reactors operated in the United States and Sweden, and come from calorimetric measurements of discharged pressurized water reactor (PWR) and boiling water reactor (BWR) fuel assemblies; 91 PWR and 171 BWR assembly decay heat measurements are used. Because of the small size of the measurement dataset, we propose (i) to use the method of multiple runs and (ii) to generate and use synthetic data as a large dataset with statistical characteristics similar to those of the original dataset. Three ML models are developed based on Gaussian processes (GP), support vector machines (SVM), and neural networks (NN), with four inputs: assembly-averaged enrichment, assembly-averaged burnup, initial heavy metal mass, and cooling time after discharge. The outcomes of this work are (i) ML models that predict LWR fuel assembly decay heat from the four inputs, (ii) generation and application of synthetic data that improves the performance of the ML models, and (iii) uncertainty analysis of the ML models and their predictions.
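
A minimal sketch of one of these models is shown below: a Gaussian process regressor on the four inputs, with a crude jitter-based synthetic-data step standing in for the paper's statistically matched synthetic dataset. All numbers, ranges, and the augmentation scheme are assumptions.

```python
# Hypothetical sketch: GP regression of assembly decay heat from four inputs
# (enrichment, burnup, initial heavy metal mass, cooling time). Data and the
# synthetic-data generation are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_meas = rng.uniform([2.0, 10.0, 150.0, 5.0], [5.0, 50.0, 500.0, 40.0], size=(91, 4))
y_meas = rng.uniform(100.0, 1000.0, size=91)     # stand-in decay heat (W)

# Crude synthetic augmentation: jitter each measurement by a few percent
X_syn = np.repeat(X_meas, 10, axis=0) * rng.normal(1.0, 0.02, size=(910, 4))
y_syn = np.repeat(y_meas, 10) * rng.normal(1.0, 0.02, size=910)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_syn, y_syn)

# Predictions come with an uncertainty estimate (standard deviation)
mean, std = gp.predict(X_meas[:3], return_std=True)
print(mean.round(1), std.round(1))
```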

ML 기반의 영상처리를 통한 알람 프로그램 (Alarm program through image processing based on Machine Learning)

  • 김덕민;정현우;박구만
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2021년도 추계학술대회 / pp.304-307 / 2021
  • A variety of research and development is under way to make machine learning (ML) technology practical and accessible to ordinary users. In particular, the processing speed of personal computers and mobile devices has increased noticeably in recent years, bringing ML ever closer to daily life. Many tools and libraries that provide ML solutions and applications have been released; among them, we used 'Mediapipe', developed and distributed by Google. Mediapipe currently supports development on 'android', 'IOS', 'C++', 'Python', 'JS', and 'Coral', and support for more environments is planned. Based on the Mediapipe framework, our team researched and developed an alarm program that offers convenience to ordinary users through ML-based image processing. Mediapipe detects the body as landmarks; using the scikit-learn machine learning library, we trained and modeled specific postures so that they can serve as conditions for particular functions of the alarm program. scikit-learn is readily available in development-environment packages such as Anaconda, a distribution that bundles libraries frequently used with Python, for example for data analysis and plotting. Accordingly, we built the ML-based image-processing alarm program using the tkinter GUI toolkit included with Python, together with OpenCV, a programming library for real-time computer vision originally developed by Intel, and several other components.
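
A rough sketch of the described pipeline is given below: Mediapipe pose landmarks are flattened into a feature vector and passed to a scikit-learn classifier whose prediction can gate the alarm. The classifier choice, posture labels, and the alarm hook are assumptions for illustration.

```python
# Hypothetical sketch: Mediapipe pose landmarks -> scikit-learn posture classifier
# -> condition for the alarm program. Labels and training data are assumptions.
import cv2
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

mp_pose = mp.solutions.pose

def landmark_row(frame_bgr, pose):
    """Flatten the 33 pose landmarks of one frame into a feature vector."""
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return None
    return [v for lm in results.pose_landmarks.landmark
            for v in (lm.x, lm.y, lm.z, lm.visibility)]

# Training step (labels such as "lying" / "sitting" are hypothetical):
# clf = RandomForestClassifier().fit(X_rows, y_labels)

with mp_pose.Pose(static_image_mode=False) as pose:
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        row = landmark_row(frame, pose)
        # if row is not None and clf.predict([row])[0] == "lying":
        #     trigger_alarm()   # hypothetical hook into the tkinter alarm UI
    cap.release()
```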


허혈성 뇌졸중의 진단, 치료 및 예후 예측에 대한 기계 학습의 응용: 서술적 고찰 (Machine learning application in ischemic stroke diagnosis, management, and outcome prediction: a narrative review)

  • 은미연;전은태;정진만
    • Journal of Medicine and Life Science / Vol. 20 No. 4 / pp.141-157 / 2023
  • Stroke is a leading cause of disability and death, and it requires prompt diagnosis and treatment. The quality of care provided to patients with stroke can vary depending on the availability of medical resources, which in turn can affect prognosis. Recently, there has been growing interest in using machine learning (ML) to support stroke diagnosis and treatment decisions based on large medical datasets. Current ML applications in stroke care can be divided into two categories: analysis of neuroimaging data and clinical-information-based predictive models. Using ML to analyze neuroimaging data can increase the efficiency and accuracy of diagnosis, and commercial software that uses ML algorithms is already being used in the medical field. Additionally, the accuracy of predictive ML models is improving with the integration of radiomics and clinical data, and ML is expected to play an important role in improving the quality of care for patients with stroke.

Development of ML and IoT Enabled Disease Diagnosis Model for a Smart Healthcare System

  • Mehra, Navita;Mittal, Pooja
    • International Journal of Computer Science & Network Security / Vol. 22 No. 7 / pp.1-12 / 2022
  • Recent progress in Internet of Things (IoT) and Machine Learning (ML) based technologies has converted the traditional healthcare system into a smart healthcare system. The incorporation of IoT and ML has changed the way patients are treated and offers many opportunities in the healthcare domain. In this view, this research article presents a new IoT- and ML-based disease diagnosis model for the diagnosis of different diseases. In the proposed model, vital signs are collected via IoT-based smart medical devices, and the analysis is done using different data mining techniques to detect possible risks to a person's health status. Recommendations are made based on the results generated by the different data mining techniques; for high-risk patients, an emergency alert is sent to healthcare service providers and family members. The model is implemented in an Anaconda Jupyter notebook using several Python libraries. The results show that, among all the data mining techniques, SVM achieved the highest accuracy of 0.897 on the same dataset for the classification of Parkinson's disease.
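
As a rough sketch of the SVM classifier reported above, the snippet below trains a scaled RBF-kernel SVM and reports test accuracy; the synthetic features stand in for the actual vital-sign / Parkinson's dataset, which is not reproduced here.

```python
# Hypothetical sketch: SVM classification with feature scaling and a held-out
# test split. Synthetic data stands in for the study's dataset.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```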

트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용 (A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning)

  • 우덕채;문현실;권순범;조윤호
    • 한국IT서비스학회지 / Vol. 18 No. 2 / pp.143-159 / 2019
  • Machine learning (ML) is a method of fitting given data to a mathematical model to derive insights or make predictions. In the age of big data, where the amount of available data increases exponentially due to the development of information technology and smart devices, ML shows high prediction performance by detecting patterns without bias. Feature engineering, which generates the features that explain the problem to be solved, has a great influence on performance in the ML process, and its importance is continuously emphasized. Despite this importance, it is still considered a difficult task, as it requires a thorough understanding of the domain characteristics and the source data as well as an iterative procedure. We therefore propose methods that apply deep learning to overcome the complexity and difficulty of feature extraction and to improve the performance of ML models. The main reason for the superior performance of deep learning on complex unstructured data is that features can be extracted from the source data itself. To bring this advantage to business problems, we propose deep learning based methods that automatically extract features from transaction data or directly predict and classify target variables. In particular, exploiting the structural similarity between transaction data and text data, we applied techniques that perform well in text processing, and we verified the suitability of each method according to the characteristics of the transaction data. Our study not only explores the possibility of automated feature extraction but also provides a benchmark model that attains a certain level of performance before a human performs the feature extraction task. In addition, it is expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
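
The general idea, treating a transaction as a sequence of item tokens and letting the network learn its own features, might be sketched as below with a Keras Embedding + LSTM classifier; the vocabulary size, sequence length, and binary target are assumptions and do not reproduce the paper's models.

```python
# Hypothetical sketch: learn features directly from padded item-ID sequences
# (transactions treated like text) instead of hand-crafted features.
import numpy as np
import tensorflow as tf

n_items, max_len = 5000, 50                           # assumed vocabulary and padding length
rng = np.random.default_rng(0)
X = rng.integers(1, n_items, size=(1000, max_len))    # stand-in item-ID sequences
y = rng.integers(0, 2, size=1000)                     # stand-in binary target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len,), dtype="int32"),
    tf.keras.layers.Embedding(input_dim=n_items, output_dim=32, mask_zero=True),
    tf.keras.layers.LSTM(64),          # learned representation of the transaction sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```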