• Title/Summary/Keyword: MachineLearning

Search Results: 5,605

On successive machine learning process for predicting strength and displacement of rectangular reinforced concrete columns subjected to cyclic loading

  • Bu-seog Ju;Shinyoung Kwag;Sangwoo Lee
    • Computers and Concrete / v.32 no.5 / pp.513-525 / 2023
  • Recently, research on predicting the behavior of reinforced concrete (RC) columns using machine learning methods has been actively conducted. However, most studies have focused on predicting the ultimate strength of RC columns using a regression algorithm. Therefore, this study develops a successive machine learning process for predicting multiple nonlinear behaviors of rectangular RC columns. This process consists of three stages: single machine learning, bagging ensemble, and stacking ensemble. In the case of strength prediction, sufficient prediction accuracy is confirmed even in the first stage. In the case of displacement, although sufficient accuracy is not achieved in the first and second stages, the stacking ensemble model in the third stage performs better than the machine learning models in the first and second stages. In addition, the performance of the final prediction models is verified by comparing the backbone curves and hysteresis loops obtained from predicted outputs with actual experimental data.
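
The third stage described above stacks base learners under a meta-learner. A minimal sketch with scikit-learn follows; the base estimators, synthetic data, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal stacking-ensemble sketch (illustrative; the paper's actual base
# learners, column features, and hyperparameters are not specified here).
import numpy as np
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                                  # stand-in for column design parameters
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)   # stand-in for displacement

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: single learners; stage 2: bagging; stage 3: stack them under Ridge.
base = [
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("svr", SVR(C=10.0)),
    ("bag", BaggingRegressor(n_estimators=20, random_state=0)),  # bags decision trees by default
]
stack = StackingRegressor(estimators=base, final_estimator=Ridge())
stack.fit(X_tr, y_tr)
print("held-out R^2:", stack.score(X_te, y_te))
```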

Trends in image processing techniques applied to corrosion detection and analysis

  • Beomsoo Kim;Jaesung Kwon;Jeonghyeon Yang
    • Journal of the Korean Institute of Surface Engineering / v.56 no.6 / pp.353-370 / 2023
  • Corrosion detection and analysis are important for reducing costs and preventing disasters, and image processing techniques have recently been widely applied to them. In this work, we briefly introduce traditional image processing techniques and machine learning algorithms used to detect or analyze corrosion in various fields. Machine learning, especially CNN-based algorithms, has recently seen wide application to corrosion detection, and research on applying machine learning to region segmentation is also very active. Corrosion is reddish-brown in color and has a very irregular shape, so combinations of color- and texture-aware techniques, various mathematical techniques, and machine learning algorithms are used to detect and analyze it. We present examples of applying traditional image processing techniques and machine learning to corrosion detection and analysis.
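
As a toy illustration of the color-based side of such detection, the sketch below thresholds reddish-brown hues in HSV space with OpenCV. The threshold values and the input file name are assumptions for illustration, not values from the survey.

```python
# Toy color-based corrosion detection (hue/saturation bounds are illustrative
# assumptions; real systems combine color, texture, and learned features).
import cv2
import numpy as np

img = cv2.imread("surface.jpg")            # hypothetical input image
assert img is not None, "provide an input image"
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Reddish-brown band in OpenCV's HSV space (H ranges over [0, 180]).
lower = np.array([0, 60, 40])
upper = np.array([25, 255, 200])
mask = cv2.inRange(hsv, lower, upper)

# Remove speckle with a morphological opening, then measure coverage.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
print(f"estimated corroded-area fraction: {mask.mean() / 255.0:.2%}")
```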

Identifying the Effects of Repeated Tasks in an Apartment Construction Project Using a Machine Learning Algorithm

  • Kim, Hyunjoo
    • Journal of KIBIM / v.6 no.4 / pp.35-41 / 2016
  • The learning effect is the observation that the more times a task is performed, the less time is required to produce the same output. The construction industry relies heavily on repeated tasks, making the learning effect an important measure, yet most construction durations in real projects are calculated and applied without considering the learning effect in each repeated activity. This paper applied the learning effect to the repeated activities in a small apartment construction project. The result showed a difference of about 10 percent in duration (41 days for the total duration with learning effects versus 36.5 days without). To make the comparison between the two approaches, a large number of BIM-based computer simulations were generated, and useful patterns were recognized using the decision-tree machine learning algorithm See5. Machine learning is a data-driven approach to pattern recognition based on observational evidence.
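
The learning effect is commonly modeled as a power curve, where the n-th repetition takes T_n = T_1 · n^b with b = log2(r) for learning rate r. The sketch below compares total duration with and without the effect; the task time, repetition count, and 90% rate are illustrative assumptions, not the paper's BIM simulation inputs.

```python
# Learning-curve duration comparison (illustrative numbers; the paper's
# BIM simulations and See5 analysis are not reproduced here).
import math

def unit_time(t1: float, n: int, rate: float) -> float:
    """Time for the n-th repetition under a power-law learning curve."""
    b = math.log2(rate)          # e.g. rate=0.9 -> each doubling takes 90% as long
    return t1 * n ** b

t1, repetitions, rate = 2.0, 20, 0.9   # days per floor cycle (assumed values)
with_learning = sum(unit_time(t1, n, rate) for n in range(1, repetitions + 1))
without_learning = t1 * repetitions
print(f"with learning effect:    {with_learning:.1f} days")
print(f"without learning effect: {without_learning:.1f} days")
```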

ACCELERATION OF MACHINE LEARNING ALGORITHMS BY TCHEBYCHEV ITERATION TECHNIQUE

  • LEVIN, MIKHAIL P.
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.22 no.1 / pp.15-28 / 2018
  • Machine learning algorithms are now widely used to process big data in various applications, and many of these applications execute at run time, so the speed of machine learning algorithms is a critical issue. However, most modern iterative machine learning algorithms use the successive iteration technique well known in numerical linear algebra. This technique converges very slowly, requiring many iterations to solve the problems under consideration and therefore much processing time, even on modern multi-core computers and clusters. The Tchebychev (Chebyshev) iteration technique, also well known in numerical linear algebra, is an attractive candidate for decreasing the number of iterations in iterative machine learning algorithms, and hence their running time, which is especially important in run-time applications. In this paper we consider the use of Tchebychev iterations to accelerate the well-known K-Means clustering and SVM (Support Vector Machine) algorithms in machine learning. Examples of our approach on modern multi-core computers under the Apache Spark framework are presented and discussed.
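
For reference, the textbook Chebyshev semi-iteration accelerates a stationary iteration for a symmetric positive definite system when bounds on the spectrum are known. The NumPy sketch below shows that linear-system form only; the paper's K-Means/SVM formulation and Spark implementation are not reproduced.

```python
# Textbook Chebyshev semi-iteration for A x = b with SPD A whose
# eigenvalues lie in [lmin, lmax] (bounds assumed known).
import numpy as np

def chebyshev_solve(A, b, lmin, lmax, iters=50):
    d = (lmax + lmin) / 2.0      # center of the spectrum
    c = (lmax - lmin) / 2.0      # half-width of the spectrum
    x = np.zeros_like(b)
    r = b - A @ x
    p = np.zeros_like(b)
    alpha = 0.0
    for k in range(iters):
        if k == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = 0.5 * (c / d) ** 2 if k == 1 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * (A @ p)
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)            # SPD test matrix
b = rng.normal(size=50)
w = np.linalg.eigvalsh(A)                # exact bounds for the demo
x = chebyshev_solve(A, b, w[0], w[-1])
print("residual norm:", np.linalg.norm(b - A @ x))
```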

Trend Analysis of Korea Papers in the Fields of 'Artificial Intelligence', 'Machine Learning' and 'Deep Learning'

  • Park, Hong-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.4 / pp.283-292 / 2020
  • Artificial intelligence, one of the representative images of the 4th industrial revolution, has been highly recognized since 2016. This paper analyzes trends in domestic papers on 'Artificial Intelligence', 'Machine Learning', and 'Deep Learning' among the papers provided by the Korea Academic Education and Information Service. Approximately 10,000 papers were retrieved, and word-count analysis, topic modeling, and semantic network analysis were used to analyze their trends. Compared with 2015, the number of papers in 2016 increased by 600% in artificial intelligence, 176% in machine learning, and 316% in deep learning. In machine learning, support vector machine models have been studied, while in deep learning, convolutional neural networks using TensorFlow are widely used. This paper can help set future research directions in the fields of 'artificial intelligence', 'machine learning', and 'deep learning'.
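
A minimal sketch of the word-count and topic-modeling side of such a trend analysis, using scikit-learn's LDA on a stand-in corpus (the paper's roughly 10,000 retrieved papers are not available here; the three toy documents are illustrative):

```python
# Minimal word-count + topic-modeling sketch on a stand-in corpus.
from collections import Counter
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "support vector machine model for classification",
    "convolutional neural networks with tensorflow for images",
    "deep learning trends in artificial intelligence research",
]

# Word-count analysis over the corpus.
counts = Counter(word for d in docs for word in d.split())
print(counts.most_common(5))

# Topic modeling with LDA.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"topic {i}:", [terms[j] for j in topic.argsort()[-3:]])
```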

Research Trend on Machine Learning Healthcare Based on Keyword Frequency and Centrality Analysis: Focusing on the United States, the United Kingdom, and Korea

  • Lee Taekkyeun
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.3 / pp.149-163 / 2023
  • In this study, we analyze research trends in machine learning healthcare based on papers from the United States, the United Kingdom, and Korea. From Elsevier's Scopus, we collected 3,425 papers related to machine learning healthcare published from 2018 to 2022 and conducted keyword frequency and centrality analysis on their abstracts. We identified frequently appearing keywords by calculating keyword frequency and found central research keywords through centrality analysis by country. The results show that research related to machine learning, deep learning, healthcare, and COVID-19 was the most central and most mediating research in each country. As an implication, studies related to treatment based on electronic health information, natural language processing, and privacy have lower degree centrality and betweenness centrality in Korea than in the United States and the United Kingdom, so diverse convergence research applying machine learning is needed in these fields.
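
A small sketch of the keyword co-occurrence and centrality computation described above, using networkx. The keyword lists below are a stand-in, not the study's 3,425-paper dataset.

```python
# Keyword co-occurrence network with degree and betweenness centrality
# (toy keyword lists stand in for the study's abstracts).
from itertools import combinations
import networkx as nx

abstracts = [
    ["machine learning", "healthcare", "covid"],
    ["deep learning", "healthcare", "privacy"],
    ["machine learning", "natural language processing", "privacy"],
]

G = nx.Graph()
for kws in abstracts:
    for a, b in combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)   # co-occurrence count as edge weight

print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```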

Recent advances in deep learning-based side-channel analysis

  • Jin, Sunghyun;Kim, Suhri;Kim, HeeSeok;Hong, Seokhie
    • ETRI Journal / v.42 no.2 / pp.292-304 / 2020
  • As side-channel analysis and machine learning algorithms share the same objective of classifying data, numerous studies have been proposed for adapting machine learning to side-channel analysis. However, a drawback of machine learning algorithms is that their performance depends on human engineering. Therefore, recent studies in the field focus on exploiting deep learning algorithms, which can extract features automatically from data. In this study, we survey recent advances in deep learning-based side-channel analysis. In particular, we outline how deep learning is applied to side-channel analysis, based on deep learning architectures and application methods. Furthermore, we describe its properties when using different architectures and application methods. Finally, we discuss our perspective on future research directions in this field.
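
As a schematic of the profiled, deep learning-based setting the survey covers, a 1-D CNN can be trained to classify a key-dependent intermediate value directly from raw power traces. The PyTorch sketch below is a hypothetical illustration; the trace length, label space, and architecture are assumptions, not taken from the survey.

```python
# Schematic profiled side-channel classifier: a 1-D CNN mapping raw power
# traces to a key-dependent intermediate value (e.g., an S-box output byte).
# Shapes and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class TraceCNN(nn.Module):
    def __init__(self, trace_len: int = 700, n_classes: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
            nn.Conv1d(8, 16, kernel_size=11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
        )
        self.head = nn.Linear(16 * (trace_len // 4), n_classes)

    def forward(self, x):                     # x: (batch, 1, trace_len)
        return self.head(self.features(x).flatten(1))

model = TraceCNN()
traces = torch.randn(32, 1, 700)              # stand-in power traces
labels = torch.randint(0, 256, (32,))         # stand-in intermediate values
loss = nn.CrossEntropyLoss()(model(traces), labels)
loss.backward()                               # one profiling step
print("loss:", loss.item())
```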

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.25-32 / 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process that derives context information. However, the rules grow complicated as user requirements for the service diversify and data usage increases, so rule-based models face technical limitations in maintenance and in processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. Using such machine learning models in a context-aware system requires a management process that periodically injects training data. A previous study on a machine learning based context-awareness system considered management processes such as generating and providing training data for operating several machine learning models, but the method was limited to the system it was applied to. In this paper, we propose a training data generation method that extends the machine learning based context-aware system. The proposed method defines a training data generation model that reflects the requirements of the machine learning models and generates training data for each of them. In the experiment, the generation model is defined from the training data schema of a cardiac status analysis model for the elderly in a health status notification service, and training data are generated by applying the model in the software's real environment. We then train the machine learning model on the generated data and compare the resulting accuracy to verify the validity of the generated training data.
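
A hypothetical sketch of the schema-driven idea: a generation model declares each feature's sampling range plus a labeling rule, and rows are generated from it for a given machine learning model. The field names, ranges, and rule below are invented for illustration; the paper's actual cardiac-status schema is not given here.

```python
# Hypothetical schema-driven training data generation. Field names, ranges,
# and the labeling rule are invented for illustration only.
import csv
import random

SCHEMA = {
    "heart_rate_bpm": (40, 180),
    "systolic_bp_mmhg": (80, 200),
    "age_years": (60, 95),
}

def label(row: dict) -> str:
    # Toy stand-in for the rule that labels cardiac status samples.
    return "alert" if row["heart_rate_bpm"] > 120 or row["systolic_bp_mmhg"] > 160 else "normal"

def generate(n: int, path: str) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[*SCHEMA, "status"])
        writer.writeheader()
        for _ in range(n):
            row = {k: random.randint(lo, hi) for k, (lo, hi) in SCHEMA.items()}
            row["status"] = label(row)
            writer.writerow(row)

generate(1000, "cardiac_training.csv")   # feeds the context-awareness model
```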

Load Balancing Scheme for Machine Learning Distributed Environment

  • Kim, Younggwan;Lee, Jusuk;Kim, Ajung;Hong, Jiman
    • Smart Media Journal / v.10 no.1 / pp.25-31 / 2021
  • As machine learning becomes more common, application development using machine learning is increasing rapidly, as is research on machine learning platforms to support it. Despite this, research on load balancing suitable for machine learning platforms remains insufficient. In this paper, we therefore propose a load balancing scheme that can be applied to a machine learning distributed environment. The proposed scheme organizes the distributed servers in a level hash table structure and assigns machine learning tasks to servers in consideration of each server's performance. We implemented the distributed servers and experimentally compared the scheme with an existing hashing scheme: the proposed scheme improved speed by 26% on average and reduced the number of tasks waiting to be assigned to a server by more than 38%.
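
A simplified sketch of performance-aware assignment over hashed server buckets. This illustrates only the idea of weighting assignment by server performance; the paper's level hash table structure is not reproduced, and the bucket layout and performance figures are assumptions.

```python
# Simplified performance-aware assignment: a hash picks a candidate bucket,
# and the candidate with the lowest load-to-performance ratio wins.
import hashlib

SERVERS = {"s0": 1.0, "s1": 2.0, "s2": 4.0}    # name -> relative performance
load = {name: 0 for name in SERVERS}            # outstanding tasks per server
BUCKETS = [["s0", "s1"], ["s1", "s2"], ["s2", "s0"]]

def assign(task_id: str) -> str:
    h = int(hashlib.sha256(task_id.encode()).hexdigest(), 16)
    candidates = BUCKETS[h % len(BUCKETS)]
    best = min(candidates, key=lambda s: load[s] / SERVERS[s])
    load[best] += 1
    return best

for t in range(10):
    print(f"task-{t} -> {assign(f'task-{t}')}")
```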

Income prediction of apple and pear farmers in Chungnam area by automatic machine learning with H2O.AI

  • Hyundong, Jang;Sounghun, Kim
    • Korean Journal of Agricultural Science / v.49 no.3 / pp.619-627 / 2022
  • In Korea, apples and pears are among the most important agricultural products for farmers' income. Farmers make decisions at various stages to maximize their income, but they do not always know which option will be best. Many previous studies have addressed this problem by predicting farmers' income structure, yet researchers are still exploring better approaches. Machine learning, a methodology in which algorithms learn independently from data, is gaining attention as one such approach, and its performance keeps improving as computer science develops. The purpose of this study is to predict the income structure of apple and pear farming using the automatic machine learning solution H2O.AI and to present implications for apple and pear farmers. Compared with conventional machine learning tools such as scikit-learn, H2O.AI can save time and effort because it searches for the best model automatically. This research yields the following findings. First, apple farmers should increase their gross income to maximize their income, rather than reducing the cost of growing apples; in particular, they mainly have to increase production to obtain more gross income, and as a second-best option they should decrease labor and other costs. Second, pear farmers should also increase their gross income, but by raising the price of pears rather than increasing production; as a second-best option, they too can decrease labor and other costs.
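
For context, a minimal H2O AutoML run in Python looks like the sketch below. The file name and column names are placeholders; the paper's Chungnam farm survey data are not public here.

```python
# Minimal H2O AutoML sketch (file and column names are placeholders).
import h2o
from h2o.automl import H2OAutoML

h2o.init()
frame = h2o.import_file("farm_income.csv")           # hypothetical dataset
train, test = frame.split_frame(ratios=[0.8], seed=42)

aml = H2OAutoML(max_models=20, seed=42)              # searches models automatically
aml.train(y="income", training_frame=train)          # remaining columns are features

print(aml.leaderboard.head())                        # ranked candidate models
print(aml.leader.predict(test).head())               # predictions from the best model
```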