• Title/Summary/Keyword: Machine learning algorithm

Machine Learning based Prediction of The Value of Buildings

  • Lee, Woosik; Kim, Namgi; Choi, Yoon-Ho; Kim, Yong Soo; Lee, Byoung-Dai
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.8, pp.3966-3991, 2018
  • Due to the lack of visualization services and of organic combinations between public and private building data, the usability of the basic map has remained low. To address this issue, this paper reports on a solution that organically combines public and private data while providing visualization services to general users. For this purpose, factors that can affect building prices were first examined in order to define the related data attributes. To extract the relevant data attributes, this paper presents a method of acquiring public information data and real estate-related information provided by private real estate portal sites. The paper also proposes the preprocessing required for intelligent machine learning. It goes on to suggest an intelligent machine learning algorithm that predicts buildings' current and future value by using big data on buildings' spatial information, acquired from a database containing building value attributes. The algorithm's applicability was tested by establishing a prototype targeting pilot areas, including Suwon, Anyang, and Gunpo in South Korea. Finally, a prototype visualization solution was developed to allow general users to effectively use the building value rankings and prices predicted by intelligent machine learning.
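The abstract does not name the specific regression model, so the following is only a minimal sketch of the kind of value-prediction pipeline described: tabular building attributes in, predicted price out. The file name and feature columns (floor_area, land_price, year_built, distance_to_subway) are illustrative assumptions, not the paper's actual schema.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical combined public + private building data.
buildings = pd.read_csv("building_attributes.csv")
features = ["floor_area", "land_price", "year_built", "distance_to_subway"]

X_train, X_test, y_train, y_test = train_test_split(
    buildings[features], buildings["price"], test_size=0.2, random_state=0)

model = GradientBoostingRegressor(random_state=0)  # stand-in for the paper's model
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```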

Extreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition

  • Ghimire, Deepak; Lee, Joonwhoan
    • Journal of Information Processing Systems, v.10 no.3, pp.443-458, 2014
  • An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-hidden-layer feedforward neural network. In this paper we studied an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavioral interpretation of emotions, in cognitive science, and in social interaction. This paper presents a method for FER based on histogram of oriented gradients (HOG) features using an ELM ensemble. First, the HOG features were extracted from the face image by dividing it into a number of small cells. A bagging algorithm was then used to construct many different bags of training data, each of which was used to train a separate ELM. To recognize the expression of an input face image, its HOG features were fed to each trained ELM and the results were combined by a majority voting scheme. The ELM ensemble using bagging significantly improves the generalization capability of the network. Two publicly available facial expression datasets (JAFFE and CK+) were used to evaluate the performance of the proposed classification system. Even though the performance of an individual ELM was lower, the ELM ensemble trained with the bagging algorithm improved recognition performance significantly.
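A minimal NumPy sketch of the core idea, assuming HOG features have already been extracted (e.g., with skimage.feature.hog): each bootstrap bag trains one ELM with random hidden weights and closed-form output weights, and predictions are combined by majority vote. Hidden-layer size and bag count are illustrative.

```python
import numpy as np

class ELM:
    """Single-hidden-layer network with random weights, solved in closed form."""
    def __init__(self, n_hidden=500, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X, y_onehot):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden layer
        self.beta = np.linalg.pinv(H) @ y_onehot  # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

def train_bagged_elms(X, y, n_classes, n_bags=10, seed=0):
    rng = np.random.default_rng(seed)
    elms = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap "bag"
        elms.append(ELM(rng=rng).fit(X[idx], np.eye(n_classes)[y[idx]]))
    return elms

def majority_vote(elms, X):
    votes = np.stack([elm.predict(X) for elm in elms])  # (n_bags, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```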

FORECASTING GOLD FUTURES PRICES CONSIDERING THE BENCHMARK INTEREST RATES

  • Lee, Donghui; Kim, Donghyun; Yoon, Ji-Hun
    • Journal of the Chungcheong Mathematical Society, v.34 no.2, pp.157-168, 2021
  • This study uses the benchmark interest rate of the Federal Open Market Committee (FOMC) to predict gold futures prices. For the predictions, we used a support vector machine (SVM), a machine-learning model, and the long short-term memory (LSTM) deep-learning model. We found that the LSTM method is more accurate than the SVM method. Moreover, we applied the Boruta algorithm to demonstrate that the FOMC benchmark interest rate correlates with gold futures prices.
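As a hedged illustration of the SVM side of the comparison (the LSTM model would use the same data split), the sketch below predicts the next gold futures price from lagged prices plus the FOMC benchmark rate. The CSV name and column names are assumptions; the chronological split avoids look-ahead.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("gold_futures.csv")  # assumed columns: price, fomc_rate
for lag in (1, 2, 3):
    df[f"price_lag{lag}"] = df["price"].shift(lag)
df = df.dropna()

X = df[["price_lag1", "price_lag2", "price_lag3", "fomc_rate"]]
y = df["price"]
split = int(len(df) * 0.8)  # chronological split: train on the past only

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X.iloc[:split], y.iloc[:split])
rmse = np.sqrt(np.mean((model.predict(X.iloc[split:]) - y.iloc[split:]) ** 2))
print("test RMSE:", rmse)
```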

Application of machine learning in optimized distribution of dampers for structural vibration control

  • Li, Luyu; Zhao, Xuemeng
    • Earthquakes and Structures, v.16 no.6, pp.679-690, 2019
  • This paper presents machine learning methods using a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) to analyze optimal damper distribution for structural vibration control. For different building structures, a genetic-algorithm-based optimization method is used to determine optimal damper distributions, which are then used as training samples. The structural features, the objective function, the number of dampers, etc. are used as input features, and the distribution of dampers is taken as the output. When the number of candidate damper distributions is small, multi-class prediction can be performed using the SVM and the MLP, respectively. Moreover, the MLP can be used for regression prediction when the distribution schemes are too numerous to enumerate as classes. After suitable post-processing, good results can be obtained. Numerical results show that the proposed method can obtain optimized damper distributions for different structures under different objective functions, achieving a better control effect than the traditional uniform distribution while greatly improving optimization efficiency.
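A minimal sketch of the prediction stage described above, assuming the genetic algorithm has already produced optimal distributions to serve as labels. The feature arrays, file names, and hyperparameters are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# X: structural features (e.g., story stiffness, objective-function id,
#    number of dampers); y: index of the GA-optimal distribution scheme.
X = np.load("structural_features.npy")
y = np.load("optimal_distribution_labels.npy")

svm = SVC(kernel="rbf").fit(X, y)                                # multi-class SVM
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# When the scheme space is too large to enumerate, MLPRegressor could instead
# map features directly to per-story damper quantities (regression prediction).
```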

On successive machine learning process for predicting strength and displacement of rectangular reinforced concrete columns subjected to cyclic loading

  • Bu-seog Ju; Shinyoung Kwag; Sangwoo Lee
    • Computers and Concrete, v.32 no.5, pp.513-525, 2023
  • Recently, research on predicting the behavior of reinforced concrete (RC) columns using machine learning methods has been actively conducted. However, most studies have focused on predicting the ultimate strength of RC columns using regression algorithms. This study therefore develops a successive machine learning process for predicting multiple nonlinear behaviors of rectangular RC columns. The process consists of three stages: single machine learning models, a bagging ensemble, and a stacking ensemble. For strength prediction, sufficient accuracy is confirmed even in the first stage. For displacement prediction, although sufficient accuracy is not achieved in the first and second stages, the stacking ensemble model in the third stage performs better than the models from the first and second stages. In addition, the performance of the final prediction models is verified by comparing the backbone curves and hysteresis loops obtained from the predicted outputs with actual experimental data.
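One way to realize the three-stage process in scikit-learn is sketched below: a single learner, a bagged learner, and a stacking ensemble over both. The specific base models and parameters are assumptions for illustration, not the paper's choices.

```python
from sklearn.ensemble import (BaggingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

base_models = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),  # stage 1
    ("bagged_svr", BaggingRegressor(SVR(), n_estimators=20)),         # stage 2
]
# Stage 3: a meta-learner stacked on the stage-1/2 predictions.
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge())
# stack.fit(X, y) would map column properties to strength or displacement.
```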

Prediction of Multi-Physical Analysis Using Machine Learning (기계학습을 이용한 다중물리해석 결과 예측)

  • Lee, Keun-Myoung; Kim, Kee-Young; Oh, Ung; Yoo, Sung-kyu; Song, Byeong-Suk
    • Journal of IKEEE, v.20 no.1, pp.94-102, 2016
  • This paper proposes a new prediction method to reduce the time and labor of repetitive multi-physics simulation. Obtaining exact results from the full simulation process requires complex modeling and a huge amount of time. Current multi-physics analysis focuses on the simulation method itself and the simulation environment to reduce time and labor. This paper instead proposes an alternative: reducing simulation time and labor by exploiting a machine learning algorithm trained on a data set built from simulation results. Comparing several machine learning algorithms, Gaussian process regression showed the best performance with fewer than 100 training samples, demonstrating that results similar to the full simulation can be achieved through machine learning without a complex simulation process. Given a trained machine learning algorithm, the result of changing some features of the simulation model can be predicted in just a few seconds. This new method can effectively reduce simulation time and labor because it predicts the results before further simulation is run.
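The surrogate idea is easy to reproduce with scikit-learn's Gaussian process regressor; the sketch below uses synthetic stand-ins for the simulation inputs and outputs, with fewer than 100 training points as in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_sim = rng.random((80, 3))          # 80 simulated design points (< 100)
y_sim = np.sin(X_sim).sum(axis=1)    # placeholder for simulation outputs

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X_sim, y_sim)

# Predicting a new design takes milliseconds instead of a full simulation run.
y_pred, y_std = gpr.predict(rng.random((5, 3)), return_std=True)
```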

Analysis of Open-Source Hyperparameter Optimization Software Trends

  • Lee, Yo-Seob; Moon, Phil-Joo
    • International Journal of Advanced Culture Technology, v.7 no.4, pp.56-62, 2019
  • Recently, research using artificial neural networks has expanded beyond improving inference accuracy into neural network optimization and automatic structure search. The performance of a machine learning algorithm depends on how its hyperparameters are configured, so open-source hyperparameter optimization software can be an important step toward improving that performance. In this paper, we review open-source hyperparameter optimization software packages.
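As one concrete example of the pattern such software implements, the sketch below uses Optuna, a widely used open-source hyperparameter optimizer; the objective here is a toy stand-in for a real train-and-validate routine.

```python
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    n_layers = trial.suggest_int("n_layers", 1, 4)
    # A real objective would train a model with (lr, n_layers)
    # and return its validation loss.
    return (lr - 0.01) ** 2 + 0.001 * n_layers

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```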

Research Trends in Wi-Fi Performance Improvement in Coexistence Networks with Machine Learning (기계학습을 활용한 이종망에서의 Wi-Fi 성능 개선 연구 동향 분석)

  • Kang, Young-myoung
    • Journal of Platform Technology, v.10 no.3, pp.51-59, 2022
  • Machine learning, which has developed rapidly in recent years, has become an important technology for solving various optimization problems. In this paper, we introduce the latest research papers that solve the channel-sharing problem in heterogeneous networks using machine learning, analyze the characteristics of the mainstream approaches, and present a guide to future research directions. Existing studies have generally adopted Q-learning, since it supports fast learning in both online and offline environments. However, these studies have either not considered various coexistence scenarios or have not considered the placement of the machine-learning controller, which can significantly affect network performance. One powerful way to overcome these disadvantages is to selectively apply a machine learning algorithm according to changes in the network environment, based on the logical network architecture for machine learning proposed by the ITU.
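For reference, the tabular Q-learning update most of the surveyed studies adopt is sketched below; the states, actions, and reward are abstract placeholders (e.g., observed channel occupancy mapped to a choice of access parameters), not any specific paper's formulation.

```python
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    # Placeholder environment: in a coexistence study the reward would come
    # from measured Wi-Fi throughput or fairness after taking the action.
    return rng.integers(n_states), rng.random()

state = 0
for _ in range(10_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, a)
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    state = next_state
```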

Scheduling Algorithm, Based on Reinforcement Learning for Minimizing Total Tardiness in Unrelated Parallel Machines (이종 병렬설비에서 총납기지연 최소화를 위한 강화학습 기반 일정계획 알고리즘)

  • Tehie Lee; Jae-Gon Kim; Woo-Sik Yoo
    • Journal of the Korea Safety Management & Science, v.25 no.4, pp.131-140, 2023
  • This paper proposes an algorithm for the Unrelated Parallel Machine Scheduling Problem (UPMSP) without setup times, aiming to minimize total tardiness. Because the UPMSP is NP-hard, it is difficult to obtain an optimal solution, so practical instances are solved by relying on operators' experience or simple heuristics. The proposed algorithm combines two components: a Transformer-based policy network that computes the correlation between individual jobs and machines, and a training method based on the REINFORCE with Baseline reinforcement learning algorithm. The proposed algorithm was evaluated on randomly generated problems, and the results were compared with those obtained using CPLEX as well as three scheduling algorithms. The test results confirm that the proposed algorithm outperforms the comparison algorithms.
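The REINFORCE-with-baseline update at the heart of the training method can be sketched in a few lines of PyTorch; the scheduling environment and the Transformer policy itself are abstracted away, and the tensors below are dummies for illustration.

```python
import torch

def reinforce_with_baseline_loss(log_probs, returns, baseline):
    # log_probs: log pi(a_t | s_t) for each job-assignment decision, per episode
    # returns:   episode return, e.g., negated total tardiness (to be maximized)
    # baseline:  variance-reducing estimate of the return (greedy rollout, mean, ...)
    advantage = (returns - baseline).detach()
    return -(advantage * log_probs.sum(dim=-1)).mean()

# Dummy batch: 32 sampled schedules, 10 assignment decisions each.
log_probs = torch.randn(32, 10, requires_grad=True)
returns = -torch.rand(32)            # negative total tardiness
loss = reinforce_with_baseline_loss(log_probs, returns, returns.mean())
loss.backward()                      # gradients would update the policy network
```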

A Comparative Study of Prediction Models for College Student Dropout Risk Using Machine Learning: Focusing on the case of N university (머신러닝을 활용한 대학생 중도탈락 위험군의 예측모델 비교 연구 : N대학 사례를 중심으로)

  • So-Hyun Kim; Sung-Hyoun Cho
    • Journal of The Korean Society of Integrative Medicine, v.12 no.2, pp.155-166, 2024
  • Purpose: This study aims to identify key factors for predicting dropout risk at the university level and to provide a foundation for policies aimed at dropout prevention. It explores the optimal machine learning algorithm by comparing the performance of various algorithms on data about college students' dropout risk. Methods: Data on factors influencing dropout risk and propensity were collected from N University. The collected data were applied to several machine learning algorithms, including random forest, decision tree, artificial neural network, logistic regression, support vector machine (SVM), k-nearest neighbor (k-NN) classification, and naive Bayes. The performance of these models was compared and evaluated, with a focus on predictive validity and on identifying significant dropout factors through the information gain index. Results: The binary logistic regression analysis showed that the year of the program, department, grades, and year of entry had statistically significant effects on dropout risk. Among the machine learning algorithms, random forest performed best. The relative importance of the predictor variables was highest for department, followed by age, grade, and residence (whether or not it matched the school's location). Conclusion: Machine learning-based prediction of dropout risk focuses on the early identification of at-risk students. The types and causes of dropout crises vary significantly among students, so it is important to identify them early so that appropriate actions and support can be taken to remove risk factors and strengthen protective factors. The relative importance of the factors found in this study can help guide educational interventions for preventing college student dropout.
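A hedged sketch of the best-performing setup reported above, a random forest with feature importances, is shown below; the file and column names are illustrative assumptions, not N University's actual schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

students = pd.read_csv("student_records.csv")  # hypothetical data file
features = ["department", "age", "grade", "residence_matches_campus"]
X = pd.get_dummies(students[features])         # one-hot encode categoricals
y = students["dropped_out"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("accuracy:", rf.score(X_te, y_te))
print(pd.Series(rf.feature_importances_, index=X.columns)
        .sort_values(ascending=False))         # factor importance ranking
```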