• Title/Summary/Keyword: Neural Network Modeling


Development of Artificial Intelligence Joint Model for Hybrid Finite Element Analysis (하이브리드 유한요소해석을 위한 인공지능 조인트 모델 개발)

  • Jang, Kyung Suk;Lim, Hyoung Jun;Hwang, Ji Hye;Shin, Jaeyoon;Yun, Gun Jin
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.773-782 / 2020
  • The development of joint FE models for deep learning neural network (DLNN)-based hybrid FEA is presented. Material models of bolts and bearings in the front axle of a tractor, which show complex behavior induced by various tightening conditions, were replaced with DLNN models. Bolts were modeled as one-dimensional Timoshenko beam elements with six degrees of freedom, and bearings as three-dimensional solid elements. Stress-strain data were extracted from all elements after finite element analyses under various load conditions, and the DLNNs for the bolts and bearings were trained with TensorFlow. The DLNN-based joint models were implemented in ABAQUS user subroutines, in which the stresses for the next increment are updated and the algorithmic tangent stiffness matrix is calculated. Generalization of the trained DLNNs within the FE model was verified by subjecting them to a new loading condition. Finally, the DLNN-based FEA of the tractor front axle was conducted, and its feasibility was verified by comparison with the results of a static structural experiment on the actual tractor.
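
As an illustration of the kind of surrogate described above, the following is a minimal sketch (not the authors' code) of a TensorFlow network that maps strain components to stress components and is differentiated to obtain an algorithmic tangent; the placeholder data, layer sizes, and six-component layout are assumptions made for illustration only.

```python
# Minimal sketch: a DLNN surrogate mapping strain components to stress components,
# trained on stress-strain pairs such as those extracted from FE analyses.
import numpy as np
import tensorflow as tf

# Hypothetical training data: N samples of 6 strain components -> 6 stress components.
strain = np.random.rand(1000, 6).astype("float32")   # placeholder for FE-extracted strains
stress = np.random.rand(1000, 6).astype("float32")   # placeholder for FE-extracted stresses

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh", input_shape=(6,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(6),                         # predicted stress components
])
model.compile(optimizer="adam", loss="mse")
model.fit(strain, stress, epochs=50, batch_size=32, verbose=0)

# In a user-subroutine-style stress update, the algorithmic tangent d(stress)/d(strain)
# can be obtained by differentiating the trained network with respect to its input.
x = tf.constant(strain[:1])
with tf.GradientTape() as tape:
    tape.watch(x)
    s = model(x)
tangent = tape.batch_jacobian(s, x)                   # shape (1, 6, 6): tangent stiffness
```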

Analysis of Contribution of Climate and Cultivation Management Variables Affecting Orchardgrass Production (오차드그라스의 생산량에 영향을 미치는 기후 및 재배관리의 기여도 분석)

  • Moonju Kim;Ji Yung Kim;Mu-Hwan Jo;Kyungil Sung
    • Journal of The Korean Society of Grassland and Forage Science / v.43 no.1 / pp.1-10 / 2023
  • This study aimed to determine the importance ratio of climate and management variables for the production of orchardgrass in Korea (1982-2014). For climate, the mean temperature in January (MTJ, ℃), lowest temperature in January (LTJ, ℃), growing days 0 to 5 (GD 1, day), growing days 5 to 25 (GD 2, day), summer depression days (SSD, day), rainfall days (RD, day), accumulated rainfall (AR, mm), and sunshine duration (SD, hr) were considered. For management, the establishment period (EP, 0-6 years) and number of cuttings (NC, 2nd-5th) were measured. The importance ratio for orchardgrass production was estimated using a neural network model with the perceptron method, performed in SPSS 26.0 (IBM Corp., Chicago). As a result, EP was the most important variable (100%), followed by RD (82.0%), AR (79.1%), NC (69.2%), LTJ (66.2%), GD 2 (63.3%), GD 1 (61.6%), SD (58.1%), SSD (50.8%), and MTJ (41.8%). This implies that EP, RD, AR, and NC were more important than the other variables. Since the annual rainfall in Korea exceeds the amount required for the growth and development of orchardgrass, damage caused by heavy rainfall beyond the appropriate level could be reduced through drainage management. In other words, when cultivating orchardgrass, the factors that can be controlled were relatively important. Although neural network modeling makes it difficult to interpret the specific effect of each climate variable on production, this study is expected to be useful for production prediction and for estimating damage from climate change by selecting the major factors.
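
Because the importance ratios above come from SPSS's multilayer perceptron procedure, they cannot be reproduced directly here; the sketch below shows a loosely analogous approach in Python, using permutation importance on an MLP regressor, with hypothetical column names matching the abstract's variable abbreviations and random placeholder data.

```python
# Illustrative sketch only: permutation importance on an MLP regressor, approximating
# the variable-importance idea described in the abstract. Data are placeholders.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

cols = ["EP", "RD", "AR", "NC", "LTJ", "GD2", "GD1", "SD", "SSD", "MTJ"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, len(cols))), columns=cols)   # placeholder predictors
y = rng.random(200)                                            # placeholder production values

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
imp = permutation_importance(mlp, X, y, n_repeats=30, random_state=0)

# Express importances as a ratio relative to the most important variable (max = 100%).
ratio = 100 * imp.importances_mean / imp.importances_mean.max()
print(pd.Series(ratio, index=cols).sort_values(ascending=False))
```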

Data-driven Modeling for Valve Size and Type Prediction Using Machine Learning (머신 러닝을 이용한 밸브 사이즈 및 종류 예측 모델 개발)

  • Chanho Kim;Minshick Choi;Chonghyo Joo;A-Reum Lee;Yun Gun;Sungho Cho;Junghwan Kim
    • Korean Chemical Engineering Research / v.62 no.3 / pp.214-224 / 2024
  • Valves play an essential role in chemical plants, regulating fluid flow and pressure, so optimal selection of valve size and type is an essential task. Conventionally, valve size and type have been selected based on theoretical formulas for the valve sizing coefficient (Cv). However, this approach has limitations, such as requiring expert knowledge and consuming substantial time and cost. This study therefore developed models for predicting valve size and type using machine learning. Models were developed with four algorithms (ANN, Random Forest, XGBoost, and CatBoost), and performance was evaluated using NRMSE and the R2 score for size prediction and the F1 score for type prediction. Additionally, a case study was conducted to explore the impact of phase on valve selection, using four datasets: total fluids, liquids, gases, and steam. For valve size prediction, the total-fluid, liquid, and gas datasets showed the best performance with CatBoost (based on R2, total: 0.99216, liquid: 0.98602, gas: 0.99300; based on NRMSE, total: 0.04072, liquid: 0.04886, gas: 0.03619), while the steam dataset showed the best performance with Random Forest (R2: 0.99028, NRMSE: 0.03493). For valve type prediction, CatBoost outperformed on all datasets with the highest F1 scores (total: 0.95766, liquids: 0.96264, gases: 0.95770, steam: 1.0000). In the Engineering, Procurement, and Construction (EPC) industry, the proposed fluid-specific machine learning-based models are expected to guide the selection of suitable valves for given process conditions and facilitate faster decision-making.
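
A minimal sketch of the size-regression and type-classification setup is given below, using Random Forest (one of the four compared algorithms) from scikit-learn on placeholder data; the feature set, targets, and metric layout are illustrative assumptions, not the study's dataset.

```python
# Minimal sketch (hypothetical features and data): valve size regression and valve
# type classification with Random Forest, scored with R2/NRMSE and F1 as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.metrics import r2_score, mean_squared_error, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 5))                                 # placeholder process conditions
size = X @ rng.random(5) + 0.1 * rng.random(500)         # placeholder valve size target
vtype = (X[:, 0] > 0.5).astype(int)                      # placeholder valve type label

X_tr, X_te, size_tr, size_te, type_tr, type_te = train_test_split(
    X, size, vtype, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, size_tr)
pred_size = reg.predict(X_te)
nrmse = np.sqrt(mean_squared_error(size_te, pred_size)) / (size_te.max() - size_te.min())
print("R2:", r2_score(size_te, pred_size), "NRMSE:", nrmse)

clf = RandomForestClassifier(random_state=0).fit(X_tr, type_tr)
print("F1:", f1_score(type_te, clf.predict(X_te)))
```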

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allow us to see which pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find the site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with demographics, including search keywords, frequency and intensity by time, day, and month, the variety of websites visited, and text information from the web pages visited. The demographic attributes to be predicted also vary across studies, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used to build prediction models. However, prior research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated in order to build the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method is best suited to predict each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job. From the results of previous research, 64 clickstream attributes are applied to predict the demographic attributes. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering 5 demographics and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best with decision tree-based dimension reduction and a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
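
The modeling steps described above (dimension reduction followed by alternative classifiers, evaluated with 5-fold cross-validation) can be sketched roughly as follows; the data, the number of PCA components, and the classifier settings are placeholder assumptions rather than the study's actual configuration.

```python
# Illustrative sketch (placeholder data): dimension reduction followed by alternative
# classifiers, evaluated with 5-fold cross-validation, mirroring the described workflow.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                 # placeholder: 64 clickstream attributes
gender = rng.integers(0, 2, 1000)          # placeholder demographic label

candidates = {
    "SVM (no reduction)": make_pipeline(StandardScaler(), SVC()),
    "PCA + logistic":     make_pipeline(StandardScaler(), PCA(n_components=10),
                                        LogisticRegression(max_iter=1000)),
    "PCA + neural net":   make_pipeline(StandardScaler(), PCA(n_components=10),
                                        MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in candidates.items():
    acc = cross_val_score(model, X, gender, cv=5).mean()   # 5-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```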

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against systems connected to networks have become frequent, and such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS), the security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well in normal situations, but they show poor performance when they encounter a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations, such as the risk of overfitting, the requirement of a large sample size, and the lack of insight into the prediction process (i.e., the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider asymmetric error costs by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which a wrong judgment may result in unnecessary remedial work. The second is the False-Negative Error (FNE), which misjudges a malicious program as normal. Compared to FPE, FNE is more damaging, so when considering the total cost of misclassification in IDS it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, our proposed intrusion detection model optimizes the classification threshold so as to minimize the total misclassification cost. In this case, conventional SVM cannot be applied because it is designed to generate discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run with PASW Statistics v18.0, ANN with NeuroShell 4.0, and SVM with LIBSVM v2.90, a freeware package for training SVM classifiers. Empirical results showed that our proposed SVM-based model outperformed all the other comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared to the ANN-based intrusion detection model. As a result, the intrusion detection model proposed in this paper is expected not only to enhance the performance of IDS but also to lead to better management of FNE.
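
The threshold-optimization idea can be illustrated with a short sketch: an SVM with Platt-scaled probabilities, and a scan over classification thresholds that minimizes a total misclassification cost in which false negatives are weighted more heavily than false positives. The synthetic data and the cost weights are assumptions for illustration only.

```python
# Minimal sketch (synthetic data, assumed cost weights): Platt-scaled SVM probabilities
# with a classification threshold chosen to minimize an asymmetric misclassification cost.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 10))
y = (X[:, 0] + 0.3 * rng.standard_normal(1000) > 0.5).astype(int)   # 1 = intrusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)         # Platt (2000) scaling
p = svm.predict_proba(X_te)[:, 1]

C_FN, C_FP = 5.0, 1.0      # assumed asymmetric costs: missing an intrusion costs more
best_t, best_cost = 0.5, np.inf
for t in np.linspace(0.05, 0.95, 19):
    pred = (p >= t).astype(int)
    cost = (C_FN * np.sum((pred == 0) & (y_te == 1)) +
            C_FP * np.sum((pred == 1) & (y_te == 0)))
    if cost < best_cost:
        best_t, best_cost = t, cost
print("optimal threshold:", best_t, "total cost:", best_cost)
```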

Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora;Jang, Keunchang;Kim, Eunsook;Kang, Minseok;Chun, Jung-Hwa;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.1 / pp.29-41 / 2019
  • Terrestrial Gross Primary Production (GPP) is the largest global carbon flux, and forest ecosystems are important because of their ability to store significantly more carbon than other terrestrial ecosystems. There have been several attempts to estimate GPP using mechanism-based models. However, mechanism-based models, which incorporate biological, chemical, and physical processes, are limited by a lack of flexibility in predicting non-stationary ecological processes caused by local and global change. Instead, mechanism-free methods are strongly recommended for estimating the nonlinear dynamics that occur in nature, such as GPP. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, support vector machine (SVM), random forest (RF), and artificial neural network (ANN) models were used and compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data were employed to train the machine learning and LM models from 2006 to 2013. The GPP prediction models were compared with daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistical measures including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE) were used to evaluate model performance. In general, the models from machine learning algorithms (R = 0.85 - 0.93, MSE = 1.00 - 2.05, p < 0.001) showed better performance than the linear regression model (R = 0.82 - 0.92, MSE = 1.24 - 2.45, p < 0.001). These results indicate high predictability and the possibility of extension through the use of mechanism-free machine learning models and remote sensing for predicting non-stationary ecological processes such as seasonal GPP.
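
A rough sketch of the model comparison is shown below, fitting SVM, random forest, ANN, and linear regression models on placeholder predictors and reporting R and RMSE; the synthetic data and model settings are assumptions, not the study's MODIS/eddy-covariance inputs.

```python
# Rough sketch (placeholder predictors): comparing SVM, RF, ANN, and a linear model
# for daily GPP prediction, scored with R and RMSE as in the abstract.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((2000, 6))          # placeholder: MODIS products + meteorological drivers
gpp = 5 * X[:, 0] + 2 * X[:, 1] ** 2 + 0.5 * rng.standard_normal(2000)   # synthetic GPP
X_tr, X_te, y_tr, y_te = train_test_split(X, gpp, random_state=0)

models = {"SVM": SVR(),
          "RF": RandomForestRegressor(random_state=0),
          "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
          "LM": LinearRegression()}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    r = np.corrcoef(y_te, pred)[0, 1]
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: R = {r:.2f}, RMSE = {rmse:.2f}")
```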

Satellite-Based Cabbage and Radish Yield Prediction Using Deep Learning in Kangwon-do (딥러닝을 활용한 위성영상 기반의 강원도 지역의 배추와 무 수확량 예측)

  • Hyebin Park;Yejin Lee;Seonyoung Park
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.1031-1042 / 2023
  • In this study, a deep learning model was developed to predict the yields of cabbage and radish, two of the five major supply-and-demand management vegetables, using Landsat 8 satellite images. To predict the yields of cabbage and radish in Gangwon-do from 2015 to 2020, satellite images from June to September, the growing period of cabbage and radish, were used. The normalized difference vegetation index, enhanced vegetation index, leaf area index, and land surface temperature were employed as input data for the yield model. Crop yields can be effectively predicted using satellite images because satellites collect continuous spatiotemporal data on the global environment. Based on a model developed in a previous study, a model tailored to these input data was proposed in this study. A convolutional neural network, a deep learning model, was used with the time-series satellite images to predict crop yield. Landsat 8 provides images every 16 days, but it is difficult to acquire images, especially in summer, due to the influence of weather such as clouds. Accordingly, yield prediction was conducted by splitting the period into two parts, June to July and August to September. Yield prediction was performed using the machine learning approach and reference models, and modeling performance was compared. The model's performance and early predictability were assessed using year-by-year cross-validation and early prediction. The findings of this study could serve as a basis for predicting the yields of field crops in Korea.
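
The sketch below gives a schematic idea of such a CNN-based yield model in Keras, mapping stacked index/temperature patches to a single yield value; the patch size, channel count, and data are assumptions for illustration, not the study's actual inputs.

```python
# Illustrative sketch only (shapes are assumptions): a small CNN that maps stacked
# time-series vegetation-index/LST patches to a yield value per sample.
import numpy as np
import tensorflow as tf

# Placeholder: N patches of 32x32 pixels, 8 channels (e.g., NDVI/EVI/LAI/LST x 2 periods).
X = np.random.rand(200, 32, 32, 8).astype("float32")
y = np.random.rand(200, 1).astype("float32")          # placeholder yield values

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 8)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                          # predicted yield
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)
```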

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.101-116 / 2015
  • Recently, due to the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems. Accordingly, the focus of data-driven decision-making is moving from structured data analysis to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data has steadily increased in order to improve system performance. Approaches to improving the performance of recommendation systems fall into two categories: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts have taken the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, as the interests of users are directly connected with their needs, identifying user interests through unstructured big data analysis can be a key to improving the performance of recommendation systems. Accordingly, this study proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing users' Internet usage patterns, and to predict users' repurchases based on the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase histories; we include it in our research scope to compare the accuracy of the traditional purchase-based prediction model with the new model presented in the second module. This procedure extracts the purchase history of users. The core of our methodology is the second module, which extracts users' interests by analyzing the news articles they have read. The second module constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles. The module then analyzes users' news access patterns and constructs a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. This paper also provides experimental results of our performance evaluation. The data used in the experiments are as follows. We acquired web transaction data for 5,000 panel users from a company specializing in analyzing the rankings of Internet sites. We first extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of those news articles. We then selected 2,615 users who had read at least one of the extracted news articles. Among these 2,615 users, the number of target users who purchased at least one item from our target shopping mall 'G' was 359. In the experiments, we analyzed the purchase histories and news access records of these 359 Internet users. The performance evaluation showed that our prediction model, which uses both users' interests and purchase history, outperforms a prediction model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network-based models were used. Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision tree-based models were used, although the improvement was very small.
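
The core matrix construction of the second module can be sketched as follows: LDA topic modeling yields an articles-by-topics matrix, which, multiplied by a users-by-articles access matrix, gives the users-by-topics interest matrix described above. The toy corpus and access counts below are placeholders.

```python
# Minimal sketch (toy corpus, hypothetical access matrix): LDA topic modeling of news
# articles, then a user-topic interest matrix from the users-by-articles access counts.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = ["stocks rally as markets rebound", "new smartphone camera review",
            "election campaign enters final week", "team wins championship final"]
counts = CountVectorizer().fit_transform(articles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
article_topic = lda.fit_transform(counts)              # articles x topics

# Placeholder users-by-articles access counts (how often each user read each article).
user_article = np.array([[3, 0, 1, 0],
                         [0, 2, 0, 4]])
user_topic = user_article @ article_topic              # users x topics interest matrix
user_topic = user_topic / user_topic.sum(axis=1, keepdims=True)
print(user_topic)
```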

Removal of Seabed Multiples in Seismic Reflection Data using Machine Learning (머신러닝을 이용한 탄성파 반사법 자료의 해저면 겹반사 제거)

  • Nam, Ho-Soo;Lim, Bo-Sung;Kweon, Il-Ryong;Kim, Ji-Soo
    • Geophysics and Geophysical Exploration / v.23 no.3 / pp.168-177 / 2020
  • Seabed multiple reflections (seabed multiples) are a main cause of misinterpretation of primary reflections in both shot gathers and stack sections; accordingly, they need to be suppressed throughout data processing. Conventional model-driven methods, such as prediction-error deconvolution and Radon filtering, and data-driven methods, such as the surface-related multiple elimination technique, have been used to attenuate multiple reflections. However, the vast majority of processing workflows require time-consuming steps for testing and selecting processing parameters, in addition to computational power and skilled data-processing techniques. To attenuate seabed multiples in seismic reflection data, input gathers with seabed multiples and label gathers without seabed multiples were generated via numerical modeling using the Marmousi2 velocity structure. The training data consisted of normal-moveout-corrected common midpoint gathers fed into a U-Net neural network. The trained model was found to attenuate the seabed multiples effectively, judging from the image similarity between the prediction results and the target data, and demonstrated good applicability to field data.
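
A schematic sketch of the network setup is shown below: a small U-Net trained as image-to-image regression from gathers with seabed multiples to gathers without them. The gather dimensions, network depth, and random placeholder data are assumptions, not the authors' configuration.

```python
# Schematic sketch (shapes assumed): a small U-Net mapping NMO-corrected CMP gathers
# with seabed multiples to target gathers without them (image-to-image regression).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def small_unet(shape=(128, 64, 1)):
    inp = layers.Input(shape)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(layers.Concatenate()([u2, c2]))
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(layers.Concatenate()([u1, c1]))
    out = layers.Conv2D(1, 1, padding="same")(c4)      # predicted multiple-free gather
    return tf.keras.Model(inp, out)

# Placeholder gathers: inputs contain seabed multiples, labels do not (e.g., from modeling).
x = np.random.rand(32, 128, 64, 1).astype("float32")
y = np.random.rand(32, 128, 64, 1).astype("float32")
model = small_unet()
model.compile(optimizer="adam", loss="mae")
model.fit(x, y, epochs=5, batch_size=8, verbose=0)
```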

Face Detection in Color Images Based on Skin Region Segmentation and Neural Network (피부 영역 분할과 신경 회로망에 기반한 칼라 영상에서 얼굴 검출)

  • Lee, Young-Sook;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.1-11 / 2006
  • Many research demonstrations and commercial applications have attempted to develop face detection and recognition systems. Human face detection plays an important role in applications such as access control, video surveillance, human-computer interfaces, and identity authentication. In general, skin-region segmentation raises several problems: a face connected with the background, faces connected to each other via skin color, and a face divided into several small parts. Many face detection techniques can handle the first and second problems. However, it is not easy to detect a face divided into several regions, which arises from differing illumination conditions, and the typical region segmentation algorithm cannot be used in this case. Therefore, we propose an efficient modified skin segmentation algorithm to solve this problem. Our algorithm detects skin regions over the entire image and then generates face candidate regions from the skin segmentation results. For each face candidate, we perform region merging for the divided regions, joining homogeneous regions based on their adjacency. We use search windows of various sizes to detect faces of different sizes, and a face detection classifier based on a back-propagation algorithm to verify whether each search window contains a face.
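
A conceptual sketch of this pipeline is given below: a rough skin-color mask, multi-scale search windows over skin-dominated areas, and an MLP (back-propagation) classifier verifying each window. The color thresholds, window sizes, and training patches are placeholder assumptions, not the paper's values.

```python
# Conceptual sketch (thresholds and window sizes are assumptions): skin-region masking
# followed by multi-scale window verification with a back-propagation (MLP) classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def skin_mask(rgb):
    """Very rough RGB skin-color rule; returns a boolean mask of candidate skin pixels."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)

# Placeholder: an MLP trained on flattened 20x20 face / non-face patches (random stand-ins).
patches = np.random.rand(200, 20 * 20)
labels = np.random.randint(0, 2, 200)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(patches, labels)

image = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)   # placeholder color image
mask = skin_mask(image)
detections = []

# Slide windows of several sizes over skin-dominated areas and verify each with the MLP.
for size in (24, 36, 48):
    for y in range(0, image.shape[0] - size, size // 2):
        for x in range(0, image.shape[1] - size, size // 2):
            if mask[y:y + size, x:x + size].mean() < 0.5:
                continue                                        # skip non-skin windows
            window = image[y:y + size, x:x + size, 0] / 255.0
            step = max(size // 20, 1)
            feat = window[::step, ::step][:20, :20].reshape(1, -1)   # crude 20x20 feature
            if clf.predict(feat)[0] == 1:
                detections.append((x, y, size))                  # record verified face window
print(detections)
```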
