• Title/Summary/Keyword: mean-squared error

Search Results: 716

Tidal Level Prediction of Busan Port using Long Short-Term Memory (Long Short-Term Memory를 이용한 부산항 조위 예측)

  • Kim, Hae Lim;Jeon, Yong-Ho;Park, Jae-Hyung;Yoon, Han-sam
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.4 / pp.469-476 / 2022
  • This study developed a recurrent neural network model, implemented with Long Short-Term Memory (LSTM), that generates long-term tidal level data at Busan Port from tide observation data. Tide levels at Busan Port were predicted using, as model input, the tide data observed by the Korea Hydrographic and Oceanographic Administration (KHOA) at Busan New Port and Tongyeong. The model was trained on one month of data from January 2019, and its accuracy was then evaluated over one year from February 2019 to January 2020. The constructed model showed the highest performance, with a correlation coefficient of 0.997 and a root mean squared error of 2.69 cm, when the tide time series of Busan New Port and Tongyeong were input together. The study's findings reveal that long-term tidal level prediction for an arbitrary port is possible using a deep learning recurrent neural network model.
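As a rough illustration of the pipeline described in this abstract (not the authors' code), the sketch below maps a window of tide observations from two input stations to a target tide level with a Keras LSTM and reports the paper's metrics, RMSE and the correlation coefficient. The window length, layer sizes, and data are assumptions for illustration only.

```python
# Minimal sketch: LSTM regression on multi-station tide series (assumed shapes).
import numpy as np
import tensorflow as tf

WINDOW = 24          # assumed input window of past observations
N_STATIONS = 2       # e.g. Busan New Port and Tongyeong series used together

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_STATIONS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),          # predicted tide level (cm)
])
model.compile(optimizer="adam", loss="mse")

# Dummy arrays stand in for the observed tide time series.
X = np.random.rand(1000, WINDOW, N_STATIONS).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Evaluation with the metrics reported in the paper: RMSE and correlation.
pred = model.predict(X, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y.ravel()) ** 2)))
corr = float(np.corrcoef(pred, y.ravel())[0, 1])
print(f"RMSE={rmse:.3f}, r={corr:.3f}")
```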

A novel radioactive particle tracking algorithm based on deep rectifier neural network

  • Dam, Roos Sophia de Freitas;dos Santos, Marcelo Carvalho;do Desterro, Filipe Santana Moreira;Salgado, William Luna;Schirru, Roberto;Salgado, Cesar Marques
    • Nuclear Engineering and Technology / v.53 no.7 / pp.2334-2340 / 2021
  • Radioactive particle tracking (RPT) is a minimally invasive nuclear technique that tracks a radioactive particle inside a volume of interest by means of a mathematical location algorithm. During the past decades, many algorithms have been developed, including ones based on artificial intelligence techniques. In this study, the RPT technique is applied to a simulated test section comprising a simplified mixer filled with concrete, six scintillator detectors, and a ¹³⁷Cs radioactive particle emitting gamma rays of 662 keV. The test section was modeled using the MCNPX code, a mathematical code based on Monte Carlo simulation, and 3516 different radioactive particle positions (x, y, z) were simulated. The novelty of this paper is the use of a location algorithm based on a deep learning model, more specifically a 6-layer deep rectifier neural network (DRNN), whose hyperparameters were defined using a Bayesian optimization method. The DRNN is a type of deep feedforward neural network that replaces the usual sigmoid-based activation functions, traditionally used in vanilla multilayer perceptron networks, with rectified activation functions. The results show the high accuracy of the DRNN in an RPT tracking system. The root mean squared error for the x, y, and z coordinates of the radioactive particle is, respectively, 0.03064, 0.02523, and 0.07653.
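A hedged sketch of the kind of network the abstract describes follows: a deep feedforward model with rectified (ReLU) activations mapping counts from six detectors to the particle position (x, y, z), scored with per-coordinate RMSE. Layer widths and the placeholder data are assumptions, not the authors' configuration.

```python
# Minimal sketch: deep rectifier (ReLU) network for 3-D position regression.
import numpy as np
import tensorflow as tf

N_DETECTORS = 6
layers = [tf.keras.Input(shape=(N_DETECTORS,))]
layers += [tf.keras.layers.Dense(128, activation="relu") for _ in range(6)]  # six rectifier layers
layers.append(tf.keras.layers.Dense(3))                                      # x, y, z coordinates
model = tf.keras.Sequential(layers)
model.compile(optimizer="adam", loss="mse")

# Placeholder data standing in for the 3516 simulated detector-count/position pairs.
X = np.random.rand(3516, N_DETECTORS).astype("float32")
y = np.random.rand(3516, 3).astype("float32")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

pred = model.predict(X, verbose=0)
rmse_xyz = np.sqrt(np.mean((pred - y) ** 2, axis=0))   # RMSE per coordinate
print(dict(zip(["x", "y", "z"], rmse_xyz.round(5))))
```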

Development of Diameter Growth Models by Thinning Intensity of Planted Quercus glauca Thunb. Stands

  • Jung, Su Young;Lee, Kwang Soo;Kim, Hyun Soo
    • Journal of People, Plants, and Environment / v.24 no.6 / pp.629-638 / 2021
  • Background and objective: This study was conducted to develop diameter growth models for thinned Quercus glauca Thunb. (QGT) stands, to inform production goals for treatment, and to provide the information necessary for the systematic management of these stands. Methods: The study was conducted on QGT stands whose initial thinning was completed in 2013, in order to develop a treatment system. To analyze the tree growth and trait responses to each thinning treatment, forestry surveys were conducted in 2014 and 2021, and a one-way analysis of variance (ANOVA) was performed. In addition, non-linear least squares regression with the PROC NLIN procedure was used to develop an optimal diameter growth model. Results: Based on growth and trait analyses, height and the height-to-diameter (H/D) ratio did not differ among treatment plots (p > .05). For diameter at breast height (DBH), the heavy thinning (HT) treatment plot was significantly larger than the control plot (p < .05). In the diameter growth models developed for each treatment plot, the mean squared error (MSE) of the Gompertz polymorphic equation (control: 2.2381, light thinning: 0.8478, heavy thinning: 0.8679) was the lowest in all treatment plots, and the Shapiro-Wilk test indicated normally distributed residuals (p > .95), so it was selected as the best-fit equation for the diameter growth model. Conclusion: The findings of this study provide basic data for the systematic management of Quercus glauca Thunb. stands. It is necessary to construct permanent sample plots (PSP) that consider stand status, site conditions, and climatic environments.
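For readers unfamiliar with the procedure, a minimal sketch of fitting a Gompertz-type diameter growth curve by non-linear least squares and scoring it with MSE is shown below. It is analogous in spirit to the PROC NLIN workflow named in the abstract, but the parameterization, starting values, and stand data are hypothetical.

```python
# Minimal sketch: non-linear least squares fit of a Gompertz growth curve.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(age, a, b, c):
    """Assumed Gompertz growth form: DBH = a * exp(-b * exp(-c * age))."""
    return a * np.exp(-b * np.exp(-c * age))

# Hypothetical stand ages (years) and mean DBH (cm) for one treatment plot.
age = np.array([5, 8, 10, 12, 15, 18, 20], dtype=float)
dbh = np.array([4.1, 6.8, 8.2, 9.5, 11.3, 12.6, 13.4])

params, _ = curve_fit(gompertz, age, dbh, p0=[20.0, 3.0, 0.1])
pred = gompertz(age, *params)
mse = np.mean((dbh - pred) ** 2)
print(f"fitted a, b, c = {params.round(3)}, MSE = {mse:.4f}")
```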

Prediction of Fabric Drape Using Artificial Neural Networks (인공신경망을 이용한 드레이프성 예측)

  • Lee, Somin;Yu, Dongjoo;Shin, Bona;Youn, Seonyoung;Shim, Myounghee;Yun, Changsang
    • Journal of the Korean Society of Clothing and Textiles / v.45 no.6 / pp.978-985 / 2021
  • This study aims to propose a prediction model for the drape coefficient using artificial neural networks and to analyze the nonlinear relationship between the drape properties and the physical properties of fabrics. The study validates the significance of each factor affecting fabric drape through multiple linear regression analysis with a sample size of 573; the analysis yields a model with an adjusted R² of 77.6%. Seven main factors affect the drape coefficient: grammage, the extruded length values for warp and weft (mwarp, mweft), the coefficients of the quadratic terms in the tensile-force quadratic graph in the warp, weft, and bias directions (cwarp, cweft, cbias), and the force required for 1% tension in the warp direction (fwarp). Finally, an artificial neural network was created using the seven selected factors. Its performance was examined while increasing the number of hidden neurons, and the most suitable number of hidden neurons was found to be 8. The mean squared error was .052 and the correlation coefficient was .863, confirming a satisfactory model. The developed artificial neural network model can be used for engineering and high-quality clothing design, and it is expected to provide essential data on clothing appearance, such as fabric drape.
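The network described is small enough to sketch directly. The illustrative example below (hypothetical data, not the authors' dataset) trains a single-hidden-layer network with 8 neurons on seven fabric features and reports MSE and the correlation coefficient.

```python
# Minimal sketch: one-hidden-layer ANN (8 neurons) predicting a drape coefficient.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((573, 7))     # grammage, m_warp, m_weft, c_warp, c_weft, c_bias, f_warp (placeholders)
y = rng.random(573)          # drape coefficient (placeholder values)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)

pred = mlp.predict(scaler.transform(X_te))
mse = np.mean((pred - y_te) ** 2)
corr = np.corrcoef(pred, y_te)[0, 1]
print(f"MSE={mse:.3f}, r={corr:.3f}")
```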

Development of ensemble machine learning model considering the characteristics of input variables and the interpretation of model performance using explainable artificial intelligence (수질자료의 특성을 고려한 앙상블 머신러닝 모형 구축 및 설명가능한 인공지능을 이용한 모형결과 해석에 대한 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater / v.36 no.4 / pp.239-248 / 2022
  • The prediction of algal blooms is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of an algal bloom. In recent years, advanced machine learning algorithms have been increasingly used for the prediction of algal blooms. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used for the training and testing of the model. In the first step of the study, the input variables were clustered into two groups (low and high value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared to the prediction of an XGB model developed using the entire dataset before clustering (Model 2). The model performance was evaluated using three indices, including the root mean squared error to observation standard deviation ratio (RSR). Model performance was improved with Model 1 for TEMP, TN, and TP, for which the RSR of each model was 0.503, 0.477, and 0.493, respectively, while the RSR of Model 2 was 0.521. On the other hand, Model 2 showed better performance than Model 1 for TOC, where the RSR was 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning. Shapley value analysis, a novel XAI algorithm, was also used for the quantitative interpretation of the XGB models developed in this study.
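A hedged sketch of the modeling and interpretation steps named above follows: an XGBoost regressor for Chl-a, scored with RSR (RMSE divided by the standard deviation of the observations) and interpreted with Shapley values. Feature names, hyperparameters, and data are placeholders, not the paper's configuration.

```python
# Minimal sketch: XGBoost regression, RSR metric, and SHAP interpretation.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
features = ["TEMP", "TOC", "TN", "TP"]            # assumed subset of inputs
X = rng.random((500, len(features)))
y = rng.random(500)                               # Chl-a concentration (placeholder)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

pred = model.predict(X)
rmse = np.sqrt(np.mean((y - pred) ** 2))
rsr = rmse / np.std(y)                            # RSR: lower is better
print(f"RSR = {rsr:.3f}")

explainer = shap.TreeExplainer(model)             # Shapley values for interpretation
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))           # mean |SHAP| per feature
```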

Building of cyanobacteria forecasting model using transformer (Transformer를 이용한 유해남조 발생 예측 모델 구축)

  • Hankyu Lee;Jin Hwi Kim;Seohyun Byeon;Jae-Ki Shin;Yongeun Park
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.515-515 / 2023
  • Lake Paldang is a reservoir formed at the confluence of the Bukhan and Namhan rivers and is an important water supply source for Seoul and the eastern Gyeonggi Province area of the capital region. Because the occurrence of harmful cyanobacteria in Lake Paldang is directly linked to the use of raw water for drinking water supply, rapid and accurate management and prediction are required. In this study, a deep learning model was built to predict harmful cyanobacteria in advance for the safe use of the water source. The model input variables were ten years (2012-2021) of weekly Lake Paldang water quality data (water temperature, DO, BOD, COD, Chl-a, TN, TP, pH, electrical conductivity, TDN, NH4N, NO3N, TDP, PO4P, suspended solids), hydrological data (inflow, total discharge), meteorological data (mean, minimum, and maximum temperature, daily precipitation, mean wind speed, mean relative humidity, total sunshine duration), and cyanobacteria cell counts at the Bukhan River and Namhan River inflow points. The model output variable was the cyanobacteria cell count in front of the dam one week later, considering the lag between the water quality, hydrological, and meteorological drivers and cyanobacterial growth. The deep learning method used was the Temporal Fusion Transformer (TFT), which has recently attracted attention. The training and test data were split at a ratio of 8:2, and the validation data were taken from within the training data at a training-to-validation ratio of 6:4. The lookback was set to 5, reflecting the weekly resolution of the dataset. Model performance was evaluated by computing R-square and Root Mean Squared Error (RMSE) from observed and predicted values. Training was repeated for a total of 154 iterations, and the best-performing point was the 54th iteration, at which the validation loss relative to the training loss was most favorable (training loss: 0.443, validation loss: 0.380). R-square was 0.681 in training, 0.654 in validation, and 0.606 in testing. RMSE was 0.614 µg/L in training, 0.617 µg/L in validation, and 0.773 µg/L in testing. Considering that the dataset is weekly, the performance of the model built in this study can be considered satisfactory even though a small dataset was used. If the dataset is supplemented and the model is updated in future research, the model performance is expected to improve further.
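To make the data handling concrete, the sketch below illustrates only the splits and metrics described in the abstract (the Temporal Fusion Transformer itself is omitted): an 8:2 train/test split, a further 6:4 train/validation split within the training set, and R-square / RMSE computed from observed and predicted cell counts. All arrays are placeholders.

```python
# Minimal sketch of the split scheme and evaluation metrics (forecaster omitted).
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
weeks = 520                                  # roughly 10 years of weekly records
data = rng.random((weeks, 30))               # water quality + hydrology + weather features
target = rng.random(weeks)                   # cyanobacteria cell count one week ahead

n_train = int(weeks * 0.8)                   # 8:2 train/test split
train, test = data[:n_train], data[n_train:]
n_fit = int(n_train * 0.6)                   # 6:4 train/validation split within training data
fit, val = train[:n_fit], train[n_fit:]

# With predictions from a fitted forecaster (placeholder values here):
pred_test = rng.random(weeks - n_train)
obs_test = target[n_train:]
rmse = np.sqrt(mean_squared_error(obs_test, pred_test))
r2 = r2_score(obs_test, pred_test)
print(f"test RMSE={rmse:.3f}, R2={r2:.3f}")
```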

Verifying Applicability of Multi-Timescale Rainfall Data from CHIRPS Satellite (다중시간 규모의 CHIRPS 위성 강우자료에 대한 활용성 검증)

  • Minseok Kim;Kyunghun Kim;Seong Cheol Shin;Soojun Kim;Hung Soo Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.192-192 / 2023
  • Rain gauges are one of the traditional means of collecting rainfall data and allow continuous, direct measurement. However, they are affected by topographic characteristics and tend to underestimate rainfall. To address this, radar-based rainfall measurement, which can capture localized storms, rainfall movement, and rainfall conditions, is also used, but radar-based measurement likewise tends to underestimate rainfall. To overcome these measurement limits, satellite-based rainfall data have recently been used. Satellite-based rainfall data can be collected even in locations that are difficult to measure, and observing land-surface changes can improve the accuracy of rainfall estimation. The CHIRPS (Climate Hazards Group InfraRed Precipitation with Stations) dataset, a high-resolution satellite product developed with support from USAID, NASA, and NOAA, provides quasi-global (50°S-50°N, 180°E-180°W) rainfall data at a 0.05° × 0.05° resolution from 1980 to the present. In this study, CHIRPS rainfall data were compared against the monthly and daily rainfall observed at 54 ASOS (Automated Synoptic Observing System) stations nationwide. They were also compared with other satellite rainfall products (APHRODITE (Asian Precipitation Highly Resolved Observation Data Integration Towards Evaluation) and CMORPH (Climate Prediction Center morphing method)) to assess their applicability in Korea. To compare the accuracy of the rainfall data, box plots and the RMSE (Root Mean Squared Error) were computed, and an error matrix was used to compare rainfall occurrence days. The comparison showed that the CHIRPS rainfall data have higher applicability in Korea than the other satellite rainfall products, and they are expected to be usable as basic data in future domestic hydrological research.
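A small illustrative sketch of this comparison (placeholder arrays, not the ASOS or CHIRPS data) follows: daily gauge rainfall versus satellite estimates evaluated with RMSE, plus an error matrix of wet/dry days. The 0.1 mm wet-day threshold is an assumption.

```python
# Minimal sketch: gauge-versus-satellite rainfall comparison with RMSE and an error matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.5, scale=5.0, size=365)     # daily gauge rainfall (mm), placeholder
chirps = np.clip(gauge + rng.normal(0.0, 2.0, 365), 0.0, None)   # satellite estimate (mm)

rmse = np.sqrt(np.mean((chirps - gauge) ** 2))
print(f"RMSE = {rmse:.2f} mm")

# Rain-occurrence skill: classify each day as wet (>= 0.1 mm, assumed threshold) or dry.
obs_wet = gauge >= 0.1
sat_wet = chirps >= 0.1
print(confusion_matrix(obs_wet, sat_wet))             # error matrix of wet/dry days
```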

An Application of Machine Learning in Retail for Demand Forecasting

  • Muhammad Umer Farooq;Mustafa Latif;Waseemullah;Mirza Adnan Baig;Muhammad Ali Akhtar;Nuzhat Sana
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.1-7 / 2023
  • Demand prediction is an essential component of any business or supply chain. Large retailers need to keep track of tens of millions of item flows each day to ensure smooth operations and strong margins, and demand prediction sits at the epicenter of this planning tornado. For business processes in retail companies that deal with a variety of products with short shelf lives and foodstuffs, forecast accuracy is of the utmost importance because of shifting demand patterns in a dynamic, fast-response environment. All sectors strive to produce the ideal quantity of goods at the ideal time, but for retailers this issue is especially crucial because they must also manage perishable inventories effectively. In light of this, this research aims to show how machine learning approaches can help with demand forecasting in retail and with future sales predictions. This is done in two steps: first using historical sales data, and then adding open data on weather conditions, fuel prices, the Consumer Price Index (CPI), holidays, and any specific events in the area. Several machine learning algorithms were applied and compared using the R-squared and mean absolute percentage error (MAPE) assessment metrics. The suggested method improves the effectiveness and quality of feature selection, using a small number of well-chosen features to increase demand prediction accuracy. The model is tested with a one-year weekly dataset after being trained with a two-year weekly dataset. The results show that the suggested expanded feature selection approach provides a very good MAPE range, a respectable and encouraging value for anticipating retail demand in retail systems.
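A hedged sketch of the evaluation scheme described above follows: candidate regressors for weekly retail demand are compared with R-squared and MAPE, training on two years of weekly data and testing on one. The feature set, models, and synthetic sales figures are assumptions for illustration.

```python
# Minimal sketch: comparing regressors for weekly demand with R-squared and MAPE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(0)
# Assumed features: temperature, fuel price, CPI, holiday flag.
X = rng.random((156, 4))                                           # 3 years of weekly observations
y = 100 + 50 * X[:, 0] + 20 * X[:, 3] + rng.normal(0, 5, 156)      # synthetic weekly sales

X_tr, y_tr = X[:104], y[:104]          # two-year training window
X_te, y_te = X[104:], y[104:]          # one-year test window

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          f"R2={r2_score(y_te, pred):.3f}",
          f"MAPE={mean_absolute_percentage_error(y_te, pred):.3f}")
```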

An Application of Machine Learning in Retail for Demand Forecasting

  • Muhammad Umer Farooq;Mustafa Latif;Waseem;Mirza Adnan Baig;Muhammad Ali Akhtar;Nuzhat Sana
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.210-216 / 2023
  • Demand prediction is an essential component of any business or supply chain. Large retailers need to keep track of tens of millions of item flows each day to ensure smooth operations and strong margins, and demand prediction sits at the epicenter of this planning tornado. For business processes in retail companies that deal with a variety of products with short shelf lives and foodstuffs, forecast accuracy is of the utmost importance because of shifting demand patterns in a dynamic, fast-response environment. All sectors strive to produce the ideal quantity of goods at the ideal time, but for retailers this issue is especially crucial because they must also manage perishable inventories effectively. In light of this, this research aims to show how machine learning approaches can help with demand forecasting in retail and with future sales predictions. This is done in two steps: first using historical sales data, and then adding open data on weather conditions, fuel prices, the Consumer Price Index (CPI), holidays, and any specific events in the area. Several machine learning algorithms were applied and compared using the R-squared and mean absolute percentage error (MAPE) assessment metrics. The suggested method improves the effectiveness and quality of feature selection, using a small number of well-chosen features to increase demand prediction accuracy. The model is tested with a one-year weekly dataset after being trained with a two-year weekly dataset. The results show that the suggested expanded feature selection approach provides a very good MAPE range, a respectable and encouraging value for anticipating retail demand in retail systems.

Mapping Poverty Distribution of Urban Area using VIIRS Nighttime Light Satellite Imageries in D.I Yogyakarta, Indonesia

  • KHAIRUNNISAH;Arie Wahyu WIJAYANTO;Setia PRAMANA
    • Asian Journal of Business Environment / v.13 no.2 / pp.9-20 / 2023
  • Purpose: This study aims to map the spatial distribution of poverty using nighttime light satellite imagery as a proxy indicator of economic activity and infrastructure distribution in D.I Yogyakarta, Indonesia. Research design, data, and methodology: The study uses official poverty statistics (the National Socio-economic Survey (SUSENAS) and the Poverty Database 2015) to assess the ability of satellite imagery to identify poor urban areas in D.I Yogyakarta. SUSENAS, the macro-level poverty statistic, uses expenditure to determine the poor in a region; the Poverty Database 2015 (BDT 2015), the micro-level poverty statistic, uses asset ownership to determine the poor population in an area. Pearson correlation is used to identify the correlation among variables, and a Support Vector Regression (SVR) model is constructed to estimate the poverty level at a granular resolution of 1 km x 1 km. Results: The macro poverty level and moderate annual nighttime light intensity have a Pearson correlation of 74 percent, higher than that of micro poverty, whose Pearson correlation was 49 percent in 2015. The SVR prediction model achieves a root mean squared error (RMSE) of up to 8.48 percent on SUSENAS 2020 poverty data. Conclusion: Nighttime light satellite imagery has potential benefits as alternative data to support regional poverty mapping, especially in urban areas. Satellite imagery data are better at predicting regional poverty based on expenditure than based on asset ownership at the micro level. Nighttime light intensity better reflects electricity consumption for economic activity at night, which is captured in spending on electricity rather than in asset ownership.
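An illustrative sketch of the two analysis steps named in the abstract follows: a Pearson correlation between nighttime light intensity and a poverty indicator, then a Support Vector Regression estimating the poverty level from light features, scored with RMSE. The features and data are placeholders, not the SUSENAS or BDT 2015 records.

```python
# Minimal sketch: Pearson correlation plus SVR poverty estimation with RMSE.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
light = rng.random((200, 3))                                 # assumed VIIRS radiance features per 1 km cell
poverty = 30 - 20 * light[:, 0] + rng.normal(0, 3, 200)      # poverty rate (%), placeholder

r, _ = pearsonr(light[:, 0], poverty)
print(f"Pearson r = {r:.2f}")

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr.fit(light, poverty)
pred = svr.predict(light)
rmse = np.sqrt(np.mean((poverty - pred) ** 2))
print(f"RMSE = {rmse:.2f} percentage points")
```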