• Title/Summary/Keyword: result predictions


Analysis of streamflow prediction performance by various deep learning schemes

  • Le, Xuan-Hien;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.131-131
    • /
    • 2021
  • Deep learning models, especially those based on long short-term memory (LSTM), have recently demonstrated their superiority in addressing time series problems. This study comprehensively evaluates the performance of supervised deep learning models in streamflow prediction. Six deep learning models were examined: standard LSTM, standard gated recurrent unit (GRU), stacked LSTM, bidirectional LSTM (BiLSTM), feed-forward neural network (FFNN), and convolutional neural network (CNN). The Red River system, one of the largest river basins in Vietnam, was adopted as a case study. The models were designed to forecast flowrate one and two days ahead at Son Tay hydrological station on the Red River, using observed flowrate series from seven hydrological stations on three major branches of the Red River system (the Thao, Da, and Lo rivers) as input data for training, validation, and testing. The comparison indicates that the four LSTM-based models exhibit significantly better performance and greater stability than the FFNN and CNN models. Moreover, LSTM-based models can produce accurate predictions even in the presence of upstream reservoirs and dams. For the stacked LSTM and BiLSTM models, the added complexity is not accompanied by a performance improvement, as their performance is no higher than that of the two standard models (LSTM and GRU). These results suggest that, for hydrological forecasting problems, simple architectures such as LSTM and GRU with one hidden layer are sufficient to produce highly reliable forecasts while minimizing computation time, given the sequential nature of the data.

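As a sketch of the supervised framing the study describes (not the authors' code), the sliding-window step that turns an observed flowrate series into one- or two-day-ahead training pairs might look like the following; the window length and toy series are assumptions:

```python
import numpy as np

def make_supervised(flow, n_lags=7, lead=1):
    """Frame a flowrate series as (X, y) pairs for `lead`-day-ahead prediction.

    flow   : 1-D array of daily observed flowrate
    n_lags : number of past days fed to the model (assumed value)
    lead   : forecast horizon in days (1 or 2 in the study)
    """
    X, y = [], []
    for t in range(n_lags, len(flow) - lead + 1):
        X.append(flow[t - n_lags:t])      # past n_lags observations
        y.append(flow[t + lead - 1])      # target `lead` days ahead
    return np.asarray(X), np.asarray(y)

flow = np.arange(20, dtype=float)         # stand-in for an observed flowrate series
X, y = make_supervised(flow, n_lags=7, lead=2)
print(X.shape, y.shape)                   # → (12, 7) (12,)
```

Each row of X would then be fed to the recurrent or convolutional model as one input sequence.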

How much change is optimal when a brand is newly rebranded?

  • Chu, Kyounghee;Lee, Doo-Hee;Yeu, Minsun;Park, Sangtae
    • Asia Marketing Journal
    • /
    • v.15 no.4
    • /
    • pp.161-186
    • /
    • 2014
  • Rebranding cases are numerous and growing, yet rebranding remains under-researched in the academic field, and there is no guideline on how to change a brand name effectively. The objective of this paper is to integrate two inconsistent predictions about the relationship between brand name incongruity and consumer evaluation into one framework: a negative linear relationship (categorization theory) versus an inverted-U-shaped relationship (schema incongruity theory). Specifically, this study examines how the effect of incongruity between an existing brand name and a new brand name (hereafter "brand name incongruity") on attitude toward the new name differs by an individual consumer characteristic, need for cognition. The experiment demonstrates that consumers with a high need for cognition show a better attitude toward a new brand name when it is moderately incongruent with the old name than when it is congruent or extremely incongruent; that is, for these consumers there is an inverted-U-shaped relationship between brand name incongruity and new brand name evaluation. In contrast, consumers with a low need for cognition show a better attitude toward a new brand name when it is congruent than under either incongruent condition (moderate or extreme incongruity), indicating a negative linear relationship between brand name incongruity and new brand name evaluation. The key theoretical and managerial implications are as follows. The study integrates two alternative views of incongruity evaluation into one framework by demonstrating that need for cognition moderates the relationship between brand name incongruity and consumer evaluation, providing a conceptual basis for understanding consumer evaluation of a new brand name. Further, although rebranding is a critical brand-management decision, no guideline has existed on how to change a brand name; the findings suggest which degree of change is optimal for utilizing and strengthening existing brand equity. Specifically, when the target customer has a high need for cognition, moderately incongruent rebranding can be optimal, whereas for customers with a low need for cognition, rebranding congruently with the existing brand name will be optimal.


Mapping Landslide Susceptibility Based on Spatial Prediction Modeling Approach and Quality Assessment (공간예측모형에 기반한 산사태 취약성 지도 작성과 품질 평가)

  • Al, Mamun;Park, Hyun-Su;JANG, Dong-Ho
    • Journal of The Geomorphological Association of Korea
    • /
    • v.26 no.3
    • /
    • pp.53-67
    • /
    • 2019
  • The purpose of this study is to assess the quality of landslide susceptibility mapping in a landslide-prone area (Jinbu-myeon, Gangwon-do, South Korea) using spatial prediction modeling approaches and to compare the results obtained. For this goal, a landslide inventory map was prepared mainly from historical records and aerial photograph analysis (Daum Map, 2008), supplemented by field observation. In total, 550 landslides were identified across the study area, of which 182 were debris flows; each landslide group was mapped separately in the inventory. The inventory was then randomly split in Excel: 50% of the landslides were used for model analysis and the remaining 50% for validation. Twelve contributing factors were used in the analysis: slope, aspect, curvature, topographic wetness index (TWI), elevation, forest type, forest timber diameter, forest crown density, geology, land use, soil depth, and soil drainage. To examine the correlation between causative factors and landslide occurrence, the pixels of each factor were divided into classes and the frequency ratio of each class was extracted. Six landslide susceptibility maps were then constructed using the Bayesian Predictive Discriminant (BPD), Empirical Likelihood Ratio (ELR), and Linear Regression Method (LRM) models based on the different data categories. Finally, in the cross-validation process, each susceptibility map was evaluated with a receiver operating characteristic (ROC) curve, the area under the curve (AUC) was calculated, and success-rate curves were extracted. For the total dataset, the Bayesian, likelihood, and linear models achieved accuracies of 85.52%, 85.23%, and 83.49%, respectively. For the debris flow category, the results were slightly better, with accuracies of 86.33%, 85.53%, and 84.17%. This indicates that all three models are reasonable methods for landslide susceptibility analysis and can produce reliable predictions for regional spatial or land-use planning.
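The frequency-ratio step described above — the share of landslide pixels falling in a factor class divided by the share of all pixels in that class — can be sketched as follows; the toy slope classes and landslide mask are invented for illustration:

```python
import numpy as np

def frequency_ratio(class_map, landslide_mask):
    """Frequency ratio per factor class: (share of landslide pixels in the
    class) divided by (share of all pixels in the class)."""
    ratios = {}
    total_pix = class_map.size
    total_ls = landslide_mask.sum()
    for c in np.unique(class_map):
        in_class = class_map == c
        pix_share = in_class.sum() / total_pix
        ls_share = (landslide_mask & in_class).sum() / total_ls
        ratios[int(c)] = float(ls_share / pix_share)
    return ratios

# Toy example: two slope classes, landslides concentrated in class 1
slope_class = np.array([0, 0, 0, 1, 1, 1, 1, 1])
landslides  = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=bool)
print(frequency_ratio(slope_class, landslides))   # → {0: 0.0, 1: 1.6}
```

A ratio above 1 marks a class where landslides are over-represented relative to its areal share.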

Forecasting Market trends of technologies using Bigdata (빅데이터를 이용한 기술 시장동향 예측)

  • Mi-Seon Choi;Yong-Hwack Cho;Jin-Hwa Kim
    • Journal of Industrial Convergence
    • /
    • v.21 no.10
    • /
    • pp.21-28
    • /
    • 2023
  • As the need for big data grows, analysis activities using big data, including SNS data, are being carried out by individuals, companies, and governments. However, existing research on predicting technology market trends has relied mainly on expert judgment or on patent- and literature-based data, so objective technology prediction using big data is needed. This study therefore presents a model for predicting future technologies through decision tree analysis, visualization analysis, and percentage analysis of data from social network services (SNS). The results show that percentage analysis was better at predicting positively trending technologies than the other analyses, while visualization analysis was better at predicting negatively trending technologies; decision tree analysis also produced meaningful predictions.
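A minimal sketch of a percentage analysis over sentiment-labeled SNS posts; the keywords, labels, and data structure here are assumptions for illustration, not the study's dataset:

```python
from collections import Counter

def sentiment_share(posts):
    """Percentage of positive mentions per technology keyword."""
    counts = Counter(posts)                       # counts (tech, label) pairs
    techs = sorted({tech for tech, _ in posts})
    share = {}
    for t in techs:
        pos, neg = counts[(t, "pos")], counts[(t, "neg")]
        share[t] = 100.0 * pos / (pos + neg)
    return share

# Hypothetical SNS posts already labeled by sentiment
posts = [("AI", "pos"), ("AI", "pos"), ("AI", "neg"),
         ("blockchain", "neg"), ("blockchain", "pos")]
print(sentiment_share(posts))
```

Ranking technologies by this share is one simple way to flag positively trending topics.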

Prediction of Draft Force of Moldboard Plow according to Travel Speed in Cohesive Soil using Discrete Element Method (이산요소법을 활용한 점성토 환경에서의 작업 속도에 따른 몰드보드 플라우 견인력 예측)

  • Bo Min Bae;Dae Wi Jung;Dong Hyung Ryu;Jang Hyeon An;Se O Choi;Yeon Soo Kim;Yong Joo Kim
    • Journal of Drive and Control
    • /
    • v.20 no.4
    • /
    • pp.71-79
    • /
    • 2023
  • In the field of agricultural machinery, various field tests are conducted to measure design loads for the optimal design of agricultural equipment. However, field test procedures are costly and time-consuming, and weather imposes many constraints on field soil conditions, so research on using simulation to overcome these shortcomings is needed. This study therefore modeled agricultural soil using discrete element method (DEM) software. Draft force was predicted at several travel speeds and compared with field test results to validate the prediction accuracy. A soil property measurement procedure was designed to obtain the physical and mechanical properties used for DEM modeling, and the DEM soil model was calibrated using a virtual vane shear test instead of a repose angle test. The DEM simulation results showed that the prediction accuracy of the draft force was within 4.8% (2.16~6.71%) of the draft force measured in the field test. In addition, the result was up to 72.51% more accurate than those obtained through theoretical methods for predicting draft force. This study provides useful information on a DEM soil modeling process that accounts for working speed in agricultural machinery research, and it is expected to be utilized in agricultural machinery design research.
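The accuracy comparison against the field test reduces to a relative-error calculation; a minimal sketch with hypothetical draft forces (the values are invented, not the study's measurements):

```python
def relative_error(predicted, measured):
    """Percent deviation of a simulated draft force from the field measurement."""
    return abs(predicted - measured) / measured * 100.0

# Hypothetical draft forces in kN at one travel speed
print(round(relative_error(10.5, 11.0), 2))   # → 4.55
```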

Comparative Analysis of Reliability Predictions for Quality Assurance Factors in FIDES (FIDES의 품질 보증 인자에 대한 신뢰도 예측 비교 분석)

  • Cheol-Hwan Youn;Jin-Uk Seo;Seong-Keun Jeong;Hyun-Ung Oh
    • Journal of Aerospace System Engineering
    • /
    • v.18 no.2
    • /
    • pp.21-28
    • /
    • 2024
  • In light of the rapid development of the space industry, there has been increased attention on small satellites. These satellites rely on components that are considered to have lower reliability compared to larger-scale satellites. As a result, predicting reliability becomes even more crucial in this context. Therefore, this study aims to compare three reliability prediction techniques: MIL-HDBK-217F, RiAC-HDBK-217Plus, and FIDES. The goal is to determine a suitable reliability standard specifically for nano-satellites. Furthermore, we have refined the quality assurance factors of the manufacturing company. These factors have been adjusted to be applicable across various industrial sectors, with a particular focus on the selected FIDES prediction standard. This approach ensures that the scoring system accurately reflects the suitability for the aerospace industry. Finally, by implementing this refined system, we confirm the impact of the manufacturer's quality assurance level on the total failure rate.
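Handbook-style reliability prediction of this kind ultimately sums per-part failure rates scaled by quality (and other) adjustment factors for a series system, which is how a manufacturer's quality assurance level feeds into the total failure rate. The sketch below uses invented base rates and factors, not actual MIL-HDBK-217F or FIDES values:

```python
def system_failure_rate(parts):
    """Series-system failure rate: sum of (base rate x quality factor) per part.
    Rates are in FIT (failures per 1e9 hours)."""
    return sum(base * pi_q for base, pi_q in parts)

# (base FIT, quality assurance factor) per component -- illustrative values only
parts = [(12.0, 1.5), (4.0, 2.0), (30.0, 1.0)]
lam = system_failure_rate(parts)
mtbf_hours = 1e9 / lam                 # mean time between failures
print(lam, round(mtbf_hours))          # → 56.0 17857143
```

Raising a part's quality factor raises its contribution to the total rate and lowers the system MTBF accordingly.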

Prediction of Composition Ratio of DNA Solution from Measurement Data with White Noise Using Neural Network (잡음이 포함된 측정 자료에 대한 신경망의 DNA 용액 조성비 예측)

  • Gyeonghee Kang;Minji Kim;Hyomin Lee
    • Korean Chemical Engineering Research
    • /
    • v.62 no.1
    • /
    • pp.118-124
    • /
    • 2024
  • Neural networks are used for de-noising preprocessing of electrocardiogram signals, retinal images, seismic waves, and other measurements. However, the de-noising process can increase computational time and distort the original signals. In this study, we investigated a neural network architecture that analyzes measurement data without an additional de-noising step. From the dynamical behavior of DNA in aqueous solution, our model aims to predict the mole fraction of each DNA in the solution. By artificially adding white noise to the DNA dynamics data, we investigated the effect of noise on the network's predictions. As a result, the model was able to predict the DNA mole fractions with an error of O(0.01) when the signal-to-noise ratio was O(1). This work can be applied as an efficient artificial intelligence methodology for analyzing DNA related to genetic diseases or cancer cells, where measurements are sensitive to background noise.
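Corrupting a signal with white noise at a target signal-to-noise ratio, as done to the DNA dynamics data, can be sketched as follows; the sine trace is a stand-in for the actual dynamics data:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_white_noise(signal, snr):
    """Add zero-mean Gaussian noise so that power(signal)/power(noise) ≈ snr."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / snr
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)        # stand-in for a DNA dynamics trace
noisy = add_white_noise(clean, snr=1.0)  # SNR of O(1), as in the study
print(np.mean(clean ** 2) / np.var(noisy - clean))
```

The printed empirical ratio should sit near the requested SNR of 1, up to sampling fluctuation.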

A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra;Shanthi Mahesh;M.V.P. Chandrasekhara Rao
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.12
    • /
    • pp.101-106
    • /
    • 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is a promising strategy. The issue with existing feature selection approaches is that each method yields a distinct set of attributes that affect model accuracy, and current methods do not perform well on large multidimensional datasets. We introduce a novel model containing a feature selection approach that selects optimal attributes from large multidimensional datasets to provide reliable predictions of chronic illness without sacrificing data uniqueness. To ensure the success of the proposed model, we balanced the classes using hybrid balanced-class sampling methods on the original dataset, along with data pre-processing and data transformation methods, to provide credible data for the training model. We ran and assessed the model on datasets with binary and multi-valued classifications, using multiple datasets (Parkinson's, arrhythmia, breast cancer, kidney, diabetes). Suitable features are selected by a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, with voting over the attributes common to the outputs of these methods. Accuracy on the original dataset, before applying the framework, is recorded and compared against accuracy on the reduced attribute set, and the results are shown separately for comparison. Based on the result analysis, we conclude that the proposed model produced higher accuracy on multi-valued class datasets than on binary class datasets.
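The voting step over the base selectors' outputs can be sketched as follows; the feature names and per-method selections are hypothetical, standing in for what LassoCV, the tree ensembles, and the other selectors would return:

```python
from collections import Counter

def vote_features(selections, min_votes=3):
    """Keep features selected by at least `min_votes` of the base methods."""
    votes = Counter(f for sel in selections for f in sel)
    return sorted(f for f, v in votes.items() if v >= min_votes)

# Hypothetical outputs of the base selectors (LassoCV, decision tree, ...)
selections = [
    {"age", "bmi", "glucose"},        # LassoCV
    {"age", "glucose", "bp"},         # decision tree
    {"glucose", "bmi", "bp"},         # random forest
    {"age", "glucose", "insulin"},    # gradient boosting
    {"glucose", "bp"},                # AdaBoost
]
print(vote_features(selections, min_votes=3))   # → ['age', 'bp', 'glucose']
```

The surviving attribute set is what the final classifier would be retrained on for the accuracy comparison.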

Assessment of the Potential Impact of Climate Change on the Drought in Agricultural Reservoirs under SSP Scenarios (SSP 시나리오를 고려한 농업용 저수지의 이수측면 잠재영향평가)

  • Kim, Siho;Jang, Min-Won;Hwang, Syewoon
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.2
    • /
    • pp.35-52
    • /
    • 2024
  • This study assessed the potential impact of climate change on drought in agricultural reservoirs using the SSP (Shared Socioeconomic Pathways) scenarios recently proposed by the IPCC (Intergovernmental Panel on Climate Change). It evaluates the potential impact of climate change on agricultural water resources and infrastructure vulnerability in Gyeongsangnam-do, focusing on 15 agricultural reservoirs. The assessment was based on the KRC (Korea Rural Community Corporation) 1st vulnerability assessment methodology, which used RCP scenarios, for 2021. That approach, however, is limited by the need for climate impact assessments based on the latest climate information and by the uncertainty of relying on a single scenario from the national standard scenarios. Therefore, we applied the outputs of 13 GCMs (General Circulation Models) based on the newly introduced SSP scenarios. Furthermore, because of difficulties in data acquisition, we reassessed potential impacts by redistributing the weights of the proxy variables. As a main result, lower future potential impacts were observed in areas with higher precipitation along the southern coast. Overall, the potential impacts increased for all reservoirs further into the future while maintaining their relative rankings, yet showed no significant variability in the far future. Although the overall pattern of potential impacts aligns with previous evaluations, re-evaluation under similar conditions at different spatial resolutions emphasizes the critical role of the spatial resolution of meteorological data in such assessments. The results of this study are expected to improve the credibility and accuracy of vulnerability assessments by employing more scientific predictions.
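A potential-impact index built from weighted, min-max-normalized proxy variables, in the spirit of the reassessment with redistributed weights, might be sketched like this; the proxy variables and weights are invented, not the study's:

```python
import numpy as np

def potential_impact(proxies, weights):
    """Weighted sum of min-max normalized proxy variables per reservoir."""
    proxies = np.asarray(proxies, dtype=float)     # rows: reservoirs, cols: proxies
    lo, hi = proxies.min(axis=0), proxies.max(axis=0)
    norm = (proxies - lo) / (hi - lo)              # assumes each column varies
    return norm @ np.asarray(weights)

# Hypothetical proxies: [water demand, storage deficit, drought days]
data = [[100.0, 0.2, 10.0],
        [150.0, 0.5, 25.0],
        [120.0, 0.9, 40.0]]
weights = [0.3, 0.4, 0.3]   # assumed redistribution, not the study's values
print([round(v, 2) for v in potential_impact(data, weights)])   # → [0.0, 0.62, 0.82]
```

Ranking reservoirs by this index is what preserves their relative order across scenario periods.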

Trapezoidal Cyclic Voltammetry as a Baseline for Determining Reverse Peak Current from Cyclic Voltammograms

  • Carla B. Emiliano;Chrystian de O. Bellin;Mauro C. Lopes
    • Journal of Electrochemical Science and Technology
    • /
    • v.15 no.3
    • /
    • pp.405-413
    • /
    • 2024
  • Several techniques for determining the reverse peak current from a cyclic voltammogram (CV) of a reversible system are described in the literature: a CV recorded with a long switching potential (Eλ) that serves as a baseline for other CVs; the Nicholson equation, which uses CV parameters to calculate the reverse peak current; and linear extrapolation of the current obtained at the switching potential. All of these methods either present experimental difficulties or produce large errors in the peak current determination. This paper demonstrates, both theoretically and experimentally, that trapezoidal cyclic voltammetry (TCV) can be used as a baseline to determine the anodic peak current (iap) with high accuracy and with a shorter switching potential than that used by CV, as long as Eλ is at least 130 mV away from the cathodic peak; beyond this switching potential the electroactive species is completely depleted from the electrode surface. Using TCV with Eλ = 0.34 V and a switching time (tλ) of 240 s as a baseline, the determination of the reverse peak current deviates from the expected value by less than 1% for most of the CVs studied (except when Eλ is close to the direct peak potential). This is more accurate than the Nicholson equation and than linear extrapolation of the current measured at the switching potential, and the error is smaller than that incurred in acquiring the experimental current. Furthermore, determining the reverse peak current by extrapolating the linear fit of iap vs. $1/\sqrt{t_{\lambda}}$ to infinite time gave a reasonable approximation to the expected value. Experiments with aqueous potassium hexacyanoferrate(II) and with ferrocene in acetonitrile confirmed the theoretical predictions.
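The infinite-time extrapolation of iap vs. 1/√tλ amounts to taking the intercept of a linear fit, since 1/√tλ → 0 as tλ → ∞. A sketch with synthetic data (the switching times and the i_inf = 5.0, k = 2.0 parameters are invented for illustration):

```python
import numpy as np

# Hypothetical (t_lambda, i_ap) pairs following i_ap = i_inf + k / sqrt(t)
t_lam = np.array([60.0, 120.0, 240.0, 480.0])    # switching times, s
i_ap = 5.0 + 2.0 / np.sqrt(t_lam)                # synthetic peak currents

x = 1.0 / np.sqrt(t_lam)                         # regress i_ap against 1/sqrt(t)
slope, intercept = np.polyfit(x, i_ap, 1)
print(round(intercept, 6))                       # → 5.0 (value at infinite time)
```

The intercept recovers the peak current free of the finite-switching-time contribution.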