• Title/Summary/Keyword: Soft computing.


Utilizing the GOA-RF hybrid model, predicting the CPT-based pile set-up parameters

  • Zhao, Zhilong; Chen, Simin; Zhang, Dengke; Peng, Bin; Li, Xuyang; Zheng, Qian
    • Geomechanics and Engineering / v.31 no.1 / pp.113-127 / 2022
  • The undrained shear strength of soil is considered one of the most significant engineering parameters in geotechnical design. In-situ experiments such as the cone penetration test (CPT) have been used in recent years to estimate the undrained shear strength from soil characteristics. Nevertheless, the majority of these techniques rely on correlation assumptions, which may lead to uneven accuracy. The general aim of this research is to develop a new unified soft computing model, a combination of random forest (RF) with the grasshopper optimization algorithm (GOA), for better approximation of pile set-up parameters from CPT, based on two different types of input data: data type 1 contains pile parameters, and data type 2 consists of soil properties. The contribution of this article is that the hybrid GOA-RF is suggested for the first time to forecast the pile set-up parameter from CPT. To this end, CPT data and related bore log data were gathered from 70 locations across Louisiana. With an R2 greater than 0.9098, denoting an acceptable relationship between measured and predicted values, the results demonstrated that both models perform well in forecasting the set-up parameter. In both the training and testing steps, the model with data type 2 performs better than the model with data type 1, with R2 and RMSE of 0.9272 and 0.0305 for training and 0.9182 and 0.0415 for testing. Overall, the results show that the A parameter can be forecasted with adequate precision from CPT data using hybrid GOA-RF models, and that the RF model with soil features as input parameters yields the better prediction of pile set-up parameters.
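As a concrete illustration of the hybrid pattern this abstract describes, the sketch below tunes random forest hyperparameters with a simplified population-based search. It is not the authors' implementation: the features and target are synthetic placeholders, and the shrinking-step update merely stands in for the full grasshopper social-interaction equations of GOA.

```python
# Minimal sketch of GOA-style hyperparameter tuning for a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                    # placeholder CPT/soil features
y = 0.5 * X[:, 0] + 0.1 * rng.normal(size=200)   # placeholder set-up parameter

bounds = np.array([[50.0, 300.0],   # n_estimators
                   [2.0, 20.0]])    # max_depth

def fitness(pos):
    """Cross-validated R2 of an RF built from a candidate position."""
    model = RandomForestRegressor(n_estimators=int(pos[0]),
                                  max_depth=int(pos[1]),
                                  random_state=0)
    return cross_val_score(model, X, y, scoring="r2", cv=3).mean()

# Each "grasshopper" is a candidate hyperparameter vector.
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(8, 2))
best = max(pop, key=fitness).copy()
for it in range(10):
    c = 1.0 - it / 10                 # shrinking coefficient, as in GOA
    for i in range(len(pop)):
        # Simplified update: drift toward the best position, plus noise.
        step = c * (best - pop[i]) + c * rng.normal(size=2)
        pop[i] = np.clip(pop[i] + step, bounds[:, 0], bounds[:, 1])
    cand = max(pop, key=fitness)
    if fitness(cand) > fitness(best):
        best = cand.copy()

print("tuned n_estimators, max_depth:", int(best[0]), int(best[1]))
```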

Predictive model for the shear strength of concrete beams reinforced with longitudinal FRP bars

  • Alzabeebee, Saif; Dhahir, Mohammed K.; Keawsawasvong, Suraparb
    • Structural Engineering and Mechanics / v.84 no.2 / pp.143-154 / 2022
  • Corrosion of steel reinforcement is considered the main cause of deterioration in concrete structures, especially those under humid environmental conditions. Hence, fiber reinforced polymer (FRP) bars are increasingly used as a replacement for conventional steel owing to their non-corrodible characteristics. However, predicting the shear strength of beams reinforced with FRP bars remains challenging due to the lack of a robust shear theory. Thus, this paper aims to develop an explicit data-driven model to predict the shear strength of FRP-reinforced beams using multi-objective evolutionary polynomial regression analysis (MOGA-EPR); data-driven models learn the behavior from the input data without the need to employ a theory to aid the derivation, and thus they offer enhanced accuracy. This study also evaluates the accuracy of the shear strength predictive models employed by different design codes by calculating and comparing the mean absolute error (MAE), root mean square error (RMSE), mean (𝜇), standard deviation of the mean (𝜎), coefficient of determination (R2), and percentage of predictions within an error range of ±20% (a20-index). An experimental database was developed and employed in model learning, validation, and accuracy examination. The statistical analysis illustrated the robustness of the developed model, with MAE, RMSE, 𝜇, 𝜎, R2, and a20-index of 14.6, 20.8, 1.05, 0.27, 0.85, and 0.61, respectively, for the training data and 10.4, 14.1, 0.98, 0.25, 0.94, and 0.60, respectively, for the validation data. Furthermore, the developed model achieved much better predictions than the standard predictive models, scoring lower MAE, RMSE, and 𝜎, and higher R2 and a20-index. The new model can be used with confidence in future optimized designs, as its accuracy is higher than that of the standard predictive models.
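For reference, the sketch below computes the statistical measures this abstract reports. It assumes, as is common in such studies though not stated here, that 𝜇 and 𝜎 are the mean and standard deviation of the predicted-to-measured strength ratio.

```python
# Error metrics for a strength-prediction model (MAE, RMSE, mu, sigma,
# R2, a20-index). The ratio-based reading of mu/sigma is an assumption.
import numpy as np

def evaluate(measured, predicted):
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - measured
    ratio = predicted / measured
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mu, sigma = ratio.mean(), ratio.std(ddof=1)
    ss_res = (err ** 2).sum()
    ss_tot = ((measured - measured.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    a20 = np.mean((ratio >= 0.8) & (ratio <= 1.2))  # fraction within +/-20%
    return dict(MAE=mae, RMSE=rmse, mu=mu, sigma=sigma, R2=r2, a20=a20)

print(evaluate([100, 120, 140], [95, 131, 139]))
```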

A GMDH-based estimation model for axial load capacity of GFRP-RC circular columns

  • Mohammed Berradia; El Hadj Meziane; Ali Raza; Mohamed Hechmi El Ouni; Faisal Shabbir
    • Steel and Composite Structures / v.49 no.2 / pp.161-180 / 2023
  • In previous research, axial compressive capacity models for glass fiber-reinforced polymer (GFRP)-reinforced circular concrete compression elements confined with a GFRP helix were put forward based on small and noisy datasets that considered a limited number of parameters, resulting in limited accuracy. Consequently, it is important to recommend an accurate model based on a refined and large testing dataset that considers the various parameters of such components. The core objective and novelty of the current research is to suggest a deep learning model for the axial compressive capacity of GFRP-reinforced circular concrete columns confined with a GFRP helix, utilizing the various parameters of a large experimental dataset to maximize the precision of the estimates. To achieve this aim, a test dataset of 61 GFRP-reinforced circular concrete columns confined with a GFRP helix was compiled from prior studies. Fifteen diverse theoretical models were assessed over the compiled dataset using different statistical coefficients, and a novel model utilizing the group method of data handling (GMDH) was put forward. The recommended model performed well over the compiled dataset by accounting for the axial contribution of the GFRP main bars and the confining effect of the transverse GFRP helix, and showed the highest precision, with MAE = 195.67, RMSE = 255.41, and R2 = 0.94, compared with the previously recommended equations. The GMDH model also produced a good normal distribution of estimates, with only a 2.5% discrepancy from unity. The recommended model can accurately calculate the axial compressive capacity of FRP-reinforced concrete compression elements and can be considered for further analysis and design of such components in the field of structural engineering.
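The group method of data handling builds a self-organizing network of simple polynomial neurons. The sketch below shows one generic GMDH layer, with quadratic models fit on every pair of inputs and the best survivors kept as inputs to the next layer; it illustrates the technique only and does not reproduce the paper's network configuration.

```python
# One generic GMDH layer: pairwise quadratic neurons, ranked on held-out data.
import itertools
import numpy as np

def quad_features(a, b):
    # y = w0 + w1*a + w2*b + w3*a*b + w4*a^2 + w5*b^2
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    candidates = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        A = quad_features(X_tr[:, i], X_tr[:, j])
        w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        pred_va = quad_features(X_va[:, i], X_va[:, j]) @ w
        mse = ((pred_va - y_va) ** 2).mean()       # external (validation) error
        candidates.append((mse, i, j, w))
    candidates.sort(key=lambda t: t[0])
    best = candidates[:keep]
    # Outputs of the surviving neurons become the next layer's inputs.
    out_tr = np.column_stack([quad_features(X_tr[:, i], X_tr[:, j]) @ w
                              for _, i, j, w in best])
    out_va = np.column_stack([quad_features(X_va[:, i], X_va[:, j]) @ w
                              for _, i, j, w in best])
    return out_tr, out_va, best[0][0]
```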

Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data: Application to Diagnosis of Mastitic Cows Using Near-Infrared Spectra of Plasma Samples

  • Jiang, Jian-Hui; Tsenkova, Roumiana; Yu, Ru-Qin; Ozaki, Yukihiro
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1244-1244 / 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; one can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects and is thus exposed to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, such that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should find a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed. Near-infrared (NIR) spectra of blood plasma samples from mastitic and healthy cows were used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability; the NIR spectra of blood plasma samples from mastitic and healthy cows are clearly discriminated by the PDV method. Moreover, the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in the discriminant analysis of spectra-characterized samples with only small compositional differences, thereby providing a useful means for spectroscopy-based clinical applications.
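The separability-stability trade-off described here can be made concrete with the Fisher criterion. The second expression below is a generic regularized form of the idea, written purely for illustration; it is not the paper's exact PDV formulation.

```latex
% Fisher separability criterion (S_B, S_W: between- and within-class scatter):
J(\mathbf{w}) = \frac{\mathbf{w}^{\top} S_B \,\mathbf{w}}{\mathbf{w}^{\top} S_W \,\mathbf{w}}

% One generic way to trade separability against retained variation
% (S_T: total scatter; \gamma \in [0, 1] weights stability):
J_{\gamma}(\mathbf{w}) = (1 - \gamma)\,\frac{\mathbf{w}^{\top} S_B \,\mathbf{w}}{\mathbf{w}^{\top} S_W \,\mathbf{w}}
  + \gamma\,\frac{\mathbf{w}^{\top} S_T \,\mathbf{w}}{\mathbf{w}^{\top} \mathbf{w}}
```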


Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data with Application to Near-Infrared Spectra of Cow Plasma Samples

  • Jiang, Jian-Hui; Wu, Yuqing; Yu, Ru-Qin; Ozaki, Yukihiro
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1042-1042 / 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; one can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects and is thus exposed to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, such that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should find a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed. Near-infrared (NIR) spectra of blood plasma samples from daily monitoring of two Japanese cows were used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability; the NIR spectra of blood plasma samples from the two cows are clearly discriminated by the PDV method. Moreover, the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in the discriminant analysis of spectra-characterized samples with only small compositional differences.


Design of an Efficient Concurrency Control Algorithms for Real-time Database Systems (실시간 데이터베이스 시스템을 위한 효율적인 병행실행제어 알고리즘 설계)

  • Lee Seok-Jae; Park Sae-Mi; Kang Tae-ho; Yoo Jae-Soo
    • Journal of Internet Computing and Services / v.5 no.1 / pp.67-84 / 2004
  • Real-time database systems (RTDBS) are database systems whose transactions are associated with timing constraints such as deadlines, so each transaction needs to be completed by a certain deadline. Besides meeting timing constraints, an RTDBS needs to observe data consistency constraints as well. That is to say, unlike a conventional database system, whose main objective is to provide fast average response time, an RTDBS may be evaluated based on how often transactions miss their deadlines, the average lateness or tardiness of late transactions, and the cost incurred by transactions missing their deadlines. Therefore, in an RTDBS, transactions should be scheduled according to their criticality and the tightness of their deadlines, even if this means sacrificing fairness and system throughput, and the system must always guarantee that the transaction with the higher priority is processed first. In this paper, we propose an efficient real-time scheduling algorithm (Multi-level EFDF) that alleviates the problems of existing real-time scheduling algorithms, and a real-time concurrency control algorithm (2PL-FT) for firm and soft real-time transactions. We compare the proposed 2PL-FT with AVCC in terms of the restarting ratio and the deadline missing ratio of transactions, and we show through experiments that our algorithms achieve good performance over other existing methods proposed earlier.
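The abstract does not specify the Multi-level EFDF algorithm itself, so the sketch below shows only the baseline that deadline-driven schedulers build on: earliest-deadline-first dispatching of transactions from a priority queue, with deadline misses counted as the abstract's evaluation criteria suggest.

```python
# Earliest-deadline-first (EDF) dispatching sketch; a generic baseline,
# not the paper's Multi-level EFDF.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Txn:
    deadline: float                    # ordering key for the priority queue
    name: str = field(compare=False)

def run_edf(transactions, now=0.0, cost=1.0):
    """Dispatch by earliest deadline; return finish time and missed txns."""
    heap = list(transactions)
    heapq.heapify(heap)
    missed = []
    while heap:
        txn = heapq.heappop(heap)
        now += cost                    # pretend each txn takes `cost` time
        if now > txn.deadline:
            missed.append(txn.name)    # a firm txn would be aborted here
    return now, missed

_, misses = run_edf([Txn(3.0, "T1"), Txn(1.5, "T2"), Txn(2.0, "T3")])
print("missed deadlines:", misses)
```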


New Soil Classification System Using Cone Penetration Test (콘관입시험결과를 이용한 새로운 흙분류 방법의 개발)

  • Kim, Chan-Hong; Im, Jong-Chul; Kim, Young-Sang; Joo, No-Ah
    • Journal of the Korean Geotechnical Society / v.24 no.10 / pp.57-70 / 2008
  • The advantage of the piezocone penetration test is that it guarantees continuous data, which is a source of reliable interpretation of the target soil layer. Much research has been carried out over several decades, and several classification charts have been developed to classify in-situ soil from cone penetration test results. Since most existing classification charts or methods were developed from data compiled around the world, excluding Korea, they must be verified as feasible for Korean soil. Furthermore, these charts sometimes give different soil classification results depending on the input parameters, and, unfortunately, revising them is quite difficult or almost impossible. In this research, a new soil classification model is proposed using fuzzy C-means clustering and neuro-fuzzy theory, based on 5371 CPT results and soil logging results compiled from 17 local sites around Korea. The proposed neuro-fuzzy soil classification model was verified by comparing its classification results for new data, which were not used during the learning process of the neuro-fuzzy model, with real soil logs. The efficiency of the proposed neuro-fuzzy model was compared with other soft computing classification models and the Robertson method on the new data.
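Fuzzy C-means, the clustering step named above, assigns each CPT sample a graded membership in every soil cluster rather than a hard label. The sketch below is the generic algorithm only, not the paper's tuned model.

```python
# Generic fuzzy C-means: alternate center and membership updates.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return centers, U

X = np.random.default_rng(1).normal(size=(60, 2))   # placeholder CPT features
centers, U = fuzzy_c_means(X)
print("hard labels:", U.argmax(axis=1))
```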

Measuring the Impact of Competition on Pricing Behaviors in a Two-Sided Market

  • Kim, Minkyung; Song, Inseong
    • Asia Marketing Journal / v.16 no.1 / pp.35-69 / 2014
  • The impact of competition on pricing has been studied in the context of counterfactual merger analyses, where the expected optimal prices of a hypothetical monopoly are compared with observed prices in an oligopolistic market. Such analyses typically assume static decision making by consumers and firms and have thus been applied mostly to data on consumer packaged goods such as cereal and soft drinks. However, such a static modeling approach is not suitable when decision makers are forward looking. In markets for durable products with indirect network effects, consumer purchase decisions and firm pricing decisions are inherently dynamic, as both sides take future states into account. Researchers therefore need to account for the dynamic aspects of decision making on both the consumer side and the supplier side of such markets. Firms in a two-sided market typically subsidize one side of the market to exploit the indirect network effect, and such pricing behaviors are expected to be more prevalent in competitive markets, where firms try to win the battle for the standard. While such a qualitative expectation about the relationship between pricing behaviors and competitive structures is easily formed, few empirical studies have measured the extent to which the distinct pricing structure in two-sided markets depends on the competitive structure of the market. This paper develops an empirical model to measure the impact of competition on the optimal pricing of durable products under indirect network effects. To measure the impact of exogenously determined competition among firms on pricing, we compare the equilibrium prices in the observed oligopoly market to those in a hypothetical monopoly market, accounting for the forward-looking behaviors of consumers and suppliers. We first estimate a demand function that accounts for consumers' forward-looking behaviors and indirect network effects. Then, for the supply side, the pricing equation is obtained as an outcome of the Markov Perfect Nash Equilibrium in pricing, utilizing numerical dynamic programming techniques. We apply our model to a data set from the U.S. video game console market, a prototypical two-sided market in which the platform typically subsidizes one side to expand the installed base, anticipating larger revenues on the other side. The data consist of monthly observations of price, hardware unit sales, and the number of compatible software titles for the Sony PlayStation and Nintendo 64 from September 1996 to August 2002; the Sony PlayStation was released a year before the Nintendo 64 was launched. We compute the expected equilibrium price paths for the Nintendo 64 and PlayStation under both oligopoly and monopoly. Our analysis reveals that the price level differs significantly between the two competition structures: the merged monopoly is expected to set prices higher, by 14.8% for the Sony PlayStation and 21.8% for the Nintendo 64 on average, than independent firms in an oligopoly would, and such removal of competition would reduce consumer value by 43.1%. Higher prices are expected for the hypothetical monopoly because the merged firm does not need to engage in the battle for the industry standard. This result is attributed to the distinct property of a two-sided market that competing firms tend to set low prices, particularly in the initial period, to attract consumers at the introductory stage, reinforce their own networks, and eventually dominate the market.
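To make the demand-side dynamics concrete, the toy value iteration below solves a single forward-looking consumer's buy-or-wait problem, the kind of dynamic program such models solve numerically. All numbers are illustrative assumptions, not estimates from the paper's data.

```python
# Toy buy-or-wait dynamic program for a forward-looking console buyer.
import numpy as np

prices = np.linspace(300, 100, 24)   # assumed declining console price path
network = np.linspace(50, 200, 24)   # assumed growing value of software variety
beta = 0.97                          # monthly discount factor

# V[t]: value of entering month t without owning the console yet.
V = np.zeros(len(prices) + 1)
buy_value = network - prices         # utility of buying in month t
for t in range(len(prices) - 1, -1, -1):
    # Buy now, or keep the discounted option to wait one more month.
    V[t] = max(buy_value[t], beta * V[t + 1])

first_buy = int(np.argmax(buy_value >= beta * V[1:]))
print("consumer buys in month:", first_buy)
```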


Smart Electric Mobility Operating System Integrated with Off-Grid Solar Power Plants in Tanzania: Vision and Trial Run (탄자니아의 태양광 발전소와 통합된 전기 모빌리티 운영 시스템 : 비전과 시범운행)

  • Rhee, Hyop-Seung; Im, Hyuck-Soon; Manongi, Frank Andrew; Shin, Young-In; Song, Ho-Won; Jung, Woo-Kyun; Ahn, Sung-Hoon
    • Journal of Appropriate Technology / v.7 no.2 / pp.127-135 / 2021
  • To respond to the threat of global warming, countries around the world are promoting the spread of renewable energy and the reduction of carbon emissions. In accordance with the United Nations Sustainable Development Goal to combat climate change and its impacts, global automakers are pushing for a full transition to electric vehicles within the next 10 years. Electric vehicles can be a useful means of reducing carbon emissions, but to reduce the carbon generated in producing the electricity for charging, a power generation system using eco-friendly renewable energy is required. In this study, we propose a smart electric mobility operating system integrated with off-grid solar power plants established in Tanzania, Africa. By applying smart monitoring and communication functions based on Arduino-based computing devices, information such as the remaining battery capacity, battery status, location, speed, altitude, and road conditions of an electric vehicle or electric motorcycle is monitored. In addition, we present a scenario in which the vehicle communicates with the surrounding independent solar power plant infrastructure to predict the drivable distance and optimize the charging schedule and the route to the destination. The feasibility of the proposed system was verified through test runs of electric motorcycles. In operating the electric mobility system, local environmental characteristics in Tanzania should be considered, weighing factors such as eco-friendliness, economic feasibility, ease of operation, and compatibility. The smart electric mobility operating system proposed in this study can serve as an important basis for implementing the SDGs' climate change response.
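As an illustration of the monitoring data flow, the sketch below defines a telemetry record with the fields the abstract lists and a naive drivable-distance estimate. The field names and energy constants are hypothetical, not the system's actual design.

```python
# Hypothetical telemetry record and range estimate for an electric motorcycle.
from dataclasses import dataclass

@dataclass
class Telemetry:
    battery_pct: float    # remaining battery capacity, %
    battery_ok: bool      # battery health flag
    lat: float            # location
    lon: float
    speed_kmh: float
    altitude_m: float

WH_PER_KM = 30.0          # assumed motorcycle energy consumption
PACK_WH = 1500.0          # assumed battery pack capacity

def drivable_km(t: Telemetry) -> float:
    """Naive range estimate from remaining charge and flat consumption."""
    return (t.battery_pct / 100.0) * PACK_WH / WH_PER_KM

sample = Telemetry(80.0, True, -6.8, 39.3, 35.0, 120.0)
print(f"estimated range: {drivable_km(sample):.0f} km")
```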