• Title/Summary/Keyword: critical design parameter


Development of a novel fatigue damage model for Gaussian wide band stress responses using numerical approximation methods

  • Jun, Seock-Hee;Park, Jun-Bum
    • International Journal of Naval Architecture and Ocean Engineering / v.12 no.1 / pp.755-767 / 2020
  • A significant development has been made on a new fatigue damage model applicable to Gaussian wide band stress response spectra using numerical approximation methods such as data processing, time simulation, and regression analysis. Most existing approximate models provide slightly underestimated or overestimated damage results compared with the rain-flow counting distribution. A more reliable approximate model that can minimize the damage differences between exact and approximate solutions is required for the practical design of ships and offshore structures. The present paper provides a detailed description of the development process of the new fatigue damage model. Based on the principle of the Gaussian wide band model, this study aims to develop the best approximate fatigue damage model. To obtain highly accurate damage distributions, this study draws on several prominent research findings, i.e., the moment of rain-flow range distribution MRR(n), the special bandwidth parameter μk, the empirical closed-form model consisting of four probability density functions, and the correction factor QC. Sequential prerequisite data processes, such as the creation of various stress spectra, the extraction of stress time histories, and rain-flow counting of the stress process, are conducted so that these findings yield much better results. Through comparison studies, the proposed model shows more reliable and accurate damage distributions, very close to those of the rain-flow counting solution. Several significant achievements and findings obtained from this study are presented. Further work is needed to apply the newly developed model to crack growth prediction under a random stress process in view of the engineering critical assessment of offshore structures. The present formulation and procedure also need to be extended to non-Gaussian wide band processes.
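Wide-band fatigue corrections of this kind start from the spectral moments of the stress PSD. As a hedged illustration (this is not the authors' model; the PSD grid, trapezoidal integration, and parameter names are assumptions for illustration), the moments and the resulting bandwidth measures can be computed as:

```python
import math

def spectral_moment(freqs, psd, n):
    """n-th spectral moment m_n = integral of f^n * S(f) df (trapezoidal rule)."""
    total = 0.0
    for i in range(len(freqs) - 1):
        f0, f1 = freqs[i], freqs[i + 1]
        y0 = (f0 ** n) * psd[i]
        y1 = (f1 ** n) * psd[i + 1]
        total += 0.5 * (y0 + y1) * (f1 - f0)
    return total

def bandwidth_parameters(freqs, psd):
    """Irregularity factor alpha2 and spectral width epsilon of a stress PSD."""
    m0 = spectral_moment(freqs, psd, 0)
    m2 = spectral_moment(freqs, psd, 2)
    m4 = spectral_moment(freqs, psd, 4)
    alpha2 = m2 / math.sqrt(m0 * m4)    # -> 1 for narrow band, -> 0 for wide band
    eps = math.sqrt(1.0 - alpha2 ** 2)  # spectral width parameter
    return alpha2, eps
```

For a flat (white) PSD on [0, 1] Hz this gives an irregularity factor of about 0.75, i.e. a clearly wide-band process.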

Neural network based numerical model updating and verification for a short span concrete culvert bridge by incorporating Monte Carlo simulations

  • Lin, S.T.K.;Lu, Y.;Alamdari, M.M.;Khoa, N.L.D.
    • Structural Engineering and Mechanics / v.81 no.3 / pp.293-303 / 2022
  • As infrastructure ages and traffic loads increase, serious public concerns have arisen about the well-being of bridges. Current health monitoring practice focuses on large-scale bridges rather than short-span bridges, yet more attention should be given to these behind-the-scenes bridges. The relevant information about their construction methods and as-built properties is most likely missing. Additionally, since the condition of a bridge unavoidably changes during service due to weathering and deterioration, its material properties and boundary conditions will also have changed since construction. It is therefore not appropriate to continue using the design values of the bridge parameters when undertaking any analysis to evaluate bridge performance; it is imperative to update the finite element (FE) model to reflect the current structural condition. In this study, an FE model is established to simulate a concrete culvert bridge in New South Wales, Australia. That model, however, contains a number of parameter uncertainties that would compromise the accuracy of analytical results. The model is therefore updated with a neural network (NN) optimisation algorithm incorporating Monte Carlo (MC) simulation to minimise the uncertainties in its parameters. The modal frequency and strain responses produced by the updated FE model are compared with the frequency and strain values measured on site by sensors. The outcome indicates that NN model updating incorporating MC simulation is a feasible and robust optimisation method for updating numerical models so as to minimise the difference between numerical models and their real-world counterparts.
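The Monte Carlo side of such model updating can be sketched under loud assumptions: a one-degree-of-freedom surrogate stands in for the FE model, and a brute-force random search stands in for the neural network. Sample candidate stiffnesses, evaluate the surrogate's natural frequency, and keep the sample that best matches the measurement (all names and values are hypothetical):

```python
import math
import random

def natural_freq_hz(k, m):
    """First natural frequency of a 1-DOF mass-spring surrogate (Hz)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def mc_update(measured_hz, mass, k_lo, k_hi, n_samples=5000, seed=0):
    """Monte Carlo search for the stiffness that best matches a measured frequency."""
    rng = random.Random(seed)           # seeded for reproducibility
    best_k, best_err = None, float("inf")
    for _ in range(n_samples):
        k = rng.uniform(k_lo, k_hi)     # one MC realisation of the uncertain parameter
        err = abs(natural_freq_hz(k, mass) - measured_hz)
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err
```

In the paper's pipeline the surrogate evaluation would be an FE run (or an NN trained on FE runs) and the discrepancy would combine frequencies and strains, but the sample-evaluate-select loop is the same.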

Estimating the unconfined compression strength of low plastic clayey soils using gene-expression programming

  • Muhammad Naqeeb Nawaz;Song-Hun Chong;Muhammad Muneeb Nawaz;Safeer Haider;Waqas Hassan;Jin-Seop Kim
    • Geomechanics and Engineering / v.33 no.1 / pp.1-9 / 2023
  • The unconfined compression strength (UCS) of soils is commonly used either before or during the construction of geo-structures. In the pre-design stage, UCS as a mechanical property is obtained through a laboratory test that requires cumbersome procedures and incurs high costs for in-situ sampling and sample preparation. As an alternative, empirical models established from limited testing cases are used to estimate the UCS economically. However, the many parameters affecting the 1D soil compression response hinder the use of traditional statistical analysis. In this study, gene expression programming (GEP) is adopted to develop a prediction model of UCS from commonly measured soil properties. A total of 79 undisturbed soil samples are collected, of which 54 are utilized to generate the predictive model and 25 are used to validate it. Experimental studies are conducted to measure the unconfined compression strength and basic soil index properties. A performance assessment of the prediction model is carried out using statistical checks including the correlation coefficient (R), the root mean square error (RMSE), the mean absolute error (MAE), the relative squared error (RSE), and external criteria checks. The prediction model achieves excellent accuracy, with values of R, RMSE, MAE, and RSE of 0.98, 10.01, 7.94, and 0.03, respectively, for the training data and 0.92, 19.82, 14.56, and 0.15, respectively, for the testing data. From the sensitivity analysis and parametric study, the liquid limit and fines content are found to be the most sensitive parameters, whereas the sand content is the least critical parameter.
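The statistical checks named above are standard and can be reproduced directly. A minimal pure-Python sketch (variable names are illustrative, not the authors' code):

```python
import math

def regression_metrics(y_true, y_pred):
    """R, RMSE, MAE, RSE as commonly used to score regression models."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)                                  # correlation coefficient
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / var_t     # relative squared error
    return r, rmse, mae, rse
```

A perfect prediction gives R = 1 and RMSE = MAE = RSE = 0, which is why values such as R = 0.98 and RSE = 0.03 on training data indicate a close fit.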

A Comparative Study Between High and Low Infiltration Soils as Filter Media in Low Impact Development Structures

  • Guerra, Heidi B.;Geronimo, Franz Kevin;Reyes, Nash Jett;Jeon, Minsu;Choi, Hyeseon;Kim, Youngchul;Kim, Lee-Hyung
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.130-130 / 2021
  • The increasing effects of urbanization have become more apparent through flooding and degraded downstream water quality, especially after heavy rainfall. In response, stormwater runoff management solutions have focused on runoff volume reduction and treatment through infiltration. However, there are areas that have low-infiltration soils or that experience more dry days and even drought. In this study, a lab-scale infiltration system was used to compare the applicability of two types of soil as the base layer in gravel-filled infiltration systems, with emphasis on runoff capture and suspended solids removal. The two soils used were a sandy soil representing a high-infiltration system and a clayey soil representing a low-infiltration system. Findings showed that infiltration rates increased with the water depth above the gravel-soil interface, indicating that the available depth for water storage affects this parameter. Runoff capture in the high-infiltration system is more affected by rainfall depth and inflow rates than that in the low-infiltration system. Based on runoff capture and pollutant removal analysis, a media depth of at least 0.4 m for high-infiltration systems and 1 m for low-infiltration systems is required to capture and treat a 10-mm rainfall in Korea. A maximum infiltration rate of 200 mm/h was also found to be ideal to provide enough retention time for pollutant removal. Moreover, it was revealed that low-infiltration systems are more susceptible to horizontal flows and that the length of the structure may be more critical than the depth in this condition.


Power Decoupling Control Method of Grid-Forming Converter: Review

  • Hyeong-Seok Lee;Yeong-Jun Choi
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.221-229 / 2023
  • Recently, the grid-forming (GFM) converter, which offers features such as virtual inertia, damping, black-start capability, and islanded-mode operation in power systems, has gained significant attention. However, in low-voltage microgrids (MGs), it faces challenges due to the coupling between active and reactive power caused by the low line-impedance X/R ratio and a non-negligible power angle. This power coupling issue leads to stability and performance degradation, inaccurate power sharing, and control parameter design problems for GFM converters. Therefore, this paper serves as a review not only of control methods associated with GFM converters but also of power decoupling techniques. The aim is to introduce promising control methods and to enhance accessibility for future research by providing a critical review of power decoupling methods. By facilitating easy access for future researchers to the study of power decoupling methods, this work is expected to contribute to the expansion of distributed power generation.
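The coupling phenomenon itself follows from the power flow through a line impedance R + jX. A small complex-phasor sketch (per-unit values and the two-bus simplification are assumptions for illustration) shows that on an inductive line the power angle mainly drives P, while on a resistive line the same angle mainly drives Q:

```python
import cmath
import math

def line_power_flow(v_send, delta_rad, v_recv, r_ohm, x_ohm):
    """Active/reactive power delivered to the receiving bus through a line R + jX."""
    z = complex(r_ohm, x_ohm)
    vs = cmath.rect(v_send, delta_rad)  # sending-end voltage phasor at angle delta
    vr = complex(v_recv, 0.0)           # receiving-end phasor taken as reference
    i = (vs - vr) / z                   # line current
    s = vr * i.conjugate()              # complex power at the receiving end
    return s.real, s.imag               # (P, Q)
```

For δ = 0.1 rad and unit voltages, a purely inductive line gives P ≈ sin δ with near-zero Q, whereas a purely resistive line gives near-zero P and Q ≈ −sin δ: the roles of the power angle swap, which is exactly the low-X/R coupling this review addresses.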

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference / 1999.06a / pp.175-186 / 1999
  • Detecting significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods based on multi-scale decomposition, such as wavelet analysis, have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components. This is because wavelet analysis theoretically extracts better local information from the filtered data at different time intervals. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. It is often selected arbitrarily or by adopting a certain theoretical or statistical criterion. Recently, new and versatile techniques have been introduced for this problem. 
This study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we introduce the different filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design in wavelet analysis; these issues include finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.
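The thresholding step under discussion can be sketched with a one-level Haar transform and Donoho-Johnstone-style soft shrinkage with the universal threshold. This is a generic textbook recipe, not the specific criteria compared in the paper:

```python
import math

def haar_level(signal):
    """One level of the Haar wavelet transform: (approximation, detail) coefficients."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2.0) for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2.0) for i in range(len(signal) // 2)]
    return a, d

def soft_threshold(coeffs, lam):
    """Soft shrinkage: pull every coefficient toward zero by lam, zeroing the small ones."""
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]

def universal_threshold(detail, n):
    """Universal threshold sigma * sqrt(2 ln n) with a median-based noise estimate."""
    med = sorted(abs(c) for c in detail)[len(detail) // 2]
    sigma = med / 0.6745                # MAD-based estimate of the noise level
    return sigma * math.sqrt(2.0 * math.log(n))
```

Too small a threshold leaves noise in the reconstruction (overfitting); too large a threshold shrinks away genuine signal features (underfitting) -- the trade-off the abstract describes.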


Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 1999.03a / pp.175-186 / 1999
  • Detecting significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods based on multi-scale decomposition, such as wavelet analysis, have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components. This is because wavelet analysis theoretically extracts better local information from the filtered data at different time intervals. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. It is often selected arbitrarily or by adopting a certain theoretical or statistical criterion. Recently, new and versatile techniques have been introduced for this problem. 
This study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we introduce the different filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design in wavelet analysis; these issues include finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.


Effects of Shore Stiffness and Concrete Cracking on Slab Construction Load I: Theory (슬래브의 시공하중에 대한 동바리 강성 및 슬래브 균열의 영향 I: 이론)

  • Hwang, Hyeon-Jong;Park, Hong-Gun;Hong, Geon-Ho;Im, Ju-Hyeuk;Kim, Jae-Yo
    • Journal of the Korea Concrete Institute / v.22 no.1 / pp.41-50 / 2010
  • Long-term floor deflection caused by excessive construction load has become a critical issue in the design of concrete slabs, as flat plates become popular in tall buildings. To estimate the concrete cracking and deflection of an early-age slab, the construction load should be accurately evaluated. The magnitude of construction load acting on a slab is affected by various design parameters. Most existing methods for estimating construction load address only the effects of the construction period per story, the material properties of early-age concrete, and the number of shored floors. In the present study, in addition to these parameters, the effects of shore stiffness and concrete cracking on construction load were numerically studied. Based on the results, a simplified method for estimating construction load was developed. In the proposed method, the calculation of construction load is divided into two steps: 1) onset of concrete placement at the top slab, and 2) removal of shoring. At each step, the construction load increment is distributed to the floor slabs according to the ratio of slab stiffness to shore stiffness. The proposed method was compared with existing methods. In a companion paper, it will be verified by comparison with measurements of actual construction loads.
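The stiffness-ratio distribution idea can be sketched as a chain of springs: at each shored level the load increment splits between the slab and the shores in proportion to k_slab / (k_slab + k_shore), and the shores pass the remainder down. This is an illustrative simplification, not the authors' full two-step procedure:

```python
def distribute_increment(load, slab_k, shore_k_list):
    """Push a construction load increment down a stack of shored floors.

    Each level's slab takes load * k_slab / (k_slab + k_shore); the shores
    transmit the rest to the level below. The lowest level takes what is left.
    """
    carried = []
    remaining = load
    for shore_k in shore_k_list:
        share = remaining * slab_k / (slab_k + shore_k)  # slab's portion at this level
        carried.append(share)
        remaining -= share                               # passed down through the shores
    carried.append(remaining)
    return carried
```

With very stiff shores almost the entire increment bypasses the freshly cast slabs and lands on the lowest supported level, which is why ignoring shore stiffness can misestimate the construction load on each floor.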

Evaluation on Clamping force of High Strength Bolts By Temperature Parameter (온도변수에 따른 고력볼트 체결력 평가)

  • Nah, Hwan Seon;Lee, Hyeon Ju;Kim, Kang Seok;Kim, Jin Ho;Kim, Woo Bum
    • Journal of Korean Society of Steel Construction / v.20 no.3 / pp.399-407 / 2008
  • The clamping of torque-shear (TS) bolts is based on KS B 2819. It is commonly misunderstood that the specified tension of a TS bolt is always induced when the pin-tail breaks. However, the clamping forces in slip-critical connections often fail to meet the intended tension, since they vary considerably with the torque coefficient, which depends on temperature, even after the pin-tail breaks. In this study, the tension of torque-shear bolts was compared with that of two types of high-strength hexagon bolts at temperatures from -10°C to 50°C. For torque-shear bolts, the average clamping force increased by 20 kN as the temperature increased. For galvanized high-strength hexagon bolts, the average clamping forces at 0°C, 20°C, and 50°C exceeded the standard bolt tension of 178 kN, and the worst standard deviation was 50 kN. For high-strength hexagon bolts, the average clamping forces increased as the temperature went up, and the worst standard deviation was 33 kN, lower than that of the galvanized high-strength hexagon bolts. As for the turn-of-the-nut method, at a nut rotation of 90°, the two types of high-strength hexagon bolts did not meet the intended design bolt tension of 162 kN; it is therefore necessary to re-evaluate the specified range of nut rotation, 120° ± 30°.
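The temperature dependence discussed here enters through the torque coefficient K in the short-form torque equation T = K·d·F. A hedged sketch follows; the linear drift of K with temperature and all numeric values are hypothetical, chosen only to mimic the reported trend:

```python
def bolt_tension_kn(torque_nm, diameter_mm, k_coeff):
    """Solve the short-form torque equation T = K * d * F for the bolt tension F (kN)."""
    d_m = diameter_mm / 1000.0
    return torque_nm / (k_coeff * d_m) / 1000.0  # N -> kN

def torque_coeff(k_at_20c, temp_c, slope_per_degc=-0.0004):
    """Hypothetical linear drift of the torque coefficient with temperature."""
    return k_at_20c + slope_per_degc * (temp_c - 20.0)
```

With T = 480 N·m and d = 20 mm, K = 0.15 gives 160 kN of tension; a lower K at a higher temperature yields a higher tension for the same break torque, consistent with the observed increase in clamping force with temperature.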

Total reference-free displacements for condition assessment of timber railroad bridges using tilt

  • Ozdagli, Ali I.;Gomez, Jose A.;Moreu, Fernando
    • Smart Structures and Systems / v.20 no.5 / pp.549-562 / 2017
  • The US railroad network carries 40% of the nation's total freight. Railroad bridges are the most critical part of the network infrastructure and, therefore, must be properly maintained for operational safety. Railroad managers inspect bridges by measuring displacements under train crossing events to assess their structural condition and prioritize bridge management and safety decisions accordingly. The displacement of a railroad bridge under train crossings is one parameter of interest to railroad bridge owners, as it quantifies a bridge's ability to perform safely and addresses its serviceability. Railroad bridges with poor track conditions will have amplified displacements under heavy loads due to impacts between the wheels and rail joints. Under these circumstances, vehicle-track-bridge interactions could cause excessive bridge displacements and, hence, unsafe train crossings. If displacements during train crossings could be measured objectively, owners could repair or replace less safe bridges first. However, data on bridge displacements is difficult to collect in the field, as a fixed point of reference is required for measurement. Accelerations can be used to estimate dynamic displacements, but to date, the pseudo-static displacements cannot be measured using reference-free sensors. This study proposes a method to estimate total transverse displacements of a railroad bridge under live train loads using acceleration and tilt data at the top of the exterior pile bent of a standard timber trestle, where train derailment due to excessive lateral movement is the main concern. Researchers used real bridge transverse displacement data under train traffic from varying bridge serviceability levels. 
This study explores the design of a new bridge deck-pier experimental model that simulates the vibrations of railroad bridges under traffic, using a shake table to input train crossing data collected from the field into a laboratory model of a standard timber railroad pile bent. Reference-free sensors measured both the inclination angle and the accelerations of the pile cap. These readings are used to estimate the total displacements of the bridge using data filtering. The estimated displacements are then compared to the true responses of the model measured with displacement sensors. The method yielded an average peak error of 10% and an average root mean square error of 5%, indicating that it can cost-effectively measure the total displacement of railroad bridges without a fixed reference.
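The estimation idea (pseudo-static displacement from tilt plus dynamic displacement from acceleration) can be sketched in a simplified form; the mean-removal detrending below is a crude stand-in for the data filtering the authors used, and the rigid-rotation tilt model is an assumption:

```python
import math

def pseudo_static_disp(height_m, tilt_rad):
    """Pseudo-static transverse displacement of a pile bent rotating rigidly by its tilt."""
    return height_m * math.sin(tilt_rad)

def dynamic_disp(accel, dt):
    """Double trapezoidal integration of (mean-removed) acceleration to displacement."""
    mean_a = sum(accel) / len(accel)
    a = [x - mean_a for x in accel]           # crude detrend to limit integration drift
    v, vel = 0.0, []
    for i in range(1, len(a)):
        v += 0.5 * (a[i - 1] + a[i]) * dt
        vel.append(v)
    mean_v = sum(vel) / len(vel)
    vel = [x - mean_v for x in vel]           # detrend the velocity as well
    d, disp = 0.0, [0.0]
    for i in range(1, len(vel)):
        d += 0.5 * (vel[i - 1] + vel[i]) * dt
        disp.append(d)
    return disp

def total_disp(height_m, tilt_rad, accel, dt):
    """Total displacement = pseudo-static (from tilt) + dynamic (from acceleration)."""
    static = pseudo_static_disp(height_m, tilt_rad)
    return [static + x for x in dynamic_disp(accel, dt)]
```

The tilt term recovers exactly the slowly varying component that double integration of acceleration loses, which is why combining the two sensors yields the total, reference-free displacement.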