• Title/Summary/Keyword: Future failure prediction


Analytical Study on Ductility Index of Reinforced Concrete Flexural Members (철근 콘크리트 휨부재의 연성지수에 관한 해석적 연구)

  • Lee, Jae Hoon
    • KSCE Journal of Civil and Environmental Engineering Research, v.14 no.3, pp.391-402, 1994
  • One of the most important design concepts for reinforced concrete structures is to achieve a ductile failure mode; moreover, moment redistribution for economic design is possible when adequate ductility is provided. The flexural ductility index is therefore used as a reference for the possibility of moment redistribution as well as for predicting the flexural behavior of designed R.C. structures. Ductility index equations, however, provide only approximate values because of the linear concrete compressive stress assumption at the tension-steel yielding state. A theoretically more exact ductility index is calculated by numerical analysis with realistic stress-strain curves for concrete and steel and compared with the results from the ductility index equations. The variation of the ductility index for the selected variables and the reasonable maximum tension steel ratio for doubly reinforced sections are investigated. A moment-curvature curve model is also proposed for future research on moment redistribution.

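The curvature ductility index referred to in the abstract above is conventionally defined as the ratio of ultimate to yield curvature, mu = phi_u / phi_y. A minimal sketch of reading it off a computed moment-curvature curve (the data arrays and the peak-moment definition of phi_u are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def curvature_ductility(curvature, moment, yield_moment):
    """Curvature ductility index mu = phi_u / phi_y from a moment-curvature
    curve.  Assumes the moment array is monotonically increasing up to the
    peak (illustrative simplification)."""
    # Yield curvature: curvature at which the moment reaches the yield moment.
    phi_y = np.interp(yield_moment, moment, curvature)
    # Ultimate curvature: curvature at peak moment (one common simplified choice).
    phi_u = curvature[np.argmax(moment)]
    return phi_u / phi_y
```

With a steeper post-yield plateau the index grows, which is the behavior the abstract's parametric study varies.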

Statistical Analysis for Path Break-Up Time of Mobile Wireless Networks (이동 무선망의 경로 붕괴시간에 대한 통계적 분석)

  • Ahn, Hong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.5, pp.113-118, 2015
  • Mobile wireless networks have received a lot of attention as future wireless networks because of their rapid deployment without communication infrastructure. In these networks, the communication path between two arbitrary nodes breaks down when some links in the path move beyond the transmission range($r_0$) due to node mobility. The set of total path break-down time(${\bigcup}T_i$), the union of the path break-down times of every node pair, can be a good measure of the connectivity of a dynamic mobile wireless network. In this paper we show that the distribution of the total path break-down time can be approximated by an exponential probability density function, and we confirm this with experimental data. Statistical knowledge of the break-down time enables quantitative prediction of delay and packet loss between two nodes, and thus provides confidence in the simulation results of mobile wireless networks.
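The exponential approximation described above can be checked with a one-line maximum-likelihood fit: for an exponential distribution, the MLE of the rate parameter is the reciprocal of the sample mean. A minimal sketch (the sample data are hypothetical, not the paper's measurements):

```python
import numpy as np

def fit_exponential(breakup_times):
    """MLE rate for an exponential model: lambda_hat = 1 / sample mean."""
    t = np.asarray(breakup_times, dtype=float)
    return 1.0 / t.mean()

def exceedance_prob(lam, t):
    """P(T > t) under the fitted exponential model, exp(-lambda * t)."""
    return np.exp(-lam * t)
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov) against the empirical break-down times would then confirm or reject the exponential hypothesis.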

An Electrochemical Method to Predict Corrosion Rates in Soils

  • Dafter, M.R
    • Corrosion Science and Technology, v.15 no.5, pp.217-225, 2016
  • Linear polarization resistance (LPR) testing of soils has been used extensively by a number of water utilities across Australia for many years to determine the condition of buried ferrous water mains. The LPR test itself is a relatively simple, inexpensive test that serves as a substitute for actual exhumation and physical inspection of buried water mains to determine corrosion losses. LPR test results (and the corresponding pit-depth estimates), in combination with proprietary pipe-failure algorithms, can provide a useful predictive tool for determining the current and future condition of an asset. A number of LPR tests have been developed for soil by various researchers over the years, but few have gained widespread commercial use, partly due to the difficulty of replicating the results. The author developed an electrochemical cell suitable for LPR soil testing and used this cell to test a series of soil samples obtained through an extensive program of field exhumations. The objective of this testing was to examine the relationship between short-term electrochemical testing and long-term in-situ corrosion of buried water mains, using an LPR test that could be robustly replicated. Forty-one soil samples and related corrosion data were obtained from ad hoc condition assessments of buried water mains located throughout the Hunter region of New South Wales, Australia. Each sample was subjected to the electrochemical test developed by the author, and the resulting polarization data were compared with long-term pitting data obtained from each water main. The results of this testing program enabled the author to undertake a comprehensive review of the LPR technique as applied to soils and to examine whether correlations can be made between LPR test results and long-term field corrosion.
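LPR testing as described above yields a polarization resistance Rp, which is conventionally converted to a corrosion current density through the Stern-Geary relation, i_corr = B / Rp with B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c)). A minimal sketch; the Tafel-slope defaults are illustrative assumptions, not values from the paper:

```python
def stern_geary_B(beta_a, beta_c):
    """Stern-Geary coefficient (V) from anodic/cathodic Tafel slopes (V/decade)."""
    return (beta_a * beta_c) / (2.303 * (beta_a + beta_c))

def corrosion_current_density(Rp, beta_a=0.12, beta_c=0.12):
    """i_corr = B / Rp (A/cm^2 when Rp is in ohm*cm^2).
    The default Tafel slopes are illustrative, not measured values."""
    return stern_geary_B(beta_a, beta_c) / Rp
```

Pit-depth estimates then follow from integrating i_corr over time via Faraday's law, which is where the comparison with long-term field pitting data enters.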

The Improvement of Computational Efficiency in KIM by an Adaptive Time-step Algorithm (적응시간 간격 알고리즘을 이용한 KIM의 계산 효율성 개선)

  • Hyun Nam;Suk-Jin Choi
    • Atmosphere, v.33 no.4, pp.331-341, 2023
  • Numerical forecasting models usually predict future states by performing time integration with a fixed, static time-step. A time-step that is too long can cause model instability and failure of the forecast simulation, while a time-step that is too short causes unnecessary time-integration calculations. Thus, in numerical models, the time-step size can be determined by the CFL (Courant-Friedrichs-Lewy) condition, which acts as a necessary condition for finding a numerical solution. Using the same fixed time-step for every integration is called a static time-step, whereas applying a different time-step to each integration while guaranteeing the stability of the solution in time advancement is called an adaptive time-step. The adaptive time-step algorithm presents the maximum usable time-step for each integration based on the CFL condition. In this paper, the adaptive time-step algorithm is applied to the Korean Integrated Model (KIM), and suitable parameters for the algorithm are determined through monthly verifications of 10-day simulations (during January and July 2017) at about 12 km resolution. Compared with the results obtained by applying a 25-second static time-step to KIM on Supercomputer 5 (Nurion), the algorithm shows similar forecast quality, presents the maximum available time-step for each integration, and improves computational efficiency by reducing the total number of time integrations by 19%.
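The CFL-based adaptive time-step idea above can be sketched in its simplest one-dimensional advective form, dt <= C * dx / u_max (KIM's actual stability criterion is more involved; all names and values here are illustrative):

```python
def adaptive_time_step(dx, u_max, courant=0.9, dt_max=100.0):
    """Largest stable time-step from the 1-D advective CFL condition,
    dt <= courant * dx / u_max, capped at dt_max.  A simplified sketch,
    not KIM's actual limiter."""
    if u_max <= 0.0:
        return dt_max  # no advection constraint; fall back to the cap
    return min(courant * dx / u_max, dt_max)
```

Recomputing this before every integration step is what lets the model take long steps in calm flow and short steps when winds are strong, instead of always using the worst-case static step.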

Development of Long-Term Electricity Demand Forecasting Model using Sliding Period Learning and Characteristics of Major Districts (주요 지역별 특성과 이동 기간 학습 기법을 활용한 장기 전력수요 예측 모형 개발)

  • Gong, InTaek;Jeong, Dabeen;Bak, Sang-A;Song, Sanghwa;Shin, KwangSup
    • The Journal of Bigdata, v.4 no.1, pp.63-72, 2019
  • For electric power, optimal generation and distribution plans based on accurate demand forecasts are necessary because power cannot be recovered once it has been delivered to users through the generation and transmission processes. Failure to predict power demand can cause various social and economic problems, such as the massive power outage of September 2011. Previous studies on forecasting power demand developed ARIMA models, neural network models, and other methods. However, limitations such as the use of the national average ambient air temperature and the application of uniform criteria to distinguish seasonality cause data distortion or degrade the performance of the predictive model. To improve the performance of the power demand prediction model, we divided Korea into five major regions and developed linear regression and neural network prediction models that reflect seasonal characteristics through regional characteristics and a sliding period learning technique. With the proposed approach, it seems possible to forecast future demand in the short term as well as the long term. It is also possible to consider various events and exceptional cases during a certain period.

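The sliding period learning technique described above can be sketched as a window generator: each model fit sees only the most recent training periods, and the window advances one period at a time (a hypothetical reading of the method; names and window sizes are illustrative):

```python
def sliding_windows(series, train_len, horizon):
    """Generate (train, target) pairs that slide over the series, so each
    model fit uses the most recent train_len periods to predict the next
    horizon periods.  Illustrative sketch of a sliding-period scheme."""
    windows = []
    for start in range(0, len(series) - train_len - horizon + 1):
        train = series[start:start + train_len]
        target = series[start + train_len:start + train_len + horizon]
        windows.append((train, target))
    return windows
```

Fitting one regional regression or neural network per window, rather than one global model, is what lets recent seasonal behavior dominate each forecast.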

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.99-112, 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that research, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown comparably remarkable performance. Recently, several works have reported that ensemble performance can degrade when the classifiers of an ensemble are highly correlated with one another, resulting in a multicollinearity problem that impairs the ensemble; they have also proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learners can guarantee some diversity among the classifiers.
On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared bankruptcy prediction performance on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improvement than the NN and SVM ensembles. Further analysis with variance inflation factors (VIF) empirically shows that the performance degradation of the ensemble is due to multicollinearity, and proposes that ensemble optimization is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of its classifiers. CO-NN uses a GA, which has been widely applied to various optimization problems, to solve the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the commonly used measures of multicollinearity, is added to ensure classifier diversity by removing high correlation among the classifiers. We use Microsoft Excel and the GA software package Evolver.
Experiments on company failure prediction have shown that CO-NN effectively achieves stable performance enhancement of NN ensembles through a choice of classifiers that accounts for the correlations within the ensemble. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby shows higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in future research.
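The VIF constraint used by CO-NN above can be computed directly: for each classifier's output column, regress it on the remaining columns and take VIF_j = 1 / (1 - R_j^2). A minimal NumPy sketch (not the paper's Excel/Evolver implementation):

```python
import numpy as np

def vif(preds):
    """Variance inflation factor for each column of preds (rows = samples,
    columns = classifier outputs).  VIF_j = 1 / (1 - R_j^2), where R_j^2 is
    from regressing column j on all other columns."""
    preds = np.asarray(preds, dtype=float)
    n, k = preds.shape
    out = []
    for j in range(k):
        y = preds[:, j]
        # Design matrix: intercept plus all columns except j.
        X = np.column_stack([np.ones(n), np.delete(preds, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        ss_tot = (y - y.mean()) @ (y - y.mean())
        r2 = 1.0 - (resid @ resid) / ss_tot
        out.append(1.0 / (1.0 - r2) if r2 < 1.0 else np.inf)
    return out
```

In a GA fitness function, a chromosome whose selected classifiers produce any VIF above a threshold (commonly 10) would be penalized or rejected, which is how the diversity constraint is enforced.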

Prediction of Expected Residual Useful Life of Rubble-Mound Breakwaters Using Stochastic Gamma Process (추계학적 감마 확률과정을 이용한 경사제의 기대 잔류유효수명 예측)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers, v.31 no.3, pp.158-169, 2019
  • A probabilistic model that can predict the residual useful lifetime of a structure is formulated using the gamma process, one of the stochastic processes. The formulated stochastic model can take into account both the sampling uncertainty associated with damage measured up to the present and the temporal uncertainty of cumulative damage over time. A method for estimating the parameters of the stochastic model is additionally proposed by introducing the least squares method and the method of moments, so that the age of a structure, its operational environment, and the evolution of damage with time can be considered. Some features related to the residual useful lifetime are first investigated through a sensitivity analysis on the parameters under a simple setting of a single damage datum measured at the current age. The stochastic model is then applied directly to the rubble-mound breakwater. The parameters of the gamma process can be estimated from several experimental data sets on the damage processes of armor rocks of rubble-mound breakwaters. The expected damage levels over time, numerically simulated with the estimated parameters, are in very good agreement with those from the flume testing. It has been found from various numerical calculations that the probabilities of exceeding the failure limit converge, after a long time, to the constraint that the model must satisfy. Meanwhile, the expected residual useful lifetimes evaluated from the failure probabilities differ depending on the behavior of the damage history. In particular, as the coefficient of variation of cumulative damage becomes large, the expected residual useful lifetimes show significant discrepancies from those of the deterministic regression model, mainly because the sampling and temporal uncertainties associated with damage cause the first time to failure to be widely distributed.
Therefore, the stochastic model presented in this paper for predicting the residual useful lifetime of a structure can properly implement a probabilistic assessment of the current damage state of the structure as well as account for the temporal uncertainty of future cumulative damage.
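The first time to failure under a stationary gamma process, from which the expected residual useful life above is derived, can be approximated by Monte Carlo: the damage increment over each interval dt is Gamma(shape_rate*dt, scale) distributed. A minimal sketch with purely illustrative parameters, not those estimated in the paper:

```python
import numpy as np

def simulate_first_passage(shape_rate, scale, failure_limit, dt=1.0,
                           horizon=200, n_paths=10000, rng=None):
    """Monte Carlo first time to failure for a stationary gamma process:
    increments over dt are Gamma(shape_rate * dt, scale).  Paths that never
    reach the failure limit within the horizon stay at infinity."""
    rng = np.random.default_rng(0) if rng is None else rng
    steps = int(horizon / dt)
    damage = np.zeros(n_paths)
    first = np.full(n_paths, np.inf)
    for k in range(1, steps + 1):
        damage += rng.gamma(shape_rate * dt, scale, size=n_paths)
        newly = (damage >= failure_limit) & np.isinf(first)
        first[newly] = k * dt
    return first  # residual-life statistics come from this sample
```

Subtracting the current age from the sampled first-passage times (conditioned on survival to that age) gives a residual-useful-life distribution, whose mean is the quantity the abstract reports.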

A Study on the Prediction Model of the Elderly Depression

  • SEO, Beom-Seok;SUH, Eung-Kyo;KIM, Tae-Hyeong
    • The Journal of Industrial Distribution & Business, v.11 no.7, pp.29-40, 2020
  • Purpose: Modern society faces many urban problems, such as aging, the hollowing out of old city centers, and polarization within cities. In this study, we apply big data and machine learning methodologies to predict depression symptoms in the elderly population early on, thus contributing to solving the problem of elderly depression. Research design, data and methodology: The machine learning technique used was random forest, and the correlation between the widely used CES-D10 scale and other variables was analyzed to identify important variables. Two dependent variables were set up, distinguishing normal from depressed and moderate from severe depression, and a total of 106 independent variables were included, covering the objective characteristics of the elderly as well as survey responses on subjective health, cognitive ability, employment, household background, income, consumption, assets, subjective expectations, and quality of life. Results: Satisfaction with residential area and quality of life, cognitive ability scores, satisfaction with economic conditions, and the number of outpatient clinic visits were found to be important variables in classifying elderly depression. In the random forest performance evaluation, the model classifying the presence of elderly depression achieved an accuracy of 86.3%, sensitivity of 79.5%, and specificity of 93.3%, while the model classifying the degree of depression achieved an accuracy of 86.1%, sensitivity of 93.9%, and specificity of 74.7%. Conclusions: In this study, the important variables of the predictive model were identified using the random forest technique, with the focus on predictive performance itself.
Although the study has limitations, such as the lack of clear criteria for classifying depression levels and the failure to reflect variables other than the KLoSA data, if additional variables are secured in the future and high-performance predictive models are estimated through various machine learning techniques, it should be possible to improve the quality of life of senior citizens through early detection of depression and thereby support public policy decisions.
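The sensitivity and specificity figures reported above come from a standard confusion-matrix calculation, which can be sketched as follows (the convention 1 = depressed, 0 = normal is an illustrative assumption):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), with
    label 1 treated as the positive (depressed) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting both alongside accuracy matters here because the classes are imbalanced: a model that labels everyone "normal" can score high accuracy while having zero sensitivity.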

The Study on the Characteristics of Mode I Crack for Cross-ply Carbon/Epoxy Composite Laminates Based on Stress Fields (응력장을 이용한 직교적층 탄소섬유/에폭시 복합재 적층판의 모드 I 균열 특성 연구)

  • Kang, Min-Song;Jeon, Min-Hyeok;Kim, In-Gul;Woo, Kyeong-Sik
    • Composites Research, v.32 no.6, pp.327-334, 2019
  • Delamination is a failure mode peculiar to composite laminates. Several numerical studies using finite element analysis have been carried out on the delamination behavior of unidirectional composite laminates. For multi-directional composite laminates, however, fracture may occur not only along the resin-fiber interface between plies, known as interply or interlaminar fracture, but also within a ply, known as interyarn or intralaminar fracture, accompanied by matrix cracking and fiber bridging. In addition, interlaminar and intralaminar cracks appear in irregular proportions, and intralaminar cracks proceed at arbitrary angles. A probabilistic analysis method is therefore more advantageous than a deterministic one for predicting crack growth behavior within a layer. In this paper, we analyze the crack path when a mode I load is applied to cross-ply carbon/epoxy composite laminates and collect and analyze the probability data to serve as the basis of future probabilistic analysis. Two criteria for the theoretical analysis of the crack growth direction are proposed by analyzing the stress field at the crack tip of orthotropic materials. Using the proposed method, the crack growth directions of the cross-ply carbon/epoxy laminates are analyzed qualitatively and quantitatively and compared with experimental results.
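As a point of comparison only, the classical maximum tangential stress (MTS) criterion for isotropic materials gives the crack kink angle from the mode I/II stress intensity factors; the paper's two criteria for orthotropic laminates differ, so this is merely an illustration of the type of stress-field calculation involved:

```python
import math

def mts_kink_angle(KI, KII):
    """Kink angle (radians) from the maximum tangential stress criterion
    for an ISOTROPIC material: theta maximizing the tangential stress at
    the crack tip.  Shown only to illustrate the class of criterion; the
    orthotropic criteria in the paper are different."""
    if KII == 0.0:
        return 0.0  # pure mode I: crack grows straight ahead
    return 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))
```

For a pure mode II crack this gives about -70.5 degrees, the textbook MTS result; in an orthotropic ply the fiber direction biases the angle away from such isotropic predictions, which is what the proposed criteria capture.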

Estimation of Structural Deterioration of Sewer using Markov Chain Model (마르코프 연쇄 모델을 이용한 하수관로의 구조적 노후도 추정)

  • Kang, Byong Jun;Yoo, Soon Yu;Zhang, Chuanli;Park, Kyoo Hong
    • KSCE Journal of Civil and Environmental Engineering Research, v.43 no.4, pp.421-431, 2023
  • Sewer deterioration models can offer decision makers important predictions of the future condition of the asset when implementing sewer pipe network management programs. In this study, a Markov chain model was used to estimate sewer deterioration trends based on historical structural condition assessment data obtained by CCTV inspection. The data were limited to Hume pipes with diameters of 450 mm and 600 mm in three sub-catchment areas of city A, collected by CCTV inspection projects performed in 1998-1999 and 2010-2011. It was found that sewers in sub-catchment area EM have deteriorated faster than those in the other two sub-catchments. In sub-catchment area EM, main defects were predicted to develop in 29% of 450 mm sewers and 38% of 600 mm sewers within 35 years of installation, and serious failures in 62% of 450 mm sewers and 74% of 600 mm sewers within 100 years. In sub-catchment area SN, main defects were predicted in 26% of 450 mm sewers and 35% of 600 mm sewers within 35 years, and in sub-catchment area HK in 27% of 450 mm sewers and 37% of 600 mm sewers within 35 years. The larger 600 mm pipes were found to deteriorate faster than the smaller 450 mm pipes by about 12 years. Assuming a 40% main-defect rate as the threshold for estimating life expectancy, the life expectancy of 450 mm pipes was estimated as 60 years in sub-catchment area SN, 42 years in EM, and 59 years in HK. For 600 mm pipes, it was estimated as 43, 34, and 39 years in sub-catchment areas SN, EM, and HK, respectively.
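The Markov chain estimation above propagates a condition-state distribution by repeated multiplication with a transition matrix, pi_t = pi_0 P^t; the percentage of pipes in a defect state at year t is then read directly off pi_t. A minimal sketch with a purely illustrative two-state matrix, not the one estimated from the CCTV data:

```python
import numpy as np

def condition_distribution(P, initial, years):
    """Propagate a condition-state distribution through a Markov chain:
    pi_t = pi_0 @ P^t.  P rows are transition probabilities from each
    state (illustrative; the paper estimates P from inspection data)."""
    pi = np.asarray(initial, dtype=float)
    P = np.asarray(P, dtype=float)
    for _ in range(years):
        pi = pi @ P
    return pi
```

With the real multi-state matrix, the life expectancy in the abstract corresponds to the first year in which the probability mass in the main-defect states exceeds the chosen 40% threshold.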