• Title/Summary/Keyword: Modelling Error


Development of an integrated machine learning model for rheological behaviours and compressive strength prediction of self-compacting concrete incorporating environmental-friendly materials

  • Pouryan Hadi;KhodaBandehLou Ashkan;Hamidi Peyman;Ashrafzadeh Fedra
    • Structural Engineering and Mechanics
    • /
    • v.86 no.2
    • /
    • pp.181-195
    • /
    • 2023
  • To predict the rheological behaviours and the compressive strength of self-compacting concrete (SCC) incorporating environmentally friendly materials as cement substitutes, a comparative evaluation of machine learning methods is conducted. To model four parameters, slump flow diameter, L-box ratio, V-funnel time, and 28-day compressive strength, a complete mix-design dataset is gathered from the available literature and used to construct the proposed machine learning models: SVM, MARS, and Mp5-MT. Six input variables, the amount of binder, the percentage of SCMs, the water-to-binder ratio, the amounts of fine and coarse aggregates, and the amount of superplasticizer, are grouped in a particular pattern. A gravitational search algorithm (GSA) is used to optimize the hyper-parameters of the MARS model for the lowest possible prediction error. In terms of the correlation coefficient for modelling slump flow diameter, L-box ratio, V-funnel time, and compressive strength, the prediction results showed that combining MARS with GSA improved the accuracy of the standalone MARS model by 1.35%, 11.1%, 2.3%, and 1.07%, respectively. By contrast, Mp5-MT generally demonstrates greater identification capability and more accurate prediction than MARS-GSA, and it may be regarded as an efficient approach to forecasting the rheological behaviours and compressive strength of SCC in infrastructure practice.
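
The GSA-driven hyper-parameter tuning described in this abstract can be illustrated with a short, hedged sketch. MARS and Mp5-MT are not available in scikit-learn, so a GradientBoostingRegressor stands in as the tuned learner; the synthetic six-variable dataset, the search bounds, and the GSA constants below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the six-variable SCC mix-design dataset (assumption).
X, y = make_regression(n_samples=200, n_features=6, noise=10.0, random_state=0)

def objective(params):
    """Cross-validated RMSE for one hyper-parameter vector (lower is better)."""
    learning_rate, max_depth = params
    model = GradientBoostingRegressor(learning_rate=float(learning_rate),
                                      max_depth=int(round(max_depth)),
                                      random_state=0)
    mse = -cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()
    return float(np.sqrt(mse))

rng = np.random.default_rng(0)
lo, hi = np.array([0.01, 1.0]), np.array([0.5, 6.0])    # search bounds (assumed)
n_agents, n_iter, G0, eps = 8, 15, 100.0, 1e-12
pos = rng.uniform(lo, hi, size=(n_agents, 2))
vel = np.zeros_like(pos)

for t in range(n_iter):
    fit = np.array([objective(p) for p in pos])
    worst, best = fit.max(), fit.min()
    m = (worst - fit) / (worst - best + eps)             # relative masses
    M = m / (m.sum() + eps)
    G = G0 * np.exp(-8.0 * t / n_iter)                   # decaying gravitational constant
    acc = np.zeros_like(pos)
    for i in range(n_agents):
        for j in range(n_agents):
            if i != j:
                diff = pos[j] - pos[i]
                dist = np.linalg.norm(diff) + eps
                # acceleration of agent i (its own mass cancels out of force / mass)
                acc[i] += rng.random() * G * M[j] * diff / dist
    vel = rng.random(pos.shape) * vel + acc
    pos = np.clip(pos + vel, lo, hi)

final_fit = [objective(p) for p in pos]
print("best hyper-parameters:", pos[int(np.argmin(final_fit))])
```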

Discrete element modeling of strip footing on geogrid-reinforced soil

  • Sarfarazi, Vahab;Tabaroei, Abdollah;Asgari, Kaveh
    • Geomechanics and Engineering
    • /
    • v.29 no.4
    • /
    • pp.435-449
    • /
    • 2022
  • In this paper, unreinforced and geogrid-reinforced soil foundations were modelled by the discrete element method and loaded through a surface strip footing. The effects of the horizontal position, vertical position, thickness, and number of geogrid layers, and of the confining pressure, on the footing settlement and on the propagation of tensile force along the geogrids were investigated. The interaction between a rectangular tunnel and a strip footing, with and without a geogrid layer, was also analysed. Experimental results from the literature were used to validate the numerically obtained footing pressure-settlement relationships for reinforced and unreinforced soil foundations. The models and micro input parameters used in the numerical modelling of the reinforced and unreinforced soil tunnel were the same as those used for the soil foundations. The model dimensions were 1000 mm × 600 mm. The normal and shear stiffnesses of the soil were 5 × 10^5 and 2.5 × 10^5 N/m, respectively, and the normal and shear stiffnesses of the geogrid were both 1 × 10^9 N/m. The loading rate was 0.001 mm/sec. The micro input parameters used in the numerical simulation were obtained by trial and error. In addition to the quantitative tensile-force propagation along the geogrids, the footing settlements were visualized. Owing to the combined action of three layers of geogrid reinforcement, the bearing capacity of the reinforced soil tunnel was greatly improved. In such practical reinforced soil formations, the qualitative displacement propagation of soil particles in the soil tunnel and the quantitative vertical displacement propagation along the soil layers/geogrids also reflected the reinforcing effect of the geogrids.
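
As a rough illustration of the micro parameters listed in this abstract, the sketch below evaluates a linear contact law of the kind used in typical DEM codes with the quoted soil stiffnesses (5 × 10^5 and 2.5 × 10^5 N/m); the particle positions, radii, and accumulated shear slip are assumed values for illustration, not inputs from the paper's model.

```python
import numpy as np

k_n, k_s = 5.0e5, 2.5e5                      # soil normal/shear stiffness, N/m (abstract)

# Two circular particles in 2-D: centre positions (m) and radii (m) are assumed.
x1, r1 = np.array([0.000, 0.000]), 0.010
x2, r2 = np.array([0.019, 0.000]), 0.010

d = x2 - x1
dist = np.linalg.norm(d)
overlap = r1 + r2 - dist                     # positive when the particles touch

if overlap > 0.0:
    normal = d / dist
    f_normal = k_n * overlap * normal        # linear normal contact force vector
    shear_slip = 2.0e-4                      # accumulated tangential slip, m (assumed)
    f_shear = k_s * shear_slip               # tangential force magnitude
    print(f"normal force {np.linalg.norm(f_normal):.0f} N, shear force {f_shear:.0f} N")
```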

Interactive analysis tools for the wide-angle seismic data for crustal structure study (Technical Report) (지각 구조 연구에서 광각 탄성파 자료를 위한 대화식 분석 방법들)

  • Fujie, Gou;Kasahara, Junzo;Murase, Kei;Mochizuki, Kimihiro;Kaneda, Yoshiyuki
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.1
    • /
    • pp.26-33
    • /
    • 2008
  • The analysis of wide-angle seismic reflection and refraction data plays an important role in lithospheric-scale crustal structure studies. However, it is extremely difficult to develop an appropriate velocity structure model directly from the observed data, and the structure model has to be improved step by step, because crustal structure analysis is an intrinsically non-linear problem. There are several subjective processes in wide-angle crustal structure modelling, such as phase identification and trial-and-error forward modelling. Because these subjective processes reduce the uniqueness and credibility of the resultant models, it is important to reduce subjectivity in the analysis procedure. From this point of view, we describe two software tools, PASTEUP and MODELING, for developing crustal structure models. PASTEUP is an interactive application that facilitates the plotting of record sections, the analysis of wide-angle seismic data, and the picking of phases. PASTEUP is equipped with various filters and analysis functions to enhance the signal-to-noise ratio and to help phase identification. MODELING is an interactive application for editing velocity models and for ray tracing. Synthetic traveltimes computed by the MODELING application can be compared directly with the observed waveforms in the PASTEUP application. This reduces subjectivity in crustal structure modelling because traveltime picking, one of the most subjective processes in crustal structure analysis, is not required. MODELING can convert an editable layered structure model into two-way traveltimes that can be compared with time sections of Multi-Channel Seismic (MCS) reflection data. Direct comparison of the wide-angle structure model with the reflection data gives the model more credibility. In addition, both PASTEUP and MODELING are efficient tools for handling large datasets. These software tools help us develop more plausible lithospheric-scale structure models from wide-angle seismic data.
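
The two-way traveltime conversion mentioned at the end of this abstract reduces to simple vertical-incidence arithmetic; a minimal sketch follows, assuming an illustrative three-layer velocity model rather than any model from the paper.

```python
# Vertical-incidence two-way traveltime through a layered velocity model.
layers = [(2.0, 1.8), (3.0, 3.5), (5.0, 6.0)]   # (thickness km, velocity km/s), assumed
twt = sum(2.0 * h / v for h, v in layers)
print(f"two-way traveltime to the base of the model: {twt:.2f} s")
```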

Numerical Study on the Observational Error of Sea-Surface Winds at Ieodo Ocean Research Station (수치해석을 이용한 이어도 종합해양과학기지의 해상풍 관측 오차 연구)

  • Yim Jin-Woo;Lee Kyung-Rok;Shim Jae-Seol;Kim Chong-Am
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.18 no.3
    • /
    • pp.189-197
    • /
    • 2006
  • The influence of the Ieodo Ocean Research Station structure on the surrounding atmospheric flow is carefully investigated using CFD techniques, and the computational results are validated against observed data from the Ieodo Ocean Research Station. In this paper, we performed 3-dimensional CAD modelling of the station, generated the grid system for numerical analysis, and carried out flow analyses using the Navier-Stokes equations coupled with a two-equation turbulence model. For suitable free-stream conditions of wind speed and direction, the interference of the research station structure with the flow field is predicted. In addition, the computational results are benchmarked against observed data to confirm the accuracy of the measured data and the reliable data range of each measuring position according to the wind direction. The results of this research make a quantitative evaluation of the error range of interfered gauge data possible, which is expected to provide base data for accurate sea-surface wind measurement around research stations.

The Application of Adaptive Network-based Fuzzy Inference System (ANFIS) for Modeling the Hourly Runoff in the Gapcheon Watershed (적응형 네트워크 기반 퍼지추론 시스템을 적용한 갑천유역의 홍수유출 모델링)

  • Kim, Ho Jun;Chung, Gunhui;Lee, Do-Hun;Lee, Eun Tae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.31 no.5B
    • /
    • pp.405-414
    • /
    • 2011
  • The adaptive network-based fuzzy inference system (ANFIS), which has been applied successfully to time series prediction and system control, was applied to modeling the hourly runoff in the Gapcheon watershed. The ANFIS used antecedent rainfall and runoff as input. The ANFIS was trained by varying various simulation factors such as the mean areal rainfall estimation, the number of input variables, and the type and number of membership functions. The root mean square error (RMSE), mean peak runoff error (PE), and mean peak time error (TE) were used to validate the ANFIS simulation. The runoff predicted by ANFIS was in good agreement with the measured runoff, and the applicability of ANFIS for modelling the hourly runoff appeared to be good. The forecasting ability of ANFIS up to a maximum lead time of 8 hours was investigated by applying different input structures to the ANFIS model. The accuracy of ANFIS in predicting the hourly runoff decreased as the forecasting lead time increased, so the long-term predictability of ANFIS at longer lead times appears to be limited. The ANFIS may be useful for modeling the hourly runoff and has an advantage over physically based models because constructing an ANFIS model from only input and output data is relatively simple.
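
The three validation statistics named in this abstract (RMSE, peak runoff error, peak time error) can be stated precisely; below is a minimal sketch for a single flood event, where the observed and simulated hourly hydrographs are illustrative assumptions only.

```python
import numpy as np

# Hourly observed and simulated runoff for one flood event (assumed values, m^3/s).
obs = np.array([10., 35., 120., 260., 310., 240., 150., 80., 40.])
sim = np.array([12., 30., 100., 230., 280., 290., 180., 90., 45.])

rmse = np.sqrt(np.mean((obs - sim) ** 2))                # root mean square error
pe = (sim.max() - obs.max()) / obs.max() * 100.0         # peak runoff error, %
te = int(np.argmax(sim) - np.argmax(obs))                # peak time error, h (1 h step)

print(f"RMSE = {rmse:.1f} m^3/s, PE = {pe:.1f} %, TE = {te} h")
```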

Stereo Image-based 3D Modelling Algorithm through Efficient Extraction of Depth Feature (효율적인 깊이 특징 추출을 이용한 스테레오 영상 기반의 3차원 모델링 기법)

  • Ha, Young-Su;Lee, Heng-Suk;Han, Kyu-Phil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.10
    • /
    • pp.520-529
    • /
    • 2005
  • A feature-based 3D modeling algorithm is presented in this paper. Since conventional methods use depth-based techniques, they need much time for image matching to extract depth information. Although feature-based methods have a lower computational load than depth-based ones, feature-based algorithms still need to calculate the modeling error over all pixels within a triangle, which also increases the computation time. Therefore, the proposed algorithm consists of three phases, initial 3D model generation, model evaluation, and model refinement, in order to acquire an efficient 3D model. Intensity gradients and incremental Delaunay triangulation are used in the initial model generation. In this phase, a morphological edge operator is adopted for fast edge filtering, and the incremental Delaunay triangulation is modified to decrease the computation time by avoiding the error calculation over all pixels and by selecting a vertex near the centroid of the previous triangle. After the model generation, sparse vertices are matched, and the faces are then evaluated in the evaluation stage with respect to their size, approximation error, and disparity fluctuation. Thereafter, the faces with a large error are selectively refined into smaller faces. Experimental results showed that the proposed algorithm can acquire an adaptive model with fewer modeling errors for both smooth and abrupt areas and can remarkably reduce the model acquisition time.
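
The face-evaluation idea, measuring how far the disparity inside a triangle departs from the plane through its three vertices, can be sketched in a few lines; the small disparity patch and the triangle below are assumptions for illustration, not data from the paper.

```python
import numpy as np

# Small disparity patch with a bump that a single planar face cannot capture (assumed).
disparity = np.array([[10.0, 10.5, 11.0, 11.5],
                      [10.2, 10.8, 11.3, 11.9],
                      [10.4, 11.0, 13.0, 12.2],
                      [10.6, 11.2, 11.9, 12.5]])

verts = np.array([(0, 0), (0, 3), (3, 0)])        # triangle vertices as (row, col)

# Fit the plane d = a*row + b*col + c through the three vertex disparities.
A = np.column_stack([verts, np.ones(3)])
a, b, c = np.linalg.solve(A, disparity[verts[:, 0], verts[:, 1]])

# Approximation error at the pixels inside the triangle (row + col <= 3 here).
err = max(abs(disparity[r, col] - (a * r + b * col + c))
          for r in range(4) for col in range(4) if r + col <= 3)
print(f"max approximation error within the face: {err:.2f}")
```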

Characteristics of the Point-source Spectral Model for Odaesan Earthquake (M=4.8, '07. 1. 20) (오대산지진(M=4.8, '07. 1. 20)의 점지진원 스펙트럼 모델 특성)

  • Yun, Kwan-Hee;Park, Dong-Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.4
    • /
    • pp.241-251
    • /
    • 2007
  • The observed spectra from the Odaesan earthquake were fitted to a point-source spectral model to evaluate the source spectrum and the spatial features of the modelling error. The source spectrum was calculated by removing from the observed spectra the path- and site-dependent responses (Yun, 2007) that had previously been obtained through an inversion process applied to a large accumulated spectral dataset. The stress drop parameter of the one-corner Brune ω² source model fitted to the estimated source spectrum was well predicted by the scaling relation between magnitude and stress drop developed by Yun et al. (2006). In particular, the estimated spectrum was quite comparable to the two-corner source model that was empirically developed for recent moderate earthquakes around the Korean Peninsula, which indicates that the Odaesan earthquake is a typical moderate earthquake representative of the Korean Peninsula. Other features of the observed spectra from the Odaesan earthquake were also evaluated based on the random error between the observed data and the estimated point-source spectral model. The radiation pattern of the error as a function of azimuth was found to be similar to the theoretical estimate. It was also observed that the spatial distribution of the errors was correlated with the geological map and the Q0 map, which are indicative of seismic boundaries.
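
A minimal sketch of the one-corner Brune ω² source model named in this abstract is given below; the seismic moment, stress drop, and shear-wave velocity are illustrative assumptions, not the values estimated in the paper.

```python
import numpy as np

M0 = 2.0e16          # seismic moment, N*m (roughly consistent with M 4.8; assumed)
stress_drop = 5.0e6  # stress drop, Pa (assumed)
beta = 3500.0        # shear-wave velocity, m/s (assumed)

# Brune (1970) corner frequency and omega-squared displacement source spectrum.
fc = 0.49 * beta * (stress_drop / M0) ** (1.0 / 3.0)
f = np.logspace(-1, 1.5, 100)                 # frequency axis, Hz
spectrum = M0 / (1.0 + (f / fc) ** 2)

print(f"corner frequency fc ~ {fc:.2f} Hz")
```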

Estimation of methane emissions from local and crossbreed beef cattle in Daklak province of Vietnam

  • Ramirez-Restrepo, Carlos Alberto;Van Tien, Dung;Le Duc, Ngoan;Herrero, Mario;Le Dinh, Phung;Van, Dung Dinh;Le Thi Hoa, Sen;Chi, Cuong Vu;Solano-Patino, Cesar;Lerner, Amy M.;Searchinger, Timothy D.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.30 no.7
    • /
    • pp.1054-1060
    • /
    • 2017
  • Objective: This study aimed to evaluate the effects of cattle breed resources and alternative mixed-feeding practices on meat productivity and emission intensities from household farming systems (HFS) in Daklak Province, Vietnam. Methods: Records from Local Yellow × Red Sindhi (Bos indicus; Lai Sind) and 1/2 Limousin, 1/2 Drought Master, and 1/2 Red Angus cattle during the growth (0 to 21 months) and fattening (22 to 25 months) periods were used to better understand variations in meat productivity and enteric methane emissions. Parameters were determined with the ruminant model. Four scenarios were developed: (HFS1) grazing from birth to slaughter on native grasses for approximately 10 h plus 1.5 kg dry matter/d (0.8% live weight [LW]) of a mixture of guinea grass (19%), cassava powder (43%), cotton seed (23%), and rice straw (15%); (HFS2) growth period fed with elephant grass (1% of LW) plus supplementation (1.5% of LW) with rice bran (36%), maize (33%), and cassava (31%) meals; and (HFS3 and HFS4) elephant grass as above but with concentrate supplementation reaching 2% and 1% of LW, respectively. Results: Compared to HFS1, emissions (72.3 ± 0.96 kg CH4/animal/lifetime; least squares means ± standard error of the mean) were 15%, 6%, and 23% lower (p<0.01) for HFS2, HFS3, and HFS4, respectively. The predicted methane efficiencies (CO2-eq) per kg of LW at slaughter (4.3 ± 0.15), per kg of carcass weight (8.8 ± 0.25), and per kg of edible protein (44.1 ± 1.29) were also lower (p<0.05) in HFS4. In particular, irrespective of the HFS, changes in feed supply and ration had a more positive impact on emission intensities when crossbred 1/2 Red Angus cattle were fed than in their crossbred counterparts. Conclusion: Modest improvements in feeding practices and integrated modelling frameworks may offer potential trade-offs to respond to climate change in Vietnam.
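
The per-kg emission intensities quoted in this abstract follow from simple unit conversions of the lifetime methane output; the sketch below shows the arithmetic, with the CH4 global warming potential and the live and carcass weights being assumed values rather than data from the study.

```python
lifetime_ch4_kg = 72.3       # kg CH4 per animal per lifetime (from the abstract)
gwp_ch4 = 25.0               # 100-year global warming potential of CH4 (assumed)
live_weight_kg = 420.0       # live weight at slaughter, kg (assumed)
carcass_weight_kg = 210.0    # carcass weight, kg (assumed)

co2eq = lifetime_ch4_kg * gwp_ch4
print(f"emission intensity per kg live weight : {co2eq / live_weight_kg:.1f} kg CO2-eq")
print(f"emission intensity per kg carcass     : {co2eq / carcass_weight_kg:.1f} kg CO2-eq")
```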

Discrete element simulations of continental collision in Asia (아시아 대륙충돌의 개별요소 시뮬레이션)

  • Tanaka Atsushi;Sanada Yoshinori;Yamada Yasuhiro;Matsuoka Toshifumi;Ashida Yuzuru
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.1-6
    • /
    • 2005
  • Analogue physical modelling using granular materials (i.e., sandbox experiments) has been applied with great success to a number of geological problems at various scales. Such physical experiments can also be simulated numerically with the Discrete Element Method (DEM). In this study, we apply DEM simulation to the collision between the Indian subcontinent and the Eurasian Plate, one of the most significant current tectonic processes on Earth. DEM simulation has been applied to various kinds of dynamic modelling, not only in structural geology but also in soil mechanics, rock mechanics, and related fields. Because the target of the investigation is treated as an assembly of many small particles, DEM simulation makes it possible to handle objects with large and discontinuous deformations. In DEM simulations, however, we often encounter difficulties in examining the validity of the input parameters, since little is known about the relationship between the input parameters for each particle and the properties of the whole assembly. Therefore, in our previous studies (Yamada et al., 2002a, 2002b, 2002c), we had to tune the input parameters by trial and error. To overcome these difficulties, we introduce a numerical biaxial test into the DEM simulation. Using the results of this numerical test, we examine the validity of the input parameters used in the collision model. The resulting collision model is quite similar to the real deformation observed in eastern Asia, and compares well with GPS data and in-situ stress data in eastern Asia.

STL Generation in Reverse Engineering by Delaunay Triangulation (역공학에서의 Delaunay 삼각형 분할에 의한 STL 파일 생성)

  • Lee, Seok-Hui;Kim, Ho-Chan;Heo, Seong-Min
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.26 no.5
    • /
    • pp.803-812
    • /
    • 2002
  • Reverse engineering has been widely used for reconstructing the shape of an object without CAD data and for measuring clay or wood models in the development of new products. To generate a surface from points measured by a laser scanner, the typical steps are scanning a clay or wood model and generating manufacturing data such as an STL file. A laser scanner has great potential for acquiring geometrical data of a model because of its fast measuring speed and high precision. The data from a laser scanner consist of many line stripes of points. A new approach to removing point data with Delaunay triangulation is introduced to deal with problems arising during the reverse engineering process. As a preliminary step, groups of triangles are selected for triangulation based on the angle between triangles, which makes the Delaunay triangulation robust and reliable. The developed software enables the user to specify the selection criteria either by the angle between triangles or by the percentage of triangles to be reduced. The time and error involved in handling point data during the modelling process can thus be reduced, and the resulting accurate RP models will be helpful for an automated process.
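
A minimal sketch of the Delaunay-to-STL step described in this abstract is shown below, using scipy's Delaunay triangulation on the x-y positions of a synthetic point cloud and writing an ASCII STL file; the point cloud, the height function, and the output file name are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 2))            # scanned (x, y) positions (assumed)
z = np.sin(pts[:, 0]) * np.cos(pts[:, 1])              # measured heights (assumed)
xyz = np.column_stack([pts, z])

tri = Delaunay(pts)                                    # triangulate in the x-y plane

with open("model.stl", "w") as f:                      # output path is illustrative
    f.write("solid scanned_model\n")
    for ia, ib, ic in tri.simplices:
        a, b, c = xyz[ia], xyz[ib], xyz[ic]
        n = np.cross(b - a, c - a)
        n = n / (np.linalg.norm(n) + 1e-12)            # unit facet normal
        f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
        f.write("    outer loop\n")
        for v in (a, b, c):
            f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid scanned_model\n")
```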