• Title/Summary/Keyword: Civil-engineering dataset


A GMDH-based estimation model for axial load capacity of GFRP-RC circular columns

  • Mohammed Berradia;El Hadj Meziane;Ali Raza;Mohamed Hechmi El Ouni;Faisal Shabbir
    • Steel and Composite Structures / v.49 no.2 / pp.161-180 / 2023
  • In previous research, axial compressive capacity models for glass fiber-reinforced polymer (GFRP)-reinforced circular concrete compression members confined with a GFRP helix were proposed on the basis of small and noisy datasets that considered only a limited number of parameters, and they therefore showed limited accuracy. Consequently, it is important to propose an accurate model, based on a refined and large test dataset, that accounts for the various parameters of such members. The core objective and novelty of the current research is to propose a deep learning model for the axial compressive capacity of GFRP-reinforced circular concrete columns confined with a GFRP helix, utilizing the various parameters of a large experimental dataset to maximize the precision of the estimates. To this end, a test dataset of 61 GFRP-reinforced circular concrete columns confined with a GFRP helix was assembled from prior studies. Fifteen theoretical models were assessed over the assembled dataset using different statistical coefficients. A new model based on the group method of data handling (GMDH) is then proposed. By accounting for the axial contribution of the GFRP main bars and the confining effect of the transverse GFRP helix, the proposed model performed well over the assembled dataset and achieved the highest precision among the compared equations, with MAE = 195.67, RMSE = 255.41, and R2 = 0.94. The GMDH model also performed well in terms of the normal distribution of its estimates, deviating from unity by only 2.5%. The proposed model can accurately calculate the axial compressive capacity of FRP-reinforced concrete compression members and can be adopted for further analysis and design of such members in structural engineering.
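
The assessment above compares 15 capacity equations against the 61-column dataset using statistical coefficients (MAE, RMSE, R2). A minimal sketch of that comparison step follows; it does not reproduce the GMDH network itself, and the measured and predicted capacities are hypothetical placeholders.

```python
# Minimal sketch: statistical coefficients (MAE, RMSE, R^2) for comparing a
# capacity model's predictions against an experimental dataset.
# The values below are hypothetical placeholders, not the 61-column dataset.
import numpy as np

def evaluate_capacity_model(P_measured, P_predicted):
    """Return MAE, RMSE and R^2 for predicted vs. measured axial capacities."""
    P_measured = np.asarray(P_measured, dtype=float)
    P_predicted = np.asarray(P_predicted, dtype=float)
    errors = P_predicted - P_measured
    mae = np.mean(np.abs(errors))
    rmse = np.sqrt(np.mean(errors ** 2))
    ss_res = np.sum(errors ** 2)
    ss_tot = np.sum((P_measured - P_measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

if __name__ == "__main__":
    measured = [1450.0, 1720.0, 2010.0, 2350.0]   # hypothetical axial capacities (kN)
    predicted = [1398.0, 1785.0, 1952.0, 2410.0]  # hypothetical model estimates (kN)
    mae, rmse, r2 = evaluate_capacity_model(measured, predicted)
    print(f"MAE={mae:.2f}, RMSE={rmse:.2f}, R2={r2:.3f}")
```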

Building-up and Feasibility Study of Image Dataset of Field Construction Equipments for AI Training (인공지능 학습용 토공 건설장비 영상 데이터셋 구축 및 타당성 검토)

  • Na, Jong Ho;Shin, Hyu Soun;Lee, Jae Kang;Yun, Il Dong
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.1 / pp.99-107 / 2023
  • Recently, the rate of deaths and safety accidents at construction sites has been the highest among all industries. To apply artificial intelligence technology to construction sites, it is essential to secure a dataset that can be used as basic training data. In this paper, a large number of images were collected at actual construction sites, and the major construction equipment objects mainly operated at civil-engineering sites were defined. An optimal training dataset was completed through an annotation process covering about 90,000 images. The reliability of the dataset was verified with an mAP of over 90% using YOLO, a representative model in the field of object detection. The construction equipment training dataset built in this study has been released and is currently available on the public data portal of the Ministry of Public Administration and Security. This dataset is expected to be freely used for applications of object detection technology on construction sites, especially in the field of construction safety.
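
As a hedged illustration of how such an annotated dataset could be used to train a detector and read off mAP, the sketch below uses the Ultralytics YOLOv8 package; the specific YOLO version, training configuration, and the data file "equipment.yaml" are assumptions, not details from the paper.

```python
# Sketch only: train and validate a YOLO detector on an annotated equipment
# dataset described by a (hypothetical) "equipment.yaml" data config.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                  # pretrained checkpoint as a starting point
model.train(data="equipment.yaml", epochs=100, imgsz=640)   # train on the annotated images
metrics = model.val()                                       # evaluate on the validation split
print("mAP50:", metrics.box.map50)                          # mAP at IoU 0.5, the metric reported above
```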

Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection (강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool)

  • Jeon, MyungHwan;Lee, Yeongjun;Shin, Young-Sik;Jang, Hyesu;Yeu, Taekyeong;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.139-149 / 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset built from 3D CAD models for deep learning-based object detection. To serve as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of the object are needed. We propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the synthetic dataset, we use Mask R-CNN, a state-of-the-art deep learning-based object detection model. For the experiments, we build an experimental environment that reflects actual underwater conditions. We show that an object detection model trained on our dataset produces significantly accurate and robust results in the underwater environment. Lastly, we verify that the synthetic dataset is suitable for training deep learning models for underwater environments.
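
The sketch below illustrates the general idea of compositing a rendered object onto a background image and deriving bounding-box and mask annotations automatically from its alpha channel. It is a simplified stand-in for the paper's tool: the file names and class id are hypothetical, and the pose and contour annotations mentioned above are not generated here.

```python
# Sketch: paste an RGBA cutout (e.g., rendered from a 3D CAD model) onto a
# background image and derive mask and bounding-box annotations automatically.
# Assumes the pasted object fits entirely within the background.
import json
import numpy as np
from PIL import Image

def composite_and_annotate(background_path, object_path, top_left, class_id=1):
    bg = Image.open(background_path).convert("RGBA")
    obj = Image.open(object_path).convert("RGBA")       # rendered object with transparency
    bg.paste(obj, top_left, mask=obj)                   # alpha-aware paste onto the background

    # Binary mask of the pasted object in full-image coordinates
    mask = np.zeros((bg.height, bg.width), dtype=np.uint8)
    alpha = np.array(obj)[:, :, 3] > 0
    x0, y0 = top_left
    mask[y0:y0 + obj.height, x0:x0 + obj.width] = alpha.astype(np.uint8)

    ys, xs = np.nonzero(mask)
    bbox = [int(xs.min()), int(ys.min()), int(xs.max() - xs.min()), int(ys.max() - ys.min())]
    annotation = {"category_id": class_id, "bbox": bbox, "area": int(mask.sum())}
    return bg.convert("RGB"), mask, annotation

if __name__ == "__main__":
    # Hypothetical file names for illustration only
    image, mask, ann = composite_and_annotate("seafloor_bg.png", "cad_object.png", (50, 80))
    image.save("synthetic_000.png")
    print(json.dumps(ann))
```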

An active learning method with difficulty learning mechanism for crack detection

  • Shu, Jiangpeng;Li, Jun;Zhang, Jiawei;Zhao, Weijian;Duan, Yuanfeng;Zhang, Zhicheng
    • Smart Structures and Systems / v.29 no.1 / pp.195-206 / 2022
  • Crack detection is essential for the inspection of existing structures, and crack segmentation based on deep learning is a significant solution. However, datasets are usually one of the key issues. When building a new dataset for deep learning, the laborious and time-consuming annotation of a large number of crack images is an obstacle. The aim of this study is to develop an approach that automatically selects a small portion of the most informative crack images from a large pool for annotation, rather than labeling all crack images. An active learning method with a difficulty learning mechanism for crack segmentation tasks is proposed. Experiments are carried out on a crack image dataset of a steel box girder, which contains 500 images of 320×320 size for training, 100 for validation, and 190 for testing. In the active learning experiments, the 500 training images are treated as unlabeled images. The acquisition function in our method is compared with traditional acquisition functions, i.e., Query-By-Committee (QBC), Entropy, and Core-set. Further, comparisons are made on four common segmentation networks: U-Net, DeepLabV3, Feature Pyramid Network (FPN), and PSPNet. The results show that when training uses the 200 (40%) most informative crack images selected by our method, the four segmentation networks achieve 92%-95% of the performance obtained when training uses all 500 (100%) crack images. The acquisition function in our method measures the informativeness of unlabeled crack images more accurately than the four traditional acquisition functions at most active learning stages. Our method can select the most informative images for annotation from many unlabeled crack images automatically and accurately. Additionally, the dataset built after selecting 40% of all crack images can support crack segmentation networks that reach more than 92% of the full-data performance.
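
The core selection step of pool-based active learning is sketched below with a simple entropy acquisition score as a stand-in; the paper's difficulty-learning acquisition function is not reproduced, and `predict_proba` is a hypothetical callback returning a per-pixel crack probability map.

```python
# Generic pool-based active-learning selection step (sketch only): rank
# unlabeled crack images by an acquisition score and pick the top-k for
# annotation. Entropy is used here purely as an illustrative score.
import numpy as np

def entropy_score(prob_map, eps=1e-7):
    """Mean per-pixel binary entropy of a predicted crack probability map."""
    p = np.clip(prob_map, eps, 1.0 - eps)
    return float(np.mean(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p)))

def select_for_annotation(unlabeled_images, predict_proba, k=200):
    """Return indices of the k most informative images under the entropy score."""
    scores = [entropy_score(predict_proba(img)) for img in unlabeled_images]
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = [rng.random((320, 320)) for _ in range(10)]   # stand-in "images"
    fake_model = lambda img: rng.random((320, 320))      # stand-in probability maps
    print(select_for_annotation(pool, fake_model, k=3))
```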

Prediction of terminal density through a two-surface plasticity model

  • Won, Jongmuk;Kim, Jongchan;Park, Junghee
    • Geomechanics and Engineering / v.23 no.5 / pp.493-502 / 2020
  • The prediction of soil response under repetitive mechanical loadings remains challenging in geotechnical engineering applications. Modeling the cyclic soil response requires robust model validation against an experimental dataset. This study proposes a unique method that assumes the model constant varies linearly with the number of cycles. The model allows the prediction of the terminal density of sediments subjected to repetitive changes in pore-fluid pressure based on two-surface plasticity. Model simulations are analyzed in combination with an experimental dataset of sandy sediments subjected to repetitive changes in pore-fluid pressure under constant deviatoric stress conditions. The results show that the modified plastic moduli in the two-surface plasticity model appear to be critical for determining the terminal density. The methodology introduced in this study is expected to contribute to the prediction of the terminal density and the evolution of shear strain under given repetitive loading conditions.
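
As a heavily hedged illustration of the stated idea (a model constant treated as linear in the number of cycles), the sketch below calibrates such a linear trend from a few cycles and extrapolates it; the two-surface plasticity formulation itself is not reproduced, and all numbers are hypothetical placeholders.

```python
# Sketch: least-squares linear fit of a (hypothetical) plasticity model
# constant against cycle number, then extrapolation to a large cycle count.
import numpy as np

def fit_linear_constant(cycles, constant_values):
    """Linear fit of a model constant against the number of cycles."""
    slope, intercept = np.polyfit(cycles, constant_values, deg=1)
    return slope, intercept

if __name__ == "__main__":
    cycles = np.array([1, 5, 10, 20, 50])                 # hypothetical cycle counts
    h_values = np.array([1.00, 0.93, 0.88, 0.80, 0.62])   # hypothetical model constant values
    slope, intercept = fit_linear_constant(cycles, h_values)
    n_target = 200                                         # extrapolate toward many cycles
    print("extrapolated constant:", slope * n_target + intercept)
```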

Derivation of analytical fragility curves using SDOF models of masonry structures in Erzincan (Turkey)

  • Karimzadeh, Shaghayegh;Kadas, Koray;Askan, Aysegul;Erberik, M. Altug;Yakut, Ahmet
    • Earthquakes and Structures / v.18 no.2 / pp.249-261 / 2020
  • Seismic loss estimation studies require fragility curves, which are usually derived using ground motion datasets. Ground motion records can be either recorded or simulated time histories compatible with regional seismicity. The main purpose of this study is to investigate the effect of alternative ground motion datasets (simulated and real) on the fragility curves. The simulated dataset is prepared using the stochastic finite-fault technique, considering regional seismicity parameters for Erzincan. In addition, regionally compatible records are chosen from the NGA-West2 ground motion database to form the real dataset. The paper additionally studies the effects of hazard variability and of two different fragility curve derivation approaches on the generated fragility curves. As the final step, for verification purposes, the damage states estimated from the fragility curves derived using the alternative approaches are compared with the observed damage levels from the 1992 Erzincan (Turkey) earthquake (Mw=6.6). To accomplish all these steps, a set of representative masonry buildings from the Erzincan region is analyzed using simplified structural models. The results reveal that regionally simulated ground motions can be used as an alternative in fragility analyses and damage estimation studies.
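
One common fragility-curve derivation approach is to fit a lognormal fragility function to binary damage-exceedance outcomes from dynamic analyses by maximum likelihood, as sketched below. The intensity measures and outcomes are hypothetical placeholders, and this is not presented as the specific procedure of the study above.

```python
# Sketch: maximum-likelihood fit of a lognormal fragility function
# P(DS >= ds | IM) = Phi((ln IM - ln theta) / beta) to 0/1 exceedance data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lognormal_fragility(im, exceeded):
    """MLE estimate of median theta and dispersion beta from binary exceedance data."""
    im = np.asarray(im, dtype=float)
    exceeded = np.asarray(exceeded, dtype=float)

    def neg_log_lik(params):
        log_theta, log_beta = params
        p = norm.cdf((np.log(im) - log_theta) / np.exp(log_beta))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(exceeded * np.log(p) + (1 - exceeded) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[np.log(np.median(im)), np.log(0.4)], method="Nelder-Mead")
    return np.exp(res.x[0]), np.exp(res.x[1])            # (theta, beta)

if __name__ == "__main__":
    pga = [0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8]      # hypothetical intensity measures (g)
    dmg = [0, 0, 0, 1, 0, 1, 1, 1]                       # hypothetical exceedance of a damage state
    theta, beta = fit_lognormal_fragility(pga, dmg)
    print(f"median = {theta:.3f} g, dispersion = {beta:.3f}")
```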

Evaluating flexural strength of concrete with steel fibre by using machine learning techniques

  • Sharma, Nitisha;Thakur, Mohindra S.;Upadhya, Ankita;Sihag, Parveen
    • Composite Materials and Engineering / v.3 no.3 / pp.201-220 / 2021
  • In this study, the potential of three machine learning techniques, i.e., M5P, support vector machines, and Gaussian processes, was evaluated to find the best algorithm for predicting the flexural strength of concrete mixes with steel fibre. The study compares the results obtained from the above techniques on the given dataset. The dataset consists of 124 observations from past research studies and is randomly divided into training and testing subsets in a 70-30% proportion. Cement, fine aggregate, coarse aggregate, water, superplasticizer/high-range water reducer, steel fibre, fibre length, and curing days were taken as input parameters, whereas the flexural strength of the concrete mix was taken as the output parameter. The performance of the techniques was checked with statistical evaluation parameters. The results show that the Gaussian process technique works better than the other techniques, with the narrowest error bandwidth. The statistical analysis shows that the Gaussian process predicts better results, with a higher coefficient of correlation (0.9138), a minimum mean absolute error (1.2954), and a minimum root mean square error (1.9672). Sensitivity analysis proves that steel fibre is the most significant parameter for predicting the flexural strength of the concrete mix. Regarding fibre shape, mixed fibres perform better on this data than hooked steel fibres, with a higher CC of 0.9649, which shows that fibre shape does affect the flexural strength of the concrete; however, the intricacy of the mixed fibres needs further investigation. For future mixes, the most favorable range for increasing the flexural strength of the concrete mix was found to be 1-3%.
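
A hedged sketch of this modelling workflow is given below, using scikit-learn's Gaussian process regressor on a synthetic stand-in table with the stated input columns; the actual 124-observation dataset, kernel choice, and tuning of the study are not reproduced.

```python
# Sketch: Gaussian process regression with a 70-30 train/test split and the
# statistical evaluation parameters named above (CC, MAE, RMSE).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 124
# Hypothetical stand-in features: cement, fine agg., coarse agg., water,
# superplasticizer, steel fibre, fibre length, curing days
X = rng.random((n, 8))
y = 5.0 + 3.0 * X[:, 0] + 2.0 * X[:, 5] + rng.normal(0, 0.3, n)   # synthetic flexural strength

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
y_hat = gpr.predict(X_te)

cc = np.corrcoef(y_te, y_hat)[0, 1]                # coefficient of correlation
mae = mean_absolute_error(y_te, y_hat)
rmse = np.sqrt(mean_squared_error(y_te, y_hat))
print(f"CC={cc:.4f}, MAE={mae:.4f}, RMSE={rmse:.4f}")
```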

Site-Specific Error-Cross Correlation-Informed Quadruple Collocation Approach for Improved Global Precipitation Estimates

  • Alcantara, Angelika;Ahn Kuk-Hyun
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.180-180 / 2023
  • To improve global risk management, understanding the characteristics and distribution of precipitation is crucial. However, obtaining spatially and temporally resolved climatic data remains challenging due to sparse gauge observations and limited data availability, despite the use of satellite and reanalysis products. To address this challenge, merging available precipitation products has been introduced to generate spatially and temporally reliable data by taking advantage of the strengths of the individual products. However, most existing studies utilize all the available products without considering the varying performance of each dataset in different regions. Comprehensively considering the relative contribution of each parent dataset is necessary, since these contributions may vary significantly, and utilizing all available datasets for data merging may lead to significant data redundancy issues. Hence, in this study, we introduce a site-specific precipitation merging method that utilizes the Quadruple Collocation (QC) approach, which acknowledges the existence of error cross-correlation between the parent datasets, to create a high-resolution global daily precipitation dataset for 2001-2020. The performance of multiple gridded precipitation products is first evaluated per region to determine the best combination of quadruplets to be utilized in estimating the error variances through the QC approach and in computing the merging weights. The merged precipitation is then computed by summing the precipitation from each dataset in the quadruplet multiplied by its respective merging weight. Our results show that our approach holds promise for generating reliable global precipitation data for data-scarce regions lacking spatially and temporally resolved precipitation data.
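
The final merging step described above (a weighted sum over the quadruplet) is sketched below with weights taken as inversely proportional to each product's error variance and normalized to sum to one. The collocation-based error-variance estimation and the error cross-correlation treatment are not reproduced; the products and error variances are hypothetical placeholders.

```python
# Sketch: merge several precipitation estimates with normalized
# inverse-error-variance weights.
import numpy as np

def merge_precipitation(products, error_variances):
    """Weighted merge of gridded products; weights sum to one."""
    products = np.asarray(products, dtype=float)        # shape (n_products, ...)
    w = 1.0 / np.asarray(error_variances, dtype=float)
    w = w / w.sum()                                      # normalize the merging weights
    return np.tensordot(w, products, axes=1)

if __name__ == "__main__":
    p = np.array([[2.0, 0.0, 5.5],      # product A daily precipitation (mm), hypothetical
                  [1.6, 0.2, 6.1],      # product B
                  [2.4, 0.0, 4.9],      # product C
                  [1.9, 0.1, 5.8]])     # product D (the quadruplet)
    err_var = [0.8, 1.5, 1.1, 0.6]      # hypothetical error variances per product
    print(merge_precipitation(p, err_var))
```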


CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems / v.29 no.1 / pp.181-193 / 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data-collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensure data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing on one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprised of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records of sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in anomaly patterns observed is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled-training data is available.
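
A minimal sketch of the 3-channel feature engineering described above follows: one acceleration record is rendered as (i) its time history, (ii) its spectrogram, and (iii) a probability density estimate, and the three renderings are stacked as image channels. Image resolution, scaling, and rendering details are assumptions, and the synthetic record is a placeholder, not bridge data.

```python
# Sketch: convert a 1D acceleration record into a 3-channel image
# (time history, spectrogram, PDF estimate) for CNN input.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

def render_to_gray(plot_fn):
    """Render a small matplotlib plot and return it as a grayscale array."""
    fig, ax = plt.subplots(figsize=(2, 2), dpi=64)       # 128 x 128 pixel canvas
    plot_fn(ax)
    ax.axis("off")
    fig.canvas.draw()
    rgba = np.asarray(fig.canvas.buffer_rgba())
    plt.close(fig)
    return rgba[..., :3].mean(axis=-1)

def signal_to_3channel_image(x, fs):
    """Stack (i) time history, (ii) spectrogram, (iii) PDF estimate as channels."""
    f, t, Sxx = spectrogram(x, fs=fs)
    ch1 = render_to_gray(lambda ax: ax.plot(x, linewidth=0.5))
    ch2 = render_to_gray(lambda ax: ax.pcolormesh(t, f, np.log1p(Sxx)))
    ch3 = render_to_gray(lambda ax: ax.hist(x, bins=50, density=True))
    return np.stack([ch1, ch2, ch3], axis=-1)

if __name__ == "__main__":
    fs = 100.0                                            # hypothetical sampling rate (Hz)
    tgrid = np.arange(0, 60, 1 / fs)
    accel = np.sin(2 * np.pi * 2.0 * tgrid) + 0.1 * np.random.default_rng(0).normal(size=tgrid.size)
    img = signal_to_3channel_image(accel, fs)
    print(img.shape)                                      # (128, 128, 3)
```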

Application of transfer learning for streamflow prediction by using attention-based Informer algorithm

  • Fatemeh Ghobadi;Doosun Kang
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.165-165 / 2023
  • Streamflow prediction is a critical task in water resources management and is essential for planning and decision-making purposes. However, streamflow prediction is challenging due to the complexity and non-linear nature of hydrological processes. Transfer learning is a powerful technique that enables a model to transfer knowledge from a source domain to a target domain, improving model performance when data in the target domain are limited. In this study, we apply transfer learning using the Informer model, a state-of-the-art deep learning model for streamflow prediction. The model was trained on a large-scale hydrological dataset in the source basin and then fine-tuned using the smaller dataset available in the target basin to predict streamflow there. The results demonstrate that transfer learning using the Informer model significantly outperforms traditional machine learning models, and even other deep learning models, for streamflow prediction, especially when the target domain has limited data. Moreover, the results indicate the effectiveness of knowledge transfer in improving the generalizability of hydrologic models in data-sparse regions.
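
The generic pretrain-then-fine-tune pattern referenced above is sketched below in PyTorch with a small stand-in forecaster instead of the Informer architecture; the model, layer names, checkpoint path, and data loaders are hypothetical placeholders rather than the study's implementation.

```python
# Sketch: pretrain a forecaster on source-basin data (checkpoint assumed),
# then freeze its encoder and fine-tune only the head on target-basin data.
import torch
import torch.nn as nn

class SmallForecaster(nn.Module):
    """Stand-in for an attention-based forecaster such as Informer."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.encoder(x)
        return self.head(out[:, -1, :])        # one-step-ahead streamflow

def fine_tune(model, target_loader, epochs=10, lr=1e-4):
    for p in model.encoder.parameters():       # keep source-domain knowledge frozen
        p.requires_grad = False
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in target_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

if __name__ == "__main__":
    model = SmallForecaster(n_features=5)
    # model.load_state_dict(torch.load("source_basin_pretrained.pt"))  # hypothetical checkpoint
    x = torch.randn(8, 30, 5)                  # dummy target-basin batch
    y = torch.randn(8, 1)
    fine_tune(model, [(x, y)], epochs=1)
```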
