• Title/Summary/Keyword: common data model

Search results: 1,253

A FRF-based algorithm for damage detection using experimentally collected data

  • Garcia-Palencia, Antonio;Santini-Bell, Erin;Gul, Mustafa;Catbas, Necati
    • Structural Monitoring and Maintenance
    • /
    • v.2 no.4
    • /
    • pp.399-418
    • /
    • 2015
  • Automated damage detection through Structural Health Monitoring (SHM) techniques has become an active area of research in the bridge engineering community, but widespread implementation on in-service infrastructure still presents challenges. In the meantime, visual inspection remains the most common method of condition assessment, even though the collected information is highly subjective and certain types of damage can be overlooked by the inspector. In this article, a Frequency Response Function (FRF)-based model updating algorithm is evaluated using experimentally collected data from the University of Central Florida (UCF) Benchmark Structure. A protocol for measurement selection and a regularization technique are presented in order to provide the most well-conditioned model updating scenario for the target structure. The proposed technique comprises two main stages. First, the initial finite element model (FEM) is calibrated through model updating so that it captures the dynamic signature of the UCF Benchmark Structure in its healthy condition. Second, based upon data collected from the damaged condition, the updating process is repeated on the baseline (healthy) FEM. The difference between the updated parameters from the two stages revealed both the location and the extent of damage in a "blind" scenario, without any prior information about the type and location of damage.
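
The two-stage idea in this abstract (calibrate a baseline model, re-update it with damaged-state measurements, and read damage from the parameter difference) can be illustrated with a minimal numerical sketch. This is not the paper's algorithm: the 3-DOF shear model, all parameter values, and the use of natural-frequency residuals (rather than the paper's FRF residuals) are illustrative assumptions.

```python
import numpy as np

def natural_freqs(k):
    """Natural frequencies (rad/s) of a 3-DOF shear frame with unit masses."""
    k1, k2, k3 = k
    K = np.array([[k1 + k2, -k2,     0.0],
                  [-k2,     k2 + k3, -k3],
                  [0.0,     -k3,     k3]])
    return np.sqrt(np.linalg.eigvalsh(K))  # eigvalsh sorts ascending

def update_model(k_start, f_measured, iters=60, h=1e-6):
    """Damped Newton iteration matching model frequencies to measured ones."""
    k = np.array(k_start, float)
    for _ in range(iters):
        r = natural_freqs(k) - f_measured
        J = np.empty((3, 3))
        for j in range(3):            # finite-difference Jacobian
            kp = k.copy()
            kp[j] += h
            J[:, j] = (natural_freqs(kp) - natural_freqs(k)) / h
        k = np.maximum(k - 0.5 * np.linalg.solve(J, r), 1.0)  # damped, keep k > 0
    return k

# Stage 1: calibrate the baseline model to "healthy" measurements
f_healthy = natural_freqs([100.0, 100.0, 100.0])   # stand-in for test data
k_healthy = update_model([90.0, 110.0, 95.0], f_healthy)

# Stage 2: re-update the baseline with "damaged" measurements
f_damaged = natural_freqs([100.0, 60.0, 100.0])    # 40% stiffness loss in element 2
k_damaged = update_model(k_healthy, f_damaged)
stiffness_drop = k_healthy - k_damaged             # largest entry localizes the damage
```

The largest entry of `stiffness_drop` points at element 2, and its magnitude estimates the extent of damage, mirroring the "blind" localization described above.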

Detecting Common Weakness Enumeration (CWE) Based on the Transfer Learning of CodeBERT Model (CodeBERT 모델의 전이 학습 기반 코드 공통 취약점 탐색)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.431-436
    • /
    • 2023
  • The incorporation of artificial intelligence approaches into software engineering has recently become a major topic. Research is actively proceeding in two directions: 1) software engineering for artificial intelligence and 2) artificial intelligence for software engineering. We attempt to apply artificial intelligence to software engineering to identify and refactor bad code module areas. To learn the patterns of bad code elements well, an artificial intelligence model needs many datasets in which bad code elements are labeled correctly. The current problem is that datasets for learning are insufficient, and the accuracy of the datasets we collected cannot be guaranteed. To address this, when collecting code data, bad code data is collected only from code module areas with high complexity, not from the entire code. We propose a method for detecting common weakness enumerations by learning the collected dataset through transfer learning of the CodeBERT model, so that the model learns the common weakness patterns in code more thoroughly. With this approach, we expect to identify common weakness patterns more accurately than traditional software engineering approaches.
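
The data-collection step described above (keeping only high-complexity module areas) can be sketched as follows. The keyword-counting proxy for cyclomatic complexity and the threshold are hypothetical simplifications; the subsequent CodeBERT fine-tuning is not shown.

```python
import re

def complexity_proxy(code: str) -> int:
    """Crude stand-in for cyclomatic complexity: 1 + number of branching keywords."""
    return 1 + len(re.findall(r"\b(?:if|elif|for|while|and|or|except|case)\b", code))

def select_high_complexity(modules, threshold=5):
    """Keep only module sources whose proxy complexity meets the threshold."""
    return [m for m in modules if complexity_proxy(m) >= threshold]
```

Only the modules surviving this filter would then be labeled and fed to the transfer-learning stage.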

Analysis and Forecasting of Daily Bulk Shipping Freight Rates Using Error Correction Models (오차교정모형을 활용한 일간 벌크선 해상운임 분석과 예측)

  • Ko, Byoung-Wook
    • Journal of Korea Port Economic Association
    • /
    • v.39 no.2
    • /
    • pp.129-141
    • /
    • 2023
  • This study analyzes the dynamic characteristics of daily freight rates in the dry bulk and tanker shipping markets and their forecasting accuracy using error correction models. To calculate the error terms from the co-integrated time series, this study uses the common stochastic trend model (CSTM) and the vector error correction model (VECM). First, the error correction model using the error term from the CSTM yields a more appropriate adjustment speed coefficient than the one using the error term from the VECM. Furthermore, according to the adjusted coefficients of determination (adjR2), the error correction model with the CSTM error term shows a better model fit than that with the VECM error term. Second, according to the mean absolute error (MAE) and mean absolute scaled error (MASE) criteria of forecasting accuracy, the error correction model with the CSTM error term produces more accurate forecasts than that with the VECM error term in 12 of the 15 cases. As future research topics, this study proposes 1) using both the CSTM and VECM error terms at the same time, 2) incorporating additional data from the commodity and energy markets, and 3) differentiating the adjustment speed coefficients based on the sign of the error term.
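
The core mechanics of an error correction model can be shown with a minimal two-step sketch in the Engle-Granger style (the paper's CSTM and VECM error terms are not reproduced; the simulated series and all numbers are illustrative): estimate the long-run relation between two co-integrated series, then regress the daily change on the lagged error term to obtain the adjustment speed coefficient, which should be negative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))        # common stochastic trend (random walk)
y = 2.0 * x + rng.normal(size=n)         # series co-integrated with x

# Step 1: estimate the long-run relation; its residual is the error term
slope, intercept = np.polyfit(x, y, 1)
ect = y - (slope * x + intercept)        # error-correction term

# Step 2: regress the daily change on the lagged error term
dy = np.diff(y)
Z = np.column_stack([np.ones(n - 1), ect[:-1]])
coef, *_ = np.linalg.lstsq(Z, dy, rcond=None)
alpha = coef[1]                          # adjustment speed, expected < 0
```

A negative `alpha` means a rate above its long-run level is pulled back down the next day, which is the mean-reverting behaviour the adjustment speed coefficient measures.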

Calibrated Parameters with Consistency for Option Pricing in the Two-state Regime Switching Black-Scholes Model (국면전환 블랙-숄즈 모형에서 정합성을 가진 모수의 추정)

  • Han, Gyu-Sik
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.36 no.2
    • /
    • pp.101-107
    • /
    • 2010
  • Among the variety of asset dynamics models proposed to explain the common properties of financial underlying assets, parametric models are meaningful only when their parameters are set reliably. There are two main ways to obtain them: using time-series data of the underlying price, or using the market option prices of the underlying at one point in time. Based on the Girsanov theorem, in pure diffusion models the parameters calibrated from option prices should be partially equivalent to those estimated from time-series underlying prices. We call this phenomenon model consistency. In this paper, we verify that the two-state regime switching Black-Scholes model is superior in the sense of model consistency, compared with two popular conventional models, the Black-Scholes model and the Heston model.
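
For orientation, a two-state regime-switching call price can be sketched as a mixture of Black-Scholes prices under the two regime volatilities. This is a deliberately crude simplification (it assumes the volatility regime is frozen over the option's life, which a full regime-switching model does not); the regime probabilities and volatilities are illustrative, not calibrated values from the paper.

```python
from math import erf, exp, log, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def two_state_call(S, K, r, T, sig_lo, sig_hi, p_hi):
    """Mixture approximation: volatility stays in one regime over [0, T]."""
    return (p_hi * bs_call(S, K, r, sig_hi, T)
            + (1.0 - p_hi) * bs_call(S, K, r, sig_lo, T))
```

The mixture price always lies between the two single-regime prices and increases with the probability of the high-volatility regime, which is the extra degree of freedom the regime-switching model uses to fit market prices.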

An Investigation of Consumer Satisfaction Model (고객만족 모형의 고찰)

  • 김철중
    • The Journal of Information Technology
    • /
    • v.2 no.1
    • /
    • pp.191-207
    • /
    • 1999
  • This study reviews the problems of selecting measurements and models for consumer satisfaction. A common model, which measures the degree of consumer satisfaction as an arithmetic mean over measurements that force consumers to rate each attribute and its importance, shows problems in field application. A single-item model using the degree of overall satisfaction showed higher predictive validity than a detailed attribute-level assessment. However, because of the practical problems that arise when a single item is applied in the field, a consumer satisfaction model using multiple items is still needed. A model including variables that are highly correlated with purchase intention showed a high significance level and predictive validity.


Seismic qualification using the updated finite element model of structures

  • Sinha, Jyoti K.;Rao, A. Rama;Sinha, R.K.
    • Structural Engineering and Mechanics
    • /
    • v.19 no.1
    • /
    • pp.97-106
    • /
    • 2005
  • The standard practice is to seismically qualify the safety-related equipment and structural components used in nuclear power plants. Among several qualification approaches, qualification by analysis using the finite element (FE) method is the most common in practice. However, the predictions of the FE model for a structure are known, in many cases, to deviate significantly from the dynamic behaviour of the 'as installed' structure. Considering this limitation, a few researchers have advocated re-qualification of such structures after installation at site, to enhance confidence in qualification vis-à-vis plant safety. For such an exercise, validation of the FE model against experimental modal data is important. A validated FE model can be obtained by model updating methods in conjunction with in-situ experimental modal data; such a model can then be used for qualification. Seismic analysis using the updated FE model, and its advantages, are presented through the example of an in-core component: a perforated horizontal tube of a nuclear reactor.

Modeling of Photovoltaic Power Systems using Clustering Algorithm and Modular Networks (군집화 알고리즘 및 모듈라 네트워크를 이용한 태양광 발전 시스템 모델링)

  • Lee, Chang-Sung;Ji, Pyeong-Shik
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.65 no.2
    • /
    • pp.108-113
    • /
    • 2016
  • Real-world problems usually show nonlinear, multivariate characteristics, so it is difficult to establish concrete mathematical models for them, and it is common to use data-driven modeling techniques in these cases. The most widely adopted techniques are regression models and intelligent models such as neural networks. Regression models have the drawback of lower performance when strong nonlinearity exists between input and output data. Intelligent models have shown superiority over linear models because they can effectively estimate the desired output in both linear and nonlinear problems. This paper proposes a modeling method for daily photovoltaic power systems using ELM (Extreme Learning Machine)-based modular networks. The proposed method uses sub-models obtained by fuzzy clustering rather than a single model, with each sub-model implemented by an ELM. To show the effectiveness of the proposed method, we performed various experiments with a dataset acquired during 2014 from a real plant.
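
The ELM building block used for each sub-model can be sketched in a few lines: hidden-layer weights are drawn at random and only the output weights are solved in closed form by least squares. The toy sine target standing in for a daily generation curve, and the hidden-layer size, are illustrative assumptions; the paper's fuzzy-clustering stage that assigns data to sub-models is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=50):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # random nonlinear features
    beta = np.linalg.pinv(H) @ y         # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# One sub-model fitted to a toy daily-output profile
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()                    # stand-in for a PV generation curve
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

Because training reduces to one pseudoinverse, an ELM sub-model is cheap to refit per cluster, which is what makes the modular-network design practical.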

A Bayesian zero-inflated Poisson regression model with random effects with application to smoking behavior (랜덤효과를 포함한 영과잉 포아송 회귀모형에 대한 베이지안 추론: 흡연 자료에의 적용)

  • Kim, Yeon Kyoung;Hwang, Beom Seuk
    • The Korean Journal of Applied Statistics
    • /
    • v.31 no.2
    • /
    • pp.287-301
    • /
    • 2018
  • It is common to encounter count data with excess zeros in various research fields such as the social sciences, the natural sciences, medical science, and engineering. Such count data have been explained mainly by the zero-inflated Poisson model and its extensions. Zero-inflated count data are also often correlated or clustered, in which case random effects should be taken into account in the model. Frequentist approaches have commonly been used to fit such data. However, a Bayesian approach has the advantages of incorporating prior information, avoiding asymptotic approximations, and practical estimation of functions of the parameters. We consider a Bayesian zero-inflated Poisson regression model with random effects for correlated zero-inflated count data. We conducted simulation studies to check the performance of the proposed model, and applied it to smoking behavior data from the Regional Health Survey (2015) of the Korea Centers for Disease Control and Prevention.
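
The zero-inflated Poisson distribution underlying this model mixes a point mass at zero (probability pi) with an ordinary Poisson component, which is what produces the "excess zeros". A minimal sketch of its probability mass function (the regression and random-effects structure of the paper are not shown):

```python
from math import exp, factorial

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: extra mass pi at zero, Poisson(lam) otherwise."""
    pois = exp(-lam) * lam ** k / factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * pois   # structural zeros plus Poisson zeros
    return (1.0 - pi) * pois
```

In the full regression model, both pi and lam would be linked to covariates and random effects rather than held fixed.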

Predicting Common Patterns of Livestock-Vehicle Movement Using GPS and GIS: A Case Study on Jeju Island, South Korea

  • Qasim, Waqas;Cho, Jea Min;Moon, Byeong Eun;Basak, Jayanta Kumar;Kahn, Fawad;Okyere, Frank Gyan;Yoon, Yong Cheol;Kim, Hyeon Tae
    • Journal of Biosystems Engineering
    • /
    • v.43 no.3
    • /
    • pp.247-254
    • /
    • 2018
  • Purpose: Although previous studies have performed on-farm evaluations for the control of airborne diseases such as foot-and-mouth disease (FMD) and influenza, disease control during the transportation of livestock and manure has not been investigated thoroughly. The objective of this study is to predict common patterns of livestock-vehicle movement. Methods: Global positioning system (GPS) data collected during 2012 and 2013 from livestock vehicles on Jeju Island, South Korea, were analyzed. The GPS data included the coordinates of moving vehicles by time and date, as well as the locations of livestock farms and manure-keeping sites. Data from 2012 were loaded into Esri's ArcGIS 10.1 software, and two approaches were adopted for predicting common vehicle-movement patterns: the point-density and Euclidean-distance tools. To compare the predicted patterns with the actual patterns for 2013, the same analysis was performed on the actual data. Results: When the manure-keeping sites and livestock farms were the same in both years, the common patterns of 2012 and 2013 were similar; however, differences arose in the patterns when these sites were changed. Using the point-density tool and the Euclidean-distance tool, the average similarity between the predicted and actual common patterns for the three vehicles was 80% and 72%, respectively. Conclusions: From this analysis, we can determine common patterns of livestock vehicles using the previous year's data. In the future, to obtain more accurate results and to devise a model for predicting vehicle-movement patterns, more dependent and independent variables will be considered.
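
The point-density idea used above can be illustrated outside ArcGIS with a minimal sketch: bin GPS fixes into a square grid and treat frequently visited cells as the predicted common route. The cell size, threshold, and sample coordinates are illustrative assumptions, not values from the study.

```python
from collections import Counter

def point_density(points, cell=0.01):
    """Bin (lon, lat) GPS fixes into square cells of `cell` degrees."""
    return Counter((int(lon // cell), int(lat // cell)) for lon, lat in points)

def common_cells(density, min_count=3):
    """Cells visited at least `min_count` times form the predicted common pattern."""
    return {c for c, n in density.items() if n >= min_count}
```

Applying this to one year's tracks and intersecting with the next year's would mimic the 2012-versus-2013 comparison described in the abstract.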

Risk Evaluation of Failure Cause for FMEA under a Weibull Time Delay Model (와이블 지연시간 모형 하에서의 FMEA를 위한 고장원인의 위험평가)

  • Kwon, Hyuck Moo;Lee, Min Koo;Hong, Sung Hoon
    • Journal of the Korean Society of Safety
    • /
    • v.33 no.3
    • /
    • pp.83-91
    • /
    • 2018
  • This paper suggests a Weibull time-delay model for evaluating failure risks in FMEA (failure modes and effects analysis). Assuming three types of loss function for the delay in detecting a failure cause, the risk of each failure cause is evaluated from its occurrence frequency and expected loss. Since a closed-form solution of the risk metric cannot be obtained, the statistical software R is used for numerical calculation. When the occurrence and detection times share a common shape parameter, however, some simple mathematical derivations are also available. As enormous quantities of field data become available with the recent progress of data acquisition systems, the proposed risk metric will provide a more practical and reasonable tool for evaluating the risks of failure causes in FMEA.
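
The frequency-times-expected-loss structure of such a risk metric can be sketched for the simplest case of a linear loss in the detection delay, where the Weibull mean gives a closed form (the paper's three loss functions and its joint occurrence/detection model are not reproduced; the linear loss and all parameter values here are illustrative assumptions).

```python
from math import gamma

def weibull_mean(shape, scale):
    """Mean of a Weibull(shape, scale) delay: scale * Gamma(1 + 1/shape)."""
    return scale * gamma(1.0 + 1.0 / shape)

def risk_metric(freq, loss_per_unit_delay, shape, scale):
    """Risk = occurrence frequency x expected loss under a linear delay loss."""
    return freq * loss_per_unit_delay * weibull_mean(shape, scale)
```

For nonlinear loss functions the expectation generally has no closed form, which is why the paper resorts to numerical evaluation in R.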