• Title/Summary/Keyword: common data model

Search Result 1,253

Computational modelling for description of rubber-like materials with permanent deformation under cyclic loading

  • Guo, Z.Q.;Sluys, L.J.
    • Interaction and multiscale mechanics
    • /
    • v.1 no.3
    • /
    • pp.317-328
    • /
    • 2008
  • When carbon-filled rubber specimens are subjected to cyclic loading, they do not return to their initial state after loading and subsequent unloading, but exhibit a residual strain or permanent deformation. We propose a specific form of the pseudo-elastic energy function to represent cyclic loading for incompressible, isotropic materials with stress softening and residual strain. The essence of the pseudo-elasticity theory is that material behaviour in the primary loading path is described by a common elastic strain energy function, and in unloading, reloading or secondary unloading paths by a different strain energy function. The switch between strain energy functions is controlled by the incorporation of a damage variable into the strain energy function. An extra term is added to describe the permanent deformation. The finite element implementation of the proposed model is presented in this paper. All parameters in the proposed model and elastic law can be easily estimated based on experimental data. The numerical analyses show that the results are in good agreement with experimental data.
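The pseudo-elastic switch the abstract describes can be illustrated with a minimal sketch. This is not the authors' energy function: it uses the classic Ogden-Roxburgh softening variable with a neo-Hookean primary energy, and the parameters `mu`, `r`, and `m` are illustrative assumptions.

```python
import math

def neo_hookean_W(lam, mu=1.0):
    """Primary strain energy for an incompressible uniaxial stretch lam."""
    return 0.5 * mu * (lam**2 + 2.0 / lam - 3.0)

def damage_eta(W, W_max, r=2.0, m=0.5):
    """Softening (damage) variable: 1 on the primary loading path
    (W == W_max), below 1 on unloading/reloading paths."""
    return 1.0 - (1.0 / r) * math.erf((W_max - W) / m)

# Primary loading to lam = 2, then unloading to lam = 1.5
W_max = neo_hookean_W(2.0)
eta_primary = damage_eta(W_max, W_max)               # 1.0: no softening yet
eta_unload = damage_eta(neo_hookean_W(1.5), W_max)   # < 1: softened response
```

On the primary path `W == W_max` gives `eta = 1`, so the common elastic energy governs; on unloading `eta` drops below 1, which is the switch between strain energy functions controlled by the damage variable.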

Stochastic simulation based on copula model for intermittent monthly streamflows in arid regions

  • Lee, Taesam;Jeong, Changsam;Park, Taewoong
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2015.05a
    • /
    • pp.488-488
    • /
    • 2015
  • Intermittent streamflow is a common phenomenon in arid and semi-arid regions. To manage the water resources of intermittent streamflows, stochastically simulated data are essential; however, seasonal stochastic modeling of intermittent streamflow is a difficult task. In this study, we simulate intermittent monthly streamflow using a periodic Markov chain model for occurrence and the periodic gamma autoregressive (PGAR) and copula models for amount. The copula models were tested in a previous study for the simulation of yearly streamflow, successfully replicating the key and operational statistics of historical data; however, they had never been tested on a monthly time scale. The intermittent models were applied to the Colorado River system in the present study. A few drawbacks of the PGAR model were identified, such as significant underestimation of minimum values on an aggregated yearly time scale and restrictions on the parameter boundaries. Conversely, the copula models do not present such drawbacks and feasibly reproduce the key and operational statistics. We conclude that the periodic Markov chain combined with the copula models is a practicable method for simulating intermittent monthly streamflow time series.
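The occurrence/amount split the abstract describes can be sketched as follows. The structure (a periodic two-state Markov chain for wet/dry occurrence, a month-specific gamma draw for amounts) follows the abstract; all parameter values are invented for illustration and are not the paper's Colorado River estimates.

```python
import random

def simulate_intermittent(n_years, p_wet, shape, scale, seed=0):
    """Simulate monthly flows: a periodic two-state Markov chain decides
    occurrence (wet/dry); a month-specific gamma draw gives the amount.
    p_wet[m] = (P(wet | prev dry), P(wet | prev wet)) for month m."""
    rng = random.Random(seed)
    flows, wet = [], False
    for _ in range(n_years):
        for m in range(12):
            p = p_wet[m][1] if wet else p_wet[m][0]
            wet = rng.random() < p
            flows.append(rng.gammavariate(shape[m], scale[m]) if wet else 0.0)
    return flows

# Illustrative parameters: drier in months 5-8, wetter otherwise
p_wet = [(0.2, 0.7) if m in (5, 6, 7, 8) else (0.5, 0.9) for m in range(12)]
flows = simulate_intermittent(50, p_wet, shape=[2.0] * 12, scale=[10.0] * 12)
```

The periodicity enters through the month index `m`: both the transition probabilities and the gamma parameters cycle with a 12-month period, which is what makes zero-flow (dry) months recur seasonally.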


A Model for Machine Fault Diagnosis based on Mutual Exclusion Theory and Out-of-Distribution Detection

  • Cui, Peng;Luo, Xuan;Liu, Jing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.9
    • /
    • pp.2927-2941
    • /
    • 2022
  • The primary task of machine fault diagnosis is to judge whether the current state is normal or damaged, so it is a typical binary classification problem with mutual exclusion. Mutually exclusive events and out-of-distribution detection have one thing in common: there are two types of data and no intersection. We propose a fusion model method to improve the accuracy of machine fault diagnosis, based on the mutual exclusivity of events and the commonality of out-of-distribution detection, and finally generalize it to all binary classification problems. Since the performance of a convolutional neural network (CNN) is reported to decrease as the number of recognition types increases, the variational auto-encoder (VAE) is used as the primary model. Two VAE models are trained on the machine's normal and fault sound data, yielding two reconstruction probabilities at test time. The smaller value is transformed into a correction of the other value according to the mutually exclusive characteristic, and the classification result is obtained from the fusion algorithm. Filtering normal-data features out of the fault-data features is also proposed, which shields interference and makes the fault features more prominent. We confirm that good performance improvements are achieved on the machine fault detection data set, with results better than most mainstream models.
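The paper's exact fusion algorithm is not given in the abstract; the decision step it describes (two reconstruction probabilities, the smaller one converted into a correction of the other) might be sketched as follows, where `p_normal` and `p_fault` stand in for the two VAEs' reconstruction probabilities.

```python
def fuse_and_classify(p_normal, p_fault):
    """Mutually exclusive fusion: a sample cannot be both normal and
    faulty, so the weaker reconstruction probability is turned into
    extra evidence (1 - p) for the stronger class."""
    if p_normal >= p_fault:
        score = 0.5 * (p_normal + (1.0 - p_fault))
        return "normal", score
    score = 0.5 * (p_fault + (1.0 - p_normal))
    return "fault", score

label, score = fuse_and_classify(0.9, 0.2)   # high normal score, low fault score
```

The averaging here is one simple choice of correction; the point of the sketch is only that mutual exclusion lets a low score for one class be reused as positive evidence for the other.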

Predicting link of R&D network to stimulate collaboration among education, industry, and research (산학연 협업 활성화를 위한 R&D 네트워크 연결 예측 연구)

  • Park, Mi-yeon;Lee, Sangheon;Jin, Guocheng;Shen, Hongme;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.37-52
    • /
    • 2015
  • The recent global trend is toward expanding and strengthening both cooperative collaboration among industry, education, and research and R&D network systems. Greater support for the network and cooperative research sector would open possibilities for the evolution of new scholarly and industrial fields and for the development of new theories evoked from synergized educational research. Similarly, the national need is rising for a strategy that can most efficiently and effectively support the R&D networks established through the government's R&D project research. Despite this growing urgency, because of a habitual dependency on simple personal information about R&D participants and on generalized statistical references, the policies concerning the network system remain inadequate. Accordingly, the relationships of each subject participating in R&D were analyzed and, on the foundation of an education-industry-research network, possible changes within the network were predicted. To predict R&D network transitions, the Common Neighbor and Jaccard's Coefficient models were designated as the baseline models, upon which a new prediction model was proposed to address the limitations of the two former models and to increase the accuracy of link prediction; a comparative analysis was then made among the models. By effectively predicting R&D network changes and transitions, this study serves as a stepping-stone for establishing a strategy that supports a desirable education-industry-research network and proposes a measure to promote national policy that can effectively and efficiently sponsor integrated R&D industries.
Though weighted applications of both the Common Neighbor and Jaccard's Coefficient models provided positive outcomes, the improvement in accuracy was greater for the weighted Common Neighbor: an un-weighted Common Neighbor model predicted 650 links out of 4,136, whereas the weighted model predicted 50 more, for a total of 700. While the Jaccard's model demonstrated slight numeric performance improvements, the differences were found to be insignificant.
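The two baseline link-prediction scores named in the abstract are simple neighbourhood statistics; a minimal sketch on a toy collaboration graph (node names and edges invented). The paper's weighted Common Neighbor variant would replace the set size with a sum of collaboration weights.

```python
def common_neighbor_score(adj, u, v):
    """Number of shared neighbours of u and v (Common Neighbor model)."""
    return len(adj[u] & adj[v])

def jaccard_score(adj, u, v):
    """Shared neighbours normalised by the joint neighbourhood size."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Toy collaboration network: institutions A-E, edges = joint projects
adj = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B", "E"}, "D": {"B"}, "E": {"C"},
}
# Score the unobserved link A-D: one shared neighbour (B)
cn = common_neighbor_score(adj, "A", "D")   # 1
jc = jaccard_score(adj, "A", "D")           # 1 / |{B, C}| = 0.5
```

Candidate node pairs are ranked by these scores, and the top-ranked non-edges are predicted as future collaborations.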

Exploration of Predictive Model for Learning Achievement of Behavior Log Using Machine Learning in Video-based Learning Environment (동영상 기반 학습 환경에서 머신러닝을 활용한 행동로그의 학업성취 예측 모형 탐색)

  • Lee, Jungeun;Kim, Dasom;Jo, Il-Hyun
    • The Journal of Korean Association of Computer Education
    • /
    • v.23 no.2
    • /
    • pp.53-64
    • /
    • 2020
  • As online learning centered on video lectures becomes more common and continues to grow, video-based learning environments applying various educational methods are also changing and developing to enhance learning effectiveness. Learner log data has emerged as a means of measuring the effectiveness of education in online learning environments, and varied analysis methods for log data are important for learner-customized learning prescriptions. To this end, this study analyzed learner behavior data and predicted achievement by machine learning in a video-based learning environment. As a result, interactive behaviors such as video navigation and comment writing, as well as learner-led learning behaviors, predicted achievement in common across the models. Based on these results, the study provides implications for the design of video learning environments.
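A minimal sketch of the log aggregation such a study depends on: raw clickstream events are rolled up into per-learner features that a model can regress against achievement. The event and feature names here are assumptions, not the paper's actual variables.

```python
from collections import Counter

def featurize(events):
    """Aggregate one learner's raw video-interaction log into features."""
    counts = Counter(e["type"] for e in events)
    return {
        "n_seek": counts["seek"],        # video navigation
        "n_pause": counts["pause"],
        "n_comment": counts["comment"],  # interactive behaviour
        "n_events": len(events),
    }

log = [{"type": "play"}, {"type": "seek"}, {"type": "pause"},
       {"type": "seek"}, {"type": "comment"}]
features = featurize(log)   # {'n_seek': 2, 'n_pause': 1, 'n_comment': 1, 'n_events': 5}
```

Feature rows like this, one per learner, are what a machine-learning model would consume alongside the achievement label.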

Regional Traffic Accident Model of Elderly Drivers based on Urban Decline Index (도시쇠퇴 지표를 적용한 지역별 고령운전자 교통사고 영향 분석)

  • Park, Na Young;Park, Byung Ho
    • Journal of the Korean Society of Safety
    • /
    • v.32 no.6
    • /
    • pp.137-142
    • /
    • 2017
  • This study deals with the relation between traffic accidents and urban decline. The purpose of this study is to develop regional accident models of elderly drivers. To develop the count data models, 2009-2015 traffic accident data from TAAS (traffic accident analysis system) and urban decline data from the urban regeneration information system are collected. The main results are as follows. First, the null hypothesis that there is no difference in accident numbers between elderly and non-elderly drivers is rejected. Second, 8 accident models, all statistically significant, have been developed. Finally, the variables common to the elderly and non-elderly models are the ratio of elderly people, elderly persons living alone per 1,000 persons, and wholesale/retail employees per 1,000 persons. This study is expected to give many implications for regional accident reduction policy.
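Count-data accident models of this kind typically take a log-linear (Poisson-family) form; a sketch with invented coefficients, not the paper's estimates:

```python
import math

def poisson_accidents(x, beta):
    """Expected accident count under a log-linear link:
    mu = exp(b0 + sum(bi * xi))."""
    return math.exp(beta[0] + sum(b * xi for b, xi in zip(beta[1:], x)))

# Hypothetical region: [ratio of elderly people (%), elderly living alone
# per 1,000 persons, wholesale/retail employees per 1,000 persons].
# Coefficient values are illustrative only.
beta = [0.5, 0.03, 0.01, 0.005]
mu = poisson_accidents([20.0, 15.0, 80.0], beta)   # expected annual count
```

The exponential link keeps the predicted count positive, which is why Poisson-family regressions are the standard choice for accident frequencies.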

An application of damage detection technique to the railway tunnel lining (철도터널 라이닝에 대한 손상도 파악기법의 현장적용)

  • Bang Choon-seok;Lee Jun S.;Choi Il-Yoon;Lee Hee-Up;Kim Yun Tae
    • Proceedings of the KSR Conference
    • /
    • 2004.06a
    • /
    • pp.1142-1147
    • /
    • 2004
  • In this study, two damage detection techniques are applied to a railway tunnel liner based on static deformation data. Models based on uniform stiffness reduction and on the smeared crack concept are both employed, and their efficiency and relative advantages are compared. Numerical analyses are performed on an idealized tunnel structure, and the effect of white noise, common in most measurement data, is also investigated to better understand the suitability of the proposed models. As a result, model 1, based on the uniform stiffness reduction method, is shown to be relatively insensitive to noise, while model 2, with the smeared crack concept, proves easy to apply to field situations since the effect of stiffness reduction is rather small. Finally, real deformation data from a rail tunnel in which a health monitoring system is in operation are introduced to find possible damage, and the prediction gives quite satisfactory results.


A computational note on maximum likelihood estimation in random effects panel probit model

  • Lee, Seung-Chun
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.3
    • /
    • pp.315-323
    • /
    • 2019
  • Panel data sets have recently been developed in various areas, and many recent studies have analyzed panel, or longitudinal, data sets. A dichotomous dependent variable often occurs in survival analysis and in biomedical and epidemiological studies, and it is analyzed by a generalized linear mixed effects model (GLMM). The most common estimation method for binary panel data may be maximum likelihood (ML). Many statistical packages provide ML estimates; however, the estimates are computed from a numerically approximated likelihood function. For instance, the R package pglm (Croissant, 2017) approximates the likelihood function by Gauss-Hermite quadratures, while Rchoice (Sarrias, Journal of Statistical Software, 74, 1-31, 2016) uses a Monte Carlo integration method. As a result, different packages can give different results because of their different numerical computation methods. In this note, we discuss the pros and cons of numerical methods compared with the exact computation method.
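The approximation at issue can be shown directly. For a random-intercept probit, the marginal probability ∫ Φ(a + b) N(b; 0, σ²) db has the closed form Φ(a / √(1 + σ²)), so a Gauss-Hermite approximation can be checked against the exact value; the sketch below uses the standard 5-point Gauss-Hermite nodes and weights.

```python
import math

# 5-point Gauss-Hermite rule for integrals of exp(-x^2) * f(x)
GH_NODES = [-2.0201828704560856, -0.9585724646138185, 0.0,
            0.9585724646138185, 2.0201828704560856]
GH_WEIGHTS = [0.019953242059045913, 0.3936193231522412, 0.9453087204829419,
              0.3936193231522412, 0.019953242059045913]

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def marginal_prob_gh(a, sigma):
    """Approximate the integral of Phi(a + b) * N(b; 0, sigma^2) db by
    Gauss-Hermite quadrature (change of variable b = sqrt(2)*sigma*x)."""
    total = sum(w * std_normal_cdf(a + math.sqrt(2.0) * sigma * x)
                for x, w in zip(GH_NODES, GH_WEIGHTS))
    return total / math.sqrt(math.pi)

exact = std_normal_cdf(0.5 / math.sqrt(2.0))   # closed form with sigma = 1
approx = marginal_prob_gh(0.5, 1.0)            # 5-point quadrature
```

Even five nodes get within a few parts in ten thousand here; the note's point is that different packages choose different rules (or Monte Carlo draws), so their approximations, and hence their estimates, differ.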

Development of Two Dimensional Extension Model for IFC2.x2 Model in the Construction Field (건설 분야 전자도면의 모델 기반 교환을 위한 IFC2.x2모델의 2차원 형상정보모델의 확장 개발에 관한 기초 연구)

  • Kim I.H.;Seo J.C.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.10 no.2
    • /
    • pp.121-132
    • /
    • 2005
  • Since 2002, there have been several efforts by the formal development team formed in the IAI to develop a common 2D standard specification between ISO/STEP and IAI/IFC. As a result, a drafting model has been included in the IFC2.x2 model. However, to be actively used in construction practice for drawing exchange, the IFC model should be extended to support paper space for multiple views, drawing output, and delivery of drawings. Therefore, in this paper, a methodology for relating STEP and IFC is investigated, and schema extensions for paper space (drawing sheet, presentation view, view pipeline), complex entities (leader), and dimensions (associative) are achieved. The resulting IFC model will enable basic harmonization with KOSDIC, SCADEC, and STEP-CDS while retaining the current IFC architecture. In addition, IT systems for the construction industry can benefit from the developed data model.

Efficient Language Model based on VCCV unit for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 효율적인 언어모델)

  • Park, Seon-Hui;No, Yong-Wan;Hong, Gwang-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2003.11c
    • /
    • pp.836-839
    • /
    • 2003
  • In this paper, we implement a bigram language model and evaluate the proper smoothing technique for a unit with low perplexity. Word, morpheme, and clause units are widely used as processing units for language models. We propose VCCV units, which have a smaller vocabulary than morpheme and clause units, and compare them with the clause and morpheme units using perplexity. The most common metric for evaluating a language model is perplexity, a derivative measure of the probability that the model assigns to test data. Smoothing is used to estimate probabilities when there are insufficient data to estimate them accurately. In this paper, we constructed N-grams of VCCV units with low perplexity and tested the language model using Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing. In the experimental results, modified Kneser-Ney smoothing proved to be the proper smoothing technique for VCCV units.
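As a sketch of the evaluation pipeline the abstract describes, the following trains a bigram model with absolute discounting (one of the smoothing families compared) and computes perplexity. The toy corpus is word-level for readability, where the paper would use VCCV units.

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Count bigrams and unigrams from a token sequence."""
    return Counter(zip(tokens, tokens[1:])), Counter(tokens)

def bigram_prob(bigrams, unigrams, w1, w2, d=0.75):
    """Absolute discounting: subtract d from each seen bigram count and
    redistribute the freed mass via the unigram distribution."""
    n_types = len({b for (a, b) in bigrams if a == w1})
    total = sum(unigrams.values())
    backoff = (d * n_types / unigrams[w1]) * (unigrams[w2] / total)
    return max(bigrams[(w1, w2)] - d, 0) / unigrams[w1] + backoff

def perplexity(tokens, bigrams, unigrams):
    """Perplexity = exp of the average negative log-probability."""
    logp = sum(math.log(bigram_prob(bigrams, unigrams, a, b))
               for a, b in zip(tokens, tokens[1:]))
    return math.exp(-logp / (len(tokens) - 1))

corpus = "the cat sat on the mat the cat ran".split()
bi, uni = train_bigram(corpus)
pp = perplexity(corpus, bi, uni)   # low perplexity on training data
```

Modified Kneser-Ney refines this idea by using continuation counts rather than raw unigram counts in the backoff term; the comparison across smoothing methods is exactly the paper's experiment, repeated per unit type.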
