• Title/Summary/Keyword: stochastic matrices


Modal identification of Canton Tower under uncertain environmental conditions

  • Ye, Xijun;Yan, Quansheng;Wang, Weifeng;Yu, Xiaolin
    • Smart Structures and Systems
    • /
    • v.10 no.4_5
    • /
    • pp.353-373
    • /
    • 2012
  • The instrumented Canton Tower is a 610 m high-rise structure which has been adopted as a benchmark problem for structural health monitoring (SHM) research. In this paper, an improved automatic modal identification method is presented based on the natural excitation technique in conjunction with the eigensystem realization algorithm (NExT/ERA). In the proposed modal identification method, the damping ratio, the consistent mode indicator from observability matrices (CMI_O), and the modal amplitude coherence (MAC) are used as criteria to distinguish physically true modes from spurious modes. Enhanced frequency domain decomposition (EFDD), the data-driven stochastic subspace identification method (SSI-DATA), and the proposed method are each applied to extract the modal parameters of the Canton Tower under different environmental conditions. Results of modal parameter identification based on output-only measurements are presented and discussed, and suggestions are given for the user-selected parameters of those methods. Furthermore, the effect of environmental conditions on the dynamic characteristics of the Canton Tower is investigated.
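The ERA half of the NExT/ERA pipeline can be illustrated on a single free-decay channel. The sketch below is a minimal, single-output ERA under assumed data: it builds block Hankel matrices from the response, truncates the SVD to the model order, and reads frequency and damping off the identified state matrix. The function name `era_modal_id`, the signal, and the model order are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def era_modal_id(y, dt, order=2):
    """Identify frequency (Hz) and damping ratio from one free-decay channel."""
    n = len(y) // 2 - 1
    # Block Hankel matrices of the response, shifted by one sample
    H0 = np.array([[y[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[y[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]     # truncate to model order
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt      # discrete-time state matrix
    poles = np.log(np.linalg.eigvals(A)) / dt          # continuous-time poles
    freq = float(np.abs(poles[0])) / (2.0 * np.pi)
    zeta = float(-poles[0].real / np.abs(poles[0]))
    return freq, zeta

# demo: a single 1 Hz mode with 2 % damping, sampled at 100 Hz
dt, f_true, z_true = 0.01, 1.0, 0.02
wn = 2.0 * np.pi * f_true
t = np.arange(0.0, 4.0, dt)
y = np.exp(-z_true * wn * t) * np.cos(wn * np.sqrt(1 - z_true**2) * t)
f_id, z_id = era_modal_id(y, dt)
```

On noiseless single-mode data the rank-2 realization is exact, which is why criteria such as damping ratio, CMI_O, and MAC are needed in practice to separate physical modes from spurious ones when noise raises the apparent model order.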

Optimal sensor placements for system identification of concrete arch dams

  • Altunisik, Ahmet Can;Sevim, Baris;Sunca, Fezayil;Okur, Fatih Yesevi
    • Advances in concrete construction
    • /
    • v.11 no.5
    • /
    • pp.397-407
    • /
    • 2021
  • This paper investigates optimal sensor placement and the capabilities of this procedure for identifying the dynamic characteristics of arch dams. For this purpose, a prototype arch dam is constructed under laboratory conditions. The Berke arch dam, located on the Ceyhan River in the city of Osmaniye and one of the highest arch dams constructed in Turkey, is selected for field verification. Ambient vibration tests are conducted using initial candidate sensor locations at the beginning of the study. The Enhanced Frequency Domain Decomposition and Stochastic Subspace Identification methods are used to extract experimental dynamic characteristics. Then, the measurements are repeated at the optimal sensor locations of the dams. These locations are specified using the Effective Independence Method. To determine the optimal sensor locations, the target mode-shape matrices obtained from ambient vibration tests of the selected dam with a large number of accelerometers are used. The dynamic characteristics obtained from each ambient vibration test are compared with each other. It is concluded that the dynamic characteristics obtained from the initial measurements and those obtained from a limited number of sensors are compatible with each other, indicating that optimal sensor placements determined by the Effective Independence Method are useful for identifying the dynamic characteristics of arch dams.
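The Effective Independence Method used to pick the sensor locations can be sketched in a few lines: starting from all candidate DOFs, it repeatedly deletes the DOF contributing least to the linear independence of the target mode shapes (the smallest diagonal entry of the mode-shape projection matrix). The beam-like demo mode shapes and the function name below are illustrative assumptions, not data from the dam tests.

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """Keep the candidate DOFs that contribute most to the linear independence
    of the target mode shapes, deleting one DOF per iteration."""
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        Q, _ = np.linalg.qr(phi[keep])
        # EfI value of each DOF: diagonal of the projection matrix Q @ Q.T
        ed = np.sum(Q**2, axis=1)
        keep.pop(int(np.argmin(ed)))       # drop the least informative DOF
    return sorted(keep)

# demo: 3 target modes of a simply supported beam at 19 candidate points
x = np.linspace(0.0, 1.0, 21)[1:-1]
phi = np.column_stack([np.sin(k * np.pi * x) for k in (1, 2, 3)])
sensors = effective_independence(phi, 3)
```

The retained rows keep the reduced mode-shape matrix well conditioned, which is the property the field procedure relies on when repeating the measurements with only a few accelerometers.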

Weighted Integral Method for an Estimation of Displacement COV of Laminated Composite Plates (복합적층판의 변위 변동계수 산정을 위한 가중적분법)

  • Noh, Hyuk-Chun
    • Journal of the Korean Society for Advanced Composite Structures
    • /
    • v.1 no.2
    • /
    • pp.29-35
    • /
    • 2010
  • In addition to the Young's modulus, the Poisson's ratio is at the center of attention in the field of stochastic finite element analysis, since both parameters play an important role in determining structural behavior. Accordingly, the sole effect of this parameter on the response variability is of importance from the perspective of estimating uncertain response. To this end, a formulation to determine the response variability in laminated composite plates due to the spatial randomness of Poisson's ratio is suggested. The independent contributions of the random Poisson's ratio can be captured in terms of sub-matrices which include the effect of the random parameter in the same order, which can be attained by using a Taylor series expansion about the mean of the parameter. In order to validate the adequacy of the proposed formulation, several example analyses are performed, and the results are compared with Monte Carlo simulation (MCS). Good agreement between the suggested scheme and MCS is observed, showing the adequacy of the scheme.
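The core idea of comparing a Taylor-expansion estimate of response variability against MCS can be shown on a scalar analogue: a plate deflection that scales with 1/D, where the flexural rigidity D depends on Poisson's ratio through (1 - ν²). The one-variable setting and the numbers below are illustrative assumptions, not the paper's weighted integral formulation over a random field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plate deflection scales with 1/D, D = E t^3 / (12 (1 - nu^2)), so up to a
# load/geometry constant w(nu) = w0 * (1 - nu**2).
def deflection(nu, w0=1.0):
    return w0 * (1.0 - nu**2)

mu_nu, cov_nu = 0.3, 0.05             # mean Poisson's ratio and its COV (assumed)
sigma_nu = cov_nu * mu_nu

# First-order Taylor (perturbation) estimate about the mean:
# Var[w] ~ (dw/dnu)^2 * sigma^2, so COV_w ~ |dw/dnu| * sigma / w(mean)
dw_dnu = -2.0 * mu_nu                 # derivative of (1 - nu^2) at the mean
cov_taylor = abs(dw_dnu) * sigma_nu / deflection(mu_nu)

# Monte Carlo simulation reference
samples = rng.normal(mu_nu, sigma_nu, 200_000)
w = deflection(samples)
cov_mcs = w.std() / w.mean()
```

For small input COVs the first-order estimate and the MCS reference agree closely, which mirrors the kind of validation reported in the abstract.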


The Strength of the Relationship between Semantic Similarity and the Subcategorization Frames of the English Verbs: a Stochastic Test based on the ICE-GB and WordNet (영어 동사의 의미적 유사도와 논항 선택 사이의 연관성 : ICE-GB와 WordNet을 이용한 통계적 검증)

  • Song, Sang-Houn;Choe, Jae-Woong
    • Language and Information
    • /
    • v.14 no.1
    • /
    • pp.113-144
    • /
    • 2010
  • The primary goal of this paper is to find a feasible way to answer the question: Does the similarity in meaning between verbs relate to the similarity in their subcategorization? In order to answer this question in a concrete way on the basis of a large set of English verbs, this study made use of various language resources, tools, and statistical methodologies. We first compiled a list of 678 verbs that were selected from the most and second-most frequent word lists of the Collins Cobuild English Dictionary and that also appeared in WordNet 3.0. We calculated similarity measures between all pairs of the words based on the 'jcn' algorithm (Jiang and Conrath, 1997) implemented in the WordNet::Similarity module (Pedersen, Patwardhan, and Michelizzi, 2004). A clustering process followed: first building similarity matrices out of the similarity measure values, next drawing dendrograms on the basis of the matrices, and finally obtaining 177 meaningful clusters (covering 437 verbs) that passed a certain level set by z-score. The subcategorization frames and their frequency values were taken from the ICE-GB. In order to calculate the Selectional Preference Strength (SPS) of the relationship between a verb and its subcategorizations, we relied on the Kullback-Leibler divergence model (Resnik, 1996). The SPS values of the verbs in the same cluster were compared with each other, which served to give statistical values that indicate how much the SPS values overlap between the subcategorization frames of the verbs. Our final analysis shows that the degree of overlap, or the relationship between semantic similarity and the subcategorization frames of the verbs in English, is spread out evenly from 'very strongly related' to 'very weakly related': some semantically similar verbs share a great deal in their subcategorization frames, some indicate an average degree of strength in the relationship, while others, though still semantically similar, tend to share little in their subcategorization frames.
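The Selectional Preference Strength the authors borrow from Resnik (1996) is a Kullback-Leibler divergence between a verb's subcategorization-frame distribution and the overall frame distribution. A minimal sketch, using hypothetical frame counts rather than ICE-GB figures:

```python
from math import log2

def sps(verb_frames, all_frames):
    """Selectional Preference Strength (Resnik, 1996): the KL divergence between
    a verb's subcategorization-frame distribution and the overall distribution."""
    n_verb = sum(verb_frames.values())
    n_all = sum(all_frames.values())
    strength = 0.0
    for frame, count in verb_frames.items():
        p_verb = count / n_verb              # P(frame | verb)
        p_all = all_frames[frame] / n_all    # P(frame) across all verbs
        strength += p_verb * log2(p_verb / p_all)
    return strength

# hypothetical corpus-wide frame counts (not ICE-GB figures)
overall = {"NP": 50, "NP_PP": 30, "S": 20}
```

A verb whose frame distribution matches the corpus-wide distribution has SPS 0; the more a verb prefers particular frames, the larger its SPS, and comparing SPS profiles within a semantic cluster gives the overlap statistic described above.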


CLUSTERING DNA MICROARRAY DATA BY STOCHASTIC ALGORITHM

  • Shon, Ho-Sun;Kim, Sun-Shin;Wang, Ling;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.438-441
    • /
    • 2007
  • Recent advances in molecular biology and engineering technology have made it possible, using DNA microarrays, to observe thousands of genes and their patterns of variation in tissue samples from a living body. With DNA microarrays, it is possible to construct genetic groups that have similar expression patterns and to grasp the progress and variation of genes. This paper applies cluster analysis, whose purpose is the discovery of biological subgroups or classes, using gene expression information. The purpose of this paper is thus to predict a new, unknown class; open leukaemia data are used for the experiment, and the MCL (Markov CLustering) algorithm is applied as the analysis method. The MCL algorithm is based on probability and graph flow theory: it simulates random walks on a graph using Markov matrices to determine the transition probabilities among the nodes of the graph. In our method, the MCL algorithm is applied after computing distances using the Euclidean distance; the inflation and diagonal factors, which are tuning moduli, are then adjusted; and finally a threshold, taken as the average of each column, is used to distinguish one class from another. Our method improves accuracy through this use of the column-average threshold. Our experimental results show about 70% accuracy on average compared with the classes known beforehand. For comparative evaluation, the proposed method is also compared with the SOM (Self-Organizing Map) clustering algorithm, a neural-network approach, and with hierarchical clustering, and it shows better results than hierarchical clustering. In further study, it should be examined whether similar results are obtained when the inflation parameter found in our experiment is applied to other gene expression data. We are also trying to develop a systematic method to improve the accuracy by regulating the factors mentioned above.
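The expansion/inflation loop at the heart of MCL can be sketched directly with Markov matrices: expansion spreads the random-walk flow, inflation sharpens it, and the iteration converges to a matrix whose columns reveal the clusters. The two-triangle toy graph below is an illustrative assumption, not the leukaemia data, and the inflation value 2.0 stands in for the tuned factor mentioned in the abstract.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, n_iter=50):
    """Markov Clustering: alternate expansion (a power of the column-stochastic
    transition matrix) with inflation (entrywise power plus re-normalisation)."""
    M = adj + np.eye(len(adj))            # self-loops stabilise the flow
    M = M / M.sum(axis=0)                 # column-stochastic Markov matrix
    for _ in range(n_iter):
        M = np.linalg.matrix_power(M, expansion)   # expansion: spread the flow
        M = M ** inflation                         # inflation: sharpen the flow
        M = M / M.sum(axis=0)
    # columns whose remaining mass sits on the same attractor set form a cluster
    groups = {}
    for col in range(M.shape[1]):
        support = frozenset(np.flatnonzero(M[:, col] > 1e-6))
        groups.setdefault(support, []).append(col)
    return sorted(groups.values())

# demo: two triangles joined by a single bridge edge
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
clusters = mcl(adj)
```

Raising the inflation exponent produces more, tighter clusters; lowering it merges them, which is why the abstract treats inflation as the key tuning modulus.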


Introduction to the Indian Buffet Process: Theory and Applications (인도부페 프로세스의 소개: 이론과 응용)

  • Lee, Youngseon;Lee, Kyoungjae;Lee, Kwangmin;Lee, Jaeyong;Seo, Jinwook
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.2
    • /
    • pp.251-267
    • /
    • 2015
  • The Indian Buffet Process is a stochastic process on equivalence classes of binary matrices with finitely many rows and infinitely many columns. The Indian Buffet Process can be imposed as the prior distribution on the binary matrix in an infinite feature model. We describe the derivation of the Indian Buffet Process from a finite feature model, and briefly explain the relation between the Indian Buffet Process and the beta process. Using a Gaussian linear model, we describe three algorithms, the Gibbs sampling algorithm, the stick-breaking algorithm, and a variational method, with an application to finding features in image data. We also illustrate the use of the Indian Buffet Process in various types of analysis, such as dyadic data analysis, network data analysis, and independent component analysis.
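The restaurant metaphor behind the Indian Buffet Process can be sampled in a few lines: customer i takes each previously sampled dish k with probability m_k/i (m_k being the number of earlier customers who took it) and then tries a Poisson(α/i) number of new dishes. A minimal sketch of the prior draw, with the function name and demo parameters as assumptions:

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng):
    """Draw one binary feature matrix Z from the Indian Buffet Process prior."""
    dish_counts = []                       # m_k: customers who took dish k so far
    rows = []
    for i in range(1, n_customers + 1):
        # take each existing dish k with probability m_k / i
        row = [rng.random() < m / i for m in dish_counts]
        for k, took in enumerate(row):
            if took:
                dish_counts[k] += 1
        # then sample Poisson(alpha / i) brand-new dishes
        n_new = rng.poisson(alpha / i)
        row += [True] * n_new
        dish_counts += [1] * n_new
        rows.append(row)
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

rng = np.random.default_rng(1)
Z = sample_ibp(10, 2.0, rng)               # 10 rows, a random number of columns
```

The expected number of sampled features is α times the N-th harmonic number, which is the sense in which the number of columns stays finite for finite data even though the prior allows infinitely many.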

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future financial price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 pixels × 40 pixels, and the graph of each independent variable is drawn using a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices in order to express the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset as the training dataset and the remaining 20% as the validation dataset. The CNN classifiers are then trained using the images of the training dataset in the final step.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer. In the pooling layer, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, and the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer was set to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples), and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
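Steps 1-3 of CNN-FG (windowing the series into 5-day intervals, drawing one colour per variable, and splitting the image into R/G/B matrices) can be loosely imitated without a plotting library. The rasterisation below is a simplified stand-in for the authors' graph-drawing step, with the function name, window shapes, and random demo data all assumed:

```python
import numpy as np

def series_to_rgb_graph(windows, size=40):
    """Rasterise each window of up to three indicator series into a size x size
    RGB array, one colour channel per series (a stand-in for steps 2-3)."""
    images = []
    for win in windows:                          # win: (n_series, n_days) array
        img = np.zeros((size, size, 3))
        lo, hi = win.min(), win.max()            # shared scale for the window
        span = (hi - lo) or 1.0                  # guard against a flat window
        xs = np.linspace(0, size - 1, win.shape[1]).astype(int)
        for ch, series in enumerate(win):
            ys = ((series - lo) / span * (size - 1)).astype(int)
            img[size - 1 - ys, xs, ch] = 1.0     # larger values sit higher up
        images.append(img)
    return np.stack(images)

# demo: two 5-day windows of three hypothetical indicator series
rng = np.random.default_rng(0)
windows = rng.normal(size=(2, 3, 5))
images = series_to_rgb_graph(windows)
```

The resulting (height, width, 3) arrays are exactly the kind of three-matrix RGB input that a CNN classifier consumes, which is the point of converting the time series into images in the first place.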

Efficient Structural Safety Monitoring of Large Structures Using Substructural Identification (부분구조추정법을 이용한 대형구조물의 효율적인 구조안전도 모니터링)

  • Yun, Chung-Bang;Lee, Hyung-Jin
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.1 no.2
    • /
    • pp.1-15
    • /
    • 1997
  • This paper presents substructural identification methods for the assessment of local damage in complex and large structural systems. For this purpose, an auto-regressive moving average with exogenous input (ARMAX) model is derived for a substructure to process measurement data impaired by noise. Using the substructural methods, the number of unknown parameters in each identification can be significantly reduced; hence the convergence and accuracy of the estimation can be improved. Secondly, for damage assessment, the damage index is defined as the ratio of the current stiffness to the baseline value at each element. Damage is estimated indirectly, using the system matrices obtained from the substructural identification. To demonstrate the proposed techniques, several simulation and experimental example analyses are carried out on structural models of a 2-span truss, a 3-span continuous beam, and a 3-story building. The results indicate that the present substructural identification and damage estimation methods are effective and efficient for local damage estimation in complex structures.
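The damage index defined above (current element stiffness over its baseline value) is direct to compute once the substructural identification has produced stiffness estimates. A minimal sketch; the stiffness values and the 0.8 flagging threshold are illustrative assumptions, not results from the paper's truss, beam, or building models:

```python
import numpy as np

def damage_index(k_current, k_baseline):
    """Element damage index: identified current stiffness over baseline stiffness.
    Values well below 1.0 flag candidate local damage."""
    return np.asarray(k_current, dtype=float) / np.asarray(k_baseline, dtype=float)

baseline = np.array([2.0e6, 2.0e6, 2.0e6])   # baseline element stiffnesses (N/m)
current = np.array([2.0e6, 1.4e6, 1.9e6])    # stiffnesses identified after an event
idx = damage_index(current, baseline)
damaged = np.flatnonzero(idx < 0.8)          # elements flagged by a 20 % drop
```

Because each substructure is identified separately, the index can localise a stiffness drop to a few elements without re-estimating the full system matrices.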


Bending analysis of nano-Fe2O3 reinforced concrete slabs exposed to temperature fields and supported by viscoelastic foundation

  • Zouaoui R. Harrat;Mohammed Chatbi;Baghdad Krour;Sofiane Amziane;Mohamed Bachir Bouiadjra;Marijana Hadzima-Nyarko;Dorin Radu;Ercan Isik
    • Advances in concrete construction
    • /
    • v.17 no.2
    • /
    • pp.111-126
    • /
    • 2024
  • During the clinkering stages of cement production, the chemical composition of fine raw materials such as limestone and clay, which include iron oxide (Fe2O3), silicon dioxide (SiO2) and aluminum oxide (Al2O3), significantly influences the quality of the final product. Specifically, the chemical interaction of Fe2O3 with CaO, SiO2 and Al2O3 during clinkerisation plays a key role in determining the chemical reactivity and overall quality of the final cement, shaping the properties of the concrete produced. As an extension, this study aims to investigate the physical effects of incorporating nanosized Fe2O3 particles as fillers in concrete matrices, and their impact on concrete structures, namely slabs. To accurately model the reinforced concrete (RC) slabs, a refined trigonometric shear deformation theory (RTSDT) is used. Additionally, a stochastic Eshelby homogenization approach is employed to determine the thermoelastic properties of nano-Fe2O3-infused concrete slabs. To ensure comprehensive coverage, the RC slabs are subjected to various mechanical loads and exposed to temperature fields to assess their thermo-mechanical performance. Furthermore, the slabs are assumed to rest on a three-parameter viscoelastic foundation comprising Winkler elastic springs, a Pasternak shear layer, and a damping parameter. The governing equilibrium equations of the system are derived using the principle of virtual work and subsequently solved using Navier's technique. The findings indicate that while ferric oxide nanoparticles enhance the mechanical properties of concrete against mechanical loading, they have less favorable effects on its performance under thermal exposure. However, the viscoelastic foundation contributes to mitigating these effects, improving the concrete's overall performance in both scenarios.
These results highlight the trade-offs between mechanical and thermal performance when using Fe2O3 nanoparticles in concrete and underscore the importance of optimizing nanoparticle content and loading conditions to improve the structural performance of concrete structures.
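For a Navier mode w = W sin(mπx/a) sin(nπy/b), the three-parameter foundation contributes a modal reaction combining the Winkler spring, a Pasternak shear term proportional to (mπ/a)² + (nπ/b)², and an imaginary damping term under harmonic motion. The sketch below shows only that foundation contribution, with assumed parameter values; it is not the paper's full RTSDT plate solution:

```python
import numpy as np

def foundation_modal_coeff(kw, gp, cd, m, n, a, b, omega):
    """Modal reaction added by a Winkler-Pasternak-damping foundation for the
    Navier mode w = W sin(m*pi*x/a) sin(n*pi*y/b) under harmonic motion:
    the shear layer enters through -gp * laplacian(w) = +gp*(alpha^2 + beta^2)*w
    and the damper contributes the imaginary term i*omega*cd."""
    alpha, beta = m * np.pi / a, n * np.pi / b
    return kw + gp * (alpha**2 + beta**2) + 1j * omega * cd

# assumed values: Winkler 1e7 N/m^3, Pasternak 1e5 N/m, no damping, first mode
coeff = foundation_modal_coeff(1.0e7, 1.0e5, 0.0, 1, 1, 1.0, 1.0, 0.0)
```

This added stiffness (and, with cd > 0, energy dissipation) per mode is the mechanism by which the foundation mitigates both the mechanical and the thermal response in the study's findings.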