• Title/Summary/Keyword: Finite sample distribution

Search results: 75

Three-dimensional numerical simulation and cracking analysis of fiber-reinforced cement-based composites

  • Huang, Jun;Huang, Peiyan
    • Computers and Concrete
    • /
    • v.8 no.3
    • /
    • pp.327-341
    • /
    • 2011
  • Three-dimensional graphic objects created by MATLAB are exported to the AUTOCAD program through the MATLAB handle functions. The imported SAT format files are used to produce the finite element mesh for MSC.PATRAN. Based on the Monte Carlo random sampling principle, the material heterogeneity of cement composites with randomly distributed fibers is described by the Weibull distribution function. In this paper, a concept called "soft region", covering micro-defects, micro-voids, etc., is put forward for the simulation of crack propagation in fiber-reinforced cement composites. The performance of the numerical model is demonstrated by several examples involving crack initiation and growth in the composites under three-dimensional stress conditions: tensile loading, compressive loading, and crack growth along a bimaterial interface.
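
The Weibull description of material heterogeneity can be illustrated with a minimal Monte Carlo sketch (not taken from the paper): element strengths are drawn from a two-parameter Weibull distribution by inverse-transform sampling. The shape `m`, scale `x0`, and element count below are illustrative assumptions.

```python
import numpy as np

# Two-parameter Weibull: F(x) = 1 - exp(-(x / x0)**m).
# m (shape) controls the degree of heterogeneity, x0 (scale) the overall strength level.
m, x0 = 3.0, 40.0          # illustrative shape and scale (e.g. MPa)
n_elements = 10_000        # assumed number of finite elements in the mesh

rng = np.random.default_rng(0)
# Inverse-transform (Monte Carlo) sampling: x = x0 * (-ln(1 - u))**(1/m), u ~ Uniform(0, 1).
u = rng.random(n_elements)
strengths = x0 * (-np.log(1.0 - u)) ** (1.0 / m)

# Each element of the mesh would be assigned one sampled strength; elements whose
# local stress exceeds their strength become candidates for a "soft region".
print(round(strengths.mean(), 2), round(strengths.std(), 2))
```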

The System of Non-Linear Detector over Wireless Communication (무선통신에서의 Non-Linear Detector System 설계)

  • 공형윤
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.106-109
    • /
    • 1998
  • Wireless communication systems, in particular, must operate in a crowded electromagnetic environment where in-band undesired signals are treated as noise by the receiver. These interfering signals are often random but not Gaussian. Because of the non-Gaussian noise, the distribution of the observables cannot be specified by a finite set of parameters; instead, the r-dimensional sample space (pure noise samples) is equiprobably partitioned into a finite number of disjoint regions using quantiles and a vector quantizer based on training samples. If we assume that the detected symbols are correct, then the pure noise samples can be observed during both the training and transmitting modes. The proposed algorithm is based on a piecewise approximation to a regression function built from quantiles and conditional partition moments, which are estimated by a Robbins-Monro stochastic approximation (RMSA) algorithm. In this paper, we develop a diversity combiner with a modified detector, called a Non-Linear Detector; the receiver has a differential phase detector in each diversity branch, and at the combiner each detector output is proportional to the second power of the branch envelope. Monte Carlo simulations were used to evaluate the system performance.

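As a rough illustration of the quantile-partition and stochastic-approximation ideas mentioned above (not the paper's detector), the sketch below partitions heavy-tailed noise samples into equiprobable regions by quantiles and updates the conditional moment of each region with a Robbins-Monro step; the region count `K`, the noise model, and the step-size schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Heavy-tailed (non-Gaussian) noise samples observed during the training mode.
noise = rng.standard_t(df=3, size=5000)

K = 8                                     # assumed number of equiprobable regions
# Quantile boundaries from an initial batch give an equiprobable partition of the sample space.
edges = np.quantile(noise[:500], np.linspace(0.0, 1.0, K + 1))
edges[0], edges[-1] = -np.inf, np.inf

moments = np.zeros(K)                     # running estimates of E[x | region k]
counts = np.zeros(K, dtype=int)

for x in noise[500:]:
    k = np.searchsorted(edges, x, side="right") - 1   # region index of this sample
    counts[k] += 1
    a_n = 1.0 / counts[k]                 # Robbins-Monro step size a_n = 1/n
    moments[k] += a_n * (x - moments[k])  # stochastic approximation update of the moment

print(np.round(moments, 3))
```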

Development of Numerical Analysis Program Considering Variation of Soil Properties During Electrokinetic Remediation (Electrokinetic 정화 처리 중 토체내의 특성변화를 고려한 수치해석 프로그램 개발)

  • 한상재;김치열;김수삼
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 2001.04a
    • /
    • pp.202-205
    • /
    • 2001
  • In this study, an electrokinetic remediation test was performed on kaolin contaminated with lead, and the voltage, zeta potential, pH distribution, current, and contaminant transport in the soil sample were investigated; the results of a finite difference numerical analysis program (HERP) developed for contaminated soil were compared with those of the test. The HERP results showed that, near the anode, the residual concentration decreased with increasing voltage. Hence, when treatment at a constant voltage is continued long enough, in contrast to a sample showing no change in residual concentration, the voltage gradient is considered to be the controlling factor of the residual concentration.

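Not the HERP formulation itself, but a minimal sketch of the kind of finite difference transport calculation such a program performs: one-dimensional advection-diffusion of a contaminant in which electromigration under the applied voltage gradient is represented by a constant velocity. All coefficients, dimensions, and boundary conditions below are illustrative assumptions.

```python
import numpy as np

# 1-D advection-diffusion of a contaminant in a soil column under an applied voltage:
# electromigration is represented by a constant velocity u toward the cathode.
L, nx, nt = 0.2, 101, 50_000           # column length [m], grid points, time steps
dx, dt = L / (nx - 1), 1.0             # grid spacing [m], time step [s]
D = 1.0e-9                             # effective diffusion coefficient [m^2/s]
u = 1.0e-6                             # electromigration velocity [m/s]

c = np.ones(nx)                        # normalized initial lead concentration
for _ in range(nt):
    diff = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    adv = -u * (c[1:-1] - c[:-2]) / dx             # upwind scheme for flow toward the cathode
    c[1:-1] += dt * (diff + adv)
    c[0] = 0.0                         # clean purge fluid at the anode (Dirichlet)
    c[-1] = c[-2]                      # zero-gradient outflow at the cathode

# Residual concentration profile: depleted near the anode, growing toward the cathode.
print(np.round(c[::20], 3))
```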

Power Comparison between Methods of Empirical Process and a Kernel Density Estimator for the Test of Distribution Change (분포변화 검정에서 경험확률과정과 커널밀도함수추정량의 검정력 비교)

  • Na, Seong-Ryong;Park, Hyeon-Ah
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.2
    • /
    • pp.245-255
    • /
    • 2011
  • There are two nonparametric methods, one using empirical distribution functions and the other probability density estimators, for testing a change in the distribution of data. In this paper we investigate the two methods precisely and summarize the results of previous research. We assume several probability models to carry out a simulation study of the change point analysis and to examine the finite sample behavior of the two methods. Empirical powers are compared to verify which method is better for each model.
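
A simulation of empirical power in this spirit can be sketched as follows (a generic CUSUM-type statistic built from empirical distribution functions, not the exact statistics compared in the paper); the sample size, shift size, and number of replications are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def change_stat(x):
    """CUSUM-type statistic: weighted sup distance between the empirical distribution
    functions of x[:k] and x[k:], maximized over candidate change points k."""
    n = len(x)
    grid = np.sort(x)
    stat = 0.0
    for k in range(10, n - 10):
        F1 = np.searchsorted(np.sort(x[:k]), grid, side="right") / k
        F2 = np.searchsorted(np.sort(x[k:]), grid, side="right") / (n - k)
        w = np.sqrt(k * (n - k) / n)
        stat = max(stat, w * np.max(np.abs(F1 - F2)))
    return stat

n, reps = 100, 500
null = [change_stat(rng.standard_normal(n)) for _ in range(reps)]
crit = np.quantile(null, 0.95)            # 5% critical value obtained by simulation

# Alternative model: a mean shift of 0.8 after the midpoint of the series.
alt = [change_stat(np.concatenate([rng.standard_normal(n // 2),
                                   rng.standard_normal(n // 2) + 0.8]))
       for _ in range(reps)]
print("empirical power at the 5% level:", round(float(np.mean(np.array(alt) > crit)), 2))
```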

Envelope empirical likelihood ratio for the difference of two location parameters with constraints of symmetry

  • Kim, Kyoung-Mi;Zhou, Mai
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2002.06a
    • /
    • pp.51-73
    • /
    • 2002
  • The empirical likelihood ratio method is a new technique in nonparametric inference developed by A. Owen (1988, 2001). Sometimes the empirical likelihood is difficult to define. As a case in point, we discuss how to define a modified empirical likelihood for the location of symmetry using well-known points of symmetry as side conditions. The side condition of symmetry is defined through a finite subset of the infinite set of constraints. The modified empirical likelihood under symmetry studied in this paper constructs a constrained parameter space $\Theta^+$ of distributions imposing the known symmetry as side information. We show that the usual asymptotic theory (Wilks' theorem) still holds for the empirical likelihood ratio on the constrained parameter space, and the asymptotic distribution of the empirical NPMLE of the difference of two symmetry points is obtained.

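For readers unfamiliar with empirical likelihood, here is a minimal sketch of Owen's basic empirical likelihood ratio for a single mean, which the paper extends with symmetry constraints; the Lagrange-multiplier root search and chi-square calibration follow the standard unconstrained construction, not the paper's envelope version, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (Owen's construction):
    weights p_i = 1 / (n * (1 + t * (x_i - mu))) with t solving sum_i (x_i - mu) * p_i = 0."""
    z = x - mu
    if z.min() >= 0.0 or z.max() <= 0.0:          # mu outside the convex hull of the data
        return np.inf
    lo = -1.0 / z.max() + 1e-10                   # bounds keep every weight strictly positive
    hi = -1.0 / z.min() - 1e-10
    t = brentq(lambda t: np.sum(z / (1.0 + t * z)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + t * z))

rng = np.random.default_rng(3)
x = rng.standard_normal(50) + 1.0
# Under the true mean, -2 log R is asymptotically chi-square(1); the paper shows that a
# Wilks-type result of this kind also holds on the symmetry-constrained parameter space.
print(round(el_logratio(x, 1.0), 3), round(el_logratio(x, 0.0), 3))
```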

An Accelerated Life Test for Burnout of Tungsten Filament of Incandescent Lamp (텅스텐 백열전구의 필라멘트 단선에 대한 가속수명시험)

  • Kim Jin-Woo;Shin Jae-Chul;Kim Myung-Soo;Lee Jae-Kook
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.29 no.7 s.238
    • /
    • pp.921-929
    • /
    • 2005
  • This paper presents an accelerated life test for burnout of the tungsten filament of an incandescent lamp. Failure analyses of field samples show that the root causes are local heating or hot spots in the filament caused by tungsten evaporation and wire sag. Finite element analysis is performed to evaluate the effect of vibration and impact on burnout, but no points of stress concentration or structural weakness are found in the sample. To estimate the burnout life of the lamp, an accelerated life test is planned using quality function deployment and a fractional factorial design, with voltage, vibration, and temperature selected as accelerating variables. A Weibull lifetime distribution and a generalized linear life-stress relationship are assumed and supported by a goodness-of-fit test and a test for a common shape parameter of the distribution. Using accelerated life testing software, we estimate the common shape parameter of the Weibull distribution, the life-stress relationship, and the acceleration factor.
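
A simplified sketch of the kind of model fit involved (not the paper's analysis or data): synthetic lifetimes at three voltage levels are fitted by maximum likelihood with a common Weibull shape parameter and a log-linear (inverse power) life-stress relationship. The stress levels, sample sizes, and parameter values `true_beta`, `true_b0`, `true_b1` are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Synthetic ALT data: 30 lifetimes at each of three elevated voltage levels (all invented).
volts = np.array([120.0, 150.0, 180.0])
true_beta, true_b0, true_b1 = 2.0, 30.0, -5.0     # common shape; inverse power life-stress law
data = [(v, np.exp(true_b0 + true_b1 * np.log(v)) * rng.weibull(true_beta, 30)) for v in volts]

def neg_loglik(theta):
    """Joint Weibull log-likelihood with a common shape beta across stress levels and
    scale eta(V) = exp(b0 + b1 * log V), i.e. a log-linear (inverse power) relationship."""
    log_beta, b0, b1 = theta
    beta = np.exp(log_beta)
    ll = 0.0
    for v, t in data:
        eta = np.exp(b0 + b1 * np.log(v))
        z = t / eta
        ll += np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(z) - z ** beta)
    return -ll

fit = minimize(neg_loglik, x0=[0.0, 25.0, -4.0], method="Nelder-Mead",
               options={"maxiter": 10000, "xatol": 1e-8, "fatol": 1e-8})
print("common shape estimate:", round(np.exp(fit.x[0]), 2),
      "life-stress coefficients:", np.round(fit.x[1:], 2))
```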

Thermal Behavior Variations in Coating Thickness Using Pulse Phase Thermography

  • Ranjit, Shrestha;Chung, Yoonjae;Kim, Wontae
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.36 no.4
    • /
    • pp.259-265
    • /
    • 2016
  • This paper presents a study on the use of pulsed phase thermography in the measurement of thermal barrier coating thickness with a numerical simulation. A multilayer heat transfer model was used to analyze the surface temperature response acquired from one-sided pulsed thermal imaging. The test sample comprised four layers: the metal substrate, bond coat, thermally grown oxide, and the top coat. The finite element software ANSYS was used to model and predict the temperature distribution in the test sample under an imposed heat flux on the exterior of the TBC. The phase image was computed using a Fourier transform with the software MATLAB and Thermofit Pro. The relationship between the coating thickness and the corresponding phase angle was then established, with the coating thickness expressed as a function of the phase angle. The method was successfully applied to measure coating thicknesses varying from 0.25 mm to 1.5 mm.
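
The phase computation step can be sketched in a few lines (a generic pulsed phase thermography calculation on synthetic cooling curves, not the paper's ANSYS/Thermofit Pro workflow): each pixel's temperature decay is Fourier transformed and the phase angle at a low non-zero frequency bin is taken. The decay model and the time constants `tau` are illustrative assumptions.

```python
import numpy as np

# Synthetic cooling curves after a heat pulse for a row of pixels whose coating
# thickness increases from 0.25 mm to 1.5 mm; the decay model is purely illustrative.
n_frames, n_pixels = 256, 64
t = np.arange(1, n_frames + 1) * 0.05                 # time after the pulse [s]
thickness = np.linspace(0.25, 1.5, n_pixels)          # coating thickness [mm]
tau = 0.5 + 2.0 * thickness                           # assumed thermal time constant [s]
frames = np.exp(-t[:, None] / tau[None, :]) / np.sqrt(t)[:, None]   # (frames, pixels)

# Pulsed phase thermography: Fourier transform each pixel's temperature decay
# and take the phase angle at a low non-zero frequency bin.
spectrum = np.fft.fft(frames, axis=0)
phase = np.angle(spectrum[1])                         # phase image (here a 1-D profile)

# The phase angle changes with coating thickness, which is the relationship the
# paper exploits to express thickness as a function of phase.
print(np.round(phase[::16], 3))
```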

Estimation of the Number of Sources Based on Hypothesis Testing

  • Xiao, Manlin;Wei, Ping;Tai, Heng-Ming
    • Journal of Communications and Networks
    • /
    • v.14 no.5
    • /
    • pp.481-486
    • /
    • 2012
  • Accurate and efficient estimation of the number of sources is critical for providing the parameters of targets in array signal processing and blind source separation, among other problems. When conventional estimators work in unfavorable scenarios, e.g., at low signal-to-noise ratio (SNR), with a small number of snapshots, or for sources of different strengths, it is challenging to maintain good performance. In this paper, the detection limit of the minimum description length (MDL) estimator and the signal strength required for reliable detection are first discussed. Through a comparison, we analyze why the performance of classical estimators deteriorates completely in unfavorable scenarios. After discussing the limiting distribution of the eigenvalues of the sample covariance matrix, we propose a new approach for estimating the number of sources based on a sequential hypothesis test. The new estimator performs better in unfavorable scenarios and is consistent in the traditional asymptotic sense. Finally, numerical evaluations indicate that the proposed estimator performs well compared with other traditional estimators at low SNR and in the finite sample size case, especially when weak signals are superimposed on strong signals.
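
As background for the eigenvalue-based approach, here is a minimal sketch of the classical MDL source-number estimator computed from the eigenvalues of a sample covariance matrix (the baseline the paper compares against, not its sequential hypothesis test); the array size, snapshot count, and source powers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic array data: p sensors, n snapshots, q narrowband sources in unit-power white noise.
p, n, q = 8, 200, 2
A = (rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))) / np.sqrt(2.0)
S = (rng.standard_normal((q, n)) + 1j * rng.standard_normal((q, n))) * 2.0
noise = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2.0)
X = A @ S + noise

# Eigenvalues of the sample covariance matrix, sorted in decreasing order.
R = X @ X.conj().T / n
lam = np.sort(np.linalg.eigvalsh(R))[::-1]

def mdl(k):
    """Classical MDL criterion: sphericity of the p-k smallest eigenvalues
    (ratio of their geometric to arithmetic mean) plus a complexity penalty."""
    tail, m = lam[k:], p - k
    log_ratio = np.sum(np.log(tail)) / m - np.log(np.mean(tail))
    return -n * m * log_ratio + 0.5 * k * (2 * p - k) * np.log(n)

k_hat = min(range(p), key=mdl)
print("estimated number of sources:", k_hat)
```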

A New Remeshing Technique of Tetrahedral Elements by Redistribution of Nodes in Subdomains and its Application to the Finite Element Analysis (영역별 절점 재분포를 통한 사면체 격자 재구성 방법 및 유한요소해석에의 적용)

  • Hong J.T.;Lee S.R.;Yang D.Y.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.607-610
    • /
    • 2005
  • A remeshing algorithm using tetrahedral elements has been developed, which is adapted to the mesh density map constructed by a posteriori error estimation. In the finite element analysis of metal forging processes, numerical error increases as deformation proceeds due to severe distortion of elements. In order to reduce the numerical error, the desired mesh sizes in each region of the workpiece are calculated by a posteriori error estimation and the density map is constructed. Piecewise density functions are then constructed with radial basis functions in order to interpolate the discrete data of the density map. The sample mesh is constructed by a point insertion technique adapted to the density function, and the mesh size is controlled by moving and deleting nodes to obtain an optimal distribution according to the mesh density function as well as a quality optimization function. After the node redistribution is finished, a tetrahedral mesh is constructed from the redistributed nodes; it is adapted to the density map and results in good mesh quality. The quality and adaptability of the constructed mesh are verified with a test measure. The proposed remeshing technique is applied to the finite element analysis of forging processes.

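The radial basis function interpolation of a discrete density map can be sketched as follows (a generic 2-D example using SciPy's RBFInterpolator, not the authors' implementation); the sample points and target element sizes are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

# Discrete mesh-density map: scattered points in a 2-D workpiece section with the
# desired element size (from a posteriori error estimation) at each point -- all invented.
pts = rng.uniform(0.0, 1.0, size=(200, 2))
desired_size = 0.08 - 0.06 * np.exp(-20.0 * np.sum((pts - 0.5) ** 2, axis=1))  # finer near the center

# Radial basis function interpolation turns the discrete density map into a smooth
# density function that the remesher can query at any candidate node position.
density = RBFInterpolator(pts, desired_size, kernel="thin_plate_spline")

query = np.array([[0.5, 0.5], [0.1, 0.9]])
print(density(query))      # smaller values request finer elements at those locations
```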

Estimation of the number of discontinuity points based on likelihood (가능도함수를 이용한 불연속점 수의 추정)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.1
    • /
    • pp.51-59
    • /
    • 2010
  • In the case where the regression function has a discontinuity point in a generalized linear model, Huh (2009) estimated the location and jump size using the log-likelihood weighted by a one-sided kernel function. In this paper, we consider estimation of the unknown number of discontinuity points in the regression function. The proposed algorithm is based on testing the existence of a discontinuity point using the asymptotic distribution of the estimated jump size described in Huh (2009). The finite sample performance is illustrated by a simulated example.
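
To give a feel for the one-sided kernel idea, the sketch below estimates the regression function separately from the left and from the right of each candidate point and takes their difference as a jump estimate (a plain Nadaraya-Watson version, not the likelihood-weighted estimator of Huh (2009)); the data-generating model, bandwidth, and grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Regression data with a single discontinuity of size 1 at x = 0.5 (all invented).
n = 400
x = np.sort(rng.uniform(0.0, 1.0, n))
y = 0.5 * np.sin(2.0 * np.pi * x) + (x > 0.5) + rng.normal(scale=0.2, size=n)

def one_sided_nw(x0, side, h=0.05):
    """Nadaraya-Watson estimate at x0 using only observations on one side of x0."""
    d = (x - x0) / h
    k = np.exp(-0.5 * d ** 2) * ((d < 0) if side == "left" else (d > 0))
    return np.sum(k * y) / np.sum(k)

# Jump estimate at each candidate point: right-side fit minus left-side fit.
grid = np.linspace(0.1, 0.9, 81)
jump = np.array([one_sided_nw(g, "right") - one_sided_nw(g, "left") for g in grid])

# A large jump estimate signals a discontinuity; testing its significance and repeating
# the procedure gives an estimate of the number of discontinuity points.
i = np.argmax(np.abs(jump))
print("location:", round(grid[i], 3), "estimated jump size:", round(jump[i], 3))
```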