• Title/Abstract/Keyword: joint distribution probability

Search results: 101

Fatigue Strength Assessment of Spot-Welded Lap Joint Using Strain Energy Density Factor

  • Sohn, Ilseon;Bae, Dongho
    • Journal of Mechanical Science and Technology
    • /
    • Vol. 15, No. 1
    • /
    • pp.44-51
    • /
    • 2001
  • One of the recent issues in the design of spot-welded structures such as automobile bodies is to develop an economical method of predicting the fatigue design criterion without additional fatigue tests. In this paper, as a basic investigation toward developing such a method, a fracture-mechanics approach was investigated. First, the Mode I, Mode II and Mode III stress intensity factors were analyzed. Second, the strain energy density factor (S) that synthetically combines them was calculated. Finally, in order to establish a systematic fatigue design criterion based on this strain energy density factor, fatigue data of the $\Delta P-N_f$ relation obtained on various in-plane bending type spot-welded lap joints were systematically rearranged into the $\Delta S-N_f$ relation. Its utility and reliability were verified using the Weibull probability distribution function. The reliability of the fatigue life predicted at $10^7$ cycles by the strain energy density factor was estimated to be 85%. Therefore, the fatigue design criterion of spot-welded lap joints can be decided from this relation instead of the $\Delta P-N_f$ relation.
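
The reliability check described above amounts to fitting a two-parameter Weibull distribution to the fatigue lives and reading off the survival probability at $10^7$ cycles. Below is a minimal sketch of that calculation with SciPy, using purely hypothetical life data (the paper's measurements are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical fatigue lives (cycles to failure) of spot-welded lap joints.
lives = np.array([2.1e6, 4.5e6, 7.8e6, 1.2e7, 1.9e7, 3.0e7, 4.8e7])

# Two-parameter Weibull fit (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)

# Reliability R(N) = P(life > N) is the Weibull survival function at N.
R = stats.weibull_min.sf(1e7, shape, loc=loc, scale=scale)
print(f"shape k = {shape:.2f}, scale = {scale:.3g}, R(1e7 cycles) = {R:.2f}")
```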

Estimation of Suitable Methodology for Determining Weibull Parameters for the Vortex Shedding Analysis of Synovial Fluid

  • Singh, Nishant Kumar;Sarkar, A.;Deo, Anandita;Gautam, Kirti;Rai, S.K.
    • Journal of Biomedical Engineering Research
    • /
    • Vol. 37, No. 1
    • /
    • pp.21-30
    • /
    • 2016
  • The Weibull distribution with two parameters, shape (k) and scale (s), is used to model fatigue failure due to periodic vortex shedding of the synovial fluid in knee joints. In order to determine the latter parameter, a suitable statistical model is required for the velocity distribution of synovial fluid flow. The wide applicability of the Weibull distribution in life testing and reliability analysis makes it suitable for describing the probability distribution of synovial fluid flow velocity. In this work, the three most widely used methods for estimating Weibull parameters are compared, i.e. the least square estimation method (LSEM), the maximum likelihood estimator (MLE) and the method of moments (MOM), in order to study fatigue failure of the bone joint due to periodic vortex shedding of synovial fluid. The performance of these methods is compared through the analysis of computer-generated synovial fluid flow velocity distributions in the physiological range. Significant values for the (k) and (s) parameters are obtained by comparing these methods. Criteria such as the root mean square error (RMSE), the coefficient of determination ($R^2$), the maximum error between the cumulative distribution functions (CDFs), i.e. the Kolmogorov-Smirnov (K-S) statistic, and the chi-square test are used to compare the suitability of these methods. The results show that the maximum likelihood method performs well in most of the cases studied and is hence recommended.
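
As a rough illustration of the three estimators compared above, the sketch below fits synthetic Weibull-distributed "velocity" samples by MLE (via SciPy), by least squares on the linearized CDF with Benard's median ranks, and by the method of moments. The sample size, true parameters, and plotting positions are assumptions for the demo, not the paper's settings:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq
from scipy.special import gamma as G

rng = np.random.default_rng(0)
v = stats.weibull_min.rvs(2.0, scale=1.5, size=500, random_state=rng)

# MLE (location fixed at zero).
k_mle, _, s_mle = stats.weibull_min.fit(v, floc=0)

# LSEM: ln(-ln(1 - F)) = k ln x - k ln s, with Benard's median ranks for F.
x = np.sort(v)
F = (np.arange(1, x.size + 1) - 0.3) / (x.size + 0.4)
k_ls, intercept = np.polyfit(np.log(x), np.log(-np.log(1 - F)), 1)
s_ls = np.exp(-intercept / k_ls)

# MOM: match the sample coefficient of variation,
# CV(k) = sqrt(G(1 + 2/k) - G(1 + 1/k)^2) / G(1 + 1/k).
cv = v.std(ddof=1) / v.mean()
k_mom = brentq(lambda k: np.sqrt(G(1 + 2/k) - G(1 + 1/k)**2) / G(1 + 1/k) - cv,
               0.1, 50)
s_mom = v.mean() / G(1 + 1/k_mom)

print(f"MLE  k={k_mle:.2f} s={s_mle:.2f}")
print(f"LSEM k={k_ls:.2f} s={s_ls:.2f}")
print(f"MOM  k={k_mom:.2f} s={s_mom:.2f}")
```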

ANALYSIS OF TWO COMMODITY MARKOVIAN INVENTORY SYSTEM WITH LEAD TIME

  • Anbazhagan, N.;Arivarignan, G.
    • Journal of applied mathematics & informatics
    • /
    • Vol. 8, No. 2
    • /
    • pp.519-530
    • /
    • 2001
  • A two-commodity continuous review inventory system with independent Poisson demand processes is considered in this paper. The maximum inventory level for the i-th commodity is fixed as $S_i$ (i = 1, 2). The net inventory level at time t for the i-th commodity is denoted by $I_i(t)$, i = 1, 2. If the total net inventory level $I(t) = I_1(t) + I_2(t)$ drops to a prefixed level $s\,\left[\leq \frac{S_1-2}{2} \text{ or } \frac{S_2-2}{2}\right]$, an order is placed for $(S_i - s)$ units of the i-th commodity (i = 1, 2). The probability distribution of the inventory level and the mean reorder and shortage rates in the steady state are computed. Numerical illustrations of the results are also provided.
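
For intuition only, here is a toy discrete-event simulation of the policy described above, with hypothetical parameters: Poisson demands for each commodity, a joint reorder level s on the total stock, an exponential lead time, and an order of $(S_i - s)$ units per commodity. The paper derives the steady-state distribution analytically; this sketch merely estimates the mean total inventory by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
S1, S2, s = 10, 8, 3            # hypothetical max levels and joint reorder level
lam1, lam2, mu = 1.0, 0.8, 0.5  # demand rates; mu = lead-time completion rate
I1, I2, t = S1, S2, 0.0
on_order = False
tw, lv = [], []                 # sojourn times and total-inventory levels

while t < 10_000:
    rates = np.array([lam1, lam2, mu if on_order else 0.0])
    dt = rng.exponential(1.0 / rates.sum())
    tw.append(dt); lv.append(I1 + I2)
    t += dt
    ev = rng.choice(3, p=rates / rates.sum())
    if ev == 0:
        I1 = max(I1 - 1, 0)      # demand for commodity 1 (lost if out of stock)
    elif ev == 1:
        I2 = max(I2 - 1, 0)      # demand for commodity 2
    else:                        # outstanding order arrives after the lead time
        I1, I2, on_order = I1 + (S1 - s), I2 + (S2 - s), False
    if not on_order and I1 + I2 <= s:
        on_order = True          # joint reorder triggered on the total stock

tw, lv = np.array(tw), np.array(lv)
print("steady-state mean total inventory ≈", (tw * lv).sum() / tw.sum())
```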

Statistical Analysis of K-League Data using Poisson Model

  • Kim, Yang-Jin
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 25, No. 5
    • /
    • pp.775-783
    • /
    • 2012
  • Several statistical models for bivariate Poisson data are suggested and used to analyze 2011 K-league data. Our interest has two purposes. The first is to estimate the potential attacking and defensive abilities of each team; in particular, a bivariate Poisson model with diagonal inflation is incorporated for the estimation of draws, and a joint model is applied to estimate the association between the Poisson distribution and the probability of a draw. The second is to investigate factors affecting the scoring times of goals, for which a regression technique for recurrent event data is applied. Some related future works are suggested.
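
One common construction of a bivariate Poisson model is through a shared "common shock" component, which induces positive correlation between the two scores. The sketch below uses that construction with hypothetical rates to estimate the implied draw probability by Monte Carlo; the paper's diagonal-inflation and covariate structure are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
lam_home, lam_away, lam_common = 1.3, 1.0, 0.2   # hypothetical team rates
n = 100_000

A = rng.poisson(lam_home, n)
B = rng.poisson(lam_away, n)
C = rng.poisson(lam_common, n)    # shared shock couples the two scores
home, away = A + C, B + C         # (home, away) is bivariate Poisson

print("P(draw)          ≈", np.mean(home == away))
print("corr(home, away) ≈", np.corrcoef(home, away)[0, 1])
```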

A Density Peak Clustering Algorithm Based on Information Bottleneck

  • Yongli Liu;Congcong Zhao;Hao Chao
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.778-790
    • /
    • 2023
  • Although density peak clustering can often easily yield excellent results, there is still room for improvement when dealing with complex, high-dimensional datasets. One of the main limitations of this algorithm is its reliance on geometric distance as the sole similarity measure. To address this limitation, we draw inspiration from information bottleneck theory and propose a novel density peak clustering algorithm that incorporates this theory as a similarity measure. Specifically, our algorithm utilizes the joint probability distribution between data objects and feature information, and employs the loss of mutual information as the measurement standard. This approach not only eliminates the potential for subjective error in selecting a similarity method, but also enhances performance on datasets with multiple centers and high dimensionality. To evaluate the effectiveness of our algorithm, we conducted experiments on ten carefully selected datasets and compared the results with three other algorithms. The experimental results demonstrate that our information bottleneck-based density peak clustering (IBDPC) algorithm consistently achieves high accuracy, highlighting its potential as a valuable tool for data clustering tasks.
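
For orientation, the sketch below computes the two classic density-peak quantities on synthetic data: the local density rho and the distance delta to the nearest higher-density point, using plain Euclidean distance. In the IBDPC algorithm described above, that Euclidean distance would be replaced by the information-bottleneck (mutual-information loss) measure:

```python
import numpy as np

def density_peaks(X, dc):
    """Return local density rho and distance-to-higher-density delta."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0   # Gaussian-kernel density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
rho, delta = density_peaks(X, dc=1.0)
# Cluster centers are points with both large rho and large delta.
print("candidate centers:", np.argsort(rho * delta)[-2:])
```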

Study on Predictable Program of Fire·Explosion Accident Using Poisson Distribution Function & Societal Risk Criteria in City Gas (Poisson분포를 이용한 도시가스 화재 폭발사고의 발생 예측프로그램 및 사회적 위험기준에 관한 연구)

  • Ko, Jae-Sun;Kim, Hyo;Lee, Su-Kyoung
    • Fire Science and Engineering
    • /
    • Vol. 20, No. 1
    • /
    • pp.6-14
    • /
    • 2006
  • Data on city gas accidents have been collected and analysed, both to predict fire and explosion accidents and to establish societal risk criteria. The accidents of the most recent 11 years were classified roughly into 3 groups, "release", "explosion" and "fire", and into 16 groups in detail. According to the Poisson probability distribution functions, the 'careless work-explosion-pipeline' and 'joint loosening & erosion-release-pipeline' items turned out to record the lowest and highest frequencies, respectively, among the accidents of the recent 11 years, and thus appropriate countermeasures must be carried out. In order to assess the societal risk tendency of fatal gas accidents and to set up clearer safety policies, the D. O. Hogon equation and the regression method have been used to determine the acceptable range in the F-N curve of cumulative casualties. Further work requires setting up a systematic database of fire and explosion accidents in order to obtain reliable analyses; standard codification will also be needed.
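
The occurrence model underlying such a prediction program is the Poisson probability mass function: given a historical mean rate λ for one accident category, the probability of k events in the next period is $e^{-\lambda}\lambda^k/k!$. A minimal sketch with a hypothetical rate (not a value from the study):

```python
from math import exp, factorial

lam = 1.8   # hypothetical mean accidents per year for one category
for k in range(5):
    # Poisson pmf: P(N = k) = exp(-lam) * lam**k / k!
    print(f"P(N = {k}) = {exp(-lam) * lam**k / factorial(k):.3f}")
print(f"P(N >= 1) = {1 - exp(-lam):.3f}")
```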

Return Period Estimation of Droughts Using Drought Variables from Standardized Precipitation Index (표준강수지수 시계열의 가뭄특성치를 이용한 가뭄 재현기간 산정)

  • Kwak, Jae Won;Lee, Sung Dae;Kim, Yon Soo;Kim, Hung Soo
    • Journal of Korea Water Resources Association
    • /
    • Vol. 46, No. 8
    • /
    • pp.795-805
    • /
    • 2013
  • Drought is one of the most severe natural disasters and can profoundly affect our society and ecosystems. It is also a very important variable for water resources planning and management. Therefore, drought is analyzed in this study to understand its distribution and trend. The Standardized Precipitation Index (SPI) is estimated using precipitation data obtained from 55 rain gauge stations in South Korea, and SPI-based drought variables such as drought duration and drought severity are defined. Drought occurrence and a joint probabilistic analysis of the SPI-based drought variables were performed with run theory and copula functions. The return period and spatial distribution of droughts in South Korea were then estimated. As a result, we show that Gongju and Chungju in Chungcheong-do and Wonju, Inje, Jeongseon and Taebaek in Gangwon-do are vulnerable to droughts.
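
The copula step described above combines the marginal distributions of drought duration and severity into a joint exceedance probability, from which an "AND" joint return period follows. Below is a sketch using a Gumbel copula; the marginal probabilities, dependence parameter, and mean interarrival time are placeholders, not values from the paper:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) with dependence parameter theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

u, v = 0.9, 0.85   # marginal non-exceedance probabilities F_D(d), F_S(s)
theta = 2.0        # hypothetical Gumbel dependence parameter
E = 1.6            # hypothetical mean drought interarrival time (years)

# P(D > d, S > s) via the survival copula: 1 - u - v + C(u, v).
p_and = 1 - u - v + gumbel_copula(u, v, theta)
print(f"joint 'AND' return period ≈ {E / p_and:.1f} years")
```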

A New Sampling Method of Marine Climatic Data for Infrared Signature Analysis (적외선 신호 해석을 위한 해양 기상 표본 추출법)

  • Kim, Yoonsik;Vaitekunas, David A.
    • Journal of the Society of Naval Architects of Korea
    • /
    • Vol. 51, No. 3
    • /
    • pp.193-202
    • /
    • 2014
  • This paper presents a new method of sampling climatic data for infrared signature analysis. Historical hourly data from a stationary marine buoy of the KMA (Korea Meteorological Administration) are used to select a small number of sample points (N = 100) that adequately cover the range of statistics (PDF, CDF) displayed by the original data set (S = 56,670). The method uses coarse bins to subdivide the variable space ($3^5$ = 243 bins) so that the sample points cover the original data range, and a single-point ranking system to select individual points so that uniform coverage (1/N = 0.01) is obtained for each variable. Principal component analysis is used to calculate the joint probability of the coupled climatic variables. The selected sample data show good agreement with the original data set in their statistical distributions, and they will be used for statistical analysis of the infrared signature and susceptibility of naval ships.
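
A rough sketch of the coarse-binning step described above: five variables are each cut into three levels ($3^5$ = 243 cells), and one record is drawn per occupied cell before thinning to N points. The uniform stand-in data and the random per-cell pick are simplifications; the paper's single-point ranking system and PCA-based joint probability are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.random((56_670, 5))                    # stand-in for buoy records

levels = np.minimum((data * 3).astype(int), 2)    # 3 coarse bins per variable
cell = (levels * 3 ** np.arange(5)).sum(axis=1)   # flatten to one cell id

N, picks = 100, []
for c in np.unique(cell):                         # one record per occupied cell
    picks.append(rng.choice(np.where(cell == c)[0]))
picks = rng.choice(picks, size=min(N, len(picks)), replace=False)
print(len(np.unique(cell)), "occupied cells ->", len(picks), "sample points")
```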

DISPARITY ESTIMATION/COMPENSATION OF MULTIPLE BASELINED STEREOGRAM USING MAXIMUM A POSTERIORI ALGORITHM

  • Sang-Hwa;Park, Jong-Il;Lee, Choong-Woong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999 KOBA Broadcasting Technology Workshop
    • /
    • pp.49-56
    • /
    • 1999
  • In this paper, a general formula for disparity estimation based on the Bayesian maximum a posteriori (MAP) algorithm is derived. The generalized formula is implemented with a plane configuration model and applied to multiple-baseline stereograms. The probabilistic plane configuration model consists of independence and similarity among the neighboring disparities in the configuration. The independence model reduces the computation and guarantees discontinuity at object boundary regions, while the similarity model preserves the continuity, or high correlation, of the disparity distribution. In addition, we propose a hierarchical scheme of disparity compensation for application to multiple-view stereo images. According to the experiments, the derived formula and the proposed estimation algorithm outperformed the others. The proposed probabilistic models are reasonable and approximate the pure joint probability distribution very well, while reducing the computation to $O(n(D))$ from the $O(n(D)^4)$ of the generalized formula. The hierarchical scheme of disparity compensation with multiple-view stereos improves the performance without any additional overhead at the decoder.
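
As a toy stand-in for the MAP formulation, the sketch below maximizes a posterior on one synthetic scanline: a squared-intensity-difference data term combined with a neighbor-similarity prior, optimized with a few iterated-conditional-modes (ICM) sweeps. The paper's plane configuration model and multiple-baseline setup are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
left = rng.random(200)
true_d = 4
right = np.roll(left, -true_d)    # right view = left shifted by the disparity

D, lam = 8, 0.5                   # disparity search range, prior weight
d = np.zeros(len(left), dtype=int)
for _ in range(3):                # ICM sweeps over the scanline
    for i in range(1, len(left) - 1):
        # data term (SSD) + smoothness prior toward neighboring disparities
        cost = [(left[i] - right[np.clip(i - k, 0, len(left) - 1)]) ** 2
                + lam * ((k - d[i - 1]) ** 2 + (k - d[i + 1]) ** 2)
                for k in range(D)]
        d[i] = int(np.argmin(cost))

print("mode of estimated disparity:", np.bincount(d).argmax())  # expect 4
```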

A probabilistic information retrieval model by document ranking using term dependencies (용어간 종속성을 이용한 문서 순위 매기기에 의한 확률적 정보 검색)

  • You, Hyun-Jo;Lee, Jung-Jin
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 32, No. 5
    • /
    • pp.763-782
    • /
    • 2019
  • This paper proposes a probabilistic document ranking model incorporating term dependencies. Document ranking is a fundamental information retrieval task: documents in a collection must be sorted according to their relevance to the user query (Qin et al., Information Retrieval Journal, 13, 346-374, 2010). A probabilistic model computes the conditional probability of the relevance of each document given the query. Most of the widely used models assume term independence because computing the joint probabilities of multiple terms is challenging, yet words in natural language texts are obviously highly correlated. In this paper, we assume a multinomial distribution model to calculate the relevance probability of a document by considering the dependency structure of words, and propose an information retrieval model that ranks documents by estimating this probability with the maximum entropy method. The results of ranking simulation experiments in various multinomial situations show better retrieval results than a model that assumes the independence of words. The results of document ranking experiments using the real-world LETOR OHSUMED dataset also show better retrieval results.
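
A toy illustration (not the paper's maximum-entropy estimator) of why term dependence matters: the chain-rule probability of the query "new york" in a small document differs sharply from the product of unigram probabilities that an independence model would use:

```python
# Compare an independence unigram score with a bigram chain-rule score
# for a two-term query on a tiny hypothetical document.
doc = "the new york times reports news from new york every day".split()

n = len(doc)
p_uni = lambda w: doc.count(w) / n

# Independence assumption: P(new, york) = P(new) * P(york).
p_indep = p_uni("new") * p_uni("york")

# Term dependence: P(new, york) = P(new) * P(york | new) from bigram counts.
bigrams = list(zip(doc, doc[1:]))
p_cond = bigrams.count(("new", "york")) / doc.count("new")
p_dep = p_uni("new") * p_cond

print(f"independence: {p_indep:.4f}   with dependence: {p_dep:.4f}")
```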