• Title/Summary/Keyword: identically distributed


CHARACTERIZATION OF STANDARD EXTREME VALUE DISTRIBUTIONS USING RECORDS

  • Skrivankova, Valeria;Juhas, Matej
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.24 no.3
    • /
    • pp.401-407
    • /
    • 2011
  • The paper deals with characterizations of the standard Gumbel distribution and the standard Fréchet distribution, and was motivated by [4], where the Weibull distribution is characterized. We present criteria using the independence of suitable functions of lower records in a sequence of independent, identically distributed random variables $\{X_n,\; n \geq 1\}$.
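
For readers unfamiliar with record-value notation used in these abstracts, the lower record times and values can be defined as follows (standard definitions, not quoted from the paper itself):

```latex
L(1) = 1, \qquad L(n+1) = \min\{\, j > L(n) : X_j < X_{L(n)} \,\}, \qquad n \ge 1,
```

with lower record values $X_{L(n)}$; the upper record times $U(n)$ and values $X_{U(n)}$ are defined analogously with the inequality reversed.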

ON CHARACTERIZATIONS OF THE CONTINUOUS DISTRIBUTIONS BY INDEPENDENCE PROPERTY OF THE QUOTIENT-TYPE UPPER RECORD VALUES

  • LEE, MIN-YOUNG;JIN, HYUN-WOO
    • Journal of applied mathematics & informatics
    • /
    • v.37 no.3_4
    • /
    • pp.245-249
    • /
    • 2019
  • In this paper we obtain characterizations of a family of continuous probability distributions by an independence property of upper record values. We also introduce examples of characterizations of distributions from these general classes of continuous distributions.

ON CHARACTERIZATIONS OF THE WEIBULL DISTRIBUTION BY THE INDEPENDENT PROPERTY OF RECORD VALUES

  • Lee, Min-Young;Lim, Eun-Hyuk
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.23 no.2
    • /
    • pp.245-250
    • /
    • 2010
  • We present characterizations of the Weibull distribution by the independence property of record values: $F(x)$ is a Weibull distribution if and only if $X_{U(m)}/X_{U(n)}$ and $X_{U(n)}$, or $X_{U(n)}/(X_{U(n)} \pm X_{U(m)})$ and $X_{U(n)}$, are independent for $1 \leq m < n$.

Catchment Responses in Time and Space to Parameter Uncertainty in Distributed Rainfall-Runoff Modeling (분포형 강우-유출 모형의 매개변수 불확실성에 대한 시.공간적 유역 응답)

  • Lee, Gi-Ha;Takara, Kaoru;Tachikawa, Yasuto;Sayama, Takahiro
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2009.05a
    • /
    • pp.2215-2219
    • /
    • 2009
  • For model calibration in rainfall-runoff modeling, streamflow data at a specific outlet is obviously required but is not sufficient to identify parameters of a model, since numerous parameter combinations can result in very similar model performance measures (i.e. objective functions) and indistinguishable simulated hydrographs. This phenomenon has been called 'equifinality' due to inherent parameter uncertainty involved in rainfall-runoff modeling. This study aims to investigate catchment responses in time and space to various uncertain parameter sets in distributed rainfall-runoff modeling. Seven plausible (or behavioral) parameter sets, which guarantee equally good model performance, were sampled using deterministic and stochastic optimization methods named SCE and SCEM, respectively. We then applied them to a computational tracer method linked with a distributed rainfall-runoff model in order to trace and visualize potential origins of streamflow at a catchment outlet. The results showed that all hydrograph simulations based on the plausible parameter sets performed equally well, while the internal catchment responses to them were completely different; different parameter values led to different spatial and temporal distributions of streamflow origins despite identical simulated hydrographs. The additional information provided by the computational tracer method may be utilized as a complementary constraint for filtering out non-physical parameter sets (or reducing parameter uncertainty) in distributed rainfall-runoff modeling.


Reweighted L1-Minimization via Support Detection (Support 검출을 통한 reweighted L1-최소화 알고리즘)

  • Lee, Hyuk;Kwon, Seok-Beop;Shim, Byong-Hyo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.134-140
    • /
    • 2011
  • Recent work in compressed sensing theory shows that an $M{\times}N$ sensing matrix whose entries are drawn independently and identically from certain probability distributions guarantees exact recovery of a sparse signal with high probability, even if $M{\ll}N$. In particular, it is well understood that the $L_1$-minimization algorithm is able to recover sparse signals from incomplete measurements. In this paper, we propose a novel sparse signal reconstruction method based on reweighted $L_1$-minimization via support detection.
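
As a rough illustration of the reweighted $L_1$ idea (a generic Candès-style reweighting loop, not the authors' support-detection variant), the sketch below runs a weighted ISTA inner solver inside an outer reweighting loop; the matrix sizes, step size, and the `lam`/`eps` values are all illustrative assumptions:

```python
import numpy as np

def weighted_ista(A, y, w, lam=0.01, iters=500):
    """ISTA for min 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i|
    via per-coordinate weighted soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))   # gradient step on the quadratic term
        thr = lam * step * w                 # per-coordinate threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return x

def reweighted_l1(A, y, outer=5, eps=1e-3):
    """Reweighting: w_i = 1/(|x_i| + eps), so small coefficients get
    penalized harder on the next pass."""
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = weighted_ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
M, N, K = 40, 128, 5                         # M << N, K-sparse signal
A = rng.standard_normal((M, N)) / np.sqrt(M) # i.i.d. Gaussian sensing matrix
x_true = np.zeros(N)
idx = rng.choice(N, K, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], K) * (1.0 + rng.random(K))  # magnitudes in [1, 2]
y = A @ x_true                               # noiseless measurements
x_hat = reweighted_l1(A, y)
rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative error:", rel)
```

The reweighting step is what distinguishes this from plain $L_1$-minimization: coefficients that survive a pass are penalized less, pushing the solution toward the true support.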

Knowledge-Based Clutter Suppression Algorithm Using Cell under Test Data Only (Cell under Test 데이터만을 이용한 사전정보 기반의 클러터 억제 알고리즘)

  • Jeon, Hyeonmu;Yang, Dong-Hyeuk;Chung, Yong-Seek;Chung, Won-zoo;Kim, Jong-mann;Yang, Hoon-Gee
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.28 no.10
    • /
    • pp.825-831
    • /
    • 2017
  • Radar clutter in a real environment is generally heterogeneous, and especially nonstationary if the radar geometry is a non-sidelooking monostatic structure or a bistatic structure. These clutter properties reduce the number of secondary data with the IID (independent, identically distributed) property, which ultimately degrades clutter suppression performance. In this paper, we propose a clutter suppression algorithm that estimates the clutter signal belonging to the cell under test by calculation using only prior information, rather than secondary data. By analyzing the angle-Doppler spectrum of the clutter signal, we show that estimating the clutter signal from prior information alone is possible, and we derive a clutter suppression algorithm through eigenvalue analysis. Finally, we show the performance of the proposed algorithm by simulation.
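
A minimal sketch of the eigen-subspace idea behind such prior-information methods (the clutter Dopplers, powers, and thresholds below are invented for illustration and are not the paper's algorithm): build a prior clutter covariance, take its dominant eigenvectors as the clutter subspace, and project the cell-under-test snapshot onto the orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                    # slow-time samples in the cell under test

def steer(f, n=N):
    """Unit-norm Doppler steering vector at normalized frequency f."""
    return np.exp(2j * np.pi * f * np.arange(n)) / np.sqrt(n)

# Hypothetical prior: clutter concentrated at a few known Doppler bins + unit noise.
clutter_freqs = [-0.1, 0.0, 0.1]
R = sum(100.0 * np.outer(steer(f), steer(f).conj()) for f in clutter_freqs)
R = R + np.eye(N)

# Eigen-analysis of the prior covariance: eigenvectors well above the noise
# floor span the clutter subspace; project onto its orthogonal complement.
vals, vecs = np.linalg.eigh(R)
clutter_sub = vecs[:, vals > 10.0]
P = np.eye(N) - clutter_sub @ clutter_sub.conj().T

# CUT data: a target at Doppler 0.3 buried in strong clutter, no secondary data.
target = steer(0.3) * np.sqrt(N)
clutter = sum(10.0 * rng.standard_normal() * steer(f) * np.sqrt(N)
              for f in clutter_freqs)
noise = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = target + clutter + noise

y = P @ x                                 # clutter-suppressed CUT snapshot
print(abs(steer(0.3).conj() @ x), abs(steer(0.3).conj() @ y))
```

Because the prior covariance minus the noise floor has rank equal to the number of clutter modes, the projector removes essentially all clutter energy while leaving a target at a well-separated Doppler almost untouched.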

Development and Application of the Heteroscedastic Logit Model (이분산 로짓모형의 추정과 적용)

  • 양인석;노정현;김강수
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.4
    • /
    • pp.57-66
    • /
    • 2003
  • Because the logit model easily calculates probabilities for choice alternatives and estimates parameters for explanatory variables, it is widely used as a traffic mode choice model. However, the model assumes that the error components of the mode choice utility function are independently and identically distributed. This paper studies the estimation of the heteroscedastic logit model, which relaxes this assumption. The purpose of this paper is to estimate a logit model that more accurately reflects the mode choice behavior of passengers by removing the homoscedasticity of the mode choice utility error components. To do this, we introduce a scale factor directly related to the error component distribution of the model. This scale factor is defined to account for heteroscedasticity in the travel-time difference between public transport and car, and is used to estimate the travel-time parameter. The estimation results show that heteroscedastic logit models can realistically reflect the mode choice behavior of passengers: even when the travel-time difference between public and private transport stays the same as total travel time increases, the model captures the resulting change in passengers' choice probability for public transport.
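
The mechanism can be sketched in a few lines: in a binary logit, scaling the error component by a factor that grows with the travel-time gap shrinks the effective utility difference. The exponential scale form and the `theta` value below are illustrative assumptions, not the paper's estimated specification.

```python
import numpy as np

def hetero_logit_prob(dV, dt, theta=0.05):
    """Binary logit choice probability with a heteroscedastic scale.

    sigma grows with the travel-time gap |dt| between transit and car,
    so the same systematic utility difference dV translates into a
    weaker choice signal when the gap is large (illustrative form).
    """
    sigma = np.exp(theta * np.abs(dt))   # scale factor tied to the time gap
    return 1.0 / (1.0 + np.exp(-dV / sigma))

# Same utility difference, increasingly long travel-time gaps: the choice
# probability drifts toward indifference (0.5) as the scale grows.
for dt in (5.0, 20.0, 60.0):
    print(dt, hetero_logit_prob(dV=1.0, dt=dt))
```

A homoscedastic logit would return the same probability for all three cases; the scale factor is what lets the model distinguish them.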

Extreme Value Analysis of Statistically Independent Stochastic Variables

  • Choi, Yongho;Yeon, Seong Mo;Kim, Hyunjoe;Lee, Dongyeon
    • Journal of Ocean Engineering and Technology
    • /
    • v.33 no.3
    • /
    • pp.222-228
    • /
    • 2019
  • An extreme value analysis (EVA) is essential to obtain a design value for highly nonlinear variables such as long-term environmental data for wind and waves, and slamming or sloshing impact pressures. According to the extreme value theory (EVT), the extreme value distribution is derived by multiplying the initial cumulative distribution functions for independent and identically distributed (IID) random variables. However, in the position mooring of DNVGL, the sampled global maxima of the mooring line tension are assumed to be IID stochastic variables without checking their independence. The ITTC Recommended Procedures and Guidelines for Sloshing Model Tests never deal with the independence of the sampling data. Hence, a design value estimated without the IID check would be under- or over-estimated, because observations far from a Weibull or generalized Pareto distribution (GPD) are treated as outliers. In this study, the sampling data are first checked for the IID property in an EVA. When the variables are not IID, an automatic resampling scheme is recommended, using the block maxima approach for a generalized extreme value (GEV) distribution and the peaks-over-threshold (POT) approach for a GPD. A partial autocorrelation function (PACF) is used to check the IID property. In this study, only one 5 h sample of sloshing test results was used for a feasibility study of the resampling approach. Based on this study, resampling to IID variables may reduce the number of outliers, and a statistically more appropriate design value can be achieved with independent samples.
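
The check-then-resample workflow can be sketched as follows. The sketch uses a simple lag-1 autocorrelation in place of a full PACF, and a synthetic AR(1) series as a stand-in for sampled peaks; the block length and quantile are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation; |r1| above ~1.96/sqrt(n) flags dependence."""
    xc = np.asarray(x, float) - np.mean(x)
    return (xc[:-1] @ xc[1:]) / (xc @ xc)

# Stand-in for sampled peaks: an AR(1) series, which is clearly not IID.
n = 5000
raw = np.empty(n)
raw[0] = rng.standard_normal()
for i in range(1, n):
    raw[i] = 0.8 * raw[i - 1] + rng.standard_normal()
print("lag-1 r of raw series:", lag1_autocorr(raw))

# Resample by block maxima: maxima of well-separated blocks are close to IID
# and, per the EVT, approximately GEV distributed.
block = 50
maxima = raw[: n // block * block].reshape(-1, block).max(axis=1)
print("lag-1 r of block maxima:", lag1_autocorr(maxima))

# Fit a GEV to the (near-IID) block maxima and read off a design quantile.
shape, loc, scale = stats.genextreme.fit(maxima)
design = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print("0.99 design value:", design)
```

The same pattern applies with a POT/GPD pipeline (`stats.genpareto`), with declustering of exceedances playing the role of the block-maxima step.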

A Study on the Sequential Regenerative Simulation (순차적인 재생적 시뮬레이션에 관한 연구)

  • JongSuk R.;HaeDuck J.
    • Journal of the Korea Society for Simulation
    • /
    • v.13 no.2
    • /
    • pp.23-34
    • /
    • 2004
  • Regenerative simulation (RS) is a method of stochastic steady-state simulation in which output data are collected and analysed within regenerative cycles (RCs). Since data collected during consecutive RCs are independent and identically distributed, there is no problem with the initial transient period in simulated processes, which is a perennial issue of concern in all other types of steady-state simulation. In this paper, we address the issue of experimental analysis of the quality of sequential regenerative simulation in the sense of the coverage of the final confidence intervals of mean values. The ultimate purpose of this study is to determine the best version of RS to be implemented in Akaroa2 [1], a fully automated controller of distributed stochastic simulation in LAN environments.
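
The core RS estimator can be sketched for an M/M/1 queue: waiting times regenerate whenever a customer finds the system empty, so per-cycle sums are IID and a ratio estimator with a CLT-based confidence interval needs no warm-up deletion. The rates and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu = 0.5, 1.0                # arrival and service rates (utilization 0.5)

# Lindley recursion: W_{n+1} = max(W_n + S_n - A_{n+1}, 0). A customer with
# W = 0 starts a new regenerative cycle, so per-cycle sums are IID.
cycle_sums, cycle_lens = [], []
W, s, l = 0.0, 0.0, 0
for _ in range(200_000):
    s += W
    l += 1
    W = max(W + rng.exponential(1 / mu) - rng.exponential(1 / lam), 0.0)
    if W == 0.0:                  # regeneration point: system found empty
        cycle_sums.append(s)
        cycle_lens.append(l)
        s, l = 0.0, 0
Y = np.array(cycle_sums)
T = np.array(cycle_lens, float)

# Ratio estimator of the steady-state mean wait, with a 95% CI from the
# CLT for regenerative processes.
r = Y.sum() / T.sum()
var = np.mean((Y - r * T) ** 2) / np.mean(T) ** 2
half = 1.96 * np.sqrt(var / len(Y))
theory = lam / (mu * (mu - lam))  # M/M/1 mean waiting time in queue
print(f"mean wait ~ {r:.3f} +/- {half:.3f} (theory {theory:.3f})")
```

A sequential version, as studied in the paper, would keep extending the run and recomputing the half-width until the relative precision target is met.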


FedGCD: Federated Learning Algorithm with GNN based Community Detection for Heterogeneous Data

  • Wooseok Shin;Jitae Shin
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.1-11
    • /
    • 2023
  • Federated learning (FL) is a groundbreaking machine learning paradigm that allows multiple participants to collaboratively train models in a cloud environment, all while maintaining the privacy of their raw data. This approach is invaluable in applications involving sensitive or geographically distributed data. However, one of the challenges in FL is dealing with heterogeneous, non-independent and identically distributed (non-IID) data across participants, which can result in suboptimal model performance compared to traditional machine learning methods. To tackle this, we introduce FedGCD, a novel FL algorithm that employs Graph Neural Network (GNN)-based community detection to enhance model convergence in federated settings. In our experiments, FedGCD consistently outperformed existing FL algorithms in various scenarios: for instance, in a non-IID environment, it achieved an accuracy of 0.9113, a precision of 0.8798, and an F1-score of 0.8972. In a semi-IID setting, it demonstrated the highest accuracy at 0.9315 and an impressive F1-score of 0.9312. We also introduce a new metric, nonIIDness, to quantitatively measure the degree of data heterogeneity. Our results indicate that FedGCD not only addresses the challenges of data heterogeneity and non-IIDness but also sets new benchmarks for FL algorithms. The community detection approach adopted in FedGCD has broader implications, suggesting that it could be adapted for other distributed machine learning scenarios, thereby improving model performance and convergence across a range of applications.
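
For context, the server-side aggregation step that any such FL algorithm builds on can be sketched as a FedAvg-style weighted average of client parameters (a generic building block, not FedGCD itself; in FedGCD the community-detection step would decide which clients are aggregated together):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighted by local dataset size. client_weights is a list of
    models, each a list of numpy arrays with matching shapes."""
    sizes = np.asarray(client_sizes, float)
    coef = sizes / sizes.sum()               # per-client mixing coefficients
    n_params = len(client_weights[0])
    return [sum(c * w[k] for c, w in zip(coef, client_weights))
            for k in range(n_params)]

# Three clients with non-IID data volumes; each "model" is a list of arrays.
w1 = [np.array([1.0, 2.0]), np.array([0.5])]
w2 = [np.array([3.0, 0.0]), np.array([1.5])]
w3 = [np.array([0.0, 1.0]), np.array([2.5])]
global_w = fedavg([w1, w2, w3], client_sizes=[100, 300, 600])
print(global_w)   # prints [array([1. , 0.8]), array([2.])]
```

Under non-IID data, plain size-weighted averaging can pull the global model toward over-represented clients, which is the failure mode that grouping clients by community structure aims to mitigate.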