• Title/Summary/Keyword: IID (independent and identically distributed)


Extreme Value Analysis of Statistically Independent Stochastic Variables

  • Choi, Yongho;Yeon, Seong Mo;Kim, Hyunjoe;Lee, Dongyeon
    • Journal of Ocean Engineering and Technology / v.33 no.3 / pp.222-228 / 2019
  • An extreme value analysis (EVA) is essential to obtain a design value for highly nonlinear variables such as long-term environmental data for wind and waves, and slamming or sloshing impact pressures. According to extreme value theory (EVT), the extreme value distribution is derived by multiplying the initial cumulative distribution functions for independent and identically distributed (IID) random variables. However, in the DNVGL position mooring standard, the sampled global maxima of the mooring line tension are assumed to be IID stochastic variables without checking their independence, and the ITTC Recommended Procedures and Guidelines for Sloshing Model Tests do not address the independence of the sampled data. Hence, a design value estimated without an IID check can be under- or over-estimated because observations far from a Weibull or generalized Pareto distribution (GPD) are treated as outliers. In this study, the sampled data are first checked for the IID property in an EVA. When the variables are not IID, an automatic resampling scheme is recommended, using the block maxima approach for a generalized extreme value (GEV) distribution and the peaks-over-threshold (POT) approach for a GPD. A partial autocorrelation function (PACF) is used to check for IID variables. Only a single 5-h sample of sloshing test results was used for a feasibility study of the IID resampling approach. Based on this study, resampling to IID variables may reduce the number of outliers, and a statistically more appropriate design value can be achieved with independent samples.
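A rough sketch of the pipeline described above, using a lag-1 autocorrelation as a stand-in for the paper's full PACF check and a hypothetical block size, assuming only NumPy and SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for sampled global maxima (e.g., sloshing peak pressures);
# an AR(1) series makes the samples deliberately non-independent.
x = np.empty(3000)
x[0] = rng.normal()
for t in range(1, x.size):
    x[t] = 0.6 * x[t - 1] + rng.normal()

def lag1_autocorr(s):
    """First-order sample autocorrelation, a simple independence proxy."""
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

# If the samples look dependent, resample by block maxima so the
# block-wise maxima are approximately IID, then fit a GEV distribution.
if abs(lag1_autocorr(x)) > 2 / np.sqrt(x.size):
    block = 50  # hypothetical block length
    maxima = x[: x.size // block * block].reshape(-1, block).max(axis=1)
else:
    maxima = x
shape, loc, scale = stats.genextreme.fit(maxima)
# A design value is then a high quantile of the fitted GEV, e.g. 99%:
design_value = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
```

The same idea applies on the POT side, with `scipy.stats.genpareto` fitted to declustered threshold exceedances instead of block maxima.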

FedGCD: Federated Learning Algorithm with GNN based Community Detection for Heterogeneous Data

  • Wooseok Shin;Jitae Shin
    • Journal of Internet Computing and Services / v.24 no.6 / pp.1-11 / 2023
  • Federated learning (FL) is a groundbreaking machine learning paradigm that allows multiple participants to collaboratively train models in a cloud environment while maintaining the privacy of their raw data. This approach is invaluable in applications involving sensitive or geographically distributed data. However, one of the challenges in FL is dealing with heterogeneous and non-independent and identically distributed (non-IID) data across participants, which can result in suboptimal model performance compared to traditional machine learning methods. To tackle this, we introduce FedGCD, a novel FL algorithm that employs Graph Neural Network (GNN)-based community detection to enhance model convergence in federated settings. In our experiments, FedGCD consistently outperformed existing FL algorithms in various scenarios: for instance, in a non-IID environment, it achieved an accuracy of 0.9113, a precision of 0.8798, and an F1-score of 0.8972. In a semi-IID setting, it demonstrated the highest accuracy at 0.9315 and an F1-score of 0.9312. We also introduce a new metric, nonIIDness, to quantitatively measure the degree of data heterogeneity. Our results indicate that FedGCD not only addresses the challenges of data heterogeneity and non-IIDness but also sets new benchmarks for FL algorithms. The community detection approach adopted in FedGCD has broader implications, suggesting that it could be adapted for other distributed machine learning scenarios, thereby improving model performance and convergence across a range of applications.
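The paper's exact nonIIDness metric is not given in this listing; a common way to construct non-IID client partitions and quantify their heterogeneity (used here purely as an illustrative proxy, with Dirichlet label skew and a total-variation score) is:

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, num_clients = 10, 5
labels = rng.integers(0, num_classes, size=5000)

# Dirichlet label partitioning: smaller alpha -> more heterogeneous clients.
alpha = 0.3
client_idx = [[] for _ in range(num_clients)]
for c in range(num_classes):
    idx = np.flatnonzero(labels == c)
    rng.shuffle(idx)
    props = rng.dirichlet(alpha * np.ones(num_clients))
    splits = np.split(idx, (np.cumsum(props)[:-1] * idx.size).astype(int))
    for k, s in enumerate(splits):
        client_idx[k].extend(s.tolist())

def label_dist(idx):
    counts = np.bincount(labels[idx], minlength=num_classes)
    return counts / counts.sum()

global_dist = label_dist(np.arange(labels.size))
# One plausible heterogeneity score: mean total-variation distance
# between each client's label distribution and the global one.
noniidness = np.mean([
    0.5 * np.abs(label_dist(np.array(ci)) - global_dist).sum()
    for ci in client_idx
])
```

A score near 0 indicates nearly IID clients; values approaching 1 indicate clients holding disjoint label sets.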

Closed Form Expression for Signal Transmission via AF Relaying over Nakagami-m Fading Channels

  • Mughal, Muhammad Ozair;Kim, Sun-Woo
    • Proceedings of the IEEK Conference / 2008.06a / pp.213-214 / 2008
  • In this paper, we analyze the performance of a cooperative wireless communication network over independent and identically distributed (IID) Nakagami-m fading channels. A simple transmission scheme is considered in which the relay operates in amplify-and-forward (AF) mode. A closed-form expression for the symbol error rate (SER) is obtained using the moment generating function (MGF) of the total signal-to-noise ratio (SNR) of the transmitted signal with binary phase shift keying (BPSK).
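The paper's closed form is not reproduced in this listing; for context, the standard MGF-based SER expression for BPSK, with the Nakagami-m MGF that such derivations start from, is:

```latex
% MGF approach to the average SER for BPSK over fading:
P_s = \frac{1}{\pi}\int_{0}^{\pi/2}
      M_{\gamma}\!\left(-\frac{1}{\sin^{2}\theta}\right) d\theta,
\qquad
M_{\gamma}(-s) = \left(1 + \frac{s\,\bar{\gamma}}{m}\right)^{-m}
\quad \text{(Nakagami-}m\text{, average SNR } \bar{\gamma}\text{)}.
```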


Knowledge-Based Clutter Suppression Algorithm Using Cell under Test Data Only (Cell under Test 데이터만을 이용한 사전정보 기반의 클러터 억제 알고리즘)

  • Jeon, Hyeonmu;Yang, Dong-Hyeuk;Chung, Yong-Seek;Chung, Won-zoo;Kim, Jong-mann;Yang, Hoon-Gee
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.28 no.10 / pp.825-831 / 2017
  • Radar clutter in a real environment is generally heterogeneous, and especially nonstationary when the radar geometry is a non-sidelooking monostatic or bistatic structure. These clutter properties leave an insufficient number of secondary data with the IID (independent and identically distributed) property, which ultimately deteriorates clutter suppression performance. In this paper, we propose a clutter suppression algorithm that estimates the clutter signal belonging to the cell under test using only prior information, rather than the secondary data. Through analysis of the angle-Doppler spectrum of the clutter signal, we show that estimation of the clutter signal using prior information alone is possible, and we derive a clutter suppression algorithm through eigenvalue analysis. Finally, we demonstrate the performance of the proposed algorithm by simulation.
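The paper's algorithm is not spelled out in this abstract; the eigenvalue-analysis idea it names can be sketched generically, with a hypothetical prior-information clutter covariance and projection onto the clutter-free subspace:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16  # hypothetical space-time steering dimension
# Prior-information clutter covariance: a few dominant clutter
# directions (rank 3 here) plus a unit noise floor.
V = rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3))
R = V @ V.conj().T * 10 + np.eye(n)

# Eigen-analysis: large eigenvalues span the clutter subspace.
w, U = np.linalg.eigh(R)
clutter = U[:, w > 5]                         # dominant eigenvectors
P = np.eye(n) - clutter @ clutter.conj().T    # projector onto complement

# A clutter-dominated cell-under-test snapshot, suppressed by projection:
snapshot = V[:, 0] * 3 + rng.normal(size=n)
residual = P @ snapshot
```

After projection, the component lying in the clutter subspace is removed, so the residual norm drops to roughly the noise level.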

Two independent mechanisms mediate discrimination of IID textures varying in mean luminance and contrast (평균밝기와 대비성의 차원으로 구성된 결 공간에서 결 분리에 작용하는 두 가지 기제)

  • 남종호
    • Korean Journal of Cognitive Science / v.10 no.3 / pp.39-49 / 1999
  • The space of IID (independently, identically distributed) textures was built with axes of mean luminance and contrast, and we studied what kind of mechanisms are required to mediate texture segregation in this space. The conjecture was tested that one of these mechanisms is sensitive to differences between the means of the textures to be discriminated, whereas the other is sensitive to differences between their variances. The probability of discrimination was measured for various pairs of textures in the IID space. The data were well fit by a model in which discrimination depends on two mechanisms whose responses are combined by probability summation. The conjecture that the two mechanisms are respectively tuned to texture mean and variance was rejected. Discrimination within the space is mediated by two independent channels; however, the two channels are not exactly tuned to texture mean and variance. One mechanism was primarily sensitive to texture mean, whereas the other was sensitive to both texture mean and variance.
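The probability-summation model referred to in the abstract combines two independent mechanisms in the standard way:

```latex
% Probability summation over two independent mechanisms:
P_{\text{discrim}} = 1 - \bigl(1 - P_{1}\bigr)\bigl(1 - P_{2}\bigr),
```

where $P_1$ and $P_2$ are the discrimination probabilities of the two channels for a given texture pair: the pair is discriminated if either mechanism responds.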


Development and Application of the Heteroscedastic Logit Model (이분산 로짓모형의 추정과 적용)

  • 양인석;노정현;김강수
    • Journal of Korean Society of Transportation / v.21 no.4 / pp.57-66 / 2003
  • Because the Logit model easily calculates choice probabilities and estimates parameters for explanatory variables, it is widely used as a traffic mode choice model. However, the model assumes that the error components of the mode choice utility functions are independently and identically distributed. This paper studies the estimation of the Heteroscedastic Logit Model, which relaxes this assumption. The purpose is to estimate a Logit model that more accurately reflects the mode choice behavior of passengers by resolving the homoscedasticity of the mode choice utility error component. To do this, we introduce a scale factor directly related to the error component distribution of the model. This scale factor is defined to account for heteroscedasticity in the difference in travel time between public transport and car, and is used to estimate the travel time parameter. The estimation results show that the Heteroscedastic Logit Model can realistically reflect the mode choice behavior of passengers, capturing how the mode choice probability for public transport changes as total travel time increases even when the travel-time difference between public and private transport remains the same.
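The paper's exact scale specification is not given in this listing; generically, a heteroscedastic binary logit replaces the fixed scale of the standard model with one that depends on the travel-time difference:

```latex
% Binary logit with a heteroscedastic scale (generic form):
P(\text{transit}) =
  \frac{\exp\!\left(V_{t}/\mu\right)}
       {\exp\!\left(V_{t}/\mu\right) + \exp\!\left(V_{c}/\mu\right)},
\qquad \mu = g(\Delta T),
```

where $V_t$ and $V_c$ are the systematic utilities of transit and car, and the scale $\mu$ varies with the transit-car travel-time difference $\Delta T$ instead of being constant as in the homoscedastic model.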

Integration of BIM and Simulation for optimizing productivity and construction Safety

  • Evangelos Palinginis;Ioannis Brilakis
    • International conference on construction engineering and project management / 2013.01a / pp.21-27 / 2013
  • Construction safety is a predominant hindrance to in-situ workflow and is considered an unresolved issue. Current methods for safety optimization and prediction are, with limited exceptions, paper-based, and thus error-prone as well as time- and cost-ineffective. In an attempt to exploit the potential of BIM for safety, the objective of the proposed methodology is to automatically predict hazardous on-site conditions related to the routes that dozers follow during the different phases of the project. For that purpose, safety routes used by construction equipment from an origin to multiple destinations are computed using video cameras, and their cycle times are calculated. The cycle times and factors, including weather and light conditions, are treated as independent and identically distributed (IID) random variables and simulated using the Arena software. The simulation clock is set to 100 to observe the minor changes occurring due to external parameters. The validation of this technology explores the capabilities of BIM combined with simulation for enhancing productivity and improving safety conditions a priori. Preliminary results from 262 measurements indicate that the proposed methodology can predict the location of exclusion zones with 87% accuracy, and that the cycle time is estimated with an accuracy of 89%.
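The Arena model itself is not available from this abstract; a minimal stand-in for the IID cycle-time simulation it describes (with entirely hypothetical distributions for the base haul time and the weather and light delays) might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical IID cycle-time model (minutes): base haul time plus
# weather and light-condition delays, all drawn independently.
n_cycles = 262  # matches the number of measurements in the abstract
base = rng.lognormal(mean=2.0, sigma=0.25, size=n_cycles)
weather_delay = rng.exponential(scale=0.5, size=n_cycles)
light_delay = rng.exponential(scale=0.3, size=n_cycles)
cycle_times = base + weather_delay + light_delay

# Summary statistics of the simulated cycle times, with a 95%
# normal-approximation half-width for the mean:
mean_ct = cycle_times.mean()
half_width = 1.96 * cycle_times.std(ddof=1) / np.sqrt(n_cycles)
```

The IID assumption is what licenses pooling all 262 cycles into one confidence interval; correlated cycles would require block or batch-means methods instead.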


Multicasting Multiple Description Coding Using p-cycle Network Coding

  • Farzamnia, Ali;Syed-Yusof, Sharifah K.;Fisal, Norsheila
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.12 / pp.3118-3134 / 2013
  • This paper presents a multimedia transmission scheme combining multiple description coding (MDC) and network coding (NC). Our goal is to take advantage of the property of MDC to provide quantized and compressed independent and identically distributed (IID) descriptions, and of the benefit of network coding, which uses network resources efficiently to recover data lost in the network. Recently, p-cycle NC has been introduced to recover and protect any lost or distorted descriptions exactly at the receiver, without the need for retransmission. So far, MDC has not been explored with this type of NC. Compressed and coded descriptions are transmitted through the network, where p-cycle NC is applied. A p-cycle-based algorithm is proposed for single and multiple lost descriptions. Results show that at a fixed bit rate, the PSNR (peak signal-to-noise ratio) of our reconstructed image, as well as its subjective evaluation, is improved significantly compared to previous work, which combined an averaging method with MDC to conceal lost descriptions.
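The core recovery idea behind p-cycle protection can be sketched with XOR parity circulating on the protection cycle; this toy single-loss example (three hypothetical descriptions of equal length) is a simplification, not the paper's multi-loss algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three equal-length MDC descriptions (bytes), e.g., from an image codec.
d = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(3)]

# Protection data carried on the p-cycle: XOR of all descriptions.
parity = d[0] ^ d[1] ^ d[2]

# Suppose description 1 is lost in the network...
lost = 1
received = [d[k] for k in range(3) if k != lost]

# ...the receiver rebuilds it from the surviving descriptions plus the
# parity, with no retransmission: parity ^ d0 ^ d2 == d1.
recovered = parity.copy()
for r in received:
    recovered ^= r
```

Because XOR is its own inverse, any single missing description can be recovered this way; handling multiple simultaneous losses requires additional cycles or codes, as the paper's multi-loss algorithm addresses.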

Design of Acceptance Control Charts According to the Process Independence, Data Weighting Scheme, Subgrouping, and Use of Charts (프로세스의 독립성, 데이터 가중치 체계, 부분군 형성과 관리도 용도에 따른 합격판정 관리도의 설계)

  • Choi, Sung-Woon
    • Journal of the Korea Safety Management & Science / v.12 no.3 / pp.257-262 / 2010
  • The study investigates various Acceptance Control Charts (ACCs) based on factors that include process independence, data weighting scheme, subgrouping, and use of control charts. ACCs for good-condition processes with USL - LSL > $6{\sigma}$ are designed by considering the user's perspective, the producer's perspective, or both. The ACCs developed in this research are efficiently applied by using a simple control limit unified with the APL (Acceptable Process Level), RPL (Rejectable Process Level), Type I error $\alpha$, and Type II error $\beta$. The sampling interval of a subgroup is examined for i.i.d. (independent and identically distributed) or auto-correlated processes. Three types of weight schemes according to the reliability of the data, Shewhart, Moving Average (MA), and Exponentially Weighted Moving Average (EWMA), are considered when designing the ACCs. Two types of control charts by purpose of improvement are also presented. Overall, $\alpha$, $\beta$, the APL for nonconforming proportion, and the RPL for claim proportion can be designed by practitioners who emphasize productivity and claim defense cost.
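The unified control limit the abstract mentions is not spelled out in this listing; the textbook acceptance-control-chart design for a mean chart (upper side shown) ties APL, RPL, $\alpha$, and $\beta$ together as:

```latex
% Acceptance control chart design (upper limit, normal-theory form):
UCL = APL + z_{\alpha}\,\frac{\sigma}{\sqrt{n}}
    = RPL - z_{\beta}\,\frac{\sigma}{\sqrt{n}},
\qquad
n = \left(\frac{(z_{\alpha} + z_{\beta})\,\sigma}{RPL - APL}\right)^{2}.
```

Equating the two expressions for the limit yields the subgroup size $n$ that satisfies both error rates simultaneously; the lower limit is obtained symmetrically.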