• Title/Summary/Keyword: Finite sample distribution

A PROPOSAL ON ALTERNATIVE SAMPLING-BASED MODELING METHOD OF SPHERICAL PARTICLES IN STOCHASTIC MEDIA FOR MONTE CARLO SIMULATION

  • Kim, Song Hyun;Lee, Jae Yong;Kim, Do Hyun;Kim, Jong Kyung;Noh, Jae Man
    • Nuclear Engineering and Technology / v.47 no.5 / pp.546-558 / 2015
  • The chord length sampling (CLS) method is a random sampling technique used in Monte Carlo simulations to model spherical particles in stochastic media. It has attracted attention for its high computational efficiency and user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles with an explicit method, an alternative chord length sampling method is proposed, together with a correction for the boundary effect when modeling finite media. Sample probability distributions and relative errors obtained with the proposed method were estimated and compared with those calculated by the explicit method. The results show that the proposed method reconstructs the particle probability distribution with considerably high accuracy, and the local packing fraction results confirm that it resolves the boundary effect problem. The proposed method is therefore expected to improve modeling accuracy in stochastic media. (A minimal sketch of the basic CLS step follows below.)
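
The core step the abstract refers to can be illustrated with a minimal sketch. This is the classic, uncorrected CLS model, not the authors' alternative method: it assumes the matrix chord to the next sphere surface is exponential with mean 4r(1-f)/(3f) for packing fraction f, and the radius and packing fraction below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_matrix_chord(radius, packing_fraction):
    """Distance through the matrix to the next sphere surface.

    Classic CLS assumption: exponentially distributed with mean
    4r(1 - f) / (3f), where f is the sphere packing fraction.
    """
    mean_chord = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
    return rng.exponential(mean_chord)

def sample_sphere_chord(radius):
    """Chord length of an isotropic track crossing a sphere.

    The chord is 2r * sqrt(1 - u) with u = (b/r)^2 uniform on [0, 1],
    b being the impact parameter.
    """
    return 2.0 * radius * np.sqrt(1.0 - rng.uniform())

# March one particle track through the stochastic medium (toy values).
r, f = 0.05, 0.10
segments = [(sample_matrix_chord(r, f), sample_sphere_chord(r)) for _ in range(5)]
print(segments)
```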

Finite Element Analysis of an Electrochemical Etching Process for Micro-Mold Machining and Comparison with Experimental Results (미세금형 가공을 위한 전기화학식각공정의 유한요소 해석 및 실험 결과 비교)

  • Ryu, Heon-Yeol;Im, Hyeon-Seung;Jo, Si-Hyeong;Hwang, Byeong-Jun;Lee, Seong-Ho;Park, Jin-Gu
    • Proceedings of the Materials Research Society of Korea Conference / 2012.05a / pp.81.2-81.2 / 2012
  • To fabricate metal molds for injection molding, hot embossing, and imprinting, processes such as mechanical machining, electro-discharge machining (EDM), electrochemical machining (ECM), laser processing, and wet etching (the FeCl3 process) have been widely used, but it is hard to obtain precise structures with them. Electrochemical etching has also been employed to fabricate microstructures in metal molds. Through-mask electrochemical micromachining (TMEMM) is an electrochemical etching process capable of producing finely precise structures. Many parameters, such as current density, process time, electrolyte temperature, and the distance between electrodes, must be controlled, so the outcome is difficult to predict and the process suffers from low reliability and reproducibility. To improve this, we investigated the process numerically and experimentally. To relate the processing parameters to the results, the commercial finite element method (FEM) software ANSYS was used to analyze the electric field, under the assumption that the anodic dissolution is governed by the current density, one of the major parameters. In the experiments, a stainless steel (SS304) substrate patterned with square and circular arrays of various sizes served as the anode and a copper (Cu) plate as the cathode; a mixture of H2SO4, H3PO4, and deionized water was used as the electrolyte. After etching, the experimental and simulated results were compared: the simulation yielded the current distribution in the electrolyte and line profiles of the current density over the patterns, while the etching profiles and surface morphologies were characterized by 3D profilometry (μ-surf, Nanofocus, Germany) and FE-SEM (S-4800, Hitachi, Japan). The comparison confirmed that the simulated current distribution and line profiles match the measured surface morphology and etching profiles, respectively: the current density concentrates at the pattern edges, and the etched depth is proportional to the current density. (An illustrative field-solve sketch follows below.)
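
As a rough illustration of the field analysis described above: the paper uses ANSYS FEM, while this is a finite-difference stand-in with made-up grid size, bias, and conductivity. It solves Laplace's equation for the electrolyte potential with an insulating mask and one exposed anode opening, then reads off the surface current density, which concentrates at the pattern edges as the abstract concludes.

```python
import numpy as np

# Illustrative stand-in for the FEM field solve: Laplace's equation for the
# potential phi in the electrolyte; etch rate assumed proportional to the
# local current density (Faraday's law).
nx, ny = 60, 40
phi = np.zeros((ny, nx))                      # cathode at row 0, anode at row -1
mask_anode = np.zeros(nx, dtype=bool)
mask_anode[20:40] = True                      # exposed opening in the mask (hypothetical)

V_anode, V_cathode, sigma = 1.0, 0.0, 5.0     # assumed bias (V) and conductivity (S/m)

for _ in range(5000):                         # Jacobi iteration
    phi[0, :] = V_cathode                     # cathode plate
    phi[-1, mask_anode] = V_anode             # exposed anode through the mask
    phi[-1, ~mask_anode] = phi[-2, ~mask_anode]      # insulating resist: dphi/dn = 0
    phi[:, 0], phi[:, -1] = phi[:, 1], phi[:, -2]    # insulating side walls
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])

# Normal current density along the anode surface (arbitrary units):
j_surf = sigma * (phi[-1, :] - phi[-2, :])
print(j_surf[18:42].round(3))                 # peaks near the pattern edges
```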

Development of A Recovery Algorithm for Sparse Signals based on Probabilistic Decoding (확률적 희소 신호 복원 알고리즘 개발)

  • Seong, Jin-Taek
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.5 / pp.409-416 / 2017
  • In this paper, we consider a framework of compressed sensing over finite fields, in which each measurement is the inner product of a row of a sensing matrix and a sparse signal vector. We propose a recovery algorithm for sparse signals based on probabilistic decoding to find a solution of the compressed sensing problem. Compressed sensing theory has so far dealt with real- or complex-valued systems, where discretizing the original real or complex signals incurs information loss; the motivation of this work is to solve such inverse problems directly for discrete signals. The proposed framework uses the parity-check matrix of a low-density parity-check (LDPC) code, developed in coding theory, as the sensing matrix. We develop a stochastic algorithm to reconstruct sparse signals over a finite field; unlike standard LDPC decoding, the iterative algorithm is designed around the probability distribution of the sparse signal. The proposed recovery algorithm achieves better reconstruction performance as the size of the finite field increases. Since compressed sensing performs well even with sensing matrices as sparse as a parity-check matrix, the approach is expected to find active use in applications involving discrete signals. (A toy measurement-model sketch follows below.)
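
A toy sketch of the measurement model may help fix ideas. It builds an LDPC-like sparse sensing matrix over GF(q) and takes each measurement as an inner product mod q, exactly as the abstract describes; the decoder below is a brute-force stand-in for the paper's iterative probabilistic decoding (which is what actually scales), and all sizes are toy assumptions.

```python
import numpy as np
from itertools import combinations, product

rng = np.random.default_rng(1)

def ldpc_like_sensing_matrix(m, n, row_weight, q):
    """Sparse sensing matrix over GF(q) with LDPC-like low-density rows."""
    A = np.zeros((m, n), dtype=np.int64)
    for i in range(m):
        cols = rng.choice(n, size=row_weight, replace=False)
        A[i, cols] = rng.integers(1, q, size=row_weight)   # nonzero GF(q) entries
    return A

q, n, m, k = 5, 15, 8, 2             # field size, signal length, measurements, sparsity
x = np.zeros(n, dtype=np.int64)
x[rng.choice(n, size=k, replace=False)] = rng.integers(1, q, size=k)

A = ldpc_like_sensing_matrix(m, n, row_weight=3, q=q)
y = A @ x % q                        # each measurement: inner product mod q

def brute_force_recover(A, y, k, q):
    """Exhaustive decoder for toy sizes only (a stand-in for the paper's
    iterative probabilistic decoding)."""
    n = A.shape[1]
    for supp in combinations(range(n), k):
        for vals in product(range(1, q), repeat=k):
            z = np.zeros(n, dtype=np.int64)
            z[list(supp)] = vals
            if np.array_equal(A @ z % q, y):
                return z
    return None

x_hat = brute_force_recover(A, y, k, q)
print(np.array_equal(x_hat, x))      # True whenever the sparse solution is unique
```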

Optimization of Data Recovery using Non-Linear Equalizer in Cellular Mobile Channel (셀룰라 이동통신 채널에서 비선형 등화기를 이용한 최적의 데이터 복원)

  • Choi, Sang-Ho;Ho, Kwang-Chun;Kim, Yung-Kwon
    • Journal of IKEEE / v.5 no.1 s.8 / pp.1-7 / 2001
  • In this paper, we investigate a CDMA (Code Division Multiple Access) cellular system with a non-linear equalizer on the reverse-link channel. Because the channel characteristics in wireless communication are unknown, the distribution of the observables cannot be specified by a finite set of parameters; instead, we partition the m-dimensional sample space into a finite number of disjoint regions using quantiles and a vector quantizer built from training samples. The proposed algorithm is a piecewise approximation to the regression function based on quantiles and conditional partition moments, which are estimated by the Robbins-Monro stochastic approximation (RMSA) algorithm. The resulting equalizers and detectors are robust in the sense that they are insensitive to variations in the noise distribution. The main idea is that robust equalizers and robust partition detectors perform better on equiprobably partitioned subspaces of the observations than a conventional equalizer does on the unpartitioned observation space, under any condition. We apply this idea to the CDMA system and analyze the BER performance. (A sketch of the RMSA quantile update follows below.)
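
The RMSA building block mentioned above is easy to sketch. Below is a textbook Robbins-Monro recursion for a quantile, the kind of distribution-free estimate the quantile-based partitioning relies on; the gain sequence and noise model are illustrative assumptions, not the paper's equalizer.

```python
import numpy as np

rng = np.random.default_rng(2)

def rm_quantile(samples, p, q0=0.0, gain=10.0):
    """Robbins-Monro stochastic approximation of the p-th quantile.

    Update: q <- q + a_n * (p - 1{X_n <= q}), with step a_n = gain / n.
    Since E[1{X <= q}] = F(q), the recursion drives F(q) toward p
    without assuming any parametric model of the noise.
    """
    q = q0
    for n, xn in enumerate(samples, start=1):
        q += (gain / n) * (p - float(xn <= q))
    return q

# Estimate the median of heavy-tailed (non-Gaussian) noise -- the kind of
# distributional robustness the partition-based detectors aim for.
noise = rng.standard_t(df=2, size=20000)
print(rm_quantile(noise, p=0.5))   # close to 0 despite the heavy tails
```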

Estimation of smooth monotone frontier function under stochastic frontier model (확률프런티어 모형하에서 단조증가하는 매끄러운 프런티어 함수 추정)

  • Yoon, Danbi;Noh, Hohsuk
    • The Korean Journal of Applied Statistics / v.30 no.5 / pp.665-679 / 2017
  • When measuring productive efficiency, it is often necessary to know the production frontier function, which gives the maximum possible output of production units as a function of their inputs. Canonical parametric forms of the frontier function were initially considered within the stochastic frontier model framework, and several nonparametric methods have been developed over the last decade. Recent efforts impose shape constraints such as monotonicity and concavity on the nonparametric estimate; however, most existing methods in that direction suffer from unnecessary non-smooth points in the frontier function. In this paper, we propose methods to estimate a smooth, monotone frontier function for stochastic frontier models and investigate the effect of the monotonicity constraint on the estimation of the frontier function and of the finite-dimensional parameters of the model. Simulation studies suggest that imposing the constraint improves estimation of the frontier function, especially when the sample size is small or moderate, whereas no apparent gain was observed for the parameters of the error distribution regardless of sample size. (A simple two-step stand-in is sketched below.)
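
As a stand-in for the proposed estimator (which this sketch is not), a simple two-step recipe conveys the idea of a smooth monotone frontier: isotonize, smooth, then shift upward to envelope the data. The bandwidth, noise parameters, and COLS-style shift are all assumptions, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)

# Toy stochastic frontier data: y = m(x) + v - u, with m monotone increasing,
# v two-sided Gaussian noise and u >= 0 a one-sided inefficiency term.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sqrt(x) + rng.normal(0, 0.05, n) - rng.exponential(0.15, n)

# Step 1: monotone (isotonic) fit of the conditional mean.
fit = IsotonicRegression(increasing=True).fit(x, y).predict(x)

# Step 2: kernel-smooth the piecewise-constant isotonic fit; isotonic
# regression alone has exactly the non-smooth kinks the paper objects to.
h = 0.05                                          # assumed bandwidth
grid = np.linspace(0, 1, 101)
w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
smooth = (w * fit).sum(axis=1) / w.sum(axis=1)
smooth = np.maximum.accumulate(smooth)            # re-impose monotonicity exactly

# Step 3: crude COLS-style upward shift so the curve envelopes the data,
# turning the mean estimate into a frontier estimate.
frontier = smooth + (y - np.interp(x, grid, smooth)).max()
print(frontier[::25].round(3))
```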

Simulation of Honeycomb-Structured SiC Heating Elements (허니컴 구조 SiC 발열체 성능 평가 시뮬레이션)

  • Lee, Jong-Hyuk;Cho, Youngjae;Kim, Chanyoung;Kwon, Yongwoo;Kong, Young-Min
    • Korean Journal of Materials Research / v.25 no.9 / pp.450-454 / 2015
  • A simulation method has been established to estimate the microstructure-dependent material properties of a honeycomb-structured SiC heating element and their influence on its performance. The electrical and thermal conductivities of a porous SiC sample were calculated by solving a current continuity equation, and the results were used as input parameters for a finite element analysis package to predict the temperature distribution when the heating element is subjected to a DC bias. Based on the simulation results, a direction for material development toward better heating efficiency was identified. In addition, a modified metal electrode scheme that decelerates corrosion kinetics was proposed, greatly improving the durability of the water heating system. (A 1D Joule-heating sketch follows below.)
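
A one-dimensional sketch may clarify the coupled workflow: porosity-corrected effective conductivities (a simple power-law assumption here, not the paper's microstructural continuity-equation solve), uniform Joule heating under a DC bias, and a steady-state heat solve. All material values are placeholders.

```python
import numpy as np

# Porosity-corrected effective properties (assumed power-law mixing rule).
porosity = 0.3
sigma0, k0 = 1.0e3, 100.0                 # assumed dense-SiC values (S/m, W/m.K)
sigma = sigma0 * (1 - porosity) ** 1.5    # effective electrical conductivity
k = k0 * (1 - porosity) ** 1.5            # effective thermal conductivity

L, n = 0.1, 101                           # heater length (m), grid points
V = 10.0                                  # DC bias (assumed)
J = sigma * V / L                         # uniform current density in 1D
q = J ** 2 / sigma                        # volumetric Joule heat (W/m^3)

# Steady heat equation k * T'' + q = 0 with both ends clamped at ambient,
# discretized as a small tridiagonal linear system.
dx = L / (n - 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = -q * dx ** 2 / k * np.ones(n)
T_amb = 300.0
A[0, :], A[-1, :] = 0.0, 0.0              # Dirichlet rows at both ends
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = T_amb
T = np.linalg.solve(A, b)
print(T.max())                            # mid-rod peak temperature (K)
```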

Statistical Properties of Business Survey Index (기업경기실사지수의 통계적 성질 고찰)

  • Kim, Kyu-Seong
    • The Korean Journal of Applied Statistics / v.23 no.2 / pp.263-274 / 2010
  • The business survey index (BSI) is an economic forecasting index based on a company's past performance and on entrepreneurs' plans and decisions for the future. Although the index is widely used in economic analysis, little research on its statistical properties has been published. In this paper, we investigate those properties: we define the population BSI for a finite population and estimate it unbiasedly, derive the variance of the estimated BSI and an unbiased estimator of that variance, and propose a confidence interval for the estimated BSI. We argue that the confidence interval is more reasonable than the relative standard error. (A textbook-style sketch follows below.)
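
A textbook-style sketch of the estimator and interval. The paper derives its own unbiased variance estimator; this uses the standard simple-random-sampling form with a finite population correction, and the survey responses are simulated.

```python
import numpy as np

def bsi_estimate(responses, N):
    """Estimated BSI and its standard error under simple random sampling.

    responses: array of +1 ("improve"), 0 ("no change"), -1 ("worsen").
    N: finite population size. BSI = 100 * mean(d) + 100, on a 0-200 scale.
    Variance uses the usual SRSWOR estimator with the fpc (1 - n/N) --
    a textbook sketch in the same spirit as the paper's derivation.
    """
    d = np.asarray(responses, dtype=float)
    n = d.size
    s2 = d.var(ddof=1)                       # sample variance of the +1/0/-1 scores
    var_bsi = 100.0 ** 2 * (1 - n / N) * s2 / n
    return 100.0 * d.mean() + 100.0, np.sqrt(var_bsi)

rng = np.random.default_rng(4)
sample = rng.choice([1, 0, -1], size=300, p=[0.35, 0.40, 0.25])  # toy survey
bsi, se = bsi_estimate(sample, N=5000)
print(f"BSI = {bsi:.1f}, 95% CI = ({bsi - 1.96 * se:.1f}, {bsi + 1.96 * se:.1f})")
```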

Performance evaluation of smart prefabricated concrete elements

  • Zonta, Daniele;Pozzi, Matteo;Bursi, Oreste S.
    • Smart Structures and Systems / v.3 no.4 / pp.475-494 / 2007
  • This paper deals with the development of an innovative distributed construction system based on smart prefabricated concrete elements for real-time condition assessment of civil infrastructure. So far, two reduced-scale prototypes have been produced, each consisting of a 0.2 × 0.3 × 5.6 m RC beam specifically designed for permanent instrumentation with 8 long-gauge fiber optic sensors (FOS) at the lower edge. The sensing system is based on fiber Bragg gratings (FBG) and can measure finite displacements, both static and dynamic, at a sampling frequency of 625 Hz per channel. The performance of the system was validated in the laboratory. The scope of the experiment was to correlate changes in the dynamic response of the beams with different damage scenarios, using a direct modal strain approach. Each specimen was dynamically characterized in the undamaged state and in various damage conditions, simulating different cracking levels and recurrent deterioration scenarios, including cover spalling and corrosion of the reinforcement. The location and extent of damage are evaluated by calculating damage indices that account for changes in frequencies and in strain mode shapes. The outcomes of the experiment demonstrate that the damage distribution detected by the system is fully compatible with the damage extent appraised by inspection. (A generic damage-index sketch follows below.)
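
A hedged sketch of a damage index in the spirit described above, combining per-mode frequency drops with per-sensor strain-mode-shape changes; the formula is a generic choice for illustration, not the paper's index.

```python
import numpy as np

def damage_indices(freqs_u, freqs_d, smodes_u, smodes_d):
    """Per-sensor damage index from frequency and strain-mode-shape changes.

    Normalize each strain mode shape, weight the per-sensor change of each
    mode by that mode's relative frequency drop, and sum over modes.
    smodes_*: (n_modes, n_sensors) strain mode shapes, undamaged/damaged.
    """
    freqs_u, freqs_d = np.asarray(freqs_u), np.asarray(freqs_d)
    phi_u = smodes_u / np.linalg.norm(smodes_u, axis=1, keepdims=True)
    phi_d = smodes_d / np.linalg.norm(smodes_d, axis=1, keepdims=True)
    weights = (freqs_u - freqs_d) / freqs_u          # relative frequency drop
    return np.abs(weights[:, None] * (np.abs(phi_d) - np.abs(phi_u))).sum(axis=0)

# Toy check: 8 long-gauge FOS positions along a beam, 2 modes; damage near
# sensor 3 inflates the local strain amplitude of both modes.
x = np.linspace(0.05, 0.95, 8)
u = np.vstack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
d = u.copy()
d[:, 3] *= 1.3
print(damage_indices([12.0, 46.0], [11.4, 44.5], u, d).argmax())  # -> 3
```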

Auto Regulated Data Provisioning Scheme with Adaptive Buffer Resilience Control on Federated Clouds

  • Kim, Byungsang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5271-5289 / 2016
  • On large-scale data analysis platforms deployed on cloud infrastructures over the Internet, unstable data transfer times and dynamic processing rates call for a more sophisticated data distribution scheme, one that maximizes parallel efficiency by balancing the load among the participating computing elements and eliminating their idle time. In particular, under real-time constraints and with a limited data buffer (in-memory storage), a more controllable mechanism is needed to prevent both overflow and underflow of the finite buffer. In this paper, we propose an auto-regulated data provisioning model based on receiver-driven data pull. The model provides a synchronized data replenishment mechanism that implicitly avoids buffer overflow and explicitly regulates buffer underflow by adequately adjusting the buffer resilience. To estimate the optimal buffer resilience, we exploit an adaptive control scheme that minimizes both the buffer space and the idle time of the processing elements, based on directly measured sample-path analysis. Simulation results show that the proposed scheme approximates the numerical results within an allowable margin, and that it is efficient for dynamic environments in which no stochastic characterization of the data transfer time or the processing rate can be postulated, or in which both fluctuate. (A toy pull-loop simulation is sketched below.)
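
A toy discrete-time simulation sketches the pull-based replenishment idea: requests never exceed the free buffer space (ruling out overflow by construction), while the resilience target grows on underflow and decays when it proves unnecessary. All parameters and the delay model are assumptions, not the paper's.

```python
import random

random.seed(5)

CAPACITY = 64          # finite in-memory buffer (items)
resilience = 8         # target backlog kept ahead of the consumer
buffer_level = 0
in_flight = 0          # requested but not yet delivered
idle_steps = 0
deliveries = []        # (arrival_step, items): random delay models the network

for step in range(2000):
    # Deliveries arrive after a variable transfer delay.
    arrived = sum(n for t, n in deliveries if t == step)
    deliveries = [(t, n) for t, n in deliveries if t > step]
    buffer_level += arrived
    in_flight -= arrived

    # Consumer drains the buffer at a fluctuating processing rate.
    demand = random.choice([0, 1, 1, 2])
    if buffer_level >= demand:
        buffer_level -= demand
    else:
        idle_steps += 1                                  # underflow: processor starves
        resilience = min(CAPACITY // 2, resilience + 1)  # grow the headroom

    # Pull just enough to restore the resilience target; never request more
    # than the free space, so overflow is impossible by construction.
    want = resilience - buffer_level - in_flight
    room = CAPACITY - buffer_level - in_flight
    req = max(0, min(want, room))
    if req:
        deliveries.append((step + random.randint(1, 6), req))
        in_flight += req

    # Slowly shrink the target when the buffer stays comfortably full,
    # trading memory for a small, controlled underflow risk.
    if step % 100 == 99 and buffer_level > resilience:
        resilience = max(2, resilience - 1)

print(f"idle steps: {idle_steps}, final resilience target: {resilience}")
```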

Minimum Message Length and Classical Methods for Model Selection in Univariate Polynomial Regression

  • Viswanathan, Murlikrishna;Yang, Young-Kyu;WhangBo, Taeg-Keun
    • ETRI Journal / v.27 no.6 / pp.747-758 / 2005
  • The problem of selecting among competing models is a fundamental issue in statistical data analysis. A good fit to the data can be misleading, since it may result from properties of the model that have nothing to do with it being a close approximation to the source distribution of interest (overfitting, for example). In this study we focus on the choice among models from a family of polynomial regressors. Three decades of research have spawned a number of plausible model selection techniques, namely Akaike's Finite Prediction Error (FPE) and Information Criterion (AIC), Schwarz's criterion (SCH), Generalized Cross Validation (GCV), Wallace's Minimum Message Length (MML), Minimum Description Length (MDL), and Vapnik's Structural Risk Minimization (SRM). What all these principles share is an attempt to strike an appropriate balance between the complexity of a model and its ability to explain the data. This paper presents an empirical study of the above principles for model selection where the candidate models are univariate polynomials, including a detailed evaluation on six target functions with varying sample sizes and added Gaussian noise. The results appear to provide strong evidence in favor of the MML- and SRM-based methods over the other standard approaches (FPE, AIC, SCH, and GCV). (The classical criteria are sketched below.)
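
The four classical criteria in the comparison have closed forms that are easy to sketch for polynomial fits (MML, MDL, and SRM need code-length and capacity machinery beyond a snippet); the target function and noise level below are illustrative, not the paper's six targets.

```python
import numpy as np

rng = np.random.default_rng(6)

def selection_scores(x, y, max_degree):
    """FPE, AIC (Gaussian form), Schwarz/BIC, and GCV for polynomial fits."""
    n = len(x)
    rows = []
    for d in range(max_degree + 1):
        k = d + 1                                   # number of coefficients
        coef = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        s2 = rss / n
        fpe = s2 * (n + k) / (n - k)                # Akaike's Finite Prediction Error
        aic = n * np.log(s2) + 2 * k                # AIC, Gaussian log-likelihood form
        sch = n * np.log(s2) + k * np.log(n)        # Schwarz criterion (BIC)
        gcv = s2 / (1 - k / n) ** 2                 # Generalized Cross Validation
        rows.append((d, fpe, aic, sch, gcv))
    return rows

# Target: a cubic with Gaussian noise; all four criteria should favor d near 3.
x = np.linspace(-1, 1, 60)
y = 1 - 2 * x + 0.5 * x ** 3 + rng.normal(0, 0.1, x.size)
scores = selection_scores(x, y, max_degree=8)
for name, col in zip(("FPE", "AIC", "SCH", "GCV"), range(1, 5)):
    best = min(scores, key=lambda r: r[col])[0]
    print(f"{name}: degree {best}")
```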
