• Title/Abstract/Keyword: Finite sample distribution


A PROPOSAL ON ALTERNATIVE SAMPLING-BASED MODELING METHOD OF SPHERICAL PARTICLES IN STOCHASTIC MEDIA FOR MONTE CARLO SIMULATION

  • KIM, SONG HYUN;LEE, JAE YONG;KIM, DO HYUN;KIM, JONG KYUNG;NOH, JAE MAN
    • Nuclear Engineering and Technology
    • /
• Vol. 47, No. 5
    • /
    • pp.546-558
    • /
    • 2015
  • The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention for its high calculation efficiency and user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, a correction method for the boundary effect is proposed for modeling in finite media. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. The local packing fraction results also show that the proposed method successfully solves the boundary effect problem. The proposed method is expected to contribute to increasing the modeling accuracy in stochastic media.
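
As a rough illustration of the chord-length sampling idea summarized above (not the authors' corrected algorithm): the exponential matrix-chord model with mean $4r(1-p)/(3p)$ and the sphere-chord distribution $f(l) = l/(2r^2)$ are the standard textbook assumptions behind such samplers.

```python
import math
import random

def sample_matrix_chord(radius, packing_fraction, rng=random):
    """Distance travelled through the matrix before entering a sphere.

    Assumes the standard exponential chord model with mean
    4r(1 - p) / (3p) for sphere radius r and packing fraction p.
    """
    mean_chord = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
    return rng.expovariate(1.0 / mean_chord)

def sample_sphere_chord(radius, rng=random):
    """Chord length through a sphere for an isotropic random ray.

    The chord PDF is f(l) = l / (2 r^2) on [0, 2r]; inverting the
    CDF l^2 / (4 r^2) gives l = 2 r sqrt(u) for uniform u.
    """
    return 2.0 * radius * math.sqrt(rng.random())
```

Sampled sphere chords have mean $4r/3$, which is a quick sanity check for a sampler of this kind.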

Comparison of Finite Element Analysis and Experimental Results of an Electrochemical Etching Process for Micro-Mold Fabrication

  • 류헌열;임현승;조시형;황병준;이성호;박진구
• Proceedings of the Materials Research Society of Korea
    • /
• 2012 Spring Conference of the Materials Research Society of Korea
    • /
    • pp.81.2-81.2
    • /
    • 2012
  • To fabricate metal molds for injection molding, hot embossing, and imprinting processes, mechanical machining, electro-discharge machining (EDM), electrochemical machining (ECM), laser processing, and wet etching (the $FeCl_3$ process) have been widely used. However, it is hard to obtain precise structures with these processes. Electrochemical etching has also been employed to fabricate micro-structures in metal molds. Through-mask electrochemical micro-machining (TMEMM) is an electrochemical etching process that can produce finely precise structures. In this process, many parameters, such as current density, process time, electrolyte temperature, and the distance between electrodes, must be controlled; it is therefore difficult to predict the result, and the process suffers from low reliability and reproducibility. To improve this, we investigated the process numerically and experimentally. To find the relation between the processing parameters and the results, we performed finite element simulation; the commercial finite element method (FEM) software ANSYS was used to analyze the electric field. In this study, the anodic dissolution process was predicted with the finite element method as a function of current density, one of the major parameters. In the experiments, we used a stainless steel (SS304) substrate with square and circular array patterns of various sizes as the anode and a copper (Cu) plate as the cathode. A mixture of $H_2SO_4$, $H_3PO_4$, and DIW was used as the electrolyte. After the electrochemical etching process, we compared the experimental and simulation results. From the simulation we obtained the current distribution in the electrolyte and line profiles of the current density over the patterns, while the etching profiles and surface morphologies of the samples were characterized by 3D profilometry (${\mu}$-surf, Nanofocus, Germany) and FE-SEM (S-4800, Hitachi, Japan) measurements. Comparison of these data confirmed that the current distribution and line profiles from the simulation are similar to the measured surface morphology and etching profiles, respectively. We conclude that the current density is concentrated at the edges of the patterns and that the depth of the etched area is proportional to the current density.
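
The field analysis above was done in ANSYS; as a toy stand-in, the electric potential between electrodes can be sketched with Jacobi finite-difference relaxation of the Laplace equation (grid size, boundary values, and the `fixed` electrode map below are illustrative assumptions, not the paper's setup).

```python
def solve_laplace(grid, fixed, iters=2000):
    """Jacobi relaxation for the electric potential on a rectangular
    grid. Interior nodes are averaged over their 4 neighbours; nodes
    in `fixed` (e.g. electrode openings in a mask) and all boundary
    nodes keep their prescribed potentials.
    """
    rows, cols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                if (i, j) in fixed:
                    continue
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid
```

The (negative) potential gradient then gives the current density; on a masked anode the gradient steepens near pattern edges, which is the edge concentration the abstract reports.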


Development of a Recovery Algorithm for Sparse Signals Based on Probabilistic Decoding

  • 성진택
• Journal of the Korea Institute of Information, Electronics, and Communication Technology
    • /
• Vol. 10, No. 5
    • /
    • pp.409-416
    • /
    • 2017
  • This paper examines a compressed sensing framework over finite fields. Each measurement sample is computed as the inner product of a row of the sensing matrix with the sparse signal vector, and the compressed sensing solution is found with the probabilistic sparse-signal recovery algorithm proposed here. Compressed sensing has so far been studied mainly in the real-valued or complex-valued setting, but processing such source signals entails information loss through discretization, which has motivated efforts to recover sparse discrete signals directly. The framework proposed in this study uses the parity-check matrix of an LDPC (Low-Density Parity-Check) code from coding theory as the sensing matrix, and recovers sparse signals over a finite field with the proposed probabilistic recovery algorithm. Unlike conventional LDPC decoding, we propose an iterative algorithm that exploits the probability distribution of the sparse signal. With the developed recovery algorithm, recovery performance improves as the size of the finite field grows. Since compressed sensing performs well even when the sensing matrix is a low-density matrix such as an LDPC parity-check matrix, the method is expected to find active use in applications involving discrete signals.
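
A minimal sketch of the measurement model described above: y = Ax over GF(q) with a sparse, LDPC-like sensing matrix. The exhaustive search below is only a stand-in for the paper's probabilistic (message-passing-style) decoder, which is not reproduced here.

```python
import itertools

def measure(A, x, q):
    """One measurement per row: the inner product of a sensing-matrix
    row with the signal vector, reduced mod q (i.e. over GF(q) for
    prime q)."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) % q for row in A)

def brute_force_recover(A, y, q, sparsity):
    """Exhaustive stand-in for a sparse-signal decoder: return some
    signal with at most `sparsity` nonzeros that is consistent with y.
    (Uniqueness is not guaranteed for an arbitrary A.)"""
    n = len(A[0])
    for k in range(sparsity + 1):
        for support in itertools.combinations(range(n), k):
            for values in itertools.product(range(1, q), repeat=k):
                x = [0] * n
                for idx, v in zip(support, values):
                    x[idx] = v
                if measure(A, x, q) == y:
                    return x
    return None
```

For a 2-sparse signal of length 8 over GF(5), this search space is only a few hundred candidates; the point of LDPC-based probabilistic decoding is to avoid exactly this exponential blow-up at realistic sizes.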

Optimization of Data Recovery Using a Non-Linear Equalizer in a Cellular Mobile Channel

  • 최상호;호광춘;김영권
• Journal of the Institute of Korean Electrical and Electronics Engineers
    • /
• Vol. 5, No. 1
    • /
    • pp.1-7
    • /
    • 2001
  • In this paper, a CDMA cellular system with a non-linear equalizer is studied for the reverse-link channel. In wireless communications, the probability distribution of the observables generally cannot be specified by a finite set of parameters because of channel uncertainty. Instead, the m-dimensional sample space is partitioned into a finite number of disjoint regions using quantiles and a vector quantizer based on training samples. The proposed algorithm is based on quantiles estimated by the RMSA algorithm and on a piecewise approximation of the regression function from conditional partition moments. The equalizer and detector presented here are quite robust in the sense that they are insensitive to variations in the noise distribution. The main idea is that a robust equalizer and a robust partition detector perform better on equiprobably partitioned subspaces of the observation space than a conventional equalizer does on the unpartitioned observation space, under any wireless channel environment. This concept is also applied to a CDMA system, and the BER performance is analyzed.


Estimation of a Smooth Monotone Frontier Function under a Stochastic Frontier Model

  • 윤단비;노호석
• The Korean Journal of Applied Statistics
    • /
• Vol. 30, No. 5
    • /
    • pp.665-679
    • /
    • 2017
  • Evaluating productivity often requires information about the production frontier curve, which gives the maximum output attainable for a given input based on observed production data. When estimating this frontier function under a stochastic frontier model, early work typically assumed a specific parametric form for the frontier. More recently, many nonparametric estimation methods have been developed that constrain the frontier to satisfy basic properties such as monotonicity and concavity. However, because these methods estimate the frontier as a piecewise linear or step function, the resulting estimators either lose efficiency or have hard-to-interpret discontinuities. To address this problem, this paper proposes a method for estimating a smooth, monotonically increasing frontier function under the stochastic frontier model, and illustrates through simulation the efficiency gains of the proposed estimator over existing methods.

Simulation of Honeycomb-Structured SiC Heating Elements

  • 이종혁;조영재;김찬영;권용우;공영민
• Korean Journal of Materials Research
    • /
• Vol. 25, No. 9
    • /
    • pp.450-454
    • /
    • 2015
  • A simulation method to estimate microstructure dependent material properties and their influence on performance for a honeycomb structured SiC heating element has been established. Electrical and thermal conductivities of a porous SiC sample were calculated by solving a current continuity equation. Then, the results were used as input parameters for a finite element analysis package to predict temperature distribution when the heating element was subjected to a DC bias. Based on the simulation results, a direction of material development for better heating efficiency was found. In addition, a modified metal electrode scheme to decelerate corrosion kinetics was proposed, by which the durability of the water heating system was greatly improved.

Statistical Properties of the Business Survey Index

  • 김규성
• The Korean Journal of Applied Statistics
    • /
• Vol. 23, No. 2
    • /
    • pp.263-274
    • /
    • 2010
  • The Business Survey Index (BSI) is a representative economic outlook index constructed from firms' past performance and entrepreneurs' plans and judgments. Although the index is widely used in practice, few of its statistical properties have been established so far. This paper examines the statistical properties of the BSI. We define the population BSI for a finite population, derive its estimator under simple random sampling, and obtain the expectation, variance, unbiased variance estimator, confidence interval, and relative standard error of the BSI. We also note that the confidence interval is a more reasonable evaluation measure for the BSI than the relative standard error.
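
The quantities described above can be illustrated with a small sketch. The BSI convention used here (100·(p_up − p_down) + 100) and the variance formula with finite population correction are textbook simple-random-sampling assumptions, not necessarily the paper's exact derivation.

```python
import math

def bsi_estimate(n_up, n_down, n, N=None, z=1.96):
    """Business Survey Index point estimate and normal-approximation
    confidence interval under simple random sampling.

    BSI = 100 * (p_up - p_down) + 100, so it ranges from 0 to 200.
    n_up / n_down: respondents reporting improvement / deterioration,
    n: sample size, N: optional population size for the FPC.
    """
    p_up, p_down = n_up / n, n_down / n
    d = p_up - p_down
    bsi = 100.0 * d + 100.0
    # Estimated variance of the difference of proportions; the n - 1
    # denominator mirrors the usual unbiased SRS variance estimator.
    var_d = (p_up + p_down - d * d) / (n - 1)
    if N is not None:
        var_d *= 1.0 - n / N  # finite population correction
    se = 100.0 * math.sqrt(var_d)
    return bsi, (bsi - z * se, bsi + z * se)
```

With 60 "up" and 40 "down" answers out of 200, the point estimate is 110 (mild optimism), and supplying N tightens the interval through the finite population correction.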

Performance evaluation of smart prefabricated concrete elements

  • Zonta, Daniele;Pozzi, Matteo;Bursi, Oreste S.
    • Smart Structures and Systems
    • /
• Vol. 3, No. 4
    • /
    • pp.475-494
    • /
    • 2007
  • This paper deals with the development of an innovative distributed construction system based on smart prefabricated concrete elements for the real-time condition assessment of civil infrastructure. So far, two reduced-scale prototypes have been produced, each consisting of a $0.2{\times}0.3{\times}5.6$ m RC beam specifically designed for permanent instrumentation with 8 long-gauge Fiber Optic Sensors (FOS) at the lower edge. The sensing system is Fiber Bragg Grating (FBG)-based and can measure both static and dynamic finite displacements at a sampling frequency of 625 Hz per channel. The performance of the system was validated in the laboratory. The scope of the experiment was to correlate changes in the dynamic response of the beams with different damage scenarios, using a direct modal strain approach. Each specimen was dynamically characterized in the undamaged state and in various damage conditions, simulating different cracking levels and recurrent deterioration scenarios, including cover spalling and corrosion of the reinforcement. The location and extent of damage are evaluated by calculating damage indices that account for changes in frequencies and strain mode shapes. The outcomes of the experiment demonstrate that the damage distribution detected by the system is fully compatible with the damage extent appraised by inspection.

Auto Regulated Data Provisioning Scheme with Adaptive Buffer Resilience Control on Federated Clouds

  • Kim, Byungsang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
• Vol. 10, No. 11
    • /
    • pp.5271-5289
    • /
    • 2016
  • On large-scale data analysis platforms deployed on cloud infrastructures over the Internet, the instability of data transfer times and the dynamics of processing rates require a more sophisticated data distribution scheme, one that maximizes parallel efficiency by balancing the load among the participating computing elements and by eliminating the idle time of each element. In particular, under real-time constraints and with a limited data buffer (in-memory storage), a more controllable mechanism is needed to prevent both overflow and underflow of the finite buffer. In this paper, we propose an auto-regulated data provisioning model based on a receiver-driven data pull model. Within this model, we provide a synchronized data replenishment mechanism that implicitly avoids buffer overflow and explicitly regulates buffer underflow by adequately adjusting the buffer resilience. To estimate the optimal buffer resilience, we exploit an adaptive buffer resilience control scheme that minimizes both the data buffer space and the idle time of the processing elements, based on directly measured sample path analysis. The simulation results show that the proposed scheme agrees with the numerical results within an allowable approximation error. It is also efficient enough to apply in dynamic environments where no stochastic characteristics can be postulated for the data transfer time or the data processing rate, or where both fluctuate.
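
A minimal sketch of a receiver-driven pull buffer with a resilience threshold, to illustrate the replenishment idea; the refill-to-capacity rule below is an assumed toy policy, not the paper's adaptive control scheme.

```python
from collections import deque

class PullBuffer:
    """Receiver-driven pull with a resilience threshold: when the
    buffer level (plus in-flight requests) drops to the resilience
    level, the consumer pulls enough items to refill to capacity.
    This bounds overflow by construction and fights underflow by
    requesting early.
    """

    def __init__(self, capacity, resilience):
        assert 0 <= resilience < capacity
        self.capacity = capacity
        self.resilience = resilience
        self.items = deque()
        self.pulled = 0  # requested but not yet delivered

    def outstanding_request(self):
        """Return how many items to pull now (0 if above threshold)."""
        level = len(self.items) + self.pulled
        if level <= self.resilience:
            want = self.capacity - level
            self.pulled += want
            return want
        return 0

    def deliver(self, k):
        """Provider delivers k items against an earlier pull."""
        arrived = min(k, self.pulled)
        for _ in range(arrived):
            self.items.append(object())
        self.pulled -= arrived

    def consume(self):
        """Processing element takes one item; None signals underflow."""
        return self.items.popleft() if self.items else None
```

Choosing the resilience level is exactly the trade-off the abstract describes: a high threshold wastes buffer space, a low one risks idle processing elements when transfer times fluctuate.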

Minimum Message Length and Classical Methods for Model Selection in Univariate Polynomial Regression

  • Viswanathan, Murlikrishna;Yang, Young-Kyu;WhangBo, Taeg-Keun
    • ETRI Journal
    • /
• Vol. 27, No. 6
    • /
    • pp.747-758
    • /
    • 2005
  • The problem of selecting among competing models has been a fundamental issue in statistical data analysis. Good fits to data can be misleading, since they can result from properties of the model that have nothing to do with it being a close approximation to the source distribution of interest (for example, overfitting). In this study we focus on the preference among models from a family of polynomial regressors. Three decades of research have spawned a number of plausible techniques for model selection, namely, Akaike's Finite Prediction Error (FPE) and Information Criterion (AIC), Schwarz's criterion (SCH), Generalized Cross Validation (GCV), Wallace's Minimum Message Length (MML), Minimum Description Length (MDL), and Vapnik's Structural Risk Minimization (SRM). The fundamental similarity between all these principles is their attempt to strike an appropriate balance between the complexity of models and their ability to explain the data. This paper presents an empirical study of the above principles in the context of model selection, where the models under consideration are univariate polynomials. It includes a detailed empirical evaluation of the model selection methods on six target functions, with varying sample sizes and added Gaussian noise. The results appear to provide strong evidence in support of the MML- and SRM-based methods over the other standard approaches (FPE, AIC, SCH, and GCV).
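
Two of the classical criteria compared above, AIC and Schwarz's criterion (BIC), can be sketched for univariate polynomial regression as follows. This is a self-contained illustration, not the paper's experimental code.

```python
import math

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved
    with Gaussian elimination (adequate for the low degrees used in
    this illustration)."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # elimination with pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef  # coef[i] multiplies x**i

def aic_bic(xs, ys, degree):
    """AIC and Schwarz's criterion (BIC) for a Gaussian polynomial
    regression model; lower is better for both."""
    coef = fit_poly(xs, ys, degree)
    n = len(xs)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    rss = max(rss, 1e-12)                     # guard log(0) on perfect fits
    k = degree + 1
    loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, k * math.log(n) - 2 * loglik
```

Both criteria trade goodness of fit (the log-likelihood term) against model complexity (k parameters); BIC's log(n) penalty grows with sample size, which is why it tends to pick smaller models than AIC, mirroring the complexity-versus-fit balance the abstract emphasizes.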
