• Title/Summary/Keyword: 표본수 (sample size)

Search results: 3,558

Stochastic Strength Analysis according to Initial Void Defects in Composite Materials (복합재 초기 공극 결함에 따른 횡하중 강도 확률론적 분석)

  • Seung-Min Ji;Sung-Wook Cho;S.S. Cheon
    • Composites Research
    • /
    • v.37 no.3
    • /
    • pp.179-185
    • /
    • 2024
  • This study quantitatively evaluated the changes in transverse tensile strength of unidirectional fiber-reinforced composites with initial void defects using a Representative Volume Element (RVE) model. After calculating an appropriate sample size based on a margin of error and confidence level for the initial void defects, a sample group of 5,000 RVE models with initial void defects was generated. Dimensional reduction and density-based clustering were applied to the sample group to assess similarity, verifying that the group was unbiased. The validated sample results were represented by a Weibull distribution, allowing them to be applied to the reliability analysis of composite structures.
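
The final step described above, representing validated strength samples with a Weibull distribution, can be sketched with median-rank regression. The sample data, shape, and scale values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Synthetic transverse-strength sample (MPa); values are illustrative only.
rng = np.random.default_rng(0)
true_shape, true_scale = 8.0, 60.0
strength = true_scale * rng.weibull(true_shape, size=500)

# Median-rank regression: ln(-ln(1 - F_i)) is linear in ln(x_i),
# with slope = shape k and intercept = -k * ln(scale).
x = np.sort(strength)
n = len(x)
ranks = np.arange(1, n + 1)
F = (ranks - 0.3) / (n + 0.4)          # Bernard's median-rank estimate
y = np.log(-np.log(1.0 - F))
slope, intercept = np.polyfit(np.log(x), y, 1)
shape_hat = slope
scale_hat = np.exp(-intercept / slope)
```

The fitted `shape_hat` and `scale_hat` can then feed a structural reliability computation, e.g. the probability that strength falls below a design load.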

A Study on the Use of Mutual Information in Teaching Mathematical Statistics (수리통계학 교육에서 상호정보의 활용에 대한 연구)

  • Jang, Dae-Heung
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2005.05a
    • /
    • pp.155-160
    • /
    • 2005
  • Mutual information can be used to construct a measure of the degree of dependence between two random variables, and to define a generalized correlation coefficient that compensates for the shortcomings of the sample correlation coefficient as a measure of the relationship between two variables.
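
A minimal sketch of the idea: estimate mutual information from a joint histogram, then map it to a generalized correlation via Linfoot's informational coefficient, which for a bivariate normal satisfies I = -0.5 ln(1 - ρ²). The data and bin count are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)   # corr(x, y) = 0.8 by construction

# Plug-in mutual information from a joint histogram (in nats).
H, _, _ = np.histogram2d(x, y, bins=30)
p_xy = H / H.sum()
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
mask = p_xy > 0
mi = np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask]))

# Linfoot's informational coefficient of correlation:
# rho = sqrt(1 - exp(-2 I)); unlike the sample correlation, it also
# detects nonlinear dependence.
r_gen = np.sqrt(1.0 - np.exp(-2.0 * mi))
```

For linearly related Gaussian data `r_gen` approximately recovers |ρ|; for nonlinear dependence it stays positive where the sample correlation can vanish.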


A study on the difference and calibration of empirical influence function and sample influence function (경험적 영향함수와 표본영향함수의 차이 및 보정에 관한 연구)

  • Kang, Hyunseok;Kim, Honggie
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.5
    • /
    • pp.527-540
    • /
    • 2020
  • While analyzing data, studying outliers, which fall outside the main tendency, is as important as studying data that follow the general tendency. In this study we discuss the influence function for outlier discrimination. We derive the sample influence functions of the sample mean, sample variance, and sample standard deviation, which were not derived directly in previous research. The results enable us to examine mathematically the relationship between the empirical influence function and the sample influence function, and to consider a method of approximating the sample influence function by the empirical influence function. The validity of this relationship is verified by simulation with random samples from a normal distribution; the simulation confirms both the relationship between the two influence functions and the method of approximating the sample influence function through the empirical influence function. This research is significant in that it proposes a method that reduces the error of the approximation, offering an effective and practical refinement of previous work, which approximated the sample influence function directly by the empirical influence function, through a constant correction.
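
The relationship discussed above can be illustrated numerically. A common definition of the sample influence function is the leave-one-out form SIF_i = (n-1)(T(F_n) - T(F_n,-i)); the sketch below compares it with the empirical influence function evaluated at the data points, for the mean and the plug-in variance (the data are synthetic, and the exact definitions used by the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=200)
n = len(x)

def var_n(a):
    # Plug-in (1/n) variance, i.e. the functional T(F) = E(X - EX)^2.
    return np.mean((a - np.mean(a)) ** 2)

# Sample influence function: SIF_i = (n - 1) * (T(F_n) - T(F_{n,-i})).
sif_mean = np.array([(n - 1) * (np.mean(x) - np.mean(np.delete(x, i)))
                     for i in range(n)])
sif_var = np.array([(n - 1) * (var_n(x) - var_n(np.delete(x, i)))
                    for i in range(n)])

# Empirical influence function at the data points:
# mean: IF(x) = x - mean;  variance: IF(x) = (x - mean)^2 - var.
eif_mean = x - np.mean(x)
eif_var = (x - np.mean(x)) ** 2 - var_n(x)
```

For the mean the two functions coincide exactly; for the variance they differ by (x_i - x̄)²/(n-1), an O(1/n) term, which is the kind of gap a constant correction can close.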

Study on Optimal Sample Size for Bivariate Frequency Analysis using POT (POT 방법을 이용한 이변량 빈도해석 적정 표본크기 연구)

  • Joo, Kyungwon;Heo, Jun-Haeng
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2015.05a
    • /
    • pp.38-38
    • /
    • 2015
  • Frequency analysis using multivariate probability models has recently been studied across several hydrological fields. Compared with conventional univariate frequency analysis, it offers more freedom in the choice of variables and can represent the physical phenomenon more accurately, but a shortage of sample data and difficulties in parameter estimation and goodness-of-fit testing make it hard to apply in practice. In this study, the Peaks-Over-Threshold (POT) method was used to determine the appropriate sample size for the Cramér-von Mises (CVM) goodness-of-fit test of a copula model. Frequency analysis was performed using hourly rainfall data from the Korea Meteorological Administration station in Seoul, and the parameters of a Gumbel copula model were estimated by the maximum pseudo-likelihood (MPL) method. For 50 years of records, the sample size was varied from 50 to 2,500, and the appropriate sample size was determined from the CVM statistic and its p-value.
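
Only the POT extraction step lends itself to a short sketch; the Gumbel-copula fit and CVM test are omitted. The threshold, declustering gap, and synthetic "rainfall" series below are assumptions, not the Seoul record:

```python
import numpy as np

def peaks_over_threshold(series, threshold, min_gap=6):
    """Extract independent peaks above `threshold`.

    Exceedances closer than `min_gap` steps are treated as one event
    (a simple declustering rule) and only the cluster maximum is kept.
    """
    idx = np.flatnonzero(series > threshold)
    peaks = []
    cluster = [idx[0]] if idx.size else []
    for i in idx[1:]:
        if i - cluster[-1] <= min_gap:
            cluster.append(i)
        else:
            peaks.append(max(cluster, key=lambda j: series[j]))
            cluster = [i]
    if cluster:
        peaks.append(max(cluster, key=lambda j: series[j]))
    return np.array(peaks, dtype=int)

# Illustrative hourly "rainfall" series (synthetic, heavy-tailed).
rng = np.random.default_rng(3)
rain = rng.gamma(shape=0.3, scale=5.0, size=5000)
peak_idx = peaks_over_threshold(rain, threshold=20.0, min_gap=6)
```

Lowering the threshold enlarges the POT sample, which is exactly the knob the study turns (50 to 2,500 peaks) while monitoring the CVM statistic.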


Reexamination on Foreign Collectors' Sites and Exploration Routes in Korea (III) - with respect to T. Uchiyama - (외국인의 한반도 식물 채집행적과 지명 재고(III): Tomijiro Uchiyama)

  • Kim, Hui;Choi, Byoung-Hee;Chang, Chin-Sung;Chang, Kae-Sun
    • Korean Journal of Plant Taxonomy
    • /
    • v.37 no.2
    • /
    • pp.203-215
    • /
    • 2007
  • Tomijiro Uchiyama visited the Korean peninsula twice, in 1900 and 1902, collecting plants in Busan, Incheon, Nampo, Pyongyang, Seoul, Mt. Geumgang of Gangwon-do, and Jeju-do. The numerous specimens he collected were later investigated and studied by T. Nakai (Flora Koreana I and II and other publications) and H. Léveillé. Unfortunately, Nakai recorded all collection sites simply in Romanized characters, so it is difficult to pinpoint those sites on current or old Korean maps. In this study, many locality names were reviewed based on Uchiyama's own specimens at TI and the literature, and are listed in the order of his collection dates. Of the 1,674 specimens listed by Nakai, only about 200 could be confirmed at TI. Two-thirds of his collections were made in 1902, and among them 41 specimens were cited as type collections by Nakai.

Sample Design for Materials and Components Industry Trend Survey (부품.소재산업 동향 조사의 표본설계)

  • NamKung, Pyong
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.6
    • /
    • pp.883-897
    • /
    • 2008
  • This paper provides accurate information reflecting the present situation, using as its population the 2006 Mining and Manufacturing Industry Statistical Survey conducted by the National Statistical Office. It proposes a new sampling design that can capture business fluctuations and provide basic data for policies to foster and manage the materials and components industries. The sample design uses a modified cut-off method and multivariate Neyman allocation based on principal components, and the sampling method is probability-proportional systematic sampling.
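
The classical (univariate) Neyman allocation underlying the design can be sketched as follows; the paper's multivariate, principal-component version generalizes it. The strata sizes and standard deviations are hypothetical:

```python
import numpy as np

def neyman_allocation(N_h, S_h, n):
    """Allocate total sample size n across strata.

    Neyman allocation: n_h is proportional to N_h * S_h, where N_h is
    the stratum size and S_h the stratum standard deviation, which
    minimizes the variance of the stratified mean for fixed n.
    """
    N_h = np.asarray(N_h, dtype=float)
    S_h = np.asarray(S_h, dtype=float)
    weights = N_h * S_h
    n_h = n * weights / weights.sum()
    return np.round(n_h).astype(int)

# Hypothetical strata: many small homogeneous firms, few large variable ones.
sizes = [5000, 2000, 500]
stds = [1.0, 3.0, 10.0]
alloc = neyman_allocation(sizes, stds, n=400)   # -> [125, 150, 125]
```

Note how the small stratum of 500 highly variable firms receives as many sample units as the stratum ten times its size, which is the same intuition behind cut-off sampling of dominant establishments.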

A Nonuniform Sampling Technique and Its Application to Speech Coding (비균등 표본화 기법과 음성 부호화로의 응용)

  • Iem, Byeong-Gwan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.28-32
    • /
    • 2014
  • For a signal such as speech, which is piecewise linear over very short time periods, a nonuniform sampling method based on inflection point detection (IPD) is proposed to reduce the data rate. The method exploits the geometric characteristics of the signal further than the existing sampling method based on local maxima/minima detection (MMD). As a result, the signal reconstructed by interpolating the IPD-sampled data resembles the original speech more closely. Computer simulation shows that the proposed IPD-based method yields about 9~23 dB improvement over the existing MMD method. To show the usefulness of the IPD technique, it is applied to speech coding and compared with continuously variable slope delta modulation (CVSD). Each nonuniformly sampled datum is binary coded with a one-bit flag set to 1; non-inflection samples are not sent, and only their flag bits, set to 0, are transmitted. The method shows improvements of 0.3~9 dB in SNR and 0.5~1.3 in mean opinion score (MOS) over CVSD.
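
A minimal sketch of IPD sampling, assuming inflection points are detected as sign changes of the discrete second difference (the paper's exact detector and test signals are not specified here; the two-tone signal stands in for a short speech frame):

```python
import numpy as np

def ipd_sample(sig):
    """Nonuniform sample indices: inflection points (sign changes of the
    second difference) plus both endpoints."""
    d2 = np.diff(sig, 2)                        # d2[i] ~ curvature at i + 1
    flips = np.flatnonzero(d2[:-1] * d2[1:] < 0) + 2
    return np.unique(np.concatenate(([0], flips, [len(sig) - 1])))

def reconstruct(idx, values, n):
    # Piecewise-linear reconstruction between the retained samples.
    return np.interp(np.arange(n), idx, values)

# Illustrative test signal (a stand-in for a short speech frame).
t = np.linspace(0, 1, 800)
sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 23 * t)

idx = ipd_sample(sig)
rec = reconstruct(idx, sig[idx], len(sig))
snr_db = 10 * np.log10(np.sum(sig ** 2) / np.sum((sig - rec) ** 2))
compression = 1 - idx.size / sig.size
```

Because the signal is nearly linear between inflection points, linear interpolation over the retained samples recovers it well while keeping only a small fraction of the original samples.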

A RSS-Based Localization Method Utilizing Robust Statistics for Wireless Sensor Networks under Non-Gaussian Noise (비 가우시안 잡음이 존재하는 무선 센서 네트워크에서 Robust Statistics를 활용하는 수신신호세기기반의 위치 추정 기법)

  • Ahn, Tae-Joon;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.3
    • /
    • pp.23-30
    • /
    • 2011
  • In a wireless sensor network (WSN), precise localization of the sensor nodes is essential for efficiently utilizing the sensing data they acquire. Among various localization methods, the received signal strength (RSS) based scheme is preferred in many applications because it can be implemented easily without additional hardware cost. Since RSS localization depends mainly on the radio channel between two nodes, outlier data can appear in the RSS measurements, especially when obstacles move around the link between the nodes. Such outliers degrade the estimate of the distance between the two nodes and thus cause localization errors. In this paper, we propose an RSS-based localization method that uses Robust Statistics and a Gaussian filter algorithm to enhance the accuracy of RSS-based localization. In the proposed algorithm, outlier data are eliminated from the samples by the Robust Statistics and the Gaussian filter, improving localization accuracy. Simulations show that the proposed algorithm increases localization accuracy and is more robust to non-Gaussian noise channels.
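
The outlier-rejection idea can be sketched with a standard MAD-based robust rule followed by a log-distance path-loss inversion. This is a generic illustration, not the paper's specific Robust Statistics/Gaussian-filter pipeline; the path-loss parameters and RSS samples are assumptions:

```python
import numpy as np

def robust_mean_rss(rss, k=3.0):
    """Mean RSS after discarding samples more than k robust standard
    deviations from the median (median/MAD outlier rule)."""
    rss = np.asarray(rss, dtype=float)
    med = np.median(rss)
    sigma = 1.4826 * np.median(np.abs(rss - med))   # MAD -> std (Gaussian)
    keep = np.abs(rss - med) <= k * sigma
    return rss[keep].mean()

def distance_from_rss(p_rx, p0=-40.0, d0=1.0, n_exp=2.0):
    # Log-distance path-loss model: P(d) = P0 - 10 n log10(d / d0).
    return d0 * 10 ** ((p0 - p_rx) / (10 * n_exp))

# 100 RSS samples around -60 dBm plus a few outliers (moving obstacles).
rng = np.random.default_rng(4)
samples = np.concatenate([rng.normal(-60.0, 1.0, 100),
                          [-90.0, -95.0, -20.0]])
p_hat = robust_mean_rss(samples)   # outliers rejected before averaging
d_hat = distance_from_rss(p_hat)   # ~10 m for -60 dBm under these params
```

A plain mean over the same samples would be pulled toward the shadowed outliers, biasing the distance estimate; the robust mean is insensitive to them.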

A Comparative Study on Factor Recovery of Principal Component Analysis and Common Factor Analysis (주성분분석과 공통요인분석에 대한 비교연구: 요인구조 복원 관점에서)

  • Jung, Sunho;Seo, Sangyun
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.6
    • /
    • pp.933-942
    • /
    • 2013
  • Common factor analysis and principal component analysis represent two technically distinct approaches to exploratory factor analysis. Much of the psychometric literature recommends common factor analysis over principal component analysis. Nonetheless, analysts use principal component analysis more frequently because they believe that, although it may yield (relatively) less accurate estimates of factor loadings than common factor analysis, it most often produces a similar pattern of loadings, leading to essentially the same factor interpretations. A simulation study is conducted to evaluate the relative performance of the two approaches in factor pattern recovery under different experimental conditions of sample size, overdetermination, and communality. The results show that principal component analysis recovers factors better with small sample sizes (below 200), and that this tendency is more prominent when there are few variables per factor. The present results are of practical use for factor analysts in marketing and the social sciences.
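
The kind of simulation described above can be sketched for a single factor: generate data from a known one-factor model, extract PCA "loadings" from the correlation matrix, and score recovery with Tucker's congruence coefficient. The loadings and sample size below are assumptions, and only the PCA arm of the comparison is shown:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 150, 6
true_load = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])  # one common factor

# Generate data from the factor model: x = lambda * f + unique error,
# scaled so each variable has unit variance.
f = rng.normal(size=(n, 1))
e = rng.normal(size=(n, p)) * np.sqrt(1 - true_load ** 2)
X = f @ true_load[None, :] + e

# PCA "loadings": first eigenvector of the correlation matrix,
# scaled by the square root of its eigenvalue.
R = np.corrcoef(X, rowvar=False)
vals, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
pc_load = vecs[:, -1] * np.sqrt(vals[-1])
pc_load *= np.sign(pc_load.sum())           # fix the sign indeterminacy

# Tucker's congruence coefficient between true and recovered loadings.
phi = (true_load @ pc_load) / np.sqrt(
    (true_load @ true_load) * (pc_load @ pc_load))
```

PCA loadings are somewhat inflated relative to the factor loadings (they absorb unique variance), yet the congruence stays high, which is the "similar pattern, same interpretation" point the abstract makes.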

Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers (다중 섬광결정을 이용한 고해상도 PET의 불균일/불완전 데이터 보정기법 연구)

  • Lee, Jae-Sung;Kim, Soo-Mee;Lee, Kwon-Song;Sim, Kwang-Souk;Rhe, June-Tak;Park, Kwang-Suk;Lee, Dong-Soo;Hong, Seong-Jong
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.1
    • /
    • pp.52-60
    • /
    • 2008
  • Purpose: To establish methods for sinogram formation and correction so that the filtered backprojection (FBP) reconstruction algorithm can be applied appropriately to data acquired with a PET scanner that has multiple scintillation crystal layers. Materials and Methods: Formats for raw PET data storage and methods for converting listmode data to histograms and sinograms were optimized. To solve the various problems that occurred while converting the raw histogram into a sinogram, an optimal sampling strategy and a sampling-efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All sinogram data were reconstructed using the 2D filtered backprojection algorithm and compared to assess the improvements from the correction algorithms. Results: In terms of the sampling theorem and the sampling-efficiency correction algorithm, the optimal radial sampling interval and number of angular samples were pitch/2 and 120, respectively. Applying the sampling-efficiency correction and gap compensation reduced artifacts and background noise in the reconstructed image. Conclusion: A method for converting the histogram to a sinogram was investigated for FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple-crystal-layer PET data.
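
The listmode-to-sinogram step can be sketched as a 2D histogram over radial offset and angle, using the abstract's sampling choices (radial interval pitch/2, 120 angular samples). The crystal pitch, field-of-view radius, and event distribution below are assumptions; a real scanner would divide by measured per-bin efficiencies rather than the uniform placeholder:

```python
import numpy as np

# Hypothetical listmode events: radial offset r (mm) and angle theta (rad)
# of each coincidence line of response. Values are synthetic.
rng = np.random.default_rng(6)
n_events = 50000
r = rng.normal(0.0, 30.0, n_events)          # mm
theta = rng.uniform(0.0, np.pi, n_events)    # rad

pitch = 4.0                                  # crystal pitch (mm), assumed
radial_step = pitch / 2                      # radial interval from abstract
n_angles = 120                               # angular samples from abstract
r_max = 100.0                                # field-of-view radius, assumed

r_edges = np.arange(-r_max, r_max + radial_step, radial_step)
theta_edges = np.linspace(0.0, np.pi, n_angles + 1)

# Bin listmode events into the sinogram.
sinogram, _, _ = np.histogram2d(r, theta, bins=[r_edges, theta_edges])

# Sampling-efficiency correction: divide each bin by its relative
# sensitivity (uniform here as a placeholder).
efficiency = np.ones_like(sinogram)
sinogram_corr = sinogram / efficiency
```

The corrected sinogram is then what a standard 2D FBP routine would take as input; gap compensation would additionally interpolate bins with zero geometric sensitivity.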