• Title/Summary/Keyword: 통계적 분산 (Statistical Variance)

Image Fusion Based on Statistical Hypothesis Test Using Wavelet Transform (웨이블렛 변환을 이용한 통계적 가설검정에 의한 영상융합)

  • Park, Min-Joon;Kwon, Min-Jun;Kim, Gi-Hun;Shim, Han-Seul;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics / v.24 no.4 / pp.695-708 / 2011
  • Image fusion is the process of combining multiple images of the same scene into a single fused image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging and military affairs. Widely used wavelet-transform-based fusion rules rely on a simple comparison of activity measures of local windows, such as the mean and standard deviation. In this case, information features from the original images are excluded from the fused image, and distorted fusion results are obtained for noisy images. In this paper, we propose a nonparametric squared-ranks test on the equality of variance for two samples in order to overcome the influence of noise and guarantee the homogeneity of the fused image. We evaluate the method both quantitatively and qualitatively and compare it with existing fusion methods. Experimental results indicate that the proposed method is effective and provides satisfactory fusion results.
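
The squared-ranks test mentioned in the abstract can be sketched in a few lines; this is a minimal NumPy/SciPy implementation of the two-sample squared-ranks test for equal variances with a normal approximation (the sample data below are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats

def squared_ranks_test(x, y):
    """Two-sample squared-ranks test for equal variances (normal approximation)."""
    u = np.abs(x - x.mean())                 # absolute deviations from each sample mean
    v = np.abs(y - y.mean())
    r2 = stats.rankdata(np.concatenate([u, v])) ** 2   # squared ranks of pooled deviations
    n, m = len(x), len(y)
    T = r2[:n].sum()                         # statistic: sum of squared ranks for sample x
    mean_T = n * r2.mean()
    var_T = n * m / (n + m - 1) * ((r2 ** 2).mean() - r2.mean() ** 2)
    z = (T - mean_T) / np.sqrt(var_T)
    return z, 2 * stats.norm.sf(abs(z))      # two-sided p-value

rng = np.random.default_rng(0)
z, p = squared_ranks_test(rng.normal(0, 1, 50), rng.normal(0, 3, 50))
```

With clearly unequal spreads as here, the test rejects equality of variances; under equal variances the p-value is approximately uniform.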

A Study on the Optimal Distribution Loss Management Using Loss Factor in Power Distribution Systems (분산형전원이 도입된 배전계통의 손실산정기법에 관한 연구)

  • Rho Dae-Seok
    • Journal of the Korea Academia-Industrial cooperation Society / v.6 no.3 / pp.231-240 / 2005
  • Recently, needs and concerns regarding power loss have been increasing, driven by energy conservation at the level of national policy and the business strategies of power utilities. In particular, power loss is a main factor in determining electricity pricing rates under the deregulation of the electrical industry. However, because loss load factors (LLF) are not systematically managed, it is difficult to calculate the power loss and to set electricity rates. Moreover, the loss factor (k-factor) in Korea, the most important factor for calculating distribution power loss, has been used as a fixed value of 0.32 since fiscal year 1973. Therefore, this study presents statistical methods for calculating loss factors classified by load type and season, using practical data from 65 primary feeders selected by proper procedures. Based on these algorithms and methods, an optimal method for distribution loss management classified by facility, such as primary feeders, distribution transformers and secondary feeders, is presented. The simulation results show the effectiveness and usefulness of the proposed methods.
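
The k-factor relation behind the fixed value of 0.32 is commonly written as LSF = k·LF + (1 − k)·LF², where LF is the load factor; a small sketch using the Korean value cited in the abstract (the example load factor is hypothetical):

```python
def loss_factor(load_factor: float, k: float = 0.32) -> float:
    """Empirical loss factor relation: LSF = k*LF + (1 - k)*LF**2."""
    return k * load_factor + (1 - k) * load_factor ** 2

# a feeder running at a 50% load factor
lsf = loss_factor(0.5)   # 0.32*0.5 + 0.68*0.25 = 0.33
```

The study's point is that using one fixed k for all load types and seasons distorts loss estimates, which motivates calculating loss factors per load class.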

Statistical Homogeneity Tests and Multiple Comparison Analysis for Response Characteristics between Treatments of Bridge Groups (교량 집단의 특성 수준간 통계적 응답 동질성 검정 및 다중 비교 분석)

  • Hwang, Jin-Ha;Kim, Ju-Han;An, Seoung-Su
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.18 no.4 / pp.107-117 / 2014
  • This study tests homogeneity and performs multiple comparison analysis among the treatment levels of each factor group, using t-tests by material and analysis of variance by structural type and service period. To that end, descriptive statistical analysis is performed for static and dynamic response characteristics, and for the ratios of calculated to measured values, based on a large number of bridge safety assessment reports. Homogeneity and post hoc tests based on this descriptive analysis provide measures for identifying homogeneity among comparison groups, in addition to statistical reference values such as central tendency, variation and shape. This study is expected to be valuable for structural integrity assessment and design: comparing measured and calculated values against the reference values of the identified homogeneous group can help engineers review the adequacy of those values and put the group database to practical use.
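
The t-test and one-way ANOVA described above can be sketched with SciPy; the bridge-type group names and response-ratio values below are made up for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical calculated-to-measured response ratios for three bridge types
steel    = rng.normal(1.00, 0.05, 30)
concrete = rng.normal(1.02, 0.05, 30)
psc      = rng.normal(1.10, 0.05, 30)

t, p_t = stats.ttest_ind(steel, concrete, equal_var=False)  # two material groups
F, p_f = stats.f_oneway(steel, concrete, psc)               # all structural types
```

A significant ANOVA result only says the group means differ somewhere; the post hoc comparisons in the paper identify which pairs of treatment levels actually differ.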

Optimization of Silver Nanoparticles Synthesis through Design-of-Experiment Method (실험계획법을 활용한 은 나노 입자의 합성 및 최적화)

  • Lim, Jae Hong;Kang, Kyung Yeon;Im, Badro;Lee, Jae Sung
    • Korean Chemical Engineering Research / v.46 no.4 / pp.756-763 / 2008
  • The aim of this work was to obtain uniform and well-dispersed spherical silver nanoparticles using statistical design-of-experiment methods. We performed the experiments using 2^k fractional factorial designs with respect to the key factors of a general chemical reduction method. The prepared nanoparticles were characterized by SEM, TEM and UV-visible absorbance for particle size, distribution, aggregation and anisotropy. The data obtained were analyzed and optimized using the statistical software Minitab. The design-of-experiment methods using quantified data enabled us to determine the key factors and appreciate the interactions between factors. The measured properties of the nanoparticles were determined not only by one or two individual main factors but also by interactions between factors. An appropriate combination of the factors produced small, narrowly distributed and non-aggregated silver nanoparticles of about 30 nm with approximately 10% standard deviation.
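
A 2^k fractional factorial design like the one described can be generated in a few lines; here is a sketch of a 2^(3−1) half-fraction with generator C = AB (the factor names are placeholders, not the paper's actual synthesis variables):

```python
import itertools

# full 2^2 design in coded units (-1 = low, +1 = high) for factors A and B
base = list(itertools.product([-1, 1], repeat=2))

# half-fraction of a 2^3 design: the third factor is aliased as C = A*B
design = [(a, b, a * b) for a, b in base]
for run in design:
    print(run)
```

The half-fraction runs 4 experiments instead of 8, at the cost of aliasing the main effect of C with the AB interaction; the paper's analysis of factor interactions is what such designs are built to expose.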

CERES: A Log-based, Interactive Web Analytics System for Backbone Networks (CERES: 백본망 로그 기반 대화형 웹 분석 시스템)

  • Suh, Ilhyun;Chung, Yon Dohn
    • KIISE Transactions on Computing Practices / v.21 no.10 / pp.651-657 / 2015
  • The amount of web traffic has increased as a result of the rapid growth of web-based applications. In order to obtain valuable information from web logs, we need systems that support interactive, flexible, and efficient ways to analyze and handle large amounts of data. In this paper, we present CERES, a log-based, interactive web analytics system for backbone networks. Since CERES focuses on analyzing web log records generated from backbone networks, it can perform web analysis from the perspective of a network. CERES is designed for deployment on a server cluster using the Hadoop Distributed File System (HDFS) as the underlying storage. We transform web log records from backbone networks into relations and then allow users to analyze them with a SQL-like language in a flexible and interactive manner. In particular, we use the data cube technique to enable efficient statistical analysis of web logs. The system provides users with a web-based, multi-modal user interface.
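
The data cube technique mentioned above precomputes aggregates for every subset of dimensions (every "cuboid"); a minimal pure-Python sketch over toy log records (the field names are illustrative, not CERES's actual schema):

```python
from itertools import combinations
from collections import Counter

logs = [
    {"host": "a.com", "hour": 10, "status": 200},
    {"host": "a.com", "hour": 10, "status": 404},
    {"host": "b.com", "hour": 11, "status": 200},
]
dims = ("host", "hour", "status")

# one Counter per cuboid, keyed by the tuple of grouped dimension values
cube = {
    group: Counter(tuple(rec[d] for d in group) for rec in logs)
    for r in range(len(dims) + 1)
    for group in combinations(dims, r)
}
```

Here `cube[()][()]` is the grand total and `cube[("host",)][("a.com",)]` answers "how many requests hit a.com" without rescanning the log, which is the efficiency the abstract refers to.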

A Statistical Observation on the Monthly Number of Marine Accidents in Korean waters (국내 해역의 월별 해양사고건수에 관한 통계적 고찰)

  • Kim, Jung-Hoon
    • Journal of Navigation and Port Research / v.32 no.10 / pp.751-757 / 2008
  • This paper presents statistical analyses of marine accidents in Korean waters using marine accident statistics. The analyses were verified through one-way ANOVA: differences in the average number of marine accidents among Korean waters, between fishing and non-fishing vessels, and by year and by month. Pairwise post hoc multiple comparisons using the REGW-F or GH tests were additionally performed. As a result, there were significant differences among Korean waters. The difference in marine accidents by year was statistically significant in the South Sea and the East Sea for fishing vessels, and in the South Sea for non-fishing vessels. Significant differences in monthly marine accidents were found in the Yellow Sea and the East Sea for fishing vessels only.
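
The overall-ANOVA-then-post-hoc workflow can be sketched as follows; since SciPy does not ship the REGW-F or Games-Howell procedures used in the paper, Bonferroni-corrected Welch t-tests stand in as a simple post hoc substitute, and the accident counts per sea area are made up:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical monthly accident counts per sea area
seas = {
    "South":  rng.poisson(30, 12),
    "Yellow": rng.poisson(20, 12),
    "East":   rng.poisson(20, 12),
}

F, p = stats.f_oneway(*seas.values())   # overall one-way ANOVA

# post hoc: pairwise Welch t-tests with a Bonferroni correction
pairs = list(combinations(seas, 2))
adj = {
    (a, b): min(1.0, stats.ttest_ind(seas[a], seas[b], equal_var=False).pvalue * len(pairs))
    for a, b in pairs
}
```

Games-Howell, like the Welch t-test here, does not assume equal group variances, which matters for count data whose variance grows with the mean.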

A Study on the Training Optimization Using Genetic Algorithm -In case of Statistical Classification considering Normal Distribution- (유전자 알고리즘을 이용한 트레이닝 최적화 기법 연구 - 정규분포를 고려한 통계적 영상분류의 경우 -)

  • 어양담;조봉환;이용웅;김용일
    • Korean Journal of Remote Sensing / v.15 no.3 / pp.195-208 / 1999
  • In the classification of satellite images, the representativeness of class training data is an important factor affecting classification accuracy. Hence, in order to improve classification accuracy, it is necessary to optimize the pre-classification stage, which determines the classification parameters, rather than to develop classifiers alone. In this study, the normality of training data is evaluated at the pre-classification stage using SPOT XS and LANDSAT TM imagery. The correlation coefficient of a multivariate Q-Q plot at the 5% significance level and the variance of the initial training data are used as the objective function of a genetic algorithm in the training normalization process. As a result of normalizing the training data with the genetic algorithm, the mean and variance of each class in the study area shifted toward those of the population, and the results showed the possibility of predicting the distribution of each class.
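
A genetic algorithm of the kind used for training-sample optimization can be sketched as a simple select/crossover/mutate loop; the one-max fitness below is a toy stand-in for the paper's Q-Q-plot correlation and variance objective:

```python
import random
random.seed(0)

def fitness(bits):
    # toy objective; the paper instead scores Q-Q-plot correlation and variance
    return sum(bits)

def ga(n_bits=20, pop_size=30, gens=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
```

In the training-selection setting, each bit would mark whether a candidate pixel is kept in the training set, and the fitness would be the normality criterion evaluated on the selected pixels.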

Improvement of Particle and Physical Characteristics of Coal Gasification Slag through a Pretreatment Process System and Its Statistical Verification (석탄 가스화 용융 슬래그의 전처리 공정 시스템 적용에 따른 입자 및 물리적 특성 개선 및 통계적 검증)

  • Kim, Jong;Han, Min-Cheol;Han, Jun-Hui
    • Journal of the Korean Recycled Construction Resources Institute / v.10 no.3 / pp.285-292 / 2022
  • The objective of this study is to investigate whether the coal gasification slag (CGS) generated in IGCC satisfies the fine-aggregate quality items specified in KS F 2527 (Concrete Aggregate) after passing through a pretreatment process system, and to assess the resulting quality improvement. The statistical significance of the pretreatment process was analyzed through repeated-measures ANOVA on values measured at each stage of the pretreatment process system. Comparing CGS fine-aggregate quality before and after the pretreatment process, the density increased by 5.2 %, the absorption rate decreased by 1.86 %, the 0.08 mm passing ratio decreased by 2.25 %, and the fineness modulus and particle-size distribution were also found to be adjustable. The pretreatment process system was found to be significant in improving the quality of CGS.
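
Repeated-measures ANOVA compares the same specimens across process stages; with only two conditions (before/after) it reduces to a paired t-test, sketched here with made-up density measurements (the 5.2 % shift mirrors the figure reported above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical density of the same 10 CGS samples before and after pretreatment
before = rng.normal(2.30, 0.02, 10)
after  = before * 1.052 + rng.normal(0, 0.005, 10)   # ~5.2 % increase

t, p = stats.ttest_rel(after, before)   # paired design: same specimens measured twice
```

For more than two repeated conditions (as with a multi-stage pretreatment system), a repeated-measures ANOVA accounts for the within-specimen correlation that an ordinary one-way ANOVA would ignore.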

A Study on Clutter Rejection using PCA and Stochastic features of Edge Image (주성분 분석법 및 외곽선 영상의 통계적 특성을 이용한 클러터 제거기법 연구)

  • Kang, Suk-Jong;Kim, Do-Jong;Bae, Hyeon-Deok
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.6 / pp.12-18 / 2010
  • Automatic target detection (ATD) systems that use forward-looking infrared (FLIR) consist of three stages: preprocessing, detection, and clutter rejection. All potential targets are extracted in the preprocessing and detection stages, but this results in high false alarm rates. To reduce the false alarm rate of an ATD system, true targets are extracted in the clutter rejection stage, on which this paper focuses. This paper presents a new clutter rejection technique using PCA features and stochastic features of clutter and targets. The PCA features are obtained from the Euclidean distances computed when potential targets are projected onto a reduced eigenspace selected from the target eigenvectors. The CV is used to calculate the stochastic features of edges in target and clutter images. To distinguish between targets and clutter, LDA (linear discriminant analysis) is applied. The experimental results show that the proposed algorithm classifies clutter accurately, with a lower false rate than the PCA or CV methods alone.
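
The project-onto-target-eigenspace-then-discriminate pipeline can be sketched with scikit-learn; the random arrays below stand in for flattened image chips, so the numbers are purely illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
targets = rng.normal(0.0, 1.0, (100, 64))   # stand-ins for flattened 8x8 target chips
clutter = rng.normal(1.0, 1.5, (100, 64))   # stand-ins for clutter chips
X = np.vstack([targets, clutter])
y = np.array([1] * 100 + [0] * 100)

pca = PCA(n_components=10).fit(targets)     # eigenspace learned from target chips only
Z = pca.transform(X)                        # PCA features: projection coordinates
lda = LinearDiscriminantAnalysis().fit(Z, y)
acc = lda.score(Z, y)                       # training accuracy of the linear boundary
```

Fitting the eigenspace on targets alone is the key design choice: clutter chips project poorly onto it, so their coordinates (and distances) separate them from true targets before LDA draws the boundary.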

Efficient strategy for the genetic analysis of related samples with a linear mixed model (선형혼합모형을 이용한 유전체 자료분석방안에 대한 연구)

  • Lim, Jeongmin;Sung, Joohon;Won, Sungho
    • Journal of the Korean Data and Information Science Society / v.25 no.5 / pp.1025-1038 / 2014
  • The linear mixed model has often been utilized for genetic association analysis with family-based samples. The correlation matrix for family-based samples is constructed from kinship coefficients, and it assumes that parental phenotypes are independent and that the correlation between parent and offspring is the same as the correlation between siblings. However, there are, for instance, positive correlations between parental heights, which indicates that this assumption is often violated. Statistical validity and power are affected by the appropriateness of the assumed variance-covariance matrix, and in this paper we provide a linear mixed model with a flexible variance-covariance matrix. Our results show that the proposed method is usually more efficient than existing approaches, and its application to a genome-wide association study of body mass index illustrates its practical value in real data analysis.
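
The kinship-based mixed model can be illustrated with a generalized-least-squares step: build the phenotype covariance V = 2σ²_g·K + σ²_e·I from a kinship matrix K, then estimate the fixed effects. The trio kinship values, variance components, and phenotypes below are illustrative:

```python
import numpy as np

# kinship matrix for a father-mother-child trio (self-kinship 0.5,
# parent-offspring 0.25, unrelated parents 0)
K = np.array([[0.50, 0.00, 0.25],
              [0.00, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
sigma_g2, sigma_e2 = 0.6, 0.4                        # illustrative variance components
V = 2 * sigma_g2 * K + sigma_e2 * np.eye(3)          # phenotype covariance under the model

X = np.column_stack([np.ones(3), [0.0, 1.0, 2.0]])   # intercept + genotype dosage
y = np.array([0.1, 0.4, 1.1])                        # toy phenotypes

Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)   # GLS estimate of the fixed effects
```

The paper's flexible-covariance proposal amounts to relaxing the zeros and equal off-diagonal entries that the kinship-only construction of V imposes, e.g. allowing a nonzero spouse-spouse correlation.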