• Title/Summary/Keyword: LHS

Search results: 72

Latin Hypercube Sampling Based Probabilistic Small Signal Stability Analysis Considering Load Correlation

  • Zuo, Jian;Li, Yinhong;Cai, Defu;Shi, Dongyuan
    • Journal of Electrical Engineering and Technology
    • /
    • v.9 no.6
    • /
    • pp.1832-1842
    • /
    • 2014
  • A novel probabilistic small signal stability analysis (PSSSA) method considering load correlation is proposed in this paper. The Latin hypercube sampling (LHS) technique, combined with Monte Carlo simulation (MCS), is utilized to investigate the probabilistic small signal stability of a power system in the presence of load correlation. LHS helps to reduce the sample size while guaranteeing the accuracy and robustness of the solutions. The correlation coefficient matrix is adopted to represent the correlations between loads. Simulation results of the two-area, four-machine system prove that the proposed method is an efficient and robust sampling method. Simulation results of the 16-machine, 68-bus test system indicate that load correlation has a significant impact on the probabilistic analysis result of the critical oscillation mode under a certain degree of load uncertainty.
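
The abstract does not give implementation details; below is a minimal sketch of how correlated load samples might be drawn with LHS, mapping stratified uniforms to standard normals and imposing a correlation coefficient matrix through a Cholesky factor (the paper may instead use a rank-correlation approach such as Iman-Conover). All load statistics in the sketch are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code): LHS of correlated normal load powers
# for probabilistic stability studies. Load means, standard deviations, and the
# correlation matrix below are assumed for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def lhs_uniform(n_samples, n_vars):
    """One stratified uniform sample per equal-probability bin, per variable."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        rng.shuffle(u[:, j])          # randomize the pairing of strata across variables
    return u

def correlated_normal_lhs(n_samples, means, stds, corr):
    """LHS -> standard normals -> impose correlation via Cholesky -> scale."""
    z = norm.ppf(lhs_uniform(n_samples, len(means)))   # stratified N(0,1) scores
    z_corr = z @ np.linalg.cholesky(corr).T            # approximate target correlation
    return means + stds * z_corr

means = np.array([120.0, 80.0, 95.0])   # MW, illustrative load levels
stds  = 0.1 * means                     # 10% load uncertainty (assumed)
corr  = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])     # load correlation coefficient matrix (assumed)

samples = correlated_normal_lhs(200, means, stds, corr)
print(np.corrcoef(samples, rowvar=False).round(2))     # check the induced correlation
```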

Choosing an optimal connecting place of a nuclear power plant to a power system using Monte Carlo and LHS methods

  • Kiomarsi, Farshid;Shojaei, Ali Asghar;Soltani, Sepehr
    • Nuclear Engineering and Technology
    • /
    • v.52 no.7
    • /
    • pp.1587-1596
    • /
    • 2020
  • The location selection for nuclear power plants (NPP) is a strategic decision that has a significant impact on the operation of the plant and on the sustainable development of the region. Further, ranking the alternative locations and selecting the most suitable and efficient locations for NPPs is an important multi-criteria decision-making problem. In this paper, the non-sequential Monte Carlo probabilistic method and the Latin hypercube sampling probabilistic method are used to evaluate and select the optimal locations for an NPP. These locations are identified, on electrical grounds, by the power plant's onsite loads and by the average of the lowest number of relay protection operations after an NPP trip. The proposed method is demonstrated on the IEEE RTS 24-bus test system, where the optimal location is selected according to the plant's internal onsite loads and the average of the lowest number of relay protection operations after a plant trip. This paper provides an effective and systematic study of the decision-making process for evaluating and selecting optimal locations for an NPP.
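
As a toy illustration of why LHS is used alongside plain Monte Carlo in studies like this one (this is not the paper's power-system model), the sketch below compares the spread of the two estimators of an expected response at the same sample size.

```python
# Toy comparison: estimate the expected value of a smooth response under
# uniform input uncertainty with plain Monte Carlo versus Latin hypercube
# sampling of the same size. The response function is a placeholder.
import numpy as np

rng = np.random.default_rng(1)

def response(x):                          # stand-in for an expensive system evaluation
    return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2

def mc_samples(n, d):
    return rng.random((n, d))

def lhs_samples(n, d):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def estimator_std(sampler, n, d, repeats=200):
    return np.std([response(sampler(n, d)).mean() for _ in range(repeats)])

n, d = 50, 2
print("MC  std of mean:", estimator_std(mc_samples, n, d))
print("LHS std of mean:", estimator_std(lhs_samples, n, d))   # typically noticeably smaller
```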

A Study on the Optimization Strategy using Permanent Magnet Pole Shape Optimization of a Large Scale BLDC Motor (대용량 BLDC 전동기의 영구자석 형상 최적화를 통한 최적화 기법 연구)

  • Woo, Sung-Hyun;Shin, Pan-Seok;Oh, Jin-Seok;Kong, Yeong-Kyung;Bin, Jae-Goo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.5
    • /
    • pp.897-903
    • /
    • 2010
  • This paper presents a response surface method (RSM) with a Latin hypercube sampling strategy, employed to optimize the magnet pole shape of a large-scale BLDC motor so as to minimize the cogging torque. The proposed LHS algorithm consists of multi-objective Pareto optimization and a (1+1) evolution strategy. The algorithm is compared with the uniform sampling point method in terms of computing time and convergence. In order to verify the developed algorithm, a 6 MW BLDC motor is simulated with 4 design parameters (arc length and 3 variables for the magnet) and 4 constraints for minimizing the cogging torque. The optimization procedure has two stages: the first optimizes the arc length of the PM, and the second optimizes the magnet pole shape using the proposed hybrid algorithm. At the 3rd iteration an optimal point is obtained, and the cogging torque of the optimized shape converges to about 14% of the initial one, meaning that 3 iterations are good enough to obtain the optimal design parameters in the program.
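
The abstract names a (1+1) evolution strategy as part of the hybrid optimizer. The sketch below shows only the bare (1+1)-ES mechanics with the classic 1/5th success rule, using a toy quadratic objective as a stand-in for the RSM-predicted cogging torque; it is not the authors' implementation, and the target values are invented for illustration.

```python
# Minimal (1+1) evolution strategy sketch: a single parent is mutated, the
# child replaces it only if it improves the objective, and the mutation step
# size adapts with the 1/5th success rule.
import numpy as np

rng = np.random.default_rng(2)

def objective(x):                          # toy stand-in for the cogging-torque surrogate
    return np.sum((x - np.array([0.7, 0.3, 0.5, 0.9])) ** 2)

def one_plus_one_es(x0, sigma=0.2, iters=300):
    x = np.array(x0, float)
    fx = objective(x)
    successes = 0
    for k in range(1, iters + 1):
        child = x + sigma * rng.standard_normal(x.size)
        fc = objective(child)
        if fc < fx:
            x, fx = child, fc
            successes += 1
        if k % 20 == 0:                    # adapt step size every 20 trials
            sigma *= 1.5 if successes / 20 > 0.2 else 0.6
            successes = 0
    return x, fx

x_opt, f_opt = one_plus_one_es([0.0, 0.0, 0.0, 0.0])
print(x_opt.round(3), f_opt)
```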

Development of Stochastic Finite Element Model for Underground Structure with Discontinuous Rock Mass Using Latin Hypercube Sampling Technique (LHS기법을 이용한 불연속암반구조물의 확률유한요소해석기법개발)

  • 최규섭;정영수
    • Computational Structural Engineering
    • /
    • v.10 no.4
    • /
    • pp.143-154
    • /
    • 1997
  • A stochastic finite element model which reflects both the effect of discontinuities and the uncertainty of material properties in underground rock mass has been developed. The Latin hypercube sampling technique has been employed and compared with the Monte Carlo simulation method. To consider the effect of discontinuities, the joint finite element model, which is known to be suitable for representing faults, cleavage, and similar features, has been used in this study. To reflect the uncertainty of material properties, the joint normal stiffness and the joint shear stiffness are treated as random variables, simulated in terms of normal distributions. The computer program developed in this study has been verified by a practical example and has been applied to analyze a circular cavern in discontinuous rock mass.
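
As a rough illustration of the sampling step described above (not the original program), the sketch below draws stratified LHS realizations of the joint normal stiffness and joint shear stiffness from assumed normal distributions and propagates each pair through a placeholder function standing in for the joint-element FE analysis.

```python
# Illustrative sketch: LHS realizations of joint normal stiffness kn and joint
# shear stiffness ks, each fed to a placeholder "FE solve". The magnitudes and
# coefficients of variation are assumed for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def lhs_normal(n, mean, std):
    """n stratified samples of a normal variable via the inverse CDF."""
    u = (rng.permutation(n) + rng.random(n)) / n
    return norm.ppf(u, loc=mean, scale=std)

def fe_displacement(kn, ks):
    """Placeholder for a joint-element FE analysis of the circular cavern."""
    return 1.0 / kn + 0.5 / ks        # toy response, not a real FE model

n = 100
kn = lhs_normal(n, mean=5.0, std=0.5)      # assumed magnitudes, illustration only
ks = lhs_normal(n, mean=2.0, std=0.3)
rng.shuffle(ks)                            # pair kn and ks strata at random

disp = np.array([fe_displacement(a, b) for a, b in zip(kn, ks)])
print("mean response:", disp.mean(), " std:", disp.std())
```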

Infiltration in Residential Buildings under Uncertainty (공동주택 침기의 불확실성 분석)

  • Hyun, Se-Hoon;Park, Cheol-Soo;Moon, Hyeun-Jun
    • Proceedings of the SAREK Conference
    • /
    • 2006.06a
    • /
    • pp.369-374
    • /
    • 2006
  • Quantification of the infiltration rate is an important issue in HVAC system design. Infiltration in buildings depends on many uncertain parameters that vary with significant magnitude, and hence the results from a standard deterministic simulation approach can be unreliable. The authors utilize uncertainty analysis in predicting the airflow rates. The paper presents the relevant uncertain parameters, such as meteorological data and building parameters (leakage areas of windows, doors, etc.). Uncertainties of these parameters are quantified based on available data from the literature. Then, the Latin hypercube sampling (LHS) method was used for the uncertainty propagation; LHS is one of the Monte Carlo simulation techniques and is well suited to this purpose. CONTAMW was chosen to simulate infiltration phenomena in a residential apartment typical of residential buildings in Korea. It is shown that the uncertainty propagating through this process is not negligible and may significantly influence the prediction of the airflow rates.
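
A minimal illustration of this kind of uncertainty propagation (not the CONTAMW model used in the paper): leakage area and indoor-outdoor pressure difference are sampled with LHS from assumed distributions and pushed through the standard orifice equation for flow through a leakage path.

```python
# Uncertain leakage area and pressure difference are sampled with LHS and
# propagated through the orifice equation Q = Cd * A * sqrt(2*dP/rho).
# The distributions below are assumed for illustration, not taken from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def lhs_normal(n, mean, std):
    u = (rng.permutation(n) + rng.random(n)) / n
    return norm.ppf(u, loc=mean, scale=std)

n = 500
area = np.clip(lhs_normal(n, 0.02, 0.005), 1e-4, None)   # m^2 effective leakage area
dP   = np.clip(lhs_normal(n, 4.0, 1.5), 0.1, None)       # Pa pressure difference
rng.shuffle(dP)                                           # random pairing of strata

Cd, rho = 0.6, 1.2                                        # discharge coeff., air density kg/m^3
Q = Cd * area * np.sqrt(2.0 * dP / rho)                   # m^3/s infiltration airflow

print("airflow m^3/s: mean %.4f, 5th-95th pct: %.4f-%.4f"
      % (Q.mean(), np.percentile(Q, 5), np.percentile(Q, 95)))
```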

Multi-objective optimization application for a coupled light water small modular reactor-combined heat and power cycle (cogeneration) systems

  • Seong Woo Kang;Man-Sung Yim
    • Nuclear Engineering and Technology
    • /
    • v.56 no.5
    • /
    • pp.1654-1666
    • /
    • 2024
  • The goal of this research is to propose a way to maximize small modular reactor (SMR) utilization to gain better market feasibility in support of carbon neutrality. For that purpose, a comprehensive tool was developed, combining off-design thermohydraulic models, economic objective models (levelized cost of electricity, annual profit), non-economic models (saved CO2), a parameter input sampling method (Latin hypercube sampling, LHS), and a multi-objective evolutionary algorithm (Non-dominated Sorting Genetic Algorithm II, NSGA-II) for optimizing an SMR-combined heat and power cycle (CHP) system design. Considering multiple objectives, it was shown that the NSGA-II+LHS method can find better optimal solution sets at similar computational cost compared to a conventional weighted sum (WS) method. Out of multiple multi-objective optimal design configurations for a 105 MWe design generation rating, a chosen reference SMR-CHP system achieved a levelized cost of electricity (LCOE) below $60/MWh for various heat prices, showing economic competitiveness for energy market conditions similar to South Korea. The examined economic feasibility may vary significantly based on CHP heat prices, and extensive consideration of the regional heat market may be required for SMR-CHP regional optimization. Nonetheless, with reasonable heat market prices (e.g., district heating prices comparable to those in Europe and Korea), SMR can still become highly competitive in the energy market if coupled with a CHP system.
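
The abstract's optimizer is NSGA-II; the sketch below illustrates only the non-domination idea at its core, extracting the Pareto set from LHS-sampled candidate designs with two toy objectives standing in for the paper's economic and non-economic models.

```python
# Minimal non-domination sketch (not the authors' coupled thermohydraulic/economic
# tool): candidate designs are sampled with LHS, toy objectives are evaluated,
# and the non-dominated (Pareto) set is extracted. All models are placeholders.
import numpy as np

rng = np.random.default_rng(5)

def lhs(n, d):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def objectives(x):
    """Two toy minimization objectives standing in for LCOE and negative profit."""
    f1 = x[:, 0] ** 2 + 0.5 * x[:, 1]
    f2 = (1.0 - x[:, 0]) ** 2 + 0.5 * (1.0 - x[:, 1])
    return np.column_stack([f1, f2])

def pareto_front(F):
    """Indices of points not dominated by any other point (minimization)."""
    keep = []
    for i in range(len(F)):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

X = lhs(200, 2)
F = objectives(X)
idx = pareto_front(F)
print(len(idx), "non-dominated designs out of", len(X))
```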

INFRARED PHOTOMETRIC STUDY OF FIELD POPULATION II STARS

  • LEE SANG-GAK;BRUCE W. CARNEY;ROBERT PROBST
    • Journal of The Korean Astronomical Society
    • /
    • v.30 no.1
    • /
    • pp.1-11
    • /
    • 1997
  • Near-infrared JHK magnitudes are presented for 202 high proper-motion stars. We have observed high proper-motion stars in the near-infrared bands (JHK) using the COB detector on the Kitt Peak 1.3 m, 2.1 m, and 4 m telescopes. The observations and data reduction procedures are described. The infrared color-magnitude diagram and color-color diagrams for the program stars are presented.

Effects of Latin hypercube sampling on surrogate modeling and optimization

  • Afzal, Arshad;Kim, Kwang-Yong;Seo, Jae-won
    • International Journal of Fluid Machinery and Systems
    • /
    • v.10 no.3
    • /
    • pp.240-253
    • /
    • 2017
  • Latin hypercube sampling is a widely used design-of-experiments technique for selecting the design points for simulation, which are then used to construct a surrogate model. The exploration/exploitation properties of surrogate models depend on the size and distribution of design points in the chosen design space. The present study aimed at evaluating the performance characteristics of various surrogate models depending on the Latin hypercube sampling (LHS) procedure (sample size and spatial distribution) for a diverse set of optimization problems. The analysis was carried out for two types of problems: (1) thermal-fluid design problems (optimizations of a convergent-divergent micromixer coupled with pulsatile flow and of boot-shaped ribs), and (2) analytical test functions (six-hump camel back, Branin-Hoo, Hartman 3, and Hartman 6 functions). Three surrogate models, namely response surface approximation, Kriging, and radial basis neural networks, were tested. The important findings are illustrated using box plots. The surrogate models were analyzed in terms of global exploration (accuracy over the domain space) and local exploitation (ease of finding the global optimum point). Radial basis neural networks showed the best overall performance in global exploration characteristics as well as the tendency to find the approximate optimal solution for the majority of the tested problems. To build a surrogate model, it is recommended to use an initial sample size equal to 15 times the number of design variables. The study provides useful guidelines on the effect of initial sample size and distribution on surrogate construction and subsequent optimization using an LHS sampling plan.
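
As a concrete illustration of the workflow the abstract evaluates, the sketch below fits a simple Gaussian radial-basis surrogate (a stand-in for the radial basis neural network) to an LHS design of 15 times the number of design variables on the six-hump camel back test function, then searches the surrogate on a grid for an approximate optimum. The length scale and grid search are illustrative choices, not the paper's settings.

```python
# LHS design -> Gaussian RBF surrogate -> approximate optimum of the
# six-hump camel back function (2 design variables, 30 training points).
import numpy as np

rng = np.random.default_rng(6)

def camel_back(x1, x2):
    return (4 - 2.1 * x1**2 + x1**4 / 3) * x1**2 + x1 * x2 + (-4 + 4 * x2**2) * x2**2

def lhs(n, d):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

lo, hi = np.array([-2.0, -1.0]), np.array([2.0, 1.0])     # design space
X = lo + lhs(30, 2) * (hi - lo)                            # 15 x (2 variables) points
y = camel_back(X[:, 0], X[:, 1])

# Fit the RBF weights: solve (Phi + small ridge) w = y for numerical stability
ell = 0.5
dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
w = np.linalg.solve(np.exp(-dist2 / (2 * ell**2)) + 1e-8 * np.eye(len(X)), y)

def surrogate(P):
    d2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2)) @ w

g1, g2 = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-1, 1, 200))
G = np.column_stack([g1.ravel(), g2.ravel()])
best = G[np.argmin(surrogate(G))]
print("surrogate optimum near", best, "true value there:", camel_back(*best))
```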

THE BRIGHT PART OF THE LUMINOSITY FUNCTION FOR HALO STARS

  • Lee, Sang-Gak
    • Journal of The Korean Astronomical Society
    • /
    • v.28 no.2
    • /
    • pp.139-146
    • /
    • 1995
  • The bright part of the halo luminosity function is derived from a sample of 233 NLTT proper-motion stars, selected with a 220 km/s cutoff in transverse velocity to remove contamination by disk stars and corrected for the stars omitted from the sample by the selection criterion. It is limited to the absolute magnitude range $M_v = 4-8$, but is based on the largest sample of halo stars to date. This luminosity function yields a number density of $2.3 \times 10^{-5}\,\mathrm{pc}^{-3}$ and a mass density of $2.3 \times 10^{-5}\,M_{\odot}\,\mathrm{pc}^{-3}$ for $4 < M_v < 8$ in the solar neighborhood. These are not sufficient for disk stability. The kinematics of the sample stars are $\langle U \rangle = -7$ km/s, $\langle V \rangle = -228$ km/s, and $\langle W \rangle = -8$ km/s, with $(\sigma_U, \sigma_V, \sigma_W) = (192, 84, 94)$ km/s. Their average metallicity is $[\mathrm{Fe/H}] = -1.7 \pm 0.8$. These are typical values for halo stars selected by a high cutoff velocity. We reanalyze the luminosity function for a sample of 57 LHS proper-motion stars. The newly derived luminosity function is consistent with the one derived from the NLTT halo stars, but gives a somewhat smaller number density in the absolute magnitude range covered by the LF from the NLTT stars. The luminosity function based on the LHS stars appears to have a dip in the magnitude range corresponding to the Wielen Dip, but it also shows some fluctuations due to the small number of sample stars.
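
The 220 km/s transverse-velocity cutoff rests on the standard relation between proper motion, distance, and transverse velocity; the relation is not spelled out in the abstract, so it is stated here for clarity:

$$v_T = 4.74\,\mu\,d \ \ \mathrm{km\,s^{-1}},$$

where $\mu$ is the proper motion in arcsec/yr and $d$ the distance in pc; stars with $v_T > 220$ km/s are retained as halo candidates.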

Tolerance Analysis and Optimization for a Lens System of a Mobile Phone Camera (휴대폰용 카메라 렌즈 시스템의 공차최적설계)

  • Jung, Sang-Jin;Choi, Dong-Hoon;Choi, Byung-Lyul;Kim, Ju-Ho
    • Korean Journal of Computational Design and Engineering
    • /
    • v.16 no.6
    • /
    • pp.397-406
    • /
    • 2011
  • Since tolerance allocation in a mobile phone camera manufacturing process greatly affects production cost and the reliability of optical performance, a systematic design methodology for allocating optimal tolerances is required. In this study, we proposed a tolerance optimization procedure for determining tolerances that minimize production cost while satisfying reliability constraints on important optical performance indices. We employed Latin hypercube sampling for evaluating the reliabilities of optical performance, and a function-based sequential approximate optimization technique that can reduce the computational burden and handle numerical noise well in the tolerance optimization process. Using the suggested tolerance optimization approach, the optimized production cost was decreased by 30.3% compared to the initial cost while satisfying the two constraints on the reliabilities of optical performance.
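
A minimal sketch of the reliability-evaluation step described above (not the authors' lens model): tolerance perturbations are sampled with LHS from assumed bands, a toy metric stands in for the optical performance index, and reliability is estimated as the fraction of samples meeting the specification.

```python
# LHS-based reliability estimation with placeholder lens tolerances and a toy
# performance metric; all numbers are assumed for illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def lhs_normal(n, d):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return norm.ppf(u)

tol = np.array([5.0, 5.0, 8.0, 8.0])          # um tolerance bands (assumed), ~3-sigma
Z = lhs_normal(2000, tol.size) * (tol / 3.0)  # sampled element decenters in um

def mtf_proxy(z):
    """Toy stand-in for an optical performance index (higher is better)."""
    return 0.75 - 0.002 * np.sum(z**2, axis=1)

reliability = np.mean(mtf_proxy(Z) >= 0.60)   # P(performance meets the spec)
print("estimated reliability:", reliability)
```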