• Title/Summary/Keyword: weighting variables

Search results: 149

Shape optimization of angled ribs to enhance cooling efficiency (냉각효율 향상을 위한 경사진 리브의 형상최적설계)

  • Kim, Hong-Min;Kim, Kwang-Yong
    • 유체기계공업학회:학술대회논문집
    • /
    • 2003.12a
    • /
    • pp.627-630
    • /
    • 2003
  • This work presents a numerical procedure for optimizing the shape of a three-dimensional channel with angled ribs mounted on one of the walls to enhance turbulent heat transfer. The response surface method is used as the optimization technique, with Reynolds-averaged Navier-Stokes analysis of flow and heat transfer; the SST turbulence model serves as the turbulence closure. The width-to-height ratio of the rib, the rib height-to-channel height ratio, the pitch-to-rib height ratio, and the attack angle of the rib are chosen as design variables. The objective function is defined as a linear combination of heat-transfer and friction-loss related terms with a weighting factor, and the D-optimal experimental design method is used to determine the data points. Optimum channel shapes have been obtained for weighting factors in the range from 0.0 to 1.0 (a minimal sketch of such a weighted objective follows this entry).

  • PDF
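
As a rough illustration of the weighted objective described above, the sketch below combines a heat-transfer term and a friction-loss term into a single scalar; the normalization by reference values and the exact functional form are assumptions, not the paper's formulation.

```python
def weighted_objective(nu, f, nu_ref, f_ref, w):
    """One way to combine heat-transfer and friction-loss terms (hypothetical form).

    nu, f          : Nusselt number and friction factor of a candidate design
    nu_ref, f_ref  : reference values for normalization (assumed available)
    w              : weighting factor in [0.0, 1.0]
    Lower is better: the heat-transfer term enters with a negative sign so that
    increasing Nu improves the objective, while friction loss penalizes it.
    """
    return -nu / nu_ref + w * (f / f_ref)
```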

Representative of Sample and Efficiency of Estimation (표본의 대표성과 추정의 효율성)

  • Kim, Kyu-Seong
    • Survey Research
    • /
    • v.6 no.1
    • /
    • pp.39-62
    • /
    • 2005
  • In this paper we investigate several concepts frequently invoked in sample surveys, namely 'representativeness of the sample' as well as 'consistency', 'unbiasedness', and 'efficiency' in estimation. The first is strongly related to the sampling procedure, including the coverage rate of the survey population, the response rate in an establishment survey, and the recruit rate of the final sample. The others concern both the sampling design and the corresponding estimators simultaneously. Whereas consistency and unbiasedness both rest on a representative sample, efficiency does not depend on it. Representativeness of the sample can be increased by raising the coverage, response, and recruit rates. Consistency may be examined with respect to the variables of interest and the auxiliary variables; the well-known raking-ratio weighting method increases consistency on the auxiliary variables by matching the population size in each cell (a minimal raking sketch follows this entry). Efficiency is not directly related to representativeness, and allocation methods such as proportional and Neyman allocation in stratified sampling, as well as post-stratification, are all methods to increase the efficiency of estimation while the representativeness of the sample is maintained.

  • PDF
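
The raking-ratio weighting mentioned above can be illustrated with a minimal iterative proportional fitting routine; the margins, category labels, and totals below are made up for the example and are not from the paper.

```python
import numpy as np

def rake_weights(base_w, cats_a, cats_b, totals_a, totals_b, iters=50):
    """Minimal raking-ratio (iterative proportional fitting) sketch.

    base_w     : initial design weights, shape (n,)
    cats_a/b   : category index of each unit for two auxiliary margins
    totals_a/b : known population totals for each category of the two margins
    Alternately scales weights so each margin's weighted totals match the
    known population totals. Variable names are illustrative, not from the paper.
    """
    w = base_w.astype(float).copy()
    for _ in range(iters):
        for cats, totals in ((cats_a, totals_a), (cats_b, totals_b)):
            for c, total in enumerate(totals):
                mask = cats == c
                s = w[mask].sum()
                if s > 0:
                    w[mask] *= total / s
    return w

# Tiny usage example with made-up data
w0 = np.ones(6)
sex = np.array([0, 0, 1, 1, 1, 0])   # margin A categories
age = np.array([0, 1, 0, 1, 1, 0])   # margin B categories
w = rake_weights(w0, sex, age, totals_a=[30, 30], totals_b=[25, 35])
```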

A Study on the Influence of Korea Internet Shopping Mall's Customer Satisfaction Factor to Chinese Internet Shoppers (한국 인터넷 쇼핑몰의 고객만족요인이 중국 고객에 미치는 영향에 관한 연구)

  • Cui Ran Hong;Ma Heng Guo;Kim Chang-Eun
    • Journal of the Korea Safety Management & Science
    • /
    • v.8 no.5
    • /
    • pp.193-209
    • /
    • 2006
  • Multiple regression is used to examine the relationship between a set of two or more independent variables and one dependent variable. It provides the information necessary to make predictions of the dependent variable from several independent variables. The multiple regression equation is $y=a+{\beta}_1x_1+{\beta}_2x_2+{\ldots}+{\beta}_kx_k$, where $y$ is the attractiveness, $a$ is the intercept, ${\beta}_1$ is the slope (weighting) of the first variable, ${\beta}_2$ is the slope (weighting) of the second variable, and ${\beta}_k$ is the slope of the $k$th variable. The resulting regression equation of this research is $y=a+{\beta}_1(\text{site's system})+{\beta}_2(\text{customer satisfaction})+{\beta}_3(\text{products})+{\beta}_4(\text{delivery})$, fitted as $y=3.233+0.374(\text{site's system})+0.268(\text{customer satisfaction})+0.17(\text{products})+0.077(\text{delivery})$.
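
A minimal fitting sketch for a regression of this form is shown below; the respondent scores are invented purely to demonstrate how the intercept and the slope weightings would be estimated, and do not reproduce the study's data.

```python
import numpy as np

# Hypothetical respondent scores for the four satisfaction factors (rows = respondents).
# Column order: site's system, customer satisfaction, products, delivery.
X = np.array([
    [4.1, 3.8, 4.0, 3.5],
    [3.2, 3.9, 3.1, 4.0],
    [4.5, 4.2, 3.8, 3.9],
    [2.9, 3.0, 3.3, 3.1],
    [3.8, 4.4, 4.1, 3.6],
    [3.5, 3.2, 3.6, 3.3],
])
y = np.array([4.0, 3.6, 4.3, 3.0, 4.1, 3.4])   # overall attractiveness (made up)

# Ordinary least squares with an intercept column: solves for [a, b1, b2, b3, b4].
X1 = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
intercept, slopes = coef[0], coef[1:]
print(intercept, slopes)   # slopes play the role of the weightings beta_1..beta_4
```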

Integrated calibration weighting using complex auxiliary information (통합 칼리브레이션 가중치 산출 비교연구)

  • Park, Inho;Kim, Sujin
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.3
    • /
    • pp.427-438
    • /
    • 2021
  • Two-stage sampling allows us to estimate population characteristics at both the unit and the cluster level. Given complex auxiliary information, integrated calibration weighting can better reflect the level-wise characteristics as well as the multivariate characteristics between levels. This paper explores the integrated calibration weighting methods of Estevao and Särndal (2006) and Kim (2019) through a simulation study in which the efficiency of the weighting methods is compared on an artificial population data set. Two of the weighting methods are shown to be efficient: single-step calibration at the unit level with stacked individualized auxiliary information, and iterative integrated calibration at each level. Under both methods, the cluster calibrated weight is defined as the average of the calibrated weights of the units within the cluster. Both performed very well in terms of goodness of fit when estimating the population totals of the auxiliary information shared between clusters and units, and showed small relative bias and relative root mean square errors when estimating the population totals of survey variables not included in the calibration adjustments.
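
As a point of reference for calibration weighting in general (this is the standard linear, chi-square-distance calibration, not the integrated methods compared in the paper), a minimal sketch:

```python
import numpy as np

def calibrate_linear(d, X, totals):
    """Minimal linear (chi-square distance) calibration sketch.

    d      : design weights, shape (n,)
    X      : auxiliary variables per unit, shape (n, p)
    totals : known population totals of the auxiliaries, shape (p,)
    Returns weights w = d * (1 + X @ lam) such that X' w = totals exactly.
    """
    d = np.asarray(d, float)
    X = np.asarray(X, float)
    T = X.T @ (d[:, None] * X)                  # p x p calibration matrix
    lam = np.linalg.solve(T, totals - X.T @ d)  # Lagrange multipliers
    return d * (1.0 + X @ lam)

# Tiny usage example with made-up auxiliaries
d = np.full(5, 10.0)
X = np.array([[1, 2.0], [1, 3.5], [1, 1.0], [1, 4.2], [1, 2.8]])
w = calibrate_linear(d, X, totals=np.array([60.0, 150.0]))
print(X.T @ w)   # reproduces the specified totals
```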

Optimal Control of Nuclear Reactors by Digital Computer (전자계산기에 의한 원자로최적제어)

  • 천희영;박귀태
    • 전기의세계
    • /
    • v.26 no.6
    • /
    • pp.66-71
    • /
    • 1977
  • In this paper a method is presented for the optimal control of a nuclear reactor at equilibrium state by means of a digital computer. Using optimal control theory, we formulate the control problem of the reactor as a discrete-time linear regulator problem and define a quadratic performance index. The effects of choosing different performance-index weighting matrices on the feedback gain matrix and on the reactor transient responses are studied for deterministic optimal control with all state variables accessible to measurement (a minimal discrete-time regulator sketch follows this entry).

  • PDF
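
A minimal discrete-time linear-quadratic regulator sketch is given below; the system matrices are illustrative stand-ins, not a reactor model, and only show how changing the state-weighting matrix Q alters the feedback gain.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Minimal discrete-time linear-quadratic regulator sketch.

    Minimizes sum_k x'Qx + u'Ru for x_{k+1} = A x_k + B u_k by iterating the
    discrete Riccati equation to (approximate) convergence; returns the gain K
    for the feedback law u = -K x.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative 2-state system with two choices of the state-weighting matrix Q
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
R = np.array([[1.0]])
for q in (1.0, 100.0):
    Q = np.diag([q, q])
    print(q, dlqr_gain(A, B, Q, R))   # heavier Q yields a more aggressive gain
```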

TRAFFIC-FLOW-PREDICTION SYSTEMS BASED ON UPSTREAM TRAFFIC (교통량예측모형의 개발과 평가)

  • 김창균
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.84-98
    • /
    • 1995
  • Network-based models were developed to predict short-term future traffic volume from current traffic, historical averages, and upstream traffic. It is presumed that upstream traffic volume can be used to predict the downstream traffic in a specific time period. Three models were developed for traffic-flow prediction: a combination of historical average and upstream traffic, a combination of current traffic and upstream traffic, and a combination of all three variables. The three models were evaluated using regression analysis, and the third model was found to provide the best prediction for the analyzed data. In order to balance the variables appropriately according to the present traffic condition, a heuristic adaptive weighting system was devised based on the relationships between the beginning period of prediction and the previous periods (a minimal weighted-combination sketch follows this entry). The developed models were applied to 15-minute freeway data obtained from regular induction loop detectors. The prediction models were shown to be capable of producing reliable and accurate forecasts under congested traffic conditions, and they perform better in the 15-minute range than in the 30- to 45-minute ranges. It was also found that the combined models usually produce more consistent forecasts than the historical average.

  • PDF
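
A minimal sketch of a weighted combination of the three predictors is shown below; the weighting heuristic and the traffic volumes are assumptions for illustration, not the adaptive scheme developed in the paper.

```python
def combined_forecast(current, historical, upstream, w_cur, w_hist, w_up):
    """Hypothetical weighted combination of the three predictors described above.

    current    : volume observed in the most recent period
    historical : historical average volume for the forecast period
    upstream   : volume observed upstream one period earlier
    w_*        : non-negative weights; normalized here so they sum to one
    """
    total = w_cur + w_hist + w_up
    return (w_cur * current + w_hist * historical + w_up * upstream) / total

# Example: weight current conditions more heavily when recent flow deviates
# strongly from the historical average (a simple heuristic, not the paper's).
current, historical, upstream = 1250.0, 1100.0, 1300.0
deviation = abs(current - historical) / historical
w_cur = 1.0 + deviation
print(combined_forecast(current, historical, upstream, w_cur, 1.0, 1.0))
```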

A Review on the Results of Adjusting Weight in Vulnerability Analysis of Climate Change Driven Disaster - Focused on Sea-level Rise - (도시 기후변화 재해취약성 분석방법의 가중치 조정에 따른 결과 비교 검토 - 해수면 상승 재해를 중심으로 -)

  • Kim, Jisook;Kim, Hoyong
    • Journal of Environmental Impact Assessment
    • /
    • v.26 no.3
    • /
    • pp.171-180
    • /
    • 2017
  • The vulnerability analysis of climate change driven disaster has been used as an institutional framework for urban disaster-prevention policies since 2012. However, some problems have arisen from the structure of the vulnerability analysis, such as overweighted variables and the duplicated application of variables with similar meaning. The goal of this study is to examine the differences between the results of the method in the current guideline and those of a weight-equalization method. To this end, we examine the current structural framework of the vulnerability analysis and perform an empirical analysis. As a result, the extent and magnitude of vulnerability showed different spatial patterns depending on the weighting method: the standardized weighting method identified relatively wider vulnerable areas than the pre-existing method that follows the current instruction manual. To apply the results of vulnerability analysis to the urban planning process for disaster prevention, this study suggests that the reliability of the results should be ensured by improving the analytical framework and through a detailed review of the results.
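
As a simple illustration of how the weighting choice changes a composite vulnerability score (the indicators and weights below are hypothetical, not those of the guideline), consider:

```python
import numpy as np

def vulnerability_scores(indicators, weights):
    """Weighted sum of (already standardized) indicator values per spatial unit.

    indicators : array of shape (n_units, n_indicators), values scaled to [0, 1]
    weights    : one weight per indicator, normalized internally
    Purely illustrative; the guideline's actual indicators and weights are not shown here.
    """
    w = np.asarray(weights, float)
    return indicators @ (w / w.sum())

# Comparing an uneven weighting against equalized weights for made-up indicators
ind = np.array([[0.9, 0.2, 0.4], [0.3, 0.8, 0.7], [0.5, 0.5, 0.5]])
print(vulnerability_scores(ind, [3.0, 1.0, 1.0]))   # uneven weights (hypothetical)
print(vulnerability_scores(ind, [1.0, 1.0, 1.0]))   # equalized weights
```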

Application of a Statistical Interpolation Method to Correct Extreme Values in High-Resolution Gridded Climate Variables (고해상도 격자 기후자료 내 이상 기후변수 수정을 위한 통계적 보간법 적용)

  • Jeong, Yeo min;Eum, Hyung-Il
    • Journal of Climate Change Research
    • /
    • v.6 no.4
    • /
    • pp.331-344
    • /
    • 2015
  • A long-term gridded historical data set at 3 km spatial resolution has been generated for practical regional applications such as hydrologic modelling. However, overly high or low values have been found at some grid points with complex topography or a sparse observational network. In this study, the Inverse Distance Weighting (IDW) method was applied to smooth the overly predicted values of the Improved GIS-based Regression Model (IGISRM), producing the IDW-IGISRM grid data at the same resolution for daily precipitation, maximum temperature, and minimum temperature from 2001 to 2010 over South Korea. We tested various effective distances in the IDW method to detect an optimal distance that provides the highest performance. IDW-IGISRM was compared with IGISRM with regard to spatial patterns and quantitative performance metrics over 243 AWS observational points and four selected stations showing the largest biases. Regarding the spatial pattern, IDW-IGISRM reduced irrational, overly predicted values, i.e., it produced smoother spatial maps than IGISRM for all variables. In addition, all quantitative performance metrics were improved by IDW-IGISRM: the correlation coefficient (CC) and the Index of Agreement (IOA) increased by up to 11.2% and 2.0%, respectively, while the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) were reduced by up to 5.4% and 15.2%, respectively. At the four selected stations, the improvement was even more considerable. These results indicate that IDW-IGISRM can improve the predictive performance of IGISRM, consequently providing more reliable high-resolution gridded data for assessment, adaptation, and vulnerability studies of climate change impacts.
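
A minimal generic IDW sketch is given below; the power parameter, coordinates, and values are illustrative and do not reflect the paper's effective-distance tuning.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Minimal Inverse Distance Weighting sketch (generic form, not the paper's setup).

    xy_known : (n, 2) coordinates of known points
    values   : (n,) values at the known points
    xy_query : (m, 2) coordinates to interpolate
    power    : distance exponent; 2 is a common default
    """
    xy_known = np.asarray(xy_known, float)
    xy_query = np.asarray(xy_query, float)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps avoids division by zero at known points
    w /= w.sum(axis=1, keepdims=True)
    return w @ np.asarray(values, float)

# Example: smooth a suspiciously high grid value using its neighbors
pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0), (3.0, 3.0)]
vals = [12.1, 11.8, 12.4, 55.0]           # last value is an artificial outlier
print(idw(pts, vals, [(1.5, 1.5)]))
```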

Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

  • Kassim, Moath;Heo, Gyunyoung
    • Nuclear Engineering and Technology
    • /
    • v.50 no.4
    • /
    • pp.589-598
    • /
    • 2018
  • To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time dependency. Unhealthy sensors can badly influence the estimation of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect anomalies in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first adds another consistency factor, called trend consistency (TC), to account for preserving any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second replaces the error-bound/accuracy-based weighting factor ($W^a$) with a weighting factor based on the Euclidean distance ($W^d$); and the third applies $W^d$, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used for validation and verification. Results showed that the second and third modified approaches lead to a reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors affected by long, continuous ranges of missing data, and (3) identify healthy sensors.
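
The sketch below shows a generic distance-based weighting of redundant readings, in the spirit of the Euclidean-distance weighting factor discussed above; it is not the paper's PSA formulation, and the readings are invented.

```python
import numpy as np

def distance_weighted_estimate(readings, eps=1e-9):
    """Illustrative distance-based weighting of redundant sensor readings.

    Not the paper's PSA formulation: each reading is weighted by the inverse of
    its distance to the median of the group, so outlying (possibly drifted)
    sensors contribute less to the estimated process value.
    """
    r = np.asarray(readings, float)
    d = np.abs(r - np.median(r))
    w = 1.0 / (d + eps)
    w /= w.sum()
    return float(w @ r), w

# Four redundant pressure readings; the last one looks drifted
estimate, weights = distance_weighted_estimate([3.02, 3.01, 3.03, 3.40])
print(estimate, weights)
```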

Forming Weighting Adjustment Cells for Unit-Nonresponse in Sample Surveys (표본조사에서 무응답 가중치 조정층 구성방법에 따른 효과)

  • Kim, Young-Won;Nam, Si-Ju
    • Communications for Statistical Applications and Methods
    • /
    • v.16 no.1
    • /
    • pp.103-113
    • /
    • 2009
  • Weighting is a common form of unit-nonresponse adjustment in sample surveys where entire questionnaires are missing due to noncontact or refusal to participate. A common approach computes the nonresponse weight as the inverse of the response rate within adjustment cells formed from covariate information. In this paper, we consider the efficiency and robustness of nonresponse weighting adjustments based on the response propensity and the predictive mean. In a simulation study based on the 2000 Fishery Census in Korea, the root mean squared errors of the various ways of forming nonresponse adjustment cells are investigated. The simulation results suggest that the most important feature of variables for inclusion in the weighting adjustment is that they are predictive of the survey outcomes; though useful, predicting the propensity to respond is secondary. The results also suggest that adjustment cells based on a joint classification by the response propensity and a predictor of the outcomes are productive.
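
A minimal sketch of inverse-response-rate weighting within adjustment cells is given below; the cell labels, weights, and response indicators are made up for illustration and are not from the paper.

```python
import numpy as np

def nonresponse_adjusted_weights(design_w, cell, responded):
    """Minimal sketch of inverse-response-rate weighting within adjustment cells.

    design_w  : base design weights, shape (n,)
    cell      : adjustment-cell label per sampled unit
    responded : boolean array, True if the unit returned a questionnaire
    Respondents in each cell have their weights inflated by the inverse of that
    cell's (weighted) response rate; nonrespondents get weight zero.
    """
    design_w = np.asarray(design_w, float)
    responded = np.asarray(responded, bool)
    w = np.where(responded, design_w, 0.0)
    for c in np.unique(cell):
        mask = np.asarray(cell) == c
        rate = design_w[mask & responded].sum() / design_w[mask].sum()
        w[mask & responded] /= rate
    return w

# Tiny example: two cells with different response rates
w = nonresponse_adjusted_weights([10, 10, 10, 10], ["a", "a", "b", "b"],
                                 [True, False, True, True])
print(w)   # [20.  0. 10. 10.]
```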