• Title/Summary/Keyword: exponential weighting (지수 함수적 가중)

Search Results: 29

A New Information Index of Axiomatic Design for Robustness (강건성을 고려한 공리적 설계의 새로운 정보 지수)

  • Hwang, Kwang-Hyeon;Park, Gyung-Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.26 no.10
    • /
    • pp.2073-2081
    • /
    • 2002
  • In product design and manufacturing, axiomatic design provides a systematic approach to the decision-making process. Two axioms are defined: the Independence Axiom and the Information Axiom. The Information Axiom states that the best design among those that satisfy the Independence Axiom is the one with the least information content; in other words, the best design is the one with the highest probability of success. The Taguchi robust design, on the other hand, uses a two-step process: first "reduce variability," then "adjust the mean onto the target." These two steps can be interpreted as a problem with two FRs (functional requirements); therefore, the Taguchi method should be used on the basis of satisfying the Independence Axiom. The Taguchi method and axiomatic design share the goal of inducing a robust design, but they differ as well: the Taguchi method has no design range, and the probability of success alone may not be enough to express robustness. Our purpose is to find the design that has both the highest probability of success and the smallest variation. A new index is proposed to satisfy these conditions. The index is defined as the product of a robustness weight function and the probability density function; the robustness weight function has its maximum at the target value and is zero at the boundary of the design range. The validity of the index is demonstrated through various examples.
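The index described above can be sketched numerically. The sketch below is a minimal illustration, assuming a linear ("triangular") robustness weight that is 1 at the target and 0 at the design-range boundaries; the abstract fixes only those two properties, so the paper's actual weight function may differ:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def robustness_weight(x, lower, upper, target):
    """Illustrative weight: 1 at the target, falling linearly to 0 at the
    design-range boundaries (only these two properties come from the paper)."""
    w = np.where(x <= target,
                 (x - lower) / (target - lower),
                 (upper - x) / (upper - target))
    return np.clip(w, 0.0, None)

def robustness_index(pdf, lower, upper, target, n=10_001):
    """Integrate weight(x) * pdf(x) over the design range."""
    x = np.linspace(lower, upper, n)
    return float(trapezoid(robustness_weight(x, lower, upper, target) * pdf(x), x))

# Two designs with the same mean on target but different variation:
tight = robustness_index(norm(10, 0.5).pdf, lower=8, upper=12, target=10)
loose = robustness_index(norm(10, 2.0).pdf, lower=8, upper=12, target=10)
# The lower-variance design scores higher on the combined index.
```

Because the weight vanishes at the design-range boundary, a design whose probability mass leaks toward the boundary is penalized even if its probability of success inside the range is high, which is exactly the behavior the abstract asks of the new index.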

An Empirical Analysis of International Technology Innovation Spillover Channels (국가간 기술혁신 파급경로에 관한 실증분석)

  • 정동진;김한주;김상태;조상섭
    • Proceedings of the Korea Technology Innovation Society Conference
    • /
    • 2004.11a
    • /
    • pp.101-113
    • /
    • 2004
  • This study empirically analyzes the role of international trade in determining technology innovation spillover channels. For this purpose, using data covering 15 OECD countries from 1980 to 2003, we constructed each country's domestic R&D stock and the R&D stocks of its trading partners, both known to be important determinants of domestic technological innovation, and computed imported R&D stocks using a trade-weighting index obtained by dividing bilateral exports and imports by economic size. The innovation-spillover role of these variables was then analyzed using recently proposed non-stationary panel techniques. The results can be summarized as follows. First, the variables under analysis were found to be non-stationary. Second, however, the variables were found to be cointegrated, i.e., in a long-run equilibrium relationship. Third, panel cointegration coefficients were estimated to identify the direction and magnitude of international technology spillover channels, but the empirical results varied, and sometimes conflicted, depending on the specified functional form. Therefore, the differing results and implications of previous studies [Coe et al., 1995; Keller, 1998; Kao et al., 1999; Funk, 2001] may arise not only from differences in the selected variables but also from differences in the functional form specified for the innovation channel. The implication of these findings is that in analyzing international technology spillovers, the selection of mediating variables matters, but so does the empirical analysis of how the chosen explanatory variables are interrelated, that is, spillover-path analysis. Such analysis should draw implications from spillover-path models based on nonlinear specifications as well as the conventional linear ones.


Groundwater level behavior analysis using kernel density estimation (비모수 핵밀도 함수를 이용한 지하수위 거동분석)

  • Jeong, Ji Hye;Kim, Jong Wook;Lee, Jeong Ju;Chun, Gun Il
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2017.05a
    • /
    • pp.381-381
    • /
    • 2017
  • Climate change is known to increase the frequency and variability of extreme hydrologic events such as floods and droughts. Accordingly, to monitor and mitigate droughts, whose frequency and severity have increased relative to previous years, the Korean government, including the Ministry of Public Safety and Security, jointly provides drought information with related agencies for municipal, industrial, and agricultural water. The Ministry of Land, Infrastructure and Transport and the Ministry of Environment provide drought analysis information for municipal and industrial water by analyzing water supply and demand both in areas served by regional and local waterworks and in unserved areas relying on village waterworks and small-scale water supply facilities. However, drought forecasts and warnings for unserved areas are produced using the meteorological drought index SPI6, owing to the absence of reference water-source information. Meteorological drought conditions differ from the drought actually experienced through water shortage, and since most unserved areas rely on groundwater as their main water source, drought information based on SPI6 cannot adequately reflect the actual water supply situation. This study therefore aimed to develop a drought monitoring technique that reflects groundwater levels, the main water source of unserved areas; since estimating available groundwater volume is practically difficult, drought monitoring was approached through statistical analysis of water-level behavior. After selecting national groundwater monitoring stations with more than 10 years of records and high correlation with rainfall, daily water-level observations were separated by month, and kernel density estimation was applied to each month (January to December) to derive monthly groundwater-level distributions. For each station's observed level distribution, drought stages were classified by percentile: 25%-100% normal, 10%-25% caution, 5%-10% severe drought, and below 5% very severe. Water levels corresponding to each percentile were computed from the estimated kernel density and quantile function, and the average water level of the most recent 10 days was taken as the current level to classify drought severity. The results were spatially distributed from the stations by inverse distance weighting, and to reflect hydrologic and hydrogeologic homogeneity, a nationwide groundwater-level drought classification map was produced by overlaying watershed and hydrogeologic maps in spatial operations. To analyze correlation with actual drought conditions, drought periods confirmed through news reports were compared and verified against groundwater drought periods (below the 25th percentile) using ROC (receiver operating characteristic) analysis.
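The kernel-density-plus-quantile classification described above can be sketched as follows; the station record here is synthetic, and the band edges follow the percentile thresholds given in the abstract:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic monthly water-level record for one hypothetical station (m).
rng = np.random.default_rng(0)
levels = rng.normal(12.0, 0.8, size=300)

# Fit the kernel density and tabulate its CDF on a grid, which yields both
# the quantile function and the percentile of any observed level.
kde = gaussian_kde(levels)
grid = np.linspace(levels.min() - 3, levels.max() + 3, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]

def level_at_percentile(p):
    """Water level at cumulative probability p (inverse CDF by interpolation)."""
    return float(np.interp(p, cdf, grid))

def drought_stage(level):
    """Percentile bands from the study: <5% very severe, 5-10% severe drought,
    10-25% caution, otherwise normal."""
    p = float(np.interp(level, grid, cdf))
    if p < 0.05:
        return "very severe"
    if p < 0.10:
        return "severe drought"
    if p < 0.25:
        return "caution"
    return "normal"

current = levels[-10:].mean()   # stand-in for the 10-day average level
stage = drought_stage(current)
```

In the study this classification is done per station and per calendar month; the sketch collapses that to a single sample for clarity.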


Providing the combined models for groundwater changes using common indicators in GIS (GIS 공통 지표를 활용한 지하수 변화 통합 모델 제공)

  • Samaneh, Hamta;Seo, You Seok
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.3
    • /
    • pp.245-255
    • /
    • 2022
  • Evaluating the qualitative status of water resources by using various indicators, one of the most prevalent methods for optimal management of water bodies, is necessary for a regular plan for the protection of water quality. In this study, zoning maps were developed on a yearly basis by collecting, reviewing, validating, and statistically testing the qualitative parameters' data of Iranian aquifers from 1995 to 2020 in a Geographic Information System (GIS), based on the Inverse Distance Weighting (IDW), Radial Basis Function (RBF), and Global Polynomial Interpolation (GPI) methods and on Kriging and Co-Kriging in three variants: simple, ordinary, and universal. The model with minimum uncertainty and zoning error, and the closest agreement between ASE and RMSE values, was selected as the optimum model. The selected model was then zoned using Schoeller and Wilcox diagrams. A general evaluation of the groundwater situation of Iran revealed that 59.70% and 39.86% of the resources fall into the class unsuitable for agricultural and drinking purposes, respectively, indicating a groundwater quality crisis in Iran. Finally, to validate the extracted results, spatial changes in water quality were evaluated using the Groundwater Quality Index (GWQI), indicating high sensitivity of the aquifers to small quantitative changes in water level, in addition to a severe shortage of groundwater reserves in Iran.
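Of the interpolation methods compared above, IDW is simple enough to sketch directly; the station coordinates and concentration values below are hypothetical:

```python
import numpy as np

def idw(stations, values, queries, power=2.0):
    """Inverse Distance Weighting: each query point receives the weighted
    average of station values, with weights 1 / distance**power."""
    stations = np.asarray(stations, float)
    values = np.asarray(values, float)
    out = []
    for q in np.atleast_2d(np.asarray(queries, float)):
        d = np.linalg.norm(stations - q, axis=1)
        if np.any(d == 0.0):            # query coincides with a station
            out.append(values[d == 0.0][0])
            continue
        w = 1.0 / d ** power
        out.append(float(np.sum(w * values) / np.sum(w)))
    return np.array(out)

# Hypothetical stations with a measured quality parameter:
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
conc = [1.0, 3.0, 5.0]
est = idw(stations, conc, [(5.0, 5.0)])
```

IDW is an exact interpolator (it reproduces the station values at the stations) and never extrapolates outside the observed value range, which is one reason it is a common baseline against Kriging in studies like this one.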

D.C. Motor Speed Control Using Explicit M.R.A.C. Algorithms (Explicit M.R.A.C. 알고리즘을 이용한 직류 전동기 속도 제어)

  • Kim, Jong-Hwan;Park, Jun-Ryeol;Choe, Gye-Geun
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.20 no.6
    • /
    • pp.11-17
    • /
    • 1983
  • In this paper, the application of explicit M.R.A.C. algorithms to D.C. motor speed control using a microprocessor is studied. The adaptation algorithms are derived from the gradient method and the exponentially weighted least squares (E.W.L.S.) method. In order to minimize the computational instability of the E.W.L.S. method, an adaptation algorithm based on UDU^T factorization is developed, and because of the characteristics of the D.C. motor (dead-zone phenomenon), the S.M. gradient-type algorithm is also improved from the gradient-type algorithm. Computer simulations and experiments show that these algorithms adapt well to rapid changes of the reference input and the load.
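A minimal sketch of the exponentially weighted least squares recursion that such an adaptation algorithm builds on; the UDU^T factorization used in the paper for numerical stability is omitted here, and the first-order plant and its coefficients are illustrative:

```python
import numpy as np

def ewls_step(theta, P, phi, y, lam=0.95):
    """One recursion of exponentially weighted least squares: the forgetting
    factor lam (0 < lam <= 1) discounts older samples geometrically."""
    phi = phi.reshape(-1, 1)
    gain = (P @ phi) / (lam + float(phi.T @ P @ phi))
    err = y - float(phi.T @ theta.reshape(-1, 1))
    theta = theta + gain.ravel() * err
    P = (P - gain @ (phi.T @ P)) / lam
    return theta, P

# Identify an illustrative first-order plant y[t] = a*y[t-1] + b*u[t-1].
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y = [0.0]
for t in range(1, 200):
    y.append(a_true * y[-1] + b_true * u[t - 1])

theta, P = np.zeros(2), 1000.0 * np.eye(2)   # large P = uninformative start
for t in range(1, 200):
    theta, P = ewls_step(theta, P, np.array([y[t - 1], u[t - 1]]), y[t])
```

With a forgetting factor below 1, the estimator keeps re-weighting recent data, which is what lets the controller track the rapid reference and load changes mentioned in the abstract.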


A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.117-130
    • /
    • 2005
  • In both the deterministic User Optimal Traffic Assignment Model (UOTAM) and the stochastic UOTAM, travel time, the major criterion for traffic loading over a transportation network, is defined as the sum of link travel time and turn delay at intersections. In this assignment method, drivers' actual route perception processes and choice behaviors, which can be main explanatory factors, are not sufficiently considered and may therefore result in biased traffic loading. Even though there have been some efforts in stochastic UOTAM to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, these have not been fundamental solutions, but trials based on the unreasonable assumptions of the Probit model (a truncated travel time distribution function) and the Logit model (independence of inter-link congestion). The critical reason why deterministic UOTAM has not been able to reflect route perception cost is that the perception cost takes a different value for each origin, destination, and path connecting them. Finding the optimum route between an O-D pair therefore raises a route enumeration problem, in which all routes connecting the pair must be compared; this causes computational failure because the number of paths to enumerate explodes as the transportation network grows. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without route enumeration between an O-D pair. For this purpose, this study defines a link as the smallest unit of a path. Since each link can then be treated as a path, in the two-link searching process of the link-label-based optimum path algorithm, route enumeration between an O-D pair is reduced to finding the optimum path over all links. The computational burden of this method is no more than that of the link-label-based optimum path algorithm. 
Each different perception cost is embedded as a quantitative value generated by comparing the sub-path from the origin to the searching link with the searched link.
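The link-label idea, carrying shortest-path labels on links rather than nodes so that a cost can be attached to each link-to-link transition, can be sketched with a Dijkstra-style search. The `perception` function below is a hypothetical stand-in for the paper's sub-path comparison cost:

```python
import heapq
import itertools

def link_label_shortest_path(links, source, target):
    """Dijkstra-style search where labels live on links rather than nodes.
    `links` maps (u, v) -> travel time."""
    def perception(prev_link, next_link):
        # Hypothetical transition cost; the paper derives this from a
        # sub-path comparison, which is not reproduced here.
        return 0.0 if prev_link is None else 0.1

    out = {}
    for (u, v), t in links.items():
        out.setdefault(u, []).append((v, t))

    counter = itertools.count()          # tie-breaker for the heap
    heap = [(0.0, next(counter), source, None)]
    best = {(source, None): 0.0}
    while heap:
        cost, _, node, in_link = heapq.heappop(heap)
        if node == target:
            return cost                   # first pop of the target is optimal
        if cost > best.get((node, in_link), float("inf")):
            continue
        for v, t in out.get(node, ()):
            nxt_link = (node, v)
            c = cost + t + perception(in_link, nxt_link)
            if c < best.get((v, nxt_link), float("inf")):
                best[(v, nxt_link)] = c
                heapq.heappush(heap, (c, next(counter), v, nxt_link))
    return float("inf")

# Tiny network: A->B->C (1 + 1 + one transition cost) beats direct A->C (3).
links = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 3.0}
shortest = link_label_shortest_path(links, "A", "C")
```

Because the label state is (node, incoming link), the search compares only pairs of adjacent links, never whole enumerated routes, which is the complexity property the abstract claims.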

Comparison of Groundwater Recharge between HELP Model and SWAT Model (HELP 모형과 SWAT 모형의 지하수 함양량 비교)

  • Lee, Do-Hun;Kim, Nam-Won;Chung, Il-Moon
    • Journal of Korea Water Resources Association
    • /
    • v.43 no.4
    • /
    • pp.383-391
    • /
    • 2010
  • The groundwater recharge was assessed by using both the SWAT and HELP models in the Bocheong-cheon watershed. The SWAT model is a comprehensive surface and subsurface model, but it lacks a physical basis for simulating the soil water percolation process. The HELP model, which has a drawback in simulating the subsurface lateral flow and groundwater flow components, can simulate the soil water percolation process by considering the unsaturated flow effect of soil layers. The SWAT model has been successfully applied for estimating groundwater recharge in a number of watersheds in Korea, while the application of the HELP model has been very limited. A subsurface lateral flow parameter was proposed in order to account for the subsurface lateral flow effect in the HELP model, and the groundwater recharge was simulated by a modified exponential decay weighting function in the HELP model. The simulation results indicate that the recharge of the HELP model depends significantly on the value of the lateral flow parameter. The recharge errors between SWAT and HELP are smallest when the lateral flow parameter is about 0.6, and the recharge rates of the two models are reasonably comparable for daily, monthly, and yearly time scales. The HELP model is useful for estimating groundwater recharge at the watershed scale because its structure and input parameters are simpler than those of the SWAT model. The accuracy of assessing groundwater recharge might be improved by the concurrent application of the SWAT and HELP models.

Estimation of Groundwater Recharge by Considering Runoff Process and Groundwater Level Variation in Watershed (유역 유출과정과 지하수위 변동을 고려한 분포형 지하수 함양량 산정방안)

  • Chung, Il-Moon;Kim, Nam-Won;Lee, Jeong-Woo
    • Journal of Soil and Groundwater Environment
    • /
    • v.12 no.5
    • /
    • pp.19-32
    • /
    • 2007
  • In Korea, methods of estimating groundwater recharge can generally be subdivided into three types: baseflow separation by means of the groundwater recession curve, water budget analysis based on a lumped conceptual watershed model, and the water table fluctuation (WTF) method using data from groundwater monitoring wells. However, the groundwater recharge rate shows spatio-temporal variability due to climatic conditions, land use, and hydrogeological heterogeneity, so these methods have various limits in dealing with such characteristics. To overcome these limitations, we present a new method of estimating recharge based on the water balance components from SWAT-MODFLOW, an integrated surface water-groundwater model. Groundwater levels close to the stream have dynamics similar to streamflow, whereas levels further upslope respond to precipitation with a delay. As these behaviours are related to the physical process of recharge, the time delay in aquifer recharge once water exits the soil profile must be accounted for to represent these features. In SWAT, a single linear reservoir storage module with an exponential decay weighting function is used to compute the recharge from soil to aquifer on a given day. However, this module has limitations in expressing recharge variation when the delay time is long, and the transient recharge trend does not match the groundwater table time series; therefore, a multi-reservoir storage routing module, which represents a more realistic time delay through the vadose zone, is newly suggested in this study. In this module, the parameter related to the delay time is optimized by checking the correlation between the simulated recharge and observed groundwater levels. The final step of the procedure is to compare the simulated groundwater table with the observed one, as well as the simulated watershed runoff with the observed one. 
This method is applied to the Mihocheon watershed in Korea for the purpose of testing the procedure of properly estimating the spatio-temporal groundwater recharge distribution. As the newly suggested method combines the effectiveness of a watershed model with the accuracy of the WTF method, the estimated daily recharge rate is an advanced quantity reflecting the heterogeneity of hydrogeology, climatic conditions, and land use, as well as the physical behaviour of water in soil layers and aquifers.
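SWAT's single-reservoir recharge weighting, and the cascade generalization this study suggests, can be sketched as follows. The cascade is an illustrative interpretation of "multi-reservoir storage routing" (n identical linear reservoirs in series); the study's actual routing scheme may differ:

```python
import numpy as np

def swat_recharge(seep, delta):
    """SWAT's single linear reservoir: daily recharge is an exponentially
    decaying weighting of seepage,
    rchrg[i] = (1 - a)*seep[i] + a*rchrg[i-1], a = exp(-1/delta),
    where delta is the delay time in days."""
    a = np.exp(-1.0 / delta)
    rchrg = np.zeros(len(seep))
    for i in range(1, len(seep)):
        rchrg[i] = (1.0 - a) * seep[i] + a * rchrg[i - 1]
    return rchrg

def cascade_recharge(seep, delta, n=3):
    """Sketch of multi-reservoir routing: pass the seepage through n
    identical linear reservoirs in series, which delays and smooths the
    recharge peak more than a single reservoir does."""
    flow = np.asarray(seep, float)
    for _ in range(n):
        flow = swat_recharge(flow, delta)
    return flow

pulse = np.zeros(60)
pulse[1] = 10.0                      # a single day of percolation
single = swat_recharge(pulse, delta=5.0)
multi = cascade_recharge(pulse, delta=5.0, n=3)
# The single reservoir peaks immediately and decays; the cascade response
# first rises, so its peak arrives later, mimicking a longer vadose zone.
```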

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detection of market timing means determining when to buy and sell so as to profit from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it generates no trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis because the rough set only accepts categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, through literature review or interviews with experts. Minimum entropy scaling implements an algorithm that recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds candidate cuts by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning. 
Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
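Equal frequency scaling, the first of the four discretization methods above, can be sketched directly; the sample values are illustrative:

```python
import numpy as np

def equal_frequency_cuts(values, n_bins):
    """Equal frequency scaling: place cuts at quantiles so that roughly the
    same number of samples falls into each interval."""
    probs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return np.quantile(values, probs)

def discretize(values, cuts):
    """Replace each numeric value with the index of its interval."""
    return np.searchsorted(cuts, values, side="right")

x = np.array([1, 2, 2, 3, 7, 8, 9, 20, 25, 30])
cuts = equal_frequency_cuts(x, 2)     # a single cut at the median
codes = discretize(x, cuts)           # 5 samples land in each interval
```

Note how the cut adapts to the data's density rather than its range: an equal-width scheme over [1, 30] would instead crowd most of these samples into the first interval.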