• Title/Summary/Keyword: utility theory


A Proposal for Simplified Velocity Estimation for Practical Applicability (실무 적용성이 용이한 간편 유속 산정식 제안)

  • Tai-Ho Choo; Jong-Cheol Seo; Hyeon-Gu Choi; Kun-Hak Chun
    • Journal of Wetlands Research / v.25 no.2 / pp.75-82 / 2023
  • Data from measuring the flow of streams are used as important basic data for the development and maintenance of water resources, and many experts are conducting research to make more accurate measurements. In particular, because monsoon rains and heavy rainfall in Korea are concentrated in summer, floods occur frequently. It is therefore necessary to measure the flow as accurately as possible during a flood in order to predict and prevent flooding. The U.S. Geological Survey (USGS) introduces the 1-, 2-, and 3-point methods using a flow meter as one way to measure the average flow velocity. However, it is difficult to calculate the average flow accurately with the existing 1-, 2-, and 3-point methods alone. This paper proposes a new, more accurate 1-, 2-, and 3-point method formula that utilizes the probabilistic entropy concept. This is considered a highly practical study that can supplement the limitations of existing measurement methods. Coleman data and flume data were used to demonstrate the utility of the proposed formula. In the case of the flume data, the existing USGS 1-point method showed an average error of 7.6% against the measured values, the 2-point method 8.6%, and the 3-point method 8.1%. In the case of the Coleman data, the 1-point method showed an average error rate of 5%, the 2-point method 5.6%, and the 3-point method 5.3%. On the other hand, the proposed formula using the concept of entropy reduced the error rate by about 60% compared to the existing method, with the flume data averaging 4.7% for the 1-point method, 5.7% for the 2-point method, and 5.2% for the 3-point method. In addition, the Coleman data showed an average error of 2.5% for the 1-point method, 3.1% for the 2-point method, and 2.8% for the 3-point method, reducing the error rate by about 50% compared to the existing method. This study can calculate the average flow more accurately than the existing 1-, 2-, and 3-point methods, which can be useful in many ways, including future river disaster management, design, and administration. A schematic sketch of the point-method and entropy relations follows below.
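
As a point of reference for the measurement conventions mentioned in this abstract, the following is a minimal sketch of the classical USGS 1-, 2-, and 3-point averaging rules together with a Chiu-type entropy relation between mean and maximum velocity. The entropy relation shown is a commonly used form assumed here for illustration; it is not necessarily the specific formula proposed in the paper, and the numbers in the usage example are invented.

```python
import numpy as np

def mean_velocity_point_methods(v02, v06, v08):
    """Classical USGS point methods for the mean velocity in a vertical.
    v02, v06, v08: velocities measured at 0.2, 0.6 and 0.8 of the depth
    (measured downward from the water surface)."""
    one_point = v06                              # 1-point method
    two_point = (v02 + v08) / 2.0                # 2-point method
    three_point = (v02 + 2.0 * v06 + v08) / 4.0  # 3-point method
    return one_point, two_point, three_point

def chiu_mean_velocity(u_max, M):
    """Chiu-type entropy relation (assumed form, not the paper's formula):
    mean velocity as a fraction of the maximum velocity, controlled by the
    entropy parameter M."""
    phi = np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M
    return phi * u_max

# Toy usage with hypothetical measurements (m/s)
print(mean_velocity_point_methods(0.95, 0.80, 0.55))
print(chiu_mean_velocity(u_max=1.0, M=2.2))
```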

Discount Presentation Framing & Bundle Evaluation: The Effects of Consumption Benefit and Perceived Uncertainty of Quality (묶음제품 가격 할인 제시 프레이밍 효과: 지각된 소비 혜택과 품질 불확실성의 영향을 중심으로)

  • Im, Meeja
    • Asia Marketing Journal / v.14 no.1 / pp.53-81 / 2012
  • Constructing attractive bundle offers depends on more than an understanding of the distribution of consumer preferences. Consumers are also sensitive to the framing of price information in a bundle offer. In classical economic theory, consumers' utility should not change as long as the total price paid stays the same. However, even when total prices are identical, consumers' preferences toward a bundle product can differ depending on the format of the price presentation and the locus of the price discount. A weighted additive model predicts that the impact of a price discount on the overall evaluation of the bundle will be greater when the discount is assigned to the more important product in the bundle (Yadav 1995). Meanwhile, a reference-dependent model asserts that it is better to assign a price discount to a tie-in component that has a negative valuation at its current offer price than to a focal product that has a positive valuation at its current offer price (Janiszewski and Cunha 2004). This paper expands previous research on price discount presentation formats, investigating the reasons for the mixed results of prior research and presenting new mechanisms for the price discount framing effect. Prior research has assumed that bundling is used to sell a tie-in component with an offer price above the consumer's reference price together with a focal product whose offer price equals its reference price (e.g., Janiszewski and Cunha 2004). However, this study suggests that a bundling strategy can be used to increase a product's attractiveness through the synergy between components even when the offer prices of the bundle components equal their reference prices. In this context, this study employed various realistic bundle sets in which offer prices equaled reference prices in the experiment. Hamilton and Srivastava (2008) demonstrated that, when evaluating different partitions of the same total price, consumers prefer partitions in which the price of the high-benefit component is higher. This study determined that their mechanism can be applied to price discount presentation formats. This study hypothesized that the price discount framing effect depends not on the negative perception of a tie-in component priced above its reference price but rather on the consumers' perceived consumption benefit of the bundle product. It also hypothesized that the mechanism behind the preference for discounting the low-benefit component is that perceived consumption benefit reduces price sensitivity. Furthermore, this study investigated how consumers' concern for quality under a price discount, a factor not considered in previous research, influences price discount framing. Yadav's (1995) experiment used only one magazine bundle with relatively low quality uncertainty and therefore could not show the influence of perceived quality uncertainty. This study assumed that as perceived quality uncertainty increases, the price-sensitivity mechanism for assigning the discount to the low-benefit component will strengthen. Further, this research investigated the moderating effect of quality uncertainty in price discount framing. The results of the experiment showed that, when evaluating different partitions of the same total price with the same amount of discount, the partition that discounts the price of the low-benefit component is preferred to the partition that discounts the price of the high-benefit component. This implies that the price discount framing effect depends on the perceived consumption benefit.
The results also demonstrated that consumers are more price sensitive to the low-benefit component and less price sensitive to the high-benefit component. Furthermore, the results showed that the influence of the price discount presentation format on the evaluation of the bundle product varies with the perceived quality uncertainty of the high-consumption-benefit component. As perceived quality uncertainty gradually increases, the preference for discounting the price of the low-consumption-benefit component decreases. In addition, the results demonstrate that as perceived quality uncertainty gradually increases, the effect of price sensitivity on consumption benefit also increases. This paper integrates prior research through a new mechanism of perceived consumption benefit and the moderating effect of perceived quality uncertainty, thus providing a clearer explanation of the price discount framing effect. An illustrative sketch of the weighted additive intuition follows below.
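
To make the weighted additive intuition referenced above concrete, here is a small illustrative sketch in the spirit of Yadav (1995): component evaluations are weighted by importance, so the same discount raises the overall bundle evaluation more when it is assigned to the more important component. The values, weights, and the linear discount term are hypothetical and are not taken from the paper.

```python
def bundle_evaluation(values, weights, discounts, price_sensitivity=1.0):
    """Toy weighted additive evaluation of a bundle.
    values:    perceived value of each component at its offer price
    weights:   importance weights (assumed to sum to 1)
    discounts: price discount assigned to each component"""
    return sum(w * (v + price_sensitivity * d)
               for v, w, d in zip(values, weights, discounts))

values = [5.0, 3.0]   # focal (high-benefit) vs. tie-in (low-benefit) component
weights = [0.7, 0.3]  # the focal component is more important

# Same total discount, assigned to different components
discount_on_focal = bundle_evaluation(values, weights, [2.0, 0.0])
discount_on_tiein = bundle_evaluation(values, weights, [0.0, 2.0])
print(discount_on_focal, discount_on_tiein)  # 5.8 vs. 5.0 with these toy numbers
```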


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods to handle. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. Because it avoids investment risk structurally, it offers stability in the management of large funds and has been widely used in the financial field. The XGBoost model is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also trains much faster than traditional boosting methods. It is frequently used in various fields of data analysis and has many advantages. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment weights from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. In doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, and the long test period provided a large sample. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improved portfolio performance from reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question: whether the past characteristics of assets will persist into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization; this study likewise suggests a new method that reduces estimation errors in an optimized asset allocation model using machine learning. It is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets. An illustrative sketch of the combined approach follows below.
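
To show how the two building blocks of this abstract might fit together, the following is a minimal sketch assuming a simplified inverse-volatility form of risk parity and a generic XGBoost regressor trained on lagged realized volatilities. The paper's actual specification (full covariance estimation, the exact features, and the 1,000/20 moving-window setup) is not reproduced here, and all function names, parameters, and data are illustrative.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def realized_vol(returns: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """Rolling realized volatility per sector."""
    return returns.rolling(window).std()

def predict_next_vol(vol: pd.Series, n_lags: int = 5) -> float:
    """Fit an XGBoost regressor on lagged volatilities and predict one step ahead."""
    df = pd.concat({f"lag{i}": vol.shift(i) for i in range(1, n_lags + 1)}, axis=1)
    df["target"] = vol
    df = df.dropna()
    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(df.drop(columns="target"), df["target"])
    # Most recent observations become lag1..lag{n_lags} for the next period
    last = pd.DataFrame([{f"lag{i}": vol.iloc[-i] for i in range(1, n_lags + 1)}])
    return float(model.predict(last)[0])

def inverse_vol_weights(pred_vols: pd.Series) -> pd.Series:
    """Simplified risk-parity-style weights: inverse predicted volatility, normalized."""
    inv = 1.0 / pred_vols
    return inv / inv.sum()

# Toy usage with simulated sector returns (not the paper's Korean sector data)
rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0, 0.01, size=(1000, 3)),
                    columns=["energy", "finance", "IT"])
vols = realized_vol(rets).dropna()
pred = pd.Series({c: predict_next_vol(vols[c]) for c in vols.columns})
print(inverse_vol_weights(pred))
```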