• Title/Summary/Keyword: traditional metrics

Deriving Robust Reservoir Operation Policy under Changing Climate: Use of Robust Optimization with Stochastic Dynamic Programming

  • Kim, Gi Joo;Kim, Young-Oh
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.171-171
    • /
    • 2020
  • Decision making strategies should consider both adaptiveness and robustness in order to deal with two main characteristics of climate change: non-stationarity and deep uncertainty. In particular, robust strategies differ from traditional optimal strategies in that they remain satisfactory over a wider range of uncertainty, which makes them key when confronting climate change. In this study, a new framework named Robust Stochastic Dynamic Programming (R-SDP) is proposed, which couples previously developed robust optimization (RO) into the objective function and constraints of SDP. Two main approaches of RO, feasibility robustness and solution robustness, are incorporated into the optimization algorithm, and consequently three models are developed for testing: conventional SDP (CSDP), R-SDP-Feasibility (RSDP-F), and R-SDP-Solution (RSDP-S). The developed models were used to derive optimal monthly release rules for a single reservoir, and multiple simulations of the derived monthly policy were performed under inflow scenarios with varying means and standard deviations. Simulation results were then evaluated with a wide range of evaluation metrics, from reliability, resiliency, and vulnerability to additional robustness measures, and finally visualized with the advanced visualization tools used in the multi-objective robust decision making (MORDM) framework. As a result, the RSDP-F and RSDP-S models yielded more risk-averse, or conservative, results than the CSDP model, and a trade-off relationship between traditional and robustness metrics was discovered.
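
The reliability, resiliency, and vulnerability metrics mentioned in the abstract are commonly defined along the lines of the classic Hashimoto-style formulations. A minimal sketch of how such metrics might be computed from a simulated release series, under those assumed definitions (the function and data below are illustrative, not the paper's):

```python
import numpy as np

def rrv_metrics(release, demand):
    """Compute reliability, resiliency, and vulnerability for a simulated
    reservoir release series against a demand series (hypothetical sketch,
    following common Hashimoto-style definitions)."""
    release = np.asarray(release, dtype=float)
    demand = np.asarray(demand, dtype=float)
    failure = release < demand                      # failure = unmet demand
    deficit = np.maximum(demand - release, 0.0)

    reliability = 1.0 - failure.mean()              # fraction of satisfactory periods

    # resiliency: probability of recovering in the next step, given a failure
    recoveries = np.sum(failure[:-1] & ~failure[1:])
    resiliency = recoveries / failure.sum() if failure.any() else 1.0

    # vulnerability: mean deficit over failure periods
    vulnerability = deficit[failure].mean() if failure.any() else 0.0
    return reliability, resiliency, vulnerability

# Example: 12 monthly releases vs. a constant demand of 100
rel, res, vul = rrv_metrics(
    [110, 95, 100, 80, 120, 100, 90, 105, 100, 100, 70, 115],
    [100] * 12,
)
print(rel, res, vul)
```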

Improving Performance of Jaccard Coefficient for Collaborative Filtering

  • Lee, Soojung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.11
    • /
    • pp.121-126
    • /
    • 2016
  • In recommender systems based on collaborative filtering, measuring similarity is critical for determining the set of recommenders. The data sparsity problem is fundamental in collaborative filtering systems and is partly addressed by combining the Jaccard coefficient with traditional similarity measures. This study proposes a new coefficient that improves on the Jaccard coefficient by compensating for its drawbacks. We conducted experiments on datasets with various characteristics for performance analysis. Compared with the widely used Pearson correlation similarity metric, the proposed coefficient yielded competitive performance on a dense dataset and much better performance on a sparser dataset. Compared with the Jaccard coefficient, the proposed coefficient performed far better as the dataset became denser. Overall, the proposed coefficient demonstrated the best prediction and recommendation performance among the metrics tested.
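
The abstract's baseline of combining the Jaccard coefficient with a traditional similarity measure is commonly implemented by weighting Pearson correlation with the Jaccard coefficient of the two users' rated-item sets. A minimal sketch of that baseline (not the paper's proposed coefficient; the function name and data are illustrative):

```python
import numpy as np

def jaccard_weighted_pearson(ratings_u, ratings_v):
    """Hypothetical sketch: weight Pearson correlation by the Jaccard
    coefficient of the two users' rated-item sets, a common way of
    combining the two measures (not the paper's proposed coefficient)."""
    ratings_u = np.asarray(ratings_u, dtype=float)
    ratings_v = np.asarray(ratings_v, dtype=float)
    rated_u = ~np.isnan(ratings_u)
    rated_v = ~np.isnan(ratings_v)
    common = rated_u & rated_v
    if common.sum() < 2:
        return 0.0

    jaccard = common.sum() / (rated_u | rated_v).sum()

    u, v = ratings_u[common], ratings_v[common]
    du, dv = u - u.mean(), v - v.mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    pearson = (du * dv).sum() / denom if denom > 0 else 0.0
    return jaccard * pearson

# Ratings with np.nan for unrated items
u = np.array([5, 3, np.nan, 4, 1])
v = np.array([4, np.nan, 2, 5, 1])
print(jaccard_weighted_pearson(u, v))
```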

A Novel Journal Evaluation Metric that Adjusts the Impact Factors across Different Subject Categories

  • Pyo, Sujin;Lee, Woojin;Lee, Jaewook
    • Industrial Engineering and Management Systems
    • /
    • v.15 no.1
    • /
    • pp.99-109
    • /
    • 2016
  • During the last two decades, the impact factor has been widely used as a journal evaluation metric that differentiates the influence of a specific journal from that of other journals. However, the impact factor does not provide a reliable comparison between journals in different subject categories. For example, higher impact factors are assigned to biology and the general sciences than to traditional engineering and the social sciences. This study first analyzes the trend of the time series of impact factors of the journals listed in Journal Citation Reports during the last decade. It then proposes new journal evaluation metrics that adjust the impact factors across different subject categories. The proposed metrics may provide a consistent measure that mitigates the differences in impact factors among subject categories. On the basis of experimental results, we recommend the most reliable and appropriate metric for evaluating journals with less dependence on the characteristics of subject categories.
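
One simple way to adjust impact factors across subject categories is to rescale each journal's impact factor by a category-level statistic such as the category median. The sketch below illustrates that idea only; the paper's proposed metrics are not specified in the abstract, and the data are made up:

```python
import pandas as pd

# Hypothetical illustration: rescale each journal's impact factor by the
# median impact factor of its subject category (not the paper's metric).
journals = pd.DataFrame({
    "journal":  ["A", "B", "C", "D"],
    "category": ["Biology", "Biology", "Engineering", "Engineering"],
    "impact_factor": [8.0, 4.0, 2.0, 1.0],
})

category_median = journals.groupby("category")["impact_factor"].transform("median")
journals["adjusted_if"] = journals["impact_factor"] / category_median
print(journals)
```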

Improved Collaborative Filtering Using Entropy Weighting

  • Kwon, Hyeong-Joon
    • International Journal of Advanced Culture Technology
    • /
    • v.1 no.2
    • /
    • pp.1-6
    • /
    • 2013
  • In this paper, we evaluate the performance of existing similarity measurement metrics and propose a novel method that uses the information entropy of users' preferences to reduce the MAE of memory-based collaborative recommender systems. The proposed method applies a similarity of individual inclination on top of traditional similarity measurement methods. To verify the proposed method, we experiment with various similarity metrics under different conditions, including the amount of data and significance weighting from n/10 to n/60. As a result, we confirm that the proposed method is robust and efficient on sparse data sets when applied to various existing similarity measurement methods and significance weighting.
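
A minimal sketch of the general idea of preference-entropy weighting: compute the Shannon entropy of each user's rating distribution and use it to modulate a traditional similarity score. The exact weighting scheme in the paper may differ; the function names and data here are illustrative:

```python
import numpy as np

def rating_entropy(ratings, levels=(1, 2, 3, 4, 5)):
    """Shannon entropy of a user's rating distribution (a sketch of the kind
    of preference-entropy weight described above; the paper's exact scheme
    may differ)."""
    ratings = np.asarray(ratings)
    probs = np.array([(ratings == r).mean() for r in levels])
    probs = probs[probs > 0]
    return -(probs * np.log2(probs)).sum()

def entropy_weighted_similarity(base_sim, ratings_u, ratings_v):
    """Scale a traditional similarity (e.g., Pearson) by how close the two
    users' preference entropies are -- a hypothetical combination."""
    h_u, h_v = rating_entropy(ratings_u), rating_entropy(ratings_v)
    h_max = np.log2(5)                       # maximum entropy for 5 rating levels
    weight = 1.0 - abs(h_u - h_v) / h_max    # 1 when entropies match
    return weight * base_sim

print(entropy_weighted_similarity(0.8, [5, 5, 4, 5], [1, 3, 5, 2]))
```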

Explanatory Analysis for South Korea's Political Website Linking - Statistical Aspects

  • Choi, Kyoung-Ho;Park, Han-Woo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.4
    • /
    • pp.899-911
    • /
    • 2005
  • This paper conducts an explanatory analysis of the web sphere produced by National Assemblymen in South Korea, using several statistical methods. First, some descriptive metrics were employed. Next, the traditional multivariate methods of multidimensional scaling and correspondence analysis were applied to the data. Finally, cross-sectional data were compared to examine changes over time.
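
For the multidimensional scaling step, a minimal sketch with scikit-learn, using a made-up hyperlink-based distance matrix rather than the study's data:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric distance matrix among four politicians' websites,
# e.g. derived from hyperlink counts (illustrative values only).
distances = np.array([
    [0.0, 0.2, 0.7, 0.9],
    [0.2, 0.0, 0.6, 0.8],
    [0.7, 0.6, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

# Multidimensional scaling into 2 dimensions for visual inspection
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)
print(coords)
```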

Scalable Prediction Models for Airbnb Listing in Spark Big Data Cluster using GPU-accelerated RAPIDS

  • Muralidharan, Samyuktha;Yadav, Savita;Huh, Jungwoo;Lee, Sanghoon;Woo, Jongwook
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.2
    • /
    • pp.96-102
    • /
    • 2022
  • We aim to build predictive models for Airbnb prices using GPU-accelerated RAPIDS in a big data cluster. The Airbnb Listings datasets are used for the predictive analysis. Several machine-learning algorithms were adopted to build models that predict the price of Airbnb listings. We compare the results of traditional and big data approaches to machine learning for price prediction and discuss the performance of the models. We built big data models using a Databricks Spark cluster, a distributed parallel computing system, and implemented models on multiple GPUs using RAPIDS in the Spark cluster. The GPU-accelerated model was developed using the XGBoost algorithm, whereas the other models were developed using traditional central processing unit (CPU)-based algorithms. This study compared all models in terms of accuracy metrics and computing time. We observed that the XGBoost model with RAPIDS using GPUs had the highest accuracy and computing time.
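
A single-node sketch of the CPU-versus-GPU XGBoost configuration difference discussed above. The paper's setup runs on a Databricks Spark cluster with RAPIDS, which this does not reproduce; the synthetic data and parameter choices are assumptions:

```python
import time
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic data stands in for the Airbnb listings features.
X = np.random.rand(10000, 20)
y = X @ np.random.rand(20) + np.random.rand(10000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, params in [
    ("cpu", {"tree_method": "hist"}),
    # XGBoost >= 2.0; older versions use tree_method="gpu_hist" instead
    ("gpu", {"tree_method": "hist", "device": "cuda"}),
]:
    model = XGBRegressor(n_estimators=200, **params)
    start = time.time()
    model.fit(X_train, y_train)
    elapsed = time.time() - start
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSE={rmse:.3f}, time={elapsed:.1f}s")
```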

Improving High-resolution Impedance Manometry Using Novel Viscous and Super-viscous Substrates in the Supine and Upright Positions: A Pilot Study

  • Wong, Uni;Person, Erik B;Castell, Donald O;von Rosenvinge, Erik;Raufman, Jean-Pierre;Xie, Guofeng
    • Journal of Neurogastroenterology and Motility
    • /
    • v.24 no.4
    • /
    • pp.570-576
    • /
    • 2018
  • Background/Aims: Swallows with viscous or solid boluses in different body positions alter esophageal manometry patterns. Limitations of previous studies include lack of standardized viscous substrates and the need for chewing prior to swallowing solid boluses. We hypothesize that high-resolution impedance manometry (HRiM) using standardized viscous and super-viscous swallows in supine and upright positions improves sensitivity for detecting esophageal motility abnormalities when compared with traditional saline swallows. To establish normative values for these novel substrates, we recruited healthy volunteers and performed HRiM. Methods: Standardized viscous and super-viscous substrates were prepared using "Thick-It" food thickener and a rotational viscometer. All swallows were administered in 5-mL increments in both supine and upright positions. HRiM metrics and impedance (bolus transit) were calculated. We used a paired two-tailed t test to compare all metrics by position and substrate. Results: The 5-g, 7-g, and 10-g substrates measured 5000, 36,200, and 64,700 mPa·sec, respectively. In 18 volunteers, we observed that the integrated relaxation pressure was lower when upright than when supine for all substrates (P < 0.01). The 10-g substrate significantly increased integrated relaxation pressure when compared to saline in the supine position (P < 0.01). Substrates and positions also affected distal contractile integral, distal latency, and impedance values. Conclusions: We examined HRiM values using novel standardized viscous and super-viscous substrates in healthy subjects in both supine and upright positions. We found that viscosity and position affected HRiM Chicago metrics and have the potential to increase the sensitivity of esophageal manometry.
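
A minimal sketch of the paired two-tailed t test mentioned in the Methods, comparing the same subjects' integrated relaxation pressure between positions; the numbers are illustrative, not study data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: each subject's integrated relaxation
# pressure (IRP, mmHg) in the supine and upright positions.
irp_supine  = np.array([12.1, 9.8, 14.3, 11.0, 10.5, 13.2])
irp_upright = np.array([ 9.4, 8.1, 11.9,  9.2,  8.8, 10.7])

t_stat, p_value = stats.ttest_rel(irp_supine, irp_upright)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```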

A Study of Estimation for Web Application Complexity (웹 어플리케이션의 복잡도 예측에 관한 연구)

  • Oh Sung-Kyun;Kim Mi-Jin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.3
    • /
    • pp.27-34
    • /
    • 2004
  • As the software development paradigm shifts toward the increasingly complex Web environment, the study of complexity has become active. Yet there still seems to be no general agreement on the architecture or complexity measures of Web applications. Traditional complexity metrics such as program size (LOC) and cyclomatic complexity can only be derived from source code after implementation, so they are not helpful in the early phases of the software development life cycle, namely analysis and design. In this study, six Web projects were used to derive applications whose possible errors were flagged by a complexity indicator. Using the 61 programs derived, a linear correlation between complexity, the number of classes, and the number of methods is proposed. Because Web application complexity can be estimated before implementation, effort and cost management can be carried out more effectively.
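
A minimal sketch of estimating complexity from the class and method counts available at design time via a linear fit. The data and coefficients below are made up for illustration and are not the paper's results:

```python
import numpy as np

# Hypothetical design-time measurements for a handful of Web programs.
classes = np.array([3, 5, 8, 12, 4, 9, 15, 6])
methods = np.array([20, 34, 55, 80, 28, 61, 110, 40])
complexity = np.array([18, 30, 52, 77, 25, 58, 104, 37])   # measured after implementation

# Ordinary least squares: complexity ~ b0 + b1*classes + b2*methods
X = np.column_stack([np.ones_like(classes), classes, methods])
coef, *_ = np.linalg.lstsq(X, complexity, rcond=None)
b0, b1, b2 = coef
print(f"complexity ~= {b0:.2f} + {b1:.2f}*classes + {b2:.2f}*methods")

# Predict complexity for a design with 10 classes and 70 methods
print(b0 + b1 * 10 + b2 * 70)
```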

A Case for Using Service Availability to Characterize IP Backbone Topologies

  • Keralapura Ram;Moerschell Adam;Chuah Chen Nee;Iannaccone Gianluca;Bhattacharyya Supratik
    • Journal of Communications and Networks
    • /
    • v.8 no.2
    • /
    • pp.241-252
    • /
    • 2006
  • Traditional service-level agreements (SLAs), defined by average delay or packet loss, often camouflage the instantaneous performance perceived by end-users. We define a set of metrics for service availability to quantify the performance of Internet protocol (IP) backbone networks and capture the impact of routing dynamics on packet forwarding. Given a network topology and its link weights, we propose a novel technique to compute the associated service availability by taking into account transient routing dynamics and operational conditions, such as border gateway protocol (BGP) table size and traffic distributions. Even though there are numerous models for characterizing topologies, none of them provide insights on the expected performance perceived by end customers. Our simulations show that the amount of service disruption experienced by similar networks (i.e., with similar intrinsic properties such as average out-degree or network diameter) could be significantly different, making it imperative to use new metrics for characterizing networks. In the second part of the paper, we derive goodness factors based on service availability viewed from three perspectives: ingress node (from one node to many destinations), link (traffic traversing a link), and network-wide (across all source-destination pairs). We show how goodness factors can be used in various applications and describe our numerical results.
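
A minimal sketch of the three availability views described above, computed from per-pair downtime over an observation window. The records, aggregation, and numbers are assumptions for illustration, not the paper's exact formulation:

```python
from collections import defaultdict

WINDOW = 30 * 24 * 3600.0                       # 30-day window in seconds

# Hypothetical records: (ingress, egress, link traversed, downtime in seconds)
records = [
    ("A", "B", "A-B", 40.0),
    ("A", "C", "A-B", 40.0),                    # one link failure hits several pairs
    ("B", "C", "B-C", 12.0),
    ("B", "A", "A-B", 0.0),
    ("C", "A", "B-C", 5.0),
    ("C", "B", "B-C", 0.0),
]

# Per source-destination pair: availability = 1 - downtime / window
pair_avail = {(i, e): 1.0 - d / WINDOW for i, e, _, d in records}

# Ingress view: average availability from one node to all its destinations
by_ingress = defaultdict(list)
for (i, _), a in pair_avail.items():
    by_ingress[i].append(a)
ingress_avail = {n: sum(v) / len(v) for n, v in by_ingress.items()}

# Link view: average availability of the traffic traversing each link
by_link = defaultdict(list)
for i, e, link, _ in records:
    by_link[link].append(pair_avail[(i, e)])
link_avail = {l: sum(v) / len(v) for l, v in by_link.items()}

# Network-wide view: average over all source-destination pairs
network_avail = sum(pair_avail.values()) / len(pair_avail)

print(ingress_avail, link_avail, f"network: {network_avail:.6f}")
```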

Workflow Oriented Domain Analysis (워크플로우 지향 도메인 분석)

  • Kim Yun-Jeong;Kim Young-Chul
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.1
    • /
    • pp.54-63
    • /
    • 2006
  • In this paper we propose a domain analysis methodology, called WODA (Workflow Oriented Domain Analysis), that uses an extended workflow mechanism based on dynamic modeling to solve the problems of traditional domain analysis on legacy systems. By following the WODA procedures, we can identify common and uncommon components and extract clusters of components, which can then be reused effectively when developing new systems. With the proposed component testing metrics, we can determine highly reusable components and scenarios while identifying the possible scenarios of a particular system, recognize the most critical and most frequently reused components, and prioritize the system's possible component scenarios. The paper includes an application to a UPS system that illustrates our autonomous modeling tool for WODA.
