• Title/Summary/Keyword: Choice Probability


Developing the Purchase Conversion Model of the Keyword Advertising Based on the Individual Search (개인검색기반 키워드광고 구매전환모형 개발)

  • Lee, Dong Il;Kim, Hyun Gyo
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.38 no.1
    • /
    • pp.123-138
    • /
    • 2013
  • Keyword advertising has been used by online retailers as a promotion tool rather than as advertising itself, because the online retailer expects a direct sales increase when deploying keyword sponsorship. In practice, many online sellers rely on keyword advertising to promote short-term sales with a limited budget. Most previous research on keyword advertising uses direct revenue factors such as CTR (click-through rate) and CVI (conversion per impression) as dependent variables [14, 16, 22, 25, 31, 32]. These studies were, however, conducted at the aggregate level due to limitations on data availability, so they cannot evaluate the performance of keyword advertising at the individual level. To overcome these limitations, our research focuses on the conversion of keyword advertising at the individual level. We also consider factors that are manageable by online retailers (the cost of keywords by implementation method and the meaning of keywords) as independent variables. We developed a keyword advertising conversion model at the individual level, from which we draw theoretical findings and managerial implications. Practically, under a fixed cost plan, increasing the number of clicks is revealed to be effective, whereas a higher average CPC does not significantly increase the probability of purchase conversion. When this type of implementation (fixed cost plan) cannot generate many clicks, it cannot significantly increase the probability of purchase choice. Theoretically, we consider the promotional attributes that influence consumer purchase behavior and conduct individual-level research based on actual data. Limitations and future directions of the study are discussed.
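The individual-level conversion model described above can be sketched as a binary logit. The coefficients and variable names below are illustrative assumptions chosen to mirror the reported direction of effects (clicks help, average CPC does not), not the paper's estimates:

```python
import math

def conversion_probability(n_clicks, avg_cpc, fixed_cost_plan, coef):
    """Logit-style probability of individual purchase conversion.

    coef = (intercept, b_clicks, b_cpc, b_fixed); hypothetical values,
    not the estimates reported in the paper.
    """
    z = (coef[0]
         + coef[1] * n_clicks
         + coef[2] * avg_cpc
         + coef[3] * (1 if fixed_cost_plan else 0))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients: more clicks raise conversion probability,
# higher average CPC has no significant effect (coefficient ~ 0).
coef = (-2.0, 0.15, 0.0, 0.3)
p_low = conversion_probability(5, 0.5, True, coef)
p_high = conversion_probability(20, 0.5, True, coef)
```

In an actual study the coefficients would be estimated by maximum likelihood on clickstream-level data rather than fixed by hand.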

A simplified method for estimating the fundamental period of masonry infilled reinforced concrete frames

  • Jiang, Rui;Jiang, Liqiang;Hu, Yi;Ye, Jihong;Zhou, Lingyu
    • Structural Engineering and Mechanics
    • /
    • v.74 no.6
    • /
    • pp.821-832
    • /
    • 2020
  • The fundamental period is an important parameter for seismic design and seismic risk assessment of building structures. In this paper, a simplified theoretical method to predict the fundamental period of masonry infilled reinforced concrete (RC) frames is developed based on basic engineering mechanics. Different configurations of the RC frame as well as the masonry walls were taken into account in the developed method. The fundamental period of the infilled structure is calculated from the integration of the lateral stiffness of the RC frame and masonry walls along the height. A correction coefficient, determined by multiple linear regression analysis, is applied to control the error of the period estimate. The corrected formula is verified by shaking table tests on two masonry infilled RC frame models; the errors between the estimated and test periods are 2.3% and 23.2%. Finally, a probability-based method is proposed for the corrected formula, which allows structural engineers to select an appropriate fundamental period with a certain safety redundancy. The proposed method can be used quickly and flexibly for prediction, can be hand-calculated, and is easily understood. Thus it is a good choice for determining the fundamental period of masonry infilled RC frame structures in engineering practice in place of existing methods.
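The core idea, combining frame and infill lateral stiffness and converting to a period, can be sketched for a one-storey idealisation. The formula T = 2π√(m/k) and the parallel-stiffness assumption are standard mechanics; the correction coefficient `alpha` stands in for the paper's regression-based factor and its value here is a placeholder:

```python
import math

def fundamental_period(mass, k_frame, k_infill, alpha=1.0):
    """T = alpha * 2*pi*sqrt(m / k_total) for a one-storey idealisation.

    The frame and infill act in parallel, so their lateral stiffnesses
    add. `alpha` is a stand-in for the regression-derived correction
    coefficient (placeholder value, not the paper's).
    """
    k_total = k_frame + k_infill
    return alpha * 2.0 * math.pi * math.sqrt(mass / k_total)

# Adding infill stiffness shortens the period, as expected.
t_bare = fundamental_period(2.0e5, 4.0e7, 0.0)        # kg, N/m
t_infilled = fundamental_period(2.0e5, 4.0e7, 6.0e7)  # stiffer, shorter T
```

The paper's method additionally integrates stiffness over the building height and handles multiple configurations; this sketch only shows the stiffness-to-period step.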

A Study on the Hydrologic Decision-Making for Drought Management : 2. Decision-Making Method for Drought Management (가뭄관리를 위한 수문학적 의사결정에 관한 연구 : 2. 가뭄관리를 위한 의사결정 방법)

  • Kang, In-Joo;Yoon, Yong-Nam
    • Journal of Korea Water Resources Association
    • /
    • v.35 no.5
    • /
    • pp.597-609
    • /
    • 2002
  • This study suggests a methodology of hydrologic decision making for establishing a standard of drought management from the analysis of past drought history, and for drought monitoring and management as a drought progresses. A decision tree diagram is constructed and analyzed, and a step-by-step plan according to drought severity is suggested. Specifically, the decision tree diagram is constructed from the transition probability and the quantity of monthly precipitation. Drought progression is then investigated by analysis of the diagram, and three steps of drought notice, drought warning, and emergency plan are established. The suggested methodology can be applied to other areas by adapting the diagram to the intended purpose. Also, the choice of monthly PDSI class and the precipitation analysis can be refined by continuous data supplementation, so that a new standard value from the modified diagram is provided and continuous drought management becomes possible.
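The transition-probability idea behind the diagram can be illustrated with a small Markov step over drought states. The state names follow the abstract's three warning steps plus a normal state; the matrix values are invented for illustration, not the study's estimates:

```python
# States and an illustrative monthly transition matrix (rows sum to 1).
STATES = ["normal", "drought notice", "drought warning", "emergency"]
P = [
    [0.70, 0.20, 0.08, 0.02],
    [0.30, 0.40, 0.25, 0.05],
    [0.10, 0.30, 0.40, 0.20],
    [0.05, 0.15, 0.30, 0.50],
]

def step(dist, P):
    """Propagate a probability distribution over drought states by
    one month using the transition matrix P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]  # start in the normal state
dist = step(dist, P)         # distribution after one month
```

Repeated application of `step` gives the multi-month outlook used to decide when to escalate from notice to warning to emergency.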

Dosimetric and Radiobiological Evaluation of Dose Volume Optimizer (DVO) and Progressive Resolution Optimizer (PRO) Algorithm against Photon Optimizer on IMRT and VMAT Plan for Prostate Cancer

  • Kim, Yon-Lae;Chung, Jin-Beom;Kang, Seong-Hee;Eom, Keun-Yong;Song, Changhoon;Kim, In-Ah;Kim, Jae-Sung;Lee, Jeong-Woo
    • Progress in Medical Physics
    • /
    • v.29 no.4
    • /
    • pp.106-114
    • /
    • 2018
  • This study aimed to compare the performance of previous optimization algorithms against the new photon optimizer (PO) algorithm for intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans for prostate cancer. Eighteen patients with prostate cancer were retrospectively selected and planned to receive 78 Gy in 39 fractions to the planning target volume (PTV). For each patient, IMRT and VMAT plans optimized with the dose volume optimizer (DVO) and progressive resolution optimizer (PRO) algorithms were compared against plans optimized with the PO within Eclipse version 13.7. No interactive action was performed during optimization. Dosimetric and radiobiological indices for the PTV and organs at risk were analyzed, and the monitor units (MU) per plan were recorded. Based on plan quality for target coverage, prostate IMRT and VMAT plans using the PO showed an improvement over DVO and PRO. The PO also generally improved the tumor control probability for the PTV and the normal tissue complication probability for the rectum. From a technical perspective, the PO generated IMRT plans with fewer MUs than DVO, whereas it produced slightly more MUs than PRO in the VMAT plans. Overall, the PO outperformed DVO and PRO whenever available, although it led to more MUs in VMAT than PRO. Therefore, the PO has become the preferred choice for planning prostate IMRT and VMAT at our institution.

A Novel Grasshopper Optimization-based Particle Swarm Algorithm for Effective Spectrum Sensing in Cognitive Radio Networks

  • Ashok, J;Sowmia, KR;Jayashree, K;Priya, Vijay
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.520-541
    • /
    • 2023
  • In cognitive radio networks (CRNs), spectrum sensing (SS) is of utmost significance. Every CR user generates a sensing report during the training phase under various circumstances and, depending on a collective process, either communicates or remains silent. In the training stage, the fusion centre combines the local judgments made by CR users by a majority vote, and then returns a final conclusion to every CR user. During this stage, enough data about the environment is acquired, including the activity of the primary user (PU) and every CR's response to that activity, and sensing classes are created. In the classification stage, every CR user compares its most recent sensing report to the previous sensing classes, and distance vectors are generated. The posterior probability of every sensing class is derived on the basis of quantitative data, and the sensing report is then classified as signifying either the presence or absence of the PU. The ISVM technique is used to compute the quantitative variables needed for the posterior probability. Here, the iterations of the SVM are tuned by a novel GO-PSA that combines the grasshopper optimization algorithm (GOA) and particle swarm optimization (PSO). The novel GO-PSA is developed because it reduces computational complexity, returns minimum error, and saves time compared with various state-of-the-art algorithms. The reliability of every CR user is taken into account when these local choices are integrated at the fusion centre using an innovative decision combination technique. Depending on the collective choice, the CR users then communicate or remain silent.
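The posterior-probability classification and the majority-vote fusion can be sketched with a deliberately simplified model. Here the sensing classes are assumed Gaussian and the report is a scalar energy value; the paper's actual scheme uses ISVM-derived quantities and distance vectors, so everything below is an illustrative stand-in:

```python
import math

def posterior_pu_present(report, mu_h1, mu_h0, sigma, prior_h1=0.5):
    """Posterior P(PU present | report) via Bayes' rule, assuming
    Gaussian sensing classes with means mu_h1 (PU present) and
    mu_h0 (PU absent). A simplification of the ISVM-based scheme."""
    def likelihood(x, mu):
        return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    l1 = likelihood(report, mu_h1) * prior_h1
    l0 = likelihood(report, mu_h0) * (1.0 - prior_h1)
    return l1 / (l1 + l0)

def majority_fusion(decisions):
    """Fusion centre: majority vote over the CR users' local decisions."""
    return sum(decisions) > len(decisions) / 2

p = posterior_pu_present(report=0.9, mu_h1=1.0, mu_h0=0.0, sigma=0.3)
decision = majority_fusion([p > 0.5, True, False])
```

The paper additionally weights local decisions by each CR user's reliability at the fusion centre; a plain majority vote is shown here only to make the flow concrete.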

Probabilistic Distribution and Variability of Geotechnical Properties with Randomness Characteristic (무작위성을 보이는 지반정수의 확률분포 및 변동성)

  • Kim, Dong-Hee;Lee, Ju-Hyoung;Lee, Woo-Jin
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.11
    • /
    • pp.87-103
    • /
    • 2009
  • To determine a reliable probabilistic distribution model of geotechnical properties, an outlier and randomness test of the analysis data, parameter estimation for the probabilistic distribution model, and goodness-of-fit tests for the model parameters and the distribution model have to be performed in sequence. In this paper, the probabilistic distribution models of the geotechnical properties of the Songdo area in Incheon are estimated by the proposed procedure. The coefficient of variation (COV), representing the variability of geotechnical properties, is also determined for several properties. A reliable probabilistic distribution model and the COV of geotechnical properties can be used in probability-based design procedures and for a reasonable choice of design values in deterministic design methods.
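The COV computation at the heart of the variability analysis is straightforward to sketch. The sample values below are invented for illustration, not the Songdo site data:

```python
import math

def coefficient_of_variation(values):
    """COV = sample standard deviation / mean, the dimensionless
    variability measure determined for each geotechnical property."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return math.sqrt(var) / mean

# Illustrative shear strength measurements (kPa), not the site data.
su = [42.0, 38.5, 45.2, 40.1, 43.7, 39.8]
cov = coefficient_of_variation(su)
```

In the paper's procedure this step comes last, after outlier/randomness screening and goodness-of-fit testing have established which distribution model the COV describes.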

A comparison of tests for homoscedasticity using simulation and empirical data

  • Anastasios Katsileros;Nikolaos Antonetsis;Paschalis Mouzaidis;Eleni Tani;Penelope J. Bebeli;Alex Karagrigoriou
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.1
    • /
    • pp.1-35
    • /
    • 2024
  • The assumption of homoscedasticity is one of the most crucial assumptions for many parametric tests used in the biological sciences. The aim of this paper is to compare the empirical probability of type I error and the power of ten parametric and two non-parametric tests for homoscedasticity with simulations under different types of distributions, numbers of groups, numbers of samples per group, variance ratios and significance levels, as well as through empirical data from an agricultural experiment. According to the findings of the simulation study, when there is no violation of the assumption of normality and the groups have equal variances and equal numbers of samples, the Bhandary-Dai, Cochran's C, Hartley's Fmax, Levene (trimmed mean) and Bartlett tests are considered robust. The Levene (absolute and square deviations) tests show a high probability of type I error with small numbers of samples, which increases as the number of groups rises. When data groups display a non-normal distribution, researchers should utilize the Levene (trimmed mean), O'Brien and Brown-Forsythe tests. On the other hand, if the assumption of normality is not violated but diagnostic plots indicate unequal variances between groups, researchers are advised to use the Bartlett, Z-variance, Bhandary-Dai and Levene (trimmed mean) tests. Assessing the tests considered, the most well-rounded choice is Levene's test (trimmed mean), which provides satisfactory type I error control and relatively high power. According to the findings of the study and for the scenarios considered, the two non-parametric tests are not recommended. In conclusion, it is suggested to initially check for normality and consider the number of samples per group before choosing the most appropriate test for homoscedasticity.
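The recommended variant, Levene's test on absolute deviations from a trimmed mean, can be sketched directly from its textbook definition. The trim proportion and sample data below are illustrative choices, not the paper's simulation settings:

```python
import math

def trimmed_mean(x, proportion=0.1):
    """Mean after removing `proportion` of observations from each tail."""
    xs = sorted(x)
    k = int(len(xs) * proportion)
    xs = xs[k:len(xs) - k] if k else xs
    return sum(xs) / len(xs)

def levene_statistic(groups, center=trimmed_mean):
    """Levene's W computed on absolute deviations from each group's
    centre (trimmed mean here, the variant the comparison favours).
    Returns only the statistic; a p-value needs the F(k-1, n-k)
    distribution."""
    k = len(groups)
    z = [[abs(v - center(g)) for v in g] for g in groups]
    n = sum(len(g) for g in groups)
    zbar_i = [sum(zi) / len(zi) for zi in z]
    zbar = sum(sum(zi) for zi in z) / n
    num = sum(len(z[i]) * (zbar_i[i] - zbar) ** 2 for i in range(k)) / (k - 1)
    den = sum((v - zbar_i[i]) ** 2
              for i in range(k) for v in z[i]) / (n - k)
    return num / den

g1 = [4.1, 3.9, 4.3, 4.0, 4.2]   # small spread
g2 = [3.0, 5.1, 2.4, 5.8, 3.7]   # visibly larger spread
w = levene_statistic([g1, g2])   # large W -> heteroscedasticity
```

Ready-made implementations exist (e.g. `scipy.stats.levene` with `center='trimmed'`); the sketch is shown only to make the deviation-then-ANOVA structure explicit.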

Dynamic Model Considering the Biases in SP Panel data (SP 패널데이터의 Bias를 고려한 동적모델)

  • 남궁문;성수련;최기주;이백진
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.6
    • /
    • pp.63-75
    • /
    • 2000
  • Stated Preference (SP) data has been regarded as more useful than Revealed Preference (RP) data, because researchers can investigate respondents' preferences and attitudes toward a traffic condition or a new traffic system using SP data. However, SP data carries two biases: the bias inherent in SP data itself, and the attrition bias in SP panel data. If these biases are not corrected, a choice model using SP data may predict an erroneous future demand. In this paper, six route choice models are constructed to deal with the SP biases; they are classified into cross-sectional models (models I∼III) and dynamic models (models IV∼VI). Some remarkable results are obtained from the six models. The cross-sectional model that incorporates respondents' RP choice results into the SP cross-sectional model can correct the bias inherent in SP data, and the dynamic models can capture the temporal variation of the effectiveness of state dependence in SP responses by assuming a simple exponential function of state dependence. The WESML method, which uses the estimated attrition probability, is also adopted to correct the attrition bias in the SP panel data. The results contribute to the dynamic modeling of SP panel data and are useful for more exact demand forecasting.
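The WESML correction amounts to weighting each panel observation's log-likelihood contribution by the inverse of its estimated probability of remaining in the panel. A minimal sketch for a binary route choice model follows; the data, coefficients and retention probabilities are all invented for illustration:

```python
import math

def wesml_loglik(data, beta, stay_prob):
    """Weighted exogenous-sample ML (WESML) log-likelihood for a
    binary logit route choice model: each observation is weighted by
    1 / P(respondent stays in the SP panel), correcting attrition
    bias. Purely illustrative, not the paper's specification."""
    ll = 0.0
    for (x, y), p_stay in zip(data, stay_prob):
        p = 1.0 / (1.0 + math.exp(-(beta[0] + beta[1] * x)))
        ll += (1.0 / p_stay) * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return ll

data = [(0.2, 1), (1.5, 0), (0.7, 1)]  # (route attribute, chosen?)
stay = [0.9, 0.6, 0.8]                 # estimated retention probabilities
ll = wesml_loglik(data, beta=(0.1, -0.5), stay_prob=stay)
```

Estimation would maximize `ll` over `beta`; respondents more likely to drop out (low `p_stay`) receive larger weights, so the surviving sample stops over-representing persistent respondents.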


A Simplified Procedure for Performance-Based Design

  • Zareian, Farzin;Krawinkler, Helmut
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.11 no.4
    • /
    • pp.13-23
    • /
    • 2007
  • This paper focuses on providing a practical approach for decision making in Performance-Based Design (PBD). Satisfactory performance is defined by several performance objectives that place limits on direct (monetary) loss and on a tolerable probability of collapse. No specific limits are placed on conventional engineering parameters such as forces or deformations, although it is assumed that sound capacity design principles are followed in the design process. The proposed design procedure incorporates different performance objectives up front, before the structural system is created, and assists engineers in making informed decisions on the choice of an effective structural system and its stiffness (period), base shear strength, and other important global structural parameters. The tools needed to implement this design process are (1) hazard curves for a specific ground motion intensity measure, (2) mean loss curves for structural and nonstructural subsystems, (3) structural response curves that relate, for different structural systems, a ground motion intensity measure to the engineering demand parameter (e.g., interstory drift or floor acceleration) on which the subsystem loss depends, and (4) collapse fragility curves. Since the proposed procedure facilitates decision making in the conceptual design process, it is referred to as a Design Decision Support System, DDSS. Implementation of the DDSS is illustrated in an example to demonstrate its practicality.
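Combining tools (1)-(3) above amounts to integrating mean loss against the hazard curve to get an expected annual loss, which can then be checked against the monetary-loss performance objective. The numbers below are placeholder values, not results from the paper:

```python
def expected_annual_loss(ims, mafe, mean_loss):
    """Numerically integrate mean loss against the hazard curve:
    EAL ~ sum over IM bins of loss(im) * |d(lambda)/d(im)| * d(im),
    where `mafe` holds the mean annual frequency of exceedance at the
    intensity levels `ims`. A sketch of the DDSS loss step only."""
    eal = 0.0
    for i in range(len(ims) - 1):
        d_lambda = mafe[i] - mafe[i + 1]               # frequency of IM bin
        loss_mid = 0.5 * (mean_loss[i] + mean_loss[i + 1])
        eal += loss_mid * d_lambda
    return eal

ims = [0.1, 0.2, 0.4, 0.8]           # e.g. Sa(T1) in g (placeholder)
mafe = [0.05, 0.02, 0.005, 0.001]    # hazard curve, decreasing in IM
loss = [0.00, 0.05, 0.25, 0.70]      # mean loss ratio at each IM
eal = expected_annual_loss(ims, mafe, loss)
```

A full DDSS evaluation would also fold in the collapse fragility curve and repeat the calculation for each candidate structural system.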

The Effect of Analysis Variables on the Failure Probability of the Reactor Pressure Vessel by Pressurized Thermal Shock (가압열충격에 의한 원자로 압력용기의 파손확률에 미치는 해석변수의 영향)

  • Jang, Chang-Heui;Jhung, Myung-Jo;Kang, Suk-Chull;Choi, Young-Hwan
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.28 no.6
    • /
    • pp.693-700
    • /
    • 2004
  • Probabilistic fracture mechanics (PFM) is a useful analytical tool to assess the integrity of the reactor pressure vessel (RPV) in the event of a pressurized thermal shock (PTS). In PFM, the probabilities of flaw initiation and propagation are estimated by comparing the applied stress intensity factor with the fracture toughness calculated by simulation of various stochastic variables. It is known that the results of PFM analyses depend on the choice of stochastic parameters and assumptions. Of the various variables and assumptions, we investigated the effects of the RT$_{NDT}$ shift equations, fracture toughness curves, and flaw distributions on the PFM results for three PTS transients. The results showed that the combined effects of the RT$_{NDT}$ shift equations and fracture toughness curves are complicated and depend on the characteristics of the transients, the chemistry of the materials, the fast neutron fluence, and so on.
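The PFM comparison of applied stress intensity factor against sampled fracture toughness can be illustrated with a toy Monte Carlo loop. The lognormal toughness model and all numerical values are stand-ins, not the paper's PTS transient analyses:

```python
import math
import random

def pfm_failure_probability(k_applied, kic_median, sigma=0.2,
                            n=20000, seed=1):
    """Toy PFM loop: sample fracture toughness K_Ic from a lognormal
    distribution (median `kic_median`, log-std `sigma`) and count
    crack-initiation events where the applied stress intensity factor
    exceeds it. Stand-in model and numbers only."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if k_applied > rng.lognormvariate(math.log(kic_median), sigma)
    )
    return failures / n

p_mild = pfm_failure_probability(60.0, 100.0)    # MPa*sqrt(m), placeholder
p_severe = pfm_failure_probability(120.0, 100.0) # harsher transient
```

A real analysis would instead sample RT$_{NDT}$ shift, fluence, chemistry and flaw size per the chosen distributions and evaluate K along the transient's time history; the sampling-and-compare structure is the same.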