A statistical procedure is developed to estimate the relative difference between two parameters, one obtained from a true model and the other from an approximate model. A double-sample (two-stage) procedure is applied to determine the number of additional simulation runs needed to satisfy a preassigned absolute precision for the confidence interval. Two types of parameters, the mean and the standard deviation, are considered as performance measures, and the validity of the model is examined for both queueing and inventory systems. In each system it is assumed that there are three distinct means, each with its own standard deviation; these form simultaneous confidence intervals, controlled in the sense that the absolute precision of each interval is bounded by preassigned limits at a preassigned confidence level. The results of this study apply, for instance, when a statistical method is needed to compare the effectiveness of two alternatives, when an adequate number of replications at a given level of absolute precision must be found to avoid the unrealistic cost of running simulation models, or when the standard deviation of the output measure is itself of interest.
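A minimal sketch of the double-sample sizing step is given below, assuming a Stein-type two-stage rule with a Bonferroni adjustment for the simultaneous intervals; the names `n0`, `d`, `alpha`, and `k` are illustrative, not taken from the source.

```python
import math
import numpy as np
from scipy import stats

def additional_runs(pilot, d, alpha=0.05, k=1):
    """Extra replications needed so the CI for the mean has half-width
    at most d, Bonferroni-adjusted for k simultaneous intervals.
    (Illustrative sketch, assuming a Stein-type two-stage rule.)"""
    n0 = len(pilot)                                # first-stage sample size
    s = np.std(pilot, ddof=1)                      # pilot-sample std deviation
    t = stats.t.ppf(1 - alpha / (2 * k), n0 - 1)   # Bonferroni critical value
    n_total = math.ceil((t * s / d) ** 2)          # total size for half-width d
    return max(n_total - n0, 0)

# Usage: 20 pilot replications of one output measure, three simultaneous CIs
rng = np.random.default_rng(1)
pilot = rng.exponential(scale=2.0, size=20)
print(additional_runs(pilot, d=0.25, alpha=0.05, k=3))
```

Here the second-stage size is driven entirely by the pilot variance estimate, so the attained half-width is controlled at the preassigned level regardless of the true variance; dividing `alpha` by `k` bounds the joint coverage of the three intervals in each system.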