• Title/Summary/Keyword: performance metrics


A Dynamical Hybrid CAC Scheme and Its Performance Analysis for Mobile Cellular Network with Multi-Service

  • Li, Jiping;Wu, Shixun;Liu, Shouyin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.6
    • /
    • pp.1522-1545
    • /
    • 2012
  • Call admission control (CAC) plays an important role in mobile cellular networks to guarantee quality of service (QoS). In this paper, a dynamic hybrid CAC scheme with integrated cutoff priority and a handoff queue for mobile cellular networks is proposed, and several performance metrics are derived. The unique characteristic of the proposed CAC scheme is that it can support any number of service types and that the cutoff thresholds for handoff calls are dynamically adjusted according to the number of service types and the service priority index. Moreover, timeouts of handoff calls in queues are also considered in our scheme. By modeling the proposed CAC scheme with a one-dimensional Markov chain (1DMC), several performance metrics are derived, including the new call blocking probability ($P_{nb}$), forced termination probability ($P_F$), average queue length, average waiting time in queue, offered traffic utilization, wireless channel utilization, and system performance, defined as the ratio of channel utilization to a Grade of Service (GoS) cost function. To validate the correctness of the derived analytical performance metrics, simulation is performed; the simulation results match the derived analytical results closely in terms of $P_{nb}$ and $P_F$. Then, to show the advantage of 1DMC modeling for the performance analysis of our proposed CAC scheme, the computing complexity of multi-dimensional Markov chain (MDMC) modeling is analyzed in detail. The state-space cardinality, which reflects the computing complexity of MDMC, increases exponentially with the number of service types and the total number of channels in a cell. In contrast, the state-space cardinality of our 1DMC model is unrelated to the number of service types and is determined by the total number of channels and the queue capacity of the highest-priority service in a cell.
Finally, a performance comparison between our CAC scheme and Mahmoud ASH's scheme is carried out; the results show that our CAC scheme performs well to some extent.
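The single-service special case of such a one-dimensional Markov-chain analysis is the classical M/M/c/c loss model, whose stationary distribution gives the blocking probability in closed form. The sketch below illustrates only that special case; it is not the paper's multi-service model with handoff queues and dynamic thresholds.

```python
from math import factorial

def erlang_b(c: int, a: float) -> float:
    """Blocking probability of an M/M/c/c loss system: the stationary
    probability, in the 1D birth-death Markov chain over the number of
    busy channels, that all c channels are occupied (offered load a in
    Erlangs)."""
    # Stationary probabilities are proportional to a^k / k!, k = 0..c;
    # the blocking probability is the normalized mass at state k = c.
    norm = sum(a ** k / factorial(k) for k in range(c + 1))
    return (a ** c / factorial(c)) / norm

# Example: 10 channels offered 7 Erlangs of traffic.
p_block = erlang_b(10, 7.0)
```

The multi-service scheme in the paper generalizes this chain with per-service cutoff thresholds and queue states, but the derivation pattern (normalize the birth-death stationary distribution, then read off the probability mass of the blocking states) is the same.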

A New Metric for Evaluation of Forecasting Methods : Weighted Absolute and Cumulative Forecast Error (수요 예측 평가를 위한 가중절대누적오차지표의 개발)

  • Choi, Dea-Il;Ok, Chang-Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.3
    • /
    • pp.159-168
    • /
    • 2015
  • Aggregate production planning determines levels of production, human resources, and inventory to maximize a company's profit and fulfill customer demand based on demand forecasts. Since the performance of aggregate production planning heavily depends on the accuracy of the given demand forecasts, choosing an accurate forecasting method is a prerequisite for good aggregate production planning. Typical forecasting error metrics such as MSE (Mean Squared Error), MAD (Mean Absolute Deviation), MAPE (Mean Absolute Percentage Error), and CFE (Cumulative Forecast Error) are generally used to choose a forecasting method for aggregate production planning. However, these metrics are designed only to measure the difference between real and forecast demand; they cannot account for consequences of forecast error such as increased cost or decreased profit. Consequently, the traditional metrics do not provide enough guidance for selecting a good forecasting method for aggregate production planning. To overcome this limitation, this study suggests a new metric, WACFE (Weighted Absolute and Cumulative Forecast Error), for evaluating forecasting methods. The WACFE is designed to consider not only forecast errors but also the costs those errors may cause in aggregate production planning: it is a product sum of the cumulative forecast error and weight factors for backorder and inventory costs. We demonstrate the effectiveness of the proposed metric with intensive experiments on demand data sets from the M3-competition. The WACFE shows a higher correlation with total cost than the other metrics and consequently performs better in selecting forecasting methods for aggregate production planning.
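The abstract specifies WACFE only as a product sum of cumulative forecast error and weight factors for backorder and inventory costs. The following is one hypothetical reading of that description; the weight values, sign convention, and per-period accumulation are assumptions for illustration, not the paper's exact definition.

```python
def wacfe(actual, forecast, w_backorder=2.0, w_inventory=1.0):
    """Hypothetical WACFE-style score: the running cumulative forecast
    error (CFE) per period, weighted by a backorder-cost factor when
    demand was under-forecast (CFE > 0) and by an inventory-cost factor
    when it was over-forecast (CFE < 0)."""
    cfe, score = 0.0, 0.0
    for a, f in zip(actual, forecast):
        cfe += a - f                        # running cumulative error
        weight = w_backorder if cfe > 0 else w_inventory
        score += weight * abs(cfe)          # product sum of weight and |CFE|
    return score

# With these assumed weights, under-forecasting is penalized more heavily
# than over-forecasting, since backorders are assumed costlier than holding.
under = wacfe([12, 12], [10, 10])   # demand above forecast
over = wacfe([8, 8], [10, 10])      # demand below forecast
```

A lower score indicates a forecasting method whose errors would induce lower backorder and inventory cost, which is the selection criterion the metric is built around.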

A maximum likelihood sequence detector in impulsive noise environment (충격성 잡음 환경에서의 최우 검출기)

  • 박철희;조용수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.6
    • /
    • pp.1522-1532
    • /
    • 1996
  • In this paper, we compare the performance of channel estimators with the L$_{1}$-norm and L$_{2}$-norm criteria in an impulsive noise environment, and show that the L$_{1}$-norm criterion is appropriate for that situation. Also, it is shown that the performance of the conventional maximum likelihood sequence detector (MLSD) can be improved by applying the same principle to mobile channels. That is, the performance of the conventional MLSD, which is known to be optimal under the Gaussian noise assumption, degrades in the impulsive noise of radio mobile communication channels. We therefore propose an MLSD that effectively reduces the effect of impulsive noise by applying the results of the channel estimators. Finally, it is confirmed by computer simulation that the performance of the MLSD is significantly affected by the type of branch metric, and that, in impulsive noise environments, the proposed detector with the new branch metrics performs better than the one with the conventional branch metric, $|y(k)-s(k)|^2$.
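A toy illustration of why the branch metric matters in impulsive noise (this is not the paper's channel-estimation-aided detector; the BPSK symbols and candidate sequences are invented for the example): with one large impulse in the received sequence, the squared (L2) metric lets the outlier dominate the sequence decision, while the L1 metric does not.

```python
def seq_metric(y, s, p):
    """Accumulated branch metric: sum of |y(k) - s(k)|**p over the
    sequence (p = 1 gives the L1 metric, p = 2 the conventional
    squared Euclidean metric)."""
    return sum(abs(yk - sk) ** p for yk, sk in zip(y, s))

def detect(y, candidates, p):
    """Pick the candidate sequence with the smallest accumulated metric."""
    return min(candidates, key=lambda s: seq_metric(y, s, p))

# BPSK example: the second received sample is hit by a large impulse.
s_true = (1, 1, 1, 1, 1, 1)
s_alt = (1, -1, 1, -1, 1, -1)
y = [1.1, -19.0, 0.9, 1.1, 1.0, 0.9]

best_l1 = detect(y, [s_true, s_alt], p=1)   # L1: clean samples outvote the impulse
best_l2 = detect(y, [s_true, s_alt], p=2)   # L2: the squared impulse dominates
```

Here `best_l1` is the transmitted sequence while `best_l2` is the wrong one: squaring amplifies the single outlier enough to outweigh every clean sample, which mirrors the abstract's point about L1-type criteria in impulsive noise.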


A Study on the Adoption Factors and Performance Effects of Mobile Sales Force Automation Systems (모바일 SFA(mSFA) 시스템의 수용 요인 및 도입 성과에 관한 연구)

  • Kim, Dong-Hyun;Lee, Sun-Ro
    • Korean Management Science Review
    • /
    • v.24 no.1
    • /
    • pp.127-145
    • /
    • 2007
  • This study examines the acceptance factors of mSFA systems based on the innovation diffusion and technology acceptance models, and measures the performance effects of mSFA systems using BSC metrics. Results show that (1) the characteristics of mobility and interactivity have positive impacts on perceived usefulness, ease of use, and professional fit, whereas the characteristic of personal identity was not perceived as useful due to users' negative feelings about privacy infringement and surveillance; (2) job fit has positive impacts on perceived usefulness and professional fit; and (3) perceived usefulness, ease of use, and professional fit positively influence the degree of users' dependence on mSFA systems, which in turn has positive impacts on users' performance as measured by personal BSC metrics covering the finance, customer, internal process, and learning-and-growth perspectives.

Information Requirements for Model-based Monitoring of Construction via Emerging Big Visual Data and BIM

  • Han, Kevin K.;Golparvar-Fard, Mani
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.317-320
    • /
    • 2015
  • Documenting work-in-progress on construction sites using images captured with smartphones, point-and-shoot cameras, and Unmanned Aerial Vehicles (UAVs) has gained significant popularity among practitioners. The spatial and temporal density of these large-scale site image collections and the availability of 4D Building Information Models (BIM) provide a unique opportunity to develop BIM-driven visual analytics that can quickly and easily detect and visualize construction progress deviations. Building on these emerging sources of information, this paper presents a pipeline for model-driven visual analytics of construction progress. It focuses on three key steps: 1) capturing, transferring, and storing images; 2) BIM-driven analytics to identify performance deviations; and 3) visualizations that enable root-cause assessments of performance deviations. Using several real-world case studies, the paper discusses the information requirements, as well as the challenges and opportunities for improving data collection, plan preparation, progress-deviation analysis (particularly under limited visibility), and the transformation of identified deviations into performance metrics that enable root-cause assessments.


$\pi$/4 shift QPSK with Trellis-Code and Lth Phase Difference Metrics (Trellis 부호와 L번째 위상차 메트릭(metrics)을 갖는$\pi$/4 shift QPSK)

  • 김종일;강창언
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.10
    • /
    • pp.1147-1156
    • /
    • 1992
  • In this paper, in order to apply $\pi/4$ shift QPSK to TCM, we propose the $\pi/8$ shift 8PSK modulation technique and the trellis-coded $\pi/8$ shift 8PSK, performing signal-set expansion and partition by phase difference. In addition, a Viterbi decoder whose branch metrics are the squared Euclidean distances of the first as well as the Lth phase difference is introduced in order to improve the bit error rate (BER) performance of differential detection of the trellis-coded $\pi/8$ shift 8PSK. The proposed Viterbi decoder is conceptually the same as sliding multiple detection, obtained by using a branch metric with the first- and Lth-order phase differences. We investigate the performance of the uncoded $\pi/4$ shift QPSK and the trellis-coded $\pi/8$ shift 8PSK, with and without the Lth phase difference metric, in additive white Gaussian noise (AWGN) using Monte Carlo simulation. The study shows that $\pi/4$ shift QPSK with a trellis code, i.e., the trellis-coded $\pi/8$ shift 8PSK, is an attractive scheme for power- and band-limited systems, and in particular that the Viterbi decoder with first and Lth phase difference metrics improves BER performance. The proposed algorithm can also be used in TC $\pi/8$ shift 8PSK as well as TCMDPSK.
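The core of such a decoder is a branch metric built from phase differences of the received samples. The sketch below shows only the Lth-order phase-difference distance itself; the paper's trellis, set partitioning, and the combination of first and Lth metrics are omitted, and the example samples are invented.

```python
import cmath
import math

def phase_diff_metric(r, k, phase, L=1):
    """Branch metric: squared Euclidean distance between the normalized
    L-th order phase difference of received samples r at time k and a
    hypothesized information phase (a point on the unit circle)."""
    d = r[k] * r[k - L].conjugate()   # carries the phase accumulated over L symbols
    d /= abs(d)                        # discard amplitude, keep the phase only
    return abs(d - cmath.exp(1j * phase)) ** 2

# Two received samples whose actual phase difference is pi/4.
r = [cmath.exp(0j), cmath.exp(1j * math.pi / 4)]
m_good = phase_diff_metric(r, 1, math.pi / 4)    # near zero: correct hypothesis
m_bad = phase_diff_metric(r, 1, -math.pi / 4)    # large: wrong hypothesis
```

A decoder of the kind described would sum such terms for L = 1 and a larger L along each trellis branch, so that the path metric exploits phase memory beyond adjacent symbols.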


An Effective Multivariate Control Framework for Monitoring Cloud Systems Performance

  • Hababeh, Ismail;Thabain, Anton;Alouneh, Sahel
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.1
    • /
    • pp.86-109
    • /
    • 2019
  • Cloud computing systems' performance remains a central focus of research on optimal resource utilization. Running several existing benchmarks simultaneously serves to acquire performance information from specific cloud system resources. However, the complexity of monitoring the performance of existing computing systems is a challenge that calls for an efficient, interactive performance-monitoring system that users can direct. In this paper, we propose an effective multivariate control framework for monitoring cloud systems performance. The proposed framework utilizes hardware performance metrics of the cloud systems, collects and displays the measurements as meaningful graphics, stores the graphical information in a database, and provides the data on demand without requiring third-party software. We present performance metrics in terms of CPU usage, RAM availability, the number of active cloud machines, and the number of running processes on the selected machines, which can be monitored at a high control level by either a cloud service customer or a cloud service provider. The experimental results show that the proposed framework is reliable, scalable, and precise, and that it thus outperforms its counterparts in the field of monitoring cloud performance.
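The monitoring idea can be sketched in a few lines: keep a sliding window of samples per metric and flag any metric whose latest reading deviates strongly from its recent history. This is a minimal illustration only; the metric names are hypothetical, and the framework's collection, storage, and graphics layers are not reproduced.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class MetricMonitor:
    """Minimal multivariate monitor: a sliding window of samples per
    metric; a metric is flagged when its latest sample lies more than
    `k` standard deviations from the mean of its earlier samples."""
    window: int = 60
    k: float = 3.0
    samples: dict = field(default_factory=dict)

    def record(self, name: str, value: float) -> None:
        buf = self.samples.setdefault(name, [])
        buf.append(value)
        del buf[:-self.window]          # keep only the last `window` samples

    def alerts(self) -> list:
        out = []
        for name, buf in self.samples.items():
            if len(buf) < 3:
                continue                # not enough history yet
            m, s = mean(buf[:-1]), stdev(buf[:-1])
            if s > 0 and abs(buf[-1] - m) > self.k * s:
                out.append(name)
        return out

# Hypothetical usage: a collector would feed readings in periodically.
mon = MetricMonitor(window=60, k=3.0)
mon.record("cpu_percent", 41.5)
```

In a real deployment each `record` call would be driven by the hardware counters the framework collects; the same structure extends to RAM availability, machine counts, and process counts.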

EEPERF(Experiential Education PERFormance): An Instrument for Measuring Service Quality in Experiential Education (체험형 교육 서비스 품질 측정 항목에 관한 연구: 창의적 체험활동을 중심으로)

  • Park, Ky-Yoon;Kim, Hyun-Sik
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.43-52
    • /
    • 2012
  • As experiential education services grow, the need for proper management increases. Given that adequate measures are essential for managing anything successfully, it is important for managers to use a proper system of metrics to measure the performance of experiential education services. In spite of this need, however, little research has been done to develop a valid and reliable set of metrics for assessing the quality of experiential education services. The current study aims to develop a multi-item instrument for assessing the service quality of experiential education. The procedure is as follows. First, we generated a pool of possible metrics based on the diverse literature on service quality, eliciting possible metric items not only from general service quality instruments such as SERVQUAL and SERVPERF but also from educational service quality instruments such as HEdPERF and PESPERF. Second, specialist teachers in the experiential education area screened the initial metrics to boost face validity. Third, we proceeded with multiple rounds of empirical validation of those metrics and, based on this process, refined them into the final set. Fourth, we examined predictive validity by checking the well-established positive relationship between each dimension of the metrics and customer satisfaction. In sum, starting with the initial pool of scale items elicited from the previous literature and purifying them empirically through surveys, we developed a four-dimensional scale to measure the superiority of experiential education and named it "Experiential Education PERFormance" (EEPERF). Our findings indicate that students (consumers) perceive the superiority of the experiential education (EE) service in four dimensions: EE-empathy, EE-reliability, EE-outcome, and EE-landscape.
EE-empathy is a judgment in response to the question, "How empathetically does the experiential educational service provider interact with me?" Principal measures are "How well does the service provider understand my needs?" and "How well does the service provider listen to my voice?" Next, EE-reliability is a judgment in response to the question, "How reliably does the experiential educational service provider interact with me?" Major measures are "How reliable is the schedule here?" and "How credible is the service provider?" EE-outcome is a judgment in response to the question, "What results could I get from this experiential educational service encounter?" Representative measures are "How good is the information that I will acquire from this service encounter?" and "How useful is this service encounter in helping me develop creativity?" Finally, EE-landscape is a judgment about the physical environment. Essential measures are "How convenient is the access to the service encounter?" and "How well managed are the facilities?" We showed the reliability and validity of this system of metrics; all four dimensions influence customer satisfaction significantly. Practitioners may use the results in planning experiential educational service programs and evaluating individual service encounters. The current study is expected to act as a stepping-stone for future scale improvement, in which case researchers may draw on the experience-quality paradigm that has recently arisen.
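Scale purification of the kind described above typically includes an internal-consistency check for each dimension. A minimal sketch using Cronbach's alpha, a standard reliability statistic for multi-item scales (the item scores below are hypothetical, not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale. `items` is a list of
    item-score columns, one list per item, with respondents in the same
    order in every column. Values near 1 indicate that the items
    measure the same underlying dimension consistently."""
    k = len(items)
    item_var = sum(pvariance(col) for col in items)        # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]       # per-respondent totals
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point scores for one dimension: 3 items, 4 respondents.
alpha = cronbach_alpha([[4, 5, 3, 4], [5, 5, 3, 4], [4, 4, 2, 4]])
```

Each EEPERF dimension (EE-empathy, EE-reliability, EE-outcome, EE-landscape) would be checked separately, with low-alpha items dropped during purification.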


Practical Quality Model for Measuring Service Performance in SOA (SOA 서비스 성능 측정을 위한 실용적 품질모델)

  • Oh, Sang-Hun;Choi, Si-Won;Kim, Soo-Dong
    • The KIPS Transactions:PartD
    • /
    • v.15D no.2
    • /
    • pp.235-246
    • /
    • 2008
  • Service-Oriented Architecture (SOA) is emerging as an effective approach for developing applications by dynamically discovering and composing reusable services. The benefits of SOA generally include low development cost, high agility, high scalability, and business-level reuse. However, a representative obstacle to applying SOA widely is its performance problem, caused by the nature of SOA: service deployment and execution in a distributed environment, heterogeneity of service platforms, use of a standard message format, and so on. Therefore, the performance problem has to be overcome to apply SOA effectively, and service performance has to be measured precisely to analyze where and why problems occur. A prerequisite for this is the definition of a quality model for effectively measuring service performance. However, current work on service performance lacks a practical and precise quality model that adequately addresses the execution environment and features of SOA. Hence, in this paper, we define a quality model that includes a set of practical metrics for measuring service performance and an effective technique for measuring the values of the proposed metrics. In addition, we apply the metrics to a Hotel Reservation Service System (HRSS) to show the practicality and usefulness of the proposed metrics.
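One common way to obtain response-time measurements for metrics of this kind is to instrument each service operation and summarize the recorded latencies. A minimal sketch; the operation name is hypothetical and not taken from the paper's HRSS case study.

```python
import time
from statistics import quantiles

def timed(metrics, name):
    """Decorator recording the wall-clock latency (seconds) of every
    call to a service operation into metrics[name]."""
    def wrap(fn):
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                metrics.setdefault(name, []).append(time.perf_counter() - t0)
        return inner
    return wrap

def p95(samples):
    """95th-percentile response time over the recorded samples."""
    return quantiles(samples, n=20)[-1]

# Hypothetical reservation operation (illustrative only).
metrics = {}

@timed(metrics, "reserve_room")
def reserve_room():
    time.sleep(0.001)       # stand-in for the real distributed service call
    return "ok"
```

Tail percentiles such as p95 matter more than averages in SOA settings, since message serialization and network hops in a distributed deployment tend to produce long-tailed latency distributions.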

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarities for new customers or products, because similarities are computed from direct connections and common features among customers. For this reason, hybrid techniques were designed that also use content-based filtering. Separately, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarities indirectly through the similar customers placed between a pair of customers. This means creating a customer network based on purchase data and calculating the similarity between two customers from the features of the network that indirectly connect them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation, and the centrality metrics of the network can be utilized in this calculation. Different centrality metrics may affect recommendation performance differently, and this study further examines whether that effect varies with the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but for all customers and products. By considering a customer's purchase of an item as a link between the customer and the item on the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them.
Since classification models fit this binary link-prediction problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models are used in this research. The data for performance evaluation are order records collected from an online shopping mall over four years and two months. The first three years and eight months of records were organized into the social network used in the experiment, and the remaining four months of records were used to train and evaluate the recommender models. Experiments with the centrality metrics applied to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. This work analyzes four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality shows distinct performance differences depending on the model: it ranks first, with numerically high performance, in logistic regression, artificial neural network, and decision tree, but records very low rankings, with low performance, in the support vector machine and KNN models. As the experimental results reveal, network centrality metrics over a subnetwork connecting two nodes can effectively predict the connectivity between the two nodes in a social network within a classification model, and each metric performs differently depending on the classification model type.
This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality can be considered to obtain higher performance for certain models.
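Two of the four centrality metrics discussed above, degree and closeness, can be computed directly from an adjacency structure. A minimal pure-Python sketch on a toy star graph (assumes an undirected, connected graph; the study's customer-item networks would be built from purchase links instead):

```python
from collections import deque

def degree_centrality(adj):
    """Degree centrality: the fraction of the other nodes each node
    is directly connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness centrality: (n - 1) divided by the sum of BFS
    shortest-path distances from each node to all other nodes."""
    n = len(adj)
    out = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                        # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        out[src] = (n - 1) / total if total else 0.0
    return out

# Star graph: the hub should dominate both centralities.
star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
dc = degree_centrality(star)
cc = closeness_centrality(star)
```

Betweenness and eigenvector centrality follow the same pattern but need shortest-path counting and an iterative eigenvector computation respectively; in practice a graph library is usually used for those.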