• Title/Summary/Keyword: performance metrics


A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
• Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because it computes similarities from direct connections and common features among customers. For this reason, hybrid techniques were designed that combine it with content-based filtering. Separately, efforts have been made to solve these problems by applying the structural characteristics of social networks. This approach calculates similarities indirectly through similar customers positioned between the two customers of interest: a customer network is built from purchasing data, and the similarity between two customers is derived from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation. The centrality metrics of networks can be utilized to calculate these similarities. Different centrality metrics have important implications in that they may affect recommendation performance differently, and this study further examines how the effect of these centrality metrics on recommendation performance varies with the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to improve recommendation performance when applied not only to new customers or products but to all customers and products. By treating a customer's purchase of an item as a link generated between the customer and the item on the network, predicting user acceptance of a recommendation becomes a problem of predicting whether a new link will be created between them. Because classification models fit this binary problem of whether a link is formed or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models are selected in this research. Performance was evaluated on order data collected from an online shopping mall over four years and two months. Of these, the first three years and eight months of records were organized into the social network, and the following four months' records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates obtained with the different centrality metrics differ across algorithms at a meaningful level. This work analyzes four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks moderately across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality shows distinct differences in performance according to the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but it records very low rankings and low performance in the support vector machine and k-nearest neighbors models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network. Furthermore, each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and closeness centrality could be introduced to obtain higher performance for certain models.
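A minimal sketch of the idea described above, assuming a hypothetical purchase network and a simple way of combining node centralities into pair features (not the authors' exact pipeline): the four centrality metrics are computed with networkx and fed to one of the classifiers studied, here logistic regression.

```python
# Sketch: centrality features for link prediction on a purchase network.
# Hypothetical data and feature construction; not the paper's exact pipeline.
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Bipartite purchase network: customers c*, items i*
G = nx.Graph()
G.add_edges_from([("c1", "i1"), ("c1", "i2"), ("c2", "i2"),
                  ("c2", "i3"), ("c3", "i1"), ("c3", "i3")])

# The four centrality metrics analyzed in the study
degree      = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness   = nx.closeness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

def pair_features(u, v):
    """Combine node centralities into features for a (customer, item) pair."""
    return [degree[u] + degree[v],
            betweenness[u] + betweenness[v],
            closeness[u] + closeness[v],
            eigenvector[u] + eigenvector[v]]

# Toy training set: existing links are positives, absent links negatives
pairs  = [("c1", "i1"), ("c2", "i2"), ("c3", "i3"), ("c1", "i3"), ("c2", "i1")]
labels = [1, 1, 1, 0, 0]
X = [pair_features(u, v) for u, v in pairs]

clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba([pair_features("c3", "i2")]))  # predicted acceptance likelihood
```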

An In-depth Analysis and Performance Improvement of a Container Relocation Algorithm

  • Lee, Hyung-Bong;Kwon, Ki-Hyeon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.9
    • /
    • pp.81-89
    • /
    • 2017
  • The CRP (Container Relocation Problem) algorithms pursuing efficient container relocation at wharf container terminals cannot be deterministic because of the large number of layout cases. Therefore, CRP algorithms must adopt intuition gained from trial and error together with experimental heuristic techniques. Because no heuristic can be best for every individual case, it is necessary to find metrics that perform excellently on average. In this study, we analyze in detail the GLAH (Greedy Look-ahead Heuristic) algorithm, one of the most recent approaches, and propose a heuristic metric, HOB (the sum of the height differences between a badly placed container and the containers prohibited by it), to improve the algorithm. The experimental results show that the improved algorithm, GLAH', achieves a stable performance increase of up to 3.8% on our test data, and that the performance gap widens as the layout size grows.
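The HOB metric is only described at abstract level, so the sketch below is one plausible reading, assuming "badly placed" means a container stacked above another that must be retrieved earlier; the function name and bay encoding are illustrative, not the paper's formulation.

```python
# Sketch of the HOB idea: for each badly placed container (stacked above a
# container that must leave earlier), sum the height differences to the
# containers it blocks. Interpretation of "badly placed"/"prohibited" is an
# assumption based on the abstract, not the paper's exact definition.

def hob(bay):
    """bay: list of stacks, each a bottom-to-top list of retrieval priorities
    (smaller number = must be retrieved earlier)."""
    total = 0
    for stack in bay:
        for upper_h, upper in enumerate(stack):
            for lower_h in range(upper_h):
                if upper > stack[lower_h]:      # upper blocks an earlier retrieval
                    total += upper_h - lower_h  # height difference contributes to HOB
    return total

# Example: in stack [2, 4, 1], container 4 blocks container 2 (height diff 1);
# stack [3, 1] contains no badly placed container.
print(hob([[3, 1], [2, 4, 1]]))  # -> 1
```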

Research Status of Machine Learning for Self-Organizing Network - I (Self-Organizing Network에서 기계학습 연구동향-I)

  • Kwon, D.S.;Na, J.H.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.4
    • /
    • pp.103-114
    • /
    • 2020
  • In this study, machine learning (ML) algorithms are analyzed and summarized as a self-organizing network (SON) realization technology that can minimize expert intervention in the planning, configuration, and optimization of mobile communication networks. First, the basic concepts of the ML algorithms and the areas of the SON to which they are applied are briefly summarized. In addition, the requirements and performance metrics for ML are summarized from the SON perspective, along with the performance that ML algorithms applied to SONs so far have achieved in terms of those SON performance metrics.

Using User Rating Patterns for Selecting Neighbors in Collaborative Filtering

  • Lee, Soojung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.9
    • /
    • pp.77-82
    • /
    • 2019
  • Collaborative filtering is a popular technique for recommender systems and is used in many practical commercial systems. Its basic principle is to select neighbors similar to the current user and to make recommendations for that user from the neighbors' past preference information on items. One of the major problems inherent in this type of system is the data sparsity of ratings. This stems mainly from the underlying similarity measures, which produce neighbors based on rating records. This paper addresses this problem and suggests a new similarity measure. The proposed method takes users' rating patterns into account when computing similarity, rather than relying only on commonly rated items as previous measures do. Performance experiments on various existing measures are conducted and their performance is compared in terms of major performance metrics. As a result, the proposed measure shows better or comparable results on all the metrics considered.
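To illustrate the sparsity problem that motivates the new measure, here is a minimal sketch of a conventional similarity computed only over commonly rated items (Pearson correlation is used as a representative baseline; the paper's rating-pattern measure is not reproduced).

```python
# Sketch of the sparsity issue with co-rated-item similarity: Pearson
# correlation is defined only over items both users have rated, so sparse
# ratings leave few (or no) co-rated items to compare. Illustrative only.
from math import sqrt

def pearson_sim(ratings_u, ratings_v):
    common = set(ratings_u) & set(ratings_v)         # co-rated items only
    if len(common) < 2:
        return None                                   # similarity undefined
    u = [ratings_u[i] for i in common]
    v = [ratings_v[i] for i in common]
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sqrt(sum((a - mu) ** 2 for a in u)) * sqrt(sum((b - mv) ** 2 for b in v))
    return num / den if den else 0.0

alice = {"item1": 5, "item2": 3, "item3": 4}
bob   = {"item3": 4, "item4": 2}                      # only one co-rated item
print(pearson_sim(alice, bob))                        # None: the sparsity problem
```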

UFKLDA: An unsupervised feature extraction algorithm for anomaly detection under cloud environment

  • Wang, GuiPing;Yang, JianXi;Li, Ren
    • ETRI Journal
    • /
    • v.41 no.5
    • /
    • pp.684-695
    • /
    • 2019
  • In a cloud environment, performance degradation, or even downtime, of virtual machines (VMs) usually appears gradually along with anomalous states of VMs. To better characterize the state of a VM, all possible performance metrics are collected. For such high-dimensional datasets, this article proposes a feature extraction algorithm based on unsupervised fuzzy linear discriminant analysis with kernel (UFKLDA). By introducing the kernel method, UFKLDA can not only effectively deal with non-Gaussian datasets but also implement nonlinear feature extraction. Two sets of experiments were undertaken. In discriminability experiments, this article introduces quantitative criteria to measure discriminability among all classes of samples. The results show that UFKLDA improves discriminability compared with other popular feature extraction algorithms. In detection accuracy experiments, this article computes accuracy measures of an anomaly detection algorithm (i.e., C-SVM) on the original performance metrics and extracted features. The results show that anomaly detection with features extracted by UFKLDA improves the accuracy of detection in terms of sensitivity and specificity.
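UFKLDA itself is not available in standard libraries, so the sketch below only illustrates the overall pipeline the abstract describes, kernel-based nonlinear feature extraction followed by C-SVM detection, using scikit-learn's KernelPCA as a stand-in extractor and synthetic VM metrics.

```python
# Pipeline sketch: kernel feature extraction, then C-SVM anomaly detection on
# VM performance metrics. KernelPCA stands in for UFKLDA; data are synthetic.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X_normal  = rng.normal(0.0, 1.0, size=(200, 30))   # 30 hypothetical VM metrics
X_anomaly = rng.normal(2.0, 1.5, size=(20, 30))
X = np.vstack([X_normal, X_anomaly])
y = np.array([0] * 200 + [1] * 20)                  # 1 = anomalous state

extractor = KernelPCA(n_components=5, kernel="rbf")  # nonlinear feature extraction
X_feat = extractor.fit_transform(X)

clf = SVC(kernel="rbf", C=1.0).fit(X_feat, y)        # C-SVM detector
pred = clf.predict(X_feat)
print("sensitivity:", recall_score(y, pred))                  # true-positive rate
print("specificity:", recall_score(y, pred, pos_label=0))     # true-negative rate
```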

A Generalized Multicarrier Communication System - Part I: Theoretical Performance Analysis and Bounds

  • Imran Ali
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.1-11
    • /
    • 2024
  • This paper develops a generalized framework for the analysis of multicarrier communication systems, using a generic pair of transmitter- and receiver-side transforms, QT | QR, such that the DFT-based "conventional OFDM" is a special case. This analysis framework is then used to propose and prove theorems on various performance metrics of a multicarrier communication system, which apply to any system that fits the architecture, as most do. The framework also derives previously unknown closed-form expressions for these metrics, such as how the performance degradation due to carrier frequency offset or timing synchronization error, among others, is a function of the generic transforms. While extensive work exists on the impact of these impairments on conventional OFDM, how they depend on the transform matrices is unknown in the literature. It is shown how the analysis of an OFDM-based system is a special case of the analysis in this paper. This paper is Part I of a three-paper series, where the other two parts supplement the arguments presented here.
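As a compact illustration of the special-case relationship, and assuming QT and QR denote the transmitter- and receiver-side transform matrices (the paper's exact notation may differ), conventional OFDM is recovered when the transform pair is the (I)DFT:

```latex
% Generalized multicarrier model with generic transforms (assumed notation)
\[
\mathbf{x} = Q_T\,\mathbf{s}, \qquad \hat{\mathbf{s}} = Q_R\,\mathbf{y},
\]
% Conventional OFDM as the special case Q_T = F^H (IDFT), Q_R = F (DFT):
\[
Q_T = F^{H}, \quad Q_R = F, \qquad
[F]_{k,n} = \frac{1}{\sqrt{N}}\, e^{-j 2\pi k n / N}, \quad k,n = 0,\dots,N-1 .
\]
```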

Six Sigma Business Breakthrough Strategy (6시그마 경영혁신전략)

  • 홍성훈;김상부;권혁무;이민구
    • Journal of Korean Society for Quality Management
    • /
    • v.27 no.1
    • /
    • pp.223-231
    • /
    • 1999
  • The concept of six sigma was introduced at and popularized by Motorola in its quest to reduce defects in manufactured electronics products. When used as a metric, six sigma technically means having no more than 3.4 defects per million opportunities in any process, product, or service. More important than the technical definition is the concept of six sigma as a disciplined, quantitative approach for improving defined metrics in manufacturing, service, or financial processes. This approach drives the overall process of selecting the right projects, based on their potential to improve performance metrics, and of selecting and training the right people to achieve the business results.
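The 3.4 defects-per-million figure follows from the conventional assumption of a 1.5-sigma long-term mean shift, which leaves a one-sided normal tail at 6 − 1.5 = 4.5 sigma; a quick check:

```python
# Where the 3.4 defects-per-million figure comes from: under the conventional
# 1.5-sigma long-term shift, a six-sigma process leaves a one-sided tail at 4.5 sigma.
from scipy.stats import norm

dpmo = norm.sf(6.0 - 1.5) * 1_000_000   # one-sided tail probability x 10^6
print(round(dpmo, 1))                    # ~3.4 defects per million opportunities
```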


A Multi-Class Classifier of Modified Convolution Neural Network by Dynamic Hyperplane of Support Vector Machine

  • Nur Suhailayani Suhaimi;Zalinda Othman;Mohd Ridzwan Yaakub
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.21-31
    • /
    • 2023
  • In this paper, we focus on the problem of evaluating multi-class classification accuracy and simulating multiple classifier performance metrics. Multi-class classifiers for sentiment analysis involve many challenges, whereas previous research has narrowed to binary classification models since they provide higher accuracy when dealing with text data. Thus, we take inspiration from the non-linear Support Vector Machine and modify the algorithm by embedding dynamic hyperplanes representing multiple class labels. We then analyze the performance of the multi-class classifiers using macro-accuracy, micro-accuracy, and several other metrics to justify the significance of our algorithm enhancement. Furthermore, we hybridize an Enhanced Convolution Neural Network (ECNN) with a Dynamic Support Vector Machine (DSVM) to demonstrate the effectiveness and efficiency of the classifier on multi-class text data. We perform experiments on three hybrid classifiers: ECNN with Binary SVM (ECNN-BSVM), ECNN with linear Multi-Class SVM (ECNN-MCSVM), and our proposed algorithm (ECNN-DSVM). Comparative experiments of the hybrid algorithms yielded 85.12% for single-metric accuracy and 86.95% for multiple metrics on average. Our modified ECNN-DSVM classifier reached 98.29% micro-accuracy with an f-score of up to 98%. For the future direction of this research, we aim at hyperplane optimization analysis.
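Since the comparison hinges on macro- versus micro-averaged metrics, here is a small illustration of how the two averaging modes are computed (labels and predictions are hypothetical):

```python
# Illustration of macro- vs micro-averaged metrics for a multi-class classifier.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]        # three sentiment classes
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]

print("micro F1:", f1_score(y_true, y_pred, average="micro"))   # pools all classes
print("macro F1:", f1_score(y_true, y_pred, average="macro"))   # unweighted class mean
print("macro precision:", precision_score(y_true, y_pred, average="macro"))
print("macro recall:", recall_score(y_true, y_pred, average="macro"))
```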

SSC risk significance in risk-informed, performance-based licensing of non-LWRs

  • James C. Lin
    • Nuclear Engineering and Technology
    • /
    • v.56 no.3
    • /
    • pp.819-823
    • /
    • 2024
  • The main criteria used in NEI 18-04 to define SSCs as risk-significant are that (1) the SSC is required to keep all LBEs within the F-C target, and (2) the total frequency with the SSC failed exceeds 1% of the limit for at least one of the three cumulative risk metrics used for evaluating the integrated plant risk. The first is a reasonable criterion for determining risk-significant SSCs. However, the second criterion may not be adequate for determining the risk significance of SSCs: it compares the cumulative risk metric values representing the integrated plant risk (less the preventive and mitigative effects of the SSC being evaluated) against a risk limit that represents a very small contribution to the overall integrated plant risk, a contribution that corresponds appropriately to individual SSCs. The simplest way to redefine the NEI 18-04 definition of risk-significant SSCs in relation to the integrated plant risk metrics is to compare the difference between the risk metric value calculated with the SSC failed and the risk metric value calculated with the SSC credited against 1% of the risk limit established for the integrated plant risk metrics.
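A minimal sketch of the comparison proposed in the last sentence, with illustrative variable names and numbers:

```python
# Sketch of the proposed criterion: judge risk significance by the *difference*
# an SSC makes to an integrated risk metric, compared against 1% of the limit.
def is_risk_significant(metric_with_ssc_failed, metric_with_ssc_credited,
                        integrated_risk_limit, fraction=0.01):
    delta = metric_with_ssc_failed - metric_with_ssc_credited
    return delta > fraction * integrated_risk_limit

# Example: the SSC changes a cumulative risk metric from 2.0e-6 (credited) to
# 9.0e-6 (failed); the difference 7.0e-6 exceeds 1% of a 1.0e-4 limit (1.0e-6).
print(is_risk_significant(9.0e-6, 2.0e-6, 1.0e-4))   # True
```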

Modeling and Performance Evaluation of AP Deployment Schemes for Indoor Location-Awareness (실내 환경에서 위치 인식율을 고려한 AP 배치 기법의 모델링 및 성능 평가)

  • Kim, Taehoon;Tak, Sungwoo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.4
    • /
    • pp.847-856
    • /
    • 2013
  • This paper presents an AP placement technique that considers indoor location-awareness and examines its performance. The proposed technique is evaluated in terms of three performance metrics: location-awareness, AP-based wireless network performance, and cost. It consists of meta-heuristic algorithms that yield a near-optimal AP configuration for the given performance metrics, and deterministic algorithms that accelerate convergence to that near-optimal configuration. The performance of the AP placement technique is measured in environments simulating indoor spaces, and the numerical results obtained by experimental evaluation show fast convergence to a near-optimal solution for a given performance metric.
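As a rough illustration of the meta-heuristic component, here is a generic simulated-annealing sketch for AP placement over a placeholder objective; the paper's actual objective, constraints, and deterministic refinement are not reproduced.

```python
# Generic meta-heuristic sketch (simulated annealing) for AP placement over a
# placeholder objective combining a coverage proxy and a cost penalty.
import math, random

GRID = [(x, y) for x in range(10) for y in range(10)]   # candidate AP sites

def objective(aps):
    """Hypothetical score: reward short distances to the nearest AP, penalize AP count."""
    coverage = sum(min(math.dist(p, ap) for ap in aps) for p in GRID)
    return -coverage - 5.0 * len(aps)

def neighbor(aps):
    new = list(aps)
    new[random.randrange(len(new))] = random.choice(GRID)  # relocate one AP
    return new

random.seed(1)
current = random.sample(GRID, 4)
best, best_score, temp = current, objective(current), 10.0
for _ in range(2000):
    cand = neighbor(current)
    delta = objective(cand) - objective(current)
    if delta > 0 or random.random() < math.exp(delta / temp):  # accept worse moves early
        current = cand
        if objective(current) > best_score:
            best, best_score = current, objective(current)
    temp *= 0.999                                              # cooling schedule
print(best, round(best_score, 1))
```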