• Title/Summary/Keyword: Quality metric

Neural and MTS Algorithms for Feature Selection

  • Su, Chao-Ton;Li, Te-Sheng
    • International Journal of Quality Innovation
    • /
    • v.3 no.2
    • /
    • pp.113-131
    • /
    • 2002
  • The relationships among multi-dimensional data (such as medical examination data) with ambiguity and variation are difficult to explore. The traditional approach to building a data classification system requires the formulation of rules by which the input data can be analyzed, and formulating such rules is very difficult for large sets of input data. This paper first describes two classification approaches using a back-propagation (BP) neural network and a Mahalanobis distance (MD) classifier, and then proposes two approaches for multi-dimensional feature selection. The first is a feature selection procedure based on the trained BP neural network: the products of the weights between the input and hidden layers and between the hidden and output layers are compared, and, to simplify the structure, only weight products of large absolute value are kept. The second approach is the Mahalanobis-Taguchi system (MTS) originally suggested by Dr. Taguchi, which performs Taguchi's fractional factorial design using the Mahalanobis distance as a performance metric. We combine automatic thresholding with the MD classifier so that it can handle a reduced model, which is the focus of this paper. In this work, two case studies are used as examples to compare and discuss the complete and reduced models employing the BP neural network and the MD classifier. The implementation results show that the proposed approaches are effective and powerful for classification.
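
Since the MD classifier is central to the MTS approach described above, a minimal sketch may help. This is an illustrative reading, not the paper's implementation; the reference data and the cutoff value below are placeholders.

```python
import numpy as np

def mahalanobis_distance(x, mean, cov_inv):
    """Squared Mahalanobis distance of sample x from a reference group."""
    d = x - mean
    return float(d @ cov_inv @ d)

# Reference ("normal") group: rows are observations, columns are features.
normal = np.random.default_rng(0).normal(size=(50, 4))   # placeholder data
mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

# Classify a new observation: small MD means it resembles the reference group.
x_new = np.array([0.1, -0.2, 0.3, 0.0])
md = mahalanobis_distance(x_new, mean, cov_inv)
threshold = 3.0  # illustrative cutoff; MTS tunes this via Taguchi designs
print("normal" if md < threshold else "abnormal", md)
```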

A Network Coding-Aware Routing Mechanism for Time-Sensitive Data Delivery in Multi-Hop Wireless Networks

  • Jeong, Minho;Ahn, Sanghyun
    • Journal of Information Processing Systems
    • /
    • v.13 no.6
    • /
    • pp.1544-1553
    • /
    • 2017
  • The network coding mechanism has attracted much attention because of its enhanced network throughput, a desirable characteristic especially in multi-hop wireless networks with limited link capacity, such as the device-to-device (D2D) communication network of 5G. COPE proposed using XOR-based network coding in two-hop wireless network topologies. For multi-hop wireless networks, the Distributed Coding-Aware Routing (DCAR) mechanism was proposed, which defines the coding conditions for two flows intersecting at an intermediate node and designs a routing metric that improves the coding opportunity by preferring routes with longer queues. Because routes with longer queues may increase the delay, DCAR is inefficient at delivering real-time multimedia traffic flows. In this paper, we propose a network coding-aware routing protocol for multi-hop wireless networks that enhances DCAR by considering traffic load distribution and link quality. This achieves higher network throughput and lower end-to-end delay at the same time, enabling proper delivery of time-sensitive data flows. Qualnet-based simulation results show that our proposed scheme outperforms DCAR in terms of throughput and delay.
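
The XOR coding that COPE introduced, and that DCAR generalizes to multi-hop routes, can be illustrated in a few lines. The sketch below assumes equal-length packets and omits overhearing, coding-condition checks, and the routing metric.

```python
# COPE-style XOR coding at a relay: two flows crossing at an intermediate
# node are combined into one transmission; each receiver recovers the
# other flow's packet by XOR-ing with the packet it already knows.

def xor_encode(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """XOR two equal-length packets into a single coded packet."""
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

def xor_decode(coded: bytes, known: bytes) -> bytes:
    """Recover the unknown packet from the coded one and a known packet."""
    return xor_encode(coded, known)  # XOR is its own inverse

a, b = b"hello", b"world"
coded = xor_encode(a, b)           # one broadcast instead of two unicasts
assert xor_decode(coded, a) == b
assert xor_decode(coded, b) == a
```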

Categorical Data Clustering Analysis Using Association-based Dissimilarity (연관성 기반 비유사성을 활용한 범주형 자료 군집분석)

  • Lee, Changki;Jung, Uk
    • Journal of Korean Society for Quality Management
    • /
    • v.47 no.2
    • /
    • pp.271-281
    • /
    • 2019
  • Purpose: The purpose of this study is to suggest a more efficient distance measure for categorical data cluster analysis that takes the relationships between categorical variables into account. Methods: The association-based dissimilarity was employed to calculate the distance between two categorical observations, and this distance was applied to the PAM clustering algorithm to verify its effectiveness. The strength of association between two different categorical variables can be calculated from a mixture of dissimilarities between the conditional probability distributions of the other categorical variables, given these two categorical values. This method is particularly suitable for datasets whose categorical variables are highly correlated. Results: Simulation results on several real-life datasets showed that the proposed distance, which considers the relationships among the categorical variables, generally yielded better clustering performance than the Hamming distance. In addition, as the number of correlated variables increased, the difference in performance between the two clustering methods became statistically more significant. Conclusion: This study revealed that accounting for the relationships between categorical variables with the proposed method positively affected the results of cluster analysis.
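
One plausible reading of the association-based dissimilarity is sketched below: the dissimilarity between two values of a variable is the average total-variation distance between the conditional distributions of the other variables given each value. The function names and the choice of total variation are illustrative assumptions; the paper's exact mixture may differ.

```python
import numpy as np
import pandas as pd

def value_dissimilarity(df: pd.DataFrame, var: str, a, b) -> float:
    """Dissimilarity between two values of `var`: the average total-variation
    distance between the conditional distributions of every other variable,
    given var == a versus var == b."""
    others = [c for c in df.columns if c != var]
    dists = []
    for c in others:
        pa = df.loc[df[var] == a, c].value_counts(normalize=True)
        pb = df.loc[df[var] == b, c].value_counts(normalize=True)
        levels = pa.index.union(pb.index)
        tv = 0.5 * sum(abs(pa.get(l, 0.0) - pb.get(l, 0.0)) for l in levels)
        dists.append(tv)
    return float(np.mean(dists))

def observation_distance(df, x: dict, y: dict) -> float:
    """Distance between two observations: sum of per-variable dissimilarities
    (the Hamming distance would instead add 1 whenever x[v] != y[v])."""
    return sum(value_dissimilarity(df, v, x[v], y[v])
               for v in df.columns if x[v] != y[v])

df = pd.DataFrame({"color": ["red", "red", "blue", "blue"],
                   "size": ["S", "S", "L", "L"]})
print(value_dissimilarity(df, "color", "red", "blue"))  # 1.0: fully associated
```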

Metric based Performance Measurement of Software Development Methodologies from Traditional to DevOps Automation Culture

  • Poonam Narang;Pooja Mittal
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.6
    • /
    • pp.107-114
    • /
    • 2023
  • Successful implementation of DevOps practices significantly improves software efficiency, collaboration, and security, and many organizations are adopting DevOps for faster, higher-quality software delivery. DevOps brings development and operations teams together to overcome the kinds of communication gaps responsible for software failures. It relies on alternative sets of tools to automate the tasks of continuous integration, testing, delivery, deployment, and monitoring. Although DevOps is regarded as a very reliable and accountable environment for quality software delivery, it still lacks quantifiable evidence placing it above other traditional and agile development methods. This research evaluates the quantitative performance of DevOps and traditional/agile development methods based on software metrics. It uses three sample projects (code repositories) to quantify the results; for the DevOps-integrated tool chain, it considers our earlier proposed and implemented DevOps hybrid model of integrated automation tools. For discussion and validation of the results, tabular and graphical comparisons are included to identify the best-performing model. This comparative and evaluative research should help young researchers and students become well versed in the automated DevOps environment, an emerging focus of the development industry.
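
The abstract does not list the paper's specific metrics, but delivery-oriented software metrics of the kind such comparisons use can be computed from pipeline timestamps. The records below are hypothetical stand-ins; a real study would extract them from Git history and CI/CD logs.

```python
from datetime import datetime, timedelta

# Hypothetical commit->deploy records; real studies would pull these from
# the repository and the CI/CD tool chain (pipeline logs, release tags).
records = [
    {"commit": datetime(2023, 5, 1, 9), "deploy": datetime(2023, 5, 1, 17)},
    {"commit": datetime(2023, 5, 2, 10), "deploy": datetime(2023, 5, 3, 9)},
    {"commit": datetime(2023, 5, 4, 14), "deploy": datetime(2023, 5, 4, 18)},
]

# Change lead time: how long a commit takes to reach production.
lead_times = [r["deploy"] - r["commit"] for r in records]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency over the observed window.
span_days = (records[-1]["deploy"] - records[0]["deploy"]).days or 1
deploy_freq = len(records) / span_days  # deployments per day

print(f"average lead time: {avg_lead}, deployments/day: {deploy_freq:.2f}")
```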

Newly-designed adaptive non-blind deconvolution with structural similarity index in single-photon emission computed tomography

  • Kyuseok Kim;Youngjin Lee
    • Nuclear Engineering and Technology
    • /
    • v.55 no.12
    • /
    • pp.4591-4596
    • /
    • 2023
  • Single-photon emission computed tomography (SPECT) image reconstruction methods have a significant influence on image quality, with filtered back projection (FBP) and ordered subset expectation maximization (OSEM) being the most commonly used. In this study, we propose a newly designed adaptive non-blind deconvolution with a structural similarity (SSIM) index that combines the advantages of the FBP and OSEM reconstruction methods. After acquiring brain SPECT images, the proposed image was obtained with an algorithm that applies the SSIM metric to predict the distribution and amount of blurring. In the contrast-to-noise ratio (CNR) and coefficient of variation (COV) evaluations, the image from the proposed algorithm showed a spatial-resolution trend similar to that of FBP while obtaining values similar to those of OSEM. In addition, we confirmed that the CNR and COV values of the proposed algorithm improved by approximately 1.69 and 1.59 times, respectively, compared with those of an algorithm using an inappropriate deblurring process. In summary, we propose a new type of algorithm that combines the advantages of SPECT image reconstruction techniques and is expected to be applicable in various fields.
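
As a generic illustration of using SSIM to select among candidate deblurring strengths (not the authors' algorithm), the sketch below deconvolves a test image with several assumed Gaussian PSF widths and keeps the restoration scoring highest against a reference; the camera test image stands in for the FBP/OSEM pair.

```python
import numpy as np
from skimage import data, filters, restoration
from skimage.metrics import structural_similarity

def gaussian_psf(sigma, size=15):
    """Isotropic Gaussian point-spread function, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

reference = data.camera() / 255.0                 # stand-in reference image
blurred = filters.gaussian(reference, sigma=2.0)  # stand-in blurred image

# Non-blind deconvolution with candidate PSF widths; keep the SSIM-best one.
best = max(
    (restoration.richardson_lucy(blurred, gaussian_psf(s), num_iter=30)
     for s in (0.5, 1.0, 2.0, 3.0)),
    key=lambda img: structural_similarity(img, reference, data_range=1.0),
)
```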

Six Color Separation Using Additional Colorants and Quantitative Granularity Metric for Photography Quality (고화질 색 재현을 위한 추가적인 잉크와 정량적인 낟알 무늬의 측정자를 이용한 6색 분리)

  • Son Chang-Hwan;Cho Yang-Ho;Kwon Oh-Seol;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.49-59
    • /
    • 2005
  • This paper proposes a six-color separation using additional colorants and a quantitative granularity metric to reduce color difference and graininess. In the conventional method, light magenta and light cyan are used in the bright region instead of magenta and cyan. However, the hue values of light magenta and light cyan differ from those of magenta and cyan in CIELAB space, which makes the colorimetric reproduction somewhat inaccurate. To improve this, the proposed method uses yellow and light magenta as the additional colorants. In the bright region, magenta is replaced with light magenta and yellow, while cyan is replaced with light cyan and light magenta. This selection reduces the hue difference because it creates colors whose hues are similar to those of magenta and cyan, and a smoother image is obtained at the same time because the additional colorants have lower dot visibility. In the middle region, magenta is replaced with light magenta and magenta, while cyan is replaced with light cyan and cyan. Using two colorants of different concentrations makes the dot pattern coarse; to reflect this phenomenon, the quantitative granularity metric is used. In the dark region, only the magenta and cyan colorants are used, as usual. Experiments show that the proposed method improves both colorimetric and smooth-tone reproduction.
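
The region-based ink replacement can be sketched as a piecewise rule on a single channel. The region boundaries and blend weights below are illustrative assumptions, not the paper's calibrated values, and the granularity metric itself is omitted.

```python
def separate_magenta(m: float) -> dict:
    """Toy region-based ink replacement for the magenta channel (0..1 input).
    Boundaries 0.33/0.66 and blend weights are illustrative assumptions."""
    bright, dark = 0.33, 0.66
    if m < bright:   # bright region: light magenta plus a little yellow
        return {"magenta": 0.0, "light_magenta": m / bright,
                "yellow": 0.3 * m / bright}
    if m < dark:     # middle region: crossfade from light to dark ink
        t = (m - bright) / (dark - bright)
        return {"magenta": t * m, "light_magenta": 1.0 - t, "yellow": 0.0}
    return {"magenta": m, "light_magenta": 0.0, "yellow": 0.0}  # dark region
```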

Generating FE Mesh Automatically from STL File Model (STL 파일 모델로부터 유한 요소망 자동 생성)

  • Park, Jung-Min;Kwon, Ki-Youn;Lee, Byung-Chai;Chae, Soo-Won
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.31 no.7 s.262
    • /
    • pp.739-746
    • /
    • 2007
  • Recently, models in STL files have been widely used in reverse engineering processes, CAD systems, and analysis systems. However, such models have poor geometric quality and consist only of triangles, so they are not suitable for finite element analysis. This paper presents a general method that generates a finite element mesh from STL file models. Given a triangular mesh, the method evaluates the triangles and groups them into clusters, which are then merged according to several geometric indices. After merging, a plane-meshing algorithm based on the domain decomposition method is applied to each cluster, and the resulting plane mesh is projected onto the original triangular set. Because the algorithm uses general plane-meshing methods, both triangular and quadrilateral meshes can be obtained, unlike in previous research. Several mechanical part models are used to show the validity of the proposed method.
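
A toy version of the clustering step might group facets by normal direction, as below. The paper's method also uses edge adjacency and other geometric indices; this sketch uses normals only, with an assumed angle tolerance.

```python
import numpy as np

def cluster_by_normal(normals: np.ndarray, angle_deg: float = 20.0):
    """Greedy clustering of STL facets by normal direction: a facet joins an
    existing cluster if its normal is within `angle_deg` of the cluster seed.
    Real pipelines also require edge adjacency; this uses normals only."""
    cos_tol = np.cos(np.radians(angle_deg))
    seeds, labels = [], np.full(len(normals), -1, dtype=int)
    for i, n in enumerate(normals):
        n = n / np.linalg.norm(n)
        for k, s in enumerate(seeds):
            if n @ s >= cos_tol:
                labels[i] = k
                break
        else:
            seeds.append(n)
            labels[i] = len(seeds) - 1
    return labels

normals = np.array([[0, 0, 1], [0, 0.05, 1], [1, 0, 0]], dtype=float)
print(cluster_by_normal(normals))   # -> [0 0 1]
```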

Utility-based Resource Allocation with Bipartite Matching in OFDMA-based Wireless Systems

  • Zheng, Kan;Li, Wei;Liu, Fei;Xiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.8
    • /
    • pp.1913-1925
    • /
    • 2012
  • To utilize limited radio resources efficiently, resource allocation schemes in OFDMA-based wireless networks have recently received intensive attention. Instead of throughput, utility is adopted as the metric for resource allocation, which provides a reasonable way to relate user experience to various quality-of-service (QoS) metrics. After formulating the optimization problem as a weighted bipartite graph, a modified bipartite matching method is proposed to find a suboptimal solution to the resource allocation problem in OFDMA-based wireless systems with feasible computational complexity. Finally, simulation results are presented to validate the effectiveness of the proposed method.
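
For contrast with the paper's modified (suboptimal, lower-complexity) matching, the optimal weighted bipartite matching baseline is a standard assignment problem, which SciPy's Hungarian-algorithm solver handles directly. The utility values below are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Utility of assigning each of 4 users (rows) to each of 4 subchannels
# (columns); values stand in for QoS-derived utilities.
utility = np.array([[0.9, 0.4, 0.6, 0.2],
                    [0.3, 0.8, 0.5, 0.7],
                    [0.6, 0.2, 0.9, 0.4],
                    [0.1, 0.7, 0.3, 0.8]])

# linear_sum_assignment minimizes cost, so negate to maximize total utility.
rows, cols = linear_sum_assignment(-utility)
print(list(zip(rows, cols)), "total utility:", utility[rows, cols].sum())
```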

Channel Coding-Aided Multi-Hop Transmission for Throughput Enhancement

  • Hwang, Inchul;Wang, Hanho
    • International Journal of Contents
    • /
    • v.12 no.1
    • /
    • pp.65-69
    • /
    • 2016
  • Wireless communication chipsets have a fixed transmission rate and communication distance. Although many kinds of chipsets exist for different throughput and distance targets, they cannot support all types of wireless applications. This paper provides theoretical results on supporting wireless applications with different throughput, delay quality-of-service (QoS), and communication-distance requirements using a chipset with a fixed rate and transmission power. As the performance metric, the probability that a data frame is successfully received at the desired receiver is adopted. Based on this probability, the average number of transmissions needed for one successful frame delivery is derived. These equations are used to analyze the performance of a single-hop transmission system with channel coding and a dual-hop transmission system without error correction. Our results reveal that single-hop transmission assisted by channel coding can extend the communication distance, but this range-extension effect is limited. Accordingly, dual-hop transmission is needed to overcome the communication-distance limit of a chipset.
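
The performance metric can be made concrete with the standard geometric-retransmission model (an assumption; the paper's exact derivation is not reproduced here): if a frame succeeds with probability p, the expected number of ARQ transmissions is 1/p.

```python
def frame_success_prob(ber: float, bits: int) -> float:
    """Probability a frame is received without error over a memoryless
    channel with per-bit error rate `ber` (no error correction)."""
    return (1.0 - ber) ** bits

def avg_transmissions(p_success: float) -> float:
    """Expected ARQ transmissions per delivered frame: geometric, 1/p."""
    return 1.0 / p_success

# Single long hop vs. a dual hop whose shorter links see a lower BER
# (the per-hop BER values here are assumptions for illustration).
p_single = frame_success_prob(ber=1e-4, bits=8000)
p_hop = frame_success_prob(ber=1e-5, bits=8000)
print(avg_transmissions(p_single))      # one long hop
print(2 * avg_transmissions(p_hop))     # two shorter hops, one relay
```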

A Novel Filtered Bi-Histogram Equalization Method

  • Sengee, Nyamlkhagva;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.6
    • /
    • pp.691-700
    • /
    • 2015
  • Here, we present a new framework for histogram equalization in which both local and global contrast are enhanced using neighborhood metrics. While the neighborhood information is being checked, filters can simultaneously improve image quality; the filter is chosen according to the desired image property, such as noise removal or smoothing. Our experimental results confirmed that this does not increase the computational cost, because filtering is performed within our proposed arrangement of building the histogram while checking the neighborhood metrics. If the two operations, histogram equalization and filtering, are performed sequentially, the first uses the original image data and the second uses data altered by the first; with the combined approach, the original data can be used for both. The proposed method is fully automated, and any spatial neighborhood filter type and size can be used. Our experiments confirmed that the proposed method is more effective than similar techniques reported previously.
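
For reference, a minimal sketch of the bi-histogram equalization baseline that the paper extends is given below: split the histogram at the mean and equalize each half independently. The neighborhood metrics and the in-pass filtering, which are the paper's contribution, are not reproduced.

```python
import numpy as np

def bi_histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Brightness-preserving bi-histogram equalization for uint8 images:
    split the gray range at the mean and equalize each half independently."""
    mean = int(img.mean())
    out = np.empty_like(img)
    for lo, hi, mask in ((0, mean, img <= mean), (mean + 1, 255, img > mean)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / vals.size
        out[mask] = lo + np.round(cdf[vals - lo] * (hi - lo)).astype(img.dtype)
    return out

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
enhanced = bi_histogram_equalize(img)
```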