• Title/Summary/Keyword: size metrics

The Characteristics and Implementations of Quality Metrics for Analyzing Innovation Effects in Six Sigma Projects (식스시그마 프로젝트 사례에서 혁신효과 분석을 위한 품질척도의 특성 및 적용)

  • Choi, Sungwoon
    • Journal of the Korea Safety Management & Science
    • /
    • v.16 no.1
    • /
    • pp.169-176
    • /
    • 2014
  • This research discusses the characteristics and implementation strategies of two types of quality metrics for analyzing innovation effects in Six Sigma projects: the fixed-specification type and the moving-specification type. $Z_{st}$ and $P_{pk}$ are fixed-specification metrics, influenced by a predetermined specification. In contrast, moving-specification metrics such as the Strictly Standardized Mean Difference (SSMD), Z-score, F-statistic, and t-statistic are independent of any predetermined specification. The $Z_{st}$ sigma level yields defective rates in parts per million (PPM) and defects per million opportunities (DPMO). However, defective rates are not comparable across industrial sectors because each sector has its own inherent technology. As a relative measure for comparing defective rates across sectors, the ratio of specification width to natural tolerance, $P_{pk}$, is used; its drawback is that it remains highly dependent on the specification. The F-statistic and t-statistic identify innovation effects by comparing accuracy and precision before and after improvement. These statistics are unaffected by the specification but depend on the assumed statistical distribution and the sample size, so the statistical significance they indicate does not always coincide with practical significance. In conclusion, SSMD and the Z-score are the most suitable quality metrics: they are uninfluenced by a fixed specification, a theoretical distribution model, or an arbitrary sample size, and they still capture before-and-after changes in accuracy and precision. It is beneficial to use SSMD and the Z-score alongside the $Z_{st}$ sigma level and $P_{pk}$ metrics commonly employed in Six Sigma projects. Case studies from the national Six Sigma contest of 2011 and 2012 are analyzed to provide quality practitioners with guidelines for the use of these quality metrics.
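
As a rough illustration of the contrast this abstract draws, the following Python sketch computes one fixed-specification metric ($P_{pk}$) and two moving-specification metrics (SSMD and Z-score). The sample data and specification limits are invented for the example and are not taken from the paper.

```python
# Minimal sketch (assumed data): one fixed-specification metric (Ppk) and two
# moving-specification metrics (SSMD, Z-score) from before/after samples.
import math
import statistics as st

def ppk(data, lsl, usl):
    """Process performance index: distance from the mean to the nearer spec limit over 3 sigma."""
    mu, sigma = st.mean(data), st.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def ssmd(before, after):
    """Strictly Standardized Mean Difference between two independent samples."""
    return (st.mean(after) - st.mean(before)) / math.sqrt(
        st.variance(after) + st.variance(before))

def z_score(x, data):
    """Standard score of x relative to the sample mean and standard deviation."""
    return (x - st.mean(data)) / st.stdev(data)

before = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4]   # invented "before improvement" sample
after  = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0]  # invented "after improvement" sample
print(ppk(after, lsl=9.5, usl=10.5))  # fixed-specification view of the improved process
print(ssmd(before, after))            # moving-specification view of the shift
print(z_score(10.3, after))
```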

An Empirical Study of Software Size Estimation Techniques by Use Case (Use Case에 의한 소프트웨어 규모 예측 방법에 대한 실증적 연구)

  • 서예영;이남용
    • The Journal of Society for e-Business Studies
    • /
    • v.6 no.2
    • /
    • pp.143-157
    • /
    • 2001
  • There has long been a need to predict development effort and cost early in the software process, and hundreds of metrics have been proposed for computer software, but not all of them provide practical support to the software engineer. Some demand measurement that is too complex, others are so esoteric that few practitioners can understand them, and others violate basic intuitive notions of what high-quality software really is. Metrics are therefore best tailored to specific products and processes once their strengths and weaknesses are understood. This paper describes two size estimation techniques, the Karner technique and the Marchesi technique, and compares and analyzes them against proposed evaluation criteria. Both techniques estimate software size from the use cases produced mainly during object-oriented analysis. We also present an empirical comparison in which both are applied to an Internet medicine prescription system, and we propose guidance for further experiments based on our analysis. We believe that properly tailored software metrics can make project management more effective.
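
For readers unfamiliar with the Karner technique mentioned above, the following sketch shows the commonly cited use-case-points calculation. The weights are the standard Karner values; the actor and use-case counts and the factor sums are invented, and the paper's own evaluation criteria are not reproduced here.

```python
# Minimal sketch of Karner-style use case points (UCP). Weights follow the commonly
# cited Karner values; the example counts and factor sums are illustrative only.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tfactor_sum, efactor_sum):
    """actors/use_cases: dicts mapping complexity class -> count."""
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())         # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())  # unadjusted use case weight
    uucp = uaw + uucw                                                  # unadjusted use case points
    tcf = 0.6 + 0.01 * tfactor_sum   # technical complexity factor
    ef = 1.4 - 0.03 * efactor_sum    # environmental factor
    return uucp * tcf * ef

ucp = use_case_points(
    actors={"simple": 2, "average": 1, "complex": 3},
    use_cases={"simple": 4, "average": 6, "complex": 2},
    tfactor_sum=38, efactor_sum=21)
print(ucp, ucp * 20)  # Karner's rule of thumb: roughly 20 person-hours per UCP
```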

Evolutionary Computing Driven Extreme Learning Machine for Objected Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.2
    • /
    • pp.232-240
    • /
    • 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has necessitated reliable and flawless software operation. Predicting aging in operational software is becoming a basic and unavoidable requirement for ensuring system availability, reliability, and operation. In this paper, an improved evolutionary-computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employ a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In the suggested paradigm, OOP software metrics are extracted after pre-processing, which includes outlier detection and normalization; this improves the proposed system's ability to deal with instances with unbalanced biases and metrics. Further, different dimensionality reduction and feature selection algorithms, such as principal component analysis (PCA), linear discriminant analysis (LDA), and t-test analysis, are applied. We suggest a single-hidden-layer multi-feed-forward neural network (SL-MFNN) based ELM, where an adaptive genetic algorithm (AGA) is applied to estimate the weight and bias parameters for ELM learning. Unlike traditional neural network models, the implementation of GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that the combination of outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
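
A minimal extreme learning machine looks roughly like the sketch below: hidden-layer weights are random, and only the output weights are solved in closed form. This is an assumed illustration, not the paper's implementation; the adaptive genetic algorithm the paper uses to tune the hidden weights and biases is omitted, and the data are synthetic.

```python
# Minimal ELM sketch (assumed): random hidden weights, least-squares output weights.
import numpy as np

def elm_train(X, y, n_hidden=50, rng=np.random.default_rng(0)):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy usage: rows are software modules described by size/complexity metrics;
# y = 1 if the module exhibited aging-related failures (labels invented here).
X = np.random.default_rng(1).normal(size=(100, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(float)
model = elm_train(X, y)
print((elm_predict(X, model) > 0.5).mean())       # fraction predicted as aging-prone
```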

Software Metric for CBSE Model

  • Iyyappan. M;Sultan Ahmad;Shoney Sebastian;Jabeen Nazeer;A.E.M. Eljialy
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.12
    • /
    • pp.187-193
    • /
    • 2023
  • Large software systems are being produced with noticeably higher quality through component-based software engineering (CBSE), which places a strong emphasis on breaking engineered systems down into logical or functional components with clearly defined interfaces for inter-component communication. CBSE is applicable to commercial as well as open-source software products. Software metrics play a major role in application development by improving the quantitative measurement used in analyzing, scheduling, and iterating over software modules, which leads to better quality and wider use of the developed software. A major concern is software complexity, both in development and in deployment. Software metrics give an accurate picture of the quality, risk, reliability, functionality, and reusability of a component. The proposed metrics are used to assess many aspects of the process, including efficiency, reusability, product interaction, and process complexity. The paper gives a detailed description of the various software quality metrics found in the software engineering literature and explores their advantages and disadvantages. Component-based software engineering is discussed along with metrics for software quality, object-oriented metrics, and improved performance.

Metrics Measuring a Quality based on Object-Oriented Design Characteristics (객체지향 설계의 특성을 고려한 품질 평가 메트릭스)

  • Kim, Yu-Kyung;Park, Jai-Nyun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.373-384
    • /
    • 2000
  • There is a great deal of research on metrics for measuring the quality of object-oriented (OO) software. However, most of it only discusses the concepts or properties of the metrics and does not show a detailed procedure for measuring them. It also defines the measurement indicator as a threshold, which is influenced by project size and application domain. In this paper, we propose metrics based on characteristics of OO design such as size, complexity, coupling, and cohesion, and use the proportion to the average as the measurement indicator. This makes it easy to classify the classes whose result lies above the average and to identify the classes that reduce the quality of the OO design; such classes can then be modified to bring them back toward the average. The proposed metrics are analytically evaluated against Weyuker's nine properties and satisfy seven of them; the remaining two do not apply to OO metrics. We also design a quality assessment system, ASSOD (ASsessment System of Object-oriented Design), to measure the quality of an OO design independently of the platform.
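
The proportion-to-average indicator described above can be illustrated with a short sketch (assumed, not taken from the paper): each class's metric value is divided by the average over all classes, and classes whose ratio exceeds 1 are flagged as candidates that may degrade design quality.

```python
# Minimal sketch of a proportion-to-average indicator over per-class metric values.
# The class names and metric values are invented for illustration.
from statistics import mean

class_metrics = {          # e.g. a coupling metric per class
    "OrderService": 14,
    "Invoice": 6,
    "Customer": 4,
    "ReportBuilder": 20,
}

avg = mean(class_metrics.values())
ratios = {name: value / avg for name, value in class_metrics.items()}
flagged = [name for name, r in ratios.items() if r > 1.0]   # candidates to refactor
print(ratios)
print("above average:", flagged)
```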

A Prediction Model for Software Change using Object-oriented Metrics (객체지향 메트릭을 이용한 변경 발생에 대한 예측 모형)

  • Lee, Mi-Jung;Chae, Heung-Seok;Kim, Tae-Yeon
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.603-615
    • /
    • 2007
  • Software changes for various reasons, and those changes increase maintenance cost. Software metrics, as quantitative values describing attributes of software, have been adopted for predicting maintenance cost and fault-proneness. This paper investigates the relationship between some typical object-oriented metrics and software changes in an industrial setting. We used seven metrics concerned with size, complexity, coupling, inheritance, and polymorphism, and collected data on the number of changes during the development of an information system on the .NET platform. Based on these data, the paper proposes a model for predicting the number of changes from the object-oriented metrics using multiple regression analysis.
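
A minimal sketch of the kind of model described, assuming ordinary least squares and invented metric values, is shown below: the number of changes per class is regressed on a handful of object-oriented metrics.

```python
# Minimal multiple-regression sketch (assumed data, not the paper's data set).
import numpy as np

# one row per class; columns: lines of code, complexity, coupling, inheritance depth
X = np.array([[120,  8,  5, 1],
              [300, 21,  9, 2],
              [ 80,  4,  2, 1],
              [450, 30, 14, 3],
              [200, 12,  6, 2],
              [150,  9,  4, 1]], dtype=float)
changes = np.array([3, 10, 1, 17, 6, 4], dtype=float)   # invented change counts

X1 = np.column_stack([np.ones(len(X)), X])              # add an intercept column
coef, *_ = np.linalg.lstsq(X1, changes, rcond=None)     # ordinary least squares fit
print(coef)                     # intercept and one coefficient per metric
print((X1 @ coef).round(1))     # predicted number of changes per class
```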

A Methodology for Performance Testing of Ethernet Switch (Layer 3 이더넷 스위치 성능 시험 방법론 연구)

  • 김용선
    • Proceedings of the IEEK Conference
    • /
    • 2000.11a
    • /
    • pp.441-444
    • /
    • 2000
  • This paper covers performance testing for a layer 3 Ethernet switch, based on methodologies for measuring essential metrics such as throughput, latency, frame loss rate, and back-to-back frames. First, the evolution from layer 2 to layer 3 switches is introduced, followed by a description of IP packet switching in a layer 3 switch. The test metrics and test methodologies are then illustrated. Finally, we conduct performance testing of a layer 3 switch while transmitting packets of 64, 128, 256, 512, 1024, 1280, and 1518 bytes and analyze the results.
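
As a back-of-the-envelope companion to the frame sizes listed above, the sketch below converts a measured frame rate into a throughput percentage, assuming a 1 Gbit/s port and the standard 8-byte preamble plus 12-byte inter-frame gap per Ethernet frame. The formulas and the measured rate are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed): theoretical line rate per frame size and a throughput
# percentage for a hypothetical measured frame rate.
LINK_BPS = 1_000_000_000          # assume a 1 Gbit/s port
OVERHEAD_BYTES = 8 + 12           # preamble + inter-frame gap per frame

def max_frame_rate(frame_size):
    """Maximum frames per second at line rate for a given frame size in bytes."""
    return LINK_BPS / ((frame_size + OVERHEAD_BYTES) * 8)

for size in (64, 128, 256, 512, 1024, 1280, 1518):
    line_rate = max_frame_rate(size)
    measured = 0.97 * line_rate                     # hypothetical measurement
    print(f"{size:4d} B  line rate {line_rate:10.0f} fps  "
          f"throughput {100 * measured / line_rate:5.1f} %")
```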

Performance and Energy Consumption Analysis of 802.11 with FEC Codes over Wireless Sensor Networks

  • Ahn, Jong-Suk;Yoon, Jong-Hyuk;Lee, Kang-Woo
    • Journal of Communications and Networks
    • /
    • v.9 no.3
    • /
    • pp.265-273
    • /
    • 2007
  • This paper expands an analytical performance model of 802.11 to accurately estimate the throughput and energy demand of an 802.11-based wireless sensor network (WSN) when sensor nodes employ Reed-Solomon (RS) codes, one of the block forward error correction (FEC) techniques. The model evaluates these two metrics as a function of the channel bit error rate (BER) and the RS symbol size. Since the basic recovery unit of RS codes is a symbol rather than a bit, the symbol size affects WSN performance even when each packet carries the same number of FEC check bits. A larger symbol size is more effective at recovering long error bursts, although it increases the computational complexity of encoding and decoding. To apply the extended model to WSNs, the paper collects traffic traces from a WSN consisting of two TIP50CM sensor nodes and measures their energy consumption for processing RS codes. Based on the traces, it approximates WSN channels with Gilbert models. The computational analyses confirm that adopting RS codes in 802.11 significantly improves the throughput and energy efficiency of WSNs at high BER. They also predict that the choice of an appropriate RS symbol size makes a large difference in throughput and power waste over short durations, while the symbol size rarely affects the long-term average of these metrics.
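
To make the role of the RS symbol size concrete, the sketch below computes the probability that an RS(n, k) code word is uncorrectable for a given bit error rate, under an independent-bit-error assumption. This is only the memoryless special case for illustration; the paper itself drives its model with a bursty Gilbert channel fitted from traces.

```python
# Minimal sketch (assumed): RS decoding failure probability with independent bit errors.
from math import comb

def symbol_error_prob(ber, m):
    """An m-bit symbol is wrong if any of its bits is wrong (independent bit errors)."""
    return 1 - (1 - ber) ** m

def rs_failure_prob(n, k, ber, m):
    """Probability that more than t = (n - k) // 2 symbols of a code word are corrupted."""
    t = (n - k) // 2
    ps = symbol_error_prob(ber, m)
    correctable = sum(comb(n, i) * ps**i * (1 - ps)**(n - i) for i in range(t + 1))
    return 1 - correctable

print(rs_failure_prob(n=15, k=7, ber=1e-3, m=4))     # RS(15, 7) with 4-bit symbols
print(rs_failure_prob(n=255, k=223, ber=1e-3, m=8))  # RS(255, 223) with 8-bit symbols
```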

Mapping Particle Size Distributions into Predictions of Properties for Powder Metal Compacts

  • German, Randall M.
    • Proceedings of the Korean Powder Metallurgy Institute Conference
    • /
    • 2006.09b
    • /
    • pp.704-705
    • /
    • 2006
  • Discrete element analysis is used to map various log-normal particle size distributions into measures of the in-sphere pore size distribution. The combinations evaluated range from monosized spheres to bimodal mixtures and various log-normal distributions. The log-normal case proves most useful in providing a mapping from one distribution to the other (knowing the particle size distribution, we want to predict the pore size distribution). Such metrics predict where large pores, which must be avoided to ensure high sintered properties, are likely to appear.
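
As a small illustration of the input side of this mapping, the sketch below samples a log-normal particle size distribution from a median diameter and geometric standard deviation and reports D10/D50/D90 percentiles. The values are invented, and the discrete element analysis itself is not reproduced.

```python
# Minimal sketch (assumed values): generating a log-normal particle size distribution.
import numpy as np

median_um = 10.0          # median particle diameter in micrometres
geo_std = 1.6             # geometric standard deviation
rng = np.random.default_rng(0)

# lognormal takes the mean and sigma of the underlying normal, i.e. ln(median), ln(GSD)
diameters = rng.lognormal(mean=np.log(median_um), sigma=np.log(geo_std), size=10_000)
d10, d50, d90 = np.percentile(diameters, [10, 50, 90])
print(f"D10={d10:.1f} um  D50={d50:.1f} um  D90={d90:.1f} um")
```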

Counting What Will Count: How to Empirically Select Leading Performance Indicator

  • Pauwels, Koen;Joshi, Amit
    • Asia-Pacific Journal of Business
    • /
    • v.2 no.2
    • /
    • pp.1-35
    • /
    • 2011
  • Facing information overload in today's complex environments, managers look to a concise set of marketing metrics to provide direction for marketing decision making. While there have been several papers dealing with the theoretical aspects of dashboard creation, no research creates and tests a dashboard using scientific techniques. This study develops and demonstrates an empirical approach to dashboard metric selection. In a fast moving consumer goods category, this research selects leading indicators for national-brand and store-brand sales and revenue premium performance from 99 brand-specific and relative-to-competition variables including price, brand equity, usage occasions, and multiple measures of awareness, trial/usage, purchase intent, and liking/satisfaction. Plotting impact size and wear-in time reveals that different kinds of variables predict sales at distinct lead times, which implies that managerial action may be taken to turn the metrics around before performance itself declines.
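
The notion of a leading indicator with a distinct lead time can be illustrated with the rough sketch below, which correlates each candidate metric with sales at several lead times and keeps the strongest one. This is only a toy screening on randomly generated series, not the econometric approach used in the paper.

```python
# Minimal sketch (assumed): screening candidate leading indicators by lagged correlation.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
sales = rng.normal(100, 10, weeks)
candidates = {
    "awareness": np.roll(sales, -4) + rng.normal(0, 5, weeks),  # leads sales by ~4 weeks
    "liking": rng.normal(50, 5, weeks),                         # unrelated noise
}

def best_lead(metric, sales, max_lead=8):
    """Return the lead time (in weeks) with the strongest metric-to-sales correlation."""
    corrs = {lead: np.corrcoef(metric[:weeks - lead], sales[lead:])[0, 1]
             for lead in range(1, max_lead + 1)}
    return max(corrs, key=lambda k: abs(corrs[k])), corrs

for name, series in candidates.items():
    lead, corrs = best_lead(series, sales)
    print(name, "best lead:", lead, "corr:", round(corrs[lead], 2))
```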
