• Title/Abstract/Keywords: size metrics

104 search results

식스시그마 프로젝트 사례에서 혁신효과 분석을 위한 품질척도의 특성 및 적용 (The Characteristics and Implementations of Quality Metrics for Analyzing Innovation Effects in Six Sigma Projects)

  • 최성운
    • 대한안전경영과학회지 / Vol. 16, No. 1 / pp.169-176 / 2014
  • This research discusses the characteristics and implementation strategies of two types of quality metrics for analyzing innovation effects in six sigma projects: the fixed specification type and the moving specification type. $Z_{st}$ and $P_{pk}$ are quality metrics of the fixed specification type that are influenced by a predetermined specification. In contrast, quality metrics of the moving specification type, such as the Strictly Standardized Mean Difference (SSMD), Z-score, F-statistic, and t-statistic, are independent of any predetermined specification. The $Z_{st}$ sigma level yields defective rates in Parts Per Million (PPM) and Defects Per Million Opportunities (DPMO). However, defective rates are not comparable across industrial sectors because of their differing technological characteristics. As a relative method for comparing defective rates between industrial sectors, $P_{pk}$, the ratio of the specification width to the natural tolerance, is used. The drawback of the $P_{pk}$ metric is that it is highly dependent on the specification. The F-statistic and t-statistic identify innovation effects by comparing accuracy and precision before and after improvement. These statistics are not affected by the specification, but they are affected by the assumed distribution model and the sample size. Hence, statistical significance determined by these two statistics does not necessarily lead to the same conclusion as practical significance. In conclusion, SSMD and the Z-score are the quality metrics least influenced by a fixed specification, a theoretical distribution model, or an arbitrary sample size, and they also identify innovation effects in before-and-after accuracy and precision. It is beneficial to use the SSMD and Z-score methods alongside the popular $Z_{st}$ sigma level and $P_{pk}$ metrics commonly employed in six sigma projects. Case studies from the national six sigma contest of 2011 and 2012 are analyzed to provide guidelines on the use of these quality metrics for practitioners.
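As a rough illustration of the fixed- versus moving-specification distinction above, the following minimal sketch computes $P_{pk}$ (which requires specification limits) and SSMD (which requires only the before/after samples). The data, specification limits, and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def ppk(data, lsl, usl):
    """Process performance index: distance from the mean to the nearer
    specification limit, in units of three overall standard deviations."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def ssmd(before, after):
    """Strictly standardized mean difference: difference of the two means
    scaled by the combined spread, independent of any specification."""
    return (np.mean(after) - np.mean(before)) / np.sqrt(
        np.var(after, ddof=1) + np.var(before, ddof=1))

# Hypothetical before/after measurements from an improvement project.
rng = np.random.default_rng(0)
before = rng.normal(10.0, 1.2, 50)
after = rng.normal(10.0, 0.6, 50)
print(f"Ppk before: {ppk(before, 7, 13):.2f}, after: {ppk(after, 7, 13):.2f}")
print(f"SSMD (before vs. after): {ssmd(before, after):.2f}")
```

Changing the limits changes $P_{pk}$ but leaves SSMD untouched, which is the property the abstract highlights for moving-specification metrics.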

Use Case에 의한 소프트웨어 규모 예측 방법에 대한 실증적 연구 (An Empirical Study of Software Size Estimation Techniques by Use Case)

  • 서예영;이남용
    • 한국전자거래학회지 / Vol. 6, No. 2 / pp.143-157 / 2001
  • There has long been a need to predict development effort and cost during the early stages of the software process, and hundreds of metrics have been proposed for computer software, but not all of them provide practical support to the software engineer. Some demand measurement that is too complex, others are so esoteric that few real-world professionals have any hope of understanding them, and others violate the basic intuitive notions of what high-quality software really is. It is worthwhile to tailor metrics to specific products and processes after understanding their strengths and weaknesses. This paper describes two size estimation techniques, the Karner technique and the Marchesi technique, and compares and analyzes them against proposed evaluation criteria. Both techniques estimate software size from the use cases that are mainly described during the object-oriented analysis phase. We also present an empirical comparison in which both are applied to the Internet Medicine Prescription System, and we propose some guidance for experiments based on our analysis. We believe that properly adjusted software metrics can make project management more effective.

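The Karner technique compared above is usually described as Use Case Points: weighted use-case and actor counts adjusted by technical and environmental factors. The minimal sketch below follows the weights and adjustment formulas commonly cited for that technique; the counts and factor ratings are hypothetical, and the paper's exact procedure may differ.

```python
# Weights commonly cited for Karner-style Use Case Points.
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def use_case_points(use_cases, actors, tfactor, efactor):
    """Unadjusted points from weighted counts, then technical and
    environmental adjustment factors."""
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    tcf = 0.6 + 0.01 * tfactor      # technical complexity factor
    ecf = 1.4 - 0.03 * efactor      # environmental complexity factor
    return (uucw + uaw) * tcf * ecf

# Hypothetical use-case model of a small system.
ucp = use_case_points({"simple": 4, "average": 6, "complex": 2},
                      {"simple": 2, "average": 1, "complex": 1},
                      tfactor=30, efactor=17)
print(f"UCP = {ucp:.1f}, rough effort = {20 * ucp:.0f} person-hours")
```

The 20 person-hours per UCP used in the last line is a commonly cited default productivity ratio, not a value from the paper.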

Evolutionary Computing Driven Extreme Learning Machine for Objected Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security / Vol. 22, No. 2 / pp.232-240 / 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has made reliable and flawless software operation a necessity. Aging prediction for software in operation is becoming a basic and unavoidable requirement for ensuring system availability, reliability, and operations. In this paper, an improved evolutionary computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employed a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In our suggested paradigm, OOP software metrics are extracted after pre-processing, which includes outlier detection and normalization. This technique improved the proposed system's ability to deal with instances with unbalanced biases and metrics. Further, different dimensionality reduction and feature selection algorithms such as principal component analysis (PCA), linear discriminant analysis (LDA), and t-test analysis have been applied. We suggest a single-hidden-layer multi-feed-forward neural network (SL-MFNN) based ELM, where an adaptive genetic algorithm (AGA) is applied to estimate the weight and bias parameters for ELM learning. Unlike traditional neural network models, the implementation of the GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that the combination of outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
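For readers unfamiliar with the learning scheme named above, here is a minimal sketch of a plain single-hidden-layer extreme learning machine: random, fixed input weights and a pseudo-inverse solve for the output weights. The adaptive genetic algorithm, LDA feature selection, and the aging-metric dataset from the paper are not reproduced; the toy data below are hypothetical.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine (basic form)."""
    def __init__(self, n_hidden, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # fixed random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # fixed random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden-layer outputs
        self.beta = np.linalg.pinv(H) @ y                           # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Hypothetical toy data: 100 modules, 5 metric features, binary "aging" label.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
model = ELM(n_hidden=20).fit(X, y)
acc = np.mean((model.predict(X) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

A genetic algorithm, as described in the abstract, would replace the purely random draw of `W` and `b` with an evolutionary search over those parameters.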

Software Metric for CBSE Model

  • Iyyappan. M;Sultan Ahmad;Shoney Sebastian;Jabeen Nazeer;A.E.M. Eljialy
    • International Journal of Computer Science & Network Security / Vol. 23, No. 12 / pp.187-193 / 2023
  • Large software systems are being produced with a noticeably higher level of quality with component-based software engineering (CBSE), which places a strong emphasis on breaking engineered systems down into logical or functional components with clearly defined interfaces for inter-component communication. Component-based software engineering is applicable to commercial and open-source software products. Software metrics play a major role in application development, improving the quantitative measurement used in analyzing, scheduling, and revising software modules. This methodology yields a better process, with higher quality and greater utilization in software development. A major concern is software complexity, which affects both the development and the deployment of software. Software metrics provide an accurate picture of the quality, risk, reliability, functionality, and reusability of a component. The proposed metrics are used to assess many aspects of the process, including efficiency, reusability, product interaction, and process complexity. The various software quality metrics found in the software engineering literature are described in detail, and this study explores their advantages and disadvantages. The paper discusses component-based software engineering together with metrics for software quality, object-oriented metrics, and improved performance.

객체지향 설계의 특성을 고려한 품질 평가 메트릭스 (Metrics Measuring a Quality based on Object-Oriented Design Characteristics)

  • 김유경;박재년
    • 한국정보처리학회논문지 / Vol. 7, No. 2 / pp.373-384 / 2000
Research on object-oriented metrics to date has addressed only some aspects of quality, and most metrics have been proposed based solely on relationship information between classes. As a result, they do not sufficiently reflect the characteristics of object-oriented design. In addition, existing object-oriented metrics provide threshold values for evaluating the result of each metric calculation, and most are not formally defined. Their problems are that the calculation procedures are too complex to apply easily and that the thresholds may vary with the nature of the project or the characteristics of the software. This paper therefore proposes a set of metrics for evaluating design quality that considers size, complexity, coupling, and cohesion, the key characteristics of object-oriented design. The proposed metric set uses the proportion relative to the average value, so that classes and design elements exceeding the average can be identified easily. By locating the classes that degrade design quality and allowing them to be redesigned to a level close to the average, design defects that would otherwise be encountered during implementation can be discovered early in development. The metric set defined in this paper is evaluated analytically against measurement principles, and the result shows that it satisfies most of the properties required of metrics. We also design ASSOD (ASsessment System of Object oriented Design), an evaluation tool that supports a distributed intranet environment and is implemented with a web browser and Java applets so that it can be used regardless of platform.

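A minimal sketch of the proportion-to-average idea described in the abstract: each class is compared against the average of every design metric and flagged when it exceeds that average. The class names and metric values below are hypothetical, and the actual metric definitions in the paper are richer.

```python
# Hypothetical per-class design metrics (size, complexity, coupling).
class_metrics = {
    "OrderManager":   {"size": 420, "complexity": 38, "coupling": 9},
    "InvoicePrinter": {"size": 120, "complexity": 10, "coupling": 3},
    "CustomerRepo":   {"size": 210, "complexity": 15, "coupling": 5},
}

# Average of each metric across all classes.
averages = {
    m: sum(v[m] for v in class_metrics.values()) / len(class_metrics)
    for m in next(iter(class_metrics.values()))
}

# Flag classes whose value exceeds the average for any metric.
for name, metrics in class_metrics.items():
    flagged = [m for m, v in metrics.items() if v > averages[m]]
    if flagged:
        print(f"{name}: above average on {', '.join(flagged)}")
```

Using the average of the project itself, rather than a fixed threshold, is what lets the approach adapt to different projects and software characteristics.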

객체지향 메트릭을 이용한 변경 발생에 대한 예측 모형 (A Prediction Model for Software Change using Object-oriented Metrics)

  • 이미정;채흥석;김태연
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 34, No. 7 / pp.603-615 / 2007
Software can change for a variety of reasons, and such changes drive up maintenance costs. Software metrics are quantitative values describing the characteristics of classes and are used to predict maintenance cost, the likelihood of defects, and so on. This paper examines the relationship between representative object-oriented metrics and the number of changes that occur during real industrial software development. Seven metrics covering size, complexity, coupling, inheritance, and polymorphism were used, and change-count data were collected during the development of an information system built on the .NET platform. Using multiple regression analysis, we present a model that predicts the number of changes from the object-oriented metrics used.
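A minimal sketch of the modeling step described above: change counts regressed on several object-oriented metrics with ordinary least squares. The seven-metric matrix and change counts below are synthetic stand-ins, not the paper's industrial data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes = 60
# Hypothetical values for 7 OO metrics (size, complexity, coupling, ...).
X = rng.poisson(lam=[15, 5, 4, 2, 3, 1, 6], size=(n_classes, 7))
true_coef = np.array([0.2, 0.5, 0.3, 0.1, 0.4, 0.0, 0.15])
changes = X @ true_coef + rng.normal(0, 1.0, n_classes)   # observed change counts

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n_classes), X])
coef, *_ = np.linalg.lstsq(A, changes, rcond=None)
predicted = A @ coef
r2 = 1 - np.sum((changes - predicted) ** 2) / np.sum((changes - changes.mean()) ** 2)
print(f"R^2 on the fitted data: {r2:.2f}")
```

In practice the fitted coefficients and the model's fit would be validated on held-out classes before being used to predict change-proneness.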

Layer 3 이더넷 스위치 성능 시험 방법론 연구 (A Methodology for Performance Testing of Ethernet Switch)

  • 김용선
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 추계종합학술대회 논문집(1) / pp.441-444 / 2000
  • This paper covers performance testing of a layer 3 Ethernet switch based on various methodologies for measuring essential metrics such as throughput, latency, frame loss rate, and back-to-back frames. First, the evolution from layer 2 to layer 3 switches is introduced, followed by a description of IP packet switching in a layer 3 switch. The test metrics and test methodologies are then illustrated. Finally, we conduct performance tests on a layer 3 switch transmitting packets of 64, 128, 256, 512, 1024, 1280, and 1518 bytes and analyze the results.

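Two of the metrics listed above are easy to illustrate numerically in the style of RFC 2544 benchmarking: the theoretical maximum Ethernet frame rate per frame size, and the frame loss rate from a single trial. The line rate and trial counts below are hypothetical.

```python
FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]   # bytes, as in the paper
OVERHEAD = 20                                          # preamble (8 B) + inter-frame gap (12 B)

def max_frame_rate(frame_size, line_rate_bps=1_000_000_000):
    """Theoretical maximum frames per second on an Ethernet link."""
    return line_rate_bps / ((frame_size + OVERHEAD) * 8)

def frame_loss_rate(sent, received):
    """Frame loss rate in percent: frames not forwarded over frames offered."""
    return 100.0 * (sent - received) / sent

for size in FRAME_SIZES:
    print(f"{size:5d} B: {max_frame_rate(size):12,.0f} fps max")
print(f"loss: {frame_loss_rate(sent=1_000_000, received=998_700):.2f} %")
```

Throughput testing then searches for the highest offered rate at which the loss rate stays at zero, per frame size.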

Performance and Energy Consumption Analysis of 802.11 with FEC Codes over Wireless Sensor Networks

  • Ahn, Jong-Suk;Yoon, Jong-Hyuk;Lee, Kang-Woo
    • Journal of Communications and Networks / Vol. 9, No. 3 / pp.265-273 / 2007
  • This paper expands an analytical performance model of 802.11 to accurately estimate the throughput and energy demand of an 802.11-based wireless sensor network (WSN) when sensor nodes employ Reed-Solomon (RS) codes, one of the block forward error correction (FEC) techniques. The model evaluates these two metrics as a function of the channel bit error rate (BER) and the RS symbol size. Since the basic recovery unit of RS codes is a symbol rather than a bit, the symbol size affects WSN performance even if each packet carries the same amount of FEC check bits. A larger symbol size is more effective at recovering long error bursts, although it increases the computational complexity of encoding and decoding RS codes. To apply the extended model to WSNs, this paper collects traffic traces from a WSN consisting of two TIP50CM sensor nodes and measures its energy consumption for processing RS codes. Based on the traces, it approximates WSN channels with Gilbert models. The computational analyses confirm that the adoption of RS codes in 802.11 significantly improves the throughput and energy efficiency of WSNs with a high BER. They also predict that the choice of an appropriate RS symbol size makes a large difference in throughput and power waste over short durations, while the symbol size rarely affects the long-term average of these metrics.
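The channel approximation mentioned above can be sketched directly: a two-state Gilbert model alternates between a good state (error-free here) and a bad state in which bits are corrupted with some probability, producing the bursty errors that make the RS symbol size matter. The transition probabilities and symbol size below are hypothetical, not values fitted from the TIP50CM traces.

```python
import numpy as np

def gilbert_errors(n_bits, p_gb=0.01, p_bg=0.3, h=0.5, rng=None):
    """Generate a bit-error mask from a two-state Gilbert channel.
    p_gb: P(good -> bad), p_bg: P(bad -> good), h: P(bit error | bad)."""
    rng = rng or np.random.default_rng(0)
    errors = np.zeros(n_bits, dtype=bool)
    bad = False
    for i in range(n_bits):
        if bad:
            errors[i] = rng.random() < h
            bad = rng.random() >= p_bg       # stay bad unless we recover
        else:
            bad = rng.random() < p_gb        # good state is error-free
    return errors

errors = gilbert_errors(80_000)
symbol_bits = 8                               # hypothetical RS symbol size in bits
symbols = errors.reshape(-1, symbol_bits)
print(f"bit error rate:    {errors.mean():.4f}")
print(f"symbol error rate: {symbols.any(axis=1).mean():.4f}")
```

Because errors arrive in bursts, many corrupted bits fall into the same symbol, so the symbol error rate grows more slowly than the bit error rate would suggest; that is the effect the extended model captures analytically.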

Mapping Particle Size Distributions into Predictions of Properties for Powder Metal Compacts

  • German, Randall M.
    • 한국분말야금학회:학술대회논문집 / 한국분말야금학회 2006년도 Extended Abstracts of 2006 POWDER METALLURGY World Congress Part2 / pp.704-705 / 2006
  • Discrete element analysis is used to map various log-normal particle size distributions into measures of the in-sphere pore size distribution. The combinations evaluated range from monosized spheres to bimodal mixtures and various log-normal distributions. The latter proves most useful in providing a mapping of one distribution into the other (knowing the particle size distribution, we want to predict the pore size distribution). Such metrics predict where large pores are anticipated, which must be avoided to ensure high sintered properties.

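As a small numeric companion to the log-normal particle size distributions discussed above, the sketch below draws a log-normal sample of particle diameters and reports its D10/D50/D90 percentiles; the discrete element mapping from particle sizes to pore sizes is not reproduced. The median size and geometric standard deviation are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
d50 = 10.0          # hypothetical median particle diameter, micrometres
sigma_g = 1.8       # hypothetical geometric standard deviation
diameters = rng.lognormal(mean=np.log(d50), sigma=np.log(sigma_g), size=100_000)

d10, d90 = np.percentile(diameters, [10, 90])
print(f"D10 = {d10:.1f} um, D50 = {np.median(diameters):.1f} um, D90 = {d90:.1f} um")
```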

Counting What Will Count: How to Empirically Select Leading Performance Indicator

  • Pauwels, Koen;Joshi, Amit
    • 아태비즈니스연구 / Vol. 2, No. 2 / pp.1-35 / 2011
  • Facing information overload in today's complex environments, managers look to a concise set of marketing metrics to provide direction for marketing decision making. While there have been several papers dealing with the theoretical aspects of dashboard creation, no research creates and tests a dashboard using scientific techniques. This study develops and demonstrates an empirical approach to dashboard metric selection. In a fast moving consumer goods category, this research selects leading indicators for national-brand and store-brand sales and revenue premium performance from 99 brand-specific and relative-to-competition variables including price, brand equity, usage occasions, and multiple measures of awareness, trial/usage, purchase intent, and liking/satisfaction. Plotting impact size and wear-in time reveals that different kinds of variables predict sales at distinct lead times, which implies that managerial action may be taken to turn the metrics around before performance itself declines.
