• Title/Summary/Keyword: Productivity Metrics


A Study on Reusability Metric of Framework for Embedded Software (임베디드 소프트웨어를 위한 프레임워크의 재사용성 메트릭에 관한 연구)

  • Cho, Eun-Sook;Kim, Chul-Jin;Lee, Sook-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.11
    • /
    • pp.5252-5259
    • /
    • 2011
  • Both optimization and reuse are considered core technologies for increasing the value of embedded software products, and frameworks are a representative technology for both. Developing software on top of a framework can improve reusability as well as development productivity. However, framework-based development is still rare in embedded software, and because framework development itself is at an early stage in that domain, it remains an open question whether developing a framework actually yields reusability benefits. In this paper, we propose metrics for measuring the reusability of a framework designed to improve the reusability of embedded software. Applying the proposed metrics to real design cases, we obtained more effective results with framework-based design than with the existing design.

Formalization of Productivity Metrics for Equipment in Multi-sectioned Road Construction Projects (다(多)공구 도로 공사 현장 장비들의 운영 실태 파악을 위한 생산성 지표 정립에 관한 연구)

  • Kim, Hong-Yeul;Koo, Bon-Sang
    • Korean Journal of Construction Engineering and Management
    • /
    • v.13 no.4
    • /
    • pp.100-109
    • /
    • 2012
  • Large road construction projects are typically partitioned into sections that are then contracted individually to contractors. Each section requires similar heavy equipment, including excavators, dump trucks, and pavers, which constitutes the highest cost. Normally the equipment is not shared between sections, as each contractor wishes to have its equipment readily available; such practices, however, result in very low equipment utilization. The goal of this research is to develop a programmatic resource-sharing system in which contractors can share equipment according to the changing needs of a multi-sectioned road project. This paper introduces the results of a survey investigating how contractors currently manage the supply and demand of equipment and which equipment is practical to share across a project. More importantly, the paper describes a set of metrics (DPR, nDPR, SDI) for quantifying the supply/demand variance occurring in each section. The metrics were applied to an actual road construction project, and the results show that each section suffers from an imbalance between its monthly planned and actual equipment utilization. The results also indicate that sharing equipment can lead to potentially large savings, as equipment requirements can be met within a project rather than through short-term leasing from outside vendors.
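The abstract names the metrics DPR, nDPR, and SDI but does not define them. As a purely illustrative sketch (the functions `dpr`, `ndpr`, and `sdi` below, and their formulas, are assumptions, not the paper's definitions), one way to quantify monthly planned-versus-actual imbalance in a section is:

```python
from statistics import pstdev

def dpr(actual, planned):
    """Hypothetical per-month 'deviation from plan ratio': relative
    gap between actual and planned equipment usage in a section."""
    return (actual - planned) / planned

def ndpr(actuals, planneds):
    """Hypothetical normalized DPR: monthly DPRs rescaled to [-1, 1]
    by the largest absolute deviation, for comparison across sections."""
    ratios = [dpr(a, p) for a, p in zip(actuals, planneds)]
    peak = max(abs(r) for r in ratios) or 1.0
    return [r / peak for r in ratios]

def sdi(actuals, planneds):
    """Hypothetical supply-demand imbalance index: spread (population
    standard deviation) of the monthly DPRs; 0 means exactly on plan."""
    return pstdev(dpr(a, p) for a, p in zip(actuals, planneds))

planned = [10, 10, 12, 8]   # planned dump-truck usage per month
actual  = [14, 7, 12, 5]    # observed usage
print(round(sdi(actual, planned), 3))  # → 0.305
```

Under this reading, SDI is zero when every month runs exactly to plan and grows with the spread of monthly deviations, matching the abstract's finding of a planned/actual imbalance per section.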

Machine Learning Algorithm for Estimating Ink Usage (머신러닝을 통한 잉크 필요량 예측 알고리즘)

  • Se Wook Kwon;Young Joo Hyun;Hyun Chul Tae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.23-31
    • /
    • 2023
  • Research and interest in sustainable printing are increasing in the packaging printing industry. Currently, predicting the amount of ink required for each job is based on the experience and intuition of field workers. If more ink is produced than necessary, the remainder cannot be reused and is discarded, adversely affecting the company's productivity and the environment. Machine learning models can be used to address this problem, and this study compares machine learning models for predicting ink usage. A simple linear model such as multiple regression analysis cannot reflect the nonlinear relationships among the variables involved in packaging printing, so its prediction accuracy is limited. This study therefore establishes several prediction models based on CART (Classification and Regression Trees), including a decision tree, random forest, gradient boosting machine, and XGBoost. Model accuracy is determined by K-fold cross-validation, and error metrics such as root mean squared error, mean absolute error, and R-squared are employed to evaluate the estimation models. Among these models, XGBoost has the highest prediction accuracy and can reduce wasted ink by 2,134 g per job. This study thus demonstrates machine learning's potential to help advance productivity and protect the environment.
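The evaluation pipeline the abstract describes, K-fold cross-validation scored with RMSE, MAE, and R-squared, can be sketched in plain Python. The quadratic coverage-to-usage relation below is an invented toy dataset, not the paper's data:

```python
from math import sqrt

def rmse(y, yhat):
    return sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def kfold_rmse(xs, ys, fit, k=5):
    """Average held-out RMSE over k folds; `fit(train_x, train_y)`
    must return a predictor function."""
    scores = []
    for fold in range(k):
        held = set(range(fold, len(xs), k))
        tr_x = [x for i, x in enumerate(xs) if i not in held]
        tr_y = [y for i, y in enumerate(ys) if i not in held]
        model = fit(tr_x, tr_y)
        scores.append(rmse([ys[i] for i in sorted(held)],
                           [model(xs[i]) for i in sorted(held)]))
    return sum(scores) / len(scores)

# Toy nonlinear relation (hypothetical): ink usage grows with the
# square of print coverage, which a straight line cannot capture.
coverage = [i / 10 for i in range(50)]
usage = [0.5 * x * x for x in coverage]
mean_fit = lambda tx, ty: (lambda x: sum(ty) / len(ty))  # naive baseline
baseline_rmse = kfold_rmse(coverage, usage, mean_fit)
```

In the study itself, tree-based learners (decision tree, random forest, GBM, XGBoost) would take the place of `mean_fit`, and the model with the lowest cross-validated error wins.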

Design Driven Testing on Adaptive Use Case Approach for Real Time System (실시간 시스템을 위한 어댑티브 유스 케이스 방법상의 디자인 지향 테스트)

  • Kim Young-Chul;Joo Bok-Gyu
    • Journal of Internet Computing and Services
    • /
    • v.4 no.6
    • /
    • pp.1-11
    • /
    • 2003
  • This paper introduces design-driven testing for real-time systems based on a use case approach. We focus on a part of an extended use case approach for real-time software development that partitions the design schema into a layered architecture of functional units called "design components." We developed a use case action matrix containing a collection of related scenarios, each describing a specific variant of an executable sequence of use case action units, which reflects the behavioral properties of the real-time system design. In this paper, we apply design-driven testing to a real-time system using test plan metrics that produce an ordering of this scenario set, in order to enhance productivity and to both promote and capitalize on the reusability of test cases from existing scenarios.
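The abstract does not give the test plan metric itself. As a minimal sketch, assuming the metric rewards overlap with already-exercised action units (the scenario names, action units, and greedy rule below are illustrative only, not the paper's method):

```python
def order_scenarios(scenarios):
    """Greedy ordering: repeatedly pick the scenario sharing the most
    action units with those already exercised (breaking ties toward
    smaller scenarios), so test cases built for earlier scenarios are
    reused by later ones. A stand-in for the paper's unspecified
    test plan metric."""
    remaining = dict(scenarios)            # scenario name -> action units
    covered, order = set(), []
    while remaining:
        name = max(remaining,
                   key=lambda n: (len(remaining[n] & covered),
                                  -len(remaining[n])))
        covered |= remaining.pop(name)
        order.append(name)
    return order

# Hypothetical action matrix for a small real-time controller
actions = {
    "power_on": {"init", "self_test"},
    "alarm":    {"init", "self_test", "sense", "notify"},
    "shutdown": {"notify", "halt"},
}
print(order_scenarios(actions))  # → ['power_on', 'alarm', 'shutdown']
```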


A System Level Network-on-chip Model with MLDesigner

  • Agarwal, Ankur;Shankar, Rabi;Pandya, A.S.;Lho, Young-Uhg
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.2
    • /
    • pp.122-128
    • /
    • 2008
  • Multiprocessor architectures and platforms, such as the multiprocessor system-on-chip (MPSoC) recently introduced to extend the applicability of Moore's law, depend upon concurrency and synchronization in both software and hardware to enhance design productivity and system performance. With the billion-transistor era rapidly approaching, some of the main problems in deep sub-micron technologies, characterized by gate lengths in the range of 60-90 nm, will arise from non-scalable wire delays, signal integrity errors, and non-synchronized communication. These problems may be addressed by the use of a Network on Chip (NOC) architecture for future Systems-on-Chip (SoC). We have modeled a concurrent architecture for a customizable and scalable NOC in a system-level modeling environment using MLDesigner (from MLD Inc.). Varying network loads under various traffic scenarios were applied to obtain realistic performance metrics. We provide simulation results for latency as a function of buffer size, and we abstracted area results for the NOC components from an FPGA implementation. The modeled NOC architecture supports three levels of quality of service (QoS).
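As a toy stand-in for the MLDesigner system-level model (not the paper's NOC model), a single buffered link illustrates why latency is reported as a function of buffer size: under offered load above service capacity, a deeper buffer trades dropped flits for queueing delay. All parameters below are invented for illustration:

```python
import random
from collections import deque

def link_latency(buffer_size, arrival_p=0.6, service_p=0.5,
                 cycles=20000, seed=7):
    """Toy single-channel link: each cycle one flit may arrive (dropped
    if the buffer is full) and, if the buffer is non-empty, one may be
    forwarded. Returns (mean latency in cycles, flits dropped)."""
    rng = random.Random(seed)
    # Pre-draw the event stream so different buffer sizes see identical traffic.
    events = [(rng.random() < arrival_p, rng.random() < service_p)
              for _ in range(cycles)]
    buf, latencies, dropped = deque(), [], 0
    for t, (arrive, serve) in enumerate(events):
        if arrive:
            if len(buf) < buffer_size:
                buf.append(t)
            else:
                dropped += 1
        if serve and buf:
            latencies.append(t - buf.popleft())
    return sum(latencies) / len(latencies), dropped
```

With offered load (0.6 flits/cycle) above service capacity (0.5), enlarging the buffer cuts drops but raises mean latency, which is why latency versus buffer size is the natural curve to report.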

Which Code Changes Should You Review First?: A Code Review Tool to Summarize and Prioritize Important Software Changes

  • Song, Myoungkyu;Kwon, Young-Woo
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.255-262
    • /
    • 2017
  • In recent software development, repetitive code fragments (i.e., clones) are common due to copy-and-paste programming, framework-based development, and the reuse of design patterns. Such similar code fragments are likely to introduce more bugs but are easily overlooked by a code reviewer or programmer. In this paper, we present a code review tool that helps code reviewers identify important code changes written by other programmers and recommends which changes should be reviewed first. Specifically, to identify important code changes, our approach detects and investigates code clones across revisions. Then, to help the code reviewer, it ranks the identified changes according to several software quality metrics and statistics on those clones and changes. Furthermore, our approach allows code reviewers to express their preferences at review time. As a result, a code reviewer with little knowledge of the code base can reduce his or her effort by reviewing first the most significant changes, those that require immediate attention. To evaluate our approach, we integrated it into a modern IDE (Eclipse) as a plugin and analyzed two third-party open source projects. The experimental results indicate that our approach can improve code reviewers' productivity.
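A minimal sketch of preference-weighted ranking, assuming the tool combines per-change metrics as a weighted sum; the metric names (`clone_count`, `churn`, `complexity`) and the sample data are illustrative, not the paper's actual metric set:

```python
def rank_changes(changes, weights):
    """Order changed fragments by a weighted sum of quality metrics;
    `weights` encodes the reviewer's stated preferences."""
    def score(change):
        return sum(weights.get(name, 0.0) * value
                   for name, value in change["metrics"].items())
    return sorted(changes, key=score, reverse=True)

changes = [
    {"file": "Parser.java", "metrics": {"clone_count": 5, "churn": 120, "complexity": 14}},
    {"file": "Util.java",   "metrics": {"clone_count": 1, "churn": 10,  "complexity": 3}},
    {"file": "Render.java", "metrics": {"clone_count": 3, "churn": 300, "complexity": 9}},
]
prefs = {"clone_count": 2.0, "churn": 0.01, "complexity": 0.5}  # reviewer's weights
print([c["file"] for c in rank_changes(changes, prefs)])
# → ['Parser.java', 'Render.java', 'Util.java']
```

Raising or lowering a weight in `prefs` is one simple way a reviewer's preferences could reorder the queue at review time.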

On the Quantitative Metrics of Software Reusability (소프트웨어 재사용가능성의 정략적 측도)

  • Jang, Hwa-Sik;Park, Man-Gon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.2
    • /
    • pp.176-184
    • /
    • 1995
  • Software reuse is a promising way to improve software productivity and quality, but it is not applied well in practice because there is no quantitative metric for software quality. In this paper, we propose a quantification of software reuse such that, after extracting a module from existing software, we can measure the possibility of reuse by applying a reuse assessment metric to the module. To measure whether a module can be reused, we divide the quality factors into generality, simplicity, maintainability, and modularity, evaluate the module against these factors, and finally decide on the possibility of reuse. The advantage of the proposed metric is that, by measuring the candidate module quantitatively, an inappropriate reuse of a module can be detected precisely at the outset.
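The abstract does not state how the four factors are combined. A minimal sketch, assuming each factor is rated in [0, 1] and combined as a weighted sum against a reuse cut-off (the weights and threshold below are assumptions, not the paper's values):

```python
def reuse_score(generality, simplicity, maintainability, modularity,
                weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical composite score: each quality factor rated in
    [0, 1], combined as a weighted sum (the paper's combination rule
    is not given in the abstract)."""
    factors = (generality, simplicity, maintainability, modularity)
    assert all(0.0 <= f <= 1.0 for f in factors)
    return sum(w * f for w, f in zip(weights, factors))

def reusable(ratings, threshold=0.6):
    """Decide, at module-extraction time, whether the module clears an
    assumed reuse cut-off."""
    return reuse_score(**ratings) >= threshold

module = {"generality": 0.9, "simplicity": 0.8,
          "maintainability": 0.7, "modularity": 0.8}
print(round(reuse_score(**module), 3), reusable(module))  # → 0.8 True
```

A low composite score at extraction time is exactly the early warning against inappropriate reuse that the abstract describes.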


A Quality Model of National R&D Projects based on ISO/IEC Standards (ISO/IEC 국제표준에 기반한 국가연구개발사업 품질측정모델에 관한 연구)

  • Song, Byeong-Sun;Lee, Jae-Sung;Rhew, Sung-Yul;Lee, Nam-Yong
    • Journal of Information Technology Services
    • /
    • v.7 no.3
    • /
    • pp.31-45
    • /
    • 2008
  • The number of national research and development (R&D) projects has continually increased, and over the last two decades it has become necessary to improve the efficiency of investment in them; the government has accordingly improved the national R&D process and the supporting tools for these projects. However, no quality model has yet been developed to evaluate the projects systematically, so a quality model for national R&D projects needs to be established, based on the international standard ISO/IEC 9126. In this paper, the authors propose a quality model for national R&D projects and present a case study to verify it. The quality model is defined in terms of functionality, reliability, usability, efficiency, portability, and maintainability, and the authors suggest metrics for measuring the quality of products or services. Finally, the productivity and quality of national R&D projects can be improved through the quality model.

Quantum Machine Learning: A Scientometric Assessment of Global Publications during 1999-2020

  • Dhawan, S.M.;Gupta, B.M.;Mamdapur, Ghouse Modin N.
    • International Journal of Knowledge Content Development & Technology
    • /
    • v.11 no.3
    • /
    • pp.29-44
    • /
    • 2021
  • The study provides a quantitative and qualitative description of global research in the domain of quantum machine learning (QML) as a way to understand the status of research in the subject at the global, national, institutional, and individual-author levels. The data were sourced from the Scopus database for the period 1999-2020. The study analyzed global research output (1,374 publications) and global citations (22,434 citations) to measure research productivity and performance. In addition, the study carried out bibliometric mapping of the literature to visually represent the network relationships between key countries, institutions, authors, and significant keywords in QML research. The study finds that the USA and China lead the world ranking in QML research, accounting for 32.46% and 22.56% of global output respectively, while the top 25 global organizations and authors account for 35.52% and 16.59% of the global share respectively. The study also tracks key research areas, key global players, the most significant keywords, and the most productive source journals. It observes that QML is gradually emerging as an interdisciplinary research area in computer science, but the body of literature that has appeared so far remains small, even though 22 years have passed since its first publication; QML as a research subject is at present at a nascent stage of its development.

Software Development for Optimal Productivity and Service Level Management in Ports (항만에서 최적 생산성 및 서비스 수준 관리를 위한 소프트웨어 개발)

  • Park, Sang-Kook
    • Journal of Navigation and Port Research
    • /
    • v.41 no.3
    • /
    • pp.137-148
    • /
    • 2017
  • Port service level is a metric of competitiveness among ports for operating/managing bodies such as the terminal operation company (TOC), the Port Authority, or the government, and is used as an important indicator by shipping companies and freight haulers when selecting a port. Considering the importance of these metrics, we developed software to objectively define and manage six important service indicators specific to container and bulk terminals: berth occupancy rate, ship's waiting ratio, berth throughput, number of berths, average number of vessels waiting, and average waiting time. We computed the six service indicators for berths 1 through 5 of the container terminals and berths 1 through 4 of the bulk terminals. The software model allows easy computation of the expected ship's waiting ratio against berth occupancy rate, berth throughput, number of berths, average number of vessels waiting, and average waiting time. Further, the software can predict yearly throughput from the ship's waiting ratio and other productivity indicators, calculated from the arrival patterns of ship traffic. As a result, a TOC is able to make strategic decisions on the trade-offs in the optimal operating level of the facility with better predictors of the service factors (ship's waiting ratio) and productivity factors (yearly throughput). Successful implementation of the software would attract more shipping companies and shippers and maximize TOC profits.
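The abstract does not state the queueing model behind the indicators. Assuming a standard M/M/c view of a berth group (a common textbook choice for berth systems, not necessarily the paper's model; all input figures below are invented), four of the six indicators can be sketched with the Erlang C formula:

```python
from math import factorial

def erlang_c(c, a):
    """Probability an arriving ship must wait in an M/M/c queue with
    c berths and offered load a = arrival_rate * mean_service_time
    (requires a < c for a stable system)."""
    top = (a ** c / factorial(c)) * (c / (c - a))
    return top / (sum(a ** k / factorial(k) for k in range(c)) + top)

def port_indicators(arrival_rate, service_time, berths):
    """Sketch of four indicators under M/M/c assumptions: berth
    occupancy rate, average waiting time, average number of vessels
    waiting, and ship's waiting ratio (mean wait / mean service)."""
    a = arrival_rate * service_time              # offered load in berth-units
    wq = erlang_c(berths, a) / (berths / service_time - arrival_rate)
    return {"occupancy": a / berths,
            "avg_wait": wq,
            "avg_queue": arrival_rate * wq,
            "waiting_ratio": wq / service_time}

# e.g. 0.15 ship arrivals/hour, 24-hour average service, 5 berths
ind = port_indicators(arrival_rate=0.15, service_time=24.0, berths=5)
```

With these inputs the occupancy comes out at 0.72 and the waiting ratio near 0.29, the kind of occupancy-versus-waiting trade-off curve the software is described as exposing to the TOC.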