• Title/Summary/Keyword: class complexity metrics


The Complexity of Object-Oriented Systems by Analyzing the Class Diagram of UML (UML 클래스 다이어그램 분석에 의한 객체지향 시스템의 복잡도 연구)

  • Chung, Hong;Kim, Tae-Sik
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.6 / pp.780-787 / 2005
  • Many studies have investigated and validated complexity metrics for object-oriented systems. Most of them measure only partial aspects of a system, such as the coupling between objects, the complexity of inheritance structures, or the cohesion of methods. Software practitioners, however, want to measure the complexity of the overall system, not just its parts. We studied the complexity of the overall structure of object-oriented systems by analyzing the UML class diagram. A class diagram is composed of classes and their relations. There are three kinds of relations, association, generalization, and aggregation, which make the structure of an object-oriented system difficult to understand. We propose a heuristic metric that measures the complexity of object-oriented systems by combining the three kinds of relations. To analyze the structural complexity of an object-oriented system with respect to its maintainability, we measured the degree of understandability of the system, the reverse-engineering time needed to draw a class diagram from the source code, and the number of errors in the resulting diagram. The results of this experiment show that the proposed metric has a considerable relationship with the complexity of object-oriented systems. The metric should help software developers evaluate the complexity of the structures of object-oriented systems during design, and redesign them for future maintainability.
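A metric of the kind this abstract describes, combining counts of the three relation kinds into one score, might be sketched as follows. The relation weights and the example diagram are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of a class-diagram complexity score that combines
# the three UML relation kinds (association, aggregation, generalization).
# The weights are illustrative assumptions, not the paper's calibration.
from collections import Counter

# Assumed relative weights: generalization and aggregation are often
# considered harder to trace than a plain association.
WEIGHTS = {"association": 1.0, "aggregation": 1.5, "generalization": 2.0}

def diagram_complexity(relations):
    """relations: list of (source_class, kind, target_class) tuples."""
    counts = Counter(kind for _, kind, _ in relations)
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Toy class diagram: 2 associations, 1 aggregation, 1 generalization.
diagram = [
    ("Order", "association", "Customer"),
    ("Order", "association", "Payment"),
    ("Order", "aggregation", "OrderLine"),
    ("CreditPayment", "generalization", "Payment"),
]
print(diagram_complexity(diagram))  # 2*1.0 + 1.5 + 2.0 = 5.5
```

In practice the weights would have to be calibrated against the understandability and reverse-engineering measurements the paper uses for validation.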

Constructing an Open Source Based Software System for Reusable Module Extraction (재사용 모듈 추출을 위한 오픈 소스 기반 소프트웨어 시스템 구축)

  • Byun, Eun Young;Park, Bokyung;Jang, Woosung;Kim, R. Young Chul;Son, Hyun Seung
    • KIISE Transactions on Computing Practices / v.23 no.9 / pp.535-541 / 2017
  • Today, the computer software market has grown, and massive software systems are developed to satisfy diverse requirements. In this context, software complexity is increasing and software quality is becoming harder to manage. In particular, software reuse is important both for improving the environments of legacy systems and for new system development. In this paper, we propose a method for reusing modules whose quality has been certified. Reuse levels are divided into the code level (method, class, and component), the project domain level, and the business level. Based on the coupling and cohesion aspects of software complexity, we propose a reusable-module extraction mechanism with reusability metrics, which constructs a visualization of "reusable module chunks" at the method and class levels. By applying reverse engineering to legacy projects, it is possible to identify reusable modules/objects/chunks. If these modules/objects/chunks are reused to develop an extension of a system or a similar new system, software reliability can be ensured while reducing the time and cost of software development.
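Ranking candidate modules by low coupling and high cohesion, as the abstract's extraction mechanism does, could be sketched roughly as below. The scoring formula, weightings, and sample modules are illustrative assumptions rather than the paper's actual metrics:

```python
# Hypothetical sketch of ranking modules for reuse by low coupling and
# high cohesion. The score formula and the sample data are assumptions,
# not the paper's published reusability metrics.
def reusability_score(fan_out, cohesion, max_fan_out=10):
    """fan_out: number of external modules referenced (coupling proxy).
    cohesion: fraction of method pairs sharing an attribute, in [0, 1]."""
    coupling_penalty = min(fan_out, max_fan_out) / max_fan_out
    # Equal weighting of cohesion and (inverted) coupling is an assumption.
    return round(0.5 * cohesion + 0.5 * (1 - coupling_penalty), 3)

# Toy modules as (fan_out, cohesion) pairs extracted from a legacy project.
modules = {"Parser": (2, 0.9), "GodClass": (9, 0.2), "Utils": (0, 0.5)}
ranked = sorted(modules, key=lambda m: reusability_score(*modules[m]), reverse=True)
print(ranked)  # modules ordered from most to least reusable
```

A tool following the paper's approach would derive `fan_out` and `cohesion` from reverse-engineered source code rather than hand-entered values.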

An Improved method of Two Stage Linear Discriminant Analysis

  • Chen, Yarui;Tao, Xin;Xiong, Congcong;Yang, Jucheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1243-1263 / 2018
  • Two-stage linear discriminant analysis (TSLDA) is a feature extraction technique for the small sample size problem in image recognition. TSLDA retains all subspace information of the between-class scatter and the within-class scatter. However, the feature information in the four subspaces may not be entirely beneficial for classification, and the regularization procedure for eliminating singular matrices in TSLDA has high time complexity. To address these drawbacks, this paper proposes an improved two-stage linear discriminant analysis (Improved TSLDA). Improved TSLDA uses a selection and compression method to extract superior feature information from the four subspaces and constitute an optimal projection space, defining a single Fisher criterion to measure the importance of each feature vector. Improved TSLDA also applies an approximation matrix method to eliminate the singular matrices and reduce time complexity. This paper presents comparative experiments on five face databases and one handwritten digit database to validate the effectiveness of Improved TSLDA.
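The single Fisher criterion the abstract mentions scores a candidate projection direction by between-class scatter over within-class scatter. A minimal sketch, with toy data that is purely illustrative:

```python
# Hypothetical sketch of a single-vector Fisher criterion:
# J(w) = (w^T S_b w) / (w^T S_w w) for a candidate direction w.
# The toy data below is an assumption for illustration only.
import numpy as np

def fisher_score(w, X, y):
    """Score direction w on data X (rows = samples) with labels y."""
    mean_all = X.mean(axis=0)
    S_b = np.zeros((X.shape[1], X.shape[1]))  # between-class scatter
    S_w = np.zeros_like(S_b)                  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mean_all).reshape(-1, 1)
        S_b += len(Xc) * d @ d.T
        S_w += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    return float(w @ S_b @ w) / float(w @ S_w @ w)

# Two well-separated classes along the diagonal.
X = np.array([[0.0, 0.1], [0.1, 0.0], [3.0, 3.1], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
# A direction separating the classes scores far higher than an orthogonal one.
print(fisher_score(np.array([1.0, 1.0]), X, y))
```

In Improved TSLDA this kind of score would rank feature vectors from the four subspaces so that only the most discriminative ones enter the final projection space.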

Study on Fault Diagnosis and Data Processing Techniques for Substrate Transfer Robots Using Vibration Sensor Data

  • MD Saiful Islam;Mi-Jin Kim;Kyo-Mun Ku;Hyo-Young Kim;Kihyun Kim
    • Journal of the Microelectronics and Packaging Society / v.31 no.2 / pp.45-53 / 2024
  • The maintenance of semiconductor equipment is crucial for the continuous growth of the semiconductor market. System management is imperative given the anticipated increase in the capacity and complexity of industrial equipment. Ensuring optimal operation of manufacturing processes is essential to maintaining a steady supply of numerous parts. In particular, monitoring the status of substrate transfer robots, which play a central role in these processes, is crucial, and diagnosing failures of their major components is vital for preventive maintenance. Fault diagnosis methods can be broadly categorized into physics-based and data-driven approaches. This study focuses on data-driven fault diagnosis because of the limitations of physics-based approaches. We propose a methodology for data acquisition and preprocessing for robot fault diagnosis. Data are gathered from vibration sensors, and a preprocessing method is applied to the vibration signals. The dataset is then used to train the gradient-boosted-tree XGBoost classification algorithm. The effectiveness of the proposed model is validated through performance evaluation metrics, including accuracy, F1 score, and the confusion matrix. The XGBoost classifier achieves an accuracy of approximately 92.76% and an equivalent F1 score. ROC curves indicate excellent class discrimination, with 100% discrimination for the normal class and 98% for the abnormal classes.
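The evaluation step the abstract describes, scoring predictions with accuracy, F1, and a confusion matrix, can be sketched from first principles. The label vectors below are illustrative, not the paper's data:

```python
# Hypothetical sketch of the evaluation metrics the abstract names:
# accuracy, F1 score, and confusion matrix over classifier predictions.
# The toy label vectors are assumptions for illustration.
def confusion_matrix(y_true, y_pred, labels=(0, 1)):
    m = {(t, p): 0 for t in labels for p in labels}
    for t, p in zip(y_true, y_pred):
        m[(t, p)] += 1
    return m

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive != t for t, p in zip(y_true, y_pred))
    fn = sum(t == positive != p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# 1 = faulty robot component, 0 = normal; toy predictions.
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_pred = [0, 0, 1, 1, 1, 0, 1, 0]
print(accuracy(y_true, y_pred), f1(y_true, y_pred))  # 0.75 0.75
```

In the study itself these values would come from an XGBoost model trained on preprocessed vibration-sensor features rather than hand-written predictions.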

2-Polling Feedback Scheme for Stable Reliable Broadcast in CSMA Wireless Networks (CSMA 무선 네트워크에서 안정성 있는 신뢰적 브로드캐스트를 위한 2-폴링 피드백 방법)

  • Yoon, Wonyong
    • The Journal of Korean Institute of Communications and Information Sciences / v.37B no.12 / pp.1208-1218 / 2012
  • Disseminating broadcast information stably and reliably in IEEE 802.11-like CSMA wireless networks requires that a source seek collision-free transmission to multiple receivers and keep track of their reception state. We propose a simple yet efficient feedback scheme for stable reliable broadcast in wireless networks, called 2-polling feedback, in which the states of two receivers are checked by the source before each broadcast transmission attempt. We present a performance analysis of this class of reliable broadcast feedback schemes in terms of two performance metrics: packet transmission delay and packet stable time. The analysis shows that the proposed 2-polling feedback scheme outperforms the existing classes of feedback schemes in the literature, i.e., all-polling feedback and 1-polling feedback. The 2-polling feedback scheme has lower asymptotic complexity than all-polling feedback, and has the same asymptotic complexity as 1-polling feedback but exhibits an almost 50% reduction in packet stable time.

A study on the Effect of Big Data Quality on Corporate Management Performance (빅데이터 품질이 기업의 경영성과에 미치는 영향에 관한 연구)

  • Lee, Choong-Hyong;Kim, YoungJun
    • Journal of the Korea Convergence Society / v.12 no.8 / pp.245-256 / 2021
  • The Fourth Industrial Revolution brought out the quantitative value of data across industries and ushered in the era of 'Big Data'. This is due both to the rapid development of information and communication technology and to the diversity and complexity of customer purchasing tendencies. An enterprise's core competence in the Big Data era is to analyze and utilize data to make strategic decisions. However, most traditional studies on Big Data have focused on technical issues and future potential value, and have paid little attention to managing the quality and utilization levels of the internal and external customer Big Data held by an entity. To overcome these shortcomings, this study attempted to derive influential factors by examining quality management information systems and quality management of internal and external Big Data. We surveyed 204 executives and employees to determine whether Big Data quality management, Big Data utilization, and level management have a significant impact on corporate work efficiency and corporate management performance. For this purpose, hypotheses were established and verified. As a result, we found that the factors that significantly affect corporate management performance are management support, individual innovation, changes in the management environment, Big Data quality utilization metrics, and a Big Data governance system.