• Title/Summary/Keyword: Code Metrics

A comparison of three performance-based seismic design methods for plane steel braced frames

  • Kalapodis, Nicos A.;Papagiannopoulos, George A.;Beskos, Dimitri E.
    • Earthquakes and Structures
    • /
    • v.18 no.1
    • /
    • pp.27-44
    • /
    • 2020
  • This work presents a comparison of three performance-based seismic design (PBSD) methods as applied to plane steel frames having eccentric braces (EBFs) and buckling-restrained braces (BRBFs). The first method uses equivalent modal damping ratios ($\xi_k$), referring to an equivalent multi-degree-of-freedom (MDOF) linear system which retains the mass and elastic stiffness of, and responds in the same way as, the original non-linear MDOF system. The second method employs modal strength reduction factors ($\bar{q}_k$) resulting from the corresponding modal damping ratios. Contrary to the behavior factors of code-based design methods, both $\xi_k$ and $\bar{q}_k$ account for the first few significant modes and incorporate target deformation metrics like inter-storey drift ratio (IDR) and local ductility, as well as structural characteristics like the natural period and soil type. Explicit empirical expressions for $\xi_k$ and $\bar{q}_k$, recently presented by the authors elsewhere, are also provided here for completeness and easy reference. The third method, developed here by the authors, is based on a hybrid force/displacement (HFD) seismic design scheme, since it combines the force-based design (FBD) method with the displacement-based design (DBD) method. According to this method, seismic design is accomplished by using a behavior factor ($q_h$), empirically expressed in terms of the global ductility of the frame, which takes into account both non-structural and structural deformation metrics. These expressions for $q_h$ are obtained through extensive parametric studies involving non-linear dynamic analysis (NLDA) of 98 frames subjected to 100 far-fault ground motions corresponding to the four soil types of Eurocode 8. Furthermore, these factors can be used in conjunction with an elastic acceleration design spectrum for seismic design purposes. Finally, a comparison among the above three seismic design methods and the Eurocode 8 method is conducted with the aid of non-linear dynamic analyses via representative numerical examples involving plane steel EBFs and BRBFs.
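
For orientation, a minimal sketch of how modal strength reduction factors of this kind typically enter a design check is given below; the notation ($S_{a,e}$ for the elastic spectral acceleration, $m_k^{*}$ for the effective modal mass, $V_{b,k}$ for the modal base shear) is assumed for illustration and is not taken from the paper.

```latex
% Sketch (assumed notation): each significant mode k is designed for a
% reduced spectral demand, dividing the elastic spectral acceleration by
% the modal strength reduction factor \bar{q}_k rather than by a single
% code behavior factor q.
S_{a,d}(T_k) = \frac{S_{a,e}(T_k)}{\bar{q}_k},
\qquad
V_{b,k} = m_k^{*} \, S_{a,d}(T_k)
```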

A Software Complexity Measurement Technique for Object-Oriented Reverse Engineering (객체지향 역공학을 위한 소프트웨어 복잡도 측정 기법)

  • Kim Jongwan;Hwang Chong-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.9
    • /
    • pp.847-852
    • /
    • 2005
  • Over the last decade, numerous complexity measurement techniques for object-oriented (OO) software systems have been proposed to manage the effects of OO code. These techniques are often based on source code analysis, such as WMC (Weighted Methods per Class) and LCOM (Lack of Cohesion in Methods), but they are limited to counting the number of member functions (C++). We therefore suggest a new weighted method that also checks the number of parameters, the return value, and its data type. We then present an effective complexity measurement technique based on the weight of class interfaces, providing guidelines for measuring the class complexity of OO code in reverse engineering. The results of this research show that the proposed complexity measure, ECC (Enhanced Class Complexity), is consistent and accurate in a C++ environment.
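
The abstract does not give the exact ECC formula, so the following is only a minimal sketch of the idea it describes: weighting each method of a class interface by its parameter count, return value, and data types. The weights and names here are assumptions for illustration.

```python
# Sketch of an interface-weighted class-complexity metric in the spirit
# of ECC. The weighting scheme below is assumed; the paper's actual ECC
# definition is not reproduced in the abstract.

# Assumed per-type weights: richer data types contribute more complexity.
TYPE_WEIGHTS = {"void": 0, "int": 1, "double": 1, "char*": 2, "struct": 3}

def method_weight(param_types, return_type):
    """Weight one method by its parameters and its return value's type."""
    w = sum(TYPE_WEIGHTS.get(t, 2) for t in param_types)  # unknown type -> 2
    w += TYPE_WEIGHTS.get(return_type, 2)
    return 1 + w  # every method contributes at least 1, as in WMC

def class_complexity(methods):
    """methods: list of (param_types, return_type) tuples for one class."""
    return sum(method_weight(p, r) for p, r in methods)

# Example: a small C++-like class with two methods.
cls = [(["int", "int"], "int"), (["char*"], "void")]
print(class_complexity(cls))  # (1+2+1) + (1+2+0) -> 7
```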

Identification of Microservices to Develop Cloud-Native Applications (클라우드네이티브 애플리케이션 구축을 위한 마이크로서비스 식별 방법)

  • Choi, Okjoo;Kim, Yukyong
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.1
    • /
    • pp.51-58
    • /
    • 2021
  • Microservices are not only developed independently but can also be run and deployed independently, ensuring more flexible scaling and efficient collaboration in a cloud computing environment. This impact has led to a surge in migration to microservices-oriented application environments in recent years. To introduce microservices, the problem of identifying microservice units within an application built on a monolithic architecture must first be solved. In this paper, we propose an algorithm-based approach to identify microservices from legacy systems. A graph is generated using the meta-information of the legacy code, and microservice candidates are extracted by applying a clustering algorithm. Modularization quality is evaluated using metrics over the extracted microservice candidates. To validate the proposed method, candidate services are derived from the code of open-source software widely used for benchmarking, and the level of modularity is evaluated using the metrics. The approach identifies services at a finer granularity of microservice, and as a result the module quality is improved.
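
As a rough illustration of the pipeline the abstract describes (graph from code metadata, clustering, modularity metrics), here is a minimal sketch using networkx's greedy modularity communities as a stand-in for the paper's algorithm; the relations below are hypothetical.

```python
# Sketch: build a dependency graph from legacy-code metadata, cluster it
# into microservice candidates, and score the partition's modularity.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Assumed metadata: (caller, callee) relations extracted from legacy code.
relations = [("OrderSvc", "PaymentSvc"), ("OrderSvc", "Inventory"),
             ("PaymentSvc", "Ledger"), ("Inventory", "Warehouse")]

G = nx.Graph()
G.add_edges_from(relations)

# Cluster modules into microservice candidates.
candidates = greedy_modularity_communities(G)

# Evaluate modularization quality of the candidate partition.
score = modularity(G, candidates)
print([sorted(c) for c in candidates], round(score, 3))
```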

$\pi/4$ shift QPSK with Trellis-Code in Rayleigh Fading Channel (레일레이 페이딩 채널에서 Trellis 부호를 적용한 $\pi/4$ shift QPSK)

  • 김종일;이한섭;강창언
    • The Proceeding of the Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.3 no.2
    • /
    • pp.30-38
    • /
    • 1992
  • In this paper, in order to apply $\pi/4$ shift QPSK to TCM, we propose the $\pi/8$ shift 8PSK modulation technique and the trellis-coded $\pi/8$ shift 8PSK, performing signal set expansion and set partitioning by phase difference. In addition, a Viterbi decoder with branch metrics based on the squared Euclidean distance of the first phase difference as well as the Lth phase difference is introduced in order to improve the bit error rate (BER) performance of differential detection of the trellis-coded $\pi/8$ shift 8PSK. The proposed Viterbi decoder is conceptually the same as sliding multiple detection, using branch metrics with first- and Lth-order phase differences. We investigate the performance of the uncoded $\pi/4$ shift QPSK and the trellis-coded $\pi/8$ shift 8PSK, with and without the Lth phase difference metric, in additive white Gaussian noise (AWGN) and Rayleigh fading channels using Monte Carlo simulation. The study shows that $\pi/4$ shift QPSK with a trellis code, i.e., the trellis-coded $\pi/8$ shift 8PSK, is an attractive scheme for power- and band-limited systems, and in particular the Viterbi decoder with first and Lth phase difference metrics improves BER performance. The proposed algorithm can also be used in TC $\pi/8$ shift 8PSK as well as TC MDPSK.
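
A minimal sketch of the combined branch metric the abstract describes (squared Euclidean distance on both the first and the Lth phase difference) might look as follows; the normalization and the example signal are assumptions, not the paper's decoder.

```python
# Sketch of a combined branch metric using both the 1st- and Lth-order
# phase differences of received pi/8-shift 8PSK symbols.
import numpy as np

def branch_metric(r, k, phi1_hyp, phiL_hyp, L):
    """Metric for one trellis branch at time k.

    r        : complex received symbols (numpy array)
    phi1_hyp : hypothesized 1st-order phase difference for this branch
    phiL_hyp : hypothesized Lth-order phase difference for this branch
    """
    d1 = r[k] - r[k - 1] * np.exp(1j * phi1_hyp)  # 1st-difference error
    dL = r[k] - r[k - L] * np.exp(1j * phiL_hyp)  # Lth-difference error
    return np.abs(d1) ** 2 + np.abs(dL) ** 2      # combined squared distance

# Example: differentially encoded pi/8-shift 8PSK phases plus light noise.
phases = np.cumsum(np.pi / 8 * np.array([1, 3, 5, 1, 7]))
r = np.exp(1j * phases) + 0.05 * (np.random.randn(5) + 1j * np.random.randn(5))
# True increments at k=4: 7*pi/8 (1st order) and (1+7)*pi/8 = pi (L=2).
print(branch_metric(r, 4, 7 * np.pi / 8, np.pi, 2))
```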

Extracting the Source Code Context to Predict Import Changes using GPES

  • Lee, Jaekwon;Kim, Kisub;Lee, Yong-Hyeon;Hong, Jang-Eui;Seo, Young-Hoon;Yang, Byung-Do;Jung, Woosung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.2
    • /
    • pp.1234-1249
    • /
    • 2017
  • One of the difficulties developers encounter in maintaining a large-scale software system is updating suitable libraries on time. Developers tend to miss or make mistakes when searching for and choosing libraries during development, or a stable library may simply not exist. We present a novel approach for helping developers modify software easily and on time and avoid software failures. Using GPES, a tool we built previously, we collected project information such as abstract syntax trees, tokens, software metrics, relations, and evolutions for our experiments. We analyzed the contexts of source code in existing projects to predict changes automatically and to recommend suitable libraries for the projects. The collected data show that researchers can reduce the overall cost of data analysis by transforming the extracted data into the required input formats with a simple query-based implementation. We also manually evaluated how similar the extracted contexts are to the project descriptions and found that a sufficient number of context words overlap, which may help developers grasp the domain of the source code easily.
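
The word-level similarity check mentioned at the end of the abstract could be as simple as the following sketch of token overlap between an extracted context and a project description; GPES's actual evaluation procedure is not specified in the abstract, so this is only an illustrative stand-in.

```python
# Sketch: plain word overlap between tokens extracted from a source-code
# context and a project description.
import re

def token_overlap(context_text, description):
    """Fraction of description words that also appear in the code context."""
    tokenize = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    ctx, desc = tokenize(context_text), tokenize(description)
    return len(ctx & desc) / len(desc) if desc else 0.0

ctx = "import json parser stream read write buffer"
desc = "A streaming JSON parser with buffered read and write support"
print(round(token_overlap(ctx, desc), 2))  # 4 of 10 description words -> 0.4
```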

Software Development Effort Estimation Using Neural Network Model (신경망을 이용한 소프트웨어 개발노력 추정)

  • Lee, Sang-Un
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.241-246
    • /
    • 2001
  • The area of software measurement in software engineering has been active for more than thirty years. There is a huge body of research, but still no concrete software cost estimation model. If we want to measure the cost and effort of a software project, we need to estimate the size of the software. A number of software metrics are identified in the literature; the most frequently cited measures are LOC (lines of code) and FPA (function point analysis). The FPA approach has features that overcome the major problems with using LOC as a measure of system size. This paper presents a neural network (NN) model that relates software development effort to software size measured in FPs and function element types. The research describes appropriate NN modeling in the context of a case study of 24 software development projects. The NN model is also compared with a regression analysis model and found to have better estimation accuracy.
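
A minimal sketch of an FP-based neural network effort model in the spirit of the abstract is shown below; the architecture, features, and data are illustrative assumptions, not the paper's actual model.

```python
# Sketch: a small feed-forward network mapping function-point element
# counts to development effort.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed inputs per project: counts of the five FP element types
# (EI, EO, EQ, ILF, EIF) plus the total unadjusted FP count.
X = np.array([[20, 15, 10,  5, 3, 310],
              [35, 25, 12,  9, 4, 540],
              [10,  8,  6,  3, 2, 170],
              [50, 40, 20, 12, 6, 820]], dtype=float)
y = np.array([1200.0, 2100.0, 650.0, 3400.0])  # effort in person-hours

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[25, 18, 11, 6, 3, 400]]))  # estimated effort
```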

Software Development Effort Estimation Using Function Point (기능점수를 이용한 소프트웨어 개발노력 추정)

  • Lee, Sang-Un;Gang, Jeong-Ho;Park, Jung-Yang
    • The KIPS Transactions:PartD
    • /
    • v.9D no.4
    • /
    • pp.603-612
    • /
    • 2002
  • The area of software measurement in software engineering has been active for more than thirty years. There is a huge body of research, but still no concrete software development effort and cost estimation model. If we want to measure the effort and cost of a software project, we need to estimate the size of the software. A number of software metrics are identified in the literature; the most frequently cited measures are LOC (lines of code) and FPA (function point analysis). The FPA approach has features that overcome the major problems with using LOC as a measure of system size. This paper presents a simple linear regression model that relates software development effort to software size measured in FP. The model is derived from a plot of the effort-FP relation. The experimental data were collected from 789 software development projects recently developed under various development environments and methods. The model is also compared with other regression analysis models and shows the best estimation ability among the software effort estimation models considered.
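
The model form described here is an ordinary least squares line, effort = a + b·FP. A minimal sketch under assumed data follows; the paper fits 789 real projects.

```python
# Sketch: least-squares fit of effort against function points.
import numpy as np

fp = np.array([120.0, 310.0, 540.0, 170.0, 820.0])          # function points
effort = np.array([400.0, 1200.0, 2100.0, 650.0, 3400.0])   # person-hours

b, a = np.polyfit(fp, effort, 1)    # slope b, intercept a
print(f"effort ~= {a:.1f} + {b:.2f} * FP")
print(a + b * 400)                  # predicted effort for a 400-FP project
```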

EDF: An Interactive Tool for Event Log Generation for Enabling Process Mining in Small and Medium-sized Enterprises

  • Frans Prathama;Seokrae Won;Iq Reviessay Pulshashi;Riska Asriana Sutrisnowati
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.6
    • /
    • pp.101-112
    • /
    • 2024
  • In this paper, we present EDF (Event Data Factory), an interactive tool designed to assist event log generation for process mining. EDF integrates various data connectors to help users connect to diverse data sources. The tool employs low-code/no-code technology, along with graph-based visualization, to help non-expert users understand process flows and to enhance the user experience. By utilizing metadata information, EDF allows users to efficiently generate an event log containing case, activity, and timestamp attributes. Through log quality metrics, the tool enables users to assess the quality of the generated event log. We implement EDF on a cloud-based architecture and run a performance evaluation. Our case study and results demonstrate the usability and applicability of EDF. Finally, an observational study confirms that EDF is easy to use and beneficial, expanding small and medium-sized enterprises' (SMEs) access to process mining applications.
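
A minimal sketch of the event-log shape EDF targets (case, activity, timestamp) is shown below; the column names and source records are assumptions, and EDF's connectors and metadata handling are not reproduced.

```python
# Sketch: map raw records to the case / activity / timestamp attributes
# that process mining expects.
import pandas as pd

raw = pd.DataFrame({
    "order_id":   [101, 101, 102, 102],
    "status":     ["created", "paid", "created", "shipped"],
    "updated_at": ["2024-05-01 09:00", "2024-05-01 10:30",
                   "2024-05-02 11:00", "2024-05-03 08:45"],
})

# Rename to the standard event-log attributes and sort within each case.
log = raw.rename(columns={"order_id": "case_id",
                          "status": "activity",
                          "updated_at": "timestamp"})
log["timestamp"] = pd.to_datetime(log["timestamp"])
log = log.sort_values(["case_id", "timestamp"])
print(log)
```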

Analysis of Ammunition Inspection Record Data and Development of Ammunition Condition Code Classification Model (탄약검사기록 데이터 분석 및 탄약상태기호 분류 모델 개발)

  • Young-Jin Jung;Ji-Soo Hong;Sol-Ip Kim;Sung-Woo Kang
    • Journal of the Korea Safety Management & Science
    • /
    • v.26 no.2
    • /
    • pp.23-31
    • /
    • 2024
  • In the military, the ammunition and explosives stored and managed can cause serious damage if mishandled, so securing safety through the use of ammunition reliability data is necessary. In this study, exploratory data analysis of ammunition inspection record data is conducted to extract reliability information for stored ammunition and to predict the ammunition condition code, which represents the lifespan information of the ammunition. The study consists of three stages: collection and preprocessing of ammunition inspection records, exploratory data analysis, and classification of ammunition condition codes. For the classification, five models based on boosting algorithms are employed (AdaBoost, GBM, XGBoost, LightGBM, CatBoost), and the best model is selected based on performance metrics including Accuracy, Precision, Recall, and F1-score. The ammunition in this study was primarily produced from the 1980s to the 1990s, with increased inspection volume in the early stages of production and around 30 years after production. Pre-issue inspections (PII) were predominant, and the grade of the ammunition condition code tended to decrease as the storage period increased. In the classification of ammunition condition codes, the CatBoost model exhibited the best performance, with an Accuracy of 93% and an F1-score of 93%. This study emphasizes the safety and reliability of ammunition and proposes a model for classifying ammunition condition codes by analyzing ammunition inspection record data. The model can serve as a tool to assist ammunition inspectors and is expected to enhance not only the safety of ammunition but also the efficiency of ammunition storage management.
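
A minimal sketch of the model-comparison step follows, using only scikit-learn's boosting implementations and placeholder data; the paper additionally evaluates XGBoost, LightGBM, and CatBoost on real inspection records.

```python
# Sketch: train boosting classifiers on stand-in features and compare
# Accuracy and macro F1-score, mirroring the comparison described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for inspection-record features and
# ammunition condition codes (the class labels).
X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for model in (AdaBoostClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(type(model).__name__,
          round(accuracy_score(yte, pred), 3),
          round(f1_score(yte, pred, average="macro"), 3))
```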

Performance Analysis of MAP Algorithm by Robust Equalization Techniques in Nongaussian Noise Channel (비가우시안 잡음 채널에서 Robust 등화기법을 이용한 터보 부호의 MAP 알고리즘 성능분석)

  • 소성열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.9A
    • /
    • pp.1290-1298
    • /
    • 2000
  • The Turbo Code decoder uses an iterative decoding technique that extracts extrinsic information for the bit being decoded by calculating both forward and backward metrics, and passes that information to the next decoding step. Turbo Codes show excellent BER performance, approaching the Shannon limit, when the interleaver is large and enough decoding iterations are run. However, the interleaver and iterative decoding increase complexity and delay and make real-time processing difficult. In this paper, we analyze the MAP (maximum a posteriori) algorithm, one of the Turbo Code decoding algorithms, and the factors that determine its performance. The MAP algorithm performs iterative decoding by determining soft-decision values from the transition probabilities between all adjacent bits and the received symbols. Therefore, to improve the performance of the MAP algorithm, the reliability of adjacent received symbols must be ensured. The MAP algorithm itself cannot ensure this, so additional processing is needed to reduce the number of decoding iterations. Consequently, we analyze the performance of the MAP algorithm and Turbo Codes in a non-Gaussian noise channel, applying a robust equalization technique in order to feed more reliable received-symbol information into the MAP algorithm.
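
A minimal sketch of the forward/backward metric recursions at the heart of a MAP (BCJR-style) decoder is given below for a toy two-state trellis; the branch metrics are taken as given here, whereas a real decoder derives them from the received symbols and the noise model.

```python
# Sketch: forward (alpha) and backward (beta) metric recursions over a
# toy 2-state trellis, as used in MAP/BCJR decoding.
import numpy as np

S, T = 2, 4                      # states, trellis steps
gamma = np.random.rand(T, S, S)  # gamma[t, s', s]: branch metric s' -> s

alpha = np.zeros((T + 1, S)); alpha[0, 0] = 1.0   # forward metrics
beta = np.zeros((T + 1, S));  beta[T, :] = 1.0    # backward metrics

for t in range(T):               # forward recursion
    alpha[t + 1] = alpha[t] @ gamma[t]
    alpha[t + 1] /= alpha[t + 1].sum()            # normalize for stability

for t in range(T - 1, -1, -1):   # backward recursion
    beta[t] = gamma[t] @ beta[t + 1]
    beta[t] /= beta[t].sum()

# A-posteriori state probabilities at each step (up to normalization);
# a full BCJR derives per-bit extrinsic information from these.
post = alpha[:T] * beta[:T]
print(post / post.sum(axis=1, keepdims=True))
```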
