• Title/Summary/Keyword: Code Metrics


A Study of Estimation for Web Application Complexity (웹 어플리케이션의 복잡도 예측에 관한 연구)

  • Oh Sung-Kyun;Kim Mi-Jin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.3
    • /
    • pp.27-34
    • /
    • 2004
  • As the software development paradigm shifts toward the complex Web environment, research on complexity has become active. Yet no general agreement has been reached on an architecture or a complexity measure for Web applications. Traditional complexity metrics, such as program size (LOC) and cyclomatic complexity, can only be derived from source code after implementation, so they are of little help in the early phases of the software development life cycle, namely analysis and design. In this study, 6 Web projects were used to identify applications with likely errors via a complexity indicator. Using the 61 programs derived, a linear correlation between complexity, the number of classes, and the number of methods is proposed. Since Web application complexity can thus be estimated before implementation, effort and cost management can proceed more effectively.

  • PDF
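The estimation idea above can be sketched as an ordinary least-squares fit of a complexity score against design-time class and method counts. This is a minimal illustration; the data values below are hypothetical, not the paper's 61 programs, and the coefficient names are invented for the example.

```python
# Sketch: estimate Web application complexity before implementation by
# regressing a complexity score on design-time counts (hypothetical data).
import numpy as np

# design-time features: [number of classes, number of methods]
X = np.array([[2, 8], [3, 12], [5, 20], [4, 15], [6, 25], [8, 33]], dtype=float)
y = np.array([14.0, 21.0, 36.0, 27.0, 44.0, 58.0])  # measured complexity scores

# ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b_classes, b_methods = coef

def estimate_complexity(n_classes: int, n_methods: int) -> float:
    """Predict a complexity score for a design with the given counts."""
    return intercept + b_classes * n_classes + b_methods * n_methods

print(estimate_complexity(4, 16))
```

With such a fit in hand, a design with more classes and methods yields a higher predicted complexity, which is what makes pre-implementation effort estimation possible.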

Efficiency Measurement Method and Simplification of Program (프로그램의 효율성 측정 방법과 간소화)

  • Yang, Hae-Sool
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.1
    • /
    • pp.49-62
    • /
    • 1998
  • Software is being developed with many functions to satisfy users' requirements, yet users generally use only a subset of those functions. If we could construct software that keeps the useful functions and removes the unused ones, we could enhance execution efficiency through a smaller program size and improve software quality. The international standard ISO/IEC 9126 defines six software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. In this study, we propose metrics for measuring efficiency and a simplification method for source code. We also describe product evaluation results for a practical development project and discuss problems with, and refinements of, the proposed efficiency metrics.

  • PDF
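The simplification idea above — keep the useful functions, remove the unused ones — can be sketched with a static scan that flags functions that are defined but never called. The paper does not specify a language or tool; this hypothetical example uses Python's `ast` module.

```python
# Sketch: flag functions defined but never called, as candidates for removal
# to shrink program size (the paper's simplification step, illustrated).
import ast

source = """
def used():
    return 1

def unused():
    return 2

print(used())
"""

tree = ast.parse(source)
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

unused = defined - called  # candidates for removal
print(sorted(unused))  # → ['unused']
```

A real tool would also have to account for indirect calls (attributes, callbacks, reflection), which is why such simplification is usually semi-automatic.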

An efficient punctured-coded TCM for the mobile satellite channel (이동 위성 채널에서 효율적인 Punctured TCM 방식)

  • 박성경;김종일;홍성권;주판유;강창언
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.8
    • /
    • pp.2063-2076
    • /
    • 1996
  • In this paper, in order to apply punctured convolutional codes to trellis-coded modulation (TCM), an efficient punctured trellis-coded modulation (PTCM) based on decomposing the metric into orthogonal components is presented. Simulations were performed over an additive white Gaussian noise (AWGN) channel and a Rician fading channel modeling the mobile satellite channel. PTCM combines punctured convolutional coding with MPSK modulation to provide a large coding gain in power-limited or bandwidth-limited channels. In general, using a punctured convolutional code structure in the decoder causes a performance loss compared with trellis codes, due to difficulties in assigning metrics. However, this study shows no loss in performance for punctured trellis-coded MPSK compared with TCM; moreover, the punctured convolutional codes yield some savings in Viterbi decoder complexity compared with TCM of the same rate. The results also show that punctured trellis-coded π/8-shift 8PSK is an attractive scheme for power- and band-limited systems, and in particular, a Viterbi decoder with first and Lth phase-difference metrics improves BER performance over the mobile satellite channel.

  • PDF
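The puncturing step underlying the scheme above can be sketched as deleting coded bits according to a keep/drop pattern, raising a rate-1/2 code to rate 2/3. The pattern used here is a common textbook choice, not necessarily the one in the paper.

```python
# Sketch: puncture a rate-1/2 convolutional code to rate 2/3. With period 2,
# the pattern [[1, 1], [1, 0]] transmits 3 of every 4 coded bits.

def puncture(coded_bits, pattern):
    """Drop coded bits where the puncture pattern has a 0.

    coded_bits: interleaved stream [c1(0), c2(0), c1(1), c2(1), ...]
    pattern: per-branch keep/drop flags, e.g. [[1, 1], [1, 0]]
    """
    period = len(pattern[0])
    out = []
    for i, pair in enumerate(zip(coded_bits[0::2], coded_bits[1::2])):
        col = i % period
        for branch, bit in enumerate(pair):
            if pattern[branch][col]:
                out.append(bit)
    return out

coded = [1, 0, 1, 1, 0, 0, 1, 0]      # 8 bits from a rate-1/2 encoder
sent = puncture(coded, [[1, 1], [1, 0]])
print(sent)  # 6 bits survive: 4 input bits / 6 sent bits = rate 2/3
```

At the decoder, the deleted positions are filled with neutral (zero) metrics, which is exactly the metric-assignment difficulty the abstract refers to.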

A Half-Rate Space-Frequency Coded OFDM with Dual Viterbi Decoder (이중 Viterbi 복호기를 가지는 반율 공간-주파수 부호화된 직교 주파수분할다중화)

  • Kang Seog-Geun
    • The KIPS Transactions:PartC
    • /
    • v.13C no.1 s.104
    • /
    • pp.75-82
    • /
    • 2006
  • In this paper, a space-frequency coded orthogonal frequency division multiplexing (SFC-OFDM) scheme with a dual Viterbi decoder is proposed and analyzed. Two independent half-rate OFDM symbols are generated after convolutional coding of the binary source sequence. In the receiver, a dual Viterbi decoder decodes the demodulated sequences independently, and their path metrics are compared. The recovered binary data in the proposed scheme are thus composed of the sequences having the larger path metrics, whereas in a conventional system they are simply the output of a single Viterbi decoder. As a result, the proposed SFC-OFDM scheme outperforms the conventional one at all signal-to-noise power ratios.
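The selection rule above — decode each half-rate branch independently and take the one with the larger path metric — reduces to a simple comparison. The decoder internals are omitted here; the metrics and bit sequences are hypothetical stand-ins.

```python
# Sketch: the dual-Viterbi selection rule. Each branch is decoded
# independently; the branch with the larger accumulated path metric wins.

def select_by_path_metric(branches):
    """branches: list of (decoded_bits, path_metric) pairs."""
    best_bits, _ = max(branches, key=lambda b: b[1])
    return best_bits

branch_a = ([1, 0, 1, 1], 12.7)   # decoded bits, accumulated path metric
branch_b = ([1, 0, 0, 1], 9.3)

print(select_by_path_metric([branch_a, branch_b]))  # → [1, 0, 1, 1]
```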

The Experimental Comparison of Fault Detection Efficiency of Static Code Analysis Tools for Software RAMS (소프트웨어 RAMS를 위한 정적기법을 이용한 코드 결함 검출 효율성에 관한 실험적 비교)

  • Jang, Jeong-Hoon;Yun, Cha-Jung;Jang, Ju-Su;Lee, Won-Taek;Lee, Eun-Kyu
    • Proceedings of the KSR Conference
    • /
    • 2011.10a
    • /
    • pp.2493-2502
    • /
    • 2011
  • For static analysis of software code, experienced testers prefer to detect defects using selected static techniques. Many static methods have been reported, based on coding rules, software metrics, defect data, and so on. However, many published analyses only assert the effectiveness of static analysis, without describing how the tester judged and classified the code defects found, or how they were removed to ensure high quality. Occasionally, materials demonstrate the effect through a few examples, but they are hard to trust because the application process is not described in enough detail. In this paper, we introduce static techniques commonly used in the railway domain, apply them to real development tasks, and compare and analyze the results. Although it is hard to generalize the results of this paper, they can be used and referenced as a case study.

  • PDF
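A coding-rule check of the kind such static tools apply can be sketched as a set of line-level predicates. The two rules here (line length, leftover TODO markers) are hypothetical illustrations, not the railway rule set the authors used.

```python
# Sketch: a minimal rule-based static checker. Each rule is a predicate over
# a source line; findings are (line number, rule id) pairs.

RULES = [
    ("long-line", lambda line: len(line) > 80),
    ("todo-left", lambda line: "TODO" in line),
]

def check(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, predicate in RULES:
            if predicate(line):
                findings.append((lineno, rule_id))
    return findings

code = "x = 1\n# TODO: remove debug flag\n" + "y = " + "1 + " * 30 + "1\n"
print(check(code))  # → [(2, 'todo-left'), (3, 'long-line')]
```

Industrial tools differ mainly in the breadth of their rule sets and in how findings are classified and dispositioned, which is precisely the process the paper argues is under-documented.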

Criticality benchmarking of ENDF/B-VIII.0 and JEFF-3.3 neutron data libraries with RMC code

  • Zheng, Lei;Huang, Shanfang;Wang, Kan
    • Nuclear Engineering and Technology
    • /
    • v.52 no.9
    • /
    • pp.1917-1925
    • /
    • 2020
  • New versions of the ENDF/B and JEFF data libraries have been released during the past two years, with significant updates in the neutron reaction sublibrary and the thermal neutron scattering sublibrary. In order to get a more comprehensive impression of the criticality quality of these two latest neutron data libraries, and to provide a reference for selecting evaluated nuclear data libraries for science and engineering applications of the Reactor Monte Carlo code RMC, criticality benchmarking of the two latest neutron data libraries has been performed. RMC was employed as the computational tool, for which processing capability for the continuous-representation ENDF/B-VIII.0 thermal neutron scattering laws was developed. An RMC criticality validation suite consisting of 116 benchmarks was established for the benchmarking work. The latest ACE-format data libraries of the neutron reactions and thermal neutron scattering laws for ENDF/B-VIII.0, ENDF/B-VII.1, and JEFF-3.3 were downloaded from the corresponding official sites. The ENDF/B-VII.0 data library was also employed to provide code-to-code validation for RMC. All calculations for the four data libraries were performed with a parallel version of RMC, and all calculated standard deviations are lower than 30 pcm. Comprehensive analyses, including the C/E values with uncertainties, the δk/σ values, and the χ2 and ⟨|Δ|⟩ metrics, were conducted and presented. The calculated keff eigenvalues based on the four data libraries generally agree well with the benchmark evaluations for most cases. Among the 116 criticality benchmarks, the numbers of calculated keff eigenvalues agreeing with the benchmark evaluations within the 3σ interval (with a confidence level of 99.6%) are 107, 109, 112, and 113 for ENDF/B-VII.0, ENDF/B-VII.1, ENDF/B-VIII.0, and JEFF-3.3, respectively. The present results indicate that the ENDF/B-VIII.0 neutron data library performs better on average.
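The benchmark statistics named above (C/E ratios, δk/σ, χ2) can be sketched as follows. The keff values and uncertainties below are hypothetical stand-ins, not RMC results or ICSBEP benchmark data.

```python
# Sketch: C/E ratios, delta-k/sigma, and chi-square per case for a set of
# criticality benchmarks (all numbers hypothetical).
import math

# (calculated keff, calc. std dev, benchmark keff, benchmark std dev)
cases = [
    (1.00032, 0.00025, 1.00000, 0.00110),
    (0.99871, 0.00028, 0.99920, 0.00130),
    (1.00240, 0.00022, 1.00100, 0.00100),
]

for calc, s_calc, bench, s_bench in cases:
    ce = calc / bench                         # C/E ratio
    sigma = math.hypot(s_calc, s_bench)       # combined 1-sigma uncertainty
    dk_over_sigma = (calc - bench) / sigma    # the delta-k / sigma metric
    print(f"C/E = {ce:.5f}, dk/sigma = {dk_over_sigma:+.2f}")

# chi-square per case over the suite
chi2 = sum(((c - b) / math.hypot(sc, sb)) ** 2
           for c, sc, b, sb in cases) / len(cases)
print(f"chi2/N = {chi2:.2f}")
```

A case counts as agreeing "within 3σ" in the abstract's sense when |δk/σ| ≤ 3 with σ the combined calculational and benchmark uncertainty.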

A Study of Estimation for Web Software Size (웹 소프트웨어 규모 예측에 관한 연구)

  • KIM JeeHyun;YOO HaeYoung
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.409-416
    • /
    • 2005
  • Although the software development paradigm has been changing very fast since the beginning of the 21st century, there are only a few studies of quality and estimation metrics appropriate for the Web environment. In this study, after analyzing the correlation between final code size and object properties, three industrial real-world projects written in ASP were used to identify programs with a high likelihood of faults. Program size was then related to the number of classes and the number of methods through linear regression. Among Web software with a complex server/client architecture, form files written in JavaScript for the client showed high correlation, and the number of methods correlated well with the size of the final code.
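The strength of the method-count/code-size relationship reported above is what a correlation coefficient measures. A minimal sketch, with hypothetical counts rather than the paper's ASP project data:

```python
# Sketch: Pearson correlation between method count and final code size
# (hypothetical data illustrating a strong linear relationship).
import math

methods = [5, 9, 14, 20, 26, 31]
loc     = [120, 210, 330, 480, 610, 740]   # final code size in lines

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson(methods, loc):.3f}")  # close to 1 for these data
```

An r near 1 justifies using method counts, available at design time, as a proxy for final code size.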

Secure SLA Management Using Smart Contracts for SDN-Enabled WSN

  • Emre Karakoc;Celal Ceken
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.11
    • /
    • pp.3003-3029
    • /
    • 2023
  • The rapid evolution of the IoT has paved the way for new opportunities in smart city domains, including e-health, smart homes, and precision agriculture. However, this proliferation of services demands effective SLAs between customers and service providers, especially for critical services. Difficulties arise in maintaining the integrity of such agreements, especially in vulnerable wireless environments. This study proposes a novel SLA management model that uses an SDN-enabled WSN consisting of wireless nodes to interact with smart contracts in a straightforward manner. The proposed model ensures the persistence of network metrics and SLA provisions through smart contracts, eliminating the need for intermediaries to audit payment and compensation procedures. The reliability and verifiability of the data prevent disputes between the contracting parties. To meet the high-performance requirements of the blockchain in the proposed model, low-cost algorithms have been developed for implementing blockchain technology in wireless sensor networks with low-energy, low-capacity nodes. Furthermore, a cryptographic signature control code is generated by wireless nodes using the in-memory private key and the dynamic random key from the smart contract at runtime, to prevent tampering with data transmitted over the network. This control code enables verification of end-to-end data signatures. The efficient generation of dynamic keys at runtime is ensured by the flexible, high-performance infrastructure of the SDN architecture.
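The signature control code described above combines a node's in-memory private key with a dynamic runtime key from the smart contract. As a hedged sketch, HMAC-SHA256 stands in for the paper's unspecified cryptographic construction; all key material below is hypothetical.

```python
# Sketch: derive a per-message control code from a node key and a dynamic
# key fetched from the smart contract (HMAC-SHA256 as a stand-in).
import hmac
import hashlib

node_private_key = b"node-secret-key"        # held in node memory
dynamic_key = b"runtime-key-from-contract"   # fetched from the smart contract

def control_code(payload: bytes) -> str:
    # bind the node key to the run-specific dynamic key, then sign the payload
    key = hmac.new(dynamic_key, node_private_key, hashlib.sha256).digest()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

payload = b'{"temp": 21.4, "node": "wsn-07"}'
tag = control_code(payload)

# the receiver recomputes the tag to verify the data were not tampered with
assert hmac.compare_digest(tag, control_code(payload))
print("signature verified")
```

Because the dynamic key changes per run, a captured tag cannot be replayed in a later session even if the payload is identical.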

A Method for Measuring and Evaluating for Block-based Programming Code (블록기반 프로그래밍 코드의 수준 및 취약수준 측정방안)

  • Sohn, Wonsung
    • Journal of The Korean Association of Information Education
    • /
    • v.20 no.3
    • /
    • pp.293-302
    • /
    • 2016
  • Software education is attracting strong interest in public schools and is a high-priority topic in introductory programming courses for college freshmen. Block-based programming tools are widely used by beginners and offer several advantages over text-based programming languages. However, measuring the quality of block-based script code precisely requires a laborious manual process. As a result, previous research on evaluating block-based script code has focused on very simple methods that normalize the number of blocks associated with each programming concept. Such approaches make it difficult to measure the structural vulnerability of script code and the implicit programming concepts that are not directly exposed. In this research, we propose a framework for measuring and evaluating the quality of block-based script code, together with a method for finding vulnerable parts of the script. In this framework, quality metrics are constructed to make implicit programming concepts explicit, and a quality measure and a script vulnerability model are developed to improve programming level. Consequently, the proposed methods make it possible to check a learner's programming level and predict a heuristic target level.
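The measurement idea above — score a script by which programming concepts its blocks exercise and flag the unexercised ones as weak areas — can be sketched as follows. The concept table and weights are hypothetical illustrations, not the paper's metric.

```python
# Sketch: score a block-based script by concept coverage; unused concepts
# are reported as vulnerable (weak) areas (weights are hypothetical).

CONCEPT_WEIGHTS = {
    "sequence": 1, "loop": 2, "conditional": 2,
    "variable": 2, "procedure": 3, "event": 1,
}

def script_level(block_concepts):
    """block_concepts: list of concept names used by the script's blocks."""
    used = set(block_concepts) & CONCEPT_WEIGHTS.keys()
    score = sum(CONCEPT_WEIGHTS[c] for c in used)
    weak = sorted(CONCEPT_WEIGHTS.keys() - used)   # unexercised concepts
    return score, weak

score, weak = script_level(["sequence", "loop", "loop", "variable"])
print(score, weak)  # → 5 ['conditional', 'event', 'procedure']
```

Scoring distinct concepts rather than raw block counts avoids rewarding repetition, which is the weakness of the simple normalization the abstract criticizes.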

A study on Quality Metrics of Reusable Classes Candidate (재사용가능한 클래스 후보자들의 품질 메트릭들에 관한 연구)

  • Kim, Jae-Saeng;Song, Yeong-Jae
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.1
    • /
    • pp.107-117
    • /
    • 1997
  • Software quality evaluation is used in many studies to assess systems under development or already developed, to correct problems, and to select reusable components from source code. In this paper, we propose objective metric functions that can evaluate the reusability of candidate classes with the KHR system [11] and select a proper candidate. The quantitative quality metrics we propose make it possible to compare and evaluate the reusable candidate classes.

  • PDF
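An objective reusability score of the kind proposed above can be sketched as a weighted combination of simple class properties. The formula, weights, and inputs below are hypothetical; the paper's actual metric functions are tied to the KHR system.

```python
# Sketch: a reusability score for a candidate class from interface richness,
# coupling, and documentation (formula and weights hypothetical).

def reusability(n_public_methods, n_external_deps, doc_ratio):
    """Higher is better: usable interface, low coupling, good documentation.

    doc_ratio: fraction of methods with documentation, in [0, 1].
    """
    interface = min(n_public_methods, 10) / 10      # saturate at 10 methods
    coupling_penalty = 1.0 / (1 + n_external_deps)  # fewer deps -> closer to 1
    return round(0.4 * interface + 0.4 * coupling_penalty + 0.2 * doc_ratio, 3)

print(reusability(n_public_methods=6, n_external_deps=2, doc_ratio=0.5))  # → 0.473
```

Any such quantitative function lets candidate classes be ranked consistently, which is the point of replacing subjective review with objective metrics.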