• Title/Summary/Keyword: Software Complexity

Classifying a Strength of Dependency between classes by using Software Metrics and Machine Learning in Object-Oriented System (기계학습과 품질 메트릭을 활용한 객체간 링크결합강도 분류에 관한 연구)

  • Jung, Sungkyun;Ahn, Jaegyoon;Yeu, Yunku;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.2 no.10 / pp.651-660 / 2013
  • Object-oriented design improved productivity and software quality by adopting concepts such as inheritance and encapsulation. However, both the number of classes and the couplings between objects grow as software becomes larger. Object coupling between classes is closely related to software complexity, and high complexity degrades software quality. To address the coupling issue, researchers have adopted component-based development and software quality metrics: component-based development requires an explicit representation of the dependencies between classes, and quality metrics evaluate the quality of the software. As part of this research, we aim to obtain baseline data to be used in decomposing software. Whereas previous studies evaluated and accumulated the qualities of individual classes, we focus on the properties of the links between classes. Our method applies machine learning to analyze these link properties and predict the strength of dependency between classes, offering a new perspective on analyzing software properties.
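
The abstract sketches a pipeline: compute per-link software metrics, then train a classifier to predict dependency strength. A minimal illustration of that idea, where the feature names, labels, and the random-forest learner are all assumptions made for the sketch, not the paper's reported setup:

```python
# Sketch: classify dependency strength between class pairs from coupling metrics.
# Features, labels, and the learner are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-link features: method-call count, shared attributes, CBO, fan-in.
X = rng.random((200, 4))
# Hypothetical labels: 0 = weak, 1 = medium, 2 = strong dependency.
y = rng.integers(0, 3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```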

A Structural Complexity Metric for Web Application based on Similarity (유사도 기반의 웹 어플리케이션 구조 복잡도)

  • Jung, Woo-Sung;Lee, Eun-Joo
    • Journal of the Korea Society of Computer and Information / v.15 no.8 / pp.117-126 / 2010
  • Software complexity is used to evaluate a target system's maintainability. Existing complexity metrics for web applications are count-based, which makes it hard to incorporate the understandability of developers and maintainers. To make up for this shortcoming, entropy theory can be applied to define complexity; however, that approach assumes the information quantity of every page is identical. In this paper, the structural complexity of a web application is defined based on information theory and similarity. In detail, the proposed complexity is defined using entropy, as in the previous approach, but the information quantity of each page is defined using similarity: a page that is similar to many pages carries a smaller information quantity than a page that is dissimilar to the others. Furthermore, different similarity measures can be used for different views, which yields many-sided complexity measures. Finally, several complexity properties are applied to verify the proposed metric, and case studies show its applicability.
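
Reading the definition above as "a page's information quantity falls as its mean similarity to the other pages rises", a toy version can be written directly; the weighting below is an assumption for illustration, not the paper's exact formula:

```python
# Sketch: similarity-based information quantity, summed into an entropy-style
# structural complexity. The mean-similarity weighting is an illustrative assumption.
import math

def page_information(sim_row):
    """-log2 of the page's mean similarity to all pages: similar-to-many => low info."""
    mean_sim = sum(sim_row) / len(sim_row)
    return -math.log2(mean_sim)

def structural_complexity(sim):
    """sim[i][j] in (0, 1]: similarity between pages i and j (sim[i][i] == 1)."""
    return sum(page_information(row) for row in sim)

homogeneous = [[1, .9, .9], [.9, 1, .9], [.9, .9, 1]]   # near-duplicate pages
diverse     = [[1, .1, .1], [.1, 1, .1], [.1, .1, 1]]   # mutually dissimilar pages
print(structural_complexity(homogeneous))  # low complexity
print(structural_complexity(diverse))      # high complexity
```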

A Study on the Complexity Measurement of Architecture Assets (아키텍처 자산의 복잡도 측정에 관한 연구)

  • Choi, Han-Yong
    • Journal of Convergence for Information Technology / v.7 no.5 / pp.111-116 / 2017
  • In this paper, we propose a method to measure the complexity of assets when a software component is constructed as a basic asset, a standardized design model is acquired, and a reusable extended asset is designed based on that model. However, each asset in our proposed asset management system is a composite asset that combines assets from two domains, so the basic method alone cannot produce accurate measurements. Therefore, the complexity of the overall asset is measured by reflecting the property values of the basic assets stored under the architecture. In conclusion, it is possible to measure the composite complexity of a composed asset as a quantity inversely proportional to cohesion and proportional to the cumulative sum of the association values of each asset in the asset-related design.
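
The closing sentence implies a simple functional form: complexity proportional to the accumulated association values and inversely proportional to cohesion. A one-function sketch of that relation, with the exact form assumed rather than quoted from the paper:

```python
# Sketch: composite complexity proportional to the summed association values
# and inversely proportional to cohesion. The functional form is an assumption.
def composite_complexity(association_values, cohesion):
    assert cohesion > 0, "cohesion must be positive"
    return sum(association_values) / cohesion

print(composite_complexity([3, 5, 2], cohesion=0.8))  # -> 12.5
```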

A Study of Complexity in the Interior Space by Nigel Coats (Nigel Coats의 실내공간에 나타난 복잡성에 관한 연구)

  • 문정묵
    • Korean Institute of Interior Design Journal / no.18 / pp.110-116 / 1999
  • Since the beginning of human history, human creation has taken place within the broad frame of imitating nature. If nature is the great work of God, human works are merely extremely simple imitations of it; to perceive a complex nature impossible to grasp with the limited activity of the human brain, humans have relied on simplification. In this process geometry developed to a high degree and established itself as the universal means by which humans perceive nature. What matters is that geometry came to stand not only as a means but also as a goal of human creation: even before the imitation of nature, geometry had already governed the human imagination and had become the most universal principle of creation. Recently, however, advances in science and technology, especially in computing, have allowed that complex nature to be understood by humans in its complex state, without passing through simplification. One such development is chaos theory, which began in the 19th century, and in interior design this natural complexity has established itself as a new principle of creation. Nigel Coats is a representative interior designer of this current, and his anarchic design tendency can be counted among the characteristics found in nature. The complexity he pursued is the making of "software" for producing the thoroughly human urban life found in high-density, actively consuming cities such as Tokyo, an imitation close to nature, the creation of God. By tracing how the anarchistic tendency found in Nigel Coats's work relates to the essentially complex character of nature, this study shows that a new direction in contemporary interior design is bound up with the complexity revealed by the scientific discoveries of our age.

Low-complexity patch projection method for efficient and lightweight point-cloud compression

  • Sungryeul Rhyu;Junsik Kim;Gwang Hoon Park;Kyuheon Kim
    • ETRI Journal / v.46 no.4 / pp.683-696 / 2024
  • The point cloud provides viewers with intuitive geometric understanding but requires a huge amount of data. The Moving Picture Experts Group (MPEG) has developed video-based point-cloud compression with compression ratios in the range of 300-700. As the compression rate increases, the complexity grows to the extent that compressing one frame takes 101.36 s in an experimental environment on a personal computer. To enable real-time point-cloud compression, the direct patch projection (DPP) method proposed herein simplifies the complex patch segmentation process by classifying and projecting points according to their geometric positions. The DPP method decreases the complexity of patch segmentation from 25.75 s to 0.10 s per frame, and the entire process becomes 8.76 times faster than the conventional one. Consequently, the proposed DPP method yields peak signal-to-noise ratio (PSNR) outcomes similar to those of the conventional method in 4.7-5.5 times less time, at the cost of bitrate overhead. The objective and subjective results show that the proposed DPP method is a viable choice when low complexity is required in lightweight device environments.
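
The summary suggests the heart of DPP is to skip costly patch segmentation and instead bin points by their geometric position before projecting them onto planes. A toy axis-aligned version of that idea, a simplification rather than the paper's algorithm or the MPEG V-PCC pipeline:

```python
# Sketch: project each point onto the axis-aligned plane whose normal matches
# the point's dominant offset from the cloud centroid. The binning rule is a
# simplifying assumption, not the paper's exact classification.
import numpy as np

def direct_project(points):
    centroid = points.mean(axis=0)
    offsets = points - centroid
    axes = np.abs(offsets).argmax(axis=1)   # dominant axis per point: 0=x, 1=y, 2=z
    patches = {a: [] for a in range(3)}
    for p, a in zip(points, axes):
        uv = np.delete(p, a)                # drop the projection axis -> 2-D patch coords
        patches[a].append((uv, p[a]))       # keep depth along the projection axis
    return patches

cloud = np.random.default_rng(1).random((1000, 3))
patches = direct_project(cloud)
print({a: len(v) for a, v in patches.items()})
```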

Estimating Cost Adjustment Factors of Software Development Projects using Analytic Hierarchy Process (AHP를 이용한 소프트웨어 개발비 보정계수 산정)

  • Kim, Woo-Je;Park, Chan-Kyoo;Shin, Soo-Jeong
    • IE interfaces / v.17 no.spc / pp.1-10 / 2004
  • The purpose of this paper is to reorganize the cost adjustment factors used in estimating the cost of software development projects, and to derive adjustment coefficients for application types and language types by the analytic hierarchy process (AHP). We constructed a decision-making hierarchy of the criteria that determine the complexity of application types and language types, and conducted a survey of pairwise comparisons among the alternatives. The cost adjustment coefficients for application types and language types were then derived by AHP. This is the first study to apply the analytic hierarchy process to estimating the cost adjustment factors of software development projects.
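
AHP derives priority weights from reciprocal pairwise comparison matrices; the row geometric-mean approximation is a common way to compute them. A minimal sketch with made-up comparison values, not the survey data from the paper:

```python
# Sketch: AHP priority weights from a pairwise comparison matrix via row
# geometric means. The comparison values are illustrative, not the paper's data.
import numpy as np

def ahp_weights(pairwise):
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])  # row geometric means
    return gm / gm.sum()                                         # normalize to sum 1

# A[i, j] = how much more complex application type i is than type j (reciprocal matrix).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A))  # first type gets the largest adjustment weight
```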

Release Planning in Software Product Lines Using a Genetic Algorithm (유전자 알고리듬을 이용한 소프트웨어 제품라인의 출시 계획 수립)

  • Yoo, Jaewook
    • Journal of Korean Society of Industrial and Systems Engineering / v.35 no.4 / pp.142-148 / 2012
  • Release planning for incremental software development selects and assigns features to a sequence of releases over a specified planning horizon. It must account for the technical precedence inherent in the features, the conflicting priorities set by the representative stakeholders, and the balance between required and available resources. These considerations become even more complicated when planning releases in software product lines. The problem is formulated as a precedence-constrained multiple 0-1 knapsack problem. In this research, a genetic algorithm is developed for solving release planning problems in software product lines, and the proposed solution methodology is tested on randomly generated data.
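
A genetic algorithm for this formulation can encode each feature's release as a gene and penalize capacity and precedence violations in the fitness. A compact sketch with made-up problem data, not the paper's operators or parameters:

```python
# Sketch: GA for precedence-constrained release planning. Chromosome[i] is the
# release (1..R) of feature i, or 0 if postponed. All problem data are made up.
import random
random.seed(0)

N, R = 12, 3                                         # features, releases
value = [random.randint(1, 9) for _ in range(N)]     # stakeholder priority per feature
cost = [random.randint(1, 5) for _ in range(N)]      # effort per feature
capacity = [12, 12, 12]                              # effort budget per release
prec = [(0, 3), (1, 4), (2, 5)]                      # (a, b): a must ship no later than b

def fitness(ch):
    score = sum(value[i] for i in range(N) if ch[i])
    for r in range(1, R + 1):                        # capacity penalty
        load = sum(cost[i] for i in range(N) if ch[i] == r)
        score -= 10 * max(0, load - capacity[r - 1])
    for a, b in prec:                                # precedence penalty
        if ch[b] and (not ch[a] or ch[a] > ch[b]):
            score -= 20
    return score

def evolve(pop_size=60, gens=200):
    pop = [[random.randint(0, R) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p, q = random.sample(elite, 2)
            cut = random.randrange(N)
            child = p[:cut] + q[cut:]                # one-point crossover
            if random.random() < 0.3:                # mutation
                child[random.randrange(N)] = random.randint(0, R)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```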

Unit Testing for the AUTOSAR Software Component (AUTOSAR 소프트웨어 컴포넌트의 유닛 테스트 방법)

  • Kum, Dae-Hyun;Lee, Seong-Hun;Park, Gwang-Min;Son, Byeong-Jeom
    • Journal of KIISE: Computing Practices and Letters / v.16 no.11 / pp.1061-1065 / 2010
  • AUTOSAR, a standard software platform for automotive systems, has been developed to manage software complexity and improve software reusability. However, automated and standardized testing is needed to improve reliability and to reduce the time and effort spent on testing. The fundamental functionality of the AUTOSAR RTE and the basic software modules is guaranteed by an AUTOSAR tool, but application software components have to be tested thoroughly. In this paper, we propose a test system for AUTOSAR software components that uses TTCN-3, a standardized testing language. The test execution system and the test cases for a software component are generated automatically from the AUTOSAR XML that contains the software design information. With the proposed testing techniques, the time and effort needed to build the testing system can be reduced.
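
One way to picture the generation step: read component and port names from the AUTOSAR XML and emit TTCN-3 test-case skeletons. The XML shape and the TTCN-3 template below are illustrative assumptions, not the paper's generator or the real ARXML schema:

```python
# Sketch: derive TTCN-3 test-case skeletons from port names in an AUTOSAR-style
# XML description. Both the XML layout and the emitted template are assumptions.
import xml.etree.ElementTree as ET

ARXML = """<SWC name="SeatHeating">
  <PORT name="SetLevel"/>
  <PORT name="GetTemperature"/>
</SWC>"""

def generate_ttcn3(arxml_text):
    root = ET.fromstring(arxml_text)
    swc = root.get("name")
    cases = []
    for port in root.iter("PORT"):
        cases.append(
            f"testcase tc_{swc}_{port.get('name')}() runs on MTCType {{\n"
            f"  // TODO: stimulate port {port.get('name')} and check the response\n"
            f"  setverdict(pass);\n"
            f"}}"
        )
    return "\n\n".join(cases)

print(generate_ttcn3(ARXML))
```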

Evolutionary Computing Driven Extreme Learning Machine for Objected Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.232-240 / 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has made reliable and flawless software operation a necessity. Predicting aging in operational software is becoming a basic and unavoidable requirement for ensuring system availability, reliability, and operation. In this paper, an improved evolutionary-computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employed a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In our suggested paradigm, OOP software metrics are extracted after pre-processing, which includes outlier detection and normalization. This improved the proposed system's ability to deal with instances with unbalanced biases and metrics. Further, dimensionality reduction and feature selection algorithms such as principal component analysis (PCA), linear discriminant analysis (LDA), and t-test analysis were applied. We suggest a single-hidden-layer multi-feed-forward neural network (SL-MFNN) based ELM, in which an adaptive genetic algorithm (AGA) estimates the weight and bias parameters for ELM learning. Unlike the traditional neural network model, the GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
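
The basic extreme learning machine underlying the scheme fixes random hidden-layer weights and solves for output weights by least squares. A minimal sketch on synthetic data follows, in which the hidden weights stay random rather than being tuned by the paper's adaptive genetic algorithm:

```python
# Sketch: a basic extreme learning machine. Hidden weights are random here;
# the paper's AGA-based tuning of weights and biases is not reproduced.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=32):
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights
    b = rng.normal(size=hidden)                 # random biases
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                # output weights by least squares
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Hypothetical data: 6 software metrics -> binary aging indicator.
X = rng.random((300, 6))
y = (X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=300) > 0.25).astype(float)
model = elm_fit(X[:200], y[:200])
pred = (elm_predict(X[200:], model) > 0.5).astype(float)
print("accuracy:", (pred == y[200:]).mean())
```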

Relevance of the Cyclomatic Complexity Threshold for the Web Programming (웹 프로그래밍을 위한 복잡도 한계값의 적정성)

  • Kim, Jee-Hyun
    • Journal of the Korea Society of Computer and Information / v.17 no.6 / pp.153-161 / 2012
  • In this empirical study of the Web environment, based on the frequency distribution of the cyclomatic complexity of applications, the relevance of the threshold has been analyzed against two reference points: the upper bound established by McCabe for procedural programming is 10, and the upper bound established by Lopez for Java programming is 5. Which value should be adopted for Web applications? To answer this, 10 web site projects were collected and a sample of more than 4,000 ASP files was measured. Analysis of the frequency distribution of the cyclomatic complexity of the Web applications shows that more than 90% of them have a complexity of less than 50, so 50 is proposed as the threshold for Web applications. A Web application has a complex architecture comprising Server, Client, and HTML parts, and the HTML side shows a high complexity of 35-40. The reason for this high complexity is that HTML programs typically take a menu form, for home pages or site maps, and this explains the result. In future work we intend to find out whether there exist hidden properties of the Web application architecture related to complexity.
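
A rough way to reproduce this kind of study is to approximate each file's cyclomatic complexity as one plus its count of decision points, then check what fraction of files falls under a candidate threshold. The keyword proxy below is an assumption, not the measurement tool used in the paper:

```python
# Sketch: approximate cyclomatic complexity per file by counting decision points,
# then compute the share of files at or below a candidate threshold.
# The keyword list is a rough ASP/VBScript proxy, an illustrative assumption.
import re

DECISIONS = re.compile(r"\b(if|elseif|for|while|case|and|or)\b", re.IGNORECASE)

def cyclomatic(source: str) -> int:
    return 1 + len(DECISIONS.findall(source))    # 1 + number of decision points

def under_threshold(files, threshold=50):
    ccs = [cyclomatic(src) for src in files]
    return sum(cc <= threshold for cc in ccs) / len(ccs)

sample = ["if x then y", "for i = 1 to 10 ... if a and b then ...", "plain text"]
print(under_threshold(sample))  # fraction of files at or below the threshold
```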