• Title/Summary/Keyword: Software assessment

Development of Fagan Inspection Tool for Railway System Vital Software (철도시스템 바이탈 소프트웨어 테스팅을 위한 Fagan Inspection 지원도구의 개발)

  • Hwang, Jong-Gyu; Jo, Hyun-Jeong; Jeong, Ui-Jing; Shin, Kyeung-Ho
    • Proceedings of the KSR Conference / 2009.05a / pp.2056-2062 / 2009
  • Recent advances in computer technology have made train control systems increasingly dependent on software. Safety assurance of the vital software running in railway systems is therefore a critical task, yet relatively little work has been done on it. While many efforts to improve the safety of electronic hardware have been reported, systematic approaches for evaluating software safety are scarce, especially for the vital software running on on-board train controllers. In this paper, we develop a static software testing tool for railway signaling, specifically a Fagan Inspection support tool. This static testing tool can be used at the assessment phase and is also useful during software development. It is expected to be of great help in evaluating software for railway signalling systems.
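
Fagan Inspection is a staged review process (planning, overview, preparation, inspection meeting, rework, follow-up) built around recording and tracking defects. As a rough illustration of the kind of bookkeeping such a support tool automates, the sketch below models inspection defects and phase tracking in Python; the class and field names are hypothetical and are not taken from the tool described in the paper.

```python
# Minimal sketch of Fagan Inspection bookkeeping (hypothetical data model,
# not the tool described in the paper).
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    PLANNING = 1
    OVERVIEW = 2
    PREPARATION = 3
    INSPECTION_MEETING = 4
    REWORK = 5
    FOLLOW_UP = 6

@dataclass
class Defect:
    location: str          # e.g. "vital_ctrl.c:120"
    severity: str          # "major" or "minor"
    resolved: bool = False

@dataclass
class Inspection:
    work_product: str
    phase: Phase = Phase.PLANNING
    defects: list[Defect] = field(default_factory=list)

    def log_defect(self, location: str, severity: str) -> None:
        self.defects.append(Defect(location, severity))

    def can_exit(self) -> bool:
        # Follow-up only closes when every logged defect has been reworked.
        return all(d.resolved for d in self.defects)

insp = Inspection("train_control_module.c")
insp.phase = Phase.INSPECTION_MEETING
insp.log_defect("train_control_module.c:87", "major")
print(insp.can_exit())  # False until the defect is reworked and verified
```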

A New Methodology for Software Reliability based on Statistical Modeling

  • Avinash S; Y. Srinivas; P. Annan Naidu
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.157-161 / 2023
  • Reliability is one of the quantifiable quality attributes of software. To assess reliability, software reliability growth models (SRGMs) are applied at different test times using statistical learning models. Traditional time-based SRGMs are not always sufficient, and such models cannot recognize errors in small and medium-sized applications. Numerous traditional reliability measures are used to test for software errors during application development and testing; in the testing and maintenance phase, however, newly found errors must be taken into account in real time to determine the reliability estimate. In this article, we propose the Weibull model as a computational approach to the software reliability modeling problem. In the proposed model, a new distribution is introduced to improve the reliability estimation method. We evaluate the developed model and compare its performance with other popular software reliability growth models from the literature. Our assessment results show that the proposed model outperforms the S-shaped Yamada, Generalized Poisson, and NHPP models.
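
For context, a widely used Weibull-type SRGM expresses the expected cumulative number of faults detected by test time t as m(t) = a(1 - exp(-b t^c)). The sketch below fits such a curve to failure-count data with SciPy; the formula, starting values, and toy data are generic illustrations, not the exact formulation or results of the paper.

```python
# Illustrative fit of a Weibull-type SRGM mean value function
# m(t) = a * (1 - exp(-b * t**c)); not the paper's exact model.
import numpy as np
from scipy.optimize import curve_fit

def weibull_mvf(t, a, b, c):
    """Expected cumulative faults detected by test time t."""
    return a * (1.0 - np.exp(-b * np.power(t, c)))

# Toy cumulative fault counts observed at successive test weeks (assumed data).
t_obs = np.arange(1, 11, dtype=float)
faults = np.array([5, 11, 18, 24, 29, 33, 36, 38, 39, 40], dtype=float)

params, _ = curve_fit(weibull_mvf, t_obs, faults, p0=[50.0, 0.1, 1.0], maxfev=10000)
a, b, c = params
print(f"a={a:.1f} total faults, b={b:.3f}, c={c:.2f}")
print("predicted faults by week 15:", round(weibull_mvf(15.0, *params), 1))
```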

A Study of Software Coding Rules Inspection Tool for Railway Signaling Software Safety

  • Hwang, Jong-Gyu; Jo, Hyun-Jeong
    • International Journal of Safety / v.8 no.2 / pp.31-36 / 2009
  • With recent advances in computer technology, railway signaling software has become more complex as it becomes more intelligent. The importance of, and dependence on, computer software in railway signaling systems therefore continues to grow, and testing the safety and reliability of that software has become more important; it is emerging as a key issue for the reliability and safety of vital embedded software such as railway signaling systems. For software safety, coding constructs that can affect safety at the source-code level must be excluded in the first place, and this must be checked. This paper proposes an automated testing tool for coding rules for railway signaling software and presents the results of applying it to railway signaling system software. The test items implemented in the tool follow the international standards related to software for railway systems and the MISRA-C standard. The tool can be used at the assessment stage for railway signaling software and is also expected to be useful during software development.
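
As a rough illustration of what a static coding-rule check looks like, the sketch below scans C source text for two MISRA-C-inspired rules (no goto, no dynamic heap allocation). The rule set and regular expressions are simplified assumptions and do not represent the actual rules or implementation of the tool in the paper; real checkers parse the code rather than matching patterns on raw text.

```python
# Simplified MISRA-C-style rule scan (illustrative only).
import re

RULES = [
    ("goto shall not be used", re.compile(r"\bgoto\b")),
    ("dynamic heap allocation shall not be used",
     re.compile(r"\b(malloc|calloc|realloc|free)\s*\(")),
]

def check_source(source: str):
    """Return (line number, rule, offending line) for every rule violation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, rule, line.strip()))
    return findings

c_code = """
int main(void) {
    char *buf = malloc(32);
    if (!buf) goto fail;
    free(buf);
    return 0;
fail:
    return 1;
}
"""
for lineno, rule, text in check_source(c_code):
    print(f"line {lineno}: {rule}: {text}")
```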

Development of a Process Capability Assessment Method for Process-based Industries (공정기반 산업의 프로세스 인프라 역량 평가 방법 제안 및 적용)

  • Kang, Young-Mo; Im, Byeong-Hyeok; Yoon, Byun-Gun; Lee, Sung-Joo
    • Journal of Korean Society of Industrial and Systems Engineering / v.35 no.1 / pp.16-23 / 2012
  • Recently, as organizational systems have become larger and more complicated, evaluating their efficiency and effectiveness has become more difficult but also more important. It is essential to understand the current strengths and weaknesses of the organizational process; this can be a starting point for improving the efficiency and effectiveness of organizational systems, because the quality of system outputs depends greatly on the capability of the system process. Particularly in process-based industries such as the semiconductor, energy, and software industries, assessment of process capability is emphasized to gain knowledge of the expected quality and reliability of system outputs. As a result, much attention has been given to process capability assessment in these industries. However, most previous research in those industries is based on case studies, and a more generalized method for process capability assessment is needed to help more companies improve their processes. Therefore, this study proposes a process capability assessment method and applies it to an energy company. We argue that process capability is composed of the individual and organizational capabilities of the process. The concept of Capability Maturity Model Integration (CMMI), initially suggested for evaluating the software development process, was then introduced to develop the assessment tools and procedure. Finally, the proposed method was applied to a Korean company in the energy sector to verify its utility. The research outputs are expected to help more firms assess their process capability and ultimately improve their processes.
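
A minimal sketch of the scoring step such an assessment might use: practice-level scores for individual and organizational capability are blended per process area and mapped to a maturity-style level. The process areas, weights, and thresholds below are hypothetical, not the instrument developed in the paper.

```python
# Hypothetical aggregation of process-capability scores (illustrative only).
practice_scores = {
    # process area: (individual capability, organizational capability), 0-5 scale
    "requirements management": (3.5, 2.5),
    "process monitoring":      (4.0, 3.0),
    "configuration control":   (2.0, 2.5),
}

def area_score(individual: float, organizational: float, w_ind: float = 0.4) -> float:
    """Weighted blend of the two capability dimensions (weights assumed)."""
    return w_ind * individual + (1.0 - w_ind) * organizational

def capability_level(score: float) -> str:
    thresholds = [(4.5, "optimizing"), (3.5, "quantitatively managed"),
                  (2.5, "defined"), (1.5, "managed")]
    for cutoff, label in thresholds:
        if score >= cutoff:
            return label
    return "initial"

for area, (ind, org) in practice_scores.items():
    s = area_score(ind, org)
    print(f"{area}: score={s:.2f}, level={capability_level(s)}")
```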

Comparison of Deterministic and Probabilistic Approaches through Cases of Exposure Assessment of Child Products (어린이용품 노출평가 연구에서의 결정론적 및 확률론적 방법론 사용실태 분석 및 고찰)

  • Jang, Bo Youn; Jeong, Da-In; Lee, Hunjoo
    • Journal of Environmental Health Sciences / v.43 no.3 / pp.223-232 / 2017
  • Objectives: In response to increased interest in the safety of children's products, a risk management system is being prepared through exposure assessment of hazardous chemicals. To estimate exposure levels, risk assessors use deterministic and probabilistic statistical approaches, together with commercial Monte Carlo simulation-based tools (MCTools), to efficiently support calculation of probability density functions. This study was conducted to analyze and discuss the usage patterns and problems associated with the results of these two approaches, and with the MCTools used in the probabilistic cases, by reviewing research reports related to exposure assessment of children's products. Methods: We collected six research reports on exposure and risk assessment of children's products and summarized the exposure dose and concentration results, and the corresponding underlying distributions, estimated through deterministic and probabilistic approaches. We focused on the mechanisms of, and differences between, the MCTools used for selecting probability distributions, in order to examine the adequacy of the simulations in detail. Results: Exposure dose and concentration estimates from the deterministic approaches were 0.19-3.98 times higher than the results from the probabilistic approach. For the probabilistic approach, lognormal, Student's t, and Weibull distributions were most frequently used as underlying distributions of the input parameters. However, we could not examine the reasons for selecting each distribution because test statistics were not reported. In addition, in some cases a discrete probability distribution was estimated as the underlying distribution for continuous variables such as body weight. To find the cause of these abnormal simulations, we applied the two MCTools used across the reports and describe how they can be misused. Conclusions: For transparent and realistic exposure assessment, it is necessary to 1) establish standardized guidelines for the proper use of the two statistical approaches, including notes for each MCTool, and 2) consider developing a new software tool with proper configurations and features specialized for risk assessment. Such guidelines and software will make exposure assessment more user-friendly, consistent, and rapid in the future.
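
To make the two approaches concrete, the sketch below computes an exposure-style dose once with point values (deterministic) and once by Monte Carlo sampling with lognormal inputs (probabilistic). The exposure equation, parameter values, and distributions are generic illustrations, not the ones used in the reviewed reports.

```python
# Deterministic vs. probabilistic (Monte Carlo) exposure estimate, illustrative values.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Exposure dose ~ concentration * contact amount * frequency / body weight
# Deterministic point estimate (assumed values).
conc, amount, freq, bw = 5.0, 0.2, 1.0, 16.0     # mg/kg, kg/day, 1/day, kg
det_dose = conc * amount * freq / bw

# Probabilistic estimate: lognormal input distributions chosen for illustration.
def lognormal(mean, sd, size):
    """Parameterize numpy's lognormal by the arithmetic mean/sd of the variable."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size)

doses = (lognormal(5.0, 2.0, N) * lognormal(0.2, 0.1, N) * freq
         / lognormal(16.0, 3.0, N))

print(f"deterministic dose: {det_dose:.4f} mg/kg/day")
print(f"Monte Carlo median: {np.median(doses):.4f}, "
      f"95th percentile: {np.percentile(doses, 95):.4f}")
```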

Using Fuzzy Neural Network to Assess Network Video Quality

  • Shi, Zhiming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2377-2389 / 2022
  • People's requirements for network video quality keep rising, but video quality is impaired by various factors, so video quality assessment has become increasingly important. This paper focuses on video quality assessment methods using different fuzzy neural networks. First, the main factors that impair video quality are introduced, such as the number of stalls per unit time, average pause duration, blur, and blocking artifacts. Second, two fuzzy neural network models are used to build an objective assessment method; by adjusting the network structure to optimize the assessment model, an objective video quality score is obtained, and the advantages and disadvantages of the two models are analyzed. Finally, the proposed method is compared with many recent related assessment methods. The paper gives the experimental results and the details of the assessment process.
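
As a rough illustration of the fuzzy part of such a model, the sketch below maps two impairment factors (stall count and blur) to a quality score with a tiny Takagi-Sugeno-style fuzzy system using Gaussian membership functions. The rules, membership parameters, and consequent scores are invented for illustration and are unrelated to the networks trained in the paper.

```python
# Tiny Takagi-Sugeno-style fuzzy mapping from impairments to a quality score
# (membership functions and rules are invented for illustration).
import numpy as np

def gauss(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def video_quality(stalls_per_min: float, blur: float) -> float:
    # Membership degrees for "low"/"high" impairment levels.
    stall_low, stall_high = gauss(stalls_per_min, 0.0, 1.0), gauss(stalls_per_min, 4.0, 1.5)
    blur_low, blur_high = gauss(blur, 0.0, 0.2), gauss(blur, 0.8, 0.25)

    # Rule firing strengths (product t-norm) and constant consequents (MOS-like, 1-5).
    rules = [
        (stall_low * blur_low,   4.8),   # little impairment -> high quality
        (stall_low * blur_high,  3.0),
        (stall_high * blur_low,  2.5),
        (stall_high * blur_high, 1.2),   # heavy impairment -> low quality
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float(np.dot(w, z) / w.sum())   # weighted-average defuzzification

print(round(video_quality(stalls_per_min=0.5, blur=0.1), 2))  # high score
print(round(video_quality(stalls_per_min=3.5, blur=0.7), 2))  # low score
```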

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2053-2067 / 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model. The LIVE image database is used to test this method, and the experiments show that it is effective, so the method is extended to video quality assessment: first each image frame of the video is predicted, then the relationships between frames are analyzed with a hysteresis function and different window functions to improve the accuracy of the video quality assessment. (ii) The second method is a video quality assessment method based on a convolutional neural network (CNN) and a gated recurrent unit (GRU) network. First, the spatial features of the video frames are extracted by the CNN; then the temporal features of the frame sequence are extracted by the GRU; finally, the extracted spatial and temporal features are passed through a fully connected layer to obtain the video quality score. All of the proposed methods are verified on video databases and compared with other methods.
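
The second method's structure (per-frame CNN features fed to a GRU, followed by a regression head) can be sketched as below in PyTorch. The layer sizes and the use of global average pooling are assumptions for illustration, not the architecture trained in the paper.

```python
# Sketch of a CNN-feature + GRU video-quality regressor (sizes are illustrative).
import torch
import torch.nn as nn

class VQANet(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 32):
        super().__init__()
        # Spatial features per frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # (B*T, feat_dim, 1, 1)
        )
        # Temporal modelling across frames.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar quality score

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, 3, H, W)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.gru(feats)        # last_hidden: (1, batch, hidden)
        return self.head(last_hidden[-1]).squeeze(-1)

scores = VQANet()(torch.randn(2, 8, 3, 64, 64))   # two clips of 8 frames
print(scores.shape)                               # torch.Size([2])
```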

A Study on Software algorithm for Processing n-key roll-over at Matrix Keyboard (매트릭스 구성 키보드의 n-키 롤-오버 처리를 위한 소프트웨어 알고리즘에 관한 연구)

  • Jun, Ho-Ik; Lee, Hyun-Chang
    • Journal of Software Assessment and Valuation / v.16 no.1 / pp.89-94 / 2020
  • In this paper, we propose a software algorithm that implements n-key roll-over, detecting all pressed keys without any limit on their number, in the dynamic scanning of a keyboard arranged as a matrix. The proposed algorithm uses the timer interrupt of the microcontroller that controls the computer keyboard, so that a constant, accurate detection interval and an accurate debounce time can be obtained. To confirm the effectiveness of the proposed algorithm, a microcontroller was connected to a toy keyboard built in the form of a clavier and experiments were conducted. The experiments confirmed that all keys were detected accurately regardless of the number of keys pressed.
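
The scanning idea can be illustrated independently of any particular microcontroller: on each timer tick, drive one row at a time, read every column, and debounce each key with its own counter, so the number of simultaneously reported keys is unlimited. The Python sketch below simulates that loop; pin access is replaced by a hypothetical read_columns(row) callback, since the paper's firmware is microcontroller-specific.

```python
# Simulated n-key roll-over matrix scan with per-key debouncing.
# Hardware access is abstracted behind read_columns(row); illustrative only.
ROWS, COLS, DEBOUNCE_TICKS = 4, 4, 3

debounce = [[0] * COLS for _ in range(ROWS)]    # consecutive "pressed" samples
stable = [[False] * COLS for _ in range(ROWS)]  # debounced key state

def scan_tick(read_columns):
    """Run one timer-interrupt-style scan; return all keys currently held."""
    pressed = []
    for row in range(ROWS):
        cols = read_columns(row)                # bitmask of active columns
        for col in range(COLS):
            if cols & (1 << col):
                debounce[row][col] = min(debounce[row][col] + 1, DEBOUNCE_TICKS)
            else:
                debounce[row][col] = 0
            stable[row][col] = debounce[row][col] >= DEBOUNCE_TICKS
            if stable[row][col]:
                pressed.append((row, col))
    return pressed

# Example: keys (0,1), (2,3) and (3,0) held at once -- all are reported.
held = {0: 0b0010, 2: 0b1000, 3: 0b0001}
for _ in range(DEBOUNCE_TICKS):
    keys = scan_tick(lambda row: held.get(row, 0))
print(keys)   # [(0, 1), (2, 3), (3, 0)]
```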

Defect Severity-based Dimension Reduction Model using PCA (PCA를 적용한 결함 심각도 기반 차원 축소 모델)

  • Kwon, Ki Tae; Lee, Na-Young
    • Journal of Software Assessment and Valuation / v.15 no.1 / pp.79-86 / 2019
  • Software dimension reduction identifies the commonality among elements and extracts the important features. It reduces complexity through simplification, resolves multi-collinearity problems, and reduces redundancy through redundancy and noise detection. In this study, we propose a defect severity-based dimension reduction model. The proposed model is applied to a defect severity-based NASA dataset, the number of feature dimensions (columns) that affect defect severity is verified, and the dimensionality of the data before and after reduction is compared and analyzed. In our experiments, the PC4 dataset could be reduced to two to three dimensions.
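
A minimal sketch of the reduction step on a NASA-style defect dataset: standardize the metric columns, apply PCA, and keep the components needed to explain most of the variance. The synthetic data and the 90% variance threshold are assumptions for illustration, not the study's actual PC4 preprocessing.

```python
# PCA-based dimension reduction on a synthetic software-metrics matrix
# (stand-in for a NASA PC4-style dataset; illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_modules = 200
# Correlated code metrics (e.g. LOC, cyclomatic complexity, operand counts).
loc = rng.lognormal(4.0, 0.6, n_modules)
metrics = np.column_stack([
    loc,
    0.05 * loc + rng.normal(0, 1, n_modules),   # complexity tracks size
    0.80 * loc + rng.normal(0, 5, n_modules),   # operand count tracks size
    rng.normal(0, 1, n_modules),                # an unrelated metric
])

X = StandardScaler().fit_transform(metrics)
pca = PCA(n_components=0.90)   # keep enough components for 90% of the variance
X_reduced = pca.fit_transform(X)

print("original dimensions:", X.shape[1])
print("reduced dimensions:", X_reduced.shape[1])
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```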

Design and Implementation of Tor Traffic Collection System Using Multiple Virtual Machines (다수의 가상머신을 이용한 토르 트래픽 수집 시스템 설계 및 구현)

  • Choi, Hyun-Jae; Kim, Hyun-Soo; Shin, Dong-Myung
    • Journal of Software Assessment and Valuation / v.15 no.1 / pp.1-9 / 2019
  • We aim to collect and analyze traffic efficiently in order to detect copyright infringement in which content is shared illegally over the Tor network. We have designed and implemented a Tor traffic collection system using multiple virtual machines. A number of virtual machines and mini PCs are used as clients connecting to the Tor network, and both the collection and refinement processes in the traffic collection server are automated through script-based test client software. Through this system, only the necessary fields of Tor network traffic are stored in the database, and a Tor traffic recognition rate of 95% or higher is achieved.
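
As a rough sketch of the "store only the necessary fields" step, the code below takes parsed traffic records (assumed to come from an upstream capture and refinement stage as dictionaries) and writes a reduced set of columns to SQLite. The field names and schema are hypothetical, not the ones used by the system in the paper.

```python
# Store a reduced set of fields from refined traffic records in SQLite.
# The record format and field names are hypothetical (illustrative only).
import sqlite3

KEEP_FIELDS = ("timestamp", "src_ip", "dst_ip", "dst_port", "byte_count", "is_tor")

def store_records(db_path: str, records):
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS tor_traffic (
               timestamp TEXT, src_ip TEXT, dst_ip TEXT,
               dst_port INTEGER, byte_count INTEGER, is_tor INTEGER)"""
    )
    rows = [tuple(rec.get(f) for f in KEEP_FIELDS) for rec in records]
    conn.executemany("INSERT INTO tor_traffic VALUES (?, ?, ?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()

store_records("tor_traffic.db", [
    {"timestamp": "2019-01-01T12:00:00", "src_ip": "10.0.0.2",
     "dst_ip": "198.51.100.7", "dst_port": 9001, "byte_count": 51423,
     "is_tor": 1, "payload": b"dropped, not stored"},  # extra fields are ignored
])
```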