Title/Summary/Keyword: software data

Machine Learning Frameworks for Automated Software Testing Tools: A Study

  • Kim, Jungho;Ryu, Joung Woo;Shin, Hyun-Jeong;Song, Jin-Hee
    • International Journal of Contents / v.13 no.1 / pp.38-44 / 2017
  • Increased use of software, growing complexity of software functions, and shortened software quality evaluation periods have all increased the importance of, and the need for, automated software testing. Automating software testing with machine learning not only minimizes the errors of manual testing but also enables faster evaluation. Research on machine learning in automated software testing has so far focused on solving specific problems with individual algorithms, which makes it difficult for software developers and testers to apply machine learning to test automation. Drawing on related studies, this paper proposes a new machine learning framework for software testing automation. To maximize testing performance, we analyzed and categorized the machine learning algorithms applicable to each software test phase, together with the diverse data each algorithm can use. We believe this framework allows software developers and testers to choose a machine learning algorithm suited to their purpose (a sketch of such a phase-to-algorithm mapping follows below).
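A minimal sketch of the kind of phase-to-algorithm lookup such a framework might expose. The phase names and algorithm families are illustrative assumptions for the example, not the paper's actual categorization; Python is used for this and the other sketches in this listing.

```python
# Hypothetical phase-to-algorithm mapping; the phases and algorithm
# families below are illustrative assumptions, not the paper's taxonomy.
TEST_PHASE_ALGORITHMS = {
    "test case generation": ["genetic algorithms", "reinforcement learning"],
    "test case prioritization": ["clustering", "learning-to-rank models"],
    "fault localization": ["decision trees", "neural networks"],
    "test oracle construction": ["classifiers (SVM, random forest)"],
}

def suggest_algorithms(phase: str) -> list[str]:
    """Return candidate ML algorithm families for a given test phase."""
    return TEST_PHASE_ALGORITHMS.get(phase.lower(), [])

if __name__ == "__main__":
    print(suggest_algorithms("fault localization"))
```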

An Evolution of Reliability of Large-Scale Software of a Switching System (대형 교환 시스템의 소프트웨어 신뢰도 성장)

  • Lee, J.K.;Shin, S.K.;Nam, S.S.;Park, K.C.
    • Electronics and Telecommunications Trends / v.14 no.4 s.58 / pp.1-9 / 1999
  • In this paper, we summarize the lessons learned from applying software reliability engineering to a large-scale software project: the software system of the TDX-10 ISDN switching system. The software consists of many components, called functional blocks, which serve as the units of coding and test; the software continues to be developed by adding new functional blocks. We are mainly concerned with analyzing how these components affect software reliability and how that reliability evolves. We analyze the static characteristics of the software related to reliability using failure data collected during system test. We also discuss a pattern representing the local and global growth of software reliability as versions evolve. To find this pattern for the TDX-10 ISDN software, we apply the S-shaped model to the failure data set of each evolutionary version and the Goel-Okumoto (G-O) model to the grouped overall failure data set (both models are sketched below). We expect this pattern analysis to help plan and manage the human resources needed for a new, similar software project developed under the same circumstances, by estimating the total software failures with respect to size and time.
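The two models named in the abstract have standard NHPP mean value functions: the G-O model m(t) = a(1 - e^(-bt)) and the delayed S-shaped model m(t) = a(1 - (1 + bt)e^(-bt)), where a is the expected total number of faults and b the detection rate. A minimal sketch of fitting both, with synthetic counts standing in for the TDX-10 failure data:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # Expected cumulative failures by time t under the G-O model.
    return a * (1.0 - np.exp(-b * t))

def delayed_s_shaped(t, a, b):
    # Yamada delayed S-shaped model: slow start, then rapid detection.
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# Synthetic cumulative failure counts (placeholder for real test data).
t = np.arange(1.0, 21.0)
failures = 100 * (1 - np.exp(-0.15 * t)) + np.random.default_rng(0).normal(0, 2, t.size)

for name, model in [("G-O", goel_okumoto), ("S-shaped", delayed_s_shaped)]:
    (a, b), _ = curve_fit(model, t, failures, p0=(100.0, 0.1), maxfev=10000)
    print(f"{name}: a = {a:.1f} total faults, b = {b:.3f} per unit time")
```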

A Study on the necessity of Open Source Software Intermediaries in the Software Distribution Channel (소프트웨어 유통에 있어 공개소프트웨어 중개자의 필요성에 대한 연구)

  • Lee, Seung-Chang;Suh, Eung-Kyo;Ahn, Sung-Hyuck;Park, Hoon-Sung
    • Journal of Distribution Science / v.11 no.2 / pp.45-55 / 2013
  • Purpose - The development and implementation of OSS (Open Source Software) led to a dramatic change in corporate IT infrastructure, from system servers to smartphones, because the performance, reliability, and security of OSS are comparable to those of commercial software. Today, OSS has become an indispensable tool for coping with a competitive business environment and a constantly evolving IT environment. However, the use of OSS remains insufficient in small and medium-sized companies and software houses. This study examines the need for OSS intermediaries in the software distribution channel. One might expect the role of the OSS intermediary to shrink as the distribution process improves; the purpose of this research is to show, on the contrary, that OSS intermediaries increase the efficiency of the software distribution market. Research design, data, and methodology - This study analyzes data gathered online to determine the extent of the intermediaries' impact on the OSS market. Data were collected with a custom-built web crawler over a nine-day survey period, yielding a total of 233,021 data points from sourceforge.net and Apple's App Store, two of the most popular software intermediaries in the world. The collected data were analyzed using Google's Motion Chart. Results - The study found that, beginning in 2006, the production of OSS on Sourceforge.net increased rapidly across the board but dropped sharply in the second half of 2009. Many events could explain this change, and we found one that fits: during the same period, the monthly production of software in the App Store was increasing quickly, a trend contrasting with Sourceforge.net, and this increase was driven by the appearance of B2C software intermediaries like the App Store. Our follow-up analysis suggests that appropriate intermediaries like the App Store can enlarge the OSS market. The results imply that OSS intermediaries can accelerate OSS distribution, and that developing a better online market is critical for corporate users. Conclusion - In this study, we analyzed 233,021 data points from the online software marketplace at Sourceforge.net. The analysis indicates that OSS intermediaries are needed for the vitality of the software distribution market, and that they must satisfy certain qualifications to play a key role as market makers. This study has several interesting implications. First, the OSS intermediary should work to create a complementary relationship between OSS and proprietary software. Second, the OSS intermediary must have a business model that shares the benefits among all participants (developers, intermediary, and users). Third, the intermediary should provide OSS whose quality matches that of proprietary software, even at high levels of complexity. These findings show that open source software intermediaries are essential in the software distribution channel (the monthly aggregation step behind the analysis is sketched below).
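The methodology rests on crawling intermediary sites and charting monthly software production. A minimal sketch of the aggregation step only, with hypothetical records standing in for crawled data (the field layout is an assumption; the actual crawl depends on each site's page structure):

```python
from collections import Counter
from datetime import date

# Hypothetical records a crawler might return: (project_name, registration_date).
records = [
    ("proj-a", date(2009, 5, 3)),
    ("proj-b", date(2009, 5, 17)),
    ("proj-c", date(2009, 6, 1)),
]

# Monthly production counts of the kind charted in the study.
monthly = Counter(d.strftime("%Y-%m") for _, d in records)
for month, n in sorted(monthly.items()):
    print(month, n)
```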

Parameter Estimation and Comparison for SRGMs and ARIMA Model in Software Failure Data

  • Song, Kwang Yoon;Chang, In Hong;Lee, Dong Su
    • Journal of Integrative Natural Science / v.7 no.3 / pp.193-199 / 2014
  • As requirements on system quality have increased, reliability has become very important for enhancing stability and providing high-quality services to customers. Many statistical models have been developed over the past years for estimating software reliability. We consider mean value functions for the NHPP software reliability model and a time series model for software failure data. We estimate the parameters of the proposed models from three data sets and present the resulting SSE and MSE values. Using the NHPP software reliability model and the time series model, we then compare the predicted numbers of faults with the actual values in the three data sets (a sketch of such a comparison follows below).
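A minimal sketch of that comparison, fitting a G-O mean value function and an ARIMA model to the same cumulative failure series and reporting SSE and MSE. The synthetic data and the ARIMA order (1,1,1) are illustrative assumptions, not the paper's choices:

```python
import numpy as np
from scipy.optimize import curve_fit
from statsmodels.tsa.arima.model import ARIMA

# Synthetic cumulative failure counts (placeholder for the paper's data sets).
t = np.arange(1.0, 31.0)
y = 120 * (1 - np.exp(-0.1 * t)) + np.random.default_rng(1).normal(0, 2, t.size)

# NHPP (G-O) fit: m(t) = a * (1 - exp(-b * t)).
def go(t, a, b):
    return a * (1 - np.exp(-b * t))

(a, b), _ = curve_fit(go, t, y, p0=(100.0, 0.1), maxfev=10000)
nhpp_pred = go(t, a, b)

# ARIMA fit on the same series; in-sample one-step predictions from index 1.
arima_pred = ARIMA(y, order=(1, 1, 1)).fit().predict(start=1, end=len(y) - 1)

def sse(actual, pred):
    return float(np.sum((np.asarray(actual) - np.asarray(pred)) ** 2))

print("NHPP  SSE:", sse(y, nhpp_pred), " MSE:", sse(y, nhpp_pred) / y.size)
print("ARIMA SSE:", sse(y[1:], arima_pred), " MSE:", sse(y[1:], arima_pred) / (y.size - 1))
```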

A Framework for Detecting Data Races in Weapon Software (무기체계 소프트웨어의 자료경합을 탐지하기 위한 프레임워크)

  • Oh, Jin-Woo;Choi, Eu-Teum;Jun, Yong-Kee
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.6 / pp.305-312 / 2018
  • Software implements many functions of modern weapon systems, which have high mission criticality. Weapon system software must adopt multi-threaded processing to satisfy growing performance requirements. However, developing multi-threaded programs is difficult because of concurrency faults such as unintended data races. It is especially important to prepare analyses for debugging data races, because faults in weapon system software can cause personal injury. In this paper, we present an efficient analysis framework, called ConDeWS, which determines the scope of dynamic analysis using the results of static analysis and fault analysis. Applying the implemented framework to the target software, we detected unintended data races that the static analysis had not detected (a lockset-style check of the kind used in dynamic race detection is sketched below).
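The abstract does not give enough detail to reproduce ConDeWS, but the class of dynamic analysis it scopes can be illustrated with a classic Eraser-style lockset check. A minimal sketch on a synthetic access trace (the trace, variables, and lock names are assumptions):

```python
# Each event: (thread_id, variable, set_of_locks_held_at_the_access).
# A variable's candidate lockset shrinks to the intersection of the locks
# held at each access; an empty lockset with more than one accessing
# thread signals a potential data race.
trace = [
    (1, "x", {"L1"}),
    (2, "x", {"L1"}),   # consistently protected by L1: no race on x
    (1, "y", {"L1"}),
    (2, "y", set()),    # unprotected access: potential race on y
]

candidate_locks: dict[str, set[str]] = {}
accessors: dict[str, set[int]] = {}

for tid, var, held in trace:
    locks = candidate_locks.setdefault(var, set(held))
    locks &= held                          # refine the candidate lockset
    accessors.setdefault(var, set()).add(tid)
    if not locks and len(accessors[var]) > 1:
        print(f"potential data race on '{var}' (thread {tid})")
```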

Development of an Analysis Software for the Load Measurement of Wind Turbines (풍력발전기의 하중 측정을 위한 해석 소프트웨어의 개발)

  • Gil, Kyehwan;Bang, Je-Sung;Chung, Chinwha
    • Journal of Wind Energy / v.4 no.1 / pp.20-29 / 2013
  • Load measurement, performed according to IEC 61400-13, consists of three stages: collecting huge amounts of load measurement data through a measurement campaign lasting several months; processing the measured data, including data validation and classification; and analyzing the processed data through time series analysis, load statistics, frequency analysis, load spectrum analysis, and equivalent load analysis. In this research, we developed analysis software in MATLAB to save labor and to obtain exact, consistent performance evaluation data when processing and analyzing load measurement data. The completed software also processes and analyzes power performance measurement data in accordance with IEC 61400-12. The software was applied to the load measurement data from a demonstration study of a 750 kW direct-drive wind turbine generator system (KBP-750D) performed at the Daegwanryeong Wind Turbine Demonstration Complex. This paper describes the details of the analysis software, its processing and analysis stages for load measurement data, and the analysis results (the equivalent load computation is sketched below).
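One stage named above, equivalent load analysis, reduces a rainflow-counted load spectrum to a single damage-equivalent load, L_eq = (sum_i n_i * L_i^m / N_ref)^(1/m). The paper's software is in MATLAB; this Python sketch and all numbers in it are illustrative:

```python
def equivalent_load(ranges, counts, m=10.0, n_ref=1e7):
    """Damage-equivalent load from cycle ranges L_i and counts n_i.

    m is the Woehler (S-N curve) exponent and n_ref the reference cycle
    count; both depend on the material and the standard being applied.
    """
    damage = sum(n * (L ** m) for L, n in zip(ranges, counts))
    return (damage / n_ref) ** (1.0 / m)

# Hypothetical rainflow-counted spectrum: cycle ranges [kNm] and counts.
ranges = [50.0, 120.0, 300.0]
counts = [2.0e6, 4.0e5, 1.2e4]
print(f"L_eq = {equivalent_load(ranges, counts):.1f} kNm")
```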

Development of a GPS Data Processing S/W for Cadastral Survey (지적측량을 위한 GPS 자료처리 S/W 개발)

  • 우인제;이종기;김병국
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.507-512 / 2004
  • Research is actively underway to establish a new cadastral survey model that introduces GPS observation techniques into cadastral surveying, and to develop the related technologies. In step with this trend, the purpose of this research is to assess the current status and surveying performance of commercial GPS data processing software, to analyze its data processing algorithms, and to develop GPS data processing software suited to Korean conditions for GPS data processing and control point computation. We analyze commercial software and the errors arising from the data processing methods used by universities and companies, as well as the algorithms applied in existing GPS data processing software. We then identify the algorithm best suited to cadastral surveying and develop survey computation software for new cadastral control points (one standard coordinate-conversion sub-step is sketched below).
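The paper's own algorithms are not reproduced here, but one standard sub-step of GPS control-point computation, converting WGS84 ECEF coordinates to geodetic latitude, longitude, and height, can be sketched with the usual fixed-point iteration. The example coordinates are illustrative:

```python
import math

# WGS84 ellipsoid constants.
A = 6378137.0                 # semi-major axis [m]
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def ecef_to_geodetic(x, y, z, iterations=10):
    """Convert ECEF coordinates [m] to geodetic lat/lon [deg] and height [m]."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - E2))          # initial guess
    for _ in range(iterations):                  # fixed-point iteration
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

# Illustrative point, roughly at 37 N, 127 E (central Korea), near sea level.
print(ecef_to_geodetic(-3069249.0, 4072967.0, 3817404.0))
```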

Construction of Data Book for Understanding Software Components (소프트웨어 컴포넌트 이해를 위한 데이터 북 구성)

  • Kim, Seon-Hui;Choe, Eun-Man
    • The KIPS Transactions: Part D / v.9D no.3 / pp.399-408 / 2002
  • Component technology was proposed and applied to software development to overcome the software crisis. A software component is a black box, like an integrated circuit in hardware, but it cannot be utilized well without support that helps users understand it efficiently. This paper shows that the data book format used for understanding hardware components can be applied to representing software components. We chose an approach that supports understanding a component by matching the contents of the data book with UML and API modeling techniques. In addition, we added to the software-component data book an architecture part and an interface part, the most important properties of a software component. To verify the effectiveness of the component data book, we extended the batch descriptor in EJB and performed an experiment in which programmers were given data books along with the components (a sketch of such a data book's structure follows below).
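The sections such a data book might carry, including the architecture and interface parts the paper adds, can be modeled as a simple record type. The field names and example values below are assumptions for illustration, not the paper's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentDataBook:
    """Hypothetical data book for one software component."""
    component_name: str
    summary: str
    interface: list[str] = field(default_factory=list)   # operation signatures
    architecture: str = ""                               # structural overview
    usage_examples: list[str] = field(default_factory=list)

book = ComponentDataBook(
    component_name="ShoppingCartEJB",
    summary="Session bean managing cart contents.",
    interface=["addItem(itemId: str, qty: int)", "total() -> Money"],
    architecture="Stateful session bean delegating persistence to a CartDAO.",
)
print(book.component_name, "-", len(book.interface), "operations documented")
```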

New Growth Power, Economic Effect Analysis of Software Industry (신성장 동력, 소프트웨어산업의 경제적 파급효과 분석)

  • Choi, Jinho;Ryu, Jae Hong
    • Journal of Information Technology Applications and Management / v.21 no.4_spc / pp.381-401 / 2014
  • This study derives accurate measures of the economic effects (employment inducement coefficient, hiring inducement coefficient, index of the sensitivity of dispersion, index of the power of dispersion, and ratio of value added) of the Korean software industry by analyzing inter-industry relations with a modified inter-industry table. Previous studies of inter-industry analysis were reviewed and two key problems identified. First, in the current inter-industry table published by the Bank of Korea, the output of the software industry includes not only the output of the pure software industry (packaged software and IT services) but also, owing to misclassification, output from non-software industries; this makes the recorded output larger than the actual output of the software industry. Second, rewriting the inter-industry table changes the output. The inter-industry table is a table of rows and columns recording the transactions of goods and services among industries that are required to sustain each industry's activities. Accordingly, if only the output of a specific industry is changed, the reliability of the table degrades, because the table is built on relations with other industries; the resulting economic effect coefficients then become unreliable, over- or under-estimated. This study corrects both problems to obtain a more accurate economic effect for the software industry. First, to isolate the output of the pure software sector, data from the Korea Electronics Association (KEA) were used in the inter-industry table. Second, to avoid output discrepancies while rewriting the table, the difference between the output in the current inter-industry table and the output from the KEA data was identified and defined as the non-software sector's output for the analysis. The results show that the pure software sector's economic effect coefficients are lower than those of the non-software sector, a difference that stems from the differing data of the Bank of Korea and the KEA. The significance of this study lies in providing an accurate economic effect of the Korean software industry (the dispersion indices are sketched below).
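The dispersion indices named in the abstract come from standard input-output analysis. With an n-sector technical-coefficient matrix A and Leontief inverse L = (I - A)^(-1), the index of the power of dispersion of sector j is n * (column sum j of L) / (total sum of L), and the index of the sensitivity of dispersion of sector i is n * (row sum i of L) / (total sum of L). A minimal sketch with a hypothetical three-sector table:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix (illustrative values).
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.10, 0.10],
    [0.05, 0.15, 0.20],
])

n = A.shape[0]
L = np.linalg.inv(np.eye(n) - A)   # Leontief inverse: total requirements
total = L.sum()

power = n * L.sum(axis=0) / total        # backward linkage, per sector
sensitivity = n * L.sum(axis=1) / total  # forward linkage, per sector

print("power of dispersion:      ", np.round(power, 3))
print("sensitivity of dispersion:", np.round(sensitivity, 3))
```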

SYSTEM ANALYSIS OF PIPELINE SOFTWARE - A CASE STUDY OF THE IMAGING SURVEY AT ESO

  • Kim, Young-Soo
    • Journal of Astronomy and Space Sciences / v.20 no.4 / pp.403-416 / 2003
  • There are common features between astronomical observation and remote sensing, in both imaging surveys and image processing, and handling large amounts of data easily and quickly has become a common issue. Implementing pipeline software, which allows various kinds of data to be processed automatically, can be a solution to this problem. As a case study, the development of pipeline software for the EIS (European Southern Observatory Imaging Survey) is introduced. The EIS team has been conducting a sky survey to provide candidate targets for VLT (Very Large Telescope) observations. The survey data are processed in a sequence of five major data corrections and reductions: preprocessing, flat fielding, photometric and astrometric corrections, source extraction, and coaddition; the processed data are eventually distributed to users. Pipeline software was developed to process the vast volume of observed data automatically. Because of the complexity of the objects and the different characteristics of each process, it was necessary to analyze the whole workflow of the EIS survey program. The overall tasks of the EIS are identified, the scheme of the EIS pipeline software is defined, the system structure and processes are presented, and in-depth flow charts are analyzed. These analyses revealed that handling the data flow and managing the database are central to the data processing, and they may also apply to many other fields that require image processing (the stage sequence is sketched below).
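The five-stage reduction sequence described above is a classic linear pipeline. A minimal sketch of such an orchestration, with placeholder stage bodies (the real EIS pipeline also manages data flow and a database between stages):

```python
# Placeholder stage bodies; each real stage would transform the frame.
def preprocess(frame):      return frame  # bias/overscan handling
def flat_field(frame):      return frame  # divide by normalized flat
def calibrate(frame):       return frame  # photometric + astrometric fits
def extract_sources(frame): return frame  # build a source catalog
def coadd(frame):           return frame  # stack overlapping frames

PIPELINE = [preprocess, flat_field, calibrate, extract_sources, coadd]

def run_pipeline(frame):
    """Run every stage in order, logging progress."""
    for stage in PIPELINE:
        frame = stage(frame)
        print(f"completed: {stage.__name__}")
    return frame

run_pipeline({"image": "raw survey frame"})
```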