• Title/Summary/Keyword: Data Processing Software

Software Reliability Prediction Using Predictive Filter (예측필터를 이용한 소프트웨어 신뢰성 예측)

  • Park, Jung-Yang; Lee, Sang-Un; Park, Jae-Heung
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2076-2085 / 2000
  • Almost all existing software reliability models are based on assumptions about the software usage and the software failure process; there is, therefore, no universally applicable software reliability model. To move toward such a model, this paper suggests the predictive filter as a general software reliability prediction model for time-domain failure data. Its usefulness is empirically verified by analyzing failure datasets obtained from 14 different software projects. Based on the average relative prediction error, the suggested predictive filter is compared with other well-known neural network models and statistical software reliability growth models. Experimental results show that the predictive filter generally results in a simple model and adapts well across different software projects.
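
The abstract does not spell out the filter's exact form, so the sketch below is only a rough illustration of the general idea: a normalized LMS adaptive linear predictor forecasting the next inter-failure time from recent ones. The filter order, step size, and failure times are hypothetical, not taken from the paper's 14 project datasets.

```python
import numpy as np

def lms_predict(times, order=3, mu=0.5, eps=1e-8):
    """One-step-ahead prediction of inter-failure times with a
    normalized LMS adaptive linear predictor (illustrative only)."""
    w = np.zeros(order)                  # filter weights, adapted online
    preds = []
    for k in range(order, len(times)):
        x = times[k - order:k][::-1]     # most recent samples first
        y_hat = w @ x                    # predicted next inter-failure time
        e = times[k] - y_hat             # prediction error
        w += mu * e * x / (x @ x + eps)  # normalized LMS weight update
        preds.append(y_hat)
    return np.array(preds)

# Hypothetical inter-failure times (hours); a real study would use the
# time-domain failure data of the 14 projects mentioned in the abstract.
t = np.array([3.0, 5.0, 4.0, 7.0, 6.0, 9.0, 8.0, 12.0, 11.0, 15.0])
p = lms_predict(t)
are = np.mean(np.abs((t[3:] - p) / t[3:]))  # average relative prediction error
print(p, are)
```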

Design and Implementation of Distributed Charge Signal Processing Software for Smart Slow and Quick Electric Vehicle Charge

  • Chang, Tae Uk; Ryu, Young Su; Song, Seul Ki; Kwon, Ki Won; Paik, Jong Ho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1674-1688 / 2019
  • As environmental pollution and fossil-fuel energy problems caused by conventional vehicles have emerged, interest in electric vehicles (EVs) has increased, and the EV and energy industries have grown rapidly. EVs are expected to become the next generation of primary transportation, so EV infrastructure and efficient energy management, such as EV communication protocols, EV charge stations, and the smart grid, need to be prepared; both fields are growing, and related studies and development are in progress. In this paper, distributed charge signal processing software for smart slow and quick EV charging is proposed and designed to deal with EV charge demand. The software consists of a smart slow and quick EV charge schedule engine and an EV charge power distribution core, and it supports two charge station types: a normal EV charge station and a bus garage EV charge station. Both types collect data from EV charge stations and then analyze the collected data. The software suggests an optimized EV charge schedule and delivers EV charge power distribution information to a power switchboard system, and the designed software is implemented on an embedded system. It is expected that the software will provide efficient EV charge schedules.
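
The abstract does not describe the scheduling algorithm itself, so the following is only a minimal sketch of what a charge power distribution step might look like: sharing a station's power budget between quick and slow chargers. The priority rule, charger IDs, and power figures are hypothetical, not the paper's actual method.

```python
# Toy power distribution across chargers under a station-wide power limit.
def distribute_power(chargers, station_limit_kw):
    """Allocate power to quick chargers first, then share the remainder
    evenly among slow chargers, respecting each charger's maximum."""
    plan = {}
    remaining = station_limit_kw
    quick = [c for c in chargers if c["type"] == "quick"]
    slow = [c for c in chargers if c["type"] == "slow"]
    for c in quick:                         # quick chargers get priority
        alloc = min(c["max_kw"], remaining)
        plan[c["id"]] = alloc
        remaining -= alloc
    for c in slow:                          # split what is left evenly
        share = remaining / len(slow) if slow else 0.0
        plan[c["id"]] = min(c["max_kw"], share)
    return plan

chargers = [
    {"id": "Q1", "type": "quick", "max_kw": 50.0},
    {"id": "S1", "type": "slow", "max_kw": 7.0},
    {"id": "S2", "type": "slow", "max_kw": 7.0},
]
print(distribute_power(chargers, station_limit_kw=60.0))
# -> {'Q1': 50.0, 'S1': 5.0, 'S2': 5.0}
```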

Signal Processing and Data Management in SiMACS (SiMACS에서의 생체신호처리 및 데이터관리)

  • Suh, J.J.; Kim, J.J.; Lee, S.B.; Park, S.H.; Woo, E.J.
    • Proceedings of the KOSOMBE Conference / v.1994 no.05 / pp.57-59 / 1994
  • In this paper, we present the software part of the intelligent data processing unit (IDPU), which plays an important role in SiMACS. The software system processes ECG, EEG, EMG, blood pressure, respiration, and temperature signals, and extracts information about patient conditions. It displays the patient condition information and the signal data synchronously, and manages them together with other patient personal data in a network-based client/server environment. The software system is designed in an object-oriented paradigm and implemented in C++ as a window-based application program.
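
As a small, hedged illustration of the kind of biosignal processing described (deriving a patient-condition indicator such as heart rate from an ECG trace), the Python sketch below detects R-peak-like spikes in a synthetic waveform. The sampling rate, thresholds, and signal are assumptions; the actual IDPU software, per the abstract, is written in C++.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                                 # sampling rate (Hz), assumed
t = np.arange(0, 10, 1.0 / fs)           # 10 seconds of signal
heart_rate_hz = 1.2                      # ~72 beats per minute
# crude synthetic "ECG": narrow spikes on a noisy baseline
ecg = np.exp(-((t % (1.0 / heart_rate_hz)) * fs - 5) ** 2 / 4.0)
ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))  # R-peak candidates
rr_intervals = np.diff(peaks) / fs                              # seconds between beats
print(f"estimated heart rate: {60.0 / rr_intervals.mean():.1f} bpm")
```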

A Framework for Detecting Data Races in Weapon Software (무기체계 소프트웨어의 자료경합을 탐지하기 위한 프레임워크)

  • Oh, Jin-Woo; Choi, Eu-Teum; Jun, Yong-Kee
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.6 / pp.305-312 / 2018
  • Software has been used to develop many functions of modern weapon systems, which have high mission criticality. Weapon system software must consider multi-threaded processing to satisfy growing performance requirements. However, developing multi-threaded programs is difficult because of concurrency faults, such as unintended data races. In particular, it is important to prepare analyses for debugging data races, because faults in weapon system software may cause personal injury. In this paper, we present an efficient analysis framework, called ConDeWS, which is designed to determine the scope of dynamic analysis by using the results of static analysis and fault analysis. As a result of applying the implemented framework to the target software, we detected unintended data races that were not detected by the static analysis.
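
The abstract does not detail how ConDeWS reports races, so the sketch below only illustrates a generic lockset-style check that many dynamic race detectors build on: a shared variable accessed by two or more threads with no lock held in common is flagged as a potential race. The class, variable names, and scenario are hypothetical, not the paper's algorithm.

```python
# A minimal lockset-style checker (generic illustration, not ConDeWS itself).
from collections import defaultdict

class LocksetChecker:
    """Report a potential data race when a shared variable is accessed by
    multiple threads with no lock held in common across all accesses."""
    def __init__(self):
        self.candidates = {}                     # variable -> locks still common
        self.accessing_threads = defaultdict(set)

    def access(self, var, thread_id, held_locks):
        self.accessing_threads[var].add(thread_id)
        held = set(held_locks)
        if var not in self.candidates:
            self.candidates[var] = held          # first access: all held locks are candidates
        else:
            self.candidates[var] &= held         # keep only locks held at every access
        # a race is only possible once two or more threads touch the variable
        if len(self.accessing_threads[var]) > 1 and not self.candidates[var]:
            print(f"potential data race on '{var}' (no common lock)")

checker = LocksetChecker()
checker.access("shared_counter", thread_id=1, held_locks={"L1"})
checker.access("shared_counter", thread_id=2, held_locks=set())  # -> reported
```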

Language-based Classification of Words using Deep Learning (딥러닝을 이용한 언어별 단어 분류 기법)

  • Zacharia, Nyambegera Duke; Dahouda, Mwamba Kasongo; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.411-414 / 2021
  • Deep learning has become one of the most critical technologies in education today. It has been used especially in natural language processing, where word-representation vectors play a critical role. However, some low-resource languages, such as Swahili, which is spoken in East and Central Africa, do not benefit from this. Natural language processing is a field of artificial intelligence in which systems and computational algorithms are built that can automatically understand, analyze, manipulate, and potentially generate human language. After discovering that some African languages lack proper representation in language processing and are described as low-resource languages because of inadequate data for NLP, we decided to study the Swahili language. As it stands, language modeling using neural networks requires adequate data to guarantee quality word representation, which is important for natural language processing (NLP) tasks, and most African languages have no data for such processing. The main aim of this project is to recognize and focus on the classification of words in English, Swahili, and Korean, with particular emphasis on the low-resource Swahili language. Finally, we create our own dataset, preprocess the data using a Python script, formulate the syllabic alphabet, and develop an English, Swahili, and Korean word analogy dataset.
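
The paper's dataset and model are not given in the abstract, so the following is only a toy sketch of classifying words by language (English, Swahili, Korean): character n-gram features fed to a small neural network. The word list, features, and network size are hypothetical, and with so few examples the predictions are illustrative at best.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny hypothetical word list (not the dataset built in the paper).
words = ["water", "school", "friend",      # English
         "maji", "shule", "rafiki",        # Swahili
         "물", "학교", "친구"]              # Korean
labels = ["en", "en", "en", "sw", "sw", "sw", "ko", "ko", "ko"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 2)),   # character uni-/bi-grams
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(words, labels)
print(model.predict(["kitabu", "친구들", "book"]))  # illustrative predictions only
```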

Research on language numericalization and data matching through natural language processing and TensorFlow (자연어 처리와 텐서플로를 통한 언어표현 수치화 및 데이터 매칭에 대한 연구)

  • Kim, Eunjin; Kim, Jihye; Kim, Chihun; Bae, Chaeeun; Kim, Youngjong
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.571-572 / 2019
  • In everyday life, people each want a lifestyle tailored to them. Choice is especially important when purchases, such as shopping items or lectures, are made based on reviews from actual users. Through this study, we therefore use attribute categorization in machine learning to match users with the products and lectures that suit them best.

Analysis of Impact on ERP Customization Module Using CSR Data

  • Yoo, Byung-Keun; Kim, Seung-Hee
    • Journal of Information Processing Systems / v.17 no.3 / pp.473-488 / 2021
  • The enterprise resource planning (ERP) system is a standardized and advanced business process that many companies nowadays implement through customization. However, because these customizations reflect each company's unique requirements, they affect the efficiency of operations. In this study, we analyzed the impact of customized modules and processing time on customer service requests (CSRs) by utilizing the CSR data accumulated during the construction and operation of ERP systems, focusing on small and medium-sized enterprises (SMEs). As a result, positive correlations were found between unit companies and the length of ERP implementation; ERP modules and the length of ERP implementation; ERP modules and unit companies; and the type of ERP implementation and ERP module. In terms of CSRs, a comparison of the CSR processing time of CBO (customized business object) modules and STD (standard) modules revealed that while five modules did not display statistically significant differences, one module demonstrated a highly significant difference. In sum, the analysis indicates that CBO-type CSRs and their processing cost are higher than those of STD-type CSRs. These results indicate that companies planning to implement an ERP system should consider the ERP modules and their customization ratio and level. This not only provides theoretical validity as an indicator for decision making when an ERP system is constructed, but also, through the impact on processing time, implies that the maintenance costs and project scheduling of ERP software must be considered. This study is the first to present the degree of impact on the operation and maintenance of customized modules based on actual data, and it can provide a theoretical basis for applying the SW change ratio in the cost estimation of ERP system maintenance.
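
The abstract compares CSR processing times between customized (CBO) and standard (STD) modules and reports significance tests; the sketch below shows one common way such a comparison can be run, a Welch's t-test on two samples. The sample values are made up for illustration and are not the study's CSR data.

```python
import numpy as np
from scipy import stats

# Hypothetical CSR processing times (hours) for one module's CBO vs STD requests.
cbo_hours = np.array([5.0, 7.5, 6.2, 9.1, 8.3, 7.7, 10.4, 6.9])
std_hours = np.array([3.1, 4.0, 3.6, 5.2, 4.4, 3.9, 4.8, 3.3])

t_stat, p_value = stats.ttest_ind(cbo_hours, std_hours, equal_var=False)  # Welch's t-test
print(f"mean CBO = {cbo_hours.mean():.2f} h, mean STD = {std_hours.mean():.2f} h")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```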

The Analysis of the GPS Data Processing of the NGII CORS by Bernese and TGO (Bernese와 TGO에 의한 국내 GPS 상시관측소 자료처리 결과 분석)

  • Kim, Ji-Woon; Kwon, Jay-Hyoun; Lee, Ji-Sun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.6 / pp.549-559 / 2008
  • This study verified the limitations of commercial GPS data processing software and its applicability to precise positioning by comparing processing results between Bernese and TGO under various conditions. To achieve this goal, we selected three nationwide station datasets and two smaller local datasets to constitute networks. Using Bernese and TGO, these networks were processed through baseline analysis and network adjustment. The comparative analysis was carried out in terms of software, baseline length and network scale, observation duration, and number of fixed points. In the comparison between the software packages, the scientific software was superior in accuracy. It was confirmed that, as GPS-related technology has developed, receiver performance has been enhanced and, in parallel, the functionality of the commercial software has been tremendously improved. However, differences between the results of the scientific and commercial software still exist, even if they are not large. Therefore, this study confirms that the scientific software should be used when the most precise positions need to be computed, especially when baseline vectors are long.

Development of an Emissions Processing System for Climate Scenario Inventories to Support Global and Asian Air Quality Modeling Studies

  • Choi, Ki-Chul; Lee, Jae-Bum; Woo, Jung-Hun; Hong, Sung-Chul; Park, Rokjin J.; Kim, Minjoong J.; Song, Chang-Keun; Chang, Lim-Seok
    • Asian Journal of Atmospheric Environment / v.11 no.4 / pp.330-343 / 2017
  • Climate change is an important issue, with much research examining not only future climatic conditions but also the interaction of climate and air quality. In this study, a new version of an emissions processing software tool, the Python-based PRocessing Operator for Climate and Emission Scenarios (PROCES), was developed to support climate and atmospheric chemistry modeling studies. PROCES was designed to cover global and regional scale modeling domains, which correspond to the GEOS-Chem and CMAQ/CAMx models, respectively. The tool comprises one main system and two units of external software. One of the external software units was developed using a commercial GIS program and is used to create spatial allocation profiles as an auxiliary database. The SMOKE-Asia emissions modeling system was linked to the main system as external software to create model-ready emissions for regional-scale air quality modeling. The main system was coded in Python version 2.7 and includes several functions covering general emissions processing steps, such as emissions interpolation, spatial allocation, and chemical speciation, to create model-ready emissions and auxiliary inputs for SMOKE-Asia, as well as user-friendly functions for emissions analysis, such as verification and visualization. Due to its flexible software architecture, PROCES can be applied to any pre-gridded emission data as well as regional inventories. The application results of the new tool for the global and regional (East Asia) scale modeling domains under RCP scenarios for the years 1995-2006, 2015-2025, and 2040-2055 were quantitatively in good agreement with the RCP reference data.
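
The abstract names emissions interpolation and chemical speciation among the general processing steps; the sketch below is only a schematic illustration of those two steps, not PROCES code. The grid values, inventory years, and speciation fractions are hypothetical, not values from PROCES or the RCP inventories.

```python
import numpy as np

def interpolate_emissions(year, y0, e0, y1, e1):
    """Linearly interpolate gridded emissions between two inventory years."""
    w = (year - y0) / (y1 - y0)
    return (1.0 - w) * e0 + w * e1

def speciate(nmvoc_total, split_factors):
    """Split a lumped total (e.g., NMVOC) into model species by mass fraction."""
    return {species: nmvoc_total * frac for species, frac in split_factors.items()}

# 2x2 toy grids of annual NMVOC emissions (kt/yr) for 2000 and 2010
e_2000 = np.array([[10.0, 4.0], [2.0, 8.0]])
e_2010 = np.array([[12.0, 5.0], [1.5, 9.0]])
e_2006 = interpolate_emissions(2006, 2000, e_2000, 2010, e_2010)

# hypothetical speciation profile for a CB-style chemical mechanism
profile = {"PAR": 0.55, "OLE": 0.10, "TOL": 0.20, "XYL": 0.15}
print(e_2006)
print(speciate(e_2006.sum(), profile))
```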