• Title/Summary/Keyword: Generate Data


A Method to Automatically Generate Test Scripts from Checklist for Testing Embedded System (임베디드 시스템 테스팅을 위한 체크리스트로부터 테스트 스크립트 자동 생성 방안)

  • Kang, Tae Hoon;Kim, Dae Joon;Chung, Ki Hyun;Choi, Kyung Hee
    • KIPS Transactions on Software and Data Engineering, v.5 no.12, pp.641-652, 2016
  • This paper proposes a method to automatically generate test scripts from the checklists used for testing embedded systems in the field. The proposed method reduces the mistakes that may be introduced during manual generation. In addition, it can generate test scripts that exercise various mode combinations, which cannot be tested with a typical checklist. The test commands in a checklist are transformed into a test script suite by referencing the signal values defined in a test command dictionary. Methods for generating test scripts in sequential, double-permutation, and random order are also proposed; these are useful for testing the interactions between modes, that is, a series of operations for a specific behavior. The proposed method is implemented, and its feasibility is shown through experiments.
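The paper's exact transformation rules are not reproduced in the abstract; as a rough illustration, the three ordering strategies it names (sequential, double permutation, random) can be sketched as follows. The function and parameter names (`generate_sequences`, `mode`) are illustrative, not from the paper.

```python
import itertools
import random

def generate_sequences(commands, mode="sequential", seed=0):
    """Produce test-command orderings from a checklist of commands."""
    if mode == "sequential":
        # run the checklist once, in its given order
        return [list(commands)]
    if mode == "double_permutation":
        # every ordered pair of distinct commands, to exercise mode interactions
        return [list(p) for p in itertools.permutations(commands, 2)]
    if mode == "random":
        # one shuffled ordering, reproducible via the seed
        rng = random.Random(seed)
        seq = list(commands)
        rng.shuffle(seq)
        return [seq]
    raise ValueError(f"unknown mode: {mode}")

checklist = ["SET_MODE_A", "SET_MODE_B", "READ_STATUS"]
pairs = generate_sequences(checklist, "double_permutation")
print(len(pairs))  # 3 commands -> 6 ordered pairs
```

Each generated ordering would then be expanded into concrete script commands by looking up signal values in the test command dictionary, per the paper.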

Assessment of Priority Order Using the Chemical to Cause to Generate Occupational Diseases and Classification by GHS (직업병발생 물질과 GHS분류 자료를 이용한 화학물질 우선순위 평가)

  • Baik, Nam-Sik;Chung, Jin-Do;Park, Chan-Hee
    • Journal of Environmental Science International, v.19 no.6, pp.715-735, 2010
  • This study assesses the priority order of chemicals that cause occupational diseases, in order to provide the fundamental data required for preparing health-protection measures for workers who handle chemicals. From 110,608 domestic and overseas chemicals, 41 of the 51 chemicals known to cause occupational diseases in Korea were selected as study objects, based on whether they are used domestically and whether they have caused occupational diseases. To assess the priority order, a total score was computed from the GHS classification, with a maximum of 90 points for physical risk and 92 points for health toxicity; the priority order for the GHS risk assessment, the GHS toxicity assessment, and the combined GHS toxicity-risk assessment (risk plus toxicity) was then determined by multiplying each result by a weight for occupational-disease occurrence. The top five chemicals in the GHS risk assessment were urethane, copper, chlorine, manganese, and thiomersal, in that order. In the GHS toxicity assessment, the top five were aluminum, iron oxide, manganese, copper, and cadmium (metal). In the combined GHS toxicity-risk assessment, the top five were copper, urethane, iron oxide, chlorine, and phenanthrene. Because the GHS classification lacks data, or contains many uncertain details, on the physical risk and health toxicity of materials that have caused occupational diseases in Korea, it is urgent to prepare countermeasures based on these findings in order to protect the health of workers who handle or are exposed to chemicals.

Interlinking Open Government Data in Korea using Administrative District Knowledge Graph

  • Kim, Haklae
    • Journal of Information Science Theory and Practice, v.6 no.1, pp.18-30, 2018
  • Interest in open data is continuing to grow around the world. In particular, open government data are considered an important element in securing government transparency and creating new industrial values. The South Korean government has enacted legislation on opening public data and provided diversified policy and technical support. However, there are also limitations to effectively utilizing open data in various areas. This paper introduces an administrative district knowledge model to improve the sharing and utilization of open government data, where the data are semantically linked to generate a knowledge graph that connects various data based on administrative districts. The administrative district knowledge model semantically models the legal definition of administrative districts in South Korea, and the administrative district knowledge graph is linked to data that can serve as an administrative basis, such as addresses and postal codes, for potential use in hospitals, schools, and traffic control.

Heterogeneous Ensemble of Classifiers from Under-Sampled and Over-Sampled Data for Imbalanced Data

  • Kang, Dae-Ki;Han, Min-gyu
    • International journal of advanced smart convergence, v.8 no.1, pp.75-81, 2019
  • The data imbalance problem is common and causes serious difficulties in the machine learning process. Sampling is one of the effective methods for solving it. Over-sampling increases the number of instances, so when applied to imbalanced data it is applied to the minority class; under-sampling reduces the number of instances and is usually performed on the majority class. We apply under-sampling and over-sampling to imbalanced data to generate sampled data sets, and from these sampled data sets and the original data set we construct a heterogeneous ensemble of classifiers, using five different algorithms. Experimental results on an intrusion detection dataset, an example of imbalanced data, show that our approach is effective.
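The two sampling steps the abstract describes can be sketched with plain random under- and over-sampling; this is a minimal illustration, not the paper's implementation, and the function names (`under_sample`, `over_sample`) are my own.

```python
import random
from collections import Counter

def under_sample(data, labels, seed=0):
    """Randomly drop majority-class instances down to the minority count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    n_min = min(counts.values())
    by_class = {c: [i for i, y in enumerate(labels) if y == c] for c in counts}
    keep = []
    for c, idx in by_class.items():
        keep += idx if len(idx) == n_min else rng.sample(idx, n_min)
    return [data[i] for i in keep], [labels[i] for i in keep]

def over_sample(data, labels, seed=0):
    """Randomly duplicate minority-class instances up to the majority count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    n_max = max(counts.values())
    out_x, out_y = list(data), list(labels)
    for c, k in counts.items():
        idx = [i for i, y in enumerate(labels) if y == c]
        for _ in range(n_max - k):
            i = rng.choice(idx)
            out_x.append(data[i])
            out_y.append(labels[i])
    return out_x, out_y

X = [[0], [1], [2], [3], [4]]
y = [0, 0, 0, 0, 1]  # 4:1 imbalance
_, yu = under_sample(X, y)
_, yo = over_sample(X, y)
print(Counter(yu), Counter(yo))  # both become balanced: 1:1 and 4:4
```

A heterogeneous ensemble would then train its five base classifiers on these sampled sets plus the original set and combine their votes.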

The Determination of Earthwork Volume using LiDAR Data (LiDAR 데이터를 이용한 토공량 산정)

  • Kang Joon-Mook;Yoon Hee-Cheon;Min Kwan-Sik;We Gwang-Jae
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference, 2006.04a, pp.533-540, 2006
  • In recent years, civil engineering work has required terrain information that makes earthwork volume calculation more efficient. One method for collecting elevation data is LiDAR. LiDAR data can be used to rapidly produce an accurate digital elevation model of the terrain, compared with conventional ground surveys, photogrammetry, and remote sensing. Raw LiDAR data are combined with GPS positional data to georeference the data sets, and are then edited and processed to generate surface models, elevation models, and contours, from which either a TIN volume surface or a grid volume surface can be created. A Triangulated Irregular Network (TIN) has a complex data structure, but it describes terrain surface features well. This study examines the efficiency of earthwork volume calculation using LiDAR data. One conclusion we can draw is that LiDAR data yield more accurate results than a digital map in the calculation of earthwork volume.
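For the grid volume surface the abstract mentions, the volume is essentially the sum of per-cell height differences against a reference surface times the cell area. A minimal sketch, with illustrative names and a made-up 2x2 DEM:

```python
def grid_volume(dem, ref_elevation, cell_area):
    """Earthwork volume from a grid DEM relative to a flat reference surface.

    Positive contributions are cut (above reference), negative are fill.
    dem: 2-D list of cell elevations; cell_area: area of one grid cell.
    """
    return sum((z - ref_elevation) * cell_area
               for row in dem for z in row)

dem = [[10.0, 11.0],
       [12.0, 13.0]]  # elevations in metres on a 5 m x 5 m grid
vol = grid_volume(dem, ref_elevation=10.0, cell_area=25.0)
print(vol)  # (0 + 1 + 2 + 3) * 25 = 150.0 cubic metres
```

A TIN volume surface integrates over triangles instead of square cells, which follows the terrain more closely at the cost of a more complex data structure, as the abstract notes.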


Exploring the role of referral efficacy in the relationship between consumer innovativeness and intention to generate word of mouth

  • Yoo, Chul Woo;Jin, Sung;Sanders, G. Lawrence
    • Agribusiness and Information Management, v.5 no.2, pp.27-37, 2013
  • Referral marketing plays an important role in promoting new products. For innovative agricultural products, an early adopter's review or recommendation has a critical impact on followers' purchase decisions. Hence, understanding consumers' characteristics and needs plays an important role in the success of innovation, and researchers have paid particular attention to the role of consumer innovativeness. This study attempts to fill the gap in knowledge between a consumer's innovative propensity and her/his intention to generate positive word of mouth about new agricultural products. We adopt Vandecasteele and Geunes' motivated consumer innovativeness model to investigate consumer innovativeness at the extrinsic-motive and intrinsic-motive levels, and examine the moderating role of referral efficacy. For empirical verification, a survey method is used for data collection, and partial least squares (PLS) is used to analyze the data. Finally, several theoretical contributions and practical implications are discussed.

Generating of Pareto frontiers using machine learning (기계학습을 이용한 파레토 프런티어의 생성)

  • Yun, Yeboon;Jung, Nayoung;Yoon, Min
    • Journal of the Korean Data and Information Science Society, v.24 no.3, pp.495-504, 2013
  • Evolutionary algorithms have been applied to multi-objective optimization problems as approximation methods based on computational intelligence, and these methods have been improved gradually in order to generate many approximate Pareto optimal solutions more exactly. This paper introduces a new method that uses a support vector machine to find an approximate Pareto frontier in multi-objective optimization problems, and further applies an evolutionary algorithm to the proposed method in order to generate approximate Pareto frontiers more exactly. Decision making with two or three objective functions can then easily be performed on the basis of Pareto frontiers visualized by the proposed method. Finally, a few examples demonstrate the effectiveness of the proposed method.
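The underlying notion of a Pareto frontier (the set of non-dominated solutions) can be illustrated directly, without the paper's SVM or evolutionary machinery; the following sketch assumes minimization of all objectives and uses illustrative names.

```python
def pareto_frontier(points):
    """Return the non-dominated points, minimizing every objective.

    A point q dominates p if q is no worse in all objectives
    and strictly better in at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(sorted(pareto_frontier(pts)))  # [(1, 5), (2, 3), (4, 1)]
```

The paper's contribution is to approximate this frontier from few evaluations by fitting a support vector machine, rather than enumerating dominance over a full point set as above.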

Model-based Test Cases Generation Method for Weapons System Software (무기체계 소프트웨어의 모델 기반 테스트 케이스 생성 방법)

  • Choi, Hyunjae;Lee, Youngwoo;Baek, Jisun;Kim, Donghwan;Cho, Kyutae;Chae, Heungseok
    • Journal of the Korea Institute of Military Science and Technology, v.23 no.4, pp.389-398, 2020
  • Test cases for existing weapon system software were created manually, with the tester analyzing the test items defined in the software integration test procedure. This approach has two limitations. First, the quality of the test cases can vary with the tester's ability to analyze the test items. Second, writing the test cases can incur excessive time and cost. This paper proposes a method to automatically generate test cases based on a requirements model and specifications, to overcome these limitations. Test sequences and test data are generated from the use case event model, which represents the requirements of the weapon system software, and from the use case specifications that specify those requirements. Applied to the 8 target models constituting an avionics control system, the proposed method produced 30 test sequences and 8 test data.

Contribution to Improve Database Classification Algorithms for Multi-Database Mining

  • Miloudi, Salim;Rahal, Sid Ahmed;Khiat, Salim
    • Journal of Information Processing Systems, v.14 no.3, pp.709-726, 2018
  • Database classification is an important preprocessing step for multi-database mining (MDM). When a multi-branch company needs to explore its distributed data for decision making, it is imperative to classify these multiple databases into similar clusters before analyzing the data. To search for the best classification of a set of n databases, existing algorithms generate from 1 to ($n^2-n$)/2 candidate classifications. Although each candidate classification is included in the next one (i.e., clusters in the current classification are subsets of clusters in the next), existing algorithms generate each classification independently, without reusing clusters from the previous classification. Consequently, existing algorithms are time-consuming, especially as the number of candidate classifications grows. To overcome this problem, we propose an efficient approach that represents the problem of classifying the multiple databases as one of identifying the connected components of an undirected weighted graph. Theoretical analysis and experiments on public databases confirm the efficiency of our algorithm against existing works and show that it overcomes the problem of increasing execution time.
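The graph formulation the abstract describes can be sketched simply: databases are vertices, pairwise similarities are edge weights, and one classification is the set of connected components once edges below a similarity threshold are dropped. This is an illustrative sketch of that idea, not the paper's algorithm; all names are my own.

```python
from collections import defaultdict

def classify_databases(n, similarities, threshold):
    """Cluster n databases (numbered 0..n-1) as the connected components of
    the graph whose edges are pairs with similarity >= threshold."""
    adj = defaultdict(list)
    for (i, j), sim in similarities.items():
        if sim >= threshold:
            adj[i].append(j)
            adj[j].append(i)

    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:  # iterative depth-first search
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        clusters.append(sorted(comp))
    return clusters

sims = {(0, 1): 0.9, (1, 2): 0.8, (3, 4): 0.7, (2, 3): 0.2}
print(classify_databases(5, sims, threshold=0.5))  # [[0, 1, 2], [3, 4]]
```

Raising the threshold removes edges and splits components, which is why consecutive candidate classifications nest inside one another, as the abstract notes.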

Auto Configuration Module for Logstash in Elasticsearch Ecosystem

  • Ahmed, Hammad;Park, Yoosang;Choi, Jongsun;Choi, Jaeyoung
    • Annual Conference of KIPS, 2018.10a, pp.39-42, 2018
  • Log analysis and monitoring are significantly important in most systems. Log management is of core importance in distributed applications, cloud-based applications, and applications designed for big data. These applications produce a large number of log files containing essential information, which can be used in log analytics to find relevant patterns in varying log data. However, tools are needed to parse, store, and visualize this log information. The "Elasticsearch, Logstash, and Kibana" (ELK) stack is one of the most popular tool sets for log management. For the ingestion of log files, configuration files are of key importance, as they cover all the services needed to input, process, and output the log files. However, creating configuration files is often complicated and time-consuming, as it requires domain expertise and manual work. In this paper, an auto-configuration module for Logstash is proposed, which aims to automatically generate the configuration files for Logstash. The primary purpose is to provide a mechanism that can auto-generate the configuration file for a corresponding log file in less time, improving the overall efficiency of the log management system.
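The paper's module itself is not reproduced in the abstract; as a rough illustration of what "auto-generating a Logstash configuration" can mean, the sketch below renders a minimal input/filter/output pipeline config from a few parameters. The paths, host, index name, and function name are illustrative assumptions.

```python
def make_logstash_config(log_path, grok_pattern, es_host, index):
    """Render a minimal Logstash pipeline config (input -> grok -> Elasticsearch)."""
    return f"""input {{
  file {{
    path => "{log_path}"
    start_position => "beginning"
  }}
}}
filter {{
  grok {{
    match => {{ "message" => "{grok_pattern}" }}
  }}
}}
output {{
  elasticsearch {{
    hosts => ["{es_host}"]
    index => "{index}"
  }}
}}
"""

conf = make_logstash_config("/var/log/app/*.log",
                            "%{COMBINEDAPACHELOG}",
                            "localhost:9200",
                            "app-logs")
print(conf)
```

An auto-configuration module along the paper's lines would choose the grok pattern and output settings by inspecting the log file's format instead of taking them as arguments.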