• Title/Summary/Keyword: Generate Data


Automatic Test Data Generation for Mutation Testing Using Genetic Algorithms (유전자 알고리즘을 이용한 뮤테이션 테스팅의 테스트 데이터 자동 생성)

  • 정인상;창병모
    • The KIPS Transactions: Part D
    • /
    • v.8D no.1
    • /
    • pp.81-86
    • /
    • 2001
  • One key goal of software testing is to generate a 'good' test data set, which is considered the most difficult and time-consuming task. This paper discusses how genetic algorithms can be used for the automatic generation of test data sets for software testing. We employ mutation testing to show the effectiveness of genetic algorithms (GAs) in automatic test data generation. The approach presented in this paper differs from others in that the test generation process requires no knowledge of the implementation details of the program under test. In addition, we have conducted experiments comparing our approach with random testing, which is also regarded as a black-box test generation technique, to show its effectiveness.

  • PDF
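As an illustration of the approach summarized above, here is a minimal GA sketch for mutation testing. The program under test, the single mutant, the fitness function, and all GA parameters are invented for the example; the paper's actual setup is not specified here. An input "kills" the mutant when the original and the mutant disagree on it.

```python
import random

# Hypothetical program under test: classify a value against a threshold.
def original(x):
    return "big" if x > 10 else "small"

# Mutant: relational operator '>' replaced by '>='.
def mutant(x):
    return "big" if x >= 10 else "small"

# Fitness: an input kills the mutant when the two outputs differ;
# otherwise, reward inputs near the boundary where they might differ.
def fitness(x):
    return 1000 if original(x) != mutant(x) else -abs(x - 10)

def evolve(pop_size=20, generations=100, seed=1):
    """Search for a test input that kills the mutant."""
    rng = random.Random(seed)
    pop = [rng.randint(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if original(pop[0]) != mutant(pop[0]):
            return pop[0]                        # mutant killed
        parents = pop[: pop_size // 2]           # truncation selection
        children = [(rng.choice(parents) + rng.choice(parents)) // 2  # crossover
                    + rng.randint(-3, 3)                              # mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return None                                  # mutant survived

killer = evolve()
```

Note that the fitness guidance needs only the observable input/output behavior of the two program versions, which mirrors the black-box property claimed in the abstract.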

DNA Chip Database for the Korean Functional Genomics Project

  • Kim, Sang-Soo
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2001.10a
    • /
    • pp.11-28
    • /
    • 2001
  • The Korean Functional Genomics Project focuses on stomach and liver cancers. Specimens collected by six hospital teams are used in DNA microarray experiments. Experimental conditions, spot measurement data, and the associated clinical information are stored in a relational database. The microarray database schema was developed based on EBI's ArrayExpress. A diagrammatic representation of the schema is used to help navigate over many tables in the database. Field descriptions, table-to-table relationships, and other database features are also stored in the database, and these are used by a PERL interface program to generate web-based input forms on the fly. As such, it is rather simple to modify the database definition and implement controlled vocabularies. This PERL program is a general-purpose utility which can be used for inputting and updating data in relational databases. It supports file upload and user-supplied filters of uploaded data. Joining related tables is implemented using JavaScript, allowing this step to be deferred to a later stage. This feature alleviates the pain of inputting data into a multi-table database and promotes collaborative data input among several teams. Pathological findings, clinical laboratory parameters, demographic information, and environmental factors are also collected and stored in a separate database. The same PERL program facilitated developing this database and its user interface.

  • PDF
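The metadata-driven form generation described above (field descriptions stored in the database drive the input forms) can be sketched as follows. The paper's utility is written in PERL; this Python sketch, with an invented field-metadata structure, only illustrates the same idea.

```python
# Hypothetical field metadata of the kind the paper stores in the database:
# (field name, label, input type, controlled vocabulary if any)
SCHEMA = {
    "specimen": [
        ("specimen_id", "Specimen ID",  "text",   None),
        ("organ",       "Organ",        "select", ["stomach", "liver"]),
        ("collected",   "Collected on", "date",   None),
    ],
}

def render_form(table):
    """Generate an HTML input form from table metadata on the fly."""
    rows = [f'<form method="post" action="/input/{table}">']
    for name, label, ftype, vocab in SCHEMA[table]:
        if ftype == "select":  # controlled vocabulary -> fixed choices
            opts = "".join(f"<option>{v}</option>" for v in vocab)
            rows.append(f'<label>{label} <select name="{name}">{opts}</select></label>')
        else:
            rows.append(f'<label>{label} <input type="{ftype}" name="{name}"></label>')
    rows.append("</form>")
    return "\n".join(rows)
```

Because the form is derived entirely from the metadata, changing the database definition or a vocabulary only requires editing the stored descriptions, as the abstract notes.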

Tunable compression of wind tunnel data

  • Possolo, Antonio;Kasperski, Michael;Simiu, Emil
    • Wind and Structures
    • /
    • v.12 no.6
    • /
    • pp.505-517
    • /
    • 2009
  • Synchronous wind-induced pressures, measured in wind-tunnel tests on model buildings instrumented with hundreds of pressure taps, are an invaluable resource for designing safe buildings efficiently. They enable a much more detailed, accurate representation of the forces and moments that drive engineering design than conventional tables and graphs do. However, the very large volumes of data that such tests typically generate pose a challenge to their widespread use in practice. This paper explains how a wavelet representation for the time series of pressure measurements acquired at each tap can be used to compress the data drastically while preserving those features that are most influential for design, and also how it enables incremental data transmission, adaptable to the accuracy needs of each particular application. The loss incurred in such compression is tunable and known. Compression rates as high as 90% induce distortions that are statistically indistinguishable from the intrinsic variability of wind-tunnel testing, which we gauge based on an unusually large collection of replicated tests done under the same wind-tunnel conditions.
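The thresholding scheme described above can be illustrated with a Haar transform; the paper does not fix a particular wavelet basis here, and Haar is chosen only for brevity. Transform the pressure time series, keep only the largest coefficients, and reconstruct; the discarded energy is the known, tunable loss.

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))  # detail
        a = (a[0::2] + a[1::2]) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return coeffs

def haar_inverse(coeffs):
    """Exactly invert haar_forward."""
    a = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + det) / np.sqrt(2)
        out[1::2] = (a - det) / np.sqrt(2)
        a = out
    return a

def compress(x, keep=0.10):
    """Zero all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = haar_forward(x)
    flat = np.concatenate(coeffs)
    k = max(1, int(len(flat) * keep))
    thresh = np.sort(np.abs(flat))[-k]
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
```

Because the transform is orthonormal, the squared error of the reconstruction equals the energy of the discarded coefficients, which is what makes the loss "tunable and known."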

A Program Generating and Managing Track Data for Smart Train Route Control (Smart열차진로제어를 위한 선로데이터 생성.관리프로그램)

  • Yoon, Yong-Ki;Lee, Young-Hoon
    • Proceedings of the KSR Conference
    • /
    • 2007.05a
    • /
    • pp.1741-1745
    • /
    • 2007
  • Even though the existing train route control method, which uses track circuits, handles a sufficient number of operations, it still has problems: discordance between the train numbers planned for operation and the train numbers actually running on the track, and the restriction that only one train may enter a given route. To solve these problems, we study and develop a Smart train route control system that uses real-time train position information. This system improves both the utilization factor of a given route control section and the safety level of train route control, but it requires sufficiently reliable data about the tracks on which trains operate. In addition, accidents caused by erroneous train position information must be prevented by reflecting track changes such as maintenance, improvement, and expansion. In this paper, we propose and describe a program that converts AutoCAD files into wiring diagrams, generates track data from them, identifies errors in that data, constructs the resource structures required by the sub-systems of a train control system, and provides simulations of the data structure.

  • PDF

Data Input and Output of Unstructured Data of Large Capacity (대용량 비정형 데이터 자료 입력 및 출력)

  • Sim, Kyu-Cheol;Kang, Byung-Jun;Kim, Kyung-Hwan;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.613-615
    • /
    • 2013
  • Demand for services that handle word-processor files as XML has recently been increasing. In this paper, we present a system that converts word-processor files (HWP, MS-Office) into XML, extracts the data the user entered in the word processor, and stores it in a database according to a user-created XML mapping file. The required data can then be retrieved from the database into previously created word-processor forms, so that the application program generates a word-processing document.

  • PDF

A Real-Time Integrated Hierarchical Temporal Memory Network for the Real-Time Continuous Multi-Interval Prediction of Data Streams

  • Kang, Hyun-Syug
    • Journal of Information Processing Systems
    • /
    • v.11 no.1
    • /
    • pp.39-56
    • /
    • 2015
  • Continuous multi-interval prediction (CMIP) is used to continuously predict the trend of a data stream based on various intervals simultaneously. The continuous integrated hierarchical temporal memory (CIHTM) network performs well in CMIP. However, it is not suitable for CMIP in real-time mode, especially when the number of prediction intervals is increased. In this paper, we propose a real-time integrated hierarchical temporal memory (RIHTM) network by introducing a new type of node, called a Zeta1FirstSpecializedQueueNode (ZFSQNode), for the real-time continuous multi-interval prediction (RCMIP) of data streams. The ZFSQNode is constructed from a specialized circular queue (sQUEUE) together with the modules of original hierarchical temporal memory (HTM) nodes. Owing to the simple structure and easy operation of the sQUEUE, the entire set of prediction operations is integrated in the ZFSQNode. In particular, we employ only one ZFSQNode in each level of the RIHTM network during the prediction stage to generate prediction results for the different intervals. The RIHTM network efficiently reduces the response time. Our performance evaluation showed that the RIHTM network can continuously predict the trend of data streams over multiple intervals in real-time mode.
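The core idea of serving several prediction intervals from a single queue-backed node can be sketched as follows. This is a toy stand-in: the real ZFSQNode wraps HTM node modules, whereas here a simple linear-trend extrapolation is used purely for illustration.

```python
from collections import deque

class MultiIntervalPredictor:
    """Toy sketch of the sQUEUE idea: one circular buffer of recent
    values serves predictions for several look-ahead intervals at once,
    instead of running a separate predictor per interval."""

    def __init__(self, window, intervals):
        self.buf = deque(maxlen=window)   # circular queue of recent values
        self.intervals = intervals

    def update(self, value):
        self.buf.append(value)            # oldest value drops automatically

    def predict(self):
        """Return one prediction per interval from the shared buffer."""
        if len(self.buf) < 2:
            return {h: (self.buf[-1] if self.buf else None)
                    for h in self.intervals}
        slope = (self.buf[-1] - self.buf[0]) / (len(self.buf) - 1)
        return {h: self.buf[-1] + h * slope for h in self.intervals}
```

The point of the sketch is structural: all intervals share one pass over one buffer, which is what lets the RIHTM network keep response time low as the number of intervals grows.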

Improving Database System Performance by Applying NoSQL

  • Choi, Yong-Lak;Jeon, Woo-Seong;Yoon, Seok-Hwan
    • Journal of Information Processing Systems
    • /
    • v.10 no.3
    • /
    • pp.355-364
    • /
    • 2014
  • Internet accessibility has been growing due to the diffusion of smartphones. People can now generate data anywhere and are confronted with the challenge of processing large amounts of data. Since the appearance of the relational database management system (RDBMS), most information systems have been built on it. An RDBMS uses foreign keys to avoid data duplication, and its transactions have the atomicity, consistency, isolation, and durability (ACID) properties, which ensure that data integrity and processing results are managed stably. The characteristic of an RDBMS is high data reliability; however, this comes at the cost of performance degradation. Meanwhile, some information systems require high performance rather than high reliability. If we consider only performance, the use of NoSQL provides many advantages: open-source NoSQL can reduce the ever-increasing maintenance cost of an information system, and NoSQL is easy to use. Therefore, in this study, we show that applying NoSQL to database systems currently implemented with an RDBMS ensures higher performance than the RDBMS.

Conceptual Pattern Matching of Time Series Data using Hidden Markov Model (은닉 마코프 모델을 이용한 시계열 데이터의 의미기반 패턴 매칭)

  • Cho, Young-Hee;Jeon, Jin-Ho;Lee, Gye-Sung
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.5
    • /
    • pp.44-51
    • /
    • 2008
  • Pattern matching and pattern searching in time series data have been active issues in a number of disciplines. This paper suggests a novel pattern matching technique that can be used in the field of stock market analysis as well as in forecasting stock market trends. First, we define conceptual patterns and extract the data forming each pattern from a given time series, and then generate a learning model using the Hidden Markov Model. The results show that context-based pattern matching makes the matching more accountable and that the method can be used effectively in real-world applications, because the pattern matched to a new data sequence carries not only the match itself but also the context that the data implies.
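The matching step described above can be sketched as scoring a discretized series against one HMM per conceptual pattern with the forward algorithm. The two hand-built two-state models and their parameters below are illustrative assumptions, not the paper's learned models (which are trained from the extracted pattern data).

```python
import numpy as np

UP, DOWN = 0, 1  # observation symbols after discretization

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale to avoid underflow
    return loglik

def discretize(series):
    """Map a price series to its sequence of up/down moves."""
    return [UP if b > a else DOWN for a, b in zip(series, series[1:])]

# Illustrative per-pattern models: (initial pi, transitions A, emissions B)
models = {
    "uptrend":   (np.array([0.9, 0.1]),
                  np.array([[0.9, 0.1], [0.5, 0.5]]),
                  np.array([[0.8, 0.2], [0.4, 0.6]])),
    "downtrend": (np.array([0.1, 0.9]),
                  np.array([[0.5, 0.5], [0.1, 0.9]]),
                  np.array([[0.6, 0.4], [0.2, 0.8]])),
}

def classify(series):
    """Match a new sequence to the pattern model with highest likelihood."""
    obs = discretize(series)
    return max(models, key=lambda m: forward_loglik(obs, *models[m]))
```

The likelihood score is what carries the "context": the whole state trajectory implied by the model, not just a pointwise distance, decides the match.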

Generation of cutting Path Data for Fully Automated Transfer-type Variable Lamination Manufacturing Using EPS-Foam (완전 자동화된 단속형 가변적층쾌속조형공정을 위한 절단 경로 데이터 생성)

  • 이상호;안동규;김효찬;양동열;박두섭;심용보;채희창
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.10a
    • /
    • pp.599-602
    • /
    • 2002
  • A novel rapid prototyping (RP) process, the automated transfer-type variable lamination manufacturing process (Automated VLM-ST), has been developed. In Automated VLM-ST, a vacuum chuck and a linear moving system transfer plate-type material with two pilot holes to the rotation stage. A four-axis synchronized hotwire cutter cuts the material twice to generate an Automated Unit Shape Layer (AUSL) with the desired width, side slopes, length, and two reference shapes in accordance with CAD data. Each AUSL is stacked on the stacking plate with two pilot pins, using the pilot holes in the AUSL and the pilot pins. Subsequently, adhesive is supplied to the top surface of the stacked AUSL by a bonding roller while pressure is simultaneously applied to its bottom surface. Finally, three-dimensional shapes are rapidly fabricated. This paper describes the procedure for generating the cutting path data (AUSL data) for Automated VLM-ST. The method for generating the AUSL was practically applied and used to fabricate various shapes.

  • PDF

On-Board Orbit Propagator and Orbit Data Compression for Lunar Explorer using B-spline

  • Lee, Junghyun;Choi, Sujin;Ko, Kwanghee
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.17 no.2
    • /
    • pp.240-252
    • /
    • 2016
  • In this paper, an on-board orbit propagator and a trajectory compression method based on B-splines are proposed for a lunar explorer. An explorer should know its own orbit for successful mission operation. Generally, orbit determination is performed periodically at the ground station, and the computed orbit information is then uploaded to the explorer, which generates a heavy workload for both the ground station and the explorer. In the proposed scheme, a high-performance computer at the ground station determines the orbit required for the explorer in the Earth parking orbit. The method not only reduces the workload of the ground station and the explorer, but also increases the orbital prediction accuracy. The orbit data are then compressed into B-spline coefficients within a given tolerance and transmitted to the explorer efficiently. Data compression is maximized using the proposed methods, which are compared with a fifth-order polynomial regression method. The results show that the proposed method has the potential for expansion to various deep space probes.
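The compression step, fitting a densely sampled trajectory with far fewer B-spline coefficients within a tolerance, can be sketched as a least-squares fit over a clamped knot vector. The knot layout, coefficient count, and the sine stand-in for an orbit component below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: basis function N_{i,k} evaluated at points x."""
    if k == 0:
        hit = (t[i] <= x) & (x < t[i + 1])
        # include the right edge of the domain in the last interval
        hit |= (x == t[-1]) & (t[i] < t[i + 1]) & (t[i + 1] == t[-1])
        return hit.astype(float)
    out = np.zeros_like(x, dtype=float)
    if t[i + k] > t[i]:
        out += (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        out += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) \
               * bspline_basis(i + 1, k - 1, t, x)
    return out

def fit_lsq_bspline(x, y, n_coef, k=3):
    """Least-squares fit of (x, y) with n_coef cubic B-spline coefficients."""
    a, b = x.min(), x.max()
    interior = np.linspace(a, b, n_coef - k + 1)[1:-1]
    t = np.concatenate([[a] * (k + 1), interior, [b] * (k + 1)])  # clamped knots
    N = np.column_stack([bspline_basis(i, k, t, x) for i in range(n_coef)])
    c, *_ = np.linalg.lstsq(N, y, rcond=None)
    return t, c

def eval_bspline(t, c, k, x):
    """Reconstruct the curve from knots t and coefficients c."""
    N = np.column_stack([bspline_basis(i, k, t, x) for i in range(len(c))])
    return N @ c
```

Only the knot vector and the coefficient array need to be uplinked; the explorer evaluates the spline on board, which is where the compression gain over transmitting every sample comes from.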