• Title/Summary/Keyword: Generate Data

Fault Injection Based Indirect Interaction Testing Approach for Embedded System (임베디드 시스템의 결함 주입 기반 간접 상호작용 테스팅 기법)

  • Hossain, Muhammad Iqbal;Lee, Woo Jin
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.9
    • /
    • pp.419-428
    • /
    • 2017
  • In an embedded system, modules exchange data by interacting with one another, and exchanging erroneous resource data among modules may lead to execution errors. The interacting resources create dependencies between two modules, where any change to a resource by one module affects the functionality of the other. Several investigations of embedded systems show that interaction faults between modules are one of the major causes of critical software failure. Interaction testing is therefore an essential phase for reducing interaction faults and minimizing risk. Interaction faults arise from both direct and indirect interactions between modules: a direct interaction is an explicit call relation between modules, while an indirect interaction is a relation formed beneath the interface through data-dependence relationships on shared resources. In this paper, we investigate errors based on indirect interactions between modules and introduce a new test criterion for identifying errors that existing approaches cannot detect at the integration level. We propose a novel approach for generating an interaction model from indirect interaction patterns, and we design test criteria based on different interaction errors to generate test cases. Finally, we use fault injection to evaluate the feasibility and effectiveness of our approach.
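The indirect interaction the abstract describes can be illustrated with a minimal sketch: two modules share a resource through data dependence rather than a direct call, and a fault injector corrupts the resource to expose the reader's handling of erroneous data. All module and resource names here are illustrative, not taken from the paper.

```python
shared = {"sensor_reading": 0.0}           # resource shared by the two modules

def module_a():                            # writer module
    shared["sensor_reading"] = 42.0

def module_b():                            # reader module; depends indirectly on A
    value = shared["sensor_reading"]
    if not (0.0 <= value <= 100.0):
        raise ValueError("out-of-range resource data")
    return value * 2

def inject_fault():                        # corrupt the shared resource
    shared["sensor_reading"] = float("nan")

module_a()                                 # normal indirect interaction
assert module_b() == 84.0

module_a()                                 # same interaction, fault injected
inject_fault()
try:
    module_b()
    detected = False
except ValueError:
    detected = True
print("fault detected:", detected)         # True: the injected fault is exposed
```

A real test criterion would enumerate such resource-dependence pairs systematically; this only shows the inject-then-observe pattern.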

Use of Space-time Autocorrelation Information in Time-series Temperature Mapping (시계열 기온 분포도 작성을 위한 시공간 자기상관성 정보의 결합)

  • Park, No-Wook;Jang, Dong-Ho
    • Journal of the Korean association of regional geographers
    • /
    • v.17 no.4
    • /
    • pp.432-442
    • /
    • 2011
  • Climatic variables such as temperature and precipitation tend to vary in both space and time simultaneously. It is therefore necessary to incorporate space-time autocorrelation into conventional spatial interpolation methods for reliable time-series mapping. This paper introduces and applies space-time variogram modeling and space-time kriging to generate time-series temperature maps, using hourly Automatic Weather System (AWS) temperature observations over a one-month period. First, the temperature observations are decomposed into a deterministic trend component and a stochastic residual component. For trend modeling, elevation data, which correlate reasonably well with temperature, are used as secondary information to generate a trend component that reflects topographic effects. Space-time variograms of the residual components are then estimated and modeled with a product-sum space-time variogram model, accounting not only for autocorrelation in both space and time but also for their interaction. In a case study, space-time kriging outperformed both conventional space-only ordinary kriging and regression-kriging, which indicates the importance of using space-time autocorrelation information as well as elevation data. Space-time kriging is expected to be a useful tool for analyzing datasets that are sparse in space but rich in time.
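The product-sum model named in the abstract combines marginal spatial and temporal variograms as gamma_st(hs, ht) = gamma_s(hs) + gamma_t(ht) - k * gamma_s(hs) * gamma_t(ht). A small sketch, with spherical component models and parameter values that are illustrative assumptions rather than the paper's fitted values:

```python
# Product-sum space-time variogram sketch (illustrative parameters only).

def spherical(h, sill, rng):
    """Spherical variogram model used for each marginal component."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

def product_sum(hs, ht, k=0.5):
    """Space-time variogram combining spatial and temporal marginals."""
    gs = spherical(hs, sill=1.0, rng=10.0)   # spatial lag hs, e.g. in km
    gt = spherical(ht, sill=0.8, rng=6.0)    # temporal lag ht, e.g. in hours
    return gs + gt - k * gs * gt

# A purely spatial lag (ht = 0) reduces to the spatial marginal model:
print(product_sum(5.0, 0.0))   # 0.6875
```

The interaction term -k * gs * gt is what distinguishes this from a simple sum model, which assumes space and time act independently.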

Design of Metadata Model and Development of Management System for Electronic Documents on the Web (Web상의 전자문서를 위한 메타데이터 모델의 제안 및 관리시스템의 개발)

  • Jung, Hyo-Taeg;Yang, Young-Jong;Kim, Soon-Yong;Lee, Sang-Duk;Choy, Yoon-Chul
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.4
    • /
    • pp.924-941
    • /
    • 1998
  • It is not easy to access the required data on the Web using search engines, because too many results are returned and they do not provide enough information about the corresponding data. Metadata is data about data: it includes information about the data itself as well as about its contents. Using metadata, users can acquire enough information about the corresponding data and access exactly the data they need, which increases data usability. In this paper, several metadata technologies and metadata models that are in the process of standardization or have been adopted as standards are analyzed, and the SeriCore Metadata Model is proposed for Web documents in the field of science and technology, such as papers, project reports, technical reports, abstracts, manuals, and graphic images. The SeriCore Metadata Management System, which can generate, store, and retrieve metadata effectively, is designed and implemented based on the SeriCore Metadata Model.

Geostatistical Simulation of Compositional Data Using Multiple Data Transformations (다중 자료 변환을 이용한 구성 자료의 지구통계학적 시뮬레이션)

  • Park, No-Wook
    • Journal of the Korean earth science society
    • /
    • v.35 no.1
    • /
    • pp.69-87
    • /
    • 2014
  • This paper suggests a conditional simulation framework based on multiple data transformations for geostatistical simulation of compositional data. First, a log-ratio transformation is applied to the original compositional data so that conventional statistical methodologies can be applied. Next, minimum/maximum autocorrelation factors (MAF) and indicator transformations are applied sequentially. The MAF transformation generates independent new variables, so each variable can be simulated independently. The indicator transformation supports non-parametric modeling of the conditional cumulative distribution functions of variables that do not follow multi-Gaussian random function models. Finally, the inverse transformations are applied in the reverse order of the forward transformations. A case study with surface sediment compositions in tidal flats illustrates the applicability of the presented framework. All simulation results satisfied the constraints of compositional data and reproduced the statistical characteristics of the sample data well. Surface sediment classification based on multiple simulated compositions enabled a probabilistic evaluation of the classification results, which is unavailable in a conventional kriging approach. The presented simulation framework is therefore expected to be effectively applicable to geostatistical simulation of various compositional data.
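The first transformation step can be sketched concretely. A log-ratio transform maps a composition to unconstrained space, and its inverse guarantees that back-transformed values are valid compositions (positive, summing to 1). The additive log-ratio (alr) variant shown here is one common choice; the abstract does not specify which log-ratio transform the paper uses.

```python
import math

def alr(comp):
    """Additive log-ratio: map a D-part composition to D-1 real values."""
    last = comp[-1]
    return [math.log(c / last) for c in comp[:-1]]

def alr_inv(y):
    """Inverse alr: back to a positive composition that sums to 1."""
    expy = [math.exp(v) for v in y] + [1.0]
    total = sum(expy)
    return [v / total for v in expy]

sediment = [0.5, 0.3, 0.2]     # e.g., sand / silt / clay fractions
z = alr(sediment)              # simulation happens in this unconstrained space
back = alr_inv(z)              # any simulated z maps to a valid composition
print(back)
```

This is why simulated realizations automatically "satisfy the constraints of compositional data": the inverse transform enforces them by construction.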

An Adaptive Query Processing System for XML Stream Data (XML 스트림 데이타에 대한 적응력 있는 질의 처리 시스템)

  • Kim Young-Hyun;Kang Hyun-Chul
    • Journal of KIISE:Databases
    • /
    • v.33 no.3
    • /
    • pp.327-341
    • /
    • 2006
  • As applications that generate streaming data, such as sensor networks, monitoring, and SDI (selective dissemination of information), become more common, active research is being conducted to support efficient processing of queries over streaming data. Web applications such as SDI require query processing over streaming XML data, which is especially important because XML has been established as the standard for data exchange on the Web. A major problem with previous systems for querying streaming XML data is that they cannot adapt to dynamically changing streams, because they rely on static query plans. Stream query processing systems based on the relational data model, on the other hand, have achieved adaptive query processing through query operator routing. In this paper, we propose an adaptive query processing system for streaming XML data that applies the adaptive processing model developed for streaming relational data. We compare our system with YFilter, one of the representative systems for XML stream query processing, to show the efficiency of our approach.
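Operator routing, the relational-stream idea the abstract borrows, can be sketched in miniature: instead of a fixed plan, each item is routed through filter operators in an order driven by their observed selectivity, so the effective plan adapts as stream characteristics drift. The operators and data below are invented for illustration and are not the paper's design.

```python
class Router:
    """Route each item through the currently most selective operator first."""

    def __init__(self, operators):
        self.ops = list(operators)                    # list of (name, predicate)
        self.passed = {name: 1 for name, _ in operators}   # smoothed counters
        self.seen = {name: 2 for name, _ in operators}

    def process(self, item):
        # Cheapest rejection first: lowest observed pass-rate goes first.
        order = sorted(self.ops,
                       key=lambda op: self.passed[op[0]] / self.seen[op[0]])
        for name, pred in order:
            self.seen[name] += 1
            if not pred(item):
                return False                          # item filtered out
            self.passed[name] += 1
        return True                                   # item satisfies the query

router = Router([("is_sensor", lambda e: e["tag"] == "sensor"),
                 ("hot", lambda e: e["value"] > 30)])
stream = [{"tag": "sensor", "value": v} for v in (10, 40, 25, 35)]
print([router.process(e) for e in stream])            # [False, True, False, True]
```

The query result is order-independent (all predicates must hold), so re-sorting the operators changes cost, not correctness.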

A Comparative Study between Stock Price Prediction Models Using Sentiment Analysis and Machine Learning Based on SNS and News Articles (SNS와 뉴스기사의 감성분석과 기계학습을 이용한 주가예측 모형 비교 연구)

  • Kim, Dongyoung;Park, Jeawon;Choi, Jaehyun
    • Journal of Information Technology Services
    • /
    • v.13 no.3
    • /
    • pp.221-233
    • /
    • 2014
  • As interest in the stock market has grown with economic development, many studies have attempted to predict fluctuations in stock prices. Recently, many of these studies have employed scientific and technological methods, and the data used for such studies are becoming more diverse. In this paper, we propose stock price prediction models using sentiment analysis and machine learning based on news articles and SNS data to improve prediction accuracy. The proposed models are generated through a four-step process: data collection, sentiment dictionary construction, sentiment analysis, and machine learning. Data were collected from economy-related newspapers for news articles and from Twitter for SNS data. A sentiment dictionary was built from the collected news articles and used to perform sentiment analysis. In the machine learning phase, we generate prediction models using various classification techniques and the data produced by sentiment analysis. After generating the prediction models, we conducted 10-fold cross-validation to measure their performance. The experimental results showed accuracy above 80% in a number of configurations and F1 scores close to 0.8, a significant improvement over previous research using opinion mining or data mining techniques.
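Two of the pipeline steps, dictionary-based sentiment scoring and the 10-fold split used for validation, can be sketched as follows. The tiny sentiment dictionary and the sample headline are invented for illustration; the paper's dictionary is built from Korean economic news.

```python
# Illustrative sentiment dictionary (the paper builds one from news corpora).
sentiment_dict = {"surge": 1, "gain": 1, "record": 1,
                  "loss": -1, "drop": -1, "crisis": -1}

def sentiment_score(text):
    """Sum of dictionary polarities over an article's tokens."""
    return sum(sentiment_dict.get(tok, 0) for tok in text.lower().split())

def k_fold_indices(n, k=10):
    """Partition n sample indices into k near-equal folds for cross-validation."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

print(sentiment_score("Stocks surge to record high after earnings gain"))  # 3
folds = k_fold_indices(100, k=10)
print(len(folds), len(folds[0]))   # 10 folds of 10 samples each
```

In each of the 10 rounds, one fold serves as the test set and the other nine train the classifier; accuracy and F1 are then averaged across rounds.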

Free-air Anomaly from a Consistent Preprocessing of Land Gravity Data in South Korea (우리나라 지상중력자료의 일관된 전처리를 통한 프리에어이상값)

  • Lee, Ji-Sun;Lee, Bo-Mi;Kwon, Jay-Hyoun;Lee, Yong-Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.4
    • /
    • pp.379-386
    • /
    • 2008
  • To determine a precise geoid, high-quality land gravity data and accurate position information for the observation points are required. The land gravity data should be processed consistently from the raw-data level to produce the quality free-air anomalies used in geoid determination. In this study, we processed the land gravity data of KIGAM (Korea Institute of Geoscience and Mineral Resources) and Pusan National University, which include precise position information acquired from GPS together with raw gravity data. Conversion from gravimeter readings to gravity values and corrections for instrument height and tide were carried out on the raw gravity data for each survey session. A cross-over adjustment was then applied to generate free-air anomalies for the whole data set with a precision of 0.48 mGal. The data processed in this study are expected to serve as a foundation for determining a precise geoid model of Korea.
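The final quantity the abstract names has a standard definition: the free-air anomaly is the observed gravity minus normal gravity at the station latitude, plus the free-air correction of about 0.3086 mGal per metre of elevation. A sketch using the GRS80 international gravity formula; the station values are hypothetical, not from the paper's data set.

```python
import math

def normal_gravity_mgal(lat_deg):
    """GRS80 normal gravity on the ellipsoid, in mGal (1980 gravity formula)."""
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(math.radians(2 * lat_deg)) ** 2
    return 978032.7 * (1 + 0.0053024 * s - 0.0000058 * s2)

def free_air_anomaly(g_obs_mgal, lat_deg, height_m):
    """FAA = g_obs - gamma(lat) + 0.3086 * h  [mGal, h in metres]."""
    return g_obs_mgal - normal_gravity_mgal(lat_deg) + 0.3086 * height_m

# Hypothetical station at latitude 36.35 N, elevation 80 m:
faa = free_air_anomaly(979850.0, 36.35, 80.0)
print(round(faa, 2))
```

The 0.48 mGal precision quoted in the abstract refers to the cross-over adjustment of the survey lines, a step applied before this anomaly computation.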

Looking Beyond the Numbers: Bibliometric Approach to Analysis of LIS Research in Korea

  • Yang, Kiduk;Lee, Jongwook;Choi, Wonchan
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.49 no.4
    • /
    • pp.241-264
    • /
    • 2015
  • Bibliometric analysis for research performance evaluation can generate erroneous assessments for various reasons. Application of the same evaluation metric to different domains, for instance, can produce unfair evaluation results, while analysis based on incomplete data can lead to incorrect conclusions. This study examines bibliometric data of library and information science (LIS) research in Korea to investigate whether research performance should be evaluated in a uniform manner in multi-disciplinary fields such as LIS and how data incompleteness can affect the bibliometric assessment outcomes. The initial analysis of our study data, which consisted of 4,350 citations to 1,986 domestic papers published between 2001 and 2010 by 163 LIS faculty members in Korea, showed an anomalous citation pattern caused by data incompleteness, which was addressed via data projection based on past citation trends. The subsequent analysis of augmented study data revealed ample evidence of bibliometric pattern differences across subject areas. In addition to highlighting the need for a subject-specific assessment of research performance, the study demonstrated the importance of rigorous analysis and careful interpretation of bibliometric data by identifying and compensating for deficiencies in the data source, examining per capita as well as overall statistics, and considering various facets of research in order to interpret what the numbers reflect rather than merely taking them at face value as quantitative measures of research performance.

Automatic 3D soil model generation for southern part of the European side of Istanbul based on GIS database

  • Sisman, Rafet;Sahin, Abdurrahman;Hori, Muneo
    • Geomechanics and Engineering
    • /
    • v.13 no.6
    • /
    • pp.893-906
    • /
    • 2017
  • Automatic large-scale soil model generation is a critical stage in earthquake hazard simulation of urban areas. Manual model development may cause data loss and may not be effective when there are many observations from different soil surveys over a wide area. Geographic information systems (GIS) for storing and analyzing spatial data help scientists generate better models automatically. Although the original soil observations were limited to soil profile data, recent developments in mapping technology, interpolation methods, and remote sensing have enabled more advanced soil models. Together with advanced computational technology, it is possible to handle much larger volumes of data, allowing scientists to tackle the difficult problem of describing the spatial variation of soil. In this study, an algorithm is proposed for automatic three-dimensional soil and velocity model development of the southern part of the European side of Istanbul, next to the Sea of Marmara, based on GIS data. In the proposed algorithm, the bedrock surface is first generated from an integration of geological and geophysical measurements. Layer surface contacts are then integrated with data gathered in vertical borings, and interpolations between the borings are carried out automatically on cross sections. A three-dimensional underground geology model is prepared using the boring data, geologic cross sections, and formation base contours drawn in light of these data. During model preparation, classification studies are made based on formation models. Then, 3D velocity models are developed using geophysical measurements such as refraction-microtremor, array microtremor, and PS logging. The soil and velocity models are integrated to obtain the final soil model. All stages of this algorithm are carried out automatically in the selected urban area: the system directly reads the GIS soil data for the selected part of the urban area, and a 3D soil model is automatically developed for large-scale earthquake hazard simulation studies.
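The interpolation-between-borings step can be sketched with inverse-distance weighting, a common choice for this kind of layer-contact estimation; the abstract does not state which interpolation method the paper uses, and the boring coordinates and depths below are invented.

```python
import math

# ((x, y) in metres, depth-to-bedrock in metres) for three hypothetical borings.
borings = [((0.0, 0.0), 12.0),
           ((100.0, 0.0), 18.0),
           ((0.0, 100.0), 15.0)]

def idw_depth(x, y, power=2.0):
    """Inverse-distance-weighted layer depth at an arbitrary grid point."""
    num = den = 0.0
    for (bx, by), depth in borings:
        d = math.hypot(x - bx, y - by)
        if d == 0.0:
            return depth                  # exactly at a boring: honor the data
        w = 1.0 / d ** power
        num += w * depth
        den += w
    return num / den

print(idw_depth(0.0, 0.0))     # 12.0  (exact at a boring)
print(idw_depth(50.0, 50.0))   # 15.0  (equidistant from all three borings)
```

Repeating this for every layer contact on a grid yields the stacked surfaces from which the 3D geology model is assembled.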

A Fast Processing Algorithm for Lidar Data Compression Using Second Generation Wavelets

  • Pradhan B.;Sandeep K.;Mansor Shattri;Ramli Abdul Rahman;Mohamed Sharif Abdul Rashid B.
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.1
    • /
    • pp.49-61
    • /
    • 2006
  • The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LiDAR data compression. A newly developed compression approach that approximates the LiDAR surface with a series of non-overlapping triangles is presented. A Triangulated Irregular Network (TIN) is the most common form of digital surface model, consisting of elevation values with x, y coordinates that form triangles. Over the years, however, the TIN representation has become an important research topic because of its large data size: compression of TINs is needed for efficient management of large data sets and good surface visualization. The approach comprises the following steps. First, using a Delaunay triangulation, an efficient algorithm is developed to generate a TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for the TIN is then applied in two steps, splitting and elevation: in the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. The data set is then compressed at the desired locations using second-generation wavelets. The quality of the geographical surface representation produced by the proposed technique is compared with the original LiDAR data. The results show that this method can achieve significant reduction of the data set.
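The lifting idea behind second-generation wavelets can be shown on a 1D signal: split the samples into even and odd sets, predict each odd sample from its even neighbours, and keep only the (small) prediction residuals as detail coefficients. This is a generic linear-prediction sketch on a sequence, not the paper's triangle-subdivision filter; the update step of full lifting is omitted for brevity.

```python
def lifting_forward(signal):
    """One split + predict lifting step: returns coarse samples and residuals."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = []
    for i, o in enumerate(odd):
        right = even[i + 1] if i + 1 < len(even) else even[i]  # boundary: repeat
        detail.append(o - (even[i] + right) / 2.0)             # prediction residual
    return even, detail

def lifting_inverse(even, detail):
    """Exactly undo the predict step, so the transform is lossless."""
    signal = []
    for i, d in enumerate(detail):
        right = even[i + 1] if i + 1 < len(even) else even[i]
        signal.extend([even[i], d + (even[i] + right) / 2.0])
    return signal

ramp = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
even, detail = lifting_forward(ramp)
print(detail)                                   # near-zero residuals compress well
assert lifting_inverse(even, detail) == ramp    # perfect reconstruction
```

Compression comes from quantizing or discarding the small residuals; on a TIN, the same predict step estimates a new vertex's elevation from the surrounding triangle vertices.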