• Title/Summary/Keyword: Data-Integration

Development of an Agricultural Data Middleware to Integrate Multiple Sensor Networks for a Farm Environment Monitoring System

  • Kim, Joonyong;Lee, Chungu;Kwon, Tae-Hyung;Park, Geonhwan;Rhee, Joong-Yong
    • Journal of Biosystems Engineering / v.38 no.1 / pp.25-32 / 2013
  • Purpose: The objective of this study is to develop a data middleware for u-IT convergence in agricultural environment monitoring that can support non-standard data interfaces and solve the compatibility problems of heterogeneous sensor networks. Methods: Six factors with three different interfaces were chosen as target data among the environmental monitoring factors for crop cultivation. PostgreSQL and PostGIS were used for the database, and the data middleware was implemented in the Python programming language, based on a hierarchical model design and a key-value table design. For evaluation, 2,000 records were prepared for each data access interface. Results: The execution times of the File I/O, SQL, and HTTP interfaces were 0.00951 s/record, 0.01967 s/record, and 0.0401 s/record, respectively, and no data loss occurred. Conclusions: The data middleware integrated three heterogeneous sensor networks with different data access interfaces.
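
To make the key-value table design concrete, here is a minimal sketch of how such a middleware might store one multi-factor reading in PostgreSQL from Python. The schema, table, column, and connection names are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical key-value sensor table of the kind the paper describes,
# written to with psycopg2; every (node, time, factor) triple becomes one
# row, so networks with different factor sets can share a single table.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sensor_data (
    node_id     TEXT NOT NULL,       -- sensor node within a network
    observed_at TIMESTAMP NOT NULL,  -- measurement time
    factor      TEXT NOT NULL,       -- e.g. 'air_temp', 'humidity'
    value       DOUBLE PRECISION,    -- measured value
    PRIMARY KEY (node_id, observed_at, factor)
);
"""

def store(conn, node_id, observed_at, readings):
    """Flatten one multi-factor reading into key-value rows."""
    with conn, conn.cursor() as cur:
        for factor, value in readings.items():
            cur.execute(
                "INSERT INTO sensor_data (node_id, observed_at, factor, value)"
                " VALUES (%s, %s, %s, %s) ON CONFLICT DO NOTHING",
                (node_id, observed_at, factor, value),
            )

conn = psycopg2.connect("dbname=agri_monitor")  # hypothetical database
with conn, conn.cursor() as cur:
    cur.execute(DDL)
store(conn, "node-01", "2013-01-15 09:00:00",
      {"air_temp": 4.2, "humidity": 61.0})
```

A key-value layout suits heterogeneous networks because adding a newly monitored factor requires no schema change, only a new `factor` string.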

A Study on Light-weight Algorithm of Large scale BIM data for Visualization on Web based GIS Platform (웹기반 GIS 플랫폼 상 가시화 처리를 위한 대용량 BIM 데이터의 경량화 알고리즘 제시)

  • Kim, Ji Eun;Hong, Chang Hee
    • Spatial Information Research / v.23 no.1 / pp.41-48 / 2015
  • BIM technology captures data across the life cycle of a facility through 3D modeling, so a single building produces a huge file containing massive amounts of data. IFC, the standard format, is one such file type, and processing its large-scale geometry and property information is problematic: it slows rendering and burdens the graphics card, making large-scale data inefficient for on-screen visualization. Lightweighting large-scale BIM data is therefore essential for the performance and quality of the program. This paper surveys lightweighting techniques from domestic and international research, then proposes and verifies a technique that optimizes the characteristics of BIM data so that large-scale BIM data can be controlled and visualized effectively. By operating large-scale facility data on a web-based GIS platform in this way, the quality of screen transitions from the user's perspective and efficient memory usage were secured.
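
As an illustration of what lightweighting an IFC file can mean in practice, the sketch below strips non-geometric property data before visualization, using the open-source ifcopenshell library. This is an assumed example of one common lightweighting step, not the algorithm proposed in the paper, and the file names are invented.

```python
# One illustrative lightweighting step: remove IFC property sets (and the
# relationships pointing to them) so only the geometry needed for web
# visualization remains. Uses the ifcopenshell library.
import ifcopenshell

model = ifcopenshell.open("building.ifc")            # hypothetical input
n_psets = len(model.by_type("IfcPropertySet"))

for rel in model.by_type("IfcRelDefinesByProperties"):
    model.remove(rel)                                # unlink property sets
for pset in model.by_type("IfcPropertySet"):
    model.remove(pset)                               # drop the data itself

model.write("building_light.ifc")
print(f"stripped {n_psets} property sets")
```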

An XML Structure Translation System using Schema Structure Data Mapping (스키마 구조 데이타 매핑을 이용한 XML 구조변환 시스템)

  • 송종철;김창수;정회경
    • Journal of KIISE: Computing Practices and Letters / v.10 no.5 / pp.406-418 / 2004
  • In the past, various applications and systems were introduced individually into specific groups or enterprises for different purposes, without considering interoperability among them. The environment for data processing, however, is changing rapidly, and there is a growing need to integrate and couple these applications and systems at the process level for more flexible and faster data processing. When integrating such applications or systems, XML-based integration is recommended as a method that reduces additional cost while satisfying the requirements of the integration. This is because XML is not only a device-independent data type usable on any platform, but also works with XSLT, the document conversion standard established by the W3C, which allows data to be converted easily from one type to another on demand. This paper presents the design and implementation of a system for converting XML structures. The system represents the structure of the data-providing source side and the data-processing destination side using XML Schema, which defines the structural information of an XML document, and it defines the desired structural relationships by mapping structural information and data. From these definitions it generates an XSLT document specifying the conversion rules between the two structures, and this XSLT document then converts data to fit the structure of the destination side. With this system, a document can be adapted to various structures without regard to a specific system or platform, and XSLT documents can be constructed that carry the intended meaning. This paper aims to provide process-level conversion between documents and to improve interoperability and scalability, thereby contributing to an XML document processing environment.
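
The core mechanism the system automates, generating an XSLT stylesheet from a structure mapping and applying it, can be sketched as follows. The element names and the mapping are invented for illustration, and in the paper the stylesheet is generated from schema-to-schema mappings rather than written by hand.

```python
# Applying an XSLT stylesheet to restructure a source-side XML document into
# the destination-side structure (lxml library; names are illustrative).
from lxml import etree

XSLT = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- mapping: source /people/person/fullName -> destination customer/name -->
  <xsl:template match="/people">
    <customers>
      <xsl:for-each select="person">
        <customer><name><xsl:value-of select="fullName"/></name></customer>
      </xsl:for-each>
    </customers>
  </xsl:template>
</xsl:stylesheet>
""")

source = etree.XML(
    b"<people><person><fullName>Hong Gil-dong</fullName></person></people>")
transform = etree.XSLT(XSLT)
print(str(transform(source)))  # emits the restructured destination document
```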

A Study of Integrating ASP Databases with Customer Databases (ASP 용의 데이터베이스와 고객 데이터베이스 연동에 관한 연구)

  • Kim, Ho-Yoon;Lee, Jae-Won
    • The KIPS Transactions: Part D / v.11D no.5 / pp.1063-1072 / 2004
  • In the ASP (Application Service Provider) business, applications using databases sometimes require data from clients' databases. Currently such data are extracted from the client database by manual database operations as an EXCEL file, and the ASP, after receiving this file, loads it into the application's database, again by manual operations. This paper describes how to transmit data between the client database and the ASP database over the web without manual database operations for extraction and insertion. We propose a framework that transmits client data in a systematic way, matches the differing attribute names of each database so that the same attribute values can be shared, and avoids exposing the network path of the client database to the ASP. The approach consists of two data processing steps: first, extracting data from the client database in XML format using a client program downloaded from the ASP site; second, uploading the XML file and storing it in the ASP database. The implemented prototype system shows that the suggested data integration paradigm is valid and that ASP businesses needing client database integration can be activated using it.
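
The first of the two steps, client-side extraction of database rows as XML, might look like the sketch below. The table, column, and file names are assumptions for illustration, with SQLite standing in for the client database; this is not the paper's implementation.

```python
# Client-side step: dump rows from the client database as an XML file that
# can be uploaded to the ASP, keeping the database path on the client side.
import sqlite3
import xml.etree.ElementTree as ET

def export_customers_as_xml(conn, out_path):
    root = ET.Element("customers")
    for cust_id, name in conn.execute("SELECT id, name FROM customer"):
        row = ET.SubElement(root, "customer", id=str(cust_id))
        ET.SubElement(row, "name").text = name
    ET.ElementTree(root).write(out_path, encoding="utf-8",
                               xml_declaration=True)

# Stand-in client database with sample data (a real client DB already exists).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customer (name) VALUES (?)",
                 [("Kim Ho-Yoon",), ("Lee Jae-Won",)])
export_customers_as_xml(conn, "customers.xml")
# The ASP side then parses customers.xml and inserts the rows into its own
# database, applying an attribute-name mapping where the schemas differ.
```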

Plant Species Richness in Korea Utilizing Integrated Biological Survey Data (생물기초조사 통합자료를 활용한 우리나라 식물종 풍부도 분석)

  • Seungbum Hong;Jieun Oh;Jaegyu Cha;Kyungeun Lee
    • Korean Journal of Ecology and Environment / v.56 no.4 / pp.363-374 / 2023
  • Deriving species richness figures that represent all of South Korea has been limited by the country's relatively short history of species field observations and by observation data scattered across organizations in different fields. In this study, a comprehensive compilation of the plant observation data held by agencies under the Ministry of Environment was conducted, enabling the construction of a time series dataset spanning over 100 years. The data integration used minimal criteria, namely species name, observed location, and time (year), followed by data verification and correction. The integrated data show that comprehensive collection of plant species records in South Korea has occurred predominantly since 2000, that the number of plant species found by these surveys appears to be converging recently, and that the survey data needed to derive national-level biodiversity information have only recently begun to meet the necessary conditions. Applying the Chao 2 method, the species richness of indigenous plants was estimated at 3,182.6 for the 70-year period since 1951; a minimum cumulative period of 7 years is required for this estimation. The plant species richness obtained here can serve as a baseline for studying future changes in species richness in South Korea, and the integrated data, together with the richness estimation method used here, appear applicable to deriving regional biodiversity indices, for example at the level of local governments.
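
The Chao 2 estimator referenced above infers total richness from incidence data, using the counts of species recorded in exactly one and exactly two sampling units. The sketch below implements its common bias-corrected form with invented toy numbers; the paper does not publish its exact computation, so treat this as an assumed illustration.

```python
# Bias-corrected Chao 2:  S = S_obs + ((m-1)/m) * Q1*(Q1-1) / (2*(Q2+1))
# m: number of sampling units (e.g. survey years); Q1: species seen in
# exactly one unit ("uniques"); Q2: species seen in exactly two ("duplicates").
def chao2(unit_counts, m):
    """unit_counts: per-species count of sampling units with a record."""
    s_obs = sum(1 for c in unit_counts if c > 0)
    q1 = sum(1 for c in unit_counts if c == 1)
    q2 = sum(1 for c in unit_counts if c == 2)
    return s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))

# Toy data: 6 observed species over m = 7 years, two of them seen only once.
print(chao2([1, 1, 2, 5, 7, 3], m=7))  # ~6.43 > 6: rare records imply unseen species
```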

Data Quality Management: Operators and a Matching Algorithm with a CRM Example (데이터 품질 관리 : CRM을 사례로 연산자와 매칭기법 중심)

  • 심준호
    • The Journal of Society for e-Business Studies / v.8 no.3 / pp.117-130 / 2003
  • It is not unusual to observe a great amount of redundant or inconsistent data even within a single e-business system such as a CRM (Customer Relationship Management) system. The problem is aggravated when we construct a system whose information is gathered from different sources. Data quality management is needed to avoid redundant or inconsistent data in such information systems. A data quality process generally consists of three phases: data cleaning (scrubbing), matching, and integration. In this paper, we introduce and categorize data quality operators for each phase. We then describe the distance function used in the matching phase and present a matching algorithm, PRIMAL (a PRactical Matching Algorithm). Finally, we discuss related work and future research.
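
To make the matching phase concrete, the sketch below pairs records whose field-wise distance falls under a threshold. The paper's actual distance function and the PRIMAL algorithm are not reproduced here; a simple string-similarity distance built on Python's difflib stands in purely for illustration.

```python
# Toy matching phase: flag record pairs as likely duplicates when their
# distance (1 minus average per-field string similarity) is below a threshold.
from difflib import SequenceMatcher

def distance(rec_a, rec_b):
    sims = [SequenceMatcher(None, x, y).ratio() for x, y in zip(rec_a, rec_b)]
    return 1.0 - sum(sims) / len(sims)

records = [("Kim, Ho-Yoon", "Seoul"),
           ("Kim Hoyoon", "Seoul"),      # same customer, inconsistent entry
           ("Lee, Jae-Won", "Busan")]
THRESHOLD = 0.2
matches = [(a, b) for i, a in enumerate(records)
           for b in records[i + 1:] if distance(a, b) < THRESHOLD]
print(matches)  # the two "Kim" variants match; "Lee" does not
```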

Design and Implementation of multi-dimensional BI System for Information Integration and Analysis in University Administration (대학 행정의 정보통합 및 통계분석을 위한 다차원 BI 시스템의 설계 및 구현)

  • Ji, Keung-yeup;Yang, Hee Sung;Kwon, Youngmi
    • Journal of Korea Multimedia Society / v.19 no.5 / pp.939-947 / 2016
  • As the number of legacy database systems and the size of the data to manipulate have vastly increased, analyzing the characteristics of data has become more difficult and complex. BI (Business Intelligence) systems are used to improve the efficiency of data analysis and to help administrators make business decisions. Constructing a data warehouse and cubes from legacy databases makes it easy and fast to transform raw data into integrated, categorized, meaningful information. In this paper, we built a BI system for university administration, integrating several source databases into a data warehouse from which data cubes were built. The implemented BI system analyzes and reports data much faster than manipulation in the legacy systems; it is especially efficient for multi-dimensional data analysis, and for single-dimensional analysis as well.
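
The kind of multi-dimensional rollup a cube provides can be sketched with pandas as a stand-in for an OLAP engine; the dimensions, measures, and figures below are invented for illustration.

```python
# Cube-style aggregation of a fact table along three dimensions
# (year, department, gender), with grand totals via margins=True.
import pandas as pd

facts = pd.DataFrame({
    "year":       [2014, 2014, 2015, 2015],
    "department": ["CS", "EE", "CS", "EE"],
    "gender":     ["F", "M", "M", "F"],
    "enrolled":   [120, 95, 130, 88],
})

cube = pd.pivot_table(facts, values="enrolled",
                      index=["year", "department"],
                      columns="gender", aggfunc="sum", margins=True)
print(cube)  # every (year, department) x gender cell plus "All" totals
```

Pre-aggregating facts into such cubes in the warehouse is what lets reports hit precomputed cells instead of scanning the legacy systems.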

Design and Implementation of Seismic Data Acquisition System using MEMS Accelerometer (MEMS형 가속도 센서를 이용한 지진 데이터 취득 시스템의 설계 및 구현)

  • Choi, Hun;Bae, Hyeon-Deok
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.6 / pp.851-858 / 2012
  • In this paper, we design and implement a seismic data acquisition system (SDAS). Such a system is essential for developing a novel local earthquake disaster prevention system for population centers. We chose a suitable MEMS-type triaxial accelerometer as the sensor, and an FPGA and an ARM processor were used to implement the system. Each module of the SDAS is realized in Verilog HDL and the C language, and ModelSim simulations were carried out to verify the performance of the important modules. The simulation results show that the FPGA-based data acquisition module guarantees accurate time synchronization of the data measured from each axis of the sensor. Moreover, the FPGA-ARM based embedded design reduces system cost by integrating the data logger, communication server, and facility control system. To evaluate the data acquisition performance of the SDAS, we performed experiments with real seismic signals generated by an exciter; comparison between the data acquired by the SDAS and by a reference sensor shows that the acquisition performance of the SDAS is valid.

Integrated Analysis of Gravity and MT data by Geostatistical Approach (지구통계학적 방법을 이용한 포텐셜 자료와 MT 자료의 복합 해석 연구)

  • Park, Gye-Soon;Oh, Seok-Hoon;Lee, Heui-Soon;Kwon, Byung-Doo;Yang, Jun-Mo
    • 한국지구물리탐사학회:학술대회논문집 / 2007.06a / pp.42-47 / 2007
  • We studied the feasibility of a geostatistical approach for enhancing the analysis of sparsely sampled MT (magnetotelluric) data by combining them with gravity data. To evaluate the approach, we examined the interrelation between geological boundaries and the density distribution, and we corrected the density distribution to make it more sensitive to geological boundaries by minimizing the difference between the z-directional variogram values of the resistivity distribution obtained from MT inversion and those of the density distribution. The method was then tested on model and field data. In the model test, the results agreed well with the true model, and for the real field data the analysis demonstrates convincingly that the geostatistical approach is effective.
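
The quantity being matched, a z-directional experimental variogram, can be computed as in the sketch below. The grid values are synthetic and the lag handling is simplified, so this is an assumed illustration rather than the paper's implementation.

```python
# Experimental variogram along z: gamma(h) = 0.5 * mean of squared
# differences between samples h grid cells apart in the z direction.
import numpy as np

def z_variogram(grid, max_lag):
    """grid: 3-D array (nx, ny, nz) of a property such as density or
    log-resistivity; returns gamma(h) for h = 1..max_lag."""
    return np.array([0.5 * np.mean((grid[:, :, h:] - grid[:, :, :-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
density = rng.normal(2.67, 0.10, size=(10, 10, 20))  # toy density model
print(z_variogram(density, max_lag=5))
```

The correction step described above adjusts the density model until its z-variogram approaches that of the resistivity model from MT inversion.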

A Development on Reliability Data Integration Program (신뢰도 데이터 합성 program의 개발)

  • Rhie, Kwang-Won;Park, Moon-Hi;Oh, Shin-Kyu;Han, Jeong-Min
    • Journal of the Korean Society of Safety / v.18 no.4 / pp.164-168 / 2003
  • Bayes' theorem, suggested by the British mathematician Bayes (18th century), enables a prior estimate of the probability of an event to be revised in light of specific observed evidence, and it has frequently been used to revise the failure probability of a component or system. The 2-stage Bayesian procedure was first published by Shultis et al. (1981) and Kaplan (1983), and was further developed in the studies of Hora & Iman (1990), Papazoglou et al., and Pörn (1993). When the number of observed failures is small (below 12), the estimated reliability of a system or component is not dependable. When reliability data for the corresponding system or component can be found in a generic reliability reference book, however, a reliable estimate of the failure probability can be obtained with Bayes' theorem, which jointly makes use of the observed data (specific data) and the data found in the reference book (generic data).
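
The updating step that combines generic and specific data can be sketched with the standard conjugate Beta-binomial form for a failure-on-demand probability. The prior parameters and observation counts below are invented, and the full 2-stage procedure additionally places a hyper-prior over the Beta parameters; this sketch shows only the single-stage update.

```python
# Beta-binomial update of a failure-on-demand probability: the generic
# handbook data are encoded as a Beta(a, b) prior; observing f failures in
# n demands gives the posterior Beta(a + f, b + n - f).
def posterior_mean_failure_prob(a, b, failures, demands):
    return (a + failures) / (a + b + demands)

# Generic data suggesting p ~ 1e-3, encoded as Beta(0.5, 499.5), then updated
# with 2 observed failures in 400 plant-specific demands.
print(posterior_mean_failure_prob(0.5, 499.5, failures=2, demands=400))
# ~0.0028: pulled between the generic 0.001 and the observed 2/400 = 0.005
```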