• Title/Summary/Keyword: Data-Integration

A Development of Trend Analysis Models and a Process Integrating with GIS for Industrial Water Consumption Using Realtime Sensing Data (실시간 공업용수 추세패턴 모형개발 및 GIS 연계방안)

  • Kim, Seong-Hoon
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.19 no.3
    • /
    • pp.83-90
    • /
    • 2011
  • The purpose of this study is to develop a series of trend analysis models for industrial water consumption and to propose a blueprint for integrating the developed models with GIS. A real-time sensing technique was adopted for consumption data acquisition: data were transmitted from the field equipment to the management server every 5 minutes. The acquired data were fitted to a selected polynomial formula, yielding a model of consumption for each day. A series of validation steps was applied to the developed models before they were finalized. The finalized models were then converted into average models representing a day's average consumption or the average daily consumption of each month. Demand pattern analyses were carried out by visualizing the derived models. The demand patterns were found to be highly consistent; therefore, daily or seasonal demand can be forecast with high confidence. The integration of the developed forecasting models with GIS as an IT tool is also proposed.
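
The abstract above describes fitting successive 5-minute consumption readings to a polynomial and averaging the resulting daily models. A minimal sketch of that kind of fit is shown below; the synthetic readings, the polynomial degree, and the RMSE check are assumptions of this illustration, not the paper's actual models.

```python
import numpy as np

# Illustrative 5-minute industrial water consumption readings for one day
# (288 samples); in the paper these arrive from field sensors in real time.
rng = np.random.default_rng(0)
minutes = np.arange(0, 24 * 60, 5)                # 288 time stamps
true_trend = 50 + 30 * np.sin(2 * np.pi * minutes / (24 * 60))
readings = true_trend + rng.normal(0, 3, minutes.size)

# Fit a polynomial trend model (the degree is chosen here only for illustration).
degree = 6
coeffs = np.polyfit(minutes, readings, degree)
trend_model = np.poly1d(coeffs)

# Validate the model against the day's data and report the fit error.
fitted = trend_model(minutes)
rmse = np.sqrt(np.mean((fitted - readings) ** 2))
print(f"RMSE of daily trend model: {rmse:.2f}")

# Averaging several such daily models would give the month's average daily pattern.
```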

FDI Technology Spillover Effect on the Influence of the Innovation Ability (FDI 기술파급효과가 혁신능력에 미치는 영향)

  • Zhang, Guannan;Jung, Yong Woo;Kim, Chul
    • International Area Studies Review
    • /
    • v.15 no.3
    • /
    • pp.451-470
    • /
    • 2011
  • Many countries are committed to attracting foreign direct investment (FDI). One strong motivation is the improvement of innovative capability through the technology spillover of FDI firms. The effect of FDI technology spillover has been widely researched not only at the country level but also at the industry level. With the evolution of globalization and the global sourcing of multinational companies, it is necessary to reexamine the relationship between the innovation ability of an industry and the spillover effect of FDI. This paper investigates the technology spillover effect of FDI on the innovation of Chinese firms. We gathered data on 34 industries from various Chinese government sources covering the period 2001-2008. Using this industry-level panel data, we set up a panel data analysis model with two explanatory variables: backward and forward integration. The analysis shows that FDI technology spillover has a significant effect on innovation through forward-integration FDI.
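
The abstract above describes an industry-level panel model with backward and forward integration as explanatory variables. The sketch below illustrates one common way to estimate such a model, a two-way fixed-effects regression; the synthetic panel, the variable names, and the choice of estimator are assumptions of this illustration rather than the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative industry-level panel (34 industries x 2001-2008), mirroring the
# structure described in the abstract; the variable names and synthetic values
# are assumptions for this sketch only.
rng = np.random.default_rng(1)
rows = [(i, t, rng.normal(), rng.normal())
        for i in range(1, 35) for t in range(2001, 2009)]
panel = pd.DataFrame(rows, columns=["industry", "year", "backward_fdi", "forward_fdi"])
panel["innovation"] = (0.2 * panel["backward_fdi"]
                       + 0.5 * panel["forward_fdi"]
                       + rng.normal(0, 1, len(panel)))

# Two-way fixed-effects regression: innovation on backward and forward
# integration spillover measures, with industry and year dummies.
model = smf.ols("innovation ~ backward_fdi + forward_fdi + C(industry) + C(year)",
                data=panel).fit()
print(model.params[["backward_fdi", "forward_fdi"]])
```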

Integration of Extended IFC-BIM and Ontology for Information Management of Bridge Inspection (확장 IFC-BIM 기반 정보모델과 온톨로지를 활용한 교량 점검데이터 관리방법)

  • Erdene, Khuvilai;Kwon, Tae Ho;Lee, Sang-Ho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.33 no.6
    • /
    • pp.411-417
    • /
    • 2020
  • To utilize building information modeling (BIM) technology at the bridge maintenance stage, it is necessary to integrate large quantities of bridge inspection and model data for object-oriented information management. This research aims to establish the benefits of utilizing extended Industry Foundation Classes (IFC)-based BIM and an ontology for bridge inspection information management. The IFC entities were extended to represent bridge objects, and a method of generating the extended IFC-based information model was proposed. The bridge inspection ontology was developed by extracting and classifying inspection concepts from the AASHTO standard; the classified concepts and their relationships were mapped to the ontology using a semantic-triples approach. Finally, the extended IFC-based BIM model was integrated with the ontology for bridge inspection data management. The effectiveness of the proposed framework was tested and verified by extracting bridge inspection data via SPARQL queries.
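
The abstract above ends with inspection data being retrieved from the ontology-linked model via SPARQL. Below is a minimal sketch of that retrieval step using rdflib; the namespace, class, and property names are invented for illustration and do not reproduce the AASHTO-based ontology or the extended IFC entities developed in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF

# A tiny, illustrative bridge-inspection graph; the namespace and the class and
# property names below are assumptions of this sketch.
BRG = Namespace("http://example.org/bridge-inspection#")
g = Graph()
g.bind("brg", BRG)

# One inspected element (e.g., a girder object from the extended IFC-BIM model)
# linked to a damage record via semantic triples.
g.add((BRG.Girder_01, RDF.type, BRG.BridgeElement))
g.add((BRG.Damage_01, RDF.type, BRG.Damage))
g.add((BRG.Damage_01, BRG.observedOn, BRG.Girder_01))
g.add((BRG.Damage_01, BRG.damageType, Literal("crack")))
g.add((BRG.Damage_01, BRG.severity, Literal("moderate")))

# Retrieve inspection data for every element with a SPARQL query.
query = """
PREFIX brg: <http://example.org/bridge-inspection#>
SELECT ?element ?type ?severity WHERE {
    ?damage brg:observedOn ?element ;
            brg:damageType ?type ;
            brg:severity ?severity .
}
"""
for element, dtype, severity in g.query(query):
    print(element, dtype, severity)
```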

Enhancing Acute Kidney Injury Prediction through Integration of Drug Features in Intensive Care Units

  • Gabriel D. M. Manalu;Mulomba Mukendi Christian;Songhee You;Hyebong Choi
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.434-442
    • /
    • 2023
  • The relationship between acute kidney injury (AKI) prediction and nephrotoxic drugs (drugs that adversely affect kidney function) has yet to be explored in the critical care setting. One contributing factor to this gap is the limited investigation of drug modalities in the intensive care unit (ICU) context, owing to the challenges of processing prescription data into corresponding drug representations and a lack of comprehensive understanding of these representations. This study addresses the gap by proposing a novel approach that leverages patient prescription data as a modality to improve existing models for AKI prediction. We base our research on Electronic Health Record (EHR) data, extracting the relevant patient prescription information and converting it into our selected drug representation, the extended-connectivity fingerprint (ECFP). Furthermore, we adopt a multimodal approach, developing machine learning models and 1D convolutional neural networks (CNNs) applied to clinical drug representations, a procedure not used in previous studies predicting AKI. The findings show a notable improvement in AKI prediction through the integration of drug embeddings with other patient cohort features. Using drug features represented as ECFP molecular fingerprints together with common cohort features such as demographics and lab test values, we achieved a considerable improvement in model performance over a baseline that does not include the drug representations, indicating that our approach enhances existing baseline techniques and highlighting the relevance of drug data for predicting AKI in the ICU setting.
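
The abstract above relies on extended-connectivity fingerprints (ECFP) as the drug representation. A small sketch of computing such fingerprints is given below; the use of RDKit, the example drugs, and the fingerprint parameters (radius 2, 1024 bits) are assumptions of this illustration, not details taken from the study.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Illustrative prescriptions: drug names mapped to SMILES strings. In the study
# the drugs come from ICU prescription records in the EHR; both the example
# drugs and the use of RDKit here are assumptions of this sketch.
prescriptions = {
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "acetaminophen": "CC(=O)Nc1ccc(O)cc1",
}

def ecfp_features(smiles: str, radius: int = 2, n_bits: int = 1024) -> np.ndarray:
    """Convert a SMILES string to an extended-connectivity fingerprint (ECFP)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(fp), dtype=np.int8)

# Stack the drug fingerprints; vectors like these would then be combined with
# demographic and lab-test features and fed to the ML / 1D-CNN models.
drug_matrix = np.stack([ecfp_features(s) for s in prescriptions.values()])
print(drug_matrix.shape)  # (2, 1024)
```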

Mathematical Foundations and Educational Methodology of Data Mining (데이터 마이닝의 수학적 배경과 교육방법론)

  • Lee Seung-Woo
    • Journal for History of Mathematics
    • /
    • v.18 no.2
    • /
    • pp.95-106
    • /
    • 2005
  • This paper investigates the concepts and methodology of data selection, cleaning, integration, transformation, and reduction, the application of data mining techniques, and model evaluation within the knowledge discovery in databases (KDD) process from a mathematical standpoint. The role and methodology of statistics in KDD are studied as a branch of mathematics. We also examine the history, mathematical background, important modeling techniques using statistics and information, practical application fields, and complete examples of data mining, as well as the differences between data mining and statistics.
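
The abstract above walks through the KDD steps of cleaning, integration, transformation, reduction, mining, and evaluation. The sketch below chains those stages on synthetic data with scikit-learn; the data set, the chosen components, and the evaluation metric are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "integrated" data set standing in for the KDD input; the columns
# and the target are assumptions made only to illustrate the listed steps.
rng = np.random.default_rng(2)
data = pd.DataFrame(rng.normal(size=(200, 6)), columns=[f"x{i}" for i in range(6)])
data.loc[rng.random(200) < 0.05, "x0"] = np.nan          # noise to be cleaned
target = (data["x1"] + data["x2"] > 0).astype(int)

# Cleaning -> transformation -> reduction -> mining, chained as one pipeline.
model = make_pipeline(
    SimpleImputer(strategy="mean"),   # cleaning: fill missing values
    StandardScaler(),                 # transformation: normalize features
    PCA(n_components=3),              # reduction: compress to 3 components
    LogisticRegression(),             # mining: fit a predictive model
)

# Model evaluation on held-out data closes the KDD loop.
X_tr, X_te, y_tr, y_te = train_test_split(data, target, random_state=0)
model.fit(X_tr, y_tr)
print("hold-out accuracy:", model.score(X_te, y_te))
```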

Cancer Genomics Object Model: An Object Model for Cancer Research Using Microarray

  • Park, Yu-Rang;Lee, Hye-Won;Cho, Sung-Bum;Kim, Ju-Han
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.29-34
    • /
    • 2005
  • The DNA microarray has become a major tool for investigating global gene expression in all aspects of cancer and biomedical research. A DNA microarray experiment generates enormous amounts of data, which are meaningful only in the context of a detailed description of the microarrays, biomaterials, and conditions under which they were generated. The MicroArray Gene Expression Data (MGED) society has established a microarray standard for the structured management of this diverse and voluminous data. MGED's MAGE-OM (MicroArray Gene Expression Object Model) is an object-oriented data model that attempts to define standard objects for gene expression. For DNA microarray analysis to be relevant to cancer research, clinical and genomic data must be combined; MAGE-OM, however, does not have an appropriate structure for describing clinical information on cancer. For the systematic integration of gene expression and clinical data, we created a new model, the Cancer Genomics Object Model.
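
The abstract above motivates an object model that ties clinical records to gene expression objects. The sketch below conveys the general idea with hypothetical Python dataclasses; the class and attribute names are invented for illustration and do not reproduce the actual Cancer Genomics Object Model or MAGE-OM.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical classes sketching how clinical information can be attached to
# MAGE-OM-style expression objects; names are illustrative only.

@dataclass
class ClinicalRecord:
    patient_id: str
    diagnosis: str
    stage: str

@dataclass
class ExpressionProfile:
    array_id: str
    gene_values: Dict[str, float]   # gene symbol -> normalized expression value

@dataclass
class CancerGenomicsCase:
    """Integrates one patient's clinical record with microarray profiles."""
    clinical: ClinicalRecord
    profiles: List[ExpressionProfile] = field(default_factory=list)

case = CancerGenomicsCase(
    clinical=ClinicalRecord("P001", "gastric carcinoma", "II"),
    profiles=[ExpressionProfile("ARR-17", {"TP53": 1.8, "ERBB2": 3.2})],
)
print(case.clinical.diagnosis, len(case.profiles))
```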

Implementation of Product Data Management System for CAD Systems by using XML-based Web Service

  • Cho, Jeoung-Sung;Yahya, Bernardo Nugroho
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2004.05a
    • /
    • pp.245-248
    • /
    • 2004
  • It is certain that the future manufacturing environment will be network-centric and spatially distributed over the Internet. Today, a wide variety of distributed computing and communication technologies are available for implementing a system for product data exchange and sharing. One of the technologies that has received the most attention for product data exchange and sharing is Product Data Management (PDM). PDM integrates and manages the data and technical documents associated with physical product components. According to previous research on PDM, it can be regarded as an integration tool spanning many different areas that ensures the right information is available to the right person at the right time and in the right form throughout the enterprise. This paper proposes PDM with a Web-enabled CAD system to demonstrate the usefulness of such a system. The system uses a web service built with Visual Studio C#/.NET to invoke the web application.
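
The abstract above proposes exchanging product data with a CAD-integrated PDM system through an XML-based web service (implemented in the paper with C#/.NET). The sketch below shows a generic client-side exchange in Python; the XML element names and the endpoint URL are hypothetical and are not the paper's actual interface.

```python
import requests
import xml.etree.ElementTree as ET

# Build a small XML product-data document of the kind a PDM client might send
# to the CAD-integrated web service; element names are assumptions.
part = ET.Element("Part", attrib={"id": "BRKT-0042"})
ET.SubElement(part, "Name").text = "Mounting bracket"
ET.SubElement(part, "Revision").text = "B"
ET.SubElement(part, "CadFile").text = "bracket_rev_b.dwg"
payload = ET.tostring(part, encoding="utf-8")

# Post the document to a (hypothetical) PDM web service endpoint.
try:
    response = requests.post(
        "http://pdm.example.com/service/checkin",   # assumed URL for illustration
        data=payload,
        headers={"Content-Type": "application/xml"},
        timeout=10,
    )
    print(response.status_code)
except requests.RequestException as exc:
    print("web service not reachable in this sketch:", exc)
```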

Building A PDM/CE Environment and Validating Integrity Using STEP (STEP을 이용한 PDM/CE환경의 구축과 데이타 무결성 확인)

  • 유상봉;서효원;고굉욱
    • The Journal of Society for e-Business Studies
    • /
    • v.1 no.1
    • /
    • pp.173-194
    • /
    • 1996
  • In order to adapt to today's short product life cycles and rapid technology changes, integrated systems should be extended to support PDM (Product Data Management) and CE (Concurrent Engineering). A PDM/CE environment has been developed, and a prototype is presented in this paper. Its features are: 1) the integrated product information model (IPIM) includes both a data model and integrity constraints; 2) database systems are organized hierarchically so that working data cannot be referenced by other application systems until it is released into the global database; and 3) integrity constraints written in EXPRESS are validated both in the local databases and in the global database. By preserving the integrity of the product data, the undesirable propagation of illegal data to other application systems can be prevented. For efficient validation, the constraints are distributed over the local and global schemata, and separate triggering mechanisms are devised based on the dependency of each constraint on three data operations: insertion, deletion, and update.
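
The abstract above keeps working data in local databases until its integrity constraints are satisfied, and only then releases it to the global database. The sketch below mimics that release step in plain Python; the example constraint and data structures are illustrative stand-ins for rules that the paper expresses in EXPRESS.

```python
# Simplified analogue of the release step described in the abstract: working
# data stays local until its integrity constraints pass, and only then is it
# copied into the global database. The constraint below is an illustrative
# stand-in for a rule that would be written in EXPRESS.

def positive_thickness(part: dict) -> bool:
    """Integrity constraint: a plate part must have a positive thickness."""
    return part.get("thickness", 0) > 0

CONSTRAINTS = [positive_thickness]

local_db = [{"id": "PLATE-1", "thickness": 4.0},
            {"id": "PLATE-2", "thickness": -1.0}]   # illegal working data
global_db = []

def release(part: dict) -> bool:
    """Release a part to the global database only if every constraint holds."""
    if all(check(part) for check in CONSTRAINTS):
        global_db.append(part)
        return True
    return False          # illegal data is kept out of the global database

for part in local_db:
    print(part["id"], "released" if release(part) else "rejected")
```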

A development of travel time estimation algorithm fusing GPS probe and loop detector (GPS probe 및 루프 검지기 자료의 융합을 통한 통행시간추정 알고리즘 개발)

  • 정연식;최기주
    • Journal of Korean Society of Transportation
    • /
    • v.17 no.3
    • /
    • pp.97-116
    • /
    • 1999
  • The growing demand for real-time traffic information is increasing the variety and number of traffic data collection mechanisms in the era of ITS. There are, however, two problems in turning such heterogeneous data into information. First, for each collection mechanism, the raw data must be reduced to representative information for the specified analysis period. Second, the representative information from each source must be fused into a single value for each link. That is, both a data-to-information process and information fusion are required. This article focuses on the development of an information fusion algorithm based on a voting technique, fuzzy regression, and Bayesian pooling for estimating dynamic link travel times over a network. The proposed algorithm was validated using field experiment data from GPS probes and roadway detectors, and the link travel times it estimates proved more useful than the simple arithmetic mean of each traffic source.
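
The abstract above fuses link travel times from GPS probes and loop detectors using voting, fuzzy regression, and Bayesian pooling. The sketch below shows only the Bayesian pooling component in its common inverse-variance form; the travel-time numbers and variances are made up for illustration.

```python
import numpy as np

def bayesian_pool(estimates, variances):
    """Fuse travel-time estimates by precision (inverse-variance) weighting.

    This is a common form of Bayesian pooling; the paper also combines it with
    voting and fuzzy regression, which are not reproduced here.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused, fused_variance

# Illustrative link travel times (seconds) for one link: GPS probes are sparse
# but direct, loop detectors dense but indirect; the numbers are invented.
gps_estimate, gps_var = 182.0, 25.0
loop_estimate, loop_var = 170.0, 64.0

fused, var = bayesian_pool([gps_estimate, loop_estimate], [gps_var, loop_var])
print(f"fused travel time: {fused:.1f} s (variance {var:.1f})")
```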

A Statistical Matching Method with k-NN and Regression

  • Chung, Sung-S.;Kim, Soon-Y.;Lee, Seung-S.;Lee, Ki-H.
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.4
    • /
    • pp.879-890
    • /
    • 2007
  • Statistical matching is a method of data integration for data sources that do not share the same units. It can rapidly produce a large amount of new information at low cost and reduce the response burden that affects data quality. This paper proposes a statistical matching technique combining k-NN (k-nearest neighbor) and regression methods. For a given observation in the recipient file, we select the k records in the donor file whose values of the common variable are most similar, and we estimate an imputed value for the recipient file using a regression model fitted in the donor file. An empirical comparison study is conducted to show the properties of the proposed method.
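
The abstract above selects k similar donor records for each recipient record and then imputes the missing variable with a regression fitted to those donors. The sketch below follows that recipe on synthetic files; the data, the value of k, and the use of a linear model are assumptions of this illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic donor and recipient files sharing a common variable X; the donor
# file additionally observes Y, which must be imputed for the recipients.
rng = np.random.default_rng(3)
donor_X = rng.normal(size=(200, 1))
donor_Y = 2.0 * donor_X[:, 0] + rng.normal(0, 0.5, 200)
recipient_X = rng.normal(size=(5, 1))

k = 10
nn = NearestNeighbors(n_neighbors=k).fit(donor_X)

imputed = []
for x in recipient_X:
    # 1) pick the k donor records closest to this recipient on the common variable
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neighbors = idx[0]
    # 2) fit a regression on those donors and predict the missing value
    reg = LinearRegression().fit(donor_X[neighbors], donor_Y[neighbors])
    imputed.append(reg.predict(x.reshape(1, -1))[0])

print(np.round(imputed, 2))
```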
