• Title/Summary/Keyword: Data-Integration

Search Results: 3,496

Concept and Structure of Parametric Object Breakdown Structure (OBS) for Practical BIM (BIM 객체분류체계 (OBS) 개념 및 구조)

  • Jung, Youngsoo;Kim, Yesol;Kim, Min;Ju, Taehwan
    • Korean Journal of Construction Engineering and Management
    • /
    • v.14 no.3
    • /
    • pp.88-96
    • /
    • 2013
  • The recent proliferation of building information modeling (BIM) has actively stimulated the integrated utilization of geometric (graphic) and non-geometric (non-graphic) data. Nevertheless, physically and logically linking these two different types of data and maintaining them in an integrated manner requires enormous overhead for practical implementation. To address this problem, this paper proposes a concept and structure for an object breakdown structure (OBS) that facilitates advanced BIM implementation in an automated and effective manner. The proposed OBS numbering system has well-defined rules for organizing graphic objects, with full consideration of effective integration with non-geometric data (e.g., cost and schedule). It also removes the repetitive linking process caused by design changes or modifications. Applying this concept to a housing project revealed that only 120 definitions controlled over 6,000 graphic objects for full integration with cost and schedule functions.
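The rule-driven linking the abstract describes (a small set of OBS definitions controlling thousands of graphic objects) can be sketched as prefix-rule matching. The codes, rule table, and field names below are hypothetical illustrations, not the paper's actual numbering system.

```python
# Hypothetical sketch: a few OBS prefix rules classify many graphic objects,
# so cost/schedule links survive design changes without manual relinking.

OBS_RULES = {
    "ST-COL": {"cost_item": "C-1100", "activity": "A-STRUCT-COL"},
    "ST-BM":  {"cost_item": "C-1200", "activity": "A-STRUCT-BEAM"},
    "AR-WL":  {"cost_item": "C-2100", "activity": "A-ARCH-WALL"},
}

def link_object(obs_code: str) -> dict:
    """Resolve a graphic object's OBS code to cost/schedule items by prefix."""
    for prefix, links in OBS_RULES.items():
        if obs_code.startswith(prefix):
            return links
    raise KeyError(f"No OBS rule covers {obs_code!r}")

# Any object whose code matches a rule is linked automatically; adding a new
# column object needs no new mapping entry.
objects = ["ST-COL-03-017", "ST-BM-02-004", "AR-WL-01-112"]
links = {o: link_object(o) for o in objects}
```

Because the link is derived from the code rather than stored per object, a modified or regenerated object is relinked simply by re-evaluating the rule.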

A Study on the Digital Electronic Compass by Integration of GPS Receiver and Earth's Magnetic Field Sensor (GPS수신기와 지자기센서 병행식 디지털 전자콤파스에 대한 연구)

  • Yun, Jae-Jun;Park, Gyei-Kark;Choi, Jo-Cheon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.168-172
    • /
    • 2005
  • An autopilot system is very important for safe and convenient ship navigation; it is realized by obtaining azimuth data from a gyrocompass, a magnetic compass, or a GPS (Global Positioning System) compass. In a magnetic compass, azimuth error is generated by magnetic materials in the vessel, such as steel. The magnetic field sensor detects the magnetic pole, which does not coincide with true north; the detected azimuth data therefore inevitably contains error. In this paper, in order to detect the minimum change of azimuth data that generates errors in the azimuth information, a search algorithm based on Kalman filtering is utilized. The digital electronic compass is designed with an integration algorithm that combines the merits of an earth's magnetic field sensor and a GPS receiver.
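The kind of sensor blending described above can be illustrated with a minimal scalar Kalman filter that fuses noisy magnetic-sensor headings with occasional, more trusted GPS-derived headings. This is a generic textbook sketch; the noise values and readings are invented and do not reproduce the paper's filter design.

```python
# Minimal sketch (not the paper's algorithm): a scalar Kalman filter blending
# magnetic-sensor headings with intermittent GPS headings.

def kalman_update(x, p, z, r):
    """One measurement update: state x, variance p, measurement z, noise r."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected heading estimate
    p = (1 - k) * p          # reduced uncertainty
    return x, p

x, p = 90.0, 10.0            # initial heading estimate (degrees) and variance
q = 0.5                      # process noise added per step

mag = [92.0, 91.5, 93.0, 92.2]   # magnetic sensor readings (biased, noisy)
gps = [90.3, None, 90.1, None]   # GPS headings, available only sometimes

for m, g in zip(mag, gps):
    p += q                                    # predict: uncertainty grows
    x, p = kalman_update(x, p, m, r=4.0)      # magnetic sensor: larger noise
    if g is not None:
        x, p = kalman_update(x, p, g, r=1.0)  # GPS heading: more trusted

estimate = x
```

Whenever a GPS heading arrives, its lower noise variance pulls the estimate toward true north, compensating the magnetic sensor's deviation.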


Measuring Technique For Acoustic Roughness of Rail Surface With Homogeneous Displacement Sensors (동일 변위센서를 사용한 레일표면 음향조도의 측정방법)

  • Jeong, Wootae;Jang, Seungho;Kho, Hyo-In
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.11
    • /
    • pp.7941-7948
    • /
    • 2015
  • Rolling noise during train operation is caused by vibration excited by irregular surface roughness between wheel and rail. Proper measurement and analysis techniques for the acoustic roughness of wheel and rail surfaces are therefore required for the transmission, prediction, and analysis of train rolling noise. However, since current measuring devices and methods rely on manually handled trolley-based devices, measurements suffer from unstable measuring speed and vibrational interference, which increase errors and disturbances. In this paper, a new automatic rail-surface exploring platform with a speed controller is developed to improve measurement accuracy and reduce measurement inconsistency. In addition, we propose a data integration method for rail surface roughness with multiple homogeneous displacement sensors and verify the accuracy of the integrated data through investigation on a standard test-bed railway track.
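One simple way to integrate readings from several identical displacement sensors is to remove each sensor's mounting offset and average the aligned traces, suppressing uncorrelated noise. The paper's actual integration method is not detailed in the abstract, so the following is only an illustrative sketch under that assumption.

```python
# Illustrative sketch: combine traces from homogeneous displacement sensors
# by removing each sensor's offset, then averaging sample by sample.

def integrate_sensors(signals):
    """signals: list of equal-length displacement traces, one per sensor."""
    n = len(signals[0])
    aligned = []
    for s in signals:
        offset = sum(s) / len(s)          # per-sensor mounting offset
        aligned.append([v - offset for v in s])
    # average across sensors, sample by sample
    return [sum(s[i] for s in aligned) / len(aligned) for i in range(n)]

s1 = [1.0, 1.2, 0.9, 1.1]   # sensor 1: roughness profile around offset ~1.0
s2 = [5.0, 5.2, 4.9, 5.1]   # sensor 2: same profile, different offset
combined = integrate_sensors([s1, s2])
```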

A Comparison of Data Extraction Techniques and an Implementation of Data Extraction Technique using Index DB -S Bank Case- (원천 시스템 환경을 고려한 데이터 추출 방식의 비교 및 Index DB를 이용한 추출 방식의 구현 -ㅅ 은행 사례를 중심으로-)

  • 김기운
    • Korean Management Science Review
    • /
    • v.20 no.2
    • /
    • pp.1-16
    • /
    • 2003
  • Previous research on data extraction and integration for data warehousing has concentrated mainly on relational DBMSs, or partly on object-oriented DBMSs. It mostly describes issues related to change-data (delta) capture and incremental update using the triggering capability of active database systems. Little attention has been paid, however, to data extraction from other types of source systems, such as hierarchical DBMSs, or from source systems without triggering capability. This paper argues, from a practical point of view, that in order to find appropriate data extraction techniques for different source systems, we need to consider not only the types of information sources and the capabilities of ETT tools but also other factors of the source systems, such as operational characteristics (i.e., whether they support a DBMS log, a user log, no log, or timestamps) and DBMS characteristics (i.e., whether they have triggering capability). Having applied several different data extraction techniques (e.g., DBMS log, user log, triggering, timestamp-based extraction, and file comparison) to S bank's source systems (e.g., IMS, DB2, ORACLE, and SAM files), we discovered that the data extraction techniques available in a commercial ETT tool do not completely support extraction from the DBMS log of an IMS system. For such IMS systems, a new data extraction technique is proposed that first creates an index database and then updates the data warehouse using it. We illustrate this technique with an example application.
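One of the extraction techniques the paper compares, timestamp-based extraction, can be sketched as follows: only rows modified after the previous run's watermark are pulled into staging. The row layout and field names are invented for illustration.

```python
# Illustrative sketch of timestamp-based change-data capture: pull only rows
# whose last-modified timestamp exceeds the previous extraction watermark.

def extract_changes(source_rows, last_extracted_ts):
    """Return rows changed since the previous run, plus the new watermark."""
    deltas = [r for r in source_rows if r["updated_at"] > last_extracted_ts]
    new_ts = max((r["updated_at"] for r in deltas), default=last_extracted_ts)
    return deltas, new_ts

rows = [
    {"id": 1, "balance": 100, "updated_at": 10},
    {"id": 2, "balance": 250, "updated_at": 25},
    {"id": 3, "balance": 80,  "updated_at": 31},
]
deltas, watermark = extract_changes(rows, last_extracted_ts=20)
```

This technique requires only a reliable timestamp column on the source, which is why the paper lists source-system characteristics (log support, timestamps, triggers) as the deciding factors in choosing a method.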

Best Use of the Measured Earthquake Data (지진관측자료의 효과적인 활용에 관한 고찰)

  • 연관희;박동희;김성주;최원학;장천중
    • Proceedings of the Earthquake Engineering Society of Korea Conference
    • /
    • 2001.09a
    • /
    • pp.36-43
    • /
    • 2001
  • In Korea, we are critically short of good-quality earthquake data from moderate and large earthquakes, which are needed for the study of strong ground motion characteristics. This means that the best use of the available data is needed for the time being. In this respect, several methods that can be applied in the process of data selection and analysis are suggested in this paper. First, it is shown that the calibration status of seismic stations can easily be checked by comparing the spectra from an accelerometer and a velocity sensor located at the same site. Secondly, it is recommended that the S/N ratio in the frequency domain be checked before discarding data based only on their appearance in the time domain. Thirdly, earthquake data saturated because the ground motion level exceeded the detection limit of the seismograph are examined by numerical simulation to see whether such data can still be used for spectrum analysis. The result reveals that saturated data can still be used within the dominant frequency range, depending on the level of saturation. Finally, a technique is suggested to minimize the window effect that distorts the low-frequency spectrum. This technique involves detrending in the displacement domain once displacement data are obtained by integrating the low-frequency components of the original time-domain data. The low-frequency component can be separated using, among many alternatives, the discrete wavelet transform. All of the methods mentioned above may increase the amount of usable earthquake data and the usable frequency range.
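The final correction step, integrating to displacement and then detrending in the displacement domain, can be sketched numerically. A least-squares linear fit stands in here for the paper's wavelet-based separation, which this sketch does not reproduce; the signal and sampling values are invented.

```python
# Rough sketch of displacement-domain detrending: integrate a velocity trace
# (with a small bias causing drift), then subtract the fitted drift line.

import math

def detrend_displacement(vel, dt):
    """Integrate velocity to displacement, then remove the linear drift."""
    disp, total = [], 0.0
    for v in vel:
        total += v * dt              # crude rectangular integration
        disp.append(total)
    n = len(disp)
    t = [i * dt for i in range(n)]
    mt, md = sum(t) / n, sum(disp) / n
    slope = (sum((ti - mt) * (di - md) for ti, di in zip(t, disp))
             / sum((ti - mt) ** 2 for ti in t))
    intercept = md - slope * mt      # least-squares drift line
    return [di - (slope * ti + intercept) for ti, di in zip(t, disp)]

dt = 0.01
t = [i * dt for i in range(200)]
vel = [math.sin(2 * math.pi * ti) + 0.05 for ti in t]   # 1 Hz motion + bias
disp = detrend_displacement(vel, dt)
```

After detrending, the drift induced by the bias is gone and the oscillatory displacement remains bounded, which is the low-frequency distortion the paper aims to remove.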


Integration of TMA-OM (Tissue Microarray Object Model) and Major Genomic Information (TMA-OM(Tissue Microarray Object Model)과 주요 유전체 정보 통합)

  • Kim Ju-Han
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2006.02a
    • /
    • pp.30-36
    • /
    • 2006
  • Tissue microarray (TMA) is an array-based technology allowing the examination of hundreds of tissue samples on a single slide. To handle, exchange, and disseminate TMA data, we need standard representations of the methods used, of the data generated, and of the clinical and histopathological information related to TMA data analysis. This study aims to create a comprehensive data model with flexibility that supports diverse experimental designs and with expressivity and extensibility that enables an adequate and comprehensive description of new clinical and histopathological data elements. We designed a Tissue Microarray Object Model (TMA-OM). Both the Array Information and the Experimental Procedure models are created by referring to Microarray Gene Expression Object Model, Minimum Information Specification For In Situ Hybridization and Immunohistochemistry Experiments (MISFISHIE), and the TMA Data Exchange Specifications (TMA DES). The Clinical and Histopathological Information model is created by using CAP Cancer Protocols and National Cancer Institute Common Data Elements (NCI CDEs). MGED Ontology, UMLS and the terms extracted from CAP Cancer Protocols and NCI CDEs are used to create a controlled vocabulary for unambiguous annotation. We implemented a web-based application for TMA-OM, supporting data export in XML format conforming to the TMA DES or the DTD derived from TMA-OM. TMA-OM provides a comprehensive data model for storage, analysis and exchange of TMA data and facilitates model-level integration of other biological models.
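The XML export the abstract mentions can be illustrated with Python's standard `xml.etree.ElementTree`. The element and attribute names below are invented for illustration and do not reproduce the TMA DES schema or the DTD derived from TMA-OM.

```python
# Hypothetical sketch of exporting TMA core data to XML; tag names are
# illustrative only, not the TMA DES specification.

import xml.etree.ElementTree as ET

def export_tma(block_id, cores):
    root = ET.Element("TMABlock", id=block_id)
    for core in cores:
        c = ET.SubElement(root, "Core", id=core["id"])
        ET.SubElement(c, "Diagnosis").text = core["diagnosis"]
        ET.SubElement(c, "StainResult").text = core["stain"]
    return ET.tostring(root, encoding="unicode")

xml_doc = export_tma("B-01", [
    {"id": "C-01", "diagnosis": "adenocarcinoma", "stain": "positive"},
    {"id": "C-02", "diagnosis": "normal", "stain": "negative"},
])
root = ET.fromstring(xml_doc)   # round-trip check: parse the exported XML
```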


Bias Correction of Satellite-Based Precipitation Using Convolutional Neural Network

  • Le, Xuan-Hien;Lee, Gi Ha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.120-120
    • /
    • 2020
  • Spatial precipitation data is one of the essential components in modeling hydrological problems. The estimation of these data has seen significant progress owing to recent advances in remote sensing technology. However, gaps remain between satellite-derived rainfall data and observed data, because rainfall depends strongly on spatial and temporal characteristics. This study proposes an effective approach based on a Convolutional Neural Network (CNN) model to correct satellite-derived rainfall data. The Mekong River basin, one of the largest river systems in the world, was selected as a case study. The two gridded precipitation data sets with a spatial resolution of 0.25 degrees used in the CNN model are APHRODITE (Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation) and PERSIANN-CDR (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks). In particular, PERSIANN-CDR data is used as the satellite-based precipitation data and APHRODITE data is taken as the observed rainfall data. In addition to the CNN model, another statistical method for precipitation bias correction, based on standard deviations, is also examined. The results indicate that the CNN model performs better in both spatial and temporal correlation than the standard-deviation method. The findings indicate that the CNN model can produce reliable estimates for the gridded precipitation bias-correction problem.
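The standard-deviation baseline the abstract compares against can be sketched as variance scaling: shift and rescale the satellite series so its mean and standard deviation match the gauge reference. The rainfall values below are invented; this is the generic method, not the study's exact implementation.

```python
# Sketch of standard-deviation (variance-scaling) bias correction: map the
# satellite series onto the observed series' mean and standard deviation.

def std_bias_correct(satellite, observed):
    def mean(x):
        return sum(x) / len(x)
    def std(x):
        m = mean(x)
        return (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5

    ms, ss = mean(satellite), std(satellite)
    mo, so = mean(observed), std(observed)
    # shift and scale each satellite estimate onto the observed distribution
    return [mo + (v - ms) * (so / ss) for v in satellite]

sat = [2.0, 4.0, 6.0, 8.0]   # biased satellite estimates (mm/day)
obs = [1.0, 2.0, 3.0, 4.0]   # gauge observations (mm/day)
corrected = std_bias_correct(sat, obs)
```

Unlike the CNN, this correction is purely distributional: it cannot exploit spatial neighborhood patterns, which is consistent with the study's finding that the CNN achieves better spatial and temporal correlation.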


Empirical Investigation of User Behavior for Financial Mydata: The Moderating Effects of Organizational Information Transparency and Data Security Policy (금융마이데이터 사용자 행동에 관한 실증 연구: 기관정보투명성, 데이터 보안정책의 조절효과)

  • Sohn, Chang Yong;Park, Hyun Sun;Kim, Sang Hyun
    • The Journal of Information Systems
    • /
    • v.32 no.3
    • /
    • pp.85-116
    • /
    • 2023
  • Purpose: The importance of data as a key resource of the intelligence revolution is being highlighted, and among related phenomena MyData is attracting attention from organizations and individuals as a key concept that will eventually lead the data economy. In this regard, this study was conducted to contribute to the successful settlement and continuous growth of the domestic MyData industry, which has only recently been institutionalized. Design/methodology/approach: To develop and test the proposed causal relationships within the research model, we used the Value-Attitude-Behavior (VAB) model as the basic framework. A total of 385 responses were used for the final analysis; SPSS 25.0, MS-Excel 2016, and AMOS 24.0 were used to summarize respondent demographic characteristics and to evaluate the measurement and structural models. Findings: All proposed hypotheses were supported, with the exception of the moderating effect of organizational information transparency between data controllability and perceived value, and between data controllability and attitude toward the MyData service.

A NoSQL data management infrastructure for bridge monitoring

  • Jeong, Seongwoon;Zhang, Yilan;O'Connor, Sean;Lynch, Jerome P.;Sohn, Hoon;Law, Kincho H.
    • Smart Structures and Systems
    • /
    • v.17 no.4
    • /
    • pp.669-690
    • /
    • 2016
  • Advances in sensor technologies have led to the instrumentation of sensor networks for bridge monitoring and management. For a dense sensor network, an enormous amount of sensor data is collected, which must be managed, processed, and interpreted. Data management issues are therefore of prime importance for a bridge management system. This paper describes a data management infrastructure for bridge monitoring applications. Specifically, NoSQL database systems such as MongoDB and Apache Cassandra are employed to handle time-series data as well as the unstructured bridge information model data. Standard XML-based modeling languages such as OpenBrIM and SensorML are adopted to manage semantically meaningful data and to support interoperability. Data interoperability and integration among the different components of a bridge monitoring system, including on-site computers, a central server, local computing platforms, and mobile devices, are illustrated. The data management framework is demonstrated using the data collected from the wireless sensor network installed on the Telegraph Road Bridge, Monroe, MI.
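A document-oriented store like MongoDB typically holds such time-series data as one document per sensor per time window. The layout below, emulated with plain Python dictionaries so it runs without a database, uses invented field names and is only a sketch of the idea, not the paper's actual schema.

```python
# Illustrative document layout for sensor time-series, mimicking how a NoSQL
# document store might bucket samples per sensor and time window.

documents = [
    {"sensor": "ACC-01", "window_start": 0,   "samples": [0.01, 0.02, 0.015]},
    {"sensor": "ACC-01", "window_start": 100, "samples": [0.03, 0.01, 0.02]},
    {"sensor": "STR-04", "window_start": 0,   "samples": [1.2, 1.3, 1.25]},
]

def query(docs, sensor, start, end):
    """Emulate a range query on (sensor, window_start), as an index would."""
    return [d for d in docs
            if d["sensor"] == sensor and start <= d["window_start"] < end]

hits = query(documents, "ACC-01", 0, 50)
```

Bucketing samples into window documents keeps writes append-friendly and lets a compound index on (sensor, window start) serve the time-range queries a monitoring dashboard issues.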

XML-Based Network Services for Real-Time Process Data (실시간 공정 데이터를 위한 XML 기반 네트워크 서비스)

  • Choo, Young-Yeol;Song, Myoung-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.2
    • /
    • pp.184-190
    • /
    • 2008
  • This paper describes an XML (eXtensible Markup Language)-based message model for presenting real-time data from sensors and instruments in manufacturing processes as a web service. HTML (Hyper Text Markup Language) is suitable for displaying non-real-time multimedia data on the web but inadequate for describing real-time data from process control plants. For XML-based web service of process data, an XML format for data presentation was proposed after investigating the data of various instruments at steel-making plants. Considering the transmission delay inevitably caused by increased message length and the processing delay of transforming raw data into the defined format, both critical for the operation of a real-time system, performance was evaluated by simulation. In the simulation, we assumed two implementation models for the transformation function. In one model, transformation was done at an SCC (Supervisory Control Computer) after receiving real-time data from the instruments. In the other, transformation was carried out at the instruments before the data were transmitted to the SCC. Various tests were conducted under different offered loads and data lengths, and their results are described.
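The kind of XML message such a model produces can be sketched with Python's standard library. The tag names, instrument identifier, and fields below are illustrative assumptions, not the format actually proposed in the paper.

```python
# Hedged sketch of an XML message carrying one real-time process value;
# element and attribute names are invented for illustration.

import xml.etree.ElementTree as ET

def build_message(tag_id, value, unit, timestamp):
    msg = ET.Element("ProcessData")
    ET.SubElement(msg, "Tag").text = tag_id            # instrument identifier
    ET.SubElement(msg, "Value", unit=unit).text = str(value)
    ET.SubElement(msg, "Timestamp").text = timestamp
    return ET.tostring(msg, encoding="unicode")

raw = build_message("FURNACE-TEMP-07", 1536.2, "degC", "2008-01-15T09:30:00")
parsed = ET.fromstring(raw)   # receiver side: parse the message back
```

The extra markup around each value is exactly the message-length overhead whose transmission and transformation delay the paper evaluates, whether the encoding happens at the instrument or at the SCC.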