• Title/Summary/Keyword: common data model

Model Algorithms for Estimates of Inhalation Exposure and Comparison between Exposure Estimates from Each Model (흡입 노출 모델 알고리즘의 구성과 시나리오 노출량 비교)

  • Park, Jihoon;Yoon, Chungsik
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.29 no.3 / pp.358-367 / 2019
  • Objectives: This study aimed to review the model algorithms and input parameters applied in several exposure models and to compare the estimates simulated from a common exposure scenario across the models. Methods: A total of five exposure models capable of estimating inhalation exposure were selected: the Korea Ministry of Environment (KMOE) exposure model, the European Centre for Ecotoxicology and Toxicology of Chemicals Targeted Risk Assessment (ECETOC TRA) model, SprayExpo, ConsExpo, and CEM. The algorithms and input parameters used for exposure estimation were reviewed, and a common exposure scenario was used to compare the modeled estimates. Results: The algorithms of the models commonly consist of functions combining physicochemical properties, use characteristics, user exposure factors, and environmental factors. The outputs, air concentration (mg/m³) and inhaled dose (mg/kg/day), are estimated by applying the input parameters, including these common factors, to each algorithm. The required input parameters, however, differ considerably among the models, and each model needs additional model-specific parameters beyond the common factors. CEM in particular can produce more detailed exposure estimates by separating the user's breathing zone (near-field) from the surrounding zone (far-field) with a two-box model. The exposure estimates obtained from the common scenario were similar across the models, ranging from 0.82 to 1.38 mg/m³ for air concentration and from 0.015 to 0.180 mg/kg/day for inhaled dose. Conclusions: Modeling can be a useful tool in the exposure assessment process when measured exposure data are scarce, but input parameters and exposure scenarios that properly reflect real exposure conditions must be chosen.
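
A minimal sketch of the two-box (near-field/far-field) calculation mentioned for CEM above, assuming constant emission into the near field, well-mixed boxes, and illustrative parameter values that are not taken from the paper:

```python
# Minimal two-box (near-field/far-field) inhalation exposure sketch.
# All parameter values are illustrative assumptions, not the paper's inputs.

def two_box_concentrations(t_end_min=60.0, dt_min=0.01,
                           emission_mg_min=5.0,   # emission rate into the near field
                           v_nf_m3=1.0,           # near-field volume
                           v_ff_m3=30.0,          # far-field (room) volume
                           beta_m3_min=5.0,       # near/far-field air exchange rate
                           q_m3_min=0.5):         # room ventilation rate
    """Return near- and far-field concentrations (mg/m^3) after t_end_min."""
    c_nf = c_ff = 0.0
    for _ in range(int(t_end_min / dt_min)):
        # Mass balance for each well-mixed box, advanced by simple Euler steps.
        dc_nf = (emission_mg_min + beta_m3_min * (c_ff - c_nf)) / v_nf_m3
        dc_ff = (beta_m3_min * (c_nf - c_ff) - q_m3_min * c_ff) / v_ff_m3
        c_nf += dc_nf * dt_min
        c_ff += dc_ff * dt_min
    return c_nf, c_ff

if __name__ == "__main__":
    near, far = two_box_concentrations()
    print(f"near-field: {near:.2f} mg/m^3, far-field: {far:.2f} mg/m^3")
```

An inhaled dose in mg/kg/day would then follow by multiplying a concentration by an inhalation rate and exposure duration and dividing by body weight, in line with the common factors listed in the abstract.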

Acoustic Full-waveform Inversion Strategy for Multi-component Ocean-bottom Cable Data (다성분 해저면 탄성파 탐사자료에 대한 음향파 완전파형역산 전략)

  • Hwang, Jongha;Oh, Ju-Won;Lee, Jinhyung;Min, Dong-Joo;Jung, Heechul;Song, Youngsoo
    • Geophysics and Geophysical Exploration / v.23 no.1 / pp.38-49 / 2020
  • Full-waveform inversion (FWI) is an optimization process that fits observed and modeled data to reconstruct high-resolution subsurface physical models. In acoustic FWI (AFWI), pressure data acquired with marine streamers have mainly been used to reconstruct subsurface P-wave velocity models. With recent advances in marine seismic-acquisition techniques, acquiring multi-component data in marine environments has become increasingly common, so AFWI strategies must be developed to use marine multi-component data effectively. Herein, we propose an AFWI strategy using horizontal and vertical particle-acceleration data. By analyzing the modeled acoustic data and conducting sensitivity kernel analysis, we first investigated the characteristics of each data component in AFWI. Common-shot gathers show that the direct, diving, and reflection waves appearing in the pressure data are separated between the components of the particle-acceleration data. Sensitivity kernel analyses show that the horizontal particle-acceleration wavefields typically contribute to recovering the long-wavelength structures in the shallow part of the model, whereas the vertical particle-acceleration wavefields are generally required to reconstruct long- and short-wavelength structures in the deep parts and over the whole area of a given model. Finally, we present a sequential-inversion strategy for using the particle-acceleration wavefields. We believe that this approach can be used to reconstruct a reasonable P-wave velocity model even when pressure data are not available.
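
The FWI optimization loop described above, iteratively updating a model to reduce the misfit between observed and modeled data, can be illustrated with a toy example. The sketch below assumes a hypothetical one-parameter forward model and a numerical gradient purely for illustration; it does not implement wave-equation modeling or the paper's particle-acceleration strategy.

```python
import numpy as np

# Toy FWI-style loop: gradient descent on an L2 data misfit. The forward model
# is a hypothetical stand-in for wave-equation modeling (illustration only).

def forward(velocity, t):
    """Hypothetical forward model: a constant traveltime-like trace for a 1 km layer."""
    return 1000.0 / velocity + 0.0 * t

def misfit(velocity, t, d_obs):
    residual = forward(velocity, t) - d_obs
    return 0.5 * float(np.sum(residual ** 2))

t = np.linspace(0.0, 1.0, 50)
d_obs = forward(2000.0, t)       # "observed" data synthesized from a 2000 m/s model

v = 1500.0                       # initial velocity model
step, eps = 2.0e5, 1.0e-3
for _ in range(200):
    # Central-difference gradient of the misfit with respect to the model parameter.
    grad = (misfit(v + eps, t, d_obs) - misfit(v - eps, t, d_obs)) / (2.0 * eps)
    v -= step * grad             # gradient-descent model update
print(f"recovered velocity ~ {v:.1f} m/s")
```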

An Application Method of Plotting Original Data (도화원도의 활용방안)

  • Lee, Yong-Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.5 / pp.441-448 / 2011
  • Recently, digital restitution using digital aerial photos has become common, making it possible to obtain three-dimensional data. Because the plotting results are checked visually by the operator, plotting original data are very useful for producing reliable three-dimensional data, including contour and elevation-point layers. In this study, we aimed to build a precise and accurate digital elevation model using plotting original data. Contour and elevation-point layers were extracted from the digital map, break lines were extracted from the plotting original data, and the two results were compared. For the comparison, we selected areas of gentle slope and of complex topography, such as residential areas, mountains, and agricultural land. Break lines were extracted, deleting layers until an ideal digital elevation model was obtained. As a result, we could extract contour, elevation-point, eight road, and two boundary layers using break lines and obtain a precise elevation model. By editing the break lines, distortion of the digital elevation model could be minimized in complex and steep-slope areas.
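
Generating a DEM from the extracted contour and elevation-point layers amounts to interpolating scattered elevation samples onto a regular grid. The sketch below is a minimal, hypothetical example of that interpolation step using SciPy's griddata; the sample points and grid spacing are illustrative assumptions, and the paper's break-line editing is not reproduced.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered elevation samples, e.g. points taken from contour and
# elevation-point layers (illustrative values, not data from the paper).
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(200, 2))           # easting/northing in metres
z = 50.0 + 0.3 * xy[:, 0] - 0.1 * xy[:, 1]            # synthetic terrain surface

# Regular DEM grid at 1 m spacing.
gx, gy = np.meshgrid(np.arange(0.0, 100.0, 1.0),
                     np.arange(0.0, 100.0, 1.0))

# Linear interpolation of scattered points onto the grid; cells outside the
# convex hull of the samples are filled with nearest-neighbour values.
dem = griddata(xy, z, (gx, gy), method="linear")
holes = np.isnan(dem)
dem[holes] = griddata(xy, z, (gx, gy), method="nearest")[holes]

print(dem.shape, float(dem.min()), float(dem.max()))
```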

Is a General Quality Model of Software Possible: Playability versus Usability?

  • Koh, Seokha;Jiang, Jialei
    • Journal of Information Technology Applications and Management / v.27 no.2 / pp.37-50 / 2020
  • This paper is exploratory and addresses the question 'Is a general quality model of software possible, and if so, how specific can or should it be?' The ISO 25000 series (SQuaRE) is generally regarded as a general quality model that can be applied to most kinds of software. Usability is one of the eight characteristics of SQuaRE's Product Quality Model and is also the main issue in SQuaRE's Quality in Use Model; it is the most important concept associated with software quality, since use is the ultimate goal of software products. Playability, however, is generally regarded as a special type of usability that applies to game software. This common idea contradicts the idea that SQuaRE is valid for most, or at least many, kinds of software. The empirical evidence in this paper shows that SQuaRE is too specific to be a general quality model of software.

A dynamic Bayesian approach for probability of default and stress test

  • Kim, Taeyoung;Park, Yousung
    • Communications for Statistical Applications and Methods / v.27 no.5 / pp.579-588 / 2020
  • Obligor defaults are cross-sectionally correlated because obligors share common economic conditions; they are also longitudinally correlated, so that an economic shock such as the 1998 IMF crisis lasts for a period of time. This longitudinal correlation should be used to construct statistical stress-test scenarios, replacing the artificial scenarios that banks have typically used. We propose a Bayesian model that accommodates both correlation structures. Using data on 402 obligors of a domestic bank in Korea, our model with dynamic correlation is compared to a Bayesian model with stationary longitudinal correlation and to the classical logistic regression model. Our model generates statistical financial statements under a stress situation for each individual obligor, so that the generated statements produce a distribution of credit grades similar to that observed during the IMF crisis and comply with the Basel IV (Basel Committee on Banking Supervision, 2017) requirement that credit grades under a stress situation not be sensitive to the business cycle.
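
The role of the longitudinal correlation can be illustrated by a small simulation contrasted with the classical logistic regression benchmark mentioned above. The sketch below is purely hypothetical: it simulates defaults driven by a common AR(1) economic factor and fits a pooled logistic regression that ignores that factor; all parameter values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

# Hypothetical default simulation with a common AR(1) economic factor,
# mimicking cross-sectional and longitudinal correlation (illustrative values).
rng = np.random.default_rng(1)
n_obligors, n_periods = 402, 40
rho, sigma = 0.8, 0.5                       # persistence and shock size of the factor

f = np.zeros(n_periods)                     # common economic factor
for t in range(1, n_periods):
    f[t] = rho * f[t - 1] + sigma * rng.normal()

quality = rng.normal(0.0, 1.0, n_obligors)            # obligor-specific covariate
logit = -3.0 + 1.0 * quality[None, :] + 1.5 * f[:, None]
pd_true = 1.0 / (1.0 + np.exp(-logit))                 # true default probabilities
defaults = rng.binomial(1, pd_true)                    # period x obligor outcomes

# Classical logistic regression benchmark: pool all periods, ignore the factor.
X = np.column_stack([np.ones(n_obligors * n_periods),
                     np.tile(quality, n_periods)])
y = defaults.ravel()
beta = np.zeros(2)
for _ in range(50):                                    # Newton-Raphson for the MLE
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

print("pooled logistic intercept/slope:", np.round(beta, 3))
print("average PD per period varies from",
      round(float(pd_true.mean(axis=1).min()), 4), "to",
      round(float(pd_true.mean(axis=1).max()), 4))
```

The spread of the per-period average PD shows the business-cycle effect that a pooled logistic model averages away, which is the kind of variation a dynamic correlation structure is meant to capture.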

Adaptive Reconstruction of Multi-periodic Harmonic Time Series with Only Negative Errors: Simulation Study

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.26 no.6 / pp.721-730 / 2010
  • In satellite remote sensing, irregular temporal sampling is a common feature of observations of geophysical and biological processes on the earth's surface. Lee (2008) proposed a feedback system that uses a single-period harmonic model to adaptively reconstruct observed image series contaminated by noise resulting from mechanical problems or environmental conditions. However, a simple single-period sinusoidal model may not be appropriate for the temporal physical processes of the land surface; a model with multiple periods would better represent the inter-annual and intra-annual variations of surface parameters. This study extends the adaptive system to a multi-periodic harmonic model, expressed as the sum of a series of sine waves. To assess the system, simulation data were generated from a model with only negative errors, based on the fact that observations are mainly suppressed by bad weather. The experimental results of this simulation study show the potential of the proposed system for real-time monitoring of image series observed with imperfect sensing technology in environments frequently affected by bad weather.
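
Once the periods are fixed, the multi-periodic harmonic model described above is linear in its sine and cosine coefficients and can be fitted by ordinary least squares. The sketch below is a minimal illustration under that assumption, with illustrative annual and semi-annual periods and synthetic negatively biased noise; it does not reproduce the paper's adaptive feedback system.

```python
import numpy as np

# Fit y(t) = a0 + sum_k [a_k cos(2*pi*t/P_k) + b_k sin(2*pi*t/P_k)] by ordinary
# least squares. Periods and noise are illustrative, not values from the paper.
rng = np.random.default_rng(2)
periods = np.array([365.0, 182.5])                 # annual and semi-annual cycles (days)

t = np.sort(rng.uniform(0.0, 730.0, 120))          # irregular sampling over two years
truth = 0.5 + 0.3 * np.sin(2 * np.pi * t / 365.0) + 0.1 * np.cos(2 * np.pi * t / 182.5)
y = truth - rng.exponential(0.05, t.size)          # only negative errors (e.g. clouds)

# Design matrix: intercept plus cosine/sine columns for each period.
cols = [np.ones_like(t)]
for p in periods:
    cols += [np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares harmonic coefficients
reconstructed = A @ coef
print("coefficients:", np.round(coef, 3))
print("RMS residual:", round(float(np.sqrt(np.mean((y - reconstructed) ** 2))), 4))
```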

An Efficient Machine Learning Model for Clinical Support to Predict Heart Disease

  • Rao, B.Vara Prasada;Reddy, B.Satyanarayana;Padmaja, I. Naga;Kumar, K. Ashok
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.223-229 / 2022
  • Early detection can help prevent heart disease, which is one of the most common causes of death. This paper presents a clinical support model for predicting cardiac disease. The model is built using two publicly available data sets, and its admissibility and applicability are justified by a sequence of tests. The implementation and testing of the model are also discussed.

Comparative Analysis and Implications of Command and Control(C2)-related Information Exchange Models (지휘통제 관련 정보교환모델 비교분석 및 시사점)

  • Kim, Kunyoung;Park, Gyudong;Sohn, Mye
    • Journal of Internet Computing and Services / v.23 no.6 / pp.59-69 / 2022
  • Seamless information exchange between systems is essential for effective battlefield situation awareness and command decision-making. However, because each system was developed independently for its own purposes, interoperability between systems must be ensured for information to be exchanged effectively. In the case of the Korean military, semantic interoperability is supported by using a common message format for data exchange; simply standardizing the exchange format, however, cannot sufficiently guarantee interoperability between systems. The U.S. and NATO are currently developing and using information exchange models to achieve semantic interoperability beyond a guaranteed data exchange format. These information exchange models are common vocabularies or reference models used to ensure that information is exchanged between systems at the level of content and meaning. The models developed and used in the United States initially focused on exchanging information directly related to the battlefield situation, but they have evolved into a universal form that can be used by all government departments and related organizations. NATO, on the other hand, focused on strictly expressing the concepts necessary to carry out joint military operations among member nations, and the scope of its models was limited to concepts related to command and control. This paper identifies the background, purpose, and characteristics of the information exchange models developed and used in the United States and NATO and performs a comparative analysis, from which implications for developing a future Korean information exchange model are presented.

A Forecasting Model of Phytophthora Blight Incidence in Red Pepper and Its Computer System (고추역병의 예찰모형과 컴퓨터 시스템)

  • 황의홍;이순구
    • Korean Journal of Agricultural and Forest Meteorology / v.3 no.1 / pp.16-21 / 2001
  • Regression models were obtained on the basis of the correlation between Phytophthora blight incidence in red pepper and microclimate data collected from automated weather stations (AWS) during 1997 and 1998. A computer program (PEPBLIGHT) was then built around the regression model with the highest R² value. The program reads microclimate data from one or more AWSs through a common dialog box and provides disease forecasting information; it can also be applied to other diseases and converts AWS microclimate data into input data for the Statistical Analysis System (SAS). PEPBLIGHT is the first computerized forecasting system for red pepper blight in Korea and runs on MS Windows, making it easy to use.
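
The model-selection step described above, fitting candidate regressions on microclimate variables and keeping the one with the highest R², can be sketched as follows. This is a hypothetical illustration: the predictor set and synthetic data are assumptions, not the paper's variables or measurements.

```python
import numpy as np
from itertools import combinations

# Hypothetical microclimate predictors and disease incidence (synthetic data;
# the paper's actual AWS variables and observations are not reproduced here).
rng = np.random.default_rng(3)
n = 60
data = {
    "temp": rng.normal(24.0, 3.0, n),            # daily mean temperature (deg C)
    "rh": rng.uniform(55.0, 95.0, n),            # relative humidity (%)
    "rain": rng.exponential(5.0, n),             # rainfall (mm)
}
incidence = 0.4 * data["rh"] + 1.2 * data["rain"] + rng.normal(0.0, 5.0, n)

def r_squared(X, y):
    """Ordinary least squares R^2 for a design matrix X that includes an intercept."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Try every combination of predictors and keep the model with the highest R^2.
best = (None, -np.inf)
names = list(data)
for k in range(1, len(names) + 1):
    for combo in combinations(names, k):
        X = np.column_stack([np.ones(n)] + [data[v] for v in combo])
        r2 = r_squared(X, incidence)
        if r2 > best[1]:
            best = (combo, r2)
print("selected predictors:", best[0], "R^2 =", round(best[1], 3))
```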

Improving the Performance of Threshold Bootstrap for Simulation Output Analysis (시뮬레이션 출력분석을 위한 임계값 부트스트랩의 성능개선)

  • Kim, Yun-Bae
    • Journal of Korean Institute of Industrial Engineers / v.23 no.4 / pp.755-767 / 1997
  • Analyzing autocorrelated data sets is still an open problem, and developing an easy and efficient method for the severely positively correlated data sets that are common in simulation output is vital for the simulation community. The bootstrap is an easy and powerful tool for constructing non-parametric inferential procedures in modern statistical data analysis, but the conventional bootstrap algorithm requires an i.i.d. assumption on the original data set. The proper choice of resampling units for generating replicates depends strongly on the structure of the original data set, whether i.i.d. or autocorrelated. In this paper, a new bootstrap resampling scheme, the Threshold Bootstrap, is proposed for analyzing autocorrelated data sets. A thorough literature review of bootstrap methods, focusing on the case of autocorrelated data, is also provided. The theoretical foundations of the Threshold Bootstrap are studied and compared with other leading bootstrap sampling techniques for autocorrelated data sets. The performance of the Threshold Bootstrap is reported using an M/M/1 queueing model, and a comparison with other resampling techniques on ARMA data sets is also reported.
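
The Threshold Bootstrap resamples an autocorrelated series in threshold-defined segments rather than individual observations. The sketch below is a simplified illustration of that idea, cutting the series at upward crossings of its mean and resampling the resulting segments; the exact cycle definition and other details of the paper's algorithm are assumptions here.

```python
import numpy as np

# Simplified threshold-bootstrap-style resampling: cut an autocorrelated series
# into segments at upward crossings of its mean and resample whole segments.
rng = np.random.default_rng(4)

# AR(1) series as a stand-in for autocorrelated simulation output.
n, phi = 2000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

threshold = x.mean()
above = x > threshold
# Start a new segment at every upward crossing of the threshold.
cut_points = [0] + [t for t in range(1, n) if above[t] and not above[t - 1]]
segments = [x[a:b] for a, b in zip(cut_points, cut_points[1:] + [n])]

def resample_mean(segments, target_len):
    """One bootstrap replicate: concatenate randomly chosen segments, then average."""
    out = []
    while sum(len(s) for s in out) < target_len:
        out.append(segments[rng.integers(len(segments))])
    return np.concatenate(out)[:target_len].mean()

boot_means = np.array([resample_mean(segments, n) for _ in range(500)])
print("point estimate:", round(float(x.mean()), 3),
      "bootstrap s.e. of the mean:", round(float(boot_means.std(ddof=1)), 4))
```

Resampling segments rather than single observations preserves the short-range dependence inside each segment, which is what the i.i.d. bootstrap destroys on autocorrelated output.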
