• Title/Summary/Keyword: common data model

Search Results: 1,226

Hemodynamic Analysis of Pig's Left Common Coronary Artery (LCCA) (II) (좌주간부 관상동맥(LCCA)에 관한 혈류역학적 분석 (II))

  • Moon, Su-Yeon; Jang, Ju-Hee; Park, Jung-Su; Shin, Seh-Yun
    • Proceedings of the KSME Conference / 2003.04a / pp.2043-2047 / 2003
  • The distributions of blood pressure, blood flow, and blood volume in the left common coronary artery (LCCA) were determined using the lumped parameter method. To develop a mathematical model of microcirculation in the LCCA, the present study adopted a pre-existing set of measured morphological data on the anatomy and mechanical properties of the coronary vessels, the viscosity of blood, the basic laws of physics, and appropriate boundary conditions. In the electrical analog model, blood pressures, flows, and flow resistances were expressed as voltages, currents, and electrical resistances, respectively. The results of the two mathematical models, a symmetrical and an asymmetrical model, were compared with other investigators' data. The present results were in good agreement with previous studies, and the mean pressure profiles were similar in both models.

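The electrical-analog idea described in the abstract above (pressure as voltage, flow as current, flow resistance as resistance) can be illustrated with a toy lumped-parameter compartment. The sketch below uses made-up parameter values and a single RC-style compartment; it is a minimal illustration, not the authors' symmetrical/asymmetrical LCCA model.

```python
import numpy as np

# Minimal lumped-parameter ("electrical analog") sketch of a coronary compartment:
# pressure ~ voltage, flow ~ current, flow resistance ~ resistance.
# Parameter values are illustrative only, not the authors' measured data.
R1 = 1.0      # proximal flow resistance (mmHg·s/mL)
R2 = 4.0      # distal (microcirculatory) resistance (mmHg·s/mL)
C = 0.05      # vessel compliance (mL/mmHg), analogous to capacitance

dt = 1e-3                                   # time step (s)
t = np.arange(0.0, 3.0, dt)                 # three idealized 1-s cardiac cycles
P_a = 100 + 20 * np.sin(2 * np.pi * t)      # driving (aortic) pressure, mmHg

P_c = np.zeros_like(t)                      # compartment pressure (the "capacitor voltage")
Q_in = np.zeros_like(t)                     # inflow (the "current" through R1)
P_c[0] = 80.0

for i in range(1, len(t)):
    Q_in[i] = (P_a[i - 1] - P_c[i - 1]) / R1           # Ohm's-law analogue
    Q_out = P_c[i - 1] / R2                            # outflow toward the venous side (0 mmHg)
    P_c[i] = P_c[i - 1] + dt * (Q_in[i] - Q_out) / C   # "charge" balance on the compliance

print(f"mean compartment pressure: {P_c[len(t)//2:].mean():.1f} mmHg")
print(f"mean inflow: {Q_in[len(t)//2:].mean():.2f} mL/s")
```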

Unified Systems on Surveying and Geoinformation Management in Korea - New Conceptual Design of Korean NSDI Model - (우리나라 측량·공간정보관리에 관한 통합시스템 연구 - 새로운 국가공간정보기반(NSDI) 모델의 도입 -)

  • Lee, Young-Jin
    • Journal of Cadastre & Land InformatiX / v.44 no.1 / pp.179-194 / 2014
  • This study investigates a unified system for surveying and geospatial information management and a new national geospatial information infrastructure (NSDI) as a new paradigm responding to the strategy of global geospatial information management. Korea's existing NGIS projects and spatial information policies were examined, and the NSDI's data coverage was redefined using a bottom-up method. The new NSDI strategy is based on a large-scale digital map and reflects local and global trends such as open data, e-Government, and Earth observation (refer to Fig. 1). A new concept of the NSDI model is also suggested, in which public-private shared data can be added to the digital map on equal terms with the spatial core data (refer to Fig. 2). An institutional model of MOLIT (Ministry of Land, Infrastructure and Transport) applying the new NSDI concept is proposed (refer to Fig. 4). The new model improves localization and reinforces cooperation, moving from an independent operation system run only as part of the informatization of land, infrastructure, and transport toward one that works not only with other departments within MOLIT but also with other ministries (forestry, environment, agriculture, heritage, etc.). In the new SDI institutional model of MOLIT, spatial information is reorganized as a common data infrastructure for all applications, and Government 3.0 becomes feasible as common data are shared vertically and horizontally among government agencies and local governments. This unified system can then serve as a practical strategic model for integrating and linking all the maps and registers managed under the relevant laws and institutions, provided the common data include all spatial core data (digital maps), such as the base map data of the NGA (national geospatial agency) and the land and facility data of local governments.

A note on Box-Cox transformation and application in microarray data

  • Rahman, Mezbahur; Lee, Nam-Yong
    • Journal of the Korean Data and Information Science Society / v.22 no.5 / pp.967-976 / 2011
  • The Box-Cox transformation is a well-known family of power transformations that brings a set of data into agreement with the normality assumption of the residuals, and hence of the response variable, of a postulated regression model. Normalization (studentization) of the regressors is a common practice in analyzing microarray data. Here, we apply the Box-Cox transformation to normalize regressors in microarray data. Predictability of the model can be improved using the data transformation compared to studentization.
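
As a rough illustration of the transformation discussed above, the sketch below applies scipy's Box-Cox implementation to a simulated right-skewed regressor and compares its skewness with plain studentization; the simulated data and the comparison are assumptions for illustration, not the paper's microarray analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative positive, right-skewed regressor (e.g., a raw expression intensity).
x = rng.lognormal(mean=2.0, sigma=0.8, size=500)

# Common practice: studentization (center and scale).
x_student = (x - x.mean()) / x.std(ddof=1)

# Box-Cox: lambda chosen by maximum likelihood; requires strictly positive data.
x_boxcox, lam = stats.boxcox(x)

print(f"estimated lambda: {lam:.3f}")
print(f"skewness raw / studentized / Box-Cox: "
      f"{stats.skew(x):.2f} / {stats.skew(x_student):.2f} / {stats.skew(x_boxcox):.2f}")
```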

Study on Development Method of MDMS for AMI Operation based on Common Information Model (CIM 기반 AMI용 미터데이터관리시스템(MDMS) 개발 방안 연구)

  • Jung, Nam-Joon; Jin, Young-Taek; Chae, Chang-Hun; Choi, Min-Hee
    • KIPS Transactions on Computer and Communication Systems / v.1 no.3 / pp.171-180 / 2012
  • In developing an MDMS (Meter Data Management System) based on the CIM (Common Information Model), the international standard for information modeling and data exchange in power systems, the two central issues are the effective management of data collected over ever shorter time periods and the integration of services that allow legacy systems to use AMI (Advanced Metering Infrastructure) data. In this paper, we propose MDMS implementation methods and functions for the AMI environment, which differs from existing AMR system environments in that it supports a bi-directional service infrastructure. The proposed MDMS has two distinctive features: interoperability secured by utilizing the CIM and an ESB (Enterprise Service Bus), and improved field applicability achieved by implementing system modules as components. In smart grid implementations, the proposed methods are expected to contribute to the efficient development and operation of CIM-based power systems.
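
To make the data-exchange idea concrete, the sketch below shows a CIM-flavored meter-reading payload serialized as a JSON message of the kind that might travel over an ESB; the class and field names only loosely follow IEC 61968-9 MeterReading concepts and are assumptions, not the paper's MDMS schema.

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

# Illustrative, CIM-flavored meter data payload; field names are assumptions
# for this sketch, not the paper's actual MDMS data model.
@dataclass
class IntervalReading:
    timestamp: str
    value_kwh: float

@dataclass
class MeterReading:
    meter_mrid: str                     # CIM-style master resource identifier
    usage_point: str
    readings: list = field(default_factory=list)

def to_esb_message(reading: MeterReading) -> str:
    """Serialize a reading as a JSON message body for an ESB queue/topic."""
    return json.dumps({"MeterReading": asdict(reading)}, indent=2)

reading = MeterReading(
    meter_mrid="MTR-0001",
    usage_point="UP-1234",
    readings=[IntervalReading(datetime.now(timezone.utc).isoformat(), 1.25)],
)
print(to_esb_message(reading))
```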

Construction of Artificial Intelligence Training Platform for Multi-Center Clinical Research (다기관 임상연구를 위한 인공지능 학습 플랫폼 구축)

  • Lee, Chung-Sub; Kim, Ji-Eon; No, Si-Hyeong; Kim, Tae-Hoon; Yoon, Kwon-Ha; Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.9 no.10 / pp.239-246 / 2020
  • In the medical field, where artificial intelligence technology is being introduced, research on clinical decision support systems (CDSS) for diagnosis and prediction is actively being conducted. In particular, AI technologies have been applied to various products for medical imaging-based disease diagnosis. However, medical imaging data are inconsistent, and in practice it takes considerable time to prepare them for research. This paper describes a one-stop AI learning platform that converts medical images to the standard R_CDM (Radiology Common Data Model) and supports AI algorithm development research based on the resulting datasets. To this end, the focus is on linking with the existing CDM (Common Data Model) and modeling the system, including the schema of the medical imaging standard model and the report information for multi-center research, based on DICOM (Digital Imaging and Communications in Medicine) tag information. We also show execution results for datasets generated through the AI learning platform. The proposed platform is expected to be used for various image-based artificial intelligence studies.
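
As a hedged illustration of mapping DICOM header information into a CDM-style record, the sketch below reads a few standard DICOM tags with pydicom and returns a flat row; the column names are assumptions, not the R_CDM schema defined in the paper.

```python
import pydicom  # assumes the pydicom package is available

# Minimal sketch: pull a few standard DICOM tags into a flat record that could
# populate a radiology CDM-style table. Column names here are illustrative
# assumptions, not the R_CDM schema.
def dicom_to_cdm_row(path: str) -> dict:
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, no pixel data
    return {
        "person_source_value": ds.get("PatientID", ""),
        "study_uid": ds.get("StudyInstanceUID", ""),
        "series_uid": ds.get("SeriesInstanceUID", ""),
        "modality": ds.get("Modality", ""),
        "study_date": ds.get("StudyDate", ""),
        "body_part": ds.get("BodyPartExamined", ""),
    }

# Example usage (the path is a placeholder):
# row = dicom_to_cdm_row("example_ct_slice.dcm")
# print(row)
```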

Association Between Persistent Treatment of Alzheimer's Dementia and Osteoporosis Using a Common Data Model

  • Seonhwa Hwang; Yong Gwon Soung; Seong Uk Kang; Donghan Yu; Haeran Baek; Jae-Won Jang
    • Dementia and Neurocognitive Disorders / v.22 no.4 / pp.121-129 / 2023
  • Background and Purpose: As society ages, interest in geriatric diseases is increasing. Alzheimer's dementia (AD) and osteoporosis are representative diseases of old age, and various studies have reported that they share many risk factors affecting each other's incidence. This study aimed to determine whether persistent medication treatment of AD could affect the development of osteoporosis. Methods: The Health Insurance Review and Assessment Service provided data consisting of diagnoses, demographics, prescription drugs, procedures, medical materials, and healthcare resources. In this study, data on all AD patients in South Korea registered under the national health insurance system were obtained, and the cohort was converted to the Observational Medical Outcomes Partnership Common Data Model version 5 format. Results: This study included 11,355 individuals in the good persistence group and an equal number of 11,355 individuals in the poor persistence group from the National Health Claims database for AD drug treatment. In the primary analysis, the risk of osteoporosis was significantly higher in the poor persistence group than in the good persistence group (hazard ratio, 1.20 [95% confidence interval, 1.09-1.32]; p<0.001). Conclusions: In this nationwide study, we found that the good persistence group treated with anti-dementia drugs for AD had a significantly lower risk of osteoporosis. Further studies are needed to clarify the pathophysiological link between these two chronic diseases.
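
The kind of comparison reported above can be illustrated with a small Cox proportional hazards example on simulated data using the lifelines package; the simulated cohort, covariate, and censoring rule are assumptions and do not reproduce the HIRA/OMOP CDM analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is available

# Toy illustration: compare "osteoporosis" risk between a good- and a
# poor-persistence group with a Cox model. Data are simulated, not the HIRA cohort.
rng = np.random.default_rng(42)
n = 2000
poor_persistence = rng.integers(0, 2, size=n)                          # 1 = poor persistence
baseline = rng.exponential(scale=10.0, size=n)                         # years to event at baseline hazard
time_to_event = baseline / np.where(poor_persistence == 1, 1.2, 1.0)   # roughly HR ~ 1.2
observed = (time_to_event < 8.0).astype(int)                           # administrative censoring at 8 years
duration = np.minimum(time_to_event, 8.0)

df = pd.DataFrame({
    "duration": duration,
    "osteoporosis": observed,
    "poor_persistence": poor_persistence,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="osteoporosis")
cph.print_summary()   # exp(coef) for poor_persistence approximates the hazard ratio
```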

Estimating Heterogeneous Customer Arrivals to a Large Retail store : A Bayesian Poisson model perspective (대형할인매점의 요일별 고객 방문 수 분석 및 예측 : 베이지언 포아송 모델 응용을 중심으로)

  • Kim, Bumsoo; Lee, Joonkyum
    • Korean Management Science Review / v.32 no.2 / pp.69-78 / 2015
  • This paper considers a Bayesian Poisson model for multivariate count data using multiplicative rates. More specifically, we compose the parameter for the overall arrival rate as the product of two parameters, a common effect and an individual effect. The common effect follows an autoregressive evolution, which allows analysis of seasonal effects across all the multivariate time series. In addition, the individual effects allow the researcher to differentiate the time series by whatever characterization they choose. This type of model lets the researcher analyze the two kinds of effects separately and produces a more robust result. We illustrate a simple MCMC scheme, combined with a Gibbs sampling step, for estimating the joint posterior distribution of all parameters in the model. On the whole, the model presented in this study is an intuitive one that can handle complicated problems, and we highlight its properties and possible applications with an example analyzing real time series data on customer arrivals to a large retail store.
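
A stripped-down version of the multiplicative-rate idea can be sketched with a conjugate Gibbs sampler; the autoregressive evolution of the common effect is omitted here, and the priors, simulated data, and normalization step are assumptions for illustration only, not the paper's model.

```python
import numpy as np

# Simplified sketch: y[i, t] ~ Poisson(common[t] * individual[i]), with independent
# Gamma priors instead of the paper's autoregressive common effect.
rng = np.random.default_rng(1)
I, T = 5, 7                                             # 5 series, 7 "day-of-week" periods
true_common = np.array([3., 3., 4., 4., 5., 9., 8.])
true_indiv = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y = rng.poisson(np.outer(true_indiv, true_common))      # observed counts, shape (I, T)

a, b = 1.0, 1.0                                         # Gamma(a, b) hyperpriors (rate parameterization)
common = np.ones(T)
indiv = np.ones(I)
draws = []

for it in range(3000):
    # Conditional for common[t] is Gamma(a + sum_i y[i, t], b + sum_i indiv[i]).
    common = rng.gamma(a + y.sum(axis=0), 1.0 / (b + indiv.sum()))
    # Conditional for indiv[i] is Gamma(a + sum_t y[i, t], b + sum_t common[t]).
    indiv = rng.gamma(a + y.sum(axis=1), 1.0 / (b + common.sum()))
    # Fix the scale non-identifiability by normalizing the individual effects.
    scale = indiv.mean()
    indiv, common = indiv / scale, common * scale
    if it >= 1000:                                      # discard burn-in
        draws.append(common.copy())

print("posterior mean common (per-period) effect:", np.mean(draws, axis=0).round(2))
```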

A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra; Shanthi Mahesh; M.V.P. Chandrasekhara Rao
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.101-106 / 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is a promising strategy. The issue with existing feature selection approaches is that each method produces a distinct set of attributes that affects model accuracy, and present methods do not perform well on huge multidimensional datasets. We introduce a novel model containing a feature selection approach that selects optimal characteristics from big multidimensional datasets to provide reliable predictions of chronic illnesses without sacrificing data uniqueness.[1] To ensure the success of the proposed model, we balanced the classes by applying hybrid balanced class sampling methods to the original dataset, along with data pre-processing and data transformation methods, to provide credible data for the training model. We ran and assessed the model on datasets with binary and multi-valued classifications, using multiple datasets (Parkinson, arrhythmia, breast cancer, kidney, diabetes). Suitable features are selected with a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, followed by voting on the attributes that are common outputs of these methods. The accuracy on the original dataset before applying the framework is recorded and evaluated against the accuracy on the reduced attribute set, and the results are shown separately for comparison. Based on the result analysis, we conclude that the proposed model produced higher accuracy on multi-valued class datasets than on binary class datasets.[1]
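
The voting step described above can be sketched with scikit-learn by running several model-based selectors and keeping the features chosen by a majority of them; the estimators, thresholds, and majority rule below are assumptions, not the paper's exact framework.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, SGDClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative "vote on common attributes" idea: each model-based selector casts
# one vote per feature it keeps, and majority-voted features are retained.
X, y = load_breast_cancer(return_X_y=True)

selectors = [
    SelectFromModel(LassoCV(max_iter=5000)),
    SelectFromModel(DecisionTreeClassifier(random_state=0)),
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SelectFromModel(GradientBoostingClassifier(random_state=0)),
    SelectFromModel(AdaBoostClassifier(random_state=0)),
    SelectFromModel(SGDClassifier(penalty="l1", random_state=0)),
]

votes = np.zeros(X.shape[1], dtype=int)
for sel in selectors:
    sel.fit(X, y)
    votes += sel.get_support().astype(int)    # 1 vote per selector that kept the feature

selected = np.where(votes >= len(selectors) // 2 + 1)[0]   # simple majority
print(f"{len(selected)} of {X.shape[1]} features selected:", selected)
```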

Study on HIPAA PHI application method to protect personal medical information in OMOP CDM construction (OMOP CDM 구축 시 개인의료정보 보호를 위한 HIPAA PHI 적용 방법 연구)

  • Kim, Hak-Ki; Jung, Eun-Young; Park, Dong-Kyun
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.6 / pp.66-76 / 2017
  • In this study, we investigated how to protect personal healthcare information when constructing an OMOP (Observational Medical Outcomes Partnership) CDM (Common Data Model). Two methods are proposed: restricting data corresponding to HIPAA (Health Insurance Portability and Accountability Act) PHI (Protected Health Information) from being extracted into the CDM, or de-identifying such data. While the processing of sensitive information is restricted by the Korean Personal Information Protection Act and medical law, there is no clear regulation on what counts as sensitive information, which made it difficult to select the sensitive information to protect. To solve this problem, we defined HIPAA PHI as the restriction criterion under Article 23 of the Personal Information Protection Act and mapped the corresponding data to CDM data. This study is expected to contribute to the spread of CDM construction in Korea by providing a solution to the problem of protecting personal healthcare information generated during CDM construction.
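
A minimal sketch of the two options described above (excluding PHI columns from extraction, or de-identifying them) might look like the following pandas-based ETL step; the column names and the hashing/date-truncation choices are assumptions, not the authors' implementation.

```python
import hashlib
import pandas as pd

# Minimal sketch of the two approaches: (1) drop source columns that correspond to
# HIPAA PHI before they reach the CDM, or (2) de-identify them (here by one-way
# hashing and date truncation). Column names are illustrative assumptions.
PHI_COLUMNS_TO_DROP = ["name", "phone_number", "address", "email"]

def pseudonymize(value: str, salt: str = "site-specific-salt") -> str:
    """One-way hash so the identifier cannot be read back, while joins stay possible."""
    return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]

def deidentify_for_cdm(source: pd.DataFrame) -> pd.DataFrame:
    df = source.drop(columns=[c for c in PHI_COLUMNS_TO_DROP if c in source.columns])
    if "patient_id" in df.columns:                      # becomes a person source value
        df["patient_id"] = df["patient_id"].map(pseudonymize)
    if "birth_date" in df.columns:                      # keep year only, per PHI date handling
        df["birth_year"] = pd.to_datetime(df["birth_date"]).dt.year
        df = df.drop(columns=["birth_date"])
    return df

source = pd.DataFrame({
    "patient_id": ["A001"], "name": ["Hong Gil-dong"],
    "birth_date": ["1953-04-21"], "diagnosis_code": ["M81.0"],
})
print(deidentify_for_cdm(source))
```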

A Study on the Ship Cargo Hold Structure Data Model Based on STEP (STEP을 근거로 한 선체화물창부 구조 데이터 모델에 관한 연구)

  • 박광필; 이규열; 조두연
    • Korean Journal of Computational Design and Engineering / v.4 no.4 / pp.381-390 / 1999
  • In this study, a pseudo ship structure data model for the ship cargo hold structure based on STEP is proposed. The proposed data model is based on the Application Reference Model of AP218 Ship Structures, which specifies the conceptual structures and constraints used to describe the information requirements of the application. The proposed data model also refers to the Ship Common Model framework for its model architecture, which is the basis for ongoing ship AP development within the ISO shipbuilding group, and to the ship product definition information model of the CSDP research project for analyzing the relationships between ship structure model entities. The proposed data model includes the Space, Compartment, Ship Structural System, Structural Part, and Structural Feature of the cargo hold. To generate the data model schema in EXPRESS format, 'GX-Converter' was used, which enables the user to edit a model in EXPRESS format and convert it into a schema file. Using this model schema, a STEP physical file containing design data for the ship cargo hold structure was generated through SDAI programming. Another STEP physical file was also generated containing the geometry data of the ship cargo hold, extracted and calculated by SDAI and an external surface/surface intersection program. The geometry information of the ship cargo hold can then be transferred to a commercial CAD system, for example Pro/Engineer. Examples of the modification of the design information are also presented.

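For readers unfamiliar with STEP physical files, the sketch below writes a minimal ISO 10303-21 skeleton from Python; the paper itself generated its files via SDAI programming against an AP218-based EXPRESS schema, and the schema name and entity lines here are placeholders, not actual AP218 entities.

```python
from datetime import datetime, timezone

# Minimal sketch of emitting an ISO 10303-21 (STEP physical file) skeleton.
# The schema name and the entity lines are placeholders for illustration only.
def write_step_file(path: str, schema: str, entity_lines: list[str]) -> None:
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    data = "\n".join(f"#{i + 1}={line};" for i, line in enumerate(entity_lines))
    content = (
        "ISO-10303-21;\n"
        "HEADER;\n"
        "FILE_DESCRIPTION(('cargo hold structure example'),'2;1');\n"
        f"FILE_NAME('{path}','{timestamp}',('author'),('org'),'','','');\n"
        f"FILE_SCHEMA(('{schema}'));\n"
        "ENDSEC;\n"
        "DATA;\n"
        f"{data}\n"
        "ENDSEC;\n"
        "END-ISO-10303-21;\n"
    )
    with open(path, "w", encoding="ascii") as f:
        f.write(content)

# Placeholder entities only; a real file would use AP218 entity definitions.
write_step_file("cargo_hold.stp", "SHIP_STRUCTURE_SCHEMA",
                ["CARGO_HOLD('HOLD-1')", "TRANSVERSE_BULKHEAD('BHD-1','HOLD-1')"])
```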