• Title/Summary/Keyword: data based model

Search Results: 20,662

Predicting Oxynitrification layer using AI-based Varying Coefficient Regression model (AI 기반의 Varying Coefficient Regression 모델을 이용한 산질화층 예측)

  • Hye Jung Park;Joo Yong Shim;Kyong Jun An;Chang Ha Hwang;Je Hyun Han
    • Journal of the Korean Society for Heat Treatment
    • /
    • v.36 no.6
    • /
    • pp.374-381
    • /
    • 2023
  • This study develops and evaluates a deep learning model for predicting oxide and nitride layers from plasma process data. We introduce a deep learning-based Varying Coefficient Regressor (VCR) by adapting the classical VCR, which previously relied on a fixed, predefined coefficient function. The model is used to forecast the oxide and nitride layers formed in the plasma process. In comparative experiments, the proposed VCR-based model outperforms Long Short-Term Memory, Random Forest, and other methods, demonstrating its strength in predicting time series data. The study indicates the potential of deep learning to advance prediction models in plasma processing and highlights its application prospects in industrial settings.
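To illustrate the varying-coefficient idea behind the abstract above, here is a minimal sketch (not the paper's deep learning model): the regression coefficients are themselves linear functions of time, which makes the model linear in four parameters and fittable by ordinary least squares. All names and the basis choice are illustrative assumptions.

```python
# Minimal varying-coefficient regression sketch:
# y = b0(t) + b1(t) * x, with each coefficient linear in t:
# b_k(t) = a_k + c_k * t.  The model is then linear in the four
# parameters (a0, c0, a1, c1) and can be fit by ordinary least squares.

def design_row(t, x):
    # Basis expansion: intercept, t, x, t*x
    return [1.0, t, x, t * x]

def fit_vcr(ts, xs, ys):
    """Least squares via normal equations, solved by Gaussian elimination."""
    X = [design_row(t, x) for t, x in zip(ts, xs)]
    p = 4
    # A = X^T X, b = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, p))) / A[r][r]
    return coef  # (a0, c0, a1, c1)

def predict(coef, t, x):
    a0, c0, a1, c1 = coef
    return (a0 + c0 * t) + (a1 + c1 * t) * x
```

The deep-learning variant described in the abstract would replace the linear basis for b_k(t) with a neural network; the structure of the prediction step stays the same.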

Comparison and Analysis of Library RFID Data Model for Major National Standards (주요 국가별 표준 도서관 RFID 데이터 모델의 비교 및 분석)

  • Choi, Jae-Hwang
    • Journal of Korean Library and Information Science Society
    • /
    • v.40 no.2
    • /
    • pp.87-110
    • /
    • 2009
  • This study examined and compared existing national library RFID data models, especially those of Denmark, Finland, the Netherlands, France, the U.S., Australia, and South Korea. The four European models (Danish, Finnish, Dutch, and French) and the South Korean model use a prescriptive data model (a fixed-encoding approach), while the U.S. and Australia adopt an object-based data model built on the data encoding rules of ISO/IEC 15962. This study is expected to provide fertile ground for discussion of RFID data models in the South Korean library environment.

  • PDF

Joint HGLM approach for repeated measures and survival data

  • Ha, Il Do
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.1083-1090
    • /
    • 2016
  • In clinical studies, different types of outcomes (e.g. repeated measures data and time-to-event data) are often observed for the same subject, and these data can be correlated. For example, a response variable of interest may be measured repeatedly over time on a subject while, at the same time, an event time representing a terminating event is recorded. Joint modelling using a shared random effect is useful for analyzing such data. Inference based on the marginal likelihood may involve analytically intractable integration over the random-effect distributions. In this paper we propose a joint HGLM approach for analyzing such outcomes using the hierarchical generalized linear model (HGLM) method based on the h-likelihood (hierarchical likelihood), which avoids these intractable integrals. The proposed method is demonstrated through various numerical studies.
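One common form of the shared random-effect joint model sketched in the abstract can be written as follows (the symbols are assumptions, not taken from the paper):

```latex
% Repeated measures submodel (linear mixed model, subject i, visit j):
y_{ij} = x_{ij}^{\top}\beta + v_i + \varepsilon_{ij}, \qquad
v_i \sim N(0,\sigma_v^2),\ \varepsilon_{ij} \sim N(0,\sigma_\varepsilon^2)

% Survival submodel sharing the random effect v_i:
\lambda_i(t \mid v_i) = \lambda_0(t)\exp\!\left(z_i^{\top}\gamma + \alpha v_i\right)

% h-likelihood: joint log-density of data and unobserved v_i,
% so no integration over the random effects is required:
h = \sum_{i,j}\log f(y_{ij}\mid v_i)
  + \sum_i \log f(t_i,\delta_i \mid v_i)
  + \sum_i \log f(v_i)
```

The association parameter \(\alpha\) links the two submodels: maximizing \(h\) jointly in \((\beta,\gamma,\alpha,v)\) replaces the intractable marginal-likelihood integral mentioned above.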

Crime amount prediction based on 2D convolution and long short-term memory neural network

  • Dong, Qifen;Ye, Ruihui;Li, Guojun
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.208-219
    • /
    • 2022
  • Crime amount prediction is crucial for optimizing the police patrols' arrangement in each region of a city. First, we analyzed spatiotemporal correlations of the crime data and the relationships between crime and related auxiliary data, including points-of-interest (POI), public service complaints, and demographics. Then, we proposed a crime amount prediction model based on 2D convolution and long short-term memory neural network (2DCONV-LSTM). The proposed model captures the spatiotemporal correlations in the crime data, and the crime-related auxiliary data are used to enhance the regional spatial features. Extensive experiments on real-world datasets are conducted. Results demonstrated that capturing both temporal and spatial correlations in crime data and using auxiliary data to extract regional spatial features improve the prediction performance. In the best case scenario, the proposed model reduces the prediction error by at least 17.8% and 8.2% compared with support vector regression (SVR) and LSTM, respectively. Moreover, excessive auxiliary data reduce model performance because of the presence of redundant information.
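The spatial half of the 2DCONV-LSTM pipeline described above can be sketched as follows (illustrative only, not the paper's exact architecture): a 2D convolution extracts regional features from each time step's crime-count grid, and the resulting feature sequence would then feed an LSTM for temporal modelling.

```python
# Spatial feature extraction for a grid-valued time series.
# Each element of `series` is a 2D grid of crime counts for one time step.

def conv2d(grid, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(gh - kh + 1):
        row = []
        for j in range(gw - kw + 1):
            row.append(sum(grid[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def spatial_features(series, kernel):
    """Flatten each time step's convolved grid into one feature vector,
    producing the sequence an LSTM would consume."""
    return [[v for row in conv2d(g, kernel) for v in row] for g in series]
```

In the paper's setting, the auxiliary data (POI, complaints, demographics) would be stacked as extra input channels before the convolution; here a single channel keeps the sketch short.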

A Study on Deep Learning Model-based Object Classification for Big Data Environment

  • Kim, Jeong-Sig;Kim, Jinhong
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.1
    • /
    • pp.59-66
    • /
    • 2021
  • The conceptual information model is changing rapidly, driven by individual tendencies, social and cultural factors, new circumstances, and societal shifts within the big data environment. Although data continue to grow, there is a pressing need to develop renewable, valuable information for social/live commerce. Because various kinds of intractable data pose problems, we propose a deep learning prediction model-based object classification approach for social commerce in a big data environment. There is a growing need for a social commerce platform capable of handling high volumes of items from many users, and deep learning makes it possible to respond promptly to rapid changes in user behavior. This paper aims to meet these needs within a continually growing big data environment.

Design of a Model to Structure Longitudinal Data for Medical Education Based on the I-E-O Model (I-E-O 모형에 근거한 의학교육 종단자료 구축을 위한 모형 설계)

  • Jung, Hanna;Lee, I Re;Kim, Hae Won;An, Shinki
    • Korean Medical Education Review
    • /
    • v.24 no.2
    • /
    • pp.156-171
    • /
    • 2022
  • The purpose of this study was to establish a model for constructing longitudinal data for medical school, and to structure cohort and longitudinal data using data from Yonsei University College of Medicine (YUCM) according to the established input-environment-output (I-E-O) model. The study was conducted according to the following procedure. First, the data that YUCM has collected was reviewed through data analysis and interviews with the person in charge of each questionnaire. Second, the opinions of experts on the validity of the I-E-O model were collected through the first expert consultation, and as a result, a model was established for each stage of medical education based on the I-E-O model. Finally, in order to further materialize and refine the previously established model for each stage of medical education, secondary expert consultation was conducted. As a result, the survey areas and time period for collecting longitudinal data were organized according to the model for each stage of medical education, and an example of the YUCM cohort constructed according to the established model for each stage of medical education was presented. The results derived from this study constitute a basic step toward building data from universities in longitudinal form, and if longitudinal data are actually constructed through this method, they could be used as an important basis for determining major policies or reorganizing the curricula of universities. These research results have implications in terms of the management and utilization of existing survey data, the composition of cohorts, and longitudinal studies for many medical schools that are conducting surveys in various areas targeting students, such as lecture evaluation and satisfaction surveys.

A Representation of Engineering Change Objects and Their Integrity Constraints Using an Active Object-Oriented Database Model (능동형 객체지향적 데이터베이스 모델을 이용한 설계변경 개체 및 제약조건의 표현)

  • 도남철
    • Journal of Information Technology Applications and Management
    • /
    • v.10 no.1
    • /
    • pp.111-125
    • /
    • 2003
  • This paper proposes a product data model that can express and enforce integrity constraints on product structure during engineering changes (ECs). The model adopts and extends an active object-oriented database model in order to integrate EC data and their integrity constraints. Tightly integrated with the product structure, it enables designers to maintain and exchange consistent EC data throughout the product life cycle. To properly support EC operations, the model provides the data, operations, and Event-Condition-Action rules for nested ECs and for the simultaneous application of ECs to multiple options. In addition, the EC objects proposed in the model integrate the data and integrity constraints into a unified repository, which enables designers to access all EC data and integrity constraints through the product structure and the relationships between EC objects. This paper also describes a prototype product data management system based on the proposed model in order to demonstrate its effectiveness.

  • PDF
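A toy Event-Condition-Action rule in the spirit of the active object-oriented model described above (the rule names, payload fields, and the positive-quantity constraint are illustrative assumptions): when an engineering change updates a BOM quantity, the rule fires, checks an integrity constraint, and either applies or rejects the change.

```python
# Minimal ECA rule: event triggers the rule, condition checks an
# integrity constraint, action applies the engineering change.

class ECARule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

    def fire(self, event, payload):
        if event == self.event and self.condition(payload):
            return self.action(payload)
        return "rejected"

# Product structure: (parent assembly, child part) -> quantity
bom = {("assembly-A", "bolt-M6"): 4}

rule = ECARule(
    event="EC_UPDATE_QTY",
    condition=lambda p: p["qty"] > 0,  # integrity constraint: positive quantity
    action=lambda p: (bom.__setitem__((p["parent"], p["child"]), p["qty"])
                      or "applied"),
)
```

Because the constraint lives inside the rule rather than in application code, every path that raises the event is checked uniformly, which is the consistency property the abstract emphasizes.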

Design and Implementation of Multidimensional Data Model for OLAP Based on Object-Relational DBMS (OLAP을 위한 객체-관계 DBMS 기반 다차원 데이터 모델의 설계 및 구현)

  • 김은영;용환승
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.6A
    • /
    • pp.870-884
    • /
    • 2000
  • Among OLAP (On-Line Analytical Processing) approaches, ROLAP (Relational OLAP), based on the star and snowflake schemas that offer multidimensional analysis, has performance problems, while MOLAP (Multidimensional OLAP), based on multidimensional database systems, has scalability problems. In this paper, to overcome the limitations of previous approaches, the design and implementation of a multidimensional data model based on an object-relational DBMS are proposed. The extensibility of an object-relational DBMS makes it possible to define a multidimensional data model that expresses multidimensional concepts more directly, together with analysis functions optimized for that model. In addition, using the hierarchy between data objects supported by the object-relational DBMS, an aggregated data model was designed that inherits from the super-table of the multidimensional data model. Once these data models and functions are defined, they behave just like built-in features, with the full performance characteristics of the object-relational DBMS engine.

  • PDF
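The core operation such a multidimensional model must support is the roll-up: aggregating a measure over any chosen subset of dimensions. A minimal illustration, not tied to any particular ORDBMS (the dimensions and figures are invented):

```python
# ROLAP-style roll-up over a small fact table.
from collections import defaultdict

facts = [
    # (region, product, year, sales)
    ("East", "TV",    1999, 10),
    ("East", "Radio", 1999,  5),
    ("West", "TV",    2000,  7),
    ("East", "TV",    2000,  3),
]

def aggregate(rows, dims):
    """Roll up the sales measure over the chosen dimensions.
    `dims` is a tuple of column indices, e.g. (0,) groups by region;
    the empty tuple yields the grand total."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row[-1]
    return dict(totals)
```

In the paper's design this aggregation lives inside the DBMS as functions over the multidimensional super-table, so it runs with engine-level performance rather than in application code as here.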

Database Development for Archiving Detailed Design Information of Steel Bridges (강교량의 설계정보 데이터베이스 구축)

  • 이상호;정연석
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2003.04a
    • /
    • pp.313-320
    • /
    • 2003
  • An efficient and well-organized database is essential for managing information in every industrial field. In this study, a practical and effective database that can handle 3-D information of steel bridges is built on the basis of a STEP-based data model. The data model of steel bridge information is divided into geometric and non-geometric parts, and design information is represented by linking geometric information with life-cycle-supporting non-geometric information. In particular, shape information is represented by the boundary representation method, one of the representative methods for representing solid model information. This study uses the ISO STEP (STandard for the Exchange of Product model data) AP203 (configuration controlled design) EXPRESS schema to represent the shape information of steel bridges. The syntax of the developed data model's EXPRESS schema is verified by NIST Expresso, a tool for parsing and compiling EXPRESS schemas. The study also verifies the conformance of the data model by applying it to real data from the Hannam Bridge. The resulting database, built on a STEP-based data model of steel bridges, can therefore be used effectively from a concurrent engineering point of view for transferring and sharing steel bridge information.

  • PDF

Efficient Certificate-Based Proxy Re-encryption Scheme for Data Sharing in Public Clouds

  • Lu, Yang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2703-2718
    • /
    • 2015
  • Nowadays, public cloud storage is gaining popularity and a growing number of users are beginning to use the public cloud storage for online data storing and sharing. However, how the encrypted data stored in public clouds can be effectively shared becomes a new challenge. Proxy re-encryption is a public-key primitive that can delegate the decryption right from one user to another. In a proxy re-encryption system, a semi-trusted proxy authorized by a data owner is allowed to transform an encrypted data under the data owner's public key into a re-encrypted data under an authorized recipient's public key without seeing the underlying plaintext. Hence, the paradigm of proxy re-encryption provides a promising solution to effectively share encrypted data. In this paper, we propose a new certificate-based proxy re-encryption scheme for encrypted data sharing in public clouds. In the random oracle model, we formally prove that the proposed scheme achieves chosen-ciphertext security. The simulation results show that it is more efficient than the previous certificate-based proxy re-encryption schemes.