• Title/Summary/Keyword: Qualitative Models

Forecasting Economic Impacts of Construction R&D Investment: A Quantitative System Dynamics Forecast Model Using Qualitative Data (건설 분야 정부 R&D 투자의 사업별 경제적 파급효과 분석 - 정성적 자료 기반의 시스템다이내믹스 예측모형 개발 -)

  • Hwang, Sungjoo;Park, Moonseo;Lee, Hyun-Soo;Jang, Youjin;Moon, Myung-Gi;Moon, Yeji
    • Korean Journal of Construction Engineering and Management
    • /
    • v.14 no.2
    • /
    • pp.131-140
    • /
    • 2013
  • Econometric forecast models based on past time-series data have been applied to a wide variety of applications because of their advantages in short-term point estimation. These models are particularly used in predicting the impact of governmental research and development (R&D) programs, because program managers must demonstrate the programs' feasibility given their large budgets. Governmental construction R&D programs, however, invest separately by dividing the total budget into five sub-business areas. This makes it difficult for R&D program managers to understand how the programs affect the whole system, including the economy, because many dependent and dynamic variables are involved. In this regard, a system dynamics (SD) model provides an analytic solution for complex, nonlinear, and dynamic systems, such as the impacts of R&D programs, by focusing on interactions among variables and understanding their structures. This research therefore developed an SD model to capture the different impacts of the five construction R&D sub-businesses by considering the different characteristics of each sub-business area. To overcome SD's disadvantages in point estimation, this research also proposed a method for constructing a quantitative forecasting model using qualitative data. Understanding the different characteristics of each construction R&D sub-business can help R&D program managers demonstrate the feasibility of capital investment.
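
The abstract describes propagating R&D investment through interacting stocks and flows and scaling the effect with qualitative data. The following is a minimal illustrative sketch of that general idea, not the authors' model: the stock names, rates, and the qualitative multiplier are hypothetical.

```python
import numpy as np

# Minimal stock-flow sketch (hypothetical structure): R&D investment builds a
# "technology stock", which drives economic output. A qualitative-survey-derived
# multiplier (0..1) scales how effectively a sub-business converts investment
# into technology.

def simulate(investment_per_year, qualitative_multiplier, years=10, dt=0.25):
    steps = int(years / dt)
    tech_stock = 0.0          # accumulated technological capability
    output = np.zeros(steps)  # economic output attributable to the program
    for t in range(steps):
        inflow = investment_per_year * qualitative_multiplier  # effective R&D inflow
        outflow = 0.05 * tech_stock                             # obsolescence of technology
        tech_stock += (inflow - outflow) * dt                   # Euler integration of the stock
        output[t] = 0.3 * tech_stock                            # output driven by the stock
    return output

# Compare two hypothetical sub-business areas with different qualitative scores.
for name, mult in [("infrastructure", 0.8), ("urban-space", 0.5)]:
    print(name, round(simulate(100.0, mult)[-1], 1))
```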

Plant breeding in the 21st century: Molecular breeding and high throughput phenotyping

  • Sorrells, Mark E.
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2017.06a
    • /
    • pp.14-14
    • /
    • 2017
  • The discipline of plant breeding is experiencing a renaissance impacting crop improvement as a result of new technologies; however, fundamental questions remain about predicting the phenotype and how the environment and genetics shape it. Inexpensive DNA sequencing, genotyping, new statistical methods, high throughput phenotyping and gene editing are revolutionizing breeding methods and strategies for improving both quantitative and qualitative traits. Genomic selection (GS) models use genome-wide markers to predict performance for both phenotyped and non-phenotyped individuals. Aerial and ground imaging systems generate data on correlated traits, such as canopy temperature and the normalized difference vegetation index, that can be combined with genotypes in multivariate models to further increase prediction accuracy and reduce the cost of advanced trials with limited replication in time and space. Design of a GS training population is crucial to the accuracy of prediction models and can be affected by many factors, including population structure and composition. Prediction models can incorporate performance over multiple environments and assess GxE effects to identify a highly predictive subset of environments. We have developed a methodology for analyzing unbalanced datasets using genome-wide marker effects to group environments and identify outlier environments. Environmental covariates can be identified using a crop model and used in a GS model to predict GxE in unobserved environments and to predict performance in climate change scenarios. These new tools and knowledge challenge the plant breeder to ask the right questions and choose the tools that are appropriate for their crop and target traits. Contemporary plant breeding requires teams of people with expertise in genetics, phenotyping and statistics to improve efficiency and increase prediction accuracy in terms of genotypes, experimental design and environment sampling.
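
A minimal sketch of the genomic selection idea mentioned above: ridge regression from genome-wide markers to phenotype, fitted on simulated data with scikit-learn. This is only a baseline illustration, not the multivariate or GxE models the abstract describes; marker counts, effect sizes, and the penalty value are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated training population: 500 lines genotyped at 2,000 markers (0/1/2).
X = rng.integers(0, 3, size=(500, 2000)).astype(float)
true_effects = rng.normal(0, 0.05, size=2000)         # small additive marker effects
y = X @ true_effects + rng.normal(0, 1.0, size=500)   # phenotype = genetics + noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Ridge regression shrinks all marker effects toward zero, a common GS baseline
# closely related to GBLUP.
gs_model = Ridge(alpha=100.0).fit(X_train, y_train)

# Prediction accuracy in GS is usually reported as the correlation between
# predicted and observed values for held-out (non-phenotyped) individuals.
accuracy = np.corrcoef(gs_model.predict(X_test), y_test)[0, 1]
print(f"prediction accuracy r = {accuracy:.2f}")
```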

  • PDF

A Causational Study for Urban 4-legged Signalized Intersections using Structural Equation Method (구조방정식을 이용한 도시부 4지 신호교차로의 사고원인 분석)

  • Oh, Jutaek;Lee, Sangkyu;Heo, Taeyoung;Hwang, Jeongwon
    • International Journal of Highway Engineering
    • /
    • v.14 no.6
    • /
    • pp.121-129
    • /
    • 2012
  • PURPOSES: Traffic accidents at intersections have increased annually, so their causes must be examined in order to reduce them. However, existing accident models have been developed mainly with non-linear regression models such as Poisson methods. While these non-linear regression methods are appropriate for studying the randomness and non-linearity of accidents, they cannot reveal the complicated causal relationships underlying traffic accidents. Therefore, to reveal these complicated causal relationships, this study used structural equation modeling (SEM). METHODS: SEM is a statistical technique for estimating causal relations using a combination of statistical data and qualitative causal assumptions. SEM allows exploratory modeling, meaning it is suited to theory development. The method is tested against the measured data to determine how well the model fits. Among the strengths of SEM is the ability to construct latent variables: variables which are not measured directly but are estimated in the model from several measured variables. This allows the modeler to explicitly capture measurement unreliability in the model, which in turn allows the structural relations between latent variables to be estimated accurately. RESULTS: The study results showed that the causal factors could be grouped into three. Factor 1 includes traffic variables, Factor 2 contains turning-traffic variables, and Factor 3 consists of other road element variables such as speed limits or signal cycles. CONCLUSIONS: Non-linear regression models can be used to develop accident prediction models, but they are poorly suited to estimating causal factors because they select only a few significant variables to raise model accuracy. Compared to regression, SEM is better able to estimate the causal factors affecting accidents because it models the structural relations between latent variables. Therefore, this study used SEM to estimate the causal factors affecting accidents at urban signalized intersections.
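
A minimal sketch of the latent-variable idea described under METHODS, using the semopy package on simulated data. The variable names and model specification here are hypothetical and do not reproduce the paper's intersection model.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 300

# Simulated data: a latent "traffic exposure" factor drives three measured
# indicators, and the latent factor in turn drives accident counts.
exposure = rng.normal(size=n)
data = pd.DataFrame({
    "aadt":       exposure + rng.normal(scale=0.5, size=n),  # total entering traffic
    "left_turn":  exposure + rng.normal(scale=0.5, size=n),
    "right_turn": exposure + rng.normal(scale=0.5, size=n),
    "accidents":  0.7 * exposure + rng.normal(scale=0.5, size=n),
})

# Measurement model (=~) defines the latent variable from its indicators;
# structural model (~) regresses accidents on the latent variable.
desc = """
traffic_exposure =~ aadt + left_turn + right_turn
accidents ~ traffic_exposure
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # loadings, path coefficient, and their significance
```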

International case study comparing PSA modeling approaches for nuclear digital I&C - OECD/NEA task DIGMAP

  • Markus Porthin;Sung-Min Shin;Richard Quatrain;Tero Tyrvainen;Jiri Sedlak;Hans Brinkman;Christian Muller;Paolo Picca;Milan Jaros;Venkat Natarajan;Ewgenij Piljugin;Jeanne Demgne
    • Nuclear Engineering and Technology
    • /
    • v.55 no.12
    • /
    • pp.4367-4381
    • /
    • 2023
  • Nuclear power plants are increasingly being equipped with digital I&C systems. Although some probabilistic safety assessment (PSA) models for the digital I&C of nuclear power plants have been constructed, there is currently no specific internationally agreed guidance for their modeling. This paper presents an initiative by the OECD Nuclear Energy Agency called "Digital I&C PSA - Comparative application of DIGital I&C Modelling Approaches for PSA (DIGMAP)", which aimed to advance the field towards practical and defensible modeling principles. The task, carried out in 2017-2021, used a simplified description of a plant focusing on the digital I&C systems important to safety, for which the participating organizations independently developed their own PSA models. Through comparison of the PSA models, sensitivity analyses, and observations throughout the whole activity, both qualitative and quantitative lessons were learned. These include insights into the failure behavior of digital I&C systems, experience from models with different levels of abstraction, the benefits of benchmarking, and identification of the major and minor contributors to the core damage frequency. The study also highlighted the challenges of modeling large common cause component groups and the difficulties associated with estimating key software and common cause failure parameters.
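
For readers unfamiliar with PSA quantification, the sketch below shows how minimal cut sets are combined into a core damage frequency under the rare-event approximation. The basic events, probabilities, and cut sets are purely hypothetical and are not taken from any participant's DIGMAP model.

```python
# Hypothetical basic-event values (illustrative only).
basic_events = {
    "initiating_event": 1e-2,  # demand frequency (per year)
    "io_sw_ccf": 1e-4,         # common cause failure of redundant I&C software
    "sensor_a": 1e-3,          # failure probability of sensor A
    "sensor_b": 1e-3,          # failure probability of sensor B
}

# Hypothetical minimal cut sets: each is a set of events that together lead to core damage.
minimal_cut_sets = [
    {"initiating_event", "io_sw_ccf"},            # software CCF defeats the redundancy
    {"initiating_event", "sensor_a", "sensor_b"}, # both redundant sensors fail
]

def cut_set_frequency(cut_set):
    freq = 1.0
    for event in cut_set:
        freq *= basic_events[event]
    return freq

# Rare-event approximation: CDF is roughly the sum of the cut-set frequencies.
cdf = sum(cut_set_frequency(cs) for cs in minimal_cut_sets)
for cs in sorted(minimal_cut_sets, key=cut_set_frequency, reverse=True):
    print(sorted(cs), f"{cut_set_frequency(cs):.2e}")
print(f"core damage frequency ~ {cdf:.2e} /year")
```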

Assessment of LODs and Positional Accuracy for 3D Model based on UAV Images (무인항공영상 기반 3D 모델의 세밀도와 위치정확도 평가)

  • Lee, Jae One;Kim, Doo Pyo;Sung, Sang Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.10
    • /
    • pp.197-205
    • /
    • 2020
  • Compared to aerial photogrammetry, UAV photogrammetry has advantages in acquiring and utilizing high-resolution images more quickly. The production of 3D models using UAV photogrammetry has become an important issue at a time when the applications of 3D spatial information are proliferating. Therefore, this study assessed the feasibility of utilizing 3D models produced by UAV photogrammetry through quantitative and qualitative analyses. The qualitative analysis was performed in accordance with the LODs (Levels of Detail) specified in the 3D Land Spatial Information Construction Regulation. The results showed that features on flat surfaces have a high LOD, while features with elevation differences have a low LOD due to occlusion areas and parallax. The quantitative analysis was performed using the 3D coordinates obtained from the CPs (checkpoints) and the edges of nearby structures. The mean errors of the residuals at the CPs were 0.042 m to 0.059 m in the horizontal and 0.050 m to 0.161 m in the vertical coordinates, while the mean errors at the structures' edges were 0.068 m and 0.071 m in the horizontal and vertical coordinates, respectively. Therefore, this study confirmed the potential of 3D models from UAV photogrammetry for applications such as digital twins, slope analysis, and BIM (Building Information Modeling).
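
The accuracy figures above come from checkpoint residuals; the sketch below shows how mean horizontal/vertical errors and RMSE are typically computed from such residuals. The coordinate values are hypothetical, not the study's survey data.

```python
import numpy as np

# Hypothetical checkpoint (CP) coordinates in metres (E, N, H):
# GNSS-surveyed reference vs. values measured on the UAV-based 3D model.
reference = np.array([[ 0.000,  0.000, 10.000],
                      [50.120, 20.340, 12.500],
                      [80.450, 60.210, 11.300]])
model_xyz = np.array([[ 0.031, -0.040, 10.062],
                      [50.160, 20.301, 12.430],
                      [80.410, 60.260, 11.210]])

residuals = model_xyz - reference
horizontal = np.linalg.norm(residuals[:, :2], axis=1)  # planimetric error per CP
vertical = np.abs(residuals[:, 2])                     # height error per CP

print(f"mean horizontal error: {horizontal.mean():.3f} m")
print(f"mean vertical error:   {vertical.mean():.3f} m")
print(f"horizontal RMSE:       {np.sqrt((horizontal**2).mean()):.3f} m")
print(f"vertical RMSE:         {np.sqrt((vertical**2).mean()):.3f} m")
```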

The Study on Possibility of Applying Word-Level Word Embedding Model of Literature Related to NOS -Focus on Qualitative Performance Evaluation- (과학의 본성 관련 문헌들의 단어수준 워드임베딩 모델 적용 가능성 탐색 -정성적 성능 평가를 중심으로-)

  • Kim, Hyunguk
    • Journal of Science Education
    • /
    • v.46 no.1
    • /
    • pp.17-29
    • /
    • 2022
  • The purpose of this study is to examine qualitatively how efficiently and reasonably a computer can learn themes related to the Nature of Science (NOS). To this end, a corpus was constructed from literature related to NOS (920 abstracts), and the optimal hyperparameters of Word2Vec (CBOW, Skip-gram) were determined. The word-level word embeddings were then evaluated comparatively across the four dimensions of NOS (Inquiry, Thinking, Knowledge, and STS). Based on previous studies and a preliminary performance evaluation, the CBOW model was set to 200 dimensions, five threads, a minimum frequency of ten, 100 training iterations, and a context window of one, while the Skip-gram model was set to 200 dimensions, five threads, a minimum frequency of ten, 200 training iterations, and a context window of three. When the models were applied to the four dimensions of NOS, Skip-gram showed better performance in the Inquiry dimension in terms of the types of words with high similarity. In the Thinking and Knowledge dimensions there was no difference in embedding performance between the two models, but the high-similarity words for each model shared the names of each other's domain, so additional models appear necessary for proper learning. The STS dimension was also judged to have insufficient embedding performance for examining comprehensive STS elements, listing an excessive number of words related to problem solving. By having a computer learn themes related to NOS, this study is expected to offer broad implications regarding the models available for science education and the utilization of artificial intelligence.
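
The hyperparameters reported above map directly onto gensim's Word2Vec interface; the sketch below shows the two configurations on a toy stand-in corpus. The corpus tokens are hypothetical, so only the parameter mapping, not the results, reflects the study.

```python
from gensim.models import Word2Vec

# Toy corpus standing in for the 920 tokenized NOS abstracts (hypothetical tokens).
corpus = [
    ["inquiry", "observation", "experiment", "evidence"],
    ["scientific", "knowledge", "tentative", "theory", "law"],
    ["science", "technology", "society", "problem", "solution"],
] * 50  # repeated so the min_count threshold can be met

# CBOW settings as reported in the abstract: 200 dimensions, 5 threads,
# minimum frequency 10, 100 iterations, context window 1 (sg=0 selects CBOW).
cbow = Word2Vec(corpus, vector_size=200, workers=5, min_count=10,
                epochs=100, window=1, sg=0)

# Skip-gram settings: 200 dimensions, 5 threads, minimum frequency 10,
# 200 iterations, context window 3 (sg=1 selects Skip-gram).
skipgram = Word2Vec(corpus, vector_size=200, workers=5, min_count=10,
                    epochs=200, window=3, sg=1)

# Qualitative evaluation as in the study: inspect which words are most similar
# to a dimension-related cue word under each model.
print(cbow.wv.most_similar("inquiry", topn=5))
print(skipgram.wv.most_similar("inquiry", topn=5))
```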

Transgenic Mutagenesis Assay to Elucidate the Mechanism of Mutation at the Gene Level (유전자수준에서 돌연변이 유발기전을 밝히는 Transgenic Mutagenesis Assay)

  • Ryu, Jae-Chun;Youn, Ji-Youn;Cho, Kyung-Hae;Chang, Il-Moo
    • Environmental Mutagens and Carcinogens
    • /
    • v.18 no.1
    • /
    • pp.15-21
    • /
    • 1998
  • Transgenic animal and cell line models, recently developed and used in toxicology in combination with molecular biological techniques, are powerful tools for studying the mechanism of mutation in vivo and in vitro, respectively. Transgenic models, which have exogenous DNA incorporated into their genome, carry a recoverable shuttle vector containing reporter genes to assess endogenous effects or alterations in specific genes related to disease processes. The lacI and lacZ genes are the most widely used mutational targets in transgenic systems. The assay is performed by treatment with putative mutagenic agents, isolation of genomic DNA from cells or tissues, exposure of the isolated DNA to an in vitro packaging extract, plating, and sequencing. The results of these processes provide not only the mutant frequency as a quantitative evaluation but also the mutational spectrum as a qualitative evaluation of various agents. We therefore introduce and review the principle, detailed procedure, and applications of the transgenic mutagenesis assay system in toxicology, especially in mutagenesis and carcinogenesis.
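
The quantitative endpoint mentioned above, the mutant frequency, is a simple ratio of mutant to total plaques; the sketch below shows that calculation with hypothetical plaque counts.

```python
# Hypothetical plaque counts from a lacI/lacZ transgenic mutagenesis assay.
treatments = {
    "vehicle control": {"mutant_plaques": 12, "total_plaques": 400_000},
    "test agent":      {"mutant_plaques": 95, "total_plaques": 380_000},
}

frequencies = {}
for name, counts in treatments.items():
    # Mutant frequency = mutant plaques / total plaques screened.
    frequencies[name] = counts["mutant_plaques"] / counts["total_plaques"]
    print(f"{name}: mutant frequency = {frequencies[name]:.2e}")

# Fold induction over the control, a common way to express the quantitative result.
print(f"fold induction = {frequencies['test agent'] / frequencies['vehicle control']:.1f}")
```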

Two-layer Investment Decision-making Using Knowledge about Investor′s Risk-preference: Model and Empirical Testing.

  • Won, Chaehwan;Kim, Chulsoo
    • Management Science and Financial Engineering
    • /
    • v.10 no.1
    • /
    • pp.25-41
    • /
    • 2004
  • There have been many studies aiming to build models that help investors construct an optimal portfolio. Most of the previous models, however, are based upon the path-breaking Markowitz model (1959), which is a quantitative model. One of the most important problems with this kind of quantitative model is that, in reality, most investors use not only quantitative but also qualitative information when they select their optimal portfolio. Since collecting both types of information from the markets is time-consuming and expensive, reducing the set of target assets without suffering a heavy loss in the rate of return would attract investors. To extract only the desired assets among all available assets, we need knowledge that identifies investors' preferences for the risk of the assets. This study suggests two-layer decision-making rules capable of identifying an investor's risk preference and an architecture that applies them to a quantitative portfolio model based on risk and expected return. Our knowledge-based portfolio system builds an investor's preference-oriented portfolio. Empirical tests using data from Korean capital markets show that our model contributes significantly to constructing a better portfolio, in terms of an investor's benefit/cost ratio, than those produced by existing portfolio models.
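
A minimal sketch of the two-layer idea described above: a knowledge-based first layer filters assets by the investor's risk preference, and a standard Markowitz-style mean-variance layer then weights the survivors. The asset names, the risk-preference rule, and all parameters are hypothetical; this is not the paper's rule set.

```python
import numpy as np
from scipy.optimize import minimize

# Layer 1 (knowledge-based filter): keep only assets whose volatility is
# acceptable for the investor's stated risk preference.
expected_return = {"A": 0.08, "B": 0.12, "C": 0.15, "D": 0.05}
volatility      = {"A": 0.10, "B": 0.20, "C": 0.35, "D": 0.06}
max_acceptable_vol = 0.25  # hypothetical rule for a "moderate" investor
assets = [a for a in expected_return if volatility[a] <= max_acceptable_vol]

# Layer 2 (quantitative): minimum-variance weights over the filtered assets,
# subject to a target expected return, i.e. a small Markowitz problem.
mu = np.array([expected_return[a] for a in assets])
sig = np.array([volatility[a] for a in assets])
corr = np.full((len(assets), len(assets)), 0.3) + 0.7 * np.eye(len(assets))
cov = corr * np.outer(sig, sig)
target = 0.09

def portfolio_variance(w):
    return w @ cov @ w

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
               {"type": "eq", "fun": lambda w: w @ mu - target}]
bounds = [(0.0, 1.0)] * len(assets)
w0 = np.full(len(assets), 1.0 / len(assets))
result = minimize(portfolio_variance, w0, bounds=bounds, constraints=constraints)

print(dict(zip(assets, result.x.round(3))))
```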

Experimental investigations on the structural behaviour of a distressed bridge

  • Dar, M.A.;Subramanian, N.;Dar, A.R.;Raju, J.
    • Structural Engineering and Mechanics
    • /
    • v.56 no.4
    • /
    • pp.695-705
    • /
    • 2015
  • Distressed structures require remedial measures to restore their original structural properties, such as strength and stiffness. Experimentally validating the effectiveness of a proposed qualitative remedial measure is of utmost importance because there is no well-established analytical method to verify its effectiveness quantitatively. Prototype testing, which would have been the best option for this purpose, would not only prove costly but also involve numerous practical difficulties; hence model testing was resorted to as the only option. This paper presents one such experimental study on the structural behavior of a distressed bridge, whose distress was mainly observed as a prominent tilt of the bearing plate in the transverse and longitudinal directions on the downstream side. The main focus of the experimental investigation is to assess the structural behavior, particularly the load-carrying capacity. The extent of deformation of models with specific structural arrangements and of models with specific need-based remedial measures was also studied. This study also assessed the contribution of each remedial measure towards restoration, individually and collectively.

Self-terminated carbonation model as a useful support for durable concrete structure designing

  • Woyciechowski, Piotr P.;Sokolowska, Joanna J.
    • Structural Engineering and Mechanics
    • /
    • v.63 no.1
    • /
    • pp.55-64
    • /
    • 2017
  • The paper concerns concrete carbonation, a phenomenon that occurs in every type of climate, especially in urban-industrial areas. In European standards, including the Eurocodes (EC) for concrete structures, the required durability of a structure exposed to carbonation is mainly assured by selecting a suitable thickness of the reinforcement cover. According to EC0 and EC2, the cover thickness in a particular exposure class depends on the structural class/category and the concrete compressive strength class, which is determined by cement content and water-cement ratio (thus the quantitative composition), but it is not differentiated for various cements, additives (i.e., the qualitative composition), or technological types of concrete. As a consequence, the selected concrete cover thickness is in fact a rough estimate - sometimes too safe, sometimes too risky. The paper presents the elaborated "self-terminated carbonation model", which includes the abovementioned factors and makes it possible to indicate the maximal possible depth of carbonation. This is possible because the presented model is a hyperbolic function of carbonation depth in time (other models published in the literature use a parabolic function that theoretically assumes an infinite increase of the carbonation depth). The paper discusses the presented model in comparison with other models published in the literature; moreover, it contains an algorithm for concrete cover design using the model as well as an example calculation of the cover thickness.
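
The contrast drawn above, a parabolic (square-root-of-time) law that grows without bound versus a hyperbolic, self-terminating law that approaches a maximum depth, is easy to see numerically. The coefficients below are hypothetical illustrations, not the paper's fitted values.

```python
import numpy as np

def parabolic_depth(t_years, k=2.0):
    # Classical model: depth grows with the square root of time, without bound (mm).
    return k * np.sqrt(t_years)

def self_terminating_depth(t_years, x_max=18.0, t_half=15.0):
    # Hyperbolic model: depth approaches a finite maximum x_max as t grows (mm).
    return x_max * t_years / (t_half + t_years)

for t in (10, 25, 50, 100):
    print(f"t = {t:3d} y  parabolic = {parabolic_depth(t):5.1f} mm  "
          f"self-terminating = {self_terminating_depth(t):5.1f} mm")

# Cover design idea: with a self-terminating model, the cover only needs to exceed
# the maximal possible carbonation depth plus a safety margin (values hypothetical).
safety_margin = 10.0
print(f"required cover ~ {18.0 + safety_margin:.0f} mm")
```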