• Title/Summary/Keyword: Probability of Success

Analysis of the Impact Relationship for Risk Factors on Big Data Projects Using SNA (SNA를 활용한 빅데이터 프로젝트의 위험요인 영향 관계 분석)

  • Park, Dae-Gwi;Kim, Seung-Hee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.1
    • /
    • pp.79-86
    • /
    • 2021
  • In order to increase the probability of success in big data projects, quantified techniques are required to analyze the root causes of risks arising from complex causes and to establish optimal countermeasures. To this end, this study measures risk factors and their relationships through SNA and presents a way to respond to risks based on them. Specifically, it derives a dependency network matrix from the results of the correlation analysis between risk groups in big data projects presented in a preliminary study and performs SNA on it. To derive the dependency network matrix, partial correlations are obtained from the correlations between risk nodes, and activity dependencies are derived per node by calculating correlation influence and correlation dependency, thereby producing the causal relationships between risk nodes and the degree of influence among all correlated nodes. Recognizing the root cause of risks from the network of risk factors derived through SNA enables more optimized and efficient risk management. This study is the first to apply SNA techniques to risk-management response, and its results are significant in that they not only optimize the sequence of risk management for major risks in IT projects but also present a new risk analysis technique for risk control.
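
The pipeline the abstract describes (correlations → partial correlations → per-node influence) can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration, not the authors' actual procedure: it derives partial correlations from a correlation matrix via the precision (inverse) matrix and scores each risk node with a simple weighted-degree centrality; the 3x3 matrix and the centrality choice are assumptions.

```python
import numpy as np

def partial_correlations(corr):
    """Partial correlation matrix from a correlation matrix,
    computed via the precision (inverse) matrix."""
    prec = np.linalg.inv(corr)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Hypothetical correlation matrix for three risk groups
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])

pcorr = partial_correlations(corr)
# A simple "influence" score per risk node: the sum of absolute
# partial correlations with every other node (weighted degree
# centrality on the dependency network); assumed, not the paper's metric.
influence = np.abs(pcorr).sum(axis=1) - 1.0
print(pcorr.round(3))
print(influence.round(3))
```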

Determine Optimal Timing for Out-Licensing of New Drugs in the Aspect of Biotech (신약의 기술이전 최적시기 결정 문제 - 바이오텍의 측면에서)

  • Na, Byungsoo;Kim, Jaeyoung
    • Knowledge Management Research
    • /
    • v.21 no.3
    • /
    • pp.105-121
    • /
    • 2020
  • With regard to the development of new drugs, the most important issue for a Korean Biotech with no established global sales network is decision-making related to the out-licensing of new drugs. The probability of success differs for each clinical phase, and the licensing amount and its royalty vary depending on the clinical phase at which the licensing contract is made. Given the nature of such contracts and a Biotech's weak financial position, determining when to license out to a Big Pharma is a very important decision-making problem for a Biotech. This study defined a model called 'optimal timing for out-licensing of new drugs,' and results were derived from decision tree analysis. As a case study, we applied the model to a Biotech in Korea that is conducting FDA global clinical trials for a first-in-class new drug. Assuming that the market size and expected market penetration rate of the target disease are known, out-licensing after phase 1 or phase 2 of clinical trials was shown to be the best alternative for maximizing the Biotech's profits. This study provides a conceptual framework for applying management science methodologies in pharmaceutical fields, laying a foundation for knowledge and research on the out-licensing of new drugs.
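
To make the decision problem concrete, here is a minimal decision-tree sketch in Python. All success probabilities, deal values, and trial costs are illustrative placeholders, not the paper's figures; the tree simply compares the expected value of licensing out after each clinical phase.

```python
# Hypothetical decision-tree sketch: expected value (EV) of licensing
# out after each clinical phase. Every number below is invented for
# illustration only.

phases = ["preclinical", "phase1", "phase2", "phase3"]
p_success  = {"preclinical": 0.60, "phase1": 0.55, "phase2": 0.35, "phase3": 0.50}
deal_value = {"preclinical": 10, "phase1": 40, "phase2": 150, "phase3": 300}  # $M, upfront + milestones
trial_cost = {"preclinical": 2, "phase1": 5, "phase2": 20, "phase3": 80}      # $M borne by the Biotech

def expected_value(license_after):
    """EV of running every phase up to `license_after` and then licensing
    at that phase's deal value; costs are paid only while the drug survives."""
    ev_cost, p_alive = 0.0, 1.0
    for ph in phases:
        ev_cost += p_alive * trial_cost[ph]  # cost incurred only if still alive
        p_alive *= p_success[ph]
        if ph == license_after:
            return p_alive * deal_value[ph] - ev_cost
    raise ValueError(license_after)

for ph in phases:
    print(f"license after {ph}: EV = {expected_value(ph):.1f} $M")
print("best timing:", max(phases, key=expected_value))
```

With these placeholder numbers the maximum lands on phase 1, consistent with the abstract's finding that licensing after phase 1 or phase 2 maximizes the Biotech's profit.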

Integrated stratigraphy approach for new additional limestone reserves in the Paleozoic Taebacksan Basin, Korea (고생대 태백산 분지 석회석 자원의 신규 추가 매장량 확보를 위한 통합 층서적 접근)

  • 유인창
    • Economic and Environmental Geology
    • /
    • v.36 no.2
    • /
    • pp.59-74
    • /
    • 2003
  • Prospecting for energy and mineral resources is an essential public undertaking that underpins the nation's economy. Most exploration in the past concentrated on simple structural traps at relatively shallow depths. Because these have been extensively exploited, the emphasis in exploration has steadily shifted toward subtle stratigraphic traps at deeper levels. Increasing exploration for subtle stratigraphic traps at depth requires precise correlation and assessment of deeply buried strata in the basin. However, the descriptive stratigraphic principles used to evaluate simple structural traps are of limited use in delineating subtle stratigraphic traps at depth. It is therefore imperative to establish a new stratigraphic paradigm that allows a more sophisticated understanding of basin stratigraphy. This study provides an exemplary application of an integrated stratigraphic approach to defining the basin history of the Middle Ordovician Taebacksan Basin, Korea. The integrated stratigraphic approach gives much better insight into the stratigraphic response to the tectonic evolution of basins, which can be utilized for enhancing the efficiency of resource exploration and development. Thus, the integrated stratigraphic approach should be emphasized as a new stratigraphic norm that can improve the probability of success in any type of resource exploration and development project.

Risk Identification and Priority method for Overseas LNG Plant Projects - Focusing on Design Phase - (해외 LNG 플랜트 리스크요인 도출 및 우선순위 평가 - 설계단계를 중심으로 -)

  • Jang, Woo-Sik;Hong, Hwa-Uk;Han, Seung-Heon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.12 no.5
    • /
    • pp.146-154
    • /
    • 2011
  • Korean contractors have maintained sustainable growth since first entering the overseas construction market in the 1960s. In 2010, Korean contractors won 71.6 billion (USD) in orders from overseas markets; in particular, overseas plant construction accounted for more than 80% of that total. Nevertheless, many Korean contractors suffer from a lack of technological competitiveness and construction management skills in the design phase compared with leading global contractors, conditions that directly affect the success of projects in terms of cost, duration, and quality. This study therefore focused on identifying risk factors and developing a risk priority method for the design phase of LNG plant projects, a market that is expanding. The research proceeded in three steps. First, a total of 57 risk factors in the design phase were identified through extensive literature review and expert surveys. Second, the authors developed a risk priority method suited to the design phase of LNG plant projects using three criteria: Probability (P), Impact (I), and Coordination Index (CI). Finally, the suitability and practical applicability of the risk priority method were verified through expert surveys and interviews. Consequently, if Korean contractors use the suggested risk factors and priority method together with their own know-how and experience, more reasonable and rational risk management can be conducted in the design phase of LNG plant projects.
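
The abstract names the three ranking criteria (P, I, CI) but not how they are aggregated. A minimal sketch under the assumption of a simple product score might look like the following; the risk factors and values are invented for illustration, and the paper may weight the criteria differently.

```python
# Hypothetical sketch of a P/I/CI risk-priority ranking.
# The aggregation rule (a plain product) is an assumption.

risks = [
    # (risk factor, Probability, Impact, Coordination Index) -- illustrative
    ("ambiguous owner requirements", 0.7, 0.9, 0.8),
    ("late vendor data",             0.5, 0.6, 0.9),
    ("design interface mismatch",    0.4, 0.8, 0.7),
]

def priority(p, i, ci):
    return p * i * ci  # assumed aggregation, not the paper's formula

for name, p, i, ci in sorted(risks, key=lambda r: -priority(*r[1:])):
    print(f"{priority(p, i, ci):.3f}  {name}")
```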

Duty Cycle Scheduling considering Delay Time Constraints in Wireless Sensor Networks (무선네트워크에서의 지연시간제약을 고려한 듀티사이클 스케쥴링)

  • Vu, Duy Son;Yoon, Seokhoon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.2
    • /
    • pp.169-176
    • /
    • 2018
  • In this paper, we consider duty-cycled wireless sensor networks (WSNs) in which sensor nodes are periodically dormant in order to reduce energy consumption. In such networks, as the duty-cycle interval increases, energy consumption decreases; however, a longer duty-cycle interval increases the end-to-end (E2E) delay. Many WSN applications are delay-sensitive and require packets to be delivered from the sensor nodes to the sink within delay requirements. Most existing studies focus only on reducing the E2E delay rather than satisfying a delay-bound requirement, which makes it hard to balance E2E delay against energy consumption. The few studies that do consider a delay-bound requirement demand time synchronization between neighboring nodes or a specific distribution of deployed nodes. To address these limitations, we propose a duty-cycle scheduling algorithm that aims to achieve low energy consumption while satisfying the delay requirements. To that end, we first estimate the probability distribution of the E2E delay. Then, using the obtained distribution, we determine the maximal duty-cycle interval that still satisfies the delay constraint. Simulation results show that the proposed design satisfies the given delay-bound requirements while achieving low energy consumption.
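
As a rough illustration of the approach (estimate the E2E delay distribution, then pick the largest interval that still meets the bound), the sketch below assumes, for simplicity, that each hop adds a waiting delay uniform on [0, interval]; the hop count, delay bound, and violation budget are hypothetical, and the paper's actual delay model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_delay_ok(interval, hops, bound, n=20_000):
    """Monte-Carlo estimate of P(E2E delay <= bound), assuming each hop
    waits a time uniform on [0, interval] until the next node wakes up
    (a simplification, not the paper's exact model)."""
    per_hop = rng.uniform(0.0, interval, size=(n, hops))
    return np.mean(per_hop.sum(axis=1) <= bound)

hops, bound, eps = 8, 2.0, 0.05  # hypothetical: 8 hops, 2 s bound, 5% violation budget

# Scan candidate intervals and keep the largest one that still meets the
# delay requirement; a larger interval means lower energy consumption.
feasible = [T for T in np.arange(0.05, 1.01, 0.05)
            if p_delay_ok(T, hops, bound) >= 1.0 - eps]
print("max duty-cycle interval meeting the bound:",
      round(max(feasible), 2) if feasible else None)
```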

Improvement and Validation of Convective Rainfall Rate Retrieved from Visible and Infrared Image Bands of the COMS Satellite (COMS 위성의 가시 및 적외 영상 채널로부터 복원된 대류운의 강우강도 향상과 검증)

  • Moon, Yun Seob;Lee, Kangyeol
    • Journal of the Korean earth science society
    • /
    • v.37 no.7
    • /
    • pp.420-433
    • /
    • 2016
  • The purpose of this study is to improve the calibration matrixes of 2-D and 3-D convective rainfall rates (CRR) using the brightness temperature of the infrared 10.8 μm channel (IR), the difference in brightness temperature between the infrared 10.8 μm and water vapor 6.7 μm channels (IR-WV), and the normalized reflectance of the visible channel (VIS) from the COMS satellite, together with rainfall rates from weather radar, for 75 rainy days from April 22 to October 22, 2011 in Korea. Weather radar rainfall data for 24 rainy days in 2011 are used to validate the new 2-D and 3-D CRR calibration matrixes suited to the Korean peninsula. The 2-D and 3-D calibration matrixes provide basic and maximum CRR values (mm h⁻¹) by multiplying the rain probability matrix, calculated from the numbers of rainy and no-rain pixels in the associated 2-D (IR, IR-WV) and 3-D (IR, IR-WV, VIS) matrixes, by the mean and maximum rainfall rate matrixes, respectively; the latter are calculated by dividing the accumulated rainfall rate by the number of rainy pixels, and by the product of the maximum rain rate for the calibration period and the number of rain occurrences. Finally, new 2-D and 3-D CRR calibration matrixes are obtained experimentally from a regression analysis of the basic and maximum rainfall rate matrixes. As a result, the area of rainfall rates above 10 mm/h is enlarged in the new matrixes, and CRR appears in lower class ranges of the IR brightness temperature versus IR-WV brightness temperature difference matrixes than in the existing ones. Accuracy and categorical statistics were computed for CRR events during the period. The mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) of the new 2-D and 3-D CRR calibrations were smaller than those of the existing ones; the false alarm ratio decreased, the probability of detection increased slightly, and critical success index scores improved. To account for strong rainfall in events such as thunderstorms and typhoons, a moisture correction factor is applied. This factor is defined as the product of the total precipitable water and the relative humidity (PW·RH), a mean value between the surface and the 500 hPa level, obtained from a numerical model or COMS retrieval data. In this study, when the IR cloud-top brightness temperature is lower than 210 K and the relative humidity is greater than 40%, the moisture correction factor is empirically scaled from 1.0 to 2.0 based on PW·RH values. Consequently, applying this factor to the new 2-D and 3-D CRR calibrations yields ME, MAE, and RMSE values smaller than those of the new calibrations without it.
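
A schematic of the calibration step described above, under stated assumptions: the basic CRR is the per-bin rain probability times the per-bin mean rain rate, and the moisture correction scales linearly from 1.0 to 2.0 with PW·RH under cold, moist conditions. The bin counts and the PW·RH normalization constant below are invented for illustration; the paper's empirical scaling may differ.

```python
import numpy as np

def basic_crr(n_rainy, n_norainy, accum_rain):
    """Basic CRR matrix (mm/h) per (IR, IR-WV) class bin:
    rain probability in the bin times the mean rain rate there."""
    n_rainy = n_rainy.astype(float)
    total = n_rainy + n_norainy
    rain_prob = np.divide(n_rainy, total, out=np.zeros_like(n_rainy), where=total > 0)
    mean_rate = np.divide(accum_rain, n_rainy, out=np.zeros_like(n_rainy), where=n_rainy > 0)
    return rain_prob * mean_rate

def moisture_factor(pw_rh, ir_temp, rh, lo=1.0, hi=2.0, pw_rh_max=40.0):
    """Moisture correction factor: applied only when the IR cloud-top
    temperature is below 210 K and RH exceeds 40%; the linear mapping of
    PW*RH onto [1.0, 2.0] and pw_rh_max are assumptions."""
    if ir_temp >= 210.0 or rh <= 0.40:
        return 1.0
    return lo + (hi - lo) * min(pw_rh / pw_rh_max, 1.0)

# Tiny 2x2 example: bins of (IR temperature class) x (IR-WV class)
n_rainy    = np.array([[ 30,  5], [ 80, 20]])
n_norainy  = np.array([[170, 95], [ 20, 80]])
accum_rain = np.array([[ 90., 10.], [640., 60.]])  # mm/h summed over rainy pixels

crr = basic_crr(n_rainy, n_norainy, accum_rain)
print(crr * moisture_factor(pw_rh=25.0, ir_temp=205.0, rh=0.65))
```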

Technology Innovation Activity and Default Risk (기술혁신활동이 부도위험에 미치는 영향 : 한국 유가증권시장 및 코스닥시장 상장기업을 중심으로)

  • Kim, Jin-Su
    • Journal of Technology Innovation
    • /
    • v.17 no.2
    • /
    • pp.55-80
    • /
    • 2009
  • Technology innovation activity plays a pivotal role in building entry barriers against other firms and in achieving process improvements and new products, and these activities bring firms profit growth. Thus, technology innovation activity can reduce a firm's default risk. However, it can also increase default risk, because it demands a large investment of the firm's resources and its success is uncertain. The purpose of this study is to examine the effect of technology innovation activity on firms' default risk. The sample consists of manufacturing firms listed on the Korea Securities Market and the Kosdaq Market from January 1, 2000 to December 31, 2008. R&D intensity is used as a proxy for technology innovation activity. Default probability, which proxies default risk, is measured with Merton's (1974) debt pricing model. The main empirical results are as follows. First, technology innovation activity has a negative and significant effect on default risk in both the Korea Securities Market and the Kosdaq Market; in other words, technology innovation activity reduces firms' default risk. Second, it reduces default risk independent of firm size, firm age, and credit score. Third, robustness checks likewise show that technology innovation activity is an important factor in decreasing default risk. These results imply that managers should maintain continuous interest and investment in their firms' technology innovation activity, and that policymakers should design economic policies that promote it.
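
Since the abstract measures default probability with Merton's (1974) debt pricing model, a minimal sketch of that calculation may help: equity is treated as a call option on firm assets, and default occurs if asset value falls below the face value of debt at the horizon. The sketch assumes asset value and asset volatility are already known; in practice they are unobserved and must be backed out from equity data, a step omitted here, and the numbers are hypothetical.

```python
from math import log, sqrt
from scipy.stats import norm

def merton_pd(V, D, mu, sigma, T=1.0):
    """Merton (1974) default probability: P(V_T < D) at horizon T.
    V: current asset value, D: face value of debt,
    mu: asset drift, sigma: asset volatility."""
    d2 = (log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm.cdf(-d2)  # probability that assets end below the debt barrier

# Hypothetical firm: assets 150, debt 100, 8% drift, 30% asset volatility
print(f"1-year default probability: {merton_pd(150, 100, 0.08, 0.30):.4f}")
```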

Physicochemical Characteristics and Varietal Improvement Related to Palatability of Cooked Rice or Suitability to Food Processing in Rice (쌀 식미 및 가공적성에 관련된 이화학적 특성)

  • 최해춘
    • Proceedings of the Korean Journal of Food and Nutrition Conference
    • /
    • 2001.12a
    • /
    • pp.39-74
    • /
    • 2001
  • Efforts to enhance the grain quality of high-yielding japonica rice continued steadily through the 1980s and 1990s, alongside self-sufficiency in rice production and increasing demand for high-quality rice. During this time, considerable progress and success were achieved in the development of high-quality japonica cultivars and quality evaluation techniques, including the elucidation of the interrelationship between the physicochemical properties of the rice grain and the physical or palatability components of cooked rice. In the 1990s, some high-quality japonica cultivars and special rices suited to food processing, such as large-kernel, chalky-endosperm, aromatic, and colored rices, were developed, and their objective preference and utility were examined with a palatability meter, rapid visco analyzer, and texture analyzer. The water uptake rate and the maximum water absorption ratio showed significantly negative correlations with the K/Mg ratio and alkali digestion value (ADV) of milled rice. Rice samples with higher hot water absorption exhibited larger volume expansion of cooked rice. Harder rices with lower moisture content showed a higher rate of water uptake twenty minutes after soaking and a higher maximum water uptake ratio at room temperature. These water uptake characteristics were not associated with the protein and amylose contents of milled rice or with the palatability of cooked rice. The water/rice ratio (w/w) for optimum cooking averaged 1.52 for dry milled rices (12% wet basis), with a varietal range from 1.45 to 1.61, and the expansion ratio of milled rice after proper boiling averaged 2.63 (v/v). The major physicochemical components of the rice grain associated with the palatability of cooked rice were examined using japonica rice materials showing narrow varietal variation in grain size and shape, alkali digestibility, gel consistency, and amylose and protein contents, but considerable differences in the appearance and texture of cooked rice. The glossiness and gross palatability score of cooked rice were closely associated with the peak, hot paste, and consistency viscosities of the viscogram, with year-to-year differences. The high-quality variety "Ilpumbyeo" showed a smaller portion of amylose in the outer layer of the milled grain and a smaller, slower change in the iodine blue value of the extracted paste during twenty minutes of boiling. In scanning electron microscope images, this highly palatable rice also exhibited a very fine net structure in the outer layer and fine, spongy, well-swollen gelatinized starch granules in the inner layer and core of the cooked kernel, compared with a poorly palatable rice. The gross sensory score of cooked rice could be estimated, with a high coefficient of determination, by a multiple linear regression formula deduced from the relationships between the quality components mentioned above and the eating quality of cooked rice. The α-amylose-iodine method was adopted to check varietal differences in the retrogradation of cooked rice. The cultivars showing relatively slow retrogradation in aged cooked rice were Ilpumbyeo, Chucheongbyeo, Sasanishiki, Jinbubyeo, and Koshihikari, while a Tongil-type rice, Taebaegbyeo, and a japonica cultivar, Seomjinbyeo, showed relatively fast deterioration of cooked rice. In general, cultivars with better eating quality showed less retrogradation and more sponginess in cooled cooked rice. Varieties exhibiting less retrogradation in cooled cooked rice also showed higher hot viscosity and lower cool viscosity of rice flour in the amylogram. The sponginess of cooled cooked rice was closely associated with magnesium content and the volume expansion of cooked rice. The ratio of hardness change on cooling was negatively correlated with the amount of solids extracted during boiling and with the volume expansion of cooked rice. The major physicochemical properties of the rice grain closely related to palatability may thus be directly or indirectly associated with the retrogradation characteristics of cooked rice. Softer gel consistency and lower amylose content in milled rice gave a higher ratio of popped rice and larger bulk density of popping. Grains of greater hardness showed a relatively higher popping ratio, and more chalky or less translucent rices exhibited a lower ratio of intact popped brown rice. The potassium and magnesium contents of milled rice were negatively associated with the gross score for noodle making when mixed half-and-half with wheat flour, and rices better suited to noodle making released relatively less solid during boiling. Greater volume expansion of the batter for brown rice bread resulted in better loaf formation and more springiness in the bread, and higher-protein rices produced moister white rice bread. The springiness of rice bread was also significantly correlated with high amylose content and hard gel consistency. Completely chalky and large-grain rices showed better suitability for fermentation and brewing. Future breeding efforts on rice quality should focus on enhancing the palatability of cooked rice and marketing quality, as well as diversifying the morphological and physicochemical characteristics of the rice grain for various value-added rice food processing uses.
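
The abstract reports that the gross sensory score was estimated by a multiple linear regression on grain-quality components. A generic ordinary-least-squares sketch of that kind of fit is shown below; the predictor columns (amylose, protein, peak viscosity) and all values are entirely hypothetical, not the paper's data or coefficients.

```python
import numpy as np

# Hypothetical data: grain-quality components vs. sensory score of cooked
# rice. Columns: amylose (%), protein (%), peak viscosity (RVU).
X = np.array([[18.2, 6.5, 245.0],
              [19.0, 6.9, 230.0],
              [17.5, 6.2, 260.0],
              [20.1, 7.4, 210.0],
              [18.8, 6.7, 240.0]])
y = np.array([0.42, 0.18, 0.55, -0.30, 0.25])  # relative palatability score

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("intercept and coefficients:", coef.round(4))
print("R^2:", round(r2, 3))
```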

A prognosis discovering lethal-related genes in plants for target identification and inhibitor design (식물 치사관련 유전자를 이용하는 신규 제초제 작용점 탐색 및 조절물질 개발동향)

  • Hwang, I.T.;Lee, D.H.;Choi, J.S.;Kim, T.J.;Kim, B.T.;Park, Y.S.;Cho, K.Y.
    • The Korean Journal of Pesticide Science
    • /
    • v.5 no.3
    • /
    • pp.1-11
    • /
    • 2001
  • New technologies will have a large impact on the discovery of new herbicide sites of action. Genomics, combinatorial chemistry, and bioinformatics help take advantage of serendipity through the sequencing of huge numbers of genes or the synthesis of large numbers of chemical compounds. There are approximately 10^30 to 10^50 possible molecules in molecular space, of which only a fraction have been synthesized. Combining this potential with access to 50,000 plant genes in the future raises the probability of discovering new herbicidal sites of action. If 0.1, 1.0, or 10% of the total genes in a typical plant are valid herbicide targets, a plant with 50,000 genes would provide about 50, 500, or 5,000 targets, respectively. However, only 11 herbicide targets have been identified and commercialized. The successful design of novel herbicides depends on careful consideration of a number of factors, including target enzyme selection and validation, inhibitor design, and metabolic fate. Biochemical information can be used to identify enzymes whose inhibition produces lethal phenotypes, and identifying a lethal target site is an important step in this approach. Examining the characteristics of known targets provides crucial insight into what defines a lethal target. Recently, antisense RNA suppression of enzyme translation has been used to determine the genes required for toxicity, offering a strategy for identifying lethal target sites. After a lethal target is identified, detailed knowledge such as enzyme kinetics and protein structure may be used to design potent inhibitors, and various types of inhibitors may be designed for a given enzyme. Strategies for selecting new enzyme targets that give the desired physiological response upon partial inhibition include the identification of chemical leads and lethal mutants and the use of antisense technology. Enzyme inhibitors with agrochemical utility can be categorized into six major groups: ground-state analogues, group-specific reagents, affinity labels, suicide substrates, reaction intermediate analogues, and extraneous site inhibitors. In this review, examples of each category and their advantages and disadvantages are discussed. Target identification and construction of a potent inhibitor may not, by themselves, lead to an effective herbicide: the desired in vivo activity, uptake and translocation, and metabolism of the inhibitor should be studied in detail to assess the full potential of the target. Strategies for delivering the compound to the target enzyme and avoiding premature detoxification may include a proherbicidal approach, especially when inhibitors are highly charged or when selective detoxification or activation can be exploited. Utilizing differences in detoxification or activation between weeds and crops may enhance selectivity. Without a full appreciation of each of these facets of herbicide design, the chances of success with a target- or enzyme-driven approach are reduced.

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite their increasing importance, ontology developers still perceive construction as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of a project's success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO could be built within the given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations in selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it explains each development task sufficiently. We evaluated various ontology development methodologies against the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing a Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. It describes a very detailed approach to building an ontology at the conceptual level in a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for constructing GSO also had to be selected. We adopted OWL-DL as the development language because of its computational guarantees for consistency checking and classification, which are crucial in developing coherent and useful ontological models for very complex domains. Protégé-OWL was chosen as the development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent character. Based on the researchers' experience developing GSO, several issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified; we focus on the drawbacks of METHONTOLOGY and discuss how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; in practice, it is still difficult for such experts to develop a sophisticated ontology, especially when they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY lacks a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin a building project, but also that the project is likely to succeed. Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how specific tasks are allocated to different developer groups and how those tasks are combined once completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance the quality of ontologies. Sixth, METHONTOLOGY provides no evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal one, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the development process, consistent updating can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage and can therefore be considered a heavy methodology; adopting an agile approach would reinforce active communication among developers and reduce the documentation burden. Finally, the study concludes with contributions and practical implications. No previous research has addressed METHONTOLOGY issues from empirical experience; this study is an initial attempt, and several lessons learned from the development experience are discussed. The study also offers insights for researchers who want to design a more advanced ontology development methodology.
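
For a flavor of what a small slice of a graduation-screen ontology looks like in practice, here is a hypothetical sketch using rdflib rather than the paper's Protégé-OWL tooling; the ontology IRI, class names, and property names are invented for illustration and are not taken from the actual GSO.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

GSO = Namespace("http://example.org/gso#")  # hypothetical ontology IRI
g = Graph()
g.bind("gso", GSO)

# Classes: an illustrative slice of a graduation-screen domain.
for cls in ("Student", "Course", "Requirement"):
    g.add((GSO[cls], RDF.type, OWL.Class))

# Object property: a Student hasCompleted a Course.
g.add((GSO.hasCompleted, RDF.type, OWL.ObjectProperty))
g.add((GSO.hasCompleted, RDFS.domain, GSO.Student))
g.add((GSO.hasCompleted, RDFS.range, GSO.Course))

# Datatype property: minimum credits demanded by a Requirement.
g.add((GSO.minCredits, RDF.type, OWL.DatatypeProperty))
g.add((GSO.minCredits, RDFS.domain, GSO.Requirement))

print(g.serialize(format="turtle"))
```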