• Title/Summary/Keyword: quantitative models

2D-QSAR Analyses on the Binding Affinity Constants of Tetrahydropyrane and Tetrahydrofurane Analogues against Bovine Odorant Binding Protein and Prediction of Highly Active Molecules (Bovine Odorant Binding Protein에 대한 Tetrahydropyrane 및 Tetrahydrofurane 유도체들의 결합 친화력 상수에 관한 2D-QSAR 분석과 고활성 분자의 예측)

  • Park, Chang-Sik;Sung, Nack-Do
    • Reproductive and Developmental Biology
    • /
    • v.33 no.3
    • /
    • pp.119-123
    • /
    • 2009
  • Two-dimensional quantitative structure-activity relationship (2D-QSAR) models for the binding affinity constants ($p[Od.]_{50}$) between 2-cyclohexyltetrahydropyrane and 2-cyclohexyltetrahydrofurane analogues as substrates and bovine odorant binding protein (bOBP) as receptor were derived by multiple regression analysis and discussed. The statistical quality of the optimized 2D-QSAR model (5) was good (r = 0.907). According to the model, the binding affinity constants ($p[Od.]_{50}$) depended on the optimal value ($(TL)_{opt.} = 2.737$) of the total lipole (TL) of the substrate molecules. Based on these findings, the highly active compounds predicted by the optimized 2D-QSAR model (5) were the 2-(dimethylcyclohexyl)tetrahydropyrane molecule and its isomers. The binding affinity constants toward bOBP of the tetrahydrofuran-2-yl family of compounds depended on the hydrophobicity (logP) of the whole substrate molecule, whereas in the case of porcine odorant-binding protein (pOBP) the constants depended on the hydrophobicity ($\pi_X = \log P_X - \log P_H$) of the substituents (R) in the substrate molecules. Also, judging from the optimal values of the hydrophobic constants, the influence of hydrophobicity was approximately twice as large for bOBP as for pOBP (bOBP > pOBP).
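
A minimal sketch of how such a parabolic 2D-QSAR model might be fitted, assuming hypothetical descriptor values (total lipole TL and logP) and measured $p[Od.]_{50}$ activities rather than the paper's actual data set; the squared TL term is what yields an optimal lipole value analogous to $(TL)_{opt.}$ above.

```python
import numpy as np

# Hypothetical descriptors (total lipole TL, logP) and measured p[Od.]50
# values for a small set of analogues (illustrative numbers only).
TL    = np.array([1.8, 2.1, 2.5, 2.7, 2.9, 3.2, 3.6])
logP  = np.array([2.3, 2.6, 2.9, 3.1, 3.3, 3.6, 4.0])
pOd50 = np.array([3.1, 3.5, 3.9, 4.1, 4.0, 3.7, 3.2])

# Multiple regression for a parabolic QSAR model:
#   p[Od.]50 = a*TL + b*TL^2 + c*logP + d
X = np.column_stack([TL, TL**2, logP, np.ones_like(TL)])
coef, *_ = np.linalg.lstsq(X, pOd50, rcond=None)
a, b, c, d = coef

pred = X @ coef
r = np.corrcoef(pred, pOd50)[0, 1]   # correlation coefficient of the fit
TL_opt = -a / (2 * b)                # optimal total lipole from d(p)/d(TL) = 0
print(f"r = {r:.3f}, (TL)_opt = {TL_opt:.3f}")
```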

L.E.O. Satellite Power Subsystem Reliability Analysis

  • Zahran M.;Tawfik S.;Dyakov Gennady
    • Journal of Power Electronics
    • /
    • v.6 no.2
    • /
    • pp.104-113
    • /
    • 2006
  • Satellites have provided the impetus for the orderly development of reliability engineering research and analysis because they tend to have complex systems and hence acute problems. They were instrumental in developing mathematical models for reliability, as well as design techniques that permit quantitative specification, prediction and measurement of reliability. Reliability engineering is based on implementing measures that ensure an item will perform its mission successfully. The discipline of reliability engineering consists of two fundamental aspects: (1st) paying attention to details, and (2nd) handling uncertainties. This paper applies some of the basic concepts, formulas and examples of reliability theory, with emphasis on the practical reliability analysis of a Low Earth Orbit (LEO) micro-satellite power subsystem. Approaches for specifying and allocating the reliability of each element of the power system so as to meet the overall power-system reliability requirements, as well as detailed modeling and prediction of equipment/system reliability, are introduced. The results are processed and analyzed to form the final reliability figures for the satellite power system. They show that the Electric Power Subsystem (EPS) reliability meets the requirements with quad microcontrollers (MCs): two boards operating as main and cold-redundant units, each board containing two MCs in hot redundancy.
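
A minimal sketch of the kind of redundancy bookkeeping described above, assuming exponential failure laws, an illustrative MC failure rate, a 5-year mission and ideal standby switching (none of these values come from the paper); a Monte Carlo estimate is used so that the hot-redundant MC pair and the main/cold-redundant board pair can be combined without closed-form formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
mission_time = 5 * 8760.0   # assumed 5-year LEO mission, in hours
lam_mc = 2e-6               # assumed microcontroller failure rate [1/h]
n_trials = 200_000

# Each board carries two MCs in hot redundancy: the board works while at
# least one MC is alive.  The two boards operate as main + cold standby:
# the standby board only starts aging after the main board fails.
mc_life = rng.exponential(1 / lam_mc, size=(n_trials, 2, 2))  # (trial, board, MC)
board_life = mc_life.max(axis=2)                              # hot-redundant MC pair
system_life = board_life[:, 0] + board_life[:, 1]             # ideal cold standby
print(f"single MC reliability      ~ {np.mean(mc_life[:, 0, 0] > mission_time):.4f}")
print(f"quad-MC system reliability ~ {np.mean(system_life > mission_time):.4f}")
```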

Quantitative Characterization of Internal Fibrillation of Pulp Fiber

  • Won, Jong-Myoung;Lee, Jae-Hun
    • Journal of Korea Technical Association of The Pulp and Paper Industry
    • /
    • v.39 no.1 s.119
    • /
    • pp.1-7
    • /
    • 2007
  • Internal fibrillation of pulp fiber is an important factor affecting paper properties. It is usually introduced through several kinds of fiber modification by mechanical treatment such as refining, high-shear and/or high-consistency mixing. Unfortunately, there are no standardized methods that can characterize the extent of internal fibrillation and its contribution to paper properties. The purpose of this study is to find potential methods that can characterize the internal fibrillation of pulp fiber quantitatively. Softwood bleached kraft pulp was treated in a Hobart mixer to introduce internal fibrillation without significant fiber damage or external fibrillation. The extent of internal fibrillation increased with increasing mechanical-treatment consistency. Several fiber properties were measured to find potential means of characterizing and quantifying the internal fibrillation. Laminated area could not be used for quantifying internal fibrillation because of the effect of swelling and the different internal fibrillation behavior at different mechanical-treatment consistencies. Micro and macro internal fibrillation models were proposed to describe the different behavior of mechanical treatment at low and high pulp consistencies. The internal fibrillation showed good correlation with swelling of the fiber wall, a trend confirmed by measurements of wall thickness and/or fiber cross-sectional area. Therefore, internal fibrillation can possibly be described by indices indicating the change of wall thickness and/or cross-sectional area.
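
A minimal sketch of the kind of index the abstract points to, assuming hypothetical fiber-wall cross-sectional areas measured before and after mechanical treatment; the index is simply the relative change in mean wall cross-section, taken as a proxy for fiber-wall swelling.

```python
import numpy as np

# Hypothetical fiber-wall cross-sectional areas (um^2) before and after
# Hobart-mixer treatment (illustrative values only).
area_untreated = np.array([180.0, 195.0, 170.0, 188.0, 176.0])
area_treated   = np.array([205.0, 226.0, 198.0, 221.0, 204.0])

# Internal fibrillation index: relative increase of the mean wall cross-section.
index = (area_treated.mean() - area_untreated.mean()) / area_untreated.mean()
print(f"internal fibrillation index ~ {index:.1%}")
```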

A Quantitative Estimation of Welding Residual Stress Relaxation for Fatigue Strength Analysis (피로강도해석을 위한 용접잔류응력 이완의 정량적 평가)

  • Han, Seung-Ho;Lee, Tak-Kee;Shin, Byung-Chun
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.26 no.10
    • /
    • pp.2018-2025
    • /
    • 2002
  • It is well known that the strength and fatigue life of welded steel components are strongly affected by the welding residual stresses distributed around their weldments, under both monotonic and cyclic loads. Externally applied loads are superimposed on the welding residual stresses, so that unexpected deformations and failures of the components might occur. These residual stresses do not remain constant but are relaxed or redistributed in service. Under monotonic loads, relaxation takes place when the sum of the external and welding residual stresses locally exceeds the yield stress of the material. Under cyclic loads, however, the welding residual stress is considerably relieved by the first or early load cycles and then gradually relaxes with increasing number of cycles. Although many investigations in this field have been carried out, the phenomenon and mechanism of the stress relaxation are still not clear, and there are few comprehensive models to predict the amount of relaxed welding residual stress. In this study, the characteristics of welding residual stress relaxation under monotonic and cyclic loads were investigated, and a model to quantitatively predict the amount of welding residual stress relaxation was proposed.
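
A minimal sketch of the monotonic-load relaxation rule summarized above, assuming an elastic-perfectly-plastic material: when the superposition of the applied stress and the welding residual stress locally exceeds the yield stress, the residual stress is reduced to the capacity left over after unloading. This only illustrates the mechanism, not the quantitative model proposed in the paper.

```python
def relaxed_residual_stress(sigma_res: float, sigma_applied: float,
                            sigma_yield: float) -> float:
    """Residual stress remaining after one monotonic load application.

    Elastic-perfectly-plastic idealization: if applied + residual stress
    exceeds the yield stress, local yielding caps the total at sigma_yield,
    so the residual stress after unloading drops to sigma_yield - sigma_applied.
    """
    if sigma_applied + sigma_res > sigma_yield:
        return sigma_yield - sigma_applied
    return sigma_res

# Example: 280 MPa residual stress, 200 MPa applied stress, 355 MPa yield stress
print(relaxed_residual_stress(280.0, 200.0, 355.0))  # -> 155.0 MPa remains
```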

Using SG Arrays for Hydrology in Comparison with GRACE Satellite Data, with Extension to Seismic and Volcanic Hazards

  • Crossley David;Hinderer Jacques
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.1
    • /
    • pp.31-49
    • /
    • 2005
  • We first review some history of the Global Geodynamics Project (GGP), particularly the progress of ground-satellite gravity comparisons. The GGP Satellite Project has involved measurements from ground-based superconducting gravimeters (SGs) in Europe for several years, and we make quantitative comparisons with the latest GRACE satellite data and hydrological models. The primary goal is to recover information about seasonal hydrology cycles, and we find good correlation at the microgal level between the data and the modeling. One interesting feature of the data is the low soil moisture resulting from the European heat wave in 2003. An issue with the ground-based stations is the possibility of mass variations in the soil above a station; particularly for underground stations, these have to be modeled precisely. Based on this work with a regional array, we estimate the effectiveness of future SG arrays for measuring co-seismic deformation and silent-slip events. Finally, we consider gravity surveys in volcanic areas and predict the accuracy of modeling subsurface density variations over time periods from months to years.
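
A minimal sketch of why near-surface soil moisture matters at the microgal level, using the infinite Bouguer slab approximation for a water-equivalent layer; this is an idealization for scale only, not the station-specific modeling used in the paper.

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
RHO_WATER = 1000.0   # density of water [kg/m^3]

def bouguer_slab_ugal(water_thickness_m: float) -> float:
    """Gravity effect of an infinite slab of water-equivalent soil moisture.

    delta_g = 2*pi*G*rho*h, converted from m/s^2 to microgal
    (1 microgal = 1e-8 m/s^2).
    """
    return 2 * math.pi * G * RHO_WATER * water_thickness_m / 1e-8

# A 10 cm seasonal change in stored water is roughly 4 microgal, the order
# of the seasonal signals compared between the SG array and GRACE.
print(f"{bouguer_slab_ugal(0.10):.1f} microgal")
```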

Performance-based evaluation of strap-braced cold-formed steel frames using incremental dynamic analysis

  • Davani, M.R.;Hatami, S.;Zare, A.
    • Steel and Composite Structures
    • /
    • v.21 no.6
    • /
    • pp.1369-1388
    • /
    • 2016
  • This study is an effort to clearly identify the seismic damage that occurs in strap-braced cold-formed steel (CFS) frames. To this end, a detailed investigation was conducted on 9 full-scale strap-braced CFS walls, and the required data were derived from the experimental results. As a consequence, quantitative and qualitative damage indices are proposed for three seismic performance levels. Moreover, to assess the seismic performance of strap-braced CFS frames, a total of 8 models categorized into three types are utilized. Based on the experimental results, structural characteristics are calculated and all frames are modeled as single-degree-of-freedom systems. Incremental dynamic analysis in OpenSees is used to calculate the seismic demand of the strap-braced CFS walls. Finally, fragility curves are calculated based on the three damage limit states proposed in this paper. The results show that the use of cladding and other elements that contribute positively to the lateral stiffness and strength increases the efficiency of strap-braced CFS walls in seismic events.
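
A minimal sketch of how a fragility curve can be obtained from IDA results, assuming hypothetical spectral-acceleration capacities at which the SDOF wall models first exceed a damage limit state; fitting a lognormal CDF to those capacities is the usual convention, though not necessarily the exact procedure of the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Sa capacities (g) at which each SDOF model first exceeds a
# given damage limit state in the incremental dynamic analysis.
sa_capacity = np.array([0.42, 0.55, 0.61, 0.48, 0.70, 0.52, 0.66, 0.58])

# Lognormal fragility parameters: median theta, log-standard deviation beta.
ln_sa = np.log(sa_capacity)
theta = np.exp(ln_sa.mean())
beta = ln_sa.std(ddof=1)

def fragility(sa):
    """P(damage limit state exceeded | Sa) under the fitted lognormal model."""
    return norm.cdf(np.log(np.asarray(sa) / theta) / beta)

print(fragility([0.3, 0.5, 0.7]))
```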

Application of Diffraction Tomography to GPR Data (지표레이다 자료에 대한 회절지오토모그래피의 적용성 연구)

  • Kim Geun-Young;Shin Changsoo;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.1 no.1
    • /
    • pp.64-70
    • /
    • 1998
  • Diffraction tomography (DT) is a quantitative technique for high-resolution subsurface imaging. In general, the DT algorithm is used for crosswell imaging. In this study, a high-resolution GPR DT algorithm that can reconstruct images of subsurface structures in a multi-monostatic geometry is developed. The developed algorithm is applied to finite-difference data, and its criteria of application and its limits are studied. Inversion parameters (number of imaging frequencies, regularization factor, frequency range) are deduced from an isolated weak-scattering model, and the usability of the algorithm is demonstrated by applying it to models that violate the weak-scattering approximation.
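
A minimal sketch of the role the regularization factor plays in such a linearized (weak-scattering) inversion, with a toy random linear operator standing in for the actual multi-monostatic GPR diffraction-tomography kernel; only the Tikhonov machinery, not the physics, is illustrated here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Born-type linear forward problem: data = G @ object_function.
n_data, n_model = 60, 40
G = rng.standard_normal((n_data, n_model))
true_object = np.zeros(n_model)
true_object[18:22] = 1.0                                  # isolated weak scatterer
data = G @ true_object + 0.05 * rng.standard_normal(n_data)

def tikhonov_inverse(G, d, alpha):
    """Least-squares solution with Tikhonov regularization factor alpha."""
    return np.linalg.solve(G.T @ G + alpha * np.eye(G.shape[1]), G.T @ d)

for alpha in (1e-3, 1e-1, 1e1):                           # regularization sweep
    est = tikhonov_inverse(G, data, alpha)
    err = np.linalg.norm(est - true_object) / np.linalg.norm(true_object)
    print(f"alpha={alpha:g}  relative model error={err:.3f}")
```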

A Study on Assessment of Vessel Traffic Safety Management by Marine Traffic Flow Simulation (해상교통류 시뮬레이션에 의한 해상교통안전관리평가에 관한 연구)

  • Park Young-Soo;Jong Jae-Yong;Inoue Kinzo
    • Journal of the Korea Society for Simulation
    • /
    • v.11 no.4
    • /
    • pp.43-55
    • /
    • 2002
  • Vessel traffic safety management refers, in general terms, to the managerial and technical measures for improving marine traffic safety. Its main flow is: 1) traffic survey, 2) replay by marine traffic flow simulation, 3) quantitative assessment, 4) policy alternatives, 5) prediction and verification. In managing vessel traffic safety, it is most important to establish assessment models that can numerically estimate the current safety level and quantitatively predict the correlation between the measures to be taken and the improvement of safety and the reduction of ship-handling difficulties imposed on mariners. In this paper, a replay model for traffic flow simulation was built using marine traffic survey data, and the present traffic situation was replayed in the computer. An attempt was made to rate the current safety of ports and waterways by applying the Environmental Stress model. As a traffic-management countermeasure, taking the total traffic congestion in the early-morning rush hour as an example, the correlation between the traffic control rate and the reduction in ship-handling difficulties imposed on mariners was predicted quantitatively.

IMPACT ANALYSES AND TESTS OF CONCRETE OVERPACKS OF SPENT NUCLEAR FUEL STORAGE CASKS

  • Lee, Sanghoon;Cho, Sang-Soon;Jeon, Je-Eon;Kim, Ki-Young;Seo, Ki-Seog
    • Nuclear Engineering and Technology
    • /
    • v.46 no.1
    • /
    • pp.73-80
    • /
    • 2014
  • A concrete cask is an option for spent nuclear fuel interim storage. A concrete cask usually consists of a metallic canister, which confines the spent nuclear fuel assemblies, and a concrete overpack. When the overpack undergoes a missile impact, which might be caused by a tornado or an aircraft crash, it should sustain an acceptable level of structural integrity so that its radiation shielding capability and the retrievability of the canister are maintained. A missile impact against a concrete overpack produces two damage modes, local damage and global damage. In conventional approaches [1], these two damage modes are decoupled and evaluated separately: the local damage of the concrete is usually evaluated by empirical formulas, while the global damage is evaluated by finite element analysis. However, this decoupled approach may lead to a very conservative estimation of both damage modes. In this research, finite element analysis with material failure models and element erosion is applied to the evaluation of the local and global damage of concrete overpacks under high-speed missile impacts. Two types of concrete overpacks with different configurations are considered. The numerical simulation results are compared with test results, and it is shown that the finite element analysis predicts both local and global damage qualitatively well, but the quantitative accuracy of the results is highly dependent on the fine-tuning of the material and failure parameters.
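
A minimal sketch of the element-erosion idea mentioned above, assuming a simple effective-plastic-strain threshold as the failure measure; actual impact simulations use calibrated concrete damage models, and the threshold here is purely illustrative.

```python
import numpy as np

def erode_elements(eff_plastic_strain: np.ndarray, failure_strain: float) -> np.ndarray:
    """Return a boolean mask of elements to delete (erode).

    In explicit FE impact analyses, element erosion removes an element once a
    failure measure (here, effective plastic strain) exceeds a threshold; the
    local damage picture is then read from the eroded region.
    """
    return eff_plastic_strain >= failure_strain

# Hypothetical per-element strains at one time step and an assumed threshold.
strains = np.array([0.002, 0.08, 0.15, 0.01, 0.22])
print(erode_elements(strains, failure_strain=0.10))  # -> [False False True False True]
```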

An experience on the model-based evaluation of pharmacokinetic drug-drug interaction for a long half-life drug

  • Hong, Yunjung;Jeon, Sangil;Choi, Suein;Han, Sungpil;Park, Maria;Han, Seunghoon
    • The Korean Journal of Physiology and Pharmacology
    • /
    • v.25 no.6
    • /
    • pp.545-553
    • /
    • 2021
  • Fixed-dose combination development requires pharmacokinetic drug-drug interaction (DDI) studies between the active ingredients. For some drugs, pharmacokinetic properties such as a long half-life or delayed distribution make it difficult to conduct such clinical trials and to estimate the exact magnitude of the DDI. In this study, the conventional (non-compartmental analysis and bioequivalence [BE]) and model-based analyses were compared for their performance in evaluating DDI, using amlodipine as an example. Raw data without DDI, or data simulated using pharmacokinetic models, were compared with the data obtained after concomitant administration. Regardless of the methodology, all results fell within the classical BE limits. It was shown that the model-based approach may be as valid as the conventional approach and may reduce the possibility of DDI overestimation. Several advantages of the model-based approach (i.e., quantitative changes in parameters and precision of the confidence interval) were demonstrated, and possible application methods were proposed. Therefore, it is expected that model-based analysis can be appropriately utilized according to the situation and purpose.
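
A minimal sketch of the conventional bioequivalence check mentioned above, assuming hypothetical paired AUC values with and without the co-administered drug; the 90% confidence interval of the geometric mean ratio is compared with the classical 0.80-1.25 BE limits. The actual study design and data differ, so this only illustrates the acceptance criterion.

```python
import numpy as np
from scipy import stats

# Hypothetical AUC values for the same subjects, alone vs. co-administered.
auc_alone = np.array([120.0, 98.0, 134.0, 110.0, 125.0, 102.0, 117.0, 108.0])
auc_combo = np.array([126.0, 101.0, 129.0, 118.0, 131.0, 99.0, 122.0, 113.0])

# Paired analysis on the log scale, back-transformed to a geometric mean ratio.
diff = np.log(auc_combo) - np.log(auc_alone)
n = diff.size
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)          # two-sided 90% CI
gmr = np.exp(diff.mean())
ci_lo, ci_hi = np.exp(diff.mean() - t_crit * se), np.exp(diff.mean() + t_crit * se)

print(f"GMR = {gmr:.3f}, 90% CI = ({ci_lo:.3f}, {ci_hi:.3f})")
print("within BE limits" if ci_lo >= 0.80 and ci_hi <= 1.25 else "outside BE limits")
```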