• Title/Summary/Keyword: time history kernel

Search results: 7

Non-linear rheology of tension structural element under single and variable loading history Part II: Creep of steel rope - examples and parametrical study

  • Kmet, S.;Holickova, L.
    • Structural Engineering and Mechanics
    • /
    • v.18 no.5
    • /
    • pp.591-607
    • /
    • 2004
  • The practical use of the non-linear creep constitutive equations derived for variable stress levels (see the first part of this paper, Kmet 2004) is explained, and the strategy for their application is outlined using the results of one-step creep tests on a steel spiral strand rope as an example. To investigate the creep strain increments of cables, an experimental set-up was purpose-designed and a series of tests was carried out. Attention is given to the main steps of the procedure, i.e., the one-step creep tests, the definition of the loading history, the determination of the kernel functions, the selection and definition of the constitutive equation, and the comparison of the results obtained with the product and additive forms of the kernel-function approximation. For this purpose a parametric study is performed and its results are presented. The constitutive equations for non-linear creep of a cable under a variable stress history provide a powerful tool for realistic simulation of stochastic variable load histories and for prediction of the realistic time-dependent response (current deflection and stress configuration) of structures with cable elements. By suitably combining stress levels and repeating them in sequence, various load and time effects can be modelled. (An illustrative sketch of such a hereditary creep law follows this entry.)
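The following is an editorial sketch, not code from the paper: it assumes a generic single-integral hereditary creep law with a product-form kernel K(sigma, t - tau) = g(sigma) * k(t - tau) and a modified-superposition treatment of a stepwise stress history. The functions g and k and all numerical constants are hypothetical placeholders for the kernel functions that the paper identifies from one-step creep tests.

```python
# Editorial sketch (not the paper's equations): non-linear hereditary creep under a
# piecewise-constant stress history, with a product-form kernel K(sigma, s) = g(sigma)*k(s).

def g(sigma, a=1.0e-6, n=1.5):
    """Hypothetical non-linear stress function (placeholder constants a, n)."""
    return a * sigma**n

def k(s, m=0.3):
    """Hypothetical power-law time kernel, s = t - tau in hours."""
    return s**m

def creep_strain(step_times, step_stresses, t_eval):
    """Creep strain at time t_eval for a stress history that jumps to
    step_stresses[i] at step_times[i] (modified superposition of increments)."""
    eps, sigma_prev = 0.0, 0.0
    for t_i, sigma_i in zip(step_times, step_stresses):
        if t_eval > t_i:
            eps += (g(sigma_i) - g(sigma_prev)) * k(t_eval - t_i)
            sigma_prev = sigma_i
    return eps

# Example: 300 MPa applied at t = 0 h, increased to 400 MPa at t = 100 h,
# creep strain evaluated at t = 500 h (arbitrary placeholder units).
print(creep_strain([0.0, 100.0], [300.0, 400.0], 500.0))
```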

Criteria in 'Landscape and Memory' as Sense of Place for the Sustainable Development of Korean Mountainous Landscape

  • Jino Kwon;Shin, Joon-Hwan;Park, Myoung-Sub
    • The Korean Journal of Quaternary Research
    • /
    • v.17 no.2
    • /
    • pp.85-99
    • /
    • 2003
  • Because of the major landscape changes of the last half-century caused by war and rapid urbanization, the traditional character of the Korean landscape has been weakened, and a reconsideration is needed to improve the landscape for the future. To review these relationships, the importance of a comprehensive understanding of nature has been suggested. A new concept therefore needs to be identified, based on the 'socio-cultural influence of landscape' and the 'sense of place' related to people's previous experience, and more practical definitions and criteria are required to reveal this relationship. Among the terms suggested to describe sense of place, such as 'home', 'place identity', 'place-based meaning' and 'settlement identity', 'home' is selected to represent our surrounding landscape. For a more practical classification of the home landscape, additional terms are suggested and defined on the basis of the relationship between human beings and nature and the relationship between memory derived from previous experience and values shared within the community. The additional terms, which are the most important for the role of landscape character in relation to humans, are: ⅰ) Personal Landscape: the landscape of an individual, derived from previous personal experience; it carries a distinguishable character for a given person and is emotional and flexible depending on circumstances. ⅱ) Ordinary Landscape: the landscape of the 'common interest' among members of a community, acceptable as a surrounding for everyday life; it produces the richness and variety of the landscape. ⅲ) Kernel Landscape: the landscape of the 'common ground' acceptable to the majority of the community; it provides variety and stability over periods of time and can strongly represent community attitudes toward nature. ⅳ) Prototype Landscape: the landscape as the 'common denominator' of the whole community from past to present and towards the future, encompassing all the kernel landscapes throughout history. It provides a sense of place and balances the homogeneity of character across communities; parts of it can be shared throughout history to shape an overall sense of place, and it can also reflect short-term fashions. For a prototype landscape to reveal sense of place, two points should be underlined at the outset. First, the relationship between humans and nature should be understood on the basis of the given character of the surroundings. Second, recurring landscape elements that have been sustained through history can lead to sense of place, and the mutual influences between nature and humans should be reviewed.


Design Sensitivity Analysis of Coupled MD-Continuum Systems Using Bridging Scale Approach (브리징 스케일 기법을 이용한 분자동역학-연속체 연성 시스템의 설계민감도 해석)

  • Cha, Song-Hyun;Ha, Seung-Hyun;Cho, Seonho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.27 no.3
    • /
    • pp.137-145
    • /
    • 2014
  • We present a design sensitivity analysis (DSA) method for multiscale problems based on bridging scale decomposition. In this paper, we utilize the bridging scale method for the coupled system analysis. Since the analysis of a full MD system requires an enormous amount of computation, a coupled MD-level and continuum-level simulation is usually preferred. The information exchange between the MD and continuum levels takes place at the MD-continuum boundary. In the bridging scale method, a generalized Langevin equation (GLE) is introduced for the reduced MD system, and the GLE force, expressed through a time history kernel, is applied to the boundary atoms of the MD system. The MD and continuum simulations can therefore be analyzed separately, which accelerates the computation. Once the simulation of the coupled problem is available, the need for DSA naturally arises for the optimization of the macro-scale design, in which the macro-scale performance of the system is maximized while the micro-scale effects are taken into account. Finite difference sensitivities are impractical for gradient-based optimization of large-scale problems because of their computational cost, whereas the analytical sensitivity of the coupled system is always accurate. In this study, we derive the analytical design sensitivity and verify its accuracy and applicability to the design optimization of the coupled system. (A sketch of the time-history-kernel boundary force follows this entry.)
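The following is an editorial sketch of the impedance force referred to above, not code from the paper: in bridging scale coupling, the boundary atoms feel a force obtained by convolving a time history kernel Theta(t) with the history of the fine-scale boundary displacement (MD value minus the coarse-scale interpolation). The kernel values, time step and displacement history below are hypothetical placeholders; in practice Theta(t) is derived from the lattice dynamics of the eliminated degrees of freedom.

```python
import numpy as np

# Editorial sketch (not the paper's code): discrete form of the GLE boundary force
#     f(t_n) ~ sum_k Theta(t_n - t_k) * q(t_k) * dt
# where q is the fine-scale part of the boundary-atom displacement and Theta is
# the time history kernel.  Kernel, time step and history below are assumed values.

def time_history_force(theta, q_history, dt):
    """Convolve the sampled kernel Theta with the displacement history (1-D case)."""
    n = len(q_history)
    return sum(theta[n - 1 - k] * q_history[k] * dt for k in range(n))

# toy usage
dt = 1.0e-3
theta = np.exp(-np.arange(200) * dt / 0.05)                        # decaying kernel (assumed)
q_hist = 1.0e-3 * np.sin(2.0 * np.pi * np.arange(200) * dt / 0.1)  # fine-scale history (assumed)
print(time_history_force(theta, q_hist, dt))
```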

Importance measure analysis of uncertainty parameters in bridge probabilistic seismic demands

  • Song, Shuai;Wu, Yuan H.;Wang, Shuai;Lei, Hong G.
    • Earthquakes and Structures
    • /
    • v.22 no.2
    • /
    • pp.157-168
    • /
    • 2022
  • A moment-independent importance measure analysis approach was introduced to quantify the effects of structural uncertainty parameters on the probabilistic seismic demands of simply supported girder bridges. Based on the probability distributions of the main uncertainty parameters in the bridges, conditional and unconditional bridge samples were constructed with Monte Carlo sampling and analyzed on the OpenSees platform with a series of real seismic ground motion records. Conditional and unconditional probability density functions were developed using kernel density estimation applied to the results of nonlinear time history analyses of the bridge samples. Moment-independent importance measures of the uncertainty parameters were then obtained by numerical integration of the conditional and unconditional probability density functions, and the parameters were ranked in descending order of importance. Unlike the Tornado diagram approach, this importance measure approach simultaneously accounts for the effects of the uncertainty parameters on the whole probability distributions of the bridge seismic demands and for the interactions among the uncertainty parameters. Results show that the interactions of the uncertainty parameters have a significant impact on the seismic demands of the components and, in some cases, change the most significant parameters for the piers, bearings and abutments. (An illustrative sketch of this estimation procedure follows this entry.)
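As an editorial illustration of the procedure described above (not the paper's implementation), the sketch below estimates a moment-independent, Borgonovo-type importance measure from conditional and unconditional samples using Gaussian kernel density estimation. The synthetic lognormal "demand" samples stand in for the nonlinear time history results.

```python
import numpy as np
from scipy import stats

# Editorial sketch: moment-independent importance via the shift between the
# unconditional demand density f_Y and conditional densities f_{Y|Xi}, both
# estimated with Gaussian KDE.  The sample data below are synthetic placeholders.

def delta_importance(y_uncond, y_cond_groups, grid_size=512):
    """delta_i ~ 0.5 * mean over conditioning values of integral |f_Y - f_{Y|Xi}| dy."""
    lo = min(y_uncond.min(), min(g.min() for g in y_cond_groups))
    hi = max(y_uncond.max(), max(g.max() for g in y_cond_groups))
    y_grid = np.linspace(lo, hi, grid_size)
    dy = y_grid[1] - y_grid[0]
    f_uncond = stats.gaussian_kde(y_uncond)(y_grid)
    shifts = [np.sum(np.abs(f_uncond - stats.gaussian_kde(g)(y_grid))) * dy
              for g in y_cond_groups]
    return 0.5 * float(np.mean(shifts))

# toy usage: the conditioned parameter shifts the median of a lognormal "demand"
rng = np.random.default_rng(0)
y_all = rng.lognormal(0.0, 0.4, 5000)
y_cond = [rng.lognormal(mu, 0.4, 1000) for mu in (-0.2, -0.1, 0.0, 0.1, 0.2)]
print(delta_importance(y_all, y_cond))
```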

Brand Imaging a City for Tourism (관광 콘텐츠 개발을 위한 도시 브랜드화)

  • Lim, Seong-Taek
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.3
    • /
    • pp.127-137
    • /
    • 2008
  • The major purpose of establishing a city brand is to give citizens pride and to enhance the city's value by improving its image. As modern society demands an aggressive and active attitude in every field, the city, as a place of human residence, has to keep changing for the sake of human life and prosperity. It is true that a brand is built up through politics, the economy, society, culture and art, but the tourism effect and the creation of profit should be the most important considerations. In the actual circumstances of our country, where the tourism deficit keeps increasing, the branding of cities attracts ever more attention. History and time are essential elements in building a city. It may be impossible to maintain a consistent direction and meaning over the long term, but in the end the kernel of tourism content is that ideas and development are concentrated around a consistent direction and meaning. To address this, problems of strategy and direction were investigated through an analysis of foreign cities, and an understanding of the future role of the city in the 21st century provides a basis for rebirth as an international tourist city. Once a city brand has been established on the basis of continuous management, it is most important to build a city with a strong image.

History of the Photon Beam Dose Calculation Algorithm in Radiation Treatment Planning System

  • Kim, Dong Wook;Park, Kwangwoo;Kim, Hojin;Kim, Jinsung
    • Progress in Medical Physics
    • /
    • v.31 no.3
    • /
    • pp.54-62
    • /
    • 2020
  • Dose calculation algorithms play an important role in radiation therapy and are the basis for optimizing treatment plans, an important feature in the development of complex treatment technologies such as intensity-modulated radiation therapy. We reviewed the past and current status of the dose calculation algorithms used in treatment planning systems for radiation therapy. Dose calculation algorithms can be broadly classified into three main groups based on the mechanisms used: (1) factor-based, (2) model-based, and (3) principle-based. Factor-based algorithms are a type of empirical dose calculation that interpolates or extrapolates the dose from a set of basic measurements. Model-based algorithms, represented by the pencil beam convolution, analytical anisotropic, and collapsed cone convolution algorithms, use a simplified physical process by convolving the primary photon energy fluence with a kernel. Model-based algorithms, which allow for side scattering when beams are transmitted through heterogeneous media, provide more precise dose calculation results than factor-based (correction-based) algorithms. Principle-based algorithms, represented by Monte Carlo dose calculation, simulate all the real physical processes involving beam particles during transport; the dose calculation is therefore accurate but time consuming. Over approximately 70 years, through the development of dose calculation algorithms and computing technology, the accuracy of dose calculation has come close to our clinical needs. Next-generation dose calculation algorithms are expected to include biologically equivalent or biologically effective doses, and doctors expect to be able to use them to improve the quality of treatment in the near future. (A sketch of the fluence-kernel convolution follows this entry.)
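The following is an editorial toy illustration of the model-based idea mentioned above, not a clinical algorithm: dose is obtained by convolving the primary photon energy fluence with a dose-deposition kernel. The fluence map and the kernel shape below are synthetic placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

# Editorial sketch: 2D convolution of a primary energy fluence map with a
# point-spread dose kernel, the core operation of model-based dose algorithms.
# All values below are synthetic placeholders, not clinical beam data.

def dose_from_fluence(fluence, kernel):
    """Convolve the fluence map with the dose-deposition kernel."""
    return fftconvolve(fluence, kernel, mode="same")

# 10x10 cm field on a 1 mm grid, flat fluence inside the field (assumed)
fluence = np.zeros((200, 200))
fluence[50:150, 50:150] = 1.0

# hypothetical radially symmetric kernel with a short primary core and a long scatter tail
x = np.arange(-50, 51)
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)
kernel = np.exp(-r / 2.0) + 0.05 * np.exp(-r / 15.0)
kernel /= kernel.sum()

dose = dose_from_fluence(fluence, kernel)
print(dose.shape, float(dose.max()))
```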

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology
    • /
    • v.39 no.5
    • /
    • pp.603-616
    • /
    • 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now knew everything about particles and coatings and should move on to other problems. This was at the Dragon fuel performance information meeting in London in 1973: how wrong even a genius can be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s and finally in the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards from those of the seventies: where an initial free heavy metal fraction of 9×10⁻⁴ was typical for early AVR carbide fuel and 3×10⁻⁴ was acceptable for oxide fuel in THTR, we now insist on values more than an order of magnitude lower. Half a percent particle failure at end-of-irradiation, another old standard, is no longer acceptable today, even for the most severe accidents. While legislation and licensing have not changed, one of the reasons we insist on these improvements is the present preference for passive systems rather than the active controls of earlier times. With renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering the aspects of design, traditional and new materials, manufacturing technologies, quality control/quality assurance, irradiation and accident performance, modeling and performance prediction, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system. Results obtained so far indicate that the same particle can be used for steam cycle applications with 700-750°C helium coolant exit temperatures, for gas turbine applications at 850-900°C, and for process heat/hydrogen generation applications with 950°C outlet temperatures. There is a clear set of standards for modern high quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation/accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with block-type fuel elements for the prismatic design and spherical fuel elements for the pebble-bed design, there is near worldwide agreement on high quality fuel: a 500 μm diameter UO2 kernel of 10% enrichment is surrounded by a 100 μm thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high quality silicon carbide layer of 35 μm thickness and theoretical density, and another, outer pyrocarbon layer (the layer structure is summarized in the sketch after this entry). Good performance has been demonstrated under both operational and accident conditions, i.e., up to 10% FIMA and to a maximum of 1600°C afterwards, and it is this wide-ranging demonstration experience that makes the particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g. SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. A renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that exceed 1600°C for a short period of time. This work should proceed at both the national and international level.
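As an editorial illustration only, the reference particle geometry quoted in the abstract can be captured in a small data record. The kernel diameter, enrichment, buffer thickness and SiC thickness are the values stated above; the pyrocarbon thicknesses and the helper method are assumptions added for the example.

```python
from dataclasses import dataclass

# Editorial sketch: record of the reference coated-particle (TRISO) geometry quoted
# in the abstract.  PyC thicknesses are assumed values, not taken from the paper.

@dataclass
class CoatedParticleSpec:
    kernel_diameter_um: float = 500.0   # UO2 kernel diameter (stated)
    enrichment_pct: float = 10.0        # U-235 enrichment (stated)
    buffer_um: float = 100.0            # sacrificial buffer layer (stated)
    inner_pyc_um: float = 40.0          # dense inner pyrocarbon (assumed)
    sic_um: float = 35.0                # high-quality SiC layer (stated)
    outer_pyc_um: float = 40.0          # outer pyrocarbon (assumed)

    def outer_diameter_um(self) -> float:
        """Overall particle diameter implied by the layer thicknesses."""
        layers = (self.buffer_um + self.inner_pyc_um
                  + self.sic_um + self.outer_pyc_um)
        return self.kernel_diameter_um + 2.0 * layers

print(CoatedParticleSpec().outer_diameter_um())  # ~930 um with the assumed PyC values
```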