• Title/Summary/Keyword: single dynamic model


Numerical Analysis of Nuclear-Power Plant Subjected to an Aircraft Impact using Parallel Processor (병렬프로세서를 이용한 원전 격납건물의 항공기 충돌해석)

  • Song, Yoo-Seob;Shin, Sang-Shup;Jung, Dong-Ho;Park, Tae-Hyo
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.24 no.6
    • /
    • pp.715-722
    • /
    • 2011
  • In this paper, the behavior of a nuclear power plant subjected to an aircraft impact is analyzed using parallel processing. In previous studies of aircraft impact on nuclear power plants, the impact load was applied over a local area using Riera's impact load-time history function, and the target structures were restricted to simple RC (reinforced concrete) walls or RC buildings. In this paper, however, the aircraft impact analysis is performed using a real aircraft model similar to the Boeing 767 and a fictitious nuclear power plant similar to a real structure, and the aircraft model is verified by comparing the load history generated by crashing it against a rigid target with the history obtained from Riera's function, which is permitted by the impact evaluation guide NEI 07-13 (2009). In general, hypervelocity impact analysis requires excessive computation time because of the contact between two or more adjacent bodies and the high nonlinearity caused by large dynamic deformation, so a single CPU alone cannot handle these problems effectively. Therefore, a message-passing MIMD parallel analysis is performed on a self-constructed Linux cluster to improve computational efficiency, and to evaluate the parallel performance, four cases are analyzed and discussed: plain concrete, reinforced concrete, reinforced concrete with a bonded containment liner plate, and steel-plate concrete.
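
The Riera approach referenced above idealizes the aircraft as a deformable missile crushing against a rigid target, so its load-time history can be integrated numerically. Below is a minimal sketch under simplifying assumptions (uniform mass and crushing-force distributions, made-up aircraft parameters rather than the paper's Boeing 767 model):

```python
# Minimal sketch of the Riera rigid-target load-time history (hypothetical
# aircraft data; uniform mu(x) and Pc(x) assumed for illustration only).
import numpy as np

def riera_load_history(length, total_mass, crush_force, v0, dt=1e-4):
    """Integrate the Riera model: F(t) = Pc(x) + mu(x) * v(t)**2."""
    mu = total_mass / length          # mass per unit length [kg/m]
    x, v, t = 0.0, v0, 0.0            # crushed length, velocity, time
    times, forces = [], []
    while x < length and v > 0.0:
        m_rem = total_mass - mu * x   # mass of the uncrushed (rigid) portion
        F = crush_force + mu * v**2   # reaction force on the target
        times.append(t); forces.append(F)
        v -= crush_force / m_rem * dt # deceleration of the uncrushed portion
        x += v * dt
        t += dt
    return np.array(times), np.array(forces)

# Example with made-up numbers: 48 m fuselage, 180 t, 30 MN crushing force, 150 m/s
t, F = riera_load_history(48.0, 1.8e5, 3.0e7, 150.0)
print(f"peak load ~ {F.max()/1e6:.1f} MN over {t[-1]:.2f} s")
```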

Evaluation of Impact Factor in Composite Cable-Stayed Bridges under Reliability-based Live Load Model (신뢰도 기반 활하중모델에 의한 강합성 사장교의 충격계수 평가)

  • Park, Jae Bong;Park, Yong Myung;Kim, Dong Hyun;Lee, Jong Han
    • Journal of Korean Society of Steel Construction
    • /
    • v.25 no.4
    • /
    • pp.335-346
    • /
    • 2013
  • AASHTO LRFD and the Korean Bridge Design Code (Limit State Design) specify that the truck and lane loads determined from a reliability-based live load model be considered simultaneously, and that impact be applied to the truck load but not to the lane load. In this paper, vehicle-bridge interaction analyses under moving truck and lane loads were performed to estimate the impact factors of the cables and girders of selected multi-cable-stayed composite bridges with 230 m, 400 m, and 540 m main spans. A 6-d.o.f. vehicle was used for the truck load, and a series of single-axle vehicles was applied to simulate the equivalent lane load. The effect of the damping ratio on the impact factor was estimated, and the parameters essential to the impact factor, i.e., road surface roughness and vehicle speed, were then considered. The road surface roughness was randomly generated based on ISO 8608 and was applied only to the truck load in the vehicle-bridge interaction analysis. The impact factors evaluated from the dynamic interaction analysis were also compared with those obtained by the influence line method currently used in design practice to estimate impact factors in cable-stayed bridges.
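
ISO 8608, mentioned above, characterizes road roughness by a displacement power spectral density from which random profiles can be synthesized. A minimal sketch follows, assuming a class-A road PSD value and illustrative lengths; the paper's actual roughness parameters are not stated in the abstract:

```python
# Minimal sketch of random road-profile generation from the ISO 8608 PSD
# Gd(n) = Gd(n0) * (n/n0)**-w, with n0 = 0.1 cycles/m (class-A value assumed).
import numpy as np

def iso8608_profile(length=400.0, dx=0.1, Gd_n0=16e-6, n0=0.1, w=2.0, seed=0):
    """Return (x, z): a road elevation profile sampled from the ISO 8608 PSD."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length, dx)
    n = np.arange(0.01, 1.0 / (2 * dx), 1.0 / length)   # spatial frequencies [cycles/m]
    dn = 1.0 / length
    Gd = Gd_n0 * (n / n0) ** (-w)                        # displacement PSD [m^3]
    phases = rng.uniform(0, 2 * np.pi, n.size)
    amps = np.sqrt(2.0 * Gd * dn)
    z = sum(a * np.cos(2 * np.pi * ni * x + p) for a, ni, p in zip(amps, n, phases))
    return x, z

x, z = iso8608_profile()
print(f"RMS roughness ~ {z.std() * 1000:.2f} mm")
```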

Development of Small-sized Model of Ray-type Underwater Glider and Performance Test (Ray형 수중글라이더 소형 축소모델 개발 및 성능시험)

  • Choi, Hyeung-sik;Lee, Sung-wook;Kang, Hyeon-seok;Duc, Nguyen Ngoc;Kim, Seo-kang;Jeong, Seong-hoon;Chu, Peter C.;Kim, Joon-young
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.6
    • /
    • pp.537-543
    • /
    • 2017
  • An underwater glider is a long-endurance underwater robot developed for continuous oceanographic observation and exploration. A torpedo-type underwater glider is not efficient in terms of maneuverability because it uses a single buoyancy engine and motion controller to obtain propulsive forces and moments. This paper introduces a ray-type underwater glider (RUG) with dual buoyancy engines, which improves buoyancy and motion control performance compared with the torpedo type. The fluid resistance characteristics for gliding motion were identified through Computational Fluid Dynamics (CFD) analysis in the form of a static pitch-drift test. Based on the calculated hydrodynamic coefficients, dynamic simulations compared and analyzed the motion performance of the torpedo-type and ray-type gliders while controlling the same buoyancy-engine volume. A small-sized model of the RUG was developed to perform fundamental performance tests.
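
For context on buoyancy-driven gliding, the steady, unpowered glide condition balances net buoyancy against lift and drag. The sketch below uses generic glider relations with made-up coefficients, not the RUG's identified hydrodynamic coefficients:

```python
# Minimal sketch of a steady-glide equilibrium for a buoyancy-driven glider
# (generic relations; net buoyancy, CL, CD, and reference area are placeholders).
import numpy as np

def steady_glide(net_buoyancy, CL, CD, rho=1025.0, S=0.12):
    """Return glide-path angle [deg] and speed [m/s] for a steady, unpowered glide."""
    xi = np.arctan2(CD, CL)                                 # tan(xi) = D/L = CD/CL
    V = np.sqrt(2.0 * net_buoyancy * np.cos(xi) / (rho * S * CL))
    return np.degrees(xi), V

angle, speed = steady_glide(net_buoyancy=5.0, CL=0.4, CD=0.05)
print(f"glide angle ~ {angle:.1f} deg, speed ~ {speed:.2f} m/s")
```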

Experimental Study on the Behavior Characteristics of Single Steel Pile in Sand Subjected to Lateral Loadings (사질토 지반에서 수평하중에 따른 단일강관말뚝의 거동특성에 관한 실험적 연구)

  • Kim, Daehyeon;Lee, Tae-Gwang;Kim, Sun-Hak
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.5
    • /
    • pp.3548-3556
    • /
    • 2015
  • To design reliable and economically feasible foundations for offshore wind power generators, engineers must consider not only the working load under extreme conditions but also the precise behavior of the foundation under continuous dynamic loads. To investigate this, a miniature model pile was fabricated, and a calibration chamber with a 500 mm high sand bed was prepared to perform two types of lateral load tests: static lateral load tests and cyclic lateral load tests. In the static lateral load tests, a larger length-to-diameter ratio of the model pile led to larger load displacement. In the cyclic lateral load tests, however, the horizontal displacement caused by each repeated lateral load decreased as the number of loading cycles increased. In both the static and cyclic tests, the rate of increase in the ultimate lateral load capacity of the pile was observed to decrease. It was also found that the higher the relative density of the ground, the lower the ultimate lateral load capacity under repeated horizontal loading.

A New Efficient Private Key Reissuing Model for Identity-based Encryption Schemes Including Dynamic Information (동적 ID 정보가 포함된 신원기반 암호시스템에서 효율적인 키 재발급 모델)

  • Kim, Dong-Hyun;Kim, Sang-Jin;Koo, Bon-Seok;Ryu, Kwon-Ho;Oh, Hee-Kuck
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.15 no.2
    • /
    • pp.23-36
    • /
    • 2005
  • The main obstacle hindering the wide deployment of identity-based cryptosystems is that the entity responsible for creating private keys has too much power; as a result, private keys are no longer private. One obvious solution to this problem is to apply the threshold technique. However, this increases the authentication computation and communication cost during the key issuing phase. In this paper, we propose a new efficient model for issuing multiple private keys in identity-based encryption schemes based on the Weil pairing that also alleviates the key escrow problem. In our system, the private key of a user is divided into two components, the KGK (Key Description Key) and the KUD (Key Usage Descriptor), which are issued separately by different parties. The KGK is issued in a threshold manner by the KIC (Key Issuing Center), whereas the KUD is issued by a single authority called the KUM (Key Usage Manager). Changing the KUD results in a different private key; as a result, a user can efficiently obtain a new private key by interacting with the KUM. We can also adapt Gentry's time-slot-based private key revocation approach to our scheme more efficiently than other schemes. We show the security of the system and demonstrate its efficiency by comparing it with existing systems.
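
The threshold issuing mentioned above is typically realized with a secret-sharing scheme. The sketch below illustrates only that generic idea, using Shamir secret sharing over a prime field; it is not the paper's Weil-pairing-based KGK/KUD construction:

```python
# Minimal sketch of threshold key issuing via Shamir secret sharing
# (illustrative stand-in; field modulus and parameters are assumptions).
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def share(secret, n, t):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = share(key, n=5, t=3)
assert reconstruct(shares[:3]) == key   # any 3 of 5 issuing parties suffice
```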

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in corporate default prediction vary over time: comparison of the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) reached a similar conclusion about the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series algorithm that reflects dynamic change. Focusing on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008); the resulting model shows patterns similar to the training data and excellent prediction power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, the corporate default prediction models trained over these nine years are evaluated and compared using the test data (2009), which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on these three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies, and multivariate discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data suffer from nonlinear variables, multi-collinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, finally, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is also more effective in terms of prediction power. Through the Fourth Industrial Revolution, the current government and other overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research in the financial industry is still insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists who begin studies combining financial data and deep learning time series algorithms.
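
As a rough illustration of the kind of deep learning time series model the study evaluates, here is a minimal LSTM default classifier sketch (PyTorch, with hypothetical tensor shapes and hyperparameters; the paper's variable set and tuning are not reproduced here):

```python
# Minimal sketch of an LSTM default-prediction classifier (assumed dimensions:
# 7 yearly observations of 20 financial ratios per firm, like a 2000-2006 split).
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # default-probability logit

    def forward(self, x):                        # x: (batch, years, n_features)
        _, (h, _) = self.lstm(x)                 # final hidden state summarizes the firm's history
        return self.head(h[-1]).squeeze(-1)

model = DefaultLSTM(n_features=20)
x = torch.randn(64, 7, 20)                       # 64 firms, 7 years, 20 ratios
y = torch.randint(0, 2, (64,)).float()           # 1 = defaulted in the following year
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
print(f"initial loss: {loss.item():.3f}")
```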

A Study of the Influence of Short-Term Air-Sea Interaction on Precipitation over the Korean Peninsula Using Atmosphere-Ocean Coupled Model (기상-해양 접합모델을 이용한 단기간 대기-해양 상호작용이 한반도 강수에 미치는 영향 연구)

  • Han, Yong-Jae;Lee, Ho-Jae;Kim, Jin-Woo;Koo, Ja-Yong;Lee, Youn-Gyoun
    • Journal of the Korean earth science society
    • /
    • v.40 no.6
    • /
    • pp.584-598
    • /
    • 2019
  • In this study, the effects of air-sea interaction on precipitation over the Seoul-Gyeonggi region of the Korean Peninsula from 28 to 30 August 2018 were analyzed using a regional atmosphere-ocean coupled model (RCM). In the RCM, WRF (Weather Research and Forecasting) was used as the atmosphere model and ROMS (Regional Oceanic Modeling System) as the ocean model, whereas in the regional single atmosphere model (RSM), only the WRF model was used; the sea surface temperature from the ECMWF ERA-Interim reanalysis was used as the lower boundary data. Compared with the observational data, the RCM, which accounts for the air-sea interaction, produced spatial correlations of 0.6 and 0.84 for precipitation in the Seoul-Gyeonggi area and the Yellow Sea surface temperature, respectively, which were higher than those of the RSM, whereas the mean bias errors (MBE) were -2.32 and -0.62, respectively, which were lower than those of the RSM. The air-sea interaction effect, analyzed through equivalent potential temperature, SST, and dynamic convergence fields, induced changes in the SST of the Yellow Sea, and the changed SST in turn produced differences in thermal instability and kinematic convergence in the lower atmosphere. The thermal instability and convergence over the Seoul-Gyeonggi region induced upward motion, and consequently the precipitation in the RCM reproduced the observed spatial distribution better than that in the RSM. Although various case studies and climatic analyses are needed to fully understand the effects of complex air-sea interaction, the results of this study provide evidence of the importance of the air-sea interaction in predicting precipitation in the Seoul-Gyeonggi region.
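
The two verification metrics quoted above, spatial correlation and mean bias error (MBE), are standard gridded-field comparisons. A minimal sketch with synthetic arrays (illustrative shapes only) follows:

```python
# Minimal sketch of spatial correlation and mean bias error for model vs. observed
# gridded fields (generic formulas; the arrays below are synthetic placeholders).
import numpy as np

def spatial_correlation(model, obs):
    """Pearson correlation over all grid points (NaNs, e.g. masked points, ignored)."""
    m, o = model.ravel(), obs.ravel()
    ok = ~(np.isnan(m) | np.isnan(o))
    return np.corrcoef(m[ok], o[ok])[0, 1]

def mean_bias_error(model, obs):
    """Mean of (model - observation) over valid grid points."""
    return np.nanmean(model - obs)

obs = np.random.rand(60, 80) * 30            # e.g. observed precipitation [mm]
model = obs + np.random.randn(60, 80) * 5    # a synthetic "model" field
print(spatial_correlation(model, obs), mean_bias_error(model, obs))
```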

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.101-124
    • /
    • 2018
  • Recently, most technologies have developed in various forms, either through the advancement of a single technology or through interaction with other technologies. In particular, such technologies show the characteristic of convergence arising from the interaction between two or more techniques, and efforts to respond to technological change in advance by forecasting the promising convergence technologies that will emerge in the near future are continuously increasing. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology takes on the characteristics of the various technologies involved in its generation, so forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been made in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted on the themes of discovering new convergence technologies and analyzing their trends, and information about new convergence technologies is therefore more abundant than in the past. However, existing methods of analyzing convergence technology have some limitations. First, most studies analyze data through predefined technology classifications. Recently emerging technologies tend to be convergent and thus consist of technologies from various fields, so new convergence technologies may not belong to the defined classification; the existing methods therefore do not properly reflect the dynamic change of the convergence phenomenon. Second, to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates, and it can grow into an independent field or disappear rapidly depending on changes in those technologies. In existing analyses, the growth potential of a convergence technology is judged through traditional, general-purpose indicators, but these indicators do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, relatively little work has addressed forecasting promising convergence technologies, so it is difficult to find a method for evaluating the accuracy of such models. To activate this field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content, which reflects the dynamic change of the actual technology market rather than an existing fixed classification standard. We then identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We also devise a centrality indicator, PGC (potential growth centrality), to forecast the future growth of a technology by utilizing the centrality information of each technology; it reflects the convergence characteristics of each technology according to its maturity and its interdependence with other technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. The results confirm that the forecasting model based on the proposed centrality indicator achieves a maximum forecast accuracy about 2.88 times higher than that of forecasting models based on currently used network indicators.
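
The overall pipeline described above (topic modeling on patent text, a topic-to-topic network from per-document topic weights, and a centrality score per topic) can be sketched as follows. The PGC indicator itself is not specified in the abstract, so plain degree centrality is used here as a stand-in, and the documents are toy examples:

```python
# Minimal sketch: topic modeling -> topic co-occurrence network -> centrality.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import networkx as nx

docs = ["lithium battery electrode coating", "battery management embedded software",
        "camera image sensor coating", "embedded software for image processing"]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
theta = lda.transform(X)                  # document-topic weights

# Topics that co-occur strongly within the same documents are linked.
co = theta.T @ theta
G = nx.Graph()
for i in range(co.shape[0]):
    for j in range(i + 1, co.shape[1]):
        if co[i, j] > 0.05:
            G.add_edge(i, j, weight=float(co[i, j]))

centrality = nx.degree_centrality(G)      # stand-in for the paper's PGC indicator
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```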

A Study on Developing Sensibility Model for Visual Display (시각 디스플레이에서의 감성 모형 개발 -움직임과 색을 중심으로-)

  • 임은영;조경자;한광희
    • Korean Journal of Cognitive Science
    • /
    • v.15 no.2
    • /
    • pp.1-15
    • /
    • 2004
  • A structure of sensibility from motion was developed to understand the relationship between sensibilities and physical factors, so that it can be applied to dynamic visual displays. Seventy adjectives were collected by assessing their adequacy for expressing sensibilities from motion and by reporting the sensibilities recalled from dynamic displays in achromatic color. Various motion displays, each containing a single moving dot, were rated according to the degree of sensibility corresponding to each adjective on the basis of the Semantic Differential (SD) method. The assessment results were analyzed by factor analysis to reduce the 70 words to 19 fundamental motion sensibilities. The Multidimensional Scaling (MDS) technique then constructed a motion sensibility space in which the 19 sensibilities were distributed along two dimensions, active-passive and bright-dark. Motion types systematically varied in kinematic factors were placed on this two-dimensional space of motion sensibility in order to analyze the important variables affecting sensibility from motion. The patterns of placement indicate that speed, together with the cycle and amplitude of the trajectories, tends to partially determine sensibility. Although both color and motion affected sensibility along these dimensions, their combination appeared to give each a dominant effect on a particular sensibility dimension: motion on active-passive and color on bright-dark.
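
The MDS step described above embeds the sensibility adjectives into a low-dimensional space from their rating dissimilarities. A minimal sketch with synthetic data (not the study's actual SD ratings) is shown below:

```python
# Minimal sketch of the MDS step: 19 adjectives embedded in 2-D from dissimilarities.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# rows: 19 sensibility adjectives, columns: mean SD-scale ratings over motion displays
ratings = rng.normal(size=(19, 24))
dissim = np.linalg.norm(ratings[:, None, :] - ratings[None, :, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)   # 2-D coordinates, e.g. active-passive / bright-dark
print(coords.shape)                  # (19, 2)
```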


Glass Dissolution Rates From MCC-1 and Flow-Through Tests

  • Jeong, Seung-Young
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2004.06a
    • /
    • pp.257-258
    • /
    • 2004
  • The dose from radionuclides released from high-level radioactive waste (HLW) glasses as they corrode must be taken into account when assessing the performance of a disposal system. In the performance assessment (PA) calculations conducted for the proposed Yucca Mountain, Nevada, disposal system, the release of radionuclides is conservatively assumed to occur at the same rate the glass matrix dissolves. A simple model was developed to calculate the glass dissolution rate of HLW glasses in these PA calculations [1]. For the PA calculations conducted for Site Recommendation, it was necessary to identify ranges of parameter values that bounded the dissolution rates of the wide range of HLW glass compositions that will be disposed. The values and ranges of the model parameters for the pH and temperature dependencies were extracted from the results of SPFT, static leach, and Soxhlet tests available in the literature, and static leach tests were conducted with a range of glass compositions to measure values for the glass composition parameter. The glass dissolution rate depends on temperature, pH, and the compositions of the glass and solution. The dissolution rate is calculated using Eq. 1: $rate = k_{0}\,10^{\eta\,\mathrm{pH}}\cdot e^{-E_{a}/RT}\cdot(1-Q/K)+k_{long}$, where $k_{0}$, $\eta$, and $E_{a}$ are the parameters for glass composition, pH dependence, and temperature dependence, respectively, and R is the gas constant. The term $(1-Q/K)$ is the affinity term, where Q is the ion activity product of the solution and K is the pseudo-equilibrium constant for the glass. Values of the parameters $k_{0}$, $\eta$, and $E_{a}$ are determined under test conditions where the value of Q is maintained near zero, so that the value of the affinity term remains near 1. The dissolution rate under conditions in which the value of the affinity term is near 1 is referred to as the forward rate; this is the highest dissolution rate that can occur at a particular pH and temperature. The value of the parameter K is determined from experiments in which the value of the ion activity product approaches the value of K, which results in a decrease in the value of the affinity term and the dissolution rate. The highly dilute solutions required to measure the forward rate and extract values for $k_{0}$, $\eta$, and $E_{a}$ can be maintained by conducting dynamic tests in which the test solution is removed from the reaction cell and replaced with fresh solution. In the single-pass flow-through (SPFT) test method, this is done by continuously pumping the test solution through the reaction cell. Alternatively, static tests can be conducted with a solution volume sufficient that the concentrations of dissolved glass components do not increase significantly during the test. Both the SPFT and static tests can be conducted for a wide range of pH values and temperatures, and both have shortcomings. The SPFT test requires analysis of several solutions (typically 6-10) at each of several flow rates to determine the glass dissolution rate at each pH and temperature; as will be shown, the rate measured in an SPFT test depends on the solution flow rate.
The solutions in static tests will eventually become concentrated enough to affect the dissolution rate. In both the SPFT and static test methods, a compromise is required between the need to minimize the effects of dissolved components on the dissolution rate and the need to attain solution concentrations that are high enough to analyze. In this paper, we compare the results of static leach tests and SPFT tests conducted with a simple 5-component glass to confirm the equivalence of SPFT tests and static tests conducted with pH buffer solutions. Tests were conducted over the range of pH values that are most relevant for waste glass dissolution in a disposal system, and the glass and temperature used in the tests were selected to allow direct comparison with SPFT tests conducted previously. The ability to measure parameter values with more than one test method, and an understanding of how the rate measured in each test is affected by various test parameters, provide added confidence in the measured values. The dissolution rate of a simple 5-component glass was measured at pH values of 6.2, 8.3, and 9.6 and $70^{\circ}C$ using static tests and single-pass flow-through (SPFT) tests. Similar rates were measured with the two methods. However, the measured rates are about 10 times higher than the rates measured previously for a glass of the same composition using an SPFT test method. The differences are attributed to effects of the solution flow rate on the glass dissolution rate and to how the specific surface area of the crushed glass is estimated. This comparison indicates the need to standardize the SPFT test procedure.
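
Eq. 1 above can be evaluated directly; a minimal sketch follows (the parameter values are placeholders, not the fitted values from the tests described in the abstract):

```python
# Minimal sketch of the rate law in Eq. 1:
#   rate = k0 * 10**(eta*pH) * exp(-Ea/(R*T)) * (1 - Q/K) + k_long
import numpy as np

R = 8.314  # gas constant [J/mol/K]

def glass_dissolution_rate(pH, T, Q, K, k0=1.0e7, eta=0.4, Ea=7.5e4, k_long=0.0):
    """Evaluate Eq. 1 with placeholder parameters (k0, eta, Ea, k_long assumed)."""
    return k0 * 10.0**(eta * pH) * np.exp(-Ea / (R * T)) * (1.0 - Q / K) + k_long

# Forward rate: dilute solution, Q ~ 0, so the affinity term is ~1.
forward = glass_dissolution_rate(pH=8.3, T=273.15 + 70, Q=0.0, K=1e-3)
# Near saturation the affinity term, and hence the rate, approaches zero.
near_sat = glass_dissolution_rate(pH=8.3, T=273.15 + 70, Q=0.9e-3, K=1e-3)
print(forward, near_sat)
```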
