• Title/Summary/Keyword: example-based (예제기반)


Application of a Fictitious Axial Force Factor to Determine Elastic and Inelastic Effective Lengths for Column Members of Steel Frames (강프레임 기둥 부재의 탄성 및 비탄성 유효좌굴길이 산정을 위한 가상축력계수의 적용)

  • Choi, Dong Ho;Yoo, Hoon;Lee, Yoon Seok
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.2A
    • /
    • pp.81-92
    • /
    • 2010
  • In the design of steel frames, it is generally believed that elastic system buckling analysis cannot predict the real behavior of structures, whereas inelastic system buckling analysis can capture the buckling behavior of individual members by accounting for inelastic material behavior. However, using the Euler buckling equation with these system buckling analyses has an inherent problem: the methods yield unexpectedly large effective lengths for members carrying relatively small axial forces. This paper proposes a new method for obtaining the elastic and inelastic effective lengths of all members in steel frames. By introducing a fictitious axial force factor for each story of the frame, the proposed method determines the effective lengths using the inelastic stiffness reduction factor and an iterative eigenvalue analysis. To verify the validity of the proposed method, the effective lengths of example frames computed by the proposed method were compared with those of previously established methods. The proposed method gives reasonable effective lengths for all members in steel frames. The effect of inelastic material behavior on the effective lengths of members is also discussed.
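The core computation can be sketched as follows: once a system buckling analysis yields a load multiplier (eigenvalue), each member's effective length factor follows from the Euler buckling equation. This is a minimal sketch with hypothetical member values, not the paper's implementation; it also illustrates the problem noted in the abstract, where a lightly loaded member receives an inflated effective length.

```python
import math

def effective_length_factor(E, I, L, axial_force, eigenvalue):
    """Effective length factor K from the Euler buckling equation,
    given the system buckling eigenvalue (load multiplier)."""
    P_cr = eigenvalue * axial_force  # member critical load at system buckling
    return math.sqrt(math.pi**2 * E * I / (P_cr * L**2))

# Hypothetical members sharing E, I, L, and eigenvalue but with very
# different axial forces: the lightly loaded member gets a K that is
# 10x larger (K scales as 1/sqrt(P)), which is the issue the paper targets.
E, I, L, lam = 200e9, 8.0e-5, 4.0, 2.5
K_heavy = effective_length_factor(E, I, L, 1.0e6, lam)
K_light = effective_length_factor(E, I, L, 1.0e4, lam)  # 100x smaller force
```

Because K varies as the inverse square root of the axial force, dividing the force by 100 multiplies K by 10, regardless of how realistic that length is for the member.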

A Method for Extracting Equipment Specifications from Plant Documents and Cross-Validation Approach with Similar Equipment Specifications (플랜트 설비 문서로부터 설비사양 추출 및 유사설비 사양 교차 검증 접근법)

  • Jae Hyun Lee;Seungeon Choi;Hyo Won Suh
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.2
    • /
    • pp.55-68
    • /
    • 2024
  • Plant engineering companies create or refer to requirements documents for each related field, such as plant process/equipment/piping/instrumentation, in different engineering departments. The process-related requirements document includes not only a description of the process but also the requirements of the equipment or related facilities that operate it. Since the authors and reviewers of the requirements documents differ, inconsistencies may occur between equipment or part design specifications described in different requirements documents. Ensuring consistency in these matters can increase the reliability of the overall plant design information. However, the volume of documents and the scattered nature of requirements for the same equipment and parts across different documents make it challenging for engineers to trace and manage requirements. This paper proposes a method to analyze requirement sentences and calculate their similarity in order to identify semantically identical sentences. To calculate the similarity of requirement sentences, we propose a named entity recognition method that identifies compound words for the parts and properties that are semantically central to the requirements, along with a method to calculate the similarity of the identified compound words. The proposed method is illustrated using sentences from practical documents, and experimental results are described.
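As a rough illustration of the matching step, the similarity of two requirement sentences can be reduced to comparing their extracted part/property compound words. The token-overlap measure and best-match averaging below are simplifying assumptions for illustration, not the paper's actual similarity formula.

```python
def compound_similarity(a, b):
    """Token-level Jaccard similarity between two compound words
    (a hypothetical stand-in for the paper's compound-word measure)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def sentence_pair_score(parts_a, parts_b):
    """Best-match average: for each part/property compound in sentence A,
    take its best similarity against sentence B, then average."""
    if not parts_a or not parts_b:
        return 0.0
    return sum(max(compound_similarity(p, q) for q in parts_b)
               for p in parts_a) / len(parts_a)

# Hypothetical compounds extracted by NER from two requirement sentences
score = sentence_pair_score(
    ["discharge pressure", "cooling water pump"],
    ["cooling water pump", "suction pressure"])
```

The shared compound scores 1.0 and the partially overlapping one scores 1/3, so the pair score lands between "unrelated" and "identical", which is the kind of ranking a cross-validation pass over similar equipment would use.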

Development of hydro-mechanical-damage coupled model for low to intermediate radioactive waste disposal concrete silos (방사성폐기물 처분 사일로의 손상연동 수리-역학 복합거동 해석모델 개발)

  • Ji-Won Kim;Chang-Ho Hong;Jin-Seop Kim;Sinhang Kang
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.3
    • /
    • pp.191-208
    • /
    • 2024
  • In this study, a hydro-mechanical-damage coupled analysis model was developed to evaluate the structural safety of radioactive waste disposal structures. The Mazars damage model, widely used to model the fracture behavior of brittle materials such as rock or concrete, was coupled with conventional hydro-mechanical analysis, and the developed model was verified against theoretical solutions from the literature. To derive the numerical input values for damage-coupled analysis, uniaxial compressive strength and Brazilian tensile strength tests were performed on concrete samples made with the mix ratio of the disposal silo concrete, cured under dry and saturated conditions. The input parameters derived from the laboratory-scale experiments were applied to a two-dimensional finite element model of the concrete silos at the Wolseong Nuclear Environmental Management Center in Gyeongju, and numerical analysis was conducted to examine the effects of damage consideration, analysis technique, and waste loading conditions. The hydro-mechanical-damage coupled model developed in this study will be applied to the long-term behavior and stability analysis of deep geological repositories for high-level radioactive waste disposal.
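The Mazars model named above has a standard closed form for the scalar damage variable. The sketch below implements its tensile branch with illustrative parameter values (`eps_d0`, `A`, `B` are not the paper's calibrated values) and the resulting stiffness degradation:

```python
import math

def mazars_damage(eps_eq, eps_d0=1.0e-4, A=0.9, B=1000.0):
    """Mazars scalar damage D (tensile branch) as a function of the
    equivalent strain eps_eq; no damage below the threshold eps_d0.
    Parameter values here are illustrative, not a calibration."""
    if eps_eq <= eps_d0:
        return 0.0
    D = (1.0 - eps_d0 * (1.0 - A) / eps_eq
         - A * math.exp(-B * (eps_eq - eps_d0)))
    return min(max(D, 0.0), 1.0)  # clamp to the physical range [0, 1]

def damaged_stress(eps, E=30e9, **kw):
    """Stress with damage-degraded stiffness: sigma = (1 - D) * E * eps."""
    return (1.0 - mazars_damage(eps, **kw)) * E * eps
```

In a coupled scheme, D would feed back into both the mechanical stiffness and, through damage-dependent permeability, the hydraulic part of the problem; this sketch only shows the constitutive kernel.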

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, so it is stable for managing large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments, learns much faster than traditional boosting methods, and is frequently used across many fields of data analysis. In this study, we propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates the investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019. The data sets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. Predictions were accumulated with a moving-window method using 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, and the long test period yielded a large sample of results. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. Reducing the estimation errors increases the stability of the model and makes it easier to apply in practical investment. The experimental results thus show improved portfolio performance through reduced estimation errors in the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study not only takes advantage of traditional asset allocation models, but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. While various studies address parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new way to reduce them using machine learning.
This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
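The risk-parity step the abstract describes can be sketched in its simplest (inverse-volatility) form; the paper's model would supply XGBoost-predicted volatilities where the input volatilities appear below. Function names and numbers are illustrative, not the paper's.

```python
import numpy as np

def inverse_vol_weights(predicted_vols):
    """Naive risk-parity (inverse-volatility) weights; in the paper's
    setup, XGBoost-predicted volatilities would replace historical ones."""
    v = np.asarray(predicted_vols, dtype=float)
    w = 1.0 / v
    return w / w.sum()

def risk_contributions(weights, cov):
    """Each asset's contribution to total portfolio volatility."""
    w = np.asarray(weights)
    cov = np.asarray(cov)
    port_vol = np.sqrt(w @ cov @ w)
    return w * (cov @ w) / port_vol

vols = [0.10, 0.20, 0.40]  # hypothetical predicted volatilities
w = inverse_vol_weights(vols)
```

For uncorrelated assets (a diagonal covariance matrix), inverse-volatility weighting equalizes the risk contributions exactly; with correlations, a full risk-parity model solves a small optimization instead.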

The Estimation Model of an Origin-Destination Matrix from Traffic Counts Using a Conjugate Gradient Method (Conjugate Gradient 기법을 이용한 관측교통량 기반 기종점 OD행렬 추정 모형 개발)

  • Lee, Heon-Ju;Lee, Seung-Jae
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.1 s.72
    • /
    • pp.43-62
    • /
    • 2004
  • Conventionally, the origin-destination (O-D) matrix has been estimated by expanding sampled data obtained from roadside interviews and household travel surveys. In this survey process, larger sample sizes impose greater limitations in cost and time. Estimating the O-D matrix from observed traffic count data has been applied to overcome this limitation, and the gradient model is one of the most popular techniques. However, although the gradient model may minimize the error between observed and estimated traffic volumes, the structure of the prior O-D matrix cannot be maintained exactly; that is, unwanted changes may occur. For this reason, this study adopts a conjugate gradient algorithm that takes two factors into account: minimizing the error between observed and estimated traffic volumes while maintaining the structure of the prior O-D matrix. The study validates the model on a simple network and then applies it to a large-scale network. Several findings emerge from the tests. First, regarding consistency, the upper level of this model plays a key role through its internal relationship with the lower level. Second, regarding estimation precision, the estimation error lies within the tolerance interval. Furthermore, the structure of the estimated O-D matrix does not change excessively and preserves attributes of the prior matrix.
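The conjugate gradient iteration at the heart of the proposed estimator is a standard algorithm. A minimal sketch on a toy symmetric positive-definite system (a hypothetical stand-in for the link-volume fitting problem, not the paper's bilevel formulation) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Standard conjugate-gradient iteration for A x = b, with A
    symmetric positive definite -- the core update behind the estimator."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x                 # residual
    p = r.copy()                  # conjugate search direction
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        beta = (r @ r) / rr       # direction update coefficient
        p = r + beta * p
    return x

# Hypothetical 2x2 system standing in for fitting O-D flows to link counts
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
```

For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is what makes it attractive for the large networks the paper targets.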

Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.7
    • /
    • pp.21-28
    • /
    • 2017
  • Practical models for FPGA architectures, which include performance- and density-enhancing components such as carry chains, wide-function multiplexers, and memory/multiplier blocks, are being applied to academic FPGA placement tools that used to rely on simple imaginary models. Techniques such as pre-packing and multi-layer density analysis were previously proposed to remedy issues related to such practical models, and wire length is effectively minimized during initial analytic placement. Since timing, rather than wire length, should be optimized, most previous work takes timing constraints into account; however, the timing-driven techniques are mostly applied to subsequent steps, such as placement legalization and iterative improvement, instead of the initial analytic placement. This paper incorporates timing-driven techniques, which check whether the placement meets the timing constraints given in the standard SDC format and minimize the detected violations, into an existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at endpoints is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated for each clock and added to the objective function. Because this function can unnecessarily shorten unviolated paths, a new function that calculates and minimizes the largest negative slack at endpoints is also proposed and compared. Since the existing, non-timing-driven legalization is used before the timing analysis, any improvement in timing is entirely due to the functions added to the objective function.
Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, while the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
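The per-clock slack evaluation described above can be sketched as follows; the aggregation and the numbers are hypothetical illustrations, not the paper's exact objective terms:

```python
def worst_negative_slack(arrival_times, clock_period):
    """Worst negative slack over the endpoints of one clock domain:
    slack = period - arrival time; negative values are violations."""
    slacks = [clock_period - t for t in arrival_times]
    return min(min(slacks), 0.0)  # 0.0 when no endpoint violates timing

def timing_objective(domains):
    """Evaluate each clock domain separately (each clock has its own
    period) and sum the violations -- a hypothetical aggregation."""
    return sum(worst_negative_slack(at, T) for T, at in domains)

# Two hypothetical clock domains: (period, endpoint arrival times)
obj = timing_objective([(10.0, [8.0, 12.0]),   # one violating endpoint
                        (5.0,  [4.0, 4.5])])   # no violation
```

Clamping at zero is what distinguishes the slack-based term from the minimum-arrival-time term: paths that already meet timing contribute nothing, so the optimizer does not shorten them unnecessarily.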

A Study on Electron Dose Distribution of Cones for Intraoperative Radiation Therapy (수술중 전자선치료에 있어서 선량분포에 관한 연구)

  • Kang, Wee-Saing;Ha, Sung-Whan;Yun, Hyong-Geun
    • Progress in Medical Physics
    • /
    • v.3 no.2
    • /
    • pp.1-12
    • /
    • 1992
  • For intraoperative radiation therapy (IORT) using electron beams, a cone system is needed to deliver a large dose to the tumor during surgery while sparing the surrounding normal tissue, and dosimetry of the cone system is necessary to find the proper X-ray collimator setting and to obtain useful clinical data. We developed a docking-type cone system consisting of two aluminum parts: a holder and a cone. The cones, whose diameters range from 4 cm to 9 cm in 1 cm steps at 100 cm SSD of the photon beam, are 28 cm long circular tubular cylinders. The system has two 26 cm long holders: one for cones of 7 cm diameter or larger and another for smaller ones. On the side of the holder is an aperture for inserting a lamp and mirror to observe the treatment field. Depth dose curves, dose profiles and output factors at the depth of dose maximum, and dose distributions in water for each cone size were measured with a p-type silicon detector driven by a linear scanner for several extra openings of the X-ray collimators. For each combination of electron energy and cone size, the opening of the X-ray collimator affected the surface dose, the depths of dose maximum and of 80% dose, the dose profile, and the output factor. The variation of the output factor was the most remarkable: the output factors for 9 MeV electrons, for example, range from 0.637 to 1.549. The opening of the X-ray collimators changes the quantity of scattered electrons reaching the IORT cone system, which in turn changes the dose distribution as well as the output factor. Dosimetry of an IORT cone system is therefore essential to minimize uncertainty in clinical use.
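The output factor quoted above is a simple normalization: the detector reading for a given cone at the depth of dose maximum, divided by the reading for a reference cone under reference conditions. A sketch with hypothetical readings (the cone diameters and values are not the paper's measurements):

```python
def output_factors(readings, reference_cone):
    """Output factor per cone: reading at the depth of dose maximum,
    normalized to the reference cone (readings in arbitrary units)."""
    ref = readings[reference_cone]
    return {cone: r / ref for cone, r in readings.items()}

# Hypothetical readings at d_max for several cone diameters (cm)
ofs = output_factors({4: 0.82, 6: 0.95, 7: 1.00, 9: 1.12},
                     reference_cone=7)
```

Because the cone system receives collimator-dependent scatter, a table like this must be measured per electron energy and per collimator opening, which is why the abstract reports such a wide output-factor range.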


Use of ChatGPT in college mathematics education (대학수학교육에서의 챗GPT 활용과 사례)

  • Sang-Gu Lee;Doyoung Park;Jae Yoon Lee;Dong Sun Lim;Jae Hwa Lee
    • The Mathematical Education
    • /
    • v.63 no.2
    • /
    • pp.123-138
    • /
    • 2024
  • This study describes the use of ChatGPT in teaching and in students' learning for the course "Introductory Mathematics for Artificial Intelligence (Math4AI)" at 'S' University. We developed a customized ChatGPT and presented a learning model in which students supplement their knowledge of the topic at hand by using it. More specifically, students first learn the concepts and problems of the course textbook on their own. Then, for anything they are unsure of, they may submit questions (keywords or open problem numbers from the textbook) to our own ChatGPT at https://math4ai.solgitmath.com/ to get help. Notably, we optimized ChatGPT and minimized inaccurate information by fully utilizing various types of data related to the subject, such as textbooks, labs, discussion records, and code at http://matrix.skku.ac.kr/Math4AI-ChatGPT/. In this model, when students have questions while studying the textbook by themselves, they can ask about mathematical concepts, keywords, theorems, examples, and problems in natural language through the ChatGPT interface. Our customized ChatGPT then provides the relevant terms, concepts, and sample answers based on previous students' discussions and/or samples of the Python or R code used in those discussions. Furthermore, by providing students with real-time, optimized advice matched to their level, we can offer personalized education not only in the Math4AI course, but in other college mathematics courses as well. The present study, which incorporates our ChatGPT model into the teaching and learning process of the course, shows the promising applicability of AI technology to other college math courses (for instance, calculus, linear algebra, discrete mathematics, engineering mathematics, and basic statistics), to K-12 math education, and to lifelong and continuing education.