• Title/Summary/Keyword: Reinforcement methods

988 results found

Reduced model experiment to review applicability of tunnel pillar reinforcement method using prestress and steel pipe reinforcement grouting (프리스트레스 및 강관보강 그라우팅을 이용한 터널 필라부 보강공법의 적용성 검토를 위한 축소모형 실험)

  • Kim, Yeon-Deok;Lee, Soo-Jin;Lee, Pyung-Woo;Yun, Hong-Su;Kim, Sang-Hwan
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.6
    • /
    • pp.495-512
    • /
    • 2022
  • Due to the concentration of population in city centers, aboveground structures are saturated, and the development of underground structures is becoming increasingly important. Accordingly, a reinforcement method for the pillar between adjacent tunnels is needed that can secure stability, economy, and workability in the field. In this study, a tunnel pillar reinforcement method using prestress and grouting was reviewed. Among the various reinforcement methods that can compensate for the problems of closely spaced parallel tunnels, the pillar reinforcement method using prestress and grouting is judged to be excellent in field applicability, stability, and economic feasibility, so its actual behavior mechanism needs to be reviewed theoretically and numerically. Therefore, a scaled model experiment was conducted, divided into three cases: PC strand + steel pipe reinforcement grouting + prestress (Case 1), PC strand + steel pipe reinforcement grouting (Case 2), and no reinforcement (Case 3); the displacement of the pillar and the earth pressure acting on the wall were measured. The experiments confirmed that the PC strand + steel pipe reinforcement grouting + prestress method (Case 1) was the most effective of the reinforcement methods examined.

Experimental Study on the Strengthening Effect of External Prestressing Method Considering Deterioration (구조물 노후도를 반영한 외부긴장 보강 효과에 관한 실험적 연구)

  • Kim, Sang-Hyun;Jung, Woo-Tai;Kang, Jae-Yoon;Park, Hee-Beom;Park, Jong-Sup
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.25 no.1
    • /
    • pp.1-6
    • /
    • 2021
  • Concrete structures gradually age due to material deterioration, excess loads, and environmental factors, and their reduced performance affects the usability and safety of the structure. Although external prestressing (tensioning) methods are widely used to strengthen aged bridges, the reinforcement effect as a function of the level of deterioration has not been sufficiently identified. Therefore, in this study, four-point loading tests were conducted on unreinforced specimens and on specimens strengthened by the external tensioning method, with the aging of the structure represented as a reduction in concrete compressive strength and in the tensile reinforcement, in order to analyze the behavior of the strengthened members and confirm the reinforcement effect. In the tests, it was difficult to identify the amount of strengthening at the ultimate state because the anchorage detached prematurely; compliance with the provisions on anchor bolts is therefore required when applying the external tension reinforcement method. The cracking load and yield load increased with external tensioning, but before cracking the stiffness was similar before and after strengthening, making the reinforcement effect difficult to confirm.

Direct design of partially prestressed concrete solid beams

  • Alnuaimi, A.S.
    • Structural Engineering and Mechanics
    • /
    • v.27 no.6
    • /
    • pp.741-771
    • /
    • 2007
  • Tests were conducted on two partially prestressed concrete solid beams subjected to combined bending, shear, and torsion. The beams were designed using the Direct Design Method (DDM), which is based on the Lower Bound Theorem of the Theory of Plasticity. Both beams had a 300 × 300 mm cross-section and were 3.8 m long. The two main variables studied were the ratio of the maximum shear stress due to the twisting moment to the shear stress arising from the shear force, which was varied between 0.69 and 3.04, and the ratio of the maximum twisting moment to the maximum bending moment, which was varied between 0.26 and 1.19. The reinforcement required by the Direct Design Method was compared with the requirements of the ACI and BSI codes. It was found that, when bending dominates, the longitudinal reinforcement required by all methods was similar, while the BSI code required much larger transverse reinforcement. When torsion dominates, the BSI method required much larger longitudinal and transverse reinforcement than both the ACI and DDM methods, the difference being more pronounced for the transverse reinforcement. The experimental investigation showed good agreement between the design and experimental failure loads of the beams designed using the Direct Design Method. Both beams failed within an acceptable range of the design loads and behaved in a ductile manner up to failure. The results indicate that the Direct Design Method can be successfully used to design partially prestressed concrete solid beams under the combined effect of bending, shear, and torsion loads.

Comparison of Reinforcement Learning Algorithms used in Game AI (게임 인공지능에 사용되는 강화학습 알고리즘 비교)

  • Kim, Deokhyung;Jung, Hyunjun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.693-696
    • /
    • 2021
  • There are various reinforcement learning algorithms, and which one is used differs by field. In games as well, specific algorithms are used when developing AI (artificial intelligence) with reinforcement learning. Because different algorithms learn in different ways, the resulting AI behaves differently, so the developer must choose the algorithm appropriate to the intended AI. To do that, the developer needs to know each algorithm's learning method and which algorithms are effective for which kinds of AI. This paper therefore compares the learning methods of three algorithms used to implement game AI, namely SAC, PPO, and POCA, and discusses which types of AI implementation each algorithm is practical for.
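As one concrete illustration of how the compared algorithms' learning methods differ, PPO's clipped surrogate objective can be sketched as follows. This is a minimal, generic sketch written for this summary, not code from the paper; the function name and scalar inputs are illustrative assumptions.

```python
import math

def ppo_clip_loss(old_logp, new_logp, advantage, eps=0.2):
    """Clipped surrogate loss used by PPO for one sample (negated, so lower is better)."""
    ratio = math.exp(new_logp - old_logp)            # probability ratio r_t
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)  # clip r_t to [1 - eps, 1 + eps]
    # PPO takes the pessimistic (minimum) of the unclipped and clipped objectives,
    # which limits how far a single update can move the policy
    return -min(ratio * advantage, clipped * advantage)
```

SAC instead maximizes an entropy-regularized off-policy objective, which is one reason the choice between the two depends on the kind of game AI being trained.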


Aspect-based Sentiment Analysis of Product Reviews using Multi-agent Deep Reinforcement Learning

  • M. Sivakumar;Srinivasulu Reddy Uyyala
    • Asia pacific journal of information systems
    • /
    • v.32 no.2
    • /
    • pp.226-248
    • /
    • 2022
  • Existing models for sentiment analysis of product reviews learned from past data, and new data was labeled based on that training; the new data itself was never used by the existing system when making a decision. The proposed Aspect-based multi-agent Deep Reinforcement learning Sentiment Analysis (ADRSA) model learns from its very first data without the help of any training dataset and labels each sentence with an aspect category and a sentiment polarity. It keeps learning from new data and updates its knowledge to improve its intelligence, so the decision of the proposed system changes over time based on the new data. As a result, the accuracy of sentiment analysis using deep reinforcement learning was improved over supervised and unsupervised learning methods, and the sentiments of premium customers on a particular site can be surfaced effectively to other customers. A dynamic environment with a strong knowledge base helps the system remember sentences, and the use of the State Action Reward State Action (SARSA) algorithm with the Bidirectional Encoder Representations from Transformers (BERT) model improved the accuracy of the proposed system compared with state-of-the-art methods.
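The SARSA update the abstract refers to can be sketched in its minimal tabular form. This is a generic textbook sketch under assumed hyperparameters; the paper combines SARSA with BERT sentence representations, which are omitted here.

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One tabular SARSA step. SARSA is on-policy: it bootstraps from the
    action a_next actually chosen in s_next (unlike Q-learning's max)."""
    td_target = r + gamma * Q.get((s_next, a_next), 0.0)
    td_error = td_target - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q
```

Because the update uses the action the current policy actually takes next, the value estimates keep tracking the evolving policy as new review data arrives.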

Creating damage tolerant intersections in composite structures using tufting and 3D woven connectors

  • Clegg, Harry M.;Dell'Anno, Giuseppe;Partridge, Ivana K.
    • Advances in aircraft and spacecraft science
    • /
    • v.6 no.2
    • /
    • pp.145-156
    • /
    • 2019
  • As the industrial desire for a step change in productivity in the manufacture of composite structures increases, so does interest in Through-Thickness Reinforcement technologies. As manufacturers look to increase the production rate whilst reducing cost, Through-Thickness Reinforcement technologies represent valid methods of reinforcing structural joints, as well as a potential alternative to mechanical fastening and bolting. The use of tufting promises to resolve the typically low delamination resistance, which is necessary when creating intersections within complex composite structures. Emerging methods include the use of 3D woven connectors and orthogonally intersecting fibre packs, with the components secured by the selective insertion of microfasteners in the form of tufts. Intersections of this type are prevalent in aeronautical applications, typified by the connections found in aircraft wing structures and their intersections with the composite skin and other structural elements. Common practice is to create back-to-back composite "L's", or to use a machined metallic connector mechanically fastened to the remainder of the structure. 3D woven connectors and selective Through-Thickness Reinforcement promise to increase the ultimate load that the structure can bear, whilst reducing manufacturing complexity and facilitating the automated production of parts of the composite structure. This paper provides an overview of the currently available methods for creating intersections within composite structures and compares them to alternatives involving the use of 3D woven connectors and the application of selective Through-Thickness Reinforcement for enhanced damage tolerance. The use of tufts is investigated, and their effect on the load-carrying ability of the structure is examined.
The results of mechanical tests are presented for each of the methods described, and their failure characteristics examined.

Actor-Critic Reinforcement Learning System with Time-Varying Parameters

  • Obayashi, Masanao;Umesako, Kosuke;Oda, Tazusa;Kobayashi, Kunikazu;Kuremoto, Takashi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.138-141
    • /
    • 2003
  • Reinforcement learning has recently attracted the attention of many researchers because of its simple and flexible ability to learn in any environment, and many reinforcement learning methods have been proposed, such as Q-learning, actor-critic, and the stochastic gradient ascent method. A reinforcement learning system can adapt to changes in the environment through its interaction with it; however, when the environment changes periodically, it cannot adapt well. In this paper we propose a reinforcement learning system that can adapt to periodic changes in the environment by introducing time-varying adjustable parameters. A simulation study of a maze problem with an aisle that opens and closes periodically shows that the proposed method works well, whereas the conventional method with constant adjustable parameters does not.
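The core idea of a time-varying adjustable parameter can be sketched minimally. The sinusoidal form below is a hypothetical illustration chosen because the target environment is periodic; the paper's actual parameterization may differ.

```python
import math

def time_varying_param(theta0, amp, omega, phase, t):
    # An adjustable parameter that oscillates around theta0 with period
    # 2*pi/omega. If amp, omega, and phase are themselves tuned by
    # learning, the policy can track an environment that changes on a
    # fixed cycle (e.g. an aisle that opens and closes periodically),
    # which a constant parameter cannot do.
    return theta0 + amp * math.sin(omega * t + phase)
```

A constant-parameter system would have to settle on one compromise value, while this form lets the effective parameter differ at different phases of the environment's cycle.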


A Joint Allocation Algorithm of Computing and Communication Resources Based on Reinforcement Learning in MEC System

  • Liu, Qinghua;Li, Qingping
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.721-736
    • /
    • 2021
  • For a mobile edge computing (MEC) system supporting a dense network, a joint allocation algorithm for computing and communication resources based on reinforcement learning is proposed. The energy consumption of task execution is defined as the maximum energy consumed by any user's task execution in the system. Considering the constraints on task offloading, power allocation, transmission rate, and computing resource allocation, the joint task offloading and resource allocation problem is modeled as minimizing the maximum task-execution energy consumption. As a mixed-integer nonlinear programming problem, it is difficult to solve directly with traditional optimization methods, so this paper solves it with a reinforcement learning algorithm. The Markov decision process and the theoretical basis of reinforcement learning are introduced to provide a foundation for the simulation experiments. Based on the reinforcement learning algorithm for joint allocation of communication resources, the data task offloading and power control strategy is jointly optimized for each terminal device, and local computing and task offloading models are built. The simulation results show that the total task computation cost of the proposed algorithm is 5%-10% lower than that of the two comparison algorithms under the same task input, and more than 5% lower than that of the two newer comparison algorithms.
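The min-max objective described above can be written down directly. The sketch below assumes a simple binary offload-or-compute-locally decision per user and hypothetical variable names; the paper's full model also includes power and rate constraints, omitted here.

```python
def max_user_energy(offload, local_energy, offload_energy):
    # The quantity the RL agent tries to minimize: the maximum
    # task-execution energy over all users, where each user either
    # computes its task locally or offloads it to the MEC server.
    return max(
        offload_energy[u] if offload[u] else local_energy[u]
        for u in range(len(offload))
    )
```

Because the objective is a max over users rather than a sum, improving the worst-off user is what lowers the cost, which is what makes the problem hard for standard convex solvers and natural for an RL search over discrete offloading decisions.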

Two Circle-based Aircraft Head-on Reinforcement Learning Technique using Curriculum (커리큘럼을 이용한 투서클 기반 항공기 헤드온 공중 교전 강화학습 기법 연구)

  • Insu Hwang;Jungho Bae
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.26 no.4
    • /
    • pp.352-360
    • /
    • 2023
  • Recently, AI pilots using reinforcement learning have been developed to a level that is more flexible than rule-based methods and can replace human pilots. In this paper, a curriculum was used to support reinforcement learning of head-on air combat. Head-on engagement is not easy to learn with reinforcement learning alone, but through the proposed two-circle-based head-on air combat learning technique the ownship faces gradually increasing difficulty and becomes proficient at head-on combat. On the two circles, learning was conducted while the ATA angle between the ownship and the target was gradually increased and the AA angle was gradually decreased. Models trained with and without the curriculum were then engaged against a rule-based model, and as the win ratio of the curriculum-based model rose to nearly 100%, its superior performance was confirmed.
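A curriculum of the kind described, with the ATA angle increasing and the AA angle decreasing across training stages, could be sketched as below. The linear schedule and the angle limits are illustrative assumptions, not values from the paper.

```python
def curriculum_angles(stage, n_stages, ata_max=180.0, aa_max=60.0):
    # Hypothetical linear curriculum: as training stages advance, raise
    # the initial antenna-train angle (ATA) toward ata_max and lower the
    # aspect angle (AA) toward zero, so the engagement geometry becomes
    # progressively closer to a full head-on pass.
    frac = stage / (n_stages - 1)
    return ata_max * frac, aa_max * (1.0 - frac)
```

Training would run each stage until the agent's win ratio stabilizes, then move to the next, harder geometry.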

The Study of Saamchimbeop's Method of Reinforcement and Reduction (사암침법(舍巖鍼法)의 보사수기법(補瀉手技法)에 관한 연구(硏究))

  • Ahn, Jeong-Ran;Lee, In-Seon
    • Journal of Korean Medicine Rehabilitation
    • /
    • v.19 no.2
    • /
    • pp.113-123
    • /
    • 2009
  • Objectives: The purpose of this study is to clarify Saamchimbeop's method of reinforcement and reduction. Methods: 1. We referred to the Bo-Sa (reinforcement-reduction) methods of DongeuiBo-gam(東醫寶鑑), Uihakim-mun(醫學入門), Uihakjeong-jeon(醫學正傳), Chimgugyeongheom-bang(鍼灸經驗方), Biaoyou-fu(標幽賦) in Cimgudaeseong(鍼灸大成), and Nei-Jing(內經). 2. We conjectured that Zheng(正), Ying(迎), Sui(隨), Xie(斜), Yingzheng(迎正), Duo(奪), Zhenghuoxie(正或斜), Wen(溫), Liang(凉), and JongYang-Inyin(從陽引陰) in Saamchimbeop are other expressions of the method of reinforcement and reduction, and compared them with the reinforcement-reduction methods of the texts above. Results: 1. Zheng(正) and Xie(斜) refer to the angle of acupuncture manipulation: descending insertion on a Yang meridian is the manipulation for the tonifying effect(補法), and direct insertion on a Yin meridian is for the dispersing effect(瀉法). 2. JongYang-Inyin(從陽引陰) is contralateral acupuncture. 3. Ying(迎) and Sui(隨) in Saamchimbeop have the same meaning as the method of reinforcement and reduction(補瀉手技法). 4. The final aim of Saamchimbeop is Wen-Liang(溫凉) according to the strength or weakness of the disease, as set out in the Ohaeng-seo of Saam acupuncture. Conclusions: Saamchimbeop's method of reinforcement and reduction comprises reinforcement-reduction by lifting and thrusting the needle, the breathing reinforcement-reduction method, reinforcement and reduction achieved by rapid or slow insertion and withdrawal of the needle, and reinforcement and reduction by opening and closing, together with contralateral acupuncture on a Yin or Yang meridian. The final aim of Saamchimbeop is Wen-Liang(溫凉) according to the strength or weakness of the disease.