• Title/Abstract/Keywords: Reinforcement methods


프리스트레스 및 강관보강 그라우팅을 이용한 터널 필라부 보강공법의 적용성 검토를 위한 축소모형 실험 (Reduced model experiment to review applicability of tunnel pillar reinforcement method using prestress and steel pipe reinforcement grouting)

  • 김연덕;이수진;이평우;윤홍수;김상환
    • 한국터널지하공간학회 논문집 / Vol. 24 No. 6 / pp.495-512 / 2022
  • This study examined a tunnel pillar reinforcement method using prestress and grouting. Although various reinforcement methods can compensate for the problems of twin parallel tunnels, the pillar reinforcement method using prestress and grouting was judged superior in field applicability, stability, and economy, and a theoretical and numerical review of its actual behavior mechanism was therefore required; reduced-scale model experiments were conducted for this purpose. The experiments were divided into PC strand + steel pipe reinforcement grouting + prestress (Case 1), PC strand + steel pipe reinforcement grouting (Case 2), and no reinforcement (Case 3), and the displacement of the pillar and the earth pressure acting on the wall were measured. The experiments confirmed that the PC strand + steel pipe reinforcement grouting + prestress method was the best of the methods tested; if it is verified and refined through future field tests, it is expected to outperform currently applied reinforcement methods in terms of displacement control and member forces.
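
A hedged aside, not stated in the abstract: 1 g reduced-scale tunnel model tests of this kind are commonly interpreted through the geometric scale factor, so that (assuming the same geomaterial in model and prototype)

\[
\lambda = \frac{L_p}{L_m}, \qquad \frac{\sigma_p}{\sigma_m} \approx \lambda, \qquad \frac{F_p}{F_m} \approx \lambda^3,
\]

where $L$ is length, $\sigma$ self-weight stress (and hence measured earth pressure), and $F$ member force, with subscripts $p$ and $m$ for prototype and model; the paper may use a different similitude law.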

구조물 노후도를 반영한 외부긴장 보강 효과에 관한 실험적 연구 (Experimental Study on the Strengthening Effect of External Prestressing Method Considering Deterioration)

  • 김상현;정우태;강재윤;박희범;박종섭
    • 한국구조물진단유지관리공학회 논문집 / Vol. 25 No. 1 / pp.1-6 / 2021
  • Concrete structures gradually deteriorate due to material degradation, excessive loads, and environmental factors, and the resulting loss of performance affects the serviceability and safety of the structure. Among strengthening methods for aged bridges, the external prestressing method is widely used, but the strengthening effect and its dependence on the degree of deterioration have not been well established. In this study, deterioration was represented as a reduction in concrete compressive strength and in the amount of tension reinforcement, and four-point loading tests were performed on unstrengthened specimens and specimens strengthened by external prestressing to analyze the behavior with and without strengthening and to verify the strengthening effect according to the degree of deterioration. The test results showed that the strengthening effect at the ultimate state was difficult to quantify because of premature anchorage failure, so compliance with the anchor-bolt provisions is necessary when applying the external prestressing method. The cracking load and yield load increased with external prestressing, but before cracking the stiffness of the strengthened and unstrengthened specimens was similar, so no strengthening effect could be identified in that range.
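
A hedged aside on why external prestressing raises the cracking load, using the standard cracking-moment relation (not quoted in the paper): for an effective prestress force $P$ applied at eccentricity $e$,

\[
M_{cr} = S\!\left(f_r + \frac{P}{A_c}\right) + P e,
\]

where $S$ is the section modulus for the tension fiber, $f_r$ the modulus of rupture, and $A_c$ the gross concrete area; the $P/A_c$ and $Pe$ terms are what shift cracking to a higher load, consistent with the increase observed in the tests.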

Direct design of partially prestressed concrete solid beams

  • Alnuaimi, A.S.
    • Structural Engineering and Mechanics / Vol. 27 No. 6 / pp.741-771 / 2007
  • Tests were conducted on two partially prestressed concrete solid beams subjected to combined bending, shear, and torsion. The beams were designed using the Direct Design Method, which is based on the Lower Bound Theorem of the Theory of Plasticity. Both beams had a 300 × 300 mm cross-section and a length of 3.8 m. The two main variables studied were the ratio of the maximum shear stress due to the twisting moment to the shear stress arising from the shear force, varied between 0.69 and 3.04, and the ratio of the maximum twisting moment to the maximum bending moment, varied between 0.26 and 1.19. The reinforcement required by the Direct Design Method was compared with the requirements of the ACI and BSI codes. It was found that, when bending dominated, the longitudinal reinforcement required by all methods was similar, while the BSI code required much larger transverse reinforcement. When torsion dominated, the BSI code required much larger longitudinal and transverse reinforcement than both the ACI code and the DDM, with the difference in transverse reinforcement more pronounced. The experimental investigation showed good agreement between the design and experimental failure loads of the beams designed using the Direct Design Method. Both beams failed within an acceptable range of the design loads and behaved in a ductile manner up to failure. The results indicate that the Direct Design Method can successfully be used to design partially prestressed concrete solid beams under the combined effect of bending, shear, and torsion loads.
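
A hedged note on the first test variable: if the torsional shear stress is estimated with the usual thin-walled-tube analogy (an assumption; the paper may define it differently), the two ratios can be written as

\[
\frac{\tau_T}{\tau_V} = \frac{T_{max}/(2 A_o t)}{V_{max}/(b_w d)} \in [0.69,\, 3.04], \qquad \frac{T_{max}}{M_{max}} \in [0.26,\, 1.19],
\]

where $A_o$ is the area enclosed by the shear-flow path, $t$ the effective wall thickness, $b_w$ the web width, and $d$ the effective depth.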

게임 인공지능에 사용되는 강화학습 알고리즘 비교 (Comparison of Reinforcement Learning Algorithms used in Game AI)

  • 김덕형;정현준
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021년도 추계학술대회 / pp.693-696 / 2021
  • Reinforcement learning includes a variety of algorithms, and different fields use different ones. In games as well, particular algorithms are used when developing AI with reinforcement learning. Each algorithm learns differently, and the resulting AI differs accordingly, so developers must choose an algorithm suited to their goal. To do so, a developer needs to know how each algorithm learns and for which kinds of AI it is an efficient choice. This paper therefore compares three algorithms used to implement game AI, SAC, PPO, and POCA, in terms of how they learn and for which kinds of AI each is an efficient choice.
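
Of the three algorithms compared, PPO has the most compact core. As a hedged illustration only (not taken from the paper), a minimal NumPy sketch of PPO's clipped surrogate objective:

```python
# Hedged sketch: PPO's clipped surrogate objective. Pure NumPy; all names
# and values are illustrative, not from the paper above.
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss for one batch.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: advantage estimates for the same samples
    eps:       clip range (0.2 is the value used in the PPO paper)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximizes the minimum of the two terms; as a loss we negate it.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy usage
ratio = np.array([0.9, 1.1, 1.5])
adv = np.array([1.0, -0.5, 2.0])
print(ppo_clip_loss(ratio, adv))
```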


Aspect-based Sentiment Analysis of Product Reviews using Multi-agent Deep Reinforcement Learning

  • M. Sivakumar;Srinivasulu Reddy Uyyala
    • Asia pacific journal of information systems / Vol. 32 No. 2 / pp.226-248 / 2022
  • The existing model for sentiment analysis of product reviews learned from past data, and new data was labeled on the basis of that training but was never used by the system when making decisions. The proposed Aspect-based multi-agent Deep Reinforcement learning Sentiment Analysis (ADRSA) model learns from its very first data without the help of any training dataset and labels a sentence with an aspect category and a sentiment polarity. It keeps learning from new data and updates its knowledge to improve its intelligence, so its decisions change over time as new data arrives. As a result, the accuracy of sentiment analysis using deep reinforcement learning improved over supervised and unsupervised learning methods, and the sentiments of premium customers on a particular site can be effectively surfaced to other customers. A dynamic environment with a strong knowledge base helps the system remember sentences, and using the State Action Reward State Action (SARSA) algorithm with the Bidirectional Encoder Representations from Transformers (BERT) model improved the accuracy of the proposed system compared with state-of-the-art methods.
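
The SARSA update named in the abstract is the standard on-policy temporal-difference rule. A minimal tabular sketch, leaving out the BERT representation the paper couples it with (all sizes and constants here are illustrative):

```python
# Hedged sketch: the generic tabular SARSA update rule, not the paper's
# full ADRSA pipeline. Sizes and hyperparameters are invented.
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate and discount (illustrative values)

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy update: bootstraps from the action actually taken in s_next."""
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])

sarsa_update(s=0, a=1, r=1.0, s_next=2, a_next=3)
```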

Creating damage tolerant intersections in composite structures using tufting and 3D woven connectors

  • Clegg, Harry M.;Dell'Anno, Giuseppe;Partridge, Ivana K.
    • Advances in aircraft and spacecraft science / Vol. 6 No. 2 / pp.145-156 / 2019
  • As the industrial desire for a step change in the productivity of composite structure manufacture increases, so does interest in Through-Thickness Reinforcement technologies. As manufacturers look to increase production rates whilst reducing cost, Through-Thickness Reinforcement technologies are valid methods for reinforcing structural joints and offer a potential alternative to mechanical fastening and bolting. The use of tufting promises to resolve the typically low delamination resistance, which matters when creating intersections within complex composite structures. Emerging methods include the use of 3D woven connectors and orthogonally intersecting fibre packs, with the components secured by the selective insertion of microfasteners in the form of tufts. Intersections of this type are prevalent in aeronautical applications, typified by the connections found in aircraft wing structures and their intersections with the composite skin and other structural elements. Common practice is to create back-to-back composite "L's", or to use a machined metallic connector mechanically fastened to the remainder of the structure. 3D woven connectors and selective Through-Thickness Reinforcement promise to increase the ultimate load the structure can bear, whilst reducing manufacturing complexity, increasing the load-carrying capability, and facilitating automated production of parts of the composite structure. This paper provides an overview of the currently available methods for creating intersections within composite structures and compares them to alternatives involving 3D woven connectors and the application of selective Through-Thickness Reinforcement for enhanced damage tolerance. The use of tufts is investigated and their effect on the load-carrying ability of the structure is examined. The results of mechanical tests are presented for each of the methods described, and their failure characteristics are examined.

Actor-Critic Reinforcement Learning System with Time-Varying Parameters

  • Obayashi, Masanao;Umesako, Kosuke;Oda, Tazusa;Kobayashi, Kunikazu;Kuremoto, Takashi
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2003년도 ICCAS / pp.138-141 / 2003
  • Reinforcement learning has recently attracted the attention of many researchers because of its simple and flexible ability to learn in arbitrary environments, and many reinforcement learning methods have been proposed, such as Q-learning, actor-critic, and the stochastic gradient ascent method. A reinforcement learning system can adapt to changes in the environment through its interaction with it; however, when the environment changes periodically, it cannot adapt well. In this paper we propose a reinforcement learning system that adapts to periodic changes in the environment by introducing time-varying adjustable parameters. A simulation study of a maze problem with an aisle that opens and closes periodically shows that the proposed method works well, whereas the conventional method with constant adjustable parameters does not.
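
As a hedged sketch of how "time-varying parameters" can enter an actor-critic loop, the fragment below modulates the exploration scale of a Gaussian policy with a sinusoid matched to the environment's period; the modulation form and every name are assumptions, not the paper's design:

```python
# Hedged sketch: actor-critic with a time-varying (periodic) exploration
# scale. Linear function approximation over 4 state features; illustrative.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(4)   # actor (policy mean) weights
w = np.zeros(4)       # critic (value) weights
alpha_a, alpha_c, gamma = 0.01, 0.1, 0.95

def sigma(t, base=0.5, amp=0.3, period=50):
    """Exploration noise scale oscillating with the environment's period."""
    return base + amp * np.sin(2 * np.pi * t / period)

def step(x, x_next, r, t):
    """One actor-critic update on feature vectors x (current) and x_next."""
    s = sigma(t)
    a = theta @ x + s * rng.normal()               # sample a continuous action
    td_error = r + gamma * (w @ x_next) - (w @ x)  # TD(0) error
    w[:] += alpha_c * td_error * x                 # critic update
    theta[:] += alpha_a * td_error * ((a - theta @ x) / s**2) * x  # actor update
    return a
```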


A Joint Allocation Algorithm of Computing and Communication Resources Based on Reinforcement Learning in MEC System

  • Liu, Qinghua;Li, Qingping
    • Journal of Information Processing Systems / Vol. 17 No. 4 / pp.721-736 / 2021
  • For a mobile edge computing (MEC) system supporting dense networks, a joint allocation algorithm for computing and communication resources based on reinforcement learning is proposed. The energy consumption of task execution is defined as the maximum energy any user spends executing its task in the system. Considering the constraints on task offloading, power allocation, transmission rate, and computing resource allocation, the joint task offloading and resource allocation problem is modeled as minimizing the maximum task-execution energy consumption. As a mixed-integer nonlinear program, it is difficult to solve directly with traditional optimization methods, so this paper solves it with a reinforcement learning algorithm. The Markov decision process and the theoretical basis of reinforcement learning are then introduced to ground the simulation experiments. Based on the reinforcement learning algorithm and the joint allocation of communication resources, data task offloading and the power control strategy are jointly optimized for each terminal device, and local computing and task offloading models are built. The simulation results show that, for the same task input, the total task computation cost of the proposed algorithm is 5%-10% lower than that of the two comparison algorithms, and more than 5% lower than that of the two new comparison algorithms.
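
As a hedged illustration of the kind of formulation described above, a toy Q-learning loop over a binary offload-or-compute-locally decision, with the reward set to the negative energy cost; the channel states, energy model, and all constants are invented, not the paper's:

```python
# Hedged sketch: Q-learning for a binary offloading decision in a toy MEC
# setting. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_channel_states, n_actions = 5, 2      # actions: 0 = compute locally, 1 = offload
Q = np.zeros((n_channel_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def energy(state, action):
    """Toy energy model: offloading is cheap on good channels (high state index)."""
    return 1.0 if action == 0 else 1.5 - 0.25 * state

for _ in range(5000):
    s = rng.integers(n_channel_states)
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    r = -energy(s, a)                    # minimizing energy == maximizing -energy
    s_next = rng.integers(n_channel_states)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(Q.argmax(axis=1))  # learned policy: offload only when the channel is good
```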

커리큘럼을 이용한 투서클 기반 항공기 헤드온 공중 교전 강화학습 기법 연구 (Two Circle-based Aircraft Head-on Reinforcement Learning Technique using Curriculum)

  • 황인수;배정호
    • 한국군사과학기술학회지 / Vol. 26 No. 4 / pp.352-360 / 2023
  • Recently, AI pilots using reinforcement learning have been developing to a level that is more flexible than rule-based methods and could replace human pilots. In this paper, a curriculum is used to help reinforcement learning of head-on air combat. Head-on engagement is not easy to learn with reinforcement learning alone; with the proposed two-circle-based head-on air combat learning technique, the ownship trains at gradually increasing difficulty and becomes proficient at head-on combat. On the two-circle geometry, the ATA angle between the ownship and the target is gradually increased and the AA angle gradually decreased as learning proceeds. Agents trained with and without the curriculum were engaged against a rule-based model, and as the win ratio of the curriculum-based model rose to nearly 100 %, its superior performance was confirmed.
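
A hedged sketch of the kind of curriculum schedule the abstract describes: widening the initial ATA bound and shrinking the initial AA bound as training stages advance. The bounds, stage count, and linear form are assumptions, not the paper's values:

```python
# Hedged sketch: a linear curriculum over initial engagement geometry.
# All angle bounds and the stage count are invented for illustration.
def curriculum_stage(stage, n_stages=10,
                     ata_start=10.0, ata_end=180.0,
                     aa_start=170.0, aa_end=0.0):
    """Return (ATA, AA) initial-geometry bounds in degrees for a stage."""
    f = stage / (n_stages - 1)
    ata = ata_start + f * (ata_end - ata_start)  # harder: more head-on target
    aa = aa_start + f * (aa_end - aa_start)      # harder: smaller aspect angle
    return ata, aa

for s in range(0, 10, 3):
    print(s, curriculum_stage(s))
```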

사암침법(舍巖鍼法)의 보사수기법(補瀉手技法)에 관한 연구(硏究) (The Study of Saamchimbeop's Method of Reinforcement and Reduction)

  • 안정란;이인선
    • 한방재활의학과학회지 / Vol. 19 No. 2 / pp.113-123 / 2009
  • Objectives: The purpose of this study is to clarify Saamchimbeop's method of reinforcement and reduction. Methods: 1. We referred to the Bo-Sa methods of DongeuiBo-gam(東醫寶鑑), Uihakim-mun(醫學入門), Uihakjeong-jeon(醫學正傳), Chimgugyeongheom-bang(鍼灸經驗方), Biaoyou-fu(標幽賦) in Cimgudaeseong(鍼灸大成), and Nei-Jing(內經). 2. We conjectured that Zheng(正), Ying(迎), Sui(隨), Xie(斜), Yingzheng(迎正), Duo(奪), Zhenghuoxie(正或斜), Wen(溫), Liang(凉), and JongYang-Inyin(從陽引陰) in Saamchimbeop are alternative expressions of the method of reinforcement and reduction, and compared them with the methods of reinforcement and reduction in the works listed above. Results: 1. Zheng(正) and Xie(斜) describe the angle of acupuncture manipulation: descending insertion on a Yang meridian is the manipulation for the tonifying effect(補法), and direct insertion on a Yin meridian for the dispersing effect(瀉法). 2. JongYang-Inyin(從陽引陰) is contralateral acupuncture. 3. Ying(迎) and Sui(隨) in Saamchimbeop have the same meaning as the method of reinforcement and reduction(補瀉手技法). 4. The final aim of Saamchimbeop is Wen-Liang(溫凉) according to the strength or weakness of the disease, as set out in the Ohaeng-seo of Saam acupuncture. Conclusions: Saamchimbeop's method of reinforcement and reduction comprises reinforcement-reduction by lifting and thrusting the needle, breathing reinforcement-reduction, reinforcement-reduction by rapid or slow insertion and withdrawal of the needle, and reinforcement-reduction by opening and closing, together with contralateral acupuncture on the Yin or Yang meridians. Its final aim is Wen-Liang(溫凉) according to the strength or weakness of the disease.