• Title/Summary/Keyword: Local reinforcement

Nonlinear section model for analysis of RC circular tower structures weakened by openings

  • Lechman, Marek;Stachurski, Andrzej
    • Structural Engineering and Mechanics
    • /
    • v.20 no.2
    • /
    • pp.161-172
    • /
    • 2005
  • This paper presents a section model for the analysis of RC circular tower structures based on nonlinear material laws. The governing equations for normal strains due to the bending moment and the normal force are derived for the case in which openings are located symmetrically with respect to the bending direction. The additional reinforcement at the openings is also taken into account. The mathematical model is expressed as a set of nonlinear equations, which are solved by minimizing the sum of squared residuals. The BFGS quasi-Newton and Hooke-Jeeves local minimizers, suitably modified to handle box constraints on the variables, are used for the minimization. The model is verified on a set of data encountered in engineering practice. Numerical examples illustrate the effects of the loading eccentricity and the size of the opening on the strains and stresses in the concrete and steel of the cross-sections under consideration. The calculated results indicate that the additional reinforcement at the openings increases the resistance capacity of the section by several percent.
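
A minimal sketch of the solution strategy described above, assuming a placeholder residual vector for the section equilibrium equations and illustrative box constraints; SciPy's bounded L-BFGS-B is used here as a stand-in for the modified BFGS/Hooke-Jeeves minimizers of the paper.

```python
# Sketch: solve a nonlinear system F(x) = 0 by minimizing the sum of squared
# residuals under box constraints. The residuals below are placeholders, not
# the section equilibrium equations of the paper.
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    # In the real model these would be the axial-force and bending-moment
    # equilibrium residuals of the weakened cross-section.
    return np.array([x[0]**2 + x[1] - 1.0,
                     x[0] - x[1]**3 - 0.5])

def objective(x):
    r = residuals(x)
    return float(r @ r)          # sum of squared residuals

bounds = [(0.0, 2.0), (-1.0, 1.0)]   # illustrative box constraints on the unknowns
result = minimize(objective, x0=np.array([0.5, 0.0]),
                  method="L-BFGS-B", bounds=bounds)
print(result.x, result.fun)
```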

Repeated Loading Tests of Reinforced Concrete Beams Containing Headed Shear Reinforcement (Headed Shear Bar를 사용한 콘크리트 보의 반복 하중 실험)

  • Kim, Young-Hoon;Yoon, Young-Soo;Mitchell, Denis
    • Proceedings of the Korea Concrete Institute Conference
    • /
    • 2003.05a
    • /
    • pp.512-517
    • /
    • 2003
  • The repeated loading responses of four shear-critical reinforced concrete beams, with two different shear span-to-depth ratios, were studied. One series of beams was reinforced using pairs of bundled stirrups with $90^{\circ}$ standard hooks having free end extensions of $6d_b$. The companion beams contained shear reinforcement made with larger-diameter headed bars anchored with 50 mm diameter circular heads. A single headed bar had the same area as a pair of bundled stirrups, so the two series were comparable. The test results indicate that beams containing headed-bar stirrups perform better than companion beams containing bundled standard stirrups, with improved ductility, larger energy absorption, and enhanced post-peak load-carrying capability. Due to splitting of the concrete cover and local crushing, the hooks of the standard stirrups opened, resulting in loss of anchorage. In contrast, the headed-bar stirrups did not lose their anchorage and hence were able to develop strain hardening; they also served to delay buckling of the flexural compression steel. Excellent load-deflection predictions were obtained by reducing the tension stiffening to account for repeated load effects.
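
The load-deflection predictions mentioned above rely on reducing the tension-stiffening contribution of cracked concrete; the sketch below shows one plausible form of such a reduction, using a generic average-stress relation and a hypothetical reduction factor beta (the specific model and factor used in the paper are not given in the abstract).

```python
# Sketch: average tensile stress carried by cracked concrete, with the
# tension-stiffening branch scaled by a factor beta to represent degradation
# under repeated loading. The functional form and beta are illustrative.
def average_tensile_stress(eps, f_cr, eps_cr, beta=0.5):
    """Average concrete tensile stress (MPa) at average tensile strain eps."""
    if eps <= eps_cr:
        return f_cr * eps / eps_cr                      # uncracked, linear branch
    return beta * f_cr / (1.0 + (500.0 * eps) ** 0.5)   # cracked, reduced stiffening

print(average_tensile_stress(0.001, f_cr=2.0, eps_cr=0.0001))
```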

Multi-agent Coordination Strategy Using Reinforcement Learning (강화 학습을 이용한 다중 에이전트 조정 전략)

  • Kim, Su-Hyun;Kim, Byung-Cheon;Yoon, Byung-Joo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2000.10a
    • /
    • pp.285-288
    • /
    • 2000
  • In this paper, reinforcement learning is used to efficiently coordinate the behavior of agents in a multi-agent environment. The proposed method exploits each agent's distance relationship to the goal and its spatial relationship to neighboring agents, so each agent can select the best next state without colliding with other agents. In addition, because the reinforcement value returned from the state space lies between 0 and 1, it indicates how good each selected (state, action) pair is. When the proposed method was applied to the prey pursuit problem, it coordinated the behavior of the agents more efficiently than methods based on local control or distributed control strategies, and the prey was captured very quickly.
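
A minimal sketch of a reinforcement value in [0, 1] built from a distance-to-goal term and a neighbor-spacing term, in the spirit of the distance and spatial relationships described above; the weights, normalization, and minimum separation are assumptions.

```python
# Sketch: scalar reinforcement in [0, 1] for one agent in a pursuit task,
# combining closeness to the prey (goal) with spacing from neighboring agents.
import math

def reinforcement(agent_pos, prey_pos, neighbor_positions, arena_diag, min_sep=1.0):
    # Closer to the prey -> value closer to 1.
    d_goal = math.dist(agent_pos, prey_pos)
    goal_term = 1.0 - min(d_goal / arena_diag, 1.0)
    # Penalize being closer than min_sep to any neighbor (collision avoidance).
    d_min = min((math.dist(agent_pos, n) for n in neighbor_positions), default=min_sep)
    spacing_term = min(d_min / min_sep, 1.0)
    return 0.5 * goal_term + 0.5 * spacing_term   # stays within [0, 1]

print(reinforcement((1, 1), (4, 5), [(2, 1), (0, 3)], arena_diag=10.0))
```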

Region-based Q-learning For Autonomous Mobile Robot Navigation (자율 이동 로봇의 주행을 위한 영역 기반 Q-learning)

  • Cha, Jong-Hwan;Kong, Sung-Hak;Suh, Il-Hong
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.174-174
    • /
    • 2000
  • Q-learning, based on discrete state and action spaces, is the most widely used reinforcement learning method. However, it requires a great deal of memory and learning time to cover all actions of each state when it is applied to real mobile robot navigation with continuous state and action spaces. Region-based Q-learning is a reinforcement learning method that estimates the action values of a real state by using a triangular action distribution model and the relationship with neighboring states defined and learned beforehand. This paper proposes a new region-based Q-learning method that assigns a reward only when the agent reaches the target and escapes locally optimal paths by adjusting the random action rate. Applied to mobile robot navigation, it uses less memory, lets the robot move smoothly, and learns an optimal solution quickly. Computer simulations are presented to show the validity of the method.
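
A minimal sketch of the two ingredients highlighted above, a reward assigned only at the target and a random action rate that is raised to escape locally optimal paths, in plain tabular Q-learning; the region discretization, parameters, and stall criterion are assumptions, and the triangular action-distribution interpolation of the paper is not shown.

```python
# Sketch: tabular Q-learning with a sparse (target-only) reward and an
# exploration rate adjusted when progress stalls.
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(region, action)]
alpha, gamma = 0.1, 0.95
epsilon = 0.1                     # random action rate, adjusted during learning

def select_action(region, actions):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(region, a)])

def update(region, action, next_region, reached_target, actions):
    reward = 1.0 if reached_target else 0.0               # reward only at the target
    best_next = max(Q[(next_region, a)] for a in actions)
    Q[(region, action)] += alpha * (reward + gamma * best_next - Q[(region, action)])

def adjust_exploration(steps_without_progress, threshold=50):
    global epsilon
    epsilon = 0.3 if steps_without_progress > threshold else 0.1   # escape local optima
```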

Crash Performance of a Straight Member for Various Section Shapes and Local Reinforcement (단면 형상 및 국부 보강에 따른 직선 부재의 충돌 성능)

  • Lee, Hunbong;Kang, Sungjong
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.21 no.5
    • /
    • pp.97-103
    • /
    • 2013
  • The crash performance of a straight member was studied by FE analysis. One end of the model was fixed and the other end was impacted by a 1,000 kg rigid mass with a velocity of 16.0 m/s. The maximum and mean loads were compared to assess crash performance. Members with various section shapes were analyzed and the flange location was varied. Spot weld points were also added in the initial buckling region to investigate their effect. The final rectangular-section model, which has flanges at the center and reinforcement in the initial buckling region, showed a large improvement in crash performance.
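
As a rough check on the quantities compared above, the impact kinetic energy and a mean crush load can be related as follows; the crush distance used here is an assumed illustrative value, not one reported in the paper.

```python
# Rough energy check for a 1,000 kg rigid mass impacting at 16.0 m/s.
m, v = 1000.0, 16.0
kinetic_energy = 0.5 * m * v**2          # = 128,000 J
crush_distance = 0.2                     # m (assumed for illustration)
mean_load = kinetic_energy / crush_distance
print(f"KE = {kinetic_energy/1e3:.0f} kJ, mean load = {mean_load/1e3:.0f} kN")
```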

Reinforcement Location of Plate Girders with Two Longitudinal Stiffeners (플레이트 거더의 2단 수평보강재 보강 위치)

  • Son, Byung-Jik;Lee, Kyu-Hwan
    • Journal of the Korean Society of Safety
    • /
    • v.24 no.6
    • /
    • pp.93-102
    • /
    • 2009
  • Because steel girder bridges have large slenderness ratios, buckling is very important in design. The local buckling of plate girders having two longitudinal stiffeners in different positions under various load conditions is investigated. Parametric studies on the web height, transverse stiffeners, and load conditions are carried out by numerical simulation using the finite element method. The objective of this study is to present a rational reinforcement location for the two longitudinal stiffeners. The analysis results are compared with the locations recommended by the Korean specifications for road bridges (2003).
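
For context, the elastic local buckling stress of a web subpanel, which longitudinal stiffeners raise by subdividing the panel depth, takes the classical plate-buckling form

$$\sigma_{cr} = \frac{k\,\pi^{2}E}{12\,(1-\nu^{2})}\left(\frac{t_w}{b}\right)^{2},$$

where $t_w$ is the web thickness, $b$ the depth of the subpanel, and $k$ a buckling coefficient that depends on the stress distribution and boundary conditions. This is the standard background relation, not an equation taken from the paper itself.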

A Neural Network Model and Reinforcement Learning for Dynamic Formation Moving and Obstacle Avoidance of Autonomous Mobile Robot (자율이동로봇의 동적 편대 형성과 장애물 회피를 위한 신경망 구조 및 강화학습)

  • Min, Suk-Ki;Shin, Suk-Young;Kang, Hoon
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2189-2192
    • /
    • 1998
  • The objective of this paper is, based on the principles of artificial life, to induce emergent behaviors of multiple autonomous mobile robots, in which complex global intelligence forms from simple local rules. We propose a neural network architecture trained with reinforcement signals that perceives neighborhood information and decides the direction and velocity of movement as the mobile robots navigate in a group. In the simulations, optimum weights are obtained in real time that not only prevent collisions between agents and obstacles in the dynamic environment, but also keep the mobile robots moving in various formation patterns.
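
A minimal sketch of a reward-modulated controller of the kind described above: a small network maps local neighborhood observations to a heading and speed command, and its weights are nudged by a scalar reinforcement signal. The architecture, input layout, and update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 6))   # 6 sensor inputs -> [heading, speed]

def act(obs):
    return np.tanh(W @ obs)              # bounded heading/speed commands

def reinforce(obs, action, reward, lr=0.01):
    global W
    # Strengthen the mapping obs -> action in proportion to the reward
    # (e.g., positive for keeping formation, negative after a collision).
    W += lr * reward * np.outer(action, obs)

obs = rng.normal(size=6)                 # distances/bearings to neighbors and obstacles
a = act(obs)
reinforce(obs, a, reward=1.0)
```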

Local Path Planning and Obstacle Avoidance System based on Reinforcement Learning (강화학습 기반의 지역 경로 탐색 및 장애물 회피 시스템)

  • Lee, Se-Hoon;Yeom, Dae-Hoon;Kim, Pung-Il
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.59-60
    • /
    • 2019
  • In a WCS, AGV scheduling and the recognition and collision avoidance of dynamic and static obstacles are important problems that have been studied for a long time. To address them, this paper proposes a reinforcement learning system based on various data centered on a Lidar sensor. Compared with conventional explicit algorithms, the proposed system was shown to plan paths and to recognize and reliably avoid dynamic and static obstacles in diverse and changing environments, confirming its applicability to industrial sites. The scope, methods, and limitations of applying reinforcement learning are also discussed.
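
One plausible way to turn the Lidar-centered data mentioned above into a compact state for a reinforcement-learning planner is to bin the scan into sectors; the sector count, range bins, and thresholds below are assumptions.

```python
# Sketch: compress a 360-beam Lidar scan into a small discrete state
# (nearest-obstacle range bin per sector) usable as a Q-table key.
import numpy as np

def lidar_to_state(ranges, n_sectors=8, bins=(0.5, 1.5, 3.0)):
    ranges = np.asarray(ranges)
    sectors = np.array_split(ranges, n_sectors)
    nearest = [float(s.min()) for s in sectors]           # closest obstacle per sector
    return tuple(int(np.digitize(d, bins)) for d in nearest)

scan = np.random.uniform(0.2, 5.0, size=360)              # one synthetic scan
print(lidar_to_state(scan))
```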

Theoretical analysis of chirality and scale effects on critical buckling load of zigzag triple walled carbon nanotubes under axial compression embedded in polymeric matrix

  • Bensattalah, Tayeb;Zidour, Mohamed;Daouadji, Tahar Hassaine;Bouakaz, Khaled
    • Structural Engineering and Mechanics
    • /
    • v.70 no.3
    • /
    • pp.269-277
    • /
    • 2019
  • Using nonlocal elasticity theory, a Timoshenko beam model is developed to study the nonlocal buckling of triple-walled carbon nanotubes (TWCNTs) embedded in an elastic medium under axial compression. The chirality and small-scale effects are considered. The effects of the surrounding elastic medium, modeled as a Winkler foundation, and of the van der Waals (vdW) forces between the inner and middle nanotubes and between the middle and outer nanotubes are taken into account. Considering the small-scale effects, the governing equilibrium equations are derived and the critical buckling loads under axial compression are obtained. The results show that the critical buckling load can be overestimated by the local beam model if the small-scale effect is overlooked for long nanotubes. In addition, a significant dependence of the critical buckling loads on the chirality of the zigzag carbon nanotube is confirmed. Furthermore, because they quantify the impact of the elastic medium on the nonlocal critical buckling load of TWCNTs under axial compression, these findings are important for mechanical design considerations and for the improvement and reinforcement of devices that use carbon nanotubes.
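
As a simple illustration of why a local beam model can overestimate the buckling load, the critical load of a single simply supported nonlocal Euler column (neglecting shear deformation, the Winkler medium, and the vdW coupling treated in the paper) can be written as

$$P_{cr}^{nl} = \frac{n^{2}\pi^{2}EI/L^{2}}{1 + (e_{0}a)^{2}\,(n\pi/L)^{2}},$$

where $e_{0}a$ is the small-scale parameter; since the denominator exceeds unity, the nonlocal load always falls below the local value $n^{2}\pi^{2}EI/L^{2}$. This is a textbook nonlocal-elasticity relation given for orientation only, not the triple-walled Timoshenko formulation of the paper.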

Application of reinforcement learning to hyper-redundant system: Acquisition of locomotion pattern of snake-like robot

  • Ito, K.;Matsuno, F.
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.65-70
    • /
    • 2001
  • We consider a hyper-redundant system that consists of many uniform units. The hyper-redundant system has many degrees of freedom and can accomplish various tasks. Applying reinforcement learning to the hyper-redundant system is very attractive because it makes it possible to acquire various behaviors for various tasks automatically. In this paper we present a new reinforcement learning algorithm, "Q-learning with propagation of motion". The algorithm is designed for multi-agent systems that have strong connections. The proposed algorithm needs only one small Q-table, even for a large-scale system, so the hyper-redundant system can learn effective behavior with it. In this algorithm, only one leader agent learns its own behavior using its local information, and the motion of the leader is propagated to the other agents with a time delay. The reward of the leader agent is computed using whole-system information, so the effective behavior of the leader is learned and the effective behavior of the system is acquired. We apply the proposed algorithm to a snake-like hyper-redundant robot. The necessary condition for the system to be a Markov decision process is discussed, and a computer simulation of learning the locomotion is presented. The simulation results show that the task of moving the robot to the desired point is learned and a winding motion is acquired. We conclude that the proposed system, and our analysis of the condition under which it is a Markov decision process, are valid.
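
A minimal sketch of the structure described above, reduced to its essentials: one leader unit learns a single small Q-table from its local state, and its past actions are replayed by the follower units with a fixed time delay. The reward definition, state coding, and parameters are assumptions.

```python
import random
from collections import defaultdict, deque

N_UNITS, DELAY = 8, 2
ACTIONS = [-1, 0, 1]                       # e.g., joint bending commands
Q = defaultdict(float)                     # single small Q-table for the leader
alpha, gamma, epsilon = 0.1, 0.9, 0.1
history = deque(maxlen=DELAY * N_UNITS)    # buffer of the leader's past actions

def leader_action(local_state):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(local_state, x)])
    history.append(a)
    return a

def follower_action(unit_index):
    # Unit i repeats what the leader did i*DELAY steps ago (if available yet).
    idx = len(history) - 1 - unit_index * DELAY
    return history[idx] if idx >= 0 else 0

def learn(state, action, next_state, reward):
    # Reward computed from whole-system information (e.g., progress of the
    # head toward the goal), as described in the abstract.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```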
