• Title/Summary/Keyword: 환경 강화 (environment / reinforcement)

Search Results: 4,291 (processing time: 0.029 seconds)

Current Status of Improvements to Pleasure Boat Inspection Standards (플레저보트 검사기준 개선 현황)

  • Choi, Kyong-il; Seo, Gwang-Cheol; Lee, Seoung-Tae; Jeong, Gyu-Gwon
    • Proceedings of KOSOMES biannual meeting / 2017.11a / pp.10-10 / 2017
  • Among the 21 regulatory-improvement tasks being pursued by the Ministry of Oceans and Fisheries to strengthen the competitiveness of the marine leisure industry, this paper focuses on the improvement of ship inspection standards for marina vessels, and outlines improvements to marine leisure vessel inspection standards together with their implications.


Setting Standard Values for the Indoor Environment (실내환경의 기준치 설정)

  • Kim, Yoon-Shin
    • Environmental engineer / s.61 / pp.4-7 / 1991
  • It is well known that when indoor air becomes polluted, ventilation must ultimately be strengthened. Recently (1989), ASHRAE in the United States increased the minimum required outdoor air supply per person from 9 m³/hr to 27 m³/hr, and set the total required outdoor air volume without separating smoking and non-smoking rooms.

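
The per-person minimums quoted in the abstract translate directly into a whole-room requirement; a minimal sketch, assuming a hypothetical 20-person office (the occupancy figure is illustrative, not from the paper):

```python
# Hypothetical occupancy example converting the per-person minimums quoted in
# the abstract into a whole-room outdoor-air requirement.

OLD_RATE = 9.0   # m^3/hr per person (pre-1989 minimum cited in the abstract)
NEW_RATE = 27.0  # m^3/hr per person (ASHRAE 1989 minimum cited in the abstract)

def required_outdoor_air(occupants: int, rate_per_person: float) -> float:
    """Total outdoor-air supply (m^3/hr) needed for a given occupancy."""
    return occupants * rate_per_person

# e.g. a hypothetical 20-person office
print(required_outdoor_air(20, OLD_RATE))  # 180.0
print(required_outdoor_air(20, NEW_RATE))  # 540.0
```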

Welding Technology of Magnesium Alloy for Automobile Industry (자동차 산업에서 마그네슘 합금의 용접기술)

  • Yoon, Byung-Hyun; Jang, Woong-Seong
    • Journal of Welding and Joining / v.22 no.3 / pp.23-31 / 2004
  • Recently, countries around the world, led by the advanced nations, have been working to curb environmental pollution by strengthening various environmental regulations. In particular, exhaust gas emitted by the more than 500 million automobiles worldwide is cited as a major cause of environmental pollution and global warming. (abridged)

Multi-Agent Reinforcement Learning Model based on Fuzzy Inference (퍼지 추론 기반의 멀티에이전트 강화학습 모델)

  • Lee, Bong-Keun; Chung, Jae-Du; Ryu, Keun-Ho
    • The Journal of the Korea Contents Association / v.9 no.10 / pp.51-58 / 2009
  • Reinforcement learning is a subarea of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. In the multi-agent case in particular, the state space and action space become enormous compared to the single-agent case, so an effective action-selection strategy is essential for effective reinforcement learning. This paper proposes a multi-agent reinforcement learning model based on a fuzzy inference system in order to improve learning speed and select effective actions in a multi-agent setting. An effective action-selection strategy is verified through evaluation tests based on RoboCup Keepaway, one of the standard test-beds for multi-agent systems. The proposed model can be applied to evaluate the efficiency of various intelligent multi-agent systems, and also to the strategy and tactics of robot soccer systems.
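
Fuzzy-inference-based action selection of the kind the abstract describes can be sketched as follows; the membership functions, distance thresholds, and "pass"/"hold" actions are hypothetical illustrations, not the authors' actual Keepaway controller:

```python
# Minimal sketch of fuzzy action selection: fuzzify one state variable
# (distance to the nearest opponent, in meters), fire two rules, and pick
# the action with the strongest support. All values are hypothetical.

def near(d: float) -> float:
    """Membership of 'opponent is near' (full at 0 m, zero beyond 10 m)."""
    return max(0.0, min(1.0, (10.0 - d) / 10.0))

def far(d: float) -> float:
    """Membership of 'opponent is far' (zero below 5 m, full beyond 15 m)."""
    return max(0.0, min(1.0, (d - 5.0) / 10.0))

def select_action(d: float) -> str:
    # Rule base: near opponent -> pass the ball; far opponent -> hold it.
    scores = {"pass": near(d), "hold": far(d)}
    return max(scores, key=scores.get)

print(select_action(2.0))   # 'pass'
print(select_action(14.0))  # 'hold'
```

A real controller would aggregate many rules over several fuzzified state variables, but the fuzzify-infer-select structure is the same.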

The Improvement of Convergence Rate in n-Queen Problem Using Reinforcement learning (강화학습을 이용한 n-Queen 문제의 수렴속도 향상)

  • Lim SooYeon; Son KiJun; Park SeongBae; Lee SangJo
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.1 / pp.1-5 / 2005
  • The purpose of reinforcement learning is to maximize rewards from the environment, and reinforcement learning agents learn by interacting with the external environment through trial and error. Q-Learning, a representative reinforcement learning algorithm, is a form of TD learning that exploits the difference between value estimates at successive time steps. The method obtains the optimal policy through repeated evaluation of all state-action pairs in the state space. This study chose the n-Queen problem as an example to which reinforcement learning is applied, and used Q-Learning as the problem-solving algorithm. Comparing the proposed method with existing methods for solving the n-Queen problem, this study found that the proposed method improves the convergence rate to the optimal solution by reducing the number of state transitions needed to reach the goal.
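
The Q-Learning update the abstract refers to can be sketched in a few lines; the tabular representation, hyper-parameters, and ε-greedy selection below are illustrative assumptions, not the paper's actual n-Queen encoding:

```python
import random
from collections import defaultdict

# Minimal tabular Q-Learning sketch (hypothetical parameters, generic states).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

def choose_action(state, actions):
    """Epsilon-greedy selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    """One Q-Learning (TD) update toward reward + gamma * max_a' Q(s', a')."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

For n-Queen, a state would encode the queens placed so far and an action the next placement; the update itself is unchanged.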

A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning (효율적인 멀티 에이전트 강화 학습을 위한 나이브 베이지안 기반 상대 정책 모델)

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services / v.9 no.6 / pp.165-177 / 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous works on multiagent reinforcement learning tend to apply single-agent reinforcement learning techniques without any extension, or require unrealistic assumptions even though they use explicit models of other agents. In this paper, a Naive Bayesian policy model of the opponent agent is introduced, and a multiagent reinforcement learning method using this model is explained. Unlike previous works, the proposed method utilizes a Naive Bayesian policy model rather than a Q-function model of the opponent agent. Moreover, it can improve learning efficiency by using a model simpler than richer but time-consuming policy models such as finite state machines (FSMs) and Markov chains. The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed Naive Bayesian policy model is analyzed through experiments using this game as a test-bed.

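
An opponent policy model of the kind the abstract describes can be sketched as a smoothed conditional-frequency estimator; the state and action names are hypothetical, not the paper's Cat-and-Mouse encoding, and a full Naive Bayes model would additionally factor each state into features and multiply per-feature likelihoods:

```python
from collections import defaultdict

# Sketch of a Naive-Bayes-style opponent policy model: count observed
# (state, action) pairs and predict the opponent's most probable action
# using a Laplace-smoothed estimate of P(action | state).

class OpponentModel:
    def __init__(self, actions):
        self.actions = list(actions)
        self.counts = defaultdict(int)        # (state, action) -> observations
        self.state_totals = defaultdict(int)  # state -> total observations

    def observe(self, state, action):
        """Record one observed opponent move."""
        self.counts[(state, action)] += 1
        self.state_totals[state] += 1

    def prob(self, state, action):
        """Laplace-smoothed estimate of P(action | state)."""
        return (self.counts[(state, action)] + 1) / (
            self.state_totals[state] + len(self.actions))

    def predict(self, state):
        """Most probable opponent action in this state."""
        return max(self.actions, key=lambda a: self.prob(state, a))

model = OpponentModel(["up", "down"])
model.observe("corner", "up")
print(model.predict("corner"))  # 'up'
```

The learning agent can then condition its own Q-updates on the predicted opponent action instead of maintaining a full Q-function model of the opponent.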

Foraminiferal Characteristics in the Ganghwa Tidal Flat (강화 남부 갯벌의 유공충 특성)

  • Woo, Han Jun; Lee, Yeon Gyu
    • Journal of Wetlands Research / v.8 no.3 / pp.51-65 / 2006
  • Surface sediments for sedimentary analyses were sampled at 199 stations in the study area in August 2003. The surface sediments comprised six sedimentary facies. In general, sandy mud sediments dominated the southern tidal flat of Ganghwa Island, while sand sediments dominated the channels and subtidal zones of the western part of the island. The area of sandy mud sediment had extended eastward across the tidal flat compared with the sedimentary facies of August 1997. In 30 surface sediment samples from the Ganghwa tidal flat and subtidal zone, 61 species were recorded in the total assemblages, including 34 species in the living population. Ammonia beccarii and Elphidium etigoense in the living population, and Ammonia beccarii, Elphidium etigoense, Jadammina sp. and Textularia earlandi in the total assemblage, were widely distributed. Relatively large numbers of species and high species-diversity values generally occurred in the western part of the tidal flat. Cluster analysis of the total assemblages discriminated four biofacies: biofacies 1 corresponded to the eastern part of the tidal flat, biofacies 4 to the western part, and biofacies 3 to a transitional zone between them.

  • PDF