• Title/Summary/Keyword: FSM (Finite State Machines)

Multi-protocol Test Method: MPTM (Multi-layer Protocol Test Method)

  • Lee, Soo-In; Park, Yong-Bum; Kim, Myung-Chul
    • Journal of KIISE: Information Networking / v.28 no.3 / pp.377-388 / 2001
  • An approach to testing a multi-protocol Implementation Under Test (IUT) with a single test suite was proposed in [1]. This paper proposes an algorithm called the Multi-protocol Test Method (MPTM) for automatic test case generation based on that approach. With the MPTM, a multi-protocol IUT consisting of two protocol layers is modeled as two Finite State Machines (FSMs), and the relationships between the transitions of the two FSMs are captured by two transition relations, pre-execution and carried-by (see the sketch after this entry). The proposed algorithm is implemented and applied to a simplified TCP/IP stack and to B-ISDN Signaling/SSCOP. MPTM can test a multi-protocol IUT even when the interfaces between the protocol layers are not exposed, and it achieves the same test coverage as conventional test methods with fewer test cases.
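
The carried-by idea can be pictured with a toy example. The Python below is a hypothetical sketch, not the paper's algorithm: two layers as small FSM tables, a CARRIED_BY map (an assumed name) linking each upper-layer transition to the lower-layer input that carries it, and a generator emitting one test case per upper-layer transition driven entirely through the lower layer.

```python
# Hypothetical sketch of the MPTM setting -- not the paper's algorithm.
# Each FSM is {state: {input: (next_state, output)}}.
UPPER = {                                   # simplified connection-oriented layer
    "CLOSED":  {"open":   ("OPENING", "SYN")},
    "OPENING": {"synack": ("OPEN",    "ACK")},
    "OPEN":    {"close":  ("CLOSED",  "FIN")},
}
LOWER = {                                   # simplified carrier layer
    "IDLE": {"send": ("WAIT", "frame")},
    "WAIT": {"ack":  ("IDLE", None)},
}
# Assumed relation: which lower-layer input carries each upper-layer transition.
CARRIED_BY = {
    ("CLOSED", "open"):    "send",
    ("OPENING", "synack"): "send",
    ("OPEN", "close"):     "send",
}

def test_cases(upper, lower, carried_by, lower_start="IDLE"):
    """One test case per upper-layer transition, stimulated only through
    the lower layer (the inter-layer interface stays hidden)."""
    cases = []
    for u_state, edges in upper.items():
        for u_in, (u_next, u_out) in edges.items():
            l_in = carried_by[(u_state, u_in)]
            l_next, l_out = lower[lower_start][l_in]
            cases.append({
                "preamble": f"drive upper FSM to {u_state}, lower FSM to {lower_start}",
                "stimulus": l_in,
                "verdict":  f"expect upper -> {u_next}/{u_out}, lower -> {l_next}/{l_out}",
            })
    return cases

for case in test_cases(UPPER, LOWER, CARRIED_BY):
    print(case)
```

A real MPTM suite would also use the pre-execution relation to order transitions; the sketch only shows why a hidden inter-layer interface is not fatal: every upper-layer transition remains exercisable through some lower-layer stimulus.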

Embedded Software Minimization Using Don't Cares (Embedded Software Optimization Using Don't Care Information)

  • Hong, Yu-Pyo
    • Journal of the Institute of Electronics Engineers of Korea SD / v.37 no.3 / pp.48-54 / 2000
  • This paper exploits don't cares in software synthesis for embedded systems, where real-time and code-size constraints are extremely tight. We propose applying BDD minimization techniques in the presence of a don't-care set to synthesize code for extended Finite State Machines from a BDD-based representation of the FSM transition function; the don't-care set can be derived from local analysis as well as from external information. We present experimental results, discuss their implications and the interaction between BDD-based minimization and dynamic variable reordering, and propose directions for future research. (A toy illustration of the don't-care effect follows this entry.)
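
The effect of don't cares on synthesized branching code can be shown without a BDD package. The sketch below is my illustration, not the paper's method: a flat transition table stands in for the BDD, DC marks unreachable (state, input) pairs, and the synthesizer folds both the don't cares and the most common next-state into a default branch, which is the kind of saving BDD minimization under a don't-care set buys.

```python
# Illustration only: a flat table stands in for the paper's BDD representation.
from collections import Counter

DC = object()   # don't-care marker for (state, input) pairs that never occur

TRANS = {       # (state, input) -> next_state
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S0", ("S1", 1): DC,    # input 1 never arrives in S1
    ("S2", 0): DC,   ("S2", 1): "S1",  # input 0 never arrives in S2
}

def synthesize(trans):
    """Emit an if/elif chain whose default absorbs both the most common
    next-state and every don't-care entry, so fewer branches are needed."""
    concrete = [v for v in trans.values() if v is not DC]
    default = Counter(concrete).most_common(1)[0][0]
    code = []
    for (state, inp), nxt in sorted(trans.items()):
        if nxt is not DC and nxt != default:
            kw = "if" if not code else "elif"
            code.append(f"{kw} state == {state!r} and inp == {inp}: state = {nxt!r}")
    code.append(f"else: state = {default!r}  # covers don't-cares for free")
    return "\n".join(code)

print(synthesize(TRANS))
```

Treating don't-care entries as free choices lets them merge into whatever branch is cheapest, exactly the degree of freedom the paper hands to the BDD minimizer.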

A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services / v.9 no.6 / pp.165-177 / 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous work on multiagent reinforcement learning either applies single-agent reinforcement learning techniques without extension or relies on unrealistic assumptions even when it uses explicit models of other agents. This paper introduces a Naive Bayesian policy model of the opponent agent and a multiagent reinforcement learning method that uses it. Unlike previous work, the proposed method models the opponent's policy rather than its Q function, and it improves learning efficiency because this model is simpler than richer but more time-consuming policy models such as Finite State Machines (FSMs) and Markov chains (a sketch of such a model follows this entry). The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed Naive Bayesian policy model is analyzed through experiments using this game as a testbed.
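
As a rough illustration of what a count-based Naive Bayesian policy model can look like (class and method names below are my assumptions, not the paper's code): it estimates P(opponent action | state features) with Laplace-style smoothing and returns the most probable next move.

```python
# Hedged sketch of a Naive Bayesian opponent-policy model.
from collections import defaultdict
import math

class NaiveBayesOpponentModel:
    def __init__(self, actions):
        self.actions = actions
        self.action_counts = defaultdict(lambda: 1.0)                    # Laplace prior
        self.feature_counts = defaultdict(lambda: defaultdict(lambda: 1.0))

    def observe(self, features, action):
        """Record one (state-features, opponent-action) observation."""
        self.action_counts[action] += 1
        for i, f in enumerate(features):
            self.feature_counts[action][(i, f)] += 1

    def predict(self, features):
        """Return the opponent action with the highest Naive Bayes posterior."""
        total = sum(self.action_counts.values())
        def log_post(a):
            lp = math.log(self.action_counts[a] / total)
            for i, f in enumerate(features):
                denom = self.action_counts[a] + len(self.actions)        # rough smoothing
                lp += math.log(self.feature_counts[a][(i, f)] / denom)
            return lp
        return max(self.actions, key=log_post)

# Usage: features might be the (dx, dy) offset between cat and mouse on the grid.
model = NaiveBayesOpponentModel(actions=["N", "S", "E", "W"])
model.observe((1, 0), "E")
model.observe((1, 0), "E")
model.observe((0, 1), "N")
print(model.predict((1, 0)))   # likely "E"
```

The point of the paper's comparison is visible here: the model is just counters, so updating and querying it is far cheaper than maintaining an FSM or Markov-chain model of the opponent.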

Policy Modeling for Efficient Reinforcement Learning in Adversarial Multi-Agent Environments

  • Kwon, Ki-Duk; Kim, In-Cheol
    • Journal of KIISE: Software and Applications / v.35 no.3 / pp.179-188 / 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy through trial-and-error interactions in a dynamic environment where other agents can influence its performance. Most previous work on multiagent reinforcement learning either applies single-agent reinforcement learning techniques without extension or rests on unrealistic assumptions even when it builds and uses explicit models of other agents. This paper first formulates the basic concepts that form the common foundation of multiagent reinforcement learning techniques, and then compares previous work in terms of characteristics and limitations based on these concepts. A policy model of the opponent agent and a new multiagent reinforcement learning method using this model are then introduced. Unlike previous work, the proposed method uses a policy model instead of a Q-function model of the opponent agent, and it improves learning efficiency because this model is simpler than richer but more time-consuming policy models such as Finite State Machines (FSMs) and Markov chains. The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed method is analyzed through experiments using this game as a testbed (a sketch of how such a model could drive Q-learning follows).
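
To make the intended use concrete, here is a sketch of how an opponent-policy model, e.g. the NaiveBayesOpponentModel above, could plug into Q-learning over joint actions in a Cat-and-Mouse-style grid game; the setup is my assumption, not the paper's implementation.

```python
# Hypothetical integration sketch: Q-learning over joint actions, where the
# opponent's next move is supplied by any learned policy model exposing
# observe(features, action) and predict(features), such as the
# NaiveBayesOpponentModel sketched earlier.
from collections import defaultdict
import random

ACTIONS = ["N", "S", "E", "W"]

class ModelBasedQLearner:
    def __init__(self, opponent_model, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, own_action, opp_action) -> value
        self.model = opponent_model
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        """Epsilon-greedy best response to the predicted opponent move."""
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        a_opp = self.model.predict(state)
        return max(ACTIONS, key=lambda a: self.q[(state, a, a_opp)])

    def update(self, state, a_self, a_opp, reward, next_state):
        """Standard Q backup, conditioned on the opponent move the model
        expects in the next state."""
        self.model.observe(state, a_opp)          # refine the model online
        a_opp_next = self.model.predict(next_state)
        best_next = max(self.q[(next_state, a, a_opp_next)] for a in ACTIONS)
        key = (state, a_self, a_opp)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

Conditioning the backup on a predicted opponent action is what distinguishes this from plain single-agent Q-learning: the value table ranges over joint actions, and the opponent model resolves which joint action to back up.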