Abstract
An autonomous agent is one that is not only directed by its environment but is also driven by internal motivation to achieve certain goals, based on beliefs about the behaviour of that environment. Design paradigms for autonomous agents, such as belief-desire-intention, take the agent's "mental" features into account when describing its patterns of behaviour. In this paper we present an approach to modelling autonomous agents by introducing mental features into conventional transition system specifications. Mental features such as belief and desire are represented by declarative linear temporal logic formulas. Refinement is then used to define the correctness of the agent design and development. It turns out, however, that the introduction of these mental features is not monotonic with respect to refinement. We therefore introduce additional refinement proof obligations to enable the use of simulation rules when checking refinement.
| Original language | English |
|---|---|
| Pages (from-to) | 227-249 |
| Number of pages | 23 |
| Journal | Formal Aspects of Computing |
| Volume | 29 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Mar 2017 |
Keywords
- Autonomous agents
- Belief and desire
- Object-Z
- Refinement
- Temporal logic