Counterexample-guided planning in uncertain environments


Artificial intelligence planning is the problem of choosing an agent's actions in order to achieve a specified goal.  The input to such a problem is an initial state, a set of actions (together with a description of their effects), and a goal.
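To make this input concrete, here is a minimal, hypothetical toy instance in Python (the single state variable, the locations, and the action names are all invented for illustration): a plan solves the instance if executing it from the initial state ends in a state satisfying the goal.

```python
# Hypothetical toy instance: a single state variable (the agent's location),
# two actions with preconditions and effects, and a goal location.
ACTIONS = {
    "enter_office": ("corridor", "office"),   # (precondition, effect)
    "leave_office": ("office", "corridor"),
}

def apply_action(state, action):
    pre, eff = ACTIONS[action]
    return eff if state == pre else state     # inapplicable actions are no-ops

def achieves(plan, initial, goal):
    # a plan solves the instance iff executing it from the initial
    # state ends in a state satisfying the goal
    state = initial
    for a in plan:
        state = apply_action(state, a)
    return state == goal

print(achieves(["enter_office"], "corridor", "office"))   # True
print(achieves([], "corridor", "office"))                 # False
```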

In the present project we are interested in ``conformant problems'', that is, problems in which the initial state is not known precisely.  Instead, the input includes a set of states (a ``belief state''), one of which is the actual initial state.  The solution is a sequence of actions that is guaranteed to lead to the goal regardless of which initial state is the actual one.  The main issue with conformant planning is its complexity (the decision problem is EXPTIME-hard).

Traditionally, conformant planning has been solved by reasoning about what is known about the current state.  Typical of this approach is a translation to classical planning (let us call it ``tag-and-merge'') where the state is described by facts such as KL/t, with the meaning ``if the assumption (or tag) t was true in the initial state, then it is known that L holds in the current state'' (L can be a fact such as ``the agent is in the corridor'' or ``the window is open'').  This type of approach has proved efficient on many natural conformant planning problems, but it depends on the structure of the planning problem; specifically, it only handles problems in which very few tags are relevant.
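The semantics of these KL/t facts can be illustrated with a small Python sketch.  This only illustrates what a tag fact asserts, not the actual translation to classical planning; the window domain, the action name, and the helper functions are assumptions made for the example.

```python
BELIEF = [{"window_open": True}, {"window_open": False}]   # unknown initial state

def apply_plan(plan, state):
    s = dict(state)
    for a in plan:
        if a == "close_window":        # hypothetical action: closes the window
            s["window_open"] = False
    return s

def K(L, t, plan):
    # K L/t holds after `plan` iff L is true in every state reached from
    # an initial state satisfying the tag t
    return all(L(apply_plan(plan, s0)) for s0 in BELIEF if t(s0))

window_closed = lambda s: not s["window_open"]   # the literal L
initially_open = lambda s0: s0["window_open"]    # the tag t

print(K(window_closed, initially_open, []))                # False: may be open
print(K(window_closed, initially_open, ["close_window"]))  # True: known closed
```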

We recently proposed a new approach to conformant planning.  Our conformant planner, CPCES, searches for a plan that is valid for a few initial states (the ``sample''); because the sample is reasonably small (dozens of states, compared to the billions of states in the initial belief state), this search often succeeds quickly.  We then verify the validity of this candidate plan on the complete initial belief state using inexpensive regression techniques.  If the candidate plan is not valid, the regression exhibits an initial state (a counterexample) for which the plan fails; this initial state is added to the sample, and the search for a plan resumes.  This ``counterexample-guided'' sampling often allows us to find good (i.e., representative yet small) samples.
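The loop described above can be sketched as follows, under toy assumptions: an agent at an unknown position in a corridor of length 4 with a single ``move right'' action, a naive brute-force search standing in for the classical planner, and exhaustive checking standing in for the regression-based verification.

```python
from itertools import product

N = 4                       # corridor length (toy assumption)
BELIEF = set(range(N))      # the agent's initial position is unknown
ACTIONS = {"right": lambda s: min(s + 1, N - 1)}
def goal(s): return s == N - 1

def run(plan, s):
    for a in plan:
        s = ACTIONS[a](s)
    return s

def find_plan(sample, max_len=10):
    # stand-in for the classical planner: brute-force search for a
    # shortest action sequence reaching the goal on every sampled state
    for length in range(max_len + 1):
        for plan in product(ACTIONS, repeat=length):
            if all(goal(run(plan, s)) for s in sample):
                return list(plan)
    return None

def counterexample(plan):
    # stand-in for regression-based verification: exhaustively check
    # the candidate plan against every possible initial state
    for s in BELIEF:
        if not goal(run(plan, s)):
            return s
    return None

def cpces():
    sample = {next(iter(BELIEF))}      # start from an arbitrary sampled state
    while True:
        plan = find_plan(sample)
        if plan is None:
            return None                # no plan within the length bound
        cex = counterexample(plan)
        if cex is None:
            return plan                # valid on the whole belief state
        sample.add(cex)                # refine the sample and retry

print(cpces())   # a plan moving right N - 1 = 3 times
```

The key point is that `find_plan` only ever looks at the small sample, while `counterexample` is the (cheap) check against the full belief state that either certifies the plan or grows the sample.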

This framework has proved strong on a large range of problems that are hard for more traditional planners, in particular problems with a large ``width'' (i.e., problems that would require a large number of assumptions).  This is very encouraging, as the current implementation is extremely simple and does not use any clever tricks.  We believe that this approach has potential, in particular for non-trivial problems.

The Project

There remain many open questions and potential improvements to CPCES that we believe are worthy of a PhD project.  These questions include:

  • What are the theoretical connections between CPCES on the one hand, and the conformant planning width and the tag-and-merge method on the other?  For instance, each state in the sample can be understood as a super tag.  Furthermore, the number of relevant tags has been proved to be bounded exponentially in the width.  Answering these questions will provide better insight into why and when CPCES can be expected to perform well, and how to improve the algorithm.
  • How do we generate good, or even optimal, samples?  The sample comprises states that are counterexamples to previous candidate plans.  Choosing good counterexamples is very important, as the next candidate plans will be computed from this sample.  As a rule of thumb, we want counterexamples that illustrate as many ``aspects'' of the problem as possible.  If, for instance, the problem involves making sure that two machines are turned off and the candidate plan does not account for the possibility that their initial state is unknown, then a counterexample in which both machines are on is better than one in which only one machine is on (all other things being equal).
  • How can this approach be extended to conditional planning (where the system is partially observable and the effects of the actions can be observed)?  To probabilistic planning (where the probability of success need not be 100% but only above some threshold)?
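The rule of thumb from the second question can be sketched as a simple scoring heuristic.  The two-machines domain and the scoring function below are toy assumptions for illustration; CPCES itself does not necessarily rank counterexamples this way.

```python
from itertools import product

# Toy domain: state = (machine 1 on?, machine 2 on?), goal = both off.
BELIEF = set(product([False, True], repeat=2))   # all initial states possible

def apply_plan(plan, state):
    s = list(state)
    for i in plan:            # action i: switch machine i off
        s[i] = False
    return tuple(s)

def violations(state):
    # number of goal atoms the state violates (machines still on); used as
    # a proxy for how many ``aspects'' of the problem a counterexample shows
    return sum(state)

def best_counterexample(plan):
    # among the initial states on which the plan fails, prefer the one
    # leaving the most machines on (all other things being equal)
    failing = [s for s in BELIEF if violations(apply_plan(plan, s)) > 0]
    return max(failing, key=lambda s: violations(apply_plan(plan, s)), default=None)

print(best_counterexample([]))       # (True, True): both machines still on
print(best_counterexample([0, 1]))   # None: switching both off always works
```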

Background Literature

Alban Grastien and Enrico Scala.  Intelligent belief state sampling for conformant planning.  In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-17), pages 4317–4323, 2017,

and most references therein.


Research areas

Artificial Intelligence
Knowledge representation and reasoning
Conformant planning
Conditional planning
Contingent planning
Probabilistic planning



Updated: 1 June 2019