Quantifying the Difficulty of Easy Decision-Theoretic Planning Problems


How difficult will a problem be to solve? How much computer memory will I need to represent its solution? These are the questions this project seeks to answer, and in AI they are not easy ones.

The task would be to design, implement and study algorithms that can answer such questions.

This project will examine techniques to compute features of policies and value functions in decision-theoretic planning, without ever explicitly computing those solution objects. The work is motivated by the fact that the features we shall examine are indicative of the difficulty of solving the problem.
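To make the solution objects concrete: in decision-theoretic planning a problem is typically modelled as a Markov decision process (MDP), and its solution is a value function and a policy. The sketch below is a purely illustrative toy, not part of the project: it solves a tiny hand-written MDP by value iteration and then reads off two simple difficulty proxies, the number of distinct values in the optimal value function and the size of the tabular policy. The project's challenge is to estimate such quantities *without* first constructing the value function or policy, which this sketch does not attempt.

```python
# Illustrative toy only: a tiny finite MDP and standard value iteration.
# The MDP, state/action names, and the two "difficulty proxies" at the end
# are assumptions for demonstration, not taken from the project description.

# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {'a': [(1.0, 1, 0.0)], 'b': [(1.0, 2, 0.0)]},
    1: {'a': [(1.0, 3, 1.0)], 'b': [(1.0, 0, 0.0)]},
    2: {'a': [(1.0, 3, 0.5)], 'b': [(1.0, 0, 0.0)]},
    3: {'a': [(1.0, 3, 0.0)], 'b': [(1.0, 3, 0.0)]},  # absorbing state
}
gamma = 0.9  # discount factor

def value_iteration(eps=1e-6):
    """Compute the optimal value function and a greedy policy."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            q = {a: sum(p * (r + gamma * V[ns]) for p, ns, r in outs)
                 for a, outs in acts.items()}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(acts, key=lambda a: sum(p * (r + gamma * V[ns])
                                             for p, ns, r in acts[a]))
              for s, acts in transitions.items()}
    return V, policy

V, policy = value_iteration()

# Two crude difficulty proxies, computed here only *after* solving.
# The project asks how to estimate quantities like these tractably,
# without ever materialising V or the policy.
distinct_values = len({round(v, 4) for v in V.values()})
policy_size = len(policy)
```

On this toy problem the optimal value of state 0 is 0.9 (reach state 1, collect reward 1.0 one step later); the proxies are trivial here, but on factored problems the number of distinct values and the size of a compact policy representation can vary enormously between problems of the same state-space size.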


Design, implementation and evaluation of algorithms that tractably determine features of problem difficulty.


Strong programming, data-structures and discrete-mathematics skills. Passion for artificial intelligence.

Background Literature

M. Abdulaziz, C. Gretton and M. Norrish. A State Space Acyclicity Property for Exponentially Tighter Plan Length Bounds. ICAPS, 2017.

C. Gretton and S. Thiébaux. Exploiting First-Order Regression in Inductive Policy Selection. UAI, 2004.

C. Boutilier, R. Dearden and M. Goldszmidt. Stochastic Dynamic Programming with Factored Representations. Artificial Intelligence, 2001.



Experience undertaking an empirically-focused research and development project.


artificial intelligence, decision-theoretic planning, reinforcement learning, simulation

Updated: 1 November 2018 / Responsible Officer: Dean, CECS / Page Contact: CECS Marketing