Knowing precisely when and why an optimisation algorithm will perform well is crucial for avoiding deployment disasters, gaining theoretical insights to improve algorithm design, and ensuring that algorithm performance is described robustly, independently of the chosen test instances. The project will develop the first methodologies for learning and visualising the boundaries of algorithm performance (footprints) in a high-dimensional instance space. Using these methodologies, we will gain a deep understanding of the complex interplay between problem formulation, optimisation techniques, and parameter settings. We will thus make major practical, theoretical, and methodological advances to transform the empirical science of optimisation.