Meta-interpretive learning of proof strategies
Abstract
Mechanised theorem proving software plays an ever-increasing role in modern mathematics. By enlisting the help of computers, mathematicians are able to
formally prove more complex results than they perhaps otherwise could. However,
these systems are still incapable of drawing many of the conclusions that would
be obvious to a human user, and so human intervention is still required.
In this thesis we consider the use of an adapted machine learning technique to
begin addressing this issue. We use proof strategies to provide a
high-level view of how a proof is structured, including information about why a
particular step was taken. We extend the Metagol meta-interpretive learning tool
to facilitate learning these strategies. We begin with a small set of examples and
refine our approach, demonstrating the improvements experimentally. We go on to
discuss the learning of more complicated strategies, some of the issues faced in doing
so, and how we could address them. We conclude by evaluating the experiments as
a whole, identifying the weak points in our approach and suggesting ways in which
they can be addressed in future work.