Extracting information effectively from massive data stores becomes increasingly difficult as data quantities continue to grow. Quite simply, techniques designed for learning from small data do not scale. The problem runs deeper than scale alone: big data contain far more information than the small datasets on which most state-of-the-art learning algorithms were developed. With small data, overly detailed classifiers overfit and must be avoided; with big data, that fine detail is genuine signal, and new types of learner that can capture it stand to benefit. This project will deliver learners that not only capture this detail but do so with the efficiency required to process terabytes of data.
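The claim that detailed models hurt on small data but help on big data can be illustrated with a minimal sketch. This is not the project's method, just an assumed setup using scikit-learn: an unpruned decision tree (a deliberately high-capacity learner) is trained on a small and a large sample of the same synthetic task, and the gap between training and test accuracy, a rough proxy for overfitting, shrinks as data grows.

```python
# Illustrative sketch (hypothetical setup, not the project's learners):
# a high-capacity model overfits a small sample but captures useful
# fine detail once the training set is large.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification task with enough structure to reward detail.
X, y = make_classification(n_samples=100_000, n_features=20,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

gaps = {}
for n in (200, 50_000):  # small vs large training sample
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X_train[:n], y_train[:n])
    # Train-minus-test accuracy: large gap indicates overfitting.
    gaps[n] = (tree.score(X_train[:n], y_train[:n])
               - tree.score(X_test, y_test))
    print(f"n={n}: train-test accuracy gap = {gaps[n]:.2f}")
```

On a run of this sketch the gap at n=200 is substantially larger than at n=50,000: the same unrestricted learner moves from overfitting to exploiting real structure purely by seeing more data, which is the regime this project targets at terabyte scale.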