Lazy model expansion: interleaving grounding with search

Broes De Cat, Marc Denecker, Peter Stuckey, Maurice Bruynooghe

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach to bounded model generation for rich knowledge representation languages like Answer Set Programming (ASP) and FO(·), and for CSP modeling languages such as Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blow-up in the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. Experimental results illustrate the power and generality of this approach.
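
To make the idea concrete, the toy below is a minimal, self-contained Python sketch of the mechanism the abstract describes, not the authors' FO(·)/IDP implementation: a family of constraints stays non-grounded as long as a simple justification covers it, and an instance is grounded only when the current partial assignment invalidates that justification. The constraint family "p(i) or q(i) for i in 1..N", the default recipe "q(i) := True", and all function and variable names are illustrative assumptions.

```python
# Toy illustration of lazy grounding guided by justifications (hypothetical example).
# Constraint family: p(i) or q(i), for i in 1..N.
# Justification: "every q(i) left unassigned will be set to True", so no instance
# needs to be grounded until search assigns some q(i) = False.

N = 1_000_000                      # grounding all instances up front would be wasteful
assignment = {}                    # partial assignment built during "search"
grounded_clauses = []              # clauses we were forced to ground

def justification_valid(i):
    """The default recipe 'q(i) := True' still works unless q(i) is already False."""
    return assignment.get(("q", i)) is not False

def on_assign(var, value):
    """Called by the search loop after each decision or propagation step."""
    assignment[var] = value
    kind, i = var
    if kind == "q" and not justification_valid(i):
        # The justification for instance i broke: ground just this one clause.
        grounded_clauses.append([("p", i), ("q", i)])
        # Trivial propagation on the grounded clause: q(i) is False, so p(i) must hold.
        assignment[("p", i)] = True

# Simulated search decisions: only a handful of q's become False.
for i in (3, 17, 42):
    on_assign(("q", i), False)
on_assign(("q", 7), True)

def complete_model():
    """Extend the partial assignment using the justification for all untouched i."""
    return lambda var: assignment.get(var, var[0] == "q")  # default: q(i)=True, p(i)=False

model = complete_model()
print(len(grounded_clauses), "clauses grounded out of", N)   # -> 3 clauses grounded
print(model(("p", 17)), model(("q", 5)), model(("p", 5)))    # -> True True False
```

In this sketch the justification plays the role described in the abstract: it is a recipe that extends the partial assignment over the never-grounded instances into a total assignment satisfying them, so only the few instances where search contradicts the recipe ever reach the solver in ground form.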

Original language: English
Pages (from-to): 235-286
Number of pages: 52
Journal: Journal of Artificial Intelligence Research
Volume: 52
DOIs
Publication status: Published - 25 Feb 2015
Externally published: Yes