Project Details
Project Description
Parallelization of constraint programming is reasonably well understood, and many approaches yield linear or even super-linear speedups. Nogood learning techniques can yield exponential improvements by adding constraints that block off subproblems. But these two techniques interact poorly: synchronizing nogoods incurs large communication costs, yet without synchronization, workers explore subproblems already proven infeasible by others. We propose an alternate strategy: rather than communicating nogoods between workers, we move jobs to workers that already hold the relevant nogoods. In this way, we hope to scale up parallel solving without sacrificing the exponential gains of nogood learning.
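The routing idea above can be sketched as a simple affinity heuristic: send each subproblem to the worker whose locally learned nogoods mention the most of that subproblem's variables. This is a minimal illustration, not the project's actual scheduler; all names (`pick_worker`, the variable sets) are hypothetical.

```python
# Hypothetical sketch of nogood-affinity job routing: instead of
# broadcasting nogoods, route each job to the worker whose stored
# nogoods overlap most with the job's variables.

def pick_worker(job_vars, worker_nogoods):
    """Return the index of the worker whose nogoods mention the most
    variables of the job (ties broken by lowest worker index)."""
    def affinity(nogoods):
        seen = set().union(*nogoods) if nogoods else set()
        return len(job_vars & seen)
    return max(range(len(worker_nogoods)),
               key=lambda i: (affinity(worker_nogoods[i]), -i))

# Each worker keeps a local store of nogoods, here represented as
# frozensets of the variables each nogood constrains.
workers = [
    [frozenset({"x1", "x2"})],                           # worker 0
    [frozenset({"x3", "x4"}), frozenset({"x4", "x5"})],  # worker 1
]

# A job over {x4, x5} is routed to worker 1, which already knows
# nogoods pruning that region of the search space.
print(pick_worker({"x4", "x5"}, workers))  # → 1
```

In a real solver the affinity measure would have to be cheap to evaluate and balanced against worker load, but the sketch shows the core trade: jobs move, nogoods stay put.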
| Status | Finished |
|---|---|
| Effective start/end date | 30/03/20 → 29/03/21 |
Keywords
- discrete optimization
- lazy clause generation
- parallelization