Leveraging task-parallelism in message-passing dense matrix factorizations using SMPSs

Alberto F. Martín, Ruymán Reyes, Rosa M. Badia, Enrique S. Quintana-Ortí

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


In this paper, we investigate how to exploit task-parallelism during the execution of the Cholesky factorization on clusters of multicore processors with the SMPSs programming model. Our analysis reveals that the major difficulties in adapting the ScaLAPACK code for this operation to SMPSs lie in algorithmic restrictions and the semantics of the SMPSs programming model, but also that both can be overcome with a limited programming effort. The experimental results show considerable gains in performance and scalability for the routine parallelized with SMPSs when compared with conventional approaches to executing the original ScaLAPACK implementation in parallel, as well as with two recent message-passing routines for this operation. In summary, our study opens the door to reusing message-passing legacy codes/libraries for linear algebra, by introducing up-to-date techniques like dynamic out-of-order scheduling that significantly upgrade their performance, while avoiding a costly rewrite/reimplementation.

Original language: English
Pages (from-to): 113-128
Number of pages: 16
Journal: Parallel Computing
Issue number: 5-6
Publication status: Published - May 2014
Externally published: Yes


Keywords
  • Clusters of multi-core processors
  • Linear algebra
  • Message-passing numerical libraries
  • Task parallelism
