GINN-LP: A Growing Interpretable Neural Network for discovering multivariate Laurent Polynomial equations

Nisal Ranasinghe, Damith Senanayake, Sachith Seneviratne, Malin Premaratne, Saman Halgamuge

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

1 Citation (Scopus)

Abstract

Traditional machine learning is generally treated as a black-box optimization problem and does not typically produce interpretable functions that connect inputs and outputs. However, the ability to discover such interpretable functions is desirable. In this work, we propose GINN-LP, an interpretable neural network that discovers the form and coefficients of the equation underlying a dataset, when the equation is assumed to take the form of a multivariate Laurent polynomial. This is facilitated by a new type of interpretable neural network block, named the "power-term approximator block", consisting of logarithmic and exponential activation functions. GINN-LP is end-to-end differentiable, making it possible to train with backpropagation. We propose a neural network growth strategy that finds a suitable number of terms in the Laurent polynomial representing the data, along with sparsity regularization to promote the discovery of concise equations. To the best of our knowledge, this is the first model that can discover arbitrary multivariate Laurent polynomial terms without any prior information on the order. We first evaluate GINN-LP on a subset of SRBench, a benchmark for symbolic regression, and show that it outperforms state-of-the-art symbolic regression methods on datasets generated from 48 real-world equations in the form of multivariate Laurent polynomials. Next, we propose an ensemble method that combines our method with a high-performing symbolic regression method, enabling us to discover non-Laurent polynomial equations. Applying this ensemble method to 113 SRBench datasets with known ground-truth equations, we achieve state-of-the-art results in equation discovery, with an absolute improvement of 7.1% over the best contender.
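The log-exp construction behind the power-term approximator block can be illustrated with a minimal numpy sketch. For positive inputs, exp(Σᵢ wᵢ · log xᵢ) = Πᵢ xᵢ^wᵢ, so a dense layer wrapped in logarithmic and exponential activations can represent a single Laurent-polynomial term (exponents may be negative), and a weighted sum of such terms gives the full polynomial. This is an illustrative sketch under those assumptions, not the authors' implementation; the function names and shapes here are hypothetical.

```python
import numpy as np

def power_term(x, w):
    """Represent prod_i x[i]**w[i] via the log-exp trick (requires x > 0).

    This mirrors one "power-term approximator block": a linear map of
    log-transformed inputs followed by an exponential activation.
    """
    return np.exp(np.dot(w, np.log(x)))

def laurent_poly(x, exponents, coeffs):
    """Weighted sum of power terms, i.e. a multivariate Laurent polynomial."""
    terms = np.array([power_term(x, w) for w in exponents])
    return float(np.dot(coeffs, terms))

# Example: 5 * x1^2 * x2^-1 evaluated at x1=2, x2=3  ->  5 * 4 / 3
x = np.array([2.0, 3.0])
value = laurent_poly(x, exponents=[np.array([2.0, -1.0])], coeffs=[5.0])
print(value)  # 6.666...
```

Because the whole expression is composed of differentiable operations, the exponents `w` and coefficients can in principle be learned by backpropagation, which is the property the abstract highlights.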

Original language: English
Title of host publication: Thirty-Eighth AAAI Conference on Artificial Intelligence / Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence / Fourteenth Symposium on Educational Advances in Artificial Intelligence
Editors: Jennifer Dy, Sriraam Natarajan
Place of Publication: Washington DC, USA
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Pages: 14776-14784
Number of pages: 9
ISBN (Electronic): 9781577358879, 1577358872
DOIs
Publication status: Published - 2024
Event: AAAI Conference on Artificial Intelligence 2024 - Vancouver, Canada
Duration: 20 Feb 2024 to 27 Feb 2024
Conference number: 38th
https://ojs.aaai.org/index.php/AAAI/issue/view/588 (AAAI-24 Technical Tracks 13)
https://ojs.aaai.org/index.php/AAAI/issue/view/589 (AAAI-24 Technical Tracks 14)
https://ojs.aaai.org/index.php/AAAI/issue/view/593 (AAAI-24 Technical Tracks 18)
https://aaai.org/aaai-conference/ (Website)

Publication series

Name: Proceedings of the AAAI Conference on Artificial Intelligence
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Number: 13
Volume: 38
ISSN (Print): 2159-5399
ISSN (Electronic): 2374-3468

Conference

Conference: AAAI Conference on Artificial Intelligence 2024
Abbreviated title: AAAI 2024
Country/Territory: Canada
City: Vancouver
Period: 20/02/24 to 27/02/24

Keywords

  • ML
  • Transparent
  • Interpretable
  • Explainable ML
  • Ensemble Methods