Discovering dependencies with reliable mutual information

Panagiotis Mandros, Mario Boley, Jilles Vreeken

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the task of discovering functional dependencies in data for target attributes of interest. To solve it, we have to answer two questions: How do we quantify a dependency in a way that is model-agnostic and interpretable, yet reliable against sample-size and dimensionality biases? How can we efficiently discover the exact or α-approximate top-k dependencies? We address the first question by adopting information-theoretic notions. Specifically, we consider the mutual information score, for which we propose a reliable estimator that enables robust optimization in high-dimensional data. To address the second question, we then systematically explore the algorithmic implications of using this measure for optimization. We show that the problem is NP-hard, which justifies worst-case exponential-time as well as heuristic search methods. We propose two bounding functions for the estimator, which we use as pruning criteria in branch-and-bound search to efficiently mine dependencies with approximation guarantees. Empirical evaluation shows that the derived estimator has desirable statistical properties and that the bounding functions lead to effective exact and greedy search algorithms; when combined, qualitative experiments show that the framework indeed discovers highly informative dependencies.
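To illustrate the kind of reliability correction the abstract refers to, the sketch below computes the plug-in (empirical) mutual information between a discrete candidate attribute and a target, and then subtracts the average score obtained under random permutations of the target. This is a generic permutation-based bias correction, not the paper's actual estimator (the paper derives a closed-form correction); all function names here are illustrative.

```python
import math
import random
from collections import Counter

def plug_in_mi(xs, ys):
    """Empirical (plug-in) mutual information I(X;Y) in bits for discrete samples."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts substituted
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

def permutation_corrected_mi(xs, ys, n_perms=200, seed=0):
    """Subtract the mean plug-in MI under random permutations of ys.

    The plug-in estimate is positively biased: even independent attributes
    score above zero, and the bias grows with the domain size of X (the
    dimensionality bias the abstract mentions). Subtracting the permutation
    baseline pushes the score of unrelated attributes back toward zero.
    """
    rng = random.Random(seed)
    ys = list(ys)
    baseline = 0.0
    for _ in range(n_perms):
        perm = ys[:]
        rng.shuffle(perm)
        baseline += plug_in_mi(xs, perm)
    return plug_in_mi(xs, ys) - baseline / n_perms
```

For an exact functional dependency (e.g. `ys` fully determined by `xs`), the corrected score stays close to the target entropy, while for an unrelated attribute it drops to roughly zero instead of retaining the spurious positive bias of the raw estimate.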

Original language: English
Pages (from-to): 4223–4253
Number of pages: 31
Journal: Knowledge and Information Systems
Volume: 62
DOIs
Publication status: Published - 24 Jul 2020

Keywords

  • Algorithms
  • Approximate functional dependency
  • Branch-and-bound
  • Information theory
  • Knowledge discovery
  • Pattern mining
