Efficient learning of discrete graphical models
Graphical models are useful tools for describing structured high-dimensional probability distributions. Developing efficient algorithms that learn graphical models from as few samples as possible remains an active research topic. When the variables described by the graphical model are discrete, the learning problem is particularly challenging since the classical maximum likelihood approach is computationally intractable. We present a general framework based on the so-called interaction screening objective that allows one to provably learn fully general discrete graphical models with node-specific discrete alphabets and multi-body interactions, specified in an arbitrary basis. We identify a single condition related to the model parametrization that leads to rigorous guarantees on the recovery of model structure and parameters in any error norm. We show that this condition is readily verifiable for a large class of models, including several well-known special cases such as the Ising and Potts models.
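To make the interaction screening objective concrete, here is a minimal sketch for the Ising special case mentioned above. For a chosen node u, the objective is the empirical average of exp(-sigma_u * (theta_u + sum_{j != u} theta_{uj} sigma_j)); it is convex, and its population minimizer coincides with the true couplings. The toy 3-spin model, its parameter values, and the plain gradient-descent loop are illustrative assumptions, not the estimator or guarantees from the paper (which cover general discrete alphabets, multi-body interactions, and regularized finite-sample analysis).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical toy Ising model on p = 3 spins (illustration only) ---
p = 3
J = np.zeros((p, p))
J[0, 1] = J[1, 0] = 0.8     # coupling between spins 0 and 1
J[1, 2] = J[2, 1] = -0.6    # coupling between spins 1 and 2

# Enumerate all 2^p configurations and draw exact samples from the
# Gibbs distribution P(sigma) proportional to exp(sum_{i<j} J_ij s_i s_j).
configs = np.array(list(itertools.product([-1, 1], repeat=p)))
energies = np.einsum('ti,ij,tj->t', configs, J, configs) / 2.0
probs = np.exp(energies) / np.exp(energies).sum()
samples = configs[rng.choice(len(configs), size=20000, p=probs)]

def iso_and_grad(theta, samples, u):
    """Interaction screening objective for node u, and its gradient.

    theta[j] (j != u) is the candidate coupling theta_{uj}; theta[u]
    plays the role of the local magnetic field.  The objective is the
    sample mean of exp(-s_u * (theta_u + sum_{j != u} theta_{uj} s_j)).
    """
    sigma_u = samples[:, u].astype(float)
    other = samples.astype(float).copy()
    other[:, u] = 1.0                     # column u multiplies the field term
    expo = np.exp(-sigma_u * (other @ theta))
    grad = -(sigma_u[:, None] * other * expo[:, None]).mean(axis=0)
    return expo.mean(), grad

# Plain gradient descent on the convex objective for node u = 0.
theta = np.zeros(p)
for _ in range(3000):
    _, g = iso_and_grad(theta, samples, u=0)
    theta -= 0.1 * g

print(theta)  # theta[1] should be near 0.8; theta[0], theta[2] near 0.0
```

In practice one adds an l1 penalty on the couplings to promote sparsity and recover the graph structure; the sketch omits it to keep the screening mechanism itself visible.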