Information-Theoretic Sequential Decision Making: Stochastic and Deterministic Approaches
A natural goal in many sequential decision-making processes is to identify a sequence of decisions that reduces uncertainty about some quantity of interest. In the Bayesian setting this problem can be formalized as sequentially maximizing the mutual information (MI) between observations and unknowns. Unfortunately, MI is challenging to estimate or bound in all but the simplest models. This talk will present recent work on algorithms for sequential MI maximization using both sample-based estimators and variational approximations. After characterizing the problem, I will present sample-based methods built on sequential M-estimation. Using techniques from the robust statistics literature, I will show that this approach achieves superior accuracy compared to standard Monte Carlo methods in the finite-sample regime. For larger problem instances I will present a variational approach, which sequentially optimizes lower bounds on MI to yield high-quality decisions with minimal computation. I will demonstrate the strengths and weaknesses of each method on problems such as gene regulatory network inference, active learning for semi-supervised topic models, and target tracking in a sensor field.
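To give a flavor of the decision-selection loop described above, here is a minimal, illustrative sketch of greedy MI maximization: among candidate sensors, pick the one whose observation carries the most mutual information with a discrete unknown. The toy model (a uniform prior over four states observed through Gaussian noise), the sensor names, and the plain Monte Carlo estimator are all assumptions for illustration; they are not the M-estimation or variational methods from the talk.

```python
# Illustrative sketch only: greedy decision selection by Monte Carlo MI
# estimation. The model and names below are hypothetical, not the talk's.
import numpy as np

rng = np.random.default_rng(0)

def mi_estimate(prior, noise_sd, n_samples=2000):
    """Monte Carlo estimate (in nats) of I(X; Y) where X is discrete with
    the given prior and Y = X + Gaussian noise with std. dev. noise_sd."""
    states = np.arange(len(prior))
    x = rng.choice(states, size=n_samples, p=prior)
    y = x + rng.normal(0.0, noise_sd, size=n_samples)
    # log p(y | x) for each sampled pair (Gaussian constant cancels below)
    log_lik = -0.5 * ((y - x) / noise_sd) ** 2
    # log p(y) = log sum_x' p(x') p(y | x'), evaluated per sample
    diffs = y[:, None] - states[None, :]
    log_marg = np.log(
        (prior[None, :] * np.exp(-0.5 * (diffs / noise_sd) ** 2)).sum(axis=1)
    )
    # I(X; Y) = E[ log p(y|x) - log p(y) ]
    return float(np.mean(log_lik - log_marg))

prior = np.array([0.25, 0.25, 0.25, 0.25])
# Candidate decisions: sensors that differ only in observation noise.
noise_levels = {"sensor_A": 2.0, "sensor_B": 0.5, "sensor_C": 1.0}
best = max(noise_levels, key=lambda d: mi_estimate(prior, noise_levels[d]))
print(best)  # the lowest-noise sensor is the most informative choice
```

In a sequential setting this selection step would be repeated, with the posterior after each observation serving as the prior for the next decision; the nested expectation inside `mi_estimate` is precisely what makes naive Monte Carlo costly, motivating the estimators and bounds discussed in the talk.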
BIO: Jason Pacheco is an assistant professor in the Department of Computer Science at the University of Arizona. His research interests are in statistical machine learning, probabilistic graphical models, approximate inference algorithms, and information-theoretic decision making. Prior to joining the University of Arizona, Jason was a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with John Fisher III. He completed his PhD at Brown University with Erik Sudderth.