
Probabilistic graphical models: distributed inference and learning models with small feedback vertex sets

by Ying Liu




Institution: MIT
Department: Department of Electrical Engineering and Computer Science
Year: 2014
Keywords: Electrical Engineering and Computer Science.
Record ID: 2037435
Full text PDF: http://hdl.handle.net/1721.1/89994


Abstract

In undirected graphical models, each node represents a random variable, while the set of edges specifies the conditional independencies of the underlying distribution. When the random variables are jointly Gaussian, the models are called Gaussian graphical models (GGMs) or Gauss-Markov random fields. In this thesis, we address several important problems in the study of GGMs.

The first problem is to perform inference or sampling when the graph structure and model parameters are given. For inference in graphs with cycles, loopy belief propagation (LBP) is a purely distributed algorithm, but it gives inaccurate variance estimates in general and often diverges or converges slowly. Previously, the hybrid feedback message passing (FMP) algorithm was developed to improve convergence and accuracy: a special protocol is used among the nodes in a pseudo-FVS (an FVS, or feedback vertex set, is a set of nodes whose removal breaks all cycles), while standard LBP is run on the subgraph excluding the pseudo-FVS. In this thesis, we develop recursive FMP, a purely distributed extension of FMP in which all nodes use the same integrated message-passing protocol. In addition, we introduce the subgraph perturbation sampling algorithm, which converts any pre-existing tractable inference algorithm for a subgraph into a sampler by perturbing it so as to yield asymptotically exact samples from the intended distribution. We study the stationary version, in which a single fixed subgraph is used in all iterations, as well as the non-stationary version, in which tractable subgraphs are adaptively selected.

The second problem is model learning, i.e., recovering the underlying structure and model parameters from observations when the model is unknown. Families of graphical models that combine large modeling capacity with efficient inference algorithms are extremely useful. As new inference algorithms are developed for new applications, it is important to study the families of models best suited to these algorithms while retaining strong expressive power. In particular, we study the family of GGMs with small FVSs and propose structure learning algorithms for two cases: 1) all nodes are observed, which is useful in modeling social or flight networks, where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree; 2) the FVS nodes are latent variables, in which case structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. We perform experiments using synthetic data as well as real flight-delay data to demonstrate the modeling capacity achieved with FVSs of various sizes.
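
Illustrative sketches

To make the objects in the abstract concrete, the sketches below use dense NumPy linear algebra on small hypothetical examples; they illustrate the ideas rather than reproduce the thesis's distributed implementations. First, a GGM in information form: the sparsity pattern of the precision matrix J mirrors the graph's edge set, and exact inference amounts to computing J^{-1}h and the diagonal of J^{-1}.

```python
import numpy as np

# A hypothetical 4-node cycle, in information form: p(x) is proportional to
# exp(-0.5 x^T J x + h^T x), and J[i, j] != 0 exactly when edge (i, j) exists.
J = np.array([[2.0, 0.5, 0.0, 0.5],
              [0.5, 2.0, 0.5, 0.0],
              [0.0, 0.5, 2.0, 0.5],
              [0.5, 0.0, 0.5, 2.0]])   # single cycle 0-1-2-3-0
h = np.array([1.0, 0.0, -1.0, 0.0])

mu = np.linalg.solve(J, h)             # exact marginal means
var = np.diag(np.linalg.inv(J))        # exact marginal variances
print(mu, var)
```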
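Gaussian LBP, the baseline distributed algorithm in the abstract, passes scalar information-form corrections along edges. A minimal sketch, assuming the standard parameterization of Gaussian BP messages; on loopy graphs the variances it returns are approximate even when it converges, which is exactly the weakness FMP addresses.

```python
import numpy as np

def gaussian_lbp(J, h, iters=100):
    """Loopy belief propagation for a GGM in information form (a sketch).

    Messages are (dJ, dh) pairs on directed edges. On trees this is exact;
    on loopy graphs the means are correct whenever LBP converges, but the
    variance estimates are approximate in general.
    """
    n = len(h)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0.0]
    dJ = {e: 0.0 for e in edges}
    dh = {e: 0.0 for e in edges}
    for _ in range(iters):
        new_dJ, new_dh = {}, {}
        for (i, j) in edges:
            # aggregate messages into i from all neighbors except j
            Jhat = J[i, i] + sum(dJ[(k, ki)] for (k, ki) in edges
                                 if ki == i and k != j)
            hhat = h[i] + sum(dh[(k, ki)] for (k, ki) in edges
                              if ki == i and k != j)
            new_dJ[(i, j)] = -J[i, j] ** 2 / Jhat
            new_dh[(i, j)] = -J[i, j] * hhat / Jhat
        dJ, dh = new_dJ, new_dh
    Jhat = np.array([J[i, i] + sum(dJ[(k, ki)] for (k, ki) in edges if ki == i)
                     for i in range(n)])
    hhat = np.array([h[i] + sum(dh[(k, ki)] for (k, ki) in edges if ki == i)
                     for i in range(n)])
    return hhat / Jhat, 1.0 / Jhat   # approximate means and variances
```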
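The structural fact behind FMP is that conditioning on a feedback vertex set leaves a cycle-free graph on which BP is exact. Below is a dense sketch of that two-stage computation; the np.linalg calls stand in for the tree-structured BP passes used in the actual distributed algorithm, and the function names are ours, not the thesis's.

```python
import numpy as np

def fvs_inference(J, h, F):
    """Exact GGM marginals via a feedback vertex set F (a dense sketch).

    Removing F breaks all cycles, so every solve against the submatrix J_T
    could be done exactly by BP on the remaining forest; dense solves stand
    in for those passes here. The cost is O(|F|) tree solves plus a small
    |F| x |F| dense problem, which is cheap when the FVS is small.
    """
    n = len(h)
    T = [i for i in range(n) if i not in set(F)]
    JF = J[np.ix_(F, F)]
    JT = J[np.ix_(T, T)]
    JFT = J[np.ix_(F, T)]

    G = np.linalg.solve(JT, JFT.T)        # |F| "tree solves": feedback gains
    b = np.linalg.solve(JT, h[T])         # one more tree solve
    SigF = np.linalg.inv(JF - JFT @ G)    # Schur complement on the FVS
    muF = SigF @ (h[F] - JFT @ b)

    muT = b - G @ muF                     # corrected means on the tree part
    varT = np.diag(np.linalg.inv(JT)) + np.einsum('if,fg,ig->i', G, SigF, G)
    return (muF, np.diag(SigF)), (muT, varT)
```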
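Subgraph perturbation sampling can be viewed as a matrix-splitting iteration J = M - N in which the potential vector is stochastically perturbed at every step, with M the information matrix of a tractable (e.g. tree-structured) subgraph model. A stationary-version sketch follows, with dense algebra standing in for the subgraph sampler; the particular splitting and the positive-semidefiniteness requirement on M + Nᵀ are stated here as assumptions for illustration, and the thesis gives the actual conditions and subgraph selection.

```python
import numpy as np

def subgraph_perturbation_sampler(J, h, M, iters=200, rng=None):
    """Stationary subgraph perturbation sampling (a dense sketch).

    Split J = M - N with M tractable. Each iteration perturbs the potential
    vector and solves against M; in the thesis that step is a tree sampler
    run by belief propagation. If the iteration converges (spectral radius
    of M^{-1} N below 1), samples are asymptotically from N(J^{-1}h, J^{-1}).
    """
    rng = rng or np.random.default_rng(0)
    N = M - J                                   # the "cut" part of the splitting
    L = np.linalg.cholesky(M + N.T)             # noise covariance; must be PSD
    x = np.zeros(len(h))
    for _ in range(iters):
        c = h + L @ rng.standard_normal(len(h))  # perturbed potential vector
        x = np.linalg.solve(M, N @ x + c)        # one tractable-subgraph solve
    return x
```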
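Finally, for the observed-FVS learning case, a natural building block is a Chow-Liu-style maximum-weight spanning tree over pairwise Gaussian mutual information, applied to the covariance of the non-FVS nodes conditioned on the FVS nodes (i.e. to S_TT - S_TF S_FF^{-1} S_FT). The sketch below shows only the tree step on a given covariance S; the FVS selection and conditioning around it follow the thesis.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def gaussian_chow_liu(S):
    """Chow-Liu tree for a Gaussian with covariance S (a sketch).

    Pairwise Gaussian mutual information is I_ij = -0.5 log(1 - rho_ij^2),
    and the best tree is a maximum-weight spanning tree under I. In the
    observed-FVS algorithm, S would be the covariance of the non-FVS nodes
    conditioned on the FVS nodes; here it is any covariance matrix.
    """
    d = np.sqrt(np.diag(S))
    rho = S / np.outer(d, d)
    I = -0.5 * np.log(1.0 - np.clip(rho ** 2, 0.0, 1.0 - 1e-12))
    np.fill_diagonal(I, 0.0)
    # scipy's routine minimizes and treats zero entries as missing edges,
    # so shift to strictly positive weights whose minimum tree maximizes I
    W = (I.max() - I) + 1e-9
    np.fill_diagonal(W, 0.0)
    mst = minimum_spanning_tree(W)
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))   # learned tree edges
```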