Statistics - Computation Publications (50)


April 2017
Affiliations: (1) University of California Davis, (2) University of Duisburg-Essen

In this note we describe experiments on an implementation of two methods proposed in the literature for computing regions that correspond to a notion of order statistics for multidimensional data. Our implementation, which works for any dimension greater than one, is the only one that we know of to be publicly available. Experiments run using the software confirm that half-space peeling generally gives better results than directly peeling convex hulls, but at a computational cost. Read More
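
As a rough illustration of the simpler of the two approaches, the sketch below peels successive convex hulls from a point cloud using SciPy; the function name and the layer-count argument are ours, and this is not the publicly available implementation described above.

```python
# A minimal convex-hull peeling sketch, assuming points is an (n, d) NumPy array with d >= 2.
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_peel(points, n_layers):
    """Repeatedly remove convex-hull vertices, returning the successive hull layers."""
    remaining = np.asarray(points, dtype=float)
    layers = []
    for _ in range(n_layers):
        if remaining.shape[0] <= remaining.shape[1]:   # too few points left for a full-dimensional hull
            break
        hull = ConvexHull(remaining)
        layers.append(remaining[hull.vertices])        # boundary of the current "order statistic" region
        mask = np.ones(remaining.shape[0], dtype=bool)
        mask[hull.vertices] = False
        remaining = remaining[mask]                    # peel: drop the hull vertices
    return layers

# Example: three peeling layers of 200 bivariate Gaussian points.
layers = convex_hull_peel(np.random.randn(200, 2), n_layers=3)
```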


In many real problems, dependence structures more general than exchangeability are required. For instance, in some settings partial exchangeability is a more reasonable assumption. For this reason, vectors of dependent Bayesian nonparametric priors have recently gained popularity. Read More


BDSAR is an R package which estimates distances between probability distributions and facilitates a dynamic and powerful analysis of diagnostics for Bayesian models from the class of Simultaneous Autoregressive (SAR) spatial models. The package offers a new, refined plot for comparing models and works intuitively, allowing any analyst to easily build such plots. These are helpful for gaining insight into influential observations in the data. Read More


This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Read More
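
To make the telescoping idea concrete, here is a minimal sketch of the MLMC estimator, assuming a user-supplied coupled sampler `sample_diff(level, rng)` that returns a draw of $P_\ell - P_{\ell-1}$ (and of $P_0$ at level 0); the names, the toy coupling and the fixed per-level sample sizes are illustrative only.

```python
# Minimal MLMC telescoping-sum estimator: E[P_L] = sum_l E[P_l - P_{l-1}].
import numpy as np

def mlmc_estimate(sample_diff, n_samples_per_level, rng=None):
    """Coarse levels use many cheap samples; fine levels use few expensive ones."""
    rng = rng or np.random.default_rng()
    estimate = 0.0
    for level, n in enumerate(n_samples_per_level):
        diffs = np.array([sample_diff(level, rng) for _ in range(n)])
        estimate += diffs.mean()          # add the level-l correction term
    return estimate

# Toy coupling: the 2**(-level) term stands in for a discretisation bias.
def sample_diff(level, rng):
    x = rng.standard_normal()
    fine = np.tanh(x) + 2.0 ** (-level)
    coarse = np.tanh(x) + 2.0 ** (-(level - 1)) if level > 0 else 0.0
    return fine - coarse

print(mlmc_estimate(sample_diff, n_samples_per_level=[4000, 1000, 250]))
```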


The scalable calculation of matrix determinants has been a bottleneck to the widespread application of many machine learning methods such as determinantal point processes, Gaussian processes, generalised Markov random fields, graph models and many others. In this work, we estimate log determinants under the framework of maximum entropy, given information in the form of moment constraints from stochastic trace estimation. The estimates demonstrate a significant improvement over state-of-the-art alternative methods, as shown on a wide variety of UFL sparse matrices. Read More
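
For context, the moment constraints referred to above are typically stochastic estimates of tr(A^k); a minimal Hutchinson-style sketch is given below, using only matrix-vector products. The function name and probe count are illustrative, not the authors' code.

```python
# Hutchinson-style stochastic estimation of the moments tr(A^k), A symmetric positive definite.
import numpy as np

def stochastic_moments(A, max_power, n_probes=30, rng=None):
    """Estimate tr(A^k) for k = 1..max_power using Rademacher probe vectors."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    moments = np.zeros(max_power)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe
        v = z.copy()
        for k in range(max_power):
            v = A @ v                           # v = A^(k+1) z, only mat-vec products needed
            moments[k] += z @ v
    return moments / n_probes                   # unbiased estimates of tr(A^k)

# Example: compare against exact traces for a small SPD matrix.
B = np.random.randn(100, 100)
A = B @ B.T / 100 + np.eye(100)
print(stochastic_moments(A, max_power=3))
print([np.trace(np.linalg.matrix_power(A, k)) for k in (1, 2, 3)])
```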


The ensemble Kalman filter (EnKF) is a computational technique for approximate inference on the state vector in spatio-temporal state-space models. It has been successfully used in many real-world nonlinear data-assimilation problems with very high dimensions, such as weather forecasting. However, the EnKF is most appropriate for additive Gaussian state-space models with a linear observation equation and without unknown parameters. Read More
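
As a reference point, here is a minimal sketch of the EnKF analysis (update) step for a linear observation operator H and Gaussian observation noise, in the stochastic (perturbed-observation) variant; it is written for illustration and does not correspond to any particular software package.

```python
# Stochastic EnKF analysis step with perturbed observations.
import numpy as np

def enkf_update(ensemble, H, R, y, rng=None):
    """ensemble: (n_ens, n_state) forecast members; returns the analysis ensemble."""
    rng = rng or np.random.default_rng()
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = (H @ ensemble.T).T                                # predicted observations
    Ya = Y - Y.mean(axis=0)                               # observation anomalies
    P_xy = X.T @ Ya / (n_ens - 1)                         # state-observation cross covariance
    P_yy = Ya.T @ Ya / (n_ens - 1) + R                    # innovation covariance
    K = P_xy @ np.linalg.solve(P_yy, np.eye(R.shape[0]))  # Kalman gain
    perturbed = y + rng.multivariate_normal(np.zeros(R.shape[0]), R, size=n_ens)
    return ensemble + (perturbed - Y) @ K.T               # analysis ensemble
```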


Lorentz Transmission Electron Microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector Field Electron Tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back-projection approach to tomographic reconstruction; because experimental limitations leave only an incomplete set of measurements available, the reconstructed vector fields exhibit significant artifacts. Read More


A new recalibration post-processing method is presented to improve the quality of the posterior approximation when using Approximate Bayesian Computation (ABC) algorithms. Recalibration may be used in conjunction with existing post-processing methods, such as regression-adjustments. In addition, this work extends and strengthens the links between ABC and indirect inference algorithms, allowing more extensive use of misspecified auxiliary models in the ABC context. Read More
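
For background, the sketch below shows plain rejection ABC followed by the standard linear regression adjustment, i.e. the kind of post-processing the proposed recalibration can be combined with; `simulate`, `prior_sample` and the acceptance fraction are hypothetical placeholders, and the recalibration step itself is not shown.

```python
# Rejection ABC with a linear regression adjustment on the accepted draws.
import numpy as np

def abc_regression(observed_summary, prior_sample, simulate, keep_frac=0.05, rng=None):
    rng = rng or np.random.default_rng()
    thetas = prior_sample(rng)                                   # (n, d_theta) draws from the prior
    summaries = np.array([simulate(t, rng) for t in thetas])     # (n, d_summary) simulated summaries
    dist = np.linalg.norm(summaries - observed_summary, axis=1)
    keep = dist <= np.quantile(dist, keep_frac)                  # rejection step
    S, T = summaries[keep], thetas[keep]
    # Regression adjustment: theta_adj = theta - beta (s - s_obs)
    X = np.hstack([np.ones((S.shape[0], 1)), S - observed_summary])
    beta = np.linalg.lstsq(X, T, rcond=None)[0][1:]              # drop the intercept row
    return T - (S - observed_summary) @ beta                     # adjusted posterior sample
```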


In the quest for scalable Bayesian computational algorithms we need to exploit the full potential of existing methodologies. In this note we point out that message passing algorithms, which are very well developed for inference in graphical models, appear to be largely unexplored for scalable inference in Bayesian multilevel regression models. We show that nested multilevel regression models with Gaussian errors lend themselves very naturally to the combined use of belief propagation and MCMC. Read More


Many real-world systems are profitably described as complex networks that grow over time. Preferential attachment and node fitness are two ubiquitous growth mechanisms that not only explain certain structural properties commonly observed in real-world systems, but are also tied to a number of applications in modeling and inference. While there are standard statistical packages for estimating the structural properties of complex networks, there is no corresponding package when it comes to the estimation of growth mechanisms. Read More


In this paper, we propose a new method for estimating, and constructing confidence intervals for, low-dimensional components in a high-dimensional model. The proposed estimator, called the Constrained Lasso (CLasso) estimator, is obtained by simultaneously solving two estimating equations---one imposing a zero-bias constraint for the low-dimensional parameter and the other forming an $\ell_1$-penalized procedure for the high-dimensional nuisance parameter. By carefully choosing the zero-bias constraint, the resulting estimator of the low-dimensional parameter is shown to admit an asymptotically normal limit attaining the Cram\'{e}r-Rao lower bound in a semiparametric sense. Read More


In this work we define log-linear models to compare several square contingency tables under the quasi-independence or the quasi-symmetry model, and the relevant Markov bases are theoretically characterized. Through Markov bases, an exact test to evaluate if two or more tables fit a common model is introduced. Two real-data examples illustrate the use of these models in different fields of applications. Read More


There has been great interest recently in applying nonparametric kernel mixtures in a hierarchical manner to model multiple related data samples jointly. In such settings several data features are commonly present: (i) the related samples often share some, if not all, of the mixture components but with differing weights, (ii) only some, not all, of the mixture components vary across the samples, and (iii) often the shared mixture components across samples are not aligned perfectly in terms of their location and spread, but rather display small misalignments either due to systematic cross-sample difference or more often due to uncontrolled, extraneous causes. Properly incorporating these features in mixture modeling will enhance the efficiency of inference, whereas ignoring them not only reduces efficiency but can jeopardize the validity of the inference due to issues such as confounding. Read More


Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation and optimization problems. The Markov Chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of MCMC techniques, introducing the basic notions and different properties. Read More
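
A minimal random-walk Metropolis-Hastings sketch is given below for a generic log target; the proposal scale and the example target are illustrative.

```python
# Random-walk Metropolis-Hastings for an arbitrary log target density.
import numpy as np

def metropolis_hastings(log_target, x0, n_iter, step=0.5, rng=None):
    rng = rng or np.random.default_rng()
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    chain = np.empty((n_iter, x.size))
    log_p = log_target(x)
    for i in range(n_iter):
        proposal = x + step * rng.standard_normal(x.size)    # symmetric Gaussian proposal
        log_p_prop = log_target(proposal)
        if np.log(rng.random()) < log_p_prop - log_p:        # MH accept/reject step
            x, log_p = proposal, log_p_prop
        chain[i] = x
    return chain

# Example: sample a standard bivariate Gaussian.
chain = metropolis_hastings(lambda x: -0.5 * x @ x, x0=np.zeros(2), n_iter=5000)
```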


Big Data settings have become ever more commonplace for data analysts. In the Big Data setting, analysts are faced with very large numbers of observations as well as data that arrive as a stream, both phenomena that many traditional statistical techniques are unable to contend with. Many of these traditional techniques, however, remain useful and cannot simply be discarded. Read More


Latent Dirichlet Allocation (LDA) is a topic model widely used in natural language processing and machine learning. Most approaches to training the model rely on iterative algorithms, which makes it difficult to run LDA on big data sets that are best analyzed in parallel and distributed computational environments. Indeed, current approaches to parallel inference either don't converge to the correct posterior or require storage of large dense matrices in memory. Read More


In this paper, we present a method for computing the marginal likelihood, also known as the model likelihood or Bayesian evidence, from Markov Chain Monte Carlo (MCMC), or other sampled posterior distributions. In order to do this, one needs to be able to estimate the density of points in parameter space, and this can be challenging in high dimensions. Here we present a Bayesian analysis, where we obtain the posterior for the marginal likelihood, using $k$th nearest-neighbour distances in parameter space with the Mahalanobis distance metric, under the assumption that the points in the chain (thinned if required) are independent. Read More
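
A rough sketch of the underlying idea, under our reading of the abstract: estimate the posterior density at each (approximately independent) chain point with a $k$th nearest-neighbour estimate under the Mahalanobis metric, then recover the evidence from Z = prior x likelihood / posterior density. The combination by a median and all function names are our own simplifications, not the paper's estimator.

```python
# kNN posterior-density estimate (Mahalanobis metric) used to back out log evidence.
import numpy as np
from scipy.special import gammaln

def log_evidence_knn(chain, log_prior_times_like, k=4):
    """chain: (n, d) approximately independent posterior draws, d >= 2."""
    n, d = chain.shape
    cov = np.cov(chain, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    _, logdet_cov = np.linalg.slogdet(cov)
    log_ball = 0.5 * d * np.log(np.pi) - gammaln(d / 2 + 1)      # log volume of the unit d-ball
    log_z = np.empty(n)
    for i in range(n):
        diff = chain - chain[i]
        maha = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
        r_k = np.sort(maha)[k]                                    # k-th neighbour (index 0 is the point itself)
        # kNN density: k / (n * volume of the Mahalanobis ball of radius r_k)
        log_post = np.log(k) - np.log(n) - (log_ball + d * np.log(r_k) + 0.5 * logdet_cov)
        log_z[i] = log_prior_times_like(chain[i]) - log_post      # log Z = log(prior*like) - log posterior
    return np.median(log_z)                                       # robust combination across points
```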


We introduce dynamic nested sampling: a generalisation of the nested sampling algorithm in which the number of "live points" varies to allocate samples more efficiently. In empirical tests the new method increases accuracy by up to a factor of ~8 for parameter estimation and ~3 for evidence calculation compared to standard nested sampling with the same number of samples - equivalent to speeding up the computation by factors of ~64 and ~9 respectively. In addition, unlike in standard nested sampling, more accurate results can be obtained by continuing the calculation for longer. Read More


Hamiltonian Monte Carlo (HMC) is a powerful Markov chain Monte Carlo (MCMC) method for performing approximate inference in complex probabilistic models of continuous variables. In common with many MCMC methods, however, the standard HMC approach performs poorly in distributions with multiple isolated modes. We present a method for augmenting the Hamiltonian system with an extra continuous temperature control variable which allows the dynamic to bridge between sampling a complex target distribution and a simpler unimodal base distribution. Read More
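
For reference, a minimal sketch of standard HMC with leapfrog integration is shown below; this is the base algorithm being augmented, not the proposed continuously tempered variant, and `log_target`, `grad_log_target` and the integrator settings are placeholders.

```python
# One HMC transition: leapfrog integration followed by a Metropolis correction.
import numpy as np

def hmc_step(x, log_target, grad_log_target, step=0.1, n_leapfrog=20, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.size)                       # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_target(x_new)          # half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                             # full step for position
        p_new += step * grad_log_target(x_new)            # full step for momentum
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_target(x_new)          # final half step
    # Hamiltonian = negative log target + Gaussian kinetic energy
    current_h = -log_target(x) + 0.5 * p @ p
    proposed_h = -log_target(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.random()) < current_h - proposed_h else x

# Example: one step targeting a standard bivariate Gaussian.
x = hmc_step(np.zeros(2), lambda q: -0.5 * q @ q, lambda q: -q)
```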


We develop a Bayesian vector autoregressive (VAR) model that is capable of handling vast dimensional information sets. Three features are introduced to permit reliable estimation of the model. First, we assume that the reduced-form errors in the VAR feature a factor stochastic volatility structure, allowing for conditional equation-by-equation estimation. Read More


We propose a simple, generic algorithm for parallelising Markov chain Monte Carlo (MCMC) algorithms for posterior inference of Bayesian hierarchical models. Our algorithm parallelises evaluation of the product-form likelihoods formed when a parameter has many children in the hierarchical model; and parallelises sampling of conditionally-independent sets of parameters. A simple heuristic algorithm is used to decide which approach to use for each parameter and to apportion computation across computational cores. Read More


Fitting stochastic kinetic models represented by Markov jump processes within the Bayesian paradigm is complicated by the intractability of the observed data likelihood. There has therefore been considerable attention given to the design of pseudo-marginal Markov chain Monte Carlo algorithms for such models. However, these methods are typically computationally intensive, often require careful tuning and must be restarted from scratch upon receipt of new observations. Read More


Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignment of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has been employed implicitly in different works in the literature. Read More
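
A minimal sketch of the compression idea, under one plausible convention: summarise a group of weighted samples by a single particle resampled within the group, carrying the group's total (unnormalised) weight. The details of GIS in the paper may differ.

```python
# Compress a group of weighted samples into a single weighted sample.
import numpy as np

def compress_group(samples, weights, rng=None):
    """samples: (m, d) array, weights: (m,) unnormalised importance weights."""
    rng = rng or np.random.default_rng()
    total = weights.sum()
    idx = rng.choice(len(weights), p=weights / total)    # resample one particle within the group
    return samples[idx], total                           # single weighted sample (x*, W)

# Example: two groups compressed and then combined to estimate a posterior mean.
rng = np.random.default_rng(0)
groups = [(rng.standard_normal((500, 1)) + 2.0, rng.random(500)) for _ in range(2)]
reps, ws = zip(*[compress_group(x, w, rng) for x, w in groups])
posterior_mean = np.average(np.vstack(reps), axis=0, weights=np.array(ws))
```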


April 2017
Affiliations: (1) Duke University, (2) SAS Institute Inc, (3) Duke University

Any empirical data can be approximated by one of the Pearson distributions using the first four moments of the data (Elderton and Johnson, 1969; Pearson, 1895; Solomon and Stephens, 1978). Thus, Pearson distributions made statistical analysis possible for data with unknown distributions. There are both extant old-fashioned in-print tables (Pearson and Hartley, 1972) and contemporary computer programs (Amos and Daniel, 1971; Bouver and Bargmann, 1974; Bowman and Shenton, 1979; Davis and Stephens, 1983; Pan, 2009) available for obtaining percentage points of Pearson distributions corresponding to certain pre-specified percentages (or probability values) (e. Read More


A method is proposed to generate an optimal fit of a number of connected linear trend segments onto time-series data. To be able to efficiently handle many lines, the method employs a stochastic search procedure to determine optimal transition point locations. Traditional methods use exhaustive grid searches, which severely limit the scale of the problems for which they can be utilized. Read More


Markov jump processes (MJPs) are continuous-time stochastic processes that find wide application in a variety of disciplines. Inference for MJPs typically proceeds via Markov chain Monte Carlo, the state-of-the-art being an auxiliary variable Gibbs sampler proposed recently. This algorithm was designed for the situation where the MJP parameters are known, and Bayesian inference over unknown parameters is typically carried out by incorporating this into a larger Gibbs sampler. Read More


The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit. We take the idea of stacking from the point estimation literature and generalize it to the combination of predictive distributions, extending the utility function to any proper scoring rule, using Pareto smoothed importance sampling to efficiently compute the required leave-one-out posterior distributions, and applying regularization to get more stability. We compare stacking of predictive distributions to several alternatives: stacking of means, Bayesian model averaging (BMA), pseudo-BMA using AIC-type weighting, and a variant of pseudo-BMA that is stabilized using the Bayesian bootstrap. Read More
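
A minimal sketch of the stacking objective: choose weights on the simplex that maximise the summed log score of the mixture of leave-one-out predictive densities. Here `lpd_loo` is assumed to be a precomputed matrix of log p(y_i | M_k, y_{-i}) values (e.g. from PSIS-LOO), and the unconstrained softmax parameterisation with a derivative-free optimiser is just one convenient choice.

```python
# Stacking of predictive distributions: maximise the log score of the weighted mixture.
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax, logsumexp

def stacking_weights(lpd_loo):
    """lpd_loo: (n_points, n_models) matrix of leave-one-out log predictive densities."""
    n_models = lpd_loo.shape[1]

    def neg_log_score(z):
        w = softmax(z)                                             # map unconstrained z onto the simplex
        return -logsumexp(lpd_loo + np.log(np.maximum(w, 1e-12)), axis=1).sum()

    res = minimize(neg_log_score, np.zeros(n_models), method="Nelder-Mead")
    return softmax(res.x)

# Example with three models on fifty points (toy log predictive densities).
rng = np.random.default_rng(1)
print(stacking_weights(rng.normal(-1.0, 0.3, size=(50, 3))))
```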


The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields, through to the training of Gaussian processes. Exact calculation of this term is often intractable when the size of the kernel matrix exceeds a few thousand. In the spirit of probabilistic numerics, we reinterpret the problem of computing the log-determinant as a Bayesian inference problem. Read More


In modern probabilistic learning we often wish to perform automatic inference for Bayesian models. However, informative prior distributions can be costly and difficult to elicit, and, as a consequence, flat priors are often chosen with the hope that they are reasonably uninformative. Objective priors such as the Jeffreys and reference priors are generally preferable over flat priors but are not tractable to derive for many models of interest. Read More


The iterated posterior linearization filter (IPLF) is an algorithm for Bayesian state estimation that performs the measurement update using iterative statistical regression. The main result behind IPLF is that the posterior approximation is more accurate when the statistical regression of the measurement function is done in the posterior instead of the prior, as is done in non-iterative Kalman filter extensions. In IPLF, each iteration in principle gives a better posterior estimate, which yields a better statistical regression and a more accurate posterior estimate in the next iteration. Read More


Bayesian optimization (BO) is a global optimization strategy designed to find the minimum of expensive black-box functions $g$ typically defined on a continuous subset of $\mathbb{R}^d$. Using a Gaussian process (GP) as a surrogate model for the objective and an acquisition function to systematically search its domain, BO strategies aim to minimize the number of samples required to find the minimum of $g$. Although currently available acquisition functions address this goal with different degrees of success, an over-exploration effect of the contour of $g$ is typically observed. Read More
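
For context, a minimal BO loop with a scikit-learn GP surrogate and the standard expected improvement acquisition on a one-dimensional candidate grid is sketched below; it illustrates the baseline prone to over-exploration rather than the acquisition proposed in the paper, and the kernel, grid and budget choices are arbitrary.

```python
# Basic Bayesian optimisation loop with a GP surrogate and expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(g, bounds, n_init=5, n_iter=20, rng=None):
    rng = rng or np.random.default_rng()
    X = rng.uniform(*bounds, size=(n_init, 1))                  # initial design
    y = np.array([g(x[0]) for x in X])
    cand = np.linspace(*bounds, 500).reshape(-1, 1)             # candidate grid
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        mu, sd = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sd, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)       # expected improvement (minimisation)
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, g(x_next[0]))
    return X[np.argmin(y)], y.min()

# Example: minimise a cheap stand-in for an expensive black-box function.
print(bayes_opt(lambda x: np.sin(3 * x) + 0.5 * x ** 2, bounds=(-2.0, 2.0)))
```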


We formulate, and present a numerical method for solving, an inverse problem for inferring parameters of a deterministic model from stochastic observational data (quantities of interest). The solution, given as a probability measure, is derived using a Bayesian updating approach for measurable maps that finds a posterior probability measure that, when propagated through the deterministic model, produces a push-forward measure exactly matching the observed probability measure on the data. Our approach for finding such posterior measures, which we call consistent Bayesian inference, is simple and only requires the computation of the push-forward probability measure induced by the combination of a prior probability measure and the deterministic model. Read More


Misspecifications (i.e. errors in the parameters) of state space models lead to incorrect inference of the hidden states. Read More


Sequence analysis is being more and more widely used for the analysis of social sequences and other multivariate categorical time series data. However, it is often complex to describe, visualize, and compare large sequence data, especially when there are multiple parallel sequences per subject. Hidden (latent) Markov models (HMMs) are able to detect underlying latent structures and they can be used in various longitudinal settings: to account for measurement error, to detect unobservable states, or to compress information across several types of observations. Read More


Approximate Bayesian computation (ABC) is a method for Bayesian inference when the likelihood is unavailable but simulating from the model is possible. However, many ABC algorithms require a large number of simulations, which can be costly. To reduce the computational cost, surrogate models and Bayesian optimisation (BO) have been proposed. Read More


The calculation of minimum energy paths for transitions such as atomic and/or spin re-arrangements is an important task in many contexts and can often be used to determine the mechanism and rate of transitions. An important challenge is to reduce the computational effort in such calculations, especially when ab initio or electron density functional calculations are used to evaluate the energy since they can require large computational effort. Gaussian process regression is used here to reduce significantly the number of energy evaluations needed to find minimum energy paths of atomic rearrangements. Read More


Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) has proven to be a valuable tool. For instance, in case of model selection, this method relies on sampling a discrete model-indicator variable to estimate the posterior model probabilities. Read More


In this paper, we propose a Bayesian approach to obtain a sparse representation of the effect of a categorical predictor in regression type models. As the effect of a categorical predictor is captured by a group of level effects, sparsity can be achieved not only by excluding single irrelevant level effects but also by excluding the whole group of effects associated with a predictor or by fusing levels which have essentially the same effect on the response. To achieve this goal, we propose a prior which allows for almost perfect as well as almost zero dependence between level effects a priori. Read More


We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP) based method to approximate the joint distribution of the unknown parameters and the data. In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. Read More


In this article, we present an orthogonal basis expansion method for solving stochastic differential equations with a path-independent solution of the form $X_{t}=\phi(t,W_{t})$. For this purpose, we define a Hilbert space and construct an orthogonal basis for this inner product space with the aid of 2D-Hermite polynomials. Considering $X_{t}$ as an orthogonal basis expansion, the method is implemented and the expansion coefficients are obtained by solving a system of nonlinear integro-differential equations. Read More


We show that the generalized method of moments (GMM) estimation problem in instrumental variable quantile regression (IVQR) models can be equivalently formulated as a mixed integer quadratic programming problem. This enables exact computation of the GMM estimators for the IVQR models. We illustrate the usefulness of our algorithm via Monte Carlo experiments and an application to demand for fish. Read More


A flexible approach to modeling network data is based on exponential-family random graph models. We consider here exponential-family random graph models with additional structure in the form of local dependence, which have important conceptual and statistical advantages over models without additional structure. An open problem is how to estimate such models from large random graphs. Read More


Bayesian shrinkage methods have generated a lot of recent interest as tools for high-dimensional regression and model selection. These methods naturally facilitate tractable uncertainty quantification and incorporation of prior information. A common feature of these models, including the Bayesian lasso, global-local shrinkage priors, and spike-and-slab priors, is that the corresponding priors on the regression coefficients can be expressed as a scale mixture of normals. Read More


The CANDECOMP/PARAFAC (CP) tensor decomposition is a popular dimensionality-reduction method for multiway data. Dimensionality reduction is often sought since many high-dimensional tensors have low intrinsic rank relative to the dimension of the ambient measurement space. However, the emergence of `big data' poses significant computational challenges for computing this fundamental tensor decomposition. Read More
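
For reference, a minimal alternating least squares (ALS) sketch for a rank-R CP decomposition of a 3-way tensor is given below, written directly with NumPy; it is the basic in-memory algorithm, not the scalable approach the paper targets.

```python
# Basic CP-ALS for a 3-way tensor T of shape (I, J, K).
import numpy as np

def cp_als(T, rank, n_iter=100, rng=None):
    rng = rng or np.random.default_rng()
    dims = T.shape
    factors = [rng.standard_normal((d, rank)) for d in dims]    # factor matrices A, B, C
    unfold = [np.reshape(np.moveaxis(T, m, 0), (dims[m], -1)) for m in range(3)]
    for _ in range(n_iter):
        for m in range(3):
            others = [factors[i] for i in range(3) if i != m]
            # Khatri-Rao product of the two other factor matrices (second index varies fastest)
            kr = (others[1][None, :, :] * others[0][:, None, :]).reshape(-1, rank)
            # Hadamard product of the other Gram matrices gives the normal equations
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[m] = unfold[m] @ kr @ np.linalg.pinv(gram)
    return factors

# Example: recover a rank-2 structure from a noisy 20x20x20 tensor.
rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((20, 2)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * rng.standard_normal((20, 20, 20))
A_hat, B_hat, C_hat = cp_als(T, rank=2)
```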


A novel numerical method for the estimation of large time-varying parameter (TVP) models is proposed. The Kalman filter and Kalman smoother estimates of the TVP model are derived within the context of generalised linear least squares and through the use of numerical linear algebra. The method developed is based on numerically stable and computationally efficient strategies. Read More
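
For context, the standard Kalman filter recursions for a time-varying parameter regression with random-walk coefficients can be sketched as below; the noise variances and initialisation are illustrative, and this plain implementation lacks the numerically stable strategies the paper develops.

```python
# Kalman filter for y_t = x_t' beta_t + eps_t with random-walk coefficients beta_t.
import numpy as np

def tvp_kalman_filter(y, X, sigma_eps=1.0, sigma_eta=0.1):
    """y: (n,) responses, X: (n, k) regressors; returns filtered coefficient paths."""
    n, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 10.0                        # diffuse-ish initial covariance
    betas = np.empty((n, k))
    for t in range(n):
        P = P + sigma_eta ** 2 * np.eye(k)      # predict step (random-walk state)
        x = X[t]
        S = x @ P @ x + sigma_eps ** 2          # innovation variance
        K = P @ x / S                           # Kalman gain
        beta = beta + K * (y[t] - x @ beta)     # update step
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas
```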


The package cleanNLP provides a set of fast tools for converting a textual corpus into a set of normalized tables. The underlying natural language processing pipeline utilizes Stanford's CoreNLP library, exposing a number of annotation tasks for text written in English, French, German, and Spanish. Annotators include tokenization, part of speech tagging, named entity recognition, entity linking, sentiment analysis, dependency parsing, coreference resolution, and information extraction. Read More


Clustering is the process of finding underlying group structures in data. Although model-based clustering is firmly established in the multivariate case, there is a relative paucity of work on matrix variate distributions, and there are even fewer examples using matrix variate skew distributions. In this paper, we look at parameter estimation for a finite mixture of matrix variate skew-t distributions in the context of model-based clustering. Read More


Growth mixture models (GMMs) incorporate both conventional random effects growth modeling and latent trajectory classes as in finite mixture modeling; therefore, they offer a way to handle the unobserved heterogeneity between subjects in their development. GMMs with Gaussian random effects dominate the literature. When the data are asymmetric and/or have heavier tails, more than one latent class is required to capture the observed variable distribution. Read More


March 2017
Affiliations: (1) Department of Statistical Sciences, Sapienza University of Rome, Italy; (2) Department of Statistical Sciences, Sapienza University of Rome, Italy

When a Genetic Algorithm (GA), or a stochastic algorithm in general, is employed in a statistical problem, the obtained result is affected both by variability due to sampling, referring to the fact that only a sample is observed, and by variability due to the stochastic elements of the algorithm. This topic fits naturally within the framework of the statistical-computational tradeoff, a question that is crucial in recent problems and that requires statisticians to carefully balance the statistical and computational parts of the analysis while taking account of resource or time constraints. In the present work we analyze estimation problems tackled by GAs, in which the variability of the estimates can be decomposed into these two sources, considering constraints in the form of cost functions related to both data acquisition and the runtime of the algorithm. Read More