Computer Science - Numerical Analysis Publications (50)


This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Read More


On modern large-scale parallel computers, the performance of Krylov subspace iterative methods is limited by global synchronization. This has inspired the development of $s$-step Krylov subspace method variants, in which iterations are computed in blocks of $s$, which can reduce the number of global synchronizations per iteration by a factor of $O(s)$. Although the $s$-step variants are mathematically equivalent to their classical counterparts, they can behave quite differently in finite precision depending on the parameter $s$. Read More


This paper introduces multivariate input-output models to predict the errors and basis dimensions of local parametric Proper Orthogonal Decomposition reduced-order models. We refer to these multivariate mappings as the MP-LROM models. We employ Gaussian Processes and Artificial Neural Networks to construct approximations of these multivariate mappings. Read More


A fully (pseudo-)spectral solver for direct numerical simulations of large-scale turbulent channel flows is described. The solver utilizes the Chebyshev basis functions suggested by J. Shen [SIAM J. Read More


In this work, we propose two-level space-time domain decomposition preconditioners for parabolic problems discretized using finite elements. They are motivated as an extension to space-time of balancing domain decomposition by constraints preconditioners. The key ingredients to be defined are the sub-assembled space and operator, the coarse degrees of freedom (DOFs) in which we want to enforce continuity among subdomains at the preconditioner level, and the transfer operator from the sub-assembled to the original finite element space. Read More


This work proposes a machine-learning-based framework for estimating the error introduced by surrogate models of parameterized dynamical systems. The framework applies high-dimensional regression techniques (e.g. Read More


Kernel matrices appear in machine learning and non-parametric statistics. Given $N$ points in $d$ dimensions and a kernel function that requires $\mathcal{O}(d)$ work to evaluate, we present an $\mathcal{O}(dN\log N)$-work algorithm for the approximate factorization of a regularized kernel matrix, a common computational bottleneck in the training phase of a learning task. With this factorization, solving a linear system with a kernel matrix can be done with $\mathcal{O}(N\log N)$ work. Read More


Differential (Ore) type polynomials with "approximate" polynomial coefficients are introduced. These provide an effective notion of approximate differential operators, with a strong algebraic structure. We introduce the approximate Greatest Common Right Divisor Problem (GCRD) of differential polynomials, as a non-commutative generalization of the well-studied approximate GCD problem. Read More


Design of filters for graph signal processing benefits from knowledge of the spectral decomposition of matrices that encode graphs, such as the adjacency matrix and the Laplacian matrix, used to define the shift operator. For shift matrices with real eigenvalues, which arise for symmetric graphs, the empirical spectral distribution captures the eigenvalue locations. Under realistic circumstances, stochastic influences often affect the network structure and, consequently, the shift matrix empirical spectral distribution. Read More
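A minimal NumPy sketch of the quantity the abstract centers on: the empirical spectral distribution of a symmetric shift matrix, here illustrated with a random Erdos-Renyi adjacency matrix rather than any specific network from the paper.

```python
# Sketch: empirical spectral distribution (ESD) of a symmetric graph shift matrix.
# The Erdos-Renyi graph and its parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.1
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric adjacency -> real eigenvalues

eigvals = np.linalg.eigvalsh(A)    # eigenvalue locations of the shift matrix
hist, edges = np.histogram(eigvals, bins=30, density=True)  # ESD estimate
```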


The paper derives and analyses the (semi-)discrete dispersion relation of the Parareal parallel-in-time integration method. It investigates Parareal's wave propagation characteristics with the aim to better understand what causes the well documented stability problems for hyperbolic equations. The analysis shows that the instability is caused by convergence of the amplification factor to the exact value from above for medium to high wave numbers. Read More


The paper formulates the inverse quadratic programming problem as an unconstrained optimization problem: the unknown parameters (the matrix of the quadratic form and the vector of its quasi-linear part) must be found when only approximate estimates of the optimal solution of the direct problem and of the minimized target function are known, given as pairs of values lying in the corresponding neighborhoods. The formulation of the inverse problem and its solution are based on the least squares method. In explicit form, the solution of the inverse problem is derived as a system of linear equations. Read More


When a measurement falls outside the quantization or measurable range, it becomes saturated and cannot be used in classical reconstruction methods. For example, in C-arm angiography systems, which provide projection radiography, fluoroscopy, digital subtraction angiography, and are widely used for medical diagnoses and interventions, the limited dynamic range of C-arm flat detectors leads to overexposure in some projections during an acquisition, such as imaging relatively thin body parts (e.g. Read More


We analyzed the performance of a biologically inspired algorithm called the Corrected Projections Algorithm (CPA) when a sparseness constraint is required to unambiguously reconstruct an observed signal using atoms from an overcomplete dictionary. By changing the geometry of the estimation problem, CPA gives an analytical expression for a binary variable that indicates the presence or absence of a dictionary atom using an L2 regularizer. The regularized solution can be implemented using an efficient real-time Kalman-filter type of algorithm. Read More


This thesis examines a modern concept for machine numbers based on interval arithmetic called 'Unums' and compares it to IEEE 754 floating-point arithmetic, evaluating possible uses of this format where floating-point numbers are inadequate. In the course of this examination, this thesis builds theoretical foundations for IEEE 754 floating-point numbers, interval arithmetic based on the projectively extended real numbers and Unums. Read More


This report describes the computation of gradients by algorithmic differentiation for statistically optimum beamforming operations. In particular, the differentiation of complex-valued functions is a key component of this approach. Therefore, real-valued algorithmic differentiation is extended via the complex-valued chain rule. Read More


Non-negative matrix factorization (NMF) is a problem with many applications, ranging from facial recognition to document clustering. However, due to the variety of algorithms that solve NMF, the randomness involved in these algorithms, and the somewhat subjective nature of the problem, there is no clear "correct answer" to any particular NMF problem, and as a result, it can be hard to test new algorithms. This paper suggests some test cases for NMF algorithms derived from matrices with enumerable exact non-negative factorizations and perturbations of these matrices. Read More
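A small sketch of the idea described above: build a matrix with a known exact non-negative factorization, then perturb it to obtain a test case against which NMF solvers can be scored. The sizes, perturbation level, and error metric are illustrative assumptions, not the paper's specific test cases.

```python
# Construct an NMF test case with a known exact non-negative factorization.
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 20, 15, 4
W = rng.random((m, r))                        # known non-negative factors
H = rng.random((r, n))
X_exact = W @ H                               # exact NMF exists by construction

eps = 1e-3
X_test = X_exact + eps * rng.random((m, n))   # perturbed, still non-negative

def relative_error(W_hat, H_hat, X=X_exact):
    """Score a candidate factorization against the known ground truth."""
    return np.linalg.norm(W_hat @ H_hat - X) / np.linalg.norm(X)
```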


This paper introduces optimally-blended quadrature rules for isogeometric analysis and analyzes the numerical dispersion of the resulting discretizations. To quantify the approximation errors when we modify the inner products, we generalize the Pythagorean eigenvalue theorem of Strang and Fix. The proposed blended quadrature rules have advantages over alternative integration rules for isogeometric analysis on uniform and non-uniform meshes as well as for different polynomial orders and continuity of the basis. Read More


As illustrated via numerical experiments with an implementation in Spark (the popular platform for distributed computation), randomized algorithms solve two ubiquitous problems: (1) calculating a full principal component analysis or singular value decomposition of a highly rectangular matrix, and (2) calculating a low-rank approximation in the form of a singular value decomposition to an arbitrary matrix. Several optimizations to recently introduced methods yield results that are uniformly superior to those of the stock implementations. Read More
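For context, a generic randomized low-rank SVD sketch (in the Halko-Martinsson-Tropp style) in NumPy; the Spark implementation and the specific optimizations the abstract refers to are not reproduced here.

```python
# Randomized SVD of a highly rectangular matrix via a random range finder.
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):                           # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for range(A)
    B = Q.T @ A                                       # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

A = np.random.default_rng(2).standard_normal((10000, 50))   # tall, skinny matrix
U, s, Vt = randomized_svd(A, k=10)
```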


This work is devoted to the design of interior penalty discontinuous Galerkin (dG) schemes that preserve maximum principles at the discrete level for the steady transport and convection-diffusion problems and the respective transient problems with implicit time integration. Monotonic schemes that combine explicit time stepping with dG space discretization are very common, but the design of such schemes for implicit time stepping is rare, and it has so far only been attained for 1D problems. The proposed scheme is based on an artificial diffusion that depends linearly on a shock detector that identifies the troublesome areas. Read More


Owing to their low-complexity iterations, Frank-Wolfe (FW) solvers are well suited for various large-scale learning tasks. When block-separable constraints are also present, randomized FW has been shown to further reduce complexity by updating only a fraction of coordinate blocks per iteration. In this context, the present work develops feasibility-ensuring step sizes, and provably convergent randomized block Frank-Wolfe (RB-FW) solvers that are flexible in selecting the number of blocks to update per iteration. Read More
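A generic Frank-Wolfe sketch on the probability simplex for a quadratic objective, only to illustrate the low-complexity linear-minimization iterations mentioned above; the randomized block variant and the feasibility-ensuring step sizes proposed in the paper are not shown.

```python
# Classical Frank-Wolfe on the simplex {x >= 0, sum(x) = 1} for 0.5 x'Qx + b'x.
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))
Q = A.T @ A / d                                   # positive semidefinite quadratic term
b = rng.standard_normal(d)

x = np.ones(d) / d                                # feasible start
for t in range(200):
    grad = Q @ x + b
    s = np.zeros(d)
    s[np.argmin(grad)] = 1.0                      # linear minimization oracle over the simplex
    gamma = 2.0 / (t + 2.0)                       # classical step size
    x = (1.0 - gamma) * x + gamma * s             # convex combination stays feasible
```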


This work develops a parallelized algorithm to compute the dynamic mode decomposition (DMD) on a graphics processing unit using the streaming method of snapshots singular value decomposition. This allows the algorithm to operate efficiently on streaming data by avoiding redundant inner-products as new data becomes available. In addition, it is possible to leverage the native compressed format of many data streams, such as HD video and computational physics codes that are represented sparsely in the Fourier domain, to massively reduce data transfer from CPU to GPU and to enable sparse matrix multiplications. Read More
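For orientation, a plain (non-streaming, CPU) sketch of exact DMD computed from snapshot pairs; the GPU streaming method-of-snapshots implementation and the compressed-data path described above are not reproduced.

```python
# Exact dynamic mode decomposition (DMD) from snapshot pairs X, Y with Y[:,k] = F(X[:,k]).
import numpy as np

def dmd(X, Y, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r, :].T
    A_tilde = U.T @ Y @ V / s                    # reduced operator (columns scaled by 1/s)
    eigvals, W = np.linalg.eig(A_tilde)          # DMD eigenvalues
    modes = (Y @ V / s) @ W                      # exact DMD modes
    return eigvals, modes

rng = np.random.default_rng(0)
data = rng.standard_normal((64, 40))             # toy snapshot sequence
lam, phi = dmd(data[:, :-1], data[:, 1:], r=10)
```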


The Kaczmarz method is an iterative algorithm for solving systems of linear equalities and inequalities by iteratively projecting onto these constraints. Recently, Strohmer and Vershynin [J. Fourier Anal. Read More
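A hedged sketch of the randomized Kaczmarz iteration of Strohmer and Vershynin for a consistent system of equalities: project the current iterate onto one randomly chosen equation, with rows sampled proportionally to their squared norms. The problem sizes below are illustrative.

```python
# Randomized Kaczmarz for Ax = b (rows sampled with probability ~ ||a_i||^2).
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A * A, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]   # project onto row i's hyperplane
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 50))
x_true = rng.standard_normal(50)
x_hat = randomized_kaczmarz(A, A @ x_true)
```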


Laguerre and Laguerre-type polynomials are orthogonal polynomials on the interval $[0,\infty)$ with respect to a weight function of the form $w(x) = x^{\alpha} e^{-Q(x)}$, where $Q(x) = \sum_{k=0}^m q_k x^k$, $\alpha > -1$, and $q_m > 0$. The classical Laguerre polynomials correspond to $Q(x)=x$. The computation of higher-order terms of the asymptotic expansions of these polynomials for large degree becomes quite complicated, and a full description seems to be lacking in the literature. Read More
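A minimal sketch for the classical case $Q(x) = x$: evaluate the generalized Laguerre polynomials $L_n^{(\alpha)}$ by the standard three-term recurrence. The large-degree asymptotic expansions the abstract discusses are not reproduced here.

```python
# Generalized Laguerre polynomials via (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}.
import numpy as np

def laguerre(n, alpha, x):
    x = np.asarray(x, dtype=float)
    L_prev, L_curr = np.ones_like(x), 1.0 + alpha - x          # L_0 and L_1
    if n == 0:
        return L_prev
    for k in range(1, n):
        L_prev, L_curr = L_curr, ((2*k + 1 + alpha - x) * L_curr
                                  - (k + alpha) * L_prev) / (k + 1)
    return L_curr

x = np.linspace(0.0, 10.0, 5)
print(laguerre(5, 0.5, x))   # orthogonal w.r.t. w(x) = x^0.5 * exp(-x) on [0, inf)
```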


In this paper, an efficient divide-and-conquer (DC) algorithm is proposed for symmetric tridiagonal matrices based on ScaLAPACK and hierarchically semiseparable (HSS) matrices. HSS matrices are an important class of rank-structured matrices. Most of the DC algorithm's time is spent computing the eigenvectors via matrix-matrix multiplications (MMM). Read More


We present a polynomial multigrid method for the nodal interior penalty formulation of the Poisson equation on three-dimensional Cartesian grids. Its key ingredient is a weighted overlapping Schwarz smoother operating on element-centered subdomains. The MG method reaches superior convergence rates corresponding to residual reductions of about two orders of magnitude within a single V(1,1) cycle. Read More


A stochastic iterative algorithm approximating second-order information using von Neumann series is discussed. We present an improved and simplified convergence analysis, in contrast to a similar algorithm and its analysis, LISSA. The algorithm is primarily suitable for training large scale linear models, where the number of data points is very large. Read More
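To make the von Neumann series idea concrete, here is a small sketch: for a symmetric positive definite matrix $H$ scaled so that $\|I - H\| < 1$, the product $H^{-1}g$ can be approximated by the truncated series $\sum_{k=0}^{K} (I - H)^k g$ using matrix-vector products only. The scaling and sizes below are illustrative, and this is not the stochastic algorithm analyzed in the paper.

```python
# Truncated von Neumann series approximation of H^{-1} g for SPD H with ||I - H|| < 1.
import numpy as np

def neumann_inverse_apply(H, g, n_terms=200):
    v = g.copy()              # term (I - H)^0 g
    result = g.copy()
    for _ in range(n_terms):
        v = v - H @ v         # next term: (I - H) v
        result += v
    return result             # approximates H^{-1} g

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
H = B @ B.T / 50 + 0.5 * np.eye(50)
H /= np.linalg.norm(H, 2)     # scale so eigenvalues lie in (0, 1]
g = rng.standard_normal(50)
print(np.linalg.norm(H @ neumann_inverse_apply(H, g) - g))   # small residual
```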


The Fast Fourier Transform (FFT) is an algorithm of paramount importance in signal processing, as it allows the Fourier transform to be applied in O(n log n) instead of O(n^2) arithmetic operations. Graph Signal Processing (GSP) is a recent research domain that generalizes classical signal processing tools, such as the Fourier transform, to situations where the signal domain is given by an arbitrary graph instead of a regular grid. Today, there is no method to rapidly apply graph Fourier transforms. Read More
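Background sketch of the gap the abstract refers to: on an arbitrary graph, the graph Fourier transform is usually obtained from a full eigendecomposition of the graph Laplacian, an O(n^3) step with no FFT-like fast algorithm in general. The random graph below is an illustrative assumption.

```python
# Graph Fourier transform via eigendecomposition of the combinatorial Laplacian.
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian

eigvals, U = np.linalg.eigh(L)            # O(n^3): the expensive step with no fast analogue
signal = rng.standard_normal(n)
signal_hat = U.T @ signal                 # graph Fourier transform
recovered = U @ signal_hat                # inverse graph Fourier transform
```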


Functions of one or more variables are usually approximated with a basis: a complete, linearly independent set of functions that spans an appropriate function space. The topic of this paper is the numerical approximation of functions using the more general notion of frames: complete systems that are generally redundant but provide stable infinite representations. While frames are well known in image and signal processing, coding theory and other areas of mathematics, their use in numerical analysis is far less widespread. Read More


Nanoindentation involves probing a hard diamond tip into a material, where the load and the displacement experienced by the tip are recorded continuously. This load-displacement data is a direct function of the material's innate stress-strain behavior. Thus, it is theoretically possible to extract mechanical properties of a material through nanoindentation. Read More


Computational techniques are extensively applied in nonlinear science. However, while the use of computers in research has become extensive, the evaluation of numerical results has not grown at the same pace. Hammel et al. Read More


We investigate the fundamental conditions on the sampling pattern, i.e., locations of the sampled entries, for finite completability of a low-rank tensor given some components of its Tucker rank. Read More


Iterative methods on irregular grids have been used widely in all areas of computational science and engineering for solving partial differential equations with complex geometry. They provide the flexibility to express complex shapes with relatively low computational cost. However, the direction of the evolution of high-performance processors over the last two decades has caused serious degradation of the computational efficiency of iterative methods on irregular grids, because of relatively low memory bandwidth. Read More


In this paper, synchronization of identical switched chaotic systems is explored based on Lyapunov theory of guaranteed stability. Concepts from robust control principles and switched linear systems are merged together to derive a sufficient condition for synchronization of identical master-slave switched nonlinear chaotic systems and are expressed in the form of bilinear matrix inequalities (BMIs). The nonlinear controller design problem is then recast in the form of linear matrix inequalities (LMIs) to facilitate numerical computation by standard LMI solvers and is illustrated by appropriate examples. Read More


Non-uniform fast Fourier Transform (NUFFT) and inverse NUFFT (INUFFT) algorithms, based on the Fast Multipole Method (FMM) are developed and tested. Our algorithms are based on a novel factorization of the FFT kernel, and are implemented with attention to data structures and error analysis. Note: This unpublished manuscript was available on our web pages and has been referred to by others in the literature. Read More
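As a baseline for what NUFFT algorithms (including the FMM-based one above) accelerate, here is the naive O(N·M) non-uniform discrete Fourier transform of type 1; this is not the paper's factorization, and the sample counts are illustrative.

```python
# Naive type-1 non-uniform DFT: f_k = sum_j c_j * exp(-i k x_j) for non-uniform x_j.
import numpy as np

def nudft_type1(x, c, n_modes):
    # x: non-uniform sample locations in [0, 2*pi); c: sample values
    k = np.arange(-(n_modes // 2), n_modes - n_modes // 2)
    return np.exp(-1j * np.outer(k, x)) @ c

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 500)
c = rng.standard_normal(500)
f_hat = nudft_type1(x, c, n_modes=64)
```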


Predictive simulations are crucial for the success of many subsurface applications, and it is highly desirable to obtain accurate non-negative solutions for transport equations in these numerical simulations. In this paper, we propose a computational framework based on the variational inequality (VI) which can also be used to enforce important mathematical properties (e.g. Read More


This manuscript contains some thoughts on the discretization of the classical heat equation. Namely, we discuss the advantages and disadvantages of explicit and implicit schemes. Then, we show how to overcome some disadvantages while preserving some advantages. Read More
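A minimal sketch of the trade-off discussed above: one explicit (forward Euler) and one implicit (backward Euler) time step for the 1D heat equation u_t = u_xx with homogeneous Dirichlet boundaries. The explicit step is cheap but requires dt <= dx^2/2, while the implicit step solves a linear system and is unconditionally stable. Grid sizes and the initial condition are illustrative.

```python
# Explicit vs. implicit Euler step for the 1D heat equation on a uniform grid.
import numpy as np

n = 100
dx = 1.0 / (n + 1)
dt = 0.4 * dx**2                      # satisfies the explicit stability limit dt <= dx^2/2
x = np.linspace(dx, 1.0 - dx, n)
u = np.sin(np.pi * x)                 # initial condition

main = -2.0 * np.ones(n)
off = np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2   # discrete Laplacian

u_explicit = u + dt * (A @ u)                               # forward Euler: matrix-vector product
u_implicit = np.linalg.solve(np.eye(n) - dt * A, u)         # backward Euler: tridiagonal solve
```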


Correlation clustering is a technique for aggregating data based on qualitative information about which pairs of objects are labeled 'similar' or 'dissimilar.' Because the optimization problem is NP-hard, much of the previous literature focuses on finding approximation algorithms. In this paper we explore a new approach to correlation clustering by considering how to solve the problem when the data to be clustered can be represented by a low-rank matrix. Read More


We consider algorithms for going from a "full" matrix to a condensed "band bidiagonal" form using orthogonal transformations. We use the framework of "algorithms by tiles". Within this framework, we study: (i) the tiled bidiagonalization algorithm BiDiag, which is a tiled version of the standard scalar bidiagonalization algorithm; and (ii) the R-bidiagonalization algorithm R-BiDiag, which is a tiled version of the algorithm that first performs the QR factorization of the initial matrix and then the band bidiagonalization of the R-factor. Read More


Conventional sampling techniques fall short of drawing descriptive sketches of the data when the data is grossly corrupted as such corruptions break the low rank structure required for them to perform satisfactorily. In this paper, we present new sampling algorithms which can locate the informative columns in presence of severe data corruptions. In addition, we develop new scalable randomized designs of the proposed algorithms. Read More


The t-SVD based Tensor Robust Principal Component Analysis (TRPCA) decomposes a low-rank multi-linear signal corrupted by gross errors into a low multi-rank component and a sparse component by simultaneously minimizing the tensor nuclear norm and the $\ell_1$ norm. But if the multi-rank of the signal is considerably large and/or a large amount of noise is present, the performance of TRPCA deteriorates. To overcome this problem, this paper proposes a new efficient iterative reweighted tensor decomposition scheme based on t-SVD which significantly improves tensor multi-rank in TRPCA. Read More


In this paper we consider various splitting schemes for unsteady problems containing the grad-div operator. The fully implicit discretization of such problems would yield at each time step a linear problem that couples all components of the solution vector. In this paper we discuss various possibilities to decouple the equations for the different components that result in unconditionally stable schemes. Read More


In this paper, a goal-oriented error estimation technique for static response sensitivity analysis is proposed based on the constitutive relation error (CRE) estimation for finite element analysis (FEA). Strict upper and lower bounds of various quantities of interest (QoI) associated with the response sensitivity derivative fields are acquired. Numerical results are presented to validate the strict bounding properties of the proposed technique. Read More


Alternating Direction Method of Multipliers (ADMM) is a popular method for solving machine learning problems. Stochastic ADMM was first proposed in order to reduce the per-iteration computational complexity, making it more suitable for big data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM in order to obtain a fast convergence rate, such as SAG-ADMM and SVRG-ADMM, but the convergence is still suboptimal w. Read More


Riemannian optimization methods have been shown to be both fast and accurate in recovering a large-scale tensor from its incomplete observation. However, in almost all recent Riemannian tensor completion methods, only the low-rank constraint is considered. Another important factor, side information or features, remains largely unexploited within the Riemannian optimization framework. Read More


The technique of matrix sketching, such as the use of random projections, has been shown in recent years to be a powerful tool for accelerating many important statistical learning techniques. Research has so far focused largely on using sketching for the "vanilla" un-regularized versions of these techniques. Here we study sketching methods for regularized variants of linear regression, low rank approximations, and canonical correlation analysis. Read More
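A generic sketch-and-solve example for one of the regularized problems mentioned, ridge regression, using a Gaussian random projection of the rows; the specific sketching constructions analyzed in the paper are not reproduced, and the sketch size below is an illustrative choice.

```python
# Sketch-and-solve ridge regression: replace n rows by s << n randomly projected rows.
import numpy as np

rng = np.random.default_rng(0)
n, d, s, lam = 20000, 50, 500, 1.0
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((s, n)) / np.sqrt(s)     # Gaussian sketching matrix
Xs, ys = S @ X, S @ y                            # sketched data

w_sketch = np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ ys)
w_exact = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(np.linalg.norm(w_sketch - w_exact) / np.linalg.norm(w_exact))
```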


Kernel Ridge Regression (KRR) is a simple yet powerful technique for non-parametric regression whose computation amounts to solving a linear system. This system is usually dense and highly ill-conditioned. In addition, the dimensions of the matrix are the same as the number of data points, so direct methods are unrealistic for large-scale datasets. Read More
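A minimal sketch of the computation the abstract refers to: KRR reduces to solving the dense system (K + lambda*I) alpha = y, whose dimension equals the number of data points. The kernel choice, bandwidth, and problem size are illustrative assumptions; a direct O(n^3) solve like this is exactly what becomes unrealistic at scale.

```python
# Kernel ridge regression with a Gaussian (RBF) kernel, solved directly.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-1.0, 1.0, (n, 2))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(n)

sq_dists = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
K = np.exp(-sq_dists / 0.5)                      # dense n x n kernel matrix
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(n), y)  # direct solve: O(n^3)

y_fit = K @ alpha                                # predictions at the training points
```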


Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms has led some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior ($P^3$) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. Read More


Arb is a C library for arbitrary-precision interval arithmetic using the midpoint-radius representation, also known as ball arithmetic. It supports real and complex numbers, polynomials, power series, matrices, and evaluation of many special functions. The core number types are designed for versatility and speed in a range of scenarios, allowing performance that is competitive with non-interval arbitrary-precision types such as MPFR and MPC floating-point numbers. Read More


This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Read More


A longstanding problem in the floating-point implementation of numerical programs is to provide efficient yet precise analysis of output errors. We present a framework to compute lower bounds on absolute roundoff errors for numerical programs implementing polynomial functions with box-constrained input variables. Our study relies on semidefinite programming (SDP) relaxations and is complementary to over-approximation frameworks, which obtain upper bounds for the absolute roundoff error. Read More