Jeffrey A. Fessler

Pub Categories

 
Mathematics - Optimization and Control (10)
Statistics - Machine Learning (6)
Computer Science - Learning (6)
Statistics - Theory (2)
Mathematics - Statistics (2)
Quantum Physics (1)
Statistics - Computation (1)
Statistics - Applications (1)

Publications Authored By Jeffrey A. Fessler

The development of computed tomography (CT) image reconstruction methods that significantly reduce patient radiation exposure while maintaining high image quality is an important area of research in low-dose CT (LDCT) imaging. We propose a new penalized weighted least squares (PWLS) reconstruction method that exploits regularization based on an efficient Union of Learned TRAnsforms (PWLS-ULTRA). The union of square transforms is pre-learned from numerous image patches extracted from a dataset of CT images or volumes.
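
As context, the generic PWLS cost has the form below, with $A$ the system matrix, $W$ the statistical weighting, and $\mathsf{R}$ the transform-based regularizer (a sketch of the standard formulation; the specific ULTRA regularizer is defined in the paper):

    \hat{x} = \arg\min_{x \geq 0} \; \tfrac{1}{2} \| y - A x \|_W^2 + \beta \, \mathsf{R}(x)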

Principal Component Analysis (PCA) is a classical method for reducing the dimensionality of data by projecting them onto a subspace that captures most of their variation. Effective use of PCA in modern applications requires understanding its performance for data that are both high-dimensional (i.e. …
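
A minimal sketch of PCA via the SVD of centered data (generic numpy, not the paper's analysis; the paper's results concern how these estimates behave as dimension and sample size grow together):

    import numpy as np

    def pca(X, k):
        """Project n samples (rows of X) onto the top-k principal subspace."""
        Xc = X - X.mean(axis=0)                      # center each variable
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:k]                          # top-k right singular vectors
        return Xc @ components.T, components         # scores and loadings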

First-order methods with momentum, such as Nesterov's fast gradient method (FGM), are very useful for convex optimization problems, but can exhibit undesirable oscillations that slow convergence for some applications. An adaptive restarting scheme can improve the convergence rate of FGM when the strong convexity parameter of the cost function is unknown, or when the iterates enter a locally well-conditioned region. Recently, we introduced an optimized gradient method (OGM), a first-order algorithm with an inexpensive per-iteration computational cost similar to that of FGM, yet with a worst-case cost-function convergence bound that is half that of FGM and that is optimal for large-dimensional smooth convex problems.
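
A toy sketch of FGM with a function-value restart heuristic (a generic O'Donoghue-Candès-style restart, not the OGM variant analyzed here; f, grad, x0, and L are assumed inputs):

    import numpy as np

    def fgm_restart(f, grad, x0, L, iters):
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_new = y - grad(y) / L                       # gradient step at the momentum point
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # Nesterov momentum parameter
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            if f(x_new) > f(x):                           # cost increased: discard momentum
                y, t_new = x_new.copy(), 1.0
            x, t = x_new, t_new
        return x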

Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques.
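
For instance, low-rank structure is typically enforced via singular value thresholding, the proximal operator of the nuclear norm (a generic sketch, not the specific dynamic-imaging model):

    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: prox of tau * (nuclear norm)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt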

Principal Component Analysis (PCA) is a method for estimating a subspace given noisy samples. It is useful in a variety of problems ranging from dimensionality reduction to anomaly detection and the visualization of high-dimensional data. PCA performs well in the presence of moderate noise and even with missing data, but is also sensitive to outliers.
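
The outlier sensitivity is easy to demonstrate: a single gross outlier can swing the leading principal component (a hypothetical toy example, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2)) * np.array([3.0, 0.3])   # variance concentrated on axis 0
    v_clean = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]
    X[0] = [0.0, 1e3]                                          # one corrupted sample
    v_out = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]
    print(abs(v_clean @ v_out))                                # near 0: the component rotated toward the outlier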

We consider minimizing a composite function that is the sum of a strongly convex function and a convex function. The fast dual proximal gradient (FDPG) method decreases the dual function at a rate of $O(1/k^2)$, leading to an $O(1/k)$ rate of decrease for the primal function. We propose a generalized FDPG method that guarantees an $O(1/k^{1.5})$ rate of decrease for the primal function.
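
In symbols, with $F = f + g$ for $f$ strongly convex and $g$ convex (a restatement of the setup; constants omitted):

    F(x_k) - F(x_\star) = O(1/k) \ \text{(FDPG)}, \qquad F(x_k) - F(x_\star) = O(1/k^{1.5}) \ \text{(proposed generalization)}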

The "Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)", also known as a fast proximal gradient method (FPGM) in general, is widely used for efficiently minimizing composite convex functions with a nonsmooth term such as the l1 regularizer. This paper first provides an alternative way of developing FISTA (FPGM). Specifically, this paper shows that FISTA (FPGM) corresponds to an optimized approach to accelerating the proximal gradient method (PGM) with respect to the rate of decrease of the cost function. Read More

The optimized gradient method (OGM) was recently developed by optimizing the step coefficients of first-order methods with respect to the function value. OGM has a per-iteration computational cost similar to that of Nesterov's fast gradient method (FGM), and satisfies a worst-case function-value convergence bound that is half that of FGM. Moreover, OGM was recently shown to achieve the optimal cost-function complexity bound of first-order methods (with either fixed or dynamic step sizes) for smooth convex minimization.
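
For reference, the OGM iteration adds one extra momentum term to the FGM update (a sketch of the smooth unconstrained form from Kim and Fessler; the final iteration uses a modified momentum factor omitted here):

    import numpy as np

    def ogm(grad, x0, L, N):
        x, y, th = x0.copy(), x0.copy(), 1.0
        for _ in range(N):
            y_new = x - grad(x) / L
            th_new = (1 + np.sqrt(1 + 4 * th * th)) / 2
            # FGM-style momentum plus an additional over-relaxation term
            x = y_new + ((th - 1) / th_new) * (y_new - y) + (th / th_new) * (y_new - x)
            y, th = y_new, th_new
        return x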

Statistical image reconstruction (SIR) methods have been studied extensively for X-ray computed tomography (CT) because of their potential to reduce X-ray dose while maintaining image quality. However, the longer reconstruction time of SIR methods hinders their practical use in X-ray CT. To accelerate statistical methods, many optimization techniques have been investigated.
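
One widely used acceleration is ordered subsets: each update uses only a subset of the measurements, with the subset gradient scaled up to approximate the full gradient (a generic sketch under a least-squares data term; A_blocks and b_blocks are hypothetical pre-partitioned subsets, not the paper's specific algorithm):

    import numpy as np

    def os_pass(A_blocks, b_blocks, x, step):
        """One ordered-subsets pass: M scaled subset gradients stand in for M full gradients."""
        M = len(A_blocks)
        for A_m, b_m in zip(A_blocks, b_blocks):
            x = x - step * M * (A_m.T @ (A_m @ x - b_m))
        return x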

The sparsity of natural signals and images in a transform domain or dictionary has been extensively exploited in several applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise in many applications compared to fixed or analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step.

The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step.
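
The cost of synthesis sparse coding is easiest to see by contrast with the one case where it is cheap: for an orthonormal dictionary, the $\ell_0$-penalized problem has a closed-form hard-thresholding solution, whereas for a general learned dictionary it is NP-hard (a generic illustration, not these papers' method):

    import numpy as np

    def sparse_code_orthonormal(D, x, lam):
        """Minimize ||x - D z||^2 + lam^2 * ||z||_0 for an orthonormal dictionary D."""
        z = D.T @ x                  # analysis coefficients
        z[np.abs(z) < lam] = 0.0     # hard threshold: keep coefficients with |z_j| >= lam
        return z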

This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed an optimized gradient method (OGM) for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is half that of Nesterov's fast gradient method (FGM), yet has a similarly efficient practical implementation. Drori recently showed that OGM has optimal complexity over the general class of first-order methods.
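
Quantitatively, with $R = \|x_0 - x_\star\|$, the bounds compare roughly as follows (constants hedged; $\lesssim$ hides lower-order terms):

    f(x_N) - f(x_\star) \leq \frac{2 L R^2}{(N+1)^2} \ \text{(FGM)}, \qquad f(x_N) - f(x_\star) \lesssim \frac{L R^2}{(N+1)^2} \ \text{(OGM)}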

Iterative majorize-minimize (MM) (also called optimization transfer) algorithms solve challenging numerical optimization problems by solving a series of "easier" optimization problems that are constructed to guarantee monotonic descent of the cost function. Many MM algorithms replace a computationally expensive Hessian matrix with another more computationally convenient majorizing matrix. These majorizing matrices are often generated using various matrix inequalities, and consequently the set of available majorizers is limited to structures for which these matrix inequalities can be efficiently applied.
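
A concrete instance is the classical separable quadratic surrogate for least squares: the Hessian $A^T A$ is majorized by a diagonal matrix built from row/column sums of $|A|$ (via Cauchy-Schwarz), giving a cheap monotone update (a generic sketch of one standard matrix-inequality construction, assuming A has no zero columns):

    import numpy as np

    def mm_least_squares(A, b, x, iters):
        """Monotone MM for 0.5*||Ax - b||^2 with a diagonal (SQS) majorizer of A^T A."""
        absA = np.abs(A)
        d = absA.T @ (absA @ np.ones(A.shape[1]))   # diagonal majorizer: diag(d) >= A^T A
        for _ in range(iters):
            x = x - (A.T @ (A @ x - b)) / d         # surrogate minimization step
        return x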

We propose a general framework for reconstructing transform-sparse images from undersampled (squared)-magnitude data corrupted with outliers. This framework is implemented using a multi-layered approach, combining multiple initializations (to address the nonconvexity of the phase retrieval problem), repeated minimization of a convex majorizer (a surrogate for a nonconvex objective function), and iterative optimization using the alternating directions method of multipliers. Exploiting the generality of this framework, we investigate a Laplace measurement noise model that is better adapted to the outliers present in the data than the conventional Gaussian noise model.
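
The noise-model swap amounts to changing the data-fit term from squared residuals to absolute residuals (a schematic comparison only; the paper handles this inside the convex-majorizer/ADMM framework):

    import numpy as np

    def gaussian_fit(y, Ax):
        """Squared residuals on magnitude-squared data: quadratic growth, outlier-sensitive."""
        return np.sum((y - np.abs(Ax) ** 2) ** 2)

    def laplace_fit(y, Ax):
        """Absolute residuals: linear growth in the residual, hence more robust to outliers."""
        return np.sum(np.abs(y - np.abs(Ax) ** 2))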

We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle recently described a numerical method for computing the $N$-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods, and Nesterov's fast gradient methods. However, Drori and Teboulle's numerical method is computationally expensive for large $N$, and the corresponding numerically optimized first-order algorithm requires impractical memory and computation for large-scale optimization problems.
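
The class being optimized over can be written with per-iteration step coefficients $h_{k,i}$ (following Drori and Teboulle's formulation):

    x_{k+1} = x_k - \frac{1}{L} \sum_{i=0}^{k} h_{k+1,i} \, \nabla f(x_i)

Particular choices of the $h_{k,i}$ recover gradient descent, heavy-ball methods, and FGM as special cases.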

The augmented Lagrangian (AL) method, which solves convex optimization problems with linear constraints, has recently drawn attention in imaging applications because of its decomposable structure for composite cost functions and its empirically fast convergence under weak conditions. However, for problems such as X-ray computed tomography (CT) image reconstruction and large-scale sparse regression with "big data", where there is no efficient way to solve the inner least-squares problem, the AL method can be slow due to the inevitable iterative inner updates. In this paper, we focus on solving regularized (weighted) least-squares problems using a linearized variant of the AL method that replaces the quadratic AL penalty term in the scaled augmented Lagrangian with its separable quadratic surrogate (SQS) function. This leads to a much simpler splitting-based algorithm that is amenable to ordered-subsets (OS) acceleration, OS-LALM, for X-ray CT image reconstruction.

In single spin Magnetic Resonance Force Microscopy (MRFM), the objective is to detect the presence of an electron (or nuclear) spin in a sample volume by measuring spin-induced attonewton forces using a micromachined cantilever. In the OSCAR method of single spin MRFM, the spins are manipulated by an external rf field to produce small periodic deviations in the resonant frequency of the cantilever. These deviations can be detected by frequency demodulation followed by conventional amplitude or energy detection.