# Richard Zhang

## Contact Details

Name: Richard Zhang


## Pub Categories

- Mathematics - Optimization and Control (3)
- Computer Science - Computer Vision and Pattern Recognition (3)
- Mathematics - Numerical Analysis (2)
- Computer Science - Graphics (1)

## Publications Authored By Richard Zhang

We propose a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user "hints," to an output colorization with a Convolutional Neural Network (CNN). Rather than using hand-defined rules, the network propagates user edits by fusing low-level cues with high-level semantic information learned from large-scale data.
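As a toy illustration of the kind of input representation such a system might consume (the channel layout and the `build_network_input` helper below are hypothetical, not the paper's code), the grayscale image, the sparse ab color hints, and a binary hint-location mask can be stacked into a single multi-channel tensor:

```python
import numpy as np

def build_network_input(gray, hint_ab, hint_mask):
    """Stack the grayscale channel, sparse ab hint channels, and a
    binary hint-location mask into one multi-channel input tensor."""
    # gray: (H, W), hint_ab: (H, W, 2), hint_mask: (H, W)
    return np.concatenate(
        [gray[..., None], hint_ab, hint_mask[..., None]], axis=-1
    )

H, W = 64, 64
gray = np.random.rand(H, W)        # grayscale input
hint_ab = np.zeros((H, W, 2))      # sparse user color hints
mask = np.zeros((H, W))            # 1 where the user placed a hint
hint_ab[10, 10] = [0.3, -0.2]      # a single local hint
mask[10, 10] = 1.0

x = build_network_input(gray, hint_ab, mask)
print(x.shape)  # (64, 64, 4)
```

The mask channel lets the network distinguish "no hint here" from "the user asked for a neutral color," which is why the hints and their locations are passed separately.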

Semidefinite programs (SDPs) are powerful theoretical tools that have been studied for over two decades, but their practical use remains limited due to computational difficulties in solving large-scale, realistic-sized problems. In this paper, we describe a modified interior-point method for the efficient solution of large-and-sparse low-rank SDPs, which find applications in graph theory, approximation theory, control theory, sum-of-squares optimization, etc. Given that the problem data are large and sparse, conjugate gradients (CG) can be used to avoid forming, storing, and factoring the large and fully dense interior-point Hessian matrix, but the resulting convergence rate is usually slow due to ill-conditioning.
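The matrix-free idea behind this trade-off can be sketched with a generic textbook CG routine (not the paper's modified interior-point method): CG needs only matrix-vector products, so the matrix is never formed, stored, or factored, but on an ill-conditioned system it can take many iterations.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, maxiter=300):
    """Matrix-free CG for SPD systems: only matrix-vector products
    are required, so the matrix is never materialized or factored."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Laplacian applied as a stencil (never stored as a matrix).
# It is ill-conditioned, so unpreconditioned CG needs many iterations.
n = 100
def laplacian(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = conjugate_gradient(laplacian, b)
print(np.linalg.norm(laplacian(x) - b) < 1e-6)  # True
```

This is exactly the tension the abstract describes: the matvec-only interface avoids the dense Hessian, but the iteration count is governed by conditioning, which motivates the paper's modifications.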

We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another.

Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations.
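One common way to make the underconstrained problem tractable is to pose colorization as classification over a quantized ab color space rather than regression (which tends to average toward desaturated colors). The `quantize_ab` helper below is a simplified sketch of such binning, with a hypothetical grid size and value range, not the paper's exact quantization:

```python
import numpy as np

def quantize_ab(ab, grid=10):
    """Map continuous ab values in [-1, 1] to discrete bin indices,
    turning per-pixel colorization into a classification problem."""
    idx = np.clip(((ab + 1) / 2 * grid).astype(int), 0, grid - 1)
    # Flatten the (a_bin, b_bin) pair into a single class id
    return idx[..., 0] * grid + idx[..., 1]

# Two example pixels: a neutral gray and a saturated color
ab = np.array([[[0.0, 0.0], [0.9, -0.9]]])  # shape (1, 2, 2)
bins = quantize_ab(ab)
print(bins)  # [[55 90]]
```

Predicting a distribution over discrete bins lets the model commit to one vibrant mode instead of averaging over all plausible colors.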

We consider the solution of linear saddle-point problems, using the alternating direction method of multipliers (ADMM) as a preconditioner for the generalized minimum residual method (GMRES). We show, using theoretical bounds and empirical results, that Krylov subspace acceleration makes ADMM remarkably insensitive to the parameter choice. We prove that ADMM-GMRES can consistently converge, irrespective of the exact parameter choice, to an $\epsilon$-accurate solution of a $\kappa$-conditioned problem in $O(\kappa^{2/3}\log\epsilon^{-1})$ iterations.
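The acceleration idea can be illustrated on a generic contractive linear iteration x &larr; Gx + c, standing in for the linear ADMM update (the minimal full-GMRES routine below is a textbook sketch, not the paper's solver): running GMRES on the equivalent system (I - G)x = c reaches a much smaller error than the plain iteration with the same number of matrix-vector products.

```python
import numpy as np

def gmres(matvec, b, m):
    """Minimal full GMRES from x0 = 0: Arnoldi builds an orthonormal
    Krylov basis, then the residual is minimized over that subspace."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = matvec(Q[:, j])
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :m] @ y

rng = np.random.default_rng(1)
n, k = 60, 20

# Symmetric contraction G with spectral radius 0.95, standing in for
# a linear fixed-point update x <- G x + c
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
G = U @ np.diag(np.linspace(0.1, 0.95, n)) @ U.T
c = rng.normal(size=n)
x_star = np.linalg.solve(np.eye(n) - G, c)

# Plain fixed-point iteration for k steps
x = np.zeros(n)
for _ in range(k):
    x = G @ x + c
plain_err = np.linalg.norm(x - x_star)

# GMRES with the same budget of k matrix-vector products, applied to
# the equivalent linear system (I - G) x = c
x_g = gmres(lambda v: v - G @ v, c, k)
gmres_err = np.linalg.norm(x_g - x_star)
print(gmres_err < plain_err)  # True: Krylov acceleration wins
```

The plain iteration contracts at the fixed rate of the spectral radius, while GMRES is free to pick the best combination of all previous iterates, which is the mechanism behind the parameter insensitivity claimed above.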

We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the General Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations.
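The structural observation that makes GMRES optimal here (the iterates of a linear update started from zero stay inside a Krylov subspace, so a method that optimizes over that subspace can only do better) can be checked numerically with a toy linear iteration, illustrative rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 8

# A contractive linear update x <- G x + c, run from x0 = 0
G = 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)
c = rng.normal(size=n)

iterates = []
x = np.zeros(n)
for _ in range(k):
    x = G @ x + c
    iterates.append(x.copy())

# Krylov subspace basis K_k(G, c) = span{c, G c, ..., G^{k-1} c}
K = np.column_stack([np.linalg.matrix_power(G, i) @ c for i in range(k)])

# Every iterate lies (numerically) in this subspace: x_k is exactly
# the partial sum c + G c + ... + G^{k-1} c
max_res = 0.0
for xi in iterates:
    coef, *_ = np.linalg.lstsq(K, xi, rcond=None)
    max_res = max(max_res, np.linalg.norm(K @ coef - xi))
print(max_res < 1e-6)  # True
```

Since basic ADMM produces one particular point of this subspace at each step while GMRES produces the residual-optimal one, the Krylov-accelerated method can never be slower in exact arithmetic.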