# Antonio Rosa - MIT

## Contact Details

- **Name:** Antonio Rosa
- **Affiliation:** MIT
- **Location:**


## Pub Categories

- Computer Science - Distributed, Parallel, and Cluster Computing (10)
- Mathematics - Analysis of PDEs (7)
- Computer Science - Databases (6)
- Instrumentation and Methods for Astrophysics (2)
- Computer Science - Computational Engineering, Finance, and Science (1)
- Computer Science - Mathematical Software (1)
- Computer Science - Performance (1)
- Mathematics - Optimization and Control (1)
- Quantitative Biology - Quantitative Methods (1)
- Computer Science - Cryptography and Security (1)

## Publications Authored By Antonio Rosa

In the rapidly expanding field of parallel processing, job schedulers are the "operating systems" of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, where they were designed to run massive, long-running computations over days and weeks.
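The resource-allocation role described above can be made concrete with a toy FIFO scheduler. This is a minimal sketch with hypothetical names, not the interface of any real scheduler: jobs queue until enough CPUs are free, and freed resources trigger a new dispatch pass.

```python
from collections import deque

class FifoScheduler:
    """Toy job scheduler: allocates CPU slots to queued jobs in arrival order."""

    def __init__(self, total_cpus):
        self.free_cpus = total_cpus
        self.queue = deque()   # jobs waiting for resources
        self.running = {}      # job name -> cpus allocated

    def submit(self, name, cpus):
        self.queue.append((name, cpus))
        self._dispatch()

    def finish(self, name):
        self.free_cpus += self.running.pop(name)
        self._dispatch()

    def _dispatch(self):
        # Launch queued jobs while enough CPUs remain
        # (strict FIFO: stop at the first job that does not fit).
        while self.queue and self.queue[0][1] <= self.free_cpus:
            name, cpus = self.queue.popleft()
            self.free_cpus -= cpus
            self.running[name] = cpus

sched = FifoScheduler(total_cpus=8)
sched.submit("sim", 6)    # long-running simulation: starts immediately
sched.submit("hpda", 4)   # short analysis job: must wait (only 2 CPUs free)
print(sorted(sched.running))   # ['sim']
sched.finish("sim")
print(sorted(sched.running))   # ['hpda']
```

Real schedulers layer priorities, backfilling, and preemption on top of this basic allocate/dispatch loop.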

We prove a compactness principle for the anisotropic formulation of the Plateau problem in any codimension, in the same spirit as the previous works of the authors \cite{DelGhiMag,DePDeRGhi,DeLDeRGhi16}. In particular, we develop a new strategy for the proof of the rectifiability of the minimal set, based on the anisotropic counterpart of the Allard rectifiability theorem proved by the authors in \cite{DePDeRGhi2}. As a consequence, we provide a new proof of the Reifenberg existence theorem.

In this note we prove an explicit formula for the lower semicontinuous envelope of some functionals defined on real polyhedral chains. More precisely, denoting by $H \colon \mathbb{R} \to \left[ 0,\infty \right)$ an even, subadditive, and lower semicontinuous function with $H(0)=0$, and by $\Phi_H$ the functional induced by $H$ on polyhedral $m$-chains, namely \[ \Phi_{H}(P) := \sum_{i=1}^{N} H(\theta_{i}) \mathcal{H}^{m}(\sigma_{i}), \quad\mbox{for every }P=\sum_{i=1}^{N} \theta_{i} [[ \sigma_{i} ]] \in\mathbf{P}_m(\mathbb{R}^n), \] we prove that the lower semicontinuous envelope of $\Phi_H$ coincides on rectifiable $m$-currents with the $H$-mass \[ \mathbb{M}_{H}(R) := \int_E H(\theta(x)) \, d\mathcal{H}^m(x) \quad \mbox{ for every } R= [[ E,\tau,\theta ]] \in \mathbf{R}_{m}(\mathbb{R}^{n}). \]
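Two classical special cases (standard facts recalled here for context, not results of this note) illustrate the scope of the $H$-mass:

```latex
% For H(\theta) = |\theta|, the H-mass reduces to the usual mass of the current:
\mathbb{M}_{H}(R) \;=\; \int_E |\theta(x)| \, d\mathcal{H}^m(x) \;=\; \mathbb{M}(R).
% For H(\theta) = \mathbf{1}_{\{\theta \neq 0\}} (even, subadditive, and lower
% semicontinuous, since \{\theta \neq 0\} is open), the H-mass reduces to the size,
% i.e. the \mathcal{H}^m-measure of the carrying set:
\mathbb{M}_{H}(R) \;=\; \mathcal{H}^m(E).
```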

Models involving branched structures are employed to describe several supply-demand systems, such as the structure of the nerves of a leaf, the root system of a tree, and the nervous or cardiovascular systems. Given a flow (traffic path) that transports a given measure $\mu^-$ onto a target measure $\mu^+$ along a 1-dimensional network, the transportation cost per unit length is assumed in these models to be proportional to a concave power $\alpha \in (0,1)$ of the intensity of the flow. In this paper we address an open problem in the book "Optimal transportation networks" by Bernot, Caselles and Morel, improving the stability of optimal traffic paths in the Euclidean space $\mathbb{R}^d$ with respect to variations of the given measures $(\mu^-,\mu^+)$, which was known up to now only for $\alpha>1-\frac1d$.
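The concave cost $\theta^\alpha$ is what produces branching: by subadditivity, carrying two flows along one shared edge is strictly cheaper than carrying them separately. A quick numeric check (illustrative only):

```python
# Cost per unit length of a flow of intensity theta is theta**alpha, alpha in (0,1).
# Subadditivity of theta**alpha rewards merging: one shared edge carrying
# theta1 + theta2 is cheaper than two parallel edges carrying theta1 and theta2.
alpha = 0.5
theta1, theta2 = 1.0, 1.0

merged   = (theta1 + theta2) ** alpha          # one shared edge: 2**0.5 ~ 1.414
separate = theta1 ** alpha + theta2 ** alpha   # two parallel edges: 2.0

print(merged, separate)
assert merged < separate   # merging is strictly cheaper, hence branched structures
```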

We consider the minimization problem of an anisotropic energy in classes of $d$-rectifiable varifolds in $\mathbb R^n$, closed under Lipschitz deformations and encoding a suitable notion of boundary. We prove that any minimizing sequence with density uniformly bounded from below converges (up to subsequences) to a $d$-rectifiable varifold. Moreover, the limiting varifold is integral, provided the minimizing sequence is made of integral varifolds with uniformly locally bounded anisotropic first variation.

SciDB is a scalable, computational database management system that uses an array model for data storage. The array data model of SciDB makes it ideally suited for storing and managing large amounts of imaging data. SciDB is designed to support advanced in-database analytics, thus reducing the need to extract data for analysis.

We extend Allard's celebrated rectifiability theorem to the setting of varifolds with locally bounded first variation with respect to an anisotropic integrand. In particular, we identify a necessary and sufficient condition on the integrand to obtain the rectifiability of every \(d\)-dimensional varifold with locally bounded first variation and positive \(d\)-dimensional density. In codimension one, this condition is shown to be equivalent to the strict convexity of the integrand with respect to the tangent plane.

The map-reduce parallel programming model has become extremely popular in the big data community. Many big data workloads can benefit from the enhanced performance offered by supercomputers. LLMapReduce provides the familiar map-reduce parallel programming model to big data users running on a supercomputer.
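The map-reduce model mentioned above can be illustrated in a few lines. This is a generic sketch of the programming model, not the LLMapReduce interface itself:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Generic map-reduce: apply mapper to each record, group the emitted
    (key, value) pairs by key, then reduce each group independently."""
    groups = defaultdict(list)
    for record in records:                  # map phase
        for key, value in mapper(record):
            groups[key].append(value)       # shuffle: group values by key
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

# Classic word count over a handful of "documents".
docs = ["big data big", "data analysis"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(word, 1) for word in doc.split()],
    reducer=lambda word, ones: sum(ones),
)
print(counts)   # {'big': 2, 'data': 2, 'analysis': 1}
```

On a cluster, the map and reduce phases run as independent parallel tasks, which is what makes the model a natural fit for scheduler-managed supercomputers.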

Job schedulers are a key component of scalable computing infrastructures. They orchestrate all of the work executed on the computing infrastructure and directly impact the effectiveness of the system. Recently, job workloads have diversified from long-running, synchronously-parallel simulations to include short-duration, independently parallel high performance data analysis (HPDA) jobs.

HPC systems traditionally allow their users unrestricted use of their internal network. While this network is normally controlled enough to guarantee privacy without the need for encryption, it does not provide a method to authenticate peer connections. Protocols built upon this internal network must provide their own authentication.
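One standard way to layer authentication onto a trusted-but-unauthenticated network is a shared-key challenge-response. The sketch below is illustrative only (the key name and helper functions are hypothetical, and this is not the protocol of any particular HPC system); it uses HMAC so a peer proves possession of the key without ever sending it:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"cluster-wide secret"   # assumed pre-distributed to legitimate nodes

def make_challenge():
    return os.urandom(16)             # verifier sends a fresh random nonce

def respond(key, challenge):
    # Prover demonstrates key possession without revealing the key itself.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time comparison

challenge = make_challenge()
assert verify(SHARED_KEY, challenge, respond(SHARED_KEY, challenge))
assert not verify(SHARED_KEY, challenge, respond(b"wrong key", challenge))
```

The fresh nonce per connection prevents replay of an old response; `compare_digest` avoids timing side channels in the comparison.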

We prove a compactness principle for the anisotropic formulation of the Plateau problem in codimension one, along the same lines as previous works of the authors [DGM14, DPDRG15]. In particular, we develop a new strategy for proving the rectifiability of the minimal set, avoiding Preiss' rectifiability theorem [Pre87].

Collecting and analyzing large amounts of data is a growing challenge within the scientific community. The growing gap between data and users calls for innovative tools that address the challenges posed by big data volume, velocity, and variety. Numerous tools exist that allow users to store, query, and index these massive quantities of data.

Data processing systems impose multiple views on data as it is processed by the system. These views include spreadsheets, databases, matrices, and graphs. A wide variety of technologies can be used to store and process data under these different views.
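The multiple-views point can be made concrete: the same set of (row, column, value) triples can be read as table rows, sparse-matrix entries, or graph edges. A toy sketch, not tied to any specific system:

```python
# One set of (row, column, value) triples, three views.
triples = [("alice", "bob", 2), ("alice", "carol", 1), ("bob", "carol", 3)]

# Spreadsheet/database view: rows of a table.
table = [{"row": r, "col": c, "val": v} for r, c, v in triples]

# Sparse-matrix view: entry A[r][c] = v.
matrix = {}
for r, c, v in triples:
    matrix.setdefault(r, {})[c] = v

# Graph view: weighted adjacency, edge r -> c with weight v.
neighbors = {r: list(cols) for r, cols in matrix.items()}

print(matrix["alice"]["bob"])   # 2
print(neighbors["alice"])       # ['bob', 'carol']
```

Because all three views are projections of the same triples, a system can move data between them without copying or re-ingesting it.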

High Performance Computing (HPC) is intrinsically linked to effective Data Center Infrastructure Management (DCIM). Cloud services and HPC have become key components in Department of Defense and corporate Information Technology competitive strategies in the global and commercial spaces. As a result, the reliance on consistent, reliable Data Center space is more critical than ever.

The MIT SuperCloud database management system allows for rapid creation and flexible execution of a variety of the latest scientific databases, including Apache Accumulo and SciDB. It is designed to permit these databases to run on a High Performance Computing Cluster (HPCC) platform as seamlessly as any other HPCC job. It ensures the seamless migration of the databases to the resources assigned by the HPCC scheduler and centralized storage of the database files when not running.

This paper proposes a direct approach to Plateau's problem in codimension higher than one. The problem is formulated as the minimization of the Hausdorff measure among a family of $d$-rectifiable closed subsets of $\mathbb R^n$: following the previous work \cite{DelGhiMag}, the existence result is obtained by a compactness principle valid under fairly general assumptions on the class of competitors. This class is then specified to give meaning to boundary conditions.

**Authors:** Jeremy Kepner, Christian Anderson, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Matthew Hubbell, Peter Michaleas, Julie Mullen, David O'Gwynn, Andrew Prout, Albert Reuther, Antonio Rosa, Charles Yee

**Affiliations:** MIT (all authors)

Non-traditional, relaxed-consistency triple-store databases are the backbone of many web companies (e.g., Google BigTable, Amazon Dynamo, and Facebook Cassandra).
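The triple-store model itself is simple enough to sketch: data is a bag of (subject, predicate, object) triples, and queries are pattern matches with wildcards. A minimal illustrative sketch (the names here are invented, not any real store's API):

```python
def match(triples, s=None, p=None, o=None):
    """Return triples matching a (subject, predicate, object) pattern;
    None acts as a wildcard."""
    return [
        (ts, tp, to)
        for ts, tp, to in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    ]

store = [
    ("alice", "follows", "bob"),
    ("bob", "follows", "carol"),
    ("alice", "likes", "post42"),
]

print(match(store, s="alice"))               # all triples with subject alice
print(match(store, p="follows", o="carol"))  # who follows carol
```

Production triple stores make these pattern queries fast by keeping multiple sorted indexes over the same triples, and relax consistency so the indexes can scale across many machines.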

**Authors:** Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Vijay Gadepally, Matthew Hubbell, Peter Michaleas, Julie Mullen, Andrew Prout, Albert Reuther, Antonio Rosa, Charles Yee

**Affiliations:** MIT (all authors)

The Apache Accumulo database is an open-source, relaxed-consistency database that is widely used for government applications. Accumulo is designed to deliver high performance on unstructured data such as graphs of network data. This paper tests the performance of Accumulo using data from the Graph500 benchmark.
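Graph500 data comes from a Kronecker (R-MAT-style) generator: each edge is sampled by recursively picking one quadrant of the adjacency matrix per bit, using the benchmark's standard probabilities (A=0.57, B=0.19, C=0.19, D=0.05). A minimal sketch of that generator (illustrative, not the reference implementation):

```python
import random

def rmat_edge(scale, rng, a=0.57, b=0.19, c=0.19):
    """Sample one edge of a 2**scale-vertex Kronecker graph by choosing
    one quadrant of the adjacency matrix per bit (d = 1 - a - b - c)."""
    src = dst = 0
    for _ in range(scale):
        r = rng.random()
        src, dst = src << 1, dst << 1
        if r < a:                 # top-left quadrant: neither bit set
            pass
        elif r < a + b:           # top-right: destination bit set
            dst |= 1
        elif r < a + b + c:       # bottom-left: source bit set
            src |= 1
        else:                     # bottom-right: both bits set
            src |= 1
            dst |= 1
    return src, dst

rng = random.Random(42)
# Graph500 uses edgefactor 16: 16 * 2**scale edges for a 2**scale-vertex graph.
edges = [rmat_edge(10, rng) for _ in range(16 * (1 << 10))]
print(len(edges))   # 16384
```

The skew in the quadrant probabilities yields the heavy-tailed degree distribution that makes this benchmark a stress test for databases like Accumulo.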