Big Data in HEP: A comprehensive use case study

March 2017
Affiliations: Fermi National Accelerator Laboratory; Princeton University; one author formerly at Fermi National Accelerator Laboratory, now at Johns Hopkins University

Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent years, new toolkits and systems, collectively called Big Data technologies, have emerged in industry to support the analysis of Petabyte- and Exabyte-scale datasets. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies take different approaches, promise a fresh look at the analysis of very large datasets, and could potentially reduce time-to-physics through increased interactivity. In this talk, we present an active LHC Run 2 analysis, a search for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis from the official experiment data formats and produce publication-quality physics plots. We discuss the advantages and disadvantages of each approach and give an outlook on the further studies that are needed.

Comments: Proceedings of the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016)
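
To make the Spark-based approach described above more concrete, the sketch below shows the basic filter-and-aggregate pattern in PySpark. It is a minimal illustration only: the input path, column names, and selection cuts are hypothetical placeholders and do not correspond to the actual CMS data formats or the dark-matter analysis selection.

# Minimal PySpark sketch of a filter/aggregate analysis step.
# All names (path, columns, cut values) are illustrative assumptions,
# not the real CMS schema or analysis cuts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hep-bigdata-sketch").getOrCreate()

# Assume the experiment NTuples have already been converted to a
# columnar format readable by Spark (hypothetical Parquet path).
events = spark.read.parquet("hdfs:///data/cms/darkmatter_ntuples.parquet")

# Event selection: a simple missing-transverse-energy cut and a
# leading-jet pT cut stand in for the real selection.
selected = events.filter(
    (F.col("met_pt") > 200.0) & (F.col("leading_jet_pt") > 100.0)
)

# Aggregate the selected events into a coarse MET histogram, which a
# plotting tool could then turn into a publication-style figure.
hist = (
    selected
    .withColumn("met_bin", (F.col("met_pt") / 50.0).cast("int") * 50)
    .groupBy("met_bin")
    .count()
    .orderBy("met_bin")
)

hist.show()

The same filter-then-histogram logic is what an NTuple-based workflow would express as an event loop over ROOT files; here it is expressed declaratively and distributed by Spark across the Hadoop cluster.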
