Tuesday, November 20, 2018

Parallel Application Capacity Paper at Supercomputing 2018

Nate Kremer-Herman presented the paper A Lightweight Model for Right-Sizing Master-Worker Applications at the ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (Supercomputing) on November 14, 2018 in Dallas, Texas. This year marked the 30th anniversary of the Supercomputing conference.


In A Lightweight Model for Right-Sizing Master-Worker Applications, we note that when running a parallel application at scale, a resource provisioning policy should minimize over-commitment (idle resources) and under-commitment (resource contention). However, users seldom know how many resources are appropriate for their application. Even with that knowledge, over- and under-commitment may still occur because the application does not run in isolation: it shares resources such as the network and filesystems. We formally define the capacity of a parallel application as the quantity of resources that may effectively be provisioned for the best execution time in an environment, and we present a model that estimates the capacity of master-worker applications as they run, based on execution and data-transfer times.
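As a back-of-the-envelope illustration of the idea (a rough sketch, not the exact model from the paper), the number of workers a master can keep busy is roughly the ratio of the time a task spends executing to the time the master spends moving its data:

```python
# Back-of-the-envelope capacity estimate for a master-worker application.
# This is an illustrative sketch, not the paper's exact model: the master can
# keep a worker busy as long as the time it spends transferring data for a
# task is small compared to the time the worker spends executing it.

def estimate_capacity(execute_times, transfer_times):
    """Estimate how many workers the master can keep busy.

    execute_times  -- seconds each completed task spent executing on a worker
    transfer_times -- seconds the master spent sending/receiving data per task
    """
    avg_execute = sum(execute_times) / len(execute_times)
    avg_transfer = sum(transfer_times) / len(transfer_times)
    # While the master serves one worker, roughly avg_execute / avg_transfer
    # other workers can be kept busy executing tasks.
    return max(1, round(avg_execute / avg_transfer))

# Example: tasks run ~120 s and cost the master ~3 s of transfer time each,
# so provisioning beyond ~40 workers yields little additional speedup.
print(estimate_capacity([118, 122, 121], [2.9, 3.1, 3.0]))
```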

Although the provisioning model is important, a key insight from the paper comes from a diagram that shows how a parallel application's scale relates to its total execution time. The graph's x-axis represents the scale of the application (for this example, assume it is the number of machines used), and the y-axis represents the total execution time. Let's start with the smallest case first.

Imagine we are domain scientists running some parallel analysis tool. With a scale of 1, our runtime will obviously be the slowest since we are not making use of the parallelism of the application. So we increase our scale to 10. Lo and behold, we see a marked decrease in the total execution time of the application!


So we try scaling up again. We go for broke and leap from a scale of 10 to 500. We notice our execution time is still decreasing! So, let's increase our scale one more time.


At a scale of 1,000 we see the limit to our scalability. Our total execution time has increased from the 500 scale execution. Why? There is a cost to acquiring and maintaining resources. For instance, we might have to start a virtual machine on every computer we use for our application. Starting up a VM takes time.


What we have failed to realize, however, is that we completely missed our optimum scale! The black line of the bottom graph shows the best execution time of this application, which occurs at a scale of 100. This is a key observation from the paper: although it is possible to manually re-run a parallel application at different scales, it is highly unlikely that we will find the scale that minimizes total execution time (the capacity of our application) unless our search is exhaustive. That is an unrealistic expectation for most researchers, since what matters most is the result of the analysis or simulation. To make our users' lives easier, we have implemented a lightweight model that does the heavy lifting of finding the appropriate scale for the user.
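To see why the curve has a minimum at all, here is a toy cost model (the constants and the linear overhead term are made up for illustration, not taken from the paper): the useful work shrinks as 1/n while per-machine overhead, such as VM startup, grows with the scale n.

```python
# Illustrative model of total execution time versus scale (not the paper's
# model): dividing a fixed amount of work across n machines shrinks the
# compute time, while per-machine overhead grows with n.

TOTAL_WORK = 600_000   # seconds of sequential work (hypothetical)
STARTUP    = 60        # seconds of per-machine overhead (hypothetical)

def total_time(n):
    # work shared among n machines, plus overhead that grows with scale
    return TOTAL_WORK / n + STARTUP * n

best = min(range(1, 2001), key=total_time)
for n in (1, 10, 100, 500, 1000):
    print(f"scale {n:5d}: total time {total_time(n):9.0f} s")
print(f"optimum scale for this toy model: {best}")
```

With these made-up constants the runtime keeps dropping through scales of 10 and 100, is still lower at 500 than at 10, and rises again by 1,000, matching the shape of the graphs above.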



Sunday, November 11, 2018

Workflow Algebra and JX Language at e-Science 2018

Nick Hazekamp presented the paper An Algebra for Robust Workflow Transformations, and Tim Shaffer presented a poster on A First Look at the JX Workflow Language, at the IEEE International Conference on eScience 2018, held October 27-November 1, 2018 in Amsterdam.

In An Algebra for Robust Workflow Transformations (paper, slides) we introduce an algebra for applying and nesting different task-level transformations to a workflow. As a basis for this work, we clearly define what a task is and give its structure in JSON. Using this JSON representation, we show explicitly how tasks can be nested and give a method for deriving consistent behavior from nested transformations. We demonstrate this with three use cases: multi-level nested transformations, multi-site workflow operation, and using transformations to debug workflow failures. Abstract posted here:

Scientific workflows are often designed with a particular compute site in mind. As a user changes sites, the workflow needs to adjust. These changes include moving from a cluster to a cloud, updating an operating system, or investigating failures on a new cluster. As a workflow is moved, its tasks do not fundamentally change, but the steps to configure, execute, and evaluate tasks differ. When handling these changes it may be necessary to use a script to analyze execution failure or run a container to use the correct operating system. To improve workflow portability and robustness, it is necessary to have a rigorous method that allows transformations on a workflow. These transformations do not change the tasks, only the way tasks are invoked. Using technologies such as containers, resource managers, and scripts to transform workflows allows for portability, but combining these technologies can lead to complications with execution and error handling. We define an algebra to reason about task transformations at the workflow level and express it in a declarative form using JSON. We implemented this algebra in the Makeflow workflow system and demonstrate how transformations can be used for resource monitoring, failure analysis, and software deployment across three sites.
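As a rough sketch of the idea (the task fields and wrapper commands below are illustrative, not the paper's exact JSON schema), a transformation rewrites a task's command and file lists without changing the underlying work, and nesting transformations is simply function composition:

```python
# Minimal sketch of nesting task-level transformations, using an illustrative
# JSON-like task structure (field names are hypothetical, not the paper's schema).

def singularity(image):
    """Transformation: run the task's command inside a container image."""
    def apply(task):
        return {
            "command": f"singularity exec {image} /bin/sh -c '{task['command']}'",
            "inputs":  task["inputs"] + [image],
            "outputs": task["outputs"],
        }
    return apply

def resource_monitor(log):
    """Transformation: wrap the command with a (hypothetical) monitoring tool."""
    def apply(task):
        return {
            "command": f"monitor -o {log} -- {task['command']}",
            "inputs":  task["inputs"],
            "outputs": task["outputs"] + [log],
        }
    return apply

task = {"command": "bwa mem ref.fa reads.fq > out.sam",
        "inputs": ["ref.fa", "reads.fq"], "outputs": ["out.sam"]}

# Nesting is just composition: here the monitor runs inside the container;
# swapping the order yields a different, but still well-defined, behavior.
nested = singularity("centos7.img")(resource_monitor("usage.log")(task))
print(nested["command"])
```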


In A First Look at the JX Workflow Language (paper, poster) we took a look at JX and the flexibility it affords the user when describing the high-level characteristics of a workflow. Abstract posted here:

Scientific workflows are typically expressed as a graph of logical tasks, each one representing a single program along with its input and output files. This poster introduces JX (JSON eXtended), a declarative language that can express complex workloads as an assembly of sub-graphs that can be partitioned in flexible ways. We present a case study of using JX to represent complex workflows for the Lifemapper biodiversity project. We evaluate partitioning approaches across several computing environments, including ND-Condor, IU-Jetstream, and SDSC-Comet, and show that a coarse partitioning results in faster turnaround times, reduced data transfer, and lower master utilization across all three systems.
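As a rough illustration of the partitioning tradeoff (the chunk sizes and counts below are made up, not drawn from the JX evaluation), coarser partitions mean fewer sub-graphs for the master to dispatch and fewer rounds of shared-input transfer:

```python
# Rough illustration of the partitioning tradeoff discussed in the poster
# (numbers and chunking scheme are hypothetical, not from the JX evaluation).

def partition(tasks, size):
    """Split a list of independent tasks into sub-graphs of `size` tasks each."""
    return [tasks[i:i + size] for i in range(0, len(tasks), size)]

tasks = [f"task_{i}" for i in range(1000)]

for size in (1, 10, 100):
    parts = partition(tasks, size)
    # Each sub-graph pays one round of submission and shared-input transfer,
    # so coarser partitions mean fewer such rounds for the master to manage.
    print(f"partition size {size:3d}: {len(parts)} sub-graphs to dispatch")
```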



Tuesday, October 2, 2018

Work Queue Visual Status

Check out the new Work Queue Status page by Nate Kremer-Herman. This reveals a whole lot of information that was already reported to the global catalog in raw JSON, but was previously hard to interpret. For any WQ application reporting itself to the global catalog (use the -N option), you get a nice display of workers and tasks running and the total resources consumed across the application:
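For example, a master written with the Work Queue Python bindings can advertise itself to the catalog by project name roughly like this (a short sketch; method names follow the Work Queue Python API as we recall it, so check the manual for the exact calls in your version):

```python
# Sketch: give a Work Queue master a project name so it reports to the global
# catalog (the Python-API analogue of the -N command-line option).

import work_queue as wq

q = wq.WorkQueue(port=wq.WORK_QUEUE_DEFAULT_PORT)
q.specify_name("myproject")          # workers and the status page find it by this name

t = wq.Task("echo hello")
q.submit(t)

while not q.empty():
    t = q.wait(5)                    # returns a completed task, or None on timeout
    if t:
        print("task returned:", t.return_status)
```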

What's more, a pie chart shows a breakdown of how the master is spending its time: sending data to workers, receiving data from workers, and polling (waiting) for workers to report are the main categories. This tells you at a glance where the bottleneck of the system is.

This WQ master is spending most of its time sending data out to workers, so it's close to the limit of its scalability:
However, this one is spending most of its time polling for results, and only a small fraction sending.  It can likely handle many more workers:

This one is spending *all* of its time either receiving data from workers (completed tasks) or sending data to workers for new tasks.  It is completely occupied:
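As a rough rule of thumb (the numbers below are made up, not from a real run), the fraction of time the master spends busy sending and receiving hints at how many more workers it could serve:

```python
# Back-of-the-envelope reading of the master's time breakdown (made-up numbers).
# If the master is mostly polling, it has headroom for more workers; if it is
# mostly sending and receiving, it is the bottleneck.

send, receive, poll = 120.0, 45.0, 835.0      # seconds, hypothetical totals
busy = send + receive
total = busy + poll

busy_fraction = busy / total
print(f"master busy {busy_fraction:.0%} of the time")

# Rough headroom estimate: if N workers keep the master busy this fraction of
# the time, about N / busy_fraction workers would saturate it.
current_workers = 50
print(f"could likely support ~{int(current_workers / busy_fraction)} workers")
```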
 


Wednesday, August 29, 2018

Announcement: CCTools 7.0.4 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.4 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with some bug fixes. Among them:

  • [General] --without-static-libgcc flag to configure script for compilation on Stampede2. (Ben Tovar)
  • [WorkQueue] Consider workers across different factories. (Bo Marchman)
  • [WorkQueue] Input files from tasks that exhausted resources were not being removed from the worker. (Ben Tovar)
  • [Makeflow] Communicate cores, memory, and disk resources to SLURM, SGE, and Torque (Nick Hazekamp)
  • [ResourceMonitor] Fix bug when computing maximum cores. (Ben Tovar)
  • [JX] Improved parsing errors and documentation. (Tim Shaffer, Douglas Thain, Ben Tovar)

Thanks goes to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Bo Marchman
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Kyle Sweeney
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, August 20, 2018

DISC REU Videos 2018

Our summer REU students in the DISC program produced an impressive set of videos describing their summer research projects -- check out the playlist!


Wednesday, July 25, 2018

VC3 - Virtual Clusters at PEARC 2018


The VC3 project (virtualclusters.org) allows end users to dynamically create virtual clusters with custom software and middleware, running on top of existing national computing facilities.  Using only standard login access to computing facilities, you can deploy a computing environment specialized for a complex application, and share it with your collaborators.

Today, Prof Thain is presenting the VC3 project at the PEARC 2018 conference, on behalf of the entire VC3 team:


Check out the service online at virtualclusters.org:


We are currently recruiting new users for beta testing; please sign up here:

Tuesday, July 24, 2018

Reproducibility in Scientific Computing

"Reproducibility in Scientific Computing", an article by (recent grad) Peter Ivie recently appeared in the journal ACM Computing Surveys.  This article gives a high level overview of the technical challenges inherent in making scientific computations reproducible, along with a survey of approaches to attacking these problems. Check it out!


Tuesday, July 10, 2018

Halfway Through 2018 Summer REU


We are a little more than halfway through the 2018 edition of our summer Data Intensive Scientific Computing REU program at the University of Notre Dame.  This summer, our students are working on projects in network science, genome analysis, workflow systems, data visualization, and more.  At this point in the summer, students are finalizing their results and starting to work on posters and videos to present what they have learned for our summer research symposium.

Thursday, July 5, 2018

Announcement: CCTools 7.0.0 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.0 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a major release which adds several features and bug fixes. Among them:

  • [General] Catalog updates compressed, and via TCP. (Douglas Thain, Nick Hazekamp, Ben Tovar)
  • [JX] Bug fixes to JX, a superset of JSON to dynamically describe workflows, see doc/jx-tutorial.html. (Tim Shaffer, Douglas Thain)
  • [Makeflow] Formally define and implement hooks to workflow rules. Hooks may be used to wrap rules with containers (e.g. singularity), a monitoring tool, etc.  (Nick Hazekamp, Tim Shaffer)
  • [Makeflow] Rule execution as Amazon Lambda functions and S3 objects. (Kyle Sweeney, Douglas Thain)
  • [Makeflow] Efficient shared file system access. (Nick Hazekamp)
  • [Makeflow] Several bug fixes for rules executing in Mesos. (Chao Zheng)
  • [ResourceMonitor] Several bug fixes. (Ben Tovar)
  • [WorkQueue] Add user-defined features to workers and tasks. (Nate Kremer-Herman)
  • [WorkQueue] Fixes for python3 support. (Ben Tovar)

Thanks goes to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Kyle Sweeney
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, June 11, 2018

Papers at ScienceCloud Workshop

CCL grad student Kyle Sweeney is presenting two papers at the ScienceCloud/IWAC workshop at HPDC 2018.


Early Experience Using Amazon Batch for Scientific Workflows reports our practical experience using Amazon Batch for scientific workflows, comparing the performance of plain EC2 virtual machines, Amazon Batch, and the Work Queue system overlaid on top of virtual machines.





Efficient Integration of Containers into Scientific Workflows explores different methods of composing container images into complex workflows, in order to make efficient use of shared filesystems and data movement.




Monday, June 4, 2018

CCL Internships at CERN and Alibaba

Two of our CCL grad students are off to internships this summer:

Nick Hazekamp will be in Geneva at CERN working with Jakob Blomer and the CVMFS group to develop methods for migrating high throughput workloads to HPC centers, while still making use of the global CVMFS filesystem.

Charles Zheng will be at Alibaba working on high throughput workloads running on container orchestration systems.


Tuesday, May 29, 2018

2018 DISC REU Kickoff

Our 2018 summer program in Data Intensive Scientific Computing (DISC) is underway at the University of Notre Dame.  Eleven students from all around the country are spending the summer working on challenging computing problems in fields such as high energy physics, neuroscience, epidemiology, species distribution, network science, high performance computing, and more.  Welcome to ND!


Friday, May 25, 2018

VC3 Project Limited Beta Opens

Ben Tovar gave a talk introducing the VC3 (Virtual Clusters for Community Computation) project at the annual HTCondor Week conference.

VC3 makes it easy for science groups to deploy custom software stacks across existing university clusters and national facilities.  For example, if you want to run your own private Condor pool across three clusters and share it with your collaborators, then VC3 is for you. 

We are now running VC3 as a "limited beta" for early adopters who would like to give it a try and send us feedback.  Check out the instructions and invitation to sign up.


Tuesday, May 22, 2018

Graduation 2018


It was a busy graduation weekend here at Notre Dame! The CSE department graduated nineteen PhD students, including CCL grads Dr. Peter Ivie and Dr. James Sweet.  Prof. Thain gave the graduation address at the CSE department ceremony. Congratulations and good luck to everyone!





Wednesday, April 25, 2018

VC3-Builder and WQ-MAKER at IC2E 2018

Ben Tovar presented the paper Automatic Dependency Management for Scientific Applications on Clusters and Nick Hazekamp presented the paper MAKER as a Service: Moving HPC applications to Jetstream Cloud at the IEEE International Conference on Cloud Engineering (IC2E 2018) on April 18, 2018 in Orlando, Florida.

In Automatic Dependency Management for Scientific Applications on Clusters (paper, slides) we introduce a tool for deploying software environments on clusters. This tool, called the vc3-builder, has minimal dependencies and a lightweight bootstrap, which allows it to be deployed alongside batch jobs. The vc3-builder then installs any missing software using only user privileges (e.g., no sudo) so that the actual user payload can be executed. The vc3-builder is being developed as part of the DOE-funded Virtual Clusters for Community Computation (VC3) project, in which users can construct custom, short-lived virtual clusters across different computational sites.
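The general pattern is roughly the following (a minimal sketch of the idea, not the actual vc3-builder code; the prefix, package name, and URL are hypothetical): check whether a dependency is present, install it under a user-owned prefix if not, then run the payload.

```python
# Minimal sketch of the bootstrap pattern described above (not the actual
# vc3-builder implementation). Paths, package names, and URL are hypothetical.

import os, shutil, subprocess, sys

PREFIX = os.path.expanduser("~/.local/vc3-demo")   # user-writable, no sudo needed

def ensure(tool, build_cmds):
    """Install `tool` under PREFIX with user privileges if it is not found."""
    if shutil.which(tool):
        return
    os.makedirs(PREFIX, exist_ok=True)
    for cmd in build_cmds:
        subprocess.run(cmd, shell=True, check=True)
    os.environ["PATH"] = os.path.join(PREFIX, "bin") + os.pathsep + os.environ["PATH"]

# e.g. fetch and unpack a prebuilt tarball of the missing tool (hypothetical URL)
ensure("mytool", [f"curl -L https://example.org/mytool.tar.gz | tar xz -C {PREFIX}"])

# finally run the user's actual payload
sys.exit(subprocess.call(["mytool", "--version"]))
```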

In MAKER as a Service: Moving HPC Applications to Jetstream Cloud (paper, poster, slides) we discuss the lessons learned in migrating MAKER, a traditional HPC application, to the cloud. This work focused on issues such as recreating the software stack with the VC3-Builder, addressing the lack of shared filesystems and inter-node communication with Work Queue, and shaping the application around user feedback to allow for informed decisions in the cloud. Using WQ-MAKER we were able to run MAKER not only on Jetstream, but also on resources from Notre Dame's Condor cluster. Below you can see the system architecture.

Monday, March 12, 2018

CCL at CyVerse Container Camp

Nick Hazekamp and Kyle Sweeney gave a talk, "Distributed Computing with Makeflow and Work Queue", at the CyVerse Container Camp workshop. The talk gives an overview of Makeflow and Work Queue, with an emphasis on how we approach using containers in a workflow. Highlights focused on different approaches to applying containers, either per-task or workflow-level, and what that looks like in practice on Jetstream, an OpenStack-based cloud platform.

You can find the slides and tutorial at: CCL at CyVerse Container Camp

Here you can see the active participants: