Tuesday, June 11, 2019

Shrinkwrap Containers at CERN

Tim Shaffer attended the 2019 CVMFS Workshop and presented "Shrinkwrap: Creating HPC Containers", work done together with Nick Hazekamp.  Shrinkwrap is a tool that profiles applications using the CVMFS filesystem and generates a minimal container image with only the parts of the global filesystem that were actually used:


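To make the idea concrete, here is a rough Python sketch of the underlying concept. This is purely illustrative and is not the cvmfs_shrinkwrap tool itself; the trace file name and output directory are made up. Given a list of the files an application actually opened under /cvmfs (for example, collected during a profiling run), copy just those files into a fresh directory tree that can then be packed into a container image:

    import os
    import shutil

    TRACE_FILE = "accessed_paths.txt"    # one absolute path per line, from a profiling run
    IMAGE_ROOT = "minimal-image-root"    # becomes the root of the minimal container image

    with open(TRACE_FILE) as trace:
        for line in trace:
            src = line.strip()
            if not src or not os.path.isfile(src):
                continue
            # Recreate the original directory structure under IMAGE_ROOT.
            dst = os.path.join(IMAGE_ROOT, src.lstrip("/"))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)       # preserve permissions and timestamps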
Since the LHC is currently shut down between runs, attendees were able to tour  the experiment hall and see the CMS detector up close!





Friday, March 15, 2019

Announcement: CCTools 7.0.11 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.11 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with several bug fixes. Among them:

  • [General] Fix out-of-date man pages.
  • [Catalog] Add merge event to catalog database. (Douglas Thain)
  • [Catalog] Fix unary operator constant predicate bug. (Tim Shaffer)
  • [Makeflow] Specify custom SGE resources. (Nick Hazekamp)
  • [Makeflow] Use scheduler universe in condor_submit_makeflow. (Ben Tovar)
  • [WorkQueue] Fix compilation with python3.7 and anaconda. (Ben Tovar)
  • [WorkQueue] Specify port ranges when creating a queue in Python; see the sketch after this list. (Ben Tovar)
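For the port-range item above, the usage reads roughly as follows. This is a minimal sketch, assuming the constructor accepts a [low, high] pair; check the Work Queue manual for the exact form in your version.

    import work_queue as wq

    # Let the queue listen on any free port within an allowed range,
    # rather than failing if a single fixed port is already taken.
    q = wq.WorkQueue(port=[9123, 9130])
    print("listening on port", q.port)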

Thanks go to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, February 11, 2019

Ph.D. Proposal: Nate Kremer-Herman

Congratulations to Nate Kremer-Herman, who passed his Ph.D. proposal, titled "Troubleshooting Distributed Applications Using a Graph Representation".


Tuesday, November 20, 2018

Parallel Application Capacity Paper at Supercomputing 2018

Nate Kremer-Herman presented the paper A Lightweight Model for Right-Sizing Master-Worker Applications at the ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (Supercomputing) on November 14, 2018 in Dallas, Texas. This year marked the 30th anniversary of the Supercomputing conference.


In A Lightweight Model for Right-Sizing Master-Worker Applications, we note that when running a parallel application at scale, a resource provisioning policy should minimize both over-commitment (idle resources) and under-commitment (resource contention). However, users seldom know the quantity of resources needed to execute their application appropriately. Even with such knowledge, over- and under-commitment of resources may still occur because the application does not run in isolation: it shares resources such as the network and filesystem. We formally define the capacity of a parallel application as the quantity of resources that may effectively be provisioned for the best execution time in an environment, and we present a model that estimates the capacity of master-worker applications as they run, based on execution and data-transfer times.
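The flavor of the estimate is simple: if each completed task keeps a worker busy much longer than it keeps the master busy transferring that task's data, then many workers can be served at once. A rough Python sketch of that idea follows; the variable names and the simple aggregation here are ours, not the paper's exact notation.

    # tasks: list of (execution_time, transfer_time) pairs for completed tasks,
    # in seconds, as observed by the master while the application runs.
    def estimate_capacity(tasks):
        total_execute = sum(e for e, _ in tasks)
        total_transfer = sum(t for _, t in tasks)
        if total_transfer == 0:
            return float("inf")
        # If a worker computes for `ratio` seconds per second the master spends
        # serving it data, then roughly `ratio` workers can be kept busy at once.
        return total_execute / total_transfer

    # Example: 50 completed tasks, each ~120 s of compute and ~1.5 s of transfer.
    print(estimate_capacity([(120.0, 1.5)] * 50))    # -> 80.0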

Although the provisioning model itself is important, a key insight from the paper comes from a diagram demonstrating how a parallel application's scale relates to its total execution time. The graph's x-axis represents the scale of the application (for this example, assume it is the number of machines used), and the y-axis represents its total execution time. Let's start with the smallest case first.

Imagine we are domain scientists running some parallel analysis tool. With a scale of 1, our runtime will obviously be the slowest since we are not making use of the parallelism of the application. So we increase our scale to 10. Lo and behold, we see a marked decrease in the total execution time of the application!


So we try scaling up again. We go for broke and leap from a scale of 10 to 500. We notice our execution time is still decreasing! So, let's increase our scale one more time.


At a scale of 1,000 we see the limit of our scalability: the total execution time has increased relative to the run at scale 500. Why? There is a cost to acquiring and maintaining resources. For instance, we might have to start a virtual machine on every computer we use for our application, and starting up a VM takes time.


What we have failed to realize, however, is that we completely missed our optimum scale! The black line of the bottom graph shows the best execution time of this application, which occurs at a scale of 100. This is a key observation from the paper: although it is possible to manually re-run a parallel application at different scales, we are unlikely to find the scale that minimizes total execution time (the capacity of our application) unless our search for the optimum is exhaustive. That is an unrealistic expectation for most researchers, since what matters most is the result of the analysis or simulation. To make our users' lives easier, we have implemented a lightweight model that does the heavy lifting of finding that appropriate scale for them.
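To make the shape of those graphs concrete, here is a toy model (not the model from the paper) in which the benefit of splitting the work across n machines competes with a fixed cost to acquire and maintain each resource; the constants are invented purely so the numbers echo the walkthrough above.

    # Toy model: total time = (serial work) / n  +  (per-resource cost) * n
    def total_time(n, work=10000.0, per_resource_cost=1.0):
        return work / n + per_resource_cost * n

    for n in (1, 10, 100, 500, 1000):
        print(f"scale {n:5d}: {total_time(n):8.1f}")

    # scale     1:  10001.0
    # scale    10:   1010.0
    # scale   100:    200.0   <- the minimum we would have missed
    # scale   500:    520.0
    # scale  1000:   1010.0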



Sunday, November 11, 2018

Workflow Algebra and JX Language at e-Science 2018

Nick Hazekamp presented the paper An Algebra for Robust Workflow Transformations and Tim Shaffer presented a poster on A First Look at the JX Workflow Language at the IEEE International Conference on eScience 2018, held October 27 to November 1, 2018 in Amsterdam.

In An Algebra for Robust Workflow Transformations (paper slides) we introduce an algebra for applying and nesting task-level transformations on a workflow. As a basis for this work, we define precisely what a task is and give its structure in JSON. Using this JSON representation, we show explicitly how tasks can be nested and give a method for deriving consistent behavior from nested transformations. We demonstrate the approach with three use cases: multi-level nested transformations, multi-site workflow operation, and using transformations to debug workflow failures. Abstract posted here:

Scientific workflows are often designed with a particular compute site in mind. As a user changes sites, the workflow needs to adjust. These changes include moving from a cluster to a cloud, updating an operating system, or investigating failures on a new cluster. As a workflow is moved, its tasks do not fundamentally change, but the steps to configure, execute, and evaluate tasks differ. When handling these changes, it may be necessary to use a script to analyze execution failure or run a container to use the correct operating system. To improve workflow portability and robustness, it is necessary to have a rigorous method that allows transformations on a workflow. These transformations do not change the tasks, only the way tasks are invoked. Using technologies such as containers, resource managers, and scripts to transform workflows allows for portability, but combining these technologies can lead to complications with execution and error handling. We define an algebra to reason about task transformations at the workflow level and express it in a declarative form using JSON. We implemented this algebra in the Makeflow workflow system and demonstrate how transformations can be used for resource monitoring, failure analysis, and software deployment across three sites.
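To give a feel for what a transformation looks like, here is a small Python sketch using our own pseudo-structure rather than the paper's exact JSON schema or the Makeflow syntax: a task is a command together with its input and output files, each transformation rewrites only how that command is invoked, and nesting simply composes the wrappers. The wrapper invocations (singularity, resource_monitor) and file names are indicative only.

    import shlex

    task = {
        "command": "./simulate.py < in.dat > out.dat",
        "inputs":  ["simulate.py", "in.dat"],
        "outputs": ["out.dat"],
    }

    def containerize(task, image):
        """Run the task's command inside a container image."""
        t = dict(task)
        t["command"] = "singularity exec {} sh -c {}".format(image, shlex.quote(task["command"]))
        t["inputs"] = task["inputs"] + [image]
        return t

    def monitor(task, log):
        """Wrap the task's command with a resource monitor that writes a log."""
        t = dict(task)
        t["command"] = "resource_monitor -O {} -- sh -c {}".format(log, shlex.quote(task["command"]))
        t["outputs"] = task["outputs"] + [log]
        return t

    # Nesting: monitor the task as it runs inside the container.
    transformed = monitor(containerize(task, "centos7.img"), "task.rmlog")
    print(transformed["command"])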


In A First Look at the JX Workflow Language (paper poster) we took a look at JX and the flexibility it affords the user when describing the high-level characteristics of a workflow. Abstract posted here:

Scientific workflows are typically expressed as a graph of logical tasks, each one representing a single program along with its input and output files. This poster introduces JX (JSON eXtended), a declarative language that can express complex workloads as an assembly of sub-graphs that can be partitioned in flexible ways. We present a case study of using JX to represent complex workflows for the Lifemapper biodiversity project. We evaluate partitioning approaches across several computing environments, including ND-Condor, IU-Jetstream, and SDSC-Comet, and show that a coarse partitioning results in faster turnaround times, reduced data transfer, and lower master utilization across all three systems.
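JX's expression syntax closely mirrors Python's, so a Python analogue conveys the gist: a compact template expands into a regular graph of rules, and the resulting sub-graphs can then be partitioned across sites. The rule fields below follow Makeflow's JSON form, but the commands and file names are invented for illustration.

    import json

    N = 100   # number of independent chunks; invented for illustration

    rules = [
        {
            "command": "./analyze chunk.{0} > result.{0}".format(i),
            "inputs":  ["analyze", "chunk.{0}".format(i)],
            "outputs": ["result.{0}".format(i)],
        }
        for i in range(N)
    ] + [
        {
            "command": "cat " + " ".join("result.{0}".format(i) for i in range(N)) + " > all.txt",
            "inputs":  ["result.{0}".format(i) for i in range(N)],
            "outputs": ["all.txt"],
        }
    ]

    print(json.dumps({"rules": rules}, indent=2))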



Tuesday, October 2, 2018

Work Queue Visual Status

Check out the new Work Queue Status page by Nate Kremer-Herman. It reveals a whole lot of information that was already reported to the global catalog in raw JSON, but was previously hard to interpret. For any WQ application that reports itself to the global catalog (use the -N option), you get a nice display of the workers and tasks running and the total resources consumed across the application:

What's more, a pie chart shows a breakdown of how the master is spending its time: sending data to workers, receiving data from workers, and polling (waiting) for workers to report are the main categories. This tells you at a glance where the bottleneck of the system is.
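To appear on the status page at all, a master has to advertise itself to the global catalog under a project name (that is the -N option mentioned above), and workers then locate it by that name. A minimal sketch using the Python Work Queue binding follows; the project name and task are placeholders, and the exact catalog options may differ slightly between versions.

    import work_queue as wq

    # Create a queue and report it to the global catalog under a project name.
    q = wq.WorkQueue(port=wq.WORK_QUEUE_DEFAULT_PORT)
    q.specify_name("my-project")     # the status page and workers find us by this name

    t = wq.Task("./sim input.dat > output.dat")
    t.specify_input_file("input.dat")
    t.specify_output_file("output.dat")
    q.submit(t)

    while not q.empty():
        done = q.wait(5)
        if done:
            print("task", done.id, "exited with", done.return_status)

    # Workers can then be started anywhere with:  work_queue_worker -N my-project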

This WQ master is spending most of its time sending data out to workers, so it's close to the limit of its scalability:
However, this one is spending most of its time polling for results, and only a small fraction sending.  It can likely handle many more workers:

This one is spending *all* of its time either receiving data from workers (completed tasks) or sending data to workers for new tasks.  It is completely occupied:
 


Wednesday, August 29, 2018

Announcement: CCTools 7.0.4 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.4 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with some bug fixes. Among them:

  • [General] Add a --without-static-libgcc flag to the configure script for compilation on Stampede2. (Ben Tovar)
  • [WorkQueue] Consider workers across different factories. (Bo Marchman)
  • [WorkQueue] Input files from tasks that exhausted resources were not being removed from the worker. (Ben Tovar)
  • [Makeflow] Communicate cores, memory, and disk resources to SLURM, SGE, and Torque. (Nick Hazekamp)
  • [ResourceMonitor] Fix bug when computing maximum cores. (Ben Tovar)
  • [JX] Improved parsing errors and documentation. (Tim Shaffer, Douglas Thain, Ben Tovar)

Thanks go to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Bo Marchman
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Kyle Sweeney
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, August 20, 2018

DISC REU Videos 2018

Our summer REU students in the DISC program produced an impressive set of videos describing their summer research projects -- check out the playlist!


Wednesday, July 25, 2018

VC3 - Virtual Clusters at PEARC 2018


The VC3 project (virtualclusters.org) allows end users to dynamically create virtual clusters with custom software and middleware, running on top of existing national computing facilities.  Using only standard login access to computing facilities, you can deploy a computing environment specialized for a complex application, and share it with your collaborators.

Today, Prof Thain is presenting the VC3 project at the PEARC 2018 conference, on behalf of the entire VC3 team:


Check out the service online at virtualclusters.org:


We are currently recruiting new users for beta testing; please sign up here:

Tuesday, July 24, 2018

Reproducibility in Scientific Computing

"Reproducibility in Scientific Computing", an article by (recent grad) Peter Ivie recently appeared in the journal ACM Computing Surveys.  This article gives a high level overview of the technical challenges inherent in making scientific computations reproducible, along with a survey of approaches to attacking these problems. Check it out!