Tuesday, September 17, 2019

Announcement: CCTools 7.0.17 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.17 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release which adds some bug fixes. Among them:

  • [General] Fix for linking issue when using a conda environment. (Tim Shaffer)
  • [General] Fix for compilation issue with python3 and old versions of swig. (Ben Tovar)
  • [Chirp] Fix for the perl test. (Zoe Surma)
  • [Makeflow] Fix typos in documentation. (Ryker Campbell)
  • [Parrot] Fix a PATH issue with local execution. (Ben Tovar)
  • [Parrot] The LOCAL keyword in a mountlist now acts like --disable-service. (Tim Shaffer)

Thanks go to the contributors for many features, bug fixes, and tests:

  • Ryker Campbell
  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Tim Shaffer
  • Zoe Surma
  • Douglas Thain
  • Ben Tovar
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Friday, August 23, 2019

Ph.D. Defense: Chao "Charles" Zheng


Congratulations to Dr. Chao "Charles" Zheng, who defended his Ph.D. thesis on "The Challenges of Scaling Up High Throughput Workflows with Container Technology".  He will shortly be leaving for California to join Alibaba, Inc.  Here is Charles (second from right) after the successful defense:





Tuesday, August 6, 2019

Summer REU Projects

In summer 2019, REU students TJ Dasso and Eamon Marmion worked at the CCL on large-scale distributed applications.  They completed the integration between the Parsl workflow language and the Work Queue execution framework, demonstrating Python applications running on thousands of cores, and also streamlined our software installation process via Pip and Conda.  They presented their work at the summer REU poster symposium in Jordan Hall:
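
If you would like to try the integration yourself, the sketch below shows roughly what a Parsl configuration using the Work Queue executor looks like.  The import path and parameter names follow Parsl's interface as we recall it (check the Parsl documentation for the authoritative version), and the label, port, and toy app are purely illustrative:

    import parsl
    from parsl.config import Config
    from parsl.executors import WorkQueueExecutor

    # Sketch only: one Work Queue executor for all Parsl apps in this script.
    # "wq-example" and port 9123 are illustrative choices.
    config = Config(executors=[WorkQueueExecutor(label="wq-example", port=9123)])
    parsl.load(config)

    @parsl.python_app
    def double(x):
        return 2 * x

    # Each call becomes a Work Queue task executed by whatever worker connects.
    futures = [double(i) for i in range(10)]
    print([f.result() for f in futures])

Workers are then started separately, for example with work_queue_worker pointed at the master's host and port, on whatever cluster or cloud happens to be available.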



Wednesday, July 10, 2019

Ph.D. Proposal: Tim Shaffer

Congratulations to Tim Shaffer on passing the Ph.D. candidacy proposal stage:
"Proactive Storage Management for High Throughput Scientific Workloads"

Tuesday, June 11, 2019

Shrinkwrap Containers at CERN

Tim Shaffer attended the 2019 CVMFS Workshop and presented "Shrinkwrap: Creating HPC Containers", work done together with Nick Hazekamp.  Shrinkwrap is a tool that profiles applications using the CVMFS filesystem and generates a minimal container image with only the parts of the global filesystem that were actually used:


Since the LHC is currently shut down between runs, attendees were able to tour the experiment hall and see the CMS detector up close!





Friday, March 15, 2019

Announcement: CCTools 7.0.11 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.11 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release which adds several bug fixes. Among them:

  • [General] Fix out-of-date man pages.
  • [Catalog] Add merge event to catalog database. (Douglas Thain)
  • [Catalog] Fix unary operator constant predicate bug. (Tim Shaffer)
  • [Makeflow] Specify custom SGE resources. (Nick Hazekamp)
  • [Makeflow] Use scheduler universe in condor_submit_makeflow. (Ben Tovar)
  • [WorkQueue] Fix compilation with python3.7 and anaconda. (Ben Tovar)
  • [WorkQueue] Specify port ranges when creating a queue in Python; see the sketch after this list. (Ben Tovar)
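
As a rough illustration of that last item, here is what creating a queue from Python looks like.  The WorkQueue and Task calls are the standard work_queue binding; passing a [low, high] pair as the port to request a range is our reading of the release note, so treat that detail as an assumption and consult the Work Queue documentation:

    import work_queue as wq

    # A fixed port works in any version: q = wq.WorkQueue(port=9123)
    # Per the release note, a range can now be requested instead; passing a
    # [low, high] pair here is our assumption about the syntax.
    q = wq.WorkQueue(port=[9000, 9500])
    print("listening on port", q.port)

    t = wq.Task("echo hello > out.txt")
    t.specify_output_file("out.txt")      # illustrative output file
    q.submit(t)

    while not q.empty():
        t = q.wait(5)
        if t:
            print("task", t.id, "exited with", t.return_status)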

Thanks go to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, February 11, 2019

Ph.D. Proposal: Nate Kremer-Herman

Congratulations to Nate Kremer-Herman, who passed his Ph.D. proposal, titled "Troubleshooting Distributed Applications Using a Graph Representation".


Tuesday, November 20, 2018

Parallel Application Capacity Paper at Supercomputing 2018

Nate Kremer-Herman presented the paper A Lightweight Model for Right-Sizing Master-Worker Applications at the ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (Supercomputing) on November 14, 2018, in Dallas, Texas. This year marked the 30th anniversary of the Supercomputing conference.


In A Lightweight Model for Right-Sizing Master-Worker Applications, we note that when running a parallel application at scale, a resource provisioning policy should minimize over-commitment (idle resources) and under-commitment (resource contention). However, users seldom know the quantity of resources to appropriately execute their application. Even with such knowledge, over- and under-commitment of resources may still occur because the application does not run in isolation. It shares resources such as network and filesystems. We formally define the capacity of a parallel application as the quantity of resources that may effectively be provisioned for the best execution time in an environment. We present a model to compute an estimate of the capacity of master-worker applications as they run based on execution and data-transfer times.
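
The flavor of the model can be seen with a back-of-the-envelope calculation: if a task keeps a worker busy for t_exec seconds but occupies the master for only t_master seconds of data transfer, the master can keep roughly t_exec / t_master workers usefully busy.  The sketch below illustrates that intuition only; it is not the estimator from the paper, which is computed online from measured execution and transfer times:

    # Back-of-the-envelope capacity estimate for a master-worker application.
    # This is the intuition only, not the paper's online estimator.
    def estimate_capacity(avg_task_exec_time, avg_master_transfer_time):
        """avg_task_exec_time:       average seconds a task runs on a worker
           avg_master_transfer_time: average seconds the master spends per task
                                     sending inputs and receiving outputs"""
        return avg_task_exec_time / avg_master_transfer_time

    # Example: tasks run 300 s each but cost the master only 3 s of transfer
    # time, so roughly 100 workers can be kept busy.
    print(estimate_capacity(300.0, 3.0))   # -> 100.0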

Although the model for provisioning these applications is important, a key insight from the paper comes from a diagram which demonstrates how a parallel application's scale relates to its total execution time. Let's start with the smallest case first. This graph's x-axis represents the scale of a parallel application (we can assume it is the number of machines utilized for this example). The y-axis represents the total execution time of the application.

Imagine we are domain scientists running some parallel analysis tool. With a scale of 1, our runtime will obviously be the slowest since we are not making use of the parallelism of the application. So we increase our scale to 10. Lo and behold, we see a marked decrease in the total execution time of the application!


So we try scaling up again. We go for broke and leap from a scale of 10 to 500. We notice our execution time is still decreasing! So, let's increase our scale one more time.


At a scale of 1,000 we see the limit to our scalability. Our total execution time has increased from the 500 scale execution. Why? There is a cost to acquiring and maintaining resources. For instance, we might have to start a virtual machine on every computer we use for our application. Starting up a VM takes time.


What we have failed to realize, however, is that we completely missed our optimum scale! The black line of the bottom graph shows the best execution time of this application (which occurs at a scale of 100). This is a key observation from the paper: though it is possible to manually re-run a parallel application at different scales, it is highly unlikely that we will find the scale that minimizes our total execution time (the capacity of our application) unless our search for the optimum scale is exhaustive. That is an unrealistic expectation for most researchers, since what matters most is the results of the analysis, simulation, etc. To make the lives of our users easier, we have implemented a lightweight model which does the heavy lifting of finding that appropriate scale for the user.



Sunday, November 11, 2018

Workflow Algebra and JX Language at e-Science 2018

Nick Hazekamp presented the paper An Algebra for Robust Workflow Transformations and Tim Shaffer presented a poster on A First Look at the JX Workflow Language at the IEEE International Conference on eScience 2018, held October 27-November 1, 2018, in Amsterdam.

In An Algebra for Robust Workflow Transformations (paper slides) we introduce an algebra for applying and nesting different task-level transformations to a workflow. As a basis for this work, we clearly define what a task is and give its structure in JSON. Using this JSON representation, we show explicitly how tasks can be nested and give a method for deriving consistent behavior from these nested transformations. We demonstrate how this works with three use cases: multi-level nested transformations, multi-site workflow operation, and methods for using transformations to debug workflow failures. Abstract posted here:

Scientific workflows are often designed with a particular compute site in mind. As a user changes sites, the workflow needs to adjust. These changes include moving from a cluster to a cloud, updating an operating system, or investigating failures on a new cluster. As a workflow is moved, its tasks do not fundamentally change, but the steps to configure, execute, and evaluate tasks differ. When handling these changes it may be necessary to use a script to analyze execution failure or run a container to use the correct operating system. To improve workflow portability and robustness, it is necessary to have a rigorous method that allows transformations on a workflow. These transformations do not change the tasks, only the way tasks are invoked. Using technologies such as containers, resource managers, and scripts to transform workflows allows for portability, but combining these technologies can lead to complications with execution and error handling. We define an algebra to reason about task transformations at the workflow level and express it in a declarative form using JSON. We implemented this algebra in the Makeflow workflow system and demonstrate how transformations can be used for resource monitoring, failure analysis, and software deployment across three sites.
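
To make the idea concrete, here is a small sketch, written as Python dictionaries standing in for the JSON, of a task and two transformations that nest.  The field names and the container and monitor invocations are illustrative examples, not the exact schema or sandbox procedure from the paper:

    # Illustrative only: a task as JSON-like data, plus transformations that
    # rewrite the command without changing the task's logical role.
    task = {
        "command": "./simulate.py input.dat > output.dat",
        "inputs":  ["simulate.py", "input.dat"],
        "outputs": ["output.dat"],
    }

    def containerize(task, image):
        """Run the original command inside a container (invocation is illustrative)."""
        wrapped = dict(task)
        wrapped["command"] = f"singularity exec {image} /bin/sh -c '{task['command']}'"
        wrapped["inputs"] = task["inputs"] + [image]
        return wrapped

    def monitor(task, log):
        """Wrap the command with a resource monitor (flags are illustrative)."""
        wrapped = dict(task)
        wrapped["command"] = f"resource_monitor -O {log} -- {task['command']}"
        wrapped["outputs"] = task["outputs"] + [log]
        return wrapped

    # Transformations nest: the monitor observes the containerized command.
    t = monitor(containerize(task, "centos7.img"), "monitor.log")
    print(t["command"])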


In A First Look at the JX Workflow Language (paper poster) we took a look at JX and the flexibility it affords the user when describing the high-level characteristics of a workflow. Abstract posted here:

Scientific workflows are typically expressed as a graph of logical tasks, each one representing a single program along with its input and output files. This poster introduces JX (JSON eXtended), a declarative language that can express complex workloads as an assembly of sub-graphs that can be partitioned in flexible ways. We present a case study of using JX to represent complex workflows for the Lifemapper biodiversity project. We evaluate partitioning approaches across several computing environments, including ND-Condor, IU-Jetstream, and SDSC-Comet, and show that a coarse partitioning results in faster turnaround times, reduced data transfer, and lower master utilization across all three systems.
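
JX itself is JSON extended with expressions, so a large graph can be written once as a comprehension rather than enumerated by hand.  The sketch below, written in Python for consistency with the other examples, builds the kind of expanded rule list such a document describes; the rules/command/inputs/outputs layout follows Makeflow's JX conventions as we understand them, and the commands are invented:

    import json

    # Illustrative expansion of a JX-style workflow: one rule per input chunk,
    # plus a final rule that combines the partial results.  In JX proper the
    # loop below would be a comprehension inside the document itself.
    N = 4   # number of partitions (illustrative)

    rules = [
        {
            "command": f"./analyze chunk.{i} > result.{i}",
            "inputs":  ["analyze", f"chunk.{i}"],
            "outputs": [f"result.{i}"],
        }
        for i in range(N)
    ]

    rules.append({
        "command": "cat " + " ".join(f"result.{i}" for i in range(N)) + " > result.all",
        "inputs":  [f"result.{i}" for i in range(N)],
        "outputs": ["result.all"],
    })

    print(json.dumps({"rules": rules}, indent=2))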



Tuesday, October 2, 2018

Work Queue Visual Status

Check out the new Work Queue Status page by Nate Kremer-Herman.  This reveals a whole lot of information that was already reported to the global catalog in raw JSON, but was previously hard to interpret.  For any WQ application reporting itself to the global catalog (use the -N option), you get a nice display of the workers and tasks running and the total resources consumed across the application:
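
Getting your own application onto the page is just a matter of giving it a project name so that it reports to the catalog.  A minimal sketch from the Python side (the project name is of course made up):

    import work_queue as wq

    q = wq.WorkQueue(port=9123)        # 9123 is the usual default port
    q.specify_name("my-project")       # made-up project name; same idea as -N

    # Workers can then locate the master by project name instead of host/port:
    #   work_queue_worker -N my-project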

What's more, a pie chart shows a breakdown of how the master is spending its time: sending data to workers, receiving data from workers, and polling (waiting) for workers to report are the main categories.  This tells you at a glance what the bottleneck of the system is.

This WQ master is spending most of its time sending data out to workers, so it's close to the limit of its scalability:
However, this one is spending most of its time polling for results, and only a small fraction sending.  It can likely handle many more workers:

This one is spending *all* of its time either receiving data from workers (completed tasks) or sending data to workers for new tasks.  It is completely occupied:
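
The bottleneck reasoning behind these examples fits in a few lines.  Here is a toy sketch, with invented numbers and field names, of how one might classify a master from the time fractions it reports:

    # Toy illustration of the reasoning above: whichever state dominates the
    # master's time is the bottleneck.  Numbers and names here are invented.
    def bottleneck(sending, receiving, polling):
        states = {"sending": sending, "receiving": receiving, "polling (idle)": polling}
        return max(states, key=states.get)

    # Mostly polling: this master could handle many more workers.
    print(bottleneck(sending=120.0, receiving=30.0, polling=600.0))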