Sunday, November 11, 2018

Workflow Algebra and JX Language at e-Science 2018

Nick Hazekamp presented the paper An Algebra for Robust Workflow Transformations, and Tim Shaffer presented a poster on A First Look at the JX Workflow Language, at the IEEE International Conference on eScience 2018, held October 27 - November 1, 2018 in Amsterdam.

In An Algebra for Robust Workflow Transformations (paper, slides) we introduce an algebra for applying and nesting task-level transformations in a workflow. As a basis for this work, we precisely define what a task is and give its structure in JSON. Using this JSON representation, we show how tasks can be nested and give a method for deriving consistent behavior from nested transformations. We demonstrate this with three use cases: multi-level nested transformations, multi-site workflow operation, and using transformations to debug workflow failures. Abstract posted here:

Scientific workflows are often designed with a particular compute site in mind. As a user changes sites, the workflow needs to adjust. These changes include moving from a cluster to a cloud, updating an operating system, or investigating failures on a new cluster. As a workflow is moved, its tasks do not fundamentally change, but the steps to configure, execute, and evaluate tasks differ. When handling these changes, it may be necessary to use a script to analyze execution failure or run a container to use the correct operating system. To improve workflow portability and robustness, it is necessary to have a rigorous method that allows transformations on a workflow. These transformations do not change the tasks, only the way tasks are invoked. Using technologies such as containers, resource managers, and scripts to transform workflows allows for portability, but combining these technologies can lead to complications with execution and error handling. We define an algebra to reason about task transformations at the workflow level and express it in a declarative form using JSON. We implemented this algebra in the Makeflow workflow system and demonstrate how transformations can be used for resource monitoring, failure analysis, and software deployment across three sites.
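To make the idea concrete, here is a rough sketch in Python (illustrative only; the field names and wrapping details are a simplification, not the paper's exact formalism) of a task as JSON-style data and a transformation as a function from task to task, so that transformations compose by nesting:

    import copy

    def make_task(command, inputs, outputs):
        # A task: a command plus the files it consumes and produces.
        return {"command": command, "inputs": inputs, "outputs": outputs}

    def containerize(image):
        # Transformation: run the task's command inside a container image.
        # The image becomes an additional input of the task.
        def transform(task):
            t = copy.deepcopy(task)
            t["command"] = "singularity exec %s sh -c '%s'" % (image, t["command"])
            t["inputs"] = t["inputs"] + [image]
            return t
        return transform

    def monitor(logfile):
        # Transformation: wrap the command with the resource_monitor tool.
        # The log becomes an additional output of the task.
        def transform(task):
            t = copy.deepcopy(task)
            t["command"] = "resource_monitor -O %s -- %s" % (logfile, t["command"])
            t["outputs"] = t["outputs"] + [logfile]
            return t
        return transform

    task = make_task("bwa mem ref.fa reads.fq > out.sam",
                     ["ref.fa", "reads.fq"], ["out.sam"])

    # Applying containerize() after monitor() nests the monitor inside the
    # container; reversing the order of application reverses the nesting.
    nested = containerize("centos7.img")(monitor("usage.log")(task))
    print(nested["command"])

The key property is that each transformation yields a valid task again, so stacks of containers, monitors, and debugging scripts can be reasoned about uniformly.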


In A First Look at the JX Workflow Language (paper, poster) we take a look at JX and the flexibility it affords users when describing the high-level characteristics of a workflow. Abstract posted here:

Scientific workflows are typically expressed as a graph of logical tasks, each one representing a single program along with its input and output files. This poster introduces JX (JSON eXtended), a declarative language that can express complex workloads as an assembly of sub-graphs that can be partitioned in flexible ways. We present a case study of using JX to represent complex workflows for the Lifemapper biodiversity project. We evaluate partitioning approaches across several computing environments, including ND-Condor, IU-Jetstream, and SDSC-Comet, and show that a coarse partitioning results in faster turnaround times, reduced data transfer, and lower master utilization across all three systems.
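For a flavor of what JX buys you: JX extends JSON with expressions such as list comprehensions, so a family of similar rules can be written once rather than enumerated. The Python sketch below (illustrative; it mimics, rather than uses, JX syntax, and the script name simulate.py is hypothetical) generates the plain-JSON rule list that a JX comprehension over range(4) would expand to:

    import json

    # Expand one rule template into four concrete rules, the way a JX
    # comprehension like "... for N in range(4)" would.
    rules = [
        {
            "command": "./simulate.py %d > output.%d.txt" % (n, n),
            "inputs": ["simulate.py"],
            "outputs": ["output.%d.txt" % n],
        }
        for n in range(4)
    ]

    print(json.dumps({"rules": rules}, indent=2))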



Tuesday, October 2, 2018

Work Queue Visual Status

Check out the new Work Queue Status page by Nate Kremer-Herman.  It reveals a wealth of information that was already reported to the global catalog as raw JSON, but was previously hard to interpret.  For any WQ application that reports itself to the global catalog (use the -N option), you get a nice display of the workers and tasks running and the total resources consumed across the application:

What's more, a pie chart shows a breakdown of how the master is spending its time: sending data to workers, receiving data from workers, and polling (waiting) for workers to report are the main categories.  This tells you at a glance what the bottleneck of the system is.

This WQ master is spending most of its time sending data out to workers, so it's close to the limit of its scalability:
However, this one is spending most of its time polling for results, and only a small fraction sending.  It can likely handle many more workers:

This one is spending *all* of its time either receiving data from workers (completed tasks) or sending data to workers for new tasks.  It is completely occupied:
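If you want your own application to show up on the status page, the master just needs to advertise itself under a project name. Here is a minimal sketch using the Work Queue Python bindings (the project name "myproject" and the echo task are examples, not required names):

    import work_queue as wq

    # Create a master and advertise it to the global catalog under a
    # project name -- the Python equivalent of the -N command-line option.
    q = wq.WorkQueue(port=wq.WORK_QUEUE_DEFAULT_PORT)
    q.specify_name("myproject")

    t = wq.Task("echo hello > out.txt")
    t.specify_output_file("out.txt")
    q.submit(t)

    # Workers started with `work_queue_worker -N myproject` find the
    # master via the catalog; wait() polls for completed tasks.
    while not q.empty():
        task = q.wait(5)
        if task:
            print("task %d returned %d" % (task.id, task.return_status))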


Wednesday, August 29, 2018

Announcement: CCTools 7.0.4 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.4 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with some bug fixes. Among them:

  • [General] Added --without-static-libgcc flag to the configure script for compilation on Stampede2. (Ben Tovar)
  • [WorkQueue] Consider workers across different factories. (Bo Marchman)
  • [WorkQueue] Input files from tasks that exhausted resources were not being removed from the worker. (Ben Tovar)
  • [Makeflow] Communicate cores, memory, and disk resources to SLURM, SGE, and Torque. (Nick Hazekamp)
  • [ResourceMonitor] Fix bug when computing maximum cores. (Ben Tovar)
  • [JX] Improved parsing errors and documentation. (Tim Shaffer, Douglas Thain, Ben Tovar)

Thanks go to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Bo Marchman
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Kyle Sweeney
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, August 20, 2018

DISC REU Videos 2018

Our summer REU students in the DISC program produced an impressive set of videos describing their summer research projects -- check out the playlist!


Wednesday, July 25, 2018

VC3 - Virtual Clusters at PEARC 2018


The VC3 project (virtualclusters.org) allows end users to dynamically create virtual clusters with custom software and middleware, running on top of existing national computing facilities.  Using only standard login access to computing facilities, you can deploy a computing environment specialized for a complex application, and share it with your collaborators.

Today, Prof. Thain is presenting the VC3 project at the PEARC 2018 conference on behalf of the entire VC3 team:


Check out the service online at virtualclusters.org:


We are currently recruiting new users for beta testing; please sign up here:

Tuesday, July 24, 2018

Reproducibility in Scientific Computing

"Reproducibility in Scientific Computing", an article by (recent grad) Peter Ivie recently appeared in the journal ACM Computing Surveys.  This article gives a high level overview of the technical challenges inherent in making scientific computations reproducible, along with a survey of approaches to attacking these problems. Check it out!


Tuesday, July 10, 2018

Halfway Through 2018 Summer REU


We are a little more than halfway through the 2018 edition of our summer Data Intensive Scientific Computing REU program at the University of Notre Dame.  This summer, our students are working on projects in network science, genome analysis, workflow systems, data visualization, and more.  At this point in the summer, students are finalizing their results and starting to work on posters and videos to present what they have learned for our summer research symposium.

Thursday, July 5, 2018

Announcement: CCTools 7.0.0 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.0 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, Umbrella, Prune, SAND, All-Pairs, Weaver, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a major release that adds several features and bug fixes. Among them:

  • [General] Catalog updates are now compressed and may be sent via TCP. (Douglas Thain, Nick Hazekamp, Ben Tovar)
  • [JX] Bug fixes to JX, a superset of JSON for dynamically describing workflows; see doc/jx-tutorial.html. (Tim Shaffer, Douglas Thain)
  • [Makeflow] Formally define and implement hooks for workflow rules. Hooks may be used to wrap rules with containers (e.g. Singularity), a monitoring tool, etc.  (Nick Hazekamp, Tim Shaffer)
  • [Makeflow] Rule execution as Amazon Lambda functions and S3 objects. (Kyle Sweeney, Douglas Thain)
  • [Makeflow] Efficient shared file system access. (Nick Hazekamp)
  • [Makeflow] Several bug fixes for rules executing in Mesos. (Chao Zheng)
  • [ResourceMonitor] Several bug fixes. (Ben Tovar)
  • [WorkQueue] Add user-defined features to workers and tasks. (Nate Kremer-Herman)
  • [WorkQueue] Fixes for python3 support. (Ben Tovar)

Thanks go to the contributors for many features, bug fixes, and tests:

  • Nathaniel Kremer-Herman
  • Nicholas Hazekamp
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar
  • Kyle Sweeney
  • Chao Zheng

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Monday, June 11, 2018

Papers at ScienceCloud Workshop

CCL grad student Kyle Sweeney is presenting two papers at the ScienceCloud/IWAC workshop at HPDC 2018.


Early Experience Using Amazon Batch for Scientific Workflows reports some of our practical experience using Amazon Batch for scientific workflows, comparing the performance of plain EC2 virtual machines, Amazon Batch, and the Work Queue system overlaid on top of virtual machines.

Efficient Integration of Containers into Scientific Workflows explores different methods of composing container images into complex workflows, in order to make efficient use of shared filesystems and data movement.

Monday, June 4, 2018

CCL Internships at CERN and Alibaba

Two of our CCL grad students are off to internships this summer:

Nick Hazekamp will be in Geneva at CERN working with Jakob Blomer and the CVMFS group to develop methods for migrating high throughput workloads to HPC centers, while still making use of the global CVMFS filesystem.

Charles Zheng will be at Alibaba working on high throughput workloads running on container orchestration systems.