Cooperative Computing Lab News

Friday, December 18, 2020

CCTools version 7.1.12 released

 The Cooperative Computing Lab is pleased to announce the release of version 7.1.12 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a bug fix release:

  • [Batch interface] Adds sge_submit_workers to installed scripts directory. (Ben Tovar)
  • [Batch interface] Adds LSF as a batch type. (Douglas Thain)


Thanks to the contributors for many features, bug fixes, and tests:

  • Ben Tovar
  • Cami Carballo
  • Douglas Thain
  • Nathaniel Kremer-Herman
  • Thanh Son Phung
  • Tim Shaffer



Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Posted by Benjamin Tovar at 8:16 AM

Tuesday, December 15, 2020

OpenTopography + EEMT + Makeflow

The OpenTopography service provides online access to geospatial data and computational tools in support of earth sciences.  The Effective Energy and Mass Transfer (EEMT) tool allows for computations of energy transfer in the Earth's critical zone, taking into account topography, vegetation, weather, and so forth.  To scale these computations up to large clusters, the CCL's Makeflow and Work Queue frameworks are employed to construct large scale parallel workflows at the touch of a button from the OpenTopography website. 

Source: Tyson Swetnam, University of Arizona

Posted by Douglas Thain at 10:25 AM

Analyzing Agriculture with Work Queue

The Field Scanalyzer at the University of Arizona is a massive robot that uses sensors, cameras, and GPS devices to collect vast quantities of agricultural data from crop fields.  In the background, distributed computing and deep learning techniques are used to understand and improve agricultural efficiencies in hot, dry climates.  Processing all this data requires reliable computation on large clusters: the PhytoOracle software from the Lyons Lab at UA makes this possible, building on the Work Queue software from the Cooperative Computing Lab at Notre Dame.

- Source: Eric Lyons University of Arizona

 

Posted by Douglas Thain at 9:15 AM

Now Recruiting Students

Research Opportunities in the Cooperative Computing Lab

Join the CCL team and work on challenging problems in the realm of parallel and distributed systems! We work closely with collaborators in physics, molecular dynamics, machine learning, and other fields to build systems that scale to tens of thousands of cores on national infrastructure such as clusters, clouds, and grids. We publish open source software that is used around the world.

We currently have positions for undergraduate, M.S., and Ph.D. students.  For more information, see our lab web page.  To apply, send a resume and brief email cover letter to Prof. Douglas Thain (dthain@nd.edu).
 



Posted by Douglas Thain at 7:22 AM

Monday, October 12, 2020

CCTools version 7.1.9 released

The Cooperative Computing Lab is pleased to announce the release of version 7.1.9 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a bug fix release with some new features. Among them:

  • [Batch] Improve missing jobs detection on slurm, torque, sge, pbs. (Ben Tovar)
  • [Batch] WALL_TIME as a resource for slurm. (Ben Tovar)
  • [Makeflow] Several fixes for nested workflows. (Ben Tovar)
  • [Makeflow] Warn on redefinition of resources. (Ben Tovar)
  • [Resource Monitor] --measure-only flag when limits are specified. (Ben Tovar)
  • [Work Queue] API to define minimum resources for a category. (Ben Tovar)


Thanks to the contributors for many features, bug fixes, and tests:

  • Ben Tovar
  • Cami Carballo
  • Douglas Thain
  • HDsky
  • Nathaniel Kremer-Herman
  • Stefano Mangiola
  • Tanner Juedeman
  • Tim Shaffer


Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!

Posted by Benjamin Tovar at 7:36 AM

Tuesday, September 22, 2020

Autoscaling HTC at CLUSTER 2020

Recent CCL graduate Charles Zheng, Ph.D., presented his paper "Autoscaling High Throughput Workloads on Container Orchestrators" at the CLUSTER 2020 conference in September 2020.

In this paper, we explore the problem of how many machines to acquire for a high-throughput workload of known size when running on a container orchestrator like Kubernetes.

Most approaches to autoscaling are designed to scale up web servers or other services that respond to unpredictable external requests.  Generally, the autoscaler watches a metric such as CPU utilization and scales resources up or down to achieve a target like 90% utilization.

However, when running a high throughput workload of, say, one thousand simulation runs, the situation is different.  First, high CPU utilization is the norm: the simulator is likely to peg the CPU at 100% utilization, and adding or removing nodes isn't going to change that.  Second, the offered load is not a mystery: we control the workload, so we have some idea of its total size, or at least the number of jobs currently in the queue.

To address this, Charles built a High Throughput Autoscaler (HTA) that interfaces the Makeflow workflow system with the Kubernetes container orchestrator:
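While the HTA described in the paper accounts for cluster constraints in more detail, the core sizing decision can be sketched in a few lines of Python. The function name and parameters here are illustrative, not the HTA's actual interface:

```python
import math

def machines_to_request(jobs_in_queue, cores_per_job,
                        cores_per_machine, max_machines):
    # Unlike a utilization-driven autoscaler, we size the pool
    # directly from the known queue length: enough machines to run
    # every queued job at once, capped by a configured maximum.
    cores_needed = jobs_in_queue * cores_per_job
    machines = math.ceil(cores_needed / cores_per_machine)
    return min(machines, max_machines)
```

For a workload of 1000 single-core simulations on 16-core machines with a cap of 50 machines, this requests all 50; as the queue drains below 800 jobs, the pool shrinks accordingly.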

To learn more, check out the paper and accompanying video:

Chao Zheng, Nathaniel Kremer-Herman, Tim Shaffer, and Douglas Thain, Autoscaling High Throughput Workloads on Container Orchestrators, IEEE Conference on Cluster Computing, pages 1-10, September, 2020. 




Posted by Douglas Thain at 9:48 AM

Monday, August 24, 2020

CCTools version 7.1.7 released

 

The Cooperative Computing Lab is pleased to announce the release of version 7.1.7 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download


This is a bug fix release with some new features. Among them:

  • [Batch] Set number of MPI processes for SLURM. (Ben Tovar)
  • [General] Use the right signature when overriding gettimeofday. (Tim Shaffer)
  • [Resource Monitor] Add context-switch count to final summary. (Ben Tovar)
  • [Resource Monitor] Fix kbps to Mbps typo in final summary. (Ben Tovar)
  • [WorkQueue] Update example apps to python3. (Douglas Thain)

Thanks to the contributors for many features, bug fixes, and tests:

  • Ben Tovar
  • Cami Carballo
  • Douglas Thain
  • Nathaniel Kremer-Herman
  • Tanner Juedeman
  • Tim Shaffer

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum


Enjoy!






Posted by Benjamin Tovar at 9:25 AM

Friday, August 14, 2020

Resource usage histograms for Work Queue using python's pandas+matplotlib

Work Queue is a framework for writing and executing master-worker applications. A master process, which can be written in Python, Perl, or C, generates tasks that are then executed remotely by worker processes. You can learn more about Work Queue here.

Work Queue can automatically measure the resources, such as cores, memory, disk, and network bandwidth, used by each task. In Python, this is enabled as:

import work_queue as wq
q = wq.WorkQueue(port=9123)
q.enable_monitoring()


The resources measured are available as part of the task structure:

# wait for 5 seconds for a task to finish
t = q.wait(5)
if t:
    print("Task {id} measured memory: {memory} MB" 
            .format(id=t.id, memory=t.resources_measured.memory))

The resources measured are also written to Work Queue's transaction log. This log can be enabled when declaring the master's queue:

import work_queue as wq
q = wq.WorkQueue(port=9123, transactions_log='my_wq_trans.log')
q.enable_monitoring()

This log is also generated by Makeflow when using Work Queue as a batch system (-Twq).

The per-task resource information appears as a JSON object in the transactions marked as DONE end-state exit-code resource-exhausted resources-measured. Here is an example of what a DONE transaction looks like:

1595431788501342 10489 TASK 1 DONE SUCCESS  0  {} {"cores": 2, ...}

With a regular expression incantation, we can extract the resource information into Python's pandas. Say, for example, that we are interested in the memory and bandwidth distribution among the executed tasks. We can read these resources as follows:

import json
import re
import pandas as pd
import matplotlib.pyplot as plt

# the list of the resources we are interested in
resources = 'memory bandwidth'.split()
df = pd.DataFrame(columns=resources)

input_file = 'my_wq_trans.log'

with open(input_file) as input:
    for line in input:
        # timestamp master-pid TASK id (continue next line)
        # DONE SUCCESS exit-code exceeded measured
        m = re.match(r'\d+\s+\d+\s+TASK\s+\d+\s+'
                     r'DONE\s+SUCCESS\s+0\s+\{\}\s+(\{.*\})\s*$', line)
        if not m:
            continue

        # the resources are captured by the only parenthesized
        # group in the pattern:
        s = json.loads(m.group(1))

        # append the new resources to the pandas data frame.
        # Each resource is represented as a JSON array of
        # [value, "unit"], such as [1000, "MB"],
        # so we only take the first element of the array:
        df.loc[len(df)] = list(s[r][0] for r in resources)


For a quick view, we can directly use pandas' histogram method:

df.hist()
plt.show()
 
 

However, we can use matplotlib's subplot facilities to add titles, units, etc. to the histograms:

# size width x height in inches
fig = plt.figure(figsize=(5,2))

# 1 row, 2 columns, 1st figure of the array
mem = plt.subplot(121)
mem.set_title('memory in MB')
mem.set_ylabel('task count')
mem.hist(df['memory'], range=(0,100))

# 1 row, 2 columns, 2nd figure of the array
mbp = plt.subplot(122)
mbp.set_title('bandwidth in Mbps')
mbp.hist(df['bandwidth'], range=(0,1200))

fig.savefig(input_file + '.png') 
 
 
 
 
 
 
Posted by Benjamin Tovar at 8:51 AM

Thursday, August 13, 2020

Tim Shaffer Awarded DOE Fellowship

CCL grad student Tim Shaffer was recently awarded a DOE SCGSR fellowship for his work titled "Enabling Distributed HPC for Loosely‐Coupled Dataflow Applications".  He will be working with Ian Foster and Kyle Chard at Argonne National Lab on data intensive applications that combine the Parsl system from Argonne and the Work Queue runtime from Notre Dame.  Congratulations Tim!


Posted by Douglas Thain at 12:35 PM

WRENCH Simulation of Work Queue

Our colleagues Henri Casanova (U Hawaii) and Rafael Ferreira da Silva (USC), along with their students, have recently published a paper highlighting their work in the WRENCH project.  They have constructed a series of simulators that model the behavior of distributed systems, for the purposes of both performance prediction and education.

In their paper "Developing accurate and scalable simulators of production workflow management systems with WRENCH", they describe simulators corresponding to the Pegasus workflow management system and to our own Work Queue distributed execution framework.

Of course, any simulation is an imperfect approximation of a real system, but what's interesting about the WRENCH simulations is that they allow us to verify the basic assumptions and behavior of a software implementation.  In this example, the real system and the simulation show the same overall behavior, except that the real system has a stair-step behavior:


So, does that mean the simulation is "wrong"?  Not really!  In this case, the software is showing an undesirable behavior that is due either to incorrect logging or possibly a convoy effect.  In short, the simulation helps us to find a bug relative to the "ideal" design.  Nice!

https://www.sciencedirect.com/science/article/pii/S0167739X19317431

Posted by Douglas Thain at 12:20 PM

Thursday, July 23, 2020

Coffea + Work Queue Presentation at PyHEP 2020

CCL grad student Cami Carballo gave an interactive notebook talk on scaling up data analysis workloads at the PyHEP 2020 conference on Python for high energy physics.

This Python notebook (Integrating-Coffea-and-WorkQueue.ipynb) demonstrates the combination of the Coffea data analysis framework running on the Work Queue distributed execution system, all packaged up within a Jupyter notebook.


A particular challenge in cluster environments is making sure that the remote execution nodes have the proper Python execution environment needed by the end user.  Scientific applications change quickly, and so it's important to have exactly the right Python interpreter along with the precise set of libraries (Python and native) installed.  To accomplish this, the Coffea-WorkQueue module performs a static analysis of the dependencies needed by an application, and ships them along with the remote tasks, deploying them as needed so that multiple independent applications can run simultaneously on the cluster.
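The Coffea-WorkQueue module has its own packaging machinery; as a rough illustration of the static-analysis step, Python's standard modulefinder can enumerate the modules a script would import without running it (the helper function here is hypothetical, not part of the module's API):

```python
from modulefinder import ModuleFinder

def find_imported_modules(script_path):
    # Statically analyze the script's bytecode to discover every
    # module it would import, without executing the script itself.
    finder = ModuleFinder()
    finder.run_script(script_path)
    return sorted(finder.modules)
```

The resulting module list is the kind of information a packaging tool can use to decide which libraries must be shipped alongside each remote task.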


Coffea + Work Queue is under active development as we continue to tune and scale the combined system.


Posted by Douglas Thain at 9:55 AM

Troubleshooting at PEARC 2020

CCL grad student Nate Kremer-Herman presented his work on troubleshooting distributed systems at the PEARC 2020 conference:

  • Nathaniel Kremer-Herman and Douglas Thain, Log Discovery for Troubleshooting Open Distributed Systems with TLQ, Practice and Experience of Advanced Research Computing (PEARC), July, 2020. 
Abstract:

Troubleshooting a distributed system can be incredibly difficult. It is rarely feasible to expect a user to know the fine-grained interactions between their system and the environment configuration of each machine used in the system. Because of this, work can grind to a halt when a seemingly trivial detail changes. To address this, there is a plethora of state-of-the-art log analysis tools, debuggers, and visualization suites. However, a user may be executing in an open distributed system where the placement of their components are not known before runtime. This makes the process of tracking debug logs almost as difficult as troubleshooting the failures these logs have recorded because the location of those logs is usually not transparent to the user (and by association the troubleshooting tools they are using). We present TLQ, a framework designed from first principles for log discovery to enable troubleshooting of open distributed systems. TLQ consists of a querying client and a set of servers which track relevant debug logs spread across an open distributed system. Through a series of examples, we demonstrate how TLQ enables users to discover the locations of their system’s debug logs and in turn use well-defined troubleshooting tools upon those logs in a distributed fashion. Both of these tasks were previously impractical to ask of an open distributed system without significant a priori knowledge. We also concretely verify TLQ’s effectiveness by way of a production system: a biodiversity scientific workflow. We note the potential storage and performance overheads of TLQ compared to a centralized, closed system approach.

Posted by Douglas Thain at 9:27 AM

Container Management at IPDPS 2020

CCL grad student Tim Shaffer recently presented his work on container management at IPDPS 2020:

Container technologies are seeing wider use at advanced computing facilities for managing highly complex applications that must execute at multiple sites. However, in a distributed high throughput computing setting, the unrestricted use of containers can result in the container explosion problem. If a new container image is generated for each variation of a job dispatched to a site, shared storage is soon exceeded. On the other hand, if a single large container image is used to meet multiple needs, the size of that container may become a problem for storage and transport. To address this problem, we observe that many containers have an internal structure generated by a structured package manager, and this information could be used to strategically combine and share container images. We develop LANDLORD to exploit this property and evaluate its performance through a combination of simulation studies and empirical measurement of high energy physics applications.
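The LANDLORD algorithm evaluated in the paper is more sophisticated; as a much-simplified sketch of the underlying idea, one can greedily merge jobs' package requirements into shared images whenever a merged image stays under a size limit (the function name and the package-count size metric are illustrative assumptions):

```python
def pack_into_images(package_sets, max_packages):
    # Greedy sketch: reuse an existing image when adding this job's
    # packages keeps it under the limit; otherwise start a new image.
    # Fewer images means more sharing of shared storage; smaller
    # images are cheaper to store and transport.
    images = []
    for required in package_sets:
        for image in images:
            if len(image | required) <= max_packages:
                image |= required
                break
        else:
            images.append(set(required))
    return images
```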

  • Tim Shaffer, Nicholas Hazekamp, Jakob Blomer, and Douglas Thain, "Solving the Container Explosion Problem for Distributed High Throughput Computing" International Parallel and Distributed Processing Symposium, May, 2020.




Posted by Douglas Thain at 9:25 AM

Monday, June 15, 2020

CCTools version 7.1.6 released

The Cooperative Computing Lab is pleased to announce the release of version 7.1.6 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a bug fix release with some new features. Among them:

  • [Resource Monitor] Fix race condition that caused an overhead for very short running processes. (Ben Tovar)
  • [WorkQueue] Efficient recursive transfer of directories. (Douglas Thain)
  • [WorkQueue] Several work_queue_graph_log bug fixes. (Cami Carballo)

Thanks to the contributors for many features, bug fixes, and tests:

  • Cami Carballo
  • Nathaniel Kremer-Herman
  • Tim Shaffer
  • Douglas Thain
  • Ben Tovar

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!
Posted by Benjamin Tovar at 10:23 AM

Thursday, May 14, 2020

REU Project: Coffea + Work Queue

This spring, undergraduate researchers Zoe Surma and Emily Strout worked to integrate the Coffea data analysis framework for high energy physics with the Work Queue distributed execution framework from the CCL.  Zoe presented initial results from testing at the Coffea users' meeting last week.  We are seeing some good scaling behavior, but some tuning is still needed to reach large scale.  Many thanks to Lindsey Gray and the Coffea team for help and support.  The next step is to get this up and running on the CMS HTCondor cluster at Notre Dame!


Posted by Douglas Thain at 7:39 AM

Monday, May 4, 2020

CCTools version 7.1.5 released

The Cooperative Computing Lab is pleased to announce the release of version 7.1.5 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with some new features and bug fixes. Among them:

  • [General]   Support scripts to execute tasks inside python environments. (Tim Shaffer, T.J. Dasso)
  • [General]   Fix problem with python 2.7 bindings and unicode support. (Ben Tovar)
  • [Parrot]    Kill all processes when initial ptrace attachment fails. (Ben Tovar)
  • [WorkQueue] Adds work_queue_factory python interface. (Tim Shaffer)

Thanks to the contributors for many features, bug fixes, and tests:

  • Cami Carballo
  • T.J. Dasso
  • Nathaniel Kremer-Herman
  • Tim Shaffer
  • Emily Strout
  • Zoe Surma
  • Douglas Thain
  • Ben Tovar
  • Yifan Yu

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!
Posted by Benjamin Tovar at 8:10 AM

Tuesday, April 7, 2020

CCTools version 7.1.2 released

The Cooperative Computing Lab is pleased to announce the release of version 7.1.2 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a bug fix release:

  • [Batch interface] Handle new date format in HTCondor for Makeflow and Work Queue factory. (Greg Thain)

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!
Posted by Benjamin Tovar at 11:14 AM

Tuesday, March 17, 2020

CCTools 7.1.0 released

The Cooperative Computing Lab is pleased to announce the release of version 7.1.0 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with some new features and bug fixes. Among them:

  • [General]   Documentation available at https://cctools.readthedocs.io (Ben Tovar, Douglas Thain)
  • [General]   Installation via conda: conda install -c conda-forge ndcctools (Ben Tovar)
  • [General]   Installation via spack: spack install cctools (Tanner Juedeman)
  • [General]   Several fixes for the batch job interface -T. (Ben Tovar)
  • [JX]        New template("{VAR} ...") function to construct strings. (Tim Shaffer)
  • [JX]        Improved parsing. (Tim Shaffer, Douglas Thain)
  • [Makeflow]  Support for sub-makeflows. (Douglas Thain)
  • [Makeflow]  Wrappers and hooks facility clean-up. (Nicholas Hazekamp, Tim Shaffer)
  • [WorkQueue] Fix status connections being counted as workers. (Ben Tovar)
  • [Resource Monitor] Measurements of single python functions. (Ben Tovar)

Thanks to the contributors for many features, bug fixes, and tests:

Cami Carballo
T.J. Dasso
Nathaniel Kremer-Herman
Nicholas Hazekamp
Tanner Juedeman
Tim Shaffer
Emily Strout
Zoe Surma
Douglas Thain
Ben Tovar
Yifan Yu

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!
Posted by Benjamin Tovar at 9:05 AM

Wednesday, January 29, 2020

Announcement: CCTools version 7.0.22 released

The Cooperative Computing Lab is pleased to announce the release of version 7.0.22 of the Cooperative Computing Tools including Parrot, Chirp, JX, Makeflow, WorkQueue, and other software.

The software may be downloaded here:
http://ccl.cse.nd.edu/software/download

This is a minor release with some new features and bug fixes. Among them:

  • [WorkQueue] Worker warning when files cannot be executed in the scratch directory. (Ben Tovar)
  • [WorkQueue] Status connections are no longer counted as available workers. (Ben Tovar)
  • [Makeflow] Fix issue with using quotes ("...",'...') when specifying files. (Ben Tovar)
  • [Makeflow] Fix makeflow_monitor race condition on partially written log lines. (Ben Tovar)


Thanks to the contributors for many features, bug fixes, and tests:

Camila Carballo
T.J. Dasso
Nathaniel Kremer-Herman
Nicholas Hazekamp
Tanner Juedeman
Ryker McIntyre
Tim Shaffer
Francis Schickel
Zoe Surma
Douglas Thain
Ben Tovar
Yifan Yu

Please send any feedback to the CCTools discussion mailing list:

http://ccl.cse.nd.edu/community/forum

Enjoy!
Posted by Benjamin Tovar at 9:20 AM

About the CCL

At the University of Notre Dame, we design software that enables computing on thousands of machines at once in order to enable new discoveries through computing in fields such as physics, chemistry, bioinformatics, biometrics, and data mining.

See our main web site for software, publications, and much more information.

