[Storage-research-list] Fwd: FW: Draft call for (more) nominations -- comments solicited

John Bent johnbent at lanl.gov
Wed Apr 20 12:32:18 EDT 2011


All,

Two things:

1) You have all been subscribed to hec-fsio at lanl.gov.  I hope that's OK.
If you don't want to be subscribed, or if you'd prefer a different email
address, and it's easy to figure out how, pls interact with majordomo to
make the change yourself.  :)  If it's a pain, pls let me know and I'll
make any desired changes.
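For reference, Majordomo takes its commands in the body of a plain-text email sent to the list server. Assuming the server address is majordomo@lanl.gov (the usual convention for a list hosted at lanl.gov, but not confirmed here), an unsubscribe request would look something like:

```text
To: majordomo@lanl.gov
Subject: (ignored by majordomo)

unsubscribe hec-fsio
end
```

To switch addresses, send `unsubscribe hec-fsio old@example.org` followed by `subscribe hec-fsio new@example.org` in the same body; a body containing just `help` returns the full command list.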

2) Pls see the forwarded message below about the very exciting new NSF
PRObE testbed: about 2200 nodes split into three clusters available for
research.  Submit a proposal, get accepted, and gain access to very large
resources.  One unique aspect of this testbed is that the committee is
seeking cool research proposals that actually will __intentionally break
hardware__ in order to study failure models and resilience.  [Some of
you are on multiple mailing lists to which this message was sent.
Apologies for any redundancy in your inboxes.]

John

From: Garth Gibson [mailto:garth at cs.cmu.edu]
Sent: Monday, April 18, 2011 9:56 AM
To: The PRObE Management Group
Subject: Re: Draft call for (more) nominations -- comments solicited

CALL FOR NOMINATIONS:

Steering and Project Selection Committee for NSF PRObE testbed (http://newmexicoconsortium.org/probe)

The NSF-funded Parallel Reconfigurable Observational Environment (PRObE) facility will be making thousands of computers (http://newmexicoconsortium.org/probe/machines) available to systems researchers for dedicated use in experiments that are not possible or compelling at a smaller scale.

At full production scale, PRObE will provide at least two 1024-node clusters, one 200-node cluster, and some smaller machines with extreme core counts and bleeding-edge technology.  The first of these clusters is being constructed now and will be available this year.  The large clusters are retired equipment donated by DOE national laboratories.

Researchers will have complete control of the hardware (they can replace all levels of software, including the OS kernel, and inject both hardware and software failures) while running experiments with dedicated resources for days or perhaps weeks.

The PRObE remote access environment will be based on the Emulab testbed-management software developed by the University of Utah (http://www.emulab.net).

PRObE is targeted at the needs of systems researchers in three communities:
- high-end or high performance computing, often publishing in the Supercomputing conference (SC),
- data-intensive scalable computing, often publishing in the Operating System Design and Implementation conference (OSDI), and
- data and storage systems for both, often publishing in the File and Storage Technologies conference (FAST).

We seek nominations for the steering and project selection committee advising PRObE leadership on policies and proposals for allocation of PRObE facilities to research projects.  We seek leading systems researchers with experience and continuing interest in large scale research experiments and publishing.  In addition to reviewing and advising PRObE leadership on policies and strategic decisions, this committee will allocate resources to proposed projects it deems most compelling.

Please nominate, on or before May 8, 2011, qualified candidates using the web form:
http://newmexicoconsortium.org/probe/committee-nominations

Nominated candidates will be elected using a web voting mechanism on the same web site between May 9 and May 22, 2011.

PRObE is a collaborative effort by the New Mexico Consortium (NMC), Los Alamos National Laboratory, Carnegie Mellon University, and the University of Utah. It is housed at NMC in the Los Alamos Research Park.

PRObE's leadership team includes:
- Gary Grider, Los Alamos National Laboratory
- Andree Jacobson, New Mexico Consortium
- Katharine Chartrand, New Mexico Consortium
- Garth Gibson, Carnegie Mellon University and Panasas Inc.
- Robert Ricci, University of Utah
-- 
Thanks,

John 