<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
[Apologies if you got multiple copies of this email.]<br>
<br>
================================================= <br>
CALL FOR PAPERS<br>
ESSA 2023: 4th International Workshop on Extreme-Scale
Storage and Analysis<br>
(Formerly HPS: International Workshop on High
Performance Storage)<br>
<br>
Held in conjunction with IPDPS 2023 - May 2023, St.
Petersburg, Florida, USA<br>
<br>
Submission website: <a href="https://ssl.linklings.net/conferences/ipdps/">https://ssl.linklings.net/conferences/ipdps/</a>
<br>
Submission deadline: January 21, 2023<br>
<br>
Workshop website: <a href="https://sites.google.com/view/essa-2023/">https://sites.google.com/view/essa-2023/</a><br>
=================================================<br>
<br>
=== *Overview* === <br>
<br>
Advances in storage are becoming increasingly critical because
workloads on high performance computing (HPC) and cloud systems are
producing and consuming more data than ever before, and data
volumes promise only to grow in the coming years. At the same time,
the last decades have seen relatively few changes in the structure
of parallel file systems, and limited interaction between the
evolution of parallel file systems and that of I/O support systems
that take advantage of hierarchical storage layers. Recently,
however, the community has seen a large uptick in innovation in
data storage and processing systems, as well as in I/O support
software, for several reasons:<br>
<br>
* Technology: The growing availability of persistent
solid-state storage and storage-class memory technologies that can
replace either memory or disk is creating new opportunities for the
structure of storage systems.<br>
<br>
* Performance requirements: Disk-based parallel file systems
cannot satisfy the performance needs of high-end systems. However,
it is not yet clear how solid-state storage and storage-class memory
can best be used to achieve the needed performance, so new
approaches for using them in HPC systems are being designed and
evaluated.<br>
<br>
* Application evolution: Data analysis applications, including
graph analytics and machine learning, are becoming increasingly
important for both scientific and commercial computing. I/O is
often a major bottleneck for such applications in both cloud and
HPC environments, especially when fast turnaround or the
integration of heavy computation and analysis is required.
Consequently, data storage, I/O, and processing requirements are
evolving as complex workflows involving computation, analytics, and
learning emerge.<br>
<br>
* Infrastructure evolution: In the future, HPC technology will
no longer be deployed only in dedicated supercomputing centers.
"Embedded HPC", "HPC in the box", "HPC in the loop", "HPC in the
cloud", "HPC as a service", and "near-to-real-time simulation" are
concepts that require new small-scale deployment environments for
HPC. Creating what is called a "computing continuum" will require a
federation of systems and functions with consistent mechanisms for
managing I/O, storage, and data processing across all participating
systems.<br>
<br>
* Virtualization and disaggregation: As virtualization and
disaggregation become broadly used in cloud and HPC computing,
virtualized storage is growing in importance, and efforts will be
needed to understand its performance implications.<br>
<br>
Our goal in the ESSA Workshop is to bring together expert
researchers and developers in data-related areas such as storage,
I/O, processing, and analytics on extreme-scale infrastructures,
including HPC systems, clouds, edge systems, or hybrid combinations
of these, to discuss advances and possible solutions to the new
challenges we face. We expect the ESSA Workshop to result in lively
interactions over a wide range of interesting topics, including:<br>
<ul>
<li>Extreme-scale storage systems (on high-end HPC
infrastructures, clouds, or hybrid combinations of them)</li>
<li>Extreme-scale parallel and distributed storage architectures </li>
<li>The synergy between different storage models (POSIX file
system, object storage, key-value, row-oriented, and
column-oriented databases) </li>
<li>Structures and interfaces for leveraging persistent
solid-state storage and storage-class memory </li>
<li>High-performance I/O libraries and services </li>
<li>I/O performance in extreme-scale systems and applications
(HPC/clouds/edge) </li>
<li>Storage and data processing architectures and systems for
hybrid HPC/cloud/edge infrastructures, in support of complex
workflows potentially combining simulation and analytics </li>
<li>Integrating computation into the memory and storage hierarchy
to facilitate in-situ and in-transit data processing </li>
<li>I/O characterization and data processing techniques for
application workloads relying on extreme-scale
parallel/distributed machine-learning/deep learning </li>
<li>Tools and techniques for managing data movement among compute
and data intensive components </li>
<li>Data reduction and compression </li>
<li>Failure and recovery of extreme-scale storage systems </li>
<li>Benchmarks and performance tools for extreme-scale I/O </li>
<li>Language and library support for data-centric computing </li>
<li>Storage virtualization and disaggregation </li>
<li>Ephemeral storage media and consistency optimizations </li>
<li>Storage architectures and systems for scalable stream-based
processing </li>
<li>Case studies of I/O services and data processing architectures
in support of various application domains (bioinformatics,
scientific simulations, large observatories, experimental
facilities, etc.) </li>
</ul>
=== *Submission Guidelines* === <br>
<br>
The workshop will accept traditional research papers (8-10 pages)
for in-depth topics and short papers (4-8 pages) for work in
progress on hot topics. Papers should present original research and
provide sufficient background material to make them accessible to
the broader community.<br>
<br>
Paper format: single-spaced double-column pages using 10-point
font on 8.5x11-inch pages (IEEE conference style), including
figures, tables, and references. The submitted manuscripts should
include author names and affiliations. The IEEE conference style
templates for MS Word and LaTeX provided by IEEE eXpress Conference
Publishing are available here: <a href="https://www.ieee.org/conferences/publishing/templates.html">https://www.ieee.org/conferences/publishing/templates.html</a>
<br>
<br>
Submission site: <a href="https://ssl.linklings.net/conferences/ipdps/">https://ssl.linklings.net/conferences/ipdps/</a>
<br>
<br>
=== *Important Dates* === <br>
<ul>
<li>Abstract submission (optional) deadline: January 14, 2023</li>
<li>Paper submission deadline: January 21, 2023</li>
<li>Acceptance notification: February 21, 2023</li>
<li>Camera-ready deadline: February 28, 2023</li>
<li>Workshop date: May 15, 2023</li>
</ul>
=== *Organization* === <br>
<br>
Workshop Chairs<br>
<br>
Kento Sato, RIKEN, Japan - Chair - <a href="mailto:kento.sato@riken.jp">kento.sato@riken.jp</a><br>
Gabriel Antoniu, Inria, France - Co-Chair - <a href="mailto:gabriel.antoniu@inria.fr">gabriel.antoniu@inria.fr</a>
<br>
<br>
Program Chairs<br>
<br>
Weikuan Yu, Florida State University, USA - Chair - <a href="mailto:yuw@cs.fsu.edu">yuw@cs.fsu.edu</a><br>
Sarah Neuwirth, Goethe University Frankfurt, Germany - Co-Chair - <a href="mailto:s.neuwirth@em.uni-frankfurt.de">s.neuwirth@em.uni-frankfurt.de</a><br>
<br>
Web & Publicity Chair<br>
<br>
François Tessier, Inria, France - Chair - <a href="mailto:francois.tessier@inria.fr">francois.tessier@inria.fr</a><br>
<br>
Program Committee<br>
<br>
Gabriel Antoniu, French Institute for Research in Computer Science
and Automation (Inria), France<br>
Jean Luca Bez, Lawrence Berkeley National Laboratory (LBNL), USA <br>
Suren Byna, Lawrence Berkeley National Laboratory (LBNL), USA<br>
Wei Der Chien, University of Edinburgh, United Kingdom<br>
Alexandru Costan, French Institute for Research in Computer Science
and Automation (Inria), France<br>
Hariharan Devarajan, Lawrence Livermore National Laboratory
(LLNL) / Illinois Institute of Technology, USA<br>
Matthieu Dorier, Argonne National Laboratory (ANL), USA<br>
Hideyuki Kawashima, Keio University, Tokyo, Japan<br>
Youngjae Kim, Sogang University, South Korea<br>
Christos Kozanitis, Foundation for Research and Technology -
Hellas (FORTH), Greece<br>
Jay Lofstead, Sandia National Laboratories, USA<br>
Ricardo Macedo, INESC TEC, University of Minho, Portugal<br>
Sarah Neuwirth, Goethe University Frankfurt, Germany<br>
Hiroki Ohtsuji, Fujitsu Ltd, Japan<br>
Arnab K. Paul, BITS Pilani, K. K. Birla Goa Campus, India<br>
Dana Petcu, West University of Timisoara, Romania<br>
Michael Schöttner, Heinrich Heine University Düsseldorf, Germany<br>
Chen Wang, Lawrence Livermore National Laboratory (LLNL), USA<br>
<br>
For additional details, see the workshop website: <a href="https://sites.google.com/view/essa-2023/">https://sites.google.com/view/essa-2023/</a>
<br>
</body>
</html>